Monitoring Corda Nodes With Prometheus, Grafana and ELK on Docker

June 22, 2020

Motivation

When deploying Corda nodes in different environments, it’s good practice to deploy monitoring alongside them and extract performance metrics so you can see how well your node or network is doing. Typical goals include:

  • Analysing CorDapp flow performance
  • Monitoring JVM and machine resource usage

A useful diagram and dashboards

Here is a diagram showing our local deployment architecture:

Monitoring deployment architecture with Corda nodes, Filebeat, ELK Stack, Prometheus and Grafana

Grafana dashboard showing JMX metrics processed by Prometheus


GitHub

Ready to get started?

➜ git clone https://github.com/neal-shah/corda-monitoring-prometheus-grafana-elk

Step by step instructions

Step 1: Prepare the workspace directory

Create the directory structure and download the necessary JARs in this step. Three JARs are required:

  • corda-finance-contracts-4.4.jar
  • corda-finance-workflows-4.4.jar
  • corda-tools-network-bootstrapper-4.4.jar
➜ ./01_setup-directory.sh
➜ tree mynetwork
mynetwork
├── corda-finance-contracts-4.4.jar
├── corda-finance-workflows-4.4.jar
├── corda-tools-network-bootstrapper-4.4.jar
├── grafana
├── prometheus
├── filebeat
├── logstash
└── shared
    ├── additional-node-infos
    ├── cordapps
    └── drivers
        └── jmx_prometheus_javaagent-0.13.0.jar

Step 2: Create node configuration files

You will require 3 node configurations:

  • Notary
  • PartyA
  • PartyB
➜ ./02_create-node-configurations.sh
➜ tree mynetwork
mynetwork
├── corda-finance-contracts-4.4.jar
├── corda-finance-workflows-4.4.jar
├── corda-tools-network-bootstrapper-4.4.jar
├── grafana
├── filebeat
├── logstash
├── notary_node.conf
├── partya_node.conf
├── partyb_node.conf
├── prometheus
└── shared
    ├── additional-node-infos
    ├── cordapps
    └── drivers
        └── jmx_prometheus_javaagent-0.13.0.jar

The script also generates a node.conf for each node. Here is partya_node.conf:

devMode=true
emailAddress="[email protected]"
myLegalName="O=PartyA, L=London, C=GB"
p2pAddress="partya:10200"
rpcSettings {
    address="0.0.0.0:10201"
    adminAddress="0.0.0.0:10202"
}
security {
    authService {
        dataSource {
            type=INMEMORY
            users=[
                {
                    password="password"
                    permissions=[
                        ALL
                    ]
                    username=user
                }
            ]
        }
    }
}
cordappSignerKeyFingerprintBlacklist = []
sshd {
  port = 2222
}

Step 3: Run the Corda Network Bootstrapper

The Corda Network Bootstrapper creates a development network of peer nodes, using dev certificates. You don’t need to worry about registering nodes — the bootstrapper takes care of that for you.
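
Under the hood, the 03 script essentially wraps the bootstrapper JAR downloaded in Step 1; run by hand, the call would look something like this (a sketch, check the script for the exact invocation):

➜ java -jar corda-tools-network-bootstrapper-4.4.jar --dir mynetwork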

➜ ./03_run-corda-network-bootstrapper.sh

Bootstrapping local test network in /corda-monitoring-prometheus-grafana/mynetwork
Generating node directory for partya
Generating node directory for notary
Generating node directory for partyb
Nodes found in the following sub-directories: [notary, partya, partyb]
Found the following CorDapps: [corda-finance-workflows-4.4.jar, corda-finance-contracts-4.4.jar]
Copying CorDapp JARs into node directories
Waiting for all nodes to generate their node-info files...
Distributing all node-info files to all nodes
Loading existing network parameters... none found
Gathering notary identities
Generating contract implementations whitelist
New NetworkParameters {
      minimumPlatformVersion=6
      notaries=[NotaryInfo(identity=O=Notary, L=London, C=GB, validating=false)]
      maxMessageSize=10485760
      maxTransactionSize=524288000
      whitelistedContractImplementations {

      }
      eventHorizon=PT720H
      packageOwnership {

      }
      modifiedTime=2020-06-09T15:29:59.724Z
      epoch=1
  }
Bootstrapping complete!

Step 4: Prepare for Docker

There are some common files that are shared between the peer nodes. You can put these in one folder — this will make your Docker-Compose service volumes a bit clearer to read.

➜ ./04_copy-common-files.sh
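
The script gathers the files every node needs (the generated node-info files and the CorDapp JARs) into the shared folder from Step 1. Roughly, and purely as an illustration (04_copy-common-files.sh in the repo is the source of truth):

# Illustrative sketch only - see 04_copy-common-files.sh for the real commands.
for node in notary partya partyb; do
  cp mynetwork/${node}/nodeInfo-* mynetwork/shared/additional-node-infos/
done
cp mynetwork/corda-finance-*.jar mynetwork/shared/cordapps/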

Step 5: Create the Prometheus configuration files

Execute the 05_create-monitoring-configurations.sh shell script:

➜ ./05_create-monitoring-configurations.sh

Among other things, this generates the prometheus.yml scrape configuration:

global:
  scrape_interval: 5s
  external_labels:
    monitor: "corda-network"
scrape_configs:
  - job_name: "notary"
    static_configs:
      - targets: ["notary:8080"]
    relabel_configs:
      - source_labels: [__address__]
        regex: "([^:]+):\\d+"
        target_label: instance
  - job_name: "nodes"
    static_configs:
      - targets: ["partya:8080", "partyb:8080"]
    relabel_configs:
      - source_labels: [__address__]
        regex: "([^:]+):\\d+"
        target_label: instance

Step 6: Create the Docker-Compose file

Finally, you need a docker-compose.yml file, which lets you bring up all the services with a single command.

➜ ./06_create-docker-compose.sh
...
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro

  grafana:
    hostname: grafana
    container_name: grafana
    image: grafana/grafana:latest
    ports:
      - 3000:3000
    volumes:
      - ./grafana/data:/var/lib/grafana
    environment:
      - "GF_INSTALL_PLUGINS=grafana-clock-panel"

  elk:
      hostname: elk
      container_name: elk
      image: sebp/elk
      volumes:
        - ./logstash/02-beats-input.conf:/etc/logstash/conf.d/02-beats-input.conf
      ports:
        - "5601:5601"
        - "9200:9200"
        - "5044:5044"

  filebeat:
    hostname: filebeat
    container_name: filebeat
    image: docker.elastic.co/beats/filebeat:7.7.1
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - ./partya/logs:/var/log/partya
      - ./partyb/logs:/var/log/partyb
      - ./notary/logs:/var/log/notary
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    environment:
      - "setup.kibana.host=elk:5601"
      - 'output.elasticsearch.hosts=["elk:9200"]'
    depends_on:
      - elk
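
The excerpt above only shows the monitoring services; each Corda node also gets its own service. A rough sketch of what the partya service might look like, with the image and ports taken from the docker ps output further down and the volume paths as assumptions (the generated docker-compose.yml is the source of truth):

  partya:
    hostname: partya
    container_name: partya
    image: corda/corda-zulu-java1.8-4.4:latest
    ports:
      - 10005:10201   # RPC
      - 2222:2222     # node shell over SSH
    volumes:
      # illustrative mappings - check the generated docker-compose.yml
      - ./partya/node.conf:/etc/corda/node.conf
      - ./partya/certificates:/opt/corda/certificates
      - ./partya/logs:/opt/corda/logs
      - ./shared/cordapps:/opt/corda/cordapps
      - ./shared/drivers:/opt/corda/drivers
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos
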
➜ docker-compose -f ./mynetwork/docker-compose.yml up -d

Creating network "mynetwork_default" with the default driver
Creating volume "mynetwork_grafana-storage" with default driver
Creating grafana    ... done
Creating partyb     ... done
Creating prometheus ... done
Creating partya     ... done
Creating elk        ... done
Creating notary     ... done
Creating filebeat   ... done
➜ docker logs -f elk
...
[2020-06-14T20:12:53,575][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-06-14T20:12:53,595][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2020-06-14T20:12:53,765][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-06-14T20:12:53,772][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2020-06-14T20:12:54,398][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
➜ docker ps -a
CONTAINER ID        IMAGE                                    COMMAND                  CREATED             STATUS              PORTS                                                                              NAMES
a35d3042490d        docker.elastic.co/beats/filebeat:7.7.1   "/usr/local/bin/dock…"   4 minutes ago       Up 4 minutes                                                                                           filebeat
16e142719907        sebp/elk                                 "/usr/local/bin/star…"   4 minutes ago       Up 4 minutes        0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 0.0.0.0:9200->9200/tcp, 9300/tcp   elk
f2310f6d336f        corda/corda-zulu-java1.8-4.4:latest      "run-corda"              4 minutes ago       Up 4 minutes        10200/tcp, 0.0.0.0:2222->2222/tcp, 10202/tcp, 0.0.0.0:10005->10201/tcp             partya
e04278061856        corda/corda-zulu-java1.8-4.4:latest      "run-corda"              4 minutes ago       Up 4 minutes        10200/tcp, 10202/tcp, 0.0.0.0:10002->10201/tcp                                     notary
61aeb9a442a9        corda/corda-zulu-java1.8-4.4:latest      "run-corda"              4 minutes ago       Up 4 minutes        10200/tcp, 10202/tcp, 0.0.0.0:3333->2222/tcp, 0.0.0.0:10008->10201/tcp             partyb
712b28c69970        prom/prometheus:latest                   "/bin/prometheus --c…"   4 minutes ago       Up 4 minutes        0.0.0.0:9090->9090/tcp                                                             prometheus
97a5c3bc19ce        grafana/grafana:latest                   "/run.sh"                4 minutes ago       Up 4 minutes        0.0.0.0:3000->3000/tcp                                                             grafana

Step 7: Set up Kibana

Head over to your browser and go to http://localhost:5601. You should see the Kibana homepage.

Setting up Kibana index pattern for Filebeat sources
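
Create an index pattern for the Filebeat indices (filebeat-*) so the node logs show up in Discover. If you would rather script this step than click through the UI, Kibana's saved objects API can create the pattern as well (a sketch, assuming the Kibana 7.x API with security disabled):

➜ curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
       -H "kbn-xsrf: true" -H "Content-Type: application/json" \
       -d '{"attributes": {"title": "filebeat-*", "timeFieldName": "@timestamp"}}'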

Kibana Discover viewing log files from Corda nodes

Step 8: Set up Grafana

In your browser, go to http://localhost:3000 and log in with the default credentials:

  • Username: admin
  • Password: admin

Grafana homepage after logging in for the first time

Grafana Prometheus data source configuration
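
Add Prometheus as a data source (as in the screenshot above), using http://prometheus:9090 as the URL since both containers share the same Docker network. If you prefer not to click through the UI, Grafana can also provision the data source at startup from a YAML file mounted under /etc/grafana/provisioning/datasources/. A minimal sketch (not part of this repo's setup):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true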

Step 9: Run some Corda Finance flows

SSH into the PartyA node's CRaSH shell:

➜ ssh user@localhost -p 2222
Welcome to the Corda interactive shell.
You can see the available commands by typing 'help'.

Mon Jun 15 07:52:13 GMT 2020>>>
Mon Jun 15 07:53:52 GMT 2020>>> flow start CashIssueAndPaymentFlow amount: 1000 GBP, issueRef: TestTransaction, recipient: PartyB, anonymous: false, notary: Notary

 ✓ Starting
 ✓ Issuing cash
          Generating anonymous identities
     ✓ Generating transaction
     ✓ Signing transaction
     ✓ Finalising transaction
              Requesting signature by notary service
                  Requesting signature by Notary service
                  Validating response from Notary service
         ✓ Broadcasting transaction to participants
 ✓ Paying recipient
     ✓ Generating anonymous identities
     ✓ Generating transaction
     ✓ Signing transaction
     ✓ Finalising transaction
         ✓ Requesting signature by notary service
             ✓ Requesting signature by Notary service
             ✓ Validating response from Notary service
         ✓ Broadcasting transaction to participants
▶︎ Done
Flow completed with result: Result(stx=SignedTransaction(id=FB08662B2E0A19ECF9B0E3E44D2DF25934F9576DBF262D794EE2C795C3269503), recipient=O=PartyB, L=London, C=GB)

Step 10: Explore Grafana and ELK

Go back to your Grafana dashboard. You will see the following:

Grafana dashboard showing the Corda transaction

Kibana log files from Corda nodes

Further reading

JMX Monitoring and Prometheus

Corda nodes expose JMX metrics that can be collected to provide data analysis and monitoring functionality.

  • The JMX exporter Java agent is attached to each node's JVM and exposes those metrics over HTTP (port 8080 in this setup), as shown in the command below.
  • The Prometheus Server collects these metrics at a pre-defined scrape interval, providing both a UI and an API for the processed data to be consumed.
➜ java -Dcapsule.jvm.args="-javaagent:/path/to/jmx_prometheus_javaagent-0.13.0.jar=8080:/path/to/config.yml" -jar corda.jar
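
The config.yml passed to the agent is the JMX exporter's own configuration, controlling which MBeans are exposed and how they are named. A permissive minimal example (an illustrative sketch; the actual agent configuration used by the repo may differ):

startDelaySeconds: 0
lowercaseOutputName: true
rules:
  - pattern: ".*"   # expose every MBean attribute; narrow with specific patterns if needed

The Prometheus server is then pointed at each agent endpoint through its scrape configuration:
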
global:
  scrape_interval: 10s        # How frequently to scrape targets
  external_labels:            # Label that can be used by external systems for filtering
    monitor: "corda-network"  
scrape_configs:               # A list of scrape configurations
  - job_name: "nodes"         # The job name assigned to scraped metrics
    static_configs:           # Static list of targets to scrape metrics from
      - targets: ["partya:8080", "partyb:8080"]
    relabel_configs:          # List of target relabel configurations
      - source_labels: [__address__]
        regex: "([^:]+):\\d+"
        target_label: instance

Grafana

Grafana provides a slick UI for consuming and aggregating real-time data from our exposed JMX metrics. You could just as well use Graphite rather than Prometheus; there are plenty of Graphite vs. Prometheus comparisons online to help you decide.

Grafana data sources

ELK (Elasticsearch, Logstash, Kibana) & Filebeat

The ELK stack is a great way to source, process and display log files. Elasticsearch and Logstash work in harmony to ingest and process data from multiple sources (in our case, the node log files), whilst Kibana visualises the normalised data, making the logs from every node searchable in one place.

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - "/var/log/node-*"

output.logstash:
  hosts: ["logstash:5044"]
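
Filebeat forwards the log lines to Logstash on port 5044 rather than writing straight to Elasticsearch; this is where the 02-beats-input.conf file mounted into the elk container (see the docker-compose excerpt in Step 6) comes in. A minimal Beats listener, which is likely close to what that file contains, looks like this:

input {
  beats {
    port => 5044
  }
}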

Conclusion

Monitoring and performance analysis for Corda nodes can be carried out successfully with Prometheus, Grafana and ELK: Prometheus scrapes each node's JMX metrics, Grafana visualises them, and ELK collects, indexes and displays the node log files.

