In this chapter we will walk you through the setup of local monitoring for your validator node.
You must have your validator node up and running.
This guide was tested on Ubuntu 20.04 LTS release.
In the first step we will set up the Prometheus server.
We create a user just for monitoring purposes which has no home directory and can't be used to log in.
Then we create directories for the executable and the configuration file.
Change ownership of the directories to restrict them to our new monitoring user.
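The three steps above can be sketched as follows (the user name prometheus and the directories /etc/prometheus and /var/lib/prometheus are common conventions, not requirements):

```shell
# Create a system user with no home directory and no login shell
sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus

# Directories for the configuration file and the metric database
sudo mkdir /etc/prometheus
sudo mkdir /var/lib/prometheus

# Restrict ownership to the new monitoring user
sudo chown prometheus:prometheus /etc/prometheus
sudo chown prometheus:prometheus /var/lib/prometheus
```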
Check latest version number of Prometheus at the GitHub release page.
At the time of writing it is v2.25.2. Insert the latest release version in the following commands.
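For v2.25.2 the download and extraction could look like this (the URL follows the naming scheme of the official GitHub release assets; adjust the version number as needed):

```shell
cd ~
wget https://github.com/prometheus/prometheus/releases/download/v2.25.2/prometheus-2.25.2.linux-amd64.tar.gz
tar xfz prometheus-2.25.2.linux-amd64.tar.gz
cd prometheus-2.25.2.linux-amd64
```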
Now copy over the binaries into the local folder.
We now need to assign those binaries to our freshly created user.
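Assuming you are still inside the unpacked release folder, the two steps could look like this:

```shell
# Copy the two binaries into the local binary folder
sudo cp ./prometheus /usr/local/bin/
sudo cp ./promtool /usr/local/bin/

# Assign them to the monitoring user created earlier
sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool
```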
Next up we'll copy the web interface and the configuration presets.
You may have guessed it already but we're also changing the ownership of those directories.
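A sketch of those two steps, using the directory layout introduced above:

```shell
# Copy the web interface templates and configuration presets
sudo cp -r ./consoles /etc/prometheus
sudo cp -r ./console_libraries /etc/prometheus

# Change ownership of both directories
sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
```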
We now have everything we need from the downloaded package so we will go one step back and do some cleanup.
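For example (again assuming version v2.25.2; adjust to the version you downloaded):

```shell
# Remove the unpacked folder and the downloaded archive
cd ~
rm -rf prometheus-2.25.2.linux-amd64 prometheus-2.25.2.linux-amd64.tar.gz
```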
Let's create a YAML configuration file for Prometheus with the editor of your choice (nano / vim / pico).
Our config is divided in three sections:
global: sets the default values for scrape_interval and for the rule-evaluation interval (evaluation_interval)
rule_files: specify rule-files the Prometheus server should load
scrape_configs: this is where you define the monitoring targets
We will keep it very basic and end up with something like this:
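Saved as /etc/prometheus/prometheus.yml, a minimal configuration could look like this (a sketch: the job names are only examples, and port 9615 is the default Prometheus metrics port of a Substrate-based node):

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first.rules"
  # - "second.rules"

scrape_configs:
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "hydradx_node"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9615"]
```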
The first scrape job exports data about Prometheus itself; the second one exports the HydraDX node metrics.
We adjusted the scrape_interval of both jobs to get more detailed statistics; this overrides the global value. static_configs sets where the exporters run; we stick to the default ports here.
After saving the configuration we will - once again - change the ownership.
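Assuming the file was saved as /etc/prometheus/prometheus.yml:

```shell
sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
```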
To have Prometheus start automatically and run in the background, we'll use systemd.
Create a new config (again with the editor of your choice):
Paste the following configuration and save the file.
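A typical unit file for this setup, created for example at /etc/systemd/system/prometheus.service, could look like this (a sketch using the paths and user from the steps above):

```ini
[Unit]
Description=Prometheus Monitoring
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
```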
Next we will perform the following three steps:
systemctl daemon-reload loads new unit files and updates existing ones
systemctl enable activates our new service
systemctl start triggers the execution of the service
You can perform the steps above in one command by executing:
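Assuming the service file was named prometheus.service:

```shell
sudo systemctl daemon-reload && sudo systemctl enable prometheus && sudo systemctl start prometheus
```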
You should now be able to access Prometheus' web interface at http://localhost:9090/.
We will install Node Exporter to scrape server metrics that will be used in the dashboard.
Please check the version number of the latest release here and update the command.
At the time of writing the latest version was
Download the latest release.
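For illustration, with version 1.1.2 (a placeholder only; substitute the latest release):

```shell
cd ~
wget https://github.com/prometheus/node_exporter/releases/download/v1.1.2/node_exporter-1.1.2.linux-amd64.tar.gz
```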
Unpack the archive you just downloaded. This will create a folder called
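Continuing the example from above:

```shell
# Creates a folder following the node_exporter-<version>.linux-amd64 naming scheme
tar xfz node_exporter-1.1.2.linux-amd64.tar.gz
```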
Next we copy the binary into our local application directory and assign it to our monitoring user.
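A sketch of the two steps, using the example version from above:

```shell
sudo cp ./node_exporter-1.1.2.linux-amd64/node_exporter /usr/local/bin/
sudo chown prometheus:prometheus /usr/local/bin/node_exporter
```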
We can now do some cleanup and remove the downloaded and unpacked package.
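For example:

```shell
rm -rf node_exporter-1.1.2.linux-amd64 node_exporter-1.1.2.linux-amd64.tar.gz
```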
Similar to Prometheus, we want Node Exporter to run as a service, too.
Create a systemd service with your editor of choice.
And paste the following configuration into it.
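Created for example at /etc/systemd/system/node_exporter.service, the unit file could look like this (a sketch reusing the prometheus user):

```ini
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```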
We will now activate and start the service with this one-liner.
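Assuming the service file was named node_exporter.service:

```shell
sudo systemctl daemon-reload && sudo systemctl enable node_exporter && sudo systemctl start node_exporter
```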
The Node Exporter is now up and running but we need to tell Prometheus to scrape its data.
We will open the configuration file once again with the editor of choice.
And at the very bottom of the file we will append one more scrape config.
Paste the following content and save the file.
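Appended under the scrape_configs section of /etc/prometheus/prometheus.yml, the additional job could look like this (a sketch; 9100 is Node Exporter's default port):

```yaml
  - job_name: "node_exporter"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9100"]
```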
To apply the configuration changes, a restart of the Prometheus service is required.
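Assuming the service name used earlier:

```shell
sudo systemctl restart prometheus
```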
Your server metrics are now scraped and can be found in the Prometheus web interface.
We will need them later for our dashboard.
We can see our metrics in the web interface, but that's not how we want to monitor it.
We want it nice and beautiful. That's where Grafana comes into play.
Please check the latest Grafana version via this link.
You can either change the version number in the following commands or copy the install commands directly from the link.
At the time of writing the latest version was
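One common route on Ubuntu 20.04 is installing Grafana from its APT repository; the commands could look roughly like this (a sketch following Grafana's Debian install instructions of that time; verify against the linked page):

```shell
sudo apt-get install -y apt-transport-https software-properties-common wget
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update
sudo apt-get install grafana
```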
The package comes with a built-in systemd service which we will configure and start just like the Prometheus service.
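The service shipped with the package is called grafana-server:

```shell
sudo systemctl daemon-reload && sudo systemctl enable grafana-server && sudo systemctl start grafana-server
```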
We'll be able to open the Grafana web interface at http://localhost:3000/.
The default Grafana login is admin / admin; you will be asked to change the password on first login.
Please click the settings gear in the menu and select "Data Sources". In the next window, click "Add data source" and select "Prometheus".
In the following form you don't need to change anything but the URL: set it to http://localhost:9090/ and click "Save and Test".
Please click the Plus button in the main navigation and select "Import". We will use the HydraDX Dashboard; to load it, simply input the id 14158 and hit the "Load" button.
You don't need much configuration here, just make sure Prometheus is used as the datasource.
You can now finish the import.
You should now see your dashboard right away.
If some panels are empty, please make sure the selectors above the panels are set as follows:
Chain Metrics: Substrate
Chain Instance: localhost:9615
Server Job: node_exporter
Server Host: localhost:9100