# Node Monitoring
In this chapter we will walk you through the setup of local monitoring for your validator node.
## Prerequisites

You must have your validator node up and running.
This guide was tested on Ubuntu 20.04 LTS release.
## Prometheus Setup

In the first step we will set up the Prometheus server.
### User and Directories

We create a user just for monitoring purposes which has no home directory and can't be used to log in.
Then we create directories for the executable and the configuration file.
Change ownership of the directories to restrict them to our new monitoring user.
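The three steps above can be sketched as follows. The user name `prometheus` and the directory locations are common conventions for this kind of setup, not values mandated by this guide:

```shell
# Create a system user with no home directory and no login shell
sudo useradd --no-create-home --shell /usr/sbin/nologin prometheus

# Directory for the configuration and directory for the time-series data
sudo mkdir /etc/prometheus
sudo mkdir /var/lib/prometheus

# Restrict both directories to the monitoring user
sudo chown -R prometheus:prometheus /etc/prometheus /var/lib/prometheus
```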
### Install Prometheus

Check the latest version number of Prometheus at the GitHub release page.
At the time of writing it is v2.25.2. Insert the latest release version in the following commands.
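Downloading and extracting the release could look like this; the URL follows the standard naming scheme of Prometheus GitHub releases for v2.25.2:

```shell
cd /tmp
wget https://github.com/prometheus/prometheus/releases/download/v2.25.2/prometheus-2.25.2.linux-amd64.tar.gz
tar xvf prometheus-2.25.2.linux-amd64.tar.gz
cd prometheus-2.25.2.linux-amd64
```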
Now copy over the binaries into the local folder.
We now need to assign those binaries to our freshly created user.
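Assuming you are inside the extracted release directory, copying and reassigning the two binaries might look like this:

```shell
# Copy the binaries into the local application directory
sudo cp ./prometheus /usr/local/bin/
sudo cp ./promtool /usr/local/bin/

# Hand them over to the monitoring user
sudo chown prometheus:prometheus /usr/local/bin/prometheus
sudo chown prometheus:prometheus /usr/local/bin/promtool
```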
Next up we'll copy the web interface and the configuration presets.
You may have guessed it already but we're also changing the ownership of those directories.
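A sketch of copying the web-interface presets and changing their ownership, assuming the `consoles` and `console_libraries` folders shipped with the release archive:

```shell
# Copy the console templates and libraries shipped with the release
sudo cp -r ./consoles /etc/prometheus
sudo cp -r ./console_libraries /etc/prometheus

# Restrict them to the monitoring user
sudo chown -R prometheus:prometheus /etc/prometheus/consoles
sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
```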
We now have everything we need from the downloaded package so we will go one step back and do some cleanup.
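Assuming the archive was downloaded and unpacked in `/tmp`, the cleanup could be:

```shell
cd ..
rm -rf prometheus-2.25.2.linux-amd64 prometheus-2.25.2.linux-amd64.tar.gz
```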
Let's create a YAML configuration file for Prometheus with the editor of your choice (nano / vim / pico).
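For example, with nano:

```shell
sudo nano /etc/prometheus/prometheus.yml
```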
Our config is divided into three sections:

- `global`: sets the default value for `scrape_interval` and the rule-execution interval with `evaluation_interval`
- `rule_files`: specifies the rule files the Prometheus server should load
- `scrape_configs`: this is where you set the monitoring resources
We will keep it very basic and end up with something like this:
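A minimal example, assuming the default Prometheus port 9090 and the Substrate default metrics port 9615 for the node:

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # no rule files in this basic setup

scrape_configs:
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "substrate_node"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9615"]
```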
The first scrape job exports data of Prometheus itself, the second one exports the HydraDX node metrics.
We adjusted the `scrape_interval` of both jobs to get more detailed statistics; this overrides the global value.
The `targets` in `static_configs` set where the exporters run; we stick to the default ports here.
After saving the configuration we will - once again - change the ownership.
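Assuming the configuration file lives at `/etc/prometheus/prometheus.yml`:

```shell
sudo chown prometheus:prometheus /etc/prometheus/prometheus.yml
```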
### Starting Prometheus

To have Prometheus start automatically and run in the background we'll use `systemd`.
Create a new config (again with the editor of your choice):
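The unit-file path below is the standard location for custom systemd services; the file name is our choice:

```shell
sudo nano /etc/systemd/system/prometheus.service
```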
Paste the following configuration and save the file.
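A typical unit file for this setup, assuming the user, binaries, and directories created in the previous steps:

```ini
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
```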
Next we will perform the following three steps:

- `systemctl daemon-reload`: loads new configurations and updates existing ones
- `systemctl enable`: activates our new service
- `systemctl start`: triggers the execution of the service
You can perform the steps above in one command by executing:
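Assuming the service file is named `prometheus.service`:

```shell
sudo systemctl daemon-reload && sudo systemctl enable prometheus && sudo systemctl start prometheus
```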
You should now be able to access Prometheus' web interface at http://localhost:9090/.
## Node Exporter

We will install Node Exporter to scrape server metrics that will be used in the dashboard.
Please check the version number of the latest release here and update the command.
At the time of writing the latest version was 1.1.2.
### Install Node Exporter

Download the latest release.
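The URL follows the standard naming scheme of Node Exporter GitHub releases for v1.1.2:

```shell
cd /tmp
wget https://github.com/prometheus/node_exporter/releases/download/v1.1.2/node_exporter-1.1.2.linux-amd64.tar.gz
```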
Unpack the archive you just downloaded. This will create a folder called `node_exporter-1.1.2.linux-amd64`.
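Assuming the archive sits in your current working directory:

```shell
tar xvf node_exporter-1.1.2.linux-amd64.tar.gz
```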
Next we copy the binary into our local application directory and assign it to our monitoring user.
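Assuming the archive was unpacked in `/tmp` and the monitoring user is called `prometheus`:

```shell
sudo cp /tmp/node_exporter-1.1.2.linux-amd64/node_exporter /usr/local/bin
sudo chown prometheus:prometheus /usr/local/bin/node_exporter
```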
We can now do some cleanup and remove the downloaded and unpacked package.
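Again assuming everything was downloaded to `/tmp`:

```shell
rm -rf /tmp/node_exporter-1.1.2.linux-amd64 /tmp/node_exporter-1.1.2.linux-amd64.tar.gz
```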
### Create a Systemd Service

Similar to Prometheus, we want Node Exporter to run as a service too.
Create a systemd service with your editor of choice.
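For example, with nano; the file name is our choice:

```shell
sudo nano /etc/systemd/system/node_exporter.service
```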
And paste the following configuration into it.
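A typical unit file, assuming the binary location and user from the previous steps:

```ini
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
```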
We will now activate and start the service with this one-liner.
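Assuming the service file is named `node_exporter.service`:

```shell
sudo systemctl daemon-reload && sudo systemctl enable node_exporter && sudo systemctl start node_exporter
```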
### Add Scrape Job for Node Exporter

The Node Exporter is now up and running, but we need to tell Prometheus to scrape its data.
We will open the configuration file once again with the editor of choice.
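For example:

```shell
sudo nano /etc/prometheus/prometheus.yml
```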
And at the very bottom of the file we will append one more scrape config.
Paste the following content and save the file.
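One more job entry, indented so it sits under the existing `scrape_configs` section; port 9100 is Node Exporter's default:

```yaml
  - job_name: "node_exporter"
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:9100"]
```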
To apply the changed configuration, a restart of the Prometheus service is required.
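Assuming the service name `prometheus` from the earlier setup:

```shell
sudo systemctl restart prometheus
```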
Your server metrics are now scraped and can be found in the Prometheus web interface.
We will need them later for our dashboard.
## Grafana Setup

We can see our metrics in the web interface, but that's not how we want to monitor it.
We want it nice and beautiful. That's where Grafana comes into play.
### Install Grafana

Please check what's the latest Grafana version with this link.
You can either change the version number in the following commands or copy the install commands directly from the link.
At the time of writing the latest version was 7.5.1.
The package comes with a builtin `systemd` service which we will configure and start just like the Prometheus service.
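The bundled service is called `grafana-server` in the Debian package:

```shell
sudo systemctl daemon-reload && sudo systemctl enable grafana-server && sudo systemctl start grafana-server
```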
### Accessing the Web Interface

We'll be able to open the Grafana web interface at http://localhost:3000/.
The default Grafana login is:
User: admin
Password: admin

### Configuring the Datasource

Please click the settings gear in the menu and select datasources.
In the next window you click "Add Datasource" and select "Prometheus".
In the following form you don't need to change anything but the URL.
Set `http://localhost:9090/` and click *Save and Test*.

### Importing the Dashboard

Please click the *Plus* button in the main navigation and select *Import*.

We will use the HydraDX Dashboard. To load it, simply input the id `14158` and hit the *Load* button.

You don't need much configuration here, just make sure Prometheus is used as the datasource.
You can now finish the import.

You should now see your dashboard right away.
If some panels are empty please ensure your selection above the panels is like this:

- Chain Metrics: Substrate
- Chain Instance: localhost:9615
- Server Job: node_exporter
- Server Host: localhost:9100