Remote polling allows you to poll devices from disparate locations, giving you an aggregate view of availability across sites. The remote polling architecture uses the same interfaces and services as OpenNMS's traditional centralized polling, but has its own availability data tuned toward tracking multiple pollers originating from the same site, and toward combining that data with other site data for an overall view of your monitored devices. This data can be viewed either through a tabular report page or a GUI map.
The OpenNMS remote polling facility uses a client-server model, with clients checking in at regular intervals with updated poll data and a heartbeat, and the server considering a remote poller down if that client doesn't check in within a certain amount of time.
The remote poller client is designed as a standalone jar file which can be run from the command line or through Java Web Start.
This May Not Do What You Think It Does
Before investing a ton of time and effort into configuring the remote poller, it's important to be aware of what the remote poller is and is not.
What the Remote Poller Is
- Pure Java, so architecture-independent
- Packaged as Java WebStart (JNLP) or as RPM/DEB packages
- Capable of both GUI and headless operation
What the Remote Poller Is Not (yet)
- A distributed poller or data collector
- A mechanism for "scaling out" the work done by the central poller or for skirting firewalls or other inconvenient network topologies
Server-Side (Your Central OpenNMS Server)
Before You Start
Make sure OpenNMS is working first! Before you work on adding remote pollers to the mix, be sure that the central OpenNMS server is doing what you want: polling, sending notifications, generating graphs, and so on. You don't want to be debugging server issues while trying to figure out why a remote poller might not be reporting in.
The first step is to decide what interfaces and services you wish to poll remotely. The default $OPENNMS_HOME/etc/poller-configuration.xml comes with an "example1" <package> section, which is used by the central OpenNMS server for polling. You will now want to add another package to define which services should be polled remotely.
Define a new <package>, giving it a unique name, and making sure that it contains the attribute remote="true". Then, include any services you wish to poll remotely. Note that some services (like ICMP and SNMP) are not distributable because they rely on configuration outside of the <service> definition, or on native code.
Here is an example that matches all IP addresses, and enables polling HTTP remotely:
<package name="raleigh" remote="true">
  <filter>IPADDR IPLIKE *.*.*.*</filter>
  <include-range begin="1.1.1.1" end="254.254.254.254"/>
  <rrd step="300">
    <rra>RRA:AVERAGE:0.5:1:2016</rra>
    <rra>RRA:AVERAGE:0.5:12:4464</rra>
    <rra>RRA:MIN:0.5:12:4464</rra>
    <rra>RRA:MAX:0.5:12:4464</rra>
  </rrd>
  <service name="HTTP" interval="30000" user-defined="false" status="on">
    <parameter key="retry" value="1"/>
    <parameter key="timeout" value="3000"/>
    <parameter key="port" value="80"/>
    <parameter key="url" value="/"/>
    <parameter key="rrd-repository" value="/var/log/opennms/rrd/response"/>
    <parameter key="ds-name" value="http"/>
  </service>
  <outage-calendar>zzz from poll-outages.xml zzz</outage-calendar>
  <!-- 30s, 0, 5m -->
  <downtime interval="30000" begin="0" end="300000"/>
  <!-- 5m, 5m, 12h -->
  <downtime interval="300000" begin="300000" end="43200000"/>
  <!-- 10m, 12h, 5d -->
  <downtime interval="600000" begin="43200000" end="432000000"/>
  <!-- anything after 5 days delete -->
  <downtime begin="432000000" delete="true"/>
</package>
The next step is to decide what locations your remote pollers will be able to check in from. These will generally be site-specific, and will have a GPS location associated with them (so they can be displayed on the maps).
The $OPENNMS_HOME/etc/monitoring-locations.xml file defines the different locations from which remote poller monitoring instances will be running. Inside the <locations> tag, create one or more <location-def> entries, each with a set of attributes that uniquely identify it:
- location-name: The short name of the location, used on the remote-poller startup command-line.
- monitoring-area: Used to group multiple locations together.
- polling-package-name: The package in poller-configuration.xml that the monitor will use to determine the services to poll.
- geolocation: (As of OpenNMS 1.7.11) The geographical location of the monitor. This should be a street address or similar. If none is specified, or Google can't resolve the address to a latitude and longitude, the marker will be placed on the map at OpenNMS World HQ in Pittsboro, NC. :)
- coordinates: (As of OpenNMS 1.7.11) The geographical location of the monitor in the format "latitude,longitude".
- priority: (As of OpenNMS 1.7.11) The sort priority of this location for the UI (1 is lowest, 100 is highest).
A location can also optionally be associated with zero or more tags. Generally these are arbitrary metadata associated with that monitoring location.
For example, here's a monitoring-locations.xml that defines a location for The OpenNMS Group, Inc. headquarters in Pittsboro, NC:
<?xml version="1.0" encoding="UTF-8"?>
<monitoring-locations-configuration xmlns="http://www.opennms.org/xsd/config/monitoring-locations"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.opennms.org/xsd/config/monitoring-locations http://www.opennms.org/xsd/config/monitoring-locations.xsd">
  <locations>
    <location-def location-name="RDU" monitoring-area="raleigh"
        polling-package-name="raleigh"
        geolocation="The OpenNMS Group, Pittsboro, NC"
        coordinates="35.7174,-79.1619" priority="50">
      <tags>
        <tag name="store"/>
        <tag name="production"/>
      </tags>
    </location-def>
  </locations>
</monitoring-locations-configuration>
users.xml and magic-users.properties
By default, only administrators have the rights to send remote poller data to the central server, so you will want to create a user, or users, with remote polling rights to avoid using an admin username and password.
The easiest way to do so is to go to the OpenNMS admin UI and add a new user there.
Once you've created the user, edit $OPENNMS_HOME/etc/magic-users.properties and add that user's ID to the list of users authorized for the remote polling role.
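As a sketch of what that edit looks like — the exact role property key varies between OpenNMS versions, so treat the key below as an assumption and check the comments in your own magic-users.properties — granting the remoting role to a user named "remoteuser" would look something like:

```
# Hypothetical example: grant the remote polling role to "remoteuser".
# The exact property key may differ in your OpenNMS version; check the
# comments in magic-users.properties for the remoting role definition.
role.remoting.users=remoteuser
```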
Now that you've finished the server-side configuration, restart OpenNMS.
Client Side (The Remote Pollers)
Installing the Remote Poller
If you are on an RPM or Debian-based install, you should be able to just install the "opennms-remote-poller" package through yum or apt.
If not, you can download the latest remote poller standalone distribution at the OpenNMS SourceForge project page.
These instructions will assume you are using one of the pre-packaged remote pollers, which provides the shell wrapper script in $OPENNMS_HOME/bin/remote-poller.sh.
Running the Remote Poller
To make sure everything's working, the easiest way is just to start the remote poller on the command line. No real configuration is necessary, but you will need to know the following information:
- the location you wish to use (the "location-name" tag in monitoring-locations.xml)
- how to reach your OpenNMS server from the remote poller system
- the username and password you created above
If you run it without options, it gives examples on what options are available:
$ $OPENNMS_HOME/bin/remote-poller.sh
usage:
 -d,--debug            write debug messages to the log
 -g,--gui              start a GUI (default: false)
 -h,--help             this help
 -l,--location <arg>   the location name of this remote poller
 -n,--name <arg>       the name of the user to connect as
 -p,--password <arg>   the password to use when connecting
 -u,--url <arg>        the URL for OpenNMS (default: rmi://server-name/)
If your OpenNMS server is reachable at http://192.168.0.1:8980/opennms, then you will want to use the following command:
$OPENNMS_HOME/bin/remote-poller.sh -l RDU -n remoteuser -p remotepass \
    -u http://192.168.0.1:8980/opennms-remoting
You should start to see data in the distributed status page within your polling interval (usually 5 minutes).
Applications
Applications allow you to create a collection of arbitrary services and treat them as a single unit with its own availability calculation. This is useful for creating an overall "service" that represents a number of different things.
For example, if you have a public-facing web application which uses tomcat, retrieves files from a SAN, and reads data from a database on another machine, you could create a single application which contains the HTTP service from the tomcat system and the SAN machine, and a JDBCStoredProcedureMonitor service from the database machine.
Applications can be configured in the applications UI in the OpenNMS admin interface.
Distributed Maps
As of OpenNMS 1.7.11, support for distributed maps was added, which lets you visualize locations and applications on a world map, based on the geolocation data in your monitoring-locations.xml.
All distributed map configuration is done in the $OPENNMS_HOME/etc/opennms.properties file on the central server.
Configure Map Type
First, configure the type of map API you wish to use. If you have a Google Maps API key or a MapQuest API key, you can choose "GoogleMaps" or "Mapquest" as the implementation; otherwise, choose "OpenLayers", which uses OpenStreetMap, an open-data project for providing map data in an open source manner, and should work for any user.
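A hedged sketch of what the map-type setting might look like in opennms.properties — the property keys shown (gwt.maptype and gwt.apikey) are assumptions, so verify them against the comments shipped in your own opennms.properties:

```
# Hypothetical property keys; confirm against your opennms.properties.
# Choose one of: GoogleMaps, Mapquest, OpenLayers
gwt.maptype=OpenLayers
# API key, only needed for the GoogleMaps or Mapquest implementations
#gwt.apikey=YOUR-API-KEY-HERE
```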
Configure Geocoder
Geocoding is what converts addresses into coordinates, and is necessary to look up your addresses in monitoring-locations.xml if you did not provide exact coordinates. If you choose the Google or MapQuest geocoders, they will use the API key you configured earlier in opennms.properties. If you choose the Nominatim geocoder, you will have to configure your email address by setting the Nominatim email property in opennms.properties.
The OpenStreetMap Nominatim geocoder requires that you provide a contact address so they can contact you if you are making too many queries, since their server is run by a volunteer organization.
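As a sketch, the Nominatim email setting in opennms.properties might look like the following — the property key shown is an assumption, so check the comments in your own opennms.properties for the actual key:

```
# Hypothetical property key; confirm against your opennms.properties.
gwt.geocoder.email=you@example.com
```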
About Using OpenStreetMap
As of 1.8.7, our default OpenStreetMap implementation uses MapQuest's servers for data. MapQuest has contributed a large amount of resources to the OpenStreetMap community, including servers which are free for all to use. If you see references to MapQuest relating to OpenLayers in our configuration, it is because we are using the open MapQuest resources, not the paid enterprise ones.
Accessing the Maps
Just click on the "Distributed Maps" link in the menu bar in your OpenNMS server's web UI.
Starting the Remote Poller
While the most common way to run the remote poller is as a console-only, command-line tool, it also provides a GUI version which you can start either from the command line or through Java Web Start.
To start the GUI from the command-line, add the "-g" option to your remote-poller command, like so:
$OPENNMS_HOME/bin/remote-poller.sh -g -l RDU \
    -n remoteuser -p remotepass \
    -u http://192.168.0.1:8980/opennms-remoting
If you need to use an HTTP proxy to communicate with the OpenNMS server, add the http.proxyHost and http.proxyPort options to the java command-line:
$OPENNMS_HOME/bin/remote-poller.sh \
    -Dhttp.proxyHost=proxy.mydomain.net \
    -Dhttp.proxyPort=8080 \
    -g -l RDU \
    -u http://192.168.0.1:8980/opennms-remoting
For a complete overview of the remote poller architecture, see the Remote Poller Design Overview page.