Graph Gallery

This page was built to share unique graphs that OpenNMS admins have built to suit their needs and tastes. It assumes a basic understanding of graph definition within OpenNMS, and it can also help you familiarize yourself with data collection and graph definitions.
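
For orientation, every entry below follows the same basic shape in snmp-graph.properties: a report is named, tied to collected columns and a resource type, and given a graph command. A minimal skeleton (all names here are illustrative, not a shipped report) looks like this:

report.example.name=Example Graph
report.example.columns=dsName
report.example.type=nodeSnmp
report.example.command=--title="Example Graph" \
 --vertical-label="Units" \
 DEF:val={rrd1}:dsName:AVERAGE \
 LINE1:val#0000ff:"Value" \
 GPRINT:val:AVERAGE:"Avg\\: %6.2lf %s\\n"

The report ID (example here) must also be added to the file's leading reports= list before the graph appears in the web UI.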

Feel free to edit this page and add yours!

JRobin vs RRD caveats

There are a few differences in definition code that may not translate between JRobin and RRD definitions. While JRobin currently provides better collection performance, RRD offers a greater variety of graph parameters you may find useful. Essentially, JRobin needs its feature set to catch up with RRD's, or RRD needs to improve its collection performance.

JRobin has a great tool available for data inspection. Here is a very helpful guide.

PLEASE NOTE: Any changes to data collection type (JRB, RRD) or aging scheme will result in losing all historic data!

Some parameters currently unique to RRD collection (a combined sketch follows this list):

  • --slope-mode option (can be added to command.prefix) provides smoother-appearing graphs
  • CDEF
    • TREND operator allows sliding-window averages
  • VDEF (note: the VDEF token has worked with JRobin since OpenNMS 1.8.5)
    • PERCENT allows for 95th-percentile calculation
    • LSLSLOPE, LSLINT predictive functions
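
A minimal sketch, in RRD syntax, of how these tokens combine in a report command (the datasource names are illustrative, borrowed from the bits definitions later on this page):

 DEF:octIn={rrd1}:ifHCInOctets:AVERAGE \
 CDEF:bitsIn=octIn,8,* \
 CDEF:bitsInTrend=bitsIn,1800,TREND \
 VDEF:bitsIn95=bitsIn,95,PERCENT \
 LINE1:bitsInTrend#00aa00:"30-min sliding avg" \
 GPRINT:bitsIn95:" 95th pct\\: %6.2lf %s\\n"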

Syntax differences for OpenNMS graph definitions:

RRD VDEF-based variable GPRINT statement:
GPRINT:bitsIn95:" 95th pct\\: %6.2lf %s" \
JRobin VDEF-based variable GPRINT statement:
GPRINT:bitsIn95:AVERAGE:" 95th pct\\: %6.2lf %s" \

Please see the excellent documentation at Tobias Oetiker's site for more detail.

Default graph command.prefix

by Charles G. Hopkins

OpenNMS is very often used inside a NOC/SOC environment as well as a normal office environment. In that case you may want to make the graphs more consistent in size and color for use in both. To do so, modify the line at the top of the snmp-graph.properties file in the OpenNMS configuration directory to something similar to the following:

command.prefix=/usr/bin/rrdtool graph - --width 600 --height 200 --imgformat PNG --font DEFAULT:8 --font TITLE:14:Helvetica-Bold --font AXIS:8:Helvetica-Bold --font UNIT:8:Helvetica-Bold --font LEGEND:8 --font WATERMARK:12:Helvetica-Bold --watermark "YOUR_ORGNAME_HERE - City State" --color CANVAS#a4a4a4 --start {startTime} --end {endTime} --disable-rrdtool-tag

The above makes the following changes:

1. Sets graphs to a larger, uniform size without being too large.
2. Uses a cleaner font for most of the text in the graph, leaving the legend in the default Courier monospaced font.
3. Puts a watermark at the bottom of each graph with your text of choice.
4. Eliminates the "RRDTOOL / TOBI OETIKER" watermark from all of the graphs.


Graphs

Bits In/Out integrating Bandwidth Utilization

by Ken Eshelby

Customers often want to know bandwidth utilization, either as a percentage of available capacity or as a bit rate. I liked how the default Net-SNMP CPU Statistics graph combines CPU % and load average, and came up with a compromise. Bandwidth has more chaotic transitions than CPU, so I chose to represent utilization percent differently: some ugly-looking logic code darkens the colors according to percent ranges, and the result is a balance that works.

The definition uses a 95th-percentile calculation and 64-bit counter values. Used on a v1.3.11 server.

Color gradient steps were calculated with this helpful online tool.
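
As an annotated sketch of how one band works, take the 81-90% in-band pair (the CDEF lines are copied verbatim from the definition below; the # lines are commentary, not report content):

 # i90: how far bitsIn overshoots 80% of ifSpeed (0 while below 80%)
 CDEF:i90=bitsIn,{ifSpeed},.8,*,-,0,GE,bitsIn,{ifSpeed},.8,*,-,0,IF \
 # pctIn90: the fixed 10% "block" of ifSpeed once utilization passes 90%,
 # otherwise the partial overshoot i90
 CDEF:pctIn90=81,pctIn,LT,pctIn,0,IF,90,GT,block,i90,IF \

Each band is then STACKed on the one below it, so the stacked areas always sum to the current bits-in value while darkening with utilization.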

Newbwgraph.png

report.mib2.HCbits.name=Bits In/Out (High Speed)
report.mib2.HCbits.suppress=mib2.bits
report.mib2.HCbits.columns=ifHCInOctets,ifHCOutOctets
report.mib2.HCbits.type=interfaceSnmp
report.mib2.HCbits.externalValues=ifSpeed
report.mib2.HCbits.command=--title="Bits In/Out (High Speed)" \
 --width 580 \
 --height 200 \
 --vertical-label="Bits per second" \
 DEF:octIn={rrd1}:ifHCInOctets:AVERAGE \
 DEF:octOut={rrd2}:ifHCOutOctets:AVERAGE \
 CDEF:bitsIn=octIn,8,* \
 CDEF:bitsOut=octOut,8,* \
 CDEF:bitsOutNeg=0,bitsOut,- \
 CDEF:pctIn=bitsIn,{ifSpeed},/,100,* \
 CDEF:pctOut=bitsOut,{ifSpeed},/,100,* \
 CDEF:block={ifSpeed},.1,*,bitsIn,0,*,+ \
 CDEF:IFSpeed=block,10,* \
 CDEF:divider=bitsIn,0,* \
 CDEF:i100=bitsIn,{ifSpeed},.9,*,-,0,GE,bitsIn,{ifSpeed},.9,*,-,0,IF \
 CDEF:pctIn100=91,pctIn,LT,i100,0,IF \
 CDEF:i90=bitsIn,{ifSpeed},.8,*,-,0,GE,bitsIn,{ifSpeed},.8,*,-,0,IF \
 CDEF:pctIn90=81,pctIn,LT,pctIn,0,IF,90,GT,block,i90,IF \
 CDEF:i80=bitsIn,{ifSpeed},.7,*,-,0,GE,bitsIn,{ifSpeed},.7,*,-,0,IF \
 CDEF:pctIn80=71,pctIn,LT,pctIn,0,IF,80,GT,block,i80,IF \
 CDEF:i70=bitsIn,{ifSpeed},.6,*,-,0,GE,bitsIn,{ifSpeed},.6,*,-,0,IF \
 CDEF:pctIn70=61,pctIn,LT,pctIn,0,IF,70,GT,block,i70,IF \
 CDEF:i60=bitsIn,{ifSpeed},.5,*,-,0,GE,bitsIn,{ifSpeed},.5,*,-,0,IF \
 CDEF:pctIn60=51,pctIn,LT,pctIn,0,IF,60,GT,block,i60,IF \
 CDEF:i50=bitsIn,{ifSpeed},.4,*,-,0,GE,bitsIn,{ifSpeed},.4,*,-,0,IF \
 CDEF:pctIn50=41,pctIn,LT,pctIn,0,IF,50,GT,block,i50,IF \
 CDEF:i40=bitsIn,{ifSpeed},.3,*,-,0,GE,bitsIn,{ifSpeed},.3,*,-,0,IF \
 CDEF:pctIn40=31,pctIn,LT,pctIn,0,IF,40,GT,block,i40,IF \
 CDEF:i30=bitsIn,{ifSpeed},.2,*,-,0,GE,bitsIn,{ifSpeed},.2,*,-,0,IF \
 CDEF:pctIn30=21,pctIn,LT,pctIn,0,IF,30,GT,block,i30,IF \
 CDEF:i20=bitsIn,{ifSpeed},.1,*,-,0,GE,bitsIn,{ifSpeed},.1,*,-,0,IF \
 CDEF:pctIn20=11,pctIn,LT,pctIn,0,IF,20,GT,block,i20,IF \
 CDEF:pctIn10=pctIn,10,GT,block,bitsIn,IF \
 CDEF:o100=bitsOut,{ifSpeed},.9,*,-,0,GE,bitsOut,{ifSpeed},.9,*,-,0,IF \
 CDEF:pctOut100=91,pctOut,LT,o100,0,IF \
 CDEF:o90=bitsOut,{ifSpeed},.8,*,-,0,GE,bitsOut,{ifSpeed},.8,*,-,0,IF \
 CDEF:pctOut90=81,pctOut,LT,pctOut,0,IF,90,GT,block,o90,IF \
 CDEF:o80=bitsOut,{ifSpeed},.7,*,-,0,GE,bitsOut,{ifSpeed},.7,*,-,0,IF \
 CDEF:pctOut80=71,pctOut,LT,pctOut,0,IF,80,GT,block,o80,IF \
 CDEF:o70=bitsOut,{ifSpeed},.6,*,-,0,GE,bitsOut,{ifSpeed},.6,*,-,0,IF \
 CDEF:pctOut70=61,pctOut,LT,pctOut,0,IF,70,GT,block,o70,IF \
 CDEF:o60=bitsOut,{ifSpeed},.5,*,-,0,GE,bitsOut,{ifSpeed},.5,*,-,0,IF \
 CDEF:pctOut60=51,pctOut,LT,pctOut,0,IF,60,GT,block,o60,IF \
 CDEF:o50=bitsOut,{ifSpeed},.4,*,-,0,GE,bitsOut,{ifSpeed},.4,*,-,0,IF \
 CDEF:pctOut50=41,pctOut,LT,pctOut,0,IF,50,GT,block,o50,IF \
 CDEF:o40=bitsOut,{ifSpeed},.3,*,-,0,GE,bitsOut,{ifSpeed},.3,*,-,0,IF \
 CDEF:pctOut40=31,pctOut,LT,pctOut,0,IF,40,GT,block,o40,IF \
 CDEF:o30=bitsOut,{ifSpeed},.2,*,-,0,GE,bitsOut,{ifSpeed},.2,*,-,0,IF \
 CDEF:pctOut30=21,pctOut,LT,pctOut,0,IF,30,GT,block,o30,IF \
 CDEF:o20=bitsOut,{ifSpeed},.1,*,-,0,GE,bitsOut,{ifSpeed},.1,*,-,0,IF \
 CDEF:pctOut20=11,pctOut,LT,pctOut,0,IF,20,GT,block,o20,IF \
 CDEF:pctOut10=pctOut,10,GT,block,bitsOut,IF \
 CDEF:pctOutNeg10=0,pctOut10,- \
 CDEF:pctOutNeg20=0,pctOut20,- \
 CDEF:pctOutNeg30=0,pctOut30,- \
 CDEF:pctOutNeg40=0,pctOut40,- \
 CDEF:pctOutNeg50=0,pctOut50,- \
 CDEF:pctOutNeg60=0,pctOut60,- \
 CDEF:pctOutNeg70=0,pctOut70,- \
 CDEF:pctOutNeg80=0,pctOut80,- \
 CDEF:pctOutNeg90=0,pctOut90,- \
 CDEF:pctOutNeg100=0,pctOut100,- \
 CDEF:outSum=bitsOut,{diffTime},* \
 CDEF:inSum=bitsIn,{diffTime},* \
 CDEF:totBits=octIn,octOut,+,8,* \
 CDEF:totSum=totBits,{diffTime},* \
 VDEF:bitsIn95=bitsIn,95,PERCENT \
 VDEF:bitsOut95=bitsOut,95,PERCENT \
 VDEF:pctIn95=pctIn,95,PERCENT \
 VDEF:pctOut95=pctOut,95,PERCENT \
 COMMENT:"Bandwidth Utilization (%)                                               " \
 GPRINT:IFSpeed:AVERAGE:"            Max Speed\\: %6.0lf%sb/s\\n" \
 COMMENT:"In " \
 AREA:pctIn10#ffffff:" 0-10%" \
 STACK:pctIn20#e2ffe2:"11-20%" \
 STACK:pctIn30#c6ffc6:"21-30%" \
 STACK:pctIn40#aaffaa:"31-40%" \
 STACK:pctIn50#8dff8d:"41-50%" \
 STACK:pctIn60#71ff71:"51-60%" \
 STACK:pctIn70#55ff55:"61-70%" \
 STACK:pctIn80#38ff38:"71-80%" \
 STACK:pctIn90#1cff1c:"81-90%" \
 STACK:pctIn100#00ff00:"91-100%\\n" \
 GPRINT:pctIn:AVERAGE:"     Avg\\: %6.2lf" \
 GPRINT:pctIn95:AVERAGE:" 95th pct\\: %6.2lf" \
 GPRINT:pctIn:MIN:"Min\\: %6.2lf" \
 GPRINT:pctIn:MAX:"Max\\: %6.2lf" \
 GPRINT:pctIn:LAST:"Current\\: %6.2lf\\n" \
 COMMENT:"\\n" \
 COMMENT:"Out" \
 AREA:pctOutNeg10#ffffff:" 0-10%" \
 STACK:pctOutNeg20#e2e2ff:"11-20%" \
 STACK:pctOutNeg30#c6c6ff:"21-30%" \
 STACK:pctOutNeg40#aaaaff:"31-40%" \
 STACK:pctOutNeg50#8d8dff:"41-50%" \
 STACK:pctOutNeg60#7171ff:"51-60%" \
 STACK:pctOutNeg70#5555ff:"61-70%" \
 STACK:pctOutNeg80#3838ff:"71-80%" \
 STACK:pctOutNeg90#1c1cff:"81-90%" \
 STACK:pctOutNeg100#0000ff:"91-100%\\n" \
 GPRINT:pctOut:AVERAGE:"     Avg\\: %6.2lf" \
 GPRINT:pctOut95:AVERAGE:" 95th pct\\: %6.2lf" \
 GPRINT:pctOut:MIN:"Min\\: %6.2lf" \
 GPRINT:pctOut:MAX:"Max\\: %6.2lf" \
 GPRINT:pctOut:LAST:"Current\\: %6.2lf\\n" \
 COMMENT:"\\n" \
 COMMENT:"Bit-rate (per second)\\n" \
 LINE1:bitsIn#00ff00:"In" \
 GPRINT:bitsIn:AVERAGE:" Avg\\: %6.2lf %s" \
 GPRINT:bitsIn95:AVERAGE:" 95th pct\\: %6.2lf %s" \
 GPRINT:bitsIn:MIN:"Min\\: %6.2lf %s" \
 GPRINT:bitsIn:MAX:"Max\\: %6.2lf %s" \
 GPRINT:inSum:AVERAGE:"Tot\\: %6.2lf %s" \
 GPRINT:bitsIn:LAST:"Current\\: %6.2lf %s\\n" \
 LINE1:divider#000000 \
 LINE1:bitsOutNeg#0000ff:"Out" \
 GPRINT:bitsOut:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:bitsOut95:AVERAGE:" 95th pct\\: %6.2lf %s" \
 GPRINT:bitsOut:MIN:"Min\\: %6.2lf %s" \
 GPRINT:bitsOut:MAX:"Max\\: %6.2lf %s" \
 GPRINT:outSum:AVERAGE:"Tot\\: %6.2lf %s" \
 GPRINT:bitsOut:LAST:"Current\\: %6.2lf %s\\n" \
 GPRINT:totSum:AVERAGE:"                                                        Total Bits transferred\\: %6.2lf %s"

  • NOTE: If you are using the JRobin strategy, you need to adjust the 95th-percentile GPRINT lines to include the consolidation function (AVERAGE), or the graphs will not load.

A slightly different version combining interface utilization and Mbit/s. I have also added a suppress directive so that redundant graph information is not shown; remove that line if you want to display the other graphs as well.

by --_indigo (talk) 11:00, 21 January 2014 (EST)

Mib2-traffic.png

report.mib2.bitsTraffic.name=Bits In/Out with interface utilization
report.mib2.bitsTraffic.columns=ifInOctets,ifOutOctets
report.mib2.bitsTraffic.suppress=mib2.bits,mib2.traffic-inout
report.mib2.bitsTraffic.type=interfaceSnmp
report.mib2.bitsTraffic.externalValues=ifSpeed
report.mib2.bitsTraffic.command=--title="Bits In/Out with interface utilization" \
 --vertical-label="Bits per second" \
 --units=si \
 DEF:octIn={rrd1}:ifInOctets:AVERAGE \
 DEF:minOctIn={rrd1}:ifInOctets:MIN \
 DEF:maxOctIn={rrd1}:ifInOctets:MAX \
 DEF:octOut={rrd2}:ifOutOctets:AVERAGE \
 DEF:minOctOut={rrd2}:ifOutOctets:MIN \
 DEF:maxOctOut={rrd2}:ifOutOctets:MAX \
 CDEF:rawbitsIn=octIn,8,* \
 CDEF:minRawbitsIn=minOctIn,8,* \
 CDEF:maxRawbitsIn=maxOctIn,8,* \
 CDEF:rawbitsOut=octOut,8,* \
 CDEF:minRawbitsOut=minOctOut,8,* \
 CDEF:maxRawbitsOut=maxOctOut,8,* \
 CDEF:rawbitsOutNeg=0,rawbitsOut,- \
 CDEF:rawtotBits=octIn,octOut,+,8,* \
 CDEF:bitsIn=rawbitsIn,UN,0,rawbitsIn,IF \
 CDEF:bitsOut=rawbitsOut,UN,0,rawbitsOut,IF \
 CDEF:totBits=rawtotBits,UN,0,rawtotBits,IF \
 CDEF:outSum=bitsOut,{diffTime},* \
 CDEF:inSum=bitsIn,{diffTime},* \
 CDEF:totSum=totBits,{diffTime},* \
 CDEF:block={ifSpeed},.1,*,bitsIn,0,*,+ \
 CDEF:IFSpeed=block,10,* \
 CDEF:percentIn=octIn,8,*,{ifSpeed},/,100,* \
 CDEF:percentOut=octOut,8,*,{ifSpeed},/,100,* \
 CDEF:percentIn10=0,percentIn,GE,0,rawbitsIn,IF \
 CDEF:percentIn20=10,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn30=20,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn40=30,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn50=40,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn60=50,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn70=60,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn80=70,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn90=80,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn100=90,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentOut10=0,percentOut,GE,0,rawbitsOutNeg,IF \
 CDEF:percentOut20=10,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut30=20,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut40=30,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut50=40,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut60=50,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut70=60,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut80=70,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut90=80,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut100=90,percentOut,GT,0,rawbitsOutNeg,IF \
 COMMENT:"\\n" \
 COMMENT:"In-/Out interface utilization (%) (Maximum interface speed\\:" \
 GPRINT:IFSpeed:AVERAGE:"%0.0lf%sb/s)\\n" \
 AREA:percentIn10#5ca53f:" 0-10%" \
 AREA:percentIn20#75b731:"11-20%" \
 AREA:percentIn30#90c22f:"21-30%" \
 AREA:percentIn40#b8d029:"31-40%" \
 AREA:percentIn50#e4e11e:"41-50%" \
 AREA:percentIn60#fee610:"51-60%" \
 AREA:percentIn70#f4bd1b:"61-70%" \
 AREA:percentIn80#eaa322:"71-80%" \
 AREA:percentIn90#de6822:"81-90%" \
 AREA:percentIn100#d94c20:"91-100% \\n" \
 LINE1:rawbitsIn#424242 \
 AREA:percentOut10#4c952f:" 0-10%" \
 AREA:percentOut20#65a721:"11-20%" \
 AREA:percentOut30#80b21f:"21-30%" \
 AREA:percentOut40#a8c019:"31-40%" \
 AREA:percentOut50#d4d10e:"41-50%" \
 AREA:percentOut60#eed600:"51-60%" \
 AREA:percentOut70#e4ad0b:"61-70%" \
 AREA:percentOut80#da9312:"71-80%" \
 AREA:percentOut90#ce5812:"81-90%" \
 AREA:percentOut100#c93c10:"91-100%\\n" \
 LINE1:rawbitsOutNeg#424242 \
 COMMENT:" \\n" \
 GPRINT:rawbitsIn:AVERAGE:"Avg In  \\: %8.2lf %s" \
 GPRINT:rawbitsIn:MIN:"Min In  \\: %8.2lf %s" \
 GPRINT:rawbitsIn:MAX:"Max In  \\: %8.2lf %s\\n" \
 GPRINT:rawbitsOut:AVERAGE:"Avg Out \\: %8.2lf %s" \
 GPRINT:rawbitsOut:MIN:"Min Out \\: %8.2lf %s" \
 GPRINT:rawbitsOut:MAX:"Max Out \\: %8.2lf %s\\n" \
 GPRINT:inSum:AVERAGE:"Tot In  \\: %8.2lf %s" \
 GPRINT:outSum:AVERAGE:"Tot Out \\: %8.2lf %s" \
 GPRINT:totSum:AVERAGE:"Tot     \\: %8.2lf %s\\n" \
 HRULE:0#424242

report.mib2.HCbitsTraffic.name=Bits In/Out with interface utilization (HC)
report.mib2.HCbitsTraffic.suppress=mib2.bitsTraffic,mib2.HCbits,mib2.bits,mib2.HCtraffic-inout,mib2.traffic-inout
report.mib2.HCbitsTraffic.columns=ifHCInOctets,ifHCOutOctets
report.mib2.HCbitsTraffic.type=interfaceSnmp
report.mib2.HCbitsTraffic.externalValues=ifSpeed
report.mib2.HCbitsTraffic.command=--title="Bits In/Out with interface utilization (HC)" \
 --vertical-label="Bits per second" \
 --units=si \
 DEF:octIn={rrd1}:ifHCInOctets:AVERAGE \
 DEF:minOctIn={rrd1}:ifHCInOctets:MIN \
 DEF:maxOctIn={rrd1}:ifHCInOctets:MAX \
 DEF:octOut={rrd2}:ifHCOutOctets:AVERAGE \
 DEF:minOctOut={rrd2}:ifHCOutOctets:MIN \
 DEF:maxOctOut={rrd2}:ifHCOutOctets:MAX \
 CDEF:rawbitsIn=octIn,8,* \
 CDEF:minRawbitsIn=minOctIn,8,* \
 CDEF:maxRawbitsIn=maxOctIn,8,* \
 CDEF:rawbitsOut=octOut,8,* \
 CDEF:minRawbitsOut=minOctOut,8,* \
 CDEF:maxRawbitsOut=maxOctOut,8,* \
 CDEF:rawbitsOutNeg=0,rawbitsOut,- \
 CDEF:rawtotBits=octIn,octOut,+,8,* \
 CDEF:bitsIn=rawbitsIn,UN,0,rawbitsIn,IF \
 CDEF:bitsOut=rawbitsOut,UN,0,rawbitsOut,IF \
 CDEF:totBits=rawtotBits,UN,0,rawtotBits,IF \
 CDEF:outSum=bitsOut,{diffTime},* \
 CDEF:inSum=bitsIn,{diffTime},* \
 CDEF:totSum=totBits,{diffTime},* \
 CDEF:block={ifSpeed},.1,*,bitsIn,0,*,+ \
 CDEF:IFSpeed=block,10,* \
 CDEF:percentIn=octIn,8,*,{ifSpeed},/,100,* \
 CDEF:percentOut=octOut,8,*,{ifSpeed},/,100,* \
 CDEF:percentIn10=0,percentIn,GE,0,rawbitsIn,IF \
 CDEF:percentIn20=10,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn30=20,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn40=30,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn50=40,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn60=50,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn70=60,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn80=70,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn90=80,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn100=90,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentOut10=0,percentOut,GE,0,rawbitsOutNeg,IF \
 CDEF:percentOut20=10,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut30=20,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut40=30,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut50=40,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut60=50,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut70=60,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut80=70,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut90=80,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut100=90,percentOut,GT,0,rawbitsOutNeg,IF \
 COMMENT:"\\n" \
 COMMENT:"In-/Out interface utilization (%) (Maximum interface speed\\:" \
 GPRINT:IFSpeed:AVERAGE:"%0.0lf%sb/s)\\n" \
 AREA:percentIn10#5ca53f:" 0-10%" \
 AREA:percentIn20#75b731:"11-20%" \
 AREA:percentIn30#90c22f:"21-30%" \
 AREA:percentIn40#b8d029:"31-40%" \
 AREA:percentIn50#e4e11e:"41-50%" \
 AREA:percentIn60#fee610:"51-60%" \
 AREA:percentIn70#f4bd1b:"61-70%" \
 AREA:percentIn80#eaa322:"71-80%" \
 AREA:percentIn90#de6822:"81-90%" \
 AREA:percentIn100#d94c20:"91-100% \\n" \
 LINE1:rawbitsIn#424242 \
 AREA:percentOut10#4c952f:" 0-10%" \
 AREA:percentOut20#65a721:"11-20%" \
 AREA:percentOut30#80b21f:"21-30%" \
 AREA:percentOut40#a8c019:"31-40%" \
 AREA:percentOut50#d4d10e:"41-50%" \
 AREA:percentOut60#eed600:"51-60%" \
 AREA:percentOut70#e4ad0b:"61-70%" \
 AREA:percentOut80#da9312:"71-80%" \
 AREA:percentOut90#ce5812:"81-90%" \
 AREA:percentOut100#c93c10:"91-100% \\n" \
 LINE1:rawbitsOutNeg#424242 \
 COMMENT:" \\n" \
 GPRINT:rawbitsIn:AVERAGE:"Avg In  \\: %8.2lf %s" \
 GPRINT:rawbitsIn:MIN:"Min In  \\: %8.2lf %s" \
 GPRINT:rawbitsIn:MAX:"Max In  \\: %8.2lf %s\\n" \
 GPRINT:rawbitsOut:AVERAGE:"Avg Out \\: %8.2lf %s" \
 GPRINT:rawbitsOut:MIN:"Min Out \\: %8.2lf %s" \
 GPRINT:rawbitsOut:MAX:"Max Out \\: %8.2lf %s\\n" \
 GPRINT:inSum:AVERAGE:"Tot In  \\: %8.2lf %s" \
 GPRINT:outSum:AVERAGE:"Tot Out \\: %8.2lf %s" \
 GPRINT:totSum:AVERAGE:"Tot     \\: %8.2lf %s\\n" \
 HRULE:0#424242

Another variant adds a 95th-percentile calculation:

--_indigo (talk) 07:07, 28 January 2014 (EST)

Mib2-HCbits95.png
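
One subtlety worth calling out (annotation only; these lines appear verbatim in the command below): a VDEF result is a single scalar, so to draw the 95th percentile mirrored below the axis it must be turned back into a time series first:

 VDEF:outpct=bitsOut,95,PERCENT \
 # bitsOut is pushed only to give the CDEF a series to iterate over;
 # POP discards it, leaving -1 * outpct at every step
 CDEF:outpctneg=bitsOut,POP,outpct,-1,* \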

reports=mib2.HCbits95

report.mib2.HCbits95.name=Bits In/Out with 95 percentile (HC)
report.mib2.HCbits95.suppress=mib2.bitsTraffic,mib2.HCbits,mib2.bits,mib2.HCtraffic-inout,mib2.traffic-inout
report.mib2.HCbits95.columns=ifHCInOctets,ifHCOutOctets
report.mib2.HCbits95.type=interfaceSnmp
report.mib2.HCbits95.externalValues=ifSpeed
report.mib2.HCbits95.command=--title="Bits In/Out with 95 percentile (HC)" \
 --vertical-label="Bits per second" \
 --units=si \
 DEF:octIn={rrd1}:ifHCInOctets:AVERAGE \
 DEF:minOctIn={rrd1}:ifHCInOctets:MIN \
 DEF:maxOctIn={rrd1}:ifHCInOctets:MAX \
 DEF:octOut={rrd2}:ifHCOutOctets:AVERAGE \
 DEF:minOctOut={rrd2}:ifHCOutOctets:MIN \
 DEF:maxOctOut={rrd2}:ifHCOutOctets:MAX \
 CDEF:rawbitsIn=octIn,8,* \
 CDEF:minRawbitsIn=minOctIn,8,* \
 CDEF:maxRawbitsIn=maxOctIn,8,* \
 CDEF:rawbitsOut=octOut,8,* \
 CDEF:minRawbitsOut=minOctOut,8,* \
 CDEF:maxRawbitsOut=maxOctOut,8,* \
 CDEF:rawbitsOutNeg=0,rawbitsOut,- \
 CDEF:rawtotBits=octIn,octOut,+,8,* \
 CDEF:bitsIn=rawbitsIn,UN,0,rawbitsIn,IF \
 CDEF:bitsOut=rawbitsOut,UN,0,rawbitsOut,IF \
 CDEF:totBits=rawtotBits,UN,0,rawtotBits,IF \
 CDEF:outSum=bitsOut,{diffTime},* \
 CDEF:inSum=bitsIn,{diffTime},* \
 CDEF:totSum=totBits,{diffTime},* \
 CDEF:block={ifSpeed},.1,*,bitsIn,0,*,+ \
 VDEF:inpct=bitsIn,95,PERCENT \
 VDEF:outpct=bitsOut,95,PERCENT \
 CDEF:outpctneg=bitsOut,POP,outpct,-1,* \
 CDEF:IFSpeed=block,10,* \
 CDEF:percentIn=octIn,8,*,{ifSpeed},/,100,* \
 CDEF:percentOut=octOut,8,*,{ifSpeed},/,100,* \
 CDEF:percentIn10=0,percentIn,GE,0,rawbitsIn,IF \
 CDEF:percentIn20=10,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn30=20,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn40=30,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn50=40,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn60=50,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn70=60,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn80=70,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn90=80,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentIn100=90,percentIn,GT,0,rawbitsIn,IF \
 CDEF:percentOut10=0,percentOut,GE,0,rawbitsOutNeg,IF \
 CDEF:percentOut20=10,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut30=20,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut40=30,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut50=40,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut60=50,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut70=60,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut80=70,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut90=80,percentOut,GT,0,rawbitsOutNeg,IF \
 CDEF:percentOut100=90,percentOut,GT,0,rawbitsOutNeg,IF \
 COMMENT:"\\n" \
 COMMENT:"In-/Out interface utilization (%) (Maximum interface speed\\:" \
 GPRINT:IFSpeed:AVERAGE:"%0.0lf%sb/s)\\n" \
 AREA:percentIn10#5ca53f:" 0-10%" \
 AREA:percentIn20#75b731:"11-20%" \
 AREA:percentIn30#90c22f:"21-30%" \
 AREA:percentIn40#b8d029:"31-40%" \
 AREA:percentIn50#e4e11e:"41-50%" \
 AREA:percentIn60#fee610:"51-60%" \
 AREA:percentIn70#f4bd1b:"61-70%" \
 AREA:percentIn80#eaa322:"71-80%" \
 AREA:percentIn90#de6822:"81-90%" \
 AREA:percentIn100#d94c20:"91-100% \\n" \
 LINE1:rawbitsIn#424242 \
 AREA:percentOut10#4c952f:" 0-10%" \
 AREA:percentOut20#65a721:"11-20%" \
 AREA:percentOut30#80b21f:"21-30%" \
 AREA:percentOut40#a8c019:"31-40%" \
 AREA:percentOut50#d4d10e:"41-50%" \
 AREA:percentOut60#eed600:"51-60%" \
 AREA:percentOut70#e4ad0b:"61-70%" \
 AREA:percentOut80#da9312:"71-80%" \
 AREA:percentOut90#ce5812:"81-90%" \
 AREA:percentOut100#c93c10:"91-100% \\n" \
 LINE1:rawbitsOutNeg#424242 \
 COMMENT:" \\n" \
 LINE2:inpct#ce5c00:"95 pct in" \
 GPRINT:inpct:"  \\: %8.2lf %s\\n" \
 LINE2:outpctneg#f57900:"95 pct out" \
 GPRINT:outpct:" \\: %8.2lf %s\\n" \
 GPRINT:rawbitsIn:AVERAGE:"Avg In  \\: %8.2lf %s" \
 GPRINT:rawbitsIn:MIN:"Min In  \\: %8.2lf %s" \
 GPRINT:rawbitsIn:MAX:"Max In  \\: %8.2lf %s\\n" \
 GPRINT:rawbitsOut:AVERAGE:"Avg Out \\: %8.2lf %s" \
 GPRINT:rawbitsOut:MIN:"Min Out \\: %8.2lf %s" \
 GPRINT:rawbitsOut:MAX:"Max Out \\: %8.2lf %s\\n" \
 GPRINT:inSum:AVERAGE:"Tot In  \\: %8.2lf %s" \
 GPRINT:outSum:AVERAGE:"Tot Out \\: %8.2lf %s" \
 GPRINT:totSum:AVERAGE:"Tot     \\: %8.2lf %s\\n"

Unicast + Non-Unicast Traffic

Non-unicast traffic is collected by default without a graph definition. I added it to my default unicast graph.

Used on a v1.3.11 server.

Packettype.png

report.mib2.packets.name=Packets In/Out
report.mib2.packets.columns=ifInUcastpkts,ifOutUcastPkts,ifInNUcastpkts,ifOutNUcastPkts
report.mib2.packets.type=interfaceSnmp
report.mib2.packets.command=--title="Packets In/Out by Type" \
 --vertical-label="Packets per second" \
 --width 425 \
 --height 130 \
 DEF:UpktsIn={rrd1}:ifInUcastpkts:AVERAGE \
 DEF:minUPktsIn={rrd1}:ifInUcastpkts:MIN \
 DEF:maxUPktsIn={rrd1}:ifInUcastpkts:MAX \
 DEF:UpktsOut={rrd2}:ifOutUcastPkts:AVERAGE \
 DEF:minUPktsOut={rrd2}:ifOutUcastPkts:MIN \
 DEF:maxUPktsOut={rrd2}:ifOutUcastPkts:MAX \
 DEF:NUpktsIn={rrd3}:ifInNUcastpkts:AVERAGE \
 DEF:minNUPktsIn={rrd3}:ifInNUcastpkts:MIN \
 DEF:maxNUPktsIn={rrd3}:ifInNUcastpkts:MAX \
 DEF:NUpktsOut={rrd4}:ifOutNUcastPkts:AVERAGE \
 DEF:minNUPktsOut={rrd4}:ifOutNUcastPkts:MIN \
 DEF:maxNUPktsOut={rrd4}:ifOutNUcastPkts:MAX \
 CDEF:UpktsOutNeg=0,UpktsOut,- \
 CDEF:NUpktsOutNeg=0,NUpktsOut,- \
 AREA:UpktsIn#00ff00:"Unicast In " \
 GPRINT:UpktsIn:AVERAGE:"    Avg\\: %6.2lf %s" \
 GPRINT:UpktsIn:MIN:"Min\\: %6.2lf %s" \
 GPRINT:UpktsIn:MAX:"Max\\: %6.2lf %s" \
 GPRINT:UpktsIn:LAST:"Current\\: %6.2lf %s\\n" \
 STACK:NUpktsIn#66cc00:"NonUnicast In " \
 GPRINT:NUpktsIn:AVERAGE:" Avg\\: %6.2lf %s" \
 GPRINT:NUpktsIn:MIN:"Min\\: %6.2lf %s" \
 GPRINT:NUpktsIn:MAX:"Max\\: %6.2lf %s" \
 GPRINT:NUpktsIn:LAST:"Current\\: %6.2lf %s\\n" \
 AREA:UpktsOutNeg#0000ff:"Unicast Out" \
 GPRINT:UpktsOut:AVERAGE:"    Avg\\: %6.2lf %s" \
 GPRINT:UpktsOut:MIN:"Min\\: %6.2lf %s" \
 GPRINT:UpktsOut:MAX:"Max\\: %6.2lf %s" \
 GPRINT:UpktsOut:LAST:"Current\\: %6.2lf %s\\n" \
 STACK:NUpktsOutNeg#66ccff:"NonUnicast Out " \
 GPRINT:NUpktsOut:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:NUpktsOut:MIN:"Min\\: %6.2lf %s" \
 GPRINT:NUpktsOut:MAX:"Max\\: %6.2lf %s" \
 GPRINT:NUpktsOut:LAST:"Current\\: %6.2lf %s\\n"

Cisco Interface Error Detail

These are some of the counters seen with "show interface" on a Cisco physical interface.

Used on a v1.3.11 server. The required data collection code is included below.

Ciscoerrors.png

report.cisco.iferrors.name=Cisco Interface Errors
report.cisco.iferrors.columns=locIfInCRC,locIfInFrame,locIfInRunts,locIfInGiants,locIfInOverrun,locIfCarTrans
report.cisco.iferrors.type=interfaceSnmp
report.cisco.iferrors.width=565
report.cisco.iferrors.height=200
report.cisco.iferrors.command=--title="Cisco Interface Error Detail" \
 --width 565 \
 --height 200 \
 --lower-limit 0 \
 --vertical-label="Errors" \
 DEF:incrc={rrd1}:locIfInCRC:AVERAGE \
 DEF:inframe={rrd2}:locIfInFrame:AVERAGE \
 DEF:inrunts={rrd3}:locIfInRunts:AVERAGE \
 DEF:ingiants={rrd4}:locIfInGiants:AVERAGE \
 DEF:inoverrun={rrd5}:locIfInOverrun:AVERAGE \
 DEF:cartrans={rrd6}:locIfCarTrans:AVERAGE \
 AREA:incrc#dd4400:"CRCs In" \
 GPRINT:incrc:AVERAGE:"            Avg\\: %6.2lf %s" \
 GPRINT:incrc:MIN:"Min\\: %6.2lf %s" \
 GPRINT:incrc:MAX:"Max\\: %6.2lf %s" \
 GPRINT:incrc:LAST:"Current\\: %6.2lf %s\\n" \
 STACK:inframe#00ffff:"Frame Errors In" \
 GPRINT:inframe:AVERAGE:"    Avg\\: %6.2lf %s" \
 GPRINT:inframe:MIN:"Min\\: %6.2lf %s" \
 GPRINT:inframe:MAX:"Max\\: %6.2lf %s" \
 GPRINT:inframe:LAST:"Current\\: %6.2lf %s\\n" \
 STACK:inrunts#00aa00:"Runts In" \
 GPRINT:inrunts:AVERAGE:"           Avg\\: %6.2lf %s" \
 GPRINT:inrunts:MIN:"Min\\: %6.2lf %s" \
 GPRINT:inrunts:MAX:"Max\\: %6.2lf %s" \
 GPRINT:inrunts:LAST:"Current\\: %6.2lf %s\\n" \
 STACK:ingiants#00ff00:"Giants In" \
 GPRINT:ingiants:AVERAGE:"          Avg\\: %6.2lf %s" \
 GPRINT:ingiants:MIN:"Min\\: %6.2lf %s" \
 GPRINT:ingiants:MAX:"Max\\: %6.2lf %s" \
 GPRINT:ingiants:LAST:"Current\\: %6.2lf %s\\n" \
 STACK:inoverrun#ffff00:"Overruns In" \
 GPRINT:inoverrun:AVERAGE:"        Avg\\: %6.2lf %s" \
 GPRINT:inoverrun:MIN:"Min\\: %6.2lf %s" \
 GPRINT:inoverrun:MAX:"Max\\: %6.2lf %s" \
 GPRINT:inoverrun:LAST:"Current\\: %6.2lf %s\\n" \
 LINE2:cartrans#0000ff:"Carrier Transitions" \
 GPRINT:cartrans:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:cartrans:MIN:"Min\\: %6.2lf %s" \
 GPRINT:cartrans:MAX:"Max\\: %6.2lf %s" \
 GPRINT:cartrans:LAST:"Current\\: %6.2lf %s"

datacollection-config.xml group:

      <group name="cisco-router-interface" ifType="all">
        <mibObj oid=".1.3.6.1.4.1.9.2.2.1.1.10" instance="ifIndex" alias="locIfInRunts" type="counter" />
        <mibObj oid=".1.3.6.1.4.1.9.2.2.1.1.11" instance="ifIndex" alias="locIfInGiants" type="counter" />
        <mibObj oid=".1.3.6.1.4.1.9.2.2.1.1.12" instance="ifIndex" alias="locIfInCRC" type="counter" />
        <mibObj oid=".1.3.6.1.4.1.9.2.2.1.1.13" instance="ifIndex" alias="locIfInFrame" type="counter" />
        <mibObj oid=".1.3.6.1.4.1.9.2.2.1.1.14" instance="ifIndex" alias="locIfInOverrun" type="counter" />
        <mibObj oid=".1.3.6.1.4.1.9.2.2.1.1.21" instance="ifIndex" alias="locIfCarTrans" type="counter" />
      </group>
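
For collection to actually run, the group must also be referenced from a system definition. Assuming the stock <systemDef name="Cisco Routers"> in datacollection-config.xml, that is one extra line inside its <collect> element:

          <includeGroup>cisco-router-interface</includeGroup>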

Cisco Router flow count

Added by: Alek Patsouris

Screenshot: RouterFlows.png

Graphs of flows (active and inactive) from Cisco routers. They are split into two graphs because of the gap between the active and inactive counts, but a combined version is also provided if you prefer them on the same graph.

The Cisco MIB defines these objects as:

  • cnfCIActiveFlows: Number of currently active flow entries.
  • cnfCIInactiveFlows: Number of available flow entries.
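
Before wiring up collection, you can sanity-check that a router actually exposes these objects with the Net-SNMP tools (host and community string are placeholders):

 snmpget -v2c -c public router.example.org .1.3.6.1.4.1.9.9.387.1.1.2.1.4.0
 snmpget -v2c -c public router.example.org .1.3.6.1.4.1.9.9.387.1.1.2.1.5.0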

snmp-graph.properties


At the top of the file add: 
cisco.ActiveFlows, cisco.IactiveFlows, cisco.flowtogether, \
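
(These IDs become entries in the file's leading reports= continuation list; a sketch of the result, with the neighbouring entries purely illustrative:)

reports=mib2.bits, mib2.HCbits, \
 cisco.ActiveFlows, cisco.IactiveFlows, cisco.flowtogether, \
 ...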

Then further down:

######
###### Reports for cisco router flows 
######
report.cisco.ActiveFlows.name=Cisco Active Router Flows
report.cisco.ActiveFlows.columns=ActiveFlows
report.cisco.ActiveFlows.type=nodeSnmp
report.cisco.ActiveFlows.command=--title="Active Router Flows" \
 --vertical-label="Active flows" \
 DEF:aflows={rrd1}:ActiveFlows:AVERAGE \
 DEF:minaflows={rrd1}:ActiveFlows:MIN \
 DEF:maxaflows={rrd1}:ActiveFlows:MAX \
 LINE1:aflows#0000ff:"Active Flows" \
 GPRINT:aflows:AVERAGE:" Average \\:%8.2lf" \
 GPRINT:aflows:MIN:" Min \\:%8.2lf"  \
 GPRINT:aflows:MAX:" Max \\:%8.2lf\\n"

report.cisco.IactiveFlows.name=Cisco Inactive Router Flows
report.cisco.IactiveFlows.columns=IactiveFlows
report.cisco.IactiveFlows.type=nodeSnmp
report.cisco.IactiveFlows.command=--title="Inactive Router Flows" \
 --vertical-label="Inactive flows" \
 DEF:iaflows={rrd1}:IactiveFlows:AVERAGE \
 DEF:miniaflows={rrd1}:IactiveFlows:MIN \
 DEF:maxiaflows={rrd1}:IactiveFlows:MAX \
 LINE1:iaflows#0000ff:"Inactive Flows" \
 GPRINT:iaflows:AVERAGE:" Average \\:%8.2lf" \
 GPRINT:iaflows:MIN:" Min \\:%8.2lf"  \
 GPRINT:iaflows:MAX:" Max \\:%8.2lf\\n"

report.cisco.flowtogether.name=Cisco Router Flows
report.cisco.flowtogether.columns=ActiveFlows, IactiveFlows
report.cisco.flowtogether.type=nodeSnmp
report.cisco.flowtogether.command=--title="Router Flows" \
 --vertical-label="Flows" \
 DEF:aflows={rrd1}:ActiveFlows:AVERAGE \
 DEF:minaflows={rrd1}:ActiveFlows:MIN \
 DEF:maxaflows={rrd1}:ActiveFlows:MAX \
 DEF:iaflows={rrd2}:IactiveFlows:AVERAGE \
 DEF:miniaflows={rrd2}:IactiveFlows:MIN \
 DEF:maxiaflows={rrd2}:IactiveFlows:MAX \
 LINE1:aflows#ff0000:"Active Flows" \
 GPRINT:aflows:AVERAGE:" Average \\:%8.2lf" \
 GPRINT:aflows:MIN:" Min \\:%8.2lf"  \
 GPRINT:aflows:MAX:" Max \\:%8.2lf\\n" \
 LINE1:iaflows#0000ff:"Inactive Flows" \
 GPRINT:iaflows:AVERAGE:" Average \\:%8.2lf" \
 GPRINT:iaflows:MIN:" Min \\:%8.2lf"  \
 GPRINT:iaflows:MAX:" Max \\:%8.2lf\\n"

And in datacollection/cisco.xml, add:
    <!-- Cisco Netflow flow count -->

      <group name="cisco-netflow" ifType="ignore">
        <mibObj oid=".1.3.6.1.4.1.9.9.387.1.1.2.1.4"  instance="0" alias="ActiveFlows" type="Gauge32" />
        <mibObj oid=".1.3.6.1.4.1.9.9.387.1.1.2.1.5"  instance="0" alias="IactiveFlows" type="Gauge32" />
      </group>

Then, at the bottom of the file, add:

          <includeGroup>cisco-netflow</includeGroup>

to the <collect> element of the <systemDef name="Cisco Routers"> definition.

ADONIS-DNS-MIB

Added by Andy Millett

ADONIS-DNS.jpg

report.adonis.name=Adonis DNS Statistics
report.adonis.columns=dnsStatsReferral,dnsStatsNXRRSet,dnsStatsNXDomain,dnsStatsFailure,dnsStatsRecursion
report.adonis.width=565
report.adonis.height=200
report.adonis.type=nodeSnmp
report.adonis.command=--title="Adonis DNS Statistics" \
 --vertical-label="Adonis DNS Statistics" \
 DEF:dnsStatsReferral={rrd1}:dnsStatsReferral:AVERAGE \
 DEF:dnsStatsNXRRSet={rrd2}:dnsStatsNXRRSet:AVERAGE \
 DEF:dnsStatsNXDomain={rrd3}:dnsStatsNXDomain:AVERAGE \
 DEF:dnsStatsFailure={rrd4}:dnsStatsFailure:AVERAGE \
 DEF:dnsStatsRecursion={rrd5}:dnsStatsRecursion:AVERAGE \
 LINE1:dnsStatsReferral#00ff00:"dnsStatsReferral" \
 GPRINT:dnsStatsReferral:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:dnsStatsReferral:MIN:"Min \\: %8.2lf %s" \
 GPRINT:dnsStatsReferral:MAX:"Max \\: %8.2lf %s\\n" \
 LINE1:dnsStatsNXRRSet#0000ff:"dnsStatsNXRRSet" \
 GPRINT:dnsStatsNXRRSet:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:dnsStatsNXRRSet:MIN:"Min \\: %8.2lf %s" \
 GPRINT:dnsStatsNXRRSet:MAX:"Max \\: %8.2lf %s\\n" \
 LINE1:dnsStatsNXDomain#ff0000:"dnsStatsNXDomain" \
 GPRINT:dnsStatsNXDomain:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:dnsStatsNXDomain:MIN:"Min \\: %8.2lf %s" \
 GPRINT:dnsStatsNXDomain:MAX:"Max \\: %8.2lf %s\\n" \
 LINE1:dnsStatsFailure#0ffff0:"dnsStatsFailure" \
 GPRINT:dnsStatsFailure:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:dnsStatsFailure:MIN:"Min \\: %8.2lf %s" \
 GPRINT:dnsStatsFailure:MAX:"Max \\: %8.2lf %s\\n" \
 LINE1:dnsStatsRecursion#ff00ff:"dnsStatsRecursion" \
 GPRINT:dnsStatsRecursion:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:dnsStatsRecursion:MIN:"Min \\: %8.2lf %s" \
 GPRINT:dnsStatsRecursion:MAX:"Max \\: %8.2lf %s\\n" 

<group name="adonisdns" ifType="ignore">
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.1" instance="0" alias="dnsDaemonRunning" type="INTEGER" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.2" instance="0" alias="dnsDaemonNumOfZones" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.3" instance="0" alias="dnsDaemonDebugLevel" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.4" instance="0" alias="dnsDaemonZoneTInPrg" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.5" instance="0" alias="dnsDaemonZoneTDefer" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.6" instance="0" alias="dnsDaemonSOAInProg" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.7" instance="0" alias="dnsDaemonQLogState" type="INTEGER" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.1.8" instance="0" alias="dnsDaemon" type="STRING" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.2.1" instance="0" alias="dnsStatsSuccess" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.2.2" instance="0" alias="dnsStatsReferral" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.2.3" instance="0" alias="dnsStatsNXRRSet" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.2.4" instance="0" alias="dnsStatsNXDomain" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.2.5" instance="0" alias="dnsStatsRecursion" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.13315.100.101.1.1.2.6" instance="0" alias="dnsStatsFailure" type="Counter64" />
</group>

<systemDef name="Bluecat">
         <sysoidMask>.1.3.6.1.4.1.8072.3.2.</sysoidMask>
         <collect>
	          <includeGroup>adonisdns</includeGroup>
         </collect>
</systemDef>

    <resourceType name="adonisdns" label="Bluecat Networks Adonis DNS" resourceLabel="${dnsDaemon}">
      <persistenceSelectorStrategy class="org.opennms.netmgt.collectd.PersistAllSelectorStrategy"/>
      <storageStrategy class="org.opennms.netmgt.dao.support.IndexStorageStrategy"/>
    </resourceType>

JUNIPER-IVE-MIB

Added by Andy Millett

JUNIPER-IVE-MIB.jpg

report.ive.connections.name=Juniper IVE Users
report.ive.connections.columns=signedInWebUsers,signedInMailUsers,iveConcurrentUsers
report.ive.connections.width=565
report.ive.connections.height=200
report.ive.connections.type=nodeSnmp
report.ive.connections.command=--title="Juniper IVE Users" \
 --vertical-label="Juniper IVE Users" \
 DEF:signedInWebUsers={rrd1}:signedInWebUsers:AVERAGE \
 DEF:signedInMailUsers={rrd2}:signedInMailUsers:AVERAGE \
 DEF:iveConcurrentUsers={rrd3}:iveConcurrentUsers:AVERAGE \
 AREA:signedInWebUsers#00ff00:"signedInWebUsers" \
 GPRINT:signedInWebUsers:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:signedInWebUsers:MIN:"Min \\: %8.2lf %s" \
 GPRINT:signedInWebUsers:MAX:"Max \\: %8.2lf %s\\n" \
 AREA:signedInMailUsers#0000ff:"signedInMailUsers" \
 GPRINT:signedInMailUsers:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:signedInMailUsers:MIN:"Min \\: %8.2lf %s" \
 GPRINT:signedInMailUsers:MAX:"Max \\: %8.2lf %s\\n" \
 LINE2:iveConcurrentUsers#ff0000:"iveConcurrentUsers" \
 GPRINT:iveConcurrentUsers:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:iveConcurrentUsers:MIN:"Min \\: %8.2lf %s" \
 GPRINT:iveConcurrentUsers:MAX:"Max \\: %8.2lf %s\\n" 

<group name="ive" ifType="ignore">
<mibObj oid=".1.3.6.1.4.1.12532.6" instance="0" alias="productName" type="string" />
<mibObj oid=".1.3.6.1.4.1.12532.1" instance="0" alias="logFullPercent" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.2" instance="0" alias="signedInWebUsers" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.3" instance="0" alias="signedInMailUsers" type="Gauge32" />
<!-- <mibObj oid=".1.3.6.1.4.1.12532.4" instance="0" alias="blockedIP" type="IpAddress" /> -->
<mibObj oid=".1.3.6.1.4.1.12532.9" instance="0" alias="meetingUserCount" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.10" instance="0" alias="iveCpuUtil" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.11" instance="0" alias="iveMemoryUtil" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.12" instance="0" alias="iveConcurrentUsers" type="Gauge32" />
<!-- <mibObj oid=".1.3.6.1.4.1.12532.13" instance="0" alias="clusterConcurrentUsersTOOLONG" type="Gauge32" /> -->
<mibObj oid=".1.3.6.1.4.1.12532.14" instance="0" alias="iveTotalHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.15" instance="0" alias="iveFileHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.16" instance="0" alias="iveWebHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.17" instance="0" alias="iveAppletHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.18" instance="0" alias="ivetermHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.19" instance="0" alias="iveSAMHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.20" instance="0" alias="iveNCHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.21" instance="0" alias="meetingHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.12532.22" instance="0" alias="meetingCount" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.24" instance="0" alias="iveSwapUtil" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.12532.25" instance="0" alias="diskFullPercent" type="Gauge32" />
<!-- <mibObj oid=".1.3.6.1.4.1.12532.26.1.1" instance="ipIndex" alias="ipIndex" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.12532.26.1.2" instance="ipIndex" alias="ipValue" type="NetworkAddress" /> -->
</group>

    <resourceType name="ive" label="Juniper IVE Resources" resourceLabel="${productName}">
      <persistenceSelectorStrategy class="org.opennms.netmgt.collectd.PersistAllSelectorStrategy"/>
      <storageStrategy class="org.opennms.netmgt.dao.support.IndexStorageStrategy"/>
    </resourceType>

<systemDef name="Juniper IVE Resources">
         <sysoidMask>.1.3.6.1.4.1.12532.</sysoidMask>
         <collect>
	          <includeGroup>ive</includeGroup>
        </collect>
</systemDef>

Juniper Netscreen SSG series

Added by: Alek Patsouris

Screenshot: Juniperssggraphs.png

Graph of session counts on a Juniper Netscreen SSG series firewall.

Tested on an SSG-5, an SSG-320M, and an SSG-550.

In datacollection/juniper.xml

      <group name="juniper-netscreen-system" ifType="ignore">
        <mibObj oid=".1.3.6.1.4.1.3224.16.1.1" instance="0" alias="nsResCpuAvg"       type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.1.2" instance="0" alias="nsResCpuLast1Min"  type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.1.3" instance="0" alias="nsResCpuLast5Min"  type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.1.4" instance="0" alias="nsResCpuLast15Min" type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.2.1" instance="0" alias="nsResMemAllocate"  type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.2.2" instance="0" alias="nsResMemLeft"      type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.2.3" instance="0" alias="nsResMemFrag"      type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.3.2" instance="0" alias="nsResSessAllocate" type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.3.3" instance="0" alias="nsResSessMaxium"   type="integer" />
        <mibObj oid=".1.3.6.1.4.1.3224.16.3.4" instance="0" alias="nsResSessFailed"   type="integer" />
      </group>

Then, at the bottom of the file, add a new system definition:

      <systemDef name="Juniper Netscreen System">
        <sysoidMask>.1.3.6.1.4.1.3224.1.</sysoidMask>
        <collect>
          <includeGroup>juniper-netscreen-system</includeGroup>
        </collect>
      </systemDef>


Now for the graphs! In snmp-graph.properties, add the report IDs to the reports= list at the top:

juniper.nsResSessAllocate, juniper.nsResSessMaxium, juniper.nsResSessFailed, \
juniper.nsResMemFrag, juniper.nsResCpuLast1Min, \

Then, a little further down, the graphs themselves:
##
## Juniper Netscreen SSG series Graphs
##
report.juniper.nsResSessAllocate.name=Juniper SSG Allocated Session Count
report.juniper.nsResSessAllocate.columns=nsResSessAllocate
report.juniper.nsResSessAllocate.type=nodeSnmp
report.juniper.nsResSessAllocate.command=--title="Juniper SSG Allocated Sessions" \
 DEF:ssgsession={rrd1}:nsResSessAllocate:AVERAGE \
 DEF:minssgsession={rrd1}:nsResSessAllocate:MIN \
 DEF:maxssgsession={rrd1}:nsResSessAllocate:MAX \
 LINE1:ssgsession#0000ff:"Allocated Sessions" \
 GPRINT:ssgsession:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ssgsession:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ssgsession:MAX:"Max \\: %8.2lf %s\\n"

report.juniper.nsResSessMaxium.name=Juniper SSG Maximum Session
report.juniper.nsResSessMaxium.columns=nsResSessMaxium
report.juniper.nsResSessMaxium.type=nodeSnmp
report.juniper.nsResSessMaxium.command=--title="Juniper SSG Maximum Sessions" \
 DEF:ssgmsession={rrd1}:nsResSessMaxium:AVERAGE \
 DEF:minssgmsession={rrd1}:nsResSessMaxium:MIN \
 DEF:maxssgmsession={rrd1}:nsResSessMaxium:MAX \
 LINE1:ssgmsession#0000ff:"Maximum Sessions" \
 GPRINT:ssgmsession:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ssgmsession:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ssgmsession:MAX:"Max \\: %8.2lf %s\\n"

report.juniper.nsResSessFailed.name=Juniper SSG Failed Sessions
report.juniper.nsResSessFailed.columns=nsResSessFailed
report.juniper.nsResSessFailed.type=nodeSnmp
report.juniper.nsResSessFailed.command=--title="Juniper SSG Failed Sessions" \
 DEF:ssgfsession={rrd1}:nsResSessFailed:AVERAGE \
 DEF:minssgfsession={rrd1}:nsResSessFailed:MIN \
 DEF:maxssgfsession={rrd1}:nsResSessFailed:MAX \
 LINE1:ssgfsession#0000ff:"Failed Sessions" \
 GPRINT:ssgfsession:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ssgfsession:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ssgfsession:MAX:"Max \\: %8.2lf %s\\n"

report.juniper.nsResMemFrag.name=Juniper SSG Memory
report.juniper.nsResMemFrag.columns=nsResMemAllocate, nsResMemLeft, nsResMemFrag
report.juniper.nsResMemFrag.type=nodeSnmp
report.juniper.nsResMemFrag.command=--title="Juniper SSG Memory" \
 DEF:ssgmema={rrd1}:nsResMemAllocate:AVERAGE \
 DEF:minssgmema={rrd1}:nsResMemAllocate:MIN \
 DEF:maxssgmema={rrd1}:nsResMemAllocate:MAX \
 DEF:ssgmeml={rrd2}:nsResMemLeft:AVERAGE \
 DEF:minssgmeml={rrd2}:nsResMemLeft:MIN \
 DEF:maxssgmeml={rrd2}:nsResMemLeft:MAX \
 DEF:ssgmemf={rrd3}:nsResMemFrag:AVERAGE \
 DEF:minssgmemf={rrd3}:nsResMemFrag:MIN \
 DEF:maxssgmemf={rrd3}:nsResMemFrag:MAX \
 LINE2:ssgmema#0000ff:"Memory Allocated"  \
 GPRINT:ssgmema:AVERAGE:" Avg\\:%8.2lf %s"  \
 GPRINT:ssgmema:MIN:" Min\\:%8.2lf %s"  \
 GPRINT:ssgmema:MAX:" Max\\:%8.2lf %s\\n"  \
 LINE2:ssgmeml#00ff00:"Memory Available"  \
 GPRINT:ssgmeml:AVERAGE:" Avg\\:%8.2lf %s" \
 GPRINT:ssgmeml:MIN:" Min\\:%8.2lf %s"  \
 GPRINT:ssgmeml:MAX:" Max\\:%8.2lf %s\\n"  \
 LINE2:ssgmemf#ff0000:"Memory Fragmented"  \
 GPRINT:ssgmemf:AVERAGE:"Avg\\:%8.2lf %s" \
 GPRINT:ssgmemf:MIN:" Min\\:%8.2lf %s"  \
 GPRINT:ssgmemf:MAX:" Max\\:%8.2lf %s\\n"

report.juniper.nsResCpuLast1Min.name=Juniper SSG CPU
report.juniper.nsResCpuLast1Min.columns=nsResCpuLast1Min, nsResCpuLast5Min, nsResCpuLast15Min
report.juniper.nsResCpuLast1Min.type=nodeSnmp
report.juniper.nsResCpuLast1Min.command=--title="Juniper SSG CPU" \
 DEF:ssgcpu1={rrd1}:nsResCpuLast1Min:AVERAGE \
 DEF:minssgcpu1={rrd1}:nsResCpuLast1Min:MIN \
 DEF:maxssgcpu1={rrd1}:nsResCpuLast1Min:MAX \
 DEF:ssgcpu5={rrd2}:nsResCpuLast5Min:AVERAGE \
 DEF:minssgcpu5={rrd2}:nsResCpuLast5Min:MIN \
 DEF:maxssgcpu5={rrd2}:nsResCpuLast5Min:MAX \
 DEF:ssgcpu15={rrd3}:nsResCpuLast15Min:AVERAGE \
 DEF:minssgcpu15={rrd3}:nsResCpuLast15Min:MIN \
 DEF:maxssgcpu15={rrd3}:nsResCpuLast15Min:MAX \
 AREA:ssgcpu1#EACC00:"CPU 1 min avg":STACK  \
 GPRINT:ssgcpu1:LAST:" Current\\:%8.2lf %s"  \
 GPRINT:ssgcpu1:MIN:" Min\\:%8.2lf %s"  \
 GPRINT:ssgcpu1:MAX:" Max\\:%8.2lf %s\\n"  \
 AREA:ssgcpu5#EA8F00:"CPU 5 min avg":STACK \
 GPRINT:ssgcpu5:LAST:" Current\\:%8.2lf %s"  \
 GPRINT:ssgcpu5:MIN:" Min\\:%8.2lf %s"  \
 GPRINT:ssgcpu5:MAX:" Max\\:%8.2lf %s\\n"  \
 AREA:ssgcpu15#FF0000:"CPU 15 min avg":STACK \
 GPRINT:ssgcpu15:LAST:"Current\\:%8.2lf %s" \
 GPRINT:ssgcpu15:MIN:" Min\\:%8.2lf %s"  \
 GPRINT:ssgcpu15:MAX:" Max\\:%8.2lf %s\\n"


BLUECOAT-SG-PROXY-MIB

Added by Andy Millett

Not all of the values are enabled. I only enabled the ones I was interested in.

BLUECOAT-PROXY-WORKERS.jpg

BLUECOAT-CPU-IDLE-BUSY.jpg

  • ProxySG CPU
report.sgProxy.cpu.name=ProxySG CPU Usage
report.sgProxy.cpu.columns=CpuBusyPerCent,CpuIdlePerCent
report.sgProxy.cpu.width=565
report.sgProxy.cpu.height=200
report.sgProxy.cpu.type=nodeSnmp
report.sgProxy.cpu.command=--title="CPU Usage" \
 --vertical-label="CPU Usage" \
 DEF:CpuBusyPerCent={rrd1}:CpuBusyPerCent:AVERAGE \
 DEF:CpuIdlePerCent={rrd2}:CpuIdlePerCent:AVERAGE \
 AREA:CpuBusyPerCent#ff0000:"CpuBusyPerCent" \
 GPRINT:CpuBusyPerCent:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:CpuBusyPerCent:MIN:"Min \\: %8.2lf %s" \
 GPRINT:CpuBusyPerCent:MAX:"Max \\: %8.2lf %s\\n" \
 STACK:CpuIdlePerCent#00ff00:"CpuIdlePerCent" \
 GPRINT:CpuIdlePerCent:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:CpuIdlePerCent:MIN:"Min \\: %8.2lf %s" \
 GPRINT:CpuIdlePerCent:MAX:"Max \\: %8.2lf %s\\n"
  • ProxySG Server Workers
report.sgProxy.server.connections.name=ProxySG Server Workers
report.sgProxy.server.connections.columns=ServerConnections,ServerConnectionsAc,ServerConnectionsId
report.sgProxy.server.connections.width=565
report.sgProxy.server.connections.height=200
report.sgProxy.server.connections.type=nodeSnmp
report.sgProxy.server.connections.command=--title="Server Workers" \
 --vertical-label="Server Workers" \
 DEF:ServerConnections={rrd1}:ServerConnections:AVERAGE \
 DEF:ServerConnectionsAc={rrd2}:ServerConnectionsAc:AVERAGE \
 DEF:ServerConnectionsId={rrd3}:ServerConnectionsId:AVERAGE \
 AREA:ServerConnections#00ff00:"ServerConnections" \
 GPRINT:ServerConnections:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ServerConnections:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ServerConnections:MAX:"Max \\: %8.2lf %s\\n" \
 STACK:ServerConnectionsAc#0000ff:"ServerConnectionsAc" \
 GPRINT:ServerConnectionsAc:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ServerConnectionsAc:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ServerConnectionsAc:MAX:"Max \\: %8.2lf %s\\n" \
 STACK:ServerConnectionsId#ff0000:"ServerConnectionsId" \
 GPRINT:ServerConnectionsId:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ServerConnectionsId:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ServerConnectionsId:MAX:"Max \\: %8.2lf %s\\n"
  • ProxySG Client Workers
report.sgProxy.client.connections.name=ProxySG Client Workers
report.sgProxy.client.connections.columns=ClientConnections,ClientConnectionsAc,ClientConnectionsId
report.sgProxy.client.connections.width=565
report.sgProxy.client.connections.height=200
report.sgProxy.client.connections.type=nodeSnmp
report.sgProxy.client.connections.command=--title="Client Workers" \
 --vertical-label="Client Workers" \
 DEF:ClientConnections={rrd1}:ClientConnections:AVERAGE \
 DEF:ClientConnectionsAc={rrd2}:ClientConnectionsAc:AVERAGE \
 DEF:ClientConnectionsId={rrd3}:ClientConnectionsId:AVERAGE \
 AREA:ClientConnections#00ff00:"ClientConnections" \
 GPRINT:ClientConnections:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ClientConnections:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ClientConnections:MAX:"Max \\: %8.2lf %s\\n" \
 STACK:ClientConnectionsAc#0000ff:"ClientConnectionsAc" \
 GPRINT:ClientConnectionsAc:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ClientConnectionsAc:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ClientConnectionsAc:MAX:"Max \\: %8.2lf %s\\n" \
 STACK:ClientConnectionsId#ff0000:"ClientConnectionsId" \
 GPRINT:ClientConnectionsId:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:ClientConnectionsId:MIN:"Min \\: %8.2lf %s" \
 GPRINT:ClientConnectionsId:MAX:"Max \\: %8.2lf %s\\n" 

<group name="sgProxy" ifType="ignore">
<mibObj oid=".1.3.6.1.4.1.3417.2.11.1.1" instance="0" alias="Admin" type="string" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.1.2" instance="0" alias="Software" type="string" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.1.3" instance="0" alias="Version" type="string" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.1.4" instance="0" alias="SerialNumber" type="string" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.1" instance="0" alias="CpuUpTime" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.2" instance="0" alias="CpuBusyTime" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.3" instance="0" alias="CpuIdleTime" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.4" instance="0" alias="CpuUpTimeSinceLastAccess" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.5" instance="0" alias="CpuBusyTimeSinceLastAccess" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.6" instance="0" alias="CpuIdleTimeSinceLastAccess" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.7" instance="0" alias="CpuBusyPerCent" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.1.8" instance="0" alias="CpuIdlePerCent" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.2.1" instance="0" alias="Storage" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.2.2" instance="0" alias="NumObjects" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.3.1" instance="0" alias="MemAvailable" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.3.2" instance="0" alias="MemCacheUsage" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.3.3" instance="0" alias="MemSysUsage" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.2.3.4" instance="0" alias="MemoryPressure" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.1" instance="0" alias="ClientRequests" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.2" instance="0" alias="ClientHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.3" instance="0" alias="ClientPartialHits" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.4" instance="0" alias="ClientMisses" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.5" instance="0" alias="ClientErrors" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.6" instance="0" alias="ClientRequestRate" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.7" instance="0" alias="ClientHitRate" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.8" instance="0" alias="ClientByteHitRate" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.9" instance="0" alias="ClientInBytes" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.1.10" instance="0" alias="ClientOutBytes" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.2.1" instance="0" alias="ServerRequests" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.2.2" instance="0" alias="ServerErrors" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.2.3" instance="0" alias="ServerInBytes" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.2.4" instance="0" alias="ServerOutBytes" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.3.1" instance="0" alias="ClientConnections" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.3.2" instance="0" alias="ClientConnectionsAc" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.3.3" instance="0" alias="ClientConnectionsId" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.3.4" instance="0" alias="ServerConnections" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.3.5" instance="0" alias="ServerConnectionsAc" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.1.3.6" instance="0" alias="ServerConnectionsId" type="Gauge32" />
<!-- <mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.1" instance="0" alias="ServiceTimeAll" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.2" instance="0" alias="ServiceTimeHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.3" instance="0" alias="ServiceTimePartialHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.4" instance="0" alias="ServiceTimeMiss" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.5" instance="0" alias="TotalFetchTimeAll" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.6" instance="0" alias="TotalFetchTimeHit" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.7" instance="0" alias="TotalFetchTimePartialHit" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.1.8" instance="0" alias="TotalFetchTimeMiss" type="Counter64" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.2.1" instance="0" alias="FirstByteAll" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.2.2" instance="0" alias="FirstByteHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.2.3" instance="0" alias="FirstBytePartialHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.2.4" instance="0" alias="FirstByteMiss" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.3.1" instance="0" alias="ByteRateAll" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.3.2" instance="0" alias="ByteRateHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.3.3" instance="0" alias="ByteRatePartialHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.3.4" instance="0" alias="ByteRateMiss" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.4.1" instance="0" alias="ResponseSizeAll" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.4.2" instance="0" alias="ResponseSizeHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.4.3" instance="0" alias="ResponseSizePartialHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.2.4.4" instance="0" alias="ResponseSizeMiss" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.3.1.1.2" instance="MedianServiceTime" alias="MedianServiceTimeAll" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.3.1.1.3" instance="MedianServiceTime" alias="MedianServiceTimeHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.3.1.1.4" instance="MedianServiceTime" alias="MedianServiceTimePartialHit" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.3.1.1.5" instance="MedianServiceTime" alias="MedianServiceTimeMiss" type="Gauge32" />
<mibObj oid=".1.3.6.1.4.1.3417.2.11.3.3.1.1.6" instance="MedianServiceTime" alias="DnsMedianServiceTime" type="Gauge32" /> -->
</group>

<systemDef name="Blue Coat ProxySG">
         <sysoidMask>.1.3.6.1.4.1.3417.</sysoidMask>
         <collect>
	          <includeGroup>sgProxy</includeGroup>
        </collect>
</systemDef>

    <resourceType name="sgProxy" label="ProxySG HTTP Resources" resourceLabel="${SerialNumber}">
      <persistenceSelectorStrategy class="org.opennms.netmgt.collectd.PersistAllSelectorStrategy"/>
      <storageStrategy class="org.opennms.netmgt.dao.support.IndexStorageStrategy"/>
    </resourceType>

CYCLADES-ACS-PM-MIB

Added by Andy Millett


PM10-Current.jpg

report.cyclades.cur.name=Cyclades PM Current
report.cyclades.cur.columns=cyPMUnitCurrent,cyPMUnitMaxCurrent
report.cyclades.cur.width=565
report.cyclades.cur.height=200
report.cyclades.cur.type=cyPMSerialPortNum
report.cyclades.cur.command=--title="Cyclades PM Current" \
 --vertical-label="Cyclades PM Amps" \
 DEF:cyPMUnitCurrent={rrd1}:cyPMUnitCurrent:AVERAGE \
 DEF:cyPMUnitMaxCurrent={rrd2}:cyPMUnitMaxCurrent:AVERAGE \
 CDEF:Current=cyPMUnitCurrent,10,/ \
 CDEF:MaxCurrent=cyPMUnitMaxCurrent,10,/ \
 LINE2:Current#ff9900:"cyPMUnitCurrent" \
 GPRINT:Current:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:Current:MIN:"Min \\: %8.2lf %s" \
 GPRINT:Current:MAX:"Max \\: %8.2lf %s\\n" \
 LINE2:MaxCurrent#00cc00:"cyPMUnitMaxCurrent" \
 GPRINT:MaxCurrent:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:MaxCurrent:MIN:"Min \\: %8.2lf %s" \
 GPRINT:MaxCurrent:MAX:"Max \\: %8.2lf %s\\n"

PM10-Temp.jpg

report.cyclades.temp.name=Cyclades PM Temperature
report.cyclades.temp.columns=cyPMUnitTemp,cyPMUnitMaxTemp
report.cyclades.temp.width=565
report.cyclades.temp.height=200
report.cyclades.temp.type=cyPMSerialPortNum
report.cyclades.temp.command=--title="Cyclades PM Temperature" \
 --vertical-label="Temperature (Celsius)" \
 DEF:cyPMUnitTemp={rrd1}:cyPMUnitTemp:AVERAGE \
 DEF:cyPMUnitMaxTemp={rrd2}:cyPMUnitMaxTemp:AVERAGE \
 CDEF:Temp=cyPMUnitTemp,10,/ \
 CDEF:MaxTemp=cyPMUnitMaxTemp,10,/ \
 AREA:Temp#ff9990:"cyPMUnitTemp" \
 GPRINT:Temp:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:Temp:MIN:"Min \\: %8.2lf %s" \
 GPRINT:Temp:MAX:"Max \\: %8.2lf %s\\n" \
 LINE2:MaxTemp#00cc00:"cyPMUnitMaxTemp" \
 GPRINT:MaxTemp:AVERAGE:"Avg \\: %8.2lf %s" \
 GPRINT:MaxTemp:MIN:"Min \\: %8.2lf %s" \
 GPRINT:MaxTemp:MAX:"Max \\: %8.2lf %s\\n"

    <resourceType name="cyPMSerialPortNum" label="Cyclades PM10 Ports">
      <persistenceSelectorStrategy class="org.opennms.netmgt.collectd.PersistAllSelectorStrategy"/>
      <storageStrategy class="org.opennms.netmgt.dao.support.IndexStorageStrategy"/>
    </resourceType>

<group name="cyPMSerialPortNum" ifType="all">
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.1" instance="cyPMSerialPortNum" alias="cyPMSerialPortNum" type="string" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.2" instance="cyPMSerialPortNum" alias="cyPMNumberOutlets" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.3" instance="cyPMSerialPortNum" alias="cyPMNumberUnits" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.4" instance="cyPMSerialPortNum" alias="cyPMCurrent" type="string" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.5" instance="cyPMSerialPortNum" alias="cyPMVersion" type="string" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.6" instance="cyPMSerialPortNum" alias="cyPMTemperature" type="string" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.2.1.7" instance="cyPMSerialPortNum" alias="cyPMCommand" type="string" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.2" instance="cyPMSerialPortNum" alias="cyPMUnitVersion" type="string" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.3" instance="cyPMSerialPortNum" alias="cyPMUnitOutlets" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.4" instance="cyPMSerialPortNum" alias="cyPMUnitFirstOutlet" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.5" instance="cyPMSerialPortNum" alias="cyPMUnitCurrent" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.6" instance="cyPMSerialPortNum" alias="cyPMUnitMaxCurrent" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.7" instance="cyPMSerialPortNum" alias="cyPMUnitTemp" type="Integer32" />
<mibObj oid=".1.3.6.1.4.1.2925.4.5.3.1.8" instance="cyPMSerialPortNum" alias="cyPMUnitMaxTemp" type="Integer32" />
</group>

<systemDef name="Cyclades PM10i">
         <sysoidMask>.1.3.6.1.4.1.2925.</sysoidMask>
         <collect>
	       <includeGroup>cyPMSerialPortNum</includeGroup>
        </collect>
</systemDef>

Usage Prediction

by Ken Eshelby

Based on this article and a request on the opennms-discuss list.

Here is a graph that can predict a threshold crossing, based on the RRD LSLSLOPE function. The example is for NetSNMP disk utilization, but the approach can be adapted to any percentage calculation. Likewise, a different usage metric can be substituted by deriving a percentage, for example bandwidth utilization from (ifInOctets × 8) / ifSpeed, as sketched below.
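
As a rough sketch (assuming ifHCInOctets and ifHighSpeed are collected under those aliases; ifHighSpeed reports Mbit/s, so it is scaled to bit/s here), the percentage input for the trend lines could be built like this and used in place of ns-dskPercent:

 DEF:octIn={rrd1}:ifHCInOctets:AVERAGE \
 DEF:speed={rrd2}:ifHighSpeed:AVERAGE \
 CDEF:bwpct=octIn,8,*,speed,1000000,*,/,100,* \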

The following are known caveats:

  • likely to work only with the RRD storage strategy
  • a reasonably modern rrdtool version is required (1.4.7 was used)
    • the default OpenNMS rrdtool version (1.2.23) can be used if you
      • remove :dashes=10
      • remove the :strftime text (which eliminates the prediction dates)
  • the slopes are built to begin at -1 week and -1 month and will probably need tweaking to suit your data
  • if your samples never trend to at least 90%, the predicted dates will print as 1 Jan 1970 (the Unix epoch). This is not a bug: "CDEF:abc2=avg2,90,100,LIMIT" discards values outside the 90-100% range, so there is nothing for the prediction to report

Beyond these caveats, threshold prediction inside a graph display is of limited use by itself. For an enterprise, this kind of analysis would have greater benefit if calculated in a report across multiple data sources.
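
For reference, the trend lines in the definition below are ordinary least-squares fits: LSLSLOPE and LSLINT yield the slope and intercept of the best-fit line over the window, and the expression

 CDEF:avg2=pused2,POP,D2,COUNT,*,H2,+ \

evaluates y = slope × x + intercept at each point (POP discards the data value; COUNT supplies the running sample index x).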

Predictiongraph.png

report.netsnmp.disktrend.name=NetSNMP Disk Usage Prediction
report.netsnmp.disktrend.columns=ns-dskPercent
report.netsnmp.disktrend.type=dskIndex
report.netsnmp.disktrend.propertiesValues=ns-dskPath
report.netsnmp.disktrend.command=--title="Disk Usage Prediction: {ns-dskPath}" \
 --width 620 \
 --height 200 \
 --interlace \
 --vertical-label="Disk used (%)" \
 --lower-limit=0 \
 --upper-limit=100 \
 --rigid \
 DEF:pused1={rrd1}:ns-dskPercent:AVERAGE \
 DEF:pused2={rrd1}:ns-dskPercent:AVERAGE:start=-1w \
 DEF:pused3={rrd1}:ns-dskPercent:AVERAGE:start=-1m \
 VDEF:D2=pused2,LSLSLOPE \
 VDEF:H2=pused2,LSLINT \
 CDEF:avg2=pused2,POP,D2,COUNT,*,H2,+ \
 CDEF:abc2=avg2,90,100,LIMIT \
 LINE1:90 \
 AREA:5#FF000022::STACK \
 AREA:5#FF000044::STACK \
 COMMENT:"                       Now          Min              Avg             Max\\n" \
 AREA:pused1#00880077:"Disk Used" \
 GPRINT:pused1:LAST:"%12.0lf%%" \
 GPRINT:pused1:MIN:"%10.0lf%%" \
 GPRINT:pused1:AVERAGE:"%13.0lf%%" \
 GPRINT:pused1:MAX:"%13.0lf%%\\n" \
 COMMENT:" \\n" \
 VDEF:minabc2=abc2,FIRST \
 VDEF:maxabc2=abc2,LAST \
 VDEF:D3=pused3,LSLSLOPE \
 VDEF:H3=pused3,LSLINT \
 CDEF:avg3=pused3,POP,D3,COUNT,*,H3,+ \
 CDEF:abc3=avg3,90,100,LIMIT \
 VDEF:minabc3=abc3,FIRST \
 VDEF:maxabc3=abc3,LAST \
 AREA:abc2#FFBB0077 \
 AREA:abc3#0077FF77 \
 LINE2:abc2#FFBB00 \
 LINE2:abc3#0077FF \
 LINE2:avg2#FFBB00:"Trend since 1 week                           ":dashes=10 \
 LINE2:avg3#0077FF:"Trend since 1 month\\n":dashes=10 \
 GPRINT:minabc2:"  Reach  90% at %c ":strftime \
 GPRINT:minabc3:"  Reach  90% at %c \\n":strftime \
 GPRINT:maxabc2:"  Reach 100% at %c ":strftime \
 GPRINT:maxabc3:"  Reach 100% at %c \\n":strftime

Based on Ken's great example, here is a variation which also shows the percent usage in colors. I've also added the last read usage in bytes, as well as inode usage in percent.
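
A note on how the color bands below work: each banded series carries the full usage value only when usage is at or above the band's lower bound, for example

 CDEF:pused50=40,pused1,GT,0,pused1,IF \

reads "if 40 > pused1, plot 0, otherwise plot pused1". The AREAs are drawn in ascending threshold order, each overpainting the last, so the color of the highest band reached is what remains visible.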

  • NOTES (by Dinde): It appears that certain net-snmp agents (especially Debian's) do not provide the required OIDs (CentOS does).
  • As a workaround I use both definitions, and replaced one line in the one below with:
  • report.netsnmp.disktrend.suppress=netsnmp.disktrendold

where netsnmp.disktrendold is the previous definition by eshelbk, renamed.

  • With this in place, the graph below shows when the OIDs (missing on Debian) have been collected; if the OIDs are missing, the previous graph shows instead.

by --_indigo (talk) 11:01, 21 January 2014 (EST)

Disk-prediction-2.png

report.netsnmp.disktrend.name=NetSNMP Disk Usage Prediction
report.netsnmp.disktrend.columns=ns-dskPercent,ns-dskTotalLow,ns-dskTotalHigh,ns-dskUsedLow,ns-dskUsedHigh,ns-dskPercentNode
report.netsnmp.disktrend.type=dskIndex
report.netsnmp.disktrend.suppress=netsnmp.diskpercent,netsnmp.diskHighLow,netsnmp.diskpercentinode
report.netsnmp.disktrend.propertiesValues=ns-dskPath
report.netsnmp.disktrend.command=--title="Disk Usage Prediction: {ns-dskPath}" \
 --width 620 \
 --height 200 \
 --interlace \
 --vertical-label="Disk used (%)" \
 --lower-limit=0 \
 --upper-limit=100 \
 --rigid \
 DEF:pused1={rrd1}:ns-dskPercent:AVERAGE \
 DEF:pused2={rrd1}:ns-dskPercent:AVERAGE:start=-1w \
 DEF:pused3={rrd1}:ns-dskPercent:AVERAGE:start=-1m \
 DEF:dtotalkLow={rrd2}:ns-dskTotalLow:AVERAGE \
 DEF:dtotalkHigh={rrd3}:ns-dskTotalHigh:AVERAGE \
 CDEF:total1=dtotalkHigh,4294967296,*,dtotalkLow,+ \
 DEF:dusedkLow={rrd4}:ns-dskUsedLow:AVERAGE \
 DEF:dusedkHigh={rrd5}:ns-dskUsedHigh:AVERAGE \
 CDEF:used1=dusedkHigh,4294967296,*,dusedkLow,+ \
 DEF:ipercent={rrd6}:ns-dskPercentNode:AVERAGE \
 CDEF:total=total1,1024,* \
 CDEF:used=used1,1024,* \
 CDEF:free=total,used,- \
 VDEF:D2=pused2,LSLSLOPE \
 VDEF:H2=pused2,LSLINT \
 CDEF:avg2=pused2,POP,D2,COUNT,*,H2,+ \
 CDEF:abc2=avg2,90,100,LIMIT \
 CDEF:pused10=0,pused1,GT,0,pused1,IF \
 CDEF:pused20=10,pused1,GT,0,pused1,IF \
 CDEF:pused30=20,pused1,GT,0,pused1,IF \
 CDEF:pused40=30,pused1,GT,0,pused1,IF \
 CDEF:pused50=40,pused1,GT,0,pused1,IF \
 CDEF:pused60=50,pused1,GT,0,pused1,IF \
 CDEF:pused70=60,pused1,GT,0,pused1,IF \
 CDEF:pused80=70,pused1,GT,0,pused1,IF \
 CDEF:pused90=80,pused1,GT,0,pused1,IF \
 CDEF:pused100=90,pused1,GT,0,pused1,IF \
 LINE1:90 \
 AREA:5#fcaf3e88::STACK \
 AREA:5#f5790088::STACK \
 COMMENT:"Disk space in (%)\\n" \
 AREA:pused10#5ca53f:" 0-10%" \
 AREA:pused20#75b731:"11-20%" \
 AREA:pused30#90c22f:"21-30%" \
 AREA:pused40#b8d029:"31-40%" \
 AREA:pused50#e4e11e:"41-50%" \
 COMMENT:"\\n" \
 AREA:pused60#fee610:"51-60%" \
 AREA:pused70#f4bd1b:"61-70%" \
 AREA:pused80#eaa322:"71-80%" \
 AREA:pused90#de6822:"81-90%" \
 AREA:pused100#d94c20:"91-100%\\n" \
 COMMENT:" \\n" \
 COMMENT:"                 Last      Minimum    Average    Maximum\\n" \
 COMMENT:"Percent   " \
 GPRINT:pused1:LAST:"%8.2lf%%" \
 GPRINT:pused1:MIN:"%8.2lf%%" \
 GPRINT:pused1:AVERAGE:"%8.2lf%%" \
 GPRINT:pused1:MAX:"%8.2lf%%\\n" \
 COMMENT:"Inodes    " \
 GPRINT:ipercent:LAST:"%8.2lf%%" \
 GPRINT:ipercent:MIN:"%8.2lf%%" \
 GPRINT:ipercent:AVERAGE:"%8.2lf%%" \
 GPRINT:ipercent:MAX:"%8.2lf%%\\n" \
 COMMENT:" \\n" \
 COMMENT:"                 Used       Free       Total\\n" \
 COMMENT:"Last Bytes" \
 GPRINT:used:LAST:"%8.2lf%s" \
 GPRINT:free:LAST:"%8.2lf%s" \
 GPRINT:total:LAST:"%8.2lf%s\\n" \
 COMMENT:" \\n" \
 VDEF:minabc2=abc2,FIRST \
 VDEF:maxabc2=abc2,LAST \
 VDEF:D3=pused3,LSLSLOPE \
 VDEF:H3=pused3,LSLINT \
 CDEF:avg3=pused3,POP,D3,COUNT,*,H3,+ \
 CDEF:abc3=avg3,90,100,LIMIT \
 VDEF:minabc3=abc3,FIRST \
 VDEF:maxabc3=abc3,LAST \
 AREA:abc2#a4000055 \
 AREA:abc3#cc000055 \
 COMMENT:"\\n" \
 LINE2:ipercent#2e3436 \
 LINE2:abc2#ef2929 \
 LINE2:abc3#ef2929 \
 LINE2:avg2#ef2929:"Trend since 1 week                            ":dashes=10 \
 LINE2:avg3#a40000:"Trend since 1 month\\n":dashes=10 \
 GPRINT:minabc2:"  Reach  90% at %c ":strftime \
 GPRINT:minabc3:"  Reach  90% at %c \\n":strftime \
 GPRINT:maxabc2:"  Reach 100% at %c ":strftime \
 GPRINT:maxabc3:"  Reach 100% at %c \\n":strftime
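
A note on the dskTotal/dskUsed High/Low pairs above: they are the upper and lower 32-bit words of a 64-bit kilobyte count, so the CDEFs reconstruct the value as High × 2^32 + Low (4294967296 = 2^32) and then multiply by 1024 to convert kilobytes to bytes:

 CDEF:total1=dtotalkHigh,4294967296,*,dtotalkLow,+ \
 CDEF:total=total1,1024,* \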

Riverbed Steelhead Bandwidth Aggregate

by Nomadtales 21:23, 17 June 2012 (EDT)

An update of the built-in graph to show clearly how much WAN vs. LAN traffic is being reduced. It also shows the reduction ratios and percentages in the legend.
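
To make the legend math concrete (with hypothetical numbers): if clients send 400 MB across the LAN and the Steelhead forwards only 100 MB across the WAN, the reduction ratio is 400 / 100 = 4.0x and the reduction percentage is 100 − (100 / 400 × 100) = 75%.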

Riverbed Steelhead Aggregate Bandwidth.png

report.riverbed.steelhead.aggBandwidth.name=Riverbed Steelhead Aggregate Bandwidth
report.riverbed.steelhead.aggBandwidth.columns=rbshBwAggInLan,rbshBwAggInWan,rbshBwAggOutLan,rbshBwAggOutWan
report.riverbed.steelhead.aggBandwidth.type=nodeSnmp
report.riverbed.steelhead.aggBandwidth.command=--title="Riverbed Steelhead Aggregate Bandwidth" \
 --width 580 \
 --height 200 \
 --vertical-label="Bits/sec" \
 --upper-limit=10000 \
 DEF:inLanBytes={rrd1}:rbshBwAggInLan:AVERAGE \
 DEF:inWanBytes={rrd2}:rbshBwAggInWan:AVERAGE \
 DEF:outLanBytes={rrd3}:rbshBwAggOutLan:AVERAGE \
 DEF:outWanBytes={rrd4}:rbshBwAggOutWan:AVERAGE \
 CDEF:inLan=inLanBytes,8,* \
 CDEF:inWan=inWanBytes,8,* \
 CDEF:outLan=outLanBytes,8,* \
 CDEF:outWan=outWanBytes,8,* \
 CDEF:outLanInv=outLan,-1,* \
 CDEF:outWanInv=outWan,-1,* \
 VDEF:totalLanIn=inLanBytes,TOTAL \
 VDEF:totalWanIn=inWanBytes,TOTAL \
 CDEF:reductIn=totalLanIn,totalWanIn,/ \
 CDEF:reductInPercentTemp=totalWanIn,totalLanIn,/,100,* \
 CDEF:reductInPercent=100,reductInPercentTemp,- \
 VDEF:totalLanOut=outLanBytes,TOTAL \
 VDEF:totalWanOut=outWanBytes,TOTAL \
 CDEF:reductOut=totalLanOut,totalWanOut,/ \
 CDEF:reductOutPercentTemp=totalWanOut,totalLanOut,/,100,* \
 CDEF:reductOutPercent=100,reductOutPercentTemp,- \
 CDEF:totalLan=totalLanIn,totalLanOut,+ \
 CDEF:totalWan=totalWanIn,totalWanOut,+ \
 CDEF:totalReduct=totalLan,totalWan,/ \
 CDEF:reductPercent=totalWan,totalLan,/,100,* \
 CDEF:totalReductPercent=100,reductPercent,- \
 COMMENT:"Inbound\\n" \
 AREA:inWan#0000ff:"WAN" \
 GPRINT:inWan:AVERAGE:"Avg  \\: %8.2lf %s" \
 GPRINT:inWan:MIN:"Min  \\: %8.2lf %s" \
 GPRINT:inWan:MAX:"Max  \\: %8.2lf %s\\n" \
 LINE1:inLan#8080ff:"LAN" \
 GPRINT:inLan:AVERAGE:"Avg  \\: %8.2lf %s" \
 GPRINT:inLan:MIN:"Min  \\: %8.2lf %s" \
 GPRINT:inLan:MAX:"Max  \\: %8.2lf %s\\n" \
 COMMENT:"\\n" \
 COMMENT:"Outbound\\n" \
 AREA:outWanInv#008800:"WAN" \
 GPRINT:outWan:AVERAGE:"Avg  \\: %8.2lf %s" \
 GPRINT:outWan:MIN:"Min  \\: %8.2lf %s" \
 GPRINT:outWan:MAX:"Max  \\: %8.2lf %s\\n" \
 LINE1:outLanInv#00dd00:"LAN" \
 GPRINT:outLan:AVERAGE:"Avg  \\: %8.2lf %s" \
 GPRINT:outLan:MIN:"Min  \\: %8.2lf %s" \
 GPRINT:outLan:MAX:"Max  \\: %8.2lf %s\\n" \
 COMMENT:"\\n" \
 GPRINT:totalLanIn:AVERAGE:"Inbound LAN  \\: %8.2lf %s" \
 GPRINT:totalWanIn:AVERAGE:"Inbound WAN  \\: %8.2lf %s" \
 GPRINT:reductIn:AVERAGE:"Inbound Reduction  \\: %8.2lf x" \
 GPRINT:reductInPercent:AVERAGE:"(%8.1lf %%)\\n" \
 GPRINT:totalLanOut:AVERAGE:"Outbound LAN \\: %8.2lf %s" \
 GPRINT:totalWanOut:AVERAGE:"Outbound WAN \\: %8.2lf %s" \
 GPRINT:reductOut:AVERAGE:"Outbound Reduction \\: %8.2lf x" \
 GPRINT:reductOutPercent:AVERAGE:"(%8.1lf %%)\\n" \
 GPRINT:totalLan:AVERAGE:"Total LAN    \\: %8.2lf %s" \
 GPRINT:totalWan:AVERAGE:"Total WAN    \\: %8.2lf %s" \
 GPRINT:totalReduct:AVERAGE:"Total Reduction    \\: %8.2lf x" \
 GPRINT:totalReductPercent:AVERAGE:"(%8.1lf %%)"

OpenNMS Queued Activity

by Ken Eshelby

If you are doing JMX collections against your OpenNMS server, you are already collecting these values, but there is no graph definition for them by default. Here is one.

This is a good graph for verifying that the Queued subsystem is keeping up with updates and that the system isn't being overtaxed.

Queued activity graph.png

report.onms.queued.activity.name=OpenNMS Queued Activity
report.onms.queued.activity.columns=ONMSQueCreates,ONMSQueItemDeque,ONMSQueDequeOps,ONMSQueEnqueOps,ONMSQueErrors,ONMSQuePromo
report.onms.queued.activity.type=interfaceSnmp
report.onms.queued.activity.command=--title="OpenNMS Queued Activity" \
 --width 580 \
 --height 200 \
 --vertical-label="Operations per second" \
 DEF:creates={rrd1}:ONMSQueCreates:AVERAGE \
 DEF:itemdeq={rrd2}:ONMSQueItemDeque:AVERAGE \
 DEF:opsdeq={rrd3}:ONMSQueDequeOps:AVERAGE \
 DEF:opsenq={rrd4}:ONMSQueEnqueOps:AVERAGE \
 DEF:errors={rrd5}:ONMSQueErrors:AVERAGE \
 DEF:promo={rrd6}:ONMSQuePromo:AVERAGE \
 AREA:opsenq#ffbb00:"Operations Enqueued" \
 GPRINT:opsenq:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:opsenq:MIN:"Min\\: %6.2lf %s" \
 GPRINT:opsenq:MAX:"Max\\: %6.2lf %s\\n" \
 LINE1:opsdeq#1924b1:"Operations Dequeued" \
 GPRINT:opsdeq:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:opsdeq:MIN:"Min\\: %6.2lf %s" \
 GPRINT:opsdeq:MAX:"Max\\: %6.2lf %s\\n" \
 LINE1:itemdeq#06799f:"Items Dequeued     " \
 GPRINT:itemdeq:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:itemdeq:MIN:"Min\\: %6.2lf %s" \
 GPRINT:itemdeq:MAX:"Max\\: %6.2lf %s\\n" \
 LINE1:creates#0000ff:"Creates Completed  " \
 GPRINT:creates:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:creates:MIN:"Min\\: %6.2lf %s" \
 GPRINT:creates:MAX:"Max\\: %6.2lf %s\\n" \
 LINE1:promo#aa00ff:"Promotion Count    " \
 GPRINT:promo:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:promo:MIN:"Min\\: %6.2lf %s" \
 GPRINT:promo:MAX:"Max\\: %6.2lf %s\\n" \
 LINE1:errors#ff8300:"Queue Errors       " \
 GPRINT:errors:AVERAGE:"Avg\\: %6.2lf %s" \
 GPRINT:errors:MIN:"Min\\: %6.2lf %s" \
 GPRINT:errors:MAX:"Max\\: %6.2lf %s\\n"

Exim MTA Mail queue

Added by: Alek Patsouris

The info started to get a bit lengthy, so I split it off to its own page here.

Here is a screenshot of what you would be getting; the full details are on that other page.

Eximgraphs.png


See Also