Pysnmp can't resolve OIDs from SNMP trap - python

I'm trying to resolve the OIDs that are received on an SNMP Trap from an HP switch stack but they only resolve down to a certain level and stop. It's like the HP MIBs are not being loaded. It's unclear from all the documentation I can find on pysnmp if this is the appropriate way to add custom MIBs and resolve OIDs from a trap.
MIBs can be downloaded here.
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.smi import view, builder, rfc1902
from pysnmp.entity.rfc3413 import ntfrcv, mibvar
# Create SNMP engine with auto-generated engineID and pre-bound
# to socket transport dispatcher
snmpEngine = engine.SnmpEngine()
build = snmpEngine.getMibBuilder()
build.addMibSources(builder.DirMibSource("C:/Users/t/Documents/mibs"))
viewer = view.MibViewController(build)
# Transport setup
# UDP over IPv4, first listening interface/port
config.addTransport(
    snmpEngine,
    udp.domainName + (1,),
    udp.UdpTransport().openServerMode(('0.0.0.0', 162))
)
# SNMPv1/2c setup
# SecurityName <-> CommunityName mapping
config.addV1System(snmpEngine, '????', 'public')
# Callback function for receiving notifications
# noinspection PyUnusedLocal,PyUnusedLocal,PyUnusedLocal
def cbFun(snmpEngine, stateReference, contextEngineId, contextName, varBinds, cbCtx):
    print('Notification from ContextEngineId "%s", ContextName "%s"' % (contextEngineId.prettyPrint(),
                                                                        contextName.prettyPrint()))
    for name, val in varBinds:
        print(name)
        symbol = rfc1902.ObjectIdentity(name).resolveWithMib(viewer).getMibSymbol()
        print(symbol[1])
# Register SNMP Application at the SNMP engine
ntfrcv.NotificationReceiver(snmpEngine, cbFun)
snmpEngine.transportDispatcher.jobStarted(1) # this job would never finish
# Run I/O dispatcher which would receive queries and send confirmations
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
Output upon receiving a trap:
Notification from ContextEngineId "0x80004fb8056ed891e8", ContextName ""
1.3.6.1.2.1.1.3.0
sysUpTime
1.3.6.1.6.3.1.1.4.1.0
snmpTrapOID
1.3.6.1.6.3.18.1.3.0
snmpTrapAddress
1.3.6.1.6.3.18.1.4.0
snmpTrapCommunity
1.3.6.1.6.3.1.1.4.3.0
snmpTrapEnterprise
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.9
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.1
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.2
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.3
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.4
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.5
enterprises
As you can see, many distinct OIDs resolve no further than "enterprises". I am using pysnmp 4.4.4.

Yes, it seems that only the core MIBs are loaded.
If you want to follow this quite low-level path, then you need to pre-compile all your ASN.1 MIBs (those you pulled from the HPE site) with the mibdump tool into pysnmp format. Then put those *.py files into some directory and point pysnmp to it through the build.addMibSources(builder.DirMibSource()) call.
Also, make sure to pre-load all those MIBs at once on startup by invoking build.loadModules() (without arguments).
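A minimal sketch of those two steps, reusing the build object from the question (the mibdump invocation and the MIB module name are illustrative placeholders):
# One-off, outside the program: compile the vendor ASN.1 MIBs into pysnmp
# format with mibdump.py from the pysmi package, e.g.:
#   mibdump.py --destination-directory "C:/Users/t/Documents/mibs" <HP-MIB-MODULE>

# Then, at startup, after the build.addMibSources(...) call from the question:
build.loadModules()  # no arguments: pre-load everything on the MIB search path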

Related

Trouble subscribing to ActiveMQ Artemis with Stomp. Queue already exists

What am I doing wrong here? I'm trying to use Stomp to test some things with Artemis 2.13.0, but when I use either the command-line utility or a Python script, I can't subscribe to a queue, even after I use the utility to publish a message to an address.
Also, if I give it a new queue name, it creates it, but then doesn't pull messages I publish to it. This is confusing. My actual Java app behaves nothing like this -- it's using JMS.
I'm connecting like this with the utility:
stomp -H 192.168.56.105 -P 61616 -U user -W password
> subscribe test3.topic::test.A.queue
Which gives me this error:
Subscribing to 'test3.topic::test.A.queue' with acknowledge set to 'auto', id set to '1'
>
AMQ229019: Queue test.A.queue already exists on address test3.topic
Which makes me think Stomp is trying to create the queue when it subscribes, but I don't see how to manage this in the documentation: http://jasonrbriggs.github.io/stomp.py/api.html
I also have a Python script giving me the same issue.
import os
import time
import stomp
def connect_and_subscribe(conn):
    conn.connect('user', 'password', wait=True)
    conn.subscribe(destination='test3.topic::test.A.queue', id=1, ack='auto')

class MyListener(stomp.ConnectionListener):
    def __init__(self, conn):
        self.conn = conn

    def on_error(self, headers, message):
        print('received an error "%s"' % message)

    def on_message(self, headers, message):
        print('received a message "%s"' % message)
        """for x in range(10):
            print(x)
            time.sleep(1)
        print('processed message')"""

    def on_disconnected(self):
        print('disconnected')
        connect_and_subscribe(self.conn)

conn = stomp.Connection([('192.168.56.105', 61616)], heartbeats=(4000, 4000))
conn.set_listener('', MyListener(conn))
connect_and_subscribe(conn)
time.sleep(1000)
conn.disconnect()
I recommend you try the latest release of ActiveMQ Artemis. Since 2.13.0 was released a year ago, a handful of STOMP-related issues have been fixed, notably ARTEMIS-2817, which looks like your use-case.
It's not clear to me why you're using the fully-qualified queue name (FQQN), so I'm inclined to think it's not the right approach here; regardless, the issue you're hitting should be fixed in later versions. That said, if you want multiple consumers to share the messages on a single subscription, FQQN would be a good option.
Also, if you want to use the topic/ or queue/ prefix to control routing semantics from the broker, then you should set anycastPrefix and multicastPrefix appropriately, as described in the documentation.
This may be coincidence but ARTEMIS-2817 was originally reported by "BENJAMIN Lee WARRICK" which is surprisingly similar to "BenW" (i.e. your name).

python Connecting To Multiple Sessions over QuickFix

So I have to connect to two (2) sessions (same host, different ports), so I used two different initiators:
application = Application()
settings = quickfix.SessionSettings(config_file) # stream settings
storefactory = quickfix.FileStoreFactory(settings)
logfactory = quickfix.FileLogFactory(settings)
initiator = quickfix.SocketInitiator(
    application, storefactory, settings, logfactory)
initiator.start()
application.run()
initiator.stop()
and used two different config (.cfg) files for the two sessions:
# This is a client (initiator)
[DEFAULT]
DefaultApplVerID=FIX.4.4
# Settings which apply to all the sessions
ConnectionType=initiator
# Path where logs will be written
FileLogPath=./Logs/
# Time when the session starts and ends
StartTime=00:00:00
EndTime=00:00:00
UseDataDictionary=Y
# Time in seconds before reconnecting
ReconnectInterval=60
LogoutTimeout=5
LogonTimeout=30
# FIX messages have a sequence ID, which shouldn't be used for uniqueness as the specification doesn't guarantee anything about them. With ResetOnLogon=Y (set per session below), the server resets the sequence every time a logon message is sent.
ResetOnLogout=N
ResetOnDisconnect=N
SendRedundantResendRequests=Y
# RefreshOnLogon=Y
SocketNodelay=N
# PersistMessages=Y
ValidateUserDefinedFields=N
ValidateFieldsOutOfOrder=N
# CheckLatency=Y

# session stream
[SESSION]
ResetOnLogon=Y
BeginString=FIX.4.4
SenderCompID=TESTING1
TargetCompID=TESTACC1
# Heartbeat interval in seconds; heartbeats keep the session from expiring
HeartBtInt=30
SocketConnectPort=4000
SocketConnectHost=127.0.0.1
DataDictionary=./spec/FIX44.xml
FileStorePath=./Sessions/

# session market
[SESSION]
ResetOnLogon=Y
BeginString=FIX.4.4
SenderCompID=TESTING2
TargetCompID=TESTACC2
HeartBtInt=30
SocketConnectPort=5000
SocketConnectHost=127.0.0.1
DataDictionary=./spec/FIX44.xml
FileStorePath=./Sessions/
But this doesn't work
I saw a post where they recommended using ThreadedSocketInitiator, but I don't think it's available in the Python quickfix lib.
Thanks in advance
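For reference, a minimal sketch of the two-initiator approach described above (stream.cfg and market.cfg are hypothetical file names, and Application is the class from the question); SocketInitiator.start() returns immediately and drives its sessions on a background thread, so two initiators can coexist in one process:
import time
import quickfix

app_stream = Application()
settings_stream = quickfix.SessionSettings('stream.cfg')
initiator_stream = quickfix.SocketInitiator(
    app_stream,
    quickfix.FileStoreFactory(settings_stream),
    settings_stream,
    quickfix.FileLogFactory(settings_stream))

app_market = Application()
settings_market = quickfix.SessionSettings('market.cfg')
initiator_market = quickfix.SocketInitiator(
    app_market,
    quickfix.FileStoreFactory(settings_market),
    settings_market,
    quickfix.FileLogFactory(settings_market))

initiator_stream.start()   # start() is non-blocking
initiator_market.start()

try:
    while True:
        time.sleep(1)      # keep the main thread alive while the sessions run
finally:
    initiator_market.stop()
    initiator_stream.stop()
Alternatively, since one settings file can hold several [SESSION] sections (as in the listing above), a single SocketInitiator over the combined file should log on to both sessions by itself.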

Using PySNMP as Trap Receiver with own/vendor MIB

I'm trying to use PySNMP to receive SNMPv3 traps. I found this example code:
#!/usr/bin/env /usr/bin/python3
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv
from pysnmp.proto.api import v2c
from pysnmp.smi.rfc1902 import ObjectIdentity
snmpEngine = engine.SnmpEngine()
# Transport setup
# UDP over IPv4
config.addTransport(
    snmpEngine,
    udp.domainName,
    udp.UdpTransport().openServerMode(('0.0.0.0', 162)),
)
# SNMPv3/USM setup
config.addV3User(
    snmpEngine, '<username>',
    config.usmHMACMD5AuthProtocol, '<password>',
    config.usmAesCfb128Protocol, '<password>',
    securityEngineId=v2c.OctetString(hexValue='<engineid>')
)
def cbFun(snmpEngine, stateReference, contextEngineId, contextName,
          varBinds, cbCtx):
    print('Notification from ContextEngineId "%s", ContextName "%s"' % (contextEngineId.prettyPrint(),
                                                                        contextName.prettyPrint()))
    for name, val in varBinds:
        print('%s = %s' % (name.prettyPrint(), val.prettyPrint()))
# Register SNMP Application at the SNMP engine
ntfrcv.NotificationReceiver(snmpEngine, cbFun)
snmpEngine.transportDispatcher.jobStarted(1) # this job would never finish
# Run I/O dispatcher which would receive queries and send confirmations
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
This code works for me, but I get the raw traps. I have a vendor-specific MIB file I want to use, but I can't find any documentation on how to bind the MIB to the snmpEngine. The examples using MIBs from the PySNMP documentation only show usage for SNMP GET operations and are not applicable here.
Has someone tried this before and can help me?
Thanks!
If your goal is to resolve the raw variable-bindings you receive into human-friendly form, then you need to process those variable-bindings through the MIB browser object.
You are right, that's exactly the same operation that the command generator frequently performs in the examples.
from pysnmp.smi import builder, view, compiler, rfc1902

# Assemble MIB browser
mibBuilder = builder.MibBuilder()
mibViewController = view.MibViewController(mibBuilder)
compiler.addMibCompiler(
    mibBuilder, sources=['file:///usr/share/snmp/mibs',
                         'http://mibs.snmplabs.com/asn1/@mib@'])

# Pre-load MIB modules that define objects we receive in TRAPs
mibBuilder.loadModules('SNMPv2-MIB', 'SNMP-COMMUNITY-MIB')

# This is what we would get in a TRAP PDU
varBinds = [
    ('1.3.6.1.2.1.1.3.0', 12345),
    ('1.3.6.1.6.3.1.1.4.1.0', '1.3.6.1.6.3.1.1.5.2'),
    ('1.3.6.1.6.3.18.1.3.0', '0.0.0.0'),
    ('1.3.6.1.6.3.18.1.4.0', ''),
    ('1.3.6.1.6.3.1.1.4.3.0', '1.3.6.1.4.1.20408.4.1.1.2'),
    ('1.3.6.1.2.1.1.1.0', 'my system')
]

# Pass raw var-binds through the MIB browser
varBinds = [
    rfc1902.ObjectType(rfc1902.ObjectIdentity(x[0]), x[1]).resolveWithMib(mibViewController)
    for x in varBinds
]

for varBind in varBinds:
    print(varBind.prettyPrint())
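To wire this into the trap receiver above, the same resolution can happen inside the callback; a minimal sketch, assuming the mibViewController and rfc1902 imports from the snippet above are in scope:
def cbFun(snmpEngine, stateReference, contextEngineId, contextName,
          varBinds, cbCtx):
    for name, val in varBinds:
        # Resolve each raw var-bind against the pre-loaded MIB modules
        varBind = rfc1902.ObjectType(
            rfc1902.ObjectIdentity(name), val).resolveWithMib(mibViewController)
        print(varBind.prettyPrint())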

Get printer status with SNMP using Pysnmp

I'm trying to get the status from my printer using the SNMP protocol.
The problem is, I've never used SNMP before and I have trouble understanding how I can get a status like (PAPER OUT, RIBBON OUT, etc...).
I configured my printer to enable the SNMP protocol with the community name "public".
I presume SNMP messages are sent on port 161.
I'm using Pysnmp because I want to integrate the Python script into my program, to listen to my printer and display the status if there is a problem with the printer.
For now I've tried this code:
import socket
import random
from struct import pack, unpack
from datetime import datetime as dt
from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp.proto.rfc1902 import Integer, IpAddress, OctetString
ip = '172.20.0.229'
community = 'public'
value = (1,3,6,1,2,1,25,3,5,1,2)
generator = cmdgen.CommandGenerator()
comm_data = cmdgen.CommunityData('server', community, 1) # 1 means version SNMP v2c
transport = cmdgen.UdpTransportTarget((ip, 161))
real_fun = getattr(generator, 'getCmd')
res = (errorIndication, errorStatus, errorIndex, varBinds) \
    = real_fun(comm_data, transport, value)
if errorIndication is not None or errorStatus is True:
    print "Error: %s %s %s %s" % res
else:
    print "%s" % varBinds
The IP address is the IP of my printer
The problem is the OID: I don't know what to put in the OID field because I have trouble understanding how OIDs work.
I found this page, but I'm not sure it applies to all printers ==> click here
In the general case you need your printer's specific MIB file. For example, the printer in my office doesn't seem to support either of the OIDs on the page you linked. You can also use snmpwalk to discover which OIDs and values are available on your printer; once you understand which values you need, you can query them for your specific printer instance.
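To do the equivalent of snmpwalk from Python, here is a minimal sketch that walks the hrPrinter subtree of HOST-RESOURCES-MIB (the subtree containing the hrPrinterDetectedErrorState OID from the question), assuming a pysnmp version that ships the hlapi layer; the IP address is the one from the question:
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

# Walk everything under hrPrinterTable (1.3.6.1.2.1.25.3.5)
for errorIndication, errorStatus, errorIndex, varBinds in nextCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),         # SNMP v2c
        UdpTransportTarget(('172.20.0.229', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.25.3.5')),
        lexicographicMode=False):                   # stop at the end of the subtree
    if errorIndication:
        print(errorIndication)
        break
    if errorStatus:
        print('%s at %s' % (errorStatus.prettyPrint(), errorIndex))
        break
    for varBind in varBinds:
        print(' = '.join([x.prettyPrint() for x in varBind]))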

tornado - transferring a file to cdn without blocking

I have the nginx upload module handling site uploads, but I still need to transfer files (let's say 3-20 MB each) to our CDN, and would rather not delegate that to a background job.
What is the best way to do this with Tornado without blocking other requests? Can I do this in an async callback?
You may find it useful in the overall architecture of your site to add a message queuing service such as RabbitMQ.
This would let you complete the upload via the nginx module, then have the tornado handler post a message containing the uploaded file path and exit. A separate process would watch for these messages and handle the transfer to your CDN. This type of service would be useful for many other tasks that could be handled offline (sending emails, etc.). As your system grows, this also provides you a mechanism to scale by moving queue processing to separate machines.
I am using an architecture very similar to this. Just make sure to add your message consumer process to supervisord or whatever you are using to manage your processes.
In terms of implementation, if you are on Ubuntu installing RabbitMQ is a simple:
sudo apt-get install rabbitmq-server
On CentOS with the EPEL repository:
yum install rabbitmq-server
There are a number of Python bindings for RabbitMQ. Pika is one of them, and it happens to be created by an employee of LShift, the company responsible for RabbitMQ.
Below is a bit of sample code from the Pika repo. You can easily imagine how the handle_delivery method would accept a message containing a filepath and push it to your CDN.
import sys
import pika
import asyncore

conn = pika.AsyncoreConnection(pika.ConnectionParameters(
        sys.argv[1] if len(sys.argv) > 1 else '127.0.0.1',
        credentials=pika.PlainCredentials('guest', 'guest')))
print 'Connected to %r' % (conn.server_properties,)

ch = conn.channel()
ch.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

should_quit = False

def handle_delivery(ch, method, header, body):
    print "method=%r" % (method,)
    print "header=%r" % (header,)
    print "  body=%r" % (body,)
    ch.basic_ack(delivery_tag=method.delivery_tag)
    global should_quit
    should_quit = True

tag = ch.basic_consume(handle_delivery, queue='test')

while conn.is_alive() and not should_quit:
    asyncore.loop(count=1)
if conn.is_alive():
    ch.basic_cancel(tag)
conn.close()
print conn.connection_close
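For the producer side sketched above (the tornado handler posting the uploaded file's path), a rough counterpart against the same old Pika API; the queue name matches the consumer sample and the file path is just an example:
import pika

# Connect and declare the same durable queue the consumer watches
conn = pika.AsyncoreConnection(pika.ConnectionParameters('127.0.0.1'))
ch = conn.channel()
ch.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

# Publish the uploaded file's path; delivery_mode=2 marks the message persistent
ch.basic_publish(exchange='',
                 routing_key='test',
                 body='/var/uploads/somefile.bin',
                 properties=pika.BasicProperties(content_type='text/plain',
                                                 delivery_mode=2))
conn.close()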
Advice on the Tornado Google group points to using an async callback (documented at http://www.tornadoweb.org/documentation#non-blocking-asynchronous-requests) to move the file to the CDN.
The nginx upload module writes the file to disk and then passes parameters describing the upload(s) back to the view. The file therefore isn't in memory, and the time it takes to read it from disk (which would block the request process itself, but not other Tornado processes, afaik) is negligible.
That said, anything that doesn't need to be processed online shouldn't be, and should be deferred to a task queue like celeryd or similar.
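For illustration, a rough sketch of that async-callback route with the Tornado API of the time; the CDN URL and the form field names (which depend on how the nginx upload module is configured) are hypothetical:
import tornado.web
from tornado.httpclient import AsyncHTTPClient

class UploadDoneHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        # The nginx upload module left the file on disk and passed us its path
        with open(self.get_argument('file.path'), 'rb') as f:
            body = f.read()
        # Hand the bytes to the CDN without blocking other requests
        AsyncHTTPClient().fetch('http://cdn.example.com/' + self.get_argument('file.name'),
                                method='PUT', body=body,
                                callback=self.on_cdn_response)

    def on_cdn_response(self, response):
        self.write('error' if response.error else 'ok')
        self.finish()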
