How does PySNMP handle tables with read-create permissions?

I'm new to SNMP, and finding it difficult to understand some of the mechanisms in PySNMP. I need to implement a table with read-create permissions to monitor and control a bridge on my network. I think it would be helpful if I had more clarity on one of the pieces of example code to understand what's happening in the framework when a manager attempts to create a new row.
I've been examining the sample code for implementing a conceptual table and executing the example snmpset/walk commands:
$ snmpset -v2c -c public 127.0.0.1 1.3.6.6.1.5.2.97.98.99 s "my value"
$ snmpset -v2c -c public 127.0.0.1 1.3.6.6.1.5.4.97.98.99 i 4
$ snmpwalk -v2c -c public 127.0.0.1 1.3.6
As far as I can tell, the set commands work because the MIB promises that exampleTableColumn2 describes OctetString scalars. How is this data created/stored by the agent? Is a generic scalar object created with the suffix ".97.98.99," or is this information somehow associated with the instance of exampleTableColumn2? If I were to subsequently run an snmpget or snmpset command on the object we just created, what would I be interacting with in the eyes of the framework?
In a real-world implementation, the agent would really be querying the device to create a new entry in some internal table, and you would need custom scalar objects with modified readGet/writeCommit methods, but the sample code hasn't established scalar classes to implement get/set methods. By understanding how columns with read-create permissions should be handled in PySNMP, I think I can implement a more robust agent application. Any help/clarity is sincerely appreciated.

How is this data created/stored by the agent? Is a generic scalar object created with the suffix ".97.98.99," or is this information somehow associated with the instance of exampleTableColumn2?
It is a generic scalar value of type OctetString, associated with a leaf node of type MibTableColumn in the agent's tree of objects (the MIB tree). In the MIB tree you will find a handful of node types, each exhibiting distinct behavior (see their docstrings), but otherwise they are very similar. Each node is identified by an OID.
If I were to subsequently run an snmpget or snmpset command on the object we just created, what would I be interacting with in the eyes of the framework?
The MIB tree object responsible for the OID you are querying will receive read* (for SNMP GET) or read*Next (for SNMP GETNEXT/GETBULK) events to which it should respond with a value.
In a real-world implementation, the agent would really be querying the device to create a new entry in some internal table, and you would need custom scalar objects with modified readGet/writeCommit methods
There are a couple of approaches to this problem; the one I've been pursuing so far is to override some of these read*/read*Next/write* methods to read or write the value from/to its ultimate source (your internal table).
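For illustration, a minimal sketch assuming pysnmp 4.x (BackedScalarInstance and my_internal_table are made-up names; getValue() is one of the read hooks a MibScalarInstance exposes):
# Minimal sketch, assuming pysnmp 4.x: serve reads from a custom
# backing store instead of the value held in the MIB tree node.
from pysnmp.smi import builder

mibBuilder = builder.MibBuilder()
MibScalarInstance, = mibBuilder.importSymbols('SNMPv2-SMI', 'MibScalarInstance')

my_internal_table = {(97, 98, 99): 'my value'}  # stand-in for device state

class BackedScalarInstance(MibScalarInstance):
    def getValue(self, name, idx):
        # Called when a GET/GETNEXT reaches this instance; fetch the
        # value from its ultimate source and clone it into the right type
        return self.getSyntax().clone(my_internal_table[name[-3:]])
Writes can be intercepted the same way by overriding the write* family (e.g. writeCommit) to push the new value down to the device.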
To simplify this, and to keep your code in sync with the MIB you are implementing, the pysmi library can turn a MIB into Python code with stubs, via Jinja2 templates. From these stubs you can access your internal table whenever an SNMP request triggers a read or write event. You can put your custom code into these stubs and/or into the Jinja2 templates from which the stubs are generated.
As an alternative to implementing your own SNMP agent, you might consider this general-purpose tool, which is driven by the same technology.

Related

Instantiating Flyte task to create workflow at runtime

Is it possible to instantiate a Flyte Task at runtime so that I can create a Workflow with a variable number of Tasks and with each Task running a runtime-determined Python callable? In the documentation, I only see references to compile-time Workflows that are declaratively composed of Python functions annotated with the @task decorator.
If you can provide any existing examples in open source code or a new, small inline example, please do! Thanks!
Have you looked at dynamic workflows? https://docs.flyte.org/projects/cookbook/en/stable/auto/core/control_flow/dynamics.html
Dynamic in Flyte is like JIT compilation in a language like Java: the new workflow graph is created, compiled, verified, and then executed, but the graph is created in response to the inputs, so you control its shape/structure at runtime.
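A minimal sketch of that pattern (task and workflow names are made up):
# The number of square() nodes is decided at runtime from the input.
from typing import List

from flytekit import dynamic, task, workflow

@task
def square(x: int) -> int:
    return x * x

@dynamic
def fan_out(xs: List[int]) -> List[int]:
    # This body executes at run time; each task call adds a node to the
    # workflow graph, which is then compiled, verified, and executed.
    return [square(x=x) for x in xs]

@workflow
def wf(xs: List[int]) -> List[int]:
    return fan_out(xs=xs)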
The functionality I was looking for is provided by the FlyteRemote class. With this class, one can register instantiated entities, i.e. tasks, workflows, and launchplans.

Will PKCS11 always find objects in the same order?

I have observed that both the bash command and what is probably a corresponding method from the Python PyKCS11 library seem to always find objects in the same order. My code relies on this being true, but I have not read it anywhere, just observed it.
In the terminal:
$ pkcs11-tool --list-objects
Using slot 0 with a present token (0x0)
Public Key Object; RSA 2048 bits
  label:  bob_key
  ID:     afe438bbe0e0c2784c5385b8fbaa9146c75d704a
  Usage:  encrypt, verify, wrap
Public Key Object; RSA 2048 bits
  label:  alice_key
  ID:     b03a4f6c375e8a8a53bd7a35947511e25cbdc34b
  Usage:  encrypt, verify, wrap
With Python:
objects = session.findObjects([(CKA_CLASS, CKO_PUBLIC_KEY)])
for obj in objects:
    d = obj.to_dict()
    print(d['CKA_LABEL'])
output:
bob_key
alice_key
objects is of type list and each element in objects is of type <class 'PyKCS11.CK_OBJECT_HANDLE'>
Will session.findObjects([(CKA_CLASS, CKO_PRIVATE_KEY)]), when run from a logged-in session, also always return a list in exactly the same order as the expression above? In this case with two keys, I would never want to see Alice come before Bob.
(Wanted to write a comment, but it got quite long...)
PKCS#11 does not guarantee any specific order of returned object handles, so it is up to the particular implementation.
Even though your implementation might seem to be consistently returning objects in the same order, there are situations in which this could unexpectedly change:
key renewal (keys do not last forever; you will need to generate new keys in the future)
middleware upgrade (newer implementations might return objects in a different order)
HSM firmware upgrade (major upgrades might change the way objects are stored and change the object enumeration order)
HSM recovery from backup (object order can change after an HSM restore)
host OS data recovery (some implementations store HSM objects encrypted in external folders, and the object search order might match the directory listing order, which could change without warning)
HSM change (are you sure you will be using the same device for the whole lifetime of your application?)
Relying on undefined behaviour is bad practice in general; in security especially, you should be very cautious.
It is definitely worth the time to stay on the safe side.
I would recommend performing a separate search for each required object (using some strong identifier, e.g. the label) -- this way you can perform additional checks (e.g. enforce the expected object type, ensure that the object is unique, etc.).
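For example, a sketch of what I mean (the helper name is made up; session is your logged-in session, and the labels are the ones from your listing):
# Look each key up by a strong identifier (CKA_LABEL) and insist on
# uniqueness, instead of relying on findObjects() enumeration order.
from PyKCS11 import CKA_CLASS, CKA_LABEL, CKO_PUBLIC_KEY

def find_unique_public_key(session, label):
    handles = session.findObjects([(CKA_CLASS, CKO_PUBLIC_KEY),
                                   (CKA_LABEL, label)])
    if len(handles) != 1:
        raise LookupError("expected exactly one object labelled %r, got %d"
                          % (label, len(handles)))
    return handles[0]

bob_key = find_unique_public_key(session, "bob_key")
alice_key = find_unique_public_key(session, "alice_key")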
A similar example is Cryptoki object handle re-use. PKCS#11 states that an object handle is bound to a particular session (i.e. if you obtained an object handle in session A, you should not use it in session B -- even if both sessions are running in the same application).
There are implementations that preserve the object handle for the same object across sessions. There are even implementations that preserve the same object handle across different applications (i.e. if you get object handle 123 in application A, you will get object handle 123 in application B for the same object).
This behaviour is even described in the respective developer manual. But if you ask the vendor whether you can rely on it, you are told that there are corner cases in some setups and that you must perform additional checks to be 100% sure that it will work as expected...
Good luck with your project!

MongoDB Python and C++ clients - error with multiple instances

I'm still new to MongoDB. My test C++ application is composed of a number of object files, and two of them have their own MongoDB instances. I've found that was a mistake, because I got an exception:
terminate called after throwing an instance of 'mongocxx::v_noabi::logic_error'
what(): cannot create a mongocxx::instance object if one has already been created
Aborted (core dumped)
So, I'll try to define a single MongoDB instance in this application.
And now I worry about my other application: it's a top-level program in Python which loads a number of dynamic libraries written in C++, each having its own MongoDB instance. Where should I define the MongoDB instance - in the top-level code, in each library, or in one of the libraries?
You should create one shared library which manages a singleton instance of mongocxx::instance and have all of the other libraries which need to use the driver access that singleton via some common API. Please see the instance management example.

Dynamic updates and API - Bind or something else?

At the place where I work we have a few upcoming projects which require dynamic DNS functionality, i.e. being able to dynamically insert/modify/delete DNS records.
So far we've been using a simple BIND setup with one master and a few slaves. The master's data (zone files) is kept in git, and we have a simple but pretty effective Fabric file which we use to make sure all changes are committed to git and then deployed to the master, which propagates them further to the slaves.
We use views, and we use them a lot; given how much internal stuff we've got, keeping this kind of functionality, i.e. not exposing internal records to the public, is a must.
I've been researching possible solutions for quite a while. A suitable one would (a) allow us to perform dynamic updates on all zones, including the same zones in different views, (b) expose an ideally RESTful API we can talk to in order to issue those updates, and (c) be open source, so we can either use it directly or at least build on it.
Sadly I haven't found anything that comes even close to this requirement set, which I don't think is very unusual. We have started considering writing something of our own: use Python with a DNS library such as dnspython to talk to BIND through the dynamic update (nsupdate) protocol and issue the changes we want. But before I dive in, I thought I'd ask for advice in case I have missed anything.
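To make this concrete, here is a rough sketch of the kind of update I have in mind, using dnspython (the zone, TSIG key name/secret, and addresses are made up):
# RFC 2136 dynamic update, roughly what nsupdate does, scripted.
import dns.query
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text({
    'ddns-key.': 'c2VjcmV0IGtleSBtYXRlcmlhbC4uLi4=',  # base64 TSIG secret
})

update = dns.update.Update('example.internal.', keyring=keyring)
update.replace('host1', 300, 'A', '10.0.0.5')  # upsert host1.example.internal

response = dns.query.tcp(update, '127.0.0.1')
print(response.rcode())  # 0 (NOERROR) on success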
Any advice, much appreciated.

Python object hierarchy, and REST resources?

I am currently running into an "architectural" problem in my Python app (using Twisted) that uses a REST API, and I am looking for feedback.
Warning! Long post ahead!
Let's assume the following object hierarchy:
class Device(object):
    def __init__(self):
        self._driver = Driver()
        self._status = Status()
        self._tasks = TaskManager()

    def __getattr__(self, attr_name):
        # delegate unknown attribute lookups to the task manager
        if hasattr(self._tasks, attr_name):
            return getattr(self._tasks, attr_name)
        else:
            raise AttributeError(attr_name)

class Driver(object):
    def __init__(self):
        self._status = DriverStatus()

    def connect(self):
        """some code here"""

    def disconnect(self):
        """some code here"""

class DriverStatus(object):
    def __init__(self):
        self._isConnected = False
        self._isPluggedIn = False
I also have a rather deep object hierarchy (the above elements are only a sub-part of it), so right now this gives me the following resources in the REST API (I know, REST isn't about URL hierarchy but about media types; this is for simplicity's sake):
/rest/environments
/rest/environments/{id}
/rest/environments/{id}/devices/
/rest/environments/{id}/devices/{deviceId}
/rest/environments/{id}/devices/{deviceId}/driver
/rest/environments/{id}/devices/{deviceId}/driver/driverstatus
I switched a few months back from a "dirty" SOAP-type API to REST, but I am becoming unsure about how to handle what seems like added complexity:
Proliferation of REST resources/media types: for example, instead of having just a Device resource I now have all these resources:
Device
DeviceStatus
Driver
DriverStatus
While these all make sense from a RESTful point of view, is it normal to have a lot of sub-resources that each map to a separate Python class?
Mapping a method-rich application core to a RESTful API: in REST, resources should be nouns, not verbs. Are there good rules/tips for intelligently defining a set of resources from a set of methods? (The most comprehensive example I found so far seems to be this article.)
API logic influencing application structure: should an application's API logic at least partially guide some of its internal logic, or is it good practice to apply separation of concerns? I.e., should I have an intermediate layer of "resource" objects whose job is to communicate with the application core, but which do not map one-to-one to the core's classes?
How would one correctly handle the following in a RESTful way: I need to be able to display a list of available driver types (i.e. class names, not Driver instances) in the client. Would this mean creating yet another resource like "DriverTypes"?
These are rather long-winded questions, so thanks for your patience; any pointers, feedback, and criticism are more than welcome!
To S.Lott:
By "too fragmented resources" what i meant was, lots of different sub resources that basically still apply to the same server side entity
For The "connection" : So that would be a modified version of the "DriverStatus" resource then ? I consider the connection to be always existing, hence the use of "PUT" , but would that be bad thing considering "PUT" should be idempotent ?
You are right about "stopping coding and rethinking", that is the reason I asked all these questions and putting things down, on paper to get a better overview.
-The thing is, right now the basic "real world objects" as you put them make sense to me as rest resources /collections of resources, and they are correctly manipulated via POST, GET, UPDATE, DELETE , but I am having a hard time getting my head around the Rest approach for things that I do not instinctively view as "Resources".
Rule 1. REST is about objects. Not methods.
The REST "resources" have become too fragmented
False. Always false. REST resources are independent. They can't be "too" fragmented.
instead of having just a Device resource I now have all these resources: Device, DeviceStatus, Driver, DriverStatus
While these all make sense from a [RESTful] point of view, is it normal to have a lot of sub-resources that each map to a separate Python class?
Actually, they don't make sense. Hence your question.
Device is a thing. /rest/environments/{id}/devices/{deviceId}
It has status. You should consider providing the status and the device information together as a single composite document that describes a device.
Just because your relational database is normalized does not mean your RESTful objects need to be precisely as normalized as your database. While it's simpler (and many frameworks make it very, very simple to do this) it may not be meaningful.
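For instance, such a composite representation might look like this (field names are invented):
# Hypothetical composite document: one GET on the device resource
# returns the device together with its driver and statuses, instead of
# exposing DeviceStatus/DriverStatus as separate resources.
device_document = {
    "id": "deviceId42",
    "status": {"state": "running"},
    "driver": {
        "name": "bridge-driver",
        "status": {"isConnected": True, "isPluggedIn": False},
    },
}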
consider the connection to be always existing, hence the use of "PUT", but would that be a bad thing considering "PUT" should be idempotent?
Connections do not always exist. They may come and go.
While a relational database may have a many-to-many association table which you can UPDATE, that's a peculiar special case that doesn't really make much sense outside the world of DBA's.
The connection between two RESTful things is rarely a separate thing. It's an attribute of each of the RESTful things.
It's perfectly unclear what this "connection" thing is. You talk vaguely about it, but provide no details.
Lacking any usable facts, I'll guess that you're connecting devices to drivers and there's some kind of [Device]<-[Driver Status]->[Driver] relationship. The connection from device to driver can be a separate RESTful resource.
It can just as easily be an attribute of Device or Driver that does not actually have a separate, visible, RESTful resource.
[Again. Some frameworks like Django-Piston make it trivial to simply expose the underlying classes. This may not always be appropriate, however.]
are there good rules/tips to intelligently define a set of resources from a set of methods?
Yes. Don't do it. Resources aren't methods. Pretty much that's that.
If you have a lot of methods -- outside CRUD -- then you may have a data model issue. You may have too few classes of things expressed in your relational model and too many stateful updates of things.
Stateful objects are not inherently evil, but they need to be examined critically. In some cases, a PUT to change status of an object perhaps should have been a POST to add to the history of an object. The "current" state is the last thing POSTed.
Also.
You don't have to trivialize each resource as a class of things. You can have resources which are collections. You can POST a fairly complex document to a composite (properly a Facade) "resource". That complex document can imply several CRUD operations in the database.
You're wandering away from simple RESTful. Your question remains intentionally murky. "method rich application core" doesn't mean much. Without concrete examples, it's impossible to imagine.
Api logic influencing application structure
If these are somehow different, you're probably creating needless, no-value complexity.
is it good practice to apply separation of concerns ?
Always. Why ask?
a lot of this seems to come from my confusion about how to map a rather method-rich API to a RESTful one, where resources should be nouns, not verbs: so when is it wise to consider an element a REST "resource"?
A resource is defined by your problem domain. It's usually something tangible. The methods (as in "method-rich API") are usually irrelevant. They're CRUD (Create, Retrieve, Update, Delete) operations. If you have something that's not essentially CRUD, you have to STOP coding. STOP writing code, and rethink the API to make it CRUD-like.
CRUD - Create-Retrieve-Update-Delete maps to REST's POST-GET-PUT-DELETE. If you can't recast your problem domain into those terms, stop coding. Stop coding until you get to CRUD rules.
I need to be able to display a list of available driver types (i.e. class names, not Driver instances) in the client: would this mean creating yet another resource like "DriverTypes"?
Correct. They're already part of your problem domain. You already have this class defined. You're just making it available through REST.
Here's the point. The problem domain has real-world objects. You have class definitions. They're tangible things. REST transfers the state of those tangible things.
Your software may have intangible things like "associations" or "links" or "connections" and other junk that's part of the software solution. This junk doesn't matter very much. It's implementation detail, not real-world things.
An "association" is always visible from both of the two real-world RESTful resources. One resource may have a foreign-key-like reference that allows the client to do a RESTful fetch of another, related object. Or a resource may have a collection of other, related objects, and a single GET retrieves an object and a collection of related objects.
Either way, the real-world RESTful resources are what's available. The relationship is merely implied. Even if it's a physical many-to-many database table -- that doesn't mean it must be exposed. [Again. Some frameworks make it trivially easy to expose everything. This isn't always good.]
You can represent the path portion /rest with a Site object, but environments in the path must be a Resource. From there you have to handle the hierarchy yourself in the render_* methods of environments. The request object you get will have a postpath attribute that gives you the remainder of the path (i.e. after /rest/environments). You'll have to parse out the id, detect whether or not devices is given in the path, and if so pass the remainder of the path (and the request) down to your devices collection. Unfortunately, Twisted will not handle this decision for you.
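For illustration, a minimal sketch of that arrangement with twisted.web (class names are invented, and only GET is handled):
# /rest and /rest/environments are static Resources; everything below
# environments is parsed by hand from request.postpath.
from twisted.internet import reactor
from twisted.web import resource, server

class EnvironmentsResource(resource.Resource):
    isLeaf = True  # stop URL traversal here; we handle the rest ourselves

    def render_GET(self, request):
        # request.postpath holds the unconsumed segments after
        # /rest/environments, e.g. [b'42', b'devices', b'7']
        segments = [s.decode() for s in request.postpath if s]
        if not segments:
            return b'list of environments'
        env_id = segments[0]
        if len(segments) > 1 and segments[1] == 'devices':
            # hand the remainder (and the request) to the devices collection
            return ('devices of environment %s' % env_id).encode()
        return ('environment %s' % env_id).encode()

root = resource.Resource()
rest = resource.Resource()
root.putChild(b'rest', rest)
rest.putChild(b'environments', EnvironmentsResource())

reactor.listenTCP(8080, server.Site(root))
reactor.run()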
