I'm still new to MongoDB. My test C++ application is composed of a number of object files, and two of them have their own MongoDB instances. I've found that this was a mistake, because I got an exception:
terminate called after throwing an instance of 'mongocxx::v_noabi::logic_error'
what(): cannot create a mongocxx::instance object if one has already been created
Aborted (core dumped)
So, I'll try to define a single MongoDB instance in this application.
Now I'm worried about another application of mine: its top-level program is written in Python and loads a number of dynamic libraries written in C++, each having its own MongoDB instance. Where should I define the MongoDB instance: in the top-level code, in each library, or in one of the libraries?
You should create one shared library which manages a singleton instance of mongocxx::instance and have all of the other libraries which need to use the driver access that singleton via some common API. Please see the instance management example.
Related
Is it possible to instantiate a Flyte Task at runtime so that I can create a Workflow with a variable number of Tasks and with each Task running a runtime-determined Python callable? In the documentation, I only see references to compile-time Workflows that are declaratively composed of Python functions annotated with the @task decorator.
If you can provide any existing examples in open source code or a new, small inline example, please do! Thanks!
Have you looked at dynamic workflows? https://docs.flyte.org/projects/cookbook/en/stable/auto/core/control_flow/dynamics.html
Dynamic in Flyte is like JITing in a language like Java. The new workflow graph is created, compiled, verified, and then executed. But the graph is created in response to the inputs, and you control its shape and structure at runtime.
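To make that concrete, here is a minimal sketch of what a dynamic workflow can look like; the task, workflow, and input names are made up for illustration and assume flytekit's @dynamic decorator:

from typing import List
from flytekit import dynamic, task, workflow

@task
def square(x: int) -> int:
    return x * x

@dynamic
def fan_out(n: int) -> List[int]:
    # The number of square() nodes is decided at runtime from the input n;
    # Flyte compiles and runs this sub-graph when the workflow executes.
    return [square(x=i) for i in range(n)]

@workflow
def wf(n: int = 5) -> List[int]:
    return fan_out(n=n)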
The functionality I was looking for is provided by the FlyteRemote class. With this class, one can register instantiated entities, i.e. tasks, workflows, and launchplans.
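Roughly, it can be used like this; the endpoint, project, domain, and module names below are placeholders, and the exact keyword arguments may differ between flytekit versions:

from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

from myapp.workflows import wf  # hypothetical module holding a @workflow

remote = FlyteRemote(
    config=Config.for_endpoint("flyte.example.com"),  # placeholder endpoint
    default_project="flytesnacks",
    default_domain="development",
)

# Register the instantiated workflow, then launch an execution of it
registered = remote.register_workflow(wf, version="v1")
execution = remote.execute(registered, inputs={"n": 3})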
I'm new to SNMP, and finding it difficult to understand some of the mechanisms in PySNMP. I need to implement a table with read-create permissions to monitor and control a bridge on my network. I think it would be helpful if I had more clarity on one of the pieces of example code to understand what's happening in the framework when a manager attempts to create a new row.
I've been examining the sample code for implementing a conceptual table and executing the example snmpset/walk commands:
$ snmpset -v2c -c public 127.0.0.1 1.3.6.6.1.5.2.97.98.99 s "my value"
$ snmpset -v2c -c public 127.0.0.1 1.3.6.6.1.5.4.97.98.99 i 4
$ snmpwalk -v2c -c public 127.0.0.1 1.3.6
As far as I can tell, the set commands work because the MIB promises that exampleTableColumn2 describes OctetString scalars. How is this data created/stored by the agent? Is a generic scalar object created with the suffix ".97.98.99," or is this information somehow associated with the instance of exampleTableColumn2? If I were to subsequently run an snmpget or snmpset command on the object we just created, what would I be interacting with in the eyes of the framework?
In a real-world implementation, the agent would really be querying the device to create a new entry in some internal table, and you would need custom scalar objects with modified readGet/writeCommit methods, but the sample code hasn't established scalar classes to implement get/set methods. By understanding how columns with read-create permissions should be handled in PySNMP, I think I can implement a more robust agent application. Any help/clarity is sincerely appreciated.
How is this data created/stored by the agent? Is a generic scalar object created with the suffix ".97.98.99," or is this information somehow associated with the instance of exampleTableColumn2?
This is a generic scalar value of type OctetString associated with a leaf node in a tree of objects (MIB tree) of type MibTableColumn. In the MIB tree you will find a handful of node types each exhibiting distinct behavior (see docstrings), but otherwise they are very similar. Each node is identified by an OID.
If I were to subsequently run an snmpget or snmpset command on the object we just created, what would I be interacting with in the eyes of the framework?
The MIB tree object responsible for the OID you are querying will receive read* (for SNMP GET) or read*Next (for SNMP GETNEXT/GETBULK) events to which it should respond with a value.
In a real-world implementation, the agent would really be querying the device to create a new entry in some internal table, and you would need custom scalar objects with modified readGet/writeCommit methods
There are a couple of approaches to this problem, the way I've been pursuing it so far is to override some of these read*/read*Next/write* methods to read or write the value from/to its ultimate source (your internal table).
To simplify and keep your code in sync with the MIB you are implementing, the pysmi library can turn a MIB into Python code with stubs via Jinja2 templates. From these stubs you can access your internal table whenever an SNMP request triggers a read or write event. You can put your custom code into these stubs and/or into the Jinja2 templates from which the stubs are generated.
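As a hedged sketch of that override approach, following the pattern in pysnmp's "implementing MIB objects" example (pysnmp 4.x); the OIDs, module name, and internal table here are placeholders, not part of the example MIB from the question:

from pysnmp.proto.api import v2c
from pysnmp.smi import builder

mibBuilder = builder.MibBuilder()
MibScalar, MibScalarInstance = mibBuilder.importSymbols(
    'SNMPv2-SMI', 'MibScalar', 'MibScalarInstance'
)

internalTable = {'abc': 'my value'}  # your device's real data lives here

class MyScalarInstance(MibScalarInstance):
    def getValue(self, name, idx):
        # Called when an SNMP GET/GETNEXT reaches this instance OID;
        # answer from the internal table instead of a stored value.
        return self.getSyntax().clone(internalTable.get('abc', ''))

mibBuilder.exportSymbols(
    '__MY_MIB',
    MibScalar((1, 3, 6, 6, 1, 5, 9), v2c.OctetString()),
    MyScalarInstance((1, 3, 6, 6, 1, 5, 9), (0,), v2c.OctetString()),
)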
As an alternative to implementing your own SNMP agent, you might consider this general-purpose tool, which is driven by the same technology.
Python docs mention this word a lot and I want to know what it means.
It simply means it can be serialized by the pickle module. For a basic explanation of this, see What can be pickled and unpickled?. Pickling Class Instances provides more details, and shows how classes can customize the process.
Things that are usually not picklable are, for example, sockets, file (handle) objects, database connections, and so on. Everything that's built up (recursively) from basic Python types (dicts, lists, primitives, objects, object references, even circular ones) can be pickled by default.
You can implement custom pickling code that will, for example, store the configuration of a database connection and restore it afterwards, but you will need special, custom logic for this.
All of this makes pickling a lot more powerful than XML, JSON, and YAML (but definitely not as readable).
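For example, here is a sketch of the "store the configuration, rebuild the connection" idea using __getstate__/__setstate__; the DBConnection wrapper class is made up for illustration:

import pickle
import sqlite3

class DBConnection:
    def __init__(self, path):
        self.path = path
        self.conn = sqlite3.connect(path)  # live handle: not picklable

    def __getstate__(self):
        # Drop the unpicklable handle, keep only the configuration
        state = self.__dict__.copy()
        del state['conn']
        return state

    def __setstate__(self, state):
        # Restore the configuration, then re-open the connection
        self.__dict__.update(state)
        self.conn = sqlite3.connect(self.path)

db = DBConnection(':memory:')
restored = pickle.loads(pickle.dumps(db))  # the round trip works again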
These are all great answers, but for anyone who's new to programming and still confused, here's the simple answer:
Pickling an object means making it so you can store it as it currently is, long term (often to hard disk). A bit like saving in a video game.
So anything that's actively changing (like a live connection to a database) can't be stored directly (though you could probably figure out a way to store the information needed to create a new connection, and that you could pickle).
Bonus definition: serializing is packaging something in a form that can be handed off to another program. Unserializing is unpacking something you were sent so that you can use it.
Pickling is the process by which Python objects are converted into a simple binary representation that can be written to a file and stored. This is done to persist Python objects and is also called serialization. You can infer from this what de-serialization or unpickling means.
So when we say an object is picklable it means that the object can be serialized using the pickle module of python.
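For example, a minimal round trip with the pickle module looks like this:

import pickle

data = {'accounts': ['alice', 'bob'], 'balances': {'alice': 10, 'bob': 5}}
blob = pickle.dumps(data)      # bytes, suitable for writing to a binary file
restored = pickle.loads(blob)
assert restored == data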
I am using the standalone zodbbrowser 0.11.1 with a ZODB3 database. I can access the database fine but when I insert objects of unknown type into the ZODB the browser only displays:
Attributes
data: {u'account-1': <persistent broken __main__.Account instance '\x00\x00\x00\x00\x00\x00\x00\x01'>,
u'account-2': <persistent broken __main__.Account instance '\x00\x00\x00\x00\x00\x00\x00\x01'>
}
I'd like to see a formatted printout from __repr__ (or __str__) instead. The short user guide on PyPI, under "Help! Broken objects everywhere", recommends making sure your application objects are importable from the Python path. But I don't know how.
How do I make the Account class (from the tutorial):
class Account(Persistent):
def __init__(self):
...
known to zodbbrowser in standalone mode so that the persistent broken types are replaced with a __str__ representation of the object instance?
To answer the question generally:
The easiest way would be to pip install zodbbrowser into the same virtualenv you used for your ZODB application that created the database in question. This assumes you use virtualenv.
The second easiest way would be to add zodbbrowser to the list of eggs in buildout.cfg in the buildout you used for your ZODB application that created the database in question. This assumes you use zc.buildout.
Finally, you can try to set PYTHONPATH so that the module you used to create the persistent objects is importable.
None of the above will help your specific case, because the persisted objects think they belong to the module called __main__. That's a bad idea! There's only one __main__ in every Python invocation, and it depends on the script you run. If that script is zodbbrowser, then it can't also be your application.
For best results don't define any Persistent subclasses in your main script. Always define them in a separate module and import them.
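A sketch of that layout (the module name accounts is made up here):

# accounts.py -- importable module that owns the persistent classes
from persistent import Persistent

class Account(Persistent):
    def __init__(self, balance=0):
        self.balance = balance

# main.py -- the script that opens the database imports the class instead
# of defining it, so the pickles record accounts.Account, not __main__.Account:
#     from accounts import Account
#     root['account-1'] = Account()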
However, if you already have such a database, and need to access the objects for forensic purposes or whatnot, there's a possible workaround:
write a new script, say, myzodbbrowser.py, that looks something like this:
from myapp import Account # replace myapp with the script name of your app
import zodbbrowser.standalone
zodbbrowser.standalone.main()
run it with the Python from your virtualenv or buildout, where you installed zodbbrowser.
I am new to Python. I just want to know: is there any module in Python similar to Ruby's drb? That is, can a client use an object provided by the drb server?
This is generally called "object brokering" and a list of some Python packages in this area can be found by browsing the Object Brokering topic area of the Python Package Index here.
The oldest and most widely used of these is Pyro.
Pyro does what I think you're describing (although I've not used drb).
From the website:
Pyro is short for PYthon Remote Objects. It is an advanced and powerful Distributed Object Technology system written entirely in Python, that is designed to be very easy to use. Never worry about writing network communication code again, when using Pyro you just write your Python objects like you would normally. With only a few lines of extra code, Pyro takes care of the network communication between your objects once you split them over different machines on the network. All the gory socket programming details are taken care of, you just call a method on a remote object as if it were a local object!
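For a flavour of the API, here is a minimal sketch assuming Pyro5; the Greeter class and everything in it are made up:

import Pyro5.api

@Pyro5.api.expose
class Greeter:
    def hello(self, name):
        return "Hello, %s" % name

# --- server process ---
daemon = Pyro5.api.Daemon()
uri = daemon.register(Greeter)   # something like PYRO:obj_...@localhost:port
print("Server URI:", uri)
# daemon.requestLoop()           # uncomment to start serving

# --- client process, given the printed URI ---
# greeter = Pyro5.api.Proxy(uri)
# print(greeter.hello("world"))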
The standard multiprocessing module might do what you want.
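In particular, multiprocessing.managers.BaseManager can expose an object to clients over a socket, which is close to the drb idea. A sketch, where the Counter class, address, and authkey are made up:

from multiprocessing.managers import BaseManager

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

counter = Counter()

class CounterManager(BaseManager):
    pass

# Clients registering 'get_counter' and calling connect() get back a proxy
# whose method calls run against this single server-side Counter object.
CounterManager.register('get_counter', callable=lambda: counter)

if __name__ == '__main__':
    manager = CounterManager(address=('127.0.0.1', 50000), authkey=b'secret')
    manager.get_server().serve_forever()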
I have no idea what drb is, but from the little information you have given, it might be something like the Perspective Broker in Twisted:
Introduction

Suppose you find yourself in control of both ends of the wire: you have two programs that need to talk to each other, and you get to use any protocol you want. If you can think of your problem in terms of objects that need to make method calls on each other, then chances are good that you can use twisted's Perspective Broker protocol rather than trying to shoehorn your needs into something like HTTP, or implementing yet another RPC mechanism.

The Perspective Broker system (abbreviated PB, spawning numerous sandwich-related puns) is based upon a few central concepts:

serialization: taking fairly arbitrary objects and types, turning them into a chunk of bytes, sending them over a wire, then reconstituting them on the other end. By keeping careful track of object ids, the serialized objects can contain references to other objects and the remote copy will still be useful.

remote method calls: doing something to a local object and causing a method to get run on a distant one. The local object is called a RemoteReference, and you do something by running its .callRemote method.
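A tiny server-side sketch of what that looks like in code; the Echoer class and port number are illustrative:

from twisted.internet import reactor
from twisted.spread import pb

class Echoer(pb.Root):
    def remote_echo(self, message):
        # Runs on the server when a client does callRemote("echo", ...)
        return "echo: %s" % message

if __name__ == '__main__':
    reactor.listenTCP(8789, pb.PBServerFactory(Echoer()))
    reactor.run()

# Client side, in a separate process:
#     factory = pb.PBClientFactory()
#     reactor.connectTCP("localhost", 8789, factory)
#     d = factory.getRootObject()
#     d.addCallback(lambda obj: obj.callRemote("echo", "hello"))
#     d.addCallback(print)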
Have you looked at execnet?
http://codespeak.net/execnet/
For parallel processing and distributed computing I use Parallel Python.