Is there an easy way to pass a complete data structure containing multiple data types from C++ to Python and vice versa?
I have a complex class with pointer members to floats, longs, etc. I could convert it into a JSON string and parse it on both sides, but that would be really slow.
However, if there were a special format that carried this data but also stored metadata about the start/end offsets within the JSON string, it would parse much faster. Is there anything like this?
I would personally recommend serializing your data to JSON in C++ using e.g. rapidjson or Qt, then transferring the resulting string to Python using the C API bindings for Python and deserializing it into a Python dictionary there. One-way or two-way transfer should be easy enough.
A note about the C API bindings, however: I have used them in the past and it was not a pleasant experience in any way, shape or form. Eventually you will make them work and do what you want, but it will cost you some nerves.
Lastly, do not worry about performance. Since you are using Python (an interpreted language), you are apparently not doing anything performance-critical anyway, so the cost of JSON (de)serialization can be ignored here.
Good luck, because you are going to need it with Python's C API bindings.
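If you would rather not hand-write the C API glue, one possible shape for the Python side is a small ctypes wrapper around a C-linkage function exported from your C++ library that returns the rapidjson output. This is only a sketch; the library name mylib.so and the function get_state_json are assumptions for illustration, not something from the question:
import ctypes
import json

# Hypothetical: the C++ side exports
#   extern "C" const char* get_state_json();
# which returns a rapidjson-serialized snapshot of the object.
lib = ctypes.CDLL("./mylib.so")
lib.get_state_json.restype = ctypes.c_char_p

raw = lib.get_state_json()               # JSON bytes produced in C++
state = json.loads(raw.decode("utf-8"))  # plain dict/list/float values in Python
print(state)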
I have some data stored in a Redis cache which will be read by my application in Rust. The data is being stored by Python. Whenever I store a string or an array, it ends up in a strange form that I was not able to read in Rust. Vice versa, I want to write from Rust and be able to read it in Python.
Using django shell:
In [0]: cache.set("test","abc")
In [1]: cache.get("test")
Out[1]:'abc'
Using redis-cli:
127.0.0.1:6379> GET :1:test
"\x80\x04\x95\a\x00\x00\x00\x00\x00\x00\x00\x8c\x03abc\x94."
Output from Rust:
Err(Invalid UTF-8)
Rust code reading the data using the redis-rs library:
let client = redis::Client::open("redis://127.0.0.1:6379")?;
let mut con = client.get_connection()?;
let q: Result<String, redis::RedisError> = con.get(":1:test");
println!("{:?}", q);
I want to be able to read a string or array into Rust as it was written in Python and vice-versa.
Also, data in one key will only be ever written by either Rust or Python, not both.
This question is not a duplicate of this one, as that deals specifically with accent encoding; I want to solve my problem for arrays as well. Moreover, the value set in Redis by Django for a string is not simply the UTF-8 encoding of the string.
Ah, the joys of trying to throw data across environments. The thing you're being bitten by right now is called pickle and is the default serializer of django-redis. What a serializer does in this case (in Python) is transform your data between Python and Redis so you can store it, regardless of the type, but more importantly so you can retrieve it with the type it came in.
The Python side
Obviously, if you had infinite time and effort, you could rewrite pickle in rust and you'd then be able to read this format. I'm pretty sure you have neither, and depending on the data you're storing, you might not even want to do so.
Instead, what I'm going to suggest is to change the serializer from pickle to JSON. The description of what to change in the config is located at https://django-redis-cache.readthedocs.io/en/latest/advanced_configuration.html#pluggable-serializers, and in particular, I'm pretty sure the class name you want to use is django_redis.serializers.JSONSerializer.
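For reference, a sketch of what that settings change might look like; the exact option name and dotted path vary between django-redis and django-redis-cache, so treat the values below as assumptions and check the docs of the backend you actually have installed:
# settings.py (sketch only; paths/option names are assumptions)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            # Store values as plain JSON instead of pickle so non-Python
            # clients (e.g. the Rust reader) can parse them.
            "SERIALIZER": "django_redis.serializers.json.JSONSerializer",
        },
    }
}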
This comes with drawbacks. In particular, there will be some object types you will no longer be able to store on the Python side, but if you really do intend to read the data on the Rust side, this should not concern you.
Sven Marnach mentioned in one of the comments that the serde-pickle crate exists. I have not used it myself, but it does look promising and might save you a ton of interop work if it does function.
The Rust side
To read stuff, now that every key is going to be JSON, you'll be decoding values with either serde (via serde_json) or miniserde. This should be pretty straightforward; do bear in mind that you will not get native types out of this; instead, you'll get variants of the serde_json::Value enum (Bool, Number, String, Array, Object, etc.), which you will then have to match on.
Edit your question to indicate what you are trying to store, and I'll happily expand on how to do this on here!
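As a side note, if some keys are written outside the Django cache layer, the same principle applies with plain redis-py and the standard json module: store json.dumps output and any language can read it back as ordinary UTF-8. A sketch (the key name shared:test is made up):
import json
import redis  # redis-py, assumed to be installed

r = redis.Redis(host="127.0.0.1", port=6379)

# Write plain UTF-8 JSON -- no pickle framing, so Rust can read it directly.
r.set("shared:test", json.dumps(["abc", 1, 2.5]))

# Reading back in Python is symmetric.
value = json.loads(r.get("shared:test"))
print(value)  # ['abc', 1, 2.5]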
I want to classify some text, so I have to compare it with other texts. After representing the texts as vectors, how can I store them (very big lists of float values) in an SQL database for later use?
My idea is to use the pickle module:
import pickle

vector = text_to_vector(text)       # very big list of floats
present = pickle.dumps(vector)      # serialize the vector to a byte string
some_db.save(text_id, present)
# later
present = some_db.get(text_id)
vector = pickle.loads(present)
Is it fast and effective if I have thousands of texts?
You may find that pickle and databases don't work too well together.
Python's pickle is for serializing Python objects to a format that can then be read back into Python objects by Python. Although it's very easy to serialize with pickle, you can't* query this serialized format, and you can't* read it into a program in another language. Check out cPickle, another Python module, for faster pickling.
Databases, on the other hand, are great for persisting data in a way that is queryable and not language-specific. But the cost is that it's generally harder to get data into and out of the database. That's why there are special tools like SQLAlchemy, and endless blog-based debates about the benefits/horrors of object-relational mapping software.
Pickling objects and then sending them to a database such as MySQL or SQL Server is probably not a good idea. However, check out shelve, another Python module for database-like persistence of Python objects.
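Since shelve just came up, a minimal sketch of what it looks like (standard library, dict-like, file-backed; the filename vectors.db and the key are made up):
import shelve

# shelve pickles values under the hood but gives you key/value persistence
# without running a database server.
with shelve.open("vectors.db") as db:
    db["text_42"] = [0.1, 0.2, 0.3]   # any picklable object
    vector = db["text_42"]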
So, to sum up:
use pickle or shelve if you just need to save the data for later use by a Python program
map objects to a database if you want to persist the data for general use, with the understanding that this requires more effort
performance-wise, cPickle will probably win over a database + object-relational mapping
*: at least, not without a lot of effort and/or special libraries.
I'm currently working on a project where I need to transfer objects from Ruby to Python and back again, and obviously serialization is the way to go. I've looked at things like YAML, but decided to write my own because I didn't want to deal with the libraries' dependencies when it came time to distribute. I've written up how this serialization format works here.
My question is: as this format is intended to work cross-language between Ruby and Python, how should I serialize Ruby's symbols? I'm not aware of an object that works the same way in Python. Should a dump containing a symbol fail? Should I just serialize it as a string? What would be best?
Doesn't that depend on what your project needs? If symbols are important, you'll need some way to deal with them.
I'm not a Ruby programmer, but from what I've just read, I think converting them to strings is probably easiest. The standard Python interpreter will reuse memory for identical short strings, which seems to be a key reason suggested for using symbols.
EDIT: If it needs to work for other programmers, passing values back and forth shouldn't change them. So you either have to handle symbols properly, or throw an error straight away. It should be simple enough in Python:
class Symbol(str):
    pass

# In serialising code:
if isinstance(x, Symbol):
    serialise_as_symbol(x)
Any reason you're not using a standard data interchange format like JSON or XML? They seem to be acceptable to countless applications, services, and programmers.
If symbols are a stumbling block, then you have three choices: don't allow them, convert them to strings on the fly, or figure out a way to make them universal and/or innocuous in other languages.
For a while I've been using a package called "gnosis-utils" which provides an XML pickling service for Python. This package works reasonably well; however, it seems to have been neglected by its developer for the last four years.
At the time we originally selected Gnosis, it was the only XML serialization tool for Python. The advantage of Gnosis was that it provided a set of classes whose interface was very similar to the built-in Python pickler. It produced XML which Python developers found easy to read, but non-Python developers found confusing.
Now that the project has grown, we have a new requirement: we need to be able to exchange XML with our colleagues who prefer Java or .NET. These non-Python developers will not be using Python - they intend to produce XML directly, hence we need to simplify the format of the XML.
So are there any alternatives to Gnosis? Our requirements:
Must work on Python 2.4 / Windows x86 32bit
Output must be XML, as simple as possible
API must resemble Pickle as closely as possible
Performance is not hugely important
Of course we could simply adapt Gnosis, however we'd prefer to use a component which already provides the functions we require (assuming that it exists).
So what you're looking for is a Python library that spits out arbitrary XML for your objects? You don't need to control the format, so you can't be bothered to actually write something that iterates over the relevant properties of your data and generates the XML using one of the existing tools?
This seems like a bad idea. Arbitrary XML serialization doesn't sound like a good way to move forward. Any format that includes all of pickle's features is going to be ugly, verbose, and very nasty to use. It will not be simple. It will not translate well into Java.
What does your data look like?
If you tell us precisely what aspects of pickle you need (and why lxml.objectify doesn't fulfill those), we will be better able to help you.
Have you considered using JSON for your serialization? It's easy to parse, natively supports python-like data structures, and has wide-reaching support. As an added bonus, it doesn't open your code to all kinds of evil exploits the way the native pickle module does.
Honestly, you need to bite the bullet and define a format, and build a serializer using the standard XML tools, if you absolutely must use XML. Consider JSON.
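To make that concrete, here's a minimal sketch of the "define a format yourself" approach using the standard library's xml.etree.ElementTree (in the standard library since Python 2.5; not checked against 2.4 and written in modern Python syntax). The tag names and the to_xml helper are made up for illustration:
import xml.etree.ElementTree as ET

# Serialize a flat dict of scalars into an explicit, fixed XML layout.
def to_xml(tag, fields):
    root = ET.Element(tag)
    for name, value in fields.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

print(to_xml("user", {"id": 42, "name": "alice"}))
# <user><id>42</id><name>alice</name></user>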
There is xml_marshaller which provides a simple way of dumping arbitrary Python objects to XML:
>>> from xml_marshaller import xml_marshaller
>>> class Foo(object): pass
>>> foo = Foo()
>>> foo.bar = 'baz'
>>> dump_str = xml_marshaller.dumps(foo)
Pretty printing the above with lxml (which is a dependency of xml_marshaller anyway):
>>> from lxml.etree import fromstring, tostring
>>> print tostring(fromstring(dump_str), pretty_print=True)
You get output like this:
<marshal>
<object id="i2" module="__main__" class="Foo">
<tuple/>
<dictionary id="i3">
<string>bar</string>
<string>baz</string>
</dictionary>
</object>
</marshal>
I did not check for Python 2.4 compatibility since this question was asked long ago, but a solution for dumping arbitrary Python objects to XML remains relevant.
This question may be seen as subjective, but I'd like to ask SO users which common structured textual data format is best supported in Python.
My initial choices are:
XML
JSON
and YAML
Which of these three is easiest to work with in Python (i.e. has the best library support/performance), or is there another format I haven't mentioned that is better supported in Python?
I cannot use a Python-only format (e.g. pickling) since interop is quite important, but the majority of the code that handles these files will be written in Python, so I'm keen on a format that has the strongest support in Python.
CSV or fixed-column text may also be viable for most use cases; however, I'd prefer the flexibility of a more scalable format.
Thank you
Note
Regarding interop: I will be generating these files initially from Ruby, using Builder; however, Ruby will not be consuming these files again.
I would go with JSON. I mean, YAML is awesome, but interop with it is not that great.
XML is just an ugly mess to look at and has too much fat.
Python has had a built-in json module since version 2.6.
JSON has great Python support and is much more compact than XML (and the API is generally more convenient if you're just trying to dump and load objects). There's no out-of-the-box support for YAML that I know of, although I haven't really checked. In the abstract, I would suggest using JSON due to the low overhead of the format and the wide range of language support, but it does depend a bit on your application - if you're working in a space that already has established applications, the formats they use might be preferable, even if they're technically deficient.
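For a sense of how little ceremony the built-in module needs, a tiny round-trip example:
import json

data = {"id": 7, "tags": ["a", "b"], "score": 0.93}
text = json.dumps(data)          # serialize to a JSON string
assert json.loads(text) == data  # built-in types round-trip cleanly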
I think it depends a lot on what you need to do with the data. If you're going to be building a complex database and doing processing and transformations on it, I suspect you'd be better off with XML. I've found the lxml module pretty useful in this regard. It has full support for standards like XPath and XSLT, and this support is implemented in native code, so you'll get good performance.
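A hedged illustration of the lxml point (assuming lxml is installed; the document and query are made up for the example):
from lxml import etree

doc = etree.fromstring("<books><book year='2001'>Python</book></books>")
titles = doc.xpath("//book[@year='2001']/text()")  # XPath evaluated in native code
print(titles)  # ['Python']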
But if you're doing something simpler, you'd likely be better off with a simpler format like YAML or JSON. I've heard tell of "JSON transforms", but I don't know how mature the technology is or how developed Python's access to it is.
It's pretty much all the same, out of those three. Use whichever is easier to inter-operate with.