TL;DR: I am making a Python wrapper around something for LabVIEW to use, and I want to pass a dict (or even kwargs) [i.e. key/value pairs] to a Python script so I can have more dynamic function arguments.
LabVIEW 2018 implemented a Python Node which allows LabVIEW to interact with python scripts by calling, passing, and getting returned variables.
The issue is it doesn't appear to have native support for the dict type:
Python Node Details: Supported Data Types
The Python Node supports a large number of data types. You can use
this node to call the following data types:
Numerics
Arrays, including multi-dimensional arrays
Strings
Clusters
Calling Conventions
This node converts integers and strings to the corresponding data
types in Python, converts arrays to lists, and converts clusters to
tuples.
Of course, Python is built around dictionaries, but it appears LabVIEW does not support any way to pass a dictionary object.
Does anyone know of a way I can pass a cluster of named elements (or any other dictionary type) to a python script as a dict object?
There is no direct way to do it.
The simplest way on both sides would be to use JSON strings.
From LabVIEW to Python
LabVIEW Clusters can be flattened to JSON (Strings > Flatten/unflatten):
The resulting string can be converted to a dict in just one line of Python (plus an import):
>>> import json
>>> myDict=json.loads('{"MyString":"FooBar","MySubCluster":{"MyInt":42,"MyFloat":3.1410000000000000142},"myIntArray":[1,2,3]}')
>>> myDict
{u'MyString': u'FooBar', u'MySubCluster': {u'MyInt': 42, u'MyFloat': 3.141}, u'myIntArray': [1, 2, 3]}
>>> myDict['MySubCluster']['MyFloat']
3.141
From Python to LabVIEW
The Python side is easy again:
>>> MyJson = json.dumps(myDict)
In LabVIEW, unflatten JSON from string, and wire a cluster of the expected structure with default values:
This of course requires that the structure of the dict is fixed.
If it is not, you can still access single elements by giving the path to them as an array:
Limitations:
While this works like a charm (did you even notice that my locale uses comma as decimal sign?), not all datatypes are supported. For example, JSON itself does not have a time datatype, nor a dedicated path datatype, and so, the JSON VIs refuse to handle them. Use a numerical or string datatype, and convert it within LabVIEW.
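For example, on the Python side, a timestamp sent over as a string (the ISO-8601 format here is my assumption, not something the JSON VIs mandate) can be converted back with the standard library:

import json
from datetime import datetime

# Assuming the timestamp arrives inside the JSON as an ISO-8601 string
payload = json.loads('{"Timestamp": "2018-07-01T12:34:56"}')
when = datetime.strptime(payload["Timestamp"], "%Y-%m-%dT%H:%M:%S")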
Excursus: A dict-ish datatype in LabVIEW
If you ever need a dynamic datatype in LabVIEW, have a look at attributes of variants.
These are pairs of keys (strings) and values (any datatype!), which can be added and read about as simply as in Python. But there is no (built-in, simple) way to use this to interchange data with Python.
Following the syntax from jsonpath_ng:
path = '$.data.objects[*].currencies[*].name'
related = '$.data.objects[4].currencies[0].name'
unrelated = '$.data.objects[4].currencies[0].value'
I am looking to compare whether two strings representing JSON paths in Python are equivalent if we ignore any indexes. For example, related would be considered equivalent to path, whereas unrelated wouldn't.
Is there a cleaner way to do this other than regex? I am already using jsonpath_ng in this module but cannot see support for this functionality.
To be clear: this operation is independent of any subsequent JSON object referencing, I just want to determine whether the paths themselves are similar.
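To make the comparison concrete, a regex-based version (exactly the kind of thing I would like to avoid) might look like this:

import re

def strip_indexes(jsonpath):
    # Replace every bracketed index ([*], [4], [0], ...) with an empty placeholder
    return re.sub(r'\[[^\]]*\]', '[]', jsonpath)

print(strip_indexes(path) == strip_indexes(related))    # True
print(strip_indexes(path) == strip_indexes(unrelated))  # False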
Say I have the text of some Python code,
"''.join(something)"
I may not know what something is, but if the text is parsed, we'd see join, which is a built-in method of str. There must be a way to know that it returns a str. Ideally, I'd like a function to just do
get_type("''.join(something)") # Returns: 'str', because join makes a 'str'
or another situation,
"['asdfom', whatever].append('ttt')"
Something like eval() can't be used to get the object type because the name whatever is unknown, but we know that append returns None, so the solution should know that the return type is None.
Here's another, more complex situation, where the type is unknown but its method makes it likely* (*within a reasonable guess) to be of a certain type:
"some_string_variable.split('\n')"
some_string_variable could be any name, and it may not even be defined in the same file, but split() is a method known to the Python str type. So if some get_type function were to try to guess the return type, it would use what it knows about split() and match it with what str.split() returns, which is a list. I'm fine with this case not having 100% accuracy, but the two situations I listed above should have working solutions.
Right now, I'm using pydoc.locate(), which works pretty well in a lot of cases, even for user-defined types, but it returns None in all of the cases above. Does anyone know of a way to do what I'm looking for?
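For context, the closest thing I can picture is hard-coding a table from method name to return type and reading the method name off the parsed expression; this is only meant to illustrate the behaviour I'm after, not a real solution:

import ast

# Purely illustrative table: method name -> guessed return type
KNOWN_RETURN_TYPES = {
    'join': 'str',       # str.join returns a str
    'append': 'None',    # list.append returns None
    'split': 'list',     # str.split returns a list
}

def get_type(source):
    # Parse the text without evaluating it, so unknown names are fine
    node = ast.parse(source, mode='eval').body
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
        return KNOWN_RETURN_TYPES.get(node.func.attr)
    return None

print(get_type("''.join(something)"))                  # 'str'
print(get_type("['asdfom', whatever].append('ttt')"))  # 'None'
print(get_type("some_string_variable.split('\\n')"))   # 'list'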
I'm trying to use the Python wrapper for CouchDB to update a database. The file is structured as a nested dictionary as follows.
doc = { ...,
        'RLSoo': {'RT_freq': 2,
                  'tweet': "They're going to play monopoly now. This makes me feel like an excellent mother. #Sandy #NYC"},
        'GiltCityNYC': {},
        ....}
I would like to put each entry of the larger dictionary, for example RLSoo, into its own document. However, I get an error message when I try the following code.
for key in doc:
    db.update(doc[key], all_or_nothing=True)
Error Message
TypeError: expected dict, got <type 'str'>
I don't understand why CouchDB won't accept the dictionary.
According to the Database.update() method implementation and its documentation, the first argument should be a list of document objects (i.e. a list of dicts). Since your doc variable is a dict, iterating over it directly yields its keys, which are strings. If I understood your case correctly, your doc contains the nested documents as values. So try just:
db.update(doc.values(), all_or_nothing=True)
If all first-level values are dicts, it should work!
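If some of the first-level values might not be dicts, a slightly defensive sketch (variable names taken from the question) would be:

# Keep only the values that are themselves documents (dicts)
docs = [value for value in doc.values() if isinstance(value, dict)]
db.update(docs, all_or_nothing=True)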
I am currently expensively parsing a file, which generates a dictionary of ~400 key/value pairs that is seldom updated. Previously I had a function which parsed the file, wrote it to a text file in dictionary syntax (i.e. dict = {'Adam': 'Room 430', 'Bob': 'Room 404'}), and copied and pasted it into another function whose sole purpose was to return that parsed dictionary.
Hence, in every file where I would use that dictionary, I would import that function and assign it to a variable, which is now that dictionary. I'm wondering if there's a more elegant way to do this that does not involve explicitly copying and pasting code around. Using a database seems kind of unnecessary, and the text file gave me the benefit of seeing whether the parsing was done correctly before adding it to the function. But I'm open to suggestions.
Why not dump it to a JSON file, and then load it from there where you need it?
import json

with open('my_dict.json', 'w') as f:
    json.dump(my_dict, f)

# elsewhere...
with open('my_dict.json') as f:
    my_dict = json.load(f)
Loading from JSON is fairly efficient.
Another option would be to use pickle, but unlike JSON, the files it generates aren't human-readable so you lose out on the visual verification you liked from your old method.
Why mess with all these serialization methods? It's already written to a file as a Python dict (although with the unfortunate name 'dict'). Change your program to write out the data with a better variable name - maybe 'data', or 'catalog', and save the file as a Python file, say data.py. Then you can just import the data directly at runtime without any clumsy copy/pasting or JSON/shelve/etc. parsing:
from data import catalog
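The generating side can be as simple as writing the repr of the dictionary out as an importable module (parsed_dict stands for whatever your parser produced; the file and variable names are the ones suggested above):

# In the parsing program: write the data out as an importable Python module
with open('data.py', 'w') as f:
    f.write('catalog = %r\n' % parsed_dict)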
JSON is probably the right way to go in many cases; but there might be an alternative. It looks like your keys and your values are always strings, is that right? You might consider using dbm/anydbm. These are "databases" but they act almost exactly like dictionaries. They're great for cheap data persistence.
>>> import anydbm
>>> dict_of_strings = anydbm.open('data', 'c')
>>> dict_of_strings['foo'] = 'bar'
>>> dict_of_strings.close()
>>> dict_of_strings = anydbm.open('data')
>>> dict_of_strings['foo']
'bar'
If the keys are all strings, you can use the shelve module
A shelf is a persistent, dictionary-like object. The difference with
“dbm” databases is that the values (not the keys!) in a shelf can be
essentially arbitrary Python objects — anything that the pickle module
can handle. This includes most class instances, recursive data types,
and objects containing lots of shared sub-objects. The keys are
ordinary strings.
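A minimal sketch of using shelve this way (the filename is just an example):

import shelve

# Persist the parsed dictionary (keys must be strings)
db = shelve.open('room_assignments')
db['Adam'] = 'Room 430'
db['Bob'] = 'Room 404'
db.close()

# Later, in another script
db = shelve.open('room_assignments')
print(db['Adam'])  # 'Room 430'
db.close()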
JSON would be a good choice if you need to use the data from other languages.
If storage efficiency matters, use pickle or cPickle (for an execution performance gain). As Amber pointed out, you can also dump/load via JSON. It will be human-readable, but takes more disk space.
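As a sketch (the filename is an assumption):

import pickle

# Write the dictionary out in binary form
with open('my_dict.pkl', 'wb') as f:
    pickle.dump(my_dict, f)

# ...and read it back wherever it is needed
with open('my_dict.pkl', 'rb') as f:
    my_dict = pickle.load(f)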
I suggest you consider using the shelve module since your data-structure is a mapping.
That was my answer to a similar question titled If I want to build a custom database, how could I? There's also a bit of sample code in another answer of mine promoting its use, for the question How to get an object database?
ActiveState has a highly rated PersistentDict recipe which supports csv, json, and pickle output file formats. It's pretty fast, since all three of those formats are implemented in C (although the recipe itself is pure Python), so the fact that it reads the whole file into memory when it's opened might be acceptable.
JSON (or YAML, or whatever) serialisation is probably better, but if you're already writing the dictionary to a text file in Python syntax, complete with a variable name binding, you could just write that to a .py file instead. Then that Python file would be importable and usable as is. There's no need for the "function which returns a dictionary" approach, since you can directly use it as a global in that file. e.g.
# generated.py
please_dont_use_dict_as_a_variable_name = {'Adam': 'Room 430', 'Bob': 'Room 404'}
rather than:
# manually_copied.py
def get_dict():
return {'Adam': 'Room 430', 'Bob': 'Room 404'}
The only difference is that manually_copied.get_dict gives you a fresh copy of the dictionary every time, whereas generated.please_dont_use_dict_as_a_variable_name[1] is a single shared object. This may matter if you're modifying the dictionary in your program after retrieving it, but you can always use copy.copy or copy.deepcopy to create a new copy if you need to modify one independently of the others.
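For example, a quick sketch of taking an independent copy of the imported data (the modified value is arbitrary):

import copy
from data import catalog

# Work on a private copy so the shared module-level dict stays untouched
local_catalog = copy.deepcopy(catalog)
local_catalog['Adam'] = 'Room 101'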
[1] dict, list, str, int, map, etc. are generally viewed as bad variable names. The reason is that these are already defined as built-ins and are used very commonly. So if you give something a name like that, at the least it's going to cause cognitive dissonance for people reading your code (including you, after you've been away for a while), as they have to keep in mind that "dict doesn't mean what it normally does here". It's also quite likely that at some point you'll get an infuriating-to-solve bug reporting that dict objects aren't callable (or something), because some piece of code is trying to use the type dict, but is getting the dictionary object you bound to the name dict instead.
On the JSON direction, there is also something called simplejson. The first time I used JSON in Python, the built-in json library didn't work for me / I couldn't figure it out; simplejson was... easier to use.
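It is generally used as a drop-in replacement for the standard library module, e.g.:

import simplejson as json

# Same interface as the built-in json module
print(json.dumps({'Adam': 'Room 430', 'Bob': 'Room 404'}))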