Getting values from an imported dictionary - Python

Let's say I have a file named tools.py. In tools.py I have a dictionary with sample key/value pairs:
TOOLS = {
    0x100: "Hammer",
    0x101: "Screw",
    0x102: "Nail",
    0x103: "Wedge",
    0x104: "Drill",
    0x105: "Socket",
}
I have another file named file.py where I import tools. I have the value 0x101 and I want to look it up in the TOOLS dictionary from tools.py so that I can create a new dictionary whose key is "Screw". In other words, I have defined:
import tools
value = 10
placeholder = 0x101
new_dict = {}
I want to add the entry "Screw": 10 by looking up 0x101 in tools.TOOLS and getting the value "Screw". How do I do this in Python?

Here it is as a direct answer:
import tools
new_dict[tools.TOOLS[placeholder]] = value

Here is another way to do it, keeping the format you had, if you want to add more entries to the dictionary later.
from tools import TOOLS
value = 10
placeholder = 0x101
new_dict = {}
new_dict.update({TOOLS[placeholder]: value})
print(new_dict)
This way, you look up the placeholder in TOOLS and use the result as the key in new_dict, with your value variable as the value.
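If the placeholder might not be present in TOOLS, a slightly more defensive variant of the same lookup (a minimal sketch using dict.get) avoids a KeyError:
import tools

value = 10
placeholder = 0x101
new_dict = {}

name = tools.TOOLS.get(placeholder)  # None if the key is missing
if name is not None:
    new_dict[name] = value
print(new_dict)  # {'Screw': 10}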


Why am I getting a KeyError when adding a key to a dictionary?

I am still trying to learn the ins and outs of Python dictionaries. When I run this:
#!/usr/bin/env python3
d = {}
d['foo']['bar'] = 1
I get KeyError: 'foo'. But in How can I add new keys to a dictionary? it says that "you create a new key/value pair on a dictionary by assigning a value to that key. If the key doesn't exist, it's added and points to that value. If it exists, the current value it points to is overwritten." So why am I getting the key error?
You have at least two options:
Create nested dictionaries in order:
d = {}
d['foo'] = {}
d['foo']['bar'] = 1
Use collections.defaultdict, passing the default factory as dict:
from collections import defaultdict
d = defaultdict(dict)
d['foo']['bar'] = 1
You need to assign d['key'] to be a dictionary first:
d['a'] = {}
d['a']['b'] = value
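A third option, for completeness (a minimal sketch, not in the answers above), is dict.setdefault, which inserts the inner dictionary only when the key is missing:
d = {}
# setdefault returns the existing inner dict, or inserts {} and returns it if 'foo' is missing
d.setdefault('foo', {})['bar'] = 1
print(d)  # {'foo': {'bar': 1}}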

Best way to manipulate variables inside a JSON config file in Python3

I want to have a JSON config file where I can reference values internally. For example, consider the JSON config file below:
{
    "hdfs-base": "/user/SOME_HDFS_USER/SOME_PROJECT",
    "incoming-path": "$hdfs-base/incoming",
    "processing-path": "$hdfs-base/processing",
    "processed-path": "$hdfs-base/processed"
}
The main idea is to reuse values already stored in the JSON object, in this case replacing '$hdfs-base' with the value of the 'hdfs-base' attribute. Do you know of something that already does this? I don't want to use the ConfigParser module because I want to use JSON.
Thanks!
Loop over your values, and substitute the keys if there's a match:
import json

js = open('input.json').read()
data = json.loads(js)

for k, v in data.items():
    for key in data.keys():
        if key in v:
            data[k] = v.replace("$" + key, data[key])
BEFORE
hdfs-base /user/SOME_HDFS_USER/SOME_PROJECT
incoming-path $hdfs-base/incoming
processing-path $hdfs-base/processing
processed-path $hdfs-base/processed
AFTER
hdfs-base /user/SOME_HDFS_USER/SOME_PROJECT
incoming-path /user/SOME_HDFS_USER/SOME_PROJECT/incoming
processing-path /user/SOME_HDFS_USER/SOME_PROJECT/processing
processed-path /user/SOME_HDFS_USER/SOME_PROJECT/processed
REPL: https://repl.it/repls/NumbMammothTelephone
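If you need this more than once, the same substitution can be wrapped in a small helper. This is a sketch, assuming every value is a string and references use the $key form shown above; the resolve_config name is made up here. Repeating the pass also lets references chain through other references (circular references would loop forever):
import json

def resolve_config(data):
    # Repeatedly substitute $key references until no value changes.
    changed = True
    while changed:
        changed = False
        for k, v in data.items():
            for key in data:
                ref = "$" + key
                if ref in v:
                    v = v.replace(ref, data[key])
                    data[k] = v
                    changed = True
    return data

with open('input.json') as f:
    config = resolve_config(json.load(f))
print(config['incoming-path'])  # /user/SOME_HDFS_USER/SOME_PROJECT/incoming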

How to sort a LARGE dictionary

I have a Python script that is working with a large (~14 GB) text file. I end up with a dictionary of keys and values, but I am getting a memory error when I try to sort the dictionary by value.
I know the dictionary is too big to load into memory and then sort, but how could I go about accomplishing this?
You can use an ordered key/value store like wiredtiger, leveldb, or bsddb. All of them support ordered keys using a custom sort function. leveldb is the easiest to use, but if you are on Python 2.7, bsddb is included in the stdlib. If you only require lexicographic sorting, you can use the raw hashopen function to open a persistent sorted dictionary:
from bsddb import hashopen
db = hashopen('dict.db')
db['020'] = 'twenty'
db['002'] = 'two'
db['value'] = 'value'
db['key'] = 'key'
print(db.keys())
This outputs
>>> ['002', '020', 'key', 'value']
Don't forget to close the db after your work:
db.close()
Mind that the hashopen configuration might not suit your needs; in that case I recommend leveldb, which has a simple API, or wiredtiger for speed.
To order by value in bsddb, you have to use the composite key pattern (key composition), which boils down to creating a database key that keeps the ordering you are looking for. In this example we pack the original dict value first (so that small values appear first) followed by the original dict key (so that the bsddb key is unique):
import struct
from bsddb import hashopen

my_dict = {'a': 500, 'abc': 100, 'foobar': 1}

# insert
db = hashopen('dict.db')
for key, value in my_dict.iteritems():
    composite_key = struct.pack('>Q', value) + key
    db[composite_key] = ''  # value is not useful in this case but required
db.close()

# read
db = hashopen('dict.db')
for key, _ in db.iteritems():  # iterate over the database
    size = struct.calcsize('>Q')
    # unpack
    value, key = key[:size], key[size:]
    value = struct.unpack('>Q', value)[0]
    print key, value
db.close()
This outputs the following:
foobar 1
abc 100
a 500
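The composite key works because struct.pack('>Q', n) produces a fixed-width, big-endian byte string, so comparing the packed keys as plain strings gives the same order as comparing the original numeric values. A quick check (Python 2, to match the answer above):
import struct

# fixed-width big-endian packing preserves numeric order under string comparison
assert struct.pack('>Q', 1) < struct.pack('>Q', 100) < struct.pack('>Q', 500)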

How to append a new value to an existing dict key without overwriting

The title describes the problem. This is what I tried, and it is not giving me the expected result. Where is the problem?
for name in file:
    if name in list:
        if dict.has_key(param):
            dict[param] = [dict[param], name]
        else:
            dict[param] = name
Expected output:
dict = {
    'param1': ['name11', 'name12', 'name13'],
    'param2': ['name21', 'name22', 'name23'],
    ...
}
You need to append these names to a list. The following is a naive example and might break.
for name in file:
    if name in list:
        if dict.has_key(param):
            dict[param].append(name)
        else:
            dict[param] = [name]
Or if you want to be a little cleaner, you can use collections.defaultdict. This is the pythonic way if you ask me.
import collections

d = collections.defaultdict(list)
for name in file:
    if name in lst:  # previously overwrote list()
        d[param].append(name)
Please do not overwrite the builtin dict() or list() function with your own variable.
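For illustration, here is a self-contained version of the same grouping pattern; the pairs list is hypothetical sample data standing in for the real file, lst, and param values from the question:
import collections

# hypothetical sample data in place of the question's file/lst/param
pairs = [('param1', 'name11'), ('param1', 'name12'), ('param2', 'name21')]

d = collections.defaultdict(list)
for param, name in pairs:
    d[param].append(name)

print(dict(d))  # {'param1': ['name11', 'name12'], 'param2': ['name21']}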
You are looking for a multimap, i.e. a map or dictionary which does not have a key -> value relation but a key -> many values relation.
See: Is there a 'multimap' implementation in Python?

Parsing files with Python

My input file is going to be something like this
key "value"
key "value"
... the above lines repeat
What I do is read the file contents, populate an object with the data, and return it. There are only a set number of keys that can be present in the file. Since I am a beginner in Python, I feel that my code to read the file is not that good.
My code is something like this:
objInstance = CustomClass()
fields = ['key1', 'key2', 'key3']
for field in fields:
    for line in f:
        if line.find(field) >= 0:
            if field == 'key1':
                objInstance.DataOne = get_value_using_re(line)
            elif field == 'key2':
                objInstance.DataTwo = get_value_using_re(line)
return objInstance
The function "get_value_using_re" is very simple, it looks for a string in between the double quotes and returns it.
I fear that I will have multiple if elif statements and I don't know if this is the right way or not.
Am I doing the right thing here?
A normal approach in Python would be something like:
for line in f:
    mo = re.match(r'^(\S+)\s+"(.*?)"\s*$', line)
    if not mo:
        continue
    key, value = mo.groups()
    setattr(objInstance, key, value)
If the key is not the right attribute name, then in the last line, in lieu of key, you might use something like translate.get(key, 'other') for some appropriate dict translate.
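For example, a hypothetical translate mapping for the attribute names used in the question (DataOne, DataTwo) might look like this; the CustomClass stub and the lines list are stand-ins so the sketch runs on its own:
import re

class CustomClass(object):
    pass  # stand-in for the asker's class

translate = {'key1': 'DataOne', 'key2': 'DataTwo'}
objInstance = CustomClass()

lines = ['key1 "first value"', 'key2 "second value"']  # stand-in for the file
for line in lines:
    mo = re.match(r'^(\S+)\s+"(.*?)"\s*$', line)
    if not mo:
        continue
    key, value = mo.groups()
    setattr(objInstance, translate.get(key, 'other'), value)

print(objInstance.DataOne)  # first value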
I'd suggest looking at the YAML parser for Python. It can conveniently read a file very similar to that and load it into a Python dictionary. With the YAML parser:
import yaml
map = yaml.load(file(filename))
Then you can access it like a normal dictionary with map[key] returning value. The yaml files would look like this:
key1: 'value'
key2: 'value'
This does require that all the keys be unique.
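If you are on a recent PyYAML, yaml.load without an explicit Loader is deprecated, so the same idea is usually written with safe_load and a regular open call (config.yaml is a placeholder filename):
import yaml

# safe_load avoids constructing arbitrary Python objects from the file
with open('config.yaml') as fh:
    data = yaml.safe_load(fh)

print(data['key1'])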
