I think this should be simple but I'm not sure how to do it. I have a tuple of list variables:
A = (a,b,c)
where
a = [1,2,3,...]
b = [2,4,6,4,...]
c = [4,6,4,...]
And I want to make a tuple or list where it is the names of the variables. So,
A_names = ('a','b','c')
How could I do this? My tuple will have more variables, and they won't always be the same ones. I tried something like
A_names = tuple([str(var) for var in A])
but this did not work.
I believe this solves your problem without using a dictionary.
import inspect

def retrieve_name(var):
    # Search the caller's local variables for names bound to this exact object
    local_vars = inspect.currentframe().f_back.f_locals.items()
    return [var_name for var_name, var_val in local_vars if var_val is var]

a = [1,2,3]
b = [2,4,6,4]
c = [4,6,4]

a_list = (a, b, c)
a_names = []
for x in a_list:
    a_names.append(retrieve_name(x)[0])
print(a_names)
outputs ['a', 'b', 'c']
The problem with what you are asking is that doing A = (a, b, c) does not assign the variables "a", "b" and "c" to the tuple A. Rather, you are creating a new reference to each of the objects referred to by those names.
For example, if I did A = (a,), creating a tuple with a single element, I haven't stored the variable "a" in it. Instead, a reference is created at position 0 of the tuple object, and that reference points to the same object referred to by the name a.
>>> a = 1
>>> b = 2
>>> A = (a, b)
>>> A
(1, 2)
>>> a = 3
>>> A
(1, 2)
Notice that assigning a new value to a does not change the value in the tuple at all.
Now, you could use the locals() or globals() dictionaries and look for values that match those in A, but there's no guarantee of accuracy since you can have multiple names referring to the same value and you won't know which is which.
>>> for key, val in locals().items():
...     if val in A:
...         print(key, val)
...
('a', 1)
('b', 2)
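To make that ambiguity concrete, here is a small sketch (the extra name also_a is made up for illustration): two different names bound to equal values both match, and nothing tells you which one "belongs" in A.
a = 1
b = 2
also_a = 1          # a second name bound to an equal value
A = (a, b)
matches = [key for key, val in list(globals().items())
           if not key.startswith('__') and val in A]
print(matches)      # ['a', 'b', 'also_a'] -- 'a' and 'also_a' are indistinguishable by value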
Assuming you want dynamic/accessible names, you need to use a dictionary.
Here is an implementation with a dictionary:
my_variables = {'a': [1,2,3,...],
                'b': [2,4,6,4,...],
                'c': [4,6,4,...]}

my_variable_names = my_variables.keys()

for name in my_variable_names:
    print(my_variables[name])
Just out of academic interest:
dir() will give you a list of the names currently visible,
locals() gives you the local variables,
globals() gives the globals (guess).
Note that some unexpected variables will show up (starting and ending in __), which are already defined by Python.
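For instance, a minimal sketch of filtering those Python-defined names out of dir():
a = [1, 2, 3]
b = [2, 4, 6, 4]
# dir() lists the names visible in the current scope, including the
# __dunder__ names Python defines; filter those out to keep only yours.
my_names = [name for name in dir() if not name.startswith('__')]
print(my_names)   # e.g. ['a', 'b']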
A = {'a': [1,2,3,...],
     'b': [2,4,6,4,...],
     'c': [4,6,4,...]}

A_names = A.keys()

for name in A_names:
    print(A[name])
Then you can always add a new value to the dictionary by saying:
A.update({'d' : [3,6,3,8,...], 'e' : [1,7,2,2,...]})
Alternatively, you can change the value of an item by going:
A.update({'a' : [1,3,2,...]})
To print a specific value, you can just type:
print(A['c'])
Related
I need to create multiple dictionaries in one line. I tried the following:
a,b,c = dict(), dict(), dict()
Is there a more Pythonic way to achieve this? I tried
a = b = c = dict()
But with this, if I change a, the change is also reflected in the other dicts:
>>> a['k'] = 'val'
>>> a
{'k': 'val'}
>>> b
{'k': 'val'}
>>> c
{'k': 'val'}
I'm just posting some thoughts here:
PEP 8 is the style guide for Python code: https://www.python.org/dev/peps/pep-0008/. However, it says nothing about how to declare several variables.
Although these work:
a,b,c = dict(), dict(), dict()
a, b, c = [dict() for _ in range(3)]
I think this is the most readable:
a = dict()
b = dict()
c = dict()
Reason:
You can always expect each variable to be defined on its own row. And what if you had to assign 20 items? Would it be a,b,c,d,e,...?
Anyhow, another way of doing it would be to nest them inside one dictionary, and here too only one variable is declared:
dicts = {letter:dict() for letter in list("abc")} # {'a': {}, 'b': {}, 'c': {}}
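A quick note on the nested version: each entry is an independent dict, so updating one does not touch the others.
dicts = {letter: dict() for letter in list("abc")}
dicts['a']['k'] = 'val'
print(dicts['a'])   # {'k': 'val'}
print(dicts['b'])   # {}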
Your first method is fine. Use
a,b,c = dict(), dict(), dict()
The explanation for the second method:
Python variables are references to objects, but the actual data is
contained in the objects.
a = b = c = dict() does not create three dicts. In Python, variables don't store values; they point to objects, and the objects store the values. Here a, b and c all point to the same object, a single dict(). You can check:
print(id(a),id(b),id(c))
4321042248 4321042248 4321042248
That's why when you change one, the others change too: all three names refer to the same dict object.
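A minimal sketch contrasting the two forms makes this visible:
# three separate dicts: distinct objects, independent contents
a, b, c = dict(), dict(), dict()
print(id(a) == id(b))            # False
a['k'] = 'val'
print(b)                         # {}

# one dict bound to three names: a single shared object
a = b = c = dict()
print(id(a) == id(b) == id(c))   # True
a['k'] = 'val'
print(b)                         # {'k': 'val'}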
I'm trying to iteratively write dictionaries to file, but am having issues creating the unique filenames for each dict.
def variable_to_value(value):
    for n, v in globals().items():
        if v == value:
            return n
    else:
        return None
a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}
for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)
This prints:
a
obj
obj
How can I access the name of the original object itself instead of obj?
The problem is that obj, your iteration variable, is also in globals(). Whether you get a or obj is just luck. You can't solve the problem in general, because an object can have any number of names in globals(). You could update your code to exclude known references, but that is very fragile.
For example
a = {'a': [1,2,3]}
b = {'b': [4,5,6]}
c = {'c': [7,8,9]}

print("'obj' is also in globals")

def variable_to_value(value):
    return [n for n, v in globals().items() if v == value]

for obj in [a, b, c]:
    name = variable_to_value(obj)
    print(name)

print("you can update your code to exclude it")

def variable_to_value(value, exclude=None):
    return [n for n, v in globals().items() if v == value and n != exclude]

for obj in [a, b, c]:
    name = variable_to_value(obj, 'obj')
    print(name)

print("but you'll still see other assignments")

foo = a
bar = b
bax = c

for obj in [a, b, c]:
    name = variable_to_value(obj, 'obj')
    print(name)
When run
'obj' is also in globals
['a', 'obj']
['b', 'obj']
['c', 'obj']
you can update your code to exclude it
['a']
['b']
['c']
but you'll still see other assignments
['a', 'foo']
['b', 'bar']
['c', 'bax']
The function returns the first name it finds referencing the object in your globals(). However, at each iteration the name obj also references each of the objects in turn. So either the name a, b or c is returned, or obj, depending on which one is reached first in globals().
You can avoid returning obj by excluding that name from the search in your function - sort of hackish:
def variable_to_value(value):
    for n, v in globals().items():
        if v == value and n != 'obj':
            return n
    else:
        return None
Python doesn't actually work like this.
Objects in Python don't have innate names. It's the names that belong to an object, not the other way around: an object can have many names (or no names at all).
You're getting two copies of "obj" printed because, at the time you call variable_to_value, both the name b and the name obj refer to the same object (the dictionary {'b': [4,5,6]}). So when you search the global namespace for any value which is equal to obj (note that you should be checking using is rather than ==), it's effectively random whether you get b or obj.
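Here is a small sketch of that point (the helper names_of and the name copy_of_a are made up for illustration): an identity check with is only matches names bound to the exact same object, while == also matches any equal copy.
a = {'a': [1, 2, 3]}
copy_of_a = {'a': [1, 2, 3]}     # equal to a, but a different object

def names_of(value):
    # identity check: only names bound to this exact object match
    return [n for n, v in globals().items() if v is value]

print(names_of(a))                                      # ['a']
print([n for n, v in globals().items() if v == a])      # ['a', 'copy_of_a']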
So you want to find the name of any object available in the globals()?
Inside the for loop, the globals() dict is being mutated: obj is added to its namespace. So on your second pass you have two references to the same object (originally referenced only by the name 'a').
Danger of using globals(), I suppose.
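A small sketch of what happens: the loop variable is itself a module-level name, so it shows up in globals() as soon as the loop starts.
a = {'a': [1, 2, 3]}
print('obj' in globals())        # False -- no obj yet

for obj in [a]:
    print('obj' in globals())    # True -- the loop variable is a global name
    print(obj is a)              # True -- two names, one object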
How would I solve this issue?
d = {
    'a': 3,
    'b': (d['a'] * 3)
}
print(d)
results in an error because I try to set the value of 'b' using the name of the dictionary, and apparently Python does not like that. How would I get around this?
Also:
l = [[2], [(l[0][0]*2)]]
print(l)
has the same issue.
Given how assignment works (the RHS expression is evaluated first, and only then is the resulting object bound to the LHS), this is to be expected: you cannot reference a name that has not yet been created in the current scope.
The solutions are either to use an intermediate variable for the value you want to reuse, as explained in lambo's answer, or to first build the dict (or list or whatever) with the first key or index/value pair and then add the others, i.e.:
d = {"a", 3}
d["b"] = d["a"] * 3
Assign the values to variables first:
x = 3
d = {'a': x, 'b': x*3}
y = 2
l = [[y], [y*2]]
The data structure looks like this (unordered, as dictionaries usually are):
{'b': {'2': 0.02}, 'a': {'1': 0.01}}
When creating such a dictionary in Python, you must check whether 'b' exists before referencing d["b"]; not doing so is an error. It is fine, though, to add the key '2' to the inner dictionary stored under 'b' with d["b"]["2"] = float(0.02).
Below is the piece of code that illustrates this situation. Ignoring the two commented lines, the code splits a line of text and constructs a nested dictionary, in other words a dictionary of dictionaries.
The two commented lines work if d is a simple dictionary (no nesting); there it's not necessary to check whether 'a' and 'b' exist.
What would be the explanation?
def f():
    d = {}
    m = ["a 1 0.01", "b 2 0.02"]
    #d["a"] = 1
    #d["b"] = 2
    for i in range(2):
        (m0, m1, m2) = m[i].split()
        if m0 not in d:
            d[m0] = {}
        d[m0][m1] = float(m2)
The difference is that
d["b"] = 2
represents the call d.__setitem__("b", 2), while
print d["b"]
represents
d.__getitem__("b")
If "b" is not already a key in the dict, then the first simply adds the key, while the second raises a KeyError.
Your line,
d["b"]["2"] = float(0.02)
is evaluated from left to right, though. It is parsed the same as
(d["b"])["2"] = float(0.02)
which means d.__getitem__("b") must succeed before its result can call __setitem__. It is equivalent to
d.__getitem__("b").__setitem__("2", float(0.02))
As an aside, if Python supported true "multidimensional" dictionaries, then something like d["b"]["2"] = float(0.02) would map to something like d.__setitem__("b", "2", float(0.02)), and most uses of defaultdict would become unnecessary.
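Since defaultdict comes up: here is a sketch (assuming the same input format as the question) of how collections.defaultdict removes the existence check entirely:
from collections import defaultdict

def f():
    d = defaultdict(dict)        # a missing key gets a fresh {} automatically
    m = ["a 1 0.01", "b 2 0.02"]
    for line in m:
        m0, m1, m2 = line.split()
        d[m0][m1] = float(m2)    # no 'if m0 not in d' needed
    return dict(d)

print(f())    # {'a': {'1': 0.01}, 'b': {'2': 0.02}}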
If I understand your question right, the reason is that d['b']['2'] is composed of two operations: one does d['b'] and the other does ...['2'], where ... is the result of d['b']. There is no notion of a "nested dict" per se; you just have to access the dicts one by one from the outside in.
In an operation like d['b']['2'] = 'blah', only the last operation (the ...['2'] = 'blah') is a setting operation. The other is a read operation, just reading the value of d['b']. This operation, as you note, fails if d['b'] does not exist.
In other words, d['b']['2'] = 'blah' is the same as:
x = d['b']
x['2'] = 'blah'
You seem to be aware that the first operation will fail if d['b'] does not exist. That is also why it fails for d['b']['2'] = 'blah'.
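If you don't want a defaultdict, dict.setdefault makes the read step safe by inserting an empty inner dict when the key is missing:
d = {}
d.setdefault('b', {})['2'] = 'blah'   # inserts {} under 'b' if needed, then sets '2'
print(d)                              # {'b': {'2': 'blah'}}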
def f():
    d = {}
    m = ["a 1 0.01", "b 2 0.02"]
    for i in range(2):
        (m0, m1, m2) = m[i].split()
        if m0 not in d:
            d[m0] = {}
        d[m0][m1] = float(m2)
    print d
    print type(d)
    print type(m)
    print (d['a']['1'])
    print type(d['a']['1'])
    print (d['a'])
    print (d['b'])

f()
gives the output:
{'a': {'1': 0.01}, 'b': {'2': 0.02}}
<type 'dict'>
<type 'list'>
0.01
<type 'float'>
{'1': 0.01}
{'2': 0.02}
This is hopefully a little more expository than the example you gave, but I think the simple answer for the question you're asking is:
you're trying to re-assign an integer value, not a dictionary key
I want a for loop in Python that can modify the variables it iterates over, not just work with their values. As a trivial example, the following clearly does not do what I want, because b is still a string at the end.
a = 3
b = "4"

for x in (a, b):
    x = int(x)

print("b is %s" % type(b))
(Result is "b is <class 'str'>")
What is a good design pattern for "make changes to each variable in a long list of variables"?
Short answer: you can't do that.
a = "3"
b = "4"
for x in (a, b):
x = int(x)
Variables in Python are only tags that reference values. There is no such thing as "tags on tags". When you write x = int(x) in the above code, you only change what x points to, not the value it pointed to.
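A small sketch of that point: rebinding x only moves the x tag; the object that b names is untouched.
b = "4"
x = b              # x and b are two tags on the same string object
x = int(x)         # int() builds a new object; only the tag x moves
print(type(b))     # <class 'str'> -- b still names the original string
print(type(x))     # <class 'int'>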
What is a good design pattern for "make changes to each variable in a long list of variables"?
I'm not sure I really understand, but if you want to do things like that, maybe you should store your values not as individual variables but as values in a dictionary, or as instance variables of an object.
my_vars = {'a': "3",
           'b': "4"}

for x in my_vars:
    my_vars[x] = int(my_vars[x])

print type(my_vars['b'])
Now, if you're in a hackish mood: since your variables are globals, they are in fact stored as entries in a dictionary (accessible through the globals() function), so you could change them:
a = "3"
b = "4"
for x in ('a', 'b'):
globals()[x] = int(globals()[x])
print type(b)
But, speaking for myself, I wouldn't call that a "good design pattern"...
As mentioned in another answer, there's no way to update a variable indirectly. The best you can do is assign it explicitly with unpacking:
>>> a = 3
>>> b = "4"
>>> a, b = [int(x) for x in (a, b)]
>>> print "b is %s" % type(b)
b is <type 'int'>
If you have an actual list of variables (as opposed to a number of individual variables you want to modify), then a list comprehension will do what you want:
>>> my_values = [3, "4"]
>>> my_values = [int(value) for value in my_values]
>>> print(my_values)
[3, 4]
If you want to do more complicated processing, you can define a function and use that in the list comprehension:
>>> my_values = [3, "4"]
>>> def number_crunching(value):
... return float(value)**1.42
...
>>> my_values = [number_crunching(value) for value in my_values]
>>> print(my_values)
[4.758961394052794, 7.160200567423779]