Is there a way to instantiate variables from iterated output in Python?

Say I have a list
my_list = ['a','b','c']
and I have a set of values
my_values = [1,2,3]
Is there a way to iterate through my list and create a variable from each name in my_list, set equal to the corresponding entry of my_values?
for i in range(len(my_list)):
    ## an operation that instantiates my_list[i] as the variable, e.g. a = my_values[i]
    ...
>>> print a
1
I just want to do this without copying the text of the file that holds the program to a new file and inserting the new lines as strings where they need to go. I'd like to skip the create, rename, and destroy file operations if possible, as I'm dealing with pretty large sets of stuff.

This is probably hackery that you shouldn't do, but since the globals() dict holds all the module's global variables, you can add your new names to it:
>>> my_list = ['a','b','c']
>>> my_values = [1,2,3]
>>> for k, v in zip(my_list, my_values):
...     globals()[k] = v
...
>>> a
1
>>> b
2
>>> c
3
But caveat emptor: it's best not to mix your namespace with your variable values; I don't see anything good coming of it.
I recommend using a normal dict to store your values instead of loading them into the global or local namespace.
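For example, a minimal sketch of the dict-based approach (values is just an illustrative name):
my_list = ['a', 'b', 'c']
my_values = [1, 2, 3]

# Keep the name-to-value mapping in an ordinary dict instead of
# injecting names into the module namespace.
values = dict(zip(my_list, my_values))
print(values['a'])  # 1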

Related

Python reassign value instead of variable

Is it possible to reassign the value referenced to by a variable, rather than the variable itself?
a = {"example": "foo"}
b = a
When I reassign a, it is reassigning the variable a to reference a new value. Therefore, b does not point to the new value.
a = {"example": "bar"}
print(b["example"]) # -> "foo"
How do I instead reassign the value referenced by a? Something like:
*a = {"example": "bar"}
print(b["example"]) # -> "bar"
I can understand if this isn't possible, as Python would need a double pointer under the hood.
EDIT: Most importantly, I need this for reassigning an object value, similar to JavaScript's Object.assign function. I assume Python will have double pointers for objects. I can wrap other values in an object if necessary.
Python variables simply do not operate this way, and simple assignment won't do what you want. Instead, you can clear the existing dict and update it in-place.
>>> a = dict(example="foo")
>>> b = a
>>> a.clear()
>>> a
{}
>>> a.update({'example': 'bar'})
>>> b
{'example': 'bar'}
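If you need this in more than one place, the clear-and-update pattern can be wrapped in a small helper (a sketch; reassign is a made-up name, not a built-in):
def reassign(d, new):
    # Replace the contents of d in place, so every existing
    # reference to d observes the new mapping.
    d.clear()
    d.update(new)

a = {"example": "foo"}
b = a
reassign(a, {"example": "bar"})
print(b["example"])  # -> "bar"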
You are creating two dictionaries, so that's two different objects in memory. If you don't want that, keep one dictionary only and mutate it:
a = {"example": "foo"}
b = a
a["example"] = "bar"
print(b["example"])  # -> "bar"

python dictionary: how does appending of items work? [duplicate]

This question already has answers here:
How do I initialize a dictionary of empty lists in Python?
(7 answers)
Closed 2 years ago.
I came across this behavior that surprised me in Python 2.6 and 3.2:
>>> xs = dict.fromkeys(range(2), [])
>>> xs
{0: [], 1: []}
>>> xs[0].append(1)
>>> xs
{0: [1], 1: [1]}
However, dict comprehensions in 3.2 show a more polite demeanor:
>>> xs = {i:[] for i in range(2)}
>>> xs
{0: [], 1: []}
>>> xs[0].append(1)
>>> xs
{0: [1], 1: []}
>>>
Why does fromkeys behave like that?
Your Python 2.6 example is equivalent to the following, which may help to clarify:
>>> a = []
>>> xs = dict.fromkeys(range(2), a)
Each entry in the resulting dictionary will have a reference to the same object. The effects of mutating that object will be visible through every dict entry, as you've seen, because it's one object.
>>> xs[0] is a and xs[1] is a
True
Use a dict comprehension, or if you're stuck on Python 2.6 or older and you don't have dictionary comprehensions, you can get the dict comprehension behavior by using dict() with a generator expression:
xs = dict((i, []) for i in range(2))
In the first version, you use the same empty list object as the value for both keys, so if you change one, you change the other, too.
Look at this:
>>> empty = []
>>> d = dict.fromkeys(range(2), empty)
>>> d
{0: [], 1: []}
>>> empty.append(1) # same as d[0].append(1) because d[0] references empty!
>>> d
{0: [1], 1: [1]}
In the second version, a new empty list object is created in every iteration of the dict comprehension, so both are independent from each other.
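You can verify the difference with an identity check:
xs = {i: [] for i in range(2)}
print(xs[0] is xs[1])  # False -- each key got its own list

ys = dict.fromkeys(range(2), [])
print(ys[0] is ys[1])  # True -- both keys share one list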
As to "why" fromkeys() works like that - well, it would be surprising if it didn't work like that. fromkeys(iterable, value) constructs a new dict with keys from iterable that all have the value value. If that value is a mutable object, and you change that object, what else could you reasonably expect to happen?
To answer the actual question being asked: fromkeys behaves like that because there is no other reasonable choice. It is not reasonable (or even possible) to have fromkeys decide whether or not your argument is mutable and make new copies every time. In some cases it doesn't make sense, and in others it's just impossible.
The second argument you pass in is therefore just a reference, and is copied as such. An assignment of [] in Python means "a single reference to a new list", not "make a new list every time I access this variable". The alternative would be to pass in a function that generates new instances, which is the functionality that dict comprehensions supply for you.
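To illustrate that alternative, here is a sketch of a factory-taking helper (fromkeys_factory is a made-up name, not a stdlib function):
def fromkeys_factory(keys, factory):
    # Call the factory once per key so each key gets a fresh object.
    return {k: factory() for k in keys}

xs = fromkeys_factory(range(2), list)
xs[0].append(1)
print(xs)  # {0: [1], 1: []}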
Here are some options for creating multiple actual copies of a mutable container:
As you mention in the question, dict comprehensions allow you to evaluate an arbitrary expression for each element:
d = {k: [] for k in range(2)}
The important thing here is that this is equivalent to executing the assignment d[k] = [] inside a for loop: each iteration creates a new list and binds it as the value.
Use the form of the dict constructor suggested by @Andrew Clark:
d = dict((k, []) for k in range(2))
This creates a generator expression that builds a fresh list for each key-value pair as dict() consumes it.
Use a collections.defaultdict instead of a regular dict:
import collections

d = collections.defaultdict(list)
This option is a little different from the others. Instead of creating the new list references up front, defaultdict calls list every time you access a key that isn't already there. You can therefore add the keys as lazily as you want, which can be very convenient sometimes:
for k in range(2):
    d[k].append(42)
Since you've set up the factory for new elements, this will actually behave exactly as you expected fromkeys to behave in the original question.
Use dict.setdefault when you access potentially new keys. This does something similar to what defaultdict does, but it has the advantage of being more controlled, in the sense that only the accesses you intend to create new keys actually create them:
d = {}
for k in range(2):
    d.setdefault(k, []).append(42)
The disadvantage is that a new empty list object gets created every time you call setdefault, even if it never ends up stored in the dict. This is not a huge problem, but it can add up if you call it frequently and/or your container is not as simple as list.
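A minimal demonstration of that cost:
d = {0: [1]}
# The [] argument is evaluated before setdefault runs, so an empty
# list is built and immediately discarded because key 0 already exists.
d.setdefault(0, []).append(2)
print(d)  # {0: [1, 2]}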

Shallow copy list of objects

What would be the best way to transfer references of objects from one list to another (i.e. move the objects from one list to the other)? For clarity, I need to remove the objects from d[1] after copying:
class MyObject:
    def __init__(self, v):
        self.value = v

d = {1: [MyObject("obj1"), MyObject("obj2")], 2: []}
# which one?
# d[2] = [obj for obj in d[1]]
# d[2] = d[1][:]
# d[2] = d[1].copy()
# clear d[1]
# d[1] = []
for i in range(len(d[1])):
    d[2].append(d[1].pop(0))
for o in d[2]:
    print(o.value)
Which approach is best depends a bit on the details of the surrounding code. Does any other variable or data structure contain a reference to either of your lists? If not, you can just rebind the references in the dict (which takes O(1) time since no copying happens):
d = {1: [MyObject("obj1"),MyObject("obj2")], 2: []}
d[2] = d[1]
d[1] = []
If other references to the existing lists might exist and you want them to keep seeing the right contents (e.g. an old alias of the d[1] list should still be the very list object stored at d[1] after the changes), then you want to do a slice assignment followed by a clear (this is O(N)):
d[2][:] = d[1] # copy data
d[1].clear()
I don't think there's a good reason to use any other approach unless you have some other logic to apply (for instance, if you only want to copy some of the values and not others).
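For example, a partial move might look like this sketch (should_move is a made-up predicate, using the MyObject class from the question):
def should_move(obj):
    # Made-up condition: move only objects whose value ends in "1".
    return obj.value.endswith("1")

d[2].extend(o for o in d[1] if should_move(o))
d[1] = [o for o in d[1] if not should_move(o)]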
Adding a dictionary obscures the use case. Based on your examples, it's not clear if you want a copy of the object or a list referencing the same objects.
Assuming the latter, consider the simplified case. It's really as simple as assigning the list to another variable:
>>> class MyObject(object):
...     def __init__(self, v):
...         self.value = v
...
>>> x = [MyObject(1), MyObject(2)]
>>> y = x
>>> x[1].value
2
Now, both x and y are a list of the same referenced objects. If I change the object in one list, it will change in the other:
>>> y[1].value = 3
>>> x[1].value
3
In your use case (a dictionary with list values), this is quite simple:
d[2] = d[1]
You can then delete the 1 key if necessary:
del d[1]
Voila!

Alternative to using deepcopy for nested dictionaries?

I have a nested dict like this, but much larger:
d = {'a': {'b': 'c'}, 'd': {'e': {'f':2}}}
I've written a function which takes a dictionary and a path of keys as input and returns the value associated with that path.
>>> p = 'd/e'
>>> get_from_path(d, p)
{'f': 2}
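For reference, a minimal get_from_path could look like this (a hypothetical sketch; the question doesn't show the actual implementation):
from functools import reduce

def get_from_path(d, path):
    # Walk the nested dict one key at a time: 'd/e' -> d['d']['e'].
    return reduce(lambda sub, key: sub[key], path.split('/'), d)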
Once I get the nested dictionary, I will need to modify it, however, d can not be modified. Do I need to use deepcopy, or is there a more efficient solution that doesn't require constantly making copies of the dictionary?
Depending on your use case, one approach to avoid making changes to an existing dictionary is to wrap it in a collections.ChainMap:
>>> import collections
>>> # here's a dictionary we want to avoid dirtying
>>> d = {i: i for i in range(10)}
>>> # wrap into a chain map and make changes there
>>> c = collections.ChainMap({}, d)
Now we can add new keys and values to c without corresponding changes happening in d
>>> c[0] = -100
>>> print(c[0], d[0])
-100 0
Whether this solution is appropriate depends on your use case ... in particular the ChainMap will:
not behave like a regular map when it comes to some things, like deleting keys:
>>> del c[0]
>>> print(c[0])
0
still allow you to modify values in place:
>>> d = dict(a=[])
>>> collections.ChainMap({}, d)["a"].append(1)
This will alter the list inside d.
However, if you merely wish to take your nested dictionary and add some new keys and values to it, then ChainMap may be appropriate.
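Applied to the question's nested dict, that might look like this sketch:
import collections

d = {'a': {'b': 'c'}, 'd': {'e': {'f': 2}}}

# New keys land in the ChainMap's first (empty) dict, leaving the
# original nested dict untouched.
sub = collections.ChainMap({}, d['d']['e'])
sub['g'] = 3
print(sub['f'], sub['g'])  # 2 3
print(d['d']['e'])         # {'f': 2} -- unchanged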
