I am trying to copy the nested list a, but do not know how to do it without using the copy.deepcopy function.
a = [[1, 2], [3, 4]]
I used:
b = a[:]
and
b = a[:][:]
But both turn out to be shallow copies.
Any hints?
My entry to simulate copy.deepcopy:
def deepcopy(obj):
    if isinstance(obj, dict):
        return {deepcopy(key): deepcopy(value) for key, value in obj.items()}
    if hasattr(obj, '__iter__'):
        return type(obj)(deepcopy(item) for item in obj)
    return obj
The strategy: iterate across each element of the passed-in object, recursively descending into elements that are also iterable and making new objects of their same type.
I make no claim whatsoever that this is comprehensive or without fault [1] (don't pass in an object that references itself!), but it should get you started.
[1] Truly! The point here is to demonstrate, not cover every possible eventuality. The source to copy.deepcopy is 50 lines long and it doesn't handle everything.
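As a quick sanity check, here is the simulation above re-stated so it runs standalone, with one extra guard that is my addition, not part of the original: strings are also iterable, so without a str check they would be passed back through their type constructor and mangled.

```python
# Re-statement of the simulated deepcopy above, plus a str guard
# (an added assumption: strings should be treated as leaves).
def deepcopy(obj):
    if isinstance(obj, dict):
        return {deepcopy(k): deepcopy(v) for k, v in obj.items()}
    if hasattr(obj, '__iter__') and not isinstance(obj, str):
        return type(obj)(deepcopy(item) for item in obj)
    return obj

a = [[1, 2], [3, 4]]
b = deepcopy(a)
a[0][0] = 99
print(a)  # [[99, 2], [3, 4]]
print(b)  # [[1, 2], [3, 4]] -- the inner lists were copied, not shared
```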
You can use a list comprehension if there's only a single level of nesting.
b = [x[:] for x in a]
This is a complete cheat - but will work for lists of "primitives" - lists, dicts, strings, numbers:
def cheat_copy(nested_content):
    return eval(repr(nested_content))
There are strong security implications to consider for this - and it will not be particularly fast. Using json.dumps and loads will be more secure.
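A sketch of that json round trip, with its assumptions stated: the data must be JSON-serializable, tuples come back as lists, and non-string dict keys come back as strings.

```python
import json

def json_copy(data):
    # Round-trip through a JSON string; every container in the
    # result is a freshly built object.
    return json.loads(json.dumps(data))

a = [[1, 2], {"k": [3, 4]}]
b = json_copy(a)
a[0][0] = 99
print(b)  # [[1, 2], {'k': [3, 4]}] -- unaffected by the mutation
```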
I found a way to do it using recursion.
def deep_copy(nested_content):
    if not isinstance(nested_content, list):
        return nested_content
    else:
        holder = []
        for sub_content in nested_content:
            holder.append(deep_copy(sub_content))
        return holder
For the recursive version, you have to keep track of a secondary list and return each time.
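For illustration, here is the same recursion condensed into a comprehension (a restatement, not a different algorithm), plus a check that the nested lists really are independent:

```python
def deep_copy(nested_content):
    # Non-list values are leaves; lists are rebuilt element by element.
    if not isinstance(nested_content, list):
        return nested_content
    return [deep_copy(sub) for sub in nested_content]

a = [[1, 2], [3, [4, 5]]]
b = deep_copy(a)
a[1][1][0] = 99
print(b)  # [[1, 2], [3, [4, 5]]] -- deep levels were copied too
```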
Related
This problem has no trivial solution because it needs to be solved using only pure functions.
Using only pure functions from Python's functional programming page (https://docs.python.org/3/howto/functional.html#), how can one create a list with a value in it? If we'd like to create a list with a value in it, we'd (in code) just do
x = [1]
I do not consider [] to be a part of the functions we're looking at here, since it has no signature and is not callable like other functions.
Using only functions to do this is not so trivial. One thought I had was to create a new list using list() and then append values to it. But list.append mutates the list in place and does not return a new (or the existing) list with the item in it.
What I really want to do is to turn ["a","b","c"] into [["a"],["b"],["c"]], with above constraints.
Other proposals have been made, like creating my own (pure) function that does what I want:
def create_list(value) -> list:
    return [value]
and then just doing map(create_list, ["a","b","c"]) to get the solution.
But this is a custom-made function, not one of the functions from the Python page mentioned above (https://docs.python.org/3/howto/functional.html).
lst = [1, 2, 3]
# this will print [[1], [2], [3]]
print(list(map(lambda x: [x], lst)))
Single element:
def to_list(elem):
    return list(range(elem, elem + 1))
To convert [1,2,3] into [[1], [2], [3]] with a list comprehension (it can easily be changed to map):
return [to_list(el) for el in input_list]
And without (ugly, but works ^^)
import itertools

def make_gen(elem):
    yield elem

def to_list(elem):
    return list(make_gen(elem))

def helper(elem, l):
    return list(itertools.chain(to_list(to_list(elem)), l))

def convert(l):
    if not l:
        return []
    return helper(l[0], convert(l[1:]))

print(convert([1, 2, 3]))
To ensure non-mutability, you probably want to use tuples instead of lists (or be very disciplined with your lists).
Using a list comprehension would be a valid functional approach:
A = [1,2,3]
B = [ [i] for i in A ] # [[1], [2], [3]]
or with tuples:
A = (1,2,3)
B = tuple( (i,) for i in A ) # ((1,), (2,), (3,))
If you must use functions, then map() is probably a good solution to this:
A = [1,2,3]
B = list(map(lambda i:[i],A))
If even [i] is proscribed (but why would it be?), you can use a function to make a list directly from its arguments:

def makeList(*v): return list(v)
A = makeList(1,2,3)
B = makeList(*map(makeList,A))
# combined
makeList(*map(makeList,makeList(1,2,3)))
BTW functional programming is not about "only using functions", it is more about non-mutability of results (and avoidance of side effects). You may want to question whoever is sending you on this wild goose chase.
Using only pure functions from Python's functional programming page
(https://docs.python.org/3/howto/functional.html#), how can one create
a list with a value in it? If we'd like to create a list with number 1
in it
You might exploit generator as generator are described therein as follows
def justone():
    yield 1
lst = list(justone())
print(lst)
output
[1]
justone is a function (which can be checked using inspect.isfunction) and is pure (it does not alter anything outside itself).
In the documentation you link, there are references to Iterators and Generators, which are powerful constructs present in Python (and other languages). You can consider a function to build a list as follows:
def list_from_args(*args):
    return [*args]
This is a (superfluous) wrapper around Iterator functionality. You can leverage the Iterator pattern in Python to accomplish a lot, whether that be creating/consuming objects (e.g. lists, tuples, dictionaries), or for processing data (e.g. reading/writing to a file line-by-line, paginating an API or DB Query, etc.)
The code above does the following, for example:
>>> example = list_from_args(1, 'a', 'ham', 'eggs', 44)
>>> example
[1, 'a', 'ham', 'eggs', 44]
The reason I labeled the above function as superfluous: Oftentimes, if you need to create a list on the fly, you can use list comprehensions.
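For the wrapping task from the original question, the comprehension route would look like this:

```python
values = ["a", "b", "c"]
wrapped = [[v] for v in values]  # wrap each element in its own new list
print(wrapped)  # [['a'], ['b'], ['c']]
```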
This does it using only functions from https://docs.python.org/3/library/functional.html
import functools
import itertools
list(map(
    list,
    map(
        functools.partial(
            itertools.repeat,
            times=1,
        ),
        [1, 2, 3],
    ),
))
functools.partial creates a new function of itertools.repeat with "times" parameter set to 1. Each value in the list is then repeated once and turned into a new list using list function.
[[1], [2], [3]]
Consider the following example:
from copy import deepcopy
item = [0]
orig = [item, item]
copy = deepcopy(orig)
orig[0][0] = 1
print(f"{orig=} {copy=}")
copy[0][0] = 2
print(f"{orig=} {copy=}")
The first print outputs what I would expect because the same reference is duplicated in the list.
orig=[[1], [1]] copy=[[0], [0]]
However, the second print surprised me.
orig=[[1], [1]] copy=[[2], [2]]
I would have expected the deepcopy to end up with two independent references inside the copy list. Instead it maintains the property of a single list reference duplicated. I'm guessing that's alluded to in this part of the docs:
A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.
I see that the deepcopy function has a memo argument. Is there anything interesting that could be done with this argument to prevent the duplicated reference, such that the final output would become:
orig=[[1], [1]] copy=[[2], [0]]
If your whole point is to copy data that could come from JSON, i.e. list, dict, string, numbers, bool, then you can trivially implement your own function:
def copy_jsonlike(data):
    if isinstance(data, list):
        return [copy_jsonlike(x) for x in data]
    elif isinstance(data, dict):
        return {k: copy_jsonlike(v) for k, v in data.items()}
    else:
        return data
It has the added bonus of probably being faster than copy.deepcopy
Or, your original solution, json.loads(json.dumps(data)) isn't a bad idea either.
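Restating the function so the snippet runs standalone, a quick demonstration that the copy is fully independent:

```python
def copy_jsonlike(data):
    # Recursively rebuild lists and dicts; leave scalars as-is.
    if isinstance(data, list):
        return [copy_jsonlike(x) for x in data]
    elif isinstance(data, dict):
        return {k: copy_jsonlike(v) for k, v in data.items()}
    else:
        return data

orig = {"a": [1, {"b": 2}], "c": "text"}
dup = copy_jsonlike(orig)
orig["a"][1]["b"] = 99
print(dup)  # {'a': [1, {'b': 2}], 'c': 'text'}
```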
Huh, seems like this was easier to do than I thought, but I'm 90% sure it's evil. If someone posts a better answer or explains why this is totally awful, I'll remove it.
Implement a dict that only pretends to set a value. Then the example returns separate copies of the same reference.
class NoMemo(dict):
    def __setitem__(self, key, value):
        return value

...
copy = deepcopy(orig, memo=NoMemo())
...
Prints:
orig=[[1], [1]] copy=[[0], [0]]
orig=[[1], [1]] copy=[[2], [0]]
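Putting the pieces together into one runnable sketch. Note the hedge: this relies on a CPython implementation detail, namely that deepcopy records copies in the memo only through __setitem__, so a no-op __setitem__ means it never finds a previously made copy and builds a fresh one for every occurrence of the same object.

```python
from copy import deepcopy

class NoMemo(dict):
    # Pretend to store entries; deepcopy's memo lookups then always
    # miss, so shared references are copied independently.
    def __setitem__(self, key, value):
        return value

item = [0]
orig = [item, item]
copy = deepcopy(orig, memo=NoMemo())

orig[0][0] = 1
print(f"{orig=} {copy=}")
copy[0][0] = 2
print(f"{orig=} {copy=}")
```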
What is typically regarded as more Pythonic/better/faster to use, the reverse method or the reversed built-in function?
Both in action:
_list = list(range(4))
print(_list)
rlist = list(reversed(_list))
print(rlist)
_list.reverse()
print(_list)
foo.reverse() actually reverses the elements in the container. reversed() doesn't actually reverse anything, it merely returns an object that can be used to iterate over the container's elements in reverse order. If that's what you need, it's often faster than actually reversing the elements.
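The distinction in a nutshell:

```python
nums = [1, 2, 3]

it = reversed(nums)   # just an iterator object; no elements moved yet
print(next(it))       # 3
print(nums)           # [1, 2, 3] -- the original list is untouched

nums.reverse()        # in-place reversal; returns None
print(nums)           # [3, 2, 1]
```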
There seems to be a great difference. I really thought it was the other way round.
Why is rearranging the values in a list faster than creating a new one from an iterator?
from decorators import bench  # custom timing decorator

_list = list(range(10 ** 6))

@bench
def foo():
    list(reversed(_list))

@bench
def bar():
    _list.reverse()

foo()
bar()

print(foo.time)
print(bar.time)
0.167278051376
0.0122621059418
Depends on whether you want to reverse the list in-place (i.e. change the list) or not. No other real difference.
Often using reversed leads to nicer code.
Without knowing real stats about performance, _list.reverse() modifies the list itself, whereas reversed(_list) returns an iterator ready to traverse the list in reversed order. That's a big difference itself.
If that's not a problem, object.reverse() seems more readable to me, but maybe you have specific speed requirements. And if reverse() does not belong to 80% of software that's consuming resources, I wouldn't bother (as a general rule of thumb).
_list.reverse() does an in-place reversal and does not return a value.
reversed(_list) does not change _list, but returns a reverse iterable object.
_list[::-1] does not change _list, but returns a reversed slice.
example:
_list = [1, 2, 3]
ret1 = list(reversed(_list))
ret2 = _list[::-1]      # reverse-order slice
ret3 = _list.reverse()  # no value set in ret3
print('ret1,ret2,ret3,_list:', ret1, ret2, ret3, _list)

_list = [1, 2, 3]
for x in reversed(_list):
    print(x)
output:
ret1,ret2,ret3,_list: [3, 2, 1] [3, 2, 1] None [3, 2, 1]
3
2
1
If you only need to iterate over the list in reverse, it is better to use reversed(), since it does not mutate the list; treating data as immutable is generally preferable, especially when doing functional programming.
Expanding on @Niklas R's answer:
import timeit
print('list.reverse() - real-list', timeit.timeit('_list.reverse()', '_list = list(range(1_000))'))
print('list.reverse() - iterator', timeit.timeit('_list = range(1_000); list(_list).reverse()')) # can't really use .reverse() since you need to cast it first
print('reversed() - real-list', timeit.timeit('list(reversed(_list))', '_list = list(range(1_000))'))
print('reversed() - iterator', timeit.timeit('_list = range(1_000); list(reversed(_list))'))
print('slicing - real-list', timeit.timeit('_list[::-1]', '_list = list(range(1_000))'))
print('slicing - iterator', timeit.timeit('_list = range(1_000); _list[::-1]'))
Results:
list.reverse() - real-list 0.29828099999576807
list.reverse() - iterator 11.078685999964364 # can't really use .reverse() since you need to cast it first
reversed() - real-list 3.7131450000451878
reversed() - iterator 12.048991999938153
slicing - real-list 2.2268580000381917
slicing - iterator 0.4313809999730438
(less is better/faster)
The built-in function reversed(seq) returns a reverse iterator: an object representing a stream of data that yields successive items of that stream. Generating this reverse iterator is O(1) in time and space, and using it to iterate through a list is O(N) time / O(1) extra space, where N is the length of the list. This is the approach to go for if you simply want to iterate over the reversed list without modifying it; it gives the best performance in that case.
_list.reverse() reverses the list in-place. The operation is O(N), O(1) in time/space complexity as it will have to go through half of the elements of the list to reverse them and it doesn't store the results in a new list. This approach is good if you do NOT need to keep the original list and you have several passes/operations to do on the elements of the reversed list and if you have to save the processed reversed list.
Last but not least, using slicing [::-1] creates a new object of the list/a copy in reversed order. The operation is O(N), O(N) in space/time complexity as you have to copy all the elements of the list in the new one and this new list will also consume the same amount of space as the original list. This approach is great if you need to keep the original list and have a reversed copy of it stored in a different object for further processing.
To summarize, depending on your use case you will have to use one of those 3 approaches that have slightly different objectives and completely different performances.
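The three approaches side by side, matching the complexity claims above:

```python
original = [1, 2, 3, 4]

# 1) reversed(): O(1) to build the iterator, O(1) extra space; lazy.
lazy = [x for x in reversed(original)]   # consumed here just to show the order

# 2) slicing: O(N) time and O(N) space; the original stays intact.
rev_copy = original[::-1]

# 3) list.reverse(): O(N) time, O(1) extra space; mutates in place.
original.reverse()

print(lazy)      # [4, 3, 2, 1]
print(rev_copy)  # [4, 3, 2, 1]
print(original)  # [4, 3, 2, 1]
```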
I have 2 lists of objects in python2. They are of different types but have a common property ('name'). I control one list (my_list) and the other is sent to me (src_list).
I want to find new objects in src_list, that aren't in my_list, by comparing their 'name' property.
The pseudo code (and how I'd do this in C) is below, but I'm after a python way of doing it, probably list comprehensions and stuff but I couldn't figure it out.
new_list = []
for srco in src_list: # iterate everything in src list
found = False
for myo in my_list: # iterate everything in my list
if(srco.name.lower() == myo.name.lower()): # compare names, break if true
found = True
break
if not found: # add to new list if wasn't found
new_list.append(srco)
Use a set for fast lookups.
my_list_names = {obj.name.lower() for obj in my_list}
new_list = [obj for obj in src_list if obj.name.lower() not in my_list_names]
Also, if you want to learn to be more pythonic, don't do the found pattern. Do this:
for myo in my_list:
    if srco.name.lower() == myo.name.lower():
        break
else:  # executes if there was no break
    new_list.append(srco)
Yes, a list comprehension comes to mind, like you already said. The map builtin can be used to build the set of names from one of the lists. Build the set once, outside the comprehension, so it isn't recomputed for every element:

my_names = set(map(lambda x: x.name.lower(), my_list))
result = [obj for obj in src_list if obj.name.lower() not in my_names]
One easy way is to first generate set of names to exclude from my_list and then iterate over src_list keeping the items that can't be found from set:
exclude = {x.name.lower() for x in my_list}
new_list = [x for x in src_list if x.name.lower() not in exclude]
You could use sets to do that.
Create a set containing the my_list names, another containing the src_list names, and simply subtract the two:
diff_set = src_list_set - my_list_set
And you can then go fetch the objects whose names appear in diff_set
my_list_set = {obj.name.lower() for obj in my_list}
src_list_set = {obj.name.lower() for obj in src_list}
diff_set = src_list_set - my_list_set
new_list = [obj for obj in src_list if obj.name.lower() in diff_set]
(The solution may not be especially short, but by replacing the minus with another set operation from the official docs, it fits many other situations at minimal cost.)
I've searched around and tried a lot of stuff but I can't get this to work. I think the problem is something to do with how Python list names point to the list, rather than being the actual list, but I still can't figure it out. The situation is this (it's a list of dictionaries):
list_original = [dictionary1, dictionary2, dictionary3]
dictionary2_modified = dictionarymodifier(dictionary2)  # some function that modifies the chosen element
list_modified = [i for i in list_original]  # makes copy of original list
for i, n in enumerate(list_original):
    if i == 1:
        list_modified[1] = dictionary2_modified  # replaces element with modified version
return list_original, list_modified
And many similar things, but I always either get two of the original list or two of the new list! I add that I'm using python 2.4 and don't have a choice in that.
Many thanks for any help
Mutable vs. Immutable
You need to know the difference between mutable and immutable objects. Both dictionaries and lists in Python are mutable, which means that if two names refer to the same object and you modify it through one, the change is visible through the other.
In addition, a variable of mutable type (like list or dict) can contain immutable elements (e.g. str), and the other way around: a variable of immutable type (e.g. tuple) can contain mutable elements (such as list or dict).
Example for mutability
So, this shows the mutability using the example of list:
>>> a = [1, 2, 3, 4]
>>> b = a
>>> a
[1, 2, 3, 4]
>>> b
[1, 2, 3, 4]
>>> a[2] = 'x'
>>> a
[1, 2, 'x', 4]
>>> b
[1, 2, 'x', 4]
How to obtain a copy of list or dict
To obtain a copy of list, you simply can do this instead:
new_list = old_list[:] # the slicing at the end just takes the whole list
In case of dict this is generally sufficient:
new_dict = old_dict.copy()
Nested lists / dicts
However, although lists / dicts that are flat or contain only immutable elements can be copied the way I showed, to obtain a copy of more complex mutable data structures you need to do something more...
In such cases, the copy module with its deepcopy function may be very helpful. The documentation of the copy module says more about its purpose:
Assignment statements in Python do not copy objects, they create bindings between a target and an object. For collections that are mutable or contain mutable items, a copy is sometimes needed so one can change one copy without changing the other. This module provides generic shallow and deep copy operations (explained below).
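The difference the quoted passage describes, in a minimal example:

```python
import copy

nested = [[1, 2], [3, 4]]

shallow = nested[:]           # new outer list, but the inner lists are shared
deep = copy.deepcopy(nested)  # recursively copies the inner lists too

nested[0][0] = 99
print(shallow[0])  # [99, 2] -- the shared inner list saw the change
print(deep[0])     # [1, 2]  -- fully independent
```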
Is your dictionarymodifier actually mutating dictionary2 in place? If so, the way you build your list is irrelevant.
Simply using list_modified = list(list_original) works fine to create a shallow copy of the list, which you can then modify to your heart's content, but only if you don't modify the items in the original list (which you can't if they're immutable built-in things like numbers or strings, so beginners often mistake this for a deep copy).
If you really need to copy the list, you can use copy.deepcopy to do so.
You need to create a copy of the list.
copy = list(original)
original[0] = None  # copy[0] is unaffected