When we need to copy the full data from a dictionary containing only primitive data types (for simplicity, let's ignore the presence of types like datetime), the most obvious choice is deepcopy, but deepcopy is slower than some other, hackish ways of achieving the same thing, i.e. serialization-deserialization, for example json-dump/json-load or msgpack-pack/msgpack-unpack. The difference in efficiency can be seen here:
>>> import timeit
>>> setup = '''
... import msgpack
... import json
... from copy import deepcopy
... data = {'name':'John Doe','ranks':{'sports':13,'edu':34,'arts':45},'grade':5}
... '''
>>> print(timeit.timeit('deepcopy(data)', setup=setup))
12.0860249996
>>> print(timeit.timeit('json.loads(json.dumps(data))', setup=setup))
9.07182312012
>>> print(timeit.timeit('msgpack.unpackb(msgpack.packb(data))', setup=setup))
1.42743492126
The json and msgpack (or cPickle) methods are faster than a plain deepcopy, which makes sense, since deepcopy does much more work, copying all of an object's attributes too.
Question: Is there a more Pythonic/built-in way to achieve just a data copy of a dictionary or list, without all the overhead that deepcopy has?
It really depends on your needs. deepcopy was built with the intention of doing the (most) correct thing: it keeps shared references, it doesn't recurse into infinitely recursive structures, and so on. It can do that because it keeps a memo dictionary into which every encountered object is inserted by reference. That's what makes it quite slow for pure-data copies. However, I would almost always say that deepcopy is the most Pythonic way to copy data, even if other approaches can be faster.
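A quick illustration of that difference (a minimal sketch, not from the original post): deepcopy's memo dictionary preserves shared references inside the structure, while a serialization round-trip silently duplicates them.

```python
from copy import deepcopy
import json

inner = {'score': 1}
data = {'a': inner, 'b': inner}  # both keys share one inner dict

dc = deepcopy(data)
jc = json.loads(json.dumps(data))

# deepcopy keeps the sharing: mutating one side shows up on the other
assert dc['a'] is dc['b']
# the JSON round-trip produces two independent dicts
assert jc['a'] is not jc['b']
```

If your data has no shared references (pure tree-shaped data), this distinction does not matter and the faster methods are safe.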
If you have pure data and a limited set of types inside it, you can build your own deepcopy (modeled roughly on the implementation of deepcopy in CPython):
_dispatcher = {}

def _copy_list(l, dispatch):
    ret = l.copy()
    for idx, item in enumerate(ret):
        cp = dispatch.get(type(item))
        if cp is not None:
            ret[idx] = cp(item, dispatch)
    return ret

def _copy_dict(d, dispatch):
    ret = d.copy()
    for key, value in ret.items():
        cp = dispatch.get(type(value))
        if cp is not None:
            ret[key] = cp(value, dispatch)
    return ret

_dispatcher[list] = _copy_list
_dispatcher[dict] = _copy_dict

def deepcopy(sth):
    cp = _dispatcher.get(type(sth))
    if cp is None:
        return sth
    else:
        return cp(sth, _dispatcher)
This works correctly only for immutable non-container types plus list and dict instances. You could add more dispatchers if you need them.
# Timings done on Python 3.5.3 - Windows - on a really slow laptop :-/
import copy
import msgpack
import json
import string
data = {'name':'John Doe','ranks':{'sports':13,'edu':34,'arts':45},'grade':5}
%timeit deepcopy(data)
# 11.9 µs ± 280 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit copy.deepcopy(data)
# 64.3 µs ± 1.15 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit json.loads(json.dumps(data))
# 65.9 µs ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit msgpack.unpackb(msgpack.packb(data))
# 56.5 µs ± 2.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Let's also see how it performs when copying a big dictionary containing strings and integers:
data = {''.join([a,b,c]): 1 for a in string.ascii_letters for b in string.ascii_letters for c in string.ascii_letters}
%timeit deepcopy(data)
# 194 ms ± 5.37 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit copy.deepcopy(data)
# 1.02 s ± 46.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit json.loads(json.dumps(data))
# 398 ms ± 20.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit msgpack.unpackb(msgpack.packb(data))
# 238 ms ± 8.81 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
I think you can manually implement what you need by overriding object.__deepcopy__.
A Pythonic way to do this is to create a custom dict class that extends the built-in dict and implements your own __deepcopy__.
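As a rough sketch of that idea (the DataDict name and the data-only assumption are mine, not from the answer): a dict subclass can override __deepcopy__ to do a fast recursive copy of plain dicts and lists, sharing everything else, so copy.deepcopy still works but skips the memo machinery.

```python
import copy

def _fast_copy(value):
    # data-only copy: recurse into dicts and lists, share all other values
    if isinstance(value, dict):
        return {k: _fast_copy(v) for k, v in value.items()}
    if isinstance(value, list):
        return [_fast_copy(v) for v in value]
    return value

class DataDict(dict):
    """A dict assumed to hold only JSON-like data, so no memo is needed."""
    def __deepcopy__(self, memo):
        return DataDict({k: _fast_copy(v) for k, v in self.items()})

d = DataDict({'name': 'John Doe', 'ranks': {'sports': 13, 'edu': 34}})
d2 = copy.deepcopy(d)  # dispatches to DataDict.__deepcopy__
```

Note the trade-off: like the dispatcher version above, this silently breaks shared references and would loop forever on a self-referencing structure.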
@MSeifert The suggested answer is not accurate.
So far I have found ujson.loads(ujson.dumps(my_dict)) to be the fastest option, which looks strange (how can translating a dict to a string and then from a string to a new dict be faster than a pure copy?).
Here is an example of the methods I tried and their running times for a small dictionary (the results are of course clearer with a larger dictionary):
x = {'a':1,'b':2,'c':3,'d':4, 'e':{'a':1,'b':2}}
# this function only handles dicts of dicts, very similar to the suggested solution
def fast_copy(d):
    output = d.copy()
    for key, value in output.items():
        output[key] = fast_copy(value) if isinstance(value, dict) else value
    return output
from copy import deepcopy
import ujson
%timeit deepcopy(x)
13.5 µs ± 146 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit fast_copy(x)
2.57 µs ± 31.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit ujson.loads(ujson.dumps(x))
1.67 µs ± 14.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Is there any other C extension that might work better than ujson?
It is very strange that this is the fastest method to copy a large dict.
It's always fastest to write your own copy function specific to your data structure.
Your example
data = {
    'name': 'John Doe',
    'ranks': {
        'sports': 13,
        'edu': 34,
        'arts': 45
    },
    'grade': 5
}
is a dict consisting only of strs, ints and dicts. Hence:
def copy(obj):
    out = obj.copy()  # shallow copy
    for k, v in obj.items():
        if isinstance(v, dict):
            out[k] = v.copy()
    return out
%timeit deepcopy(data)
5.26 µs ± 88.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit json.loads(json.dumps(data))
5.11 µs ± 117 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit msgpack.unpackb(msgpack.packb(data))
2.44 µs ± 76.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit ujson.loads(ujson.dumps(data))
1.63 µs ± 25.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit copy(data)
548 ns ± 5.77 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
@MSeifert's answer did not work for me, so I implemented a somewhat different approach.
from copy import copy

def myDictDeepCopy(dictToCopy) -> dict:
    '''
    Parameters
    ----------
    dictToCopy : dict
        dict that you want to copy

    Returns
    -------
    dict
    '''
    temp = dictToCopy.copy()  # shallow copy
    dictToReturn = {}
    for key, value in temp.items():
        dictToReturn[key] = copy(value)
    return dictToReturn
I have a dataframe with ~7,000,000 rows and a lot of columns.
Each row is a tweet, and I have a column text with the tweet's content.
I created a new column just for the hashtags inside text:
df['hashtags'] = df.Tweets.str.findall(r'(?:(?<=\s)|(?<=^))#.*?(?=\s|$)')
So I have a column called hashtags with each row containing a list, e.g. ['#b747', '#test'].
I would like to count the occurrences of each hashtag, but I have a huge number of rows. What is the most performant way to do it?
Here are some different approaches, along with timing, ordered by speed (fastest first):
# setup
import numpy as np
import pandas as pd
from collections import Counter
from functools import reduce

n = 10_000
df = pd.DataFrame({
    'hashtags': np.random.randint(0, int(np.sqrt(n)), (n, 10)).astype(str).tolist(),
})
# 1. using itertools.chain to build an iterator on the elements of the lists
from itertools import chain
%timeit Counter(chain(*df.hashtags))
# 7.35 ms ± 58.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 2. as per @Psidom's comment
%timeit df.hashtags.explode().value_counts()
# 8.06 ms ± 19.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 3. using Counter constructor, but specifying an iterator, not a list
%timeit Counter(h for hl in df.hashtags for h in hl)
# 10.6 ms ± 13.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 4. iterating explicitly and using Counter().update()
def count5(s):
    c = Counter()
    for hl in s:
        c.update(hl)
    return c
%timeit count5(df.hashtags)
# 12.4 ms ± 66.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 5. using functools.reduce on Counter().update()
%timeit reduce(lambda x,y: x.update(y) or x, df.hashtags, Counter())
# 13.7 ms ± 10.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 6. as per @EzerK's answer
%timeit Counter(sum(df['hashtags'].values, []))
# 2.58 s ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Conclusion: the fastest is #1 (using Counter(chain(*df.hashtags))), but the more intuitive and natural #2 (from @Psidom's comment) is almost as fast, and I would probably go with that. #6 (@EzerK's approach) is very slow for a large df because we build a new (long) list before passing it as an argument to Counter().
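Why #6 is so slow can be seen without timings (a small sketch, not part of the original answer): sum() concatenates with +, which builds a brand-new list at every step, so the total work grows quadratically with the output length, whereas chain simply iterates once.

```python
from collections import Counter
from itertools import chain

rows = [['#b747', '#test'], ['#b747'], ['#test', '#b747']]

flat_sum = sum(rows, [])                      # O(n^2): a new list per addition
flat_chain = list(chain.from_iterable(rows))  # O(n): lazy, single pass

assert flat_sum == flat_chain
assert Counter(flat_chain) == Counter({'#b747': 3, '#test': 2})
```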
You can concatenate all the lists into one big list and then use collections.Counter:
import pandas as pd
from collections import Counter
df = pd.DataFrame()
df['hashtags'] = [['#b747', '#test'], ['#b747', '#test']]
Counter(sum(df['hashtags'].values, []))
I have to group a DataFrame and do some further calculations using the keys as input parameters. Doing so, I noticed some strange performance behavior.
The time for grouping is fine, and so is the time to get the keys, but if I execute both steps together it takes 24x as long.
Am I using it wrong, or is there another way to get the unique parameter pairs with all their indices?
Here is a simple example:
import numpy as np
import pandas as pd
def test_1(df):
    grouped = df.groupby(['up','down'])
    return grouped

def test_2(grouped):
    keys = grouped.groups.keys()
    return keys

def test_3(df):
    keys = df.groupby(['up','down']).groups.keys()
    return keys

def test_4(df):
    grouped = df.groupby(['up','down'])
    keys = grouped.groups.keys()
    return keys
n = np.arange(1,10,1)
df = pd.DataFrame([],columns=['up','down'])
df['up'] = n
df['down'] = n[::-1]
grouped = df.groupby(['up','down'])
%timeit test_1(df)
169 µs ± 12.5 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit test_2(grouped)
1.01 µs ± 70.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit test_3(df)
4.36 ms ± 210 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit test_4(df)
4.2 ms ± 161 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Thanks in advance for comments or ideas.
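One plausible explanation, sketched below (this is my reading, not from the original thread): df.groupby() itself is lazy and cheap, and the key-to-indices mapping is only materialized on the first access to .groups, after which pandas caches it on the GroupBy object. test_2 therefore times only the cached lookup, while test_3 and test_4 pay the materialization cost on every call.

```python
import numpy as np
import pandas as pd

n = np.arange(1, 10, 1)
df = pd.DataFrame({'up': n, 'down': n[::-1]})

grouped = df.groupby(['up', 'down'])  # cheap: nothing is grouped yet
first = grouped.groups                # expensive: builds the key -> indices map
second = grouped.groups               # cheap: returns the cached mapping

assert first is second                # same cached object both times
assert len(first) == len(df)          # every (up, down) pair is unique here
```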
I am learning Python by myself. I made a function for filling a list, but I have two variants, and I want to find out which one is better and why. Or if they are both awful, I want to know that too.
def foo(x):
    l = [0] * x
    for i in range(x):
        l[i] = i
    return l

def foo1(x):
    l = []
    for i in range(x):
        l.append(i)
    return l
From a performance perspective, the first version, foo, is better:
%timeit foo(1000000)
# 52.4 ms ± 1.99 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit foo1(1000000)
# 67.2 ms ± 916 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
but the Pythonic way to unpack an iterable into a list is:
list(range(x))
which is also faster:
%timeit list(range(1000000))
# 26.7 ms ± 661 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
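For completeness, a quick check (a sketch that repeats the two functions from the question) confirming that all three variants build the same list:

```python
def foo(x):
    l = [0] * x
    for i in range(x):
        l[i] = i
    return l

def foo1(x):
    l = []
    for i in range(x):
        l.append(i)
    return l

assert foo(5) == foo1(5) == list(range(5)) == [0, 1, 2, 3, 4]
```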
Is there a way to speed up the following two lines of code?
choice = np.argmax(cust_profit, axis=0)
taken = np.array([np.sum(choice == i) for i in range(n_pr)])
%timeit np.argmax(cust_profit, axis=0)
37.6 µs ± 222 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.array([np.sum(choice == i) for i in range(n_pr)])
40.2 µs ± 206 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
n_pr == 2
cust_profit.shape == (n_pr+1, 2000)
Solutions:
%timeit np.unique(choice, return_counts=True)
53.7 µs ± 190 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.histogram(choice, bins=np.arange(n_pr + 2))
70.5 µs ± 205 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit np.bincount(choice)
7.4 µs ± 17.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
These microseconds worry me, because this code sits under two layers of scipy.optimize.minimize(method='Nelder-Mead'), which itself sits in a doubly nested loop, so 40 µs adds up to 4 hours. And I am thinking of wrapping it all in a genetic search.
The first line seems pretty straightforward. Unless you can sort the data or something like that, you are stuck with the linear lookup in np.argmax. The second line can be sped up simply by using numpy instead of vanilla python to implement it:
v, counts = np.unique(choice, return_counts=True)
Alternatively:
counts, _ = np.histogram(choice, bins=np.arange(n_pr + 2))
A version of histogram optimized for integers also exists:
count = np.bincount(choice)
The latter two options are better if you want to guarantee that the bins include all possible values of choice, regardless of whether they are actually present in the array or not.
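To get that guarantee with bincount as well, its minlength argument can be used (a small sketch, assuming labels run from 0 to n_pr): it pads the result so every possible value gets a slot, even when some label never occurs in the array.

```python
import numpy as np

n_pr = 2
choice = np.array([0, 0, 2, 2, 2])  # label 1 never appears

counts = np.bincount(choice, minlength=n_pr + 1)
assert counts.tolist() == [2, 0, 3]  # one slot per label, including 1
```

Without minlength, np.bincount(choice) only sizes the result to max(choice) + 1, so trailing labels that never occur would be missing.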
That being said, you probably shouldn't worry about something that takes microseconds.