Can you help identify the bottleneck in this code? I am solving problem #7 of Project Euler, and am failing to understand why this solution takes so long (30 s). I know there are better solutions; I just want to understand why this one, specifically, is so bad.
import cProfile

def primes(n):
    primes = set([2])
    count = 1
    i = 1
    while count < n:
        i += 2
        if not any([i % num == 0 for num in primes]):
            primes.add(i)
            count += 1
    print i

cProfile.run("primes(10001)")  # slow! 30s
The profile is below:
62374 function calls in 34.605 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 34.605 34.605 <string>:1(<module>)
1 34.273 34.273 34.605 34.605 problem_7.py:8(primes)
52371 0.328 0.000 0.328 0.000 {any}
10000 0.004 0.000 0.004 0.000 {method 'add' of 'set' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
I implemented a couple of tweaks to help out. See my comments and results below. In all cases I renamed the accumulator from primes (which shadows the function name) to ps. I didn't test whether this improved performance; I just hate shadowing names :o)
# Your original code
def prime_orig(n):
    ps = set([2])
    count = 1
    i = 1
    while count < n:
        i += 2
        if not any([i % num == 0 for num in ps]):
            ps.add(i)
            count += 1

# replace the set accumulator with a list, per @Sheshnath's comment
def prime_list(n):
    ps = [2]
    count = 1
    i = 1
    while count < n:
        i += 2
        if not any([i % num == 0 for num in ps]):
            ps.append(i)
            count += 1

# replace the listcomp with a genexp
def prime_genexp(n):
    ps = set([2])
    count = 1
    i = 1
    while count < n:
        i += 2
        if not any(i % num == 0 for num in ps):
            ps.add(i)
            count += 1

# both optimizations at once
def prime_genexp_list(n):
    ps = [2]
    count = 1
    i = 1
    while count < n:
        i += 2
        if not any(i % num == 0 for num in ps):
            ps.append(i)
            count += 1
RESULTS:
cProfile.run('prime_orig(10001)')
114746 function calls in 27.283 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.701 0.701 27.283 27.283 <ipython-input-2-3059f1e23ab4>:1(prime_orig)
52371 26.330 0.001 26.330 0.001 <ipython-input-2-3059f1e23ab4>:7(<listcomp>)
1 0.000 0.000 27.283 27.283 <string>:1(<module>)
52371 0.250 0.000 0.250 0.000 {built-in method builtins.any}
1 0.000 0.000 27.283 27.283 {built-in method builtins.exec}
10000 0.003 0.000 0.003 0.000 {method 'add' of 'set' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
cProfile.run('prime_list(10001)')
114746 function calls in 24.523 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.666 0.666 24.523 24.523 <ipython-input-2-3059f1e23ab4>:11(prime_list)
52371 23.625 0.000 23.625 0.000 <ipython-input-2-3059f1e23ab4>:17(<listcomp>)
1 0.000 0.000 24.523 24.523 <string>:1(<module>)
52371 0.231 0.000 0.231 0.000 {built-in method builtins.any}
1 0.000 0.000 24.523 24.523 {built-in method builtins.exec}
10000 0.001 0.000 0.001 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
cProfile.run('prime_genexp(10001)')
50627376 function calls in 10.577 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.040 0.040 10.577 10.577 <ipython-input-2-3059f1e23ab4>:21(prime_genexp)
50565001 7.060 0.000 7.060 0.000 <ipython-input-2-3059f1e23ab4>:27(<genexpr>)
1 0.000 0.000 10.577 10.577 <string>:1(<module>)
52371 3.475 0.000 10.530 0.000 {built-in method builtins.any}
1 0.000 0.000 10.577 10.577 {built-in method builtins.exec}
10000 0.002 0.000 0.002 0.000 {method 'add' of 'set' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
cProfile.run('prime_genexp_list(10001)')
50400891 function calls in 9.781 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.040 0.040 9.781 9.781 <ipython-input-2-3059f1e23ab4>:31(prime_genexp_list)
50338516 6.272 0.000 6.272 0.000 <ipython-input-2-3059f1e23ab4>:37(<genexpr>)
1 0.000 0.000 9.781 9.781 <string>:1(<module>)
52371 3.468 0.000 9.735 0.000 {built-in method builtins.any}
1 0.000 0.000 9.781 9.781 {built-in method builtins.exec}
10000 0.001 0.000 0.001 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
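To make concrete why the genexp helps: any() can return as soon as it sees a True value, but with a list comprehension the full list of results must be built before any() even runs. A small illustrative sketch (the counting helper is mine, not part of the code above):

```python
# Count how many divisibility tests actually run in each variant.
def divisions_listcomp(i, ps):
    checks = [0]  # mutable counter, purely for illustration
    def test(num):
        checks[0] += 1
        return i % num == 0
    any([test(num) for num in ps])   # the whole list is built first
    return checks[0]

def divisions_genexp(i, ps):
    checks = [0]
    def test(num):
        checks[0] += 1
        return i % num == 0
    any(test(num) for num in ps)     # stops at the first divisor found
    return checks[0]

ps = [2, 3, 5, 7, 11]
print(divisions_listcomp(9, ps))  # 5: every element is tested
print(divisions_genexp(9, ps))    # 2: stops once 9 % 3 == 0
```

The huge genexpr call count in the profiles above is exactly this: each individual test becomes a cheap generator step, and most candidates bail out after a handful of them.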
Related
When I run
my_str = "res = f(content1, content2, reso)"
cProfile.runctx(my_str, globals(), locals())
I get:
3 function calls in 0.038 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.037 0.037 0.037 0.037 <string>:1(<module>)
1 0.000 0.000 0.038 0.038 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
which is nice. However, when I run:
with cProfile.Profile() as pr:
    f(content1, content2, reso)
pr.print_stats()
I get something different (and all times are equal to 0):
9 function calls in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 cProfile.py:40(print_stats)
1 0.000 0.000 0.000 0.000 cProfile.py:50(create_stats)
1 0.000 0.000 0.000 0.000 pstats.py:107(__init__)
1 0.000 0.000 0.000 0.000 pstats.py:117(init)
1 0.000 0.000 0.000 0.000 pstats.py:136(load_stats)
1 0.000 0.000 0.000 0.000 {built-in method builtins.hasattr}
1 0.000 0.000 0.000 0.000 {built-in method builtins.isinstance}
1 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
What's the difference between these two? I would expect them to print the same result. Am I missing something?
Let's say we have an integer n = 123 and want to convert it into its constituent digits d = [1, 2, 3]. Of the two possible options,
def to_digits1(n):
    return [int(c) for c in str(n)]
and
def to_digits2(n):
    d = []
    while n >= 10:  # >= so that e.g. n = 10 yields [1, 0] rather than [10]
        n, m = divmod(n, 10)
        d.insert(0, m)
    d.insert(0, n)
    return d
the former appears to be significantly faster on large inputs. For instance,
from cProfile import run
run('to_digits1(9999 ** 2048)')
run('to_digits2(9999 ** 2048)')
would yield something like:
5 function calls in 0.002 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.002 0.002 <string>:1(<module>)
1 0.001 0.001 0.002 0.002 to_digits.py:16(to_digits1)
1 0.001 0.001 0.001 0.001 to_digits.py:17(<listcomp>)
1 0.000 0.000 0.002 0.002 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
and
16387 function calls in 0.049 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.049 0.049 <string>:1(<module>)
1 0.002 0.002 0.049 0.049 to_digits.py:20(to_digits2)
8191 0.038 0.000 0.038 0.000 {built-in method builtins.divmod}
1 0.000 0.000 0.049 0.049 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
8192 0.009 0.000 0.009 0.000 {method 'insert' of 'list' objects}
respectively. Why would that be, considering that the first function converts n to a string and then converts each character back into an integer, while the second function seems more concise and direct?
EDIT: As suggested in the comments, a list insert in the second example is expensive at O(n). This can be fixed by using a deque instead of a list, like so:
from collections import deque

def to_digits3(n):
    d = deque()
    while n >= 10:
        n, m = divmod(n, 10)
        d.insert(0, m)
    d.insert(0, n)
    return list(d)
and while that minimizes the cost of the insert stage, it doesn't seem to give a substantial speed-up:
run('to_digits3(9999 ** 2048)')
16387 function calls in 0.038 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.038 0.038 <string>:1(<module>)
1 0.002 0.002 0.038 0.038 to_digits.py:30(to_digits3)
8191 0.035 0.000 0.035 0.000 {built-in method builtins.divmod}
1 0.000 0.000 0.038 0.038 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
8192 0.001 0.000 0.001 0.000 {method 'insert' of 'collections.deque' objects}
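As an aside, the idiomatic deque operation for prepending is appendleft rather than insert(0, ...). It doesn't change the conclusion, since divmod on a huge integer dominates the time, but for completeness here is a hypothetical variant:

```python
from collections import deque

def to_digits4(n):
    # appendleft is the idiomatic O(1) prepend on a deque
    d = deque()
    while n >= 10:
        n, m = divmod(n, 10)
        d.appendleft(m)
    d.appendleft(n)
    return list(d)

print(to_digits4(123))  # [1, 2, 3]
```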
Implement a function with signature find_chars(string1, string2) that takes two strings and returns a string that contains only the characters found in both string1 and string2, in the order that they are found in string1. Implement a version of order N*N and one of order N.
(Source: http://thereq.com/q/138/python-software-interview-question/characters-in-strings)
Here are my solutions:
Order N*N:
def find_chars_slow(string1, string2):
    res = []
    for char in string1:
        if char in string2:
            res.append(char)
    return ''.join(res)
So the for loop goes through N elements, and each char in string2 check is another N operations, which gives N*N.
Order N:
from collections import defaultdict

def find_char_fast(string1, string2):
    d = defaultdict(int)
    for char in string2:
        d[char] += 1
    res = []
    for char in string1:
        if char in d:
            res.append(char)
    return ''.join(res)
First store the characters of string2 in a dictionary (O(N)). Then scan string1 (O(N)), checking membership in the dict (O(1) each). This gives a total runtime of O(2N) = O(N).
Is the above correct? Is there a faster method?
Your solution is algorithmically correct (the first is O(n**2), and the second is O(n)) but you're doing some things that are going to be possible red flags to an interviewer.
The first function is basically okay. You might get bonus points for writing it like this:
''.join([c for c in string1 if c in string2])
..which does essentially the same thing.
My problem (if I'm wearing my interviewer pants) with how you've written the second function is that you use a defaultdict where you don't care at all about the count - you only care about membership. This is the classic case for when to use a set.
seen = set(string2)
''.join([c for c in string1 if c in seen])
The way I've written these functions is going to be slightly faster than what you wrote, since list comprehensions loop in native code rather than in Python bytecode. They are algorithmically the same complexity.
Your method is sound, and there is no method with time complexity less than O(N), since you obviously need to go through each character at least once.
That is not to say that there's no method that runs faster. There's no need to actually increment the numbers in the dictionary. You could, for example, use a set. You could also make further use of python's features, such as list comprehensions/generators:
def find_char_fast2(string1, string2):
    s = set(string2)
    return "".join(x for x in string1 if x in s)
The algorithms you have used are perfectly fine. There are a few improvements you can make.
Since you are converting the second string to a dictionary, I would recommend using a set instead, like this:
d = set(string2)
Apart from that you can use list comprehension, as a filter, like this
return "".join([char for char in string1 if char in d])
If the order of the characters in the output doesn't matter (and duplicates may be dropped), you can simply convert both strings to sets and take their intersection, like this:
return "".join(set(string1) & set(string2))
I am trying to profile the various solutions given here.
In my snippet, I am using a module called faker to generate fake words, so I can test on very long strings of more than 20k characters:
Snippet:
from faker import Faker
from timeit import Timer
from collections import defaultdict
import cProfile

def first(string1, string2):
    sets = set(string2)
    return ''.join(c for c in string1 if c in sets)

def second(string1, string2):
    res = []
    for char in string1:
        if char in string2:
            res.append(char)
    return ''.join(res)

def third(string1, string2):
    d = defaultdict(int)
    for char in string2:
        d[char] += 1
    res = []
    for char in string1:
        if char in d:
            res.append(char)
    return ''.join(res)

f = Faker()
string1 = ''.join(f.paragraph(nb_sentences=10000).split())
string2 = ''.join(f.paragraph(nb_sentences=10000).split())
funcs = [first, second, third]

print 'Length of String1: ', len(string1)
print 'Length of String2: ', len(string2)
print 'Time taken to execute:'
for func in funcs:
    t = Timer(lambda: func(string1, string2))
    print func.__name__, cProfile.run('t.timeit(number=100)')
Output:
Length of String1: 525133
Length of String2: 501050
Time taken to execute:
first 52513711 function calls in 18.169 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
100 0.001 0.000 18.164 0.182 s.py:39(<lambda>)
100 1.723 0.017 18.163 0.182 s.py:5(first)
52513400 9.442 0.000 9.442 0.000 s.py:7(<genexpr>)
1 0.000 0.000 0.000 0.000 timeit.py:143(setup)
1 0.000 0.000 18.169 18.169 timeit.py:178(timeit)
1 0.005 0.005 18.169 18.169 timeit.py:96(inner)
1 0.000 0.000 0.000 0.000 {gc.disable}
1 0.000 0.000 0.000 0.000 {gc.enable}
1 0.000 0.000 0.000 0.000 {gc.isenabled}
1 0.000 0.000 0.000 0.000 {globals}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
100 6.998 0.070 16.440 0.164 {method 'join' of 'str' objects}
2 0.000 0.000 0.000 0.000 {time.time}
None
second 52513611 function calls in 22.280 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 22.280 22.280 <string>:1(<module>)
100 0.121 0.001 22.275 0.223 s.py:39(<lambda>)
100 16.957 0.170 22.153 0.222 s.py:9(second)
1 0.000 0.000 0.000 0.000 timeit.py:143(setup)
1 0.000 0.000 22.280 22.280 timeit.py:178(timeit)
1 0.005 0.005 22.280 22.280 timeit.py:96(inner)
1 0.000 0.000 0.000 0.000 {gc.disable}
1 0.000 0.000 0.000 0.000 {gc.enable}
1 0.000 0.000 0.000 0.000 {gc.isenabled}
1 0.000 0.000 0.000 0.000 {globals}
52513300 4.018 0.000 4.018 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
100 1.179 0.012 1.179 0.012 {method 'join' of 'str' objects}
2 0.000 0.000 0.000 0.000 {time.time}
None
third 52513611 function calls in 28.184 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 28.184 28.184 <string>:1(<module>)
100 22.847 0.228 28.059 0.281 s.py:16(third)
100 0.120 0.001 28.179 0.282 s.py:39(<lambda>)
1 0.000 0.000 0.000 0.000 timeit.py:143(setup)
1 0.000 0.000 28.184 28.184 timeit.py:178(timeit)
1 0.005 0.005 28.184 28.184 timeit.py:96(inner)
1 0.000 0.000 0.000 0.000 {gc.disable}
1 0.000 0.000 0.000 0.000 {gc.enable}
1 0.000 0.000 0.000 0.000 {gc.isenabled}
1 0.000 0.000 0.000 0.000 {globals}
52513300 4.032 0.000 4.032 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
100 1.180 0.012 1.180 0.012 {method 'join' of 'str' objects}
2 0.000 0.000 0.000 0.000 {time.time}
None
Conclusion:
So the first function, with the comprehension, is the fastest.
But when you run on strings of around 25K characters, the second function wins:
Length of String1: 22959
Length of String2: 452919
Time taken to execute:
first 2296311 function calls in 2.216 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
100 0.000 0.000 2.216 0.022 s.py:39(<lambda>)
100 1.530 0.015 2.216 0.022 s.py:5(first)
2296000 0.402 0.000 0.402 0.000 s.py:7(<genexpr>)
1 0.000 0.000 0.000 0.000 timeit.py:143(setup)
1 0.000 0.000 2.216 2.216 timeit.py:178(timeit)
1 0.000 0.000 2.216 2.216 timeit.py:96(inner)
1 0.000 0.000 0.000 0.000 {gc.disable}
1 0.000 0.000 0.000 0.000 {gc.enable}
1 0.000 0.000 0.000 0.000 {gc.isenabled}
1 0.000 0.000 0.000 0.000 {globals}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
100 0.284 0.003 0.686 0.007 {method 'join' of 'str' objects}
2 0.000 0.000 0.000 0.000 {time.time}
None
second 2296211 function calls in 0.939 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.939 0.939 <string>:1(<module>)
100 0.003 0.000 0.939 0.009 s.py:39(<lambda>)
100 0.729 0.007 0.936 0.009 s.py:9(second)
1 0.000 0.000 0.000 0.000 timeit.py:143(setup)
1 0.000 0.000 0.939 0.939 timeit.py:178(timeit)
1 0.000 0.000 0.939 0.939 timeit.py:96(inner)
1 0.000 0.000 0.000 0.000 {gc.disable}
1 0.000 0.000 0.000 0.000 {gc.enable}
1 0.000 0.000 0.000 0.000 {gc.isenabled}
1 0.000 0.000 0.000 0.000 {globals}
2295900 0.165 0.000 0.165 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
100 0.042 0.000 0.042 0.000 {method 'join' of 'str' objects}
2 0.000 0.000 0.000 0.000 {time.time}
None
third 2296211 function calls in 8.361 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 8.361 8.361 <string>:1(<module>)
100 8.145 0.081 8.357 0.084 s.py:16(third)
100 0.004 0.000 8.361 0.084 s.py:39(<lambda>)
1 0.000 0.000 0.000 0.000 timeit.py:143(setup)
1 0.000 0.000 8.361 8.361 timeit.py:178(timeit)
1 0.000 0.000 8.361 8.361 timeit.py:96(inner)
1 0.000 0.000 0.000 0.000 {gc.disable}
1 0.000 0.000 0.000 0.000 {gc.enable}
1 0.000 0.000 0.000 0.000 {gc.isenabled}
1 0.000 0.000 0.000 0.000 {globals}
2295900 0.169 0.000 0.169 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
100 0.043 0.000 0.043 0.000 {method 'join' of 'str' objects}
2 0.000 0.000 0.000 0.000 {time.time}
None
I have this example:

for line in IN.readlines():
    line = line.rstrip('\n')
    mas = line.split('\t')
    row = (int(mas[0]), int(mas[1]), mas[2], mas[3], mas[4])
    self.inetnums.append(row)
IN.close()

If the file size is 120 MB, the script takes 10 seconds. Can I decrease this time?
Remove the readlines().
Just do:
for line in IN:
Using readlines() you create a list of all the lines in the file and then access each one, which you don't need to do. Without it, the for loop simply uses the file iterator, which yields one line at a time from the file.
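A self-contained sketch of that change (using io.StringIO and an illustrative parse function to stand in for the real file and loop):

```python
import io

def parse(stream):
    # Iterate the file object directly instead of calling readlines():
    # lines are produced lazily, one at a time, instead of all at once.
    rows = []
    for line in stream:
        mas = line.rstrip("\n").split("\t")
        rows.append((int(mas[0]), int(mas[1]), mas[2], mas[3], mas[4]))
    return rows

sample = io.StringIO("1\t2\ta\tb\tc\n3\t4\td\te\tf\n")
print(parse(sample))  # [(1, 2, 'a', 'b', 'c'), (3, 4, 'd', 'e', 'f')]
```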
You may gain some speed if you use a list comprehension:
inetnums = [[int(x) for x in line.rstrip('\n').split('\t')] for line in fin]
(Note this converts every field to int, unlike the original row, which keeps the last three fields as strings.)
Here is the profile information for the two versions:
>>> def foo2():
        fin.seek(0)
        inetnums = []
        for line in fin:
            line = line.rstrip('\n')
            mas = line.split('\t')
            row = (int(mas[0]), int(mas[1]), mas[2], mas[3])
            inetnums.append(row)

>>> def foo1():
        fin.seek(0)
        inetnums = [[int(x) for x in line.rstrip('\n').split('\t')] for line in fin]
>>> cProfile.run("foo1()")
444 function calls in 0.004 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.003 0.003 0.004 0.004 <pyshell#362>:1(foo1)
1 0.000 0.000 0.004 0.004 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
220 0.000 0.000 0.000 0.000 {method 'rstrip' of 'str' objects}
1 0.000 0.000 0.000 0.000 {method 'seek' of 'file' objects}
220 0.000 0.000 0.000 0.000 {method 'split' of 'str' objects}
>>> cProfile.run("foo2()")
664 function calls in 0.006 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.005 0.005 0.006 0.006 <pyshell#360>:1(foo2)
1 0.000 0.000 0.006 0.006 <string>:1(<module>)
220 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
220 0.001 0.000 0.001 0.000 {method 'rstrip' of 'str' objects}
1 0.000 0.000 0.000 0.000 {method 'seek' of 'file' objects}
220 0.001 0.000 0.001 0.000 {method 'split' of 'str' objects}
>>>
I want to imitate a normal python list, except whenever elements are added or removed via slicing, I want to 'save' the list. Is this possible? This was my attempt but it will never print 'saving'.
class InterceptedList(list):
    def addSave(func):
        def newfunc(self, *args):
            func(self, *args)
            print 'saving'
        return newfunc

    __setslice__ = addSave(list.__setslice__)
    __delslice__ = addSave(list.__delslice__)
>>> l = InterceptedList()
>>> l.extend([1,2,3,4])
>>> l
[1, 2, 3, 4]
>>> l[3:] = [5] # note: 'saving' is not printed
>>> l
[1, 2, 3, 5]
This does work for other methods like append and extend, just not for the slice operations.
EDIT: The real problem is that I'm using Jython, not Python, and forgot to mention it. The comments on the question are correct: this code does work fine in Python (2.6). However, neither the code nor the answers work in Jython.
From the Python 3 docs:
__getslice__(), __setslice__() and __delslice__() were killed.
The syntax a[i:j] now translates to a.__getitem__(slice(i, j))
(or __setitem__() or __delitem__(), when used as an assignment
or deletion target, respectively).
That's enough speculation. Let's start using facts instead, shall we?
As far as I can tell, the bottom line is that you must override both sets of methods.
If you want to implement undo/redo you probably should try using undo stack and set of actions that can do()/undo() themselves.
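For illustration, one possible shape for that undo stack (the SetSlice class and its names are mine, not an established API):

```python
# Each action knows how to do() and undo() itself; the stack
# replays them in reverse order to undo.
class SetSlice:
    def __init__(self, lst, key, value):
        self.lst, self.key, self.value = lst, key, value

    def do(self):
        self.old = self.lst[self.key]   # remember what we overwrite
        self.lst[self.key] = self.value

    def undo(self):
        self.lst[self.key] = self.old

undo_stack = []
data = [1, 2, 3, 4]
action = SetSlice(data, slice(3, None), [5])
action.do()
undo_stack.append(action)
print(data)              # [1, 2, 3, 5]
undo_stack.pop().undo()
print(data)              # [1, 2, 3, 4]
```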
Code
import profile
import sys

print sys.version

class InterceptedList(list):
    def addSave(func):
        def newfunc(self, *args):
            func(self, *args)
            print 'saving'
        return newfunc

    __setslice__ = addSave(list.__setslice__)
    __delslice__ = addSave(list.__delslice__)

class InterceptedList2(list):
    def __setitem__(self, key, value):
        print 'saving'
        list.__setitem__(self, key, value)

    def __delitem__(self, key):
        print 'saving'
        list.__delitem__(self, key)

print("------------Testing setslice------------------")
l = InterceptedList()
l.extend([1,2,3,4])
profile.run("l[3:] = [5]")
profile.run("l[2:6] = [12, 4]")
profile.run("l[-1:] = [42]")
profile.run("l[::2] = [6,6]")

print("-----------Testing setitem--------------------")
l2 = InterceptedList2()
l2.extend([1,2,3,4])
profile.run("l2[3:] = [5]")
profile.run("l2[2:6] = [12,4]")
profile.run("l2[-1:] = [42]")
profile.run("l2[::2] = [6,6]")
Jython 2.5
C:\Users\wuu-local.pyza\Desktop>c:\jython2.5.0\jython.bat intercept.py
2.5.0 (Release_2_5_0:6476, Jun 16 2009, 13:33:26)
[Java HotSpot(TM) Client VM (Sun Microsystems Inc.)]
------------Testing setslice------------------
saving
3 function calls in 0.035 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:0(<module>)
1 0.000 0.000 0.000 0.000 intercept.py:9(newfunc)
1 0.034 0.034 0.035 0.035 profile:0(l[3:] = [5])
0 0.000 0.000 profile:0(profiler)
saving
3 function calls in 0.005 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.001 0.001 <string>:0(<module>)
1 0.001 0.001 0.001 0.001 intercept.py:9(newfunc)
1 0.004 0.004 0.005 0.005 profile:0(l[2:6] = [12, 4])
0 0.000 0.000 profile:0(profiler)
saving
3 function calls in 0.012 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:0(<module>)
1 0.000 0.000 0.000 0.000 intercept.py:9(newfunc)
1 0.012 0.012 0.012 0.012 profile:0(l[-1:] = [42])
0 0.000 0.000 profile:0(profiler)
2 function calls in 0.004 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:0(<module>)
1 0.004 0.004 0.004 0.004 profile:0(l[::2] = [6,6])
0 0.000 0.000 profile:0(profiler)
-----------Testing setitem--------------------
2 function calls in 0.004 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:0(<module>)
1 0.004 0.004 0.004 0.004 profile:0(l2[3:] = [5])
0 0.000 0.000 profile:0(profiler)
2 function calls in 0.006 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:0(<module>)
1 0.006 0.006 0.006 0.006 profile:0(l2[2:6] = [12,4])
0 0.000 0.000 profile:0(profiler)
2 function calls in 0.004 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 <string>:0(<module>)
1 0.004 0.004 0.004 0.004 profile:0(l2[-1:] = [42])
0 0.000 0.000 profile:0(profiler)
saving
3 function calls in 0.007 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.002 0.002 <string>:0(<module>)
1 0.001 0.001 0.001 0.001 intercept.py:20(__setitem__)
1 0.005 0.005 0.007 0.007 profile:0(l2[::2] = [6,6])
0 0.000 0.000 profile:0(profiler)
Python 2.6.2
C:\Users\wuu-local.pyza\Desktop>python intercept.py
2.6 (r26:66721, Oct 2 2008, 11:35:03) [MSC v.1500 32 bit (Intel)]
------------Testing setslice------------------
saving
4 function calls in 0.002 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.002 0.002 0.002 0.002 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 intercept.py:9(newfunc)
1 0.000 0.000 0.002 0.002 profile:0(l[3:] = [5])
0 0.000 0.000 profile:0(profiler)
saving
4 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 intercept.py:9(newfunc)
1 0.000 0.000 0.000 0.000 profile:0(l[2:6] = [12, 4])
0 0.000 0.000 profile:0(profiler)
saving
4 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 intercept.py:9(newfunc)
1 0.000 0.000 0.000 0.000 profile:0(l[-1:] = [42])
0 0.000 0.000 profile:0(profiler)
3 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 profile:0(l[::2] = [6,6])
0 0.000 0.000 profile:0(profiler)
-----------Testing setitem--------------------
3 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 profile:0(l2[3:] = [5])
0 0.000 0.000 profile:0(profiler)
3 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 profile:0(l2[2:6] = [12,4])
0 0.000 0.000 profile:0(profiler)
3 function calls in 0.000 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.000 0.000 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 profile:0(l2[-1:] = [42])
0 0.000 0.000 profile:0(profiler)
saving
4 function calls in 0.003 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 :0(setprofile)
1 0.000 0.000 0.003 0.003 <string>:1(<module>)
1 0.002 0.002 0.002 0.002 intercept.py:20(__setitem__)
1 0.000 0.000 0.003 0.003 profile:0(l2[::2] = [6,6])
0 0.000 0.000 profile:0(profiler)
__setslice__ and __delslice__ are deprecated; if you want to do the interception, you need to work with the Python slice objects passed to __setitem__ and __delitem__. If you want to intercept both slices and ordinary accesses, this code works perfectly in Python 2.6.2:
class InterceptedList(list):
    def addSave(func):
        def newfunc(self, *args):
            func(self, *args)
            print 'saving'
        return newfunc

    def __setitem__(self, key, value):
        print 'saving'
        list.__setitem__(self, key, value)

    def __delitem__(self, key):
        print 'saving'
        list.__delitem__(self, key)
The circumstances where __getslice__ and __setslice__ are called are pretty narrow. Specifically, they are only used for a regular slice, where the start and end elements are each mentioned exactly once. For any other slice syntax, or no slice at all, __getitem__ or __setitem__ is called.
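In Python 3, where the __*slice__ methods are gone entirely, this is easy to verify: every slice form reaches __setitem__ as a slice object. A quick sketch:

```python
class LoggingList(list):
    # In Python 3, every slice assignment goes through __setitem__,
    # with a slice object as the key.
    def __setitem__(self, key, value):
        self.last_key = key
        list.__setitem__(self, key, value)

l = LoggingList([1, 2, 3, 4])
l[3:] = [5]
print(l.last_key)   # slice(3, None, None)
l[::2] = [6, 6]
print(l.last_key)   # slice(None, None, 2)
print(l)            # [6, 2, 6, 5]
```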