Is there a faster, more efficient way to split the rows in a list? My current setup isn't slow as such, but it takes longer than I'd expect to split the whole list, probably because of how many iterations it needs to get through it.
I currently have the code below
import pandas as pd

found_reader = pd.read_csv(file, delimiter='\n', engine='c')
loaded_list = found_reader
loaded_email_list = []
for i in range(len(loaded_list)):
    loaded_email_list = loaded_email_list + [loaded_list[i].split(':')[0]]
I'd just like a way to do the above as quickly and efficiently as possible.
Here's how you'd do that efficiently if both loaded_list and loaded_email_list were regular lists (it may need slight adaptation for whatever it is that pandas returns; a vectorized pandas sketch follows the list below):
loaded_email_list += [x.partition(':')[0] for x in loaded_list]
Why this is better:
It iterates over the list directly, instead of using range, len, and an index variable
It uses partition, which stops looking after the first :, instead of split, which walks the whole string
It uses a list comprehension to create the new list all at once, rather than creating and concatenating a bunch of single-element lists
It uses x += y, instead of x = x + y, which could theoretically be faster if its __iadd__ is more efficient than assigning its __add__ result back to itself.
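If the data is coming through pandas anyway, a vectorized string method may be faster still than any Python-level loop. A minimal sketch, assuming the file has one record per line (file is the same variable as in the question):

import pandas as pd

# One record per line, read into a single unnamed column.
df = pd.read_csv(file, header=None, names=['entry'])

# Everything before the first ':' in each row, with no Python loop.
loaded_email_list = df['entry'].str.partition(':')[0].tolist()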
So I was wondering how to best create a list of blank lists:
[[],[],[]...]
Because of how Python works with lists in memory, this doesn't work:
[[]]*n
This does create [[],[],...] but each element is the same list:
d = [[]]*n
d[0].append(1)
#[[1],[1],...]
Something like a list comprehension works:
d = [[] for x in xrange(0,n)]
But this uses the Python VM for looping. Is there any way to use an implied loop (taking advantage of it being written in C)?
d = []
map(lambda n: d.append([]),xrange(0,10))
This is actually slower. :(
Probably the only way that is marginally faster than
d = [[] for x in xrange(n)]
is
from itertools import repeat
d = [[] for i in repeat(None, n)]
It does not have to create a new int object in every iteration and is about 15 % faster on my machine.
Edit: Using NumPy, you can avoid the Python loop using
d = numpy.empty((n, 0)).tolist()
but this is actually 2.5 times slower than the list comprehension.
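For reference, a minimal timeit sketch of how numbers like these can be gathered (Python 2 syntax to match the answer's xrange; swap in range on Python 3, and numpy is assumed to be installed):

from timeit import timeit

n = 10000
setup = 'from itertools import repeat; import numpy; n = %d' % n

for label, stmt in [
    ('xrange comprehension', '[[] for i in xrange(n)]'),
    ('repeat comprehension', '[[] for i in repeat(None, n)]'),
    ('numpy empty/tolist', 'numpy.empty((n, 0)).tolist()'),
]:
    # number=1000 repetitions keeps the run short but measurable
    print('%-22s %.4f s' % (label, timeit(stmt, setup=setup, number=1000)))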
List comprehensions actually are implemented more efficiently than explicit looping (see the dis output for example functions), and the map way has to invoke an opaque callable object on every iteration, which incurs considerable overhead.
Regardless, [[] for _dummy in xrange(n)] is the right way to do it, and none of the tiny (if they exist at all) speed differences between the various other ways should matter. Unless of course you spend most of your time doing this - but in that case, you should work on your algorithms instead. How often do you create these lists?
Here are two methods: one sweet, simple, and conceptual; the other more formal and extensible to a variety of situations after reading in a dataset.
Method 1: Conceptual
X2=[]
X1=[1,2,3]
X2.append(X1)
X3=[4,5,6]
X2.append(X3)
X2 thus holds [[1,2,3],[4,5,6]], i.e. a list of lists.
Method 2: Formal and extensible
Another, more elegant way is to build a list of lists of numbers read from a file. (The file here holds the dataset train.)
train is a dataset with, say, 50 rows and 20 columns; train[0] gives the first row of the CSV file, train[1] the second row, and so on. I want to turn the 50-row dataset into a list of lists, dropping column 0, which is my explained variable and so must be removed from the original train dataset, building it up row after row, i.e. a list of lists. Here's the code that does that.
Note that the inner loop starts at 1 because I am only interested in the explanatory variables. I also re-initialize X1=[] inside the outer loop; otherwise X2.append(X1[0:(len(train[0])-1)]) would keep appending the same ever-growing X1 - and re-initializing is also more memory efficient.
X2 = []
for j in range(0, len(train)):
    X1 = []                              # fresh row list each iteration
    for k in range(1, len(train[0])):    # skip column 0, the explained variable
        txt2 = train[j][k]
        X1.append(txt2)
    X2.append(X1[0:(len(train[0]) - 1)])  # copy of the row's explanatory values
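For comparison, and since this thread is about comprehensions anyway, the whole nested loop collapses to a one-liner (a sketch, assuming every row of train has the same length as above):

# Same result: every row of train, minus column 0 (the explained variable).
X2 = [row[1:] for row in train]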
To create a list of empty lists, use the syntax below:
x = [[] for i in range(10)]
This creates a 1-D list of empty lists. To initialize each inner list with a value, write [[number] for i in range(length)], where length sets the length of the outer list.
To create a list of lists, use the syntax below.
x = [[[0] for i in range(3)] for i in range(10)]
This initializes a 10×3 list of lists in which every element is [0].
To access or modify an element (indices must stay within the 10×3 bounds):
x[1][2] = value
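A flatter variant for the 10×3 case, in case the extra nesting level is not needed (a sketch; [0] * 3 is safe here because ints are immutable, while the comprehension still creates a distinct row object each time):

x = [[0] * 3 for _ in range(10)]  # 10 distinct rows of three zeros
x[1][2] = 42                      # only row 1 changes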
So I did some speed comparisons to get the fastest way.
List comprehensions are indeed very fast. The only way to get close is to avoid bytecode getting executed during construction of the list.
My first attempt was the following method, which would appear to be faster in principle:
l = [[]]
for _ in range(n): l.extend(map(list,l))
(produces a list of length 2**n, of course)
This construction is twice as slow as the list comprehension, according to timeit, for both short and long (a million) lists.
My second attempt was to use starmap to call the list constructor for me. There is one construction that appears to run the list constructor at top speed, but it is still slower, if only by a tiny amount:
from itertools import starmap
l = list(starmap(list,[()]*(1<<n)))
Interestingly enough, the execution time suggests that it is the final list call that makes the starmap solution slow, since its execution time is almost exactly equal to the speed of:
l = list([] for _ in range(1<<n))
My third attempt came when I realized that list(()) also produces a list, so I tried the apparently simple:
l = list(map(list, [()]*(1<<n)))
but this was slower than the starmap call.
Conclusion: for the speed maniacs:
Do use the list comprehension.
Only call functions if you have to.
Use builtins.
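A minimal sketch of how these variants can be timed against one another (exact numbers will differ by machine and Python version):

from timeit import timeit

n = 16  # list length is 1 << n == 65536
setup = 'from itertools import starmap; n = %d' % n

for label, stmt in [
    ('comprehension', '[[] for _ in range(1 << n)]'),
    ('starmap', 'list(starmap(list, [()] * (1 << n)))'),
    ('map', 'list(map(list, [()] * (1 << n)))'),
]:
    print('%-14s %.4f s' % (label, timeit(stmt, setup=setup, number=100)))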
I am trying to convert a for loop with an assignment into a list comprehension.
More precisely, I am trying to replace only one element in each sub-list, where each sub-list has three elements.
Can it be done?
for i in range(len(data)):
    data[i][0] = data[i][0].replace('+00:00','Z').replace(' ','T')
If you really, really want to convert it to a list comprehension, you could try something like this, assuming the sub-lists have three elements, as you stated in the question:
new_data = [[a.replace('+00:00','Z').replace(' ','T'), b, c] for (a, b, c) in data]
Note that this does not modify the existing list but creates a new one. However, in this case I'd just stick with a regular for loop, which conveys much better what you are actually doing. Instead of iterating over the indices, though, you could iterate over the elements directly:
for x in data:
    x[0] = x[0].replace('+00:00','Z').replace(' ','T')
I believe it could be done, but it's not the best way to do it.
First, you would create high Jones complexity for another reader of your code.
Second, you would exceed the preferred line length of 80 characters, which again makes the code harder to read.
Third, list comprehensions are meant to return a new list built from an existing one; here you are changing the original list, which is not best practice either.
List comprehensions are useful for making lists, so one is not recommended here. Still, you can try this simple solution:
print([ele[0].replace('+00:00','Z').replace(' ','T') for ele in data])
I don't recommend a list comprehension in this case, but if you really want to use one, here is an example.
It can handle rows of different lengths, if you need that.
code:
data = [["1 +00:00",""],["2 +00:00","",""],["3 +00:00"]]
print([[i[0].replace('+00:00','Z').replace(' ','T'),*i[1:]] for i in data])
result:
[['1TZ', ''], ['2TZ', '', ''], ['3TZ']]
I noticed from this answer that the code
for i in userInput:
    if i in wordsTask:
        a = i
        break
can be written as a list comprehension in the following way:
next(i for i in userInput if i in wordsTask)
I have a similar problem which is that I would like to write the following (simplified from original problem) code in terms of a list comprehension:
for i in xrange(N):
    point = Point(long_list[i], lat_list[i])
    for feature in feature_list:
        polygon = shape(feature['geometry'])
        if polygon.contains(point):
            new_list.append(feature['properties'])
            break
I expect each point to be associated with a single polygon from the feature list. Hence, once a polygon that contains the point is found, break is used to move on to the next point. Therefore, new_list will have exactly N elements.
I wrote it as a list comprehension as follows:
new_list = [feature['properties'] for i in xrange(1000) for feature in feature_list if shape(feature['geometry']).contains(Point(long_list[i],lat_list[i]))]
Of course, this doesn't take into account the break in the if statement, and therefore takes significantly longer than using nested for loops. Using the advice from the above-linked post (which I probably don't fully understand), I did
new_list2 = next(feature['properties'] for i in xrange(1000) for feature in feature_list if shape(feature['geometry']).contains(Point(long_list[i],lat_list[i])))
However, new_list2 ends up with far fewer than N elements (in my case, N=1000 and new_list2 had only 5 elements).
Question 1: Is it even worth doing this as a list comprehension? The only reason is that I read that list comprehensions are usually a bit faster than nested for loops. With 2 million data points, every second counts.
Question 2: If so, how would I go about incorporating the break statement in a list comprehension?
Question 3: What was the error going on with using next in the way I was doing?
Thank you so much for your time and kind help.
List comprehensions are not necessarily faster than a for loop. If you have a pattern like:
some_var = []
for ...:
    if ...:
        some_var.append(some_other_var)
then yes, the list comprehension is faster than the bunch of .append()s. You have extenuating circumstances, however. For one thing, it is actually a generator expression in the case of next(...) because it doesn't have the [ and ] around it.
You aren't actually creating a list (and therefore not using .append()). You are merely getting one value.
Your generator calls Point(long_list[i], lat_list[i]) once for each feature for each i in xrange(N), whereas the loop calls it only once for each i.
and, of course, your generator expression doesn't work.
Why doesn't your generator expression work? Because it finds only the first value overall. The loop, on the other hand, finds the first value for each i. You see the difference? The generator expression breaks out of both loops, but the for loop breaks out of only the inner one.
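A toy illustration of that difference, with hypothetical data (not from the question):

pairs = [(1, 'a'), (1, 'b'), (2, 'c')]

# One next() overall: stops at the first match anywhere, like breaking
# out of both loops.
first = next(v for k, v in pairs if k in (1, 2))          # 'a'

# One next() per key: the inner search restarts for every key, like
# breaking out of only the inner loop.
per_key = [next(v for k, v in pairs if k == key) for key in (1, 2)]  # ['a', 'c']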
If you want a slight improvement in performance, use itertools.izip() (or just zip() in Python 3):
from itertools import izip

for long, lat in izip(long_list, lat_list):
    point = Point(long, lat)
    ...
I don't know that complex list comprehensions or generator expressions are that much faster than nested loops if they're running the same algorithm (e.g. visiting the same number of values). To get a definitive answer you should probably try to implement a solution both ways and test to see which is faster for your real data.
As for how to short-circuit the inner loop but not the outer one, you'll need to put the next call inside the main list comprehension, with a separate generator expression inside of it:
new_list = [next(feature['properties'] for feature in feature_list
                 if shape(feature['geometry']).contains(Point(long, lat)))
            for long, lat in zip(long_list, lat_list)]
I've changed up one other thing: Rather than indexing long_list and lat_list with indexes from a range I'm using zip to iterate over them in parallel.
Note that if creating the Point objects over and over ends up taking too much time, you can streamline that part of the code by adding in another nested generator expression that creates the points and lets you bind them to a (reusable) name:
new_list = [next(feature['properties'] for feature in feature_list
                 if shape(feature['geometry']).contains(point))
            for point in (Point(long, lat) for long, lat in zip(long_list, lat_list))]
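One caveat not covered above: next() raises StopIteration if no polygon contains a point. Passing a default as the second argument avoids that (using None here is just an illustrative choice):

# Records None for any point that falls inside no polygon.
new_list = [next((feature['properties'] for feature in feature_list
                  if shape(feature['geometry']).contains(point)), None)
            for point in (Point(long, lat)
                          for long, lat in zip(long_list, lat_list))]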
My current approach is:
rowiter = atable.where(condition)
rowiter_length = max([i for i, row in enumerate(rowiter)])
Is there a way to get the length of rowiter without looping through the entire iterator?
Not sure if there's a more efficient way, but you should really just be using len(rowiter) instead of that list comp, for two reasons:
If the iterator object DOES have a more efficient way of calculating its length, it'll most likely have made it accessible via the __len__ special method, so you'll get the speedup if it's available.
enumerate starts at index 0, so your construct will return 1 less than the actual length of the iterator.
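If the object turns out not to support len(), counting still requires consuming the iterator; a minimal sketch of that fallback logic, with iter_length as a hypothetical helper name:

def iter_length(it):
    """Length of `it` via len() when supported, else by consuming it."""
    try:
        return len(it)             # fast path: object implements __len__
    except TypeError:
        return sum(1 for _ in it)  # exhausts the iterator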