I have been searching for this for a while; basically I am trying to conditionally increment a list of elements by another list, element-wise...
My code is below, but is there a better way to do it? A list comprehension, map?
I think an element-wise operator like ~+= from http://www.python.org/dev/peps/pep-0225/ would be really useful, but why is that PEP deferred?
for i in range(1, len(s)):
    if s[i] < s[0]:
        s[i] += p[i]
Based on some good feedback from you guys I have recoded it to the following
i = s < s[0]
s[i] += p[i]
where s and p are both NumPy arrays.
P.S. It is still about 5 times slower than MATLAB for one piece of my code.
Here is a quick version:
# sample data
s = [10, 5, 20]
p = [2,2,2]
# As a one-liner. (You could factor out the lambda. The old
# "lambda (si, pi): ..." tuple-unpacking form is Python 2 only,
# so unpack inside the body; list() makes it eager on Python 3.)
s = list(map(lambda pair: pair[0] + pair[1] if pair[0] < s[0] else pair[0], zip(s, p)))
# s is now [10, 7, 20]
This assumes that len(s) <= len(p)
Hope this helps. Let me know. Good luck. :-)
If you don't want to create a new array, then your options are:
What you proposed (though you might want to use xrange, depending on the Python version)
Use NumPy arrays for s and p. Then you can do something like s[s < s[0]] += p[s < s[0]] if s and p are the same length (see the sketch after this list).
Use Cython to speed up what you've proposed.
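Here is a minimal sketch of the NumPy option (assuming s and p are equal-length NumPy arrays; the sample values are just for illustration, and computing the boolean mask once avoids doing the comparison twice):
import numpy as np

s = np.array([10, 5, 20])
p = np.array([2, 2, 2])

mask = s < s[0]     # boolean mask, computed once
s[mask] += p[mask]  # in-place update, no new array is created
print(s)            # [10  7 20]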
Check this SO question:
Merging/adding lists in Python
Basically, something like:
[sum(a) for a in zip(*[s, p]) if a[0] < 0]
Example:
>>> [sum(a) for a in zip(*[[1, 2, 3], [10, 20, 30]]) if a[0] > 2]
[33]
To clarify, here's what zip does:
>>> zip(*[[1, 2, 3], [4, 5, 6]])
[(1, 4), (2, 5), (3, 6)]
It pairs up the corresponding elements of two (or more) lists into a list of tuples. You can then test conditions on the elements of each tuple.
s = [s[i] + p[i]*(s[i] < s[0]) for i in range(len(s))]  # start at 0 so s[0] is kept; s[0] < s[0] is False, so it is never incremented
I've had multiple scenarios where I had to find a huge array's items in another huge array.
I usually solved it like this:
for i in range(0, len(arr1)):
    for k in range(0, len(arr2)):
        if arr1[i] == arr2[k]:
            print(arr1[i], arr2[k])
Which works fine, but it's kinda slow.
Can someone help me make the iteration faster?
arr1 = [1,2,3,4,5]
arr2 = [4,5,6,7]
same_items = set(arr1).intersection(arr2)
print(same_items)
Out[5]: {4,5}
Sets hash their items, so instead of O(n) lookup time per element you get O(1) on average. Items need to be hashable for this to work, and if they are not, I highly suggest you find a way to make them hashable.
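For example, purely as an illustration: if the items are lists (which are unhashable), converting them to tuples first makes the set approach work:
arr1 = [[1, 2], [3, 4], [5, 6]]
arr2 = [[3, 4], [7, 8]]
# lists are not hashable, but tuples are, so convert before intersecting
same_items = set(map(tuple, arr1)).intersection(map(tuple, arr2))
print(same_items)  # {(3, 4)}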
If you need to handle huge arrays, you may want to use Python's numpy library, which provides high-efficiency manipulation methods and lets you avoid explicit loops entirely in most cases.
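A small sketch of that idea; numpy.intersect1d returns the sorted, unique values common to both arrays:
import numpy as np

arr1 = np.array([1, 2, 3, 4, 5])
arr2 = np.array([4, 5, 6, 7])
print(np.intersect1d(arr1, arr2))  # [4 5]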
If your array has duplicates and you want to keep them all:
arr1 = [1,2,3,4,5,7,5,4]
arr2 = [4,5,6,7]
res = [i for i in arr1 if i in arr2]
>>> res
[4, 5, 7, 5, 4]
or using numpy:
import numpy as np
res = np.array(arr1)[np.isin(arr1, arr2)].tolist()
>>> res
[4, 5, 7, 5, 4]
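If arr2 is also huge, one possible tweak to the first version: build a set from arr2 once, so each membership test is O(1) on average while arr1's duplicates and order are kept:
arr2_set = set(arr2)
res = [i for i in arr1 if i in arr2_set]
# res is still [4, 5, 7, 5, 4]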
I was reading about arrays and I'm wondering how I can sort the elements of an array from right to left.
For example:
n = 10
numbers = []
for i in range(1, n+1):
    numbers.append(i)
print(numbers)
How can I show the elements from the last one to the very first one (10, 9, 8...) using basic tools like loops and conditionals?
And is there another way besides this alternative:
for i in range(-1, -len(numbers) - 1, -1):
    print(numbers[i])
You aren't sorting an array. You are attempting to construct one with a particular ordering. You can do this using range directly. range can be invoked with three arguments start, stop and step. This allows you to construct a range 10, 9, ...:
Python 2:
numbers = range(10, 0, -1)
print numbers
Python 3:
numbers = list(range(10, 0, -1))
print(numbers)
output
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
Use sorted with reverse=True
sorted(numbers, reverse=True)
or try the list.sort method
numbers.sort(reverse=True)
But in your case you can simply use reverse indexing:
numbers[::-1]
>>> list(reversed([1, 2]))
[2, 1]
There are many different algorithms to sort arrays without default functions. Here's a list of some of them: https://en.wikipedia.org/wiki/Sorting_algorithm. You can use a sorting algorithm to sort the array, then, reverse it using reversed(list).
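Purely as a sketch of that idea, here's a hand-written insertion sort followed by reversed() (any algorithm from that page would do):
def insertion_sort(items):
    # simple in-place insertion sort, O(n^2)
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

numbers = [3, 1, 4, 1, 5, 9, 2, 6]
print(list(reversed(insertion_sort(numbers))))  # [9, 6, 5, 4, 3, 2, 1, 1]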
If you are looking to have list in reverse you could do this:
In [17]: [4,6,7][::-1]
Out[17]: [7, 6, 4]
You can use the range function for that, like
for i in range(n - 1, -1, -1):
    print(numbers[i])
If you want your list in descending order then you can use the sort method of lists.
numbers.sort(reverse=True)
Is there any pre-made optimized tool/library in Python to cut/slice lists for values "less than" something?
Here's the issue: Let's say I have a list like:
a=[1,3,5,7,9]
and I want to delete all the numbers which are <= 6, so the resulting list would be
[7,9]
6 is not in the list, so I can't use the built-in index(6) method of the list. I can do things like:
#!/usr/bin/env python
a = [1, 3, 5, 7, 9]
cut = 6
for i in range(len(a)-1, -2, -1):
    if a[i] <= cut:
        break
b = a[i+1:]
print "Cut list: %s" % b
which is a fairly quick method if the index to cut from is close to the end of the list, but which will be inefficient if the item is close to the beginning of the list (let's say I want to delete all the items which are > 2; there will be a lot of iterations).
I can also implement my own find method using binary search or such, but I was wondering if there's a more... wide-scope built-in library to handle this type of thing that I could reuse in other cases (for instance, if I need to delete all the numbers which are >= 6).
Thank you in advance.
You can use the bisect module to perform a sorted search:
>>> import bisect
>>> a[bisect.bisect_left(a, 6):]
[7, 9]
bisect.bisect_left is what you are looking for, I guess. (If the cut value itself can appear in the list and you want it removed as well, use bisect.bisect_right instead, since bisect_left keeps elements equal to the cut value.)
If you just want to filter the list for all elements that fulfil a certain criterion, then the most straightforward way is to use the built-in filter function.
Here is an example:
a_list = [10,2,3,8,1,9]
# filter all elements smaller than 6:
filtered_list = list(filter(lambda x: x < 6, a_list))  # list() needed on Python 3, where filter is lazy
the filtered_list will contain:
[2, 3, 1]
Note: this method does not rely on the ordering of the list, so for very large lists it might be that a method optimised for ordered searching (such as bisect) performs better in terms of speed.
Bisect left and right helper function
#!/usr/bin/env python3
import bisect
def get_slice(list_, left, right):
    return list_[
        bisect.bisect_left(list_, left):
        bisect.bisect_left(list_, right)
    ]
assert get_slice([0, 1, 1, 3, 4, 4, 5, 6], 1, 5) == [1, 1, 3, 4, 4]
Tested in Ubuntu 16.04, Python 3.5.2.
Adding to Jon's answer: if you need to actually delete the elements less than or equal to 6, and want to keep the same reference to the list rather than returning a new one:
del a[:bisect.bisect_right(a,6)]
You should note as well that bisect will only work on a sorted list.
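For instance, a quick sanity check (with an illustrative alias variable) that the deletion really happens in place:
import bisect

a = [1, 3, 5, 7, 9]
alias = a                          # a second reference to the same list
del a[:bisect.bisect_right(a, 6)]  # in-place: no new list is created
print(alias)                       # [7, 9] -- the alias sees the change too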
I'm looking for a built in function in python that applies a function to each element and the next element within a list (or other iterable), returning the set of results in a new list. I don't know if one is built in or not, but I'm attempting to approach this in a functional way if possible.
Example:
l = [1,2,3,4,5]
# returns [3,5,7,9]
# add(1,2) add(2,3) add(3,4) add(4,5)
My actual use case is that I have a list of vectors of the form numpy.array([1,2,3]), and I want to find the difference between each successive vector.
Actual example:
l = [numpy.array([1,2,3]), numpy.array([2,7,6]), numpy.array([4,5,6])]
# find the difference between each vector (l[0]-l[1], l[1]-l[2], ... etc.)
You want pairwise() and map().
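A minimal sketch of that combination, assuming Python 3.10+ where itertools.pairwise is available (on older versions you can use the pairwise recipe from the itertools docs):
from itertools import pairwise  # Python 3.10+

l = [1, 2, 3, 4, 5]
print(list(map(sum, pairwise(l))))  # [3, 5, 7, 9]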
The most straightforward way to do this would be in a list comprehension:
a = [l[i] + l[i+1] for i in range(len(l)-1)]
Alternatively, you could use a little builtin magic:
map(sum, zip(l, l[1:]))
Finding the differences between successive entries of a NumPy array can be done with numpy.diff():
>>> a = numpy.array([5, 2, 3, 1, 4])
>>> numpy.diff(a)
array([-3, 1, -2, 3])
This will be much faster than any pure-Python solution.
Edit: Here's an example for a 2d array:
>>> a = numpy.array([[1,2,3], [2,7,6], [4,5,6]])
>>> numpy.diff(a, axis=0)
array([[ 1, 5, 3],
[ 2, -2, 0]], dtype=int32)
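One small caveat: numpy.diff computes l[i+1] - l[i]; if you literally want l[i] - l[i+1], as in the question's comment, just negate the result:
>>> -numpy.diff(a, axis=0)  # flips the sign of every difference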
I've been writing a program to brute-force check a sequence of numbers to look for Euler bricks, but the method I came up with involves a triple loop. Since nested Python loops are notoriously slow, I was wondering if there was a better way, using numpy, to create the array of values that I need.
import numpy as np

# x = max side length of brick. User input.
for t in range(3, x):
    a = []; b = []; c = []
    for u in range(2, t):
        for v in range(1, u):
            a.append(t)
            b.append(u)
            c.append(v)
    a = np.array(a)
    b = np.array(b)
    c = np.array(c)
    ...
Is there a better way to generate the array of values using numpy commands?
Thanks.
Example:
If x=10, when t=3 I want to get:
a=[3]
b=[2]
c=[1]
the first time through the loop. After that, when t=4:
a=[4, 4, 4]
b=[2, 3, 3]
c=[1, 1, 2]
The third time (t=5) I want:
a=[5, 5, 5, 5, 5, 5]
b=[2, 3, 3, 4, 4, 4]
c=[1, 1, 2, 1, 2, 3]
and so on, up to max side lengths around 5000 or so.
EDIT: Solution
from numpy import array, arange, empty, hstack

a = array(3)
b = array(2)
c = array(1)
for i in range(4, x):  # Removing the (3,2,1) check from the code does not affect results.
    foo = arange(1, i-1)
    foo2 = empty(len(foo))
    foo2.fill(i-1)
    c = hstack((c, foo))
    b = hstack((b, foo2))
    a = empty(len(b))
    a.fill(i)
    ...
Works many times faster now. Thanks all.
Try using numpy.empty and ndarray.fill (http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html).
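A rough sketch of what that looks like (numpy.full does the same thing in a single call, if that reads better):
import numpy as np

t, count = 5, 6
a = np.empty(count)  # allocate without initialising
a.fill(t)            # then fill with the repeated value
# equivalent one-liner: a = np.full(count, t)
print(a)             # [5. 5. 5. 5. 5. 5.]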
There are a couple of things which could help, but probably only for large values of x. For starters, use xrange instead of range; that will save creating a list you never need. You could also create empty NumPy arrays of the correct length and fill them with the values as you go, instead of appending to a list and then converting it into a NumPy array.
I believe this code will work (no python access right this second):
import numpy as np

for t in xrange(3, x):
    size = (t - 2) * (t - 1) // 2  # number of (u, v) pairs with 1 <= v < u < t
    a = np.zeros(size)
    b = np.zeros(size)
    c = np.zeros(size)
    idx = 0
    for u in xrange(2, t):
        for v in xrange(1, u):
            a[idx] = t
            b[idx] = u
            c[idx] = v
            idx += 1
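If it helps, here is a hedged sketch (the brick_sides name is just for illustration) that builds a, b and c for a single t entirely with NumPy index tricks; np.tril_indices yields the (u, v) pairs in the same order as the nested loops:
import numpy as np

def brick_sides(t):
    # all (u, v) with 2 <= u < t and 1 <= v < u, without Python-level loops
    i, j = np.tril_indices(t - 1, k=-1)  # index pairs below the diagonal (j < i)
    u = i + 1
    v = j + 1
    a = np.full(u.shape, t)
    return a, u, v

print(brick_sides(5))
# (array([5, 5, 5, 5, 5, 5]), array([2, 3, 3, 4, 4, 4]), array([1, 1, 2, 1, 2, 3]))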