find closest set in N-dimensional numpy array - python

Let's assume I have a numpy array composed of the combinations of all these ranges:
A = numpy.arange(0.05, 7.05, 0.5)
B = numpy.arange(0.0, 0.75, 0.15)
C = numpy.arange(8.0, 12.0, 0.15)
D = numpy.arange(-2, 2, 0.15)
E = [0.5, 0.55, 0.6, 0.65, 0.7]
F = numpy.arange(0.1, 2.1, 0.1)
G = [0.5, 1.0, 1.5]
The resulting array of all combinations is called A and has 7 rows and 15,309,000 columns.
Now I have this group:
group = [a, b, c, d, e, f, g] = [1.22, 0.34, 9.45, -1.43, 0.52, 0.23, 0.64]
I am looking for a way to find the combination in A that is closest to this group, in the sense that every value of the selected set (the closest combination) is the closest available one: a would be the closest value in A, b the closest value in B, and so on.
For this particular case, the closest group would be:
closest_group = [1.05, 0.3, 9.5, -1.4, 0.5, 0.2, 0.5]
I have found different answers, for example one using scipy.spatial.KDTree:
A[spatial.KDTree(A).query(set)[1]]
but unfortunately this fails, raising this error:
RecursionError: maximum recursion depth exceeded
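Since the desired result is just the per-dimension closest value, one option (a minimal sketch based on the one-dimensional ranges defined above, not on the huge combination array) is to search each range independently:

import numpy

ranges = [A, B, C, D, E, F, G]   # the original 1-D ranges, not the combined array
group = [1.22, 0.34, 9.45, -1.43, 0.52, 0.23, 0.64]

# For each dimension, pick the value in that range closest to the target value.
closest_group = [numpy.asarray(r)[numpy.argmin(numpy.abs(numpy.asarray(r) - v))]
                 for r, v in zip(ranges, group)]
print(closest_group)  # approximately [1.05, 0.3, 9.5, -1.4, 0.5, 0.2, 0.5]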

Related

How can I remove the rows of my dataset using pandas?

The dataset I'm dealing with is called depo_dataset.
Some entries in the columns from the second one onward (the column labelled 0.0, etc.) might be 0.000.... The goal is, for each column starting from the second one, to generate a separate array of Energy with the 0.0... entries in that column and the associated Energy values removed. I'm trying to use a mask in pandas. Here's what I tried:
for column in depo_dataset.columns[1:]:
    e = depo_dataset['Energy'].copy()
    mask = depo_dataset[column] == 0
Then I don't know how to drop the 0 entries (assuming there are any) and the corresponding elements of e.
For instance, suppose depo_dataset['0.0'] is 0.4, 0.0, 0.4, 0.1 and depo_dataset['Energy'] is 0.82, 0.85, 0.87, 0.90; I want to drop the 0.0 entry in depo_dataset['0.0'] and the 0.85 in depo_dataset['Energy'].
Thanks for the help!
You can just use .loc on the DataFrame to filter out some rows.
Here's a little example:
import pandas as pd

df = pd.DataFrame({
    'Energy': [0.82, 0.85, 0.87, 0.90],
    0.0: [0.4, 0.0, 0.4, 0.1],
    0.1: [0.0, 0.3, 0.4, 0.1]
})

energies = {}
for column in df.columns[1:]:
    energies[column] = df.loc[df[column] != 0, ['Energy', column]]
energies[0.0]
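If, as in the question, a plain array of Energy values is wanted for each column, the filtered frames stored in energies can be converted afterwards (a small follow-up sketch, not part of the original answer):

# One Energy array per column, keeping only the rows where that column is non-zero.
energy_arrays = {col: frame['Energy'].to_numpy() for col, frame in energies.items()}
print(energy_arrays[0.0])  # the Energy values 0.82, 0.87, 0.90 (the row with 0.0 is dropped)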
You can use .loc:
depo_dataset = pd.DataFrame({'Energy': [0.82, 0.85, 0.87, 0.90],
                             '0.0': [0.4, 0.0, 0.4, 0.1],
                             '0.1': [1, 2, 3, 4]})
dataset_no_zeroes = depo_dataset.loc[(depo_dataset.iloc[:, 1:] != 0).all(axis=1), :]
Explanation:
(depo_dataset.iloc[:,1:] !=0)
makes a DataFrame from all columns beginning with the second one, with boolean values indicating whether each cell is non-zero.
.all(axis=1)
goes along each row (axis=1) and returns True only if all values in that row are True.
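To make this concrete, here is (as an added illustration, not part of the original answer) what the intermediate steps produce for the example frame above:

mask = depo_dataset.iloc[:, 1:] != 0
# mask flags each cell of the '0.0' and '0.1' columns as True when it is non-zero;
# row 1 gets False in the '0.0' column because its value there is 0.0.

keep_rows = mask.all(axis=1)
# keep_rows is True for rows 0, 2 and 3 and False for row 1,
# so depo_dataset.loc[keep_rows, :] drops only row 1.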

How to extract data point from two numpy arrays based on two conditions?

I have two numpy arrays, x and y. I want to extract the value of x that is closest to 1 and also has a y value greater than 0.96, and then get the index of that value.
x = [0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20]
y = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85]
In this case the x value would be 1.01 because it is closest to 1 and has a y value of 0.99.
Ideal result would be:
idx = 7
I know how to find the point nearest to 1 and how to get the index of it but I don't know how to add the second condition.
This code also works when there are multiple indexes satisfying the condition.
import numpy as np
x = [0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20]
y = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85]
# differences
first_check = np.abs(np.array(x) - 1)
# extracting index of the value that x is closest to 1
# (indexes in case there are two or more values that are closest to 1)
indexes = np.where(first_check == np.min(first_check))[0]
indexes = [index for index in indexes if y[index] > 0.96]
print(indexes)
OUTPUT:
[7]
You can use np.argsort(abs(x - 1)) to sort the indices according to how close each value is to 1. Then, using np.where, grab the first of those indices whose y value satisfies y > 0.96.
import numpy as np
x = np.array([0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20])
y = np.array([0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85])
closest_inds = np.argsort(abs(x - 1))
idx = closest_inds[np.where(y[closest_inds] > 0.96)][0]
This would give:
idx = 7
For short arrays (shorter than, say, 10k elements), the above solution would be slow because there is currently no findfirst in numpy (see the long-awaited feature request for it).
So, in this case, the following loop would be much faster and will give the same result:
for i in closest_inds:
    if y[i] > 0.96:
        idx = i
        break
This will work on multiple conditions and lists.
x = [0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20]
y = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85]
condition1 = 1.0
condition2 = 0.96
def convert(*args):
    """
    Returns a list of tuples generated from multiple lists and tuples.
    """
    for x in args:
        if not isinstance(x, list) and not isinstance(x, tuple):
            return []
    size = float("inf")
    for x in args:
        size = min(size, len(x))
    result = []
    for i in range(size):
        result.append(tuple([x[i] for x in args]))
    print(result)
    return result
result = convert(x, y)
closest = min([tpl for tpl in result if tpl[0] >= condition1 and tpl[1] > condition2], key=lambda tpl: abs(tpl[0] - condition1))
index = result.index(closest)
print(f'The index of the closest numbers of x-list to 1 and y-list to 0.96 is {index}')
Output
[(0.5, 0.7), (0.8, 0.75), (0.99, 0.8), (0.8, 0.85), (0.85, 0.9), (0.9, 0.95), (0.91, 0.99), (1.01, 0.99), (10, 0.99), (20, 0.85)]
The index of the closest numbers of x-list to 1 and y-list to 0.96 is 7
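For completeness, here is another way to combine the two conditions with plain numpy (my own sketch, not one of the answers above): filter on the y condition first, then take the candidate whose x value is closest to 1.

import numpy as np

x = np.array([0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20])
y = np.array([0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85])

candidates = np.where(y > 0.96)[0]                      # indices that pass the y condition
idx = candidates[np.argmin(np.abs(x[candidates] - 1))]  # among those, the x closest to 1
print(idx)  # 7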

How to use list comprehension to select lower value between 2 lists?

a = list of numbers with length k
b = list of numbers with length k
How do I get a list c with the lower of the two values of a and b, position by position?
Example:
a = [0.6, 0.8, 0.4]
b = [0.4, 1.0, 0.5]
c = [0.4, 0.8, 0.4]
You can use zip() like this:
c = [min(*item) for item in zip(a, b)]
Output:
>>> a = [0.6, 0.8, 0.4]
>>> b = [0.4, 1.0, 0.5]
>>>
>>> c = [min(*item) for item in zip(a, b)]
>>> c
[0.4, 0.8, 0.4]
Just pass map the two lists as arguments:
c = map(min, a, b)
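One caveat worth adding (not in the original answer): on Python 3, map returns a lazy iterator, so wrap it in list() if an actual list is needed:

a = [0.6, 0.8, 0.4]
b = [0.4, 1.0, 0.5]

c = list(map(min, a, b))
print(c)  # [0.4, 0.8, 0.4]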
This is a simple application of min and zip:
c = [min(aa, bb) for aa, bb in zip(a, b)]
If you're going to do computations like this a lot, it might be worth using numpy:
c = numpy.minimum(a, b)
e.g.:
>>> a = numpy.array([0.6, 0.8, 0.4])
>>> b = numpy.array([0.4, 1.0, 0.5])
>>> numpy.minimum(a, b)
array([ 0.4, 0.8, 0.4])
Use numpy's minimum function, which computes the element-wise minimum of array elements:
numpy.minimum(a, b)

python; counting elements of vectors

I would like to count the number of elements of an array that are greater than a certain value t, and save the counts in a vector a. I want to do this for different values of t.
For example:
My vector: c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
I would like to count the number of elements of c that are greater than t=0.9, then t=0.8, then t=0.7, and so on. I then want to save the counts for each value of t in a vector.
My code (not working) is:
for t in range(0, 10, 1):
    for j in range(0, len(c)):
        if c[j] > t/10:
            a.append(sum(c[j] > t))
my vector a should be of dimension 10, but it isn't!
Can anybody help me out?
I made a function that loops over the array and simply counts whenever a value is greater than the supplied threshold.
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]

def num_bigger(threshold):
    count = 0
    for num in c:
        if num > threshold:
            count += 1
    return count

thresholds = [x/10.0 for x in range(10)]
for thresh in thresholds:
    print thresh, num_bigger(thresh)
Note that the function checks for strictly greater, which is why, for example, the result is 0 when the threshold is .9.
There are a few things wrong with your code.
my vector a should be of dimension 10, but it isn't!
That's because you don't append only 10 elements to your list. Look at your logic.
for t in range(0, 10, 1):
    for j in range(0, len(c)):
        if c[j] > t/10:
            a.append(sum(c[j] > t))
For each threshold, t, you iterate over all 12 items in c one at a time and you append something to the list. Overall, you get 120 items. What you should have been doing instead is (in pseudocode):
for each threshold:
    count = how many elements in c are greater than threshold
    a.append(count)
numpy.where() gives you the indices in an array where a condition is satisfied, so you just have to count how many indices you get each time. We'll get to the full solution in a moment.
Another potential error is t/10, which in Python 2 is integer division and will return 0 for all thresholds. The correct way would be to force float division with t/10.0. If you're on Python 3, though, you get float division by default, so this might not be a problem. Notice, though, that you do c[j] > t inside the sum, where t is between 0 and 10, so your c[j] > t logic is wrong either way. You want to use a counter for all elements, like other answers have shown you, or collapse it all down to a one-liner list comprehension.
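For instance, such a one-liner could look like this (an illustrative sketch, not part of the original answer):

c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]

# One count per threshold 0.0, 0.1, ..., 0.9
a = [len([x for x in c if x > t / 10.0]) for t in range(10)]
print(a)  # [12, 10, 8, 5, 5, 3, 2, 1, 1, 0]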
Finally, here's a solution fully utilising numpy.
import numpy as np

c = np.array([0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1])
thresh = np.arange(0, 1, 0.1)
counts = np.empty(thresh.shape, dtype=int)

for i, t in enumerate(thresh):
    counts[i] = len(np.where(c > t)[0])

print counts
Output:
[12 10 8 5 5 3 2 1 1 0]
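As a side note (my addition, not from the original answer), np.count_nonzero counts the True entries of the boolean mask directly, so the index array built by np.where is not needed; using c and thresh from the snippet above:

counts = np.array([np.count_nonzero(c > t) for t in thresh])
print(counts)  # the same counts: 12, 10, 8, 5, 5, 3, 2, 1, 1, 0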
Letting numpy take care of the loops under the hood is faster than Python-level loops. For demonstration:
import timeit

head = """
import numpy as np
c = np.array([0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1])
thresh = np.arange(0, 1, 0.1)
"""

numpy_where = """
for t in thresh:
    len(np.where(c > t)[0])
"""

python_loop = """
for t in thresh:
    len([element for element in c if element > t])
"""

n = 10000
for test in [numpy_where, python_loop]:
    print timeit.timeit(test, setup=head, number=n)
Which on my computer results in the following timings.
0.231292377372
0.321743753994
Your problem is here:
if c[j]>t/10:
Notice that both t and 10 are integers and so you perform integer division.
The easiest solution with the least changes is to change it to:
if c[j]>float(t)/10:
to force float division.
So the whole code would look something like this:
a = []
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
for i in range(10):  # cutoffs 0.0, 0.1, ..., 0.9
    count = 0
    cutoff = float(i) / 10
    for ele in c:
        if ele > cutoff:
            count += 1
    a.append(count)
print(len(a))  # prints 10, one count per cutoff from 0.0 to 0.9
print(a)       # prints the counts, going from the 0.0 cutoff to the 0.9 cutoff
You have to divide t / 10.0 so the result is a decimal; the result of t / 10 (in Python 2) is an integer.
a = []
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]

for t in range(0, 10, 1):
    count = 0
    for j in range(0, len(c)):
        if c[j] > t / 10.0:
            count = count + 1
    a.append(count)

for t in range(0, 10, 1):
    print(str(a[t]) + ' elements in c are bigger than ' + str(t / 10.0))
Output:
12 elements in c are bigger than 0.0
10 elements in c are bigger than 0.1
8 elements in c are bigger than 0.2
5 elements in c are bigger than 0.3
5 elements in c are bigger than 0.4
3 elements in c are bigger than 0.5
2 elements in c are bigger than 0.6
1 elements in c are bigger than 0.7
1 elements in c are bigger than 0.8
0 elements in c are bigger than 0.9
If you simplify your code, bugs won't have places to hide!
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
a = []
for t in [x/10 for x in range(10)]:
    a.append((t, len([x for x in c if x > t])))
a
[(0.0, 12),
(0.1, 10),
(0.2, 8),
(0.3, 5),
(0.4, 5),
(0.5, 3),
(0.6, 2),
(0.7, 1),
(0.8, 1),
(0.9, 0)]
or even this one-liner
[(r/10,len([x for x in c if x>r/10])) for r in range(10)]
It depends on the sizes of your arrays, but your current solution has O(m*n) complexity, m being the number of values to test and n the size of your array. You may be better off with O((m+n)*log(n)) by first sorting your array in O(n*log(n)) and then using binary search to find the m values in O(m*log(n)). Using numpy and your sample c list, this would be something like:
>>> c
[0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
>>> thresholds = np.linspace(0, 1, 10, endpoint=False)
>>> thresholds
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
>>> len(c) - np.sort(c).searchsorted(thresholds, side='right')
array([12, 10, 8, 5, 5, 3, 2, 1, 1, 0])

Calculate difference each time the sign changes in a list of values

Ok let's imagine that I have a list of values like so:
list = [-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6]
I would like to loop over this list and, each time the sign changes, get the absolute difference between the extremum of the current interval (its maximum if the values are positive, its minimum if they are negative) and the extremum of the next interval.
For example on the previous list, we would like to have a result like so:
[1.6, 1.7, 1.5]
The tricky part is that it has to work also for lists like:
list = [0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4]
Which would return:
[0.35, 0.83, 0.9, 0.7]
It's tricky because some intervals are only one value long, and I'm having difficulty handling this.
How would you solve this with the least possible number of lines?
Here is my current code, but it's not working at the moment.
(In my real code, list is a list of 6 lists, where each of these 6 lists contains either a nan or a np.array of 1024 values: the values I want to evaluate.)
Hmax = []
for c in range(0, 6):
    Hmax_tmp = []
    for i in range(len(list[c])):
        if not np.isnan(list[c][i]).any():
            zero_crossings = np.where(np.diff(np.sign(list[c][i])))[0]
            if not zero_crossings[0] == 0:
                zero_crossings = [0] + zero_crossings.tolist() + [1023]
            diff = []
            for j in range(1, len(zero_crossings) - 2):
                if
                diff.append(max(list[c][i][np.arange(zero_crossings[j-1], zero_crossings[j])].min(),
                                list[c][i][np.arange(zero_crossings[j]+1, zero_crossings[j+1])].max(),
                                key=abs)
                            - max(list[c][i][np.arange(zero_crossings[j+1], zero_crossings[j+2])].min(),
                                  list[c][i][np.arange(zero_crossings[j+1], zero_crossings[j+2])].max(),
                                  key=abs))
            Hmax_tmp.append(np.max(diff))
        else:
            Hmax_tmp.append(np.nan)
    Hmax.append(Hmax_tmp)
This type of grouping operation can be greatly simplified using itertools.groupby. For example:
>>> from itertools import groupby
>>> lst = [-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6] # the list
>>> minmax = [min(v) if k else max(v) for k,v in groupby(lst, lambda a: a < 0)]
>>> [abs(j-i) for i,j in zip(minmax[:-1], minmax[1:])]
[1.6, 1.7000000000000002, 1.5]
And the second list:
>>> lst2 = [0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4] # the list
>>> minmax = [min(v) if k else max(v) for k,v in groupby(lst2, lambda a: a < 0)]
>>> [abs(j-i) for i,j in zip(minmax[:-1], minmax[1:])]
[0.35, 0.83, 0.8999999999999999, 0.7]
So here, the list is grouped into consecutive intervals of negative/positive values. The min/max is computed for each group and stored in a list minmax. Lastly, a list comprehension finds the differences.
Excuse the inexact representations of floating point numbers!
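Since the question ultimately needs to apply this to several arrays, the groupby approach can be wrapped in a small helper (a sketch of one way to do it, not part of the original answer):

from itertools import groupby

def sign_change_differences(values):
    # Extremum of each run of same-signed values (min for negative runs,
    # max for positive runs), then absolute differences between
    # consecutive extrema.
    minmax = [min(v) if k else max(v)
              for k, v in groupby(values, lambda a: a < 0)]
    return [abs(j - i) for i, j in zip(minmax[:-1], minmax[1:])]

print(sign_change_differences([0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4]))
# [0.35, 0.83, 0.8999999999999999, 0.7]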
It would be straightforward to retrieve max/min values of intervals, and then do the calculation.
def difference(nums):
    if not nums:
        return []
    pivots = []
    last_sign = nums[0] >= 0
    current = 0
    for x in nums:
        current_sign = x >= 0
        if current_sign != last_sign:
            pivots.append(current)
            current = 0
            last_sign = current_sign
        current = max(current, x) if current_sign else min(current, x)
    pivots.append(current)
    result = []
    for idx in xrange(len(pivots)):
        if idx + 1 < len(pivots):
            result.append(abs(pivots[idx] - pivots[idx + 1]))
    return result
>>> print difference([-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6])
[1.6, 1.7000000000000002, 1.5]
>>> print difference([0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4])
[0.35, 0.83, 0.8999999999999999, 0.7]
