numpy array - replace certain values - python

I have yet to get my head around numpy array referencing.
I have arrays whose first 2 columns will always contain some negative values that need to be kept, while negative values in the remaining columns must be replaced with 0s. I understand that there are various ways to do this. The part that's baffling me is how to restrict any of those methods to the columns past the first two.
Example array:
[[x, y, flow, element1, element2, element3],
 [x, y, flow, element1, element2, element3],
 [x, y, flow, element1, element2, element3]]
The desired result is that, across the whole array, any negative value is replaced with 0 unless it is in the x or y column.

It sounds like you want:
subset = array[:, 2:]
subset[subset < 0] = 0
or as a rather unreadable one-liner:
array[:, 2:][array[:, 2:] < 0] = 0
As a more complete example:
import numpy as np
array = np.random.normal(0, 1, (10, 5))
print(array)
# Note that "subset" is a view, so modifying it modifies "array"
subset = array[:, 2:]
subset[subset < 0] = 0
print(array)

You would need to clip subsets of the array; note that clip returns a new array, so assign the result back.
Something like this:
row[2:] = row[2:].clip(0, None)
You could do this a couple of ways. One would be in a for loop over the rows:
for row in a:
    row[2:] = row[2:].clip(0, None)
Or, using [:, 2:], which selects every row (:) and, within each row, the columns from index 2 onward (2:). The result is basically what Joe Kington has suggested:
a[:, 2:] = a[:, 2:].clip(0, None)
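A minimal end-to-end sketch of the clip approach (sample data assumed, mirroring the random example above):
import numpy as np
a = np.random.normal(0, 1, (10, 5))
a[:, 2:] = a[:, 2:].clip(0, None)  # negatives in columns 2 onward become 0; the first two columns keep theirs
print(a)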

Related

Rearrange array elements based on the number sequence and represent them by array id

Suppose I have multiple arrays inside one array, containing the numbers from 0 to n in mixed order.
For example,
x = [[0,2,3,5],[1,4]]
Here we have two arrays in x. There could be more than two.
I want to rearrange all the array elements based on their number sequence, but have each position show the ID of the array it came from. The result should look like this:
y = [0,1,0,0,1,0]
That means 0, 2, 3, 5 are in the array with id 0, so those positions show that id in their respective sequence. Same for 1 and 4. Can anyone help me solve this? [N.B. There could be more than two arrays, so it would be highly appreciated if the code worked for any number of arrays.]
You can do this by using a dictionary
x = [[0,2,3,5],[1,4]]
lst = {}
for i in range(len(x)):
    for j in range(len(x[i])):
        lst[x[i][j]] = i
print(lst)
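The dictionary maps each value to its array id; to get the requested list you can read it back in sorted key order (a small extra step, not part of the original answer):
y = [lst[k] for k in sorted(lst)]
print(y)   # [0, 1, 0, 0, 1, 0]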
You can also do this by using a list: list.insert(idx, value) inserts value into the list at index idx. Here, we traverse all the values of x, and the value x[i][j] belongs to the i-th array.
x = [[0,2,3,5],[1,4]]
lst = []
for i in range(len(x)):
    for j in range(len(x[i])):
        lst.insert(x[i][j], i)
print(lst)
Output: [0, 1, 0, 0, 1, 0]
You might also consider using np.argsort to rearrange your array values, building the index array with a list comprehension:
import numpy as np
x = [[0,2,3,5],[1,4]]
order = np.concatenate(x).argsort()
np.concatenate([[i] * len(e) for i, e in enumerate(x)])[order]
array([0, 1, 0, 0, 1, 0])

Finding the indices of distinct elements in a vectorized manner

I have a list of ints, a, between 0 and 3000. len(a) = 3000. I have a for loop that iterates through this list, searching for the indices of each element in a larger array.
import numpy as np
a = [i for i in range(3000)]
array = np.random.randint(0, 3000, size=(12, 1000, 1000))
newlist = []
for i in range(0, len(a)):
    coord = np.where(array == a[i])
    newlist.append(coord)
As you can see, coord will be 3 arrays of the coordinates x, y, z for the values in the 3D matrix that equal the value in the list.
Is there a way to do this in a vectorized manner without the for loop?
The output should be a list of tuples, one for each element in a:
# each coord looks like this:
print(coord)
(array([1, ..., 1000]), array([2, ..., 1000]), array([2, ..., 12]))
# combined over all the iterations:
print(newlist)
[coord1, coord2, ..., coord3000]
There is actually a fully vectorized solution to this, despite the fact that the resulting arrays are all of different sizes. The idea is this:
Sort all the elements of the array along with their coordinates. argsort is ideal for this sort of thing.
Find the cut-points in the sorted data, so you know where to split the array, e.g. with diff and flatnonzero.
Split the coordinate array along the indices you found. If you have missing elements, you may need to generate a key based on the first element of each run.
Here is an example to walk you through it. Let's say you have a d-dimensional array with size n. Your coordinates will be a (d, n) array:
d = arr.ndim
n = arr.size
You can generate the coordinate arrays with np.indices directly:
coords = np.indices(arr.shape)
Now ravel/reshape the data and the coordinates into an (n,) and (d, n) array, respectively:
arr = arr.ravel() # Ravel guarantees C-order no matter the source of the data
coords = coords.reshape(d, n) # C-order by default as a result of `indices` too
Now sort the data:
order = np.argsort(arr)
arr = arr[order]
coords = coords[:, order]
Find the locations where the data changes values. You want the indices of the new values, so we can make a fake first element that is 1 less than the actual first element.
change = np.diff(arr, prepend=arr[0] - 1)
The indices of the locations give the break-points in the array:
locs = np.flatnonzero(change)
You can now split the data at those locations:
result = np.split(coords, locs[1:], axis=1)
And you can create the key of values actually found:
key = arr[locs]
If you are very confident that all the values are present in the array, then you don't need the key. Instead, you can compute locs as just np.diff(arr) and result as just np.split(coords, inds, axis=1).
Each element in result is already consistent with the indexing used by where/nonzero, but as a numpy array. If you specifically want a tuple, you can map it to a tuple:
result = [tuple(inds) for inds in result]
TL;DR
Combining all this into a function:
def find_locations(arr):
    coords = np.indices(arr.shape).reshape(arr.ndim, arr.size)
    arr = arr.ravel()
    order = np.argsort(arr)
    arr = arr[order]
    coords = coords[:, order]
    locs = np.flatnonzero(np.diff(arr, prepend=arr[0] - 1))
    return arr[locs], np.split(coords, locs[1:], axis=1)
You can return a list of index arrays, with empty arrays for missing elements, by replacing the last line with
keys, splits = arr[locs], np.split(coords, locs[1:], axis=1)
result = [np.empty(0, dtype=int)] * 3000  # Empty array, so OK to use same reference
for i, j in enumerate(keys):
    result[j] = splits[i]
return result
You can optionally filter for values that are in the specific range you want (e.g. 0-2999).
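A quick usage sketch (small hypothetical data, just to show the call; assumes numpy is imported as np as in the question):
arr = np.random.randint(0, 10, size=(4, 5))
values, locations = find_locations(arr)
# values[i] is a distinct value present in arr; locations[i] is a (ndim, count) array of its coordinates
first = tuple(locations[0])   # usable just like the output of np.where(arr == values[0])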
You can use logical OR in numpy to pass all those equality conditions at once instead of one by one.
import numpy as np
conditions = False
for i in a:  # a is the list of values from the question
    conditions = np.logical_or(conditions, array3d == i)
newlist = np.where(conditions)
This allows numpy to do filtering once instead of n passes for each condition separately.
Another way to do it more compactly:
np.where(np.isin(array3d, a))
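A self-contained sketch of the compact form (tiny hypothetical sizes so it runs quickly; names follow the answer above):
import numpy as np
a = list(range(5))
array3d = np.random.randint(0, 5, size=(2, 3, 3))
newlist = np.where(np.isin(array3d, a))   # coordinates of every matching value, combined into one tuple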

python find permutation mapping [duplicate]

I have two 1D arrays, x & y, one smaller than the other. I'm trying to find the index of every element of y in x.
I've found two naive ways to do this, the first is slow, and the second memory-intensive.
The slow way
indices = []
for iy in y:
    indices.append(np.where(x == iy)[0][0])
The memory hog
xe = np.outer([1,]*len(x), y)
ye = np.outer(x, [1,]*len(y))
junk, indices = np.where(np.equal(xe, ye))
Is there a faster way or less memory intensive approach? Ideally the search would take advantage of the fact that we are searching for not one thing in a list, but many things, and thus is slightly more amenable to parallelization.
Bonus points if you don't assume that every element of y is actually in x.
I want to suggest a one-line solution:
indices = np.where(np.in1d(x, y))[0]
The result is an array of indices into x corresponding to the elements of y that were found in x.
One can use it without numpy.where if needed.
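A tiny sketch of what that looks like (borrowing the sample arrays from the next answer; note that the indices come back in x's order, not y's):
import numpy as np
x = np.array([3, 5, 7, 1, 9, 8, 6, 6])
y = np.array([2, 1, 5, 10, 100, 6])
indices = np.where(np.in1d(x, y))[0]
print(indices)   # [1 3 6 7]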
As Joe Kington said, searchsorted() can search elements very quickly. To deal with elements that are not in x, you can check the searched result against the original y and create a masked array:
import numpy as np
x = np.array([3,5,7,1,9,8,6,6])
y = np.array([2,1,5,10,100,6])
index = np.argsort(x)
sorted_x = x[index]
sorted_index = np.searchsorted(sorted_x, y)
yindex = np.take(index, sorted_index, mode="clip")
mask = x[yindex] != y
result = np.ma.array(yindex, mask=mask)
print(result)
The result is:
[-- 3 1 -- -- 6]
How about this?
It does assume that every element of y is in x (and will return results even for elements that aren't!), but it is much faster.
import numpy as np
# Generate some example data...
x = np.arange(1000)
np.random.shuffle(x)
y = np.arange(100)
# Actually perform the operation...
xsorted = np.argsort(x)
ypos = np.searchsorted(x[xsorted], y)
indices = xsorted[ypos]
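One sanity check worth adding (not in the original answer): every looked-up index should map back to the matching y value, since y is assumed to be a subset of x.
assert np.array_equal(x[indices], y)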
I think
np.where(y.reshape(y.size, 1) == x)[1]
is a clearer version than indices = np.where(y[:, None] == x[None, :])[1]; you don't need to broadcast x into 2D.
I found this type of solution to be best because, unlike the searchsorted() or in1d() based solutions posted here and elsewhere, it works with duplicates and doesn't care whether anything is sorted. That was important to me because I wanted x to be in a particular custom order.
I would just do this:
indices = np.where(y[:, None] == x[None, :])[1]
Unlike your memory-hog way, this makes use of broadcasting to directly generate the 2D boolean array without creating full 2D copies of both x and y.
The numpy_indexed package (disclaimer: I am its author) contains a function that does exactly this:
import numpy_indexed as npi
indices = npi.indices(x, y, missing='mask')
It will currently raise a KeyError if not all elements in y are present in x; but perhaps I should add a kwarg so that one can elect to mark such items with a -1 or something.
It should have the same efficiency as the currently accepted answer, since the implementation is along similar lines. numpy_indexed is however more flexible, and also allows to search for indices of rows of multidimensional arrays, for instance.
EDIT: I've changed the handling of missing values; the 'missing' kwarg can now be set to 'raise', 'ignore' or 'mask'. In the latter case you get a masked array of the same length as y, on which you can call .compressed() to get the valid indices. Note that there is also npi.contains(x, y) if this is all you need to know.
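A small usage sketch (assumes numpy_indexed is installed and behaves as described in this answer):
import numpy as np
import numpy_indexed as npi
x = np.array([3, 5, 7, 1, 9, 8, 6])
y = np.array([5, 1, 100])
idx = npi.indices(x, y, missing='mask')   # masked entry where 100 is absent from x
print(idx.compressed())                   # indices in x of the y values that were found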
Another solution would be:
import numpy as np
a = np.array(['Bob', 'Alice', 'John', 'Jack', 'Brian', 'Dylan'])
z = ['Bob', 'Brian', 'John']
for i in z:
    print(np.argwhere(i == a))
Use this line of code:
indices = np.where(y[:, None] == x[None, :])[1]
My solution can additionally handle a multidimensional x. By default, it will return a standard numpy array of corresponding y indices in the shape of x.
If you can't assume that x is a subset of y, then set masked=True to return a masked array (this has a performance penalty). Otherwise, you will still get indices for elements not contained in y, but they probably won't be useful to you.
The answers by HYRY and Joe Kington were helpful in making this.
# For each element of ndarray x, return index of corresponding element in 1d array y
# If y contains duplicates, the index of the last duplicate is returned
# Optionally, mask indices where the x element does not exist in y
def matched_indices(x, y, masked=False):
    # Flattened x
    x_flat = x.ravel()
    # Indices to sort y
    y_argsort = y.argsort()
    # Indices in sorted y of corresponding x elements, flat
    x_in_y_sort_flat = y.searchsorted(x_flat, sorter=y_argsort)
    # Indices in y of corresponding x elements, flat
    x_in_y_flat = y_argsort[x_in_y_sort_flat]
    if not masked:
        # Reshape to shape of x
        return x_in_y_flat.reshape(x.shape)
    else:
        # Check for inequality at each y index to mask invalid indices
        mask = x_flat != y[x_in_y_flat]
        # Reshape to shape of x
        return np.ma.array(x_in_y_flat.reshape(x.shape), mask=mask.reshape(x.shape))
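A usage sketch with made-up data (not from the original answer):
x = np.array([[5, 1], [6, 9]])
y = np.array([3, 5, 7, 1, 9, 8, 6, 6])
print(matched_indices(x, y))                                  # each entry is the position of the x value within y
print(matched_indices(np.array([[5, 2]]), y, masked=True))    # 2 is not in y, so its index is masked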
A more direct solution that doesn't expect the array to be sorted:
import numpy as np
import pandas as pd
A = pd.Series(['amsterdam', 'delhi', 'chromepet', 'tokyo', 'others'])
B = pd.Series(['chromepet', 'tokyo', 'tokyo', 'delhi', 'others'])
# Find index position of B's items in A
B.map(lambda x: np.where(A==x)[0][0]).tolist()
Result is:
[2, 3, 3, 1, 4]
A more compact solution:
indices, = np.in1d(a, b).nonzero()

Check how many numpy arrays within a numpy array are equal to numpy arrays within another numpy array of different size

My problem
Suppose I have
a = np.array([ np.array([1,2]), np.array([3,4]), np.array([5,6]), np.array([7,8]), np.array([9,10])])
b = np.array([ np.array([5,6]), np.array([1,2]), np.array([3,192])])
They are two arrays, of different sizes, containing other arrays (the inner arrays all have the same size!).
I want to count how many items of b (i.e. inner arrays) are also in a. Notice that I am not considering their position!
How can I do that?
My Try
count = 0
for bitem in b:
    for aitem in a:
        if (aitem == bitem).all():
            count += 1
Is there a better way? Especially in one line, maybe with some comprehension?
The numpy_indexed package contains efficient (nlogn, generally) and vectorized solutions to these types of problems:
import numpy_indexed as npi
count = len(npi.intersection(a, b))
Note that this is subtly different than your double loop, discarding duplicate entries in a and b for instance. If you want to retain duplicates in b, this would work:
count = npi.in_(b, a).sum()
Duplicate entries in a could also be handled by doing npi.count(a) and factoring in the result of that; but anyway, I'm just rambling on for illustration purposes, since I imagine the distinction probably does not matter to you.
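A short sketch of both calls (assumes numpy_indexed is installed and behaves as this answer describes, with a and b as defined in the question):
import numpy_indexed as npi
print(len(npi.intersection(a, b)))   # 2: counts the distinct inner arrays shared by a and b
print(npi.in_(b, a).sum())           # 2: counts every item of b found in a, keeping duplicates in b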
Here is a simple way to do it:
a = np.array([ np.array([1,2]), np.array([3,4]), np.array([5,6]), np.array([7,8]), np.array([9,10])])
b = np.array([ np.array([5,6]), np.array([1,2]), np.array([3,192])])
count = np.count_nonzero(
    np.any(np.all(a[:, np.newaxis, :] == b[np.newaxis, :, :], axis=-1), axis=0))
print(count)
>>> 2
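To make the broadcasting easier to follow, here is the same computation unpacked step by step (shapes assume the example arrays above):
eq = a[:, np.newaxis, :] == b[np.newaxis, :, :]   # shape (5, 3, 2): elementwise comparison
row_match = np.all(eq, axis=-1)                   # shape (5, 3): True where a[i] equals b[j]
found_in_a = np.any(row_match, axis=0)            # shape (3,): True where b[j] appears anywhere in a
count = np.count_nonzero(found_in_a)              # 2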
You can do what you want in a one-liner as follows:
count = sum([np.array_equal(x, y) for x, y in product(a, b)])
Explanation
Here's what's happening:
Iterate through the two arrays using itertools.product, which creates an iterator over the cartesian product of the two arrays.
Compare the two arrays in each tuple (x, y) coming from step 1 using np.array_equal.
True counts as 1 when using sum on the list.
Full example:
The final code looks like this:
import numpy as np
from itertools import product
a = np.array([ np.array([1,2]), np.array([3,4]), np.array([5,6]), np.array([7,8]), np.array([9,10])])
b = np.array([ np.array([5,6]), np.array([1,2]), np.array([3,192])])
count = sum([np.array_equal(x,y) for x,y in product(a,b)])
# output: 2
You can convert the rows to dtype = np.void and then use np.in1d on the resulting 1d arrays:
def void_arr(a):
    return np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1])))
b[np.in1d(void_arr(b), void_arr(a))]
array([[5, 6],
       [1, 2]])
If you just want the number of intersections, it's
np.in1d(void_arr(b), void_arr(a)).sum()
2
Note: if there are repeat items in b or a, then np.in1d(void_arr(b), void_arr(a)).sum() likely won't be equal to np.in1d(void_arr(a), void_arr(b)).sum(). I've reversed the order from my original answer to match your question (i.e. how many elements of b are in a?)
For more information, see the third answer here

Numpy multidimensional array slicing

Suppose I have defined a 3x3x3 numpy array with
x = numpy.arange(27).reshape((3, 3, 3))
Now, I can get an array containing the (0,1) element of each 3x3 subarray with x[:, 0, 1], which returns array([ 1, 10, 19]). What if I have the indices (m, n) stored in a tuple and want to retrieve the (m, n) element of each subarray?
For example, suppose that I have t = (0, 1). I tried x[:, t], but it doesn't have the right behaviour - it returns rows 0 and 1 of each subarray. The simplest solution I have found is
x.transpose()[tuple(reversed(t))].transpose()
but I am sure there must be a better way. Of course, in this case, I could do x[:, t[0], t[1]], but that can't be generalised to the case where I don't know how many dimensions x and t have.
You can create the index tuple first:
index = (numpy.s_[:],)+t
x[index]
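A complete runnable sketch of this (reusing the question's x and t; the output matches the x[:, 0, 1] result quoted above):
import numpy
x = numpy.arange(27).reshape((3, 3, 3))
t = (0, 1)
index = (numpy.s_[:],) + t
print(x[index])   # [ 1 10 19], the same as x[:, 0, 1]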
HYRY's solution is correct, but I have always found numpy's r_, c_ and s_ index tricks to be a bit strange looking. So here is the equivalent thing using a slice object:
x[(slice(None),) + t]
The single argument to slice is the stop position (i.e. None, meaning all, in the same way that x[:] is equivalent to x[None:None]).
