Performing an operation on a 2D array using indices from a 1D array - Python

I have the following array in python:
a = np.array([[1,1,1],[1,1,1],[1,1,1]])
and the following index array:
b = np.array([0,1,2])
I want to index a using b such that I can subtract 1 from the matching row/column and get the following result:
[[0,1,1],[1,0,1],[1,1,0]]
I can do it using a loop, but I wanted to know if there is a "non-loop" way of doing it:
for i in range(len(b)):
    a[i][b[i]] = a[i][b[i]] - 1

It looks like there is some confusion about how to handle this.
You want simple integer-array (fancy) indexing:
a[np.arange(len(a)), b] -= 1
Output:
array([[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]])
Output for b = np.array([2,0,1])
array([[1, 1, 0],
       [0, 1, 1],
       [1, 0, 1]])
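For reference, here is the same indexing as a self-contained snippet; nothing is assumed beyond the arrays from the question:
import numpy as np

a = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
b = np.array([0, 1, 2])

# Pair each row index i with the column index b[i], then subtract 1
# at every (i, b[i]) position in one vectorized operation.
a[np.arange(len(a)), b] -= 1
print(a)
# [[0 1 1]
#  [1 0 1]
#  [1 1 0]]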

Your code produces output as follows:
a = np.array([[1,1,1],[1,1,1],[1,1,1]])
b = np.array([0,1,2])
for i in range(len(b)):
    a[i][b[i]] = a[i][b[i]] - 1
Output:
array([[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]])
This can be done in a non-loopy way as follows:
a[np.arange(len(b)),b] -= 1
print(a)
Output:
array([[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]])


Python NumPy: assign values to a 3-D array according to a 2-D array's values

Let's say I have an array A whose shape is (2, 3) and whose values are in {0, 1, 2, 3}, and another array B whose shape is (2, 3, 4).
Goal: according to each position in A and its value, add 1 at the corresponding place in B, without using a loop. Maybe numpy.where? Is it possible?
Example:
A = [[0, 1, 3], [2, 1, 0]]
B = np.zeros((2, 3, 4))
Something like what I'm looking for:
B = [[[1, 0, 0, 0]
      [0, 1, 0, 0]
      [0, 0, 0, 1]]
     [[0, 0, 1, 0]
      [0, 1, 0, 0]
      [1, 0, 0, 0]]]
Furthermore, if a value in A is NaN, what will happen? Can we just do nothing in that case?
Check out this code:
Method-1
B[0,[0,1,2], A[0]] = 1
B[1,[0,1,2], A[1]] = 1
Method-2
import numpy as np
A = [[0, 1, 3],[2, 1, 0]]
B = np.zeros((2, 3, 4))
for i, j in zip(range(len(A)), A):
    for k, l in zip(range(len(j)), j):
        B[i][k][l] = 1
print(B)
I've got another idea: one-hot encoding.
numpy.eye(4)[A]
so that (the one-hot encoded) A has the same shape as B, and then simply:
A + B
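A loop-free generalization of Method-1 above, using index grids that broadcast against A; this is a sketch under the assumption that A holds valid integer indices into B's last axis (it does not handle NaN values):
import numpy as np

A = np.array([[0, 1, 3], [2, 1, 0]])
B = np.zeros((2, 3, 4))

# Row and column index grids broadcast against the values in A,
# so this works for any (m, n) shaped A, not just two rows.
rows = np.arange(A.shape[0])[:, None]   # shape (2, 1)
cols = np.arange(A.shape[1])            # shape (3,)
B[rows, cols, A] += 1
print(B)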

Python pandas: set row and col to zeros if element is zero?

I'm trying to solve the following Python interview question using pandas:
Given a m x n matrix, if an element is 0, set its entire row and column to 0. Do it in-place.
without using enumerate!
Here are some examples:
Example 1
[[1, 1, 1], [1, 0, 1], [1, 1, 1]] # input
[[1, 0, 1], [0, 0, 0], [1, 0, 1]] # output
Example 2
[[0, 1, 2, 0], [3, 4, 5, 2], [1, 3, 1, 5]] # input
[[0, 0, 0, 0], [0, 4, 5, 0], [0, 3, 1, 0]] # output
You can try this:
lst = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
df = pd.DataFrame(lst)
df_result = df.copy(deep=True)
df_result.loc[df.eq(0).any(axis=1)] = 0
df_result.loc[:, df.eq(0).any(axis=0)] = 0
result = df_result.values.tolist()
Output:
[[1, 0, 1], [0, 0, 0], [1, 0, 1]]
Using only built-in Python functions:
# Example data (list)
lst = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
# For each row, if any of the values in the row is 0, replace all the values with 0
# Obs: I'm using a `list comprehension` to make the code shorter
for row in lst:
    if any([value == 0 for value in row]):
        row[:] = [0] * len(row)
Using numpy:
# Import and create the array from the list
import numpy as np
a = np.array(lst)
# Set zeros in-place
a[(a == 0).any(axis=1), :] = 0
Using pandas:
# Import and create the dataframe from the list
import pandas as pd
df = pd.DataFrame(lst)
# Set zeros in-place
df.loc[df.eq(0).any(axis=1), :] = 0
The output is the same for all of them: rows containing at least one zero are entirely set to zero. That is the logic applied in all of the examples here (columns are not touched). Since you're still learning nested lists in Python, I'd recommend continuing with Python's built-in classes, methods, and functions, and then looking at how indexing works in numpy and pandas so you can better understand the code here.
Output:
print(lst)
[[1, 1, 1], [0, 0, 0], [1, 1, 1]]
print(a)
[[1 1 1]
 [0 0 0]
 [1 1 1]]
# ignore the first line and column,
# as they indicate the row and column names, respectively:
print(df)
   0  1  2
0  1  1  1
1  0  0  0
2  1  1  1
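For completeness, a minimal numpy sketch that also zeroes the columns, as the interview question asks; the only assumption is that both masks are computed before mutating the array:
import numpy as np

lst = [[0, 1, 2, 0], [3, 4, 5, 2], [1, 3, 1, 5]]
a = np.array(lst)

# Compute the row and column masks before writing any zeros, otherwise
# the zeros written in the first step would contaminate the second mask.
row_mask = (a == 0).any(axis=1)
col_mask = (a == 0).any(axis=0)
a[row_mask, :] = 0
a[:, col_mask] = 0
print(a.tolist())
# [[0, 0, 0, 0], [0, 4, 5, 0], [0, 3, 1, 0]]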

Customize iterating over numpy matrix

I'm using Python 3.x and I want to create an iterator that lets me traverse a matrix from cell [N, 0] to [0, N].
I don't want to use index magic, so I tried np.nditer, but it isn't enough for this.
a = np.matrix(np.random.randint(0, 3, (3, 3)))
>>> matrix([[0, 0, 1],
            [1, 1, 2],
            [1, 2, 2]])
it = np.nditer(a, flags=['f_index'])
for i in range(a.size):
    print(it[0])
    it.iternext()
>>> 0 0 1 1 1 2 1 2 2
I want to get the following:
1,2,2,1,1,2,0,0,1
Is it possible using iterators of some kind?
In [29]: arr = np.array([[0,0,1],[1,1,2],[1,2,2]])
In [30]: arr[::-1,:]
Out[30]:
array([[1, 2, 2],
       [1, 1, 2],
       [0, 0, 1]])
In [31]: arr[::-1,:].ravel()
Out[31]: array([1, 2, 2, 1, 1, 2, 0, 0, 1])
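If you specifically want an iterator rather than a new array, a minimal sketch along the same lines simply feeds the row-reversed view to the array's .flat iterator:
import numpy as np

arr = np.array([[0, 0, 1], [1, 1, 2], [1, 2, 2]])

# arr[::-1, :] is a zero-copy view with the rows reversed; iterating over
# its .flat attribute yields the elements lazily in the desired order.
for x in arr[::-1, :].flat:
    print(x, end=' ')
# 1 2 2 1 1 2 0 0 1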

Python NumPy: manipulating two matrices

I have 2 CSV files of the same size. The values are 1s and 0s.
I need to loop over the 2 files (matrices) and create a new matrix using the following logic:
if the matrix A value = 1 and the matrix B value = 1, then the result value is 0;
if 1 and 0, then 0;
if 0 and 0, then 0.
A = [
    [1, 0, 1],
    [1, 1, 1]
]
B = [
    [1, 0, 0],
    [1, 0, 0]
]
=>
C = [
    [0, 0, 1],
    [0, 1, 1]
]
I know that NumPy is used to manipulate matrices and arrays, but I'm stuck on how to do this properly.
Here is one way to get your desired output, though I think the logic you described is not quite what you meant. This outputs an array with 1 where the two matrices differ from one another and 0 where they are alike.
import numpy as np

A = np.array([
    [1, 0, 1],
    [1, 1, 1]
])
B = np.array([
    [1, 0, 0],
    [1, 0, 0]
])
C = (A != B).astype('int')
array([[0, 0, 1],
       [0, 1, 1]])
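Since the matrices come from CSV files, here is a minimal end-to-end sketch; the file names a.csv, b.csv and c.csv are hypothetical, and the files are assumed to contain comma-delimited 0/1 values:
import numpy as np

# Hypothetical file names; adjust the delimiter/dtype to your files.
A = np.loadtxt('a.csv', delimiter=',', dtype=int)
B = np.loadtxt('b.csv', delimiter=',', dtype=int)

# 1 wherever the two matrices differ, 0 where they agree.
C = (A != B).astype(int)
np.savetxt('c.csv', C, fmt='%d', delimiter=',')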

What is a faster way to get the location of unique rows in numpy

I have a list of unique rows and another larger array of data (called test_rows in example). I was wondering if there was a faster way to get the location of each unique row in the data. The fastest way that I could come up with is...
import numpy
uniq_rows = numpy.array([[0, 1, 0],
[1, 1, 0],
[1, 1, 1],
[0, 1, 1]])
test_rows = numpy.array([[0, 1, 1],
[0, 1, 0],
[0, 0, 0],
[1, 1, 0],
[0, 1, 0],
[0, 1, 1],
[0, 1, 1],
[1, 1, 1],
[1, 1, 0],
[1, 1, 1],
[0, 1, 0],
[0, 0, 0],
[1, 1, 0]])
# this gives me the indexes of each group of unique rows
for row in uniq_rows.tolist():
    print row, numpy.where((test_rows == row).all(axis=1))[0]
This prints...
[0, 1, 0] [ 1 4 10]
[1, 1, 0] [ 3 8 12]
[1, 1, 1] [7 9]
[0, 1, 1] [0 5 6]
Is there a better or more numpythonic (not sure if that word exists) way to do this? I was searching for a numpy group function but could not find one. Basically, for any incoming dataset I need the fastest way to get the locations of each unique row in that dataset. The incoming dataset will not always contain every unique row, or the same number of rows.
EDIT:
This is just a simple example. In my application the numbers would not be just zeros and ones; they could be anywhere from 0 to 32000. uniq_rows could have between 4 and 128 rows, and test_rows could have hundreds of thousands of rows.
Numpy
From version 1.13 of numpy you can use numpy.unique on rows, like np.unique(test_rows, return_counts=True, return_index=True, axis=0)
Pandas
df = pd.DataFrame(test_rows)
uniq = pd.DataFrame(uniq_rows)
uniq
   0  1  2
0  0  1  0
1  1  1  0
2  1  1  1
3  0  1  1
Or you could generate the unique rows automatically from the incoming DataFrame
uniq_generated = df.drop_duplicates().reset_index(drop=True)
yields
   0  1  2
0  0  1  1
1  0  1  0
2  0  0  0
3  1  1  0
4  1  1  1
and then look for it
d = dict()
for idx, row in uniq.iterrows():
    d[idx] = df.index[(df == row).all(axis=1)].values
This is about the same as your where method
d
{0: array([ 1, 4, 10], dtype=int64),
1: array([ 3, 8, 12], dtype=int64),
2: array([7, 9], dtype=int64),
3: array([0, 5, 6], dtype=int64)}
There are a lot of solutions here, but I'm adding one with vanilla numpy. In most cases numpy will be faster than list comprehensions and dictionaries, although the array broadcasting may make memory an issue if large arrays are used.
np.where((uniq_rows[:, None, :] == test_rows).all(2))
Wonderfully simple, eh? This returns a tuple of unique row indices and the corresponding test row indices.
(array([0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 3]),
 array([ 1,  4, 10,  3,  8, 12,  7,  9,  0,  5,  6]))
How it works:
(uniq_rows[:, None, :] == test_rows)
Uses array broadcasting to compare each element of test_rows with each row in uniq_rows. This results in a 4x13x3 array. all is used to determine which rows are equal (all comparisons returned true). Finally, where returns the indices of these rows.
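If you need the matches as one array of locations per unique row, a short sketch building on that output; the assumptions are that every unique row occurs at least once in test_rows, and that the first array returned by where is non-decreasing (which it is here, because where scans in C order):
import numpy as np

u_idx, t_idx = np.where((uniq_rows[:, None, :] == test_rows).all(2))
# u_idx is non-decreasing, so splitting t_idx wherever u_idx changes
# gives one array of test-row locations per unique row.
groups = np.split(t_idx, np.flatnonzero(np.diff(u_idx)) + 1)
# [array([ 1,  4, 10]), array([ 3,  8, 12]), array([7, 9]), array([0, 5, 6])]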
With the np.unique from v1.13 (downloaded from the source link in the latest documentation, https://github.com/numpy/numpy/blob/master/numpy/lib/arraysetops.py#L112-L247):
In [157]: aset.unique(test_rows, axis=0,return_inverse=True,return_index=True)
Out[157]:
(array([[0, 0, 0],
[0, 1, 0],
[0, 1, 1],
[1, 1, 0],
[1, 1, 1]]),
array([2, 1, 0, 3, 7], dtype=int32),
array([2, 1, 0, 3, 1, 2, 2, 4, 3, 4, 1, 0, 3], dtype=int32))
In [158]: a,b,c=_
In [159]: c
Out[159]: array([2, 1, 0, 3, 1, 2, 2, 4, 3, 4, 1, 0, 3], dtype=int32)
In [164]: from collections import defaultdict
In [165]: dd = defaultdict(list)
In [166]: for i,v in enumerate(c):
     ...:     dd[v].append(i)
     ...:
In [167]: dd
Out[167]:
defaultdict(list,
            {0: [2, 11],
             1: [1, 4, 10],
             2: [0, 5, 6],
             3: [3, 8, 12],
             4: [7, 9]})
or indexing the dictionary with the unique rows (as hashable tuple):
In [170]: dd = defaultdict(list)
In [171]: for i,v in enumerate(c):
     ...:     dd[tuple(a[v])].append(i)
     ...:
In [172]: dd
Out[172]:
defaultdict(list,
            {(0, 0, 0): [2, 11],
             (0, 1, 0): [1, 4, 10],
             (0, 1, 1): [0, 5, 6],
             (1, 1, 0): [3, 8, 12],
             (1, 1, 1): [7, 9]})
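With a released NumPy (1.13 or later) the same grouping can be written without pulling in the development source; a minimal sketch using np.unique with axis=0 and return_inverse:
import numpy as np
from collections import defaultdict

# Unique rows of test_rows, plus the index of each original row's
# unique representative (the inverse mapping).
uniq, inv = np.unique(test_rows, axis=0, return_inverse=True)

dd = defaultdict(list)
for i, v in enumerate(inv):
    dd[tuple(uniq[v])].append(i)
# {(0, 0, 0): [2, 11], (0, 1, 0): [1, 4, 10], (0, 1, 1): [0, 5, 6], ...}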
This will do the job:
import numpy as np
uniq_rows = np.array([[0, 1, 0],
[1, 1, 0],
[1, 1, 1],
[0, 1, 1]])
test_rows = np.array([[0, 1, 1],
[0, 1, 0],
[0, 0, 0],
[1, 1, 0],
[0, 1, 0],
[0, 1, 1],
[0, 1, 1],
[1, 1, 1],
[1, 1, 0],
[1, 1, 1],
[0, 1, 0],
[0, 0, 0],
[1, 1, 0]])
indices = np.where(np.sum(np.abs(np.repeat(uniq_rows, len(test_rows), axis=0) - np.tile(test_rows, (len(uniq_rows), 1))), axis=1) == 0)[0]
loc = indices // len(test_rows)
indices = indices - loc * len(test_rows)
res = [[] for i in range(len(uniq_rows))]
for i in range(len(indices)):
    res[loc[i]].append(indices[i])
print(res)
[[1, 4, 10], [3, 8, 12], [7, 9], [0, 5, 6]]
This will work in all cases, including those in which not all of the rows in uniq_rows are present in test_rows. However, if you somehow know ahead of time that all of them are present, you could replace the part
res = [[] for i in range(len(uniq_rows))]
for i in range(len(indices)):
    res[loc[i]].append(indices[i])
with just this line:
res=np.split(indices,np.where(np.diff(loc)>0)[0]+1)
Thus avoiding loops entirely.
Not very 'numpythonic', but for a bit of an upfront cost, we can make a dict with the keys as a tuple of your row, and a list of indices:
test_rowsdict = {}
for i, j in enumerate(test_rows):
    test_rowsdict.setdefault(tuple(j), []).append(i)
test_rowsdict
{(0, 0, 0): [2, 11],
(0, 1, 0): [1, 4, 10],
(0, 1, 1): [0, 5, 6],
(1, 1, 0): [3, 8, 12],
(1, 1, 1): [7, 9]}
Then you can filter based on your uniq_rows, with a fast dict lookup: test_rowsdict[tuple(row)]:
out = []
for i in uniq_rows:
    out.append((i, test_rowsdict.get(tuple(i), [])))
For your data, I get 16us for just the lookup, and 66us for building and looking up, versus 95us for your np.where solution.
Approach #1
Here's one approach, not sure about the level of "NumPythonic-ness" though to such a tricky problem -
def get1Ds(a, b): # Get 1D views of each row from the two inputs
    # check that casting to void will create equal size elements
    assert a.shape[1:] == b.shape[1:]
    assert a.dtype == b.dtype
    # compute dtypes
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    # convert to 1d void arrays
    a = np.ascontiguousarray(a)
    b = np.ascontiguousarray(b)
    a_void = a.reshape(a.shape[0], -1).view(void_dt).ravel()
    b_void = b.reshape(b.shape[0], -1).view(void_dt).ravel()
    return a_void, b_void
def matching_row_indices(uniq_rows, test_rows):
    A, B = get1Ds(uniq_rows, test_rows)
    validA_mask = np.in1d(A, B)
    sidx_A = A.argsort()
    validA_mask = validA_mask[sidx_A]
    sidx = B.argsort()
    sortedB = B[sidx]
    split_idx = np.flatnonzero(sortedB[1:] != sortedB[:-1]) + 1
    all_split_indx = np.split(sidx, split_idx)
    match_mask = np.in1d(B, A)[sidx]
    valid_mask = np.logical_or.reduceat(match_mask, np.r_[0, split_idx])
    locations = [e for i, e in enumerate(all_split_indx) if valid_mask[i]]
    return uniq_rows[sidx_A[validA_mask]], locations
Scope(s) of improvement (on performance) :
np.split could be replaced by a for-loop for splitting using slicing.
np.r_ could be replaced by np.concatenate.
Sample run -
In [331]: unq_rows, idx = matching_row_indices(uniq_rows, test_rows)
In [332]: unq_rows
Out[332]:
array([[0, 1, 0],
       [0, 1, 1],
       [1, 1, 0],
       [1, 1, 1]])
In [333]: idx
Out[333]: [array([ 1,  4, 10]), array([0, 5, 6]), array([ 3,  8, 12]), array([7, 9])]
Approach #2
Another approach, which avoids the setup overhead of the previous one while still making use of get1Ds from it, would be -
A, B = get1Ds(uniq_rows, test_rows)
idx_group = []
for row in A:
    idx_group.append(np.flatnonzero(B == row))
The numpy_indexed package (disclaimer: I am its author) was created to solve problems of this kind in an elegant and efficient manner:
import numpy_indexed as npi
indices = np.arange(len(test_rows))
unique_test_rows, index_groups = npi.group_by(test_rows, indices)
If you don't care about the indices of all rows, but only those present in test_rows, npi has a bunch of simple ways of tackling that problem too; for instance:
subset_indices = npi.indices(unique_test_rows, unique_rows)
As a side note, it might be useful to take a look at the examples in the npi library; in my experience, most of the time people ask a question of this kind, these grouped indices are just a means to an end, not the end goal of the computation. Chances are that, using the functionality in npi, you can reach that end goal more efficiently, without ever explicitly computing those indices. Do you care to give some more background on your problem?
EDIT: if your arrays are indeed this big, and always consist of a small number of columns with binary values, wrapping them with the following encoding might boost efficiency a lot further still:
def encode(rows):
    return (rows * [[2**i for i in range(rows.shape[1])]]).sum(axis=1, dtype=np.uint8)
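A short usage sketch of that encoding (assumption: at most 8 binary columns, so the packed values fit in uint8):
import numpy as np

def encode(rows):
    # Pack each binary row into one integer: column i contributes 2**i.
    return (rows * [[2**i for i in range(rows.shape[1])]]).sum(axis=1, dtype=np.uint8)

test_rows = np.array([[0, 1, 1], [0, 1, 0], [1, 1, 0]])
print(encode(test_rows))   # [6 2 3]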
