How to insert a sequence with slight modification in a NumPy array? - python

I have a fixed list and a fixed-size NumPy array.
data = [10,20,30]
arr = np.zeros(9)
I want to insert the data into the NumPy array arr with some slight modification so that the expected output looks like this:
arr = [10, 20, 30, 9, 22, 32, 13, 16, 28]
The difference between the values can be in the range of (-5, 5).
My attempt was:
import random
import numpy as np

data = [10, 20, 30]
dataArr = np.zeros(9)
for i in range(9):
    for j in data:
        dataArr[3*i:3*(i+1)] = random.randint(int(j - 5), int(j + 5))
dataArr
but it gives me this output:
array([34., 34., 34., 33., 33., 33., 35., 35., 35.])
can somebody please help?

When you do
dataArr[3*i:3*(i+1)] = value
you are setting the entire slice to value. In fact, your indices even go beyond the range of dataArr, even though this doesn't raise an exception.
See after execution :
print(dataArr[3*i:3*(i+1)])
# output:
# []
Iterate over the values of data and set the corresponding values in dataArr like this:
nbOfRepetitions = 3
dataArr = np.zeros(len(data)*nbOfRepetitions)
for i in range(len(data)):
    dataArr[i] = data[i]
    for j in range(1, nbOfRepetitions):
        dataArr[i+len(data)*j] = random.randint(int(data[i] - 5), int(data[i] + 5))
Where nbOfRepetitions is the number of times you want data to appear in dataArr (this includes the non-modified copy at the start).
This gives the expected result.
array([10., 20., 30., 12., 18., 30., 10., 24., 28.])
Edited to generalize for different sizes for data and dataArr.

If you want to skip the first N items that are already in your data array, you can change your inner loop and the indices you access to something like:
import random
import numpy as np

MAX_LENGTH = 9
data = [10, 20, 30]
dataArr = np.zeros(MAX_LENGTH)
for i in range(MAX_LENGTH):
    for j in range(len(data), MAX_LENGTH):
        dataArr[j] = random.randint((min(data) - 5), (max(data) + 5))
dataArr[0:len(data)] = data
# output:
array([10., 20., 30., 28., 31., 17., 30., 30., 10.])

Other than the above solutions, this can also be one more way:
import numpy as np
(
    (np.random.randint(10, size=(3, 3)) - 5) +
    np.array([10, 20, 30])
).reshape(-1)
First get a 3x3 matrix of random integers from 0-9 and subtract 5 to get offsets in the range -5 to 4.
Before the addition, np.array([10, 20, 30]) is broadcast along axis 0, as if it were np.array([10, 20, 30])[np.newaxis, :], and at the end the result is flattened.
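As a quick check of that broadcasting step, here is a small sketch I added (not part of the original answer):
import numpy as np

offsets = np.random.randint(10, size=(3, 3)) - 5  # each entry in -5..4
base = np.array([10, 20, 30])                     # added to every row of offsets via broadcasting
result = (offsets + base).reshape(-1)
print(result.shape)  # (9,)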

You need to set up your array to use integer types:
dataArr = np.zeros(9, dtype=int)
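A minimal illustration (my addition) of what the dtype changes, assuming the values are filled in as in the question:
import numpy as np

print(np.zeros(3).dtype)             # float64 -> stored values print as 10., 20., ...
print(np.zeros(3, dtype=int).dtype)  # platform integer (usually int64) -> values print as 10, 20, ...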


Fastest way to find nearest neighbours in NumPy array

What is the fastest way to perform operations on adjacent elements of an m x n array within distance $l$ (where m, n are large)? If this were an image, it would equate to an operation on the surrounding pixels. To make things clearer, I've created a new array with the neighbours of the corresponding source.
Given some array like
x = [[1,2,3],
[4,5,6],
[7,8,9]]
if I were to take the [0,0] element, and want the surrounding elements at $l$=1, I'd need the [0,1] and [1,0] elements (namely 2 and 4). The desired output would look something like this
y = [[[2,4], [1,3,5], [2,6]],
[[1,5,7], [4,6,2,8], [3,9,5]],
[[4,8], [7,5,9], [8,6]]]
I've tried playing around with kdTree from scipy.spatial, and am aware of https://stackoverflow.com/a/45742628/20451990, but as far as I can tell this is actually finding the nearest data points, whereas I want to find the nearest array elements. I guess it could be naively done by iterating through, but that is very slow...
The end goal here is to generate combinations of nearby array elements which I will be taking the product of. For the example above this could be
[[1*2, 1*4], [2*1, 2*3, 2*5], [3*2, 3*6]],...]
Key takeaways
With numba, it is possible to get roughly 690x faster algorithms than with naïve python code with for-loops and list appends.
With numba, functions have signatures; you explicitly declare the datatypes.
Avoid memory (re-)allocations. Try to allocate memory for any arrays in advance. Reuse the data containers whenever possible (See: cell_result in the numbafied process_cell())
Numba is not super handy with classes (at least, OOP style code), stuff which is dynamically typed, containers with mixed types or containers changing in size. Prefer simple functions and typed structures with defined size. See also: Supported Python features
Numba likes for-loops, and they're fast!
Prewords
You asked for the fastest way to calculate this. I had no baseline, so I first created a pure python for-loop solution as a baseline. Then, I used numba to make the code run fast. It is most probably not the fastest implementation, but at least it is way faster than the naïve pure python for-loop approach.
So, if you are not familiar with numba this is a good way to learn about it a bit :)
Used test data
I use two pieces of test data. First, the simple array given in the question. I call this myarr, and it is used for easy comparison of the output:
import numpy as np
myarr = np.array(
[
[1, 2, 3],
[4, 5, 6],
[7, 8, 9],
],
dtype=np.float32,
)
The second dataset is for benchmarking. You mentioned that the arrays will be of size 30 x 30 and the distance I will be less than 4.
arr_large = np.arange(1, 30 * 30 + 1, 1, dtype=np.float32).reshape(30, 30)
In other words, the arr_large is a 30 x 30 2d-array:
>>> arr_large
array([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.,
12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22.,
23., 24., 25., 26., 27., 28., 29., 30.],
...
[871., 872., 873., 874., 875., 876., 877., 878., 879., 880., 881.,
882., 883., 884., 885., 886., 887., 888., 889., 890., 891., 892.,
893., 894., 895., 896., 897., 898., 899., 900.]], dtype=float32)
I specified the dtype because specifying the datatype is needed at the optimization step. For the pure python solution this is of course not necessary at all.
Baseline solution: Pure python with for-loops
I implemented the baseline solution with a python class and for-loops. The output from it looks like this (source for NeighbourProcessor below):
Example output with 3 x 3 input array (I=1)
n = NeighbourProcessor()
output = n.process(myarr, max_distance=1)
The output is then
>>> output
{(0, 0): [2, 4],
(0, 1): [2, 6, 10],
(0, 2): [6, 18],
(1, 0): [4, 20, 28],
(1, 1): [10, 20, 30, 40],
(1, 2): [18, 30, 54],
(2, 0): [28, 56],
(2, 1): [40, 56, 72],
(2, 2): [54, 72]}
which is same as
{(0, 0): [1 * 2, 1 * 4],
(0, 1): [2 * 1, 2 * 3, 2 * 5],
(0, 2): [3 * 2, 3 * 6],
(1, 0): [4 * 1, 4 * 5, 4 * 7],
(1, 1): [5 * 2, 5 * 4, 5 * 6, 5 * 8],
(1, 2): [6 * 3, 6 * 5, 6 * 9],
(2, 0): [7 * 4, 7 * 8],
(2, 1): [8 * 5, 8 * 7, 8 * 9],
(2, 2): [9 * 6, 9 * 8]}
This is basically what was asked in the question; the target output was
[[1*2, 1*4], [2*1, 2*3, 2*5], [3*2, 3*6]],...]
Here I used a dictionary with (row, column) as the key because that way you can more easily find the output for each cell.
Baseline performance
For the largest input of 30 x 30, and largest distance (I=4), the calculation takes about 0.188 seconds on my laptop:
>>> %timeit n.process(arr_large, max_distance=4)
188 ms ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Code for NeighbourProcessor
import math
import numpy as np

class NeighbourProcessor:
    def __init__(self):
        self.arr = None

    def process(self, arr, max_distance=1):
        self.arr = arr
        output = dict()
        rows, columns = self.arr.shape
        for current_row in range(rows):
            for current_col in range(columns):
                cell_result = self.process_cell(current_row, current_col, max_distance)
                output[(current_row, current_col)] = cell_result
        return output

    def row_col_is_within_array(self, row, col):
        if row < 0 or col < 0:
            return False
        if row > self.arr.shape[0] - 1 or col > self.arr.shape[1] - 1:
            return False
        return True

    def distance(self, row, col, current_row, current_col):
        distance_squared = (current_row - row) ** 2 + (current_col - col) ** 2
        return np.sqrt(distance_squared)

    def are_neighbours(self, row, col, current_row, current_col, max_distance):
        if row == current_row and col == current_col:
            return False
        if not self.row_col_is_within_array(row, col):
            return False
        return self.distance(row, col, current_row, current_col) <= max_distance

    def neighbours(self, current_row, current_col, max_distance):
        start_row = math.floor(current_row - max_distance)
        start_col = math.floor(current_col - max_distance)
        end_row = math.ceil(current_row + max_distance)
        end_col = math.ceil(current_col + max_distance)
        for row in range(start_row, end_row + 1):
            for col in range(start_col, end_col + 1):
                if self.are_neighbours(
                    row, col, current_row, current_col, max_distance
                ):
                    yield row, col

    def process_cell(self, current_row, current_col, max_distance):
        cell_output = []
        current_cell_value = self.arr[current_row][current_col]
        for row, col in self.neighbours(current_row, current_col, max_distance):
            neighbour_cell_value = self.arr[row][col]
            cell_output.append(current_cell_value * neighbour_cell_value)
        return cell_output
Short explanation
So what NeighbourProcessor.process does is go through the rows and columns of the input array, starting from (0,0), the top-left corner, and processing from left to right, top to bottom, until the bottom-right corner, which is (n_rows, n_columns), each time marking the cell as the current cell, (current_row, current_column).
Each current cell is processed in process_cell. That forms an iterator with neighbours(), which iterates over all the neighbours within the maximum distance I from the current cell. You can check how the logic goes in are_neighbours.
Faster solution: Using numba and memory pre-allocation
Now I will make a functions-only version with numba and try to make the processing as fast as possible. It is also possible to use classes in numba, but they are still a bit more experimental and complex, and this problem can be solved with functions only. The readability of the code suffers a bit, but that's the price we sometimes pay for speed optimization.
I'll start with the process function. Now it will have to create a three-dimensional array instead of a dict. The reason we want to create the array ahead of time is that memory allocation is a costly process and we want to do it exactly once. So, instead of having this as output for myarr:
# output[(row,column)]
#
output[(0,0)] # [2,4]
output[(0,1)] # [2, 6, 10]
#..etc
I want constant-sized output:
# output[row][column]
#
output[0][0] # [2, 4, nan, nan]
output[0][1] # [2, 6, 10, nan]
#..etc
Notice that after all the "pairs", the output is np.nan (not a number). Any postprocessing script must then simply ignore the extra nans.
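For instance, here is a small sketch I added (not part of the original answer) of how a postprocessing step could drop that padding, assuming output is the 3d array returned by the numbafied process further below:
import numpy as np

cell = output[0][1]            # e.g. array([ 2.,  6., 10., nan], dtype=float32)
valid = cell[~np.isnan(cell)]  # array([ 2.,  6., 10.], dtype=float32)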
Solving for the required size for the pre-allocated array
How do I know the size of the third dimension, i.e. the number of neighbours for a given max distance I? Well, I don't. It seems this is quite a complicated problem. See, for example, this, this or the Gauss circle problem in Wikipedia. Nevertheless, I can quite easily calculate an upper bound for the number of neighbours. In the following I assume that a cell is a neighbour if and only if the distance between the middle points of the cells is less than or equal to I. If you create sketches with pen and paper, you will notice that when you increase the max distance, the maximum number of neighbours grows as:
I = 1 -> max_number_neighbours = 4
I = 2 -> max_number_neighbours = 12
I = 3 -> max_number_neighbours = 28
Here is an example sketch with a 10 x 10 2d-array and distance I=3: when the current cell is (4,5), the number of neighbours must be less than or equal to 28.
This pattern is represented as a function of max distance (I): (2*I-1)**2 + 4 - 1, or
n_third_dimension = max_number_neighbours = (2*I-1)**2 + 3
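To sanity-check that bound, here is a small brute-force sketch I added (not part of the original answer) that counts the neighbour offsets of an interior cell and compares them with the formula:
import math

def upper_bound(I):
    # the bound used for the third dimension: (2*I - 1)**2 + 3
    return (2 * math.ceil(I) - 1) ** 2 + 3

def exact_max_neighbours(I):
    # brute-force count of offsets (dr, dc) != (0, 0) with dr**2 + dc**2 <= I**2
    r = math.ceil(I)
    return sum(
        1
        for dr in range(-r, r + 1)
        for dc in range(-r, r + 1)
        if (dr, dc) != (0, 0) and dr * dr + dc * dc <= I * I
    )

for I in (1, 2, 3, 4):
    print(I, exact_max_neighbours(I), "<=", upper_bound(I))  # 4<=4, 12<=12, 28<=28, 48<=52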
Refactoring the code to work with numba
We start with creating the function signature of the entry point. In this case, we create a function process with the function signature:
@numba.njit("f4[:,:,:](f4[:,:], f4)")
def process(arr, max_distance):
    ...
See the docs for the other available types. The f4[:,:] just means that the input is a 2d-array of float32, and f4[:,:,:](...) means that the function output is a 3d-array of float32. Next, we create the output with the formula we invented above. Here is one part of the magic: memory pre-allocation with np.empty:
n_third_dimension = (2 * math.ceil(max_distance) - 1) ** 2 + 3
output = np.empty((*arr.shape, n_third_dimension), dtype=np.float32)
cell_result = np.empty(n_third_dimension, dtype=np.float32)
Numbafied code
I will not walk through the rest of the code hand-in-hand, but you can see below that it is a slightly modified version of the pure python for-loop baseline.
import math
import numba
import numpy as np

@numba.njit("f4(i4,i4,i4,i4)")
def distance(row, col, current_row, current_col):
    distance_squared = (current_row - row) ** 2 + (current_col - col) ** 2
    return np.sqrt(distance_squared)

@numba.njit("boolean(i4,i4, i4,i4)")
def row_col_is_within_array(
    row,
    col,
    arr_rows,
    arr_cols,
):
    if row < 0 or col < 0:
        return False
    if row > arr_rows - 1 or col > arr_cols - 1:
        return False
    return True

@numba.njit("boolean(i4,i4,i4,i4,f4,i4,i4)")
def are_neighbours(
    neighbour_row,
    neighbour_col,
    current_row,
    current_col,
    max_distance,
    arr_rows,
    arr_cols,
):
    if neighbour_row == current_row and neighbour_col == current_col:
        return False
    if not row_col_is_within_array(
        neighbour_row,
        neighbour_col,
        arr_rows,
        arr_cols,
    ):
        return False
    return (
        distance(neighbour_row, neighbour_col, current_row, current_col) <= max_distance
    )

@numba.njit("f4[:](f4[:,:], f4[:], i4,i4,i4,f4)")
def process_cell(
    arr, cell_result, current_row, current_col, n_third_dimension, max_distance
):
    for i in range(n_third_dimension):
        cell_result[i] = np.nan
    current_cell_value = arr[current_row][current_col]
    # Potential cell neighbour area
    start_row = math.floor(current_row - max_distance)
    start_col = math.floor(current_col - max_distance)
    end_row = math.ceil(current_row + max_distance)
    end_col = math.ceil(current_col + max_distance)
    arr_rows, arr_cols = arr.shape
    cell_pointer = 0
    for neighbour_row in range(start_row, end_row + 1):
        for neighbour_col in range(start_col, end_col + 1):
            if are_neighbours(
                neighbour_row,
                neighbour_col,
                current_row,
                current_col,
                max_distance,
                arr_rows,
                arr_cols,
            ):
                neighbour_cell_value = arr[neighbour_row][neighbour_col]
                cell_result[cell_pointer] = current_cell_value * neighbour_cell_value
                cell_pointer += 1
    return cell_result

@numba.njit("f4[:,:,:](f4[:,:], f4)")
def process(arr, max_distance):
    n_third_dimension = (2 * math.ceil(max_distance) - 1) ** 2 + 3
    output = np.empty((*arr.shape, n_third_dimension), dtype=np.float32)
    cell_result = np.empty(n_third_dimension, dtype=np.float32)
    rows, columns = arr.shape
    for current_row in range(rows):
        for current_col in range(columns):
            cell_result = process_cell(
                arr,
                cell_result,
                current_row,
                current_col,
                n_third_dimension,
                max_distance,
            )
            output[current_row][current_col][:] = cell_result
    return output
Example output
>>> output = process(myarr, max_distance=1.0)
>>> output
array([[[ 2.,  4., nan, nan],
        [ 2.,  6., 10., nan],
        [ 6., 18., nan, nan]],

       [[ 4., 20., 28., nan],
        [10., 20., 30., 40.],
        [18., 30., 54., nan]],

       [[28., 56., nan, nan],
        [40., 56., 72., nan],
        [54., 72., nan, nan]]], dtype=float32)
>>> output[0]
array([[ 2.,  4., nan, nan],
       [ 2.,  6., 10., nan],
       [ 6., 18., nan, nan]], dtype=float32)
>>> output[0][1]
array([ 2.,  6., 10., nan], dtype=float32)
# Above is the same as target: [2 * 1, 2 * 3, 2 * 5]
Speed of the numbafied code and closing words
The baseline approach's execution time was 188 ms. Now it is 271 µs. That is only 0.00144 times what the original code took! (A 99.85% reduction in execution time. Some would say 693x faster.)
>>> %timeit process(arr_large, max_distance=4.0)
271 µs ± 2.88 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Note that you might want to calculate the distance differently, add weighting, or use some more complex logic, aggregation functions, etc. This could still be further optimized a bit, for example by creating a better estimate for the maximum number of neighbours. Have fun with numba, and I hope you learned something! :)
Bonus tip: there is also ahead-of-time compilation in numba, which you can use to make even the first function call fast!
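A related, lighter-weight option (my addition, not part of the original answer) is to cache the JIT-compiled machine code on disk with cache=True, so repeated runs of the script skip recompilation; full ahead-of-time compilation lives in numba's pycc module.
import numba

# Sketch: cache the compiled code on disk so later runs avoid recompiling.
@numba.njit(cache=True)
def cached_add(a, b):
    return a + b

print(cached_add(1.0, 2.0))  # first call compiles (and caches); later runs reuse the cache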

Add 2 column vector to ndarray [duplicate]

I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.
This code works, and illustrates my goals:
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
    shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
    permutation = numpy.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b
For example:
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
[1, 1],
[3, 3]]), array([2, 1, 3]))
However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large.
Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too.
One other thought I had was this:
def shuffle_in_unison_scary(a, b):
    rng_state = numpy.random.get_state()
    numpy.random.shuffle(a)
    numpy.random.set_state(rng_state)
    numpy.random.shuffle(b)
This works...but it's a little scary, as I see little guarantee it'll continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.
You can use NumPy's array indexing:
def unison_shuffled_copies(a, b):
    assert len(a) == len(b)
    p = numpy.random.permutation(len(a))
    return a[p], b[p]
This will result in creation of separate unison-shuffled arrays.
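For instance, a usage sketch (my addition), reusing the arrays from the question:
import numpy

a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
b = numpy.asarray([1, 2, 3])
a2, b2 = unison_shuffled_copies(a, b)  # rows of a2 still pair with the entries of b2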
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y, random_state=0)
To learn more, see http://scikit-learn.org/stable/modules/generated/sklearn.utils.shuffle.html
Your "scary" solution does not appear scary to me. Calling shuffle() for two sequences of the same length results in the same number of calls to the random number generator, and these are the only "random" elements in the shuffle algorithm. By resetting the state, you ensure that the calls to the random number generator will give the same results in the second call to shuffle(), so the whole algorithm will generate the same permutation.
If you don't like this, a different solution would be to store your data in one array instead of two right from the beginning, and create two views into this single array simulating the two arrays you have now. You can use the single array for shuffling and the views for all other purposes.
Example: Let's assume the arrays a and b look like this:
a = numpy.array([[[ 0., 1., 2.],
[ 3., 4., 5.]],
[[ 6., 7., 8.],
[ 9., 10., 11.]],
[[ 12., 13., 14.],
[ 15., 16., 17.]]])
b = numpy.array([[ 0., 1.],
[ 2., 3.],
[ 4., 5.]])
We can now construct a single array containing all the data:
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
# array([[ 0., 1., 2., 3., 4., 5., 0., 1.],
# [ 6., 7., 8., 9., 10., 11., 2., 3.],
# [ 12., 13., 14., 15., 16., 17., 4., 5.]])
Now we create views simulating the original a and b:
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
The data of a2 and b2 is shared with c. To shuffle both arrays simultaneously, use numpy.random.shuffle(c).
In production code, you would of course try to avoid creating the original a and b at all, and create c, a2 and b2 right away.
This solution could be adapted to the case that a and b have different dtypes.
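Here is a minimal sketch of that dtype adaptation (my addition, assuming one record per leading index via a structured dtype):
import numpy as np

a = np.arange(18, dtype=np.float64).reshape(3, 2, 3)
b = np.arange(6, dtype=np.int64).reshape(3, 2)

# One record per leading index; each record holds one "row" of a and one of b.
c = np.empty(len(a), dtype=[("a", a.dtype, a.shape[1:]), ("b", b.dtype, b.shape[1:])])
c["a"], c["b"] = a, b

a2, b2 = c["a"], c["b"]   # views into c with the original shapes and dtypes
np.random.shuffle(c)      # shuffles a2 and b2 in unison, in place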
Very simple solution:
randomize = np.arange(len(x))
np.random.shuffle(randomize)
x = x[randomize]
y = y[randomize]
The two arrays x, y are now both randomly shuffled in the same way.
James wrote an sklearn solution in 2015 which is helpful. But he added a random state variable, which is not needed. In the code below, the random state from numpy is automatically assumed.
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y)
from numpy.random import permutation
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data #numpy array
y = iris.target #numpy array
# Data is currently unshuffled; we should shuffle
# each X[i] with its corresponding y[i]
perm = permutation(len(X))
X = X[perm]
y = y[perm]
Shuffle any number of arrays together, in-place, using only NumPy.
import numpy as np
def shuffle_arrays(arrays, set_seed=-1):
    """Shuffles arrays in-place, in the same order, along axis=0

    Parameters:
    -----------
    arrays : List of NumPy arrays.
    set_seed : Seed value if int >= 0, else seed is random.
    """
    assert all(len(arr) == len(arrays[0]) for arr in arrays)
    seed = np.random.randint(0, 2**(32 - 1) - 1) if set_seed < 0 else set_seed

    for arr in arrays:
        rstate = np.random.RandomState(seed)
        rstate.shuffle(arr)
And can be used like this
a = np.array([1, 2, 3, 4, 5])
b = np.array([10,20,30,40,50])
c = np.array([[1,10,11], [2,20,22], [3,30,33], [4,40,44], [5,50,55]])
shuffle_arrays([a, b, c])
A few things to note:
The assert ensures that all input arrays have the same length along their first dimension.
Arrays are shuffled in-place along their first dimension; nothing is returned.
The random seed is kept within the positive int32 range.
If a repeatable shuffle is needed, the seed value can be set.
After the shuffle, the data can be split using np.split or referenced using slices - depending on the application.
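For instance, a usage sketch I added (not part of the original answer), shuffling two arrays with shuffle_arrays and then splitting them:
import numpy as np

a = np.arange(10)
b = np.arange(10) * 10
shuffle_arrays([a, b], set_seed=42)   # both arrays shuffled in unison, in place
a_train, a_test = np.split(a, [8])    # e.g. an 80/20 split after shuffling
b_train, b_test = np.split(b, [8])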
You can make an index array like:
s = np.arange(0, len(a), 1)
then shuffle it:
np.random.shuffle(s)
Now use this s to index your arrays. The same shuffled indices return identically shuffled vectors.
x_data = x_data[s]
x_label = x_label[s]
There is a well-known function that can handle this:
from sklearn.model_selection import train_test_split
X, _, Y, _ = train_test_split(X,Y, test_size=0.0)
Just setting test_size to 0 will avoid splitting and give you shuffled data.
Though it is usually used to split train and test data, it does shuffle them too.
From the documentation:
Split arrays or matrices into random train and test subsets.
Quick utility that wraps input validation and next(ShuffleSplit().split(X, y)) and application to input data into a single call for splitting (and optionally subsampling) data in a oneliner.
This seems like a very simple solution:
import numpy as np

def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    c = np.arange(len(a))
    np.random.shuffle(c)
    return a[c], b[c]

a = np.asarray([[1, 1], [2, 2], [3, 3]])
b = np.asarray([11, 22, 33])
shuffle_in_unison(a, b)
Out[94]:
(array([[3, 3],
[2, 2],
[1, 1]]),
array([33, 22, 11]))
One way in which in-place shuffling can be done for connected lists is using a seed (it could be random) and using numpy.random.shuffle to do the shuffling.
# Set seed to a random number if you want the shuffling to be non-deterministic.
def shuffle(a, b, seed):
    np.random.seed(seed)
    np.random.shuffle(a)
    np.random.seed(seed)
    np.random.shuffle(b)
That's it. This will shuffle both a and b in the exact same way. This is also done in-place which is always a plus.
EDIT: don't use np.random.seed(); use np.random.RandomState instead:
def shuffle(a, b, seed):
    rand_state = np.random.RandomState(seed)
    rand_state.shuffle(a)
    rand_state.seed(seed)
    rand_state.shuffle(b)
When calling it just pass in any seed to feed the random state:
a = [1,2,3,4]
b = [11, 22, 33, 44]
shuffle(a, b, 12345)
Output:
>>> a
[1, 4, 2, 3]
>>> b
[11, 44, 22, 33]
Edit: Fixed code to re-seed the random state
Say we have two arrays: a and b.
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
b = np.array([[9,1,1],[6,6,6],[4,2,0]])
We can first obtain row indices by permuting the first dimension
indices = np.random.permutation(a.shape[0])
[1 2 0]
Then use advanced indexing.
Here we are using the same indices to shuffle both arrays in unison.
a_shuffled = a[indices[:,np.newaxis], np.arange(a.shape[1])]
b_shuffled = b[indices[:,np.newaxis], np.arange(b.shape[1])]
This is equivalent to
np.take(a, indices, axis=0)
[[4 5 6]
[7 8 9]
[1 2 3]]
np.take(b, indices, axis=0)
[[6 6 6]
[4 2 0]
[9 1 1]]
If you want to avoid copying arrays, then I would suggest that instead of generating a permutation list, you go through every element in the array, and randomly swap it to another position in the array
for old_index in range(len(a)):
    new_index = numpy.random.randint(old_index + 1)
    a[old_index], a[new_index] = a[new_index], a[old_index]
    b[old_index], b[new_index] = b[new_index], b[old_index]
This implements the Knuth-Fisher-Yates shuffle algorithm.
Shortest and easiest way in my opinion, use seed:
import random

random.seed(seed)
random.shuffle(x_data)
# reset the same seed to get the identical random sequence and shuffle the y
random.seed(seed)
random.shuffle(y_data)
Most solutions above work; however, if you have column vectors, you have to transpose them first. Here is an example:
def shuffle(self) -> None:
    """
    Shuffles X and Y
    """
    x = self.X.T
    y = self.Y.T
    p = np.random.permutation(len(x))
    self.X = x[p].T
    self.Y = y[p].T
With an example, this is what I'm doing:
from random import shuffle

import numpy as np

combo = []
for i in range(60000):
    combo.append((images[i], labels[i]))

shuffle(combo)

im = []
lab = []
for c in combo:
    im.append(c[0])
    lab.append(c[1])

images = np.asarray(im)
labels = np.asarray(lab)
I extended python's random.shuffle() to take a second arg:
import random

def shuffle_together(x, y):
    assert len(x) == len(y)
    for i in reversed(range(1, len(x))):
        # pick an element in x[:i+1] with which to exchange x[i]
        j = int(random.random() * (i + 1))
        x[i], x[j] = x[j], x[i]
        y[i], y[j] = y[j], y[i]
That way I can be sure that the shuffling happens in-place, and the function is not all too long or complicated.
Just use numpy...
First merge the two input arrays (the 1D array is the labels y and the 2D array is the data x) and shuffle them with NumPy's shuffle method. Finally, split them and return.
import numpy as np
def shuffle_2d(a, b):
    rows = a.shape[0]
    if b.shape != (rows, 1):
        b = b.reshape((rows, 1))
    S = np.hstack((b, a))
    np.random.shuffle(S)
    b, a = S[:, 0], S[:, 1:]
    return a, b

features, samples = 2, 5
x, y = np.random.random((samples, features)), np.arange(samples)
x, y = shuffle_2d(x, y)

How to stack numpy arrays alternately/slicewise along a specific axis?

How can I stack arrays in an alternating fashion? Consider the following example with three arrays:
import numpy as np
one = np.ones((5, 2, 2))
two = np.ones((5, 2, 2))*2
three = np.ones((5, 2, 2))*3
I would like to create a new array result with shape (15, 2, 2) which is formed by alternately taking a slice from each of the given arrays, i.e. the result should look like:
result[0] = one[0]
result[1] = two[0]
result[2] = three[0]
result[3] = one[1]
result[4] = two[1]
result[5] = three[1]
result[6] = one[2]
etc...
The arrays above are just an example to illustrate the question; I am not looking for a way to create this specific result array. What is the easiest way to achieve this, ideally by specifying a stacking axis?
Of course, it is possible to do some loops but it seems rather inconvenient...
You may want to take a look at np.stack(), i.e.:
np.stack([one, two, three], axis=1).reshape(15, 2, 2)
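A quick check I added (not part of the original answer), using the arrays from the question, that the slices do alternate:
import numpy as np

one = np.ones((5, 2, 2))
two = np.ones((5, 2, 2)) * 2
three = np.ones((5, 2, 2)) * 3

result = np.stack([one, two, three], axis=1).reshape(15, 2, 2)
print(result.shape)                                                        # (15, 2, 2)
print(result[0, 0, 0], result[1, 0, 0], result[2, 0, 0], result[3, 0, 0])  # 1.0 2.0 3.0 1.0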
With np.hstack and then reshape (with -1 for the first axis, appended with the lengths along the last two axes for a generic solution) -
np.hstack([one,two,three]).reshape((-1,)+one.shape[1:])
I think you are looking for np.vstack
np.vstack((one,two,three))
Read more about it here np.vstack
With selectable axis:
# example arrays
a,b,c = np.multiply.outer([1,2,3],np.ones((5,2,2)))
# axis
k = 1
np.stack([a,b,c],k+1).reshape(*(-(k==j) or s for j,s in enumerate(a.shape)))
# array([[[1., 1.],
# [2., 2.],
# [3., 3.],
# [1., 1.],
# [2., 2.],
# [3., 3.]],
#
# [[1., 1.],
...

Finding the max and min in a tuple of tuples

I'm new to python and having some problems finding the minimum and maximum values for a tuple of tuples. I need them to normalise my data. So, basically, I have a list that is a row of 13 numbers, each representing something. Each number makes a column in a list, and I need the max and min for each column. I tried indexing/iterating through but keep getting an error of
max_j = max(j)
TypeError: 'float' object is not iterable
any help would be appreciated!
The code is below (assuming data_set_tup is a tuple of tuples, e.g. ((1,3,4,5,6,7,...), (5,6,7,3,6,73,2,...), ..., (3,4,5,6,3,2,2,...))). I also want to make a new list using the normalised values.
normal_list = []
for i in data_set_tup:
    for j in i[1:]:  # first column doesn't need to be normalised
        max_j = max(j)
        min_j = min(j)
        normal_j = (j-min_j)/(max_j-min_j)
        normal_list.append(normal_j)
normal_tup = tuple(normal_list)
You can transpose rows to columns and vice versa with zip(*...). (Use list(zip(*...)) in Python 3)
cols = zip(*data_set_tup)
normal_cols = [cols[0]]  # first column doesn't need to be normalised
for j in cols[1:]:
    max_j = max(j)
    min_j = min(j)
    normal_cols.append(tuple((k - min_j)/(max_j - min_j) for k in j))
normal_list = zip(*normal_cols)
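A Python 3 variant I added (not part of the original answer): zip() returns an iterator there, so materialize it first. The data_set_tup below is a made-up example.
data_set_tup = ((1, 3, 4), (5, 6, 7), (3, 4, 5))   # example rows
cols = list(zip(*data_set_tup))                    # transpose rows to columns
normal_cols = [cols[0]]                            # first column left as-is
for j in cols[1:]:
    max_j, min_j = max(j), min(j)
    normal_cols.append(tuple((k - min_j) / (max_j - min_j) for k in j))
normal_list = list(zip(*normal_cols))              # transpose back to rows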
This really sounds like a job for the non-builtin numpy module, or maybe the pandas module, depending on your needs.
Adding an extra dependency on your application should not be done lightly, but if you do a lot of work on matrix-like data, then your code will likely be both faster and more readable if you use one of the above modules throughout your application.
I do not recommend converting a list of lists to a numpy array and back again just to get this single result -- it's better to use the pure python method of Jannes' answer. Also, seeing that you're a python beginner, numpy may be overkill right now. But I think your question deserves an answer pointing out that this is an option.
Here's a step-by-step console illustration of how this would work in numpy:
>>> import numpy as np
>>> a = np.array([[1,3,4,5,6],[5,6,7,3,6],[3,4,5,6,3]], dtype=float)
>>> a
array([[ 1., 3., 4., 5., 6.],
[ 5., 6., 7., 3., 6.],
[ 3., 4., 5., 6., 3.]])
>>> min = np.min(a, axis=0)
>>> min
array([ 1., 3., 4., 3., 3.])
>>> max = np.max(a, axis=0)
>>> max
array([ 5., 6., 7., 6., 6.])
>>> normalized = (a - min) / (max - min)
>>> normalized
array([[ 0. , 0. , 0. , 0.66666667, 1. ],
[ 1. , 1. , 1. , 0. , 1. ],
[ 0.5 , 0.33333333, 0.33333333, 1. , 0. ]])
So in actual code:
import numpy as np
def normalize_by_column(a):
    min = np.min(a, axis=0)
    max = np.max(a, axis=0)
    return (a - min) / (max - min)
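A quick usage sketch (my addition), with the same array as in the console session and the function defined above:
import numpy as np

a = np.array([[1, 3, 4, 5, 6], [5, 6, 7, 3, 6], [3, 4, 5, 6, 3]], dtype=float)
print(normalize_by_column(a))   # each column rescaled to the [0, 1] range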
We have nested_tuple = ((1, 2, 3), (4, 5, 6), (7, 8, 9)).
First of all we need to flatten it. Pythonic way:
flat_tuple = [x for row in nested_tuple for x in row]
Output: [1, 2, 3, 4, 5, 6, 7, 8, 9] # it's a list
Convert it to a tuple: tuple(flat_tuple); get the max value: max(flat_tuple); get the min value: min(flat_tuple).

How to normalize a 2-dimensional numpy array in python less verbose?

Given a 3 times 3 numpy array
a = numpy.arange(0,27,3).reshape(3,3)
# array([[ 0, 3, 6],
# [ 9, 12, 15],
# [18, 21, 24]])
To normalize the rows of the 2-dimensional array I thought of
row_sums = a.sum(axis=1) # array([ 9, 36, 63])
new_matrix = numpy.zeros((3,3))
for i, (row, row_sum) in enumerate(zip(a, row_sums)):
    new_matrix[i,:] = row / row_sum
There must be a better way, isn't there?
Perhaps to clarify: by normalizing I mean that the sum of the entries per row must be one. But I think that will be clear to most people.
Broadcasting is really good for this:
row_sums = a.sum(axis=1)
new_matrix = a / row_sums[:, numpy.newaxis]
row_sums[:, numpy.newaxis] reshapes row_sums from being (3,) to being (3, 1). When you do a / b, a and b are broadcast against each other.
You can learn more about broadcasting here or even better here.
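A small shape-check sketch I added (not part of the original answer), using the array from the question:
import numpy as np

a = np.arange(0, 27, 3).reshape(3, 3)
row_sums = a.sum(axis=1)
print(row_sums.shape, row_sums[:, np.newaxis].shape)   # (3,) (3, 1)
new_matrix = a / row_sums[:, np.newaxis]
print(new_matrix.sum(axis=1))                          # [1. 1. 1.]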
Scikit-learn offers a function normalize() that lets you apply various normalizations. The "make it sum to 1" is called L1-norm. Therefore:
from sklearn.preprocessing import normalize
matrix = numpy.arange(0,27,3).reshape(3,3).astype(numpy.float64)
# array([[ 0., 3., 6.],
# [ 9., 12., 15.],
# [ 18., 21., 24.]])
normed_matrix = normalize(matrix, axis=1, norm='l1')
# [[ 0. 0.33333333 0.66666667]
# [ 0.25 0.33333333 0.41666667]
# [ 0.28571429 0.33333333 0.38095238]]
Now your rows will sum to 1.
I think this should work,
a = numpy.arange(0,27.,3).reshape(3,3)
a /= a.sum(axis=1)[:,numpy.newaxis]
In case you are trying to normalize each row such that its magnitude is one (i.e. a row's unit length is one or the sum of the square of each element in a row is one):
import numpy as np
a = np.arange(0,27,3).reshape(3,3)
result = a / np.linalg.norm(a, axis=-1)[:, np.newaxis]
# array([[ 0. , 0.4472136 , 0.89442719],
# [ 0.42426407, 0.56568542, 0.70710678],
# [ 0.49153915, 0.57346234, 0.65538554]])
Verifying:
np.sum( result**2, axis=-1 )
# array([ 1., 1., 1.])
I think you can normalize the row elements to sum to 1 with this:
new_matrix = a / a.sum(axis=1, keepdims=1)
And the column normalization can be done with new_matrix = a / a.sum(axis=0, keepdims=1). Hope this can help.
You could use the built-in numpy function:
np.linalg.norm(a, axis = 1, keepdims = True)
It appears that this also works:
def normalizeRows(M):
    row_sums = M.sum(axis=1)
    return M / row_sums
You could also use matrix transposition:
(a.T / row_sums).T
Here is one more possible way using reshape:
a_norm = (a/a.sum(axis=1).reshape(-1,1)).round(3)
print(a_norm)
Or using None works too:
a_norm = (a/a.sum(axis=1)[:,None]).round(3)
print(a_norm)
Output:
array([[0. , 0.333, 0.667],
[0.25 , 0.333, 0.417],
[0.286, 0.333, 0.381]])
Use
a = a / np.linalg.norm(a, ord = 2, axis = 0, keepdims = True)
Due to the broadcasting, it will work as intended.
Or using a lambda function, like
>>> vec = np.arange(0,27,3).reshape(3,3)
>>> import numpy as np
>>> norm_vec = map(lambda row: row/np.linalg.norm(row), vec)
each vector of vec will have a unit norm.
We can achieve the same effect by premultiplying with the diagonal matrix whose main diagonal is the reciprocal of the row sums.
A = np.diag(A.sum(1)**-1) @ A
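A quick check I added (not part of the original answer), using the matrix from the question:
import numpy as np

A = np.arange(0, 27, 3, dtype=float).reshape(3, 3)
A = np.diag(A.sum(1) ** -1) @ A   # premultiply by diag(1 / row sums)
print(A.sum(axis=1))              # [1. 1. 1.]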
