I am trying to save the results from a loop in a np.array.
import numpy as np

p = np.array([])
points = np.array([[3, 0, 0], [-1, 0, 0]])
for i in points:
    for j in points:
        if j[0] != 0:
            n = i + j
            p = np.append(p, n)
However, the resulting array is a flat 1D array of 6 elements:
[2. 0. 0. -2. 0. 0.]
Instead I am looking for the following, but have been unable to produce it:
[[2,0,0],[-2,0,0]]
Is there any way to get the result above?
Thank you.
One possibility is to turn p into a list, and convert it into a NumPy array right at the end:
p = []
for i in points:
    ...              # inner loop and filter as in the question
    p.append(n)
p = np.array(p)
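Filling in the elided loop, a complete sketch of this approach looks like the following. Note that, with the points and filter exactly as written in the question, all four (i, j) pairs pass the j[0] != 0 test, so this version yields four rows; the two-row output quoted above suggests the real code used a stricter filter.

```python
import numpy as np

points = np.array([[3, 0, 0], [-1, 0, 0]])

p = []                       # a plain Python list; appending to it is cheap
for i in points:
    for j in points:
        if j[0] != 0:        # filter copied from the question
            p.append(i + j)
p = np.array(p)              # single conversion at the very end

print(p.shape)               # (4, 3)
```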
What you're looking for is vertically stacking your results:
import numpy as np

p = np.empty((0, 3))
points = np.array([[3, 0, 0], [-1, 0, 0]])
for i in points:
    for j in points:
        if j[0] != 0:
            n = i + j
            p = np.vstack((p, n))
print(p)
which gives:
[[ 2. 0. 0.]
[-2. 0. 0.]]
Although you could also reshape your result afterwards:
import numpy as np

p = np.array([])
points = np.array([[3, 0, 0], [-1, 0, 0]])
for i in points:
    for j in points:
        if j[0] != 0:
            n = i + j
            p = np.append(p, n)
p = np.reshape(p, (-1, 3))
print(p)
which gives the same result.
I must warn you, however, that your code fails if j[0] != 0 is never satisfied, as that would leave n undefined...
Docs: np.vstack, np.empty, np.reshape
I have a 3D NumPy array of size (9,9,200) and a 2D array of size (200,200).
I want to take each channel of shape (9,9,1) and generate an array of shape (9,9,200), where the channel is multiplied by the 200 scalars in a single row of the 2D array, and then average it so that the resultant array is (9,9,1).
Basically, if there are n channels in an input array, I want each channel multiplied n times and averaged - and this should happen for all channels. Is there an efficient way to do so?
So far what I have is this -
import numpy as np

arr = np.random.rand(9, 9, 200)
nchannel = arr.shape[-1]
transform = np.array([np.random.uniform(low=0.0, high=1.0, size=(nchannel,))
                      for i in range(nchannel)])
for channel in range(nchannel):
    # The below line needs optimization
    temp = [arr[:, :, i] * transform[channel][i] for i in range(nchannel)]
    arr[:, :, channel] = np.sum(temp, axis=0) / nchannel
Edit:
A sample image demonstrating what I am looking for. Here nchannel = 3.
The input image is arr. The final image is the transformed arr.
EDIT:
import numpy as np

n_channels = 3
scalar_size = 2
t = np.ones((n_channels, scalar_size, scalar_size))  # scalar array
m = np.random.random((n_channels, n_channels))       # letters array
print(m)
print(t)
m_av = np.mean(m, axis=1)
print(m_av)
for i in range(n_channels):
    t[i] = t[i] * m_av[i]
print(t)
output:
[[0.04601533 0.05851365 0.03893352]
[0.7954655 0.08505869 0.83033369]
[0.59557455 0.09632997 0.63723506]]
[[[1. 1.]
[1. 1.]]
[[1. 1.]
[1. 1.]]
[[1. 1.]
[1. 1.]]]
[0.04782083 0.57028596 0.44304653]
[[[0.04782083 0.04782083]
[0.04782083 0.04782083]]
[[0.57028596 0.57028596]
[0.57028596 0.57028596]]
[[0.44304653 0.44304653]
[0.44304653 0.44304653]]]
What you're asking for is a simple matrix multiplication along the last axis:
import numpy as np

arr = np.random.rand(9, 9, 200)
transform = np.random.uniform(size=(200, 200)) / 200
arr = arr @ transform
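To see that the matrix product really matches the question's per-channel loop, here is a cross-check on smaller toy shapes (the sizes and seed are mine, not from the original). Two details worth flagging: the loop reads arr while overwriting it, so the comparison below works from an untouched copy, and matching the loop's transform[channel][i] indexing requires multiplying by the transpose.

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.random((4, 4, 5))                  # toy stand-in for the (9, 9, 200) array
nchannel = arr.shape[-1]
transform = rng.random((nchannel, nchannel))

# Loop version, reading from an untouched copy of the input
src = arr.copy()
looped = np.empty_like(arr)
for channel in range(nchannel):
    temp = [src[:, :, i] * transform[channel][i] for i in range(nchannel)]
    looped[:, :, channel] = np.sum(temp, axis=0) / nchannel

# Vectorized version: out[..., c] = sum_i src[..., i] * transform[c, i] / n
vectorized = src @ transform.T / nchannel

print(np.allclose(looped, vectorized))       # True
```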
I understand the concept of vectorization and how you can avoid looping over elements when you want to adjust each individual element. What I can't figure out is how to do this when the condition depends on the neighbouring values of a pixel.
For example, if I have a mask:
mask = np.array([[0, 0, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [1, 0, 0, 0]])
And I wanted to change an element by evaluating neighboring components in the mask, like so:
if sum(mask[j-1:j+2, i-1:i+2].flatten()) > 1 and mask[j, i] != 1:
    out[j, i] = 1
How can I vectorize the operation when I specifically need to access the neighboring elements?
Thanks in advance.
Full loop:
import numpy as np

mask = np.array([[0,0,0,0], [1,0,0,0], [0,0,0,1], [1,0,0,0]])
out = np.zeros(mask.shape)
for j in range(len(mask)):
    for i in range(len(mask[0])):
        if sum(mask[j-1:j+2, i-1:i+2].flatten()) > 1 and mask[j, i] != 1:
            out[j, i] = 1
Output:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 0. 0.]]
Such a 'neighborhood sum' operation is often called a 2D convolution. In your case, since you don't have any weighting, it is efficiently implemented by the (IMO somewhat poorly named) scipy.ndimage.uniform_filter, which computes the mean of a neighborhood (the sum is just the mean multiplied by the neighborhood size).
import numpy as np
from scipy.ndimage import uniform_filter
mask = np.array([[0,0,0,0], [1,0,0,0], [0,0,0,1], [1,0,0,0]])
neighbor_sum = 9 * uniform_filter(mask.astype(np.float32), 3, mode="constant")
neighbor_sum = np.rint(neighbor_sum).astype(int)
out = ((neighbor_sum > 1) & (mask != 1)).astype(int)
print(out)
Output (which differs from your example, but checking it by hand it is correct, assuming you don't want the edges to wrap around):
[[0 0 0 0]
[0 0 0 0]
[1 1 0 0]
[0 0 0 0]]
If you do want the edges to wrap around (or other edge behavior), look at the mode argument of uniform_filter.
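As a cross-check, the same neighborhood sum can be written as an explicit convolution with an all-ones 3x3 kernel; this sketch uses scipy.signal.convolve2d (my choice here, not part of the answer above) and reproduces the same output:

```python
import numpy as np
from scipy.signal import convolve2d

mask = np.array([[0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

# A 3x3 all-ones kernel sums each zero-padded neighborhood (center included)
neighbor_sum = convolve2d(mask, np.ones((3, 3), dtype=int), mode="same")
out = ((neighbor_sum > 1) & (mask != 1)).astype(int)
print(out)
# [[0 0 0 0]
#  [0 0 0 0]
#  [1 1 0 0]
#  [0 0 0 0]]
```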
I am calculating the difference of each element in a numpy array. My code is
import numpy as np
M = 10
x = np.random.uniform(0,1,M)
y = np.array([x])
# Calculate the difference
z = np.array(y[:,None]-y)
When I run my code I get [[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]]. I don't get a 10 by 10 array.
Where do I go wrong?
You should read the broadcasting rules for NumPy: y = np.array([x]) has shape (1, 10), so y[:, None] - y broadcasts to shape (1, 1, 10) rather than (10, 10). Transposing one side fixes it:
y.T - x
Another way:
np.subtract.outer(x, x)
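If you drop the extra wrapping entirely, the same pairwise table comes straight from broadcasting x against itself; a minimal sketch:

```python
import numpy as np

M = 10
x = np.random.uniform(0, 1, M)

# A (M, 1) column minus a (M,) row broadcasts to a full (M, M) table
z = x[:, None] - x

print(z.shape)                                   # (10, 10)
print(np.allclose(z, np.subtract.outer(x, x)))   # True
```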
You are not getting a 10 by 10 array because the value of M is 10. Try:
M = (10, 10)
I searched stackoverflow but could not find an answer to this specific question. Sorry if it is a naive question, I am a newbie to python.
I have several 2d arrays (or lists) that I would like to read into a 3d array (list) in python. In Matlab, I can simply do
for i = 1:N
    % read 2d array "a"
    newarray(:,:,i) = a(:,:);
end
so newarray is a 3d array with "a" being the 2d slices arranged along the 3rd dimension.
Is there a simple way to do this in python?
Edit: I am currently trying the following:
for file in files:
    img = mpimg.imread(file)
    newarray = np.array(0.289*cropimg[:,:,0] + 0.5870*cropimg[:,:,1] + 0.1140*cropimg[:,:,2])
    i = i + 1
I tried newarray[:,:,i] and it gives me an error
NameError: name 'newarray' is not defined
Seems like I have to define newarray as a numpy array? Not sure.
Thanks!
If you're familiar with MATLAB, translating that into using NumPy is fairly straightforward.
Let's say you have a couple of arrays:
a = np.eye(3)
b = np.arange(9).reshape((3, 3))
print(a)
# [[ 1. 0. 0.]
# [ 0. 1. 0.]
# [ 0. 0. 1.]]
print(b)
# [[0 1 2]
# [3 4 5]
# [6 7 8]]
If you simply want to put them into another dimension, pass them both to the array constructor in an iterable (e.g. a list) like so:
x = np.array([a, b])
print(x)
# [[[ 1. 0. 0.]
# [ 0. 1. 0.]
# [ 0. 0. 1.]]
#
# [[ 0. 1. 2.]
# [ 3. 4. 5.]
# [ 6. 7. 8.]]]
NumPy is smart enough to recognize that the arrays are all the same size and creates a new dimension to hold them all.
print(x.shape)
# (2, 3, 3)
You can loop through it, but if you want to apply the same operations to it across some dimensions, I would strongly suggest you use broadcasting so that NumPy can vectorize the operation and it runs a whole lot faster.
For example, across one dimension, let's multiply one slice by 2 and the other by 3. (If the multiplier is not a pure scalar, we need to reshape it to the same number of dimensions to broadcast; then each of its sizes must either match the array or be 1.) Note that I'm working along the 0th axis; your image data is probably arranged differently, and I don't have a handy image to load up and toy with.
y = x * np.array([2, 3]).reshape((2, 1, 1))
print(y)
#[[[ 2. 0. 0.]
# [ 0. 2. 0.]
# [ 0. 0. 2.]]
#
# [[ 0. 3. 6.]
# [ 9. 12. 15.]
# [ 18. 21. 24.]]]
Then we can add them up
z = np.sum(y, axis=0)
print(z)
#[[ 2. 3. 6.]
# [ 9. 14. 15.]
# [ 18. 21. 26.]]
If you're using NumPy arrays, you can translate almost directly from Matlab:
for i in range(1, N+1):
    # read 2d array "a"
    newarray[:, :, i] = a[:, :]
Of course you'd probably want to use range(N), because arrays use 0-based indexing. And obviously you're going to need to pre-create newarray in some way, just as you'd have to in Matlab, but you can translate that pretty directly too. (Look up the zeros function if you're not sure how.)
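As a sketch of that pre-allocation (the shapes here and the stand-in for your 2d read are assumptions for illustration):

```python
import numpy as np

N, rows, cols = 4, 3, 3
newarray = np.zeros((rows, cols, N))   # pre-create, as you would in Matlab

for i in range(N):
    a = np.full((rows, cols), i)       # stand-in for the 2d array you read
    newarray[:, :, i] = a

print(newarray.shape)                  # (3, 3, 4)
```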
If you're using lists, you can't do this directly—but you probably don't want to anyway. A better solution would be to build up a list of 2D lists on the fly:
newarray = []
for i in range(N):
# read 2d list of lists "a"
newarray.append(a)
Or, more simply:
newarray = [read_next_2d_list_of_lists() for i in range(N)]
Or, even better, make that read function a generator, then just:
newarray = list(read_next_2d_list_of_lists())
If you want to transpose the order of the axes, you can use the zip function for that.
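On newer NumPy versions (1.10+), np.stack also lets you choose which axis the slices go along, which maps directly onto the Matlab newarray(:,:,i) layout; a small sketch reusing a and b from above:

```python
import numpy as np

a = np.eye(3)
b = np.arange(9).reshape((3, 3))

# axis=-1 stacks the 2D slices along a new last dimension
newarray = np.stack([a, b], axis=-1)

print(newarray.shape)                         # (3, 3, 2)
print(np.array_equal(newarray[:, :, 1], b))   # True
```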
I have a large numpy 1 dimensional array of data in Python and want entries x (500) to y (520) to be changed to equal 1. I could use a for loop but is there a neater, faster numpy way of doing this?
for x in range(500, 520):
    numpyArray[x] = 1.
Here is the for loop that could be used, but it seems like there could be a function in NumPy that I'm missing. I'd rather not use the masked arrays that NumPy offers.
You can use slicing with [] to assign to a range of elements:
import numpy as np
a = np.ones((10))
print(a) # Original array
# [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
startindex = 2
endindex = 4
a[startindex:endindex] = 0
print(a) # modified array
# [ 1. 1. 0. 0. 1. 1. 1. 1. 1. 1.]
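Applied to the original question, the whole loop over indices 500 to 519 collapses into one slice assignment (the array length here is an assumption):

```python
import numpy as np

numpyArray = np.zeros(1000)      # assumed size, for illustration
numpyArray[500:520] = 1.         # replaces the entire for loop

print(numpyArray.sum())          # 20.0 -- only the 20 sliced entries changed
```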