I have a 3d image stored in fet_img np array. The size is (400,400,74).
I want to access the 74 2D images separately, each of size (400,400).
I would expect that this would do the trick:
fet_img[:][:][0]
However, when I print the shape of this, I get (400,74)
I tried
fet_img[0][:][:]
and
fet_img[:][0][:]
but the shape of all three of these is (400,74)...
I'm overlooking something, but I can't quite figure out what.
Note: I'm running this from a local Jupyter notebook and all values are dtype('float64'), if that matters at all.
You should use fet_img[:, :, 0] instead.
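The chained version fails because fet_img[:] just returns the whole array again, so fet_img[:][:][0] (and the other two variants) all collapse to fet_img[0], which indexes the first axis and therefore has shape (400, 74). A minimal sketch, using a dummy array with the same shape as your fet_img:

import numpy as np

# dummy stand-in with the same shape as fet_img
fet_img = np.zeros((400, 400, 74))

# fet_img[:] is just the whole array again, so all of these
# collapse to fet_img[0], which has shape (400, 74)
print(fet_img[:][:][0].shape)   # (400, 74)
print(fet_img[0][:][:].shape)   # (400, 74)

# slicing all three axes in a single index picks out one 2D plane
print(fet_img[:, :, 0].shape)   # (400, 400)

# iterating over all 74 slices
for k in range(fet_img.shape[2]):
    plane = fet_img[:, :, k]    # each plane is (400, 400)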
I am having a bit of a misunderstanding with numpy arrays.
I have a set of data pairs (m, error) which I would like to plot.
I save them in an array like this as I catch them in the same loop (which is probably causing the issue):
sum_error = np.append(m,error)
Then I simply try to plot it like this, but it doesn't work, as apparently this array is only of size 1 (with a tuple?):
plt.scatter(x=sum_error[:, 0], y=sum_error[:, 1])
What is the correct way to proceed? Should I really make two different arrays, one for m and one for error, in order to plot them?
Thanks
As answered in this thread, try using
np.vstack((m,error))
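np.append flattens its inputs, so sum_error ends up as one long 1D array, which is why the 2D indexing fails. A minimal sketch with made-up values for m and error; note that with vstack the rows hold m and error, so you index with [0, :] and [1, :] (np.column_stack((m, error)) would keep your original [:, 0] / [:, 1] indexing):

import numpy as np
import matplotlib.pyplot as plt

m = np.array([1.0, 2.0, 3.0])        # example values
error = np.array([0.1, 0.2, 0.15])

sum_error = np.vstack((m, error))    # shape (2, 3): row 0 is m, row 1 is error

plt.scatter(x=sum_error[0, :], y=sum_error[1, :])
plt.show()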
I have to translate some code from Octave to Python. Among many things, the program does something like this:
load_image = imread('image.bmp')
which, as you can see, is a bitmap. Then if I do size(load_image), it prints (1200,1600,3), which is OK, but when I do:
load_image
it prints a one-dimensional array, which does not make any sense to me. My question is how these values are interpreted in Octave, because I have to load the same image in OpenCV and I couldn't find the way.
Thanks.
What you have is a 3D array in Octave. The first two dimensions are the rows and columns of the image, and the third dimension holds the RGB values for each pixel. However, when you print it you will see all the values in the array, and hence it looks like a 1D array.
Try something like this and look at the output:
load_image(:,:,i)
The i indexes the RGB channels of your image. If you want to display your 3D image in 2D using matplotlib or similar, you need to do the same.
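Since the goal is to load the same image in OpenCV, here is a rough Python sketch of the equivalent (note that OpenCV returns the channels in BGR order rather than RGB, and indexing is 0-based):

import cv2

load_image = cv2.imread('image.bmp')
print(load_image.shape)            # (1200, 1600, 3)

# equivalent of Octave's load_image(:,:,i), 0-based here
blue  = load_image[:, :, 0]
green = load_image[:, :, 1]
red   = load_image[:, :, 2]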
How could I visualize a TensorType(float64, matrix) as an image using imshow? I cannot directly use imshow on the Tensor, since it gives me this error:
mat is not a numpy array, neither a scalar
When I try to convert the datatype to an array using numpy.asarray, I get
mat data type = 17 is not supported
Is there any way to convert it to the uint8 datatype?
Theano tensors are symbolic variables, with no value attached to them. If you want to plot something it is probably a shared variable (the weights of your network), in which case you can grab the values with my_shared.get_value(), or the output of the network when you pass an input, in which case I believe it should already be a numpy array.
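For example, if what you want to look at is a shared variable (say a layer's weights), a minimal sketch could look like this (the name W and the 28x28 shape are just placeholders):

import numpy as np
import matplotlib.pyplot as plt
import theano

# placeholder shared variable standing in for e.g. a weight matrix
W = theano.shared(np.random.randn(28, 28))

# get_value() returns a plain numpy array, which imshow accepts
plt.imshow(W.get_value(), cmap='gray')
plt.show()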
I have a 2D array with int values that I want to convert into an image.
The 2D array is generated randomly with values between 1 and 3, with consideration for what the neighboring ints in the array are. I want to convert 1, 2, 3 to R, G, B in an image to better see what the outcome of the generator is.
What is the best way to do this?
I would use the matplotlib library. Just use plt.imshow or plt.pcolormesh (the second one is technically better for discrete values). The default colormap is pretty close to RGB in this case, but you could use another colormap if you wanted to. For example:
import numpy as np
import matplotlib.pyplot as plt
# Creating random 1-3 data in a 2D array
data = np.random.randint(1,4,[100,150])
plt.pcolormesh(data)
I'm using IPython with %matplotlib inline; you might need to call plt.show() to get it to draw if you are not using IPython.
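If you really want the values 1, 2, 3 to come out as literal red, green and blue rather than whatever the default colormap gives you, you could pass an explicit ListedColormap, for example:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

data = np.random.randint(1, 4, [100, 150])

# map 1 -> red, 2 -> green, 3 -> blue
cmap = ListedColormap(['red', 'green', 'blue'])
plt.pcolormesh(data, cmap=cmap, vmin=1, vmax=3)
plt.show()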
I have two datasets that I need to correlate in Python. One array is a .mat file and the other is a list of .bin files. From these datasets I have created two 3D arrays with the same extent (120x112x244). While familiar with Python, I have not worked with such datasets before, and am thus seeking advice on how to correlate these arrays. I attempted numpy.correlate and received:
"ValueError: object too deep for desired array"
Any suggestions would be greatly appreciated.
One idea I would try is to flatten the 3D arrays first, then use correlate, since correlate only takes 1D vectors.
http://docs.scipy.org/doc/numpy/reference/generated/numpy.correlate.html
Let's say your two matrices are called A and B.
>>> import numpy
>>> array_a = numpy.ndarray.flatten(A)
>>> array_b = numpy.ndarray.flatten(B)
>>> results = numpy.correlate(array_a, array_b)
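Note that numpy.correlate with the default mode returns a single unnormalized sliding product. If what you are after is a normalized correlation coefficient between the two datasets, numpy.corrcoef on the flattened arrays is an alternative (the random arrays below are just dummy stand-ins for your data):

>>> import numpy
>>> A = numpy.random.rand(120, 112, 244)   # dummy stand-in for your .mat-based array
>>> B = numpy.random.rand(120, 112, 244)   # dummy stand-in for your .bin-based array
>>> corr = numpy.corrcoef(A.flatten(), B.flatten())
>>> corr[0, 1]                             # Pearson correlation between A and B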