I have to translate code from Octave to Python. Among other things, the program does something like this:
load_image = imread('image.bmp')
which, as you can see, is a bitmap. If I then do
size(load_image), it prints (1200, 1600, 3), which is fine. But when I do:
load_image
it prints a one-dimensional array, which makes no sense to me. My question is: how does Octave interpret these values? I have to load the same image in OpenCV and I couldn't find a way to do it.
Thanks.
What you have is a 3-D array in Octave. The first two dimensions are the rows and columns of the image, and the third dimension holds the RGB channel values for each pixel. However, when you print the variable you see every value in the array, which is why it looks like a 1-D array.
Try something like this and look at the output:
load_image(:,:,i)
Here i selects one of the three RGB channels of your image. If you want to display your 3-D image in 2-D using matplotlib or similar, you need to do the same kind of slicing.
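In Python the same slicing is plain NumPy indexing; a minimal sketch (using a synthetic array of the question's shape; with OpenCV you would get the same kind of array from cv2.imread, which returns channels in BGR order rather than Octave's RGB):

```python
import numpy as np

# With OpenCV you would load the bitmap like this (commented out here;
# note that cv2.imread returns BGR channel order, Octave's imread RGB):
#   import cv2
#   img = cv2.imread('image.bmp')

# A synthetic array with the shape from the question:
img = np.zeros((1200, 1600, 3), dtype=np.uint8)
print(img.shape)        # (1200, 1600, 3)

# Equivalent of Octave's load_image(:,:,i): slice one channel
# (Python indexes from 0, Octave from 1).
channel = img[:, :, 0]
print(channel.shape)    # (1200, 1600)
```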
Related
I have a 3-D image stored in a NumPy array fet_img. Its shape is (400, 400, 74).
I want to access the 74 2-D images separately, each of size (400, 400).
I would expect that this would do the trick:
fet_img[:][:][0]
However, when I print the shape of this, I get (400, 74).
I tried
fet_img[0][:][:]
and
fet_img[:][0][:]
but all three of these have shape (400, 74)...
I'm overlooking something, but I can't quite figure out what.
Note: I'm running this in a local Jupyter notebook, and all values are dtype('float64'), if that matters at all.
You should use fet_img[:, :, 0] instead. fet_img[:] just returns the whole array unchanged, so fet_img[:][:][0] ends up indexing the first axis rather than the last one.
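A quick sketch with an array of the same shape shows why the chained subscripts give (400, 74) while a single three-axis subscript gives the slice you want:

```python
import numpy as np

# Mimic the (400, 400, 74) volume from the question.
fet_img = np.zeros((400, 400, 74))

# fet_img[:] is a no-op that returns the whole array, so chaining
# [:] changes nothing; the final [0] then indexes the FIRST axis:
wrong = fet_img[:][:][0]
print(wrong.shape)      # (400, 74)

# Index all three axes in one subscript to slice the LAST axis:
right = fet_img[:, :, 0]
print(right.shape)      # (400, 400)
```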
I would like to remove the background light gradient from the following image, so that the lighting becomes more homogeneous; the interesting objects are the kind of "cones" seen from the top.
Image:
I also have an image "background" without the cones :
I tried the simplest thing, which is to convert these images to grayscale and subtract them, but the result is pretty ... (really) bad, using:
img = np.array(Image.open('../Pics/image.png').convert('L'))
background = np.array(Image.open('../Pics/background.JPG').convert('L'))
img_filtered = img - background
What would you advise? Ideally I would stay in RGB, though I know almost nothing about image processing, filters, etc.
By "the result is pretty ... (really) bad", I assume you see a picture like this:
This is because a subtraction that would produce a negative number instead wraps around "from the top" of the brightness scale, like this:
4 - 5 = 255 instead of -1.
This is a byproduct of how the pictures are loaded.
If I use a plain NumPy array, I get a picture like this:
So maybe try handling your pictures as plain NumPy arrays: take a look over here
[Edit: this is due to the uint8 dtype of the NumPy arrays. Changing to int should already be enough.]
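The wrap-around and the fix can be reproduced in a few lines (a minimal sketch with two single-pixel uint8 arrays):

```python
import numpy as np

# Two uint8 "pixels", as produced by the usual image loaders.
a = np.array([4], dtype=np.uint8)
b = np.array([5], dtype=np.uint8)

# uint8 arithmetic wraps around: 4 - 5 becomes 255 instead of -1.
print(a - b)        # [255]

# Cast to a signed type first and the subtraction behaves normally;
# clip back to [0, 255] before treating the result as an image again.
diff = a.astype(np.int16) - b.astype(np.int16)
print(diff)         # [-1]
clipped = np.clip(diff, 0, 255).astype(np.uint8)
print(clipped)      # [0]
```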
I am having a bit of a misunderstanding with NumPy arrays.
I have two data series (m, error) which I would like to plot.
I save them in an array like this, as I collect them in the same loop (which is probably causing the issue):
sum_error = np.append(m,error)
Then I simply try to plot like this, but it doesn't work, apparently because this array has only one dimension:
plt.scatter(x=sum_error[:, 0], y=sum_error[:, 1])
What is the correct way to proceed? Should I really make two separate arrays, one for m and one for error, in order to plot them?
Thanks
As answered in this thread, try using
np.vstack((m,error))
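To see why np.append fails and how to index the stacked version, here is a minimal sketch with three made-up (m, error) pairs:

```python
import numpy as np

# Values collected in a loop (illustrative data).
m = np.array([1.0, 2.0, 3.0])
error = np.array([0.1, 0.2, 0.15])

# np.append(m, error) flattens both inputs into ONE 1-D array,
# so there is no second axis to index with [:, 1]:
flat = np.append(m, error)
print(flat.shape)           # (6,)

# Stack them as two rows instead:
sum_error = np.vstack((m, error))
print(sum_error.shape)      # (2, 3)

# To index columns as in the question, transpose (or build the
# array with np.column_stack in the first place):
pairs = sum_error.T
print(pairs[:, 0])          # [1. 2. 3.]
```

With pairs, the original call plt.scatter(x=pairs[:, 0], y=pairs[:, 1]) then works as intended.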
I am using OpenCV to read and display an image. I am trying to do a scalar multiplication, but the result is displayed very differently for two similar approaches:
img = cv2.imread('C:/Python27/user_scripts/images/g1.jpg', -1)
cv2.imshow('img_scaled1', 0.5*img)
cv2.waitKey(0)
cv2.imshow('img_scaled2', img/2)
cv2.waitKey(0)
In the first case, hardly anything is displayed. The second case works fine.
It seems to me that imshow() does not support NumPy arrays of floats.
I want to use the first method. Can somebody help?
There are lots of pitfalls when working with images. This one looks like a type issue.
imshow accepts uint8 arrays in the range [0, 256) (256 excluded) and float arrays in the range [0.0, 1.0]. When you do a = a * .5, you get a float array outside that range, so there is no guarantee on the result.
A solution is to cast the array to uint8:
imshow((a*.5).astype(np.uint8))
or
imshow((a*.5).astype('uint8'))
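The dtype promotion is easy to check directly (a minimal sketch with a tiny synthetic uint8 image; the sizes are illustrative):

```python
import numpy as np

# imshow expects uint8 in [0, 255] or floats in [0.0, 1.0].
# A small uint8 "image" for illustration:
img = np.full((4, 4, 3), 200, dtype=np.uint8)

# 0.5 * img silently promotes to float64 with values up to 127.5,
# far outside the float display range, hence the washed-out window.
scaled = 0.5 * img
print(scaled.dtype)                # float64

# Casting back to uint8 keeps the values displayable:
fixed = (0.5 * img).astype(np.uint8)
print(fixed.dtype, fixed.max())    # uint8 100
```

cv2.imshow('img_scaled1', fixed) then displays the darkened image as expected.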
So I'm doing template matching on a color image, and skimage.feature.match_template() seems to work just fine. But I'm not sure exactly how it performs the match, because although the original images have size N x N x 3, the output array is one-dimensional. I theorized that it was only performing the template match on the red channel, but this doesn't seem to be the case. Does it do some sort of averaging for RGB images? I want to understand where it's getting its values from, so I know it's interpreting the image correctly. Thanks!
From the match_template() documentation:
Returns
-------
output : array
Response image with correlation coefficients.
So the template matching works in either 2 or 3 dimensions, depending on the input image and template. However, the output is always a response map of scalar scores: one correlation coefficient per image position, telling you how well the template matched there.
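A minimal sketch with random data (the sizes are illustrative) shows how the channel axis collapses into a per-position score:

```python
import numpy as np
from skimage.feature import match_template

# Random RGB image and a template cut directly out of it.
rng = np.random.default_rng(0)
image = rng.random((50, 50, 3))
template = image[10:20, 10:20, :]      # a 10x10x3 patch

# With 3-D inputs the correlation is computed over all channels
# jointly, so the response has one coefficient per candidate
# (row, col) position rather than one per channel.
response = match_template(image, template)
print(response.shape[:2])              # (41, 41) candidate positions
print(response.max())                  # ~1.0 at the exact match
```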