I'm new to Python and I need to draw an RGB spectrum as a numpy array.
It's clear to me that I need to ramp the RGB values across the array's dimensions to get the spectrum.
import numpy as np
import matplotlib.pyplot as plt
spectrum = np.zeros([255, 255, 3], dtype=np.uint8) # init the array
#fill the array with rgb values to create the spectrum without the use of loops
plt.imshow(spectrum)
plt.axis('off') # don't show axis
plt.show()
Is there a possibility (e.g. a Python or numpy method) to create the spectrum without using loops?
Not sure if this is the result you'd like, but you can define the arrays for the RGB values yourself (see an HSV-RGB comparison for the ramp shapes). I've used Pillow to turn the array into a colour image and display it.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
spectrum = np.zeros([256,256*6, 3], dtype=np.uint8) # init the array
# fill the array with rgb values to create the spectrum without the use of loops
spectrum[:,:,0] = np.concatenate(([255]*256, np.linspace(255,0,256), [0]*256, [0]*256, np.linspace(0,255,256), [255]*256), axis=0) # red: on, falling, off, off, rising, on
spectrum[:,:,1] = np.concatenate((np.linspace(0,255,256), [255]*256, [255]*256, np.linspace(255,0,256), [0]*256, [0]*256), axis=0) # green: rising, on, on, falling, off, off
spectrum[:,:,2] = np.concatenate(([0]*256, [0]*256, np.linspace(0,255,256), [255]*256, [255]*256, np.linspace(255,0,256)), axis=0) # blue: off, off, rising, on, on, falling
img = Image.fromarray(spectrum, 'RGB')
img.show()
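If you'd rather not spell out the six ramps, a more compact variant (my own sketch, not part of the original answer) lets matplotlib.colors.hsv_to_rgb do the work: ramp the hue across the width and keep saturation and value at 1.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb
# hue ramps from 0 to 1 across the width; saturation and value stay at 1
h = np.tile(np.linspace(0, 1, 256*6, endpoint=False), (256, 1))
hsv = np.dstack((h, np.ones_like(h), np.ones_like(h)))
spectrum = (hsv_to_rgb(hsv) * 255).astype(np.uint8)
plt.imshow(spectrum)
plt.axis('off')
plt.show()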
As the title says, I have to take an image and write code that colours in every n-th pixel on the x and y axes.
I've tried using for loops, but that colours in the whole axis line instead of the single pixels I need. I have to use either OpenCV or Pillow for this task.
#pillow
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
picture = Image.open('e92m3.jpg')
picture_resized = picture.resize( (500,500) )
pixels = picture_resized.load()
#x,y
for i in range(0, 500):
    pixels[i, 10] = (0, 255, 0)
for i in range(0, 500):
    pixels[10, i] = (255, 0, 0)
%matplotlib notebook
plt.imshow(picture_resized)
This is approximately how it should look:
You really should avoid for loops for image processing in Python; they are horribly slow and inefficient. As pretty much all image-processing suites use Numpy arrays to store images, you should try to use vectorised Numpy access methods such as slicing, indexing and broadcasting:
import numpy as np
import cv2
# Load image
im = cv2.imread('lena.png')
# Use Numpy indexing to make alternate rows and columns black
im[0::2,0::2] = [0,0,0]
im[1::2,1::2] = [0,0,0]
cv2.imwrite('result.png', im)
If you want to use PIL/Pillow in place of OpenCV, load and save the image like this:
from PIL import Image
# Load as PIL Image and make into Numpy array
im = np.array(Image.open('lena.png').convert('RGB'))
... process ...
# Make Numpy array back into PIL Image and save
Image.fromarray(im).save('result.png')
Maybe have a read here about indexing.
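For the every-n-th-pixel case from the question, the same slicing idea generalises with a step of n (a sketch on my part; the value of n and the file names are placeholders):
import numpy as np
import cv2
n = 10 # placeholder spacing; use whatever "n-th" you need
im = cv2.imread('lena.png')
# a slice with step n selects only every n-th row/column intersection,
# so the assignment touches single pixels rather than whole lines
im[::n, ::n] = [0, 255, 0] # OpenCV stores channels as BGR
cv2.imwrite('result.png', im)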
I'm not sure I've understood your question, but here is my answer based on what I understood of it.
def interval_replace(img, interval_x: int, interval_y: int, offset_x: int = 0, offset_y: int = 0, replace_pxl: tuple = (0, 255, 0)):
    # parameters without defaults must precede those with defaults
    for y in range(offset_y, img.shape[0]):
        for x in range(offset_x, img.shape[1]):
            if x % interval_x == 0 and y % interval_y == 0:
                img[y][x] = replace_pxl
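A hypothetical call, with values chosen purely for illustration:
interval_replace(img, interval_x=10, interval_y=10, replace_pxl=(0, 255, 0)) # paint every 10th pixel green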
I tried out the following watershed-segmentation example, which should display the image-processing results.
from scipy import ndimage as ndi
import matplotlib.pyplot as plt
from scipy import misc
import numpy as np
import cv2
from skimage.morphology import watershed, disk
from skimage import data
from skimage.filters import rank
from skimage.util import img_as_ubyte
from skimage import io; io.use_plugin('matplotlib')
image = img_as_ubyte('imagepath.jpg')
# denoise image
denoised = rank.median(image, disk(2))
# find continuous region (low gradient -
# where less than 10 for this image) --> markers
# disk(5) is used here to get a more smooth image
markers = rank.gradient(denoised, disk(5)) < 10
markers = ndi.label(markers)[0]
# local gradient (disk(2) is used to keep edges thin)
gradient = rank.gradient(denoised, disk(2))
# process the watershed
labels = watershed(gradient, markers)
# display results
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(8, 8),
sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax[0].set_title("Original")
ax[1].imshow(gradient, cmap=plt.cm.nipy_spectral, interpolation='nearest')
ax[1].set_title("Local Gradient")
ax[2].imshow(markers, cmap=plt.cm.nipy_spectral, interpolation='nearest')
ax[2].set_title("Markers")
ax[3].imshow(image, cmap=plt.cm.gray, interpolation='nearest')
ax[3].imshow(labels, cmap=plt.cm.nipy_spectral, interpolation='nearest', alpha=.7)
ax[3].set_title("Segmented")
for a in ax:
a.axis('off')
fig.tight_layout()
plt.show()
I get the following error.
Traceback (most recent call last):
File "/home/workspace/calculate_watershed.py", line 15, in <module>
image = img_as_ubyte('koralle0.jpg')
File "/home/workspace/venv/lib/python3.5/site-packages/skimage/util/dtype.py", line 409, in img_as_ubyte
return convert(image, np.uint8, force_copy)
File "/home/workspace/venv/lib/python3.5/site-packages/skimage/util/dtype.py", line 113, in convert
.format(dtypeobj_in, dtypeobj_out))
ValueError: Can not convert from <U12 to uint8.
The path to the image is a valid one. Do you have any idea how to solve this problem? Thanks in advance!
The problem is that you are passing the file name string, not an image, to img_as_ubyte: numpy wraps the string in an array of dtype <U12, which cannot be converted to dtype uint8. You can check this yourself by converting the file name to a numpy array; I get a <U38 dtype for mine:
np.array('CAPTURE.jpg')
#array('Capture.JPG', dtype='<U38')
You should first read the image with skimage.io.imread(image_path). This will return an ndarray of shape MxN, MxNx3 or MxNx4. Then reshape the resulting ndarray to 2D if it's 3D or 4D. This conversion is required because skimage.filters.rank.median(image) expects a 2D image ndarray. In the following code I've used my sample image to perform these steps before passing it to img_as_ubyte(sk_image). The rest of the code remains the same.
from skimage.io import imread
#<---code--->
sk_image = imread('CAPTURE.jpg') #read the image to convert to skimage ndarray
sk_image = sk_image.transpose(1,0,2).reshape(130,-1) #convert to 2D array
image = img_as_ubyte(sk_image) #Convert image to 8-bit unsigned integer format.
#<---code--->
I get the following images:
You should consider the following points:
Check the shape of the image array returned from imread: after reading the image with sk_image = imread('CAPTURE.jpg'), check the shape of the array with sk_image.shape. For my image I get (74, 130, 3), which shows it is a 3D array.
To reshape to 2D, first get the strides with sk_image.strides. For my image I get (390, 3, 1). Then transpose with sk_image.transpose(1, 0, 2); checking the strides again shows the values have been swapped, sk_image.transpose(1, 0, 2).strides: (3, 390, 1). Finally use reshape: sk_image.transpose(1, 0, 2).reshape(130, -1). Note that the reshape dimension of 130 comes from the row stride divided by the 3-byte pixel size (390 / 3).
P.S: You can read more about 3D to 2D reshaping of numpy arrays here.
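If the transpose/reshape trick feels opaque, a simpler route to a 2D image (my own sketch, not part of the answer above) is skimage's rgb2gray, which collapses the colour channels with standard luminance weights:
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.util import img_as_ubyte
sk_image = imread('CAPTURE.jpg') # same placeholder file name as above
gray = rgb2gray(sk_image) # float image in [0, 1], shape (M, N)
image = img_as_ubyte(gray) # back to 8-bit for the rank filters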
I am new to computational vision and Python, and I could not really figure out what went wrong. I tried to randomize all the pixels in an RGB image, but my image turned out completely wrong, as seen below. Can someone please shed some light?
from scipy import misc
import numpy as np
import matplotlib.pyplot as plt
#Loads an arbitrary RGB image from the misc library
rgbImg = misc.face()
%matplotlib inline
#Display out the original RGB image
plt.figure(1,figsize = (6, 4))
plt.imshow(rgbImg)
plt.show()
#Initialise a new array of zeros with the same shape as the selected RGB image
rdmImg = np.zeros((rgbImg.shape[0], rgbImg.shape[1], rgbImg.shape[2]))
#Convert 2D matrix of RGB image to 1D matrix
oneDImg = np.ravel(rgbImg)
#Randomly shuffle all image pixels
np.random.shuffle(oneDImg)
#Place shuffled pixel values into the new array
i = 0
for r in range(len(rgbImg)):
    for c in range(len(rgbImg[0])):
        for z in range(0, 3):
            rdmImg[r][c][z] = oneDImg[i]
            i = i + 1
print(rdmImg)
plt.imshow(rdmImg)
plt.show()
Original image:
My attempt at randomizing the image pixels:
You are not shuffling the pixels, you are shuffling everything: np.ravel() flattens the image into a 1D sequence of single channel values, so np.random.shuffle() afterwards mixes values across pixels and channels alike.
When you shuffle the pixels, you have to make sure that the colours, i.e. the RGB tuples, stay intact.
from scipy import misc
import numpy as np
import matplotlib.pyplot as plt
#Loads an arbitrary RGB image from the misc library
rgbImg = misc.face()
#Display out the original RGB image
plt.figure(1,figsize = (6, 4))
plt.imshow(rgbImg)
plt.show()
# doc on shuffle: multi-dimensional arrays are only shuffled along the first axis
# so let's make the image an array of (N,3) instead of (m,n,3)
rndImg2 = np.reshape(rgbImg, (rgbImg.shape[0] * rgbImg.shape[1], rgbImg.shape[2]))
# this line could also be written using -1 in the shape tuple
# this will calculate one dimension automatically
# rndImg2 = np.reshape(rgbImg, (-1, rgbImg.shape[2]))
#now shuffle
np.random.shuffle(rndImg2)
#and reshape to original shape
rdmImg = np.reshape(rndImg2, rgbImg.shape)
plt.imshow(rdmImg)
plt.show()
This is the random raccoon; notice the colours. There is no red or blue there, just the original ones: white, grey, green, black.
There are some other issues with your code that I removed:
Do not use nested for loops; they are slow.
The preallocation with np.zeros is not needed (if you ever do need it, just pass rgbImg.shape as the argument; there is no need to unpack the separate values).
Change plt.imshow(rdmImg) into plt.imshow(rdmImg.astype(np.uint8)); without the cast, matplotlib misrenders the float array.
This may be related to this matplotlib issue: https://github.com/matplotlib/matplotlib/issues/9391/
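For completeness, a variant that leaves the original image untouched (my own sketch, reusing rgbImg from above) draws a random permutation of the pixel indices instead of shuffling in place:
import numpy as np
# fancy indexing with a permutation returns a shuffled copy,
# so rgbImg itself is not modified
flat = rgbImg.reshape(-1, 3)
perm = np.random.permutation(flat.shape[0])
rdmImg = flat[perm].reshape(rgbImg.shape)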
Below is a simple section of code that accesses an image using PIL, converts it to a numpy array and then prints the number of elements in the array.
The image in question is here - - and consists of exactly 100 pixels (10x10). However, the numpy array contains 300 elements, where I would expect 100. What am I doing wrong?
import numpy as np
import PIL
impath = 'C:/Users/Ricky/Desktop/testim.tif'
im = PIL.Image.open(impath)
arr = np.array(im)
print(arr.size) # 300
An image can be composed of 3 bands (a Red-Green-Blue or RGB composition).
Since your image is a black/white image, those three bands are identical. You can see the difference using a coloured image.
Try this to see what I mean:
import matplotlib.pyplot as pyplot # matplotlib's plotting module, used to display the image
import numpy as np
import PIL
impath = 'C:/Users/Ricky/Desktop/testim.tif'
im = PIL.Image.open(impath)
arr = np.array(im)
print(arr.shape) # (10, 10, 3)
print(arr[:, :, 0].size) # 100
# next lines actually show the image
pyplot.imshow(arr[:, : ,0], cmap='gray')
pyplot.show()
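If you want a true single-band array with exactly 100 elements, one option (my suggestion, not part of the answer above) is to convert the image to greyscale before wrapping it in an array:
im_gray = PIL.Image.open(impath).convert('L') # 'L' = a single 8-bit band
arr_gray = np.array(im_gray)
print(arr_gray.shape) # (10, 10)
print(arr_gray.size) # 100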
I am trying to save a numpy array of dimensions 128x128 pixels as a grayscale image.
I simply thought that the pyplot.imsave function would do the job, but it doesn't: it somehow converts my array into an RGB image.
I tried to force the colormap to Gray during conversion, but even though the saved image appears in grayscale, it still has a 128x128x4 dimension.
Here is a code sample I wrote to show the behaviour :
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mplimg
from matplotlib import cm
x_tot = 10e-3
nx = 128
x = np.arange(-x_tot/2, x_tot/2, x_tot/nx)
[X, Y] = np.meshgrid(x,x)
R = np.sqrt(X**2 + Y**2)
diam = 5e-3
I = np.exp(-2*(2*R/diam)**4)
plt.figure()
plt.imshow(I, extent = [-x_tot/2, x_tot/2, -x_tot/2, x_tot/2])
print(I.shape)
plt.imsave('image.png', I)
I2 = plt.imread('image.png')
print(I2.shape)
mplimg.imsave('image2.png',np.uint8(I), cmap = cm.gray)
testImg = plt.imread('image2.png')
print(testImg.shape)
In both cases the result of the print calls is (128, 128, 4).
Can anyone explain why the imsave function is creating those dimensions even though my input array is of a luminance type?
And of course, does anyone have a solution for saving the array in a standard grayscale format?
Thanks!
With PIL it should work like this:
from PIL import Image
import numpy as np
# scale to the full 0-255 range before casting to uint8
I8 = (((I - I.min()) / (I.max() - I.min())) * 255.9).astype(np.uint8)
img = Image.fromarray(I8)
img.save("file.png")
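A quick round-trip check (my addition, assuming the file name above) confirms that the saved PNG really is single-channel:
check = np.array(Image.open('file.png'))
print(check.shape) # (128, 128), no colour axis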
There is also the alternative of using imageio. It provides an easy and convenient API, and it is bundled with Anaconda. It can save grayscale images as a single-colour-channel file.
Quoting the documentation:
>>> import imageio
>>> im = imageio.imread('imageio:astronaut.png')
>>> im.shape # im is a numpy array
(512, 512, 3)
>>> imageio.imwrite('astronaut-gray.jpg', im[:, :, 0])
I didn't want to use PIL in my code, and as noted in the question I ran into the same problem with pyplot, where even in grayscale the file is saved as an MxNx3 matrix.
Since the actual image on disk wasn't important to me, I ended up writing the matrix as-is and reading it back with numpy's save and load methods:
np.save("filename", image_matrix)
And:
np.load("filename.npy")
It is also possible to use scikit-image; then there is no need to convert the numpy array into a PIL object.
from skimage import io
io.imsave('output.tiff', I.astype(np.uint16))
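One caveat worth adding (my note, not the answerer's): since I in the question is a float array with values in [0, 1], casting it straight to uint16 truncates almost everything to 0, so scale it up first:
io.imsave('output.tiff', (I * 65535).astype(np.uint16)) # map [0, 1] onto the full 16-bit range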