How to crop an image in skimage? - python

I'm using skimage to crop a rectangle from a given image. I have (x1, y1, x2, y2) as the rectangle coordinates, and I load the image like this:
image = skimage.io.imread(filename)
cropped = image(x1,y1,x2,y2)
However, this is the wrong way to crop the image. What is the right way to do it in skimage?

This looks like a simple syntax error.
In Matlab you can use parentheses to extract a pixel or an image region, but in Python a numpy.ndarray is indexed with square brackets. Besides that, your code cuts the rectangle the wrong way around: the right way to slice is with the : operator.
Thus,
from skimage import io
image = io.imread(filename)
cropped = image[y1:y2, x1:x2]  # numpy indexes rows (y) first, then columns (x)
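Note that slicing a numpy array returns a view rather than a copy, so writing into the crop would also modify the original image. A minimal sketch (reusing the question's filename and coordinates), assuming x runs along the columns and y along the rows:
from skimage import io

image = io.imread(filename)
# Slicing gives a view into the original array;
# .copy() makes the crop independent of `image`
cropped = image[y1:y2, x1:x2].copy()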

One could use the skimage.util.crop() function too, as shown in the following code:
import numpy as np
from skimage.io import imread
from skimage.util import crop
import matplotlib.pyplot as plt
A = imread('lena.jpg')
# crop_width{sequence, int}: Number of values to remove from the edges of each axis.
# ((before_1, after_1), … (before_N, after_N)) specifies unique crop widths at the
# start and end of each axis. ((before, after),) specifies a fixed start and end
# crop for every axis. (n,) or n for integer n is a shortcut for before = after = n
# for all axes.
B = crop(A, ((50, 100), (50, 50), (0,0)), copy=False)
print(A.shape, B.shape)
# (220, 220, 3) (70, 120, 3)
plt.figure(figsize=(20,10))
plt.subplot(121), plt.imshow(A), plt.axis('off')
plt.subplot(122), plt.imshow(B), plt.axis('off')
plt.show()
with the following output (original and cropped image):
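As an aside, the integer shortcut described in the docstring comment above also works; a minimal sketch on a 2-D (grayscale) stand-in array, since an int crops every axis, including the channel axis of a color image:
import numpy as np
from skimage.util import crop

A = np.zeros((220, 220))  # 2-D stand-in image
# n is a shortcut for before = after = n on every axis,
# so 10 pixels are trimmed from all four sides
B = crop(A, 10)
print(A.shape, B.shape)
# (220, 220) (200, 200)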

You can crop an image with skimage just by slicing the image array, like below:
image = image_name[y1:y2, x1:x2]
Example code:
from skimage import io
import matplotlib.pyplot as plt
image = io.imread(image_path)
cropped_image = image[y1:y2, x1:x2]
plt.imshow(cropped_image)
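If the rectangle may extend past the image borders, you can clamp the coordinates before slicing; clip_box below is a hypothetical helper, a minimal sketch:
def clip_box(image, x1, y1, x2, y2):
    """Clamp the rectangle to the image bounds, then crop (hypothetical helper)."""
    h, w = image.shape[:2]
    x1, x2 = max(0, x1), min(w, x2)
    y1, y2 = max(0, y1), min(h, y2)
    return image[y1:y2, x1:x2]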

You can also go with the Image module of the PIL (Pillow) library:
from PIL import Image
im = Image.open("image.png")
im = im.crop((0, 50, 777, 686))
im.show()
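Note that Image.crop takes a single 4-tuple in (left, upper, right, lower) order, so with the question's coordinates the call would be (assuming the usual convention of x left-to-right and y top-to-bottom):
im_cropped = im.crop((x1, y1, x2, y2))  # (left, upper, right, lower)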

Related

How to create a semi-transparent pattern with Python PIL?

I have found some examples on this site. I would like to create example 6. Can you help?
Create, as a numpy array, the image of the napkin. The squares have a size of 10×10. You may use the numpy tile command. Save the resulting image to a file.
In a standard grayscale image, black pixels are 0, gray pixels are 128, and white ones are 255:
import numpy as np
import matplotlib.pyplot as plt
# first create one 20 x 20 tile
a1 = np.zeros((20,20), dtype=int)
a1[10:20,0:10] = a1[0:10,10:20] = 128
a1[10:20,10:20] = 255
# fill the whole 100 x 100 area with the tiles
a = np.tile(a1, (5,5))
# plot and save
plt.imshow(a, 'Greys_r')
plt.savefig('pattern.png')
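One caveat: plt.savefig writes the whole figure, axes and padding included. If you want the bare 100x100 array as an image file instead, plt.imsave is a minimal alternative, continuing from the code above:
# one image pixel per array element, mapped through the gray colormap
plt.imsave('pattern_raw.png', a, cmap='gray', vmin=0, vmax=255)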
You could do this:
from PIL import Image
import numpy as np
# Make grey 2x2 image
TwoByTwo = np.full((2,2), 128, np.uint8)
# Change top-left to black, bottom-right to white
TwoByTwo[0,0] = 0
TwoByTwo[1,1] = 255
# Tile it
tiled = np.tile(TwoByTwo, (5,5))
# Make into PIL Image, rescale in size and save
Image.fromarray(tiled).resize((100,100), Image.NEAREST).save('result.png')
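Image.NEAREST is a deliberate choice here: it duplicates pixels instead of interpolating, so the edges between the squares stay sharp when the 10x10 array is scaled up to 100x100.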

Turn an image to grayscale in Python

I'm a newbie to TensorFlow and Keras, and I'm trying to create a CNN model for The Street View House Numbers (SVHN) dataset. The dataset contains color images, and I want to turn them into grayscale. I found some code on the web that claims to turn an image to grayscale, but it just changes the colors.
People are reading the second image with a gray colormap. Is there any way to actually turn this image to grayscale?
(I do not know how to process an image in this kind of programming language. If this is a dumb question, please forgive me and provide a brief explanation.)
I provided images and code below; I'll be grateful for any help.
Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
#Read picture:
picture = plt.imread('google.jpg')
print("google logo's shape is: ",picture.shape) #(500, 500, 3)
#saving picture as an np array:
pic_array = np.array(picture)
#Turning image to grayscale
grayscale_pic = np.expand_dims(np.dot(pic_array[...,:3],[0.299, 0.587, 0.114]),axis = 0)
#Dimensions shifted (probably my mistake):
grayscale_pic = np.moveaxis(grayscale_pic, 0, -1)
print("shape of grayscale pic = ", grayscale_pic.shape) # (500, 500, 1)
plt.imshow(picture) #Figure_1
plt.show()
plt.imshow(grayscale_pic) #Figure_2
plt.show()
You can convert a normal image to grayscale using OpenCV like this:
import cv2
gray = cv2.cvtColor(picture,cv2.COLOR_RGB2GRAY)
If you prefer numpy over OpenCV, you can use this:
gray = np.dot(picture[...,:3], [0.2989, 0.5870, 0.1140])
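Either way, the result is a 2-D array; if you display it with plt.imshow and no explicit colormap, matplotlib applies its default colored map, which is exactly what makes the question's output look green. A minimal sketch, continuing from either conversion above:
import matplotlib.pyplot as plt

plt.imshow(gray, cmap='gray')  # force a grayscale colormap for the 2-D array
plt.show()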
You can use matplotlib with weights:
import numpy as np
import matplotlib.pyplot as plt
an_image = plt.imread('google.png')
rgb_weights = [0.2989, 0.5870, 0.1140]
grayscale_image = np.dot(an_image[..., :3], rgb_weights)
plt.axis('off')
plt.imshow(grayscale_image, cmap=plt.get_cmap("gray"), aspect='auto')
plt.show()
Output:
If you remove the aspect='auto' parameter:
Or you can use OpenCV:
import cv2
an_image = cv2.imread("google.png")
grey_image = cv2.cvtColor(an_image, cv2.COLOR_BGR2GRAY)
Or you can use the PIL library:
from PIL import Image
img = Image.open('google.png').convert('LA')
LA mode is L (8-bit pixels, black and white) with alpha, designed for .gif and .png. If your images are .jpeg, use L.
Output:
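As an aside, for a .jpeg source the single-band conversion mentioned above would be:
from PIL import Image

img = Image.open('google.jpg').convert('L')  # 8-bit grayscale, no alpha channel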
There are several ways to do this. One is to use the PIL (Pillow) library:
from PIL import Image
import matplotlib.pyplot as plt
picture = Image.open('google.jpg')
grayscale_pic = picture.convert('LA')
grayscale_pic.save('grayscale.png')
fig,ax = plt.subplots(nrows=1, ncols=2)
plt.subplot(1,2,1)
plt.imshow(picture)
plt.subplot(1,2,2)
plt.imshow(grayscale_pic)
plt.show()
Output:

How to represent a binary image as a graph, with the axes being the height and width dimensions and the data being the pixels

I am trying to use Python along with opencv, numpy and matplotlib to do some computer vision for a robot which will use a railing to navigate. I am currently extremely stuck and have run out of places to look. My current code is:
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('railings.jpg')
railing_image = np.copy(image)
resized_image = cv2.resize(railing_image,(881,565))
gray = cv2.cvtColor(resized_image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 85, 255)
cv2.imshow('test',canny)
image_array = np.array(canny)
ncols, nrows = image_array.shape
count = 0
scan = np.array
for x in range(0, image_array.shape[1]):
    for y in range(0, image_array.shape[0]):
        if image_array[y, x] == 0:
            count += 1
    scan = [scan, count]
print(scan)
plt.plot([0, count])
plt.axis([0, nrows, 0, ncols])
plt.show()
cv2.waitKey(0)
I am using a Canny image, which is stored in an array of 1s and 0s; the image I need represented is:
The final result should look something like the following image.
I've tried using a histogram function, but I've only managed to get it to output essentially a count of how many times a 1 or 0 appears.
If anyone could help me or point me in the right direction toward producing a graph that represents the image pixels within the height and width dimensions, I would appreciate it.
Thank you
I'm not sure how general this is, but you could just use numpy argmax to get the location of the maximum (like this) in your case. You should avoid loops, as they will be very slow; it is better to use numpy functions. I've imported your image and used the cutoff criterion that 200 or more in the yellow channel is railing:
import cv2
import numpy as np
import matplotlib.pyplot as plt
#This loads the canny image you uploaded
image = cv2.imread('uojHJ.jpg')
#Trim off the top taskbar
trimimage = image[100:, :,0]
#Use argmax with 200 cutoff colour in one channel
maxindex = np.argmax(trimimage[:,:]>200, axis=0)
#Plot graph
plt.plot(trimimage.shape[0] - maxindex)
plt.show()
Where this looks as follows:
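For clarity: on a boolean array, np.argmax returns the index of the first True along the given axis, so maxindex above holds, for each column, the first row from the top where the pixel exceeds the cutoff. A tiny sketch with made-up data:
import numpy as np

pixels = np.array([[  0,   0],
                   [210,   0],
                   [255, 230]])
print(np.argmax(pixels > 200, axis=0))  # [1 2]: first row above 200 in each column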

Marking boundary given mask

I have a volume of image slices and their corresponding masks. I've been trying to use the skimage.segmentation library to mark the object of interest in each slice according to its mask.
import numpy as np
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt
def plot_marked_volume(marked_image_volume, mask):
    for slice in range(len(marked_image_volume)):
        if np.count_nonzero(mask[slice,:,:]):
            plt.figure(figsize=(10,10))
            edges_pz = mark_boundaries(marked_image_volume[slice,:,:], mask[slice].astype(int),
                                       color=(1,0,0), mode='thin')
            plt.imshow(edges_pz)
            plt.title('slice ' + str(slice))
            plt.show()
Here's a sample image and mask slice:
However, running the code results in the given boundaries on black backgrounds.
I am expecting an output like the following yellow boundary (ignore the 'CG'):
Any thoughts and suggestions as to what might be the issue is appreciated.
Although I couldn't fully understand from the provided data what you were trying to do, if you just want the mask shown on the original image, this is what you may like to do:
import cv2
import matplotlib.pyplot as plt

fig, axarr = plt.subplots(1, 3, figsize=(15, 40))
axarr[0].axis('off')
axarr[1].axis('off')
axarr[2].axis('off')
imgPath = "download.jpeg"
image = cv2.imread(imgPath)
# Show the original image, with the same shape as edges_pz or the mask.
# The argument should be the image itself, not its path.
axarr[0].imshow(image)
# Show the mask (edges_pz in your case)
axarr[1].imshow(edges_pz)
# Show the original image with the mask overlaid; the shapes of the
# image and the mask should be the same.
axarr[2].imshow(image)
axarr[2].imshow(edges_pz, alpha=0.4)
I hope this helps.

Extract L* component from image after converting from RGB to LAB

I am trying to take an RGB image, convert it into LAB (aka CIE L* a* b*) colorspace, and extract the L* component.
Here is my code:
from skimage import io, color
import matplotlib.pyplot as plt
import cv2
# scipy.misc.imread/imresize have been removed from SciPy, so load and resize another way
img = io.imread("/Users/zheyuanlin/Desktop/opencv_tests/parrots.png")
img_resized = cv2.resize(img, (256, 256), interpolation=cv2.INTER_LINEAR) # resized to 256x256, bilinear
img_cielab = color.rgb2lab(img_resized, illuminant='D50')
# Rescale, since the ranges of LAB values are L (0-100), a (-128 to 127), b (-128 to 127)
cielab_scaled = (img_cielab + [0, 128, 128]) / [100, 255, 255]
cie_l, cie_a, cie_b = cv2.split(cielab_scaled)
""" Display the image """
plt.imshow(cie_l)
plt.show()
This is the image produced:
Here is an example of an L* component of the same image from a research paper I found on google:
I don't know why mine looks so green; does anyone know the problem with my code? Thanks!
The image you display is identical to the one from the paper, but pyplot applies a default color map that renders the grayscale image in blue, green and yellow.
To change the color map used, use the set_cmap function:
plt.set_cmap('gray')
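Alternatively, pass the colormap directly to the imshow call instead of setting it globally:
plt.imshow(cie_l, cmap='gray')
plt.show()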
