Python/numpy points list to black/white image area

I'm trying to convert a continuous list of points (values between 0 and 1) into a black and white image representing the area under/over the points.
plt.plot(points)
plt.ylabel('True val')
plt.show()
print("Points shape-->", points.shape)
I could save the image produced by matplotlib, but I think that would be a nasty workaround.
In the end I would like to obtain an image with shape (224, 224), where the white zone represents the area under the line and the black zone represents the area over the line...
image_area = np.zeros((points.shape[0], points.shape[0]))
# ¿?
Any ideas or suggestions on how to approach this are welcome! Thanks, experts.

Here is a basic example of how you could do it. Since the slicing requires integers, you may have to scale your raw data first.
import numpy as np
import matplotlib.pyplot as plt
# your 2D image
image_data = np.zeros((224, 224))
# your points. Here I am just using a random list of points
points = np.random.choice(224, size=224)
# loop over each column in the image and set the values
# under "points" equal to 1
for col in range(len(image_data[0])):
    image_data[:points[col], col] = 1
# show the final image
plt.imshow(image_data, cmap='Greys')
plt.show()
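As a footnote (not part of the original answer): the same mask can be built without the Python loop by broadcasting a column of row indices against the points array; a minimal sketch, assuming the same random points as above:
import numpy as np
import matplotlib.pyplot as plt

points = np.random.choice(224, size=224)

# row r, column c becomes 1 exactly where r < points[c],
# which reproduces the per-column slicing in a single step
rows = np.arange(224)[:, None]                        # shape (224, 1)
image_data = (rows < points[None, :]).astype(float)   # shape (224, 224)

plt.imshow(image_data, cmap='Greys')
plt.show()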

Thank you Eric, here is the solution based on your proposal, thank you very much!
def to_img(points):
    shape = points.shape[0]
    # your 2D image
    image_data = np.zeros((shape, shape))
    # your points. Here I am just using a random list of points
    # points = np.random.choice(224, size=224)
    def minmax_norm_img(data, xmax, xmin):
        return (data - xmin) / (xmax - xmin)
    points_max = np.max(points)
    points_min = np.min(points)
    points_norm = minmax_norm_img(points, points_max, points_min)
    # loop over each column in the image and set the values
    # over "points" equal to 1
    for col in range(len(image_data[0])):
        image_data[shape - int(points_norm[col] * shape):, col] = 1
    return image_data
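A quick usage sketch (the input here is hypothetical random data, not from the question): feeding in 224 samples between 0 and 1 yields the desired (224, 224) mask:
import numpy as np
import matplotlib.pyplot as plt

points = np.random.rand(224)   # 224 values between 0 and 1
mask = to_img(points)
print(mask.shape)              # (224, 224)

plt.imshow(mask, cmap='Greys')
plt.show()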

Related

How to morph two grid-like images seamlessly?

I have two images that consist of colored squares with different grid steps (10x10 and 12x12).
What I want is to make the first image transform smoothly into the second one.
When I use a plain image overlay with the cv2.addWeighted() function, the result (left) is not good because of the intersected grid spaces. I suppose it would be better to shift the remaining grid cells to the borders and clear out the rest (right).
Is there any algorithm to deal with this task?
Thanks.
You can interpolate each pixel individually between different images.
import numpy as np
from scipy import interpolate
import matplotlib.pyplot as plt
np.random.seed(200)
num_images = 2
images = np.random.rand(num_images, 8,8)
for index, im in enumerate(images):
    print(f'Image {index}')
    fig = plt.imshow(im)
    plt.show()
Interpolating these images:
n_frames = 4
x_array = np.linspace(0, 1, int(n_frames))
def interpolate_images(frame):
    intermediate_image = np.zeros((1, *images.shape[1:]))
    for lay in range(images.shape[1]):
        for lat in range(images.shape[2]):
            tck = interpolate.splrep(np.linspace(0, 1, images.shape[0]), images[:, lay, lat], k=1)
            intermediate_image[:, lay, lat] = interpolate.splev(x_array[frame], tck)
    return intermediate_image

for frame in range(n_frames):
    im = interpolate_images(int(frame))
    fig = plt.imshow(im[0])
    plt.show()
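Worth noting: with num_images = 2 and k = 1, the spline through each pixel is just a straight line, so the double loop above is equivalent to a plain linear cross-fade, done in one vectorized step per frame. A minimal sketch reusing the images and n_frames defined above (this is an observation about the math, not part of the original answer):
for t in np.linspace(0, 1, n_frames):
    # (1 - t) * first image + t * second image, applied to every pixel at once
    blended = (1 - t) * images[0] + t * images[1]
    plt.imshow(blended)
    plt.show()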

Change the color values of the image in for loop using python

Fig. 1: Polygon with black background
Fig. 2: Desired output
Fig. 3: Input
There are some images with a black background and a white polygon inside (Fig. 1). Since some edges of the polygon are not straight, the color values of some pixels are neither 0 nor 255. I tried to change the color values to 0 outside the polygon and 255 inside. This code works perfectly for a single image (Fig. 2), but when I put it in a loop over all 1024 images it fails to change some values (Fig. 3); for instance, see pixel (137, 588) in Figs. 1 and 2.
from skimage import io
import matplotlib.pyplot as plt
import scipy.io as spio
import numpy as np
pixels = 600
my_dpi = 100
num_geo=1024
## Load coordinates
mat = spio.loadmat('coordinateXY.mat', squeeze_me=True)
coord = mat['coordxy']*10
for i in range(num_geo):
    geo = coord[:, :, i]
    print(coord[:, :, i])
    fig = plt.figure(num_geo, figsize=(pixels/my_dpi, pixels/my_dpi), facecolor='k', dpi=my_dpi)
    plt.axes([0, 0, 1, 1])
    rectangle = plt.Rectangle((-300, -300), 600, 600, fc='k')
    plt.gca().add_patch(rectangle)
    polygon = plt.Polygon(coord[:, :, i], color='w')
    plt.gca().add_patch(polygon)
    plt.axis('off')
    plt.axis([-300, 300, -300, 300])
    plt.savefig('figure/%d.jpg' % i, dpi=my_dpi)
    # Save as numpy file
    img_mat = io.imread('figure/%d.jpg' % i)
    np.save('img_mat.npy', img_mat)
    data = np.load('img_mat.npy')
    # adjust the colors and save the revised version
    data1 = np.where(data < 180, 0, data)
    data2 = np.where(data1 > 185, 255, data1)
    arr = data2
    plt.imsave('figureRev/%d.jpg' % i, arr)
    plt.close()
Oh, I see now!
It's either because you used JPEG, which is lossy and allowed to change your data - in which case try the PNG format, which is lossless.
Or it's because the diagonal lines have been drawn with anti-aliasing (see the Wikipedia article on it), which you can turn off if you don't want it. Alternatively, you can threshold your data at, say, 127 to ensure all values at or below it become zero and all values above become 255.
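A minimal sketch of both fixes combined, assuming the same loop variables as the question (the file names are illustrative): save as PNG instead of JPEG, then apply a single hard threshold so every pixel ends up exactly 0 or 255:
# inside the question's for-loop
plt.savefig('figure/%d.png' % i, dpi=my_dpi)    # PNG is lossless
img_mat = io.imread('figure/%d.png' % i)

# one hard threshold instead of the two-sided 180/185 rule,
# so no anti-aliased pixel can slip between the two cutoffs
binary = np.where(img_mat <= 127, 0, 255).astype(np.uint8)
plt.imsave('figureRev/%d.png' % i, binary)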

for loop to remove values not in an array in python

I have a set of images in a numpy array. After some processing and applying a threshold, I turned them into images that have either value 0 or 1 at each xy coordinate. I want to use a for loop and nonzero to set to zero the xy coordinates of the original image that are not in the nonzero array, and leave the pixels that are in the nonzero array at their original intensity. I'm a complete noob at programming and I have been given this task.
This is what I have so far but the last part doesn't work:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Taking the first image of the data
image = series_copy2[0,:,:]
# Mean total background of the image
print('Mean total background = ' +str(np.mean(image)) + ' counts.')
# Threshold for background removal
threshold = 30
# Setting all pixels below a threshold to zero to remove the background
image[image[:,:] < threshold] = 0
image[image[:,:] > threshold] = 1
# Plotting the result for checking
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
data = image
plt.tight_layout()
im = plt.imshow(data, interpolation = 'nearest')
np.transpose(np.nonzero(data))
nz_arrays=np.transpose(np.nonzero(data))
# this doesn't work
for x in data:
    if image[image[:,:] not in nz_arrays]=0
# Plotting the result for checking
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
data = image
plt.tight_layout()
im = plt.imshow(data, interpolation = 'nearest')
# this doesn't work
for x in data:
    if image[image[:,:] not in nz_arrays] is 0:
What is this loop supposed to be doing? When using if, you need to end the condition with a colon and then write the statement to execute on the next line; your first version, if ... = 0, fails because = is assignment, not a comparison (==).
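Beyond the syntax, the loop is not needed at all: the thresholded 0/1 image is already a mask, so selecting with np.where keeps the original intensities inside the mask and zeros everything outside it. A minimal sketch under that reading of the task (note it keeps a copy of the original image instead of overwriting it with 0/1 values):
import numpy as np

threshold = 30
original = series_copy2[0, :, :].copy()   # preserve the original intensities

mask = original >= threshold              # True where a pixel survives the threshold

# pixels outside the mask become zero, pixels inside keep their value
cleaned = np.where(mask, original, 0)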

Given a 2D numpy array of real numbers, how to generate an image depicting the intensity of each number?

I have a 2D numpy array and would like to generate an image such that the pixels corresponding to numbers with a high value (relative to the other pixels) are coloured with a more intense colour. For example, if the image is in grayscale and one pixel has value 0.4849 while all the other pixels have values below 0.001, then that pixel would probably be coloured black, or something close to black.
Here is an example image, the array is 28x28 and contains values between 0 and 1.
All I did to plot this image was run the following code:
import matplotlib.pyplot as plt
im = plt.imshow(myArray, cmap='gray')
plt.show()
However, for some reason this only works if the values are between 0 and 1. If they are on some other scale which may include negative numbers, then the image does not make much sense.
You can use different colormaps too, like in the example below (note that I removed the interpolation):
import numpy as np
import matplotlib.pyplot as plt

happy_array = np.random.randn(28, 28)
im = plt.imshow(happy_array, cmap='seismic', interpolation='none')
cbar = plt.colorbar(im)
plt.show()
And even gray is going to work:
happy_array = np.random.randn(28, 28)
im = plt.imshow(happy_array, cmap='gray', interpolation='none')
cbar = plt.colorbar(im)
plt.show()
You can normalize the data to the range (0, 1) by dividing everything by the maximum value of the array:
normalized = array / np.amax(array)
plt.imshow(normalized)
If the array contains negative values you have two logical choices. Either plot the magnitude:
mag = np.fabs(array)
normalized = mag / np.amax(mag)
plt.imshow(normalized)
or shift the array so that everything is positive:
positive = array - np.amin(array)
normalized = positive / np.amax(positive)
plt.imshow(normalized)
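As an aside (not in the original answers): instead of normalizing the data yourself, you can pin imshow's color scale with its vmin and vmax arguments, which map an explicit value range onto the full colormap; a small sketch using the same kind of 28x28 random data as above:
import numpy as np
import matplotlib.pyplot as plt

array = np.random.randn(28, 28)

# values at or below -3 map to one end of the gray scale,
# values at or above 3 to the other, regardless of the data's extremes
plt.imshow(array, cmap='gray', vmin=-3, vmax=3)
plt.colorbar()
plt.show()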

Python Open CV - Get coordinates of region

I am a beginner in image processing (and OpenCV). After applying the watershed algorithm to an image, the output obtained is something like this -
Is it possible to get the co-ordinates of the regions that were segmented out?
The code used is this (in case you wish to have a look) -
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('input.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY_INV+cv2.THRESH_OTSU)
# noise removal
kernel = np.ones((3,3),np.uint8)
opening = cv2.morphologyEx(thresh,cv2.MORPH_OPEN,kernel, iterations = 2)
# sure background area
sure_bg = cv2.dilate(opening,kernel,iterations=3)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
ret, sure_fg = cv2.threshold(dist_transform,0.7*dist_transform.max(),255,0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg,sure_fg)
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
# Add one to all labels so that sure background is not 0, but 1
markers = markers+1
# Now, mark the region of unknown with zero
markers[unknown==255] = 0
markers = cv2.watershed(img,markers)
img[markers == -1] = [255,0,0]
plt.imshow(img)
plt.show()
Is there any function or algorithm to extract the co-ordinates of the coloured regions that are separated out? Any help would be much appreciated!
After this line:
markers = cv2.watershed(img,markers)
markers will be an image with all regions segmented, and the pixel value in each region will be an integer (label) greater than 0. Boundaries have label -1, and since the question's code adds one to all labels, the sure background ends up with label 1.
You already know the number of labels from ret returned by connectedComponents.
You need a data structure to contain the points for each region: for example, the points of one region go into an array of points, and you need one such array per region, so an array of arrays of points.
So, if you want to find the pixels of each region, you can do:
1) Scan the image and append each point to an array of arrays of points, where each inner array contains the points of one region:
// Pseudocode
"labels" is an array of arrays of points
initialize labels to size "ret"; each array of points starts empty
for r = 1 : markers.rows
    for c = 1 : markers.cols
        value = markers(r, c)
        if (value > 0)
            labels{value-1}.append(Point(c, r))  // r = y, c = x
        end
    end
end
2) Generate a mask for each label value, and collect the points in the mask:
// Pseudocode
"labels" is an array of arrays of points
initialize labels to size "ret"; each array of points starts empty
for value = 1 : ret-1
    mask = (markers == value)
    labels{value-1} = all points in the mask  // e.g. with cv::findNonZero(...)
end
The first approach is likely to be much faster; the second is easier to implement. Sorry, I can't give you Python code (C++ would have been much better :D), but you should be able to find your way from here.
Hope it helps
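Since the question is in Python, here is a minimal sketch of the second approach with numpy, assuming markers and ret from the question's code; np.argwhere returns the (row, col) coordinates of every pixel carrying a given label:
import numpy as np

# label values run from 1 to ret after the question's "markers + 1" step;
# boundary pixels are -1 and are skipped automatically
region_points = {}
for value in range(1, ret + 1):
    region_points[value] = np.argwhere(markers == value)  # (row, col) pairs

# e.g. all pixel coordinates belonging to region 2:
print(region_points[2])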
