Blending images in Python with for loops and range()

So I am having trouble figuring out how to blend two images using for loops and range(len). For this particular case, I need to compute the color of each pixel by setting the value of each color component to the average of the same components in the corresponding pixels of the two input images, using this formula:
(first_value + second_value) / 2
That is, sum the values of each of the R, G, B channels across both images and then divide by 2.
img1 = load_img('images/cat.jpg')
img2 = load_img('images/texture.jpg')

def blend(img1, img2):
    for r in range(len(img1)):
        for c in range(len(img1[r])):
I am not sure if I am going in the right direction; if so, should I be stating the images' height and width even though they are the same? (They are both 800x600.) Also, since I want to return a new image, should I create a new list and append whatever pixel my function iterates through?
I know that there is a blend function and also addWeighted, but I want to get the average without using those methods.
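A minimal sketch of the averaging approach, assuming load_img returns a nested list of (R, G, B) tuples (rows of pixels) and that both images are the same size, as stated above:

def blend(img1, img2):
    blended = []
    for r in range(len(img1)):
        row = []
        for c in range(len(img1[r])):
            p1 = img1[r][c]
            p2 = img2[r][c]
            # average each color component of the corresponding pixels
            row.append(((p1[0] + p2[0]) // 2,
                        (p1[1] + p2[1]) // 2,
                        (p1[2] + p2[2]) // 2))
        blended.append(row)
    return blended

Building and returning a new list this way leaves both input images untouched.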

Related

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts of the setup are two adjacent layers of differently-colored foams observed from the side: basically a 2-color sandwich shrinking from both sides, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> to here.
Ideally, given a few hundred of such shots, in which only the widths change, I get back an array of scalars that I can plot. (It's going to look like a harmonic series on either side of the x-axis.)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with any computer vision in general. Could you guys throw me a roadmap of what packages/functions to use, or what steps one should take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length of the foams the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and how to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assume your intensity will be different after converting to grayscale (if not, just convert to another color space like HSV or LAB, then use just one of the components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned against your actual data; here I'm just giving an example.
Clean up the segmented image using a median filter with a radius larger than 9, as I do expect some noise. You can also use an ROI here to help remove part of the noise. But personally I'm lazy; I just wrote the program to handle all cases and angles.
thresholded_image_after_smoothing = cv2.medianBlur(thresholded_image, 9)
Each band will correspond to one color (layer). You should now have N segmented images from the one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. run boundingRect on each sub-segmented image (each thresholded_image_after_smoothing):
C++: Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height and width properties. You can use a simple sort to order the layers from top to bottom based on the rect attribute x. Run through the whole video to obtain the height-vs-time graph for each x (layer id).
Rect API
Public Attributes
_Tp height // this is what you are looking for
_Tp width
_Tp x // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need. A minimal end-to-end sketch of the pipeline follows.
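Assuming the frames live in a folder called frames and that the grayscale thresholds roughly isolate the two layers (both the folder name and the threshold values are placeholders to be tuned against your data):

import os
import cv2

heights = {0: [], 1: []}  # layer id -> height per frame
for name in sorted(os.listdir('frames')):
    gray = cv2.cvtColor(cv2.imread(os.path.join('frames', name)),
                        cv2.COLOR_BGR2GRAY)
    # one binary image per intensity band, i.e. one per foam layer
    _, band0 = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    _, band1 = cv2.threshold(gray, 77, 255, cv2.THRESH_BINARY_INV)
    for layer_id, band in enumerate((band0, band1)):
        band = cv2.medianBlur(band, 9)
        # bounding box of all non-zero pixels in this band
        bx, by, bw, bh = cv2.boundingRect(cv2.findNonZero(band))
        heights[layer_id].append(bh)
# heights[0] and heights[1] can now be plotted against the frame index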
A more correct way is to use a Kalman filter to track the position and height over time, as I would expect some bubbles to occur and interfere with the height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image it will show the shrink over time.
If, for example, you use a 3-pixel width for the ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)

# sort images
all_images.sort()

# create empty result array
result = np.empty([h,0,3],dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path+'/'+image)
    # get the region of interest
    roi = img[y:y+h,x:x+w]
    # add the roi to previous results
    result = np.hstack((result,roi))

# optional: save result as image
# cv2.imwrite('result.png',result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by converting the image to HSV and using inRange (a minimal sketch follows below). This creates a mask (a 2D array with values from 0-255, one per pixel) that you can use to calculate the average height and to extract the parameters and area of each region.
You can find a script to help you find the HSV colors for separation in this GitHub repository.
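A minimal sketch of the color-based separation; the HSV bounds below are placeholders that must be tuned for the actual foam colors:

import cv2
import numpy as np

img = cv2.imread('frame.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# one mask per foam color (bounds are illustrative only)
mask_a = cv2.inRange(hsv, np.array([0, 50, 50]), np.array([10, 255, 255]))
mask_b = cv2.inRange(hsv, np.array([100, 50, 50]), np.array([130, 255, 255]))
for mask in (mask_a, mask_b):
    # bounding box of the masked region gives the layer height
    bx, by, bw, bh = cv2.boundingRect(cv2.findNonZero(mask))
    print(bh)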

Manipulating recognized numbers from a numbers recognition script in python

I have a working numbers-recognition script in Python that produces this result:
This was created using OpenCV, sklearn and skimage. How do I save the recognized numbers to files so I can use them later in a different computation?
You need to save crops of the original image to separate files. Without seeing your code it is not possible to determine how you are storing the crops, but you need to access the values used for drawing the green rectangles. The same values for drawing the green rectangles can then be used to save a crop of the image by slicing the image.
If you used the OpenCV cv2.rectangle() function to draw the boxes, it means you have the top-left and bottom-right corners of each rectangle. However, numpy slicing is in the format:
crop_image = image[y:y+h, x:x+w]
Therefore, you would have to compute the height and width values (i.e. h = ymax - ymin, so y+h is simply ymax, and the same for the width along x).
Again, as you have shown no code, my guess is that you have a loop that draws the boxes. If that is the case, underneath the cv2.rectangle() call you could write:
crop_image = image[ymin:ymax, xmin:xmax]
cv2.imwrite('images/prediction_' + str(predicted_value) + '_' + str(count) + '.jpg', crop_image)
Reference this against your code to get the xmin/xmax/ymin/ymax values:
cv2.rectangle(img, (xmin, ymin), (xmax, ymax), (R, G, B), thickness)
Where image is the input image you run the predictions on, predicted_value is the predicted number, and count is a simple counter to ensure you don't overwrite files. You can obtain it with the enumerate function in the for loop, i.e. for count, x in enumerate(data). A sketch of the whole loop follows below.
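Putting it together, a sketch under the assumption that your recognition step produces a list of (xmin, ymin, xmax, ymax, predicted_value) tuples; the list name, tuple layout, and sample values are placeholders for whatever your code actually stores:

import cv2

detections = [(10, 10, 50, 60, 3)]  # placeholder: one box with predicted value 3
image = cv2.imread('input.jpg')
for count, (xmin, ymin, xmax, ymax, predicted_value) in enumerate(detections):
    # crop before drawing so the green box does not end up in the saved file
    crop_image = image[ymin:ymax, xmin:xmax]
    cv2.imwrite('images/prediction_' + str(predicted_value) + '_'
                + str(count) + '.jpg', crop_image)
    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)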
The output files would be saved to a folder called images (assuming you've created it beforehand) with file names like:
prediction_3_0.jpg
prediction_4_1.jpg
prediction_2_2.jpg
prediction_3_3.jpg
prediction_5_4.jpg
This way, when you read them back in, you know what the predicted number was. Let me know if any of my assumptions are wrong and I can edit the answer.

How to apply a transformation to a certain tonal range in VIPS/Python

I have to apply various transformations to different tonal ranges of 16-bit tiff files in VIPS (and Python). I have managed to do so, but I am new to VIPS and I am not convinced I am doing this in an efficient manner. These images are several hundred megabytes each, and cutting each excess step can save me a few seconds per image.
I wonder if there is a more efficient way to achieve the same results I obtain from the code below, for instance using lookup tables (I couldn't really figure out how they work in VIPS). The code separates the shadows in the red channel and passes them through a transformation.
im = Vips.Image.new_from_file("test.tiff")
# Separate the red channel
band = im[0]
# Find the tone limit for the bottom 5%
lim = band.percent(5)
# Create a mask using the tone limit
mask = (band <= lim)
# Convert the mask to 16 bits
mask = mask.cast(band.BandFmt, shift = True)
# Run the transformation on the band and keep only the shadow areas
new_shadows = (65535 * (band / lim * 0.1)) & mask
After running more or less similar code for each tonal range (highlights, shadows, midtones), I add all the resulting images together to reconstruct the original band:
new_band = (new_shadows.add(new_highlights).add(new_midtones)).cast(band.BandFmt)
I made you a demo program showing how to do something like this with the vips histogram functions:
import sys
import pyvips
im = pyvips.Image.new_from_file(sys.argv[1])
# find the image histogram
#
# we'll get a uint image, one pixel high and 256 or
# 65536 pixels across, it'll have three bands for an RGB image source
hist = im.hist_find()
# find the normalised cumulative histogram
#
# for a 16-bit source, we'll have 65535 as the right-most element in each band
norm = hist.hist_cum().hist_norm()
# search from the left for the first pixel > 5%: the position of this pixel
# will give us the pixel value that 5% of pixels fall below
#
# .profile() gives back a pair of [column-profile, row-profile]; we want the
# second one (index 1). .getpoint() reads out a pixel as a Python array, so for
# an RGB image we'll have something like [19.0, 16.0, 15.0] in shadows
shadows = (norm > 5.0 / 100.0 * norm.width).profile()[1].getpoint(0, 0)
# Now make an identity LUT that matches our original image
lut = pyvips.Image.identity(bands=im.bands,
                            ushort=(im.format == "ushort"))
# do something to the shadows ... here we just brighten them a lot
lut = (lut < shadows).ifthenelse(lut * 100, lut)
# make sure our lut is back in the original format, then map the image through
# it
im = im.maplut(lut.cast(im.format))
im.write_to_file(sys.argv[2])
It does a single find-histogram operation on the source image, then a single map-histogram operation, so it should be fast.
This is just adjusting the shadows; you'll need to extend it slightly to do midtones and highlights as well (a sketch follows below), but you can do all three modifications from the single initial histogram, so it shouldn't be any slower.
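A sketch of the highlight half, reusing norm and lut from the demo above; the 95% limit and the 0.9 darkening factor are illustrative assumptions, not part of the demo itself:

# pixel value that 95% of pixels fall below (same trick as for the shadows)
highlights = (norm > 95.0 / 100.0 * norm.width).profile()[1].getpoint(0, 0)
# do something to the highlights ... here we just darken them a little
lut = (lut > highlights).ifthenelse(lut * 0.9, lut)
# anything between `shadows` and `highlights` is a midtone and can be
# remapped with a third ifthenelse before the final maplut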
Please open an issue on the libvips tracker if you have any more questions:
https://github.com/libvips/libvips/issues

Get the (x,y) coordinate values from an image array's RGB value using numpy

I am new to python so I really need help with this one.
I have an image greyscaled and thresholded so that the only colors present are black and white.
I'm not sure how to go about writing an algorithm that will give me a list of coordinates (x,y) on the image array that correspond to the white pixels only.
Any help is appreciated!
Surely you must already have the image data in the form of a list of intensity values? If you're using Anaconda, you can use the PIL Image module and call getdata() to obtain this intensity information. Some people advise using NumPy methods, or others, instead, which may improve performance. If you want to look into that then go for it; my answer can apply to any of them.
If you already have a function to convert a greyscale image to B&W, then you should have the intensity information for that output image: a list of 0's and 1's, running from the top-left corner to the bottom-right. If you have that, you already have your location data; it just isn't in (x,y) form. To get it into that form, use something like this:
data = image.getdata()
height = image.getHeight()
width = image.getWidth()
pixelList = []
for i in range(height):
    for j in range(width):
        stride = (width * i) + j
        pixelList.append((j, i, data[stride]))
Where data is a list of 0's and 1's (B&W), and I assume you have written getWidth() and getHeight(). Don't just copy what I've written; understand what the loops are doing. The result is a list, pixelList, of tuples, each tuple containing location and intensity information in the form (x, y, intensity). That may be a messy form for what you are doing, but that's the idea. It would be much cleaner and more accessible to pass the three values (x, y, intensity) to a Pixel object or something instead of making a list of tuples. Then you can get any of those values from anywhere. I would encourage you to do that, for better organization and so you can write the code on your own.
In either case, having the intensity and location stored together makes sorting out the white pixels very easy. Here it is using the list of tuples:
whites = []
for pixel in pixelList:
    if pixel[2] == 1:
        whites.append(pixel[0:2])
Then you have a list of white pixel coordinates.
You can use PIL and np.where to get the results efficiently and concisely:
from PIL import Image
import numpy as np
img = Image.open('/your_pic.png')
pixel_mat = np.array(img.getdata())
width = img.size[0]
pixel_ind = np.where((pixel_mat[:, :3] > 0).any(axis=1))[0]
coordinate = np.concatenate(
    [
        (pixel_ind % width).reshape(-1, 1),
        (pixel_ind // width).reshape(-1, 1),
    ],
    axis=1,
)
Pick the required pixels and get their flat indices, then calculate the coordinates from them. Without explicit loops, this algorithm may be faster.
PIL is only used here to get the pixel matrix and the image width; you can use any library you are familiar with instead.
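An equivalent route, assuming you load the thresholded image as a single-channel 2D array directly; np.nonzero returns row and column indices, which are then swapped to get (x, y) pairs:

import numpy as np
from PIL import Image

# load as a 2D grayscale array; white pixels are non-zero
arr = np.array(Image.open('/your_pic.png').convert('L'))
ys, xs = np.nonzero(arr)
coords = np.column_stack((xs, ys))  # one (x, y) pair per white pixel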

Dithering in JES/Jython

My goal is to dither an image in JES/Jython using the Floyd-Steinberg method. Here is what I have so far:
def Dither_RGB(Canvas):
    for Y in range(getHeight(Canvas)):
        for X in range(getWidth(Canvas)):
            P = getColor(Canvas, X, Y)
            E = getColor(Canvas, X+1, Y)
            SW = getColor(Canvas, X-1, Y+1)
            S = getColor(Canvas, X, Y+1)
            SE = getColor(Canvas, X+1, Y+1)
    return
The goal of the above code is to scan through the image's pixels and process the neighboring pixels needed for Floyd-Steinberg.
What I'm having trouble understanding is how to go about calculating and distributing the differences in R,G,B between the old pixel and the new pixel.
Anything that could point me in the right direction would be greatly appreciated.
I don't know anything about the method you are trying to implement, but for the rest: assuming Canvas is of type Picture, you can't directly get the color that way. The color of a pixel is obtained from a variable of type Pixel.
Example: here is a procedure that gets the color of each pixel of an image and assigns it to the exact same position in a new picture:
def copy(old_picture):
    # Create a picture to be returned, of the exact same size as the source one
    new_picture = makeEmptyPicture(old_picture.getWidth(), old_picture.getHeight())
    # Process the copy pixel by pixel
    for x in xrange(old_picture.getWidth()):
        for y in xrange(old_picture.getHeight()):
            # Get the source pixel at (x,y)
            old_pixel = getPixel(old_picture, x, y)
            # Get the pixel at (x,y) from the resulting new picture,
            # which remains blank until you assign it a color
            new_pixel = getPixel(new_picture, x, y)
            # Grab the color of the previously selected source pixel
            # and assign it to the resulting new picture
            setColor(new_pixel, getColor(old_pixel))
    return new_picture

file = pickAFile()
old_pic = makePicture(file)
new_pic = copy(old_pic)
Note: the example above applies only if you want to work on a new picture without modifying the old one. If your algorithm requires modifying the old picture on the fly while it runs, apply the final setColor directly to the original pixel (no need for a new picture, nor for the return statement).
Starting from here, you can compute anything you want by manipulating the RGB values of a pixel: use the setRed(), setGreen() and setBlue() functions applied to a Pixel, or build a color with col = makeColor(red_val, green_val, blue_val) and apply it to a pixel using setColor(a_pixel, col).
Example of RGB manipulations here.
Some others here and especially here.
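Putting those pieces together, a rough sketch of the Floyd-Steinberg step itself, assuming a black/white target per channel; the 7/16, 3/16, 5/16, 1/16 weights are the standard Floyd-Steinberg coefficients, and everything else (function name, palette) is illustrative:

def dither(picture):
    w = getWidth(picture)
    h = getHeight(picture)
    for y in range(h):
        for x in range(w):
            px = getPixel(picture, x, y)
            old = [getRed(px), getGreen(px), getBlue(px)]
            new = []
            for v in old:
                if v > 127:
                    new.append(255)
                else:
                    new.append(0)
            # quantize the current pixel in place
            setColor(px, makeColor(new[0], new[1], new[2]))
            # push the rounding error onto the E, SW, S and SE neighbours
            err = [old[i] - new[i] for i in range(3)]
            for (dx, dy, weight) in [(1, 0, 7.0/16), (-1, 1, 3.0/16),
                                     (0, 1, 5.0/16), (1, 1, 1.0/16)]:
                nx = x + dx
                ny = y + dy
                if 0 <= nx and nx < w and 0 <= ny and ny < h:
                    n = getPixel(picture, nx, ny)
                    # clamp to 0-255 to stay within valid color values
                    setRed(n, max(0, min(255, getRed(n) + int(err[0] * weight))))
                    setGreen(n, max(0, min(255, getGreen(n) + int(err[1] * weight))))
                    setBlue(n, max(0, min(255, getBlue(n) + int(err[2] * weight))))

Because the error is pushed forward onto pixels that have not been visited yet, the picture must be modified in place, as noted above.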
