Dithering in JES/Jython

My goal is to dither an image in JES/Jython using the Floyd-Steinberg method. Here is what I have so far:
def Dither_RGB(Canvas):
    for Y in range(getHeight(Canvas)):
        for X in range(getWidth(Canvas)):
            P = getColor(Canvas, X, Y)
            E = getColor(Canvas, X+1, Y)
            SW = getColor(Canvas, X-1, Y+1)
            S = getColor(Canvas, X, Y+1)
            SE = getColor(Canvas, X+1, Y+1)
    return
The goal of the above code is to scan through the image's pixels and process the neighboring pixels needed for Floyd-Steinberg.
What I'm having trouble understanding is how to go about calculating and distributing the differences in R,G,B between the old pixel and the new pixel.
Anything that could point me in the right direction would be greatly appreciated.

I don't know anything about the method you are trying to implement, but for the rest: assuming Canvas is of type Picture, you can't get the color directly that way. The color of a pixel is obtained from a variable of type Pixel.
Example: here is a procedure that gets the color of each pixel from an image and assigns it to the pixel at the same position in a new picture:
def copy(old_picture):
    # Create a picture to be returned, of the exact same size as the source one
    new_picture = makeEmptyPicture(old_picture.getWidth(), old_picture.getHeight())
    # Process the copy pixel by pixel
    for x in xrange(old_picture.getWidth()):
        for y in xrange(old_picture.getHeight()):
            # Get the source pixel at (x,y)
            old_pixel = getPixel(old_picture, x, y)
            # Get the pixel at (x,y) from the resulting new picture,
            # which remains blank until you assign it a color
            new_pixel = getPixel(new_picture, x, y)
            # Grab the color of the previously selected source pixel
            # and assign it to the resulting new picture
            setColor(new_pixel, getColor(old_pixel))
    return new_picture

file = pickAFile()
old_pic = makePicture(file)
new_pic = copy(old_pic)
Note: The example above applies only if you want to work on a new picture without modifying the old one. If your algorithm requires modifying the old picture on the fly, apply the final setColor directly to the original pixel instead (no need for a new picture, nor for the return statement).
Starting from here, you can compute anything you want by manipulating the RGB values of a pixel: use the setRed(), setGreen() and setBlue() functions applied to a Pixel, or build a color with col = makeColor(red_val, green_val, blue_val) and apply it to a pixel using setColor(a_pixel, col).
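For the Floyd-Steinberg question itself, the idea is: quantize the current pixel, then push the per-channel quantization error onto the unvisited neighbors with weights 7/16 (E), 3/16 (SW), 5/16 (S) and 1/16 (SE). Here is a minimal sketch in JES, assuming a black-and-white target per channel (the function name and the 1-bit palette are illustrative choices, not from your code; adapt the quantization step to your palette):
def floydSteinbergDither(picture):
    for y in range(getHeight(picture)):
        for x in range(getWidth(picture)):
            p = getPixel(picture, x, y)
            oldR, oldG, oldB = getRed(p), getGreen(p), getBlue(p)
            # Quantize each channel to 0 or 255
            newR = 255 if oldR > 127 else 0
            newG = 255 if oldG > 127 else 0
            newB = 255 if oldB > 127 else 0
            setColor(p, makeColor(newR, newG, newB))
            # Per-channel error between the old and the new pixel
            errR, errG, errB = oldR - newR, oldG - newG, oldB - newB
            # Distribute the error to the E, SW, S and SE neighbors
            for dx, dy, w in ((1, 0, 7.0/16), (-1, 1, 3.0/16),
                              (0, 1, 5.0/16), (1, 1, 1.0/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < getWidth(picture) and 0 <= ny < getHeight(picture):
                    n = getPixel(picture, nx, ny)
                    # Clamp explicitly so out-of-range values never reach makeColor
                    r = max(0, min(255, int(getRed(n) + errR * w)))
                    g = max(0, min(255, int(getGreen(n) + errG * w)))
                    b = max(0, min(255, int(getBlue(n) + errB * w)))
                    setColor(n, makeColor(r, g, b))
Because the error is written back into the picture as the scan proceeds, this works on the picture in place, matching the note above.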

Related

Blending images in Python for loop and range

So I am having trouble figuring out how to blend two images with for loops and range(len). For this particular case, I need to compute the color of each pixel by setting each color component to the average of the same components in the corresponding pixels of the two input images, i.e. I need to use this formula
(first_value + second_value) / 2
Sum the pixels for each of the R, G, B channels for both images and then divide by 2.
img1 = load_img('images/cat.jpg')
img2 = load_img('images/texture.jpg')

def blend(img1, img2):
    for r in range(len(img1)):
        for c in range(len(img1[r])):
I am not sure if I am going in the right direction. If so, should I be stating the images' height and width even though they are the same? (They are both 800x600.) Also, I want to return a new image, so should I be creating a new list and appending whatever pixel my function iterates through?
I know that there is the blend function and also addWeighted, but I want to get the average without using those methods.
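A minimal sketch of the averaging blend along these lines (load_img comes from the question and is not a standard API; the nested-list-of-(R, G, B)-tuples pixel layout is my assumption):
def blend(img1, img2):
    result = []
    for r in range(len(img1)):
        row = []
        for c in range(len(img1[r])):
            p1, p2 = img1[r][c], img2[r][c]
            # Average each of the R, G, B components of the two pixels
            row.append(tuple((a + b) // 2 for a, b in zip(p1, p2)))
        result.append(row)
    return result
Since both images are the same size, iterating over one image's dimensions is enough, and building a new nested list as above is the straightforward way to return a new image.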

How to get a list of pixel coordinates [x, y] of a certain color (RGB) with Python?

I'm trying to build an indoor navigation system, and I need an indoor map that a robot can navigate automatically.
I'm thinking of using an image with a different color for each place (a certain section), and I want to know how to get the coordinates of the pixels with a certain color, so that I can assign places to a color area using those coordinates. How can I get the list of pixel coordinates of the image pixels that have a certain color using Python? In this case, I know the RGB code of the color! I am currently using PyCharm.
Assuming your picture has shape (Y, X, 3), where Y is the height of the image, X its width, and the last dimension holds the 3 RGB color channels, you can use this code to get a list of coordinates of the pixels with a particular RGB value:
def pixels(image, rgb_set):
    matches = []
    for j in range(image.shape[0]):
        for i in range(image.shape[1]):
            # A bare == on a NumPy row compares element-wise,
            # so reduce with all() to test the whole RGB triple
            if (image[j][i] == rgb_set).all():
                matches.append([j, i])
    return matches
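A vectorized equivalent avoids the Python-level loops entirely; this is a minimal sketch under the same (Y, X, 3) array assumption:
import numpy as np

# Rows of [j, i] for every pixel whose RGB triple matches rgb_set
coords = np.argwhere((image == rgb_set).all(axis=-1))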

Comparing and plotting regions of the same color over a dataset of a few hundred images

A chem student asked me for help with plotting image segmentation:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :) showing what I want to go from and what I want to get to (the before/after images are not reproduced here).
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with any computer vision in general. Could you throw me a roadmap of what packages/functions to use or what steps to take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length of the foams the algorithm measures the width (i.e., if the foams are a bit uneven), although this can be ignored;
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably) and to selectively store the spatial parameters of the resulting segments;
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming your intensity differs between the layers after converting to grayscale (if not, just convert to another color space like HSV or LAB and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscale input into a few bands:
ret, thresh1 = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 27, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 77, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 97, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 227, 255, cv2.THRESH_TOZERO_INV)
The threshold values should be tuned against your actual data; these are just examples.
Clean up the segmented image using a median filter with a radius of 9 or larger, since I do expect some noise. You can also use an ROI here to help remove part of the noise, but personally I'm lazy and just wrote the program to handle all cases and angles:
thresholded_images_aftersmoothing = cv2.medianBlur(thresholded_images, 9)
Each band will correspond to one color (layer). Now you should have N segmented images from the one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e., of each thresholded_images_aftersmoothing; e.g., run boundingRect on each sub-segmented image:
C++: Rect boundingRect(InputArray points)
Python: retval = cv2.boundingRect(points)
Last, the rect has x, y, width and height properties. You can use a simple sort to order the layers from top to bottom based on the rect attribute x. Run through the whole video to obtain the x (layer id) and height vs. time graph.
Rect API public attributes:
_Tp height  // this is what you are looking for
_Tp width
_Tp x       // this tells you the position of the band
_Tp y
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
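Putting the steps above together, here is a rough per-frame sketch (the filename and threshold value are illustrative placeholders, and it assumes the band is present in the frame):
import cv2

img = cv2.imread('frame_0001.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
ret, band = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
band = cv2.medianBlur(band, 9)           # smooth away speckle noise
pts = cv2.findNonZero(band)              # coordinates of all white pixels
x, y, w, h = cv2.boundingRect(pts)       # bounding box of this band
print('band at y=%d has height %d' % (y, h))
Repeat this per threshold band and per frame to build the height-vs-time series.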
A more robust way is to use a Kalman filter to track the position and height, as I would expect some sort of bubbles to occur and interfere with the measured height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong, you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image, the stack will show the shrinking over time.
If, for example, you use a 3-pixel-wide ROI, the result for 300 images will be a 900-pixel-wide image, where the left is the start of the experiment and the right is the end (the illustrating image is not reproduced here).
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'
# dimensions of roi
x = 0
y = 0
w = 3
h = 100
# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()
# create empty result array
result = np.empty([h, 0, 3], dtype=np.uint8)
for image in all_images:
    # load image
    img = cv2.imread(path + '/' + image)
    # get the region of interest
    roi = img[y:y+h, x:x+w]
    # add the roi to previous results
    result = np.hstack((result, roi))
# optional: save result as image
# cv2.imwrite('result.png', result)
# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after the question edit:
If the foams have different colors, you can easily separate them by converting the image to HSV and using cv2.inRange. This creates a mask (a 2D array with values from 0 to 255, one per pixel) that you can use to calculate the average height and to extract the parameters and area of each region.
You can find a script on GitHub to help you find the HSV bounds for the separation.
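As a sketch of that separation step (the filename and the HSV bounds are made-up placeholders; tune them to your foam colors):
import cv2
import numpy as np

img = cv2.imread('frame_0001.png')       # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([20, 50, 50])           # illustrative lower HSV bound
upper = np.array([40, 255, 255])         # illustrative upper HSV bound
mask = cv2.inRange(hsv, lower, upper)    # 255 where the color matches
# Average band height: white pixels per image column
mean_height = cv2.countNonZero(mask) / float(mask.shape[1])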

Transform an HSV mask into a set of points

I created an HSV mask from the image; the result looks like the following (mask image not reproduced here).
I hope that this mask can be represented by a set of points. My original idea was to use skimage's skeletonize to create a line and then use a sliding window to calculate local means for point creation.
However, skeletonize takes too long: it requires 0.4 s per frame, which is not workable for video processing.
Do you want the points of all True elements of the mask, or do you just want a skeleton? If the former:
from skimage import io
import numpy as np

mask = io.imread('./mask.png')[:, :, 0] / 255
mask = mask.astype('bool')
s0, s1 = mask.shape                       # dimensions of mask
a0, a1 = np.arange(s0), np.arange(s1)     # make two 1d coordinate arrays
coords = np.array(np.meshgrid(a0, a1)).T  # cartesian product into a coordinate matrix
coords = coords[mask]                     # keep only the coordinates inside the mask
If the latter, you can get the start and end points (from left to right) of the object in the mask quickly with something like:
# Pair each pixel with its left neighbor; a (False, True) pair marks a run start
start_mat = np.stack((np.roll(mask, 1, axis=1), mask), -1)
start_mask = np.fromiter(map(lambda p: np.all(p == np.array([False, True])), start_mat[mask]), dtype=bool)
starts = coords[start_mask]
# Same idea with the right neighbor to find run ends
end_mat = np.stack((np.roll(mask, -1, axis=1), mask), -1)
end_mask = np.fromiter(map(lambda p: np.all(p == np.array([False, True])), end_mat[mask]), dtype=bool)
ends = coords[end_mask]
This will give you a rough outline of the object. Outline points will be missing anywhere the slope of the figure is 0, so you may have to think of a vertical difference scheme for those areas; the same idea works with np.roll(..., axis=0). You could then concatenate the unique points from rolling over rows with the points from rolling over columns to get the full outline, as sketched below.
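A sketch of that vertical pass, reusing mask and coords from the snippet above (this is my extrapolation of the idea, not part of the original code):
# Repeat the detection along axis 0 (top/bottom neighbors)
v_start_mat = np.stack((np.roll(mask, 1, axis=0), mask), -1)
v_start_mask = np.fromiter(map(lambda p: np.all(p == np.array([False, True])), v_start_mat[mask]), dtype=bool)
v_starts = coords[v_start_mask]
v_end_mat = np.stack((np.roll(mask, -1, axis=0), mask), -1)
v_end_mask = np.fromiter(map(lambda p: np.all(p == np.array([False, True])), v_end_mat[mask]), dtype=bool)
v_ends = coords[v_end_mask]
# Merge the horizontal and vertical transition points into one outline
outline = np.unique(np.concatenate([starts, ends, v_starts, v_ends]), axis=0)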
Averaging the correct pairs to get the skeleton isn't so easy.
Here's a resulting outline (image not reproduced here). You can definitely make this faster than 0.4 s.
Couldn't a simple for loop work?
Scan each horizontal line of your bitmap looking for:
- the x position where black meets white = a new start point;
- then, in the same scanned line, the next x position where white meets black = a new end point.
Either put dots at the start/end points for an "outline" effect, or put a dot in the middle for a "center" effect, with dot.x = (start_point + end_point) / 2.
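A minimal sketch of that scanline idea, assuming mask is a 2D boolean NumPy array (True = white):
import numpy as np

def scanline_points(mask):
    starts, ends = [], []
    for y in range(mask.shape[0]):
        # +1/-1 in the difference mark black-to-white and white-to-black transitions
        diff = np.diff(mask[y].astype(np.int8))
        for x in np.where(diff == 1)[0]:
            starts.append((x + 1, y))  # first white pixel of a run
        for x in np.where(diff == -1)[0]:
            ends.append((x, y))        # last white pixel of a run
    return starts, ends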

Get the (x,y) coordinate values from an image array's RGB value using numpy

I am new to python so I really need help with this one.
I have an image greyscaled and thresholded so that the only colors present are black and white.
I'm not sure how to go about writing an algorithm that will give me a list of coordinates (x,y) on the image array that correspond to the white pixels only.
Any help is appreciated!
Surely you must already have the image data in the form of a list of intensity values? If you're using Anaconda, you can use the PIL Image module and call getdata() to obtain this intensity information. Some people advise using NumPy methods, or others, instead, which may improve performance. If you want to look into that, go for it; my answer can apply to any of them.
If you already have a function to convert a greyscale image to B&W, then you should have the intensity information of that output image: a list of 0's and 1's, running from the top left corner to the bottom right. If you have that, you already have your location data; it just isn't in (x, y) form yet. To convert it, use something like this:
data = image.getdata()
height = image.getHeight()
width = image.getWidth()
pixelList = []
for i in range(height):
    for j in range(width):
        stride = (width * i) + j
        pixelList.append((j, i, data[stride]))
Here, data is a list of 0's and 1's (B&W), and I assume you have written getWidth() and getHeight(). Don't just copy what I've written; understand what the loops are doing. The result is a list, pixelList, of tuples, each containing location and intensity information in the form (x, y, intensity). That may be a messy form for what you are doing, but that's the idea. It would be much cleaner and more accessible to pass the three values (x, y, intensity) to a Pixel object or something similar instead of making a list of tuples; then you can get any of those values from anywhere. I would encourage you to do that, for better organization and so you can write the code on your own.
In either case, having the intensity and location stored together makes sorting out the white pixels very easy. Here it is using the list of tuples:
whites = []
for pixel in pixelList:
    if pixel[2] == 1:
        whites.append(pixel[0:2])
Then you have a list of white pixel coordinates.
You can use PIL and np.where to get the result efficiently and concisely:
from PIL import Image
import numpy as np
img = Image.open('/your_pic.png')
pixel_mat = np.array(img.getdata())
width = img.size[0]
pixel_ind = np.where((pixel_mat[:, :3] > 0).any(axis=1))[0]
coordinate = np.concatenate(
    [
        (pixel_ind % width).reshape(-1, 1),
        (pixel_ind // width).reshape(-1, 1),
    ],
    axis=1,
)
Pick the required pixels and get their indices, then calculate the coordinates from them. Because it avoids Python-level loops, this approach should be faster.
PIL is only used to get the pixel matrix and the image width; you can replace it with any library you are familiar with.
