I have a map with a scale like this one: (the numbers are just an example)
which describes a single variable on a map. However, I don't have access to the original data and know pretty close to nothing
about image processing. What I have done is use PIL to get the pixel-coordinates and RGB values of each point on the map. Simply using pix = im.load() and saving pix[x,y] for each x,y. Now I would like to guess the value of each point using the gradient above.
Is there a standard formula for such a gradient? Does it look familiar to the trained eye? I have visited the Digital Library of Mathematical Functions for some examples ... but I'm not sure if it's using the hue, an RGB height function or something else (to make things easier I'm also colorblind to some greens/browns/reds) :)
Any tips on how to proceed, libraries, links or ideas are appreciated. Thank you!
edit:
Following the replies and martineau's suggestion, I've tried to catch the colors at the top and bottom:
import colorsys

def rgb2hls(colortup):
    '''converts 255 based RGB to 360 based HLS
    `input`: (222,98,32) tuple'''
    dec_rgb = [x/255.0 for x in colortup]  # use decimal 0.0 - 1.0 notation for RGB
    hsl_col = colorsys.rgb_to_hls(dec_rgb[0], dec_rgb[1], dec_rgb[2])
    # PIL uses hsl(360,x%,y%) notation and throws errors on float, so I use int
    return (int(hsl_col[0]*360), int(hsl_col[1]*100), int(hsl_col[2]*100))
def pil_hsl_string(hsltup):
    '''returns a string PIL can use as an HSL color
    from a tuple (x,y,z) -> "hsl(x,y%,z%)"'''
    return 'hsl(%s,%s%%,%s%%)' % (hsltup[0], hsltup[1], hsltup[2])
BottomRed = (222,98,32) # taken with gimp
TopBlue = (65, 24, 213)
hue_red = pil_hsl_string(rgb2hls(BottomRed))
hue_blue = pil_hsl_string(rgb2hls(TopBlue))
However they come out pretty different ... which makes me worry about using the rgb_to_hls function to extract the values. Or am I doing something very wrong? Here's what the colors convert to with the code:
Interesting question...
If you do a clockwise walk in HSL color-space from 250,85%,85% --> 21,85%,85%, you get a gradient very close to the one you've shown. The obvious difference is that your image exhibits a fairly narrow band of greenish values.
So, if you have the 4 magic numbers (the first and last colour, and the first and last scale value), you can interpolate to any point within the map.
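A minimal sketch of that interpolation in Python, assuming a perfectly linear hue ramp (as the graphs below show, it isn't quite, so treat this as a first approximation; the scale values are placeholders):

import colorsys

HUE_TOP, HUE_BOTTOM = 250.0, 21.0   # first and last colour (H channel)
VAL_TOP, VAL_BOTTOM = 100.0, 0.0    # first and last scale value (placeholders)

def pixel_to_value(r, g, b):
    '''invert the hue ramp to map an (r, g, b) pixel back to a scale value'''
    hue = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)[0] * 360.0
    t = (HUE_TOP - hue) / (HUE_TOP - HUE_BOTTOM)  # fraction along the walk
    return VAL_TOP + t * (VAL_BOTTOM - VAL_TOP)

print(pixel_to_value(222, 98, 32))  # a reddish pixel lands near VAL_BOTTOM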
Here's the image I got with a straight linear gradient on the H channel (used the gimp).
EDIT: I've since whipped up a program to grab the pixel values for each row, graphing the results. You can see that indeed, the Hue isn't linear; you can also see the S & V channels taking a definite dip at around 115 pixels from the top of the image. This corresponds with the green band.
Given the shape of the curves, I'm inclined to think that perhaps they are intended to model something, but I don't have the experience in related fields to recognise the shapes of the curves.
Below, I've added the graphs for the change in both the HSV and RGB models.
The left of the graph represents the top of the bar.
The X-axis labels represent pixels
Quite interesting, me thinks. Bookmarked.
The scale in the image looks like an HSV gradient to me, something like what is mentioned in this question. If so, you could use the colorsys.rgb_to_hls() or colorsys.rgb_to_hsv() functions to obtain a hue color value between 0 and 1 from the r,g,b values in a pixel. That can then be mapped accordingly.
However, short of doing OCR, I have no idea how to determine the range of values being represented unless it's some consistent range you can just hardcode.
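For example, a minimal sketch of that idea, with a hardcoded (and entirely made-up) scale range of 0-50:

import colorsys

def hue_of(pixel):
    '''hue (0.0 - 1.0) of an (r, g, b) pixel, via colorsys'''
    r, g, b = [v / 255.0 for v in pixel[:3]]
    return colorsys.rgb_to_hsv(r, g, b)[0]

# map the hue linearly onto the assumed data range
lo, hi = 0.0, 50.0
value = lo + hue_of((65, 24, 213)) * (hi - lo)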
I would recommend defining an area where you want to compare the colour. Take an FFT of the regions; each colour is defined by a frequency. Do the same on the contour scale, then compare and narrow in on a value.
I have found a link to understand it better:
http://www.imagemagick.org/Usage/fourier/
You can get something like that by varying the hue with a fixed saturation and luminance.
http://en.wikipedia.org/wiki/HSL_and_HSV
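For illustration, a small PIL sketch that renders such a bar by sweeping only the hue (the endpoint hues and the 85% S/L values are assumptions taken from the answer above):

from PIL import Image, ImageDraw

bar = Image.new('RGB', (50, 256))
draw = ImageDraw.Draw(bar)
for y in range(256):
    # walk the hue from 250 down to 21 degrees, fixed S/L of 85%
    hue = int(250 - (250 - 21) * y / 255)
    draw.line([(0, y), (49, y)], fill='hsl(%d,85%%,85%%)' % hue)
bar.save('gradient_bar.png')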
Related
Assuming there are only 2 colors in an image, what's the simplest way in Python to tell which of these 2 colors an image has more of (across a group of similar images)?
Definition of "more": the total area of colored blocks of one color is bigger than the other's. (Please note the shapes of the colored areas might be irregular.)
Thank you.
Okay, after some experimentation, I have a possible solution. You can use Pillow, a common image-loading/handling library, to convert the images to an ndarray, and then use np.unique() with return_counts=True to get your desired results. As a fun side effect, this works with an arbitrary number of colors. Here's full working code that I just tried:
from PIL import Image # because for some reason, that's how you import something from Pillow
import numpy as np
im = Image.open("/path/to/image.png")
arr = np.array(im.getdata())
unique_colors, counts = np.unique(arr.reshape(-1, arr.shape[1]), axis=0, return_counts=True)
Now the unique_colors variable holds the unique colors that appear in your image, and counts holds the corresponding counts for each color in the image; that is to say, counts[i] is the number of times unique_colors[i] appears in the image for any i.
How does the unique + reshaping line work? This is borrowed from this particular answer. Basically, you flatten out your image array such that it has shape (num_pixels, num_channels), which could be 1, 3, or 4 depending on your image format (single-channel, RGB, RGBA, etc.). Now that I have a giant 2D "table" of pixels, I simply find which row values (hence axis=0) are unique, and then use the return_counts keyword to return, well, the counts.
At this point, you have extracted the unique colors and counts of those colors for a single image. To compare multiple images, you would repeat this process on multiple images, find the colors they have in common, and then you can simply compare integers to find out which image has more of a particular color.
For my particular image, the format of the channels happened to be RGBA; in any case, I would recommend printing out arr.shape prior to the reshape step to verify that you have the correct index, since you may have to change the index of arr.shape depending on your image. If you or anyone else knows a more general method to find the channel index of an image obtained in this fashion, I'm all ears. For the record, I tried this on a .png image, like you specified. Hope this helps!
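As a hedged sketch of that comparison step (the file names and colour tuple below are placeholders, and I'm assuming RGBA images):

from PIL import Image
import numpy as np

def color_count(path, color):
    '''number of pixels in the image at path exactly matching color'''
    arr = np.array(Image.open(path).getdata())
    unique_colors, counts = np.unique(arr.reshape(-1, arr.shape[1]),
                                      axis=0, return_counts=True)
    matches = [n for uc, n in zip(unique_colors, counts) if tuple(uc) == color]
    return matches[0] if matches else 0

red = (255, 0, 0, 255)  # placeholder colour, RGBA
if color_count('a.png', red) > color_count('b.png', red):
    print('a.png has more red')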
A chem student asked me for help with an image segmentation and plotting task:
A stationary camera takes a picture of the experimental setup every second over a period of a few minutes, yielding around 300 images.
The relevant parts in the setup are two adjacent layers of differently-colored foams observed from the side, a 2-color sandwich shrinking from both sides, basically, except one of the foams evaporates a bit faster.
I'd like to segment each of the images in the way that would let me plot both foam regions' "width" against time.
Here is a "diagram" :)
I want to go from here --> To here
Ideally, given a few hundred of such shots, in which only the widths change, I get an array of scalars back that I can plot. (Going to look like a harmonic series on either side of the x-axis)
I have a bit of Python and MATLAB experience, but have never used OpenCV or the Image Processing Toolbox in MATLAB, and have actually never dealt with any computer vision in general. Could you throw me a rough roadmap of what packages/functions to use or steps one should take, and I'll take it from there?
I'm not sure how to address these things:
- selecting at which slice along the length of the foam the algorithm measures the width (i.e. if the foams are a bit uneven), although this can be ignored.
- which library to use to segment regions of the image based on their color (some k-means shenanigans, probably), and to selectively store the spatial parameters of the resulting segments.
- how to iterate the above over a number of files.
Thank you kindly in advance!
Assuming your intensities will differ after converting to grayscale (if not, just convert to another color space like HSV or LAB, and use one of its components):
img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
First, threshold your grayscaled input into a few bands:

ret,thresh1 = cv2.threshold(img,128,255,cv2.THRESH_BINARY)
ret,thresh2 = cv2.threshold(img,27,255,cv2.THRESH_BINARY_INV)
ret,thresh3 = cv2.threshold(img,77,255,cv2.THRESH_TRUNC)
ret,thresh4 = cv2.threshold(img,97,255,cv2.THRESH_TOZERO)
ret,thresh5 = cv2.threshold(img,227,255,cv2.THRESH_TOZERO_INV)
The threshold values should be tuned against your actual data; here I'm just giving an example.
Clean up the segmented images using a median filter with a kernel size of 9 or larger, as I do expect some noise. You can also use an ROI here to help remove part of the noise, but personally I'm lazy; I just wrote the program to handle all cases and angles.

thresholded_image_after_smoothing = cv2.medianBlur(thresholded_image, 9)
Each band will correspond to one color (layer). Now you should have N segmented images from one source, where N is the number of layers you wish to track.
Second, use the OpenCV function boundingRect to find the location and width/height of each layer, i.e. call boundingRect on each thresholded_image_after_smoothing.
C++:    Rect boundingRect(InputArray points)
Python: cv2.boundingRect(points) → retval
Last, the rect has x, y, height and width properties. You can use a simple sort to order the layers from top to bottom based on the rect attribute y. Run through the whole video to obtain a height-vs-time graph for each layer.
Rect API
Public Attributes
_Tp **height** // this is what you are looking for
_Tp width
_Tp x
_Tp **y** // this tells you the vertical position of the band
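A rough sketch of these two steps, assuming the cleaned-up band masks from above are collected in a list (all names here are placeholders):

import cv2

bands = []
for mask in segmented_masks:               # the N thresholded, smoothed images
    points = cv2.findNonZero(mask)         # coordinates of all non-zero pixels
    x, y, w, h = cv2.boundingRect(points)  # rect around the band
    bands.append((y, h))                   # vertical position and height
bands.sort()                               # top-to-bottom order by y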
By plotting the corresponding heights (|AB| or |CD|) over time, you can obtain the graph you need.
A more rigorous way is to use a Kalman filter to track the position and height, as I would expect some bubbles to occur and interfere with the measured height of the layers.
To be honest, I didn't expect a chem student to be good at this. Haha, good luck!
If anything goes wrong you can find me here, or email me if I'm not watching Stack Overflow.
You can select a region of interest straight down the middle of the foams, a few pixels wide. If you stack these regions for each image it will show the shrink over time.
If for example you use 3 pixel width for the roi, the result of 300 images will be a 900 pixel wide image, where the left is the start of the experiment and the right is the end. The following image can help you understand:
Though I have not fully tested it, this code should work. Note that there must only be images in the folder you reference.
import cv2
import numpy as np
import os

# path to folder that holds the images
path = '.'

# dimensions of roi
x = 0
y = 0
w = 3
h = 100

# store references to all images
all_images = os.listdir(path)
# sort images
all_images.sort()

# create empty result array
result = np.empty([h,0,3],dtype=np.uint8)

for image in all_images:
    # load image
    img = cv2.imread(path+'/'+image)
    # get the region of interest
    roi = img[y:y+h,x:x+w]
    # add the roi to previous results
    result = np.hstack((result,roi))

# optional: save result as image
# cv2.imwrite('result.png',result)

# display result - can also plot with matplotlib
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Update after question edit:
If the foams have different colors, you can easily separate them by color by converting the image to HSV and using inRange (example). This creates a mask (a 2D array with values from 0-255, one for each pixel) that you can use to calculate the average height and extract the parameters and area of each region.
You can find a script to help you find the HSV colors for separation on this GitHub
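A minimal sketch of that inRange separation (the HSV bounds and file name are placeholders you would pick with such a tool):

import cv2
import numpy as np

img = cv2.imread('frame.png')              # one frame of the experiment
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower = np.array([100, 50, 50])            # placeholder lower HSV bound
upper = np.array([130, 255, 255])          # placeholder upper HSV bound
mask = cv2.inRange(hsv, lower, upper)      # 255 where the foam colour matches
avg_height = cv2.countNonZero(mask) / mask.shape[1]  # mean band height in px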
I have an image image.png and I want to find all clipped pixels. Here is what I have so far:
for i in range(width):
    for j in range(height):
        r, g, b = image.getpixel((i, j))
        if ...:  # I don't know what should be the condition here
            pass  # do something
I use Python, Tkinter, PIL.
Thanks
If by 'clipped' you mean saturated, then you probably want to create a threshold based on the intensity of the pixel. There are a few equations that try to determine this, but I would recommend one of the grayscale equations. Looking at the luma equation used in ATSC (the Rec. 709 coefficients):

I = 0.2126*r + 0.7152*g + 0.0722*b

Then just figure out what range of values of I you consider 'clipped'.
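For example, a hedged sketch plugged into the question's PIL loop, with an assumed cutoff of 250:

THRESHOLD = 250  # assumed cutoff; tune for your images

for i in range(width):
    for j in range(height):
        r, g, b = image.getpixel((i, j))
        intensity = 0.2126*r + 0.7152*g + 0.0722*b
        if intensity >= THRESHOLD:
            pass  # pixel is considered clipped; handle it here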
I have a list of 7130 values ranging from 0 to 1; they represent a "heat value" on a city map at corresponding GPS coordinates, and I can extract the coordinates from a data file. Now I want to use Python to generate a heat map like the picture shown here. Does anyone know how to do that? Thank you very much.
There are a lot of details that would go into answering this question.
The first would be, what have you got so far?
Do you have a map image?
Have you plotted the gps coordinates to coincide with the pixel positions in your image?
Do you have a Look Up Table to correspond with your different "temp" values?
Are you looking to create a static output image, or do you want it to update dynamically? (This may determine the method you use to generate the heat-map overlay.)
Once you have those details ironed out it should be fairly simple to generate a heat map like the one above using any of the various imaging libraries available in Python (PIL/OpenCV).
This is a rough (overly simplified) outline of how I would generate the heat map from the initial data using OpenCV:
I would start with two images: the map image, and a zero-valued (black) image of the same size. You can then add the appropriate values to all three color channels of your blank image at the gps/pixel locations, which will give you a 3-channeled gray image. (So if the value is 0.25, you set R, G and B each to 0.25.)
Then, apply a gaussian blur with a large kernel size; appropriate to the amount of blending you want between points and the size of your image.
You will likely need to multiply your blurred image by some factor (depending on kernel size) after blurring to brighten the colors.
Next apply your Look Up Table to your Blurred mapped values:
Then you could merge the two images into one output image using any number of combinations (add, multiply, addWeighted(), etc). Or if you want to involve an alpha channel for a cleaner overlay you could use the method described here.
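Putting that outline together, a rough OpenCV sketch (the kernel size, brightening factor, colormap choice and the points variable are all assumptions, not a definitive recipe):

import cv2
import numpy as np

city = cv2.imread('map.png')                       # the map image
heat = np.zeros(city.shape[:2], dtype=np.float32)  # zero-valued (black) image

# points is a placeholder: (pixel_x, pixel_y, value) triples from your data
for px, py, value in points:
    heat[py, px] = value                           # values in 0.0 - 1.0

heat = cv2.GaussianBlur(heat, (101, 101), 0)       # large kernel for blending
heat = np.clip(heat * 3.0, 0.0, 1.0)               # brighten after the blur
colored = cv2.applyColorMap((heat * 255).astype(np.uint8), cv2.COLORMAP_JET)
overlay = cv2.addWeighted(city, 0.6, colored, 0.4, 0)  # merge the two images
cv2.imwrite('heatmap.png', overlay)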
Here is a great library: HEATMAP.
Code snippet:
import heatmap
import random

pts = []
for x in range(400):
    pts.append((random.random(), random.random()))

print("Processing %d points..." % len(pts))

hm = heatmap.Heatmap()
img = hm.heatmap(pts)
img.save("classic.png")
Result image
If you use Python, there is a very short way to do it. You need to install just one library, gmaps, and you need a Google Maps API key to use it.
Link of the library: https://jupyter-gmaps.readthedocs.io/en/stable/tutorial.html#weighted-heatmaps
Here is an example:
import gmaps
import gmaps.datasets
from ipywidgets.embed import embed_minimal_html
import pandas as pd
columns = ["latitude","longitude","magnitude"]
a = []
for i in your_arr:
a.append([your_arr[0], your_arr[1], your_arr[2]])
df = pd.DataFrame(a,columns=columns)
gmaps.configure(api_key="YOUR_GOOGLE_MAPS_API_KEY_HERE")
fig = gmaps.figure()
heatmap_layer = gmaps.heatmap_layer(
df[['latitude', 'longitude']], weights=df['magnitude'],max_intensity=210,point_radius=30
)
fig.add_layer(heatmap_layer)
embed_minimal_html('export.html', views=[fig])
Here is how it looks:
After that you can open export.html in your browser and can see your heatmap easily :)
I'm writing a program that does basic image processing.
Keep in mind that the images are in grayscale, not RGB. Also, I'm fairly new to Python, so an explanation of what I'm doing wrong/right would be incredibly helpful.
I'm trying to write an outline algorithm that follows this set of rules:
All light pixels in the original must be white in the outline image.
All dark pixels on the edges of the image must be black in the outline image.
If a pixel that is not on an edge of the image is dark and all of the 8 surrounding pixels are dark, this pixel is on the inside of a shape and must be white in the outline image.
All other dark pixels must be black in the outline image.
So far I have this:
def outlines(image):
    """
    Finds the outlines of shapes in an image. The parameter must be
    a two-dimensional list of pixels. The return value is another
    two-dimensional list of pixels which describes an image showing
    outlines of the shapes in the original image. Each pixel in the
    return value will be either black (0) or white (255).
    """
    height = len(image)
    width = len(image[0])
    new_image = []
    for r in range(height):
        new_row = []
        for c in range(width):
            if image[r][c] > 128:
                new_row.append(255)
            if image[r][c] <= 128:
                new_row.append(0)
        new_image.append(new_row)
    return new_image
Can someone show me how to implement the algorithm in my outlines function?
Thanks in advance.
Edit: This is an assignment for my University Comp Sci class. I'm not asking for someone to do my homework; rather, I've virtually no idea what the next step is.
Edit2: If someone could explain to me a simple edge detection function that is similar to the algorithm I need to create I would appreciate it.
In addition to checking whether your pixel is dark or light, you should also check, when it is dark, whether all of the pixels around it are also dark, and in that case make that point white instead.
Check this function and try to use it for that purpose:
def all_are_dark_around(image, r, c):
    # range gives the offsets [-1, 0, 1]
    # you could use the list directly; probably better for this specific case
    for i in range(-1, 2):
        for j in range(-1, 2):
            # if the pixel is clear, return False.
            # note that image[r+0][c+0] is always dark by definition
            if image[r+i][c+j] <= 128:
                return False
    # the loop finished -> all pixels in the 3x3 square were dark
    return True
Advice:
- Note that you should never check image[r][c] when r or c is on the border, i.e. equal to 0 or to height-1 or width-1. In that case there is at least one side with no adjacent pixel to look at in the image, and you will get an IndexError.
- Don't expect this code to work directly or to be the best code in terms of efficiency or style. This is a hint for your homework; you should do it yourself. Look at the code, take your time (optimally, as long as or longer than it took me to write the function), understand how it works, and adapt it to your code, fixing any exceptions and border situations you encounter along the way.
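For example, the border check from the first point might look like this (just a sketch to adapt, not the finished function):

# inside the double loop over r and c, before the 3x3 check:
if 0 < r < height - 1 and 0 < c < width - 1:
    is_inside = all_are_dark_around(image, r, c)
else:
    is_inside = False  # border pixels never count as "inside" a shape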