Make a colorbar for RGB data? - python

I've got data built from 3 channels, which are read in as RGB. This means the colorbar would have to cycle through all 3 colors to show every possible shade. Is there a simple way of doing this?
Here is an example. Red is left-enhanced, blue right-enhanced and green centrally enhanced (it is looking at spectral features). This means that red+blue (= purple) would be right- and centrally-enhanced and weak on the left, etc.
I need a way to show that properly with a colorbar of sorts.

I'm not sure I understood your expected result. I'm providing a tentative answer anyway, so that you can point me in the right direction.
This is an example colorbar made with numpy arrays:
The code I used to generate it is the following:
import numpy as np
import cv2

# Initialize an array that matches OpenCV's ranges for HSV images:
# hue: 0-179 (OpenCV halves the 0-360° cylinder to fit in uint8),
#      multiplied by 10 here to "stretch" the bar horizontally
# saturation: fixed at 254
# value: 0-254
bar = np.empty((255, 1800, 3), dtype="uint8")
for x in range(1800):
    for y in range(255):
        bar[y, x, 0] = x // 10  # hue
        bar[y, x, 1] = 254      # saturation
        bar[y, x, 2] = y        # value

# Convert to BGR (OpenCV's standard channel order, instead of RGB)
bgr = cv2.cvtColor(bar, cv2.COLOR_HSV2BGR)
cv2.imshow('Colorbar', bgr)
cv2.waitKey()

Related

Plotting HSV channel histograms from a BGR image Opencv

I have B,G,R histograms that look like the following:
Image Histogram for B channel of an image
Description: on the X axis are the values 0-255 that each pixel can take, and on the Y axis the number of pixels that have that particular X value.
My code for the same is:
import cv2
from matplotlib import pyplot as plt

hist1 = cv2.calcHist([image], [0], None, [256], [0, 256])
plt.plot(hist1, color='b')
plt.xlabel("Value (blue)")
plt.ylabel("Number of pixels")
plt.title('Image Histogram For Blue Channel')
plt.show()
My question is that I need to get the same plot (X axis with values, Y axis with number of pixels) for the HSV channels. Basically, instead of the B, G and R plots, I need the same histograms for H, S and V.
I got the following code:
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = img2[:,:,0], img2[:,:,1], img2[:,:,2]
hist_h = cv2.calcHist([h],[0],None,[256],[0,256])
#hist_s = cv2.calcHist([s],[0],None,[256],[0,256])
#hist_v = cv2.calcHist([v],[0],None,[256],[0,256])
plt.plot(hist_h, color='r', label="hue")
Which gives me the following plot: Hue plot for an image
But from what I've read so far, BGR and HSV are different color spaces. So I want to verify: when I use calcHist after converting to HSV and splitting into three channels, are those channels actually H, S and V by default? It's not that they're really still B, G and R, just mislabelled as H, S and V? I just want to confirm that both methods work the same way in practice, even though BGR and HSV are different color spaces.
Edit: Here's the source image
Image
Most likely you have a synthetic image with nothing in the red and green channels and some random data centred on 128 in the blue channel.
When you go to HSV colourspace, all the hues are centred on 110 which corresponds to 220 degrees which is blue in the regular 0..360 HSV range. Remember OpenCV uses a range of 0..180 for Hue when using uint8 so that it fits in uint8's range of 0..255. So you effectively need to multiply the 110 you see in your Hue histogram by 2... making 220 which is blue.
See bottom part of this figure.
As you seem so uncertain of your plotting, I made histograms of the HSV channels for you below. I used a different tool to generate them, but don't let that bother you - in fact confirmation from a different tool is always a good sign.
First, here are the Hue (left), Saturation (centre) and Value (right) channels side-by-side:
Now the Hue histogram:
This tells you all the hues in the image are similar - i.e. various shades of blue.
Now the Saturation histogram:
This tells you that the colours in the image are generally low-to-medium saturated with no really vivid colours.
And finally, the Value histogram:
This tells you the image is generally mid-brightness, with no dark shadows and a small peak of brighter areas on the right of the histogram corresponding to where the white parts are in the original.

How to downscale an image without losing discrete values?

I have an image of a city with discrete colors (Green=meadow, black=buildings, white/yellow=roads). Using Pillow, I import the picture in my (Python) program and convert it to a Numpy array with discrete values for the colors (i.e. green pixels become 1's, black pixels become 2's, etc).
I want to downscale the resolution of the image (for computational purposes) while retaining as much information as possible. However, using Pillow's resize() method, colors deviate from these discrete values. How can I downscale this image while (most importantly) retaining the discrete colors and (also important) with losing as little information as possible?
Here an example of the image: https://i.imgur.com/6Tef55H.png
EDIT: per request, some code:
from PIL import Image
import numpy as np

picture = Image.open('some_image.png')
width, height = picture.size
pic_array = np.zeros((width, height))

# Turn the image into discrete values
for i in range(0, width):
    for j in range(0, height):
        red, green, blue = picture.getpixel((i, j))
        if red == a and green == b and blue == c:
            # An example of how discrete colors are converted to values
            pic_array[i][j] = 1
Scaling can be done in two ways:
1) scaling the original image with Pillow's resize() method, or
2) rescaling the final array using something like:
scaled_array = pic_array[0:width:5, 0:height:5]
Option 1 does well in terms of retaining information but loses the discrete values, while option 2 does it the other way around.
I was interested in this question and wrote some code to try out some ideas, specifically the "mode" filter suggested by @jasonharper in the comments. So, I programmed it up.
First of all, the input image doesn't contain 4 nicely defined classes, but actually has 6,504 different colours, so I made a palette of 4 colours using ImageMagick like this:
magick xc:black xc:white xc:yellow xc:green +append palette.png
Here it is enlarged - in reality is 4x1 pixels:
Then I mapped the colours in the image to the palette of 4 discrete colours:
magick map.png +dither -remap palette.png start.png
Then I tried this code to calculate the median and the mode of each 3x3 window:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
from scipy import stats
from skimage.util import view_as_blocks
# Open image and make into Numpy array
im = Image.open('start.png')
na = np.array(im)
# Make a view as 3x3 blocks - crop anything not a multiple of 3
block_shape=(3,3)
view = view_as_blocks(na[:747,:], block_shape)
flatView = view.reshape(view.shape[0], view.shape[1], -1) # now (249,303,9)
# Get median of each 3x3 block
resMedian = np.median(flatView, axis=2).astype(np.uint8)
Image.fromarray(resMedian*60).save('resMedian.png') # arbitrary scaling by 60 for contrast
# Get mode of each 3x3 block
resMode = stats.mode(flatView, axis=2)[0].reshape((249,303)).astype(np.uint8)
Image.fromarray(resMode*60).save('resMode.png') # arbitrary scaling by 60 for contrast
Here is the result of the median filter:
And here is the result of the "mode" filter which is indeed better IMHO:
Here is animated comparison:
If anyone wants to take the code and adapt it to try new ideas, please feel free!

Get pixel location of binary image with intensity 255 in python opencv

I want to get the pixel coordinates of the blue dots in an image.
To get them, I first converted the image to grayscale and used the threshold function.
import numpy as np
import cv2
img = cv2.imread("dot.jpg")
img_g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret1,th1 = cv2.threshold(img_g,127,255,cv2.THRESH_BINARY_INV)
What to do next if I want to get the pixel location with intensity 255? Please tell if there is some simpler method to do the same.
I don't think this is going to work as you would expect.
Usually, to get stable tracking of a shape with a specific colour, you work in an RGB/HSV/HSL plane; HSV is a good starting point because it is more robust to lighting changes.
1. Convert to HSV using cv2.cvtColor().
2. Use cv2.inRange(hsv, blue_lower, blue_upper) to filter out all unwanted colours.
Now you have a clean binary image with only the blue colour in it (assuming a static background; otherwise more filters should be added).
3. To detect the dots (which are usually more than one pixel each), try cv2.findContours().
4. You can get the x, y pixel coordinates of each contour in several ways, depending on the shape of what you want to detect, for example cv2.boundingRect().

Changing a color range in an image to a different color texture

I successfully changed a range of colors in an image to a single other color. I would like to make it more realistic by changing the colors to match the distribution of a swatch or, at the very least, a narrow band of random hues.
Some brown-looking grass
changed to very artificial green-looking grass
My code:
import cv2 as cv
import os
import numpy as np
import random
# Load the image and convert to HSV colourspace
image = cv.imread('brown_grass.jpg')
hsv=cv.cvtColor(image,cv.COLOR_BGR2HSV)
# Define lower and upper limits of what we call "brown"
brown_lo=np.array([18,0,0])
brown_hi=np.array([28,255,255])
# Mask image to only select browns
mask=cv.inRange(hsv,brown_lo,brown_hi)
# Change image to green where we found brown
image[mask>0]=(74,183,125) # how do change it to a nice realistic texture swatch?
cv.imwrite('result.jpg',image)
(Thanks to I want to change the colors in image with python from specific color range to another color for the first part of the solution)
Thanks to @MarkSetchell for the idea; the way to shade the masked area more naturally is to exploit the fact that brown and green are adjacent on the HSV colour wheel, and add the delta between them, plus a boost for saturation and value.
hsv[mask>0]=hsv[mask>0]+(25,30,10) # add an HSV vector
image2=cv.cvtColor(hsv,cv.COLOR_HSV2BGR) # convert back to BGR colour space
cv.imwrite('result.jpg', image2)
Exaggerating the effect here:

Colouring an image Pygame

I'd like to create a game, similar to that of Geometry Dash. I have all the images for the cubes, but they are all grey and white - this is to allow the user to select the colours.
I have two variables, colour_1 and colour_2. colour_1 should replace the grey, and colour_2 the white. Given values for these variables, how would I modify the image to have the right colours?
The colours on the images are not all the same, the edges blend, so that the image is smoother. This may cause complications.
I found this on the website Fishstick proposed. Here's working code based on it:
import numpy
import pygame

img_surface = pygame.image.load("image.gif")       # Load image onto a surface
img_array = pygame.surfarray.array3d(img_surface)  # Convert it into a 3D array
colored_img = numpy.array(img_array)               # Copy the array
colored_img[:, :, 0] = 255  # <-- Red
colored_img[:, :, 1] = 128  # <-- Green
colored_img[:, :, 2] = 0    # <-- Blue
img_surface = pygame.surfarray.make_surface(colored_img)  # Convert back to a surface
screen.blit(img_surface, (0, 0))  # Draw it on the screen
This sets the colour value of each pixel: assigning 255 to the red channel makes every pixel's red component 255 (it replaces the value rather than adding to it). If you set all three channels to 255, the image will be white.
Note that you need to install NumPy to use this and you can do so by doing this:
pip install numpy
Also, you could try replacing the last two lines with
pygame.surfarray.blit_array(screen, colored_img)
which should be faster, but it didn't work for me, so I converted the array into a surface and then blitted it onto the screen.
If that doesn't answer your question maybe these will:
Pygame - Recolor pixes of a certain color to another using SurfArray (Array slicing issue)
https://gamedev.stackexchange.com/questions/26550/how-can-a-pygame-image-be-colored
Pygame Surfarray Module Documentation
http://www.pygame.org/docs/ref/surfarray.html#pygame.surfarray.blit_array
Write classes and instantiate the objects with color code variables.
You can write a method that will draw the shape/image in combination with state-specific data.
