I have a bunch of (x, y) points. Corresponding to these points there is an associated value (within a specific range). These values can be used to color them (using some color map).
If we now use these (x, y) positions and colors we can produce a pixelated image (with a colored pixel at each (x, y)). Is there an easy way to fill the pixels in between so that I get a sensible gradient of colors?
TLDR: How do you fill pixels within a boundary when you know the values at nearby pixels?
What makes this difficult for me is that I have many more pixels to fill than already-colored pixels.
Also, can I control the gradient type (linear, quadratic, etc.)?
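One possible direction (a sketch, not the only approach) is scattered-data interpolation with scipy.interpolate.griddata. Here pts and vals are placeholder names for your point coordinates and their values; the method argument ('nearest', 'linear', or 'cubic') is the closest thing to controlling the gradient type:

```python
import numpy as np
from scipy.interpolate import griddata

# Placeholder data: pts is an (N, 2) array of (x, y) positions,
# vals is an (N,) array of the associated values.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
vals = np.sin(pts[:, 0] / 10) + np.cos(pts[:, 1] / 10)

# Dense grid covering the bounding box of the points.
grid_x, grid_y = np.mgrid[0:100:500j, 0:100:500j]

# method='linear' or 'cubic' controls the interpolation between points;
# outside the convex hull of pts these methods return NaN,
# so 'nearest' can be used as a fallback there.
filled = griddata(pts, vals, (grid_x, grid_y), method='cubic')
```

The resulting filled array can then be displayed with imshow and a colormap, which handles the value-to-color mapping.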
I am using imshow() to create pseudo-coloured maps of matrices of values (2D numpy arrays). Using the clim argument one can set the range of values (1 and 6 in the example below) to be represented within the colour scale. This way, for example, all outliers (whether 7 or 7000000) will be yellow, which skews the perception of the image: the reader doesn't know whether a given pixel is 6 or 7000000.
Does anyone know of any way to colour all values outside of this range some other fixed colour of choice, for example, magenta?
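A sketch of one matplotlib approach (the data here is a random placeholder): copy the colormap and give it explicit over/under colors, which imshow then uses for values outside the clim range:

```python
import numpy as np
import matplotlib.pyplot as plt

data = np.random.uniform(0, 10, (50, 50))  # placeholder matrix

# Copy a colormap and set fixed colors for out-of-range values.
cmap = plt.get_cmap('viridis').copy()
cmap.set_over('magenta')  # values above clim[1]
cmap.set_under('black')   # values below clim[0]

im = plt.imshow(data, cmap=cmap, clim=(1, 6))
plt.colorbar(im, extend='both')  # show the over/under colors on the bar
plt.show()
```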
I have a list of (x, y) points that constitute several circles with different centers; they all have the same diameter (which is known).
I need to detect the total number of circles (it is not necessary to determine their parameters). Is there a simple way to do that in Python (preferably without OpenCV)?
If all circles have the same size and do not intersect, you can simply scan the picture line by line, pixel by pixel.
When you meet a pixel of circle color, apply a flood-fill algorithm from that point and mark all connected pixels of the same color with the same integer value (1 for the first circle, and so on).
When the scan is finished, the last value used is the number of objects.
Alternatively, you can use a connected-component labelling algorithm.
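A minimal sketch of the connected-component route using scipy.ndimage.label; the points list here is toy placeholder input:

```python
import numpy as np
from scipy import ndimage

# Toy input: list of (x, y) pixels belonging to the circles.
points = [(10, 10), (10, 11), (11, 10), (40, 40), (40, 41), (41, 40)]

# Rasterize the points into a boolean mask.
mask = np.zeros((64, 64), dtype=bool)
for x, y in points:
    mask[y, x] = True

# Each connected blob of True pixels gets its own integer label,
# so the label count is the circle count (assuming circles don't touch).
labels, num_circles = ndimage.label(mask)
print(num_circles)  # -> 2 for this toy input
```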
I'm looking for a way to split a number of images into proper rectangles. These rectangles are ideally shaped such that each of them takes on the largest possible size without containing a lot of white.
So let's say that we have the following image
I would like to get an output such as this:
Note the overlapping rectangles, the hole and the non-axis-aligned rectangle; all of these are likely scenarios I have to deal with.
I'm aiming to get the coordinates describing the corner pieces of the rectangles so something like
[[(73,13),(269,13),(269,47),(73,47)],
[(73,13),(73,210),(109,210),(109,13)]
...]
In order to do this I have already looked at cv2.findContours, but I couldn't get it to work with overlapping rectangles (though I could use the hierarchy model to deal with holes, as holes cause the contours to be merged into one).
Note that although not shown holes can be nested.
An algorithm that works roughly as follows should be able to give you the result you seek:

1. Get all the corner points in the image.
2. Randomly select 3 points to create a rectangle.
3. Count the ratio of yellow pixels within the rectangle; accept the rectangle if the ratio satisfies a threshold.
4. Repeat steps 2 and 3 until:
   a) every single combination of points is exhausted, or
   b) all yellow pixels are accounted for, or
   c) n iterations have passed.
The difficult part of this algorithm lies in step 2, creating a rectangle from 3 points.
If all the rectangles were axis-aligned, you could simply take the minimum x and y as the top-left corner and the maximum x and y as the bottom-right corner of your new rectangle.
But since you have off-axis rectangles, you will need to check whether the two vectors formed from the 3 points have a 90-degree angle between them before generating the rectangle.
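A small sketch of that check, assuming p1 plays the role of the right-angle corner (in practice you would try each of the 3 points in that role; all names are illustrative):

```python
import numpy as np

def rectangle_from_three_points(p1, p2, p3, tol=1e-6):
    """Return the 4 rectangle corners if p1 is a right-angle corner,
    otherwise None. Points are (x, y) pairs."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    v1, v2 = p2 - p1, p3 - p1
    # Edge vectors must be perpendicular: dot product close to zero.
    if abs(np.dot(v1, v2)) > tol * np.linalg.norm(v1) * np.linalg.norm(v2):
        return None
    p4 = p2 + v2  # the missing fourth corner
    return [tuple(p1), tuple(p2), tuple(p4), tuple(p3)]
```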
Current state
I have a numpy array of shape (900, 1800, 3) that has been made from an image file.
That's one array element per pixel: 900 px high, 1800 px wide, and 3 channels (R, G, B) per pixel represented in the array.
There are only a small number (3-20) of unique RGB colors in the images being parsed, so only very few different RGB value combinations are represented in the array.
Goal
Identify the smallest circular areas in the image that contain n unique colors, where n will always be less than or equal to the number of unique colors in the image.
Return the top y smallest areas (by count or percentage).
A 'result' could simply be the x,y value of the center pixel of an identified circular area and its radius.
I do plan to draw a circle around each area, but this question is about the best approach for first identifying the top smallest areas.
The Catch/Caveat
The images are actually flattened projections of spheres. That means that a pixel at the right edge of the image is actually adjacent to a pixel on the left edge, and similarly for top and bottom pixels. The solution must account for this as it is parsing pixels to identify closest pixels with other colors. EDIT: this part may be answered in comments below
The Question
My initial approach is to simply parse pixel by pixel and brute force the problem with handrolled x/y coordinate math: take a pixel, work outwards until we hit n colors, score that pixel for how many steps outward it took, next pixel. Keep a top y dict that gets re-evaluated after each pixel, adding any pixels that make top y, and dumping any that get pushed out. Return that dict as the output.
I know that many Python libraries like scipy, scikit-image, and others like to work with images as numpy arrays. I'm sure there is a smarter method/approach that leverages a library or some kind of clustering algorithm instead of brute-forcing it, but I'm not familiar enough with the space to know intuitively which methods and libraries to consider. The question: what is the pseudocode for a good method/library to do this the right way?
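One hedged, library-based sketch (a direction under stated assumptions, not a definitive implementation): compute a Euclidean distance transform per unique color with scipy.ndimage.distance_transform_edt, so every pixel knows its distance to the nearest pixel of each color; the radius needed at a pixel to see n colors is then the n-th smallest of those distances. Horizontal wraparound is crudely handled by tiling the image; top/bottom adjacency is ignored here for simplicity:

```python
import numpy as np
from scipy import ndimage

def n_color_radius(img, n):
    """For each pixel, estimate the radius needed to reach n unique colors.
    img: (H, W, 3) uint8 array. Returns an (H, W) float array of radii.
    Horizontal wraparound is approximated by tiling the image three times
    along x and keeping the central copy; top/bottom wrap is ignored."""
    h, w, _ = img.shape
    colors = np.unique(img.reshape(-1, 3), axis=0)
    tiled = np.concatenate([img, img, img], axis=1)
    dists = []
    for c in colors:
        mask = np.all(tiled == c, axis=-1)
        # Distance from every pixel to the nearest pixel of color c.
        d = ndimage.distance_transform_edt(~mask)
        dists.append(d[:, w:2 * w])  # keep the central copy
    dists = np.sort(np.stack(dists), axis=0)  # (n_colors, H, W)
    return dists[n - 1]  # n-th nearest color distance per pixel

# Usage sketch: find the y pixels with the smallest radii.
# radii = n_color_radius(img, n)
# flat = np.argpartition(radii.ravel(), y)[:y]
# centers = np.unravel_index(flat, radii.shape)  # (rows, cols)
```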
Is it possible that the pixel values of an image change after the image is rotated? I rotated an image by 13 degrees, picked a random pixel value X before the rotation, then brute-force searched the rotated image and could not find a pixel with the same value as X. So is it possible for pixel values to change after an image is rotated? I rotate with the OpenCV library in Python.
Any help would be appreciated.
Yes, it is possible for the initial pixel value not to be found in the transformed image.
To understand why this would happen, remember that pixels are not infinitely small dots, but they are rectangles with horizontal and vertical sides, with small but non-zero width and height.
After a 13-degree rotation, these rectangles (which have a constant color inside) will no longer have horizontal and vertical sides.
Therefore an approximation needs to be made in order to represent the rotated image using pixels of constant color, with sides horizontal and vertical.
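A small OpenCV sketch illustrating the effect (random placeholder image): with bilinear interpolation, the default, blended values that never existed in the input can appear, while nearest-neighbor interpolation only copies existing pixel values:

```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)  # placeholder
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), 13, 1.0)  # 13-degree rotation

# Bilinear interpolation (the default) blends neighbors: new values can
# appear and original values can disappear.
rot_linear = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)

# Nearest-neighbor only copies existing pixels: every output value
# already existed somewhere in the input.
rot_nearest = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_NEAREST)
```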
If you rotate by an exact multiple of 90 degrees in the same image plane, the pixel values will remain the same: the grid maps onto itself, so no interpolation is needed.