I'm trying to figure out how to do a pixel (color) search from the center outwards, in a spiral shape, rather than the usual left-to-right scan.
So far I've made some simple x-y searches using standard PIL, but it was too slow, as the result always seems to be closer to the center (of the image) in my case. The thing is that it's not a square image, so there isn't a single center pixel: the center can fall between two (or more) pixels, and this is where I lose it. Can you peeps give me some hints? I'm always working from a screenshot via PIL's ImageGrab.grab(), using image.size to get the image size and px = image.getpixel((x, y)) to read the current pixel.
I'm working with R,G,B-colours: if px[0] == r and px[1] == g and px[2] == b:
See this answer for a bunch of different algorithms, written in Python, for iterating over a matrix in a spiral fashion.
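For the "two-pixel center" worry: a square spiral started from any one of the central pixels covers its neighbours within the first few steps, so just pick one with integer division. A minimal sketch, assuming you want an exact R,G,B match on a fresh screenshot (find_color_spiral and its parameters are made up for illustration):

import cv2  # not needed here; PIL only
from itertools import islice
from PIL import ImageGrab

def spiral_offsets():
    # yield (dx, dy) offsets walking outward in a square spiral from (0, 0)
    x = y = 0
    dx, dy = 0, -1
    while True:
        yield x, y
        # turn at the spiral's corner points
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx
        x, y = x + dx, y + dy

def find_color_spiral(target_rgb, max_steps=500_000):
    img = ImageGrab.grab()
    w, h = img.size
    cx, cy = w // 2, h // 2   # one of the central pixels
    px = img.load()           # much faster than calling getpixel() in a loop
    for dx, dy in islice(spiral_offsets(), max_steps):
        x, y = cx + dx, cy + dy
        if 0 <= x < w and 0 <= y < h and px[x, y][:3] == target_rgb:
            return x, y
    return None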
I have a .dxf file containing a drawing (template), which is just a piece with holes. From this drawing I successfully extract the coordinates of the holes and their diameters, given as a list [[x1,y1,d1],[x2,y2,d2]...[xn,yn,dn]].
After this, I take a picture of the piece (same as the template) and, after some image processing, I obtain the coordinates of the detected holes and their contours. However, the piece in the picture can be rotated with respect to the template.
How do I establish the right hole correspondence (between the hole coordinates in the template and the rotated hole coordinates in the image), so I know which diameter belongs to each hole in the image?
Is there a point-sorting method that can give me this correspondence?
I'm working with Python and OpenCV.
All answers will be highly appreciated. Thanks!!!
Image of Template: https://ibb.co/VVpWmKx
In the template image, the contours are drawn at the same size as given in the .dxf file, which differs from the size (in pixels) of the contours of the piece taken from the camera.
Processed image taken from the camera, contours of the piece are shown: https://ibb.co/3rjCg5F
I've tried OpenCV's feature-matching functions (the ORB algorithm) to get the angle by which the piece in the picture is rotated with respect to the template, but I still cannot get this rotation angle. How can I get the rotation angle with image descriptors? Is this the best approach for this problem, or are there better methods to address it?
Considering the image of the extracted contours, you might not need something as heavy as OpenCV's feature-matching machinery. One approach is to take the outermost contour of the piece and get its cv::minAreaRect; the resulting rotated rectangle gives you the angle. You then have to decide whether the symmetry matches, because the piece might be flipped. That can be done in many ways; one of the simplest (assuming the scales match) is to take the outermost contour again, fill it, and count the percentage of points that overlap with the template. The orientation with the right symmetry should match in almost all points, given that the scale of the matched piece and the template are the same.
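A rough sketch of that in Python (hypothetical names: template_bin and photo_bin are assumed to be binary images with the piece in white on black; OpenCV 4.x API):

import cv2

def outer_angle(binary):
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    outer = max(contours, key=cv2.contourArea)   # outermost contour of the piece
    _, _, angle = cv2.minAreaRect(outer)         # ((cx, cy), (w, h), angle)
    return angle

rotation = outer_angle(photo_bin) - outer_angle(template_bin)
# minAreaRect's angle is only defined up to 90-degree steps, so test the
# candidate rotations (and a possible flip) with the overlap check above.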
You should use Hu moments, which give a translation-, scale- and rotation-invariant descriptor for matching.
Hu moments are described at https://en.wikipedia.org/wiki/Image_moment and are implemented in OpenCV.
You can dig up the theory of moment invariants on the wiki page pretty easily.
To use them you can simply call:
// Calculate image moments
cv::Moments m = cv::moments(im, false);
// Calculate the seven Hu moments from them
double huMoments[7];
cv::HuMoments(m, huMoments);
Sample Hu moments look like this:
h[0] = 0.00162663
h[1] = 3.11619e-07
h[2] = 3.61005e-10
h[3] = 1.44485e-10
h[4] = -2.55279e-20
h[5] = -7.57625e-14
h[6] = 2.09098e-20
The raw moments usually span a huge dynamic range, so they are typically paired with a log transform to bring them to a comparable scale for matching. Judging by the values below, the transform is H[i] = -sign(h[i]) * log10(|h[i]|):
H[0] = 2.78871
H[1] = 6.50638
H[2] = 9.44249
H[3] = 9.84018
H[4] = -19.593
H[5] = -13.1205
H[6] = 19.6797
BTW, you might need to pad the template to extract the edge contour
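In Python the same computation might look like this (a sketch, assuming im is a single-channel image of one shape; note that cv2.matchShapes wraps this comparison for you):

import cv2
import numpy as np

m = cv2.moments(im)                 # image moments, as in the C++ snippet
hu = cv2.HuMoments(m).flatten()     # the seven Hu moments

# log transform to compress the dynamic range while preserving the sign
hu_log = -np.sign(hu) * np.log10(np.abs(hu))

# OpenCV can also compare two contours' Hu moments directly:
# score = cv2.matchShapes(contour_a, contour_b, cv2.CONTOURS_MATCH_I1, 0)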
This may be called a "Region of Interest"; I'm not exactly sure. But what I'd like to do is rather easy to explain.
I have a photo that I need to align to a grid.
https://snag.gy/YaAWdg.jpg
For starters, the little text that says "here" must be 151px from the top of the screen.
Next, the distance from "here" to position 8 of the chin must be 631px.
Finally, a straight line must run down the middle of the picture, through line 28 on the nose.
If I'm not making sense, please tell me to elaborate.
My only idea so far (in pseudocode) is to loop until the requirements are met with a resize function, a lot like brute forcing, but that's all I can think of. I.e.:
while top.y != 151:
    img.top -= 1        # move the image up one pixel until "here" reaches y = 151
while dist(top, eight) != 631:
    resize += 1         # resize by 1 pixel until the 631px distance is reached
# center the nose
image.center_x = nose.x
Consider switching the order of your operations to prevent the need to iterate. A little bit of math and a change of perspective should do the trick:
1.) Resize the image such that the distance from "here" to the chin is 631px.
2.) Use a region of interest to crop your image so that "here" is 151px from the top of the screen.
3.) Draw your line.
EDIT:
The affine transform in OpenCV would work to morph your image into the proper form, assuming that you have all the proper constraints defined.
If all you need to do is a simple scale, first calculate the distance between the two points, using something like this:
cv::Point2f a(10, 10);
cv::Point2f b(100, 100);

float euclideanDist(const cv::Point2f& p, const cv::Point2f& q) {
    cv::Point2f diff = p - q;
    return std::sqrt(diff.x * diff.x + diff.y * diff.y);
}
Then create a scale factor to resize your image:
float scaleFactor = 631.0f / euclideanDist(a, b);   // target distance over measured distance
cv::resize(input, output, cv::Size(), scaleFactor, scaleFactor, cv::INTER_LINEAR);
Using the same scaleFactor for both arguments gives uniform scaling in X and Y; using two different scale factors scales X and Y independently.
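A Python/OpenCV sketch of the same idea (the landmark coordinates are made up; here_pt and chin_pt would come from wherever you locate the two points, and img is assumed to be the photo as a NumPy array):

import cv2
import numpy as np

here_pt = np.array([320.0, 151.0])   # hypothetical landmark positions
chin_pt = np.array([320.0, 700.0])

scale = 631.0 / np.linalg.norm(chin_pt - here_pt)   # make the distance 631 px
resized = cv2.resize(img, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_LINEAR)

# then crop so that the scaled "here" sits 151 px from the top (the ROI step)
top = int(here_pt[1] * scale) - 151
cropped = resized[max(top, 0):, :]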
Take a look at OpenCV's tutorials; for images with faces you can use Haar cascades to simplify the work (https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html#face-detection).
Otherwise, look at ROIs (regions of interest) to extract an area and apply your algorithm (resize or crop) to it.
I am developing an application that processes cheques for banks, but the bank's image of a cheque can be skewed or rotated slightly, by an angle of at most 20 degrees. Before the cheque can be processed, I need to properly align this skewed image. I am stuck here.
My initial idea was to first detect the straight horizontal lines in an "ideal" cheque image using the Hough line transform. Once I have the number of straight lines, I use the same technique to detect straight lines in a skewed image; if the number of lines is below some threshold, I flag the image as skewed. Following is my attempt:
import cv2
import numpy as np

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 50)
# minLineLength and maxLineGap must be passed by keyword, otherwise
# 1000 is silently taken as the (unused) `lines` output argument
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,
                        minLineLength=1000, maxLineGap=100)
if lines is not None and len(lines) > 2:
    pass  # image is mostly properly aligned
else:
    pass  # rotate it by some amount to align it
However, this gets me nowhere in finding the angle by which it is skewed. If I can find the angle, I can just do the following:
# say it is off by +20 degrees
deg = 20
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, -deg, 1.0)
rotated = cv2.warpAffine(image, M, (w, h))
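One thing I guess I could try is estimating deg from the segments themselves, say the median angle of the near-horizontal ones, though I'm not sure how robust that is:

import numpy as np

angles = []
for x1, y1, x2, y2 in lines[:, 0]:   # segments from HoughLinesP above
    a = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    if abs(a) <= 20:                 # the skew is at most 20 degrees
        angles.append(a)
deg = float(np.median(angles)) if angles else 0.0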
I then thought of getting the angle of rotation using a scalar product. But then, the scalar product of which two elements? I cannot pick elements from the "bad" cheque by their coordinates in the "ideal" cheque, because its contents are skewed. So, is there any way in OpenCV to, say, superimpose the "bad" image over the "ideal" one and somehow calculate the angle it is off by?
What I would do in your case is find the cheque within the image using feature matching against your template cheque image. Then you only need to find the transformation from one to the other and deduce the angle from it.
Take a look at this OpenCV tutorial, which teaches you how to do that.
EDIT:
In fact, if what you want is the bank cheque with the right orientation, the homography is the right tool for that. There is no need to extract an angle: just apply it to your image (or its inverse, depending on how you computed it) and you should get a beautiful cheque, ready for processing.
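A sketch of the tutorial's approach (assuming template and skewed are grayscale images of the ideal and the skewed cheque; the feature count and RANSAC threshold are just placeholder values):

import cv2
import numpy as np

orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(skewed, None)
kp2, des2 = orb.detectAndCompute(template, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = template.shape[:2]
aligned = cv2.warpPerspective(skewed, H, (w, h))   # cheque in template orientation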
This is kind of an easy problem to solve. I'm trying to re-arrange shapes in a plane, but first I need to detect them in the right way. I've come up with this very inefficient algorithm, which does the job fine until it reaches two shapes that are separated by a distance of less than 1px.
Here it is in Python pseudocode:
# all pixels
for x in range(0, image.width):
    for y in range(0, image.height):
        if pixel is black:
            # mark start of a shape
        else:
            if shape is open:
                for r in range(0, image.height):
                    if pixel is black:
                        # found shape, keep shape open
                    else:
                        # close shape
            else:
                for r in range(0, image.height):
                    paint pixel gray  # this draws the vertical gray lines in the example
This is the resulting image:
As you can see, the gray bars are drawn between the shapes, but this fails when two shapes are too close together (less than 1px apart).
IMPORTANT: I don't need to make this work for shapes that overlap vertically.
I don't really care about the exact Python/Pillow syntax, as long as you explain really well what your algorithm does and how it works, and it looks like Python/PIL code.
It sounds like you want to look at both the current column of pixels, and the previous one. If there's no y-position that's black in both columns, it's a new shape (or no shape).
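A sketch of that column-by-column comparison (assuming a grayscale PIL image where shapes are dark, below 128, on a light background):

def shape_starts(image, threshold=128):
    px = image.load()
    w, h = image.size
    prev = set()
    starts = []
    for x in range(w):
        cur = {y for y in range(h) if px[x, y] < threshold}
        # no y-position dark in both this column and the previous one:
        # a new shape begins here, even if the gap is less than 1 px wide
        if cur and not (cur & prev):
            starts.append(x)
        prev = cur
    return starts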
I'm writing a program that does basic image processing.
Keep in mind that the images are in grayscale, not RGB, also, I'm fairly new to Python, so an explanation of what I'm doing wrong/right would be incredibly helpful.
I'm trying to write an outline algorithm that follows this set of rules:
All light pixels in the original must be white in the outline image.
All dark pixels on the edges of the image must be black in the outline image.
If a pixel that is not on an edge of the image is dark and all of the 8 surrounding pixels are dark, this pixel is on the inside of a shape and must be white in the outline image.
All other dark pixels must be black in the outline image.
So far I have this:
def outlines(image):
    """
    Finds the outlines of shapes in an image. The parameter must be
    a two-dimensional list of pixels. The return value is another
    two-dimensional list of pixels which describes an image showing
    outlines of the shapes in the original image. Each pixel in the
    return value will be either black (0) or white (255).
    """
    height = len(image)
    width = len(image[0])
    new_image = []
    for r in range(height):
        new_row = []
        for c in range(width):
            if image[r][c] > 128:
                new_row.append(255)   # light pixel -> white
            else:
                new_row.append(0)     # dark pixel -> black (for now)
        new_image.append(new_row)
    return new_image
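A tiny input for checking what the function does so far (it only thresholds; the interior-pixel rule from the third requirement is still missing):

img = [
    [255, 255, 255, 255],
    [255,   0,   0, 255],
    [255,   0,   0, 255],
    [255, 255, 255, 255],
]
print(outlines(img))   # currently just returns the thresholded copy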
Can someone show me how to implement the algorithm in my outlines function?
Thanks in advance.
Edit: This is an assignment for my university comp-sci class. I'm not asking for someone to do my homework; rather, I've virtually no idea what the next step is.
Edit 2: If someone could explain a simple edge-detection function similar to the algorithm I need to create, I would appreciate it.
In addition to checking whether a pixel is dark or light, you should also check, when it is dark, whether all of the pixels around it are also dark; in that case the point must become white instead.
Check this function and try to use it for that purpose:
def all_are_dark_around(image, r, c):
    # range(-1, 2) gives the offsets [-1, 0, 1];
    # you could use that list directly, which is probably better here
    for i in range(-1, 2):
        for j in range(-1, 2):
            # if any pixel in the 3x3 square is light, return False
            # (note that image[r+0][c+0] is dark by assumption)
            if image[r + i][c + j] > 128:
                return False
    # the loop finished -> all pixels in the 3x3 square were dark
    return True
Advice:
- Note that you should never call this when the checked pixel is on the border of the image, i.e. when r or c equals 0, or r == height - 1, or c == width - 1, because then at least one neighbour falls outside the image and you will get an IndexError.
- Don't expect this code to work directly or to be the best code in terms of efficiency or style. It is a hint for your homework, and you should do the work: look at the code, take your time (ideally at least as long as it took me to write the function), understand how it works, and adapt it to your code, fixing any exceptions and border situations you encounter along the way.
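As a hypothetical illustration of the first point, a guard like this keeps the helper away from the border; border pixels then follow rule 2 directly:

def is_interior(r, c, height, width):
    # the 3x3 neighbourhood fits inside the image only for these pixels
    return 0 < r < height - 1 and 0 < c < width - 1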