I am trying to find or come up with an algorithm that extracts a path from a thick line. I think the images make it easier to understand what I am trying to do.
Given is a 2D array, as in the picture, with values 0 and 1, and I am trying to find the nodes of the lines. Does anybody have an idea?
You could follow the contour and nibble away pixel by pixel (checking that the connectivity stays intact).
If you cannot remove any more pixels, you have a 1 pixel line as wanted.
But the line will most likely have very few long linear segments (unlike in your example).
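A minimal sketch of that nibbling idea, using scikit-image's skeletonize() (my assumption that scikit-image is an option; it implements exactly this kind of connectivity-preserving thinning):

import numpy as np
from skimage.morphology import skeletonize

# Reduce a thick blob of 1s to a 1-pixel-wide centerline by
# iteratively removing boundary pixels while keeping connectivity.
def thin_to_one_pixel(binary):
    # binary: 2D array of 0s and 1s, as in the question
    return skeletonize(binary.astype(bool)).astype(np.uint8)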
I recommend using the most famous Python library for image processing: Pillow.
https://python-pillow.org/
Some questions to orient you:
Is it really a black-and-white source image? If not, the first step will be to make each pixel of the picture either black or white (Pillow offers this feature).
Is the width of the white pattern constant (i.e. always 15 pixels)? If not, your program will first need to scan the whole picture and then guess the pattern. If yes, you can guess the pattern while scanning the picture.
But what does "scanning the picture" mean?
That's the key question.
You could check all rows of pixels (from the first row to the last, and within each row, from left to right); each time you encounter a white pixel, you record its coordinates and how many white pixels follow it.
Doing that, you will get a table where all white pixels are located.
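A rough sketch of that scan with Pillow (the file name is a placeholder):

from PIL import Image

img = Image.open("pattern.png").convert("1")  # force pure black/white
pixels = img.load()
width, height = img.size

runs = []  # one entry per horizontal run of white: (x_start, y, length)
for y in range(height):
    x = 0
    while x < width:
        if pixels[x, y]:                      # white pixel found
            start = x
            while x < width and pixels[x, y]:
                x += 1
            runs.append((start, y, x - start))
        else:
            x += 1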
Then, it's more about mathematics than about programming.
This is an image containing some text, then a text box, then a signature, and after that a bottom line ending the picture.
And the 2nd image is what I want as output using python.
I have several pictures like this, I want the same cropped output for all the images.
Here is what I tried: I used pytesseract to OCR the image first, locating the text box and taking that as the starting point, and thresholding to determine the endpoint of the signature area. Then I tried using OpenCV to crop that area and save it to a local directory, but the approach is not very promising.
Can someone help me to solve the problem?
There are several relatively simple approaches you could try.
You could use cv2.findContours() to locate the two big black titles "Additional Information" and "Signatures and Certifications" and scale off their position to deduce where the signature is. Your intermediate image might be like this:
You could use "Hit and Miss" matching to locate the 4 corners of the box "The IRS doesn't require your consent...".
You could do flood-fills in, say, red, starting from white pixels near the centre of the image, until you get a filled red box around the right size for the box "The IRS doesn't require your consent...". Then scale from that.
You could do a flood-fill in black, starting 2/3 of the way down the image and halfway across, then find the remaining white area, which would be the box "The IRS doesn't require...".
You could look for the large, empty white area below the signature (using summation across the rows) and scale off that. I am showing the summation across the rows to the right of the original image. You could invert first and look for black. Do the summation with:
rowTotals = np.sum(img, axis=1)
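To make that concrete, a small sketch (the file name and the 0.99 factor are illustrative assumptions):

import cv2
import numpy as np

img = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)
rowTotals = np.sum(img, axis=1)

# Rows that are almost entirely white have near-maximal totals; a long
# run of such rows marks the empty area below the signature.
whiteRows = np.where(rowTotals > 0.99 * 255 * img.shape[1])[0]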
I made a model that predicts electrical symbols and junctions:
image of model inference.
Given the xywh coordinates of each junction's bounding box in the form of a dataframe (image of the dataframe), how would I produce an output that stores the location of all the wires in a .txt file in the form (xstart, ystart), (xend, yend)?
I'm stuck at writing a way to check if there is a valid line (wire) between any two given junctions.
data = df.loc[df['name'] == 'junction']
# iterate over all pairs of junctions, skipping a junction paired with itself
for index, row in data.iterrows():
    for index2, row2 in data.iterrows():
        if index == index2:
            continue
        check_if_wire_is_valid(row, row2)
My attempt was to erase all electrical symbols from the inference image (make everything inside the bounding boxes white, except for junctions) and run cv.HoughLinesP to find wires. How can I write a function that checks whether a cv.HoughLinesP output lies between two junctions?
Note that the minimum length of line that lies between two junctions should be greater than 1 px, because if I have a parallel circuit like this one, the top-left and bottom-right junctions would "detect" more than 1 px of line between them and misinterpret that as a valid line.
EDIT: minAreaRect on contours. I've drawn this circuit with no elements, for simplification and testing. This is the resulting minAreaRect found for the given contours. I can't seem to find a way to properly validate lines from this.
My initial solution was to compare any two junctions: if they are relatively close on the x-axis, I would say that those two junctions form a vertical wire, and if two other junctions were close on the y-axis, I would conclude that they form a horizontal wire (junction distance to axis).
Now, this would create a problem if I had a diagonal line. I'm trying to find a solution that is consistent and applicable to every circuit. I believe I'm onto something with the HoughLinesP method or with contours, but that's as far as my knowledge can take me.
The main goal is to create an LTSpice-readable circuit for simulation purposes. Should I change my method of finding valid lines? If so, what is your take on the problem?
This should be doable using findContours(). A wire is always a (roughly) straight line, right?
Paint the classified boxes white, as you said
threshold() to get a binary image with the wires (and other symbols and letters) in white, everything else black.
run findContours() on that to extract objects.
Get the bounding boxes (minAreaRect) for all contours
discard all contours with too wide a side ratio, as those are letters or symbols; keep only those slim enough to be a wire
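Putting those steps together, a rough sketch (assumes OpenCV 4 and dark wires on a light background; the file name, threshold and aspect-ratio cutoff are illustrative):

import cv2

img = cv2.imread("circuit_no_symbols.png", cv2.IMREAD_GRAYSCALE)

# Wires become white, background black
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

wires = []
for cnt in contours:
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    if min(w, h) == 0:
        continue
    # Keep only slim, elongated boxes; fat boxes are letters/symbols
    if max(w, h) / min(w, h) > 5:
        wires.append(((cx, cy), (w, h), angle))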
Now you've got all wires as objects, similar to the junction list. As for how to merge those two, some options come to mind:
Grow the boxes by a certain amount, and check if they overlap.
Interpolate a line from the wire boxes and check if they cross any intersection box close by.
Or the other way around: draw a line between intersections and check how much of it goes through a certain wire box.
This is a pure math problem, and I don't know what your performance requirements are. So I'll leave it at that.
Let me start by saying that I'm a complete amateur in image recognition and I'm trying to complete my first assignment using OpenCV in Python. I'm currently really struggling and therefore I came here for some advice or any help in general that would put me on the right path.
What am I currently trying to do:
My goal here is to recognize a shooting target image that the user uploads and match it to one of two shooting target templates (images provided below). My app is afterwards going to score this shooting target based on the template it matches and give the user a really accurate score for his shot/shots (based on millimeters from the center of the target). That is just the long-term goal. For now, I'm just trying to figure out how to distinguish the uploaded target image from the templates I have.
Examples of shooting targets:
As I mentioned I have two shooting target templates: target 1 and target 2.
The user then uploads a target that must match one of the templates.
Example that matches target 1
Example that matches target 2
Whenever the uploaded shooting target doesn't match any of the templates, the app should tell the user and not continue with the calculation.
What have I done and tried so far:
For starters, I figured it would be beneficial to remove everything from the background and crop the image by the shooting target, and so I did. (I thought if I removed all of the background interference I could easily just compare the two images, but I later found out this actually wouldn't be accurate at all).
After that, I tried to calculate the percentage of the black color to the other color inside the target (without the background), but again found out this wouldn't be accurate since the shooter could shoot through a lot of the black color and then the percentage would fluctuate. Also, I wouldn't be able to tell if it's one of the templates since another completely different shooting target could have the same amount of black color in the middle.
As for comparing the two images, I tried a lot of approaches (histogram comparison, feature matching with brute force, template matching) and none of them seemed to be accurate or usable (I could have been doing it wrong, though; that's a possibility).
What I have figured out after all of those failures is that possibly the best solution would be to compare the circles inside the shooting target or the numbers inside the black middle circles, but I couldn't figure out how to do so properly.
Do you guys have any idea on how to go about this? I would really appreciate any help or any push towards the solution of my problem. Code examples are highly appreciated and would make my day.
Best regards.
The targets seem to differ only in score bands (rings) 4, 5 and 6. So I would try and concentrate on those areas.
I took your sample images and resized them to exactly 500x500 pixels, then I measured the radius from the centre to the outside edge of band 4 (which was 167 px) and to the edge of band 6 (which was 95 px). So the outer limit of the area of interest is 167/500, or 0.33xW and the inner limit is 95/500, or 0.19xW where W is the width of the enclosing rectangle.
So, you can draw that mask like this:
#!/usr/bin/env python3
import numpy as np
import cv2
# Define width/height of target in pixels
W = 300
# Make mask, white for area of interest, black elsewhere
mask = np.zeros((W,W),dtype=np.uint8)
cv2.circle(mask, (W//2,W//2), int(0.33*W), 255, -1) # White outer circle
cv2.circle(mask, (W//2,W//2), int(0.19*W), 0, -1) # Black inner circle
That gives you this mask:
You can now calculate, say, the mean of all pixels within that mask using:
maskedMean = cv2.mean(YourImage, mask)
and only pixels that are white within the mask will contribute to the mean.
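As an illustrative follow-up (continuing the snippet above; the variable names are placeholders), you could resize the upload and both templates to the same WxW size and pick whichever template's masked mean is closer:

def closest_template(upload, template1, template2, mask):
    # Mean intensity inside the ring-shaped area of interest
    m  = cv2.mean(upload, mask)[0]
    m1 = cv2.mean(template1, mask)[0]
    m2 = cv2.mean(template2, mask)[0]
    return 1 if abs(m - m1) < abs(m - m2) else 2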
Here is the mask placed beside one of your targets:
I am trying to learn Python by doing some small projects. One of them is as follows:
I want to write a program that converts an image to straight lines. I am actually into string art and want the output in a form that can be used to easily build the string art.
So, I'll try to explain what I'm trying to do:
1. Import an image, get the pixel coordinates, and save them into an array.
2. Get the brightness value for each pixel of the image.
3. Choose a number of lines; this will be the "quality" of my output. In reality, these would be the strings that I use to create the art.
4. Draw a random number of lines through the darkest pixel of the image, compare each line's total brightness to the others, choose the darkest one (the line hitting the darkest pixels), and remove that line's pixels from the pixel array.
5. Save the two x and y coordinates of every line's intersection with the image borders (my canvas), so I can use this later to know where to start and finish my strings.
6. Repeat steps 4 and 5 for the number of lines chosen.
7. Print or save the x/y coordinates of the line intersections with the image borders.
I plan on using PIL and numpy to do this. Now my questions are:
a. Do you think there are easier or better ways to achieve my goal?
b. What is the best way to get a clean array of pixels from any given digital image?
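For (b), the kind of thing I have in mind so far is a rough sketch like this (the file name is a placeholder):

import numpy as np
from PIL import Image

img = Image.open("input.jpg").convert("L")  # "L" = 8-bit grayscale
brightness = np.asarray(img)                # 2D array, shape (height, width)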
You can see the kind of image I'm trying to produce at linify.me.
Thanks.
How can I identify the presence or absence of regular stripes of different colors, ranging from very, very light pink to black, inside a scanned image (bitmap, 200x200 dpi, 24-bit)?
Here are a few examples.
Example 1
Example 2 (the lines are in all the columns except 7 in the second row of the last column)
For now I try to identify (using Python) whether each strip contains at least 5-10 pixels of a color different from white; however, this does not always work, because the scanned image is not of high quality and the strip's color is very similar to the color that surrounds it.
Thanks.
This looks to me like connected-component labeling in an image, to identify discrete regions within a certain color range. You can have a look at cvBlobLib. Some pre-processing would be required to merge the pixels if there are holes or small variations between neighbors.
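As a sketch of that idea with OpenCV's built-in connected-component labeling (a substitute for cvBlobLib on my part; the file name and thresholds are illustrative):

import cv2

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)

# Everything that is not (near-)white becomes foreground
_, binary = cv2.threshold(img, 240, 255, cv2.THRESH_BINARY_INV)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Keep components with at least 5 pixels, per the question
strips = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= 5]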
Not going to happen. The human visual system is far better than any image processing system, and I don't see anything in the 2nd row of #3. #1 and #5 are also debatable.
You need to find some way to increase the optical quality of your input.
Search for a segmentation algorithm with a low threshold.
It should give you good results, as the edges are sharp.
Sobel would be a good start ;)
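A minimal Sobel starting point (the file name, kernel size and threshold are placeholders that would need tuning):

import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
magnitude = np.sqrt(gx**2 + gy**2)

# Low threshold, as suggested, to catch faint strips
edges = (magnitude > 30).astype(np.uint8) * 255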