Detecting different paths on a map from a top view - python

Map for the robot: (https://i.stack.imgur.com/50kDz.jpg)
Hi, I am very new to computer vision. I have a map layout consisting of 3 routes, and I want to identify the map and the 3 routes separately. When there is a gap between the tape sections, as seen in the image, I can identify the different routes with the help of the Hough line transform. However, if I remove the gaps so that all the routes are joined together, they are identified as one route. The program also gets distracted by random lines from objects outside the route, which I do not want. How can I fix this, or what should I look into?
I have attempted Sobel edge detection and Canny. I was thinking about implementing some sort of feature to identify routes by orientation, or to just identify the junctions. Even with those methods the lines are detected unclearly.
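For reference, a minimal sketch of this kind of setup: masking the tape by color before the Hough transform, so lines from objects outside the route are suppressed (the file name and HSV thresholds are placeholder assumptions, not the asker's actual code):

```python
import cv2
import numpy as np

frame = cv2.imread("map.jpg")  # hypothetical top-view image of the course

# Masking the tape by color first suppresses Hough lines coming from
# random objects outside the route (HSV thresholds here are placeholders
# for light-colored tape; tune them for the actual tape color).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=10)
```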

Related

Automatically inserting text into dxf file with python

I'm trying to automatically insert text into DXF contours using Python. I have a bulk of DXF files for laser cutting, and we often want to engrave the part number into the sheet-metal part.
My attempt is to make a square box whose length and width equal the text height and width. Once the program has found a place that lies outside the inner contours and inside the outer contour, I want to fill the box with text. I tried abstracting the contours with polygons and checking whether the box fits, which kind of works, but it is not completely finished yet.
I wondered if there is some sort of library/tool that has this function, because computing time is quite high at the moment, or whether anyone has an easier approach than mine, before I put more effort into the program.
The next release of ezdxf (v0.16 is in beta now) has a new text2path add-on, which uses Matplotlib to render text strings as path objects. These path objects can be placed as:
POLYLINE and SPLINE entities to preserve the smooth curves
flattened to POLYLINE or LWPOLYLINE entities, which consist only of straight lines
HATCH entities with or without spline edges
flatten the paths into simple vertices
There are some examples in the https://github.com/mozman/ezdxf/tree/master/examples/addons folder (*_to_path.py).
It is possible to transform the path objects by a Matrix44 transformation, and there even exists a function to fit paths into a box: fit_paths_into_box().
For additional questions, you can use the discussions board at GitHub.
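A minimal sketch of how that add-on could be used, based on the description above (untested against the v0.16 beta; exact names and signatures may differ, so check the linked examples for authoritative usage):

```python
import ezdxf
from ezdxf import path
from ezdxf.addons import text2path
from ezdxf.tools import fonts

doc = ezdxf.new()
msp = doc.modelspace()

# Render a part number string as path objects via Matplotlib.
paths = text2path.make_paths_from_str(
    "PART-1234", font=fonts.FontFace(family="Arial"))

# Fit the text paths into a previously found empty box (width, height, depth).
fitted = path.fit_paths_into_box(paths, size=(40, 8, 0))

# Flatten to LWPOLYLINE entities consisting only of straight lines.
path.render_lwpolylines(msp, fitted, distance=0.01)

doc.saveas("engraved.dxf")
```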

How to validate whether a marked feature is correctly tracking an object in a video by OpenCV?

I want to validate that the object detection (the green boxes) I have marked is tracking only that object throughout a video.
How do I check that it keeps tracking that object and has not jumped to any other object? In this case, how do I validate that the left (black) car is tracked correctly over the next set of frames, along with the other object (the white car on the right side)?
Edit: I have tried finding contours, extracting that particular object (the black car in this frame), and extracting features from it, but that didn't work.
If you initialize your tracking with a detection step, you could periodically reiterate that detection inside the region you are tracking to make sure the car is still there. Alternatively, you could describe the object region using various histograms (color, gradients, etc.) and check whether the region you are tracking is still similar to what it started with.
I suggest checking out color histograms and HOGs (histogram of oriented gradients) to start with, and maybe combine them.
You could also analyze the objects' motion to detect irregularities, jumps, etc. Consider comparing next-frame positions to predicted positions with a Kalman filter.
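As a concrete sketch of the histogram idea (the box format and the 0.5 drift threshold are illustrative assumptions, not tuned values):

```python
import cv2

def region_hist(frame, box):
    # box = (x, y, w, h) in pixels
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [50, 60], [0, 180, 0, 256])
    return cv2.normalize(hist, hist)

# ref = region_hist(first_frame, initial_box)   # at initialization
# sim = cv2.compareHist(ref, region_hist(frame, tracked_box),
#                       cv2.HISTCMP_CORREL)     # every N frames
# if sim < 0.5:   # drift threshold: an assumption, tune on your data
#     pass        # tracker has probably drifted; rerun the detector
```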

How to detect specific objects first before selecting region of interests (selectROI) in OpenCV

I'm doing a project on multiple object tracking, particularly pedestrians on a street. I've read about the tracking API in OpenCV, and there's a part where you have to specify an ROI (i.e., a box) around each tracked object. However, I don't know how to make the machine first understand that it needs to detect pedestrians only, and then draw the ROI around each detected person. Also, the number of people on the street differs from frame to frame, so how do I automate the program to detect the people and then draw the boxes on them? Thanks.
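A minimal sketch of one standard approach: OpenCV's built-in HOG pedestrian detector supplies the boxes that would otherwise come from selectROI(). Detector parameters are illustrative, and tracker constructor names vary across OpenCV versions:

```python
import cv2

# HOG + linear SVM people detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street.jpg")  # hypothetical frame
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))

# Seed one tracker per detected pedestrian; re-run detection on later
# frames to pick up people entering the scene.
trackers = []
for (x, y, w, h) in rects:
    tracker = cv2.TrackerKCF_create()  # cv2.legacy.TrackerKCF_create in 4.5+
    tracker.init(frame, (int(x), int(y), int(w), int(h)))
    trackers.append(tracker)
```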

Easiest way to parse simple SVG into Python, applying all transforms

I'd like to draw a few simple objects in Inkscape (lines, circles, rectangles), group them, move them around, scale, rotate, make copies, etc. and then I need something which will let me load the SVG into Python and iterate over all these shapes, getting the relevant attributes (for circles: centre and radius; for lines: the two end-points; etc.).
I've installed and tried svg-utils, pysvg, svgfig and read about inkex, and while all of these seem to allow me to iterate through the XML structure, with varying degrees of awkwardness, as far as I can see none of them apply the transforms to the elements. So, if I draw a line from (0,0) to (1,1), group it, and move it to (100,100), then its XML tag is still going to say (0,0) to (1,1), but its real position is computed by applying the transform in its containing group to these end-points.
I don't want to write all this transform-application code myself, because that would be re-inventing the bicycle. But I need help finding a convenient existing bicycle...
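For concreteness, the transform application the question wants to avoid hand-writing is a 3x3 affine multiply in homogeneous coordinates:

```python
import numpy as np

# Group transform "translate(100, 100)" as a homogeneous 3x3 matrix.
translate = np.array([[1.0, 0.0, 100.0],
                      [0.0, 1.0, 100.0],
                      [0.0, 0.0,   1.0]])

# Line end-points (0,0) and (1,1) as homogeneous column vectors.
endpoints = np.array([[0.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0]]).T

real = translate @ endpoints
print(real[:2].T)  # [[100. 100.], [101. 101.]]: the actual positions
```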
One likely useful route is to find an exporter into a simple format, which would already have had to solve all these problems. Here is an example I found: http://en.wikipedia.org/wiki/SK1_%28program%29#Supported_formats
But which of the export formats listed there is likely to be the simplest?

Robust detection of grid pattern in an image

I have written a program in Python which automatically reads score sheets like this one
At the moment I am using the following basic strategy:
Deskew the image using ImageMagick
Read into Python using PIL, converting the image to B&W
Calculate the sums of pixels in the rows and the columns (a sketch of these projection steps follows the list)
Find peaks in these sums
Check the intersections implied by these peaks for fill.
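A minimal sketch of the projection and peak-finding steps (the file name and threshold choice are placeholders):

```python
import numpy as np
from PIL import Image

# Load as B&W and project the ink onto each axis.
img = np.asarray(Image.open("sheet.png").convert("1"))  # True = white

row_sums = (~img).sum(axis=1)  # ink per row
col_sums = (~img).sum(axis=0)  # ink per column

def peaks(v, thresh):
    # Crude local-maxima picker; a real version would smooth first.
    return [i for i in range(1, len(v) - 1)
            if v[i] > thresh and v[i] >= v[i - 1] and v[i] >= v[i + 1]]

rows = peaks(row_sums, row_sums.mean())
cols = peaks(col_sums, col_sums.mean())
```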
The result of running the program is shown in this image:
You can see the peak plots below and to the right of the image shown in the top left. The lines in the top left image are the positions of the columns and the red dots show the identified scores. The histogram bottom right shows the fill levels of each circle, and the classification line.
The problem with this method is that it requires careful tuning and is sensitive to differences in scanning settings. Is there a more robust way of recognising the grid, one that requires less a priori information (at the moment I rely on knowing how many dots there are) and is more robust to people drawing other shapes on the sheets? I believe it may be possible using a 2D Fourier transform, but I'm not sure how.
I am using the EPD, so I have quite a few libraries at my disposal.
First of all, I find your initial method quite sound and I would have probably tried the same way (I especially appreciate the row/column projection followed by histogramming, which is an underrated method that is usually quite efficient in real applications).
However, since you want to go for a more robust processing pipeline, here is a proposal that can probably be fully automated (removing, at the same time, the deskewing via ImageMagick):
Feature extraction: extract the circles via a generalized Hough transform. As suggested in other answers, you can use OpenCV's Python wrapper for that. The detector may miss some circles but this is not important.
Apply a robust alignment detector using the circle centers. You can use the Desolneux parameter-less detector described here. Don't be afraid of the math; the procedure is quite simple to implement (and you can find example implementations online).
Get rid of diagonal lines by a selection on the orientation.
Find the intersections of the lines to get the dots. You can use these coordinates for deskewing by assuming ideal fixed positions for these intersections.
This pipeline may be a bit CPU-intensive (especially step 2, which performs a kind of greedy search), but it should be quite robust and automatic.
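A sketch of step 1 using OpenCV's Python wrapper (parameter values are placeholders to tune for the scan resolution):

```python
import cv2
import numpy as np

gray = cv2.imread("sheet.png", cv2.IMREAD_GRAYSCALE)  # hypothetical scan
gray = cv2.medianBlur(gray, 5)

# Step 1: circle detection via the Hough gradient method. Missing a few
# circles is fine; the alignment detector in step 2 only needs centers.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=25)
centers = circles[0, :, :2] if circles is not None else np.empty((0, 2))
```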
The correct way to do this is to use connected-component analysis on the image to segment it into "objects". Then you can use higher-level algorithms (e.g. a Hough transform on the component centroids) to detect the grid, and determine for each cell whether it's on or off by looking at the number of active pixels it contains.
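A sketch of this approach (Otsu thresholding is an assumed binarization step):

```python
import cv2

gray = cv2.imread("sheet.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Segment into "objects"; stats holds per-component bounding boxes and
# pixel counts, centroids feed the grid detection (e.g. a Hough transform).
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)

for i in range(1, n):  # label 0 is the background
    fill = stats[i, cv2.CC_STAT_AREA]  # active pixel count -> cell on/off
```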
