Python - detect elements of a region

I have an array made of 1s and 0s (image below), and I am working on a Python script that detects the borders of the central region (the big white blob) and marks all the internal points as 1. How would you do it?
I wrote a piece of code that does a repeated connectivity search, but this doesn't seem like the way to go - the region changes shape and new areas get added.
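If the array is a NumPy array, a single call to scipy.ndimage.binary_fill_holes may already do what you describe (marking everything enclosed by the blob's border as 1). A minimal sketch, assuming SciPy is available and using a toy array in place of your data:

```python
import numpy as np
from scipy import ndimage

# Toy example: a ring of 1s (the border) with 0s inside.
mask = np.zeros((7, 7), dtype=np.uint8)
mask[1:6, 1:6] = 1
mask[2:5, 2:5] = 0  # interior currently 0

# Mark every point enclosed by the border as 1.
filled = ndimage.binary_fill_holes(mask).astype(np.uint8)
print(filled)
```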

As I can't post a comment, I'll put it here.
I had a problem close to yours: I wanted to select several holes and then calculate the area, the roundness, and so on.
What I did was to use the Java implementation of Python (Jython), through which I could use ImageJ, a library dedicated to image processing (all of it is included in Fiji). Navigating the library is a bit tedious, but it is a powerful one.
Here is the wand tool: http://rsbweb.nih.gov/ij/developer/api/ij/gui/Wand.html
Have a look here for "how to get the pixels of a ROI": http://fiji.sc/Introduction_into_Developing_Plugins#ImageJ.27s_API
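For reference, a minimal Jython sketch of using the Wand tool from Fiji's script editor; the start coordinates and the use of a PolygonRoi here are just illustrative assumptions:

```python
# Jython, run inside Fiji's script editor. Traces the region containing
# the point (100, 100) with the Wand and sets it as a ROI on the image.
from ij import IJ
from ij.gui import Wand, PolygonRoi, Roi

imp = IJ.getImage()              # currently open image
ip = imp.getProcessor()

wand = Wand(ip)
wand.autoOutline(100, 100)       # start point inside the blob (assumed coordinates)

roi = PolygonRoi(wand.xpoints, wand.ypoints, wand.npoints, Roi.TRACED_ROI)
imp.setRoi(roi)                  # area, roundness, etc. can then be measured on the ROI
```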

Related

Detect (and maybe decode) PDF417 barcodes using python

I am trying to detect the PDF417 barcode (a 2D barcode) in an image using Python.
I will be receiving images of IDs where there is a barcode in them, but it might not always be straight. So I am looking for an effective way to DETECT the PDF417 barcode using Python.
I tried all of the available methods that I could find (that use Python),
e.g.,
pdf417decoder: requires the image to be cut exactly around the barcode just like in the image below:
pyzbar: only detects 1D barcodes
python-zxing and zxing: didn't detect any of the pdf417 barcodes that I tried (around 10 different IDs - different country)
Barcode-detection: this is a DL approach that uses YOLO-V3 to detect barcodes, but again (after trying it), it only detects 1D barcodes...
Is there a method that I missed?
Am I using a wrong approach towards this problem?
Possible solution that I am thinking of: using computer vision (some filters and transformations) to detect a box that has black and white dots... Something similar to this.
Thanks!
After various trials, I ended up using a template-matching approach with OpenCV.
You need to choose your template image carefully, as it will be the search reference for your algorithm, and you need to feed the matcher grayscale images.
Then keep the boxes whose match score is higher than a certain threshold (0.55 in my case) and apply NMS (non-maximum suppression) to filter out the noisy boxes.
But keep in mind that there are many edge cases to run into. If someone is interested in seeing the complete solution, please let me know.
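A rough sketch of that pipeline, assuming OpenCV and NumPy; the file names, the 0.55 threshold and the small NMS helper are illustrative, not the complete solution:

```python
import cv2
import numpy as np

img = cv2.imread("id_card.jpg", cv2.IMREAD_GRAYSCALE)           # assumed file name
templ = cv2.imread("pdf417_template.jpg", cv2.IMREAD_GRAYSCALE)  # assumed template
h, w = templ.shape

# Normalized cross-correlation between the template and every location.
res = cv2.matchTemplate(img, templ, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(res >= 0.55)                                   # keep strong matches
boxes = [(x, y, x + w, y + h, res[y, x]) for x, y in zip(xs, ys)]

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, score) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, iou_thr=0.3):
    """Greedy non-maximum suppression, keeping the highest-scoring boxes."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thr for k in kept):
            kept.append(b)
    return kept

detections = nms(boxes)
```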

Eliminate the background (the common points) of 3 images - OpenCV

Forgive me, but I'm new to OpenCV.
I would like to delete the common background in 3 images, where there is a landscape and a man.
I tried some subtraction code, but I can't solve the problem.
I would like to output each image with only the man and without the landscape.
Are there algorithms in OpenCV that do this (without any manual operation, so no markers or anything else)?
I tried the Python code from CV - Extract differences between two images,
but it doesn't work because in my case I don't have an image with only the background (without the man).
I think a good solution would be to compare all the images and keep those "points" that stay the same in at least one other image.
In this way I can extrapolate a background (which we call "Result.jpg") and finally analyze each image and cut out those portions that are also present in "Result.jpg".
Do you think it's a good idea? Do you have other, simpler ideas?
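A rough sketch of the idea you describe, assuming the three images are aligned and the man covers any given pixel in at most one of them (the file names and the threshold are just placeholders):

```python
import cv2
import numpy as np

imgs = [cv2.imread(f) for f in ("img1.jpg", "img2.jpg", "img3.jpg")]  # assumed names
stack = np.stack(imgs).astype(np.float32)

# Per-pixel median across the three images: where at least two images agree,
# the common (background) value wins - this is the "Result.jpg" background estimate.
background = np.median(stack, axis=0).astype(np.uint8)
cv2.imwrite("Result.jpg", background)

# Rough foreground mask for one image: pixels that differ from the estimated background.
diff = cv2.absdiff(imgs[0], background)
mask = (diff.max(axis=2) > 30).astype(np.uint8) * 255  # 30 is an arbitrary threshold
```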
Without semantic segmentation, you can't do that.
All you can compute is where two images differ, and this does not give you the silhouette of the person but the overlap of two silhouettes, so you will never recover the exact outline.
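For completeness, a hedged sketch of the semantic-segmentation route using a pretrained torchvision model; the model choice, file name and class index are assumptions (class 15 is "person" in the PASCAL VOC label set these models are trained on):

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Pretrained segmentation model (downloads weights on first use).
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("img1.jpg").convert("RGB")             # assumed file name
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]

person_mask = (out.argmax(0) == 15)                     # True where the model sees a person
```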

Multiview Stereo 3D construction in Blender

I intend to make a 3D model based on multi-view stereo images (basically 2D images of the same object from different angles and orientations) inside Blender from scratch. However, I am new to Blender.
I wanted to know if there are any tutorials on how to project a single pixel or point into Blender's 3D environment using Python. If not a tutorial, any documentation. I am still learning about this whole 3D reconstruction thing and am pretty new to it, so I am not sure; maybe these points are stored in a three-dimensional matrix/array?
Basically I want to implement 3D reconstruction based on a paper written by some researchers. Most such projects are in C++. I want to do it in Python in Blender and, if I am capable enough, make these libraries open source.
Please suggest any prerequisites you think would help me. I have just started the 3rd year of my BSc Computer Science course and am very new to the world of computer graphics.
(My skillset is C, Java and Python.)
I would greatly appreciate any help.
Thank you.
Link to website: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction
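For the specific sub-question of placing a single point at a 3D coordinate from Python, a minimal sketch using Blender's bpy API (Blender 2.8+ assumed; the coordinate is just an example) would be:

```python
# Run inside Blender's Scripting tab: creates a mesh object containing a
# single vertex at the given 3D coordinate.
import bpy

coord = (1.0, 2.0, 0.5)                      # example point in world space

mesh = bpy.data.meshes.new("PointMesh")
mesh.from_pydata([coord], [], [])            # one vertex, no edges, no faces
obj = bpy.data.objects.new("Point", mesh)
bpy.context.collection.objects.link(obj)
```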
Yes, it can very likely be done in Blender, and in Python at least for small geometries / low resolution.
A valid approach for the kind of scenario you seem to want to play with is based on the idea of "space carving" or "silhouette projection". A good description is in an old paper by Kutulakos and Seitz, which was based in part on earlier work by Szeliski.
Given a good estimation of the silhouettes, these methods can correctly reconstruct all convex portions of the object's surface, and the subset of concavities that are resolved in the photo hull. The remaining concavities are "patched" over and need to be reconstructed using a different method (e.g. stereo, or structured light). For the surfaces that can be reconstructed, space carving is generally more robust than stereo (since it is insensitive to the color and surface texture of the object), and can work on surfaces where structured light struggles (e.g. surfaces with specularities, or very dark objects with low reflectance for a laser stripe).
The basic idea is to use the silhouettes of the projection of the object in cameras around it to "remove" mass from an initial volume (e.g. a box) encompassing the object, a bit like a sculptor carving a statue by removing material from a block of marble.
Computationally, you can do it by representing the volume of space of interest using an octree, initialized with a minimal level of subdivision, and then progressively refined. The refinement consists of projecting the vertices of the octree leaves into the cameras and identifying which leaves are completely outside or partially inside the silhouettes. The former are pruned, while the latter are split, and the process continues until no more leaves can be split or a maximum level of subdivision is reached. The hull of the octree is then extracted as a "watertight" mesh using standard methods.
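A very small sketch of the carving step, using a plain voxel grid instead of an octree for simplicity, and assuming you already have for each camera a 3x4 projection matrix P and a binary silhouette mask:

```python
import numpy as np

def carve(voxel_centers, cameras):
    """voxel_centers: (N, 3) array of 3D points; cameras: list of (P, silhouette)
    pairs, where P is a 3x4 projection matrix and silhouette a 2D boolean mask.
    Returns a boolean array marking the voxels that fall inside every silhouette."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in cameras:
        proj = homog @ P.T                     # project into the image plane
        uv = proj[:, :2] / proj[:, 2:3]        # perspective divide
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                            # carve away voxels outside any silhouette
    return keep
```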
Apart from the above paper, a much more detailed description can be found in an old patent by Geometrix, which sold a scanner based on the above ideas around the year 2000. Here is what it looked like:

extract text and labels from PDF document

I am trying to detect and extract the "labels" and "dimensions" from a 2D technical drawing saved as a PDF, using Python. I came across a Python library called "pytesseract" which has optical character recognition capability. I tried the demo on my image, but it fails to detect most of the labels/dimensions. Please suggest if there is another way to do it. Thank you.
Attached is a sample of the 2D technical drawing I am trying to process.
What I am trying to achieve is to be able to obtain the coordinates of every dimension (the 160, 120, 10, 4x45, etc.) on the image, and to extract them as well.
About 16 months ago we asked ourselves the same question.
If you want to implement it yourself, I'd suggest the following process:
Extract the Canvas from the sheet
Separate the Cuts
Detect the Measure Regions on each Cut
Detect the individual attributes of the Measure Regions to understand where the Measures start and end. In your particular example that's relatively easy.
Run the detected Measure Labels through OCR (see the sketch after this list)
Associate the Labels to the Measures
Verify your results
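For the OCR step, and for getting the coordinates of each label, a minimal sketch using pytesseract's image_to_data, assuming the PDF page has already been rendered to an image file:

```python
import cv2
import pytesseract
from pytesseract import Output

img = cv2.imread("drawing.png")                       # assumed: the PDF page rendered to an image
data = pytesseract.image_to_data(img, output_type=Output.DICT)

for i, text in enumerate(data["text"]):
    if text.strip():                                  # skip empty detections
        x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
        print(text, (x, y, w, h))                     # recognized label and its bounding box
```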
Alternatively you can also run it through our API and get the results as JSON.
Here's a quick visualization of the result:
Drawing Read (GT stands for General Tolerances)

Is there a library for Python that is more suitable for drawing faces?

I am trying to make a Python script that can create randomized 2D faces. I'm using turtle at the moment, but I was wondering if there is a better library capable, for example, of auto-completion (automatically connecting lines that are distant from each other). Say, for example, you wanted to divide a face into three portions (upper, mid and bottom) and these three sections have different widths, but at the end you connect them together.
