Sift Lowe, VLFeat Sift, Python Photogrammetry Toolbox - python

I do not have much programming experience, and I am currently working on a project with photogrammetry.
I'm testing out the Python Photogrammetry Toolbox, which allows the user to select Sift Lowe or VLFeat Sift for the feature detection process.
However, the sift.exe crashes whenever the image size is above a certain threshold. My input images are 6000x4000, and I have to scale down to 1500x1000 in order for the program not to crash.
Does anyone know if there is some parameter I can adjust to fix this?
SiftGPU in VisualSFM does not seem to have any problems when I use the full resolution.
Thanks

...for the Windows version - the problem is in "osm-bundler\osm-bundlerWin64\software\vlfeat\bin\w32\sift.exe" - the "sift.exe" files are 32-bit (limit of 2 GB per process, so a maximum image size of roughly 1900x1900).

SIFT is now available as part of the main OpenCV repository (the patent on SIFT has expired).
You can check it with the latest version of OpenCV (4.4 as of today). Try OpenCV's SIFT using the code below.
cv2.SIFT_create()
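For example, a minimal sketch of detecting keypoints with it (the image path is a placeholder for your own file):

import cv2

# Load the image in grayscale
img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

# Create the SIFT detector (requires OpenCV >= 4.4.0)
sift = cv2.SIFT_create()

# Detect keypoints and compute their 128-dimensional descriptors
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"Found {len(keypoints)} keypoints")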
Let us know whether it solves your problem.

Related

Tracking and blurring faces in multiple 360 images via python opencv

Is there a way to track down and nicely blur faces or parts of faces (like hair) in multiple 360-degree images via Python OpenCV? I'm using Windows and Python 3.8.
Two methods with OpenCV and Python:
Using a Gaussian blur to anonymize faces in images and video streams
Applying a "pixelated blur" effect to anonymize faces in images and video
Both methods are well explained here, and you can access the code.
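For illustration, a minimal sketch of the "pixelated blur" idea on a face region (the rectangle coordinates here are hypothetical placeholders; in practice they come from a face detector):

import cv2

img = cv2.imread("photo.jpg")

# Hypothetical face rectangle (x, y, w, h) - normally produced by a detector
x, y, w, h = 100, 80, 120, 120
face = img[y:y+h, x:x+w]

# Pixelate: shrink the region, then scale it back up with nearest-neighbour
small = cv2.resize(face, (8, 8), interpolation=cv2.INTER_LINEAR)
img[y:y+h, x:x+w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

cv2.imwrite("anonymized.jpg", img)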
Now, a more advanced solution, if you are using a GPU and want to run the application on a live video stream, is NVIDIA DeepStream with deep learning. The GitHub repo here reports results on a T4; I believe you should be able to run it on a Jetson Nano. Here is the link.
Yes, there is. First you need to detect the face(s) using a Haar cascade, which will give you the rectangle coordinates of each face's location. Then you can use this answer to blur the desired portion of the image.
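A minimal sketch of that pipeline, assuming the frontal-face cascade bundled with OpenCV and a placeholder input file:

import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Load the frontal-face Haar cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Detect faces; returns (x, y, w, h) rectangles
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    # Blur each detected face region in place
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("blurred.jpg", img)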

Algorithm for extracting background from image

Good day. I have a set of geotagged photos. I want to build a system which approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. The problem, however, is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect the whole body of each person in an image and keep only the background.
OpenCV has algorithms which can be used to remove the background (I was hoping to invert the output and keep the background instead). However, these are only applicable to videos (subtracting a static background from the moving parts). Can you recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!
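(For reference, the video-only technique mentioned above looks roughly like this with OpenCV's MOG2 background subtractor; the file name is a placeholder. It models the static background across many frames, which is why it does not transfer directly to single photos.)

import cv2

cap = cv2.VideoCapture("walk.mp4")  # hypothetical video file
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # White = moving foreground (e.g. people), black = static background
    mask = subtractor.apply(frame)
    # Current estimate of the static background image
    background = subtractor.getBackgroundImage()

cap.release()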

Python-OpenCV Image Recognition

I am new to the computer vision area and I have been given this task:
I need to recognize a number of images with a camera as soon as they enter the camera's focus. These images would be scanned beforehand and stored in some sort of database (maybe the collection of key-points for each image).
I've been doing some research and found that SIFT may do the trick, but I don't know how to use it properly. I need to do this in Python with OpenCV.
Note: I already found examples in which I can get the key-points of an image using SIFT, but the code is very confusing to someone who does not know the language. Any help is appreciated.
Here is a good page for you to get started and learn the basics along the way.
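As a starting point, here is a minimal sketch of matching a camera frame against a stored reference image with SIFT (the file names are placeholders, and it assumes an OpenCV build where SIFT is available):

import cv2

# Pre-computed reference image (would come from your database in practice)
ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Brute-force matcher with Lowe's ratio test to keep distinctive matches
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Many good matches suggests the reference object is present in the frame
print(f"{len(good)} good matches")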

Open CV or alternative for post processing identification

I'm new to OpenCV and image processing, but I was wondering: if I have a still image, say a JPEG file, is it possible to run OpenCV or another package on the image and have it identify where the humans are in the image? (I don't know if that sounds impossible, since I haven't worked much in this area, but any advice would be appreciated.) The photos are from a Raspberry Pi, which takes a photo after a PIR motion sensor detects movement.
You can use OpenCV for this. OpenCV's HOG descriptor may help you here, and OpenCV can be used on the Raspberry Pi as well.
Here is the link for the people detection code, or you can find it in the samples of the OpenCV package.
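For reference, a minimal sketch of OpenCV's built-in HOG people detector (the file names are placeholders):

import cv2

img = cv2.imread("photo.jpg")

# HOG descriptor with the default pre-trained people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Returns bounding rectangles around detected people
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)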

Python Targeting System

I am working on a project where I need to program a Raspberry Pi to grab an image from a webcam, search that image for a box, and identify which box it is by its size ratio. The boxes will be a unique colour relative to the rest of the environment. It would also be good to identify the distance and angle to the box.
Everything I've seen seems to indicate that this should be possible, but after several days of searching I have yet to find anything that really helps me to do this. This project is my first experience using Python, so I'm pretty newbish. Any help even with how to do little portions of this would be greatly appreciated.
Here's my working code so far. It's not much, all it does is grab an image from a webcam :/
import imgproc
from imgproc import *

# Open a 160x120 webcam stream and a matching display window
camera = Camera(160, 120)
viewer = Viewer(160, 120)

# Grab and display frames until the program is interrupted
while True:
    img = camera.grabImage()
    viewer.displayImage(img)
This is not a complete solution, but some good ideas on how to get started :)
First off, there are Python bindings for OpenCV, an open source free computer vision library originally written in C: http://opencv.willowgarage.com/documentation/python/index.html
The first thing you have to do when solving a computer vision problem is pre-processing. In particular, knowing that the box is a different colour helps a LOT - it means we can threshold by colour and create an image that is black where the box is not, and white where the box is, using a technique such as in http://aishack.in/tutorials/thresholding/ .
Then, you'd follow a process similar to the Sudoku grabber/solver described in this blog - you do blob extraction ( http://en.wikipedia.org/wiki/Blob_extraction ) then do a hough transform to get lines, and then you can compare the lines' distances to each other to determine the box's ratio. http://aishack.in/tutorials/sudoku-grabber-with-opencv-plot/
Pretty much just read about people's OpenCV Sudoku solvers until you get the gist of how it's done, because there are a lot of good tutorials and it's a simple illustration of how computer vision projects go: https://www.google.com.au/search?q=sudoku+opencv&aq=f&oq=sudoku+opencv&aqs=chrome.0.57j60l3j0l2.1506&sourceid=chrome&ie=UTF-8
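As an illustration of the colour-thresholding step, a minimal sketch with OpenCV (the HSV bounds are hypothetical and would need tuning to the box's actual colour):

import cv2
import numpy as np

img = cv2.imread("frame.jpg")

# Convert to HSV, where colour thresholds are easier to express
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hypothetical bounds for a red-ish box - tune for your box's colour
lower = np.array([0, 120, 70])
upper = np.array([10, 255, 255])

# White where the pixel falls inside the colour range, black elsewhere
mask = cv2.inRange(hsv, lower, upper)
cv2.imwrite("mask.png", mask)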
You may want to try installing SimpleCV from the github repo. Using SimpleCV you should be able to get the blob's color using the Image.hueDistance command. If you use the findBlobs command to find your boxes each blob should have its aspect ratio as a parameter. We just posted our full PyCon tutorial about SimpleCV here. You can view just the slides here. We've heard that there are some issues installing PyGame (a SimpleCV dependency) on the RaspberryPi. This walk through might address those issues.
