I have been asked to publish a complete book online, much the way Google Books does, i.e. it's viewable and printable but not downloadable.
Is the process basically "high-quality scanning"? Are there any open-source solutions for mass-generating watermarks on those high-quality images? Suppose you have an original image; when the user views it online, I re-create the image and add a watermark and some other text on top of it on the fly. Does such a library exist, in Python of course? :)
Any tips? If you have done this before, please share.
Thanks
Unfortunately, Google uses a patented technique for scanning its books, so you will probably have to stick to traditional methods.
Google created some seriously nifty infrared camera technology that detects the three-dimensional shape and angle of book pages when the book is placed in the scanner. This information is transmitted to the OCR software, which adjusts for the distortions and allows the OCR software to read text more accurately. No more broken bindings, no more inefficient glass plates.
Basically you will need to scan the book and run the scans through an OCR application (Tesseract is good), then I would generate a PDF/image from the recognised text, and finally add the watermark on top. The Python Imaging Library would seem to be the best tool for that last step.
Don't know much about Google Books, but the Python Imaging Library can do watermarking (there's an ASPN recipe for that).
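As a rough sketch of the on-the-fly approach, using Pillow (the maintained fork of PIL); the file names and watermark text are placeholders:

from PIL import Image, ImageDraw

def watermark(path, text):
    """Return a copy of the image at `path` with semi-transparent text on top."""
    base = Image.open(path).convert("RGBA")
    # transparent overlay the same size as the page scan
    overlay = Image.new("RGBA", base.size, (255, 255, 255, 0))
    draw = ImageDraw.Draw(overlay)
    # grey, half-transparent text; position and font are left at defaults
    draw.text((base.size[0] // 4, base.size[1] // 2), text,
              fill=(128, 128, 128, 128))
    return Image.alpha_composite(base, overlay).convert("RGB")

# e.g. render per request and serve this instead of the original scan
watermark("page_001.png", "Preview only").save("page_001_wm.png")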
See the Slashdot question on reproducing Google's photo + laser grid technique.
Related
My goal is to find a way to download (with Python) satellite images given coordinates describing a rectangle. I've never really found a precise and free solution (no business here, just school stuff).
At first I tried the Google Maps API, which worked perfectly but turned out to require payment after a certain amount of usage. I then considered using OpenStreetMap, but again I had a lot of trouble finding information on how to obtain such images.
Can you please help me with a simple solution?
OpenStreetMap only provides map data; it has no aerial imagery and thus no satellite imagery API.
If you are looking for free aerial imagery then take a look at OpenAerialMap.
I am building an app to monitor the progress of deforestation. Over time I would like to take a satellite image of a location and see what percentage of that image contains forest.
I have attempted Google's Vision API, but it does not have this functionality.
Is this something that can be done in OpenCV, or must I do this from scratch with semantic segmentation or something similar?
From what I can see in the documentation, there doesn't seem to be any pattern/texture recognition in the API. My belief is that you could try dominant-color recognition: if your image data has enough differentiable colors, I think you should be able to get an acceptable analysis (see the sketch below).
PS: Having some experience with satellite imagery processing, I can add that the usual way to find out the status of land for plants, forest, and general crop development and health is through color analysis.
Nonetheless, satellite/drone images are mostly multispectral, and several near-infrared bands are extensively used, since biomass reflects very differently with season/health/development status across the combination of visible and infrared electromagnetic bands.
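A minimal sketch of that dominant-color idea with OpenCV, assuming forest reads as green in plain RGB imagery; the file name and HSV range below are placeholders you would need to tune:

import cv2
import numpy as np

img = cv2.imread("satellite_tile.png")          # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# rough green range in OpenCV's HSV space (H: 0-179); tune for your imagery
lower = np.array([35, 40, 40])
upper = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# fraction of pixels falling in the "forest green" range
forest_pct = 100.0 * cv2.countNonZero(mask) / mask.size
print("approx. forest cover: %.1f%%" % forest_pct)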
Have you tried to look at the satellite image recognition Kaggle competitions? There are a lot of discussions as well as available scripts for tasks similar to yours:
Links: https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection
Example script: https://www.kaggle.com/arpandhatt/satellite-image-classification
I am trying to make a project in which I manipulate an image with several tampering attacks. Since I am new to this, I referred to some research papers and found that, to attack an image, one simply takes an object from one image and pastes it onto another. [example image of such a splice omitted]
I want to implement the same thing (adding or cutting a specific object out of an image) in my project, but I am not able to figure out how to do this with Spyder and OpenCV. My searches only turned up frameworks and deep learning, which are out of my league for now. Is there any simpler way to achieve this?
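In the simplest case such a splice is just array slicing with OpenCV/NumPy; a minimal sketch, where the file names and coordinates are placeholders (and the patch is assumed to fit inside the target image):

import cv2

src = cv2.imread("donor.jpg")     # image containing the object to copy
dst = cv2.imread("target.jpg")    # image to tamper with

# bounding box of the object in the source image (hypothetical values)
x, y, w, h = 120, 80, 60, 100
patch = src[y:y + h, x:x + w]

# paste the patch at a chosen position in the target image
px, py = 200, 150
dst[py:py + h, px:px + w] = patch

cv2.imwrite("tampered.jpg", dst)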
Good day. I have a set of geotagged photos. I want to build a system which approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect the whole body of each person in an image so I can remove them and keep only the background.
OpenCV has algorithms which can be used for removing the background (I was hoping to invert the output and keep the background instead). However, those are only applicable to videos (subtracting moving parts from a static scene). Can you guys recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!
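One possible starting point is OpenCV's built-in HOG pedestrian detector; a minimal sketch that blanks out any detected people (the file names are placeholders, and detection quality on casual photos will vary):

import cv2

img = cv2.imread("geotagged_photo.jpg")   # hypothetical input

# HOG descriptor with OpenCV's pre-trained pedestrian detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# detect people; returns bounding boxes as (x, y, w, h)
rects, weights = hog.detectMultiScale(img, winStride=(8, 8))

# mask out each detected person, keeping only the background
for (x, y, w, h) in rects:
    img[y:y + h, x:x + w] = 0

cv2.imwrite("background_only.jpg", img)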
I am working on a project where I need to program a Raspberry Pi to grab an image from a webcam, search that image for a box, and identify which box it is by its size ratio. The boxes will be a unique color relative to the rest of the environment. It would also be good to identify the distance and angle to the box.
Everything I've seen seems to indicate that this should be possible, but after several days of searching I have yet to find anything that really helps me do this. This project is my first experience using Python, so I'm pretty newbish. Any help, even with little portions of this, would be greatly appreciated.
Here's my working code so far. It's not much; all it does is grab an image from a webcam :/
from imgproc import *

# open the webcam and a viewer window at 160x120
camera = Camera(160, 120)
viewer = Viewer(160, 120)

# grab and display frames until the program is interrupted
while True:
    img = camera.grabImage()
    viewer.displayImage(img)
This is not a complete solution, but here are some good ideas on how to get started :)
First off, there are Python bindings for OpenCV, an open source free computer vision library originally written in C: http://opencv.willowgarage.com/documentation/python/index.html
The first thing you have to do when solving a computer vision problem is pre-process. In particular, knowing that the box is a different colour helps a LOT: it means we can threshold by colour and create an image that is black where the box is not and white where the box is, using a technique such as the one in http://aishack.in/tutorials/thresholding/ .
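A minimal sketch of that colour threshold using OpenCV's Python bindings (the file name and HSV range are placeholders; the range has to be tuned to the box's actual colour):

import cv2
import numpy as np

frame = cv2.imread("webcam_frame.png")        # hypothetical captured frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# assumed hue range for the box's colour; tune against real frames
lower = np.array([100, 120, 70])
upper = np.array([130, 255, 255])

# white where the box colour is present, black everywhere else
mask = cv2.inRange(hsv, lower, upper)
cv2.imwrite("box_mask.png", mask)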
Then, you'd follow a process similar to the Sudoku grabber/solver described in this blog: you do blob extraction ( http://en.wikipedia.org/wiki/Blob_extraction ), then do a Hough transform to get lines, and then you can compare the lines' distances to each other to determine the box's ratio. http://aishack.in/tutorials/sudoku-grabber-with-opencv-plot/
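As a shortcut to the full Hough-transform route, here is a sketch that gets the side ratio from the binary mask above using contour extraction and a rotated bounding rectangle (cv2.minAreaRect) instead, a common simplification when the blob is a single solid box:

import cv2

mask = cv2.imread("box_mask.png", cv2.IMREAD_GRAYSCALE)

# blob extraction: connected white regions in the mask
# (note: OpenCV 3.x returns three values here instead of two)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if contours:
    # assume the largest blob is the box
    blob = max(contours, key=cv2.contourArea)
    # rotated bounding rectangle: centre, (width, height), angle
    (cx, cy), (w, h), angle = cv2.minAreaRect(blob)
    if min(w, h) > 0:
        print("box side ratio %.2f, angle %.1f deg" % (max(w, h) / min(w, h), angle))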
Pretty much just read about people's OpenCV Sudoku solvers until you get the gist of how it's done, because there are a lot of good tutorials and it's a simple illustration of how computer vision projects go: https://www.google.com.au/search?q=sudoku+opencv
You may want to try installing SimpleCV from the GitHub repo. Using SimpleCV you should be able to get the blob's color using the Image.hueDistance command. If you use the findBlobs command to find your boxes, each blob should have its aspect ratio as a parameter. We just posted our full PyCon tutorial about SimpleCV here. You can view just the slides here. We've heard that there are some issues installing PyGame (a SimpleCV dependency) on the Raspberry Pi. This walkthrough might address those issues.
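If you go the SimpleCV route, a rough sketch of the blob approach described above (method names are from SimpleCV 1.x; the color and threshold are placeholders):

from SimpleCV import Camera, Color

cam = Camera()
img = cam.getImage()

# hueDistance: pixels close to the target hue come out dark
dist = img.hueDistance(Color.BLUE)       # assumed box color

# findBlobs treats pixels below threshval as foreground, so the
# near-hue (dark) regions become blobs
blobs = dist.findBlobs(threshval=50)
if blobs:
    box = blobs[-1]                      # blobs are sorted by size, largest last
    ratio = box.width() / float(box.height())
    print("largest blob side ratio: %.2f" % ratio)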