Here is the image I need to detect: http://s13.postimg.org/wt8qxoco3/image.png
Here is the base64 representation: http://pastebin.com/raw.php?i=TZQUieWe
I'm asking for help because this is a complex problem that I'm not equipped to solve; it would probably take me a week to do it by myself.
Some pseudo-code that I thought about:
1) Take screenshot of the app and store it as image object.
2) Convert the base64 representation of my image to an image object.
3) Use some sort of algorithm/function to compare both image objects.
By on screen, I mean in an app. I have the app's window name and the PID.
To be 100% clear, I need to essentially detect if image1 is inside image2. image1 is the image I gave in the OP. image2 is a screenshot of a window.
If you break this down into pieces, they're all pretty simple.
First, you need a screenshot of the app's window as a 2D array of pixels. There are a variety of platform-specific ways to do this, but you didn't mention what platform you're on, so… let's just grab the whole screen, using PIL:
from PIL import ImageGrab

screenshot = ImageGrab.grab()  # grabs the full screen as a PIL Image
haystack = screenshot.load()   # pixel-access object
Now, you need to convert your base64 into an image. Taking a quick look at it, it's clearly just an encoded PNG file. So:
import base64
import io
from PIL import Image

decoded = base64.b64decode(data)  # data holds the base64 text from the pastebin
f = io.BytesIO(decoded)
image = Image.open(f)
needle = image.load()
Now you've got a 2D array of pixels, and you want to see if it exists in another 2D array. There are faster ways to do this (using numpy is probably best), but there's also a dumb brute-force way that's a lot simpler to understand: iterate over the rows of haystack; for each one, iterate over the columns and look for a run of pixels that matches the first row of needle. If you find one, keep going through the rest of needle's rows until you either finish all of needle, in which case you've found a match, or hit a mismatch, in which case you continue and just start again at the next position.
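Here's a minimal sketch of that search, taking the two PIL Image objects from above (it returns the top-left offset of the match rather than just True, and assumes both images have already been converted to the same mode, e.g. RGB):

def find_subimage(haystack_img, needle_img):
    hw, hh = haystack_img.size
    nw, nh = needle_img.size
    haystack, needle = haystack_img.load(), needle_img.load()
    for y in range(hh - nh + 1):
        for x in range(hw - nw + 1):
            # Compare every needle pixel to haystack at offset (x, y)
            if all(haystack[x + i, y + j] == needle[i, j]
                   for j in range(nh) for i in range(nw)):
                return x, y  # top-left corner of the match
    return None  # not found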
This is probably the best place to start:
http://effbot.org/imagingbook/image.htm
If you don't have access to the image's metadata (file name, type, etc.), what you're trying to do is very difficult, but your pseudocode sounds on point. Essentially, you'd have to build an algorithmic model of a photo based on its shapes, lines, size, colors, etc., then match that model against models already built and indexed in some database. Hope that helps.
It looks like https://python-pillow.org/ (Pillow) is a more actively maintained fork of PIL.
Related
I'm very new to image processing in Python (and not massively adept at Python in general), so forgive me for how stupid this may sound. I'm working with an AI for object detection, and I need to submit 1000x1000 pixel images to it that have been divided up from larger images of varying lengths and widths (not necessarily evenly divisible, but I have a way of padding out images smaller than 1000x1000). For this to work, I need 200 pixels of overlap on each segment, or the AI may miss objects.
I've tried a host of methods. I got the images to divide up using the methods suggested in "Creating image tiles (m*n) of original image using Python and Numpy" and "how can I split a large image into small pieces in python" (plus a few others that effectively apply the same techniques in different words). I've also been able to make a grid and get the tile names from it, using "How to determine coordinate of grid elements of an image"; however, I have not been able to get the overlap to work with this, as it would then just tile the image normally.
Basically, I've found one way to cut the images up that works, and one way to get the tile coordinates, but I'm utterly failing at putting the two together. Does anyone have any advice on what to do here?
So far I haven't found a direct approach to my end goal online, and I've tried mucking around with different scripts (like the ones listed above), but I feel like I'm barking up totally the wrong tree.
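For what it's worth, here's a rough sketch of how the two pieces could be combined: step the tile origin by tile_size - overlap, pad the edge tiles, and record each tile's origin as you go. The names tile_size and overlap are mine, not from the linked answers:

import numpy as np

def tile_with_overlap(image, tile_size=1000, overlap=200):
    step = tile_size - overlap  # 800 px between tile origins
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            tile = image[y:y + tile_size, x:x + tile_size]
            # Pad tiles at the right/bottom edges out to the full tile_size
            pad_h, pad_w = tile_size - tile.shape[0], tile_size - tile.shape[1]
            if pad_h or pad_w:
                pad = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (image.ndim - 2)
                tile = np.pad(tile, pad, mode='constant')
            tiles.append(((x, y), tile))  # origin in the source image, then the pixels
    return tiles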
I'm trying to remove the differences between two frames and keep the non-changing graphics. I would probably repeat the same process with more frames to get more accurate results. My idea is to simplify the frames by removing things that won't be needed, to simplify the rest of the processing I'll do afterwards.
The frames come from the same video, so there's no need to deal with different sizes, orientations, etc. If the same graphic is in another frame but with a different orientation or scale, I would like to remove it as well. For example:
Image 1
Image 2
Result (more or less; I suppose it will be uglier, but containing similar information)
One problem with this idea is that the source video, even though it is computer-generated graphics, is compressed, so it's not that easy to tell whether a change in the tonality of a pixel is actually a change or not.
Ideally I'm not looking at the pixel level, and given the differences in saturation introduced by the compression, that's probably not possible anyway. I'm looking for unchanged "objects" in the image. I want to extract the information layer shown on top of what's happening behind it.
Over the last couple of days I have tried to achieve this in a Python script using OpenCV, with all kinds of combinations of absdiff, subtract, threshold, equalizeHist, and Canny, but so far I haven't found the right implementation and would appreciate any guidance. How would you achieve it?
This will be extremely hard. You would need to employ proper CV, and if you're not an expert in that field, you'll have a really hard time.
How about this: forget about tooling and libs. You have two images, i.e. two equally sized sequences of RGB pixels, image A and image B, plus an output image R. Allocate R with the same size as A and B.
Run a single loop over every pixel: read pixel a from A and pixel b from B. Each is a 3-element (RGB) vector. Find the distance between the two vectors, e.g. the magnitude of (b - a); if it is less than some tolerance, write either a or b to the same offset in R. If not, write some default (background) color to R.
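A minimal numpy version of that per-pixel rule, vectorized over the whole frame (the tolerance of 20 is just a placeholder to tune):

import numpy as np

def keep_unchanged(a, b, tolerance=20.0, background=(0, 0, 0)):
    # a, b: equally sized HxWx3 uint8 arrays
    diff = a.astype(np.float32) - b.astype(np.float32)
    dist = np.linalg.norm(diff, axis=-1)          # magnitude of (b - a) per pixel
    return np.where(dist[..., None] < tolerance,  # broadcast the mask over RGB
                    a,
                    np.array(background, dtype=a.dtype))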
You can most likely do this in some hardware-accelerated way using OpenCV or another library, but it's up to you to find a tool that does what you want.
Given two images, one a cropped (but not scaled) portion of the other, how can I find the crop parameters (i.e. the x and y offsets and the width/height)? The idea is to crop one image (a screenshot) by hand, and then crop a lot more at the same points.
Ideally via ImageMagick, but I am happy with any pseudo-code solution, or with Perl, Python, or JavaScript (in order of preference).
I have thought of a brute-force approach (find the first pixel with the same color, check the next, keep going until they differ, then move on). Before I go down this barbarous (and probably slow) route, I'd like to check for better ones.
Template matching can be used to locate a smaller image within a larger one.
The following resource might be helpful:
https://docs.opencv.org/4.5.2/d4/dc6/tutorial_py_template_matching.html
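A minimal OpenCV sketch along the lines of that tutorial (the file names are placeholders, and the 0.9 threshold is a guess to tune):

import cv2

haystack = cv2.imread('screenshot.png', cv2.IMREAD_GRAYSCALE)
needle = cv2.imread('crop.png', cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(haystack, needle, cv2.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
if max_val > 0.9:
    x, y = max_loc          # top-left corner of the best match
    h, w = needle.shape
    print('crop offset:', (x, y), 'size:', (w, h))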
I am trying to identify the state of a valve (on or off). My approach is to take two images, one of each state, and compare the current image against both to see which one it matches.
I have tried comparing new_image - on_image and new_image - off_image, then counting the differing pixels. It works, but I feel like in some cases it might not, and there must be a better way to do a simple classification like this.
Any reference or ideas?
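For reference, the subtract-and-count approach described above might look roughly like this in OpenCV (file names and the threshold of 25 are placeholders):

import cv2
import numpy as np

new = cv2.imread('current.png', cv2.IMREAD_GRAYSCALE)
on = cv2.imread('valve_on.png', cv2.IMREAD_GRAYSCALE)
off = cv2.imread('valve_off.png', cv2.IMREAD_GRAYSCALE)

# Count pixels that differ noticeably from each reference image
on_diff = np.count_nonzero(cv2.absdiff(new, on) > 25)
off_diff = np.count_nonzero(cv2.absdiff(new, off) > 25)
state = 'on' if on_diff < off_diff else 'off'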
Subtracting pixels might not be very robust if your camera position changes slightly. If you don't shy away from using OpenCV, there is an interesting recipe for finding a predefined object in a picture:
Feature Matching + Homography to find Objects
You could cut the lever out of your image and search for it in every new image. Depending on the coordinates, and especially the rotation, you can determine the status of the valve. This might even work in crazy cases where someone has half-opened (or, for pessimists, half-closed) the valve, or where the lever is partially covered.
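A rough sketch of that recipe, using ORB features instead of the tutorial's SIFT (lever.png is a hypothetical cut-out of the lever):

import cv2
import numpy as np

lever = cv2.imread('lever.png', cv2.IMREAD_GRAYSCALE)
scene = cv2.imread('new_photo.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(lever, None)
kp2, des2 = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches[:30]]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches[:30]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
# The lever's rotation can be estimated from H, e.g. atan2(H[1, 0], H[0, 0]).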
There is this project I'm currently working on which requires me to watermark every uploaded image. I have tried a series of examples online, but they are not giving me the result I really want.
For example:
I have an image A with image B watermarked onto it; the two images have the same dimensions. I applied an opacity of 0.5 to image B before placing it on image A.
Now, I would really appreciate it if anyone could help with a boolean function to check whether image A has already been watermarked with image B before watermarking it.
Thanks.
This depends on several factors that you'll need to provide more information for.
For instance, how complex are these images? Is there a lot of noise? Are the uploaded images similar in any way, or are they heterogeneous? Are the watermarks always the same, or do they differ?
As a general principle for extracting objects from images, you should look into processes such as color deconvolution, thresholding, and blob extraction.
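As a tiny illustration of the thresholding and blob-extraction direction (the threshold of 128 is arbitrary):

import cv2

img = cv2.imread('upload.png', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# stats[i] holds (x, y, width, height, area) for blob i; label 0 is the background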
In short, some sample images would go a long way...
Yeah, I finally found a dubious way to solve the problem: hiding a specific text in the alpha channel of the image after watermarking it, using steganography.
So on every upload, I open the image, read the low-order bits of the image's alpha channel, and compare the result to the text. If it matches, the image has definitely been watermarked.
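A rough illustration of that check, assuming the text was embedded one bit per pixel in the low-order bits of the alpha channel (the exact embedding scheme is my guess at what's described above, and MARK is a hypothetical hidden text):

from PIL import Image

MARK = 'watermarked'  # hypothetical hidden text

def has_mark(path):
    img = Image.open(path).convert('RGBA')
    alpha = list(img.getdata(3))               # band 3 is the alpha channel
    bits = [a & 1 for a in alpha[:len(MARK) * 8]]
    chars = [chr(int(''.join(map(str, bits[i:i + 8])), 2))
             for i in range(0, len(bits), 8)]
    return ''.join(chars) == MARK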