I am trying to build a project in which I manipulate images with several tampering attacks. Since I am new to this, I referred to some research papers and found that to attack an image, one simply takes an object from one image and pastes it onto another. For example, you can see this image:
I want to implement the same thing (adding or cutting out a specific object from an image) in my project, but I am not able to figure out how to implement this with Spyder and OpenCV. I tried to search, but I only found material on frameworks and deep learning, which is out of my league for now. Is there any simpler way to achieve this?
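For illustration, the kind of copy-paste (splicing) operation described above can be sketched in a few lines of OpenCV/NumPy; the file names and patch coordinates below are placeholders, not from any particular paper:

    import cv2

    # File names and coordinates are placeholders; adjust them to your own images.
    src = cv2.imread("donor.jpg")    # image to take the object from
    dst = cv2.imread("target.jpg")   # image to tamper with

    # Rectangle around the object in the source image: (x, y, width, height).
    x, y, w, h = 100, 50, 120, 180
    patch = src[y:y + h, x:x + w]

    # Paste the patch at a chosen location in the target image.
    px, py = 200, 150
    dst[py:py + h, px:px + w] = patch

    cv2.imwrite("tampered.jpg", dst)

For a smoother splice, cv2.seamlessClone can blend the pasted patch into the target instead of overwriting the pixels outright.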
My goal is to blur the picture a bit using a bilinear debayer.
This is to reproduce the dirty look of images from the VHS days.
As a graphics major, I tried to reproduce it with various graphic tools, but did not get a result of the quality I wanted.
I want that subtle feeling of faded haze you get when something is scanned with a scanner.
I decided to emulate a camera sensor.
The process I envisioned is this:
I convert a TIFF, Targa, PNG, or JPG image I made into a Bayer-pattern image. Then I want to restore the original image by debayering it again with a bilinear algorithm.
The reason for the bilinear method is that it degrades the image most gently yet most noticeably.
The link below shows how the image changes depending on the algorithm.
https://www.dpreview.com/forums/post/63514167
I'm not a programmer at all, but I've tried something on my own to get what I want.
https://codegolf.stackexchange.com/questions/86410/reverse-bayer-filter-of-an-image
I succeeded in making a Bayer-pattern image using the code here.
I also tried debayering by running debayer source code downloaded from other places, but it failed because my file's extension was not supported.
Apparently the demosaicing (debayering) step can be done in various ways.
I got the programs darktable and RawTherapee and tried to convert it, but these programs could only recognize raw files.
Even the algorithms provided by both programs were so good that it was hard to get the impression that the image had been degraded.
How do I make what I want?
What can I look for? I really want to make this.
Please let me know which way I should go.
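For illustration, the pipeline envisioned above (mosaic the image, then debayer it with a bilinear algorithm) could be sketched in Python with OpenCV like this, assuming OpenCV's basic Bayer conversion (which uses bilinear interpolation) is acceptable; file names are placeholders:

    import cv2
    import numpy as np

    # Load the image to degrade (any format OpenCV can read; file name is a placeholder).
    img = cv2.imread("input.png")          # BGR image
    h, w = img.shape[:2]

    # Build a Bayer mosaic by keeping only one colour channel per pixel
    # (here: R and G on even rows, G and B on odd rows).
    bayer = np.zeros((h, w), dtype=img.dtype)
    bayer[0::2, 0::2] = img[0::2, 0::2, 2]  # R
    bayer[0::2, 1::2] = img[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = img[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = img[1::2, 1::2, 0]  # B

    # Demosaic with OpenCV's basic Bayer conversion (bilinear interpolation).
    # If the colours come out swapped, try another COLOR_Bayer??2BGR constant;
    # OpenCV's Bayer-pattern naming is easy to get wrong.
    degraded = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)
    cv2.imwrite("degraded.png", degraded)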
I want to overlay 3D objects on a live webcam feed for my project. I have overlaid 2D objects like goggles, a mustache, and a hat. Now I want to overlay 3D objects like a beard, mustache, and hair. I searched some articles and tutorials, but all of them were ambiguous and didn't explain where to start. From what I have learned, I need to create 3D objects using Blender, import them using OpenGL, and then somehow overlay them on the facial landmarks. I want to know what I need to learn to achieve this objective. I have read the previous question on this source1 and it didn't help much. Also, I have read several blogs like this,
this, and the best one here, and various others while surfing. I know I am entering AR/VR territory, but I am ready to learn the things required to get my work done.
There are various libraries/frameworks which don't require you to code anything, like SparkAR, Virtualtryon, ditto, etc., but they don't teach you anything. I want to learn how to do these things by myself. If I create my own 3D object like hair (hair like Goku's in Super Saiyan), a mustache, or eyes in Blender, how can I overlay it in real time on a webcam using OpenCV or any other Python-compatible lib/framework? I mean overlaying 3D hair over my own hair, overlaying 3D eyes over my eyes, and similar. What are the things required to do such a task? What things do I need to learn?
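For illustration, a typical first step is estimating the head pose from facial landmarks so a renderer knows where to place the 3D asset. A rough sketch of that step, assuming dlib's 68-point landmark model (downloaded separately) and a generic, uncalibrated 3D head model:

    import cv2
    import dlib
    import numpy as np

    # Assumes dlib's 68-point landmark model file and a generic 3D head model;
    # neither is calibrated to any particular face.
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    # Generic 3D reference points (arbitrary units) for a few landmarks.
    model_points = np.array([
        (0.0, 0.0, 0.0),           # nose tip           (landmark 30)
        (0.0, -330.0, -65.0),      # chin               (landmark 8)
        (-225.0, 170.0, -135.0),   # left eye corner    (landmark 36)
        (225.0, 170.0, -135.0),    # right eye corner   (landmark 45)
        (-150.0, -150.0, -125.0),  # left mouth corner  (landmark 48)
        (150.0, -150.0, -125.0),   # right mouth corner (landmark 54)
    ], dtype=np.float64)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for rect in detector(gray):
            shape = predictor(gray, rect)
            image_points = np.array(
                [(shape.part(i).x, shape.part(i).y) for i in (30, 8, 36, 45, 48, 54)],
                dtype=np.float64)

            # Rough pinhole camera model based only on the frame size.
            h, w = frame.shape[:2]
            cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
            found, rvec, tvec = cv2.solvePnP(model_points, image_points, cam, np.zeros(4))
            # rvec/tvec describe the head pose; pass them to your OpenGL/pyrender
            # scene to position the Blender-exported mesh over the face.

        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()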
[Figure: the shoe in the red circle is to be detected]
I am trying to create a Python script using cv2 that can recognize the shoe of the baller and determine whether the shoe is beyond, on, or before the white line (refer to the image).
I have no idea what kind of approach to use or which algorithms might be helpful. I need some guidance, please help!
(Image is attached)
I realize this would work better as a comment because it isn't a full answer, but I don't have enough rep yet to leave comments, haha.
You may be interested in OpenCV's Canny Edge detection algorithm:
http://docs.opencv.org/trunk/da/d22/tutorial_py_canny.html
This will allow you to find shapes within your image.
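For example, a minimal sketch (the thresholds are guesses you would tune for your own frames):

    import cv2

    frame = cv2.imread("frame.jpg")  # placeholder for a frame from the video
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # smooth first to reduce noise
    edges = cv2.Canny(blurred, 50, 150)           # low/high thresholds to tune
    cv2.imwrite("edges.jpg", edges)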
Also, you can find similarly colored blobs using SimpleBlobDetector:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
This should make it fairly easy to detect the white line.
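As an alternative sketch for the white line specifically: since it is bright and straight, a simple threshold plus a probabilistic Hough transform can also pick it out (all the values below are guesses to tune):

    import cv2
    import numpy as np

    frame = cv2.imread("frame.jpg")  # placeholder file name
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Keep only very bright pixels, then look for long straight segments.
    _, white = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLinesP(white, 1, np.pi / 180, threshold=100,
                            minLineLength=100, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("line_candidates.jpg", frame)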
In order to detect a more complex object like the shoe, you'll probably have to make something like an object detection cascade file and use a CascadeClassifier to find it:
http://docs.opencv.org/2.4/doc/tutorials/objdetect/cascade_classifier/cascade_classifier.html#cascade-classifier
http://johnallen.github.io/opencv-object-detection-tutorial/
Basically, you take a bunch of pictures to "teach" it what the object looks like, and output that info to a file that a CascadeClassifier can use to detect objects in input images. It may be hard to distinguish between different brands of shoe, though, if you need it to be that specific. Also, you may need to adjust the input images (saturation, brightness, etc.) before trying to detect objects in order to get good results.
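Once you have trained a cascade, using it looks roughly like this (the XML file name is a placeholder for your own trained file, and the detection parameters are typical starting points):

    import cv2

    # "shoe_cascade.xml" is a placeholder for a cascade you train yourself.
    cascade = cv2.CascadeClassifier("shoe_cascade.xml")

    frame = cv2.imread("frame.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # scaleFactor/minNeighbors are common starting values to tune.
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", frame)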
Good day. I have a set of geotagged photos. I want to build a system which approximates the location of a query image based on how similar it is to the geotagged photos. I will be using Python and OpenCV to accomplish this task. However, the problem is that most of the geotagged photos have people in them (I'm only after the background scenery).
I found some face detection algorithms that I can use to detect people in photos. However, what I need is to detect the whole body of each person in an image and keep only the background.
OpenCV has algorithms which can be used to remove the background (I was hoping to reverse the output and keep the background instead). However, these are only applicable to videos (subtracting moving parts from a static background). Can you guys recommend any solution to this problem (where to start / related studies / algorithms)? I appreciate any help. Thanks!
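For illustration, one possible starting point is OpenCV's built-in HOG person detector, which finds whole bodies in still images; the detected boxes could then be masked out so only the background remains (the parameters below are common defaults, and the file name is a placeholder):

    import cv2
    import numpy as np

    # OpenCV's built-in HOG + linear SVM people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("geotagged_photo.jpg")  # placeholder file name
    rects, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)

    # Mask: 255 on background, 0 wherever a person was detected.
    mask = np.full(img.shape[:2], 255, dtype=np.uint8)
    for (x, y, w, h) in rects:
        mask[y:y + h, x:x + w] = 0

    background_only = cv2.bitwise_and(img, img, mask=mask)
    cv2.imwrite("background_only.jpg", background_only)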
I am trying to detect a marker in a webcam video feed and overlay it with a 3D object - pretty much exactly like this: http://www.morethantechnical.com/2009/06/28/augmented-reality-with-nyartoolkit-opencv-opengl/
I know ARToolkit is the best module for this, but I was hoping to just use OpenCV in Python, since I don't know nearly enough C/C++ to be able to use ARToolkit. I am hoping someone can get me on the right track towards detecting the marker and determining its location, orientation, etc., since I have no idea how best to go about this or what functions I should be using.
OpenCV doesn't have marker detection/tracking functionality out of the box. However, it provides all the algorithms needed, so it's fairly easy to implement your own.
The article you are referring to uses OpenCV only for video grabbing. The marker detection is done by NyARToolkit, which is derived from ARToolkit. NyARToolkit has versions for Java, C#, and ActionScript.
ARToolkit is mostly written in plain C without using fancy C++ features. It's probably easier to use than you think. The documentation contains well-explained tutorials, e.g. http://www.hitl.washington.edu/artoolkit/documentation/devstartup.htm
The introductory documentation can help you understand the process of marker detection even if you decide not to use ARToolkit.
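As a rough illustration of what implementing your own marker detector might look like in OpenCV (this only finds square candidates and estimates their pose; it does not decode the marker's interior pattern, and the camera matrix and marker size are placeholders):

    import cv2
    import numpy as np

    frame = cv2.imread("frame.jpg")  # placeholder for a grabbed webcam frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # OpenCV 4.x signature (3.x returns three values).
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    # Placeholder camera intrinsics and a 5 cm square marker.
    h, w = gray.shape
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    s = 0.05
    marker_3d = np.array([[-s / 2,  s / 2, 0], [ s / 2,  s / 2, 0],
                          [ s / 2, -s / 2, 0], [-s / 2, -s / 2, 0]], dtype=np.float64)

    for cnt in contours:
        approx = cv2.approxPolyDP(cnt, 0.03 * cv2.arcLength(cnt, True), True)
        # Keep large convex quadrilaterals as marker candidates.
        if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 1000:
            corners = approx.reshape(4, 2).astype(np.float64)
            found, rvec, tvec = cv2.solvePnP(marker_3d, corners, cam, np.zeros(4))
            # rvec/tvec give the marker's orientation and position, which is what
            # you would hand to OpenGL to draw the 3D object on top.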
I think the most common way to perform marker detection using Python and OpenCV is to use SURF descriptors.
I have found this video and the linked code you can find on this page very useful. Here you can download the code. I don't know how to overlay it with a 3D object, but I'm sure you can do something with pygame or matplotlib.
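A minimal sketch of that descriptor-matching idea (using ORB here rather than SURF, since SURF is patented and only ships in the opencv-contrib package; the file names are placeholders):

    import cv2
    import numpy as np

    marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)  # reference marker image
    frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)    # webcam frame

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(marker, None)
    kp2, des2 = orb.detectAndCompute(frame, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Estimate where the marker sits in the frame from the best matches.
    src = np.float32([kp1[m.queryIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:50]]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # H maps marker coordinates into the frame, giving its position and orientation.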