How to add Gaussian noise to an image? - python

How do I add a certain amount of Gaussian noise to an image in Python? Do I need to somehow convert the image values to double (floating-point) type first?
Also, I am unsure about how to measure the level of noise in an image. Some people specify it in dB (decibels) while others use the variance. How are the two related, and how should I measure the noise level?

You can use the random_noise function in scikit-image. It goes something like this:
skimage.util.random_noise(image, mode='gaussian', seed=None, clip=True, **kwargs)
You can read more about it here:
http://scikit-image.org/docs/stable/api/skimage.util.html#random-noise
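As a minimal sketch using the camera test image that ships with scikit-image (the variance value is arbitrary; for zero-mean Gaussian noise, SNR in dB is 10 * log10(signal power / noise variance), which is how the dB and variance conventions relate):

import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise

image = img_as_float(data.camera())  # float image with values in [0, 1]
noisy = random_noise(image, mode='gaussian', var=0.01)  # zero-mean noise, variance 0.01

# Relate the chosen variance to an SNR in decibels
# (one common convention uses the image variance as the signal power)
snr_db = 10 * np.log10(image.var() / 0.01)
print(f"SNR: {snr_db:.1f} dB")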

I'm assuming you mean applying a Gaussian blur. Pillow (a fork of the Python Imaging Library) supports a lot of image processing methods, including Gaussian blur. The ImageFilter module in particular implements this.
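For instance, a minimal sketch (input.png is a hypothetical file name; a larger radius means a stronger blur):

from PIL import Image, ImageFilter

img = Image.open("input.png")  # hypothetical input file
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
blurred.save("blurred.png")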
As for how to measure the level of noise, that's a somewhat complicated question. The concepts of radius and variance are closely related (this post discusses it to some degree). In practice, when picking the right parameter for image processing, theory is all well and good, but I've found trial and error to be the best approach.
dB usually comes up with Gaussian filters in digital signal processing (DSP), at least among the "Gaussian" things that can act on input signals. A Gaussian blur and a Gaussian filter are the same idea, a convolution applied to an input signal, just discussed in different domains. When talking about signals in DSP, it's more natural to describe a filter's response, or to compare signals generally, in dB. I'm assuming this is not what you're talking about.

Related

How can I find identical/duplicate images in an image bank [duplicate]

I am thinking about creating a database system for images where they are stored with compact signatures and then matched against a "query image" that could be a resized, cropped, brightened, rotated or a flipped version of the stored one. Note that I am not talking about image similarity algorithms but rather strictly about duplicate detection. This would make things a lot simpler. The system wouldn't care if two images have an elephant on them, it would only be important to detect if the two images are in fact the same image.
Histogram comparisons simply won't work for cropped query images. The only viable way to go I see is shape/edge detection. Images would first be somehow discretized, every pixel being converted to an 8-level grayscale for example. The discretized image will contain vast regions in the same colour which would help indicate shapes. These shapes then could be described with coefficients and their relative position could be remembered. Compact signatures would be produced out of that. This process will be carried out over each image being stored and over each query image when a comparison has to be performed. Does that sound like an efficient and realisable algorithm? To illustrate this idea:
removed dead ImageShack link
I know this is an immature research area, I have read Wikipedia on the subject and I would ask you to propose your ideas about such an algorithm.
SURF should do the job.
http://en.wikipedia.org/wiki/SURF
It is fast and robust, invariant to rotation and scaling, and also reasonably robust to blur and contrast/lighting changes (though less strongly so).
There is an example of automatic panorama stitching.
Check the article on SIFT first:
http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
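As a hedged sketch of what this could look like with OpenCV (stored.png and query.png are hypothetical file names; cv2.SIFT_create requires a reasonably recent opencv-python):

import cv2

img1 = cv2.imread("stored.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: a large number of good matches suggests the
# query is a transformed version of the stored image.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(good), "good matches")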
If you want a feature-detection-driven model, you could perhaps take the singular value decomposition (SVD) of the images (you'd probably have to do an SVD per color channel) and use the first few columns of the U and V matrices along with the corresponding singular values to judge how similar the images are.
Very similar to the SVD method is one called principal component analysis (PCA), which I think is easier to use for comparing images. The PCA method is pretty close to taking the SVD and absorbing the singular values into the U and V matrices. If you follow the PCA path, you might also want to look into correspondence analysis. By the way, PCA was a common method for extracting features in the Netflix Prize.
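As a rough sketch of the singular-value idea (this keeps only the singular values themselves, a simplification of the full U/V comparison suggested above; the function names are made up):

import numpy as np

def svd_signature(gray, k=10):
    # Top-k singular values of a grayscale image as a compact signature,
    # normalized so that overall image scale does not dominate.
    s = np.linalg.svd(gray.astype(np.float64), compute_uv=False)
    return s[:k] / np.linalg.norm(s[:k])

def svd_similarity(img_a, img_b, k=10):
    # Dot product of normalized signatures; values near 1 mean similar.
    return float(np.dot(svd_signature(img_a, k), svd_signature(img_b, k)))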
How about converting this Python code back to C?
Check out tineye.com. They have a good system that's always improving. I'm sure you can find research papers from them on the subject.
The Wikipedia article you might be referring to is the one on feature detection.
If you are running on an Intel/AMD processor, you could use the Intel Integrated Performance Primitives to get access to a library of image processing functions. Beyond that, there is the OpenCV project, another library of image processing functions. The advantage of using a library is that you can try various already-implemented algorithms to see what works for your situation.

How to better preprocess images for a better deep learning result?

We are experimenting with applying a convolutional neural network to classify good surfaces and surfaces with defects.
The good and bad images are mostly like the following:
Good ones:
Bad ones:
The images are relatively big (height: 800 pixels, width: 500 pixels).
The defect is very local and small relative to the image.
The background is very noisy.
The deep learning result (6 x conv+pooling -> flatten -> dense64 -> dense32) is very bad
(perhaps due to the limited number of bad samples and the very small defect pattern).
There are other defect patterns like very subtle scratches, residuals and stains, etc., which is one of the main reasons that we want to use deep learning instead of specific feature engineering.
We can and are willing to accumulate more images of defects.
So the questions are:
Is deep learning even an appropriate tool for defect detection like this in practice?
If yes, how can we adapt or pre-process the images into formats that deep learning models can really work with? (Could we apply some known filters to make the background much less noisy?)
If no, what other practical techniques can be used instead of deep models?
Will things like template matching or anything else actually be a fit for this type of problem?
Update:
Very good idea to come up with an explicit circular stripes checker.
It might be directly used to check where the pattern is disturbed or be used as a pre-processing step for deep learning.
Update:
A more subtle defect pattern: a 'scratch'.
There is a scratch starting from the bottom of the fan area going up and a little to the right.
Is deep learning even an appropriate tool for defect detection like this in practice?
Deep learning certainly is a possibility that promises to be universal. In general, though, it should be the last resort rather than the first approach. Downsides include:
It is difficult to include prior knowledge.
You therefore need an extreme amount of data to train the classifier for the general case.
If you succeed, the model is opaque. It might depend on subtle properties that cause it to fail if the manufacturing process is changed in the slightest way, and there is no easy way to fix it.
If yes, how can we adapt or pre-process the images into formats that deep learning models can really work with? (Could we apply some known filters to make the background much less noisy?)
Independent of the classifier you eventually decide to use, preprocessing should be optimal.
Illumination: The illumination is uneven. I'd suggest defining a region of interest in which the illumination is bright enough to see something, calculating the average intensity over many images, and using this to normalize the brightness. The result would be an image cropped to the region of interest, with homogeneous illumination.
Circular stripes: In the images you show, the stripes are circular, so their orientation depends on the position in the image. I would suggest a transformation that maps the region of interest (a fraction of a circle) into a trapezoid, where each stripe is straight and its length is retained (see the sketch below).
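A hedged sketch of both steps with OpenCV (center, max_radius, and target_mean are placeholders that would have to be measured for the actual setup):

import cv2
import numpy as np

def preprocess(img, center, max_radius, target_mean):
    # Brightness normalization against an average intensity measured
    # over many images (a crude per-image version is shown here).
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray *= target_mean / max(gray.mean(), 1e-6)
    # warpPolar straightens concentric arcs: constant-radius arcs become
    # straight lines in the output; transpose to make them horizontal.
    polar = cv2.warpPolar(gray, (512, 512), center, max_radius,
                          cv2.WARP_POLAR_LINEAR)
    return polar.T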
If no, what other practical techniques can be used instead of deep models? Will things like template matching or anything else actually be a fit for this type of problem?
Rather than identifying defects, you could try identifying the intact structure, which has relatively constant properties. (This would be the circular stripes checker that I suggested in the comment.) Here, one obvious thing to test would be a 2D Fourier transformation at each pixel within an image preprocessed as described above. If the stripes are intact, you should see that the frequency of intensity change is much lower in the horizontal than in the vertical direction. I would just plot these two quantities for many "good" and "bad" pixels and check whether that might already allow some classification.
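A minimal sketch of that test on a small patch (after the unwrapping above, intact stripes run horizontally, so vertical-frequency energy should dominate):

import numpy as np

def stripe_score(patch):
    # Ratio of vertical to horizontal frequency energy in the 2D spectrum.
    # High values: intensity varies mostly vertically, i.e. intact
    # horizontal stripes; low values may indicate a disturbed pattern.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    vertical = spectrum[:, cx].sum()    # energy of purely vertical variation
    horizontal = spectrum[cy, :].sum()  # energy of purely horizontal variation
    return vertical / max(horizontal, 1e-9)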
If you can preselect possible defects with that method, you could then crop out a small image and subject it to deep learning or whatever other method you want to use.

How can noisy components be identified using the ICA method with file.edf in python?

I am trying to remove muscle artifacts from an EEG signal corresponding to an epileptic patient. For that, I used the fastICA method with python. The figure below represents the independent components:
Unfortunately, I could not distinguish the components corresponding to the artifacts. Is there a way to help me know which components to remove?
First of all, you should know what an EEG signal looks like. In the attached picture, I think ICA21, ICA7, and ICA2 are completely noisy components.
Not sure if this is possible given the data that you have, but one possibility is to frame it as a supervised problem. Say you have a few epileptic patients' EEGs and a few from non-epileptic patients. You can apply an ICA decomposition to the whole dataset, and then use each component by itself as a feature vector (maybe discretizing it) to predict the class (i.e., epileptic vs. non-epileptic).
The noise components should have no predictive value, so you might be able to find that a cluster of components has a (statistically) significantly higher predictive value than another. This will require manually looking at the accuracy value of each component and making a subjective decision, but maybe it can help as an exploratory analysis.
Of course, this only works if you have data from multiple patients.
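A hedged sketch of the idea with scikit-learn (the data here is random, purely to illustrate the mechanics; in reality X would be EEG epochs pooled across patients and y the patient class):

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 21))    # placeholder for (epochs, channels)
y = rng.integers(0, 2, size=1000)  # placeholder class labels

S = FastICA(n_components=21, random_state=0).fit_transform(X)

# Score each component on its own; components with near-chance
# accuracy are candidates for noise.
for i in range(S.shape[1]):
    acc = cross_val_score(LogisticRegression(), S[:, [i]], y, cv=5).mean()
    print(f"component {i}: accuracy {acc:.2f}")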

Time-varying band-pass filter in Python

I am trying to solve a problem very similar to the one discussed in this post
I have a broadband signal, which contains a component with time-varying frequency. I need to monitor the phase of this component over time. I am able to track the frequency shifts by (a somewhat brute force method of) peak tracking in the spectrogram. I need to "clean up" the signal around this time varying peak to extract the Hilbert phase (or, alternatively, I need a method of tracking the phase that does not involve the Hilbert transform).
To summarize that previous post: varying the coefficients of a FIR/IIR filter in time causes bad things to happen (it does not just shift the passband, it also completely confuses the filter state in ways that cause surprising transients). However, there probably is some way to adjust filter coefficients in time (probably by jointly modifying the filter coefficients and the filter state in some intelligent way). This is beyond my expertise, but I'd be open to any solutions.
There were two classes of solutions that seem plausible. One is to use a resonator filter (basically a damped harmonic oscillator driven by the signal) with a time-varying frequency. This model is simple enough to avoid surprising filter transients. I will try this, but resonators have very poor attenuation in the stop band (if they can even be said to have a stop band?). This makes me nervous, as I'm not 100% sure how the resonator filters will behave.
The other suggestion was to use a filter bank and smoothly interpolate between various band-pass filtered signals according to the frequency. This approach seems appealing, but I suspect it has some hidden caveats. I imagine that linearly mixing two band-pass filtered signals might not always do what you would expect, and might cause weird things? But, this is not my area of expertise, so if mixing over a filter bank is considered a safe solution (one that has been analyzed and published before), I would use it.
Another potential class of solutions occurs to me, which is to just take the phase from the frequency peak in a sliding short-time Fourier transform (could be windowed, multitaper, etc). If anyone knows any prior literature on this I'd be very interested. Related, would be to take the phase at the frequency power peak from a sliding complex Morlet wavelet transform over the band of interest.
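For what it's worth, a minimal sketch of this STFT idea on a toy chirp (the real signal would replace x; window lengths here are arbitrary):

import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
# Toy chirp sweeping from 50 Hz to 80 Hz, buried in noise.
true_phase = 2 * np.pi * (50 * t + 1.5 * t ** 2)
x = np.cos(true_phase) + 0.5 * np.random.randn(t.size)

f, frames, Z = stft(x, fs=fs, nperseg=512, noverlap=448)
peak = np.abs(Z).argmax(axis=0)                   # tracked frequency bin per frame
phase = np.angle(Z[peak, np.arange(Z.shape[1])])  # phase at the tracked peak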
So, I guess, basically I have three classes of solutions in mind.
1. Resonator filters with time-varying frequency.
2. Using a filter bank, possibly with mixing?
3. Pulling phase from an STFT or CWT (these can be considered a subset of the filter bank approach).
My suspicion is that in (2, 3) surprising things will happen to the phase from time to time, and that in (1) we may not be able to reject as much noise as we'd like. It's not clear to me that this problem even has a perfect solution (uncertainty principle in time-frequency resolution?).
Anyway, if anyone has solved this before, and... even better, if anyone knows any papers that sound directly applicable here, I would be grateful.
Not sure if this will help, but googling "monitor phase of time varying component" resulted in this: Link
Hope that helps.

Transformation matrices

I've been reviewing some material on affine and projective (or is it perspective?) transformations. I've reviewed the contents of PIL and Wikipedia. (http://en.wikipedia.org/wiki/Transformation_matrix)
Unfortunately, I can't seem to find details that specify how to form the matrices for an affine or projective transform on an image. Wikipedia's article seems to deviate a bit from other resources I've discovered.
Are there any resources on the web that briefly describe not only those operations, but, how to implement them?
Keep in mind, I'd like to be able to understand what matlab's maketform function is doing for 'affine' and 'projective' transformations.
Thanks in advance.
A good resource on affine transforms and creating the matrix is here: http://www.coranac.com/tonc/text/affine.htm
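As a minimal NumPy sketch of what such a matrix looks like in homogeneous coordinates (the numbers are arbitrary; a projective/perspective transform would additionally have a non-trivial bottom row in place of [0, 0, 1]):

import numpy as np

# Affine: rotate by theta and scale by s, then translate by (tx, ty).
theta, s, tx, ty = np.deg2rad(30), 1.5, 10.0, 5.0
A = np.array([[s * np.cos(theta), -s * np.sin(theta), tx],
              [s * np.sin(theta),  s * np.cos(theta), ty],
              [0.0,                0.0,               1.0]])

p = np.array([2.0, 3.0, 1.0])  # point (2, 3) in homogeneous coordinates
print(A @ p)                   # transformed point (x', y', 1)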
