Here's my input image:
I am plotting a histogram of this image using the following code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('red.jpg')
color = ('b','g','r')
for i,col in enumerate(color):
    histr = cv2.calcHist([img],[i],None,[256],[0,256])
    plt.plot(histr,color = col)
    plt.xlim([0,256])
plt.show()
Here is the plotted histogram output: on the left-hand side is the original histogram and on the right-hand side is the zoomed version:
My starting point is 255 and ending point is zero.
All my important data lies in the range 235 to 255, as at 235 the line becomes flat (please see the right-hand side of the histogram).
I want to write a Python/OpenCV script that finds where the red line of the histogram becomes flat, and once that value is found (the point after which the line shows minimum deviation), deletes all the remaining pixels from the image. In the case above, that means deleting pixels with values 0 to 235. How can this be achieved?
A histogram is basically an array of bins.
For the OpenCV histogram bins you create, you can check the number of values and the mean value in each bin and compare them with the previous bin (more like a sliding window). If you find the difference to be greater than a threshold, consider those to be your chosen bins (pixel values).
This is a technique used to identify peaks in a 1D array.
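A minimal sketch of that idea applied to the question above (the difference threshold and the output filename are assumptions to tune for your data):
import cv2
import numpy as np

img = cv2.imread('red.jpg')
hist = cv2.calcHist([img], [2], None, [256], [0, 256]).ravel()  # red channel

# Bins whose count barely changes compared to their neighbour belong to the
# "straight" part of the curve; keep only the range where the curve still moves.
diff_threshold = 5  # assumed value, tune for your data
changing = np.where(np.abs(np.diff(hist)) > diff_threshold)[0]
cutoff = changing.min() if changing.size else 0  # ~235 for the example image

# "Delete" (zero out) every pixel whose red value lies below the cutoff.
result = img.copy()
result[img[:, :, 2] < cutoff] = 0
cv2.imwrite('red_trimmed.jpg', result)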
Related
I'm trying to find these two horizontal lines with the HoughLines transform. As you can see, the picture is very noisy! Currently my workflow looks like this:
crop the image
blur it
reduce the noise (for that, I invert the image, and then subtract the blurred image from the inverted one)
open it and dilate it with a "horizontal kernel" (kernel_1 = np.ones((10,1), np.uint8))
threshold
HoughLines
The results are not as good as expected... Is there a better strategy, knowing that I will always search for horizontal lines (hence, abs(theta) will always be close to 0 or pi)?
The issue is the noise and the faint signal. You can subdue the noise with averaging/integration while maintaining the signal, because it is replicated along a dimension (the signal is a line).
Your approach of using a very wide but narrow kernel can be extended to simply integrating along the whole image:
rotate the image so the suspected line is aligned with an axis (let's say horizontal)
sum up all pixels of one scanline (horizontal line), np.sum(axis=1) or mean; either way, mind the data type. Working with floats is convenient.
work with the one-dimensional series of values
This will not tell you how long the line is, only that it is there, potentially spanning the whole width.
Edit: since my answer got a reaction, I'll elaborate as well:
I think you can lowpass that to get the "gray" baseline, then subtract ("difference of Gaussians"). That should give you a nice signal.
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
import scipy.ndimage
im = cv.imread("0gczo.png", cv.IMREAD_GRAYSCALE) / np.float32(255)
relief = im.mean(axis=1)
smoothed = scipy.ndimage.gaussian_filter(relief, sigma=2.0)
baseline = scipy.ndimage.gaussian_filter(relief, sigma=10.0)
difference = smoothed - baseline
std = np.std(difference)
level = 2
outliers = (difference <= std * -level)
plt.plot(difference)
plt.hlines([std * +level, std * -level], xmin=0, xmax=len(relief))
plt.plot(std * -level + outliers * std)
plt.show()
# where those peaks are:
edgemap = np.diff(outliers.astype(np.int8))
(edges,) = edgemap.nonzero()
print(edges) # [392 398 421 427]
print(edgemap[edges]) # [ 1 -1 1 -1]
Much the same as Christoph's answer, but just wanted to share a processed image which I can't do in the comments.
I just took the mean across the rows with np.mean(axis=1) and normalised the result. Hopefully you can see the two dark bands corresponding to your lines.
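For reference, a minimal sketch of that step (the filename and the 50-pixel strip width are assumptions; any grayscale read will do):
import cv2 as cv
import numpy as np

im = cv.imread("0gczo.png", cv.IMREAD_GRAYSCALE).astype(np.float32)

# Mean of each row, stretched to 0-255, tiled into a narrow strip so the
# two dark bands corresponding to the lines are easy to see.
row_means = im.mean(axis=1)
normalised = (row_means - row_means.min()) / (row_means.max() - row_means.min())
strip = np.tile((normalised * 255).astype(np.uint8)[:, None], (1, 50))
cv.imwrite("row_profile.png", strip)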
OK, here is the situation: I want to compute a watershed of this binary vessels image.
Binary vessels.
I want to use these colored vessels as seed points for the algorithm.
Seed points
It seems that when I use the raw colored image as markers, the watershed does not grow beyond the colored regions.
The goal is to have this image.
Filled binary vessels
The code used is this one:
from scipy.ndimage import distance_transform_edt
from skimage.segmentation import watershed

distances = distance_transform_edt(vessels)
segmentation = watershed(-distances, markers, mask=vessels)
The only solution that I found was to erode the markers data (the first colored image).
Do you guys have an explanation for why the watershed does this? We even tried the same code on other computers and it works fine without erosion.
Edit:
Here is an image of the distances. When I take the negative, every 1 becomes -1, so the highest values in the image become 0.
Welcome to the scikit-image thread of SO! Below is a small reproducible example showing that the watershed behaves nicely even with touching markers.
import matplotlib.pyplot as plt
import numpy as np
from skimage import segmentation
from scipy import ndimage
img = np.zeros((20, 20), dtype=bool)
img[3:-3, 3:-3] = True
distance = ndimage.distance_transform_edt(img)
markers = np.zeros_like(img, dtype=np.uint8)
markers[7:-7, 5:10] = 1
markers[7:-7, 10:15] = 2
ws = segmentation.watershed(-distance, markers, mask=img)
fig, ax = plt.subplots(1, 3)
ax[0].imshow(img)
ax[1].imshow(markers)
ax[2].imshow(ws)
plt.show()
Could it happen that the non-labeled vessel pixels in your markers array are not set to 0 but 1 instead? The watershed only labels 0-valued pixels.
A reproducible standalone script would help; the different images you linked to had different dimensions, so it was hard to work from them.
Finally, you might be interested in trying the random walker algorithm, which can produce really good results for images such as yours (no strong gradients between the regions you want to separate).
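As a minimal sketch of what that could look like, reusing `img` and `markers` from the toy example above (the beta value is an arbitrary choice to tune):
from skimage.segmentation import random_walker

# Unlabeled pixels must be 0; pixels outside the mask can be set to -1 so
# the random walker ignores them.
rw_markers = markers.astype(np.int32).copy()
rw_markers[~img] = -1
rw_labels = random_walker(img.astype(float), rw_markers, beta=10)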
I want to write a script to create an image from a connection matrix. Basically, wherever there is a '1' in the matrix, I want that area to be shaded in the image. For example:
I created this image using Photoshop. But I have a large dataset so I will have to automate the process. It would be really helpful if anyone could point me in the right direction.
EDIT
The image that I am getting after using the script is this. This is due to the fact that the matrix is large (19 x 19). Is there any way I can increase the visibility of this image so the black and white boxes appear more clearly?
I would suggest using OpenCV combined with NumPy in this case.
Create a two-dimensional numpy.array of dtype='uint8' with 0 for black and 255 for white. For example, to get a 2x2 array with white in the upper left, white in the lower right, black in the lower left and black in the upper right, you could use this code:
myarray = numpy.array([[255,0],[0,255]],dtype='uint8')
Then you could save that array as an image with cv2 in this way:
cv2.imwrite('image.bmp',myarray)
Here every cell of the array is represented by a single pixel. However, if you want to upscale (so that, for example, every cell is represented by a 5x5 square), you might use the numpy.kron function with the following one-liner:
myarray = numpy.kron(myarray, numpy.ones((5,5)))
before writing the image.
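Putting those pieces together, a minimal sketch for a 19x19 matrix could look like this (the random matrix, the 20-pixel cell size and the file name are assumptions):
import numpy as np
import cv2

# Hypothetical 19x19 connection matrix of 0s and 1s.
conn = np.random.randint(0, 2, size=(19, 19))

# 1 -> black (shaded), 0 -> white, as described in the question.
img = np.where(conn == 1, 0, 255).astype(np.uint8)

# Upscale so each cell becomes a 20x20 block, then save.
img = np.kron(img, np.ones((20, 20), dtype=np.uint8))
cv2.imwrite('connection_matrix.png', img)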
Maybe you can try this!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Display the matrix
plt.imshow(np.random.choice([0, 1], size=100).reshape((10, 10)), cmap=cm.binary)
plt.show()
With a Seaborn heatmap:
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
np.random.seed(3)
sns.set()
data = np.random.choice([0, 1], size=(16,16), p=[3./4, 1./4])
ax = sns.heatmap(data, square=True, xticklabels=False, yticklabels=False, cbar=False, linewidths=.8, linecolor='lightgray', cmap='gray_r')
plt.show()
Note the reverse colormap gray_r to have black for 1's and white for 0's.
I am trying to add shading to a map of some data by calculating the gradient of the data and using it to set alpha values.
I start by loading my data (unfortunately I cannot share the data as it is being used in a number of manuscripts in preparation. EDIT - December, 2020: the published paper is available with open access on the Society of Exploration Geophysicists library, and the data is available with an accompanying Jupyter Notebook):
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from pylab import imread, imshow, gray, mean
import matplotlib.colors as cl
%matplotlib inline
data = np.loadtxt('data.txt')
plt.imshow(data, cmap='cubehelix')
plt.show()
gets me a plot of the data:
Then I calculate the total horizontal gradient and normalize it to use for shading:
dx,dy = np.gradient(data, 1, 1)
tdx=np.sqrt(dx*dx + dy*dy)
tdx_n=(tdx-tdx.min())/(tdx.max()-tdx.min())
tdx_n=1-tdx_n
which looks as I expected:
plt.imshow(tdx_n[4:-3,4:-3], cmap='bone')
plt.show()
To create the shading effect I thought I would get the colour from the plot of the data, then replace the opacity with the gradient so as to have dark shading proportional to the gradient, like this:
img_array = plt.get_cmap('cubehelix')(data[4:-3,4:-3])
img_array[..., 3] = (tdx_n[4:-3,4:-3])
plt.imshow(img_array)
plt.show()
But the result is not what I expected:
This is what I am looking for (created in Matlab, colormap is different):
Any suggestion as to how I may modify my code?
UPDATED
With Ran Novitsky's method, using the code suggested in the answer by titusjan, I get this result:
which gives the effect I was looking for. In terms of shading though I do like titusjan's own suggestion of using HSV, which gives this result:
However, I could not get the colormap to be cubehelix, even though I called for it:
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
hsv = rgb_to_hsv(img_array[:, :, :3])
hsv[:, :, 2] = tdx_n
rgb = hsv_to_rgb(hsv)
plt.imshow(rgb[4:-3,4:-3], cmap='cubehelix')
plt.show()
First of all, Matplotlib includes a hill shading implementation. This calculates the intensity by comparing the gradient with a light source at a certain angle. So it's not exactly what you are implementing, but close and may even give better results.
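For reference, a minimal sketch of that built-in approach applied to your data (the light direction is an arbitrary choice):
from matplotlib.colors import LightSource
import matplotlib.pyplot as plt

ls = LightSource(azdeg=315, altdeg=45)  # light from the north-west, arbitrary
shaded = ls.shade(data, cmap=plt.get_cmap('cubehelix'))
plt.imshow(shaded)
plt.show()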
Ran Novitsky has made another hill shading implementation that differs from Matplotlib in the way how the color and intensity values are merged. I can't tell which is better but it's worth a look.
Perhaps the best way of combining color and intensity would be to use Gouraud shading, which is used in 3D computer graphics. My own approach, which I have implemented in the past, was to put the intensity in the value layer of the HSV color of the image.
I don't think I agree with your approach of placing the intensity (tdx_n in your case) in the alpha layer of the image. This means that where the gradient is low the image will be transparent and you will see data that was plotted earlier. I think that's what's happening in your screen shot.
Furthermore I think you need to normalize your data before you pass it through the cmap, just as you normalize your intensity:
data_n=(data-data.min())/(data.max()-data.min())
img_array = plt.get_cmap('cubehelix')(data_n)
We then can use the approach of Ran Novitsky to merge the color with the intensity:
rgb = img_array[:, :, :3]
# form an RGB equivalent of the intensity
d = tdx_n.repeat(3).reshape(rgb.shape)
# simulate illumination based on pegtop algorithm.
rgb = 2 * d * rgb + (rgb ** 2) * (1 - 2 * d)
plt.imshow(rgb[4:-3,4:-3])
plt.show()
Or you can follow my past approach and put the intensity in the value layer of the HSV triplet.
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
hsv = rgb_to_hsv(img_array[:, :, :3])
hsv[:, :, 2] = tdx_n
rgb = hsv_to_rgb(hsv)
plt.imshow(rgb[4:-3,4:-3])
plt.show()
Edit 2015-05-23:
Your question has prompted me to finish my hill shading implementation that I started a year ago. I've put it on Github here.
It uses a blending mechanism that is similar to Gouraud shading, which is used in 3D computer graphics. It's labeled RGB blending below. I think this is the best blending algorithm, HSV blending gives erroneous results when the color is close to black (note the blue color in the center of the HSV image, which is not present in the un-shaded data).
RGB blending is also the simplest algorithm: it just multiplies the intensity with the RGB triplet (an extra dimension of length 1 is added to allow broadcasting in the multiplication).
rgb = img_array[:, :, :3]
tdx_n_exp = np.expand_dims(tdx_n, axis=2)
result = tdx_n_exp * rgb
plt.imshow(result[4:-3,4:-3])
I represent images in the form of 2-D arrays. I have this picture:
How can I get the pixels that are directly on the boundaries of the gray region and colorize them?
I want to get the coordinates of the matrix elements in green and red separately. I have only white, black and gray regions on the matrix.
The following should hopefully be okay for your needs (or at least help). The idea is to split the image into the various regions using logical checks based on threshold values. The edges between these regions can then be detected by using numpy roll to shift pixels in x and y and comparing to see if we are at an edge:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from skimage.morphology import closing
thresh1 = 127
thresh2 = 254
#Load image
im = sp.misc.imread('jBD9j.png')
#Get threshold masks for the different regions
gryim = np.mean(im[:,:,0:2],2)
region1 = (thresh1<gryim)
region2 = (thresh2<gryim)
nregion1 = ~ region1
nregion2 = ~ region2
#Plot figure and two regions
fig, axs = plt.subplots(2,2)
axs[0,0].imshow(im)
axs[0,1].imshow(region1)
axs[1,0].imshow(region2)
#Clean up any holes, etc (not needed for simple figures here)
#region1 = sp.ndimage.morphology.binary_closing(region1)
#region1 = sp.ndimage.morphology.binary_fill_holes(region1)
#region1.astype('bool')
#region2 = sp.ndimage.morphology.binary_closing(region2)
#region2 = sp.ndimage.morphology.binary_fill_holes(region2)
#region2.astype('bool')
#Get location of edge by comparing array to its
#inverse shifted by a few pixels
shift = -2
edgex1 = (region1 ^ np.roll(nregion1,shift=shift,axis=0))
edgey1 = (region1 ^ np.roll(nregion1,shift=shift,axis=1))
edgex2 = (region2 ^ np.roll(nregion2,shift=shift,axis=0))
edgey2 = (region2 ^ np.roll(nregion2,shift=shift,axis=1))
#Plot location of edge over image
axs[1,1].imshow(im)
axs[1,1].contour(edgex1,2,colors='r',linewidths=2.)
axs[1,1].contour(edgey1,2,colors='r',linewidths=2.)
axs[1,1].contour(edgex2,2,colors='g',linewidths=2.)
axs[1,1].contour(edgey2,2,colors='g',linewidths=2.)
plt.show()
Which gives the following result. For simplicity I've used roll with the inverse of each region. You could roll each successive region onto the next to detect edges.
Thank you to @Kabyle for offering a reward; this is a problem that I spent a while looking for a solution to. I tried scipy skeletonize, feature.canny, the topology module and OpenCV with limited success... This way was the most robust for my case (droplet interface tracking). Hope it helps!
There is a very simple solution to this: by definition, any pixel which has both white and gray neighbors is on your "red" edge, and any pixel with gray and black neighbors is on the "green" edge. The lightest/darkest neighbors are returned by the maximum/minimum filters in skimage.filters.rank, and a binary combination of masks of pixels whose lightest/darkest neighbors are white/gray or gray/black respectively produces the edges.
Result:
A worked solution:
import numpy
import skimage.filters.rank
import skimage.morphology
import skimage.io
import matplotlib.pyplot as plt
# convert image to a uint8 image which only has 0, 128 and 255 values
# the source png image provided has other levels in it so it needs to be thresholded - adjust the thresholding method for your data
img_raw = skimage.io.imread('jBD9j.png', as_gray=True)
img = numpy.zeros_like(img_raw, dtype=numpy.uint8)
img[:,:] = 128
img[ img_raw < 0.25 ] = 0
img[ img_raw > 0.75 ] = 255
# define "next to" - this may be a square, diamond, etc
selem = skimage.morphology.disk(1)
# create masks for the two kinds of edges
black_gray_edges = (skimage.filters.rank.minimum(img, selem) == 0) & (skimage.filters.rank.maximum(img, selem) == 128)
gray_white_edges = (skimage.filters.rank.minimum(img, selem) == 128) & (skimage.filters.rank.maximum(img, selem) == 255)
# create a color image
img_result = numpy.dstack( [img,img,img] )
# assign colors to edge masks
img_result[ black_gray_edges, : ] = numpy.asarray( [ 0, 255, 0 ] )
img_result[ gray_white_edges, : ] = numpy.asarray( [ 255, 0, 0 ] )
plt.imshow(img_result)
plt.show()
P.S. Pixels which have black and white neighbors, or neighbors of all three colors, are in an undefined category. The code above doesn't color those. You need to figure out how you want the output to be colored in those cases; but it is easy to extend the approach above to produce another mask or two for that.
P.S. The edges are two pixels wide. There is no getting around that without more information: the edges are between two areas, and you haven't defined which one of the two areas you want them to overlap in each case, so the only symmetrical solution is to overlap both areas by one pixel.
P.S. This counts the pixel itself as its own neighbor. An isolated white or black pixel on gray, or vice versa, will be considered as an edge (as well as all the pixels around it).
While plonser's answer may be rather straightforward to implement, I see it failing when it comes to sharp and thin edges. Nevertheless, I suggest you use part of his approach as preconditioning.
In a second step you want to use the Marching Squares Algorithm. According to the documentation of scikit-image, it is
a special case of the marching cubes algorithm (Lorensen, William and
Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface
Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings)
21(4) July 1987, p. 163-170).
There even exists a Python implementation as part of the scikit-image package. I have been using this algorithm (my own Fortran implementation, though) successfully for edge detection of eye diagrams in communications engineering.
Ad 1: Preconditioning
Create a copy of your image and make it two-color only, e.g. black/white. The coordinates remain the same, but you make sure that the algorithm can properly make a yes/no decision independently of the values that you use in your matrix representation of the image.
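As a rough sketch of that step (assuming a float grayscale array `img` with black near 0, gray near 0.5 and white near 1; the tolerance is arbitrary):
import numpy as np

# Two-color copy: 1 inside the gray region, 0 everywhere else.
two_color = (np.abs(img - 0.5) < 0.1).astype(np.uint8)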
Ad 2: Edge Detection
Wikipedia as well as various blogs provide a pretty elaborate description of the algorithm in various languages, so I will not go into its details. However, let me give you some practical advice:
Your image has open boundaries at the bottom. Instead of modifying the algorithm, you can artificially add another row of pixels (black or grey) to bound the white/grey areas.
The choice of the starting point is critical. If there are not too many images to be processed, I suggest you select it manually. Otherwise you will need to define rules. Since the Marching Squares Algorithm can start anywhere inside a bounded area, you could choose any pixel of a given color/value to detect the corresponding edge (it will initially start walking in one direction to find an edge).
The algorithm returns the exact 2D positions, e.g. (x/y)-tuples. You can either
iterate through the list and colorize the corresponding pixels by assigning a different value or
create a mask to select parts of your matrix and assign the value that corresponds to a different color, e.g. green or red.
Finally: Some Post-Processing
I suggested adding an artificial boundary to the image. This has two advantages:
1. The Marching Squares Algorithm works out of the box.
2. There is no need to distinguish between image boundary and the interface between two areas within the image. Just remove the artificial boundary once you are done setting the colorful edges -- this will remove the colored lines at the boundary of the image.
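A hypothetical illustration of that boundary trick (assuming the same float grayscale array `img`; padding with a gray row closes the open bottom, and the artificial row is dropped again afterwards):
import numpy as np

# Add one artificial gray row at the bottom so every region is bounded...
padded = np.pad(img, ((0, 1), (0, 0)), mode='constant', constant_values=0.5)

# ...run the edge extraction on `padded`, then discard points that lie on the
# artificial row (row index == img.shape[0]) before colorizing.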
Basically, by following pyStarter's suggestion of using the marching squares algorithm from scikit-image, the desired contours can be extracted with the following code:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from skimage import measure
import scipy.ndimage as ndimage
from skimage.color import rgb2gray
from pprint import pprint
#Load image
im = rgb2gray(sp.misc.imread('jBD9j.png'))
n, bins_edges = np.histogram(im.flatten(),bins = 100)
# Skip the black area, and assume two distinct regions, white and grey
max_counts = np.sort(n[bins_edges[0:-1] > 0])[-2:]
thresholds = np.select(
    [max_counts[i] == n for i in range(max_counts.shape[0])],
    [bins_edges[0:-1]] * max_counts.shape[0]
)
# filter out the zero values
thresholds = thresholds[thresholds > 0]
fig, axs = plt.subplots()
# Display image
axs.imshow(im, interpolation='nearest', cmap=plt.cm.gray)
colors = ['r','g']
for i, threshold in enumerate(thresholds):
    contours = measure.find_contours(im, threshold)
    # Display all contours found for this threshold
    for n, contour in enumerate(contours):
        axs.plot(contour[:,1], contour[:,0], colors[i], lw = 4)
axs.axis('image')
axs.set_xticks([])
axs.set_yticks([])
plt.show()
However, from your image there is no clearly defined gray region, so I took the two largest intensity counts in the image and thresholded on these. A bit disturbing is the red region in the middle of the white region, but I think this could be tweaked with the number of bins in the histogram procedure. You could also set the thresholds manually as Ed Smith did.
Maybe there is a more elegant way to do that, but in case your array is a numpy array with dimensions (N, N) (grayscale), you can do:
import numpy as np
# assuming black -> 0 and white -> 1 and grey -> 0.5
black_reg = np.where(a < 0.1, a, 10)
white_reg = np.where(a > 0.9, a, 10)
xx_black,yy_black = np.gradient(black_reg)
xx_white,yy_white = np.gradient(white_reg)
# getting the coordinates
coord_green = np.argwhere(xx_black**2 + yy_black**2>0.2)
coord_red = np.argwhere(xx_white**2 + yy_white**2>0.2)
The number 0.2 is just a threshold and needs to be adjusted.
I think you are probably looking for an edge detection method for grayscale images. There are many ways to do that; maybe this can help: http://en.m.wikipedia.org/wiki/Edge_detection. For differentiating edges between white and gray from edges between black and gray, try using the local average intensity.
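A rough sketch of that idea (assuming a float grayscale array `img` with black near 0, gray near 0.5 and white near 1; the window size and thresholds are assumptions):
import numpy as np
from scipy import ndimage

# Edge strength from a standard detector (Sobel gradient magnitude here).
gx = ndimage.sobel(img, axis=0)
gy = ndimage.sobel(img, axis=1)
edge_mask = (gx**2 + gy**2) > 0.1

# The local average intensity around each edge pixel tells the two edge
# types apart: bright neighbourhood -> white/gray, dark -> gray/black.
local_mean = ndimage.uniform_filter(img, size=5)
red_edges = edge_mask & (local_mean > 0.5)     # white/gray boundary
green_edges = edge_mask & (local_mean <= 0.5)  # gray/black boundary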