I want to write a script to create an image from a connection matrix. Basically, wherever there is a '1' in the matrix, I want that area to be shaded in the image. For example:
I created this image using Photoshop. But I have a large dataset so I will have to automate the process. It would be really helpful if anyone could point me in the right direction.
EDIT
The image that I am getting after using the script is this. This is due to the fact that the matrix is large (19 x 19). Is there any way I can increase the visibility of this image so the black and white boxes appear more clear?
I would suggest using OpenCV combined with NumPy in this case.
Create a two-dimensional numpy.array of dtype='uint8', with 0 for black and 255 for white. For example, to get a 2x2 array with a white upper-left, white lower-right, black lower-left and black upper-right, you could use this code:
import numpy
myarray = numpy.array([[255, 0], [0, 255]], dtype='uint8')
Then you could save that array as an image with OpenCV (cv2) like this:
import cv2
cv2.imwrite('image.bmp', myarray)
Every cell of the array is then represented by a single pixel. If you want to upscale (so that, for example, every cell is represented by a 5x5 square of pixels), you can use the numpy.kron function with the following one-liner before writing the image (the explicit uint8 dtype keeps the array 8-bit for cv2.imwrite):
myarray = numpy.kron(myarray, numpy.ones((5, 5), dtype='uint8'))
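Putting it together, a minimal end-to-end sketch could look like the following; the 19x19 random matrix stands in for your real connection matrix, and the 20-pixel cell size is just an assumption:
import numpy as np
import cv2

# placeholder: a 19x19 connection matrix of 0s and 1s
conn = np.random.randint(0, 2, size=(19, 19))

# 1 -> black (0), 0 -> white (255)
img = np.where(conn == 1, 0, 255).astype('uint8')

# upscale so every matrix cell becomes a 20x20 block of pixels
img = np.kron(img, np.ones((20, 20), dtype='uint8'))

cv2.imwrite('connection_matrix.bmp', img)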
Maybe you can try this!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# Display the matrix: 1 -> black, 0 -> white, using the binary colormap
plt.imshow(np.random.choice([0, 1], size=100).reshape((10, 10)), cmap=cm.binary)
plt.show()
With a Seaborn heatmap:
import numpy as np
import seaborn as sns
np.random.seed(3)
sns.set()
data = np.random.choice([0, 1], size=(16,16), p=[3./4, 1./4])
ax = sns.heatmap(data, square=True, xticklabels=False, yticklabels=False, cbar=False, linewidths=.8, linecolor='lightgray', cmap='gray_r')
Note the reverse colormap gray_r to have black for 1's and white for 0's.
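Since the question mentions automating this over a large dataset, here is a hedged sketch of a loop that renders and saves one heatmap per matrix; the `matrices` list and the output file names are placeholders:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# placeholder: a list of 19x19 connection matrices
matrices = [np.random.choice([0, 1], size=(19, 19)) for _ in range(3)]

for i, data in enumerate(matrices):
    fig, ax = plt.subplots(figsize=(6, 6))
    sns.heatmap(data, square=True, xticklabels=False, yticklabels=False,
                cbar=False, linewidths=.8, linecolor='lightgray',
                cmap='gray_r', ax=ax)
    fig.savefig('matrix_{}.png'.format(i), dpi=150, bbox_inches='tight')
    plt.close(fig)  # free the figure when processing many matrices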
Related
OK, here is the situation: I want to compute a watershed of this binary vessels image.
Binary vessels.
I want to use these colored vessels as seed points for the algorithm.
Seed points
It seems that when I use the raw colored image, the watershed does not go further than the colored image.
The goal is to have this image.
Filled binary vessels
The code used is this one:
distances = distance_transform_edt(vessels)
segmentation = watershed(-distances, markers, mask=vessels)
The only solution that I found was to erode markers data (the 1st colored image).
Do you have any idea why watershed does this? We even tried the same code on other computers and it works fine without erosion.
Edit:
Here is an image of the distances. When I take the negative, every 1 becomes -1, so the highest values in the image become 0.
Welcome to the scikit-image thread of SO! Below is a small reproducible example showing that the watershed behaves nicely even with touching markers.
import matplotlib.pyplot as plt
import numpy as np
from skimage import segmentation
from scipy import ndimage
img = np.zeros((20, 20), dtype=bool)
img[3:-3, 3:-3] = True
distance = ndimage.distance_transform_edt(img)
markers = np.zeros_like(img, dtype=np.uint8)
markers[7:-7, 5:10] = 1
markers[7:-7, 10:15] = 2
ws = segmentation.watershed(-distance, markers, mask=img)
fig, ax = plt.subplots(1, 3)
ax[0].imshow(img)
ax[1].imshow(markers)
ax[2].imshow(ws)
plt.show()
Could it happen that the non-labeled vessel pixels in your markers array are not set to 0 but 1 instead? The watershed only labels 0-valued pixels.
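A quick, hedged way to check this on your own `markers` array (name taken from your snippet) is to inspect the unique values and remap the background to 0 if needed:
import numpy as np

print(np.unique(markers))  # the background should be 0, the seeds 1, 2, 3, ...

# if the background accidentally ended up as 1 (and the seeds start at 2),
# shifting everything down makes the unlabeled pixels 0 again
if markers.min() == 1:
    markers = markers - 1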
A reproducible standalone script would help; the different images you linked to had different dimensions, so it was hard to work from them.
Finally, you might be interested in trying the random walker algorithm, which can produce really good results for images such as yours (no strong gradients between the regions you want to separate).
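For reference, a hedged example of what the random walker call could look like with the arrays from your snippet (the `beta` value is just scikit-image's default):
from skimage import segmentation

# markers: 0 for unlabeled pixels, positive integers for the seed regions
labels = segmentation.random_walker(vessels, markers, beta=130, mode='bf')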
Here's my input image:
I am plotting histogram of this image using the following code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('red.jpg')
color = ('b','g','r')
for i, col in enumerate(color):
    histr = cv2.calcHist([img], [i], None, [256], [0, 256])
    plt.plot(histr, color=col)
    plt.xlim([0, 256])
plt.show()
Here is the plotted histogram output: On the left hand side is the original histogram and on the right hand side is the zoomed version:
My starting point is 255 and ending point is zero.
All my important data lies in the range 235 to 255, since at 235 the line becomes straight (please see the right-hand side of the histogram).
I want to write a Python/OpenCV script that finds where the red line of the histogram becomes straight; once the value after which the line shows minimum deviation is found, it should delete all the remaining pixels from the image. In the above case, that means deleting pixels with values 0 to 235. How can this be achieved?
A histogram is basically an array of bins.
For the OpenCV histogram bins you create, you can check the number of values and the mean value in each bin and compare them with the previous bin (more like a sliding window). If you find the difference to be greater than a threshold, consider those the chosen bins (pixels).
This is a technique used to identify peaks in a 1D array.
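As a rough, hedged sketch of that idea for the red channel (the flatness threshold is an assumption you would tune to your histogram):
import cv2

img = cv2.imread('red.jpg')
hist = cv2.calcHist([img], [2], None, [256], [0, 256]).ravel()  # red channel (BGR order)

# walk down from 255 and stop at the first bin where the histogram flattens out
flat_thresh = 50          # assumed: bins with fewer counts than this count as "flat"
cutoff = 0
for v in range(255, -1, -1):
    if hist[v] < flat_thresh:
        cutoff = v
        break

# "delete" the remaining pixels, i.e. zero out everything whose red value is below the cutoff
result = img.copy()
result[img[:, :, 2] < cutoff] = 0
cv2.imwrite('red_thresholded.jpg', result)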
I am trying to add shading to a map of some data by calculating the gradient of the data and using it to set alpha values.
I start by loading my data (unfortunately I cannot share the data as it is being used in a number of manuscripts in preparation. EDIT - December, 2020: the published paper is available with open access on the Society of Exploration Geophysicists library, and the data is available with an accompanying Jupyter Notebook):
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from pylab import imread, imshow, gray, mean
import matplotlib.colors as cl
%matplotlib inline
data = np.loadtxt('data.txt')
plt.imshow(data, cmap='cubehelix')
plt.show()
gets me a plot of the data:
Then I calculate the total horizontal gradient and normalize it to use for shading:
dx,dy = np.gradient(data, 1, 1)
tdx=np.sqrt(dx*dx + dy*dy)
tdx_n=(tdx-tdx.min())/(tdx.max()-tdx.min())
tdx_n=1-tdx_n
which looks as I expected:
plt.imshow(tdx_n[4:-3,4:-3], cmap='bone')
plt.show()
To create the shading effect I thought I would get the colour from the plot of the data, then replace the opacity with the gradient so as to have dark shading proportional to the gradient, like this:
img_array = plt.get_cmap('cubehelix')(data[4:-3,4:-3])
img_array[..., 3] = (tdx_n[4:-3,4:-3])
plt.imshow(img_array)
plt.show()
But the result is not what I expected:
This is what I am looking for (created in Matlab, colormap is different):
Any suggestion as to how I may modify my code?
UPDATED
With Ran Novitsky's method, using the code suggested in the answer by titusjan, I get this result:
which gives the effect I was looking for. In terms of shading though I do like titusjan's own suggestion of using HSV, which gives this result:
However, I could not get the colormap to be cubehelix, even though I called for it:
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
hsv = rgb_to_hsv(img_array[:, :, :3])
hsv[:, :, 2] = tdx_n
rgb = hsv_to_rgb(hsv)
plt.imshow(rgb[4:-3,4:-3], cmap='cubehelix')
plt.show()
First of all, Matplotlib includes a hill shading implementation. This calculates the intensity by comparing the gradient with a light source at a certain angle. So it's not exactly what you are implementing, but close and may even give better results.
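A hedged sketch of that built-in approach, using the same data.txt as above (the light angles are just example values):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

data = np.loadtxt('data.txt')                    # same data file as in the question
ls = LightSource(azdeg=315, altdeg=45)           # light from the north-west, 45 degrees up
shaded = ls.shade(data, cmap=plt.cm.cubehelix)   # returns an MxNx4 RGBA array
plt.imshow(shaded)
plt.show()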
Ran Novitsky has made another hill shading implementation that differs from Matplotlib in the way how the color and intensity values are merged. I can't tell which is better but it's worth a look.
Perhaps the best way of combining color and intensity would be to use Gouraud shading, which is used in 3D computer graphics. My own approach, which I have implemented in the past, was to put the intensity in the value layer of the HSV color of the image.
I don't think I agree with your approach of placing the intensity (tdx_n in your case) in the alpha layer of the image. This means that where the gradient is low the image will be transparent and you will see data that was plotted earlier. I think that's what's happening in your screenshot.
Furthermore I think you need to normalize your data before you pass it through the cmap, just as you normalize your intensity:
data_n=(data-data.min())/(data.max()-data.min())
img_array = plt.get_cmap('cubehelix')(data_n)
We then can use the approach of Ran Novitsky to merge the color with the intensity:
rgb = img_array[:, :, :3]
# form an rgb equivalent of the intensity
d = tdx_n.repeat(3).reshape(rgb.shape)
# simulate illumination based on pegtop algorithm.
rgb = 2 * d * rgb + (rgb ** 2) * (1 - 2 * d)
plt.imshow(rgb[4:-3,4:-3])
plt.show()
Or you can follow my past approach and put the intensity in the value layer of the HSV triplet.
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
hsv = rgb_to_hsv(img_array[:, :, :3])
hsv[:, :, 2] = tdx_n
rgb = hsv_to_rgb(hsv)
plt.imshow(rgb[4:-3,4:-3])
plt.show()
Edit 2015-05-23:
Your question has prompted me to finish my hill shading implementation that I started a year ago. I've put it on Github here.
It uses a blending mechanism that is similar to Gouraud shading, which is used in 3D computer graphics. It's labeled RGB blending below. I think this is the best blending algorithm, HSV blending gives erroneous results when the color is close to black (note the blue color in the center of the HSV image, which is not present in the un-shaded data).
RGB blending is also the simplest algorithm: it just multiplies the intensity with the RGB triplet (an extra dimension of length 1 is added to allow broadcasting in the multiplication).
rgb = img_array[:, :, :3]
tdx_n_exp = np.expand_dims(tdx_n, axis=2)
result = tdx_n_exp * rgb
plt.imshow(result[4:-3,4:-3])
I have a plot of spatial data that I display with imshow().
I need to be able to overlay the crystal lattice that produced the data. I have a png file of the lattice that loads as a black-and-white image. The parts of this image I want to overlay are the black lines that form the lattice; I do not want to see the white background between the lines.
I'm thinking that I need to set the alpha for each background (white) pixel to transparent (0?).
I'm so new to this that I don't really know how to ask this question.
EDIT:
import matplotlib.pyplot as plt
import numpy as np
lattice = plt.imread('path')
im = plt.imshow(data[0,:,:],vmin=v_min,vmax=v_max,extent=(0,32,0,32),interpolation='nearest',cmap='jet')
im2 = plt.imshow(lattice,extent=(0,32,0,32),cmap='gray')
#thinking of making a mask for the white background
mask = np.ma.masked_where(lattice < 1, lattice)  # confusion here b/c even though the image is grayscale uint8 (0-255), the numpy array lattice holds 0-1.0 floats...?
Without your data I can't test this, but something like:
import matplotlib.pyplot as plt
import numpy as np
import copy
my_cmap = copy.copy(plt.cm.get_cmap('gray')) # get a copy of the gray color map
my_cmap.set_bad(alpha=0) # set how the colormap handles 'bad' values
lattice = plt.imread('path')
im = plt.imshow(data[0,:,:],vmin=v_min,vmax=v_max,extent=(0,32,0,32),interpolation='nearest',cmap='jet')
lattice[lattice< thresh] = np.nan # insert 'bad' values into your lattice (the white)
im2 = plt.imshow(lattice,extent=(0,32,0,32),cmap=my_cmap)
Alternatively, you can hand imshow an NxMx4 np.array of RGBA values; that way you don't have to muck with the colormap:
im2 = np.zeros(lattice.shape + (4,))
im2[:, :, 3] = lattice # assuming lattice is already a bool array
plt.imshow(im2)
The easy way is to simply use your image as a background rather than an overlay. Other than that, you will need to use PIL or the Python ImageMagick bindings to convert the selected colour to transparent.
Don't forget you will probably also need to resize either your plot or your image so that they match in size.
Update:
If you follow the tutorial here with your image and then plot your data over it, you should get what you need. Note that the tutorial uses PIL, so you will need that installed as well.
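If you go the PIL route, a hedged sketch of making the white background transparent before using the lattice as an overlay could look like this (the file name and the "white enough" threshold of 200 are assumptions):
from PIL import Image
import numpy as np

lattice = Image.open('lattice.png').convert('RGBA')
arr = np.array(lattice)

# make every (near-)white pixel fully transparent, keep the black lines opaque
white = (arr[:, :, :3] > 200).all(axis=-1)   # assumed threshold for "white"
arr[white, 3] = 0
Image.fromarray(arr).save('lattice_transparent.png')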
I am trying to take pixels from an image and plot them on top of a Blue Marble map. I have figured out how to project them onto the map. I have just not been able to figure out how to color each individual pixel when they are projected onto the map.
I have been using the plot() method; when I plot the pixels individually, the terminal automatically kills my process because it has to plot ~65000 times. Is there another method I could use? Is there a way to use an array of pixel colors in any of these methods? Is this possible with PIL?
rgb is the color array of 3-tuples, i.e. (14, 0, 0) etc. full_x and full_y are 2-dimensional arrays of shape (# of pixels) x 5, holding the five x,y points that make up the pixel shape on the Blue Marble image.
This is where I tried to do an array of colors:
for i in range(len(rgb)):
    hexV = struct.pack('BBB', *rgb[i]).encode('hex')
    hexA.append('#' + hexV)
m.plot(full_x, full_y, color=hexA)
I have also tried:
for i in range(len(rgb)):
    hexV = struct.pack('BBB', *rgb[i]).encode('hex')
    #hexA.append('#' + hexV)
    hexA = '#' + hexV
    m.plot(full_x[i], full_y[i], color=hexA[i])
This is where I tried to do each pixel individually and then the process was automatically killed.
Any help would be much appreciated. Thanks in advance.
For anyone who sees this and has the same problem:
Apparently all you have to do is use scatter. To map pixels (or any other points) with multiple colors, use scatter with an x array, a y array and a pixel-color array.
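A hedged sketch of that, assuming m is the Basemap instance and rgb, full_x, full_y are the arrays described in the question (only the first of the five corner points per pixel is used as the marker position here, just for illustration):
import numpy as np

# Matplotlib expects colors as floats in 0-1, so scale the 0-255 tuples
colors = np.asarray(rgb, dtype=float) / 255.0

# one scatter call for all ~65000 points instead of one plot() call per pixel
m.scatter(full_x[:, 0], full_y[:, 0], c=colors, s=4, marker='s')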