I've been working on a web app that produces images using Bokeh in Python, and I've been having trouble making images with non-uniform pixel widths. The behavior I'd like is similar to that of the NonUniformImage class from the matplotlib.image module, but I want it to be interactive in the browser, which is why I'm using Bokeh.
The data that I want to plot has a fixed pixel height in the vertical direction, but each column can have a different width. The only way I could figure out to make variable-width columns in an image plot was to slice each column into its own image and plot them all as separate images with the appropriate widths. While this does plot things with the widths I want, it has a rendering issue: white lines show up between the pixels depending on the zoom level, and these white lines carry over into saved images as well. I've written up some sample code below:
import numpy as np
from bokeh.models import Range1d
from bokeh.plotting import figure, show, output_file
# Sample Data
img = np.array([[(x/10.*255,y/10.*255,100,255) for x in range(10)] for y in range(10)])
# Convert to RGBA array that can be plotted
d = np.empty((10, 10), dtype=np.uint32)
view = d.view(dtype=np.uint8).reshape((10, 10, 4))
view[:,:,:] = img
# Set output file
output_file("image.html", title="image.py example")
# Setup the figure
rng = Range1d(0,10,bounds='auto')
p = figure(x_range=rng, y_range=rng, plot_width=500, plot_height=500, active_scroll='wheel_zoom')
# Slice the images
imgs = [d[:,n:n+1] for n in range(10)]
dhs = [10 for n in range(10)]
ys = [0 for n in range(10)]
dws = [0.5 if n%2 == 0 else 1.5 for n in range(10)]
xs = [sum(dws[:n]) for n in range(10)]
# Plot the image
p.image_rgba(image=imgs, x=xs, y=ys, dw=dws, dh=dhs)
show(p)
Now, my real data is much denser than this sample data, so the rendering is dominated by the white vertical lines. If you zoom in far enough, you can see that the pixels are right next to each other.
So, my question is the following: Is there a better way to plot a non-uniform image in Bokeh? Something that can take in x,y position information for each pixel would be preferable. Or, is there a way I can get the rendering to work better using this method to avoid the white stripes?
EDIT: It seems that if I give the pixels some overlap, the striping goes away. But there's a limit to that: the overlap needs to be sufficiently large, which seems like a pretty sketchy way of doing things. I'll think about this some more, as it could be a workaround, but I'd like a more reasonable solution.
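For reference, here's a rough sketch of what I mean by overlapping the pixels, applied to the sample code above (the pad value is arbitrary and has to be tuned by hand, which is part of why it feels sketchy):
# Workaround sketch: widen each column slightly so adjacent images
# overlap and hide the white seams (pad is an arbitrary, hand-tuned value).
pad = 0.05
p.image_rgba(image=imgs, x=xs, y=ys, dw=[w + pad for w in dws], dh=dhs)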
Related
I want to plot characteristics of areas on a map, but with very uneven population density, the larger tiles misleadingly attract attention. Think of averages (of test scores, say) by ZIP codes.
High-resolution maps are available to separate inhabited locales and even density within them. The Python code below does produce a raster colored according to the average such density for every pixel.
However, what I would really need is coloring from a choropleth map of the same area (ZIP codes of Hungary in this case) but the coloring affecting only points that would show up on the raster anyway. The raster could only determine the gamma of the pixel (or maybe height in some 3D analog). What is a good way to go about this?
A rasterio.mask.mask somehow?
(By the way, an overlay with the ZIP code boundaries would also be nice, but I have a better understanding of how that could work with GeoViews.)
import rasterio
import os
import datashader as ds
from datashader import transfer_functions as tf
import xarray as xr
from matplotlib.cm import viridis
# download a GeoTIFF from this location: https://data.humdata.org/dataset/hungary-high-resolution-population-density-maps-demographic-estimates
data_path = '~/Downloads/'
file_name = 'HUN_youth_15_24.tif' # young people
file_path = os.path.expanduser(os.path.join(data_path, file_name))  # expand '~' so rasterio/xarray can open the file
src = rasterio.open(file_path)
da = xr.open_rasterio(file_path)
cvs = ds.Canvas(plot_width=5120, plot_height=2880)
img = tf.shade(cvs.raster(da,layer=1), cmap=viridis)
ds.utils.export_image(img, "map", export_path=data_path, fmt=".png")
I am not sure I understand, so please tell me if I am mistaken. If I understood correctly, you can achieve what you want using numpy only (I am sure translating this to xarray will be easy):
# ---- snipped code already in the question -----
import numpy as np
import matplotlib.pyplot as plt
# fake a choropleth in a dirty, fast way
height, width = 2880, 5120
choropleth = np.empty((height, width, 3,), dtype=np.uint8)
CHUNKS = 10
x_size = width // CHUNKS
for x_step, x in enumerate(range(0, width, width // CHUNKS)):
    y_size = height // CHUNKS
    for y_step, y in enumerate(range(0, height, height // CHUNKS)):
        choropleth[y: y+y_size, x: x+x_size] = (255 - x_step*255//CHUNKS,
                                                0, y_step*255//CHUNKS)
plt.figure("Fake Choropleth")
plt.imshow(choropleth)
# Option 1: play with alpha only
outimage = np.empty((height, width, 4,), dtype=np.uint8) # RGBA image
outimage[:, :, 3] = img # Set alpha channel
outimage[:, :, :3] = choropleth # Set color
plt.figure("Alpha filter only")
plt.imshow(outimage)
# Option 2: clear the empty points
outimage[img == 0, :3] = 0 # Black; use 255 for white
plt.figure("Points erased")
plt.imshow(outimage[:,:,:3]) # change to 'outimage' to see the image with alpha
Results:
Dummy choropleth
Alpha filtered figure
Black background, no alpha filter
Note that the images might seem different because of matplotlib's antialiasing.
Datashader will let you combine data of many types into a common raster shape, where you can do whatever masking or filtering you like using xarray operations based on NumPy. E.g., you can render the choropleth as polygons, then mask out uninhabited regions. How to normalize by area is up to you, and could get very complex, but it should be doable once you define precisely what you intend to do. See the transform code at https://examples.pyviz.org/nyc_taxi/nyc_taxi.html for examples of how to do this, as in:
def transform(overlay):
    picks = overlay.get(0).redim(pickup_x='x', pickup_y='y')
    drops = overlay.get(1).redim(dropoff_x='x', dropoff_y='y')
    pick_agg = picks.data.Count.data
    drop_agg = drops.data.Count.data
    more_picks = picks.clone(picks.data.where(pick_agg>drop_agg))
    more_drops = drops.clone(drops.data.where(drop_agg>pick_agg))
    return (hd.shade(more_drops, cmap=['lightcyan', "blue"]) *
            hd.shade(more_picks, cmap=['mistyrose', "red"]))

picks = hv.Points(df, ['pickup_x', 'pickup_y'])
drops = hv.Points(df, ['dropoff_x', 'dropoff_y'])

((hd.rasterize(picks) * hd.rasterize(drops))).apply(transform).opts(
    bgcolor='white', xaxis=None, yaxis=None, width=900, height=500)
Here it's not really masking anything, but hopefully you can see how masking would work: just get some rasterized object, then do a mathematical operation on it using some other rasterized object. Here the steps are all done in a function using HoloViews objects so that you can have a live interactive plot. You would probably want to work out the approach first using the more basic code at datashader.org, where you only have to deal with xarray objects and not a HoloViews pipeline; you can then translate what you did for a single xarray into the HoloViews pipeline that allows full interactive usage with pan, zoom, axes, etc.
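As a minimal sketch of that xarray-level masking, reusing cvs, da, tf, and viridis from the question's code (choropleth_agg is a hypothetical raster of the ZIP-code polygons aggregated onto the same canvas):
density = cvs.raster(da, layer=1)           # population-density raster
masked = choropleth_agg.where(density > 0)  # hypothetical choropleth raster,
                                            # kept only where people live
img = tf.shade(masked, cmap=viridis)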
Here's my input image:
I am plotting histogram of this image using the following code:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('red.jpg')
color = ('b','g','r')
for i,col in enumerate(color):
    histr = cv2.calcHist([img],[i],None,[256],[0,256])
    plt.plot(histr, color=col)
    plt.xlim([0,256])
plt.show()
Here is the plotted histogram output. On the left-hand side is the original histogram, and on the right-hand side is a zoomed version:
My starting point is 255 and my ending point is 0.
All my important data lies in the range 235 to 255, since at 235 the line becomes straight (please see the right-hand side of the histogram).
I want to write Python/OpenCV code that finds where the red line of the histogram becomes straight, and once that value is found (the point after which the line shows minimum deviation), deletes all the remaining pixels from the image. In the above case that means deleting pixels with values 0 to 235. How can this be achieved?
A histogram is basically an array of bins.
For the OpenCV histogram bins you create, you can check the number of values and the mean value in each bin, and compare them with the previous bin (more like a sliding window). If you find the difference to be greater than a threshold, then consider those bins (pixels) the chosen ones.
This is a technique used to identify peaks in a 1D array.
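As a rough sketch of that idea (the window size and threshold are arbitrary illustration values, not tuned ones):
import cv2
import numpy as np

img = cv2.imread('red.jpg')
hist = cv2.calcHist([img], [2], None, [256], [0, 256]).ravel()  # red channel

# Slide downwards from 255 and stop at the first bin where the local
# change drops below a threshold, i.e. where the line "becomes straight".
WINDOW, THRESHOLD = 5, 50.0
cutoff = 0
for b in range(255, WINDOW - 1, -1):
    if abs(hist[b] - hist[b - WINDOW]) / WINDOW < THRESHOLD:
        cutoff = b
        break

img[img < cutoff] = 0  # remove everything below the detected cutoff
cv2.imwrite('red_cut.jpg', img)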
I want to write a script to create an image from a connection matrix. Basically, wherever there is a '1' in the matrix, I want that area to be shaded in the image. For eg -
I created this image using Photoshop. But I have a large dataset so I will have to automate the process. It would be really helpful if anyone could point me in the right direction.
EDIT
The image that I am getting after using the script is this. This is due to the fact that the matrix is large (19 x 19). Is there any way I can increase the visibility of this image so the black and white boxes appear more clearly?
I would suggest using opencv combined with numpy in this case.
Create a two-dimensional numpy.array of dtype='uint8' with 0 for black and 255 for white. For example, to get a 2x2 array with white upper left, white lower right, black lower left and black upper right, you could use this code:
myarray = numpy.array([[255,0],[0,255]],dtype='uint8')
Then you could save that array as an image with cv2 like this:
cv2.imwrite('image.bmp',myarray)
Here every cell of the array is represented by a single pixel. If you want to upscale (so that, for example, every cell is represented by a 5x5 square), you can use the numpy.kron function with the following one-liner:
myarray = numpy.kron(myarray, numpy.ones((5,5), dtype='uint8'))
before writing the image. (The explicit dtype keeps the array uint8; numpy.kron with float ones would otherwise promote it to float.)
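Putting it together for a 19x19 matrix like yours, a minimal end-to-end sketch might look like this (a random matrix stands in for your data, and 1s are drawn as black here; swap the 0 and 255 to invert that):
import numpy
import cv2

matrix = numpy.random.choice([0, 1], size=(19, 19))     # stand-in data
myarray = ((1 - matrix) * 255).astype('uint8')          # 1 -> black, 0 -> white
myarray = numpy.kron(myarray, numpy.ones((20, 20), dtype='uint8'))  # 20x20 px per cell
cv2.imwrite('image.bmp', myarray)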
Maybe you can try this!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm

# Display the matrix
plt.imshow(np.random.choice([0, 1], size=100).reshape((10, 10)), cmap=cm.binary)
plt.show()
With a Seaborn heatmap:
import numpy as np
import seaborn as sns

np.random.seed(3)
sns.set()
data = np.random.choice([0, 1], size=(16,16), p=[3./4, 1./4])
ax = sns.heatmap(data, square=True, xticklabels=False, yticklabels=False, cbar=False, linewidths=.8, linecolor='lightgray', cmap='gray_r')
Note the reverse colormap gray_r to have black for 1's and white for 0's.
I'm practically new to Python and don't have much knowledge of it. I need help converting this pseudocode into Python; it is written to obtain the background of a scene by removing moving objects from a sequence of images. Regarding the pseudocode, I don't understand lines 3, 4 and 5, so maybe once it's converted into Python I can understand it better. In lines 3 and 4, I don't understand what the & does, and in the last line, I don't understand how it is even computing an image.
Any help will be appreciated.
The code is provided below:
Mat sequence[3];// the sequence of images to loop through
Mat output, x = 0, y = 0; // looping through the sequence
matchTemplate(sequence[i], sequence[i+1], output, CV_TM_CCOEFF_NORMED)
mask = 1 & (output>0.9) // get correlated part amongst the images
x += sequence[i] & mask + sequence[i+1] & mask; // accumulate background infer
y += 2*mask; // keep count
end of loop;
Mat bg = x.mul(1.0/y); // average background
Sample images to try are also provided below:
image1
image2
image3
I'm not very familiar with OpenCV, so I hope you'll excuse me if I don't provide a code snippet you can just copy and paste. But if I understand the pseudocode correctly, it is doing this:
sequence = list of images
x will hold sum of backgrounds
y will hold the number of frames used to build x
for each index i in sequence:
    c = matrix of correlation coefficients between (sequence[i], sequence[i+1]) from matchTemplate
    mask = pixels that are highly correlated (90%+)
    x += actual pixels from sequence[i] & mask and sequence[i+1] & mask that are considered background
    y += 2 for every pixel in mask
bg = average of background images x / number of frames y
So what's happening is: for every pair of images, it marks the pixels that are the same in both images. The assumption is that the background doesn't change between adjacent frames while the foreground does. Whether pixels are "the same" is judged by the correlation being >90%. Then it takes all the marked pixels and averages them.
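A minimal Python sketch of that accumulation idea follows. One caveat: cv2.matchTemplate on two same-sized frames returns a single score rather than a per-pixel map, so this sketch substitutes a per-pixel absolute-difference test for the "correlated part" mask; the filenames and threshold are placeholders.
import cv2
import numpy as np

sequence = [cv2.imread(f) for f in ('img1.jpg', 'img2.jpg', 'img3.jpg')]
x = np.zeros_like(sequence[0], dtype=np.float64)       # background accumulator
y = np.zeros(sequence[0].shape[:2], dtype=np.float64)  # per-pixel counts

for a, b in zip(sequence, sequence[1:]):
    diff = cv2.absdiff(a, b).max(axis=2)   # largest per-channel difference
    mask = diff < 10                       # pixels that barely changed
    x[mask] += a[mask].astype(np.float64) + b[mask]
    y[mask] += 2

bg = (x / np.maximum(y, 1)[..., None]).astype(np.uint8)  # average background
cv2.imwrite('background.jpg', bg)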
As one of the commenters mentioned, taking the mean of the images does remove the foreground, but the entire image becomes a little faded. Here is code that does that:
import skimage.io as io
import numpy as np
import matplotlib.pyplot as plt
cim1 = io.imread('https://i.stack.imgur.com/P44wT.jpg')
cim2 = io.imread('https://i.stack.imgur.com/wU4Yt.jpg')
cim3 = io.imread('https://i.stack.imgur.com/yUbB6.jpg')
x, y, z = cim1.shape
newimage = np.copy(cim1)

for row in range(x):
    for col in range(y):
        r = np.mean([cim1[row][col][0], cim2[row][col][0], cim3[row][col][0]]).astype(int)
        g = np.mean([cim1[row][col][1], cim2[row][col][1], cim3[row][col][1]]).astype(int)
        b = np.mean([cim1[row][col][2], cim2[row][col][2], cim3[row][col][2]]).astype(int)
        newimage[row][col] = [r, g, b]

fig, ax = plt.subplots(figsize=(10,10))
ax.axis('off')
ax.imshow(newimage)
The output image I get from this:
A better approach to this problem is to take the median of the three images. The more images the algorithm has, the better the background estimate. Here is a snippet I tried (just replacing mean with median); with more images you can get a much more accurate result.
x, y, z = cim1.shape
newimage = np.copy(cim1)

for row in range(x):
    for col in range(y):
        r = np.median([cim1[row][col][0], cim2[row][col][0], cim3[row][col][0]]).astype(int)
        g = np.median([cim1[row][col][1], cim2[row][col][1], cim3[row][col][1]]).astype(int)
        b = np.median([cim1[row][col][2], cim2[row][col][2], cim3[row][col][2]]).astype(int)
        newimage[row][col] = [r, g, b]

fig, ax = plt.subplots(figsize=(10,10))
ax.axis('off')
ax.imshow(newimage)
The final output:
If you had more images, you could completely remove the foreground. I hope you get the idea and can build upon it.
My code assumes all your images have the same dimensions. The solution will be a bit more complicated if you captured the images from different views. In that case you may have to use a template-matching algorithm (your pseudocode seems to be doing something similar) to extract the common canvas from your images.
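As an aside, the per-pixel loops above are quite slow; a vectorized equivalent (again assuming equal-sized images) would be:
# Stack the images along a new axis and take the per-pixel median.
stack = np.stack([cim1, cim2, cim3], axis=0)
newimage = np.median(stack, axis=0).astype(np.uint8)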
So I have an image and I have a pixel mask for that image, where the mask is the same size as the image and contains values of 0 and 1, where if it is 0 I don't want to modify the image, and if it is 1 I want to add a transparent color over that pixel of the image.
Basically I want to highlight certain segments of the image but still see what is underneath.
Now, I have searched high and low but haven't found a simple way to do this. I used np.where with the mask to get the pixel locations of the 1s to use with the plot functions. I first tried scatter plots with a small marker size and no edge color (small scatter-plot markers in matplotlib are always black), but the markers are not one image pixel in size; they seem to have an absolute size, so depending on the size of the figure the transparency is affected and weird patterns are created by the overlapping markers.
The regular pyplot plot function created the exact look I desired (where the coloring was smooth and invariant to figure size), but it also colored horizontal connections between disjoint segments in the mask (since it is drawing lines, I guess), so I couldn't use that.
What worked best was patches, which I came across in this question: (How to set a fixed/static size of circle marker on a scatter plot?). I found that rectangular patches with a width and height of 1 gave me exactly the desired effect, where I could put a transparent color over certain pixels of the image. However, this produced a ton (tens of thousands) of rectangles for certain images, and so it was quite slow. Even when using a PatchCollection instead of calling add_patch every time, it was still slow.
Now I can probably just join adjacent rectangles to reduce the number of things needing to be drawn, but I was just wondering if there was an easier way to do this?
Thanks.
You can do a semitransparent overlay either using masked arrays or by setting the alpha values in an RGBA image. Here are both worked through (using the example of three semitransparent red squares placed over a circular pattern), and they give similar images (so I'll only show one):
from pylab import *
from numpy import ma
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y # circular pattern
# first, do this with a masked array
figure()
# z4 = 3 diagonal squares
# zm = a uniform image (ones), masked everywhere outside the squares (~z4)
z4 = np.repeat(np.repeat(eye(3, dtype=bool), 40, axis=0), 40, axis=1)
zm = ma.masked_where(~z4, ones((120,120)))
imshow(z3, cmap=cm.jet)
imshow(zm, cmap=cm.bwr, alpha=.3, vmin=0, vmax=1) #cm.bwr is an easy way to get red
# do this by changing alpha for each pixel
figure()
z5 = zeros((120, 120, 4), dtype=float)
z5[..., 0] = 1
z5[..., 3] = .4*z4.astype(float)
imshow(z3, cmap=cm.jet)
imshow(z5)
show()
I think both approaches can produce the same results in all cases, but:
1. masked arrays can be a more direct approach if the mask or composition becomes complicated, and masking gives you more flexibility in drawing the overlay image since, for example, you can use colormaps rather than specifying the full RGBA for every pixel; but
2. the masked-array approach doesn't give full pixel-by-pixel control over the alpha value the way RGBA does.
Here's a more complicated image using masked arrays:
z1 = cos(2*X)
z2 = cos(5*(X+Y))
zm = ma.masked_where((z2<.5) & (Y>0), z1)

figure()
imshow(z3)
imshow(zm, cmap=cm.gray, alpha=.4, vmin=-2, vmax=2)
show()
It's a bit crazy, but here's what's going on: the primary image is a circular pattern that goes from blue to red (z3). Then vertical bars (z1) faintly shade it, but only in half of the figure, and in narrow alternating diagonal bands in the other half (due to the mask).
Just to add on to what tom10 posted: masked arrays do work great with colormaps, but I also wrote a small function in the meantime that should work with any RGB color tuple.
def overlayImage(im, mask, col, alpha):
    maskRGB = np.tile(mask[..., np.newaxis], 3)
    untouched = (maskRGB == False) * im
    overlayComponent = alpha * np.array(col) * maskRGB
    origImageComponent = (1 - alpha) * maskRGB * im
    return untouched + overlayComponent + origImageComponent
im is the RGB image
mask is a boolean mask of the image, such that mask.shape + (3,) == im.shape
col is just the 3-tuple RGB value you want to mask the image with
alpha is just the alpha value / transparency of the mask
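A hypothetical usage, highlighting the masked pixels in red at 40% opacity (the function returns floats, so cast back to uint8 for display):
highlighted = overlayImage(im, mask, (255, 0, 0), 0.4).astype(np.uint8)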
I also needed a clear contour around my areas. Thus you can easily add a contour plot on top: e.g., create a dummy numpy array and set a different value in each area of interest.
Here's an example built on top of tom10's answer with a different condition:
x = y = linspace(-6, 6, 100)
X, Y = meshgrid(x, y)
z3 = X*X + Y*Y  # circular pattern

# first, do this with a masked array
figure()
imshow(z3, cmap=cm.jet, extent=(-6, 6, -6, 6))
zm = ma.masked_where((z3 >= 0.7) & (z3 <= 1.5), ones(np.shape(z3)))
imshow(zm, cmap=cm.bwr, alpha=.4, vmin=0, vmax=1, extent=(-6, 6, -6, 6))  # cm.bwr is an easy way to get red

# Build a dummy array of 1s and 0s (you can play with different values
# to obtain different contours for different regions):
temp_vector = ones(np.shape(z3))
temp_vector[(z3 >= 0.7) & (z3 <= 1.5)] = 0.0
temp_vector[z3 > 8.2] = 2.0  # etc.

# Create the contour. I found only one contour necessary:
contour(X, Y, temp_vector, 1, colors=['r', 'g'])

show()
Which yields: