I'm drawing an image in pyqtgraph, and I'd like to be able to see the grid lines. But the grid lines are always drawn underneath the image, so any black areas of the image obscure the grid. Here's a fairly minimal example:
import matplotlib # necessary for interactive plots in pyqtgraph
import pyqtgraph as pg
import numpy as np
n = 100000
sigma_y = 1e-3
sigma_x = 1e-3
x0 = np.matrix([np.random.normal(0, sigma_x, n), np.random.normal(0, sigma_y, n)])
bins = 30
histogram, x_edges, y_edges = np.histogram2d(np.asarray(x0)[0], np.asarray(x0)[1], bins)
x_range = x_edges[-1] - x_edges[0]
y_range = y_edges[-1] - y_edges[0]
imv = pg.ImageView(view=pg.PlotItem())
imv.show()
imv.setPredefinedGradient('thermal')
imv.getView().showGrid(True, True)
imv.setImage(histogram, pos=(x_edges[0], y_edges[0]), scale=(x_range / bins, y_range / bins))
Here's what I see (after zooming out a little). You can see that the black area of the image obscures the grid lines.
EDIT: it's possible in the GUI to change the black colour to transparent (not my first choice, but an OK workaround for now), so you can see the grid below the image. That works OK but I can't figure out how to do it in code. How do I get the lookup table out of the ImageView to modify it?
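For reference, one way this could be done in code is to build the RGBA lookup table yourself and hand it to the image item. The ramp below is only a rough stand-in for the 'thermal' gradient, and applying it relies on pyqtgraph's `ImageItem.setLookupTable`; treat it as a sketch rather than the exact behaviour of the GUI workaround:

```python
import numpy as np

# Build a 256-entry RGBA lookup table loosely resembling 'thermal':
# black -> red -> yellow -> white, fully opaque everywhere...
n = 256
pos = np.linspace(0, 1, n)
stops = np.array([0.0, 1 / 3, 2 / 3, 1.0])          # gradient stop positions
cols = np.array([[0, 0, 0],                          # black
                 [185, 0, 0],                        # red
                 [255, 220, 0],                      # yellow
                 [255, 255, 255]], dtype=float)      # white
lut = np.empty((n, 4), dtype=np.uint8)
for c in range(3):
    lut[:, c] = np.interp(pos, stops, cols[:, c]).astype(np.uint8)
lut[:, 3] = 255    # opaque by default
lut[0, 3] = 0      # ...except the lowest entry, which becomes transparent

# Applying it to the ImageView (requires a running Qt application):
# imv.getImageItem().setLookupTable(lut)
```

Any value that maps to the first LUT entry then renders fully transparent, letting the grid show through the black areas.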
Here is what I did.
import pyqtgraph

glw = pyqtgraph.GraphicsLayoutWidget()
pw = glw.addPlot(0, 0)

# Fix the axes ticks and grid
for key in pw.axes:
    ax = pw.getAxis(key)
    # Set the grid opacity; grid_is_visible and grid_opacity (in [0, 1])
    # are assumed to be defined elsewhere
    if grid_is_visible:
        ax.setGrid(grid_opacity * 255)
    else:
        ax.setGrid(False)
    # Fix the Z value, putting the grid on top of the image
    ax.setZValue(1)
This did cause another issue, I think. It may have been with the context menu, or it had to do with panning and zooming, because of how Qt was signaling the events: one axis got event priority and prevented the event from propagating, so the other axes didn't pan and zoom. I submitted a pull request to pyqtgraph, so that may no longer be an issue. I can't remember exactly what caused the problem, though, so it may work just fine for you. I was also doing a lot of other things, like changing the background color and the viewbox background color, which caused some small issues.
As a note, I also changed the image's Z value, though you shouldn't have to:
imv.setZValue(1)
I am currently struggling to figure out how to interact in an appropriate way with a Mayavi-rendered scene.
I have a lidar point cloud which gets plotted by the points3d() function. I have also set a bounding box around a car within the point cloud, and I would like to change the color of the points inside the box as soon as I hover with my mouse over the bounding box.
Can you tell me how I can select just the points inside the bbox and change their color?
And my second question: how can I show the same point cloud scene in a 3D view and a bird's-eye view concurrently?
Thank you very much :]
I have found a solution to the color problem; I don't know if it is best practice. But I still need help with determining the points inside the bounding box. I would also like to create a GUI which enables the user to modify the size and orientation of the bounding box (but that is another topic).
import numpy as np
from mayavi.mlab import draw, points3d
from tvtk.api import tvtk
# Primitives
N = 3000  # number of points

# Define a per-point color table (RGB, uint8 in [0, 255]):
# first half yellow, second half blue
colors = np.vstack((np.tile(np.array([[255], [255], [0]]), int(N / 2)).T,
                    np.tile(np.array([[0], [0], [255]]), int(N / 2)).T))

# Define coordinates and create the points
x, y, z = (np.random.random((N, 3)) * 255).astype(np.uint8).T
pts = points3d(x, y, z, scale_factor=10)
pts.glyph.scale_mode = 'scale_by_vector'

# Push the per-point colors directly into the underlying VTK dataset and redraw
sc = tvtk.UnsignedCharArray()
sc.from_array(colors)
pts.mlab_source.dataset.point_data.scalars = sc
draw()
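For the remaining part, determining which points lie inside the bounding box: if the box is axis-aligned, a plain NumPy mask is enough (for an oriented box, you would first transform the points into the box frame). This is a generic sketch; `points_in_aabb` and the corner values are illustrative, not Mayavi API:

```python
import numpy as np

def points_in_aabb(points, bbox_min, bbox_max):
    """Boolean mask of the points inside an axis-aligned bounding box.

    points   : (N, 3) array of x, y, z coordinates
    bbox_min : (3,) lower corner of the box
    bbox_max : (3,) upper corner of the box
    """
    points = np.asarray(points)
    return np.all((points >= bbox_min) & (points <= bbox_max), axis=1)

# Example: recolor the points that fall inside the box
pts_xyz = np.random.random((1000, 3)) * 255
mask = points_in_aabb(pts_xyz, [50, 50, 50], [100, 100, 100])
colors = np.tile(np.array([255, 255, 0, 255], dtype=np.uint8), (len(pts_xyz), 1))
colors[mask] = [255, 0, 0, 255]   # points inside the box become red
```

The resulting `colors` array can then be pushed into `point_data.scalars` exactly as in the snippet above.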
I need to introduce a non-constant alpha value using pcolormesh (imshow is a priori not a possible substitute because I need to use a log scale for the axes, hence non-regular spacing along each coordinate).
Following this post, I tried to change the alpha values of the faces after the fact. However, in the result, I can't get rid of edges that appear.
Here is a minimal example, where I plot a 2D gaussian bump (with very few points), with alpha increasing from the lower left to the upper right corner:
from matplotlib import pyplot as plt
import numpy as np
# start with coordinates, corresponding meshgrid to compute the "shading" value and
# extended coordinate array for pcolormesh (center mesh)
xx = np.linspace(-4,4,7)
xmesh, ymesh = np.meshgrid(xx,xx)
xplot = np.pad(0.5*(xx[1:]+xx[:-1]),1,'reflect',reflect_type="odd") # center & extend
yy = np.exp(-xx[None,:]**2-xx[:,None]**2) # data to plot
# plot the data
fig = plt.figure()
hpc = plt.pcolormesh(xplot, xplot, yy, shading="flat", edgecolor=None)
plt.gca().set_aspect(1)
# change alpha of the faces: lower-left to upper-right gradient
fig.canvas.draw() # this generates the face color array
colors = hpc.get_facecolor()
grad = ( (xmesh.ravel()+ymesh.ravel())/2. - xx.min() ) / ( xx.max()-xx.min() )
colors[:,3] = grad.ravel() # change alpha
hpc.set_facecolor(colors) # update face colors
fig.canvas.draw() # make the modification appear
The result looks like this: the 2D Gaussian bump with alpha increasing from the lower left to the upper right corner, but with visible edges between the faces.
Is it possible to get rid of these edges? My problem is that I don't even know where they come from... I tried adding hpc.set_antialiased(True), hpc.set_rasterized(True), explicitly matching the edges to the faces with hpc.set_edgecolor('face'), and tuning the linewidth to very small values -- none of these worked.
Thanks a lot for your help
The problem is that the squares overlap a tiny bit, and they are somewhat transparent (you're setting their alpha values != 1) -- so at the overlaps, they're less transparent than they should be, and it looks like a line.
You can fix it by making the squares opaque, but giving each one the colour it would have had with its stated transparency over a white background:
def alpha_to_white(color):
    white = np.array([1, 1, 1])
    alpha = color[-1]
    color = color[:-1]
    return alpha*color + (1 - alpha)*white

colors = np.array([alpha_to_white(color) for color in colors])
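As a quick sanity check of that blend, here it is as a standalone sketch: a fully transparent colour should come out white, and half-transparent black should come out mid-grey:

```python
import numpy as np

def alpha_to_white(color):
    # Blend one RGBA colour onto a white background, returning opaque RGB
    white = np.array([1, 1, 1])
    alpha = color[-1]
    return alpha * color[:-1] + (1 - alpha) * white

fully_transparent = alpha_to_white(np.array([0.2, 0.4, 0.6, 0.0]))  # -> white
half_black = alpha_to_white(np.array([0.0, 0.0, 0.0, 0.5]))         # -> mid-grey
```

The resulting opaque RGB array can then be passed back with hpc.set_facecolor(colors) as before.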
I've been working on a web-app to produce images using Bokeh in Python and have been having trouble making images with non-uniform pixel width. The type of behavior that I'd like to have is similar to the NonUniformImage function from the matplotlib.image module, but I want it to be interactive in the browser, which is why I use Bokeh.
The data that I want to plot has a fixed pixel width in the vertical direction, but each column can have a different pixel width. Now, the only way that I could figure out how to make a variable width column in an image plot was to slice each column into its own image and plot them all as separate images with the appropriate widths. While this does plot things with the widths I want, it has a rendering issue in between each of the pixels where white lines show up depending on the level of zoom. These white lines will translate to the saved images as well. I've written up some sample code below:
import numpy as np
from bokeh.models import Range1d
from bokeh.plotting import figure, show, output_file
# Sample Data
img = np.array([[(x/10.*255,y/10.*255,100,255) for x in range(10)] for y in range(10)])
# Convert to RGBA array that can be plotted
d = np.empty((10, 10), dtype=np.uint32)
view = d.view(dtype=np.uint8).reshape((10, 10, 4))
view[:,:,:] = img
# Set output file
output_file("image.html", title="image.py example")
# Setup the figure
rng = Range1d(0,10,bounds='auto')
p = figure(x_range=rng, y_range=rng, plot_width=500, plot_height=500,active_scroll='wheel_zoom')
# Slice the images
imgs = [d[:,n:n+1] for n in range(10)]
dhs = [10 for n in range(10)]
ys = [0 for n in range(10)]
dws = [0.5 if n%2 == 0 else 1.5 for n in range(10)]
xs = [sum(dws[:n]) for n in range(10)]
# Plot the image
p.image_rgba(image=imgs, x=xs, y=ys, dw=dws, dh=dhs)
show(p)
Now, my real data is much denser than this sample data, so the rendering is dominated by the white vertical lines. If you zoom in far enough, you can see that the pixels are right next to each other.
So, my question is the following: Is there a better way to plot a non-uniform image in Bokeh? Something that can take in x,y position information for each pixel would be preferable. Or, is there a way I can get the rendering to work better using this method to avoid the white stripes?
EDIT: It seems that if I give the pixels some overlap then it gets rid of the striping. But there is a limit to that, the overlap needs to be sufficiently large, which seems like a pretty sketchy way of doing things. I'll think about this some more, as it could be a work-around, but I'd like to have a more reasonable solution.
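If you do stay with the overlap workaround, one way to keep it systematic is to widen each slice by a small fraction of its own width, so the amount of overlap scales with the data. eps here is just a tuning knob, not a Bokeh parameter:

```python
# Widen every column slice slightly so neighbours overlap, hiding the
# sub-pixel seams the renderer leaves between separately drawn images.
eps = 0.05  # overlap as a fraction of each column's width; tune to taste

dws = [0.5 if n % 2 == 0 else 1.5 for n in range(10)]
xs = [sum(dws[:n]) for n in range(10)]

# Widen every column except the last; the left edges stay where they were,
# so the total extent of the composite image is unchanged.
dws_overlap = [dw * (1 + eps) for dw in dws[:-1]] + [dws[-1]]

# p.image_rgba(image=imgs, x=xs, y=ys, dw=dws_overlap, dh=dhs)
```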
I have a numpy array A, having a shape of (60,60,3), and I am using:
plt.imshow(A,
           cmap=plt.cm.gist_yarg,
           interpolation='nearest',
           aspect='equal',
           shape=A.shape)
plt.savefig('myfig.png')
When I examine the myfig.png file, I see that it is 1200x800 pixels (in color).
What's going on here? I was expecting a 60x60 RGB image.
matplotlib doesn't work directly with pixels, but rather with a figure size (in inches) and a resolution (dots per inch, dpi).
So, you need to explicitly give a figure size and dpi. For example, you could set your figure size to 1x1 inches, then set the dpi to 60, to get a 60x60 pixel image.
You also have to remove the whitespace around the plot area, which you can do with subplots_adjust:
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(1,1))
A = np.random.rand(60,60,3)
plt.imshow(A,
           cmap=plt.cm.gist_yarg,
           interpolation='nearest',
           aspect='equal',
           shape=A.shape)
plt.subplots_adjust(left=0,right=1,bottom=0,top=1)
plt.savefig('myfig.png',dpi=60)
That creates this figure:
Which has a size of 60x60 pixels:
$ identify myfig.png
myfig.png PNG 60x60 60x60+0+0 8-bit sRGB 8.51KB 0.000u 0:00.000
You might also refer to this answer, which has lots of good information about figure sizes and resolutions.
plt.savefig renders a picture from your data and has a dpi (dots per inch) option, which is set to some default value. You can change the resolution of your figure by calling plt.savefig('myfig.png', dpi=100).
The core matplotlib engine is highly abstract, while the renderers are concrete.
As an initial remark, the matplotlib toolbox helps to create a lot of 2D/3D plotting, and also supports composing smart overlays from pixmap datasets
("pictures" can be thought of as {3D [for RGB] | 4D [for RGBA]}-colourspace data in pixmap datasets, with a 2D [x, y] mapping of colours onto "2D paper").
So one can overlay / view / save pixmap-dataset pictures via matplotlib methods.
How to make pixmap size settings, then?
For "picture" objects, which are rather "special" compared to the unconstrained world of unlimited numerical precision and any-depth level of detail in matplotlib, there are a few settings that come into account at the very end of the processing lifecycle: at the output-generation moment.
So,
matplotlib can instruct its (pre)selected renderer to produce the final graphical output at a specific typographic density, aka dpi = 60 dots per inch, and also give an overall sizing, figsize = (1, 1), which makes your 60 x 60 image exactly 60x60 pixels (once you spend a bit more effort to force matplotlib to disable all edges and other layout-specific surroundings).
The overlay composition may also use the .figimage() method, where one can additionally specify xo = x_PX_OFFSET and yo = y_PX_OFFSET details about where to start placing the picture data within a given figsize * dpi pixel-mapped area.
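A minimal sketch of that .figimage() overlay path (the xo/yo offsets are in pixels; the Agg backend is used here only so the example runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; swap for an interactive one as needed
import matplotlib.pyplot as plt
import numpy as np

dpi = 60
fig = plt.figure(figsize=(2, 2), dpi=dpi)   # a 120 x 120 pixel canvas
A = np.random.rand(60, 60, 3)

# Place the 60x60 pixmap 30 px right and 30 px up from the lower-left corner,
# bypassing the axes / data-coordinate machinery entirely.
fig.figimage(A, xo=30, yo=30)
fig.savefig('overlay.png', dpi=dpi)
```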
I am new to matplotlib and python and would like to display an image so that 1 pixel of the image is actually represented by 1 pixel in the figure. In MATLAB, this is achieved with the command truesize(). How can I do this in Python?
I tried playing around with the imshow() arguments as well as set_dpi() and set_figwidth()/set_figheight(), but with no luck.
Thanks.
If you want to create images right down to the pixel level, why not use PIL in the first place? That way you wouldn't have to programmatically calculate your true drawing area by subtracting margins, labels, and axis widths from the figure extent.
This hack does what I wanted to do, though it's still not perfect:
import matplotlib.pyplot as mplt

h = mplt.imshow(img, interpolation='nearest')
dpi = h.figure.get_dpi()
# img.shape is (rows, cols), so width comes from shape[1], height from shape[0]
h.figure.set_figwidth(img.shape[1] / dpi)
h.figure.set_figheight(img.shape[0] / dpi)
h.figure.canvas.resize(img.shape[1] + 1, img.shape[0] + 1)
h.axes.set_position([0, 0, 1, 1])   # let the axes fill the whole figure
h.axes.set_xlim(-1, img.shape[1])
h.axes.set_ylim(img.shape[0], -1)   # inverted y-axis, as usual for images
It can be generalized to account for a margin around the axes holding the image.
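For completeness, here is a version of the same idea without the resize hack: size the figure from the image shape and give the axes the full canvas. This is just the standard dpi arithmetic, not a matplotlib built-in equivalent of MATLAB's truesize():

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the example
import matplotlib.pyplot as plt
import numpy as np

img = np.random.rand(48, 64, 3)   # rows x cols: 48 px tall, 64 px wide
dpi = 100
h, w = img.shape[:2]

# Size the figure so one image pixel maps to one output pixel, with the
# axes covering the whole canvas and all decorations switched off.
fig = plt.figure(figsize=(w / dpi, h / dpi), dpi=dpi)
ax = fig.add_axes([0, 0, 1, 1])
ax.set_axis_off()
ax.imshow(img, interpolation='nearest')
fig.savefig('truesize.png', dpi=dpi)
```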