PyVista plotter unable to produce background image for multiple subplots - python

I am trying to render a mesh in Python using pyvista.Plotter() while also showing images alongside the rendered mesh. The code currently looks like this:
import pyvista as pv
from pyvista import examples

filenames = ['filename1.jpg', 'filename2.jpg', 'filename3.jpg']
mesh = pv.PolyData('meshfile.ply')

p = pv.Plotter(shape='1|3')
p.subplot(0)
p.add_mesh(mesh)

t = 1
for i in filenames:
    p.subplot(t)
    p.add_background_image(i)
    #p.add_mesh(examples.load_airplane(), show_edges=False)
    t += 1
I thought Plotter.add_background_image() would be the most convenient way to show images with PyVista. The commented-out line in the for loop actually produces the right arrangement, but I would like the smaller plots to have background images rather than another mesh. However, only the final image file is actually shown, and it ends up as the background image of p.subplot(0), which should not have a background image at all. Is there a more convenient way of displaying images alongside a PyVista 3D-rendered window?

Looking at the documentation of Plotter.add_background_image():
add_background_image(image_path, scale=1, auto_resize=True, as_global=True)
as_global (bool, optional) – When multiple render windows are present, setting as_global=False will cause the background to only appear in one window.
So you might just have to call the method as
p.add_background_image(i, as_global=False)
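Putting that back into the loop from the question (a sketch; the filenames and mesh file are the placeholders from above):
t = 1
for i in filenames:
    p.subplot(t)
    p.add_background_image(i, as_global=False)
    t += 1
p.show()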

Related

plot image with interpolation in Python Bokeh like matplotlib?

Is there any way to plot a 2D array as an image using Bokeh, with interpolation like in Matplotlib? I am able to plot using an example: https://docs.bokeh.org/en/latest/docs/gallery/image.html
However, the image is too coarse. I like the way interpolation works in Matplotlib: https://matplotlib.org/gallery/images_contours_and_fields/interpolation_methods.html
I tried to perform the interpolation beforehand, but then the matrix size is too big.
I had the same issue and found the answer in PyViz's Gitter.
The solution combines HoloViews and Datashader:
import holoviews as hv
from holoviews import opts
from holoviews.operation.datashader import regrid

img = hv.Image(data)  # data is the 2D array you want to display
regrid(img, upsample=True, interpolation='bilinear')
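To make that snippet self-contained, here is a sketch with some toy data (the random array and the hv.extension call are additions for illustration, not part of the original answer):
import numpy as np
import holoviews as hv
from holoviews.operation.datashader import regrid

hv.extension('bokeh')                  # render with the Bokeh backend

data = np.random.rand(20, 20)          # toy 2D array standing in for your data
img = hv.Image(data)
regrid(img, upsample=True, interpolation='bilinear')  # displays in a notebook cell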
If you are working with a large dataset then you could try Bokeh in combination with Datashader/HoloViews like in this example. When zooming in, Datashader can dynamically create new high quality images from your data that could be displayed in your Bokeh plot.
Not an answer, but an observation: I've noticed that when plotting an image via an image_url source it appears interpolated when zoomed in, whereas if you read in the same image and display it from a ColumnDataSource via 'image', it appears blocky when zoomed. I'd love to know how to make it appear interpolated when zoomed too, e.g. like the raw PNG image appears. HoloViews/Datashader would be a great solution, but in my case I need it to work offline as a standalone HTML file.

Rendering 2D images from STL file in Python

I would like to load an STL file and produce a set of 2D images in different rotations.
I got the basics working with numpy-stl based on this example and ended up with this code:
from stl import mesh
from mpl_toolkits import mplot3d
from matplotlib import pyplot
filename = '3001.stl'
# Create a new plot
figure = pyplot.figure()
axes = figure.add_subplot(projection='3d')  # figure.gca(projection='3d') was removed in newer Matplotlib
# Load the STL files and add the vectors to the plot
mesh = mesh.Mesh.from_file(filename)
axes.add_collection3d(mplot3d.art3d.Poly3DCollection(mesh.vectors, color='lightgrey'))
#axes.plot_surface(mesh.x,mesh.y,mesh.z)
# Auto scale to the mesh size
scale = mesh.points.flatten()
axes.auto_scale_xyz(scale, scale, scale)
#turn off grid and axis from display
pyplot.axis('off')
#set viewing angle
axes.view_init(azim=120)
# Show the plot to the screen
pyplot.show()
This works, except that I end up with a silhouette of the component lacking a lot of the detail. The picture below is of a Lego brick...
I tried highlighting the edges, but that is sensitive to how the model was created, which is not great for me.
I was hoping that adding lighting and the resulting shadows could bring back the missing detail, but I can't find a way to do that.
Any idea how to add a light source to the code above to create shadows?
After getting tired of Mayavi's install disasters, I ended up writing my own library for this:
https://github.com/bwoodsend/vtkplotlib
Your code would be something like
import vtkplotlib as vpl
from stl.mesh import Mesh
path = "your path here.stl"
# Read the STL using numpy-stl
mesh = Mesh.from_file(path)
# Plot the mesh
vpl.mesh_plot(mesh)
# Show the figure
vpl.show()
If you want the brick to be blue you can replace the mesh_plot with
vpl.mesh_plot(mesh, color="blue")
If you don't find Mayavi helpful, you could try Panda3D, which is intended for graphics/3D rendering applications. I find it quite straightforward for doing simple stuff like this.
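Not from the answers above, but if you want to stay with Matplotlib, one rough way to fake lighting is to shade each face by the angle between its normal (which numpy-stl exposes as mesh.normals) and an arbitrary light direction. A sketch of that idea:
import numpy as np
from stl import mesh
from mpl_toolkits import mplot3d
from matplotlib import pyplot

your_mesh = mesh.Mesh.from_file('3001.stl')

# Unit face normals from numpy-stl
normals = your_mesh.normals
normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)

# Arbitrary light direction, also normalized
light = np.array([1.0, -1.0, 2.0])
light = light / np.linalg.norm(light)

# Lambertian-style intensity per face, clipped so back faces stay visible
intensity = np.clip(normals @ light, 0.2, 1.0)
face_colors = np.outer(intensity, np.ones(3))  # grey shade per face

figure = pyplot.figure()
axes = figure.add_subplot(projection='3d')
collection = mplot3d.art3d.Poly3DCollection(your_mesh.vectors)
collection.set_facecolor(face_colors)
axes.add_collection3d(collection)

scale = your_mesh.points.flatten()
axes.auto_scale_xyz(scale, scale, scale)
pyplot.axis('off')
pyplot.show()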

I want to dynamically plot a histogram in the pygame window itself where I am showing an animation. Is there any possible way to do this?

I have a Van der Waals gas simulation where we show real-time collisions between gas molecules. I am doing that using Pygame and it works fine. However, I want one half of the Pygame window to show the real-time collisions while the other half plots a dynamic histogram for every time step. So far, I haven't come across any code that allows plotting in the same Pygame window where some other simulation is going on.
Using a custom ipywidget to display incrementally updated simulation images
I have been using Jupyter for all my needs for displaying results from simulations, and I have had reasonable success. Here is what I would do in your case.
I am not an expert on the capabilities of Pygame. It looks like in each loop iteration you run a simulation step, get the simulation state, and feed it to the Pygame scene building to update and render the state for that step.
Assume that MyPyGameRenderer is your Python class whose MyPyGameRenderer.produce_rendering(simul_state=None) method does the rendering. I would alter this routine as follows:
class MyPyGameRenderer(object):
    def produce_rendering_as_a_png_img(self, simul_state=None):
        myRenderPngByteData = None
        # Using pygame, interpret the simulation state and produce the rendering
        # as PNG image byte data, then set it to myRenderPngByteData
        return myRenderPngByteData
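In case it helps, one way to get that PNG byte data out of a pygame Surface is to save the Surface into an in-memory buffer (a sketch assuming pygame 2.x; surface_to_png_bytes is a hypothetical helper, not part of the class above):
import io
import pygame

def surface_to_png_bytes(surface):
    buffer = io.BytesIO()
    # The name hint tells pygame to encode the buffer as PNG (pygame 2.x)
    pygame.image.save(surface, buffer, "frame.png")
    return buffer.getvalue()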
I would install Jupyter, make a notebook with a Python 2.7 kernel, put the above code and the simulation code there, and show the PNG image produced.
If what I said above is what you are doing, and your objective is to arrange GUI elements (in your case, displaying two windows side by side), I would be tempted to use ipywidgets. ipywidgets lets you assemble GUI elements in a rudimentary way. Its GUI composition capabilities are cruder than the sophisticated JavaScript GUI renderings you see in a web page, but in my experience they are sufficient for demonstrating scientific results. Here are the steps I would take.
To achieve what you describe, I would still use Pygame to do the scene compositing and rendering for each simulation step.
I would construct a custom ipywidget called MyCustomIncrementalImageWidget with a value member that holds a PNG image as a data URL. The rendering routine of the custom ipywidget just displays the PNG image that was set as a data URL. I can display any PNG image in my MyCustomIncrementalImageWidget as follows:
import base64

# Read some PNG image and set its byte data here
my_random_png_img_byte_data = None

myImage = MyCustomIncrementalImageWidget()
dataurl = "data:image/png;base64," + base64.b64encode(my_random_png_img_byte_data)
myImage.value = dataurl
display(myImage)
Now the above code will show your PNG image in the ipywidgets framework. This gives you the following advantages:
You can compose this myImage with the other GUI elements available in the ipywidgets framework and build a more complex GUI. In your case, I would have two instances of MyCustomIncrementalImageWidget, one holding the rendering of the scene and the other holding the histogram.
You can compose these two custom ipywidgets using widgets.HBox, which renders them horizontally side by side.
You need only one call to display your GUI elements, and it can sit outside your simulation loop. In short, the GUI display is detached from your simulation code; all you need to do in each simulation step is update the value member of each of the two custom widgets.
Putting it together, here is my high-level code:
import base64
from ipywidgets import widgets

class MyCustomIncrementalImageWidget(widgets.DOMWidget):
    # Your implementation of the custom ipywidget
    pass

def get_dataurl_from_imagedata(img_png_byte_data=None):
    dataurl = "data:image/png;base64," + base64.b64encode(img_png_byte_data)
    return dataurl

class MyPyGameRenderer(object):
    def produce_rendering_as_a_png_img(self, simul_state=None):
        myRenderPngByteData = None
        # Using pygame, interpret the simulation state and produce the rendering
        # as a PNG image, then set the byte data to myRenderPngByteData
        return myRenderPngByteData

    def produce_histogram_as_a_png_img(self, simul_state=None):
        myHistPngByteData = None
        # Interpret the simulation data, produce the histogram as a PNG image
        # and set the image byte data to myHistPngByteData
        return myHistPngByteData

    def update(self, simul_state=None):
        r = self.produce_rendering_as_a_png_img(simul_state=simul_state)
        h = self.produce_histogram_as_a_png_img(simul_state=simul_state)
        return r, h

my_renderer = MyPyGameRenderer()
my_collision_render_widget = MyCustomIncrementalImageWidget()
my_histogram_widget = MyCustomIncrementalImageWidget()
top_level_hbox = widgets.HBox(children=(my_collision_render_widget,
                                        my_histogram_widget))
My simulation code would be as follows:
while True:
    simul_state = None
    # Do your simulation, get the simulation state and set it to simul_state
    r, h = my_renderer.update(simul_state=simul_state)
    my_collision_render_widget.value = get_dataurl_from_imagedata(r)
    my_histogram_widget.value = get_dataurl_from_imagedata(h)
And wherever I want to display this widget combination, I would do:
from IPython.display import display
display(top_level_hbox)
Please take a look at my custom ipywidget implementation here. My class is called ProgressImageWidget.

How to display axes on an image with opencv in python

Is there a way to show the row and column axes when displaying an image with cv2.imshow()? I am using the Python bindings for OpenCV 3.0.
Not that I am aware of.
However, since you are using Python, you are not constrained to the rudimentary plotting capabilities of OpenCV HighGUI.
Instead, you can use the much more capable matplotlib library (or any of the other available Python plotting libraries).
To plot an image, including a default axis, you do:
import matplotlib.pyplot as plt
plt.imshow(image, interpolation='none') # Plot the image, turn off interpolation
plt.show() # Show the image window
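If you are loading the image with OpenCV, keep in mind that cv2.imread returns BGR data, so convert to RGB before handing it to matplotlib (a minimal sketch; 'example.jpg' is a placeholder path):
import cv2
import matplotlib.pyplot as plt

image = cv2.imread('example.jpg')                # OpenCV loads images as BGR
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)   # convert for matplotlib
plt.imshow(image, interpolation='none')
plt.show()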
I'm not sure I fully understand the question due to lack of info.
However, you can use OpenCV's line-drawing function to draw a line from, say, (10,10) to (10,190), and another from (10,190) to (190,190).
On an example image that is 200x200 pixels, this draws a line down the left-hand side of the image and a line along the bottom. You can then draw numbers, or whatever you want, along these lines at increments of X pixels.
Drawing text/numbers on an image is similar to drawing a line.
Once you have drawn on the image, show it with the usual cv2.imshow().
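For instance, a minimal sketch along those lines (the 200x200 canvas and the tick spacing are arbitrary choices for illustration):
import cv2
import numpy as np

# Hypothetical 200x200 white canvas standing in for your image
img = np.full((200, 200, 3), 255, dtype=np.uint8)

# Left-hand "y axis" and bottom "x axis"
cv2.line(img, (10, 10), (10, 190), (0, 0, 0), 1)
cv2.line(img, (10, 190), (190, 190), (0, 0, 0), 1)

# Tick labels every 50 pixels along the bottom axis
for x in range(10, 191, 50):
    cv2.putText(img, str(x), (x, 199), cv2.FONT_HERSHEY_PLAIN, 0.7, (0, 0, 0), 1)

cv2.imshow('axes', img)
cv2.waitKey(0)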
See OpenCV's drawing documentation here:
http://docs.opencv.org/modules/core/doc/drawing_functions.html
And an example to get you going can be found here:
http://opencvexamples.blogspot.com/2013/10/basic-drawing-examples.html#.VMj-bUesXuM
Hope this helps.

Saving images in Python at a very high quality

How can I save Python plots at very high quality?
That is, how can I make sure that when I keep zooming in on the object saved in a PDF file, there isn't any blurring?
Also, what would be the best format to save it in?
PNG, EPS? Or some other? I can't do PDF, because a hidden number gets produced that messes with the Latexmk compilation.
If you are using Matplotlib and are trying to get good figures in a LaTeX document, save as an EPS. Specifically, try something like this after running the commands to plot the image:
plt.savefig('destination_path.eps', format='eps')
I have found that EPS files work best and the dpi parameter is what really makes them look good in a document.
To specify the orientation of the figure before saving, simply call the following before the plt.savefig call, but after creating the plot (this applies to 3D axes; it assumes you have plotted using an axes named ax):
ax.view_init(elev=elevation_angle, azim=azimuthal_angle)
Here elevation_angle is a number (in degrees) specifying the polar angle (measured down from the vertical z axis) and azimuthal_angle specifies the azimuthal angle (around the z axis).
I find it easiest to determine these values by first plotting the image, then rotating it and watching the current values of the angles appear towards the bottom of the window, just below the actual plot. Keep in mind that the x, y, z positions appear by default, but they are replaced with the two angles once you start to click, drag, and rotate the image.
Just to add my results, also using Matplotlib.
.eps made all my text bold and removed transparency. .svg gave me high-resolution pictures that actually looked like my graph.
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# Do the plot code
fig.savefig('myimage.svg', format='svg', dpi=1200)
I used 1200 dpi because a lot of scientific journals require images in 1200 / 600 / 300 dpi, depending on what the image is of. Convert to desired dpi and format in GIMP or Inkscape.
Obviously the dpi doesn't matter for .svg, since it is a vector format with effectively infinite resolution.
You can save a figure that is 1920x1080 (i.e. 1080p) using:
fig = plt.figure(figsize=(19.20, 10.80))
Figure sizes are given in inches, so with Matplotlib's default dpi of 100 this comes out to 1920x1080 pixels. You can also go much higher or lower. The above solutions work well for printing, but these days you often want the created image to go into a PNG/JPG or appear in a wide-screen format.
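As a concrete sketch (the output filename and the stand-in plot are placeholders), saving at the figure's default dpi keeps the 1920x1080 pixel size:
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(19.20, 10.80))  # 19.20 x 10.80 inches
plt.plot([0, 1], [0, 1])                  # stand-in plot
fig.savefig('fullhd.png', dpi=100)        # 100 dpi * figure size = 1920x1080 px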
Okay, I found that spencerlyon2's answer works. However, in case anybody finds themselves not knowing what to do with that one line, I had to do it this way:
beingsaved = plt.figure()
# Some scatter plots
plt.scatter(X_1_x, X_1_y)
plt.scatter(X_2_x, X_2_y)
beingsaved.savefig('destination_path.eps', format='eps', dpi=1000)
In case you are working with seaborn plots, instead of Matplotlib, you can save a .png image like this:
Let's suppose you have a matrix object (either Pandas or NumPy), and you want to take a heatmap:
import seaborn as sb
image = sb.heatmap(matrix) # This gets you the heatmap
image.figure.savefig("C:/Your/Path/ ... /your_image.png") # This saves it
This code is compatible with the latest version of Seaborn. Other code around Stack Overflow worked only for previous versions.
Another way I like is this. I set the size of the next image as follows:
plt.subplots(figsize=(15,15))
And then later I plot the output in the console, from which I can copy-paste it where I want. (Since Seaborn is built on top of Matplotlib, there will not be any problem.)
