I'm trying to show a 3D image (a sphere) with a texture that contains some information. I need to rotate and zoom in/out the image.
I just came across glumpy and I saw some examples that are very helpful (especially the Earth rendering example at https://github.com/glumpy/glumpy/blob/master/examples/earth.py).
However, so far I haven't been able to find a single example that zooms the image in/out. Does anybody know whether that's possible? I'm starting to think it isn't, but that's somewhat hard to believe. I would really appreciate an example of how to do it (or confirmation from somebody who knows that it's impossible). I only discovered glumpy last night, so the more complete the example, the better.
Thanks a lot!
EDIT: As far as I have seen, both the Trackball and Arcball classes (which I use for the 3D sphere) have an on_mouse_scroll method which should already zoom in/out when the mouse wheel is turned. However, that method is never called when I turn the wheel. I'm not sure whether this has something to do with the messages I get in the console when I execute the program:
[w] Backend (<module 'glumpy.app.window.backends.backend_glfw' from 'C:\\Python37\\lib\\site-packages\\glumpy\\app\\window\\backends\\backend_glfw.py'>) not available
[w] Backend (<module 'glumpy.app.window.backends.backend_pyglet' from 'C:\\Python37\\lib\\site-packages\\glumpy\\app\\window\\backends\\backend_pyglet.py'>) not available
Any ideas? I'm using Windows 10 and Python 3.7.
The problem was that I was missing the GLFW DLL. I could create the sphere and rotate it, but I couldn't zoom in/out, and I hadn't paid much attention to the couple of warnings/errors I got when executing the application since everything else somehow seemed to work alright.
As jdehesa pointed out in his comments, I had not properly followed the installation steps shown in Step-by-step install for x64 bit Windows 7,8, and 10.
Now it works. Thanks jdehesa!
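For anyone who hits the same wall: once the backend loads, the zoom comes for free as long as the Trackball transform is attached to the window. Below is a minimal sketch of that wiring, assuming the transform-snippet pattern used in the glumpy examples (the flat quad and trivial shaders are just placeholders, not the sphere):

```python
from glumpy import app, gl, gloo
from glumpy.transforms import Trackball, Position

vertex = """
attribute vec3 position;
void main() { gl_Position = <transform>; }
"""
fragment = """
void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }
"""

window = app.Window(width=800, height=600)
quad = gloo.Program(vertex, fragment, count=4)
quad['position'] = [(-1, -1, 0), (-1, +1, 0), (+1, -1, 0), (+1, +1, 0)]
quad['transform'] = Trackball(Position("position"))

# This is the crucial line: attaching the transform routes the window's
# events (on_mouse_drag for rotation, on_mouse_scroll for zoom) to it.
window.attach(quad['transform'])

@window.event
def on_draw(dt):
    window.clear()
    quad.draw(gl.GL_TRIANGLE_STRIP)

app.run()
```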
I've been playing around with pythreejs, and, while it seems to be a good solution to the problem of visualizing 3D graphics in a jupyter notebook, I haven't been able to find any documentation about what jupyter is actually doing under the hood or what API exists for managing the widget. Currently, when I make a pythreejs plot (e.g., by calling display() on a pythreejs.Renderer object), I get a tiny little output window. How can I edit the size (and other properties) of this window? How can I see what the properties are?
Thanks!
I discovered by experimentation that this can be controlled by passing the width and height parameters to the pythreejs.Renderer constructor. I would, however, appreciate any answer that points me toward better documentation for pythreejs or some philosophy regarding why/how certain aspects of the three.js API were modified for Python's API.
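For reference, here is a minimal sketch of that constructor-argument approach; the scene contents are arbitrary placeholders, and the width/height keywords are the part that actually answers the question:

```python
import pythreejs as p3
from IPython.display import display

# A throwaway scene; the point is the width/height kwargs on Renderer.
ball = p3.Mesh(geometry=p3.SphereGeometry(radius=1),
               material=p3.MeshLambertMaterial(color='red'))
key_light = p3.DirectionalLight(color='white', position=[3, 5, 1])
camera = p3.PerspectiveCamera(position=[3, 3, 3], children=[key_light])
scene = p3.Scene(children=[ball, camera, p3.AmbientLight(color='#777777')])

renderer = p3.Renderer(camera=camera, scene=scene,
                       controls=[p3.OrbitControls(controlling=camera)],
                       width=800, height=600)   # size of the output window
display(renderer)

# Renderer is an ipywidgets widget, so its synced properties can be listed,
# which is the closest thing I've found to property documentation:
print(renderer.keys)
```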
I'm wondering if it is possible to plot a vertex as an image (loaded from a file or created directly) in igraph. Any ideas?
This is definitely possible in the R version of igraph using the raster function; however, a brief search did not reveal any implementation of this function in Python (it's not in the igraph documentation, at any rate).
If this is essential to your work, I would consider switching to R, or possibly another tool such as Gephi. For Python, however, you might consider something like pyvis. This package is small but powerful in terms of visualization. I've been playing around with it over the past few days, and it's very easy to display a graph with pictures as nodes; it comes with the added benefit of being interactive. Take a look at the tutorial here, which highlights what this package can provide; a minimal example is sketched below.
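To illustrate, a minimal sketch of pictures-as-nodes in pyvis; the image URLs are placeholders, and shape='image' is the vis.js node option that pyvis forwards:

```python
from pyvis.network import Network

net = Network(height='600px', width='800px')

# shape='image' tells the underlying vis.js renderer to draw the node
# as the picture found at `image` (a URL or path the browser can load).
net.add_node(1, label='A', shape='image', image='https://example.com/a.png')
net.add_node(2, label='B', shape='image', image='https://example.com/b.png')
net.add_edge(1, 2)

net.show('graph.html')   # writes and opens an interactive HTML page
```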
It's all in the title: I would like to make red-cyan anaglyphs (you know, the pictures you look at through coloured glasses to see in 3D) of simple shapes (like points3d plots) with Mayavi. Is there such a feature? Otherwise, would you have any advice for implementing it?
EDIT: Okay, that was simple: just hit '3' in the interactive window and this turns the stereoscopic mode on. However, I'd be interested in ways to configure this option, which does not seem to be documented.
Yes, the interactive renderer is very poorly documented. A lot of Mayavi is badly documented, but at least the code is often well written enough that you can figure things out from it.
Programmatically, you can adjust it by editing scene.render_window.stereo_render.
The source code of the tvtk InteractiveRenderStyle also contains the following comment:
Some systems support crystal Eyes LCD stereo glasses; you have to invoke
set_stereo_type_to_crystal_eyes() on the rendering window.
For more configuration, you'd probably have to read the tvtk source.
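Putting that together, here is a minimal sketch: stereo_render comes straight from the answer above, while set_stereo_type_to_red_blue is the red/blue-anaglyph counterpart of the crystal-eyes setter quoted from the tvtk source (VTK also exposes set_stereo_type_to_anaglyph):

```python
import numpy as np
from mayavi import mlab

# Some throwaway geometry to look at through the coloured glasses.
x, y, z = np.random.rand(3, 100)
fig = mlab.figure()
mlab.points3d(x, y, z, scale_factor=0.05)

# Equivalent to pressing '3' in the interactive window:
rw = fig.scene.render_window
rw.set_stereo_type_to_red_blue()   # see also set_stereo_type_to_anaglyph()
rw.stereo_render = True

mlab.show()
```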
I have a wx.ScrolledWindow that is drawn on using cairo. I have implemented a zoom functionality which currently redraws the whole content.
But as there will be up to 200 curves to draw, I should consider a more performant solution.
I have thought of these:
Buffering images for the zoom factors -1/+1 (memory-consuming)
Using librsvg and buffering an SVG image (I have read something about this; does librsvg work under Windows too?)
Storing the cairo.Context after drawing groups of curves, and restoring it on zoom (just an idea... is that possible?)
Are there other possibilities, and: what is the best solution?
Thanks a lot
Not really a concrete answer to your question, but I was faced with the same problem and just switched to matplotlib, where zoom and pan functions are already implemented. I'm not sure, though, whether it is super performant; I have the feeling my program ran more smoothly before.
I also tried out floatcanvas and floatcanvas2 but was not really happy with both of them.
If you're double-buffering anyway, why not do a quick bitmap scale as a "preview" while waiting for the newly redrawn vector image? I confess I don't know how to do this, but if you can make it work, it should work! :)
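A rough sketch of that idea, assuming the last full-quality cairo render is kept in a wx.Bitmap called self._buffer (all names here are hypothetical):

```python
import wx

class CurvePanel(wx.ScrolledWindow):
    """Sketch only: assumes self._buffer holds the last full cairo render."""

    def on_zoom_changed(self, zoom):
        # Quick preview: scale the cached bitmap instead of re-running cairo.
        w, h = self._buffer.GetSize()
        preview = self._buffer.ConvertToImage().Scale(
            int(w * zoom), int(h * zoom), wx.IMAGE_QUALITY_NORMAL)
        dc = wx.ClientDC(self)
        dc.DrawBitmap(wx.Bitmap(preview), 0, 0)

        # Kick off the real (slow) cairo redraw shortly afterwards; when it
        # finishes it replaces self._buffer and repaints at full quality.
        wx.CallLater(200, self.redraw_with_cairo, zoom)  # hypothetical method
```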
I am trying to detect a marker in a webcam video feed and overlay it with a 3d object - pretty much exactly like this: http://www.morethantechnical.com/2009/06/28/augmented-reality-with-nyartoolkit-opencv-opengl/
I know artoolkit is the best module for this, but I was hoping to just use OpenCV in Python, since I don't know nearly enough C/C++ to be able to use artoolkit. I'm hoping someone can get me on the right track towards detecting the marker and determining its location and orientation, since I have no idea how best to go about this or which functions I should be using.
OpenCV doesn't have marker detection/tracking functionality out of the box. However, it provides all the algorithms needed, so it's fairly easy to implement your own.
The article you are referring to uses OpenCV only for video grabbing. The marker detection is done by NyARToolkit, which is derived from ARToolkit. NyARToolkit has versions for Java, C# and ActionScript.
ARToolkit is mostly written in plain C without using fancy C++ features. It's probably easier to use than you thought. The documentation contains well-explained tutorials, e.g. http://www.hitl.washington.edu/artoolkit/documentation/devstartup.htm
The introductory documentation can help you understand the process of marker detection even if you decide not to use ARToolkit.
I think the most common way to perform marker detection with Python and OpenCV is to use SURF descriptors.
I have found this video, and the code linked on this page, very useful. Here you can download the code. I don't know how to overlay it with a 3D object, but I'm sure you can do something with pygame or matplotlib.
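As a starting point, here is a rough sketch of the detect-match-homography approach. Note that I've used ORB instead of SURF, since SURF is patented and lives in the separate opencv-contrib xfeatures2d module; the overall structure is the same. marker.png stands in for your marker image:

```python
import cv2
import numpy as np

# Reference image of the marker we want to find in the video feed.
marker = cv2.imread('marker.png', cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()
kp_ref, des_ref = orb.detectAndCompute(marker, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)

    if des is not None:
        # Keep the 30 strongest matches between the marker and the frame.
        matches = sorted(matcher.match(des_ref, des),
                         key=lambda m: m.distance)[:30]
        if len(matches) >= 4:
            src = np.float32([kp_ref[m.queryIdx].pt
                              for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp[m.trainIdx].pt
                              for m in matches]).reshape(-1, 1, 2)
            # The homography maps marker corners into the frame; decomposing
            # it would give the pose needed to place a 3D object on top.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                h, w = marker.shape
                corners = np.float32([[0, 0], [w, 0],
                                      [w, h], [0, h]]).reshape(-1, 1, 2)
                outline = cv2.perspectiveTransform(corners, H)
                cv2.polylines(frame, [np.int32(outline)],
                              True, (0, 255, 0), 3)

    cv2.imshow('marker', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```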