I want to create an arrow that can be controlled with the keyboard.
Rotations in the xz-plane work great, but I can't get it to rotate about its own z-axis.
So I don't want to use the scene's axes; I want an axis relative to the arrow itself!
from visual import *
from threading import Thread
class Pfeil(frame, Thread):
    "models an arrow"

    def __init__(self, pos=(0,0,0), axis=(1,0,0)):
        frame.__init__(self, pos=pos, axis=axis)
        Thread.__init__(self)
        self.pointer = arrow(frame=self, pos=(0,2,1), axis=(5,0,0), shaftwidth=1)

    def tasten(self):
        "handles keyboard input"
        if scene.kb.keys:
            taste = scene.kb.getkey()
            if taste == 'left':
                self.rotate(angle=radians(5), axis=(0,1,0), origin=self.pos)
                print(self.axis)
            if taste == 'right':
                self.rotate(angle=radians(-5), axis=(0,1,0), origin=self.pos)
                print(self.axis)
            if taste == 'up':
                self.rotate(angle=radians(5), axis=(0,0,1), origin=self.pos)
                print(self.axis)

    def run(self):
        while True:
            self.tasten()
Thanks for any help. If you don't understand my problem, please comment!
You're just doing your transformations out of order.
You want a "local" transformation, which is very straightforward: move the arrow back to the origin, do your rotation about the z-axis, and then move it back to its original position.
This is easier if you keep a local coordinate system for the arrow, but that may be overkill for your purpose.
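As a rough sketch of the "local coordinate system" idea (untested; it passes the frame's own orientation vectors as the rotation axis instead of the fixed scene axes, and depending on the VPython version you may also need to rotate self.up yourself to keep it in sync):

def tasten(self):
    "handles keyboard input, rotating about the arrow's own axes"
    if scene.kb.keys:
        taste = scene.kb.getkey()
        if taste == 'left':
            self.rotate(angle=radians(5), axis=self.up, origin=self.pos)
        if taste == 'right':
            self.rotate(angle=radians(-5), axis=self.up, origin=self.pos)
        if taste == 'up':
            # nose down: rotate about the arrow's local z-axis
            self.rotate(angle=radians(5), axis=cross(self.axis, self.up),
                        origin=self.pos)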
Related
I am currently working on using Vispy to add visualization capabilities to a Python simulation library. I have managed to get some basic visualizations running with the data from the simulations, and am now looking at wrapping it in functions/classes so users of the library can easily visualize a simulation (by passing the data in a specific format or similar) without having to code it themselves.
However, I am having trouble figuring out the right way/best practice to get the timers working properly to update the objects as they change in time.
For example, when running the visualization as a script, I have implemented the timer using global variables and iterators, similar to how it is done in the Vispy scene "Changing Line Colors" demo on the Vispy website:
def on_timer(event):
    global colormaps, line, text, pos
    color = next(colormaps)
    line.set_data(pos=pos, color=color)
    text.text = color

timer = app.Timer(.5, connect=on_timer, start=True)
But when I wrap the entire visualization script in a function/class, I am having trouble getting the timer to work correctly, given the difference in variable scope. If anyone could give some idea of the best way to achieve this, that would be great.
EDIT:
I have made an extremely simplified visualization that might be similar to some of the expected use cases. The code, when run as a stand-alone Python script, is:
import numpy as np
from vispy import app, scene

# Reproducible data (oscillating wave)
time = np.linspace(0, 5, 300)
rod_positions = []
for t in time:
    wave_position = []
    for i in range(100):
        wave_position.append([0, i, 20 * (np.cos(t + i / 10))])
    rod_positions.append(wave_position)
rod_positions = np.array(rod_positions)

# Prepare canvas
canvas = scene.SceneCanvas(keys="interactive", size=(800, 600), bgcolor="black")
canvas.measure_fps()

# Set up a view box to display the image with interactive pan/zoom
view = canvas.central_widget.add_view()
view.camera = scene.TurntableCamera()

rod = scene.visuals.Tube(
    points=rod_positions[0], radius=5, closed=False, tube_points=16, color="green",
)
view.add(rod)
view.camera.set_range()

text = scene.Text(
    f"Time: {time[0]:.4f}",
    bold=True,
    font_size=14,
    color="w",
    pos=(80, 30),
    parent=canvas.central_widget,
)

update_counter = 0
max_updates = len(time) - 1

def on_timer_update(event):
    global update_counter
    update_counter += 1
    if update_counter >= max_updates:
        timer.stop()
        canvas.close()
    # Update new rod position and radius
    rod_new_meshdata = scene.visuals.Tube(
        points=rod_positions[update_counter],
        radius=5,
        closed=False,
        tube_points=16,
        color="green",
    )._meshdata
    rod.set_data(meshdata=rod_new_meshdata)
    text.text = f"Time: {time[update_counter]:.4f}"

# Connect timer to app
timer = app.Timer("auto", connect=on_timer_update, start=True)

if __name__ == "__main__":
    canvas.show()
    app.run()
My attempt at wrapping this in a class is the following:
import numpy as np
from vispy import app, scene

class Visualizer:
    def __init__(self, rod_position, time) -> None:
        self.rod_positions = rod_position
        self.time = time
        self.app = app.application.Application()
        # Prepare canvas
        self.canvas = scene.SceneCanvas(keys="interactive", size=(800, 600), bgcolor="black")
        self.canvas.measure_fps()
        # Set up a view box to display the image with interactive pan/zoom
        self.view = self.canvas.central_widget.add_view()
        self.view.camera = scene.TurntableCamera()
        self.update_counter = 0
        self.max_updates = len(time) - 1

    def _initialize_objects(self):
        self.rod = scene.visuals.Tube(
            points=self.rod_positions[0], radius=5, closed=False, tube_points=16, color="green",
        )
        self.view.add(self.rod)
        self.view.camera.set_range()
        self.text = scene.Text(
            f"Time: {self.time[0]:.4f}",
            bold=True,
            font_size=14,
            color="w",
            pos=(80, 30),
            parent=self.canvas.central_widget,
        )

    def update_timer(self, event):
        self.update_counter += 1
        if self.update_counter >= self.max_updates:
            self.timer.stop()
            self.canvas.close()
        # Update new rod position and radius
        rod_new_meshdata = scene.visuals.Tube(
            points=self.rod_positions[self.update_counter],
            radius=5,
            closed=False,
            tube_points=16,
            color="green",
        )._meshdata
        self.rod.set_data(meshdata=rod_new_meshdata)
        self.text.text = f"Time: {self.time[self.update_counter]:.4f}"

    def run(self):
        self._initialize_objects()
        # Connect timer to app
        self.timer = app.Timer("auto", connect=self.update_timer, start=True, app=self.app)
        self.canvas.show()
        self.app.run()


if __name__ == "__main__":
    # Reproducible data (oscillating wave)
    time = np.linspace(0, 5, 150)
    rod_positions = []
    for t in time:
        wave_position = []
        for i in range(100):
            wave_position.append([0, i, 20 * (np.cos(2 * t + i / 10))])
        rod_positions.append(wave_position)
    rod_positions = np.array(rod_positions)

    Visualizer = Visualizer(rod_positions, time)
    Visualizer.run()
It seems to be working now, which is good. However, this is a minimal reproduction of my problem, so I would just like to make sure this is the optimal/intended way. As a side note, I also feel that my way of updating the rod, by generating new meshdata and updating the rod in the view with it, is not optimal and is slowing the visualization down (it runs at around 10 fps). Is the update loop itself optimal?
Thanks
Thanks for updating your question with your example. It makes it much easier to answer and continue our conversation. I started a pull request for VisPy about this topic where I add some examples that use Qt timers and Qt background threads. I hope to finish the PR in the next month or two so if you have any feedback please comment on it:
https://github.com/vispy/vispy/pull/2339
Timers
In general, timers are a good way to update a visualization in a GUI-event-loop-friendly way. However, in most GUI frameworks the timer callback is executed in the GUI thread. This has the benefit that you can call GUI/drawing functions directly (they are in the same thread), but the downside that any time spent in your timer callback is time taken away from the GUI framework's ability to respond to user interaction and redraw the application. You can see this in my timer example in the above PR if you uncomment the extra sleep line: the GUI basically stops responding while the timer function is running.
I have a little message on GUI threads and event loops in the vispy FAQ here.
Threads
Threads are a good way around the limitations of timers, but come with a lot of complexities. Threads are also usually best done in a GUI-framework specific manner. I can't tell from your post if you are doing Qt or some other backend, but you will probably get the most bang for your buck by using the GUI framework specific thread functionality rather than generic python threads. If an application that works across GUI frameworks is what you're looking for then this likely isn't an option.
The benefit of using something like a QThread versus a generic python thread is that you can use things like Qt's signals and slots for sending data/information between the threads.
Note that, as I described in the FAQ linked above, if you use threads you won't be able to call the set_data methods directly from your background thread(s). You'll need to send the data to the main GUI thread and have it call set_data.
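As a rough illustration of that pattern (assuming a PyQt5 backend; compute_frames and make_meshdata are hypothetical placeholders you would have to write, not vispy or Qt APIs):

import numpy as np
from PyQt5 import QtCore

class DataWorker(QtCore.QThread):
    """Background thread: only computes data and emits it via a signal."""
    new_points = QtCore.pyqtSignal(object)

    def run(self):
        for points in compute_frames():   # hypothetical frame generator
            self.new_points.emit(points)
            self.msleep(30)

class CanvasUpdater(QtCore.QObject):
    """Lives in the main GUI thread; the only place that touches the visual."""
    def __init__(self, rod):
        super().__init__()
        self.rod = rod

    @QtCore.pyqtSlot(object)
    def update_rod(self, points):
        # Runs in the GUI thread because the connection is queued across threads
        self.rod.set_data(meshdata=make_meshdata(points))   # hypothetical helper

# Wiring, done in the main thread:
# updater = CanvasUpdater(rod)
# worker = DataWorker()
# worker.new_points.connect(updater.update_rod)
# worker.start()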
Suggestions
You are correct that creating a new Tube visual just to extract its mesh data is a big slowdown. Creating a Visual does a lot of low-level GL work to get ready to draw it, but since you never actually use that Visual, the work is essentially wasted time. If you can find a way to create the MeshData object yourself, without the Visual, you should see a speedup; but if you have a lot of data, switching to threads is likely still needed.
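For instance (a sketch only; compute_tube_mesh is a hypothetical helper you would have to write, e.g. by adapting the vertex/face generation from vispy's TubeVisual, and is not an existing vispy function):

from vispy.geometry import MeshData

def update_rod(rod, points):
    # Build the mesh data directly instead of constructing a throw-away
    # Tube visual each frame just to steal its _meshdata.
    vertices, faces = compute_tube_mesh(points, radius=5, tube_points=16)
    rod.set_data(meshdata=MeshData(vertices=vertices, faces=faces))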
I'm not sure how much this example's design resembles what you're really doing, but I would consider keeping the Application, Timer, and Visualizer classes separate. I'm not sure I can give you a good reason why, but something about creating these things in a single class worries me. In general I think the programming concept of dependency injection applies here: don't create your dependencies inside your code, create them outside and pass them as arguments to your code. Or, in the case of the Application and Timer, connect them to the proper methods on your class. If you have a much larger application then it may make sense to have a class that wraps all of the Vispy canvas and data updating logic, another one just for the outer GUI window, and then another for data generation. Then, in the if __name__ == "__main__" block at the bottom, create your application, your timer, your canvas wrapper, your main window (passing the canvas wrapper if needed), and connect them all together.
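A rough sketch of that wiring, reusing the names from the question (this assumes a slimmed-down Visualizer that only builds the scene and exposes an update_timer method; it is an illustration of the dependency-injection idea, not a complete program):

from vispy import app

if __name__ == "__main__":
    vispy_app = app.application.Application()
    vis = Visualizer(rod_positions, time)                # builds canvas/scene only
    timer = app.Timer("auto", connect=vis.update_timer,  # created outside the class
                      start=True, app=vispy_app)
    vis.canvas.show()
    vispy_app.run()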
Other thoughts
This is a problem I want to make easier for vispy users and scientific python programmers in general so let me know what I'm missing here and we'll see what kind of ideas we can come up with. Feel free to comment and provide feedback on my pull request. Do you have other ideas for what the examples should be doing? I have plans for additional fancier examples in that PR, but if you have thoughts about what you'd like to see let me know.
This is my first time posting to Stack Overflow, and I’m very new to the scene module for Pythonista, so please forgive any minor mistakes, and tell me if I have any mistakes regarding the formatting/question.
I’m currently trying to create a program that allows the user to hand-draw a circle and then take its circumference. However, the look of my line depends on the speed at which I draw. For example, if I draw fast, then the dots (the line segments) are few and far between, while drawing slowly makes it much more accurate. (I do not have the circumference yet, but I figure I can put a dot down every nth distance, then use the number of dots and calculate it from there.)
The question I’m posing is: how do I make it so that when I draw, speed does not (or at least insignificantly) impact the line?
Notes: I’ve seen examples in the examples tab on Pythonista, and they all use the UI module, but since I already know a tiny bit of Scene, I want to stick with this. If it proves to be impossible, I’ll switch. (Also, if someone wishes to, could they create a tag that is "scene-module"? Thanks.)
from scene import *
import math

allPoints = []
line = []

def addPoint(x, y):
    allPoints.append((x, y))

class MyScene(Scene):
    def setup(self):
        self.background_color = '#a9a9a9'
        self.followPlayer = SpriteNode('shp:Circle', position=(-10, -10))
        self.add_child(self.followPlayer)

    def touch_began(self, touch):
        self.followPlayer.position = touch.location

    def touch_moved(self, touch):
        x, y = touch.location
        addPoint(x, y)
        self.followPlayer.position = touch.location
        self.drawNode = SpriteNode('iob:ios7_circle_filled_24', (x, y), parent=self)

run(MyScene())
Thank you
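For what it's worth, the idea described in the question (placing a dot every fixed distance along the stroke instead of one per event) could look roughly like this inside touch_moved. This is an untested sketch; STEP is an illustrative constant and self.lastPoint is assumed to be initialised in touch_began:

STEP = 4.0   # distance between dots, in points

def touch_moved(self, touch):
    x0, y0 = self.lastPoint
    x1, y1 = touch.location
    dist = math.hypot(x1 - x0, y1 - y0)
    steps = int(dist // STEP)
    for k in range(1, steps + 1):
        # interpolate along the segment so dot spacing is distance-based,
        # not dependent on how fast the finger moves
        t = (k * STEP) / dist
        px, py = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        addPoint(px, py)
        SpriteNode('iob:ios7_circle_filled_24', (px, py), parent=self)
    if steps:
        self.lastPoint = (px, py)
    self.followPlayer.position = touch.location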
I've been using pyglet for a while now and I really like it. I've got one thing I'd like to do but have been unable to do so far, however.
I'm working on a 2D roleplaying game and I'd like the characters to be able to look different - that is to say, I wouldn't like to use completely prebuilt sprites, but instead I'd like there to be a range of, say, hairstyles and equipment visible on characters in the game.
So to get this working, I thought the most sensible way to go about it would be to create a texture with pyglet.image.Texture.create() and blit the correct sprite source images onto that texture using Texture.blit_into. For example, I could blit a naked human image onto the texture, then blit a hair texture on top of that, etc.
human_base = pyglet.image.load('x/human_base.png').get_image_data()
hair_style = pyglet.image.load('x/human_hair1.png').get_image_data()
texture = pyglet.image.Texture.create(width=human_base.width,height=human_base.height)
texture.blit_into(human_base, x=0, y=0, z=0)
texture.blit_into(hair_style, x=0, y=0, z=1)
sprite = pyglet.sprite.Sprite(img=texture, x=0, y=0, batch=my_sprite_batch)
The problem is that blitting the second image into the texture "overwrites" what was already blitted in. Even though both images have an alpha channel, the image below (human_base) is not visible after hair_style is blitted on top of it.
Anyone reading this may be wondering why I do it this way instead of, say, creating two different pyglet.sprite.Sprite objects, one for human_base and one for hair_style, and just moving them together. One reason is draw ordering: the game is tile-based and isometric, so sorting a visible object that consists of multiple sprites with differing layers (or ordered groups, as pyglet calls them) would be a major pain.
So my question is: is there a way to retain alpha when using blit_into with pyglet? If there is no way to do it, any suggestions for alternative approaches would be very much appreciated!
Setting the blend function correctly should fix this:
pyglet.gl.glBlendFunc(pyglet.gl.GL_SRC_ALPHA,pyglet.gl.GL_ONE_MINUS_SRC_ALPHA)
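For context, that call selects OpenGL's standard alpha-blending mode; a sketch of where such GL state is typically set (once, during setup, with blending enabled, before any blitting/drawing):

from pyglet import gl

gl.glEnable(gl.GL_BLEND)
gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA)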
I ran into the very same problem and couldn't find a proper solution. Apparently, blitting two overlapping RGBA images/textures together removes the image beneath. Another approach I came up with was making every 'clothing image' on every character an independent sprite attached to batches and groups, but that was far from optimal and reduced the FPS dramatically.
I got my own solution by using PIL:
import pyglet
from PIL import Image

class main(pyglet.window.Window):
    def __init__(self):
        TILESIZE = 32
        super(main, self).__init__(800, 600, fullscreen=False)
        img1 = Image.open('under.png')
        img2 = Image.open('over.png')
        # paste 'over' onto 'under', using its alpha channel as the mask
        img1.paste(img2, (0, 0), img2.convert('RGBA'))
        img = img1.transpose(Image.FLIP_TOP_BOTTOM)
        raw_image = img.tobytes()   # tostring() in very old PIL versions
        self.image = pyglet.image.ImageData(TILESIZE, TILESIZE, 'RGBA', raw_image)

    def run(self):
        while not self.has_exit:
            self.dispatch_events()
            self.clear()
            self.image.blit(0, 0)
            self.flip()

x = main()
x.run()
This may well not be the optimal solution, but if you do this work during scene loading it won't matter, and with the result you can do almost anything you want (as long as you don't blit it onto another texture, heh). If you want to get just one tile (or a column, a row, or a rectangular box) out of a tileset with PIL, you can use the crop function.
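For example (a sketch; the file name and coordinates are made up), pulling one 32x32 tile out of a tileset:

from PIL import Image

tileset = Image.open('tileset.png')
tile = tileset.crop((0, 0, 32, 32))   # (left, upper, right, lower)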
I'm using matplotlib:
Let's say I have an object A, which has two attributes B and C, and a method that draws a figure. Both B and C have methods doing some stuff on the figure on 'motion_notify_event' events.
I've noticed that the two methods do not work at the same time; there appears to be a conflict.
How does one deal with such a case?
So I've written some code that shows the problem a little better than the explanation above.
import matplotlib.pyplot as plt
from matplotlib.widgets import MultiCursor
from matplotlib.patches import Circle

class Event1(object):
    def __init__(self, axes):
        self.fig = axes[0].figure
        self.axes = axes
        self.eid = self.fig.canvas.mpl_connect('motion_notify_event', self.onmove)

    def onmove(self, event):
        for ax in self.axes:
            c = Circle((event.xdata, event.ydata), radius=0.05)
            ax.add_patch(c)
            ax.draw_artist(c)
        self.fig.canvas.blit(self.fig.bbox)

class plotclass(object):
    def __init__(self):
        pass

    def plotme(self):
        self.fig = plt.figure()
        self.ax1 = self.fig.add_subplot(211)
        self.ax2 = self.fig.add_subplot(212)
        for ax in (self.ax1, self.ax2):
            ax.set_xlim((0, 10))
            ax.set_ylim((0, 10))
        # self.curs = MultiCursor(self.fig.canvas, (self.ax1, self.ax2))
        self.ev1 = Event1((self.ax1, self.ax2))
        self.fig.show()

def main():
    pc = plotclass()
    pc.plotme()
    return pc

if __name__ == '__main__':
    main()
Now in this code there are two things listening to the motion_notify_event: the class Event1, which draws circles at the cursor position, and the class plotclass, which creates the figure and draws cursors at the cursor position.
I have commented out the line self.curs = ..., and I see the circles as the mouse moves, but if I uncomment it, I only see the cursors. Why? And how can I see both?
Just to elaborate on my comment above, it's not due to the multiple event handling, it's due to different stages of blitting overwriting each other.
Blitting typically works by restoring a saved, fully-rendered state and then drawing on top of it.
In your current code, you're blitting but not restoring the saved state, so you get a "trail" of circles (presumably this is what you want).
However, MultiCursor calls fig.canvas.restore_region(...) before drawing itself (otherwise you'd have a "trail" of lines). Therefore, it restores a saved "blank" figure over what you've just drawn.
If you want to use multiple passes of blitting, they'll need to coordinate with each other. There are a number of different ways to handle this, but they're overkill for most use cases. The quick fix is to pass useblit=False to MultiCursor. This will slow your rendering down, however.
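In other words, the commented-out line in plotme() would become something like:

self.curs = MultiCursor(self.fig.canvas, (self.ax1, self.ax2), useblit=False)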
Can you elaborate a bit on what you're trying to do? Do you just want a cursor with a circle in the mouse position? (If so, just subclass MultiCursor.)
This is my little Python program using VPython.
I want to rotate a box.
I want to use the box's axes, not the scene's.
So, for example, if it is rotated to the right and I then want to get the "nose" down, I want to do this from the box's point of view...
Imagine I was a jet ;)
BTW: I'm using Python 3.
from visual import *

a = box(size=(5, 1, 3), axis=(1, 0, 0))

def tasten():
    "Looooopings"
    if scene.kb.keys:                                  # action on keyboard?
        druck = scene.kb.getkey()                      # read the key
        if druck == 'left':
            a.rotate(angle=-1/100, axis=(1, 0, 0))     # roll left
        if druck == 'right':
            a.rotate(angle=1/100, axis=(1, 0, 0))      # roll right
        if druck == 'up':
            a.rotate(angle=-1, axis=(0, 0, 1))         # nose down

while True:
    tasten()
I would recommend creating a box class that stores the orientation, as martineau is suggesting. The class would have a vector that stores its orientation, then a method to rotate it in whatever way required.
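A rough, untested sketch of what that could look like in classic VPython (the Jet class name and the roll/pitch methods are illustrative; rotate() here is visual's vector-rotation function, and the stored axis/up vectors are rotated explicitly so the box's own directions stay in sync):

from visual import *

class Jet(object):
    "Stores the box plus its own orientation and rotates about its own axes."
    def __init__(self):
        self.body = box(size=(5, 1, 3), axis=(1, 0, 0), up=(0, 1, 0))

    def _turn(self, angle, about):
        # rotate both orientation vectors about the chosen local direction
        self.body.axis = rotate(self.body.axis, angle=angle, axis=about)
        self.body.up = rotate(self.body.up, angle=angle, axis=about)

    def roll(self, angle):
        self._turn(angle, self.body.axis)                          # 'left'/'right'

    def pitch(self, angle):
        self._turn(angle, cross(self.body.axis, self.body.up))     # nose up/down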