I'd like to play a video that automatically stops at certain fixed frames, and I want to be able to play/stop it using either the keyboard or the mouse. However, OpenCV does not seem to provide a function that waits for either a keyboard event or a mouse event. I saw that there is a setMouseCallback function, but I'm not sure how to use it without creating conflicts between the keyboard and the mouse. Am I supposed to create a thread that receives messages from both the mouse and the keyboard, or is there something simpler? Otherwise, is there a better-suited library for playing/seeking/stopping a video precisely on a given frame?
EDIT
I tried to implement a solution based on threads, calling action = self.queue.get() instead of cv2.waitKey(...), but then nothing is displayed on screen. Also, if I call cv2.waitKey(1) before that, I get an empty frame, and the cv2.waitKey calls located in other threads do not detect any keypress...
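A sketch of the kind of single-threaded loop I am after (the video path, stop frames and keys are just placeholders): keep everything on the main thread, let cv2.setMouseCallback only flip a pause flag, and let cv2.waitKey poll the keyboard. I'm not sure whether this is safe or whether there is something simpler:

import cv2

# Placeholder values, not from a real application.
VIDEO_PATH = "test.mp4"
STOP_FRAMES = {100, 250}        # frames at which playback pauses automatically
paused = False

def on_mouse(event, x, y, flags, param):
    # Only flip a flag here; all reading/drawing stays on the main thread.
    global paused
    if event == cv2.EVENT_LBUTTONDOWN:
        paused = not paused

cap = cv2.VideoCapture(VIDEO_PATH)
cv2.namedWindow("video")
cv2.setMouseCallback("video", on_mouse)

while True:
    if not paused:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("video", frame)
        if int(cap.get(cv2.CAP_PROP_POS_FRAMES)) in STOP_FRAMES:
            paused = True
    # waitKey both pumps the GUI events (so the mouse callback fires)
    # and polls the keyboard; space toggles pause, 'q' quits.
    key = cv2.waitKey(30) & 0xFF
    if key == ord(' '):
        paused = not paused
    elif key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()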
Related
I have a problem controlling videos from Python 3.x using the python-vlc bindings on Linux.
The video plays fine in a window, but hotkeys seem to be ignored. Does libvlc media player handle hotkeys?
My minimal code:
import vlc
from time import sleep

player = vlc.MediaPlayer("test.mp4")
player.video_set_key_input(True)
player.play()

while player.get_state() != vlc.State.Ended:
    sleep(1)
It is my belief that your contention that video_set_key_input and video_set_mouse_input are the solution rests on a misunderstanding of those functions.
From vlc.py
def video_set_key_input(self, on):
    '''Enable or disable key press events handling, according to the LibVLC hotkeys
    configuration. By default and for historical reasons, keyboard events are
    handled by the LibVLC video widget.
    @note: On X11, there can be only one subscriber for key press and mouse
    click events per window. If your application has subscribed to those events
    for the X window ID of the video widget, then LibVLC will not be able to
    handle key presses and mouse clicks in any case.
    @warning: This function is only implemented for X11 and Win32 at the moment.
    @param on: true to handle key press events, false to ignore them.
    '''
    return libvlc_video_set_key_input(self, on)

def video_set_mouse_input(self, on):
    '''Enable or disable mouse click events handling. By default, those events are
    handled. This is needed for DVD menus to work, as well as a few video
    filters such as "puzzle".
    See L{video_set_key_input}().
    @warning: This function is only implemented for X11 and Win32 at the moment.
    @param on: true to handle mouse click events, false to ignore them.
    '''
    return libvlc_video_set_mouse_input(self, on)
This suggests that setting those functions to False simply tells VLC that mouse and key input are to be ignored and passed on to the user's application.
For that to work, your application must be accepting and processing the mouse and key input coming from the assigned X window.
It does not mean that pressing the space bar will suddenly pause the video, as it does when VLC itself is performing the task of accepting and processing the mouse and key input.
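To make that concrete, here is a rough sketch of one way the application itself can accept that input: embed the player in a Tkinter window (my choice, not something implied by the question) and bind the hotkeys there. Everything except test.mp4 and the two video_set_*_input calls is an assumption of mine:

import tkinter as tk
import vlc

root = tk.Tk()
frame = tk.Frame(root, width=640, height=360, bg="black")
frame.pack(fill=tk.BOTH, expand=True)

player = vlc.MediaPlayer("test.mp4")
# Tell LibVLC to leave keyboard/mouse events to us.
player.video_set_key_input(False)
player.video_set_mouse_input(False)
# Render into the Tk frame (set_xwindow on X11; use set_hwnd on Windows).
root.update_idletasks()
player.set_xwindow(frame.winfo_id())

def toggle_pause(event=None):
    player.pause()          # pause() toggles play/pause on a MediaPlayer

def quit_app(event=None):
    player.stop()
    root.destroy()

# The application, not LibVLC, now handles the hotkeys.
root.bind("<space>", toggle_pause)
root.bind("<Escape>", quit_app)
frame.bind("<Button-1>", toggle_pause)

player.play()
root.mainloop()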
I was experimenting with pygame and noticed that it raises a VideoExpose event when I press alt+tab while the window is fullscreen. When I press alt+tab again, everything on the screen is moved to the bottom left.
I know that this is supposed to mean that 'portions of the window must be redrawn', but how am I supposed to redraw them, and why does pygame even have this event in the first place?
If you are writing a program that uses the windowing event model, the windowing environment sends the program events to notify it of environmental changes: window resize, mouse move, the need to re-paint, etc.
Not handling these events will cause your application to be considered "non-responsive" by the environment. Here on SO, there's about one question a week with PyGame and exactly this issue.
When working with PyGame, handling redraw events may seem superfluous, because the majority of PyGame games redraw the entire screen every frame, e.g. at 60 FPS. If those redraws are unnecessary, though, this approach is a complete waste of resources (CPU time, electricity, etc.). It is quite simple, which makes it good for beginners.
Say you were writing a card game like Solitaire... the screen updates only when interacting with the user. In terms of CPU, it's doing nothing 99.9% of the time while the user contemplates their next move. In this case, the program could be written to only redraw the screen when necessary. When is it necessary? When the player gives input, or when the program receives a pygame.VIDEOEXPOSE event from the windowing environment.
If your program redraws the window constantly, you can simply ignore the message. If not, call whatever block of code is normally used to render the window when the message arrives. The expose message may come with the region of the screen that needs to be redrawn; in that case, a really good application would update only that section of the display.
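As a rough illustration of the "redraw only when needed" style (the draw() function, window size and colours below are placeholders of mine):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

def draw():
    # Whatever normally renders the whole window.
    screen.fill((30, 30, 30))
    pygame.display.flip()

draw()
running = True
while running:
    # Block until something happens instead of spinning at 60 FPS.
    event = pygame.event.wait()
    if event.type == pygame.QUIT:
        running = False
    elif event.type == pygame.VIDEOEXPOSE:
        # The environment says part of the window needs re-painting;
        # the simple answer is to redraw everything.
        draw()
    elif event.type in (pygame.KEYDOWN, pygame.MOUSEBUTTONDOWN):
        # ...handle player input, then redraw if the state changed.
        draw()

pygame.quit()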
I have a video player application with a graph display below it. My video player fetches frames periodically, but when I move the mouse it freezes, and by printing what's happening I can see that the main loop isn't calling anything.
I've tried printing some text on every widget's on_mouse_pos event, but none of them is triggered, so I really don't know where I should look. Using the recorder module, I can see that there is no mouse event, so I'm not even sure the mouse event is recorded.
I have several widgets now, so I'm not sure posting them here would be useful, but I'd love to hear feedback or any idea about this problem.
Thanks a lot
So I was able to fix this: my frame-pulling function was running in a separate thread, and moving it to a periodically triggered Clock event fixed it.
I'm still not sure why this bug happened; my 2 cents is that OpenCV blocks the GIL while reading a frame, and this somehow interfered with how Kivy manages its events.
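In case it helps anyone, here is a minimal sketch of what "moving it to a Clock event" can look like; the widget, file name and frame rate are placeholders of mine, not the original application:

import cv2
from kivy.app import App
from kivy.clock import Clock
from kivy.graphics.texture import Texture
from kivy.uix.image import Image

class VideoWidget(Image):
    def __init__(self, source, **kwargs):
        super().__init__(**kwargs)
        self.capture = cv2.VideoCapture(source)
        # Runs on Kivy's main loop, so it cannot fight the event dispatch.
        Clock.schedule_interval(self.update_frame, 1 / 30)

    def update_frame(self, dt):
        ok, frame = self.capture.read()
        if not ok:
            return
        frame = cv2.flip(frame, 0)    # Kivy textures are bottom-up
        texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt="bgr")
        texture.blit_buffer(frame.tobytes(), colorfmt="bgr", bufferfmt="ubyte")
        self.texture = texture

class PlayerApp(App):
    def build(self):
        return VideoWidget("test.mp4")

if __name__ == "__main__":
    PlayerApp().run()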
I'm helping to implement an experiment using PsychoPy on a Windows 8 tablet. It doesn't seem to be possible to get direct access to touch events through either PsychoPy, or the pyglet or PyGame interfaces.
Most other sources I've found have referred to using mouse move events in place of touch events. This works fine for recording position, but for recording time it doesn't work for us. We would like to collect the timing of the start of the touch, whereas the mouse event comes at the end of the touch.
Does anyone know of a way to do this, either in PsychoPy or by importing another library into the experiment?
Update: Logging ioHub mouse events, it looks like press and release mouse events are both sent at the end of the touch. This makes sense as this is the point at which the OS is sure that the touch is not a swipe. (Also, it will decide whether the touch is a left- or right-click depending on the duration of the touch).
I've managed this using a hook into the WndProc; it's not pretty, but it works. The solution, for posterity:
https://github.com/alisdt/pywmtouchhook
A brief summary:
I used a combination of ctypes and pywin32 (unfortunately neither alone could do the job) to register the target HWND to receive touch messages and to replace its WndProc, passing all non-touch messages through to the original WndProc.
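The repository above contains the real implementation; the sketch below is only my own stripped-down illustration of the shape of that approach (the WM_TOUCH constant comes from the Win32 headers, the handler and helper are hypothetical, and decoding of the touch data via GetTouchInputInfo is omitted):

import ctypes
import win32con
import win32gui

WM_TOUCH = 0x0240   # not defined in win32con in older pywin32 versions

def hook_touch(hwnd, on_touch):
    # Ask Windows to send WM_TOUCH instead of synthesized mouse events.
    ctypes.windll.user32.RegisterTouchWindow(hwnd, 0)

    old_wndproc = None

    def wndproc(hwnd, msg, wparam, lparam):
        if msg == WM_TOUCH:
            on_touch(wparam, lparam)   # decoding/timestamping left to the caller
            return 0
        # Everything else goes back to the original window procedure.
        return win32gui.CallWindowProc(old_wndproc, hwnd, msg, wparam, lparam)

    # Replace the window procedure; SetWindowLong returns the previous one.
    old_wndproc = win32gui.SetWindowLong(hwnd, win32con.GWL_WNDPROC, wndproc)
    return old_wndproc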
I am trying to find some sort of library or function so I can get fast keyboard input.
Right now, using the conio.h input method, you can hold down a key, but you have to wait half a second for it to start repeating, the same as in any text box. This seems to be dictated by the keyboard repeat delay, shown here.
Any way to get realtime keyboard input rather than having to suffer this small delay?
I've heard of pyHook but that doesn't work for Python 3(.2). Thanks!
You'll need to do it the hard way: create your own window, listen for keydown and keyup events, and use a timer to trigger the "repeat" of the keypress yourself.
I eventually wrote a small DLL to use the Win32 function GetAsyncKeyState.
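If the surrounding program is Python (as the pyHook mention suggests), the same Win32 call can, I believe, also be reached directly through ctypes without writing a DLL; a rough sketch, with the key code chosen arbitrarily:

import ctypes
import time

user32 = ctypes.windll.user32
VK_RIGHT = 0x27   # virtual-key code for the right arrow key

while True:
    # The high bit of GetAsyncKeyState is set while the key is physically down,
    # so there is no repeat delay to wait through.
    if user32.GetAsyncKeyState(VK_RIGHT) & 0x8000:
        print("right arrow held")
    time.sleep(0.01)   # poll at roughly 100 Hz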