I'm using Python VLC (python-vlc) to build a custom playback app in PyQt. I have painted a nice custom slider to track along with the video, but I've hit an annoying problem.
No matter how often I tell my slider to update, it's quite glitchy (jumping every quarter second or so) and looks choppy (just the timeline, not the video).
Digging into it, I learned that media_player.get_position() has quite a low refresh rate: it returns the same value many calls in a row, then jumps a large amount the next time it gives a new value.
So I ran some test metrics and found it tends to update every 0.25-0.3 seconds. I now have a system that basically stores the last returned value, the system time at which a new value last came in, and the last jump distance between returned values, and does some basic math with those to fake linear timeline data between polls, making a very smooth timeline slider.
The problem is that this assumes my measured 0.25-0.3 second interval is consistent across machines, hardware, video frame rates, etc.
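For reference, the interpolation idea described above can be sketched like this (the class and attribute names are illustrative, not my actual code; it only assumes `player` has a `get_position()` method returning 0.0-1.0, as python-vlc's MediaPlayer does):

```python
import time


class SmoothPosition:
    """Linearly extrapolate between the coarse updates of get_position()."""

    def __init__(self, player):
        self.player = player
        self.last_pos = 0.0                 # last distinct value VLC returned
        self.last_time = time.monotonic()   # when that value arrived
        self.rate = 0.0                     # estimated position units per second

    def position(self):
        pos = self.player.get_position()
        now = time.monotonic()
        if pos != self.last_pos:
            # A fresh value arrived: re-estimate the playback rate from the jump.
            elapsed = now - self.last_time
            if elapsed > 0:
                self.rate = (pos - self.last_pos) / elapsed
            self.last_pos = pos
            self.last_time = now
            return pos
        # Same stale value: extrapolate forward from the last real update.
        return self.last_pos + self.rate * (now - self.last_time)
```

The slider would then poll `position()` on a fast timer (e.g. every 30 ms) and get smoothly increasing values between VLC's real updates.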
Does anyone know of a better fix?
Maybe there's a way to increase VLC's polling rate so it gives me better data to begin with, or some better math to handle the smoothing?
Thanks
Using get_position() returns a value between 0.0 and 1.0, essentially the current position as a fraction of the total running time.
Instead you can use get_time(), which returns the current position in milliseconds.
i.e.
print(self.player.get_time() / 1000) would print the current position in seconds.
You could also register a callback for the VLC event EventType.MediaPlayerTimeChanged, as mentioned in the other answer given by @mtz.
i.e.
Where self.player is defined as:
self.Instance = vlc.Instance()
self.player = self.Instance.media_player_new()
Then:
self.vlc_event_manager = self.player.event_manager()
self.vlc_event_manager.event_attach(vlc.EventType.MediaPlayerTimeChanged, self.media_time_changed)
def media_time_changed(self, event):
    print(event.u.new_time / 1000)
    print(self.player.get_time() / 1000)
    print(self.player.get_position())
Try using the libvlc_MediaPlayerPositionChanged or libvlc_MediaPlayerTimeChanged media player events instead.
https://www.videolan.org/developers/vlc/doc/doxygen/html/group__libvlc__event.html#gga284c010ecde8abca7d3f262392f62fc6a4b6dc42c4bc2b5a29474ade1025c951d
I just started using Psychopy in order to create my first adaptive staircase experiment.
I tried to set up the experiment by using the Builder interface. The loop type I'm using is the staircase, not the interleaved staircase.
In the experiment, I would like to change the contrast of the image according to the participant's response.
I've designed the experiment far enough that the start stimulus is presented to the participants when the program runs, and the participants can respond. The problem is that my stimulus does not change at all after a participant responds. I've tried many things to fix this, from inserting every possible stimulus manually to coding it according to the tutorial by Yentl de Kloe, but nothing works: the stimulus remains unchanged, which means the experiment runs forever unless I cancel it manually.
Is there anyone who can tell me a simple (beginner-understandable) but detailed way to solve this problem within the PsychoPy Builder?
Thank you in advance!
[Screenshots: Experimental Structure; Staircase Loop]
I am trying to do a project, and in part of the project I have the user say a word which gets recorded. The silence around the word then gets cut out, and there is a button that plays back the word without the silence. I am using librosa's librosa.effects.trim function to achieve this.
For example:
import librosa
import sounddevice as sd
from playsound import playsound

# beep1, beep2, seconds and fs are defined elsewhere

def record_audio():
    global myrecording
    global yt
    playsound(beep1)  # signal the start of recording
    myrecording = sd.rec(int(seconds * fs), samplerate=fs, channels=1)
    sd.wait()  # block until the recording is finished
    playsound(beep2)  # signal the end of recording
    # trim leading and trailing silence from the recording
    yt, index = librosa.effects.trim(myrecording, top_db=60)
However, when I play the audio back, I can tell that it is not trimming the recording. The variable explorer shows that myrecording and yt are the same length. I can hear it when I play what is supposed to be the trimmed audio clip back as well. I don't get any error messages when this occurs either. Is there any way to get librosa to actually clip the audio? I have tried adjusting top_db and that did not fix it. Aside from that, I am not quite sure what I could be doing wrong.
For a real answer, you'd have to post a sample recording so that we could inspect what exactly is going on.
In lieu of that, I'd like to refer to this GitHub issue, where one of the main authors of librosa offers advice for a very similar issue.
In essence: You want to lower the top_db threshold and reduce frame_length and hop_length. E.g.:
yt, index = librosa.effects.trim(myrecording, top_db=50, frame_length=256, hop_length=64)
Decreasing hop_length effectively increases the resolution for trimming. Decreasing top_db makes the function less sensitive, i.e., low level noise is also regarded as silence. Using a computer microphone, you do probably have quite a bit of low level background noise.
If this all does not help, you might want to consider using SOX, or its Python wrapper pysox. It also has a trim function.
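If you want to see exactly which parts of the recording count as silence, a hand-rolled version of the same idea can help debug. This is not librosa's actual dB-based implementation, just a sketch using a peak-amplitude threshold per frame; the threshold and window values are illustrative:

```python
import numpy as np


def simple_trim(y, threshold=0.01, frame_length=256, hop_length=64):
    """Trim leading/trailing frames whose peak amplitude is below threshold."""
    n_frames = max(1, 1 + (len(y) - frame_length) // hop_length)
    keep = []
    for i in range(n_frames):
        frame = y[i * hop_length : i * hop_length + frame_length]
        keep.append(np.max(np.abs(frame)) >= threshold)
    if not any(keep):
        return y[:0]  # everything is silence
    first = keep.index(True) * hop_length
    last = (len(keep) - 1 - keep[::-1].index(True)) * hop_length + frame_length
    return y[first:min(last, len(y))]
```

Printing the `keep` list for your recording shows which frames the threshold classifies as signal, which makes it obvious whether background noise or a spike is defeating the trim.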
Update: Look at the waveform of your audio. Does it have a spike somewhere at the beginning? Some crack sound, perhaps. That will keep librosa from trimming correctly. Perhaps manually throwing away the first second (= fs samples) and then trimming solves the issue:
librosa.effects.trim(myrecording[fs:], top_db=50, frame_length=256, hop_length=64)
I am writing a Tetris program with PyGame, and came across a funny problem.
Before I ask the question, here is the pseudo-code:
while True:
    # In this part, the human controls the block to go left, right, or speed down
    if a key is pressed and the block isn't touching the floor:
        if the key is K_LEFT:
            move piece left one step
        if the key is K_RIGHT:
            move piece right one step
        if the key is K_DOWN:
            move piece down one step
    # This part of the code makes the piece fall by itself
    if the block isn't touching the floor:
        move block down one step
    # Wait 0.4 seconds so that the block does not move down too quickly
    wait 0.4 seconds
The problem is that, because of the "wait 0.4 seconds" part of the code, the part that the human controls can only move every 0.4 seconds. I would like it so that the block moves as fast as the human can press the key, while at the same time, the block dropping every 0.4 seconds. How could I arrange the code so that it will do that? Thanks!
The main problem I see here is that you are limiting your framerate using a wait of 0.4 seconds.
You should not limit framerate, but instead, you should limit how fast your block falls.
If I remember well, there is a formula you can use to do just that. It is based on the amount of time elapsed since the last frame, and looks like:
fraction of a second elapsed since the last frame * distance you want your block to move in one second
This way, you can keep your mainloop intact, and the move processing will happen at every frame.
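The dt-based formula above can be sketched like this (the class and the speed value are illustrative, not from the original code):

```python
class FallingPiece:
    """Frame-rate-independent falling.

    Each frame the piece moves `speed * dt` rows, where dt is the number
    of seconds elapsed since the last frame; the fall rate is then the
    same regardless of how fast the main loop runs.
    """

    def __init__(self, speed=2.5):  # speed in rows per second (assumed value)
        self.speed = speed
        self.y = 0.0  # vertical position in fractional rows

    def update(self, dt):
        self.y += self.speed * dt


piece = FallingPiece(speed=2.5)
piece.update(0.4)        # after 0.4 s at 2.5 rows/s the piece has fallen 1 row
row = int(piece.y)       # the grid row to draw the piece in
```

Key presses are handled every frame as before; only the automatic fall is scaled by dt, so input stays responsive while the drop speed stays constant.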
You could also do...
...
# This part of the code makes the piece fall by itself
if the block isn't touching the floor and \
        the block hasn't automatically moved in the last 0.4 seconds:
    move block down one step
...
Just realize you'll be doing a lot of polling if the user hasn't struck any keys.
You may try asking gamedev.stackexchange.com instead. Check the site for Game Loops, and check out other example pygame projects to see how they're doing it. Having a good game loop is essential and will take care of things for you such as user inputs and a consistent frame rate.
Edit: https://gamedev.stackexchange.com/questions/651/tips-for-writing-the-main-game-loop
When doing games you should always try to do something like this:
while not finished:
    events = get_events()  # get the user input
    # update the world based on the time that elapsed and the events
    world.update(events, dt)
    world.draw()  # render the world
    sleep(1/30)  # go to the next frame
The sleep time should be variable so that it takes into consideration the amount of time spent drawing and calculating the world updates.
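That variable sleep can be computed from a fixed frame budget; a small sketch (names are illustrative, and note that pygame's own pygame.time.Clock.tick does this bookkeeping for you):

```python
import time

FRAME_BUDGET = 1 / 30  # target roughly 30 frames per second


def remaining_sleep(frame_start, budget=FRAME_BUDGET):
    """Return how long to sleep after update/draw work.

    Only the part of the frame budget not already consumed by this
    frame's work is slept, so slow frames don't slow the game further.
    """
    elapsed = time.monotonic() - frame_start
    return max(0.0, budget - elapsed)


# Usage inside the loop:
#   frame_start = time.monotonic()
#   world.update(events, dt); world.draw()
#   time.sleep(remaining_sleep(frame_start))
```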
The world update method would look something like this:
def update(self, events, dt):
    self.move(events)  # interpret user action
    self.elapsed += dt
    if self.elapsed > ADVANCE_TIME:
        self.piece.advance()
        self.elapsed = 0
The other way of implementing this (so you don't redraw too much) is to fire events when the user orders a piece to be moved or when ADVANCE_TIME passes. In each event handler you would then update the world and redraw.
This is assuming you want the pieces to move one step at a time and not continuous. In any case, the change for continuous movement is pretty trivial.
I want to gather numbers that are being output from a specific window in real time as data points.
I have a piece of equipment with an internal pressure level I would like to monitor. The only output the software gives me is a single float from the last ~second, shown in a box within the software. I've asked the manufacturers if there was any way of accessing this output internally, and they basically told me there is none.
Individual measurements don't mean much to me; I'd like to see the change in pressure across time, and watching this single value all day isn't very practical. So I want to make something that can read a specific line of text, either by recognizing the words or by exact screen coordinates (either works), say 'Output PSI: ##.###', and capture the ##.### in Python as a data point every time the number changes.
Are there modules that anyone has experience with that might be of use here?
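For the capture side, a common combination is a screenshot library such as mss (or Pillow's ImageGrab) plus an OCR engine such as pytesseract; both are suggestions, not something named in the question. Whichever tool returns the on-screen text as a string, pulling the number out is then a small regex job:

```python
import re

# Pattern for a line like "Output PSI: 12.345" (label taken from the question)
PSI_PATTERN = re.compile(r"Output PSI:\s*(\d+(?:\.\d+)?)")


def extract_psi(text):
    """Return the numeric reading from an OCR'd string, or None if absent."""
    m = PSI_PATTERN.search(text)
    return float(m.group(1)) if m else None
```

A polling loop would screenshot the region of the window, OCR it, call extract_psi, and append the value with a timestamp whenever it differs from the previous reading.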
First off, sorry for this lengthy text. I'm new to Python and matplotlib, so please bear with me.
As a follow-up to this question, I found generating the grid to be quite time-consuming on a Raspberry Pi using web2py. I have a csv file with roughly 12k lines looking like this:
1;1.0679759979248047;0.0;147.0;0.0;;{'FHR1': 'US', 'FHR2': 'INOP', 'MHR': 'INOP'};69;good;;;;1455891539.502167
The thing is that reading those 12k lines with numpy.genfromtxt already takes 30-something seconds. Populating the chart (without the fancy grids) took another 30 seconds, using just columns 1, 3 and 7 of that csv. But after adding the solution, time exploded to 170 seconds. So now I have to figure out how to reduce time consumption to somewhere under a minute.
My first thought was to eliminate the csv: I'm the one reading the data anyway, so I could skip it by either keeping the data in memory or writing it into the plot right away. And that's what I did, with a (in my mind) simple test layout, using the pdf backend. I chose to write the data into the chart every time I receive it and save the chart once the transmission is done. I thought that should work fine; well, it doesn't. I keep getting ludicrous errors:
RuntimeError: RRuleLocator estimated to generate 9178327 ticks from 0001-01-01 15:20:31.883239+00:00 to 0001-04-17 20:52:39.779205+00:00: exceeds Locator.MAXTICKS * 2 (6000000)
And believe me, I keep increasing those maxticks with every test run to top the number the error message says. It's ridiculous, because that message is for just 60 seconds of data, and I want to go somewhere near 24 hours of data. I would like the RRuleLocator to either stop estimating or just shut up and wait for the data to end. I really don't think I can make an MCVE out of this, but I can carve out the details I'm most likely messing up.
First off, I have some classes set up, so no globals. To simplify: I have a communications class that reads the serial port at one-second intervals. This runs fine, and up till yesterday it wrote whatever came in on the serial port into a csv. Now I wanted to see if I could populate the chart while getting the data and just save it once I'm done. So for testing I added this to my .py:
import matplotlib
matplotlib.use('PDF') # I want a PDF in the end
from matplotlib import dates
import matplotlib.pyplot as plt
import numpy as np
from numpy import genfromtxt
Then I added some members to the communication class that come from the charting part, mainly the above-mentioned solution. I initialize them in the class's __init__:
self.fig = None
self.ctg = None
self.toco = None
Then I have this method, which I call once I feel the data I'm receiving is in the correct form/state, so that the chart may be prepared for populating with data:
def prepchart(self):
    # how often to show xticklabels and repeat yticklabels:
    print('prepchart')
    xtickinterval = 5
    hfmt = dates.DateFormatter('%H:%M:%S')
    self.fig = plt.figure()
    self.ctg = self.fig.add_subplot(2, 1, 1)  # two rows, one column, first plot
    plt.ylim(50, 210)
    self.toco = self.fig.add_subplot(2, 1, 2)
    plt.ylim(0, 100)
    # Set the minor ticks to every 30 seconds
    minloc = dates.SecondLocator(bysecond=[0, 30])
    minloc.MAXTICKS = 3000000
    self.ctg.xaxis.set_minor_locator(minloc)
    # self.ctg.xaxis.set_minor_locator(dates.MinuteLocator())
    self.ctg.xaxis.set_major_formatter(hfmt)
    self.toco.xaxis.set_minor_locator(dates.MinuteLocator())
    self.toco.xaxis.set_major_formatter(hfmt)
    # actg.xaxis.set_ticks(rotation=45)
    plt.xticks(rotation=45)
Then, every so often, once I have data I want to plot, I do this in my data-processing method:
self.ctg.plot_date(timevalue, heartrate, '-')
self.toco.plot_date(timevalue, toco, '-')
Finally, once no more data is sent (this can be after one minute or after 24 hours), I call:
def handleCTG(self):
    self.fig.savefig('/home/pi/test.pdf')
In conclusion: Am I going at this completely wrong or is there just an error in my code? And is this really a way to reduce waiting time for the chart to be generated?
OK, so here's the deal. Obviously web2py runs a pretty tight ship, meaning there are not many threads floating around, and it sure won't start a new thread for my little chart creation. I sort of noticed this when I watched CPU usage in my Raspi's task manager and only ever saw something around 25% (the Raspberry Pi has 4 cores; go do the math). First I ran my script outside of web2py on my Raspi and, lo and behold, the entire thing, including csv reading and chart rendering, takes only 20 seconds. From there on (inspired by How to run a task outside web2py and retrieve the output) it's a piece of cake: use the well-documented subprocess module to call a new Python with this script, and done. So thanks to anyone who gave this some thought.
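The subprocess approach described above boils down to spawning a separate interpreter for the rendering script; a minimal sketch (the script path and arguments are illustrative):

```python
import subprocess
import sys


def render_in_subprocess(script, *args):
    """Run `script` with the current Python interpreter in a child process.

    subprocess.run waits for the child to finish and captures its output;
    for fire-and-forget from a web2py controller, use subprocess.Popen
    with the same argument list and return immediately instead.
    """
    return subprocess.run([sys.executable, script, *args],
                          capture_output=True, text=True)


# e.g. render_in_subprocess("render_chart.py", "/home/pi/data.csv")
```

Because the child is a separate process, it gets its own core and its own matplotlib state, sidestepping web2py's thread limits entirely.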