Is it possible to record from a mic while playback of an audio file continues?
If so, can I use a headphone splitter to record exactly what the listener hears to the same track?
Ideally, I would like a stereo audio file wherein one track is the original audio file, and the second track is what the mic simultaneously recorded.
Context:
In my experiment, participants will listen to audio clips, then attempt to synchronize with them using a musical instrument, while the audio clip continues to play.
It's really important that I'm able to analyze how closely they can reproduce and temporally coordinate with the stimulus. I'm not too concerned with audio quality, as long as I can accurately compare event onsets.
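One way to get the stereo file described above, sketched with only the Python standard library: keep the stimulus samples and the mic recording as two mono arrays, then interleave them into a single two-channel WAV so both tracks share one clock. The two tone arrays below are placeholders for audio you would obtain elsewhere (e.g. by loading the clip and capturing the mic with a recording library); this is a sketch, not a full recording pipeline.

```python
# Sketch: combine a stimulus track and a (placeholder) mic recording into
# one stereo WAV -- left channel = stimulus, right channel = mic -- using
# only the Python standard library.
import math
import wave

RATE = 44100

# Placeholder signals: a 440 Hz tone as the "stimulus" and a 220 Hz tone
# standing in for the mic recording (same length, 16-bit sample range).
n = RATE  # one second of audio
stimulus = [int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t / RATE)) for t in range(n)]
mic      = [int(32767 * 0.5 * math.sin(2 * math.pi * 220 * t / RATE)) for t in range(n)]

def write_stereo(path, left, right, rate=RATE):
    """Interleave two mono 16-bit tracks into one stereo WAV file."""
    frames = bytearray()
    for l, r in zip(left, right):
        frames += l.to_bytes(2, "little", signed=True)
        frames += r.to_bytes(2, "little", signed=True)
    with wave.open(path, "wb") as w:
        w.setnchannels(2)   # stereo: channel 1 = stimulus, channel 2 = mic
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_stereo("session.wav", stimulus, mic)
```

Because both channels are written against the same sample clock, event onsets in the two tracks can then be compared sample-accurately.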
Related
I need to extract audio stream from a video and check whether it has any pitch changes or abnormalities. Ideally, we want to quantify any pitch changes in the audio stream. I'm aware that I can use ffmpeg to extract the audio stream from the video. However, what tools or programs (python?) can then be used to identify and quantify any pitch changes or abnormalities in the audio stream?
Pitch analysis is not an easy task, luckily there are existing solutions for that. https://pypi.org/project/crepe/ is an example that looks promising.
You could read the resulting CSV of pitch data into a Pandas dataframe and perform whatever data analysis you can think of.
For example, to analyze pitch changes you could do:
df['pitch_change'] = df.frequency.diff(periods=1)
to get a column representing the pitch change at each time step.
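Building on the snippet above, a minimal self-contained sketch; the DataFrame here is a synthetic stand-in for crepe's CSV output (which has time, frequency, and confidence columns) -- in practice you would load the real file with pd.read_csv:

```python
import pandas as pd

# Synthetic stand-in for crepe's CSV output (time, frequency, confidence).
df = pd.DataFrame({
    "time":       [0.00, 0.01, 0.02, 0.03],
    "frequency":  [440.0, 441.0, 439.0, 439.0],
    "confidence": [0.95, 0.96, 0.94, 0.97],
})

# Frame-to-frame pitch change in Hz; the first row has no predecessor, so NaN.
df["pitch_change"] = df["frequency"].diff(periods=1)

# The largest absolute jump flags the most abrupt pitch change.
biggest = df["pitch_change"].abs().max()  # 2.0 Hz in this toy data
```

You can also filter out low-confidence frames first (e.g. df[df.confidence > 0.8]) so that unvoiced or noisy frames don't register as spurious pitch jumps.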
I am trying to find out the audio lag during a video call (in Zoom / Google Meet). The goal is to measure the lag in audio and video in the output (on the meeting attendee's side).
For that, I have two videos:
Vid1 (recorded at the presenter's side).
Vid2 (recorded at the attendee's side).
Both the recordings can be started at any time during the meeting.
Now, since I am comparing the videos, I need to find a common frame between the two, so that if I play them side by side they are in sync.
Since Vid1 and Vid2 are not identical (Vid2 may be pixelated), how can the two videos be compared to find a common frame?
I have continuous videos taken from two cameras placed at the upper-right and upper-left corners of my car's windshield (note that they are not fixed to each other, and I aligned them only approximately straight). Now I am trying to make a 3D point cloud out of that and have no idea how to do it. I searched the internet a lot and still couldn't find any useful info. Can you give me some links or hints on how to make this work in Python?
You can try the stereo matching and point cloud generation implementation in the OpenCV library. Start with this short Python sample.
I suppose that you have two independent video streams that are not exactly synchronized. You will have to synchronize them first, because the linked sample expects two images, not videos. Extract images from the videos using OpenCV or ffmpeg and find an image pair that shares exactly the same timepoint (e.g. a green light appearing on a traffic signal). Alternatively, you can use the audio tracks for synchronization; see https://github.com/benkno/audio-offset-finder. Beware: synchronization based on a single frame pair or a short audio excerpt will probably work only for a few minutes before and after the synchronized timepoint.
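The audio-based synchronization idea can be sketched in a few lines of numpy: the lag that maximizes the cross-correlation of the two audio tracks is the most likely alignment offset. This is a simplified version of what tools like audio-offset-finder do; the random signals below stand in for the two extracted audio tracks.

```python
import numpy as np

def estimate_offset(a, b):
    """Estimate how many samples b lags behind a via cross-correlation."""
    # Normalize so amplitude differences between recordings matter less.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(b, a, mode="full")
    # Index (len(a) - 1) corresponds to zero lag.
    return np.argmax(corr) - (len(a) - 1)

# Demo: b is a copy of a delayed by 150 samples (truncated to equal length).
rng = np.random.default_rng(0)
a = rng.standard_normal(4000)
b = np.concatenate([np.zeros(150), a])[:4000]
offset = estimate_offset(a, b)  # 150
```

For real call recordings you would first extract and resample both audio tracks to a common rate (e.g. with ffmpeg), and note that the full cross-correlation is O(n^2); FFT-based correlation is the usual speedup for long recordings.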
I have just started to work on data in the form of audio. I am using librosa as a tool. My project requires me to extract features like:
Total duration of the audio
Minimum Intensity of the audio signal
Maximum Intensity of the audio signal
Mean Intensity of the audio signal
Jitter
Rate of speaking
Number of Pauses
Maximum Duration of Pauses
Average Duration of Pauses
Total Duration of Pauses
I know what these terms mean, but I have no idea how to extract them from an audio file. Are these built into librosa.feature in some form, or do I need to calculate them manually? Can someone guide me on how to proceed?
I know that this job can be done with software like Praat, but I need to do it in Python.
Praat can be used for spectral analysis (spectrograms), pitch analysis, formant analysis, intensity analysis, jitter, shimmer, and voice breaks.
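Most of the items on the list have no single built-in call in librosa (librosa.feature.rms gives frame-wise RMS, and librosa.get_duration gives total duration, but pause statistics must be derived manually). Below is a numpy-only sketch of frame-based intensity plus threshold-based pause detection; the -40 dB silence threshold and the frame sizes are assumptions you would tune per recording, and jitter (which needs pitch-period estimation) is out of scope here.

```python
import numpy as np

def intensity_and_pauses(y, sr, frame_len=1024, hop=512, silence_db=-40.0):
    """Frame-wise RMS intensity (dB) plus simple pause statistics."""
    frames = [y[i:i + frame_len] for i in range(0, len(y) - frame_len + 1, hop)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    db = 20 * np.log10(rms + 1e-12)           # intensity per frame in dB

    silent = db < silence_db                  # True where the frame is a pause
    # Collect runs of consecutive silent frames as individual pauses.
    pauses, run = [], 0
    for s in silent:
        if s:
            run += 1
        elif run:
            pauses.append(run * hop / sr)     # run length -> seconds
            run = 0
    if run:
        pauses.append(run * hop / sr)

    return {
        "duration_s": len(y) / sr,
        "min_db": db.min(), "max_db": db.max(), "mean_db": db.mean(),
        "n_pauses": len(pauses),
        "max_pause_s": max(pauses, default=0.0),
        "total_pause_s": sum(pauses),
    }

# Demo: 0.5 s of tone, 0.5 s of silence, 0.5 s of tone at 16 kHz.
sr = 16000
t = np.arange(sr // 2) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
y = np.concatenate([tone, np.zeros(sr // 2), tone])
stats = intensity_and_pauses(y, sr)  # detects one ~0.45 s pause
```

Rate of speaking is usually derived on top of this, e.g. syllable-nuclei counting over the non-silent regions; that part genuinely is easier in Praat (or via its Python wrapper, parselmouth).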
I have code that plays two different audio files on two different mixer channels simultaneously, but I need each file to play on only one stereo channel: one audio file on the right speaker and the other on the left speaker.
I tried the code below, but the sounds are not mapped to specific speakers; both play on both speakers.
pygame.mixer.init(frequency=44000, size=-16,channels=2, buffer=4096)
#pygame.mixer.set_num_channels(2)
m = pygame.mixer.Sound('tone.wav')
n = pygame.mixer.Sound('sound.wav')
pygame.mixer.Channel(1).play(m,-1)
pygame.mixer.Channel(2).play(n,-1)
Any help is much appreciated.
The docs say that you have to pass the volume of the left and right speaker to Channel.set_volume.
# Create Sound and Channel instances.
sound0 = pg.mixer.Sound('my_sound.wav')
channel0 = pg.mixer.Channel(0)
# Play the sound (that will reset the volume to the default).
channel0.play(sound0)
# Now change the volume of the specific speakers.
# The first argument is the volume of the left speaker and
# the second argument is the volume of the right speaker.
channel0.set_volume(1.0, 0.0)
Also, if you don't want to manage the channels yourself, you can just use the channel that Sound.play() returns.
channel = sound0.play()
channel.set_volume(1.0, 0.0)