Why is there no sound when I use CompositeAudioClip in Python moviepy?

Here is the Python script:
from moviepy.editor import *
videoclip = VideoFileClip("1_0_1522314608.m4v")
audioclip = AudioFileClip("jam_1_Mix.mp3")
new_audioclip = CompositeAudioClip([videoclip.audio, audioclip])
videoclip.audio = new_audioclip
videoclip.write_videofile("new_filename.mp4")
It then printed:
[MoviePy] >>>> Building video new_filename.mp4
[MoviePy] Writing audio in new_filenameTEMP_MPY_wvf_snd.mp3
100%|████████████████████████████████████████| 795/795 [00:01<00:00, 466.23it/s]
[MoviePy] Done.
[MoviePy] Writing video new_filename.mp4
100%|███████████████████████████████████████| 1072/1072 [01:26<00:00, 10.31it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: new_filename.mp4
Both 1_0_1522314608.m4v and jam_1_Mix.mp3 have sound, but the new file new_filename.mp4 has no sound.
Did I do something wrong? Please help, thank you.

I had similar issues. I found that sometimes the audio is in fact present, but will not be played by all players (QuickTime does not work, but VLC does).
I finally used something like:
video_clip.set_audio(composite_audio).write_videofile(
    composite_file_path,
    fps=None,
    codec="libx264",
    audio_codec="aac",
    bitrate=None,
    audio=True,
    audio_fps=44100,
    preset='medium',
    audio_nbytes=4,
    audio_bitrate=None,
    audio_bufsize=2000,
    # temp_audiofile="/tmp/temp.m4a",
    # remove_temp=False,
    # write_logfile=True,
    rewrite_audio=True,
    verbose=True,
    threads=None,
    ffmpeg_params=None,
    progress_bar=True)
A few remarks:
aac seems to work better for QuickTime than mp3.
Log file generation helps to diagnose what is happening behind the scenes.
temp_audiofile can be tested on its own.
set_audio() returns a new clip and leaves the object on which it is called unchanged.
Setting the audio clip duration to match the video helps:
composite_audio = composite_audio.subclip(0, video_clip.duration)
composite_audio = composite_audio.set_duration(video_clip.duration)
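Putting those remarks together, a minimal sketch of the original script with the fixes applied (file names taken from the question; the aac audio codec and the duration trim are the parts that matter for QuickTime):

from moviepy.editor import VideoFileClip, AudioFileClip, CompositeAudioClip

videoclip = VideoFileClip("1_0_1522314608.m4v")
audioclip = AudioFileClip("jam_1_Mix.mp3")

# Mix the original audio with the extra track and trim it to the video length.
new_audioclip = CompositeAudioClip([videoclip.audio, audioclip])
new_audioclip = new_audioclip.set_duration(videoclip.duration)

# set_audio() returns a new clip; write it out with aac so QuickTime can play it.
videoclip.set_audio(new_audioclip).write_videofile(
    "new_filename.mp4",
    codec="libx264",
    audio_codec="aac")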

Related

Librosa adds noise to wav data

I was programming a little something that reads a file and plays it back. I need it to use librosa; if that's impossible I might be able to fix it. The simpleaudio bit is easier to replace. This is my code:
from pathlib import Path
import librosa
import simpleaudio as sa

def play_audio(audio, sampling_rate):
    print("PLAYING AUDIO")
    wave_obj = sa.WaveObject(audio, sample_rate=sampling_rate)
    play_obj = wave_obj.play()
    play_obj.wait_done()

in_fpath = Path("trump.wav")
original_wav, sampling_rate = librosa.load(in_fpath)
play_audio(original_wav, int(sampling_rate))
It loads the data and plays it back, but here is the thing: if I play the trump.wav file in Windows in a music player, it sounds like it should. When I play it in Python this way, however, it becomes EXTREMELY noisy. You can still hear what he is saying, but barely. Where is the problem? Librosa or SimpleAudio?
I have a suspicion. librosa.load returns an array with float data but simpleaudio needs integer data. Please try to change the dtype of audio:
import numpy as np
# [...]
audio *= 32767 / np.max(np.abs(audio)) # re-scaling
audio = audio.astype(np.int16) # change data type
wave_obj = sa.WaveObject(audio, sample_rate=sampling_rate)
# [...]
Also see the documentation of simpleaudio:
https://simpleaudio.readthedocs.io/en/latest/tutorial.html#using-numpy
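For completeness, here is a minimal sketch of the script from the question with that conversion applied. It assumes trump.wav is mono; for a stereo file num_channels would have to change:

from pathlib import Path
import numpy as np
import librosa
import simpleaudio as sa

in_fpath = Path("trump.wav")
audio, sampling_rate = librosa.load(in_fpath, sr=None)  # float32 samples in [-1, 1]

# Rescale to the int16 range simpleaudio expects, then convert the dtype.
audio *= 32767 / np.max(np.abs(audio))
audio = audio.astype(np.int16)

wave_obj = sa.WaveObject(audio, num_channels=1, bytes_per_sample=2,
                         sample_rate=int(sampling_rate))
play_obj = wave_obj.play()
play_obj.wait_done()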
I figured out the problem. I just needed to change the WaveObject or librosa.load (I don't remember which) with a byte_rate (something with byte_) set to 4 instead of the default of 2.

Program not playing sound

I am programming a text-based game in Python and I want to play effect sounds over the battle music. I have written the following module to play sounds:
# Music.py:
# V2
import winsound as ws

def playS(a):
    p = "C:\\Users\\user\\OneDrive\\Desktop\\Game\\soundLibrary\\" + a + ".wav"
    ws.PlaySound(p, ws.SND_ASYNC)

def playE(a):
    p = "C:\\Users\\user\\OneDrive\\Desktop\\Game\\soundLibrary\\" + a + ".wav"
    ws.PlaySound(p, ws.SND_NOSTOP)
When the function playE('effect') is called, the following error message is returned:
File "c:\Users\user\OneDrive\Desktop\Game\Music.py", line 10, in playE
ws.PlaySound(p,ws.SND_NOSTOP)
RuntimeError: Failed to play sound
If anyone could say why, it would be appreciated.
(Note: playS() works fine.)
winsound.PlaySound doesn't actually let you play two sounds simultaneously, hence it is returning an error.
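A small sketch of that behaviour, with hypothetical battle.wav and effect.wav files standing in for the game's sound library: a second PlaySound call normally cuts off whatever is already playing, and SND_NOSTOP turns that situation into the RuntimeError shown above instead of interrupting the first sound.

import winsound

# Hypothetical file names; any two .wav files will do.
winsound.PlaySound("battle.wav", winsound.SND_ASYNC)  # starts playing and returns

try:
    # SND_NOSTOP means "do not interrupt the current sound"; because battle.wav
    # is still playing, the call cannot proceed and raises RuntimeError.
    winsound.PlaySound("effect.wav", winsound.SND_NOSTOP)
except RuntimeError:
    print("effect.wav was not played: another sound is still playing")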

Issues capturing from RTSP (Wyze Cam V2) using OpenCV on Raspberry Pi

I am an "advanced beginner" in Python, but a relative newbie with the Raspberry Pi...
What I'm trying to do:
I'm trying to capture a frame from the RTSP stream from a Wyze Cam V2 and save the image to a file. My code works - most of the time. But sometimes it fails for long periods of time. After much experimentation and trial and error I have determined that it is more likely to fail when the camera is in the dark! This seems very consistent.
My Code:
This is not the code from my actual project - it is the code I have been using to troubleshoot.
import cv2
import imageio

class Camera:
    def __init__(self, ipaddress):
        self.ipaddress = ipaddress
        print("About to create VideoStream")
        self.vs = cv2.VideoCapture(ipaddress, cv2.CAP_FFMPEG)
        self.vs.set(cv2.CAP_PROP_BUFFERSIZE, 3)
        if self.vs.isOpened():
            print("Successfully created")
            self.vs.release()
        else:
            print("Unable to create")

    def capture(self):
        self.vs.open(self.ipaddress)
        success, frame = self.vs.read()
        self.vs.release()
        if success:
            print("Capture Success")
            return frame
        else:
            print("Failed to capture")
            print("VideoCapture isOpen is " + str(self.vs.isOpened()))
            return None

    def is_opened(self):
        return self.vs.isOpened()
# In actual code CAMNAME is the camera's name, PASSWORD is the password
# and XXX.XXX.X.XXX is the ip address
camera = Camera("rtsp://CAMNAME:PASSWORD@XXX.XXX.X.XXX/live")
leave = False
while not leave:
    frame = camera.capture()
    if frame is None:
        print("Frame is none")
        print("VideoCapture isOpen is " + str(camera.is_opened()))
    else:
        print("Successful capture - writing to file")
        frame_color = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        imageio.imwrite("test.jpg", frame_color)
    response = input("Capture again? ")
    if len(response) == 0:
        response = "y"
    if response[0] == 'n':
        leave = True
What I Have Tried:
The code works fine when run on Windows 10. So not directly a problem with the code. In fact, when the Pi is having trouble capturing, doing it at the same time on Windows works. So definitely the issue is the Pi interacting with OpenCV or the camera.
Since I had a lot of trouble installing OpenCV:
I have tried re-installing it on a fresh install of the OS with PIP (sudo pip install opencv-contrib-python==4.1.0.25).
I tried to build OpenCV from scratch - took 2.5 days and failed miserably - likely I screwed up somewhere in the process, but don't feel like spending another 2 days doing this.
Finally I downloaded a Raspbian image with OpenCV pre-compiled (https://medium.com/@aadeshshah/pre-installed-and-pre-configured-raspbian-with-opencv-4-1-0-for-raspberry-pi-3-model-b-b-9c307b9a993a). All these install methods resulted in the same issues...
I have tried opening the VideoCapture without specifying cv2.CAP_FFMPEG. I feel like it was more reliable with this option.
I have tried leaving out the change in BUFFERSIZE. I'm not sure this line of code has any effect.
What I am Using:
Raspberry Pi Model B, Rev 2, 512 MB
Raspbian Stretch - though I have had the same issues with Buster.
Wyze Cam II "beta" firmware that provides RTSP support.
Python3
OpenCV 4.1.0 (cv2.__version__)
What happens:
I have been troubleshooting this intermittent problem for some time, and just today realized it always works when the garage (where the camera is located) is lit, and fails when it is dark. (Which made the late-night troubleshooting sessions so frustrating!)
I have had many problems in the past, but now the issue seems to be that if the garage (where the camera is located) is dark, the VideoCapture object will not be created (.isOpened() == False) or the read() method will return False, None.
I used to have a problem with read() returning an old image. I can tell it is old because the camera timestamps the captures. This is why I am always opening and closing the VideoCapture - I would rather it not return an image than return the wrong/old image.
In the past, with slightly different settings, I would get warnings on the screen either during the creation of the VideoCapture object or during the read() command. These are usually along the lines of "[h264 @ 0x1ea1780] error while decoding MB 78 67, bytestream -15". I have gotten different warnings, but I don't have examples right now. If I get a warning, I often get a bad image.
I have also gotten images that are distorted - the bottom of the image (sometimes a few lines, sometimes more than half of the image) looks like it is the same line of data over and over.

What is the simplest way to get from MIDI to real audio coming out my speakers (sound synthesis) in Python?

I'm starting work on an app that will need to create sound from lots of pre-loaded ".mid" files.
I'm using Python and Kivy to create an app, as I have made an app already with these tools and they are the only code I know. The other app I made uses no sound whatsoever.
Naturally, I want to make sure that the code I write will work cross-platform.
Right now, I'm simply trying to prove that I can create any real sound from a midi note.
I took this code, suggested in another answer to a similar question, using FluidSynth and Mingus:
from mingus.midi import fluidsynth
fluidsynth.init('/usr/share/sounds/sf2/FluidR3_GM.sf2',"alsa")
fluidsynth.play_Note(64,0,100)
But I hear nothing and get this error:
fluidsynth: warning: Failed to pin the sample data to RAM; swapping is possible.
Why do I get this error, how do I fix it, and is this the simplest way or even right way?
I could be wrong, but I don't think there is a "0" channel, which is what you are passing as the second argument to .play_Note(). Try this:
fluidsynth.play_Note(64,1,100)
or (from some documentation)
from mingus.containers.note import Note
n = Note("C", 4)
n.channel = 1
n.velocity = 50
fluidsynth.play_Note(n)
UPDATE:
There are references to only channels 1-16 in the source code for that method with the default channel set to 1:
def play_Note(self, note, channel = 1, velocity = 100):
    """Plays a Note object on a channel[1-16] with a \
    velocity[0-127]. You can either specify the velocity and channel \
    here as arguments or you can set the Note.velocity and Note.channel \
    attributes, which will take presedence over the function arguments."""
    if hasattr(note, 'velocity'):
        velocity = note.velocity
    if hasattr(note, 'channel'):
        channel = note.channel
    self.fs.noteon(int(channel), int(note) + 12, int(velocity))
    return True
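As a rough sketch, the original three-line script with those suggestions applied might look like this. The SoundFont path and the "alsa" driver come from the question; the trailing sleep is an extra assumption, since the script exiting immediately can also cut the note short:

import time
from mingus.containers.note import Note
from mingus.midi import fluidsynth

fluidsynth.init('/usr/share/sounds/sf2/FluidR3_GM.sf2', "alsa")

n = Note("E", 4)   # a concrete Note object instead of the raw number 64
n.channel = 1      # mingus channels run 1-16; channel 0 was the suspect
n.velocity = 100
fluidsynth.play_Note(n)

time.sleep(2)      # give the synth time to sound before the process exits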

How to stop Gstreamer from trying to initialize X11

I've been trying to create a simple audio player which I want to run from the command line. For this I've used GStreamer and the pygst Python bindings, and my code so far looks like this:
import pygst
pygst.require('0.10')
import gst
import os

class Player(object):
    mp3stream = "http://http-live.sr.se/p1-mp3-192"

    def __init__(self):
        self.pipeline = gst.Pipeline("RadioPipe")
        self.player = gst.element_factory_make("playbin", "player")
        self.pipeline.add(self.player)
        self.player.set_property('uri', self.mp3stream)
        self.pipeline.set_state(gst.STATE_PLAYING)

player = Player()

while 1:
    if (1 == 2):
        break
Now for some reason, when I run this code I get the following warnings:
** (radio.py:7803): WARNING **: Command line `dbus-launch --autolaunch=f12629ad79391c6f12cbbc1a50ccbcc8 --binary-syntax --close-stderr' exited with non-zero exit status 1: Autolaunch error: X11 initialization failed.\n
I can play music without a problem, but I would very much like to get rid of these warnings. I assume that the GStreamer library for some reason tries to start something that requires X11 but isn't necessary for the audio-playing part. Any comments on the validity of this assumption are most welcome.
Can I import something else or pass some sort of flag to stop Gstreamer from trying to initialize X11?
EDIT 1
I've tried adding this:
fakesink = gst.element_factory_make("fakesink", "fakesink")
self.player.set_property("video-sink", fakesink)
According to the documentation, the above code disables the automatic enabling of video streaming. This does not, however, fix my problem with the warnings.
EDIT 2
Okay, so the element(?) playbin is something like a ready-made pipeline of several audio- and video-related elements; I'm sorry I can't explain it better for now. However, it seems that playbin initializes some elements(?) that try to access X11. I'm guessing that since I'm not playing anything video-related it doesn't crash. I've managed to edit some of the playbin elements(?) but none of them fixes the X11 warning.
The current code looks like this:
self.pipeline = gst.Pipeline("RadioPipe")
self.player = gst.element_factory_make("playbin", "player")
pulse = gst.element_factory_make("pulsesink", "pulse")
fakesink = gst.element_factory_make("fakesink", "fakesink")
self.player.set_property('uri', channel)
self.player.set_property("audio-sink", pulse)
self.player.set_property("video-sink", fakesink)
self.pipeline.add(self.player)
The question mark after "element" is because I'm not sure that is the correct wording.
You should be able to disable the video flag in playbin's flags property. Alternatively, if you do need video and know which video sink you need, set the video-sink property accordingly.
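A minimal sketch of what clearing that flag could look like. It assumes the playbin2 element (which exposes a GstPlayFlags "flags" property in GStreamer 0.10) and the documented value of the video bit; treat it as an illustration of the idea rather than a tested fix:

import pygst
pygst.require('0.10')
import gst

GST_PLAY_FLAG_VIDEO = 1 << 0  # the "render video" bit in GstPlayFlags

player = gst.element_factory_make("playbin2", "player")

# Clear only the video bit so audio decoding and playback are untouched.
flags = player.get_property("flags")
player.set_property("flags", flags & ~GST_PLAY_FLAG_VIDEO)

player.set_property("uri", "http://http-live.sr.se/p1-mp3-192")
player.set_state(gst.STATE_PLAYING)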
