python-vlc: output a video on each of two monitors - python

Using python-vlc in Python, I want to display a video on each of two monitors.
Both videos play successfully, but they both appear on monitor 1.
Is there a way to show the second video on the second monitor?
import vlc

# First player, intended for monitor 1.
p1 = vlc.Instance('--directx-device=\\\\.\\DISPLAY1')
# p1 = vlc.Instance('--qt-fullscreen-screennumber=1')
media = p1.media_new("t1.mp4")
player = p1.media_player_new()
player.set_media(media)
player.video_set_scale(0.5)
player.play()

# Second player, intended for monitor 2.
p2 = vlc.Instance('--directx-device=\\\\.\\DISPLAY2')
# p2 = vlc.Instance('--qt-fullscreen-screennumber=2')
media = p2.media_new("t2.mp4")
player2 = p2.media_player_new()
player2.set_media(media)
player2.video_set_scale(0.5)
player2.play()
I searched and passed options to vlc.Instance, but the behavior stayed the same.
Help me out.
I'm using Python on Windows.
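One commonly suggested workaround (a minimal sketch, not a verified recipe) is to drop the per-display VLC options and instead give each player its own window, positioning the second window on the second monitor. The 960x540 window sizes, the +1920 offset (width of an assumed 1920x1080 primary display), and the use of one shared vlc.Instance are all assumptions; adjust them to your layout:

# Sketch: embed each player in a Tk window and move the second window
# onto monitor 2 by its pixel offset.
import tkinter as tk
import vlc

root = tk.Tk()
root.geometry("960x540+0+0")        # window on monitor 1

second = tk.Toplevel(root)
second.geometry("960x540+1920+0")   # window on monitor 2 (assumed offset)

instance = vlc.Instance()

player1 = instance.media_player_new()
player1.set_media(instance.media_new("t1.mp4"))
player1.set_hwnd(root.winfo_id())   # Windows-specific embedding call
player1.play()

player2 = instance.media_player_new()
player2.set_media(instance.media_new("t2.mp4"))
player2.set_hwnd(second.winfo_id())
player2.play()

root.mainloop()

set_hwnd() is the Windows embedding call; on Linux/X11 the equivalent is set_xwindow().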

Related

MoviePy multiple textClips one after another

I have an audio file and its script file, which looks like this:
One day I was playing Fortnite
solos until I got a strange fend
invites I never seen before I joned
it an he said time to play
a game I sruged it of like
no big deal I regreted doing that
but he kept saying time to play
over and over and over ugen
My goal is to make a video where the voice is followed by text appearing on the screen (the next line appears, the previous one disappears). The way I do it is obviously wrong: it renders all the lines stacked on top of each other at the start of the video. And since I can't know how many lines the script will have, writing texts[0], texts[1], ... by hand is not an option. Please send help!
My code:
videoclip = VideoFileClip("Satisfying Minecraft Parkour.mp4")
audioclip = AudioFileClip("audio.mp3")
new_audioclip = CompositeAudioClip([audioclip])
videoclip.audio = new_audioclip
texts = []
with open('text.txt', 'r') as f:
    for line in f:
        txt_clip = TextClip(line, fontsize=55, color='white')
        txt_clip = txt_clip.set_pos('center')
        txt_clip = txt_clip.set_duration(audio_in_seconds / len(str(text)) * len(line))
        texts.append(txt_clip)
video = CompositeVideoClip([videoclip, texts[0]])
video = CompositeVideoClip([video, texts[1]])
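A minimal sketch of one fix (assuming MoviePy 1.x and, as the original duration formula implies, that the narration covers the lines in proportion to their length): give every TextClip a start time with set_start() and composite the whole list in one call, so it works for any number of lines. File names are the question's; "output.mp4" is a placeholder:

from moviepy.editor import (AudioFileClip, CompositeAudioClip,
                            CompositeVideoClip, TextClip, VideoFileClip)

videoclip = VideoFileClip("Satisfying Minecraft Parkour.mp4")
audioclip = AudioFileClip("audio.mp3")
videoclip.audio = CompositeAudioClip([audioclip])

with open("text.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

# Spread the audio's duration over the lines, proportional to line length.
total_chars = sum(len(line) for line in lines)
texts, start = [], 0.0
for line in lines:
    duration = audioclip.duration * len(line) / total_chars
    txt = (TextClip(line, fontsize=55, color="white")
           .set_position("center")
           .set_start(start)        # the key fix: each line starts when the previous ends
           .set_duration(duration))
    texts.append(txt)
    start += duration

# Compose once with the whole list instead of indexing texts[0], texts[1], ...
video = CompositeVideoClip([videoclip, *texts])
video.write_videofile("output.mp4")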

How to change volume of stem files while playing using python

I'm attempting to write a Python project that plays multiple parts of a song at the same time.
For background, a song is split into "stems", and each stem is played simultaneously to recreate the full song. What I'm trying to achieve is using potentiometers to control the volume of each stem, so that the user can mix the song differently. For a product comparison, think of Kanye West's Stem Player.
I can change the volume of the overlaid song at the end, but what I want is to change the volume of each stem with a potentiometer while the song is playing. Is this even possible using pydub? Below is the code I have right now.
from pydub import AudioSegment
from pydub.playback import play
vocals = AudioSegment.from_file("walkin_vocals.mp3")
drums = AudioSegment.from_file("walkin_drums.mp3")
bass = AudioSegment.from_file("walkin_bass.mp3")
vocalsDrums = vocals.overlay(drums)
bassVocalsDrums = vocalsDrums.overlay(bass)
songQuiet = bassVocalsDrums - 20
play(songQuiet)
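For context on why this can't be live: pydub applies gain per segment (in dB, via + and -) and renders the whole mix before play() starts, so per-stem levels can be set statically but can't follow a potentiometer during playback. A minimal sketch using the question's files (the -6/+3 dB values are arbitrary placeholders):

from pydub import AudioSegment
from pydub.playback import play

vocals = AudioSegment.from_file("walkin_vocals.mp3")
drums = AudioSegment.from_file("walkin_drums.mp3")
bass = AudioSegment.from_file("walkin_bass.mp3")

# Gain is set per segment, in dB, before the overlay is rendered.
mix = (vocals - 6).overlay(drums + 3).overlay(bass)
play(mix)  # the mix is now fixed; nothing can change it mid-playback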
I solved this myself: I ended up using PyAudio instead of pydub.
With PyAudio, I was able to define a custom stream_callback function. Within this callback, I multiply each stem by a modifier, then add the stems together into one audio output.
def callback(in_data, frame_count, time_info, status):
    global drumsMod, vocalsMod, bassMod, otherMod
    drums = drumsWF.readframes(frame_count)
    vocals = vocalsWF.readframes(frame_count)
    bass = bassWF.readframes(frame_count)
    other = otherWF.readframes(frame_count)
    decodedDrums = numpy.frombuffer(drums, numpy.int16)
    decodedVocals = numpy.frombuffer(vocals, numpy.int16)
    decodedBass = numpy.frombuffer(bass, numpy.int16)
    decodedOther = numpy.frombuffer(other, numpy.int16)
    newdata = (decodedDrums * drumsMod + decodedVocals * vocalsMod
               + decodedBass * bassMod + decodedOther * otherMod).astype(numpy.int16)
    return (newdata.tobytes(), pyaudio.paContinue)
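Hypothetical wiring for the callback above. The drumsWF/vocalsWF/bassWF/otherWF names and the *Mod globals are the answer's; the file names and the assumption that the stems were exported as WAV files with identical sample rate, width, and channel count are mine:

import time
import wave
import numpy
import pyaudio

drumsWF = wave.open("walkin_drums.wav", "rb")
vocalsWF = wave.open("walkin_vocals.wav", "rb")
bassWF = wave.open("walkin_bass.wav", "rb")
otherWF = wave.open("walkin_other.wav", "rb")

# Update these from the potentiometers while the stream runs.
drumsMod = vocalsMod = bassMod = otherMod = 1.0

pa = pyaudio.PyAudio()
stream = pa.open(format=pa.get_format_from_width(drumsWF.getsampwidth()),
                 channels=drumsWF.getnchannels(),
                 rate=drumsWF.getframerate(),
                 output=True,
                 stream_callback=callback)  # PyAudio pulls buffers via the callback

while stream.is_active():
    time.sleep(0.1)  # main thread is free to poll potentiometers here
stream.close()
pa.terminate()

The callback runs on PyAudio's audio thread, which is why the main thread stays free to read the potentiometers and update the *Mod globals.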

python, cv2.imshow(), Raspberry Pi and a black screen

Currently I'm trying to write code with a GUI that allows toggling image processing on and off. Ideally the code will allow turning the window view on/off, doing fairly basic real-time image processing, and controlling an external board.
The problem I'm having revolves around the cv2.imshow() function. A few months back I made a push to increase processing rates by switching from picamera to cv2, where I can perform more complex computations like background subtraction without having to call Python all the time. Using the bcm2835-v4l2 package, I was able to pull images directly from the picamera using cv2.
Fast forward 6 months: while trying to update the code, I find that cv2.imshow() no longer displays correctly. I thought it might be a problem with bcm2835-v4l2, but tests using matplotlib show that the connection is fine. It appears to have everything to do with cv2.imshow(), or so I guess.
I am actually creating a separate thread (using the threading module) for image capture, and I wonder if that could be the culprit. I don't think so, though, as typing in the commands
import cv2

camera = cv2.VideoCapture(0)
grabbed, frame = camera.read()
cv2.imshow("frame", frame)
produces the same black screen
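(An aside: cv2.imshow() takes a window name as its first argument, and HighGUI only repaints while cv2.waitKey() pumps its event loop, so an imshow() with no waitKey() call often leaves a black window. A minimal check, assuming camera index 0:)

import cv2

camera = cv2.VideoCapture(0)
grabbed, frame = camera.read()
if grabbed:
    cv2.imshow("test", frame)
    cv2.waitKey(0)  # pumps the HighGUI event loop so the frame actually paints
camera.release()
cv2.destroyAllWindows()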
Down below is the code I am using (on the RPi 3), plus some images showing the error and what is expected.
For reference, here are the details of my system:
Raspberry Pi 3
Raspbian Stretch
Python 3.5.1
OpenCV 3.4.1
Code
import cv2
from threading import Thread
import time
import numpy as np
from tkinter import Button, Label, mainloop, Tk, RIGHT

class GPIOControllersystem:
    def __init__(self, OutPinOne=22, OutPinTwo=27, Objsize=30, src=0):
        self.Objectsize = Objsize
        # Build GUI controller
        self.TK = Tk()  # Place TK GUI class into self
        # Variables
        self.STSP = 0
        self.ShutdownVar = 0
        self.Abut = []
        self.Bbut = []
        self.Cbut = []
        self.Dbut = []
        # setup pi camera for acquisition
        self.resolution = (640, 480)
        self.framerate = 60
        # Video capture parameters
        (w, h) = self.resolution
        self.bytesPerFrame = w * h
        self.Camera = cv2.VideoCapture(src)
        self.fgbg = cv2.createBackgroundSubtractorMOG2()

    def Testpins(self):
        while True:
            grabbed, frame = self.Camera.read()
            frame = self.fgbg.apply(frame)
            if self.ShutdownVar == 1:
                break
            if self.STSP == 1:
                pic1, pic2 = map(np.copy, (frame, frame))
                pic1[pic1 > 126] = 255
                pic2[pic2 < 250] = 0
                frame = pic1
            elif self.STSP == 1:
                time.sleep(1)
            cv2.imshow("Window", frame)
        cv2.destroyAllWindows()

    def MProcessing(self):
        Thread(target=self.Testpins, args=()).start()
        return self

    def BuildGUI(self):
        self.Abut = Button(self.TK, text="Start/Stop System", command=self.CallbackSTSP)
        self.Bbut = Button(self.TK, text="Change Pump Speed", command=self.CallbackShutdown)
        self.Cbut = Button(self.TK, text="Shutdown System", command=self.callbackPumpSpeed)
        self.Dbut = Button(self.TK, text="Start System", command=self.MProcessing)
        self.Abut.pack(padx=5, pady=10, side=RIGHT)
        self.Bbut.pack(padx=5, pady=10, side=RIGHT)
        self.Cbut.pack(padx=5, pady=10, side=RIGHT)
        self.Dbut.pack(padx=5, pady=10, side=RIGHT)
        Label(self.TK, text="Controller").pack(padx=5, pady=10, side=RIGHT)
        mainloop()

    def CallbackSTSP(self):
        if self.STSP == 1:
            self.STSP = 0
            print("stop")
        elif self.STSP == 0:
            self.STSP = 1
            print("start")

    def CallbackShutdown(self):
        self.ShutdownVar = 1

    def callbackPumpSpeed(self):
        pass

if __name__ == "__main__":
    GPIOControllersystem().BuildGUI()
Using matplotlib.pyplot.imshow(), I can see that the connection between the Raspberry Pi camera and OpenCV is working through the bcm2835-v4l2 connection.
However, when using cv2.imshow(), the window is just a black box; nothing is displayed.
Update: while testing, I found something odd when I perform the following task:
import cv2
import matplotlib.pyplot as plt

camera = cv2.VideoCapture(0)
grab, frame = camera.read()
plt.imshow(frame)
grab, frame = camera.read()
plt.imshow(frame)
This update issue was solved and is not related to the main problem: it was a buffering issue and appears to have no correlation to cv2.imshow().
On a Raspberry Pi you should work with
from picamera import PiCamera
Check out PyImageSearch for that.
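A minimal sketch of that route (assuming the legacy picamera stack and a 640x480 capture; note the cv2.waitKey() call, which HighGUI needs in order to repaint):

import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(640, 480), framerate=30)
raw = PiRGBArray(camera, size=(640, 480))

# capture_continuous reuses the same buffer for speed.
for capture in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    cv2.imshow("Window", capture.array)
    raw.truncate(0)  # reset the buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):  # waitKey also pumps the GUI event loop
        break
cv2.destroyAllWindows()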

Pepper Live 2-way Audio Streaming Error

I'm trying to establish real-time audio communication between Pepper's tablet and my PC, using GStreamer. The audio from Pepper's mic to the PC works, but there seems to be no audio from my PC to Pepper's tablet. What am I doing wrong?
PC side:
audio_pipeline = Gst.Pipeline('audio_pipeline')
audio_udpsrc = Gst.ElementFactory.make('udpsrc', None)
audio_udpsrc.set_property('port', args.audio)
audio_caps = Gst.caps_from_string('application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96')
audio_filter = Gst.ElementFactory.make('capsfilter', None)
audio_filter.set_property('caps',audio_caps)
audio_depay = Gst.ElementFactory.make('rtpL16depay', None)
audio_convert = Gst.ElementFactory.make('audioconvert', None)
audio_sink = Gst.ElementFactory.make('alsasink', None)
audio_sink.set_property('sync',False)
audio_pipeline.add(audio_udpsrc,audio_filter,audio_depay,audio_convert,audio_sink)
audio_udpsrc.link(audio_filter)
audio_filter.link(audio_depay)
audio_depay.link(audio_convert)
audio_convert.link(audio_sink)
Robot side (Choregraphe):
audio_src = gst.element_factory_make('autoaudiosrc')
audio_convert = gst.element_factory_make('audioconvert')
audio_caps = gst.caps_from_string('audio/x-raw-int,channels=1,depth=16,width=16,rate=44100')
audio_filter = gst.element_factory_make('capsfilter')
audio_filter.set_property('caps',audio_caps)
# audio_enc = gst.element_factory_make('mad')
audio_pay = gst.element_factory_make('rtpL16pay')
audio_udp = gst.element_factory_make('udpsink')
audio_udp.set_property('host',user_ip)
audio_udp.set_property('port',int(user_audio_port))
self.audio_pipeline.add(audio_src,audio_convert,audio_filter,audio_pay,audio_udp)
gst.element_link_many(audio_src,audio_convert,audio_filter,audio_pay,audio_udp)
or
Robot's side (Python SDK):
GObject.threads_init()
Gst.init(None)
audio_pipeline = Gst.Pipeline('audio_pipeline')
audio_src = Gst.ElementFactory.make('autoaudiosrc')
audio_convert = Gst.ElementFactory.make('audioconvert')
audio_caps = Gst.caps_from_string('audio/x-raw-int,channels=2,depth=16,width=16,rate=44100')
audio_filter = Gst.ElementFactory.make('capsfilter')
audio_filter.set_property('caps',audio_caps)
audio_pay = Gst.ElementFactory.make('rtpL16pay')
audio_udp = Gst.ElementFactory.make('udpsink')
audio_udp.set_property('host',user_ip)
audio_udp.set_property('port',int(user_audio_port))
audio_pipeline.add(audio_src,audio_convert,audio_filter,audio_pay,audio_udp)
audio_src.link(audio_convert)
audio_convert.link(audio_filter)
audio_filter.link(audio_pay)
audio_pay.link(audio_udp)
audio_pipeline.set_state(Gst.State.PLAYING)
Computer's mic to Pepper:
audio_port = 80
s_audio_pipeline = Gst.Pipeline('s_audio_pipeline')
s_audio_src = Gst.ElementFactory.make('autoaudiosrc')
s_audio_convert = Gst.ElementFactory.make('audioconvert')
s_audio_caps = Gst.caps_from_string('audio/x-raw-int,channels=2,depth=16,width=16,rate=44100')
s_audio_filter = Gst.ElementFactory.make('capsfilter')
s_audio_filter.set_property('caps',s_audio_caps)
s_audio_pay = Gst.ElementFactory.make('rtpL16pay')
s_audio_udp = Gst.ElementFactory.make('udpsink')
s_audio_udp.set_property('host',ip)
s_audio_udp.set_property('port',int(audio_port))
s_audio_pipeline.add(s_audio_src,s_audio_convert,s_audio_filter,s_audio_pay,s_audio_udp)
s_audio_src.link(s_audio_convert)
s_audio_convert.link(s_audio_filter)
s_audio_filter.link(s_audio_pay)
s_audio_pay.link(s_audio_udp)
Pepper receiving:
audio = 80
r_audio_pipeline = Gst.Pipeline('r_audio_pipeline')
#defining audio pipeline attributes
r_audio_udpsrc = Gst.ElementFactory.make('udpsrc', None)
r_audio_udpsrc.set_property('port', audio)
r_audio_caps = Gst.caps_from_string('application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)2, format=(string)S16LE, channel-positions=(int)1, payload=(int)96')
r_audio_filter = Gst.ElementFactory.make('capsfilter', None)
r_audio_filter.set_property('caps',r_audio_caps)
r_audio_depay = Gst.ElementFactory.make('rtpL16depay', None)
r_audio_convert = Gst.ElementFactory.make('audioconvert', None)
r_audio_sink = Gst.ElementFactory.make('alsasink', None)
r_audio_sink.set_property('sync',False)
#linking the various attributes
r_audio_pipeline.add(r_audio_udpsrc,r_audio_filter,r_audio_depay,r_audio_convert,r_audio_sink)
r_audio_udpsrc.link(r_audio_filter)
r_audio_filter.link(r_audio_depay)
r_audio_depay.link(r_audio_convert)
r_audio_convert.link(r_audio_sink)
r_audio_pipeline.set_state(Gst.State.PLAYING)
I think there might be a problem with Pepper's receiving port number. I tried different port numbers (including 9559), but nothing seemed to work. Is the source ID wrong?
Is it possible to run the 2-way stream in the same pipeline?
I took a look at other libraries like FFmpeg and PyAudio, but I couldn't find any method for live streaming.
Make sure you run the Python script on the robot.
Also, did you run the GMainLoop?
Choregraphe behaviors run in NAOqi, and NAOqi already runs a GMainLoop in the background. Maybe this is what is missing in your stand-alone script.
Finally, your snippets show no code meant to take the PC's audio to the network, nor from the network to Pepper's speakers.
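For a stand-alone receiver, a minimal sketch of running a GLib main loop alongside the pipeline (the caps are condensed from the question's receiver; port 80 is the question's value and would normally require elevated privileges):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# L16 RTP audio, mono, 44.1 kHz, condensed from the question's caps string.
pipeline = Gst.parse_launch(
    'udpsrc port=80 caps="application/x-rtp,media=audio,clock-rate=44100,'
    'encoding-name=L16,channels=1,payload=96" '
    '! rtpL16depay ! audioconvert ! alsasink sync=false'
)
pipeline.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()  # blocks; services GStreamer bus messages and timers
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)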

Why is the Windows prompt unresponsive after using a python curses window?

First of all, I'm a newbie in Python. I had to take a course on it in college and got hooked by its efficiency.
I have a sticky problem where the Windows 7 prompt becomes unresponsive after using a curses window. On Windows 10 it works well. Note that I'm using the Win7 terminal with its default settings. In my code I create a curses window to show 2 simultaneous progress bars, one per file download. I implemented this by passing the curses window to a FileDownload class (one instance per download) that handles its progress bar inside the window. Oddly, on Windows 7, when the downloads are done and control returns to the prompt, the prompt becomes unresponsive to the keyboard. I worked around this by invoking curses.endwin() after using the window, but that causes the prompt to display all the way down the screen buffer, which hides the curses window.
Here is my code. Any ideas are greatly appreciated. Thanks!
# Skeleton version for simulations.
# Downloads 2 files simultaneously and shows a progress bar for each.
# Each file download is a FileDownload object that interacts with a
# common curses window passed as an argument.
import requests, math, threading, curses, datetime

class FileDownload:
    def __init__(self, y_pos, window, url):
        # Y position of the progress bar in the download queue window.
        self.__bar_pos = int(y_pos)
        self.__progress_window = window
        self.__download_url = url
        # Status of the file download object.
        self.__status = "queued"
        t = threading.Thread(target=self.__file_downloader)
        t.start()

    # Downloads selected file and handles its progress bar.
    def __file_downloader(self):
        file = requests.get(self.__download_url, stream=True)
        self.__status = "downloading"
        self.__progress_window.addstr(self.__bar_pos + 1, 1, "0%" + " " * 60 + "100%")
        size = int(file.headers.get('content-length'))
        win_prompt = "Downloading " + format(size, ",d") + " Bytes:"
        self.__progress_window.addstr(self.__bar_pos, 1, win_prompt)
        file_name = str(datetime.datetime.now().strftime("%Y-%m-%d_%H.%M.%d"))
        dump = open(file_name, "wb")
        # Progress bar length.
        bar_space = 58
        # Same as an index.
        current_iteration = 0
        # Beginning position of the progress bar.
        progress_position = 4
        # How many iterations will be needed (in chunks of 1 MB).
        iterations = math.ceil(size / 1024 ** 2)
        # Downloads the file in 1MB chunks.
        for block in file.iter_content(1024 ** 2):
            dump.write(block)
            # Progress bar controller.
            current_iteration += 1
            step = math.floor(bar_space / iterations)
            if current_iteration > 1:
                progress_position += step
            if current_iteration == iterations:
                step = bar_space - step * (current_iteration - 1)
            # Updates the progress bar.
            self.__progress_window.addstr(self.__bar_pos + 1, progress_position,
                                          "#" * step)
        dump.close()
        self.__status = "downloaded"

    # Returns the current status of the file download ("queued", "downloading" or
    # "downloaded").
    def get_status(self):
        return self.__status

# Instantiates each file download.
def files_downloader():
    # Creates curses window.
    curses.initscr()
    win = curses.newwin(8, 70)
    win.border(0)
    win.immedok(True)
    # Download URLs.
    urls = ["http://ipv4.download.thinkbroadband.com/10MB.zip",
            "http://ipv4.download.thinkbroadband.com/5MB.zip"]
    downloads_dct = {}
    for n in range(len(urls)):
        # Progress bar position in the window for the file.
        y_pos = n * 4 + 1
        downloads_dct[n + 1] = FileDownload(y_pos, win, urls[n])
    # Waits for all files to be downloaded before passing control of the terminal
    # to the user.
    all_downloaded = False
    while not all_downloaded:
        all_downloaded = True
        for key, file_download in downloads_dct.items():
            if file_download.get_status() != "downloaded":
                all_downloaded = False
    # Prevents the prompt from returning inside the curses window.
    win.addstr(7, 1, "-")
    # This solves the unresponsive prompt issue but hides the curses window
    # if the screen buffer is higher than the window size.
    # curses.endwin()

while input("\nEnter to continue: ") == "":
    files_downloader()
Perhaps you're using Cygwin (and ncurses): ncurses (like any other curses implementation) changes the terminal I/O mode while it is running. The changes you are probably seeing are:
input characters are not echoed
you have to type control-J to end an input line, rather than just Enter
output is not flushed automatically at the end of each line
It makes those changes to allow it to read single characters and to use the terminal more efficiently.
To change back to the terminal's normal I/O mode, you would use the endwin function. The reset_shell_mode function would also be useful.
Further reading:
endwin (ncurses manual)
reset_shell_mode (ncurses manual)
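For instance, a minimal sketch of the endwin cleanup applied to the question's structure (the addstr/getch lines are placeholders for the download loop); the try/finally guarantees the terminal mode is restored even if a download raises:

import curses

curses.initscr()
try:
    win = curses.newwin(8, 70)
    win.border(0)
    win.immedok(True)
    win.addstr(1, 1, "downloads run here...")  # placeholder for the download loop
    win.getch()
finally:
    curses.endwin()  # restores the shell's normal I/O mode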
