ffmpeg - Poll folder for files, and stream as video with rtp - python

(I'm a newbie when it comes to ffmpeg).
I have an image source which saves files to a given folder at a rate of 30 fps. I want to wait for every (let's say) 30-frame chunk, encode it to h264 and stream it with RTP to some other app.
I thought about writing a Python app which just waits for the images, and then executes an ffmpeg command. For that I wrote the following code:
main.py:
import os
import Helpers
import argparse
import IniParser
import subprocess
from functools import partial
from Queue import Queue
from threading import Semaphore, Thread

def Run(config):
    os.chdir(config.Workdir)
    iteration = 1
    q = Queue()
    Thread(target=RunProcesses, args=(q, config.AllowedParallelRuns)).start()
    while True:
        Helpers.FileCount(config.FramesPathPattern, config.ChunkSize * iteration)
        command = config.FfmpegCommand.format(startNumber=(iteration - 1) * config.ChunkSize, vFrames=config.ChunkSize)
        runFunction = partial(subprocess.Popen, command)
        q.put(runFunction)
        iteration += 1

def RunProcesses(queue, semaphoreSize):
    semaphore = Semaphore(semaphoreSize)
    while True:
        runFunction = queue.get()
        Thread(target=HandleProcess, args=(runFunction, semaphore)).start()

def HandleProcess(runFunction, semaphore):
    semaphore.acquire()
    p = runFunction()
    p.wait()
    semaphore.release()

if __name__ == '__main__':
    argparser = argparse.ArgumentParser()
    argparser.add_argument("config", type=str, help="Path for the config file")
    args = argparser.parse_args()
    iniFilePath = args.config
    config = IniParser.Parse(iniFilePath)
    Run(config)
Helpers.py (not really relevant):
import os
import time
from glob import glob

def FileCount(pattern, count):
    # Block until at least `count` files matching `pattern` exist and are all closed.
    count = int(count)
    lastCount = 0
    while True:
        currentCount = glob(pattern)
        if lastCount != currentCount:
            lastCount = currentCount
            if len(currentCount) >= count and all([CheckIfClosed(f) for f in currentCount]):
                break
        time.sleep(0.05)

def CheckIfClosed(filePath):
    # A file that can be renamed onto itself is no longer held open by the writer.
    try:
        os.rename(filePath, filePath)
        return True
    except OSError:
        return False
I used the following config file:
Workdir = "C:\Developer\MyProjects\Streaming\OutputStream\PPM"
; Workdir is the directory of reference from which all paths are relative to.
; You may still use full paths if you wish.
FramesPathPattern = "F*.ppm"
; The path pattern (wildcards allowed) where the rendered images are stored to.
; We use this pattern to detect how many rendered images are available for streaming.
; When a chunk of frames is ready - we stream it (or store to disk).
ChunkSize = 30 ; Number of frames for bulk.
; ChunkSize sets the number of frames we need to wait for, in order to execute the ffmpeg command.
; If the folder already contains several chunks, it will first process the first chunk, then second, and so on...
AllowedParallelRuns = 1 ; Number of parallel allowed processes of ffmpeg.
; This sets how many parallel ffmpeg processes are allowed.
; If more than one chunk is available in the folder for processing, we will execute several ffmpeg processes in parallel.
; Only when one of the processes finishes will another process be allowed to start.
FfmpegCommand = "ffmpeg -re -r 30 -start_number {startNumber} -i F%08d.ppm -vframes {vFrames} -vf vflip -f rtp rtp://127.0.0.1:1234" ; Command to execute when a chunk is ready for streaming.
; Once a chunk is ready for processing, this is the command that will be executed (same as running it from the terminal).
; There is, however, a minor difference. Since every chunk starts with a different frame number, you can use the
; expression "{startNumber}", which automatically takes the value of the matching start frame number.
; You can also use "{vFrames}" as an expression for the ChunkSize which was set above in the "ChunkSize" entry.
Please note that if I set "AllowedParallelRuns = 2" then it allows multiple ffmpeg processes to run simultaneously.
I then tried to play it with ffplay and see if I'm doing it right.
The first chunk was streamed fine. The following chunks weren't so great. I got a lot of [sdp # 0000006de33c9180] RTP: dropping old packet received too late messages.
What should I do to get ffplay to play the stream in the order of the incoming images? Is it right to run parallel ffmpeg processes? Is there a better solution to my problem?
Thank you!

As I stated in the comment, since you rerun ffmpeg each time, the PTS values are reset, but the client perceives this as a single continuous ffmpeg stream and thus expects increasing PTS values.
As I said, you could use an ffmpeg Python wrapper to control the streaming yourself, but that is quite an amount of code. There is, however, a dirty workaround.
Apparently there is an -itsoffset parameter with which you can offset the input timestamps (see the FFmpeg documentation). Since you know and control the rate, you could pass an increasing value with this parameter, so that each successive stream is offset by the proper duration. E.g. if you stream 30 frames each time and you know the fps is 30, those 30 frames span one second, so on each call to ffmpeg you would increase the -itsoffset value by one second, which should then be added to the output PTS values. But I can't guarantee this works.
Since the idea with -itsoffset did not work, you could also try feeding the images via stdin to ffmpeg - see this link.
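For the stdin route, here is a rough, untested sketch of what I mean, reusing the folder-polling helper and the F%08d.ppm pattern from the question: keep one ffmpeg process alive for the whole session and pipe each PPM file into its stdin via image2pipe, so the PTS keeps increasing across chunks and ffplay never sees a restart.
import subprocess
import Helpers

# One long-lived ffmpeg process; image2pipe reads concatenated PPM frames from stdin.
# The rate, vflip filter and RTP target mirror the command from the config file.
command = ("ffmpeg -re -f image2pipe -vcodec ppm -framerate 30 -i - "
           "-vf vflip -f rtp rtp://127.0.0.1:1234")
proc = subprocess.Popen(command.split(), stdin=subprocess.PIPE)

chunkSize = 30
iteration = 1
while True:
    # Wait for the next chunk to be fully written, exactly as the polling code above does.
    Helpers.FileCount("F*.ppm", chunkSize * iteration)
    start = (iteration - 1) * chunkSize
    for n in range(start, start + chunkSize):
        with open("F{:08d}.ppm".format(n), "rb") as frame:
            proc.stdin.write(frame.read())
    iteration += 1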

Related

Python multiprocessing progress approach

I've been busy writing my first multiprocessing code and it works, yay.
However, now I would like some feedback on the progress, and I'm not sure what the best approach would be.
What my code (see below) does in short:
A target directory is scanned for mp4 files
Each file is analysed by a separate process, the process saves a result (an image)
What I'm looking for could be:
Simple
Each time a process finishes a file it sends a 'finished' message
The main code keeps count of how many files have finished
Fancy
Core 0 processing file 20 of 317 ||||||____ 60% completed
Core 1 processing file 21 of 317 |||||||||_ 90% completed
...
Core 7 processing file 18 of 317 ||________ 20% completed
I read all kinds of info about queues, pools, tqdm and I'm not sure which way to go. Could anyone point to an approach that would work in this case?
Thanks in advance!
EDIT: Changed my code that starts the processes as suggested by gsb22
My code:
# file operations
import os
import glob
# Multiprocessing
from multiprocessing import Process
# Motion detection
import cv2

# >>> Enter directory to scan as target directory
targetDirectory = "E:\Projects\Programming\Python\OpenCV\\videofiles"

def get_videofiles(target_directory):
    # Find all video files in directory and subdirectories and put them in a list
    videofiles = glob.glob(target_directory + '/**/*.mp4', recursive=True)
    # Return the list
    return videofiles

def process_file(videofile):
    '''
    What happens inside this function:
    - The video is processed and analysed using openCV
    - The result (an image) is saved to the results folder
    - Once this function receives the videofile it completes
      without the need to return anything to the main program
    '''
    # The processing code is more complex than this code below, this is just a test
    cap = cv2.VideoCapture(videofile)
    for i in range(10):
        success, frame = cap.read()
        # cv2.imwrite('{}/_Results/{}_result{}.jpg'.format(targetDirectory, os.path.basename(videofile), i), frame)
        if success:
            try:
                cv2.imwrite('{}/_Results/{}_result_{}.jpg'.format(targetDirectory, os.path.basename(videofile), i), frame)
            except:
                print('something went wrong')

if __name__ == "__main__":
    # Create directory to save results if it doesn't exist
    if not os.path.exists(targetDirectory + '/_Results'):
        os.makedirs(targetDirectory + '/_Results')

    # Get a list of all video files in the target directory
    all_files = get_videofiles(targetDirectory)
    print(f'{len(all_files)} video files found')

    # Create list of jobs (processes)
    jobs = []

    # Create and start processes
    for file in all_files:
        proc = Process(target=process_file, args=(file,))
        jobs.append(proc)

    for job in jobs:
        job.start()

    for job in jobs:
        job.join()

    # TODO: Print some form of progress feedback
    print('Finished :)')
Here's a very simple way to get progress indication at minimal cost:
from multiprocessing.pool import Pool
from random import randint
from time import sleep

from tqdm import tqdm

def process(fn) -> bool:
    sleep(randint(1, 3))
    return randint(0, 100) < 70

files = [f"file-{i}.mp4" for i in range(20)]

success = []
failed = []
NPROC = 5

pool = Pool(NPROC)
for status, fn in tqdm(zip(pool.imap(process, files), files), total=len(files)):
    if status:
        success.append(fn)
    else:
        failed.append(fn)

print(f"{len(success)} succeeded and {len(failed)} failed")
Some comments:
tqdm is a 3rd-party library which implements progressbars extremely well. There are others. pip install tqdm.
we use a pool (there's almost never a reason to manage processes yourself for simple things like this) of NPROC processes. We let the pool handle iterating our process function over the input data.
we signal state by having the function return a boolean (in this example we choose randomly, weighting in favour of success). We don't return the filename, although we could, because it would have to be serialised and sent from the subprocess, and that's unnecessary overhead.
we use Pool.imap, which returns an iterator that keeps the same order as the iterable we pass in. So we can use zip to iterate files directly. Since we use an iterator of unknown size, tqdm needs to be told how long it is. (We could have used pool.map, but there's no need to hold all the results in RAM at once, although for one bool it probably makes no difference.)
I've deliberately written this as a kind of recipe. You can do a lot with multiprocessing just by using the high-level drop-in paradigms, and Pool.[i]map is one of the most useful.
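As an illustration of how the recipe might map onto the question's code (an assumption on my part: process_file and get_videofiles are defined at module level as shown above), a thin wrapper can turn each file into a True/False status for the progress bar:
from multiprocessing.pool import Pool
from tqdm import tqdm

def process_wrapper(videofile) -> bool:
    # Reuses the asker's process_file; only a success flag crosses the process boundary.
    try:
        process_file(videofile)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    all_files = get_videofiles(targetDirectory)
    with Pool(8) as pool:
        statuses = list(tqdm(pool.imap(process_wrapper, all_files), total=len(all_files)))
    print(f"{sum(statuses)} of {len(statuses)} files processed")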
References
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool
https://tqdm.github.io/

Python: Eliminating gaps between segments of recorded audio

I am using the Python sounddevice library to record audio, but I can't seem to eliminate ~0.25 to ~0.5 second gaps between what should be consecutive audio files. I think this is because the file writing takes up time, so I learned to use multiprocessing and queues to separate out the file writing, but it hasn't helped. The most confusing thing is that the logs suggest that the iterations in main()'s loop are near-gapless (only 1-5 milliseconds), yet mysteriously the audio_capture function takes longer than expected even though nothing else significant is being done. I tried to reduce the script as much as possible for this post. My research has all pointed to this threading/multiprocessing approach, so I am flummoxed.
Background: Python 3.7 on Raspbian Buster
I am dividing the data into segments so that the files are not too big, and I imagine many programs must deal with this challenge. I also have 4 other subprocesses doing various things afterwards.
Log: The audio_capture part should only take 10:00
08:26:29.991 --- Start of segment #0
08:36:30.627 --- End of segment #0 <<<<< This is >0.6 s later than it should be
08:36:30.629 --- Start of segment #1 <<<<< This is near gapless with the prior event
Script:
import logging
import sounddevice
from scipy.io.wavfile import write
import time
import os
from multiprocessing import Queue, Process

# this process is a near endless loop
def main():
    fileQueue = Queue()
    writerProcess = Process(target=writer, args=(fileQueue,))
    writerProcess.start()
    for i in range(9000):
        fileQueue.put(audio_capture(i))
    writerProcess.join()

# This func makes an audio data object from a sound source
def audio_capture(i):
    cycleNumber = str(i)
    logging.debug('Start of segment #' + cycleNumber)
    # each cycle is 10 minutes at 32000Hz sample rate
    audio = sounddevice.rec(frames=600 * 32000, samplerate=32000, channels=2)
    name = time.strftime("%H-%M-%S") + '.wav'
    path = os.path.join('/audio', name)
    sounddevice.wait()
    logging.debug('End of segment #' + cycleNumber)
    return [audio, path]

# This function writes the files.
def writer(input_queue):
    while True:
        try:
            parameters = input_queue.get()
            audio = parameters[0]
            path = parameters[1]
            write(filename=path, rate=32000, data=audio)
            logging.debug('File is written')
        except:
            pass

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s.%(msecs)03d --- %(message)s', datefmt='%H:%M:%S',
                        handlers=[logging.FileHandler('/audio/log.txt'), logging.StreamHandler()])
    main()
The documentation tells us that sounddevice.rec() is not meant for gapless recording:
If you need more control (e.g. block-wise gapless recording, overlapping recordings, …), you should explicitly create an InputStream yourself. If NumPy is not available, you can use a RawInputStream.
There are multiple examples for gapless recording in the example programs.
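A rough, untested sketch of what that might look like with sounddevice.InputStream, reusing the question's sample rate, channel count, segment length and /audio directory (the output file naming is my own assumption): the PortAudio callback only copies blocks into a queue, while the main loop assembles and writes 10-minute files, so recording never pauses for disk I/O.
import queue

import numpy as np
import sounddevice as sd
from scipy.io.wavfile import write

SAMPLERATE = 32000
CHANNELS = 2
SEGMENT_FRAMES = 600 * SAMPLERATE  # 10 minutes per file, as in the question

blocks = queue.Queue()

def callback(indata, frames, time, status):
    if status:
        print(status)
    # copy, because PortAudio reuses the buffer behind indata
    blocks.put(indata.copy())

with sd.InputStream(samplerate=SAMPLERATE, channels=CHANNELS, callback=callback):
    buffered = []
    buffered_frames = 0
    segment_no = 0
    while True:
        block = blocks.get()
        buffered.append(block)
        buffered_frames += len(block)
        if buffered_frames >= SEGMENT_FRAMES:
            data = np.concatenate(buffered)
            write('/audio/segment_{}.wav'.format(segment_no), SAMPLERATE, data[:SEGMENT_FRAMES])
            # keep the overflow for the next segment so no samples are lost
            buffered = [data[SEGMENT_FRAMES:]]
            buffered_frames = len(buffered[0])
            segment_no += 1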
Use PyAudio and open a non-blocking audio stream. You can find a very good basic example on the PyAudio documentation front page. Choose a buffer size; I recommend 512 or 1024. Now just append the incoming data to a numpy array. I sometimes store up to 30 seconds of audio in one numpy array. When reaching the end of a segment, create another empty numpy array and start over. Create a thread and save the first segment somewhere. The recording will continue and not one sample will be dropped ;)
Edit: if you want to write 10 mins in one file, I would suggest just creating 10 arrays of 1 minute each and then appending and saving them.
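A minimal sketch of that PyAudio callback approach (untested; the sample rate and channel count come from the question, the rest are assumptions, and the blocks are collected in a plain Python list rather than grown into one numpy array):
import numpy as np
import pyaudio

RATE = 32000
CHANNELS = 2
CHUNK = 1024                # buffer size, as recommended above

segments = [[]]             # list of segments; the last one is currently being filled

def callback(in_data, frame_count, time_info, status):
    # runs on PyAudio's own thread; just stash the raw interleaved int16 buffer
    segments[-1].append(np.frombuffer(in_data, dtype=np.int16))
    return (None, pyaudio.paContinue)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                 input=True, frames_per_buffer=CHUNK, stream_callback=callback)
stream.start_stream()

# When a segment is long enough, start a new one and hand the finished one to a
# saving thread, e.g.:
#   segments.append([])
#   finished = np.concatenate(segments[-2]).reshape(-1, CHANNELS)
#   ... write `finished` to a .wav file in a separate thread ...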

Speeding up process speed of file downloads from the web

I'm writing a program that has to download a bunch of files from the web before it can even run, so I created a function called init_program that downloads all the files and "initializes" the program. It runs through a couple of dicts that hold URLs to gist files on GitHub, pulls the URLs, and uses urllib2 to download them. I won't be able to add all the files here, but you can try it out by cloning the repo here. Here's the function that creates the files from the gists:
def init_program():
    """ Initialize the program and allow all the files to be downloaded
    This will take awhile to process, but I'm working on the processing
    speed """
    downloaded_wordlists = []  # Used to count the amount of items downloaded
    downloaded_rainbow_tables = []
    print("\n")
    banner("Initializing program and downloading files, this may take awhile..")
    print("\n")
    # INIT_FILE is a file that will contain "false" if the program is not initialized
    # And "true" if the program is initialized
    with open(INIT_FILE) as data:
        if data.read() == "false":
            for item in GIST_DICT_LINKS.keys():
                sys.stdout.write("\rDownloading {} out of {} wordlists.. ".format(len(downloaded_wordlists) + 1,
                                                                                  len(GIST_DICT_LINKS.keys())))
                sys.stdout.flush()
                new_wordlist = open("dicts/included_dicts/wordlists/{}.txt".format(item), "a+")
                # Download the wordlists and save them into a file
                wordlist_data = urllib2.urlopen(GIST_DICT_LINKS[item])
                new_wordlist.write(wordlist_data.read())
                downloaded_wordlists.append(item + ".txt")
                new_wordlist.close()
            print("\n")
            banner("Done with wordlists, moving to rainbow tables..")
            print("\n")
            for table in GIST_RAINBOW_LINKS.keys():
                sys.stdout.write("\rDownloading {} out of {} rainbow tables".format(len(downloaded_rainbow_tables) + 1,
                                                                                    len(GIST_RAINBOW_LINKS.keys())))
                new_rainbowtable = open("dicts/included_dicts/rainbow_tables/{}.rtc".format(table), "a+")
                # Download the rainbow tables and save them into a file
                rainbow_data = urllib2.urlopen(GIST_RAINBOW_LINKS[table])
                new_rainbowtable.write(rainbow_data.read())
                downloaded_rainbow_tables.append(table + ".rtc")
                new_rainbowtable.close()
            open(INIT_FILE, "w").write("true")  # Will never be initialized again
        else:
            pass
    return downloaded_wordlists, downloaded_rainbow_tables
This works, yes, but it's extremely slow due to the size of the files; each file has at least 100,000 lines in it. How can I speed this function up and make it more user friendly?
Some weeks ago I faced a similar situation where I needed to download many huge files, but all the simple pure-Python solutions that I found were not good enough in terms of download optimization. So I found Axel, a light command-line download accelerator for Linux and Unix.
What is Axel?
Axel tries to accelerate the downloading process by using multiple
connections for one file, similar to DownThemAll and other famous
programs. It can also use multiple mirrors for one download.
Using Axel, you will get files faster from Internet. So, Axel can
speed up a download up to 60% (approximately, according to some
tests).
Usage: axel [options] url1 [url2] [url...]
--max-speed=x -s x Specify maximum speed (bytes per second)
--num-connections=x -n x Specify maximum number of connections
--output=f -o f Specify local output file
--search[=x] -S [x] Search for mirrors and download from x servers
--header=x -H x Add header string
--user-agent=x -U x Set user agent
--no-proxy -N Just don't use any proxy server
--quiet -q Leave stdout alone
--verbose -v More status information
--alternate -a Alternate progress indicator
--help -h This information
--version -V Version information
As Axel is written in C and there's no C extension for Python, I used the subprocess module to execute it externally, and it works perfectly for me.
You can do something like this:
cmd = ['/usr/local/bin/axel', '-n', str(n_connections), '-o',
       "{0}".format(filename), url]
process = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
You can also track the progress of each download by parsing its stdout.
while True:
    line = process.stdout.readline()
    progress = YOUR_GREAT_REGEX.match(line).groups()
    ...
You're blocking whilst you wait for each download. So the total time is the sum of the round trip time for each download. Your code will likely spend a lot of time waiting for the network traffic. One way to improve this is not to block whilst you wait for each response. You can do this in several ways. For example by handing off each request to a separate thread (or process), or by using an event loop and coroutines. Read up on the threading and asyncio modules.
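For example, a rough sketch with a thread pool (using concurrent.futures, which builds on the threading module mentioned above; the GIST_DICT_LINKS dict and output paths come from the question, and Python 3's urllib.request stands in for urllib2):
import urllib.request
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_wordlist(item):
    # each call blocks on the network, but many run at once in the pool
    data = urllib.request.urlopen(GIST_DICT_LINKS[item]).read()
    with open("dicts/included_dicts/wordlists/{}.txt".format(item), "wb") as f:
        f.write(data)
    return item + ".txt"

downloaded = []
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(download_wordlist, item) for item in GIST_DICT_LINKS]
    for done in as_completed(futures):
        downloaded.append(done.result())
        print("Downloaded {} of {} wordlists".format(len(downloaded), len(futures)))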

Synchronize a Linux system command and a while-loop in Python

On the Raspberry Pi I have to synchronize a Raspbian system command (raspivid -t 20000) with a while loop that reads continuously from a sensor and stores samples in an array. The Raspbian command starts a video recording with the Raspberry Pi camera CSI module, and I have to be sure that it starts at the same instant as the acquisition by the sensor. I have seen many solutions that confused me, involving modules like multiprocessing, threading, subprocess, etc. So far the only thing I have understood is that the os.system() function blocks execution of the following Python commands in the script as long as it runs. So if I try:
import os
import numpy as np

os.system("raspivid -t 20000 /home/pi/test.h264")
# memory pre-allocation, supposing I have to save 20000 samples from the sensor
# (1 for each millisecond of the video)
data = np.zeros(20000, dtype="float")
indx = 0
while True:
    # readbysensor() is defined earlier in the script and reads a sample from the sensor
    sens = readbysensor()
    data[indx] = sens
    if indx == 19999:
        break
    else:
        indx += 1
then the while loop will run only after the os.system() call finishes. But as I wrote above, I need the two processes to be synchronized and to work in parallel. Any suggestions?
Just add an & at the end, to make the process detach to the background:
os.system("raspivid -t 20000 /home/pi/test.h264 &")
According to bash man pages:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
Also, if you want to minimize the time it takes for the loop to start after executing raspivid, you should allocate your data and indx prior to the call:
data = np.zeros(20000, dtype="float")
indx = 0
os.system("raspivid -t 20000 /home/pi/test.h264 &")
while True:
    # ....
Update:
Since we discussed further in the comments, it is clear that there is really no need to start the loop "at the same time" as raspivid (whatever that might mean), because if you are trying to read data from the I2C bus and make sure you don't miss any data, you will be best off starting the reading operation prior to running raspivid. This way you are certain that in the meantime (however big a delay there is between those two executions) you are not missing any data.
Taking this into consideration, your code could look something like this:
data = np.zeros(20000, dtype="float")
indx = 0
os.system("(sleep 1; raspivid -t 20000 /home/pi/test.h264) &")
while True:
    # ....
This is the simplest version in which we add a delay of 1 second before running raspivid, so we have time to enter our while loop and start waiting for I2C data.
This works, but it is hardly production-quality code. For a better solution, run the data acquisition function in one thread and raspivid in a second thread, preserving the launch order (the reading thread is started first).
Something like this:
import Queue
import threading
import os

# we will store all data in a Queue so we can process
# it at a custom speed, without blocking the reading
q = Queue.Queue()

# thread for getting the data from the sensor
# it puts the data in a Queue for processing
def get_data(q):
    for cnt in xrange(20000):
        # assuming readbysensor() is a
        # blocking function
        sens = readbysensor()
        q.put(sens)

# thread for processing the results
def process_data(q):
    for cnt in xrange(20000):
        data = q.get()
        # do something with data here
        q.task_done()

t_get = threading.Thread(target=get_data, args=(q,))
t_process = threading.Thread(target=process_data, args=(q,))
t_get.start()
t_process.start()

# when everything is set and ready, run the raspivid
os.system("raspivid -t 20000 /home/pi/test.h264 &")

# wait for the threads to finish
t_get.join()
t_process.join()

# at this point all processing is completed
print "We are all done!"
You could rewrite your code as:
import subprocess
import numpy as np

n = 20000
p = subprocess.Popen(["raspivid", "-t", str(n), "/home/pi/test.h264"])
data = np.fromiter(iter(readbysensor, None), dtype=float, count=n)
subprocess.Popen() returns immediately without waiting for raspivid to end, while iter(readbysensor, None) keeps calling readbysensor() and np.fromiter collects the first n samples into the array.

Hadoop streaming --- expensive shared resource (COOL)

I am looking for a nice pattern for Python Hadoop streaming that involves loading an expensive resource, for example a pickled Python object, on the server. Here is what I came up with; I've tested it by piping input files and slow-running programs directly into the script in bash, but haven't yet run it on a Hadoop cluster. For you Hadoop wizards: am I handling IO such that this will work as a Python streaming job? I guess I'll go spin up something on Amazon to test, but it would be nice if someone knew off the top of their head.
you can test it out via cat file.txt | the_script or ./a_streaming_program | the_script
#!/usr/bin/env python

import sys
import time

def resources_for_many_lines():
    # load slow, shared resources here
    # for example, a shared pickle file
    # in this example we use a 1 second sleep to simulate
    # a long data load
    time.sleep(1)
    # we will pretend the value zero is the product
    # of our long slow running import
    resource = 0
    return resource

def score_a_line(line, resources):
    # put fast code to score a single example line here
    # in this example we will return the value of resource + 1
    return resources + 1

def run():
    # here is the code that reads stdin and scores the model over a streaming data set
    resources = resources_for_many_lines()
    while 1:
        # reads a line of input
        line = sys.stdin.readline()
        # ends if pipe closes
        if line == '':
            break
        # scores a line
        print score_a_line(line, resources)
        # prints right away instead of waiting
        sys.stdout.flush()

if __name__ == "__main__":
    run()
This looks fine to me. I often load up yaml or sqlite resources in my mappers.
You typically won't be running that many mappers in your job, so even if you spend a couple of seconds loading something from disk it's usually not a huge problem.
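For instance, a mapper that loads a sqlite lookup table once and then scores every stdin line against it might look roughly like this (the database path, table, and column names are made up for illustration):
#!/usr/bin/env python
import sys
import sqlite3

def load_resource():
    # opened once per mapper, before the input loop (path is an assumption)
    return sqlite3.connect("lookup.db")

def score_a_line(line, conn):
    key = line.strip()
    row = conn.execute("SELECT score FROM scores WHERE key = ?", (key,)).fetchone()
    return row[0] if row else 0

def run():
    conn = load_resource()
    for line in sys.stdin:
        print(score_a_line(line, conn))
        sys.stdout.flush()

if __name__ == "__main__":
    run()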
