I am trying to write a script that automatically takes duration values and chops an audio file into smaller splits. To do this I have saved all my start times and durations in two lists. This is the code I am trying to run:
for k in range(0, len(start_time)):
    s = start_time[k]
    e = Duration[k]
    filename = "output%d.mp3" % (k)
    !ffmpeg -i Audio.mp3 -ss s -t e -acodec copy filename
    k = k + 1
On running this I get the following error:
Invalid duration specification for ss: s
I suspect this error arises because I am pulling the timestamps out of lists, so each value comes with quotes on both sides. Secondly, I am not sure how to specify the filename so that each created split has a name of the format Output_1.mp3 and so on, with the integer identifying the split. What could be possible fixes for this piece of code? Please note that I am running this on Google Colab.
Google Colab runs a Jupyter notebook powered by IPython in its cloud. IPython uses special syntax for shell invocation (commands starting with an exclamation mark !): such commands are executed in a (temporary) shell session. In the case of Google Colab that shell is bash:
res = !echo $SHELL
print(res)
> ['/bin/bash']
As I checked, ffmpeg is indeed available on Google Colab:
res = !which ffmpeg
print(res)
> ['/usr/bin/ffmpeg']
So the error message "Invalid duration specification ..." is legitimately printed by ffmpeg itself (not by the shell or by Python), and it simply means that the variable you are passing is not substituted with its value the way you wrote it. That happens because the special IPython syntax for passing variables to the shell is not being followed: variables must be wrapped in curly braces to be substituted:
!ffmpeg -i Audio.mp3 -ss {s} -t {e} -acodec copy {filename}
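Putting it together, a minimal sketch of the whole loop in a Colab cell might look like this (an illustration, assuming start_time and Duration are parallel lists holding values ffmpeg accepts, e.g. seconds or HH:MM:SS strings):

# Each iteration cuts one segment of Audio.mp3 into its own file.
for k in range(len(start_time)):
    s = start_time[k]
    e = Duration[k]
    filename = "Output_%d.mp3" % k   # Output_0.mp3, Output_1.mp3, ...
    # Curly braces make IPython substitute the Python variables into the shell command.
    !ffmpeg -i Audio.mp3 -ss {s} -t {e} -acodec copy {filename}

There is also no need to increment k manually; range already does that.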
ffmpeg.output(audioo, videoo, name).global_args("-f mp4").run()
I have this code, which takes in the audio and video and gives the output file a name. Then I have .global_args, which takes in the arguments, but when I try to run the script it says:
Unrecognized option 'f mp4'.
Error splitting the argument list: Option not found
But if I use a flag like -y, which looks like this
ffmpeg.output(audioo, videoo, names).global_args("-y").run()
it works, but then the name string needs to be something like 'output.mp4'. I want to use the -f flag so the name could be just 'output' and the format is decided by the flag.
You may think it's useless but if it's needed I could explain my reasons further.
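A sketch of one possible fix, assuming ffmpeg-python forwards output keyword arguments verbatim as -key value options placed just before the output name: the string "-f mp4" currently reaches ffmpeg as a single token, so pass the format as an output keyword argument instead, and the file can then be named just 'output'.

# audioo and videoo are the streams from the question.
ffmpeg.output(audioo, videoo, 'output', f='mp4').global_args('-y').run()

If global_args is used, each token must be a separate argument, e.g. global_args('-f', 'mp4') rather than global_args('-f mp4').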
I am running a Python script which takes the dump of CSVs from a Postgres database and then I want to escape double quotes in all these files. So I am using sed to do so.
In my Python code:
sed_for_quotes = 'sed -i s/\\"//g /home/ubuntu/PTOR/csvdata1/'+table+'.csv'
subprocess.call(sed_for_quotes, shell=True)
The process completes without any error, but when I load these tables into Redshift, I get the error No delimiter found. Upon checking the CSV, I find that one of the rows is only half-loaded; for example, if it is a timestamp column, then only half of it is loaded and there is no data after that in the table (while the actual CSV had that data before running sed). That leads to the No delimiter found error.
But when I run sed -i s/\"//g filename.csv on these files in the shell, it works fine and the CSV after running sed has all the rows. I have checked that there is no problem with the data in the files.
What is the reason for this not working in a Python program ? I have also tried using sed -i.bak in the python program but that makes no difference.
Please note that I am using an extra backslash (\) in the Python code because I need to escape the other backslash.
Other approaches tried:
- Using subprocess.Popen without any buffer size and with a positive buffer size, but that didn't help.
- Using subprocess.Popen(sed_for_quotes, bufsize=-4096) (negative buffer size), which worked for one of the files that was giving the error, but then I encountered the same problem in another file.
Do not use an intermediate shell when you do not need to. And check the return code of the subprocess to make sure it completed successfully (check_call does this for you):
path_to_file = ... # e.g. '/home/ubuntu/PTOR/csvdata1/' + table + '.csv'
subprocess.check_call(['sed', '-i', 's/"//g', path_to_file])
By "intermediate" shell I mean the shell process run by subprocess that parses the command (± splits by whitespace but not only) and runs it (runs sed in this example). Since you precisely know what arguments sed should be invoked with, you do not need all this and it's best to avoid that.
Put your sed into a shell script, e.g.
#!/bin/bash
# Parameter $1 = Filename
sed -i 's/"//g' "$1"
Call your shell script using subprocess:
sed_for_quotes = 'my_sed_script /home/ubuntu/PTOR/csvdata1/'+table+'.csv'
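The call itself could then look like this (a sketch, assuming my_sed_script is executable and on the PATH, or referenced by its full path):

import subprocess

sed_for_quotes = 'my_sed_script /home/ubuntu/PTOR/csvdata1/' + table + '.csv'
# The script takes the filename as $1 and runs sed on it.
subprocess.call(sed_for_quotes, shell=True)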
Use shlex.split (docs.python.org, Python 3.6):
shlex.split(s, comments=False, posix=True)
Split the string s using shell-like syntax.
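For example, shlex.split can turn the original command string into the argument list that subprocess expects, so no shell is needed at all (a sketch based on the command from the question):

import shlex
import subprocess

sed_for_quotes = 'sed -i s/\\"//g /home/ubuntu/PTOR/csvdata1/' + table + '.csv'
subprocess.check_call(shlex.split(sed_for_quotes))

Note that shlex's POSIX-style splitting also processes the backslash escape, so sed receives s/"//g exactly as it would from bash.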
I'm creating an archive in Python using this code:
#Creates archive using string like [proxy_16-08-15_08.57.07.tar]
proxyArchiveLabel = 'proxy_%s' % EXECUTION_START_TIME + '.tar'
log.info('Packaging %s ...' % proxyArchiveLabel)
#Removes .tar from label during creation
shutil.make_archive(proxyArchiveLabel.rsplit('.',1)[0], 'tar', verbose=True)
So this creates an archive fine in the local directory. The problem is, there is a specific directory in my archive I want to remove, due to its size and lack of necessity for this task. So I am running:
ExecWithLogging('tar -vf %s --delete ./roles/jobs/*' % proxyArchiveLabel)
# ------------
def ExecWithLogging(cmd):
    print cmd
    p = subprocess.Popen(cmd.split(' '), env=os.environ, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        log.info(p.stdout.readline().strip())
        if p.poll() is not None:
            break
However, this seems to do basically nothing. The size remains the same. If I print cmd inside ExecWithLogging and copy/paste that command into a terminal in the working directory of the script, it works fine. Just to be sure, I also tried hard-coding the full path to where the archive is created as part of the tar -vf %s --delete command, but still nothing seemed to happen.
I do get this output in my INFO log: tar: Pattern matching characters used in file names, so I'm kind of thinking Popen is interpreting my command incorrectly somehow... (or rather, I'm more likely passing in something incorrectly).
Am I doing something wrong? What else can I try?
You may have to use the --wildcards option in the tar command, which enables pattern matching. This may well be what you are seeing in your log, albeit somewhat cryptically.
Edit:
In answer to your question "Why?": I suspect that the shell performs the wildcard expansion when you run the command in a terminal, whilst the command passed through Popen gets no such expansion. The --wildcards option forces tar to perform the pattern matching itself.
For a more detailed explanation see here:
Tar and wildcards
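A sketch of how that might look with the helper from the question, simply adding --wildcards before the pattern (untested, assuming GNU tar):

ExecWithLogging('tar -vf %s --wildcards --delete ./roles/jobs/*' % proxyArchiveLabel)

Because Popen splits the string on spaces and no shell is involved, the literal pattern ./roles/jobs/* reaches tar unexpanded, and --wildcards tells tar to match it against the archive members itself.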
I've been searching for how to do this without any success. I've inherited a python script for performing an hourly backup on our database. The original script is too slow, so I'm trying a faster utility. My new command would look like this if typed into a shell:
pg_basebackup -h 127.0.0.1 -F t -X f -c fast -z -D - --username=postgres > db.backup.tgz
The problem is that the original script uses call(cmd), and it fails if the above is the cmd string. I've been looking for how to modify this to use Popen, but cannot find any examples where a file-creating redirect (>) is used. pg_basebackup as shown will output to stdout. The only way I've succeeded so far is to change -D - to -D some.file.tgz and then move the file to the archive, but I'd rather do this in one step.
Any ideas?
Jay
Maybe like this?
with open("db.backup.tgz","a") as stdout:
p = subprocess.Popen(cmd_without_redirector, stdout=stdout, stderr=stdout, shell=True)
p.wait()
Hmmm... The pg_basebackup executable must be able to attach to that file. If I open the file in the manner you suggest, I don't know the correct syntax in Python to connect the command to it. If I try putting either " > " or " >> " in the string passed to call(), Python chokes on it. That's my real problem that I'm not finding any guidance on.
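To spell it out: with this approach the > never appears in the command string at all; the redirection is replaced by the stdout= argument. A minimal sketch with the exact options from the question (untested):

import subprocess

cmd = ['pg_basebackup', '-h', '127.0.0.1', '-F', 't', '-X', 'f',
       '-c', 'fast', '-z', '-D', '-', '--username=postgres']

with open('db.backup.tgz', 'wb') as out:
    # pg_basebackup writes the tarball to stdout (because of -D -),
    # and subprocess sends that stdout straight into the file.
    subprocess.check_call(cmd, stdout=out)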
I am developing a program which needs to use os.system because of the limitations of the old Python version I am stuck with. Currently I'm stuck on one small spot.
os.system("C:\\FIOCheck\\xutil.exe -i get phy" +HBEA + ">C:\\FIOCheck\\HBEAResult.txt")
This is the piece of code I am trying to work through. It runs an external program with some parameters; HBEA is the variable I am trying to pass (it is obtained earlier in the program). The command then pipes whatever the .exe produces into an external file. At this point, the variable HBEA is not being passed to the command line, so the .exe never runs, which leaves the .txt blank. Since the file is blank, I cannot grab data from it and therefore cannot complete the program.
Any ideas?
EDIT:
So I attempted the following code per some suggestions:
cmd = "C:\\FIOCheck\\xutil.exe -i get phy " +HBEA + ">C:\\FIOCheck\\HBEAResult.txt"
print cmd
os.system(cmd)
The following output was generated:
50012BE00004BDFF #HBEA variable
C:\FIOCheck\xutil.exe -i get phy 50012BE00004BDFF>C:\FIOCheck\HBEAResult.txt #the cmd var
However this still isn't passing the value through. Is the HBEA variable too long?
SOLVED
This program worked with some editing from the best answer. The commands were being passed correctly; however, the way I formatted them was not correct. The new formatting looks like:
cmd = "C:\\FIOCheck\\xutil.exe -i " + HBEA + " get ver" + ">C:\\FIOCheck\\HBEAResult.txt"
os.system(cmd)
Thanks for the help!
os.system("C:\\FIOCheck\\xutil.exe -i get phy" +HBEA + ">C:\\FIOCheck\\HBEAResult.txt")
Should that be (note the space after phy)
os.system("C:\\FIOCheck\\xutil.exe -i get phy " +HBEA + ">C:\\FIOCheck\\HBEAResult.txt")
And you can always build the string first:
cmd = "C:\\FIOCheck\\xutil.exe -i get phy " +HBEA + ">C:\\FIOCheck\\HBEAResult.txt"
print cmd
os.system(cmd)
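A slightly more readable way to build the same string is %-formatting, which also makes the space before the variable harder to miss (a sketch, same paths as the question):

cmd = "C:\\FIOCheck\\xutil.exe -i get phy %s > C:\\FIOCheck\\HBEAResult.txt" % HBEA
print cmd
os.system(cmd)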