Run bash script for each file in a folder - python

I found this script, but it needs an input filename and an output filename to work.
I'm a Windows user, so I don't know how to run this script for each file in a folder.
What I want is:
sourcefile=$1 -> this should be the input directory
destfile=$2 -> the output directory, or just originalfilename_preview
So when I execute the script, it should run through the files in the input directory and execute the two ffmpeg commands inside:
the first ffmpeg command splits the video into multiple files in a temp folder;
the second ffmpeg command merges those files from the temp folder and completes the whole process into the output folder (or originalfilename_preview);
-> then loop to the next file until completed.
sourcefile=$1
destfile=$2
# Overly simple validation
if [ ! -e "$sourcefile" ]; then
    echo 'Please provide an existing input file.'
    exit
fi
if [ "$destfile" == "" ]; then
    echo 'Please provide an output preview file name.'
    exit
fi
# Get video length in seconds (the file name is quoted so paths with
# spaces or unicode characters work)
length=$(ffprobe "$sourcefile" -show_format 2>&1 | sed -n 's/duration=//p' | awk '{print int($0)}')
# Start 20 seconds into the video to avoid opening credits (arbitrary)
starttimeseconds=20
# Mini-snippets will be 2 seconds in length
snippetlengthinseconds=2
# We'll aim for 5 snippets spread throughout the video
desiredsnippets=5
# Ensure the video is long enough to even bother previewing
minlength=$(($snippetlengthinseconds*$desiredsnippets))
# Video dimensions (these could probably be command line arguments)
dimensions=640:-1
# Temporary directory and text file where we'll store snippets
# These will be cleaned up and removed when the preview is generated
tempdir=snippets
listfile=list.txt
# Display and check video length
echo 'Video length: ' $length
if [ "$length" -lt "$minlength" ]
then
    echo 'Video is too short. Exiting.'
    exit
fi
# Loop and generate video snippets
mkdir "$tempdir"
interval=$(($length/$desiredsnippets-$starttimeseconds))
for i in $(seq 1 $desiredsnippets)
do
    # Format the second marks into hh:mm:ss format
    start=$(($(($i*$interval))+$starttimeseconds))
    formattedstart=$(printf "%02d:%02d:%02d\n" $(($start/3600)) $(($start%3600/60)) $(($start%60)))
    echo 'Generating preview part ' $i $formattedstart
    # Generate the snippet at the calculated time
    ffmpeg -i "$sourcefile" -vf scale=$dimensions -preset fast -qmin 1 -qmax 1 -ss $formattedstart -t $snippetlengthinseconds -threads $(nproc) "$tempdir/$i.mp4"
done
# Concat videos
echo 'Generating final preview file'
# Generate a text file with one snippet video location per line
# (https://trac.ffmpeg.org/wiki/Concatenate)
for f in "$tempdir"/*; do echo "file '$f'" >> "$listfile"; done
# Concatenate the files based on the generated list
ffmpeg -f concat -safe 0 -i "$listfile" -threads $(nproc) -an -tune zerolatency -x264opts bitrate=2000:vbv-maxrate=2000:vbv-bufsize=166 -vcodec libx264 -f mpegts -muxrate 2000K -y "$destfile.mp4"
echo "Done! Check $destfile.mp4!"
# Cleanup
rm -rf "$tempdir" "$listfile"
source: https://davidwalsh.name/video-preview
@Christopher Hoffman: WSL is already installed, of course; I already run this script without problems, but I have to enter the input/output file names manually:
./preview.sh input.mp4 out
@Renaud Pacalet: Yes, all files in the input directory, or drag&drop files (but all files in the directory seems easier). I think the script should be modified so that the output file has the suffix "_preview" in its name; if it has that suffix, writing to the same folder as the input is OK. They are video files (mkv, mp4, avi, ...). Some file names have unicode characters, so I think the input file name will need to be inside quotes.

The easiest is probably to keep the script as it is and to use a bash loop to process all files in the input directory. Let's assume:
the input directory is /my/video/files,
you want to store all outputs in directory /some/where,
the script you show is in /else/where/myscript.sh,
you want to process all files in the input directory.
Just open a terminal where bash is the interactive shell and type:
shopt -s nullglob
chmod +x /else/where/myscript.sh
mkdir -p /some/where
cd /my/video/files
for f in *; do
    /else/where/myscript.sh "$f" "/some/where/$f"
done
shopt -u nullglob
Explanations:
shopt -s nullglob enables the nullglob option. Without this, if there are no files at all in the input directory, there would still be one iteration of the loop with f=*. shopt -u nullglob disables it when we are done.
chmod +x /else/where/myscript.sh makes your script executable, just in case it was not already.
mkdir -p /some/where creates the output directory, just in case it did not exist yet.
cd /my/video/files changes the current directory to the input directory in which you have your video files.
for f in *; do loops over all files in the current directory (this is what the * stands for). In each iteration variable f is assigned the current file name.
/else/where/myscript.sh "$f" "/some/where/$f" executes your script with two parameters: the name of the input file and the name of the output file, both quoted with double quotes to prevent word splitting.
Note: if not all files are video files, you can be more specific:
for f in *.mkv *.mp4 *.avi; do
...
Of course, for easier reuse, you can also create a new shell script file with all this.
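For example, you could wrap all of it in a script of its own. A minimal sketch (the preview-all.sh name and the "_preview" suffix handling from the comments above are assumptions to adapt):
#!/bin/bash
# preview-all.sh -- run the preview script on every video in a directory
# (hypothetical wrapper; adjust the path to myscript.sh).
# Usage: ./preview-all.sh /my/video/files
shopt -s nullglob
cd "${1:-.}" || exit 1
for f in *.mkv *.mp4 *.avi; do
    # myscript.sh appends ".mp4" itself, so this produces
    # name_preview.mp4 next to the input file.
    /else/where/myscript.sh "$f" "${f%.*}_preview"
done
shopt -u nullglob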


I want to make my script work with redirection [duplicate]

The following Perl script (my.pl) can read from either the file in the command line arguments or from standard input (STDIN):
while (<>) {
    print($_);
}
perl my.pl will read from standard input, while perl my.pl a.txt will read from a.txt. This is very handy.
Is there an equivalent in Bash?
The following solution reads from a file if the script is called with a file name as the first parameter $1 and otherwise from standard input.
while read line
do
    echo "$line"
done < "${1:-/dev/stdin}"
The substitution ${1:-...} takes $1 if defined. Otherwise, the file name of the standard input of the script's own process is used.
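For example, if the loop above is saved as readit.sh (a hypothetical name), both invocations behave the same way:
$ printf 'hello\nworld\n' | ./readit.sh
hello
world
$ ./readit.sh notes.txt    # reads notes.txt instead of stdin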
Perhaps the simplest solution is to redirect standard input with a merging redirect operator:
#!/bin/bash
less <&0
Standard input is file descriptor zero. The above sends the input piped to your bash script into less's standard input.
Read more about file descriptor redirection.
Here is the simplest way:
#!/bin/sh
cat -
Usage:
$ echo test | sh my_script.sh
test
To assign stdin to the variable, you may use STDIN=$(cat -), or simply STDIN=$(cat), as the - operator is not necessary (as per @mklement0's comment).
To parse each line from the standard input, try the following script:
#!/bin/bash
while IFS= read -r line; do
    printf '%s\n' "$line"
done
To read from the file or stdin (if argument is not present), you can extend it to:
#!/bin/bash
file=${1--} # POSIX-compliant; ${1:--} can be used as well.
while IFS= read -r line; do
    printf '%s\n' "$line" # Or: env POSIXLY_CORRECT=1 echo "$line"
done < <(cat -- "$file")
Notes:
- read -r - Do not treat a backslash character in any special way. Consider each backslash to be part of the input line.
- Without setting IFS to empty, sequences of space and tab characters at the beginning and end of each line are trimmed.
- Use printf instead of echo to avoid printing empty lines when the line consists of a single -e, -n or -E. However, there is a workaround: env POSIXLY_CORRECT=1 echo "$line" executes the external GNU echo, which supports it. See: How do I echo "-e"?
See: How to read stdin when no arguments are passed? at stackoverflow SE
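A quick demonstration of the echo pitfall mentioned in the notes above:
$ line='-e'
$ echo "$line"          # bash's builtin echo treats -e as an option: prints an empty line

$ printf '%s\n' "$line"
-e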
I think this is the straightforward way:
$ cat reader.sh
#!/bin/bash
while read line; do
    echo "reading: ${line}"
done < /dev/stdin
--
$ cat writer.sh
#!/bin/bash
for i in {0..5}; do
    echo "line ${i}"
done
--
$ ./writer.sh | ./reader.sh
reading: line 0
reading: line 1
reading: line 2
reading: line 3
reading: line 4
reading: line 5
The echo solution adds new lines whenever IFS breaks the input stream. @fgm's answer can be modified a bit:
cat "${1:-/dev/stdin}" > "${2:-/dev/stdout}"
The Perl loop in the question reads from all the file name arguments on the command line, or from standard input if no files are specified. The answers I see all seem to process a single file or standard input if there is no file specified.
Although often derided accurately as UUOC (Useless Use of cat), there are times when cat is the best tool for the job, and it is arguable that this is one of them:
cat "$@" |
while read -r line
do
    echo "$line"
done
The only downside to this is that it creates a pipeline running in a sub-shell, so things like variable assignments in the while loop are not accessible outside the pipeline. The bash way around that is Process Substitution:
while read -r line
do
    echo "$line"
done < <(cat "$@")
This leaves the while loop running in the main shell, so variables set in the loop are accessible outside the loop.
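To see the difference, count lines both ways (a minimal sketch):
count=0
cat "$@" | while read -r line; do ((count++)); done
echo "$count"    # prints 0: the loop ran in a subshell

count=0
while read -r line; do ((count++)); done < <(cat "$@")
echo "$count"    # prints the real line count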
Perl's behavior, with the code given in the OP, is to take none or several arguments, and if an argument is a single hyphen -, this is understood as stdin. Moreover, it's always possible to have the filename with $ARGV.
None of the answers given so far really mimic Perl's behavior in these respects. Here's a pure Bash possibility. The trick is to use exec appropriately.
#!/bin/bash
(($#)) || set -- -
while (($#)); do
    { [[ $1 = - ]] || exec < "$1"; } &&
        while read -r; do
            printf '%s\n' "$REPLY"
        done
    shift
done
The filename is available in $1.
If no arguments are given, we artificially set - as the first positional parameter. We then loop on the parameters. If a parameter is not -, we redirect standard input from filename with exec. If this redirection succeeds we loop with a while loop. I'm using the standard REPLY variable, and in this case you don't need to reset IFS. If you want another name, you must reset IFS like so (unless, of course, you don't want that and know what you're doing):
while IFS= read -r line; do
    printf '%s\n' "$line"
done
More accurately...
while IFS= read -r line ; do
    printf "%s\n" "$line"
done < file
Please try the following code:
while IFS= read -r line; do
    echo "$line"
done < file
I combined all of the above answers and created a shell function that suits my needs. This is from Cygwin terminals on my two Windows 10 machines, which have a shared folder between them. I need to be able to handle the following:
cat file.cpp | tx
tx < file.cpp
tx file.cpp
When a specific filename is given, I need to use the same filename for the copy. When the input data stream has been piped through, I need to generate a temporary filename containing the hour, minutes, and seconds. The shared main folder has subfolders for the days of the week. This is for organizational purposes.
Behold, the ultimate script for my needs:
tx ()
{
    if [ $# -eq 0 ]; then
        local TMP=/tmp/tx.$(date +'%H%M%S')
        while IFS= read -r line; do
            echo "$line"
        done < /dev/stdin > $TMP
        cp $TMP //$OTHER/stargate/$(date +'%a')/
        rm -f $TMP
    else
        [ -r $1 ] && cp $1 //$OTHER/stargate/$(date +'%a')/ || echo "cannot read file"
    fi
}
If there is any way that you can see to further optimize this, I would like to know.
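One possible simplification, since cat already copies stdin to a file (an untested sketch using the same $OTHER variable and folder layout as the function above):
tx ()
{
    local dest="//$OTHER/stargate/$(date +'%a')/"
    if [ $# -eq 0 ]; then
        local TMP=/tmp/tx.$(date +'%H%M%S')
        # cat replaces the whole while-read loop
        cat > "$TMP" && cp "$TMP" "$dest" && rm -f "$TMP"
    else
        [ -r "$1" ] && cp "$1" "$dest" || echo "cannot read file"
    fi
}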
#!/usr/bin/bash
if [ -p /dev/stdin ]; then
    #for FILE in "$@" /dev/stdin
    for FILE in /dev/stdin
    do
        while IFS= read -r LINE
        do
            echo "$@" "$LINE" #print line argument and stdin
        done < "$FILE"
    done
else
    printf "[ -p /dev/stdin ] is false\n"
    #dosomething
fi
Running:
echo var var2 | bash std.sh
Result:
var var2
Running:
bash std.sh < <(cat /etc/passwd)
Result:
root:x:0:0::/root:/usr/bin/bash
bin:x:1:1::/:/usr/bin/nologin
daemon:x:2:2::/:/usr/bin/nologin
mail:x:8:12::/var/spool/mail:/usr/bin/nologin
Two principal ways:
Either pipe the argument files and stdin into a single stream and process that like stdin (stream approach)
Or redirect stdin (and argument files) into a named pipe and process that like a file (file approach)
Stream approach
Minor revisions to earlier answers:
Use cat, not less. It's faster and you don't need pagination.
Use $1 to read from first argument file (if present) or $* to read from all files (if present). If these variables are empty, read from stdin (like cat does)
#!/bin/bash
cat $* | ...
File approach
Writing into a named pipe is a bit more complicated, but this allows you to treat stdin (or files) like a single file:
Create pipe with mkfifo.
Parallelize the writing process. If the named pipe is not read from, it may block otherwise.
For redirecting stdin into a subprocess (as necessary in this case), use <&0 (unlike what others have been commenting, this is not optional here).
#!/bin/bash
mkfifo /tmp/myStream
cat $* <&0 > /tmp/myStream & # separate subprocess (!)
AddYourCommandHere /tmp/myStream # process input like a file,
rm /tmp/myStream # cleaning up
File approach: Variation
Create named pipe only if no arguments are given. This may be more stable for reading from files as named pipes can occasionally block.
#!/bin/bash
FILES=$*
if echo $FILES | egrep -v . >&/dev/null; then # if $FILES is empty
mkfifo /tmp/myStream
cat <&0 > /tmp/myStream &
FILES=/tmp/myStream
fi
AddYourCommandHere $FILES # do something ;)
if [ -e /tmp/myStream ]; then
rm /tmp/myStream
fi
Also, it allows you to iterate over files and stdin rather than concatenate all into a single stream:
for file in $FILES; do
    AddYourCommandHere $file
done
The following works with standard sh (tested with Dash on Debian) and is quite readable, but that's a matter of taste:
if [ -n "$1" ]; then
    cat "$1"
else
    cat
fi | commands_and_transformations
Details: If the first parameter is non-empty then cat that file, else cat standard input. Then the output of the whole if statement is processed by the commands_and_transformations.
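For instance, with wc -l standing in for the commands_and_transformations placeholder, this counts the lines of either a named file or stdin:
if [ -n "$1" ]; then
    cat "$1"
else
    cat
fi | wc -l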
The code ${1:-/dev/stdin} only handles the first argument, so you can use this:
ARGS='$*'
if [ -z "$*" ]; then
    ARGS='-'
fi
eval "cat -- $ARGS" | while read line
do
    echo "$line"
done
Reading from stdin into a variable or from a file into a variable.
Most examples in the existing answers use loops that immediately echo each line as it is read from stdin. This might not be what you really want to do.
In many cases you need to write a script that calls a command which only accepts a file argument. But in your script you may want to support stdin as well. In this case you need to read the full stdin first and then provide it as a file.
Let's see an example. The script below prints the certificate details of a certificate (in PEM format) that is passed either as a file or via stdin.
# cert-print script
content=""
while read line
do
    content="$content$line\n"
done < "${1:-/dev/stdin}"
# Remove the last newline appended in the above loop
content=${content%\\n}
# Keytool accepts a certificate only via a file, but in our script we work around this.
keytool -printcert -v -file <(echo -e $content)
# Read from file
cert-print mycert.crt
# Owner: CN=....
# Issuer: ....
# ....
# Or read from stdin (by pasting)
cert-print
#..paste the cert here and press enter
# Ctl-D
# Owner: CN=....
# Issuer: ....
# ....
# Or read from stdin by piping to another command (which just prints the cert(s) ). In this case we use openssl to fetch directly from a site and then print its info.
echo "" | openssl s_client -connect www.google.com:443 -prexit 2>/dev/null \
| sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p' \
| cert-print
# Owner: CN=....
# Issuer: ....
# ....
This one is easy to use on the terminal:
$ echo '1\n2\n3\n' | while read -r; do echo $REPLY; done
1
2
3
I don't find any of these answers acceptable. In particular, the accepted answer only handles the first command line parameter and ignores the rest. The Perl program that it is trying to emulate handles all the command line parameters. So the accepted answer doesn't even answer the question.
Other answers use Bash extensions, add unnecessary 'cat' commands, only work for the simple case of echoing input to output, or are just unnecessarily complicated.
However, I have to give them some credit, because they gave me some ideas. Here is the complete answer:
#!/bin/sh
if [ $# = 0 ]
then
    DEFAULT_INPUT_FILE=/dev/stdin
else
    DEFAULT_INPUT_FILE=
fi
# Iterates over all parameters or /dev/stdin
for FILE in "$@" $DEFAULT_INPUT_FILE
do
    while IFS= read -r LINE
    do
        # Do whatever you want with LINE here.
        echo $LINE
    done < "$FILE"
done
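A hypothetical session (assuming the script above is saved as script.sh; file names and contents are made up for illustration):
$ printf 'a\nb\n' > f1.txt
$ printf 'c\n' > f2.txt
$ ./script.sh f1.txt f2.txt
a
b
c
$ printf 'x\n' | ./script.sh
x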
As a workaround, you can use the stdin device in the /dev directory:
....| for item in `cat /dev/stdin` ; do echo $item ;done
With...
while read line
do
    echo "$line"
done < "${1:-/dev/stdin}"
I got the following output:
Ignored 1265 characters from standard input. Use "-stdin" or "-" to tell how to handle piped input.
Then I decided to use a for loop instead:
Lnl=$(cat file.txt | wc -l)
echo "Last line: $Lnl"
nl=1
for num in `seq $nl +1 $Lnl`;
do
    echo "Number line: $nl"
    line=$(cat file.txt | head -n $nl | tail -n 1)
    echo "Read line: $line"
    nl=$[$nl+1]
done
Use:
for line in `cat`; do
    something($line);
done

arranging text files side by side using python

I have 3000 text files in a directory, and each .txt file contains a single column of data. I want to arrange them side by side to make an m×n matrix file.
For example: paste 1.txt 2.txt 3.txt 4.txt ... 3000.txt in Linux.
For this I tried
printf "%s\n" *.txt | sort -n | xargs -d '\n' paste
However, it gives the error paste: filename.txt: Too many open files.
Please suggest a better solution for the same using Python.
Based on the inputs received, follow the steps below.
# Change ulimit to increase the no of open files at a time
$ ulimit -n 4096
# Remove blank lines from all the files
$ sed -i '/^[[:space:]]*$/d' *.txt
# Join all files side by side to form a matrix view
$ paste $(ls -v *.txt) > matrix.txt
# Fill the blank values in the matrix view with 0's using awk inplace
$ awk -i inplace 'BEGIN { FS = OFS = "\t" } { for(i=1; i<=NF; i++) if($i ~ /^ *$/) $i = 0 }; 1' matrix.txt
You don't need python for this; if you first increase the number of open files a process can have using ulimit, it becomes easy to get columns in the right order in bash, zsh, or ksh93 shells, using paste and brace expansion to generate the filenames in the desired order instead of having to sort the results of filename expansion:
% ulimit -n 4096
% paste {1..3000}.txt > matrix.txt
(I tested this in all three shells I mentioned on a Linux box, and it works with all of them with no errors about the command line being too long or anything else.)
You could also arrange to have the original files use a different naming scheme that sorts naturally, like 0001.txt, 0002.txt, ..., 3000.txt and then just paste [0-9]*.txt > matrix.txt.

Create stdout and stderr files with multiple commands and compress the output

Since I have many commands in a bash script, I would like to use the following formula to get stdout and stderr files:
{
...
commands
...
} 2>stderr.txt >stdout.txt
However, my code is a little bit more complicated than that.
Firstly, the bash file is run using session = Popen(['/home/claudio/programs/instruction.sh', variable1, variable2], stdout=PIPE, stderr=PIPE).
Then, within the commands, a variable ($folder) is created, which is needed to build the correct path for the final archive compression.
Since the stderr and stdout files must be stored within the final compressed folder, I thought the final code could look something like this:
#!/bin/bash
variable1=$1
variable2=$2
folder=$variable1$variable2
{
    ...
    commands
    ...
    folder=$variable1$variable2
    path=/home/claudio/test/$folder/final/proof
    mv *.jpeg $path
    ...
} 2>/home/claudio/test/$folder/final/proof/stderr.txt >/home/claudio/test/$folder/final/proof/stdout.txt
cd /home/claudio/test/ && tar -zcf $folder.tar.gz $folder
mv $folder.tar.gz '/home/claudio/newfiles/'
However it does not work: although the files are correctly created, the instructions within the brackets are not executed.
Is there an easy way to solve this issue using bash, or Python as well?
Thank you
EDIT:
As requested, here is a brief example:
#!/bin/bash
start=$1
name=$2
folder=$start$name
{
    start=$1
    name=$2
    folder=$start$name
    mkdir /home/claudio/Scrivania/$folder
    echo $PWD
    echo "sounds good"
} 2>/home/claudio/Scrivania/$folder/stderr.txt >/home/claudio/Scrivania/$folder/stdout.txt
cd /home/claudio/Scrivania
tar -zcf $folder.tar.gz $folder
output:
line 15: /home/claudio/Scrivania/23421claudio/stderr.txt: No such file or directory
{
    mkdir /home/claudio/Scrivania/$folder
} 2>/home/claudio/Scrivania/$folder/stderr.txt
I think the error is clear - there is no such path when creating the redirection. You seem to be creating the directory after creating the redirection. First create the directory, so it exists, then redirect stuff to a file inside it. A redirection will not automatically create the path to a file. Tested on repl.
mkdir ./"$folder" # yay create the folder
{
    stuff...
} 2>./"$folder"/stderr.txt # redirect to a file inside _existing_ folder
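Applied to the example from the question, that just means moving the mkdir above the brace group (a sketch of the reordering):
#!/bin/bash
start=$1
name=$2
folder=$start$name
mkdir -p /home/claudio/Scrivania/"$folder"    # create the folder first
{
    echo $PWD
    echo "sounds good"
} 2>/home/claudio/Scrivania/"$folder"/stderr.txt >/home/claudio/Scrivania/"$folder"/stdout.txt
cd /home/claudio/Scrivania
tar -zcf "$folder".tar.gz "$folder"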

Grouping and dividing files which contain numbers in them into separate folders

I want to move the files in groups of 30, in sequence starting from image_1, image_2, ..., from the current folder to a new folder.
The file name pattern is like below:
image_1.png
image_2.png
.
.
.
image_XXX.png
I want to move image_[1-30].png to folder fold30,
and image_[31-60].png to fold60, and so on.
I have the following code to do this and it works; I wanted to know whether there is any shortcut to do this,
or any smaller code that I can write for the same.
#!/bin/bash
counter=0
folvalue=30
totalFiles=$(ls -1 image_*.png | sort -V | wc -l)
foldernames=fold$folvalue
for file in $(ls -1 image_*.png | sort -V )
do
    ((counter++))
    mkdir -p $foldernames
    mv $file ./$foldernames/
    if [[ "$counter" -eq "$folvalue" ]];
    then
        let folvalue=folvalue+30
        foldernames="fold${folvalue}"
        echo $foldernames
    fi
done
The above code moves image_1, image_2, ..., image_30 into folder fold30, and image_31, ..., image_60 into folder fold60.
I really recommend using sed all the time. It's hard on the eyes, but once you get used to it you can do all these jarring tasks in no time.
What it does is simple. Running sed -e "s/regex/substitution/" <(cat file) goes through each line, replacing patterns matching regex with substitution.
With it you can just transform your input into commands and pipe it to bash.
If you want to know more, there's good documentation here (also not easy on the eyes, though).
Anyway here's the code:
while FILE_GROUP=$(find . -maxdepth 1 -name "image_*.png" | sort -V | head -30) && [ -n "$FILE_GROUP" ]
do
    FOLDER="${YOUR_PREFIX}$(sed -e "s/^.*image_//" -e "s/\.png//" <(echo "$FILE_GROUP" | tail -1))"
    mkdir -p $FOLDER
    sed -e "s/\.\///" -e "s|.*|mv & $FOLDER|" <(echo "$FILE_GROUP") | bash
done
And here's what it should do:
- the while loop grabs the first 30 files.
- take the number out of the last of those files and use it to name the directory
- mkdir FOLDER
- go through each line and turn $FILE into mv $FILE $FOLDER, then execute those lines (pipe to bash)
note: replace $YOUR_PREFIX with your folder prefix
EDIT: surprisingly, the code did not work out of the box (who would have thought...), but I've done some fixing and testing and it should work now.
The simplest way to do that is with rename, a.k.a. Perl rename. It will:
let you run any amount of code of arbitrary complexity to figure out a new name,
let you do a dry run telling you what it would do without doing anything,
warn you if any files would be overwritten,
automatically create intermediate directory hierarchies.
So the command you want is:
rename -n -p -e '(my $num = $_) =~ s/\D//g; $_ = ($num+29)-(($num-1)%30) . "/" . $_' *png
Sample Output
'image_1.png' would be renamed to '30/image_1.png'
'image_10.png' would be renamed to '30/image_10.png'
'image_100.png' would be renamed to '120/image_100.png'
'image_101.png' would be renamed to '120/image_101.png'
'image_102.png' would be renamed to '120/image_102.png'
'image_103.png' would be renamed to '120/image_103.png'
'image_104.png' would be renamed to '120/image_104.png'
...
...
If that looks correct, you can run it again without the -n switch to do it for real.
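For comparison, if you would rather stay with plain bash, the original loop can also be shortened with a little arithmetic (a sketch under the same image_N.png naming assumption; like the original script, it relies on ls -1 | sort -V ordering and file names without spaces):
i=0
for f in $(ls -1 image_*.png | sort -V); do
    d=fold$(( (i / 30 + 1) * 30 ))   # files 1-30 -> fold30, 31-60 -> fold60, ...
    mkdir -p "$d" && mv "$f" "$d"/
    ((i++))
done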

How to call a python script from bash with arguments?

I am going round and round in circles here and there is probably a very simple answer.
I have a bash script that loops over the video files on a thumbdrive and, if it finds the .mp4 extension, uploads them to YouTube using YouTube's Python example - which works from the command line.
I'm having trouble passing the arguments to the python script call in the bash script:
cd /home/pi/Documents
echo $PWD
for _file in /media/pi/*/*.mp4; do
    clean=$(echo $_file|wc -m)
    if [ $clean -gt 15 ]; then
        did=$(grep "$_file" /home/pi/Documents/uploaded_videos.txt | wc -l)
        if [ $did = 0 ]; then
            #Edit this to your liking
            #echo "now uploading $_file"
            args="--file=\"${_file}\" –-title=\"${_file}\" –-description=\"show 2018\" -–keywords=\"2018,show, Winter,show 2018,\" -–category=\"28\" -–privacyStatus=\"private\""
            python yt_up.py "$args"
            echo $_file uploaded to youtube via google developer api | logger
            echo $_file >> /home/pi/Documents/uploaded_videos.txt
        fi
    fi
done
But the arguments are not recognised by the yt_up.py script. I've tried various combinations of quotes but I just can't get it to work.
I can pass one argument to the script:
python yt_up.py --file="$_file" works,
but additional arguments are not recognised.
Lots of antipatterns and misconceptions in your current code!
cd /home/pi/Documents
For best practice you should test whether this succeeded. Probably not a problem currently, but it doesn't hurt to do so.
echo $PWD
Missing quotes here, but that's not fatal.
for _file in /media/pi/*/*.mp4; do
clean=$(echo $_file|wc -m)
That's not how to count the number of characters in a string. You should use clean=${#_file} instead.
if [ $clean -gt 15 ]; then
Here I guess you want to know whether the glob matched anything. That's not how to proceed. You either want to use the shopt -s nullglob option of Bash, or use if [ -e "$_file" ]; then to check whether the glob matched an actual file.
did=$(grep "$_file" /home/pi/Documents/uploaded_videos.txt | wc -l)
if [ $did = 0 ]; then
That's not the way to check whether a file does not contain a string. Use if ! grep -q "$_file" /home/pi/Documents/uploaded_videos.txt; then instead.
#Edit this to your liking
#echo "now uploading $_file"
args="--file=\"${_file}\" –-title=\"${_file}\" –-description=\"show 2018\" -–keywords=\"2018,show, Winter,show 2018,\" -–category=\"28\" -–privacyStatus=\"private\""
Here you have misconceptions about how the shell reads a command. Quote removal is performed before variable expansion, so your quotes are wrong. Typically you want to use an array! That's what Bash arrays are for! Also note that you have some weird hyphens here.
python yt_up.py "$args"
echo $_file uploaded to youtube via google developer api | logger
echo $_file >> /home/pi/Documents/uploaded_videos.txt
fi
fi
done
Here's a possibility to fix your mistakes (and hopefully make it work):
#!/bin/bash
# We define a variable with the path to the uploaded_videos.txt file
uploaded_videos=/home/pi/Documents/uploaded_videos.txt
# Will expand non-matching globs to nothing:
shopt -s nullglob
# cd into the directory, and exit if fails
cd /home/pi/Documents || exit
# Use the builtin pwd instead of echo "$PWD"
pwd
for file in /media/pi/*/*.mp4; do
    if ! grep -qFx "$file" "$uploaded_videos"; then
        # Define an array to contain the arguments to pass
        args=(
            --file="$file"
            --title="$file"
            --description="show 2018"
            --keywords="2018,show, Winter,show 2018,"
            --category="28"
            --privacyStatus="private"
        )
        # call your script with the fields of the array as arguments
        python yt_up.py "${args[@]}"
        echo "$file uploaded to youtube via google developer api" | logger
        echo "$file" >> "$uploaded_videos"
    fi
done
You could improve the final step by explicitly checking whether yt_up.py succeeded:
if python yt_up.py "${args[@]}"; then
    echo "$file uploaded to youtube via google developer api" | logger
    echo "$file" >> "$uploaded_videos"
else
    echo "Something wrong happened!"
fi
I think you just have to add your arguments on the same line as the python command, like this:
python yt_up.py --file="${_file}" --title="${_file}" --description="show 2018" --keywords="2018,show, Winter,show 2018," --category="28" --privacyStatus="private"
It works.
Your args variable arrives in Python as one large sys.argv[1], so maybe that is why you are having trouble.
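A quick shell demonstration of the difference (hypothetical argument values):
args='--file="a.mp4" --title="a.mp4"'
set -- "$args"
echo $#    # 1: the whole string arrives as a single sys.argv element

set -- --file="a.mp4" --title="a.mp4"
echo $#    # 2: separate arguments, as yt_up.py expects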
Rewrite shell like this:
cd /home/pi/Documents
echo $PWD
for _file in /media/pi/*/*.mp4; do
    clean=$(echo $_file|wc -m)
    if [ $clean -gt 15 ]; then
        did=$(grep "$_file" /home/pi/Documents/uploaded_videos.txt | wc -l)
        if [ $did = 0 ]; then
            #Edit this to your liking
            #echo "now uploading $_file"
            args=( $(echo "--file=\"${_file}\" –-title=\"${_file}\" –-description=\"show 2018\" -–keywords=\"2018,show, Winter,show 2018,\" -–category=\"28\" -–privacyStatus=\"private\"") )
            python yt_up.py "${args[@]}"
            echo $_file uploaded to youtube via google developer api | logger
            echo $_file >> /home/pi/Documents/uploaded_videos.txt
        fi
    fi
done
Now args is an array, and each of its elements will be read by Python as a separate sys.argv[i] element.
