I fear that my question is a duplicate but I can't find the answer. Maybe you can help me?
I would like to restart my Kivy program whenever I save the kv or py file.
I tried with
inotifywait -mq -e close_write /home/name/kivy/ | while read FILE
do
    pkill python
    python /home/name/kivy/main.py
done
If I change a file the first time, main.py starts, but if I change it again I need to close the program by hand before it restarts.
Instead of pkill python I also tried to use
kill $(ps aux | pgrep '[p]ython' | awk '{print $2}')
but with the same result, plus the problem that mintMenu.py gets killed, too.
Should I use something totally different from inotify?
I'm using entr to achieve the same thing. Once installed (e.g. via brew), just run the following command in your work directory /home/name/kivy/:
find . -name "*.py" -or -name "*.kv" | entr sh -c "pkill -f '^python main.py'; python main.py &"
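If you would rather not manage the pkill yourself, entr can also restart the process for you via its -r flag, which terminates the previous run whenever a watched file changes. A minimal sketch using the same file list:
find . -name "*.py" -or -name "*.kv" | entr -r python main.py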
I'm a brand new noob in the Python universe, so don't judge me too fast :-)
I'm trying to force a Python script to reload or restart at the beginning of a bash script.
I've tried:
pkill -f myscript.py
and
killall myscript.py
and others...
Actually, I would like to re-run the same script, which plays .wav files, after having changed those .wav files... If I don't reload or restart the script, it keeps playing the old files.
Maybe there are other solutions.
Here is the script I want to reload (it's a button script playing music for my daughter)
#!/usr/bin/env python3
import pygame
from gpiozero import LED, Button
from signal import pause

pygame.init()

button_sounds = {Button(2): pygame.mixer.Sound("/home/pi/gpio-music-box/samples/1.wav"),
                 Button(3): pygame.mixer.Sound("/home/pi/gpio-music-box/samples/2.wav"),
                 Button(4): pygame.mixer.Sound("/home/pi/gpio-music-box/samples/3.wav"),
                 Button(17): pygame.mixer.Sound("/home/pi/gpio-music-box/samples/4.wav")}

for button, sound in button_sounds.items():
    button.when_pressed = sound.play

pause()
And here is my bash script:
#!/bin/bash
***HERE THE COMMAND I NEED !***
rm -r /home/pi/gpio-music-box/samples/*
cp -r /home/pi/gpio-music-box/comptines/* /home/pi/gpio-music-box/samples/
/home/pi/gpio-music-box/music.py
Thank you very much, and excuse my English, I'm French :-)
Andy
Try this:
#!/bin/bash
pid=$(ps auxwww | grep nameOfScript.py | grep -v grep | awk '{print $2}')
kill -9 $pid
rm -r /home/pi/gpio-music-box/samples/*
cp -r /home/pi/gpio-music-box/comptines/* /home/pi/gpio-music-box/samples/
nohup /home/pi/gpio-music-box/music.py &
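A shorter variant of the same idea, assuming pkill is available on the Pi, is to match the script by name and simply ignore the case where nothing is running yet:
#!/bin/bash
pkill -f music.py || true   # stop any running copy; "|| true" keeps going on the very first run
rm -r /home/pi/gpio-music-box/samples/*
cp -r /home/pi/gpio-music-box/comptines/* /home/pi/gpio-music-box/samples/
nohup /home/pi/gpio-music-box/music.py &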
Have a nice day
Firstly, you can reduce a lot of the "noise" from ps by using output formatting. You can also avoid needing both grep and awk by letting awk do the searching as well.
ps -eo "%p %a" | awk '/nameOfScript.py/ && $1 != PROCINFO["pid"] { print "kill -9 "$1 }'
This forces ps to only print the pid (%p) and the full command (%a). The output is then piped to awk, where it searches for lines containing the name of the script. It discards any entry whose process id is that of the awk process itself, and then prints a kill command with the relevant process id.
Once you have verified that the kill command displays as expected, you can use awk's system function to actually run the command through:
ps -eo "%p %a" | awk '/nameOfScript.py/ && $1 != PROCINFO["pid"] { system("kill -9 "$1) }'
I am trying to automate my task through the terminal using bash. I have a python script that takes two parameters (paths of input and output) and then the script runs and saves the output in a text file.
All the input directories follow a pattern that starts with "g-", whereas the output directory stays the same.
So, I want to write a script that could run on its own so that I don't have to manually run it on hundreds of directories.
$ python3 program.py ../g-changing-directory/ ~/static-directory/ > ~/static-directory/final/results.txt
You can do it like this:
find .. -maxdepth 1 -type d -name "g-*" | xargs -n1 -P1 -I{} python3 program.py {} ~/static-directory/ >> ~/static-directory/final/results.txt
find .. will look in the parent directory; -maxdepth 1 will stay on the top level and not descend into subdirectories; -type d only takes directories; -name "g-*" takes entries starting with g- (use -iname "g-*" if you want entries starting with g- or G-).
We pipe it to xargs which will apply the input from stdin to the command specified. -n1 tells it to start a process per input word, -P1 tells it to only run one process at a time, -I{} tells it to replace {} with the input in the command.
Then we specify the command to run for each input, where {} is replaced by xargs: python3 program.py {} ~/static-directory/ >> ~/static-directory/final/results.txt. Have a look at the >>: this appends to the file if it exists, while > would overwrite it.
With -P4 you could start four processes in parallel. But you do not want to do that here, as all runs write into one file and multi-processing can mess up your output file. If every process wrote into its own file, you could parallelize safely, as shown below.
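For example, a sketch along those lines (the per-directory result file names are illustrative): give every input directory its own result file and -P4 becomes safe:
# each input directory gets its own results-<dirname>.txt, so parallel runs cannot interleave
find .. -maxdepth 1 -type d -name "g-*" |
  xargs -n1 -P4 -I{} sh -c 'python3 program.py "$1" ~/static-directory/ > ~/static-directory/final/results-$(basename "$1").txt' _ {}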
Refer to man find and man xargs for further details.
There are many other ways to do this, as well. E.g. for loops like this:
for F in $(ls .. | grep -oP "g-.*"); do
    python3 program.py "../$F" ~/static-directory/ >> ~/static-directory/final/results.txt
done
There are many ways to do this, here's what I would write:
find .. -type d -name "g-*" -exec python3 program.py {} ~/static-directory/ \; > ~/static-directory/final/results.txt
You haven't mentioned whether you want nested directories to be included; if the answer is no, then you have to add the -maxdepth parameter as in @toydarian's answer.
I am trying to execute a bash script from Python code. The bash script has some grep commands in a pipe inside a for loop. When I run the bash script by itself it gives no errors, but when I call it from the Python code it says: grep: write error.
The command that I call in python is:
subprocess.call("./change_names.sh",shell=True)
The bash script is:
#!/usr/bin/env bash
for file in *.bam;do new_file=`samtools view -h $file | grep -P '\tSM:' | head -n 1 | sed 's/.\+SM:\(.\+\)/\1/' | sed 's/\t.\+//'`;rename s/$file/$new_file.bam/ $file;done
What am I missing?
You should not use shell=True when you are running a simple command which doesn't require the shell for anything in the command line.
subprocess.call(["./change_names.sh"])
There are multiple problems in the shell script. Here is a commented refactoring.
#!/usr/bin/env bash
for file in *.bam; do
    # Use modern command substitution syntax; fix quoting
    new_file=$(samtools view -h "$file" |
        grep -P '\tSM:' |
        # refactor to a single sed script
        sed -n 's/.\+SM:\([^\t]\+\).*/\1/p;q')
    # Fix quoting some more; don't use rename
    mv "$file" "$new_file.bam"
done
grep -P doesn't seem to be necessary or useful here, but without an example of what the input looks like, I'm hesitant to refactor that into the sed script too. I hope I have guessed correctly what your sed version does with the \+ and \t escapes which aren't entirely portable.
This will still produce a warning that you are not reading all of the output from grep in some circumstances. A better solution is probably to refactor even more of this into your Python script.
import glob
import os
import subprocess

for file in glob.glob('*.bam'):
    header = subprocess.check_output(['samtools', 'view', '-h', file]).decode()
    for line in header.split('\n'):
        if '\tSM:' in line:
            dest = line.split('SM:')[-1].split('\t')[0] + '.bam'
            os.rename(file, dest)
            break
Hi, try the modification below, which should fix your issue:
for file in *.bam;do new_file=`unbuffer samtools view -h $file | grep -P '\tSM:' | head -n 1 | sed 's/.\+SM:\(.\+\)/\1/' | sed 's/\t.\+//'`;rename s/$file/$new_file.bam/ $file;done
Or else try redirecting standard error to /dev/null, like below:
for file in *.bam;do new_file=`samtools view -h $file 2>/dev/null | grep -P '\tSM:' | head -n 1 | sed 's/.\+SM:\(.\+\)/\1/' | sed 's/\t.\+//'`;rename s/$file/$new_file.bam/ $file;done
Your actual issue is with the command samtools view -h $file. While you are running the script from Python, you should provide the full path, like below:
/fullpath/samtools view -h $file
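You can check what that full path actually is on your system first (the location printed below is only an example):
command -v samtools    # prints e.g. /usr/local/bin/samtools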
I am doing two "similar" things:
(1) in python
import os
os.system('cat ...input | awk -f ...awk -v seed=$RANDOM')
(2) in linux terminal
cat ...input | awk -f ...awk -v seed=$RANDOM
Actually, my awk file returns a randomized input file, but if I run way (1) many times, the result is always the same (only one result). If I run way (2), I get a differently randomized file every time. What's wrong?
If I want to run this command in python, how should I do then?
Thank you so much for your answer.
EDIT:
Adding the actual code:
(1) in python
import os
os.system("cat data/MD-00001-00000100.input | awk -f utils/add_random_real_weights.awk -v seed=$RANDOM")
(2) in linux:
cat data/MD-00001-00000100.input | awk -f utils/add_random_real_weights.awk -v seed=$RANDOM
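A likely cause: os.system() hands the command to /bin/sh, and on many Linux distributions /bin/sh is dash, which does not implement $RANDOM, so awk receives an empty (and therefore always identical) seed. A quick check, with illustrative output:
sh -c 'echo "sh sees:   $RANDOM"'     # often prints nothing after the colon
bash -c 'echo "bash sees: $RANDOM"'   # prints a different number on every run
If that turns out to be the case, either generate the seed in Python and substitute it into the command string yourself, or run the pipeline through bash -c explicitly.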
I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works, but it's annoying to do all these steps.
I've read some methods in which you need to save the PID in some file, but that's even more hassle.
Is there a clean method to easily start / stop a Python script? like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note: I'm looking for a one- or two-line solution; I don't want to have to write a dedicated systemd service for this operation.
As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, slightly different because it actually removes the job from the shell's job list, preventing the SIGHUP from being sent.
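For example, a sketch of that route (same redirection as above):
python -u myscript.py > ./mylog.log 2>&1 &
disown    # with no arguments this acts on the most recent background job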
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
    [[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
    nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}

mykill() {
    ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
    echo "process "$1" killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you asked:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
mykill myscript.py
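If pkill is available on your system, mykill can also be reduced to a one-liner sketch that matches the pattern against the full command line:
mykill() {
    pkill -f "$1" && echo "process $1 killed"
}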
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop
Do you mean log in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open up tmux with tmux, type in your command, press CONTROL+B then D to 'detach' from tmux, and then type exit at the main terminal to log out. When you log back in, tmux and the processes running in it will still be running.
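A minimal session might look like this (the session name is just an example):
tmux new -s myscript      # start a named session
python -u myscript.py     # run the script inside it, then press CONTROL+B, then D, to detach
tmux attach -t myscript   # later, even after logging back in: reattach to check on it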