I am trying to create a bash script that runs a python3 script with pdb trace set. As such, in my bash script I have the following lines:
python3 path/to/my_script.py
n
What I want to happen is for bash to run the python script, which will then open the python debugger. Then, the bash script will send the key 'n' to the pdb shell so that pdb executes the first line in the python script.
The script does not work as expected: bash waits until the python script has completed (or exited) before executing 'n' on the command line, which just opens node.
I thought this might be a problem unique to pdb shells so I tried executing the following in bash:
python3
print("Hello")
However, again, we observe that the script creates a python3 shell and then waits for the shell to exit before executing print("Hello") in the terminal. I understand that I could use python3 -c for this case, but that does not address the case of passing commands to the pdb shell in the context of the running script.
Is there any way to send the 'n' command to the pdb shell that the python script generates?
Your code will try to run two commands. First, it will run your python script; then it will try to run a command called n. Assuming your script reads from stdin, you can do one of the following:
Use a herestring:
python3 path/to/my_script.py <<< "n"
Use a pipeline:
echo "n" | python3 path/to/my_script.py
Echo is not the only command you can use. You can also use printf or even yes for this use case.
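If you need to send more than one debugger command, a here-document works the same way as the herestring; a minimal sketch (the particular pdb commands are up to you):
python3 path/to/my_script.py <<'EOF'
n
n
c
EOF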
You can use a coprocess to send and receive from pdb.
#!/bin/bash

# send one line to the coprocess's stdin
send() {
    echo "$1" >&${PDB[1]}
}

# read one line back from the coprocess's stdout and print it
recv() {
    IFS= read -r -u ${PDB[0]} line
    echo "$line"
}

coproc PDB { python3 /path/to/my_script.py; }

send 'n'
recv
#...
send 'n'
recv
I'm using an sh file to launch some other commands from a jupyterhub_config.py file, which basically recreates a pod for a user.
The issue seems to be in this sh file, which I have refactored to the best of my knowledge:
echo "Executing the command: $@"
exec "$@" & tail -f /var/log/logShell/logInShell.log
echo "External user startup script finished"
The expected behavior would be that the command passed in "$@" gets executed, and then the logs that are in logInShell.log are shown in the shell. However, the logs are never shown in the shell despite the file having several lines.
From what I have read, running exec replaces the current process, which makes me think it also replaces the shell, and that's why the output of tail is never seen in the current shell.
Once the pod is running, I can see the tail command running using ps -A. However, the logs never reach the shell at all. Does anybody know what would be the right way to run tail in the current process or shell?
Leave out the exec keyword. You may also want to monitor when the main program you're wrapping exits, and shut down tail if/when that happens.
#!/bin/sh
# warning: this is not actually echoing the same thing as the command you're running
echo "Executing the command: $@"
# --follow=name --retry makes tail behave right even if the log is only created
# after tail's startup is complete; requires GNU extensions
"$@" & main_pid=$!
tail --follow=name --retry /var/log/logShell/logInShell.log & tail_pid=$!
wait "$main_pid"; main_rc=$?
echo "External user startup script finished"
kill "$tail_pid"
exit "$main_rc"
Now, about why echo "Arguments are: $@" is actively misleading / worse than useless:
Consider ./yourscript "first word" "*" "last word" -- the quotes around first word and last word are critical to understanding the command line correctly; same for the quotes around *.
However, echo "Arguments are: $@" runs echo "Arguments are: first word" "*" "last word" -- which writes to output Arguments are: first word * last word, completely destroying the record of where the quotes were in the original command.
If your shell is bash, zsh, or sufficiently modern ksh, this is a nonfatal problem: Instead of echo "Arguments are: $@", you can use:
# bash only
printf -v args_str '%q ' "$@"
echo "Arguments are: $args_str"
...or, with bash 5.0 or newer:
echo "Arguments are: ${*@Q}"
...but with baseline POSIX sh, your best bet is probably to print each argument on its own line if you want to disambiguate:
echo "Arguments are:"
printf ' - %s\n' "$@"
I'm studying the contents of this preinst file, the script that is executed before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
    if [ -d /usr/share/MyApplicationName ]; then
        echo "MyApplicationName is just installed"
        return 1
    fi
    rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
    rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: It checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an “error”, doesn’t it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set :
-e Exit immediately if a command exits with a non-zero status.
However, it's considered bad practice by some (the Bash FAQ and the freenode #bash IRC channel's FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run the do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
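A minimal sketch of that pattern (the do_something handler here is only illustrative; $BASH_COMMAND names the command that triggered the trap):
#!/bin/bash
do_something() {
    # $? is the failing command's exit status; $BASH_COMMAND is the command itself
    echo "$0: command failed (status $?): $BASH_COMMAND" >&2
}
trap do_something ERR

cp /nonexistent/file /tmp    # fails: do_something runs, and the script keeps going
echo "still running"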
set -e stops the execution of a script if a command or pipeline has an error - which is the opposite of the default shell behaviour, which is to ignore errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
As per the bash manual's description of the set builtin, if -e/errexit is set, the shell exits immediately if a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
If it is enabled, the pipeline's return status is that of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like the one below, which prints a simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is a similar option, -E/errtrace, which would trap on ERR instead, e.g.:
trap onerr ERR
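A small sketch of the difference -E/errtrace makes (without it, an ERR trap is not inherited by shell functions, command substitutions, or subshells):
#!/bin/bash
set -o errtrace                 # same as set -E
onerr() { echo "failed: $BASH_COMMAND" >&2; }
trap onerr ERR

inner() { false; }
inner   # with errtrace, onerr also fires for the false inside inner,
        # not only for the failing call to inner itself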
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
    echo "$0: there was a difference" >&2
grep cat food ||
    echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
    : nothing
else
    other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
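A sketch of that temporary-file approach, reusing the find/sed illustration from above (mktemp is not strictly POSIX, but is available essentially everywhere):
tmp=$(mktemp) || exit
find things >"$tmp"          # find's own exit status is now visible to set -e
test -s "$tmp"               # separately require that it actually produced output
sed -e 's/o/me/' "$tmp"
rm -f "$tmp"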
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and will return an error code, and so your bash prompt will close (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
set -e
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line; you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
Copied from: https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425
This may help you.
Script 1: without setting -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error at decho, but the program continues to the next line.
Script 2: With setting -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell processes up to decho "hi" and then the program exits; it does not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement. For example:
set -e
false
echo never executed

set -e
if false; then
    echo never executed
fi
echo executed
false
echo never executed
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see the exit status of echo "hi" being reported (0) and hi is printed.
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the error from cat b.txt being reported instead (exit status 1), and hi is not printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and report its status, use the -e option.
I'm using Paramiko in order to execute a single command or multiple commands and get their output.
Since Paramiko doesn't allow executing multiple commands on the same channel session, I'm concatenating the commands from my command list and executing them in a single line, but the output can be one large block of text depending on the commands, so it's difficult to tell which output belongs to which command.
ssh.exec_command("pwd; ls -l; cd /; ls -l")
I want to have something like:
command_output = [('pwd','output_for_pwd'),('ls -l','output_for_ls'), ... ]
to work easier with every command output.
Is there a way to do it without changing the Paramiko library?
The only solution is (as @Barmar already suggested) to insert a unique separator between the individual commands. Like:
pwd && echo "end-of-pwd" && cd /foo && echo "end-of-cd" && ls -l && echo "end-of-ls"
And then look for the unique string in the output.
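If you also want each command's exit status, a variation on the same idea (a sketch; the marker text is arbitrary and only needs to be unique, and using ; instead of && means later commands still run after a failure):
pwd;      echo "===end-of-pwd rc=$?"
cd /foo;  echo "===end-of-cd rc=$?"
ls -l;    echo "===end-of-ls rc=$?"
On the client side, split the received text on the lines containing the marker; the rc value tells you how each command exited.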
Though imo, it is much better to simply separate the commands into individual exec_command calls. I do not really think that you need to execute multiple commands in a row often anyway. Usually you only need something like cd or set, and these commands do not really output anything.
Like:
pwd
ls -la /foo (or cd /foo && ls -la)
For similar questions, see:
Execute multiple dependent commands individually with Paramiko and find out when each command finishes (for "shell" channel)
Combining interactive shell and recv_exit_status method using Paramiko
I used to do this for sending commands over ssh and telnet; you can capture the output of each command and try it.
# tn is assumed to be an already-open telnet-style connection (e.g. telnetlib.Telnet);
# note that on Python 3, telnetlib expects bytes rather than str
cmds = ['pwd', 'ls -lrt', 'exit']
cmd_output = []
for cmd in cmds:
    tn.write(cmd)
    tn.write("\r\n")
    out = tn.read_until('#')      # read up to the shell prompt
    cmd_output.append((cmd, out))
    print(out)
I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works, but it's annoying to do all these steps.
I've read some methods in which you need to save the PID in some file, but that's even more hassle.
Is there a clean method to easily start / stop a Python script? like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note : I'm looking for a one or two line solution, I don't want to have to write a dedicated systemd service for this operation.
As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, slightly different because it actually removes the program from the shell's job list, preventing SIGHUP from being sent.
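A minimal sketch of the disown variant (same command line as above; %+ refers to the most recently started background job):
python -u myscript.py > ./mylog.log 2>&1 &
disown %+    # drop the job from the shell's job table so no SIGHUP reaches it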
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
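A quick sketch of the screen workflow (the session name mysession is just an illustration):
screen -S mysession          # start a named screen session
python -u myscript.py        # run the script inside it
# press Ctrl-a, then d, to detach; log out as usual
screen -r mysession          # after logging back in, reattach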
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
    [[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
    nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}

mykill() {
    ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
    echo "process "$1" killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you asked for:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
mykill myscript.py
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop
Do you mean log in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open up tmux with tmux, type in your command, type in CONTROL+B+D to 'detach' from tmux, and then type exit at the main terminal to log out. When you log back in, tmux and the processes running in it will still be running.
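The same workflow as a quick sketch (the session name mysession is just an illustration):
tmux new -s mysession        # start a named tmux session
python -u myscript.py        # run the script inside it
# press Ctrl-b, then d, to detach; then log out as usual
tmux attach -t mysession     # after logging back in, reattach and check on it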