I have a bash build script which I run sourced so that I can activate a Python virtual environment. First, I run the unit tests with python3.7 -m unittest. If these fail, I don't want to run the main program. So I need to deactivate the virtual environment (so the terminal is back to its original state) and then return 1 to exit the build script.
So my script looks like this:
# activate virtual env ...
python3.7 -m unittest || deactivate; return 1;
python3.7 app.py
deactivate
When the unit tests fail, python3.7 -m unittest returns 1 and the virtual environment deactivates as expected.
When the unit tests run successfully, python3.7 -m unittest returns 0; however, strangely, the right-hand side of the || seems to partially run. I haven't figured out whether it's an oddity with bash or with deactivate, but here are some samples of the behaviour:
(exit 0) || deactivate; echo "Tests failed"; return 1; (Output: "Tests failed", deactivate not run)
(exit 0) || echo "Deactivating"; deactivate; echo "Tests failed"; return 1; (Output: "Tests failed", deactivate ran)
(exit 0) || echo "Tests failed"; return 1; (Output: Nothing, deactivate not run)
The last case of those three makes sense and follows expected behaviour, the other two do not.
This is related to Bash Pitfall 22, but not exactly the same. The important point is how the statements are grouped:
cmd1 || cmd2; cmd3
will run cmd1, and if the exit status is non-zero, cmd2; then, no matter what, cmd3.
The intention is rather this:
cmd1 || { cmd2; cmd3; }
If cmd1 fails, run cmd2 and cmd3.
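Applied to the build script from the question, the grouping fix looks like this (a minimal sketch; return rather than exit because the script is run sourced):

# activate virtual env ...
python3.7 -m unittest || { deactivate; return 1; }
python3.7 app.py
deactivate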
Related
I'm studying the content of this preinst file, which the package manager executes before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
    if [ -d /usr/share/MyApplicationName ]; then
        echo "MyApplicationName is just installed"
        return 1
    fi
    rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
    rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: it checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an “error”, doesn’t it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set:
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (bash FAQ and irc freenode #bash FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run the do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
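A minimal sketch of that approach (the message text is illustrative; note that without set -e an ERR trap reports the failure but does not stop the script):

#!/bin/bash
# single quotes delay $LINENO expansion until the trap actually fires
trap 'echo "$0: error at line $LINENO" >&2' ERR

false # triggers the trap
echo "still running"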
set -e stops the execution of a script if a command or pipeline has an error - which is the opposite of the default shell behaviour of ignoring errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
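If you want the outer script to keep running and inspect the inner script's status instead, wrap the call in an if; set -e does not fire for a command tested as a condition. A minimal sketch reusing the scripts above:

#!/bin/sh
set -e
if ./inner-test.sh; then
    status=0
else
    status=$? # 26 here, and set -e does not abort
fi
echo "inner script exited with status $status"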
As per bash - The Set Builtin manual, if -e/errexit is set, the shell exits immediately if a pipeline consisting of a single simple command, a list, or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
If pipefail is enabled, the pipeline's return status is that of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining a trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like the one below, which prints a simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is a similar option -E/errtrace which traps on ERR instead, e.g.:
trap onerr ERR
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
echo "$0: there was a difference" >&2
grep cat food ||
echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
    : nothing
else
    other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
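A hedged sketch of that temporary-file pattern, reusing the find example from above (mktemp is not strictly POSIX but is universally available):

tmpfile=$(mktemp) || exit 1
find things > "$tmpfile" || { echo "$0: find failed" >&2; rm -f "$tmpfile"; exit 1; }
grep . "$tmpfile" || echo "$0: find produced no output" >&2
rm -f "$tmpfile"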
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and will return an error code, and so your shell will exit (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line; you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and what you want in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
Copied from https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425; this may help you.
Script 1: without set -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error on decho, and the program continues to the next line.
Script 2: with set -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell will process up to decho "hi" and then exit; it will not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement, e.g.:
set -e
false
echo never executed
set -e
if false; then
echo never executed
fi
echo executed
false
echo never executed
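The same exemption applies to commands guarded by || or &&; only an unguarded failure triggers the exit. A minimal sketch:

set -e
false || true
echo executed
false && true
echo also executed
false
echo never executed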
cat a.sh
#! /bin/bash
# going forward, report the subshell or command exit value if it errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see the exit status of echo "hi" being reported, and hi is printed.
cat a.sh
#! /bin/bash
# going forward, report the subshell or command exit value if it errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the error from cat b.txt being reported instead, and hi is not printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and have its status reported, use the -e option.
I'm new to shell scripting; I want the command to be running always.
My .sh file - startscrapy.sh
#!/bin/bash
echo "Scrapyd is started now"
scrapyd
I have also changed the permissions: chmod +x /etc/init.d/startscrapy.sh
I have placed this file in /etc/init.d but it is not working.
My understanding as of now is that the location /etc/init.d is for .sh files that run whenever the server or system boots up, but I want my .sh file to be in a running state always.
Using crontab you can easily auto-start any script in Ubuntu.
Please do the following steps:
Run the command crontab -e so that you can edit the crontab.
Now add the following line to the crontab editor: @reboot sudo <script>; in your case it should be @reboot sudo scrapyd.
Now reboot your system, and you will find scrapyd running.
Hope it helps.
Take a look at this init.d template and change yours accordingly.
Then you need to register the startup script with your initialisation daemon. Under Ubuntu that would be: update-rc.d NAMEofDAEMON defaults
You want to create a daemon. There are some tutorials on the internet for doing this; I took this one for you. In the final part, you might use a different way to register the script; this one is for Ubuntu.
You need to put the following into a file with a name of your choice (I will use "startscrapy.sh" for now). You can modify it, obviously, according to your needs:
#!/bin/sh -e
DAEMON="scrapyd" #Command to run
daemon_OPT="" #Arguments for your program
DAEMONUSER="user" #Program user
daemon_NAME="scrapyd" #Program name (needs to be identical to the executable)
PATH="/sbin:/bin:/usr/sbin:/usr/bin" #Don't touch
test -x $DAEMON || exit 0
. /lib/lsb/init-functions
d_start () {
    log_daemon_msg "Starting system $daemon_NAME Daemon"
    start-stop-daemon --background --name $daemon_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $daemon_OPT
    log_end_msg $?
}

d_stop () {
    log_daemon_msg "Stopping system $daemon_NAME Daemon"
    start-stop-daemon --name $daemon_NAME --stop --retry 5 --quiet --name $daemon_NAME
    log_end_msg $?
}
case "$1" in
    start|stop)
        d_${1}
        ;;
    restart|reload|force-reload)
        d_stop
        d_start
        ;;
    force-stop)
        d_stop
        killall -q $daemon_NAME || true #Replace with an appropriate killing method
        sleep 2
        killall -q -9 $daemon_NAME || true #Replace with an appropriate killing method
        ;;
    status)
        status_of_proc "$daemon_NAME" "$DAEMON" "system-wide $daemon_NAME" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$daemon_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
        exit 1
        ;;
esac
exit 0
Then run as root:
chmod +x /etc/init.d/startscrapy.sh
chmod 0755 /etc/init.d/startscrapy.sh (modify to your script's location)
systemctl daemon-reload
update-rc.d startscrapy.sh defaults
To remove the daemon, run as root :
update-rc.d -f startscrapy.sh remove
I have a bash script (Controller.sh) that invokes a python script (MyDaemon.py). The latter takes an argument and a command, and can be invoked from the command line like so:
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue start
or
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue stop
or
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue status
I am attempting to get Controller.sh to invoke MyDaemon.py and then exit with a status. The python script should be kicked off and Controller.sh should return. This is my Controller.sh code:
COLOR=$1
COMMAND=$2
DIRNAME=`dirname $0`
RESULT="/tmp/$COLOR.$COMMAND.result"
# remove any old console output
rm -f $RESULT 2>/dev/null
#start with CPU affinity for anything other than CPU 0.
sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p $COLOR $COMMMAND</dev/null >$RESULT 2>&1
STATUS=$?
# print output
cat $RESULT
# check on success
if [ $STATUS -ne 0 ]
then
echo "ERROR: $COLOR $COMMAND failed"
exit 1
fi
Now, if on the command line I invoke Controller.sh blue start it kicks off the python script, but Controller.sh does not return a status. On the other hand, if I run the following it does return:
[nford@myserver]# sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p blue start</dev/null >/tmp/blah.log 2>&1
Started with pid 1326
[nford@myserver]#
I am forced to conclude that there is something about the bash script that is preventing it from returning.
It should be noted that MyDaemon.py does fork processes, which is why I need to redirect output. It should also be noted that I'm lifting the majority of this from another script that does something similar with a php script; some of the meaning I'm fuzzy on (such as STATUS=$?). That said, even if I cut out everything after the sudo taskset invocation line, it still fails to return cleanly. How do I get the bash script to properly execute this command?
Post-Script: I'm a little baffled how this question is 'too specific' and was down-voted/voted to close. In an attempt to be crystal clear; I'm trying to understand the differences in how a forking process script runs in the context of the command line versus a bash script. I've provided a specific example above, but this is a general concept.
UPDATE:
This is what results when I run the script using bash -x, further showing that it dies on the sudo taskset line. The fact that the start command has been left off is confusing.
[nford@myserver]# bash -x Controller.sh Blue start
+ COLOR=Blue
+ COMMAND=start
++ dirname Controller.sh
+ DIRNAME=.
+ RESULT=/tmp/Blue.start.result
+ rm -f /tmp/Blue.start.result
+ sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p Blue
UPDATE:
bash -x reveals the problem: the start command is not being passed through; a typo in the variable name produces a silent bash error. Takeaway: use bash -x for debugging!
Because of your typo: you should use set -u at the top of your scripts; it's a life saver and stops sleepless nights, as well as negating the pulling of hair.
set -u would have given you...
myscript.sh: line 11: COMMMAND: unbound variable
Remember you can run scripts like so: bash -u myscript.sh arg1 arg2, and likewise with -x; they both help in tracking down script issues.
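A minimal sketch of how set -u surfaces exactly this kind of typo (variable names taken from the script above):

#!/bin/bash
set -u
COMMAND=start
echo "running: $COMMMAND" # typo: bash aborts with 'COMMMAND: unbound variable'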
How do you create a Bash script to activate a Python virtualenv?
I have a directory structure like:
.env
bin
activate
...other virtualenv files...
src
shell.sh
...my code...
I can activate my virtualenv by:
user@localhost:src$ . ../.env/bin/activate
(.env)user@localhost:src$
However, doing the same from a Bash script does nothing:
user@localhost:src$ cat shell.sh
#!/bin/bash
. ../.env/bin/activate
user@localhost:src$ ./shell.sh
user@localhost:src$
What am I doing wrong?
When you source, you're loading the activate script into your active shell.
When you do it in a script, you load it into that script's shell, which exits when your script finishes, leaving you back in your original, unactivated shell.
Your best option would be to do it in a function
activate () {
. ../.env/bin/activate
}
or an alias
alias activate=". ../.env/bin/activate"
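Either way, once the function or alias is defined in your interactive shell (e.g. via your ~/.bashrc), activation happens in your current shell rather than in a child process:

user@localhost:src$ activate
(.env)user@localhost:src$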
You should call the bash script using source.
Here is an example:
#!/bin/bash
# Let's call this script venv.sh
source "<absolute_path_recommended_here>/.env/bin/activate"
On your shell just call it like that:
> source venv.sh
Or as @outmind suggested: (Note that this does not work with zsh)
> . venv.sh
There you go, the shell indication will be placed on your prompt.
Although it doesn't add the "(.env)" prefix to the shell prompt, I found this script works as expected.
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". ../.env/bin/activate; exec /bin/bash -i"
e.g.
user@localhost:~/src$ which pip
/usr/local/bin/pip
user@localhost:~/src$ which python
/usr/bin/python
user@localhost:~/src$ ./shell
user@localhost:~/src$ which pip
~/.env/bin/pip
user@localhost:~/src$ which python
~/.env/bin/python
user@localhost:~/src$ exit
exit
Sourcing runs shell commands in your current shell. When you source inside of a script like you are doing above, you are affecting the environment for that script, but when the script exits, the environment changes are undone, as they've effectively gone out of scope.
If your intent is to run shell commands in the virtualenv, you can do that in your script after sourcing the activate script. If your intent is to interact with a shell inside the virtualenv, then you can spawn a sub-shell inside your script which would inherit the environment.
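A minimal sketch of both options, using the paths from the question (note that the interactive sub-shell re-reads your rc files, a point addressed in a later answer):

#!/bin/bash
# option 1: run commands inside the virtualenv from the script itself
. ../.env/bin/activate
python --version # now resolves to the virtualenv's python

# option 2: hand the activated environment to an interactive sub-shell
exec bash -i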
Here is the script that I use often. Run it as $ source script_name
#!/bin/bash -x
PWD=`pwd`
/usr/local/bin/virtualenv --python=python3 venv
echo $PWD
activate () {
. $PWD/venv/bin/activate
}
activate
You can also do this using a subshell to better contain your usage - here's a practical example:
#!/bin/bash
commandA --args
# Run commandB in a subshell and collect its output in $VAR
# NOTE
# - PATH is only modified as an example
# - output beyond a single value may not be captured without quoting
# - it is important to discard (or separate) virtualenv activation stdout
# if the stdout of commandB is to be captured
#
VAR=$(
PATH="/opt/bin/foo:$PATH"
. /path/to/activate > /dev/null # activate virtualenv
commandB # tool from /opt/bin/ which requires virtualenv
)
# Use the output from commandB later
commandC "$VAR"
This style is especially helpful when
a different version of commandA or commandC exists under /opt/bin
commandB exists in the system PATH or is very common
these commands fail under the virtualenv
one needs a variety of different virtualenvs
What do you need to source the bash script for?
If you intend to switch between multiple virtualenvs or enter one virtualenv quickly, have you tried virtualenvwrapper? It provides a lot of utils like workon venv, mkvirtualenv venv and so on.
If you just want to run a python script in a certain virtualenv, use /path/to/venv/bin/python script.py to run it.
As others already stated, what you are doing wrong is not sourcing the script you created. When you run the script just like you showed, it creates a new shell which activates the virtual environment and then exits, so there are no changes to your original shell from which you ran the script.
You need to source the script, which will make it run in your current shell.
You can do that by calling source shell.sh or . shell.sh
To make sure the script is sourced instead of executed normally, it's nice to have some checks in place in the script to remind you. For example, the script I use is this:
#!/bin/bash
if [[ "$0" = "$BASH_SOURCE" ]]; then
    echo "Needs to be run using source: . activate_venv.sh"
else
    VENVPATH="venv/bin/activate"
    if [[ $# -eq 1 ]]; then
        if [ -d $1 ]; then
            VENVPATH="$1/bin/activate"
        else
            echo "Virtual environment $1 not found"
            return
        fi
    elif [ -d "venv" ]; then
        VENVPATH="venv/bin/activate"
    elif [ -d "env" ]; then
        VENVPATH="env/bin/activate"
    fi
    echo "Activating virtual environment $VENVPATH"
    source "$VENVPATH"
fi
It's not bulletproof but it's easy to understand and does its job.
You should use multiple commands in one line, for example:
os.system(". Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver")
When you activate your virtual environment in one os.system call, the activation is forgotten by subsequent calls (each call spawns its own shell); you can prevent this by chaining multiple commands in a single call.
It worked for me :)
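The same idea expressed directly in the shell, assuming the paths from the example above:

. Projects/virenv/bin/activate && python Projects/virenv/django-project/manage.py runserver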
When I was learning venv I created a script to remind me how to activate it.
#!/bin/sh
# init_venv.sh
if [ -d "./bin" ]; then
    echo "[info] Ctrl+d to deactivate"
    bash -c ". bin/activate; exec /usr/bin/env bash --rcfile <(echo 'PS1=\"(venv)\${PS1}\"') -i"
fi
This has the advantage that it changes the prompt.
As stated in other answers, when you run a script, it creates a sub-shell.
When the script exits, all modifications to that shell are lost.
What we actually need is to run a new shell where the virtual environment is active, and not exit from it.
Be aware, this is a new shell, not the one in use before you ran your script.
What this means is, if you type exit in it, it will exit from the subshell and return to the previous one (the one where you ran the script); it won't close your xterm or whatever, as you may have expected.
The trouble is, when we exec bash, it reads its rc files (/etc/bash.bashrc, ~/.bashrc), which will change the shell environment. The solution is to provide bash with a way to set up the shell as usual, while additionally activating the virtual environment. To do this, we create a temporary file, recreating the original bash behavior and adding the few things we need to enable our venv. We then ask bash to use it instead of its usual rc files.
A beneficial side-effect of having a new shell "dedicated" to our venv is that to deactivate the virtual environment, the only thing needed is to exit the shell.
I use this in the script below to provide a 'deactivate' option, which acts by sending a signal to the new shell (kill -SIGUSR1); this signal is intercepted (trap ...) and provokes the exit from the shell.
Note: I use SIGUSR1 so as not to interfere with whatever could be set in the "normal" behavior.
The script I use:
#!/bin/bash
PYTHON=python3
myname=$(basename "$0")
mydir=$(cd $(dirname "$0") && pwd)
venv_dir="${mydir}/.venv/dev"
usage() {
    printf "Usage: %s (activate|deactivate)\n" "$myname"
}
[ $# -eq 1 ] || { usage >&2; exit 1; }
in_venv() {
    [ -n "$VIRTUAL_ENV" -a "$VIRTUAL_ENV" = "$venv_dir" -a -n "$VIRTUAL_ENV_SHELL_PID" ]
}
case $1 in
    activate)
        # check if already active
        in_venv && {
            printf "Virtual environment already active\n"
            exit 0
        }
        # check if created
        [ -e "$venv_dir" ] || {
            $PYTHON -m venv --clear --prompt "venv: dev" "$venv_dir" || {
                printf "Failed to initialize venv\n" >&2
                exit 1
            }
        }
        # activate
        tmp_file=$(mktemp)
        cat <<EOF >"$tmp_file"
# original bash behavior
if [ -f /etc/bash.bashrc ]; then
    source /etc/bash.bashrc
fi
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
# activating venv
source "${venv_dir}/bin/activate"
# remove deactivate function:
# we don't want to call it by mistake
# and forget we have an additional shell running
unset -f deactivate
# exit venv shell
venv_deactivate() {
    printf "Exiting virtual env shell.\n" >&2
    exit 0
}
trap "venv_deactivate" SIGUSR1
VIRTUAL_ENV_SHELL_PID=$$
export VIRTUAL_ENV_SHELL_PID
# remove ourselves; don't leave temporary files lying around
rm -f "${tmp_file}"
EOF
        exec "/bin/bash" --rcfile "$tmp_file" -i || {
            printf "Failed to execute virtual environment shell\n" >&2
            exit 1
        }
        ;;
    deactivate)
        # check if active
        in_venv || {
            printf "Virtual environment not found\n" >&2
            exit 1
        }
        # exit venv shell
        kill -SIGUSR1 $VIRTUAL_ENV_SHELL_PID || {
            printf "Failed to kill virtual environment shell\n" >&2
            exit 1
        }
        exit 0
        ;;
    *)
        usage >&2
        exit 1
        ;;
esac
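Assuming the script is saved as, say, venv.sh and made executable, usage looks like this:

$ ./venv.sh activate # creates .venv/dev on first use, then execs the venv shell
$ ./venv.sh deactivate # run from inside the venv shell to leave it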
I simply added this into my .bashrc-personal config file.
function sv () {
    if [ -d "venv" ]; then
        source "venv/bin/activate"
    else
        if [ -d ".venv" ]; then
            source ".venv/bin/activate"
        else
            echo "Error: No virtual environment detected!"
        fi
    fi
}
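Then, from a project directory that contains a venv or .venv subdirectory (the path below is hypothetical):

$ cd ~/my-project
$ sv
(venv) $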
I've recently started using git, and have also begun unit testing (using Python's unittest module). I'd like to run my tests each time I commit, and only commit if they pass.
I'm guessing I need to use the pre-commit hook in /hooks, and I've managed to make it run the tests, but I can't seem to find a way to stop the commit if the tests fail. I'm running the tests with make test, which in turn runs python3.1 foo.py --test. It seems like I don't get a different exit condition whether the tests pass or fail, but I may be looking in the wrong place.
Edit: Is this something uncommon that I want to do here? I would have thought it was a common requirement...
Edit2: Just in case people can't be bothered to read the comments, the problem was that unittest.TextTestRunner doesn't exit with non-zero status, whether the test suite is successful or not. To catch it, I did:
result = runner.run(allTests)
if not result.wasSuccessful():
    sys.exit(1)
I would check to make sure that each step of the way, your script returns a non-zero exit code on failure. Check to see if your python3.1 foo.py --test returns a non-zero exit code if a test fails. Check to make sure your make test command returns a non-zero exit code. And finally, check that your pre-commit hook itself returns a non-zero exit code on failure.
You can check for a non-zero exit code by adding || echo $? to the end of a command; that will print out the exit code if the command failed.
The following example works for me (I'm redirecting stderr to /dev/null to avoid including too much extraneous output here):
$ python3.1 test.py 2>/dev/null || echo $?
1
$ make test 2>/dev/null || echo $?
python3.1 test.py
2
$ .git/hooks/pre-commit 2>/dev/null || echo $?
python3.1 test.py
1
test.py:
import unittest

class TestFailure(unittest.TestCase):
    def testFail(self):
        assert(False)

if __name__ == '__main__':
    unittest.main()
Makefile:
test:
	python3.1 test.py
.git/hooks/pre-commit:
#!/bin/sh
make test || exit 1
Note the || exit 1. This isn't necessary if make test is the last command in the hook, as the exit status of the last command will be the exit status of the script. But if you have later checks in your pre-commit hook, then you need to make sure you exit with an error; otherwise, a successful command at the end of the hook will cause your script to exit with a status of 0.
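For instance, a sketch of a hook with a second check (make lint is illustrative, not from the question):

#!/bin/sh
make test || exit 1
make lint || exit 1 # if 'make test' above lacked '|| exit 1', this later check could mask its failure
echo "all pre-commit checks passed"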
Could you parse the result of the python test session and make sure to exit your pre-commit hook with a non-zero status?
The hook should exit with non-zero status after issuing an appropriate message if
it wants to stop the commit.
So if your python script does not return the appropriate status for any reason, you need to determine that status directly from the pre-commit hook script.
That would ensure the commit does not go forward if the tests failed.
(Or you could call from the hook a Python wrapper which would run the tests and ensure a sys.exit(exit_status) according to the test results.)
Another option, if you don't want to manage pre-commit hooks manually:
There is a nice tool to run tests and syntax checks for Python, Ruby, and so on: github/overcommit