Travis script exit code never fails even when it should - python

The script portion of my Travis .yml file looks like this:
script:
- ./run_tests.sh
The script itself runs some tests on Sauce Labs. If the script fails due to test failures, it still exits with code 0 and the build continues on to pass as well. Why doesn't the script exit with a failure code if a test fails?
When I output the exit code from the end of my script file, I get 0. When I output the exit code in the .travis.yml file immediately after the script command, I get 1:
echo $?
0
The command "./run_tests.sh" exited with 0.
$ echo $?
1

I realized this was because I'm actually running my tests using unittest.TextTestRunner, and the exit code from those tests is always 0 unless you specifically catch the test failures and exit based on them:
ret = not runner.run(test_suite).wasSuccessful()
sys.exit(ret)
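With that fix in place, the wrapper script only needs to let the Python exit status propagate. A minimal sketch of what run_tests.sh could look like (the test entry point name here is a placeholder, not from the question):
#!/bin/bash
# With no commands after it, the script's exit status is the exit status of
# this last command, which is what Travis checks.
python run_sauce_tests.py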

Related

Sys exit return value not showing up when echo in bash [duplicate]

I'm studying the content of this preinst file, which the package manager executes before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
    if [ -d /usr/share/MyApplicationName ]; then
        echo "MyApplicationName is just installed"
        return 1
    fi
    rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
    rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: it checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an "error", doesn't it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set:
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (the bash FAQ and irc freenode #bash FAQ authors). They recommend using:
trap 'do_something' ERR
to run the do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
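For illustration, a minimal sketch of that trap-on-ERR approach (the handler name and the failing command are just examples):
#!/bin/bash
# Run a handler whenever a command fails; $LINENO and $? are expanded when
# the trap fires because the trap string is single-quoted.
on_error() {
    echo "error on line $1 (exit status $2)" >&2
    exit "$2"
}
trap 'on_error $LINENO $?' ERR

cp /nonexistent/file /tmp    # fails, so on_error runs and the script exits
echo "never reached"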
set -e stops the execution of a script if a command or pipeline has an error, which is the opposite of the default shell behaviour of ignoring errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
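If you instead want the outer script's own exit code (62 here) to win, one variant (a sketch) is to handle the inner failure explicitly so errexit never aborts:
#!/bin/sh
set -e
# The || handler makes the whole list succeed, so set -e does not abort;
# inside the handler, $? still holds the inner script's exit status (26).
./inner-test.sh || echo "inner-test.sh failed with status $?" >&2
exit 62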
As per the bash manual's description of the set builtin, if -e/errexit is set, the shell exits immediately if a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
With pipefail enabled, the pipeline's return status is that of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like the one below, which prints a simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is a related option, -E/errtrace, which makes the ERR trap be inherited by shell functions; to run something on errors rather than on exit, trap on ERR instead, e.g.:
trap onerr ERR
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
    echo "$0: there was a difference" >&2
grep cat food ||
    echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
    : nothing
else
    other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
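A rough sketch of that temporary-file pattern, reusing the sed/grep example from above (the file names are illustrative):
#!/bin/sh
set -e
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT

# Run the producer on its own so set -e sees its exit status directly,
# instead of a pipeline masking it behind grep's status.
sed -e 's/o/me/' stuff > "$tmp"

# Only then check whether it produced any output, with explicit handling.
grep ^ "$tmp" || echo "$0: sed produced no output" >&2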
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and will return an error code, and so your bash prompt will close (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
set -e: The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line; you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
Copied from: https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425
This may help you.
Script 1: without set -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error at decho, but the program continues to the next line.
Script 2: with set -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell processes up to decho "hi" and then exits; it does not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement, e.g.:
set -e
false
echo never executed
set -e
if false; then
    echo never executed
fi
echo executed
false
echo never executed
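The same exemption applies, per the bash manual, to the conditions of while/until loops and to commands on the left-hand side of && or ||; only a failure of the command after the final && or || triggers errexit. A small sketch:
set -e
while false; do :; done      # a failing loop condition does not exit the shell
false && echo "not printed"  # the left-hand side of && is exempt
false || echo "handled"      # and so is the left-hand side of ||
echo "still executed"
false                        # a plain failing command does exit here
echo "never executed"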
cat a.sh
#! /bin/bash
# going forward, report the subshell/command exit value on errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see the exit status of the last command, echo "hi", being reported, and hi is printed.
cat a.sh
#! /bin/bash
# going forward, report the subshell/command exit value on errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the error status from cat b.txt being reported instead, and hi is not printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want the script to exit on error and report that error's status, use the -e option.

Gdb Buffer Overflow; Python won't execute

I have a problem with gdb: I can't make a python command run inside it. It just hangs forever until I press Enter a second time.
gdb$ run $(python -c "print('A'*50)");
Starting program: /home/Myprogram $(python -c "print('A'*50)");
[buf]:
[check] 0x4030201
[Inferior 1 (process 27229) exited normally]
--------------------------------------------------------------------------[regs]
EAX:Error while running hook_stop:
No registers.
I went searching, and each time someone uses this same command:
Starting program: /home/Myprogram $(python -c "print('A'*50)");
They have a line of 50 'A's just below.
The second part (after [buf]:) is only shown if I press Enter a second time. If I do nothing, it just hangs and doesn't execute the python command.
Any advice?
I finally found the solution:
echo "your python code" > bla.py
python bla.py > output
gdb yourProgram
In gdb:
gdb$ run < output
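A slightly shorter variant of the same workaround (a sketch, using the 50-'A' payload from the question) is to generate the input file directly, without a separate bla.py:
python -c "print('A'*50)" > output
gdb /home/Myprogram
# then inside gdb:
#   (gdb) run < output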

How to check whether or not a python script is up?

I want to make sure my python script is always running, 24/7. It's on a Linux server. If the script crashes I'll restart it via cron.
Is there any way to check whether or not it's running?
Taken from this answer:
A bash script which starts your python script and restarts it if it doesn't exit normally:
#!/bin/bash
until script.py; do
    echo "'script.py' exited with code $?. Restarting..." >&2
    sleep 1
done
Then just start the monitor script in the background:
nohup script_monitor.sh &
Edit for multiple scripts:
Monitor script:
cat script_monitor.sh
#!/bin/bash
until ./script1.py
do
    echo "'script1.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
until ./script2.py
do
    echo "'script2.py' exited with code $?. Restarting..." >&2
    sleep 1
done &
Example scripts:
cat script1.py
#!/usr/bin/python
import time
while True:
    print 'script1 running'
    time.sleep(3)
cat script2.py
#!/usr/bin/python
import time
while True:
    print 'script2 running'
    time.sleep(3)
Then start the monitor script:
./script_monitor.sh
This starts one monitor script per python script in the background.
Try this and enter your script name.
ps aux | grep SCRIPT_NAME
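Note that the grep process itself usually shows up in that listing; two common ways around that (sketched here) are the bracket trick and pgrep, where available:
ps aux | grep "[S]CRIPT_NAME"   # [S] keeps grep from matching its own command line
pgrep -f SCRIPT_NAME            # matches against the full command line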
Create a script (say check_process.sh) which will:
Find the process id of your python script using the ps command.
Save it in a variable, say pid.
Create an infinite loop. Inside it, search for your process. If it is found, sleep for 30 or 60 seconds and check again.
If the pid is not found, exit the loop and send a mail to your mail id saying that the process is not running.
Now run check_process.sh with nohup so it keeps running in the background.
I implemented it this way a while back and remember it worked fine; a rough sketch follows below.
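A sketch of such a check_process.sh, with the script name and mail address as placeholders (not from the original answer):
#!/bin/bash
SCRIPT_NAME="my_script.py"
MAIL_TO="me@example.com"

while true; do
    # Find the process id of the python script; the [p] keeps this grep
    # from matching its own command line.
    pid=$(ps aux | grep "[p]ython.*$SCRIPT_NAME" | awk '{print $2}')
    if [ -z "$pid" ]; then
        echo "$SCRIPT_NAME is not running" | mail -s "$SCRIPT_NAME down" "$MAIL_TO"
        break
    fi
    sleep 60
done
Run it with nohup ./check_process.sh & so the check continues in the background.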
You can use
runit
supervisor
monit
systemd (I think)
Do not hack this with a script
upstart, on Ubuntu, will monitor your process and restart it if it crashes. I believe systemd will do that too. No need to reinvent this.

Detect if Jasmine failed via Bash

I'm running Jasmine tests in my web app and I want to create a bash script that runs the test and pushes the current code to the remote git repository if there are no failures. Everything is super-duper except the fact that I can't tell if the tests succeeded or failed. How can I do it? If there is no way to do it in bash I can do it in python or nodejs.
I want the code to look like this:
#!/bin/bash
succeeded=$(grunt test -no_output) #or some thing like it
if[ succeeded = 'True'] than
git push origin master
fi
It looks like grunt uses exit codes to indicate whether tasks are successful. You can use this to determine whether to push:
if grunt test -no_output; then
git push origin master
fi
This tests for a 0 (success) exit code from grunt and pushes if it receives one.
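Putting that into the shape of the script from the question, a sketch might look like this (the -no_output flag from the question is omitted; as noted above, grunt's exit code already reflects success or failure):
#!/bin/bash
# Push only when the grunt test task exits with 0.
if grunt test; then
    git push origin master
else
    echo "Tests failed; not pushing." >&2
    exit 1
fi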
Run the command, then immediately check $?. Example:
if [ $? -eq 0 ]
then
    echo "Successfully created file"
else
    echo "Could not create file" >&2
fi

Commit in git only if tests pass

I've recently started using git, and also begun unit testing (using Python's unittest module). I'd like to run my tests each time I commit, and only commit if they pass.
I'm guessing I need to use the pre-commit hook in /hooks, and I've managed to make it run the tests, but I can't seem to find a way to stop the commit if the tests fail. I'm running the tests with make test, which in turn runs python3.1 foo.py --test. It seems like I don't get a different exit condition whether the tests pass or fail, but I may be looking in the wrong place.
Edit: Is this something uncommon that I want to do here? I would have thought it was a common requirement...
Edit2: Just in case people can't be bothered to read the comments, the problem was that unittest.TextTestRunner doesn't exit with non-zero status, whether the test suite is successful or not. To catch it, I did:
result = runner.run(allTests)
if not result.wasSuccessful():
    sys.exit(1)
I would check to make sure that each step of the way, your script returns a non-zero exit code on failure. Check to see if your python3.1 foo.py --test returns a non-zero exit code if a test fails. Check to make sure your make test command returns a non-zero exit code. And finally, check that your pre-commit hook itself returns a non-zero exit code on failure.
You can check for a non-zero exit code by adding || echo $? to the end of a command; that will print out the exit code if the command failed.
The following example works for me (I'm redirecting stderr to /dev/null to avoid including too much extraneous output here):
$ python3.1 test.py 2>/dev/null || echo $?
1
$ make test 2>/dev/null || echo $?
python3.1 test.py
2
$ .git/hooks/pre-commit 2>/dev/null || echo $?
python3.1 test.py
1
test.py:
import unittest

class TestFailure(unittest.TestCase):
    def testFail(self):
        assert(False)

if __name__ == '__main__':
    unittest.main()
Makefile:
test:
	python3.1 test.py
.git/hooks/pre-commit:
#!/bin/sh
make test || exit 1
Note the || exit 1. This isn't necessary if make test is the last command in the hook, as the exit status of the last command will be the exit status of the script. But if you have later checks in your pre-commit hook, then you need to make sure you exit with an error; otherwise, a successful command at the end of the hook will cause your script to exit with a status of 0.
Could you parse the result of the python test session and make sure to exit your pre-commit hook with a non-zero status?
The hook should exit with non-zero status after issuing an appropriate message if
it wants to stop the commit.
So if your python script does not return the appropriate status for any reason, you need to determine that status directly from the pre-commit hook script.
That would ensure the commit does not go forward if the tests failed.
(Or you could call, from the hook, a Python wrapper that runs the tests and calls sys.exit(exit_status) according to the test results.)
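A rough sketch of that parsing approach, assuming the standard unittest summary output and the foo.py --test invocation from the question:
#!/bin/sh
# Run the tests, show their output, and inspect the summary line:
# unittest prints OK on success and FAILED (...) otherwise.
output=$(python3.1 foo.py --test 2>&1)
echo "$output"
if ! printf '%s\n' "$output" | grep -q '^OK'; then
    echo "Tests failed; aborting commit." >&2
    exit 1
fi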
Another option, if you don't want to manage pre-commit hooks manually:
There is a nice tool that runs tests and syntax checks for Python, Ruby and so on: github/overcommit
