Detect if Jasmine failed via Bash - python

I'm running Jasmine tests in my web app and I want to create a bash script that runs the test and pushes the current code to the remote git repository if there are no failures. Everything is super-duper except the fact that I can't tell if the tests succeeded or failed. How can I do it? If there is no way to do it in bash I can do it in python or nodejs.
I want the code to look something like this:
#!/bin/bash
succeeded=$(grunt test -no_output) # or something like it
if [ "$succeeded" = 'True' ]; then
    git push origin master
fi

It looks like grunt uses exit codes to indicate whether tasks are successful. You can use this to determine whether to push:
if grunt test -no_output; then
    git push origin master
fi
This tests for a 0 (success) exit code from grunt and pushes if it receives one.
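Putting this together with the original script, a minimal sketch (the -no_output flag from the question is not a standard grunt option, so output is silenced via redirection instead; drop the redirection to see the test log):
#!/bin/bash
if grunt test > /dev/null 2>&1; then
    git push origin master
else
    echo "Tests failed; not pushing." >&2
    exit 1
fi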

Run the command, then look at $?. For example (using touch as the command whose status is checked):
touch /tmp/newfile
if [ $? -eq 0 ]
then
    echo "Successfully created file"
else
    echo "Could not create file" >&2
fi
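Applied to the question, a hedged sketch that saves $? in a variable first, which is useful when you want to report the status later:
#!/bin/bash
grunt test            # run the Jasmine tests via grunt
status=$?             # capture the exit status before another command overwrites $?
if [ "$status" -eq 0 ]; then
    git push origin master
else
    echo "grunt test exited with status $status" >&2
fi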

Related

Interact with docker container in the middle of a bash script execution [in that container]

I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this. It's a bash script that runs other scripts and functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so that I can investigate the problem in the script, do something to fix it and continue execution (or just exit if I cannot fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically so I don't have to manually keep track of what is going on with each script and run docker attach... when needed, because I will be running multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately and I don't know why. I think it is something to do with ttys, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d on the docker command, tried using docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I use PyCharm by default). Besides that, I tried the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal NOT reproducible (of what is not working) example of how I am starting a container.
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = "-e REPO_NAME=test -e PROJECT=test_test"
    script = '-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"
    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's an even bigger mess than what I described earlier. I will say, though, that I am trying to run this project: https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick script example (without a tty in this case, only because the demo uses echo for input):
#!/bin/bash
docker run --name test_name -id debian \
bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'
while ! docker logs test_name | grep reading; do
    sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
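If you keep the tty (the -t flag above) and want to type into the container rather than pipe input, a hedged variation of the same pattern:
#!/bin/bash
# Same idea, but attach interactively instead of piping via echo.
docker run --name test_name -dit debian \
    bash -c 'echo start; sleep 10; echo "reading"; bash'
while ! docker logs test_name 2>/dev/null | grep -q reading; do
    sleep 3
done
# Interactive from here on; detach without stopping it via Ctrl-p Ctrl-q.
docker attach test_name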
The whole process would be easier to control via the Docker Python SDK, rather than having a layer of shell between the Python and Docker.
As I said in a comment on Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running: some of the many shell processes (https://imgur.com/a/JiPYGWd) seem to be taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ "$res" -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do it with the help of the Docker Python SDK, monitoring the output of the container so I know when to launch the terminal.
import docker
from subprocess import Popen

def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"',
                      shell=True).wait()
            line = b''
            continue
        line += log

client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish investigating the problem in that new bash process, I send the "status code of the investigation" to tell the script to continue running or to exit.
echo 0 > "/tmp/mypipe_$githash"
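For reference, both halves of the handshake in one place (a sketch; the pipe name mypipe_demo is illustrative):
#!/bin/bash
# Inside the container: block until an investigator writes to the pipe.
pipe="/tmp/mypipe_demo"
mkfifo "$pipe" && echo "waiting on $pipe"
read -r res < "$pipe"                  # blocks here
if [ "$res" -ne 0 ]; then exit 33; fi  # a non-zero reply aborts the pipeline

# From the "docker exec" shell, after investigating:
#   echo 0 > /tmp/mypipe_demo   # let the pipeline continue
#   echo 1 > /tmp/mypipe_demo   # make it exit with code 33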

Sys exit return value not showing up when echo in bash [duplicate]

I'm studying the content of a preinst file, which is executed before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
    if [ -d /usr/share/MyApplicationName ]; then
        echo "MyApplicationName is just installed"
        return 1
    fi
    rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
    rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: it checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an "error", doesn't it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set:
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (the bash FAQ and irc freenode #bash FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
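A small, hedged example of that approach (the function name on_error is illustrative):
#!/bin/bash
# Report the failing line instead of silently aborting the whole script.
on_error() {
    echo "$0: error on line $1" >&2
}
trap 'on_error $LINENO' ERR
false                        # triggers the ERR trap
echo "execution continues"   # still runs, unlike with set -e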
set -e stops the execution of a script if a command or pipeline has an error - which is the opposite of the default shell behaviour, which is to ignore errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
As per bash - The Set Builtin manual, if -e/errexit is set, the shell exits immediately if a pipeline consisting of a single simple command, a list or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
If it is enabled, the pipeline's return status is that of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like the one below, which prints a simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is a similar option -E/errtrace which would trap on ERR instead, e.g.:
trap onerr ERR
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
echo "$0: there was a difference" >&2
grep cat food ||
echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
    : nothing
else
    other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
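A hedged sketch of that temporary-file pattern, staying within POSIX sh as the policy requires:
#!/bin/sh
set -e
tmp=$(mktemp)
# Run the producer on its own, so set -e sees its exit status
# instead of having it masked by a pipeline.
find /etc -name '*.conf' > "$tmp"
# Consume the output; if find itself had failed we would never get here.
grep . "$tmp" || echo "$0: no matching files" >&2
rm -f "$tmp"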
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and will return an error code, so your shell will exit (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line - you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
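A commonly seen defensive preamble along these lines (a convention, not something any of the scripts above require):
#!/bin/bash
# Exit on errors, treat unset variables as errors,
# and make a pipeline fail if any stage in it fails.
set -euo pipefail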
Copied from https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425; this may help you.
Script 1: without set -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error on decho, but the program continues to the next line.
Script 2: with set -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell processes up to decho "hi", then the program exits; it does not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement, e.g.:
set -e
false
echo never executed
set -e
if false; then
    echo never executed
fi
echo executed
false
echo never executed
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see that the exit status of echo "hi" is reported and hi is printed.
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the cat b.txt error being reported instead, and no hi is printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and report its status, use the -e option.

How to prevent push to master branch using python?

I have written a pre-push hook in Python which only partially prevents pushes to the master branch: when I am on a feature branch and give the command git push origin master, the files are still pushed.
When HEAD is on the master branch, the push is prevented.
But when HEAD is on the feature1 branch, the push to master is not prevented.
My code so far:
#!/usr/bin/env python
import sys
from subprocess import check_output

branch = check_output(['git', 'symbolic-ref', '--short', 'HEAD']).strip()
print('branch-name:', branch.decode('utf-8'))  # prints the current branch, e.g. feature
if branch.decode('utf-8') != 'master':
    print('into if clause')
    print('push to remote successful')
    sys.exit(0)
else:
    print('into else clause')
    print('you are not allowed to push to the master branch')
    sys.exit(1)
I want to modify the code so that the following commands are not allowed (irrespective of the branch I am on): git push --force origin master; git push --delete origin master; git push origin master; git co master; git push --force origin. Thanks in advance.
If you are using the free plan on a private GitHub repo, you may not be able to use the protected branch feature, so you need to block any push/commit locally.
Please keep in mind that it can easily be bypassed with the --no-verify flag.
I recommend doing it with husky instead of Python, since it is way easier, I think.
This is what I did to make it work locally and distribute it to all of the repo's members.
First of all, you need to install husky to control the pre-commit and pre-push hooks.
Then I made a pre-push bash script and committed it inside the repository, calling it from the husky pre-push hook with the husky parameter.
This is my husky configuration inside package.json (you can use a separate config file if you want):
"husky": {
"hooks": {
"pre-commit": "./commands/pre-commit",
"pre-push": "./commands/pre-push $HUSKY_GIT_STDIN"
}
},
As you can see, I have 2 scripts: one for pre-push and one for pre-commit.
And this is my commands/pre-push bash script
#!/bin/bash
echo -e "===\n>> Talenavi Pre-push Hook: Checking branch name / Mengecek nama branch..."
BRANCH=`git rev-parse --abbrev-ref HEAD`
PROTECTED_BRANCHES="^(master|develop)"
if [[ $1 != *"$BRANCH"* ]]; then
    echo -e "\n🚫 You must use (git push origin $BRANCH) / Anda harus menggunakan (git push origin $BRANCH).\n" && exit 1
fi
if [[ "$BRANCH" =~ $PROTECTED_BRANCHES ]]; then
    echo -e "\n🚫 Cannot push to remote $BRANCH branch, please create your own branch and use PR."
    echo -e "🚫 Tidak bisa push ke remote branch $BRANCH, silahkan buat branch kamu sendiri dan gunakan pull request.\n" && exit 1
fi
echo -e ">> Finish checking branch name / Selesai mengecek nama branch.\n==="
exit 0
The script basically does 2 things:
It blocks anybody who tries to push to certain branches (in my case I don't want anybody - including myself - to push directly to the master and develop branches). They need to work in their own branch and then create a pull request.
It blocks anybody who tries to push to a branch that is different from their current active branch. For example, you are on branch fix/someissue but mistakenly type git push origin master.
For more detailed instructions you can follow from this article:
https://github.com/talenavi/husky-precommit-prepush-githooks
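If you would rather not depend on husky, the same kind of check can live directly in .git/hooks/pre-push; a hedged sketch (git feeds the refs being pushed to the hook on standard input):
#!/bin/bash
# .git/hooks/pre-push -- make it executable with: chmod +x .git/hooks/pre-push
# git passes lines of the form:
#   <local ref> <local sha> <remote ref> <remote sha>
while read -r local_ref local_sha remote_ref remote_sha; do
    if [ "$remote_ref" = "refs/heads/master" ]; then
        echo "Pushing to master is blocked by the pre-push hook." >&2
        exit 1
    fi
done
exit 0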

Attach success or fail commands to an existing process?

In Bash we can run programs like this
./mybin && echo "OK"
and this
./mybin || echo "fail"
But how do we attach success or fail commands to existing process?
Edit for explanation:
Suppose ./mybin is already running.
When mybin exits, with or without an error, nothing will happen; but while mybin is running,
how can I achieve the same effect as if I had started the process with ./mybin && echo ok || echo fail?
Answers in both shell script and Python would be awesome, thanks!
But how do we attach success or fail commands to an existing, already running process?
Assuming you're referring to a command that is run in the background, you can use wait to wait for that command to finish and retrieve its return status. For example:
./command & # run in background
wait $! && echo "completed successfully"
Here I used $! to get the PID of the backgrounded process, but it can be the PID of any child process of the shell.
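If the process was not started by your shell, wait cannot collect it. A hedged workaround is to poll until the PID disappears; note that the exit status of an unrelated process is only visible to its parent, so this only tells you that it finished:
#!/bin/bash
# Usage: ./watch_pid.sh <pid>
pid="$1"
# "kill -0" sends no signal; it only tests whether the PID still exists.
while kill -0 "$pid" 2>/dev/null; do
    sleep 1
done
echo "process $pid has exited"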

How to use fabric with dtach/screen, is there some example?

I have googled a lot, and the Fabric FAQ also says to use screen or dtach with it, but I didn't find how to implement it.
Below is my non-working code; the sh script will not execute as expected since it is a nohup task:
def dispatch():
    run("cd /export/workspace/build/ && if [ -f spider-fetcher.zip ];then mv spider-fetcher.zip spider-fetcher.zip.bak;fi")
    put("/root/build/spider-fetcher.zip","/export/workspace/build/")
    run("cd /export/script/ && sh ./restartCrawl.sh && echo 'finished'")
I've managed to do it in two steps:
Start a tmux session on the remote server in detached mode:
run("tmux new -d -s foo")
Send a command to the detached tmux session:
run("tmux send -t foo.0 ls ENTER")
Here '-t' determines the target session ('foo'), and 'foo.0' gives the number of the pane in which the 'ls' command is to be executed.
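If you also need to know when the detached command finishes, tmux's wait-for can signal back; a hedged sketch of the plain-shell equivalent of the two run() calls above:
#!/bin/bash
# Start a detached session, run the long command, then signal completion.
tmux new -d -s foo
tmux send -t foo.0 "sh ./restartCrawl.sh; tmux wait-for -S done" ENTER
tmux wait-for done    # blocks until the -S signal above fires
echo finished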
You can just prepend screen to the command you want to run:
run("screen long running command")
Fabric, though, doesn't keep state the way something like expect would, as each run/sudo/etc. is its own separate command run, without knowing the state of the last command. E.g. run("cd /var"); run("pwd") will not print /var but the home directory of the user who has logged into the box.
