I want to query the status of the git repo using python. I am using:
subprocess.check_output("[[ -z $(git status -s) ]] && echo 'clean'", shell=True).strip()
This works fine on macOS. However, on Ubuntu Linux I get an error message:
{CalledProcessError}Command '[[ -z $(git status -s) ]] && echo 'clean'' returned non-zero exit status 127.
I went into the same folder and manually ran
[[ -z $(git status -s) ]] && echo 'clean'
and it works fine.
I further ran other commands like
subprocess.check_output("ls", shell=True).strip()
and that also works fine.
What is going wrong here?
When you set shell=True, subprocess executes your command using /bin/sh. On Ubuntu, /bin/sh is not Bash, and you are using Bash-specific syntax ([[...]]). You could explicitly call out to bash instead:
subprocess.check_output(["/bin/bash", "-c", "[[ -z $(git status -s) ]] && echo 'clean'"]).strip()
But it's not clear why you're bothering with a shell script here: just run git status -s with Python, and handle the result yourself:
out = subprocess.run(['git', 'status', '-s'], stdout=subprocess.PIPE)
if not out.stdout:
    print("clean")
Related
I want to start a bunch of docker containers with the help of a Python script. I am using the subprocess library for that. Essentially, I am trying to run this docker command
docker = f"docker run -it --rm {env_vars} {hashes} {results} {script} {pipeline} --name {project} {CONTAINER_NAME}"
in a new terminal window.
Popen(f'xterm -T {project} -geometry 150x30+100+350 -e {docker}', shell=True)
# or
Popen(f'xfce4-terminal -T {project} --minimize {hold} -e="{docker}"', shell=True)
The container's CMD looks like this. It's a bash script that runs other scripts and functions in them.
CMD ["bash", "/run_pipeline.sh"]
What I am trying to do is run an interactive shell (bash) from one of these nested scripts at a specific place in case of a failure (i.e. when some condition is met), so I can investigate the problem in the script, do something to fix it, and continue execution (or just exit if I cannot fix it).
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
I want to do this fully automatically, so I don't have to manually keep track of what is going on with each script and execute docker attach... when needed, because I will run multiple such containers simultaneously.
The problem is that this "rescue" bash process exits immediately, and I don't know why. I think it's something to do with ttys and such, but I've tried a bunch of fiddling around with it and had no success.
I tried different combinations of -i, -t and -d for the docker command, tried to use docker attach... right after starting the container with -d, and also tried starting the Python script directly from bash in a terminal (I am using PyCharm by default). Besides that, I tried to use the socat, screen, script and getty commands (in the nested bash script), but I don't know how to use them properly, so that didn't end well either. At this point I'm too confused to understand why it isn't working.
EDIT:
Adding a minimal NOT reproducible example (of what is not working) of how I am starting a container.
# ./Dockerfile
FROM debian:bookworm-slim
SHELL ["bash", "-c"]
CMD ["bash", "/run_pipeline.sh"]
# run 'docker build -t test .'
# ./small_example.py
from subprocess import Popen

if __name__ == '__main__':
    env_vars = "-e REPO_NAME=test -e PROJECT=test_test"
    script = '-v "$(pwd)"/run_pipeline.sh:/run_pipeline.sh:ro'
    docker = f"docker run -it --rm {env_vars} {script} --name test_name test"

    # Popen(f'xterm -T test -geometry 150x30+100+350 +hold -e "{docker}"', shell=True).wait()
    Popen(f'xfce4-terminal -T test --hold -e="{docker}"', shell=True).wait()
# ./run_pipeline.sh
# do some hard work
ls non/existent/path
if [ $? -ne 0 ]; then
    echo Investigate manually: "$REPO_NAME"
    bash
    if [ $? -ne 0 ]; then exit 33; fi
fi
It seems like the problem may be in the run_pipeline.sh script, but I don't want to upload it here; it's a bigger mess than what I described earlier. But I will say anyway that I am trying to run this thing - https://github.com/IBM/D2A.
So I just wanted some advice on the tty stuff that I am probably missing.
Run the initial container detached, with input and a tty.
docker run -dit --rm {env_vars} {script} --name test_name test
Monitor the container logs for the output, then attach to it.
Here is a quick example script (without a tty in this case, only because the demo uses echo to provide input):
#!/bin/bash
docker run --name test_name -id debian \
    bash -c 'echo start; sleep 10; echo "reading"; read var; echo "var=$var"'

while ! docker logs test_name | grep reading; do
    sleep 3
done
echo "attach input" | docker attach test_name
The complete output after it finishes:
$ docker logs test_name
start
reading
var=attach input
The whole process would be easier to control via the Docker Python SDK rather than having a layer of shell between Python and Docker.
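For example, a rough sketch of the same wait-for-marker loop using docker-py (the image, command and marker are just the ones from the demo above):

import docker

client = docker.from_env()
container = client.containers.run(
    "debian",
    "bash -c 'echo start; sleep 10; echo reading; read var'",
    name="test_name", detach=True, stdin_open=True, tty=True)

# logs(stream=True) yields raw byte chunks, not tidy lines, so buffer them.
buf = b""
for chunk in container.logs(stream=True):
    buf += chunk
    if b"reading" in buf:
        break  # the script is now blocked on `read` and ready to be attached to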
As I said in a comment on Matt's answer, his solution does not work in my situation either. I think it's a problem with the script that I'm running: some of its many shell processes (https://imgur.com/a/JiPYGWd) seem to be taking up the allocated tty, but I don't know for sure.
So I came up with my own workaround. I simply block execution of the script by creating a named pipe and then reading from it.
if [ $? -ne 0 ]; then
    echo Investigate _make_ manually: "$REPO_NAME"
    # Block here until something is written into the pipe
    mkfifo "/tmp/mypipe_$githash" && echo "/tmp/mypipe_$githash" && read -r res < "/tmp/mypipe_$githash"
    if [ "$res" -ne 0 ]; then exit 33; fi
fi
Then I just launch a terminal emulator and execute docker exec in it to start a new bash process. I do it with the help of the Docker Python SDK, monitoring the container's output so I know when to launch the terminal.
import docker
from subprocess import Popen

def monitor_container_output(container):
    line = b''
    for log in container.logs(stream=True):
        if log == b'\n':
            print(line.decode())
            if b'mypipe_' in line:
                Popen(f'xfce4-terminal -T {container.name} -e="docker exec -it {container.name} bash"',
                      shell=True).wait()
            line = b''
            continue
        line += log

# IMAGE_NAME, project, env_vars and volumes come from the surrounding script.
client = docker.from_env()
container = client.containers.run(IMAGE_NAME, name=project, detach=True, stdin_open=True, tty=True,
                                  auto_remove=True, environment=env_vars, volumes=volumes)
monitor_container_output(container)
After I finish my investigation of the problem in that new bash process, I send a "status code of investigation" to tell the script whether to continue running or exit.
echo 0 > "/tmp/mypipe_$githash"
I'd like the following piece of code to pass:
import subprocess
subprocess.run("pkill -f non_existent_process_name || true", shell=True, check=True)
However, it always errors:
subprocess.CalledProcessError: Command 'pkill -f non_existent_process_name || true' died with <Signals.SIGTERM: 15>.
Why is that? How can I make this work?
In a normal terminal shell, running (pkill -f non_existent_process_name || true); echo $? shows 0, which means the part in parentheses exits with 0, as expected.
The reason you're getting the SIGTERM is because you're inadvertently killing the shell that's running pkill.
When you run a command with shell=True it runs the command as the argument to a shell, something like this:
/bin/sh -c 'pkill -f non_existent_process_name || true'
Since non_existent_process_name appears in the command line of /bin/sh, and -f matches against the full command line, pkill matches and kills the shell itself.
Based on exactly what process you're trying to match, you'll need to experiment with options to pkill. Either removing -f or adding -x might work.
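Once the pattern no longer matches the shell itself, you can also drop the || true and interpret pkill's exit status yourself; a minimal sketch (pkill exits 1 when nothing matched, and greater than 1 on real errors):

import subprocess

# Run pkill without a shell; -x matches the process name exactly.
result = subprocess.run(["pkill", "-x", "non_existent_process_name"])

# Exit status 1 just means "no processes matched"; treat it as success.
if result.returncode > 1:
    raise subprocess.CalledProcessError(result.returncode, result.args)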
I use Python to call a Bash script. I use the run() function for that, which was introduced in Python 3.5.
I want to use the returncode for something, so I use this:
result = subprocess.run(["./app/first_deployment.sh", arg], stdout=subprocess.PIPE,)
if result.returncode == 0:
# do something
My Bash file:
# First condition
if grep -q 'string' file.txt
then
    # Second condition
    if grep -q 'anotherstring' file.txt
    then
        echo "Success"
        exit 0
    else
        echo "Fail message 2"
        exit 1
    fi
else
    echo "Fail message 1"
    exit 1
fi
So it seems to work; I do see the correct messages in the logs. However, result.returncode is ALWAYS 0, which means successful. Why is that, and how can I make sure it works?
Update (full script):
#!/bin/bash
basedir="/home/dpa/clients"
user=$1
archive_url=$2
repo_name=$3
port=$4
deployment_tag=$5
mkdir $basedir/$user
mkdir $basedir/$user/$repo_name
curl -o $basedir/$user/$repo_name/$deployment_tag.tar.gz $archive_url
mkdir $basedir/$user/$repo_name/$deployment_tag
tar -xvf $basedir/$user/$repo_name/$deployment_tag.tar.gz -C $basedir/$user/$repo_name/$deployment_tag --strip-components 1
rm -rf $basedir/$user/$repo_name/$deployment_tag.tar.gz
# Check if a production.yml file exists in the new directory
if [ -f "$basedir/$user/$repo_name/$deployment_tag/production.yml" ]
then
    # Check for the websecure endpoint
    if grep -q 'traefik.http.routers.$$$UNIQUE_DEPLOYMENT_TAG-secure.entrypoints=websecure' $basedir/$user/$repo_name/$deployment_tag/production.yml
    then
        # Check for the host rule
        if grep -q 'traefik.http.routers.$$$UNIQUE_DEPLOYMENT_TAG-secure.rule=Host' $basedir/$user/$repo_name/$deployment_tag/production.yml
        then
            # Check if the proxy network exists
            if grep -q 'network=proxy' $basedir/$user/$repo_name/$deployment_tag/production.yml
            then
                sed -i "s/\$\$\$PORT/${port}/g" $basedir/$user/$repo_name/$deployment_tag/production.yml
                sed -i "s/\$\$\$UNIQUE_DEPLOYMENT_TAG/${deployment_tag}/g" $basedir/$user/$repo_name/$deployment_tag/production.yml
                # docker-compose -f $basedir/$user/$repo_name/$deployment_tag/production.yml build
                # docker-compose -f $basedir/$user/$repo_name/$deployment_tag/production.yml up -d
                echo "Deployment successful! Your app is online :)"
                exit 0
            else
                echo "Proxy network rule not found in yml config."
                exit 1
            fi
        else
            echo "Traefik host rule not found in yml config."
            exit 1
        fi
    else
        echo "Traefik websecure endpoint not found in yml config."
        set -x
        exit 1
    fi
else
    echo "No production.yml could be found. Please follow the docs and include the correct YAML file."
    exit 1
fi
UPDATE2 (output):
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1845 100 1845 0 0 7561 0 --:--:-- --:--:-- --:--:-- 7530
+ mkdir /home/dpa/clients/foo/testapi1/testapi1-305983855
+ tar -xvf /home/dpa/clients/foo/testapi1/testapi1-305983855.tar.gz -C /home/dpa/clients/foo/testapi1/testapi1-305983855 --strip-components 1
+ rm -rf /home/dpa/clients/foo/testapi1/testapi1-305983855.tar.gz
+ '[' -f /home/dpa/clients/foo/testapi1/testapi1-305983855/production.yml ']'
+ grep -q 'traefik.http.routers.$$$UNIQUE_DEPLOYMENT_TAG-secure.entrypoints=websecure' /home/dpa/clients/foo/testapi1/testapi1-305983855/production.yml
+ echo 'Traefik websecure endpoint not found in yml config.'
+ exit 1
CompletedProcess(args=['./app/first_deployment.sh', 'foo', 'https://codeload.github.com/foo/testapi1/legacy.tar.gz/master?token=changedthis', 'testapi1', '7039', 'testapi1-305983855'], returncode=0, stdout=b'foo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/dependabot.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/workflows/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/workflows/ci.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.gitignore\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.vscode/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.vscode/settings.json\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/docker-compose.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/local.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/production.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/.dockerignore\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/Dockerfile\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/app/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/app/__init__.py\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/app/main.py\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/requirements.txt\nTraefik websecure endpoint not found in yml config.\n')
Update 3:
So after following the advice of simplifying the script, I tried the bash script file with just the check below. And this gave me... exit code 1! As expected. Great, that seems to work.
#!/bin/bash
basedir="/home/dpa/clients"
user=$1
archive_url=$2
repo_name=$3
port=$4
deployment_tag=$5
# # Check if a production.yml file exists in the new directory
if [ -f "$basedir/$user/$repo_name/$deployment_tag/production.yml" ]
then
echo "Test complete"
exit 0
else
echo "No production.yml could be found. Please follow the docs and include the correct YAML file."
exit 1
fi
Now when I added just the mkdir line before that, it ended up with exit code 0. Which is weird, because it should give me exit code 1, since the directory does not exist. So the following code:
#!/bin/bash
basedir="/home/dpa/clients"
user=$1
archive_url=$2
repo_name=$3
port=$4
deployment_tag=$5
mkdir "$basedir/$user"
# # Check if a production.yml file exists in the new directory
if [ -f "$basedir/$user/$repo_name/$deployment_tag/production.yml" ]
then
echo "Test complete"
exit 0
else
echo "No production.yml could be found. Please follow the docs and include the correct YAML file."
exit 1
fi
I tried this with commands like cd or ls as well, all ending up with the same result, an exit code 0. So for some reason, whenever a shell command runs successfully, the Python function sees exit code 0. Only the version with just the file check worked. So it must be a Python-related problem.
I am working on a project which involves the wapiti and nikto web tools. I have managed to produce one report for both of these tools with this command:
python wapiti.py www.kca.ac.ke; perl nikto.pl -h www.kca.ac.ke -Display V -F htm -output /root/.wapiti/generated_report/index.html
But i would like to run a command like
python wapiti.py www.kca.ac.ke
and get both the wapiti and nikto web scan reports. How do I achieve this?
A shell script would work. Save the following as 'run_wapiti_and_nikto_scans', then run it as:
bash run_wapiti_and_nikto_scans www.my.site.com
Here is the script:
#!/bin/bash
SITE=$1
if [ -n "$SITE" ]; then  # -n tests whether the argument is non-empty
    echo "Looking to scan $SITE"
    echo "Running 'python wapiti.py $SITE'"
    python wapiti.py "$SITE" || { echo "Failed to run wapiti!"; exit 1; }
    echo "Running 'perl nikto.pl -h $SITE -Display V -F htm -output /root/.wapiti/generated_report/index.html'"
    perl nikto.pl -h "$SITE" -Display V -F htm -output /root/.wapiti/generated_report/index.html || { echo "Failed to run nikto!"; exit 1; }
    echo "Done!"
    exit 0  # Success
fi
echo "usage: run_wapiti_and_nikto_scans www.my.site.com"
exit 1  # Failure
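If you'd rather stay in Python (the question already launches wapiti.py that way), a hedged equivalent using subprocess, under the same assumed paths as the shell script:

import subprocess
import sys

site = sys.argv[1]  # e.g. www.my.site.com

# check=True raises CalledProcessError if either scan fails.
subprocess.run(["python", "wapiti.py", site], check=True)
subprocess.run(["perl", "nikto.pl", "-h", site, "-Display", "V",
                "-F", "htm", "-output",
                "/root/.wapiti/generated_report/index.html"], check=True)
print("Done!")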
I have a bash script (Controller.sh) that invokes a Python script (MyDaemon.py). The latter takes an argument and a command, and can be invoked from the command line like so:
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue start
or
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue stop
or
/usr/bin/python /opt/stuff/MyDaemon.py -p Blue status
I am attempting to get Controller.sh to invoke MyDaemon.py and then exit with a status. The Python script should be kicked off and Controller.sh should return. This is my Controller.sh code:
COLOR=$1
COMMAND=$2
DIRNAME=`dirname $0`
RESULT="/tmp/$COLOR.$COMMAND.result"
# remove any old console output
rm -f $RESULT 2>/dev/null
#start with CPU affinity for anything other than CPU 0.
sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p $COLOR $COMMMAND</dev/null >$RESULT 2>&1
STATUS=$?
# print output
cat $RESULT
# check on success
if [ $STATUS -ne 0 ]
then
    echo "ERROR: $COLOR $COMMAND failed"
    exit 1
fi
Now, if I invoke Controller.sh blue start on the command line, it kicks off the Python script, but Controller.sh does not return a status. On the other hand, if I run the following, it does return:
[nford#myserver]# sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p blue start</dev/null >/tmp/blah.log 2>&1
Started with pid 1326
[nford#myserver]#
I am forced to conclude that there is something about the bash script that is preventing it from returning.
It should be noted that MyDaemon.py does fork processes, which is why I need to redirect output. It should also be noted that I'm lifting the majority of this from another script that does something similar with a PHP script; some of the meaning I'm fuzzy on (such as STATUS=$?). That said, even if I cut out everything after the sudo taskset invocation line, it still fails to return cleanly. How do I get the bash script to properly execute this command?
Post-Script: I'm a little baffled how this question is "too specific" and was down-voted/voted to close. In an attempt to be crystal clear: I'm trying to understand the differences in how a forking script runs in the context of the command line versus a bash script. I've provided a specific example above, but this is a general concept.
UPDATE:
This is what results when I run the script using bash -x, further showing that it dies on the sudo taskset line. The fact that the start command has been left off is confusing.
[nford#myserver]# bash -x Controller.sh Blue start
+ COLOR=Blue
+ COMMAND=start
++ dirname Controller.sh
+ DIRNAME=.
+ RESULT=/tmp/Blue.start.result
+ rm -f /tmp/Blue.start.result
+ sudo taskset -c 1-8 /usr/bin/python /opt/stuff/MyDaemon.py -p Blue
UPDATE:
bash -x reveals the problem: the start command is not being passed through; a typo in the variable name produces a silent bash error. Takeaway: use bash -x for debugging!
Because of your typo - you should use set -u at the top of your scripts. It's a life saver: it stops sleepless nights, as well as the pulling of hair.
set -u would have given you...
myscript.sh: line 11: COMMMAND: unbound variable
Remember you can run scripts like so: bash -u myscript.sh arg1 arg2. Likewise with -x; they both help in tracking down script issues.
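To see what set -u buys you from the Python side of this thread, a tiny demo (the misspelled variable name is deliberate):

import subprocess

# bash -u turns a reference to an unset variable into a hard error
# instead of silently expanding it to an empty string.
result = subprocess.run(["bash", "-uc", 'echo "$COMMMAND"'],
                        capture_output=True, text=True)
print(result.returncode)      # non-zero
print(result.stderr, end="")  # e.g. bash: COMMMAND: unbound variable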