I use Python to call a Bash script via the subprocess.run() function, which was introduced in Python 3.5.
I want to use the return code for something, so I do this:
import subprocess

result = subprocess.run(["./app/first_deployment.sh", arg], stdout=subprocess.PIPE)
if result.returncode == 0:
    # do something
My Bash file:
# First condition
if grep -q 'string' file.txt
then
    # Second condition
    if grep -q 'anotherstring' file.txt
    then
        echo "Success"
        exit 0
    else
        echo "Fail message 2"
        exit 1
    fi
else
    echo "Fail message 1"
    exit 1
fi
It seems to work: I do see the correct messages in the logs. However, result.returncode is ALWAYS 0, which means successful. Why is that, and how can I make sure it works?
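For reference, a minimal, self-contained sketch (independent of the deployment script) that can be used to check that a non-zero exit status does reach Python through subprocess.run():

import subprocess

# Throwaway command that prints a message and exits with status 1.
result = subprocess.run(["bash", "-c", "echo 'Fail message'; exit 1"],
                        stdout=subprocess.PIPE)
print(result.returncode)  # prints 1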
Update (full script):
#!/bin/bash
basedir="/home/dpa/clients"
user=$1
archive_url=$2
repo_name=$3
port=$4
deployment_tag=$5
mkdir $basedir/$user
mkdir $basedir/$user/$repo_name
curl -o $basedir/$user/$repo_name/$deployment_tag.tar.gz $archive_url
mkdir $basedir/$user/$repo_name/$deployment_tag
tar -xvf $basedir/$user/$repo_name/$deployment_tag.tar.gz -C $basedir/$user/$repo_name/$deployment_tag --strip-components 1
rm -rf $basedir/$user/$repo_name/$deployment_tag.tar.gz
# Check if a production.yml file exists in the new directory
if [ -f "$basedir/$user/$repo_name/$deployment_tag/production.yml" ]
then
    # Check for the websecure endpoint
    if grep -q 'traefik.http.routers.$$$UNIQUE_DEPLOYMENT_TAG-secure.entrypoints=websecure' $basedir/$user/$repo_name/$deployment_tag/production.yml
    then
        # Check for the host rule
        if grep -q 'traefik.http.routers.$$$UNIQUE_DEPLOYMENT_TAG-secure.rule=Host' $basedir/$user/$repo_name/$deployment_tag/production.yml
        then
            # Check if the proxy network exists
            if grep -q 'network=proxy' $basedir/$user/$repo_name/$deployment_tag/production.yml
            then
                sed -i "s/\$\$\$PORT/${port}/g" $basedir/$user/$repo_name/$deployment_tag/production.yml
                sed -i "s/\$\$\$UNIQUE_DEPLOYMENT_TAG/${deployment_tag}/g" $basedir/$user/$repo_name/$deployment_tag/production.yml
                # docker-compose -f $basedir/$user/$repo_name/$deployment_tag/production.yml build
                # docker-compose -f $basedir/$user/$repo_name/$deployment_tag/production.yml up -d
                echo "Deployment successful! Your app is online :)"
                exit 0
            else
                echo "Proxy network rule not found in yml config."
                exit 1
            fi
        else
            echo "Traefik host rule not found in yml config."
            set -x
            exit 1
        fi
    else
        echo "Traefik websecure endpoint not found in yml config."
        exit 1
    fi
else
    echo "No production.yml could be found. Please follow the docs and include the correct YAML file."
    exit 1
fi
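One way to sanity-check the exit status outside of Python is to run the script directly in a shell and print $? (the argument values below are placeholders):

./app/first_deployment.sh foo https://example.com/archive.tar.gz testapi1 7039 testapi1-12345
echo $?   # expected to be 1 when any of the checks fail, 0 on success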
Update 2 (output):
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1845 100 1845 0 0 7561 0 --:--:-- --:--:-- --:--:-- 7530
+ mkdir /home/dpa/clients/foo/testapi1/testapi1-305983855
+ tar -xvf /home/dpa/clients/foo/testapi1/testapi1-305983855.tar.gz -C /home/dpa/clients/foo/testapi1/testapi1-305983855 --strip-components 1
+ rm -rf /home/dpa/clients/foo/testapi1/testapi1-305983855.tar.gz
+ '[' -f /home/dpa/clients/foo/testapi1/testapi1-305983855/production.yml ']'
+ grep -q 'traefik.http.routers.$$$UNIQUE_DEPLOYMENT_TAG-secure.entrypoints=websecure' /home/dpa/clients/foo/testapi1/testapi1-305983855/production.yml
+ echo 'Traefik websecure endpoint not found in yml config.'
+ exit 1
CompletedProcess(args=['./app/first_deployment.sh', 'foo', 'https://codeload.github.com/foo/testapi1/legacy.tar.gz/master?token=changedthis', 'testapi1', '7039', 'testapi1-305983855'], returncode=0, stdout=b'foo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/dependabot.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/workflows/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.github/workflows/ci.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.gitignore\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.vscode/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/.vscode/settings.json\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/docker-compose.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/local.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/production.yml\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/.dockerignore\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/Dockerfile\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/app/\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/app/__init__.py\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/app/main.py\nfoo-testapi1-b1c9fd5be165b850d2b94cde30affa622b5c3621/testapi1/requirements.txt\nTraefik websecure endpoint not found in yml config.\n')
Update 3:
After following the advice to simplify the script, I tried the Bash script with just the check below, and this gave me exit code 1, as expected. Great, that seems to work.
#!/bin/bash
basedir="/home/dpa/clients"
user=$1
archive_url=$2
repo_name=$3
port=$4
deployment_tag=$5
# Check if a production.yml file exists in the new directory
if [ -f "$basedir/$user/$repo_name/$deployment_tag/production.yml" ]
then
    echo "Test complete"
    exit 0
else
    echo "No production.yml could be found. Please follow the docs and include the correct YAML file."
    exit 1
fi
Now, when I added just a mkdir line before that check, it ended up with exit code 0. That is weird, because it should give me exit code 1, since the directory does not exist. So the following code:
#!/bin/bash
basedir="/home/dpa/clients"
user=$1
archive_url=$2
repo_name=$3
port=$4
deployment_tag=$5
mkdir "$basedir/$user"
# Check if a production.yml file exists in the new directory
if [ -f "$basedir/$user/$repo_name/$deployment_tag/production.yml" ]
then
    echo "Test complete"
    exit 0
else
    echo "No production.yml could be found. Please follow the docs and include the correct YAML file."
    exit 1
fi
I tried this with commands like cd or ls as well, all ending up with the same result: exit code 0. So for some reason, whenever a shell command runs successfully, the Python function sees exit code 0, even though the version with only the file check did work. So it must be a Python-related problem.
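For anyone reproducing this, a debugging sketch along the following lines (the argument values are placeholders) makes it easier to line up the captured output with the reported exit code, since it captures stderr as well:

import subprocess

result = subprocess.run(
    ["./app/first_deployment.sh", "foo", "https://example.com/archive.tar.gz",
     "testapi1", "7039", "testapi1-12345"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge stderr into stdout so nothing is lost
)
print(result.returncode)
print(result.stdout.decode())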
I installed the latest version of Docker, set up WSL 2 according to the manual, and started the container with docker-compose up. I need to run the tests with the command tests/run_tests.sh, but a few seconds after launching, the window with the tests closes, my container disappears from Docker, and when I run docker-compose up again I get the error Error response from daemon: open \\.\pipe\docker_engine_linux: The system cannot find the file specified.
run_tests:
#!/usr/bin/env sh
# To run locally, execute the command NOT in container:
# bash tests/run_tests.sh
set -x
if [ -z "$API_ENV" ]; then
API_ENV=test
fi
if [ "$API_ENV" = "bitbucket_test" ]; then
COMPOSE_FILE="-f docker-compose.test.yml"
fi
docker-compose build connectors
API_ENV=$API_ENV docker-compose ${COMPOSE_FILE} up -d --force-recreate
connectors_container=$(docker ps -f name=connectors -q | tail -n1)
if [ "$API_ENV" = "bitbucket_test" ]; then
mkdir -p artifacts && docker logs --follow ${connectors_container} > ./artifacts/docker_connectors_logs.txt 2>&1 &
pytest_n_processes=100
else
pytest_n_processes=25
fi
# Timeout for the tests. In bitbucket we want to stop the tests a bit before the max time, so that
# artifacts are created and logs can be inspected
timeout_cmd="timeout 3.5m"
if [ "$API_ENV" = "bitbucket_test" ] || [ "$API_ENV" = "test" ]; then
export PYTEST_SENTRY_DSN='http://d07ba0bfff4b41888e311f8398321d14#sentry.windsor.ai/4'
export PYTEST_SENTRY_ALWAYS_REPORT=1
fi
git fetch origin "+refs/heads/master:refs/remotes/origin/master"
# Lint all the files that are modified in this branch
$(dirname "$0")/run_linters.sh &
linting_pid=$!
# bitbucket pipelines have 8 workers, use 6 for tests
#
# WARNING: Tests require gunicorn and is enabled when containers are started with: API_ENV=test docker-compose up -d --force-recreate
# Tests are run in parallel and the cache-locking in threaded flask doesn't work in this case
${timeout_cmd} docker exec ${connectors_container} bash -c \
"PYTEST_SENTRY_DSN=$PYTEST_SENTRY_DSN \
PYTEST_SENTRY_ALWAYS_REPORT=$PYTEST_SENTRY_ALWAYS_REPORT \
pytest \
--cov=connectors --cov=api --cov=base \
--cov-branch --cov-report term-missing --cov-fail-under=71.60 \
--timeout 60 \
-v \
--durations=50 \
-n $pytest_n_processes \
tests || ( \
code=$? `# store the exit code to exit with it` \
&& echo 'TESTS FAILED' \
&& mkdir -p ./artifacts \
&& docker logs ${connectors_container} > ./artifacts/docker_connectors_failure_logs.txt 2>&1 `# Ensure that the logs are complete` \
) "&
# Get the tests pid
tests_pid=$!
# wait for linting to finish
wait $linting_pid
linting_code=$?
echo "Linting code: ${linting_code}"
if [ $linting_code -ne 0 ]; then
    echo 'Linting failed'
    # kill running jobs on exit in local ubuntu. Some tests were left running by only killing the test_pid.
    kill "$(jobs -p)"
    # kills the test process explicitly in gitlab pipelines. Was needed because jobs returns empty in gitlab pipelines.
    kill $tests_pid
    exit 1
fi
# wait for tests to finish
wait $tests_pid
testing_code=$?
echo "Testing code: ${testing_code}"
if [ $testing_code -ne 0 ]; then
    echo 'Tests failed'
    exit 1
else
    echo 'Tests and linting passed'
    exit 0
fi
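For reference, the linting/testing orchestration in run_tests.sh boils down to this background-job pattern; a minimal sketch (the test command is a placeholder):

#!/usr/bin/env sh
# Run two jobs in the background, then wait on each PID to collect its exit status.
./run_linters.sh &
lint_pid=$!
./some_long_test_command &    # placeholder for the docker exec ... pytest job
tests_pid=$!

wait "$lint_pid"; lint_code=$?
wait "$tests_pid"; tests_code=$?

echo "Linting code: $lint_code"
echo "Testing code: $tests_code"
[ "$lint_code" -eq 0 ] && [ "$tests_code" -eq 0 ]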
I want to query the status of the git repo using python. I am using:
subprocess.check_output("[[ -z $(git status -s) ]] && echo 'clean'", shell=True).strip()
This works fine on macOS. However, on Ubuntu Linux I get an error message:
{CalledProcessError}Command '[[ -z $(git status -s) ]] && echo 'clean'' returned non-zero exit status 127.
I went into the same folder and manually ran
[[ -z $(git status -s) ]] && echo 'clean'
and it works fine.
I further ran other commands like
subprocess.check_output("ls", shell=True).strip()
and that also works fine.
What is going wrong here?
When you set shell=True, subprocess executes your command using /bin/sh. On Ubuntu, /bin/sh is not Bash, and you are using Bash-specific syntax ([[...]]). You could explicitly call out to bash instead:
subprocess.check_output(["/bin/bash", "-c", "[[ -z $(git status -s) ]] && echo 'clean'"]).strip()
But it's not clear why you're bothering with a shell script here: just run git status -s with Python, and handle the result yourself:
out = subprocess.run(['git', 'status', '-s'], stdout=subprocess.PIPE)
if not out.stdout:
    print("clean")
So I have the following in my shell script:
python get_link.py $password | wget --content-disposition -i-
mkdir web_folder
mv *.zip web_folder
So the first line executes a Python script I wrote that prints out a website link; wget immediately retrieves the link returned by the Python script and downloads a zip file.
The second line makes a new folder called "web_folder", and the third line moves the zip file that was downloaded by wget into "web_folder".
The problem I'm facing is that if the Python script fails, for example because $password holds the wrong password, the rest of the shell script still executes. In my case, the following is printed:
mv: cannot stat ‘*.zip’: No such file or directory
The mkdir and mv commands still execute even if the Python script fails. How do I ensure that the script comes to a complete halt when the Python script fails?
If you are using Bash, look into the PIPESTATUS array variable.
${PIPESTATUS[0]} holds the return code of the first command in the pipeline.
#!/bin/bash
python get_link.py $password | wget --content-disposition -i-
if [ ${PIPESTATUS[0]} -eq 0 ]
then
    echo "python get_link.py successful code here"
else
    echo "python get_link.py failed code here"
fi
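Another option, assuming Bash, is to let the shell abort automatically: set -e exits on any failing command, and set -o pipefail makes a pipeline fail if any command in it fails, so the explicit check above is not needed. A minimal sketch:

#!/bin/bash
set -e -o pipefail  # abort the script if any command (or pipeline member) fails

python get_link.py "$password" | wget --content-disposition -i-
mkdir web_folder
mv ./*.zip web_folder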
A compact solution: chain everything with &&:
(python get_link.py $password | wget --content-disposition -i-) && (mkdir web_folder) && (mv *.zip web_folder)
Less compact solution:
python get_link.py $password | wget --content-disposition -i-
if [ $? -eq 0 ]; then
    mkdir web_folder
    mv *.zip web_folder
fi
I'm trying to debug some unit tests that have been provided for testing an integration.
I'm sure this worked last time I tested it on my local machine, but that seems to have changed - the file hasn't been altered, so I don't know what's changed since then.
I have stripped out the identifying comments and changed some names from the original unit tests because it's proprietary software.
The syntax error is:
File "unitTests.sh", line 39
gLastFullPath=`python -c "import os; print os.path.realpath('${1}')"`
^
SyntaxError: invalid syntax
The full script is here:
#!/bin/bash
# If non-zero, then run in debug mode, outputting debug information
debug=0
# Set the following to 1 to force an error for testing purposes
forceError=0
separator="===================================================================================================="
#-------------------------------------------------------------------------------
# Convert the specified path to a full path and return it in the gLastFullPath
# global variable.
#
# Input params:
# $1 - Path to convert to full
#
# Output params:
# $gLastFullPath - Set to the converted full path
gLastFullPath=""
getFullPath()
{
    # Use Python (because it's easier than Bash) to convert the passed path to
    # a full path.
    gLastFullPath=`python -c "import os; print os.path.realpath('${1}')"`
}
#-------------------------------------------------------------------------------
fatalError()
{
    echo "${separator}"
    echo "Fatal Error: $1"
    echo "${separator}"
    exit 1
}
#-------------------------------------------------------------------------------
# If a file or folder exists at the specified path, then delete it. If it's a
# directory, then its entire contents is deleted.
#-------------------------------------------------------------------------------
deleteIfExists()
{
    if [[ 0 -ne $debug ]]; then
        echo "deleteIfExists called..."
    fi
    if [[ -e "$1" ]]; then
        # If it's a directory, then make sure it contains no locked files
        if [[ -d "$1" ]]; then
            chflags -R nouchg "$1"
        fi
        if [[ 0 -ne $debug ]]; then
            echo " Deleting the existing file or directory:"
            echo " $1"
        fi
        # Do the remove and check for an error.
        /bin/rm -rf "$1"
        if [[ $? -ne 0 ]]; then
            fatalError "Unable to delete $1."
        fi
    fi
    if [[ 0 -ne $debug ]]; then
        echo
    fi
}
#-------------------------------------------------------------------------------
# Script starts here
#-------------------------------------------------------------------------------
# Get the full path to this script
scriptPath=`which "$0"`
getFullPath "${scriptPath}"
scriptFullPath="${gLastFullPath}"
scriptDir=`dirname "${scriptFullPath}"`
scriptName=`basename "${scriptFullPath}"`
if [[ 0 -ne $debug ]]; then
    echo "$scriptName: Debug tracing is on."
    echo
fi
# Get the SDK project root path
getFullPath "${scriptDir}/.."
projRoot="${gLastFullPath}"
# Get the top of the server tree
getFullPath "${projRoot}/SUBSYS_TOP"
subsysTop="${gLastFullPath}"
libPythonBase="${projRoot}/src/lib/py/devilsoftPy"
devilsoftPython="${libPythonBase}/devilsoftpy"
if [[ 0 -ne $debug ]]; then
    echo "$scriptName: Project root dir: \"${projRoot}\""
    echo "$scriptName: SUBSYS_TOP: \"${subsysTop}\""
    echo "$scriptName: Lib python base: \"${libPythonBase}\""
    echo "$scriptName: devilsoft python: \"${devilsoftPython}\""
    echo
fi
# First we have to launch the test python server. This is used by some of the other client tests to
# run against.
testServer="${devilsoftPython}/test/TestServer.py"
if [[ ! -f "${testServer}" ]]; then
fatalError "Could not find the expected test server: \"${testServer}\""
fi
# Carve out a place for our test server log file
tempFolder="/tmp/devilsoft"
mkdir -p "${tempFolder}"
testServerLogFile="${tempFolder}/TestServer.log"
echo "Starting the test server: \"${testServer}\""
echo " Logging to this file: \"${testServerLogFile}\""
export PYTHONPATH="${libPythonBase}:${PYTHONPATH}"; "${testServer}" > "${testServerLogFile}" 2>&1 &
testServerPid=$!
echo " Server started with pid ${testServerPid}..."
echo
echo " Taking a little snooze to let the test server initialize..."
sleep 2
# If we're forcing errors for testing, then kill the test server. This will cause downstream scripts
# to fail because there will be no server to talk to.
if [[ $forceError -ne 0 ]]; then
    echo "Forcing downstream errors by killing the test server..."
    kill ${testServerPid}
    wait ${testServerPid}
    testServerPid=0
    echo
fi
testResultsLogFile="${tempFolder}/TestResults.log"
echo "Testing each python script in the library..."
echo " Test results will be written to this log file: \"${testResultsLogFile}\""
echo
deleteIfExists "${testResultsLogFile}"
# Save and set the field separator so that we can handle spaces in paths
SAVEIFS=$IFS
IFS=$'\n'
failedScripts=()
lastError=0
pythonSources=($(find "${devilsoftPython}" -name '*.py' ! -name '*.svn*' ! -name '__init__.py' ! -name 'TestServer.py' ! -name 'ServerClient.py'))
for pythonSourceFile in ${pythonSources[*]}; do
    echo " Testing python source \"${pythonSourceFile}\""
    export PYTHONPATH="${libPythonBase}:${PYTHONPATH}"; "${pythonSourceFile}" >> "${testResultsLogFile}" 2>&1
    result=$?
    if [[ $result -ne 0 ]]; then
        pythonSourceName=`basename "${pythonSourceFile}"`
        echo " Error ${result} returned from the above script ${pythonSourceName}!"
        lastError=${result}
        failedScripts+=("${pythonSourceFile}")
    fi
done
echo
# Restore the original field separator
IFS=$SAVEIFS
if [[ ${testServerPid} -ne 0 ]]; then
    echo "Telling the test server to quit..."
    kill ${testServerPid}
    wait ${testServerPid}
    echo
fi
# If we got an error, tell the user
if [[ $lastError -ne 0 ]]; then
    echo "IMPORTANT! The following scripts failed with errors:"
    for failedScript in "${failedScripts[@]}"; do
        echo " \"${failedScript}\""
    done
    echo
    fatalError "Review the log files to figure out why the above scripts failed."
fi
echo "${separator}"
echo " Hurray! All tests passed!"
echo "${separator}"
echo
exit 0
This is all being run in Python 2.7
This is a Bash script, not a Python script; the SyntaxError comes from the Python interpreter trying to parse shell syntax (the backtick command substitution on line 39). Run it with ./unitTests.sh or bash unitTests.sh instead of python unitTests.sh.
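For example, from the directory containing the script:

chmod +x unitTests.sh   # once, so the #!/bin/bash shebang is used when run directly
./unitTests.sh
# or, without the executable bit:
bash unitTests.sh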
I am working on a project which involves the wapiti and nikto web tools. I have managed to produce one report for both of these tools with this command:
python wapiti.py www.kca.ac.ke ; perl nikto.pl -h www.kca.ac.ke -Display V -F htm -output /root/.wapiti/generated_report/index.html
But I would like to run a single command like
python wapiti.py www.kca.ac.ke
and get both the wapiti and nikto web scan reports. How do I achieve this?
A shell script would work. Save the following as 'run_wapiti_and_nikto_scans', then run it as:
bash run_wapiti_and_nikto_scans www.my.site.com
Here is the script:
#!/bin/bash
SITE=$1
if [ -n "$SITE" ]; then # -n tests to see if the argument is non empty
echo "Looking to scan $SITE"
echo "Running 'python wapiti.py $SITE'"
python wapiti.py $SITE || echo "Failed to run wapiti!" && exit 1;
echo "Running 'perl nikto.pl -h $SITE -Display V -F htm -output /root/.wapiti/generated_report/index.html'"
perl nikto.pl -h $SITE -Display V -F htm -output /root/.wapiti/generated_report/index.html || echo "Failed to run nikto!" && exit 1;
echo "Done!"
exit 0; # Success
fi
echo "usage: run_wapiti_and_nikto_scans www.my.site.com";
exit 1; # Failure
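One note on the error handling above: in the shell, a || b && c is parsed as (a || b) && c, so writing cmd || echo "failed" && exit 1 would exit even when cmd succeeds (the echo's success satisfies the &&). Grouping the failure branch in braces keeps it together:

# Parsed as (wapiti || echo) && exit 1, so it exits even on success:
python wapiti.py "$SITE" || echo "Failed to run wapiti!" && exit 1

# Grouped: the echo and exit only run when wapiti fails:
python wapiti.py "$SITE" || { echo "Failed to run wapiti!"; exit 1; }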