IF statement not executing in bash script - Comparing Variable to Int - python

Ubuntu 18.04:
I created a script to check if a python script is running and to start it if not. I'm using echo statements for debugging that output to verify.txt.
The debug statements inside the IF statement are not executing. I believe it's the syntax comparing a variable to an int. Does this look correct?
# Script to check if python script is running
echo "(verify.sh): checking to see if scripts are running..." > verify.txt
output="$(pgrep -f -c myprogram.py)"
echo "(verify.sh): assigned the output correctly as $output" > verify.txt
if [[$output -eq 0]];
then
echo "(verify.sh): entered the if loop" > verify.txt
python /home/User/myprogram.py &
echo "(verify.sh): started myprogram.py" > verify.txt
fi
Note: The file name is verify.sh, so I added it to the echo just to keep track of who was writing to the debug file.

You need to add some spaces; it should look like if [[ $output -eq 0 ]]. Without the spaces, bash reads [[$output as a single word instead of the [[ keyword.
Try it out.
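For reference, a minimal corrected version of the whole script could look like this (the switch to >> for the later debug lines is my own addition, so they don't overwrite each other; paths and names are taken from the question):
#!/bin/bash
# Check whether myprogram.py is running and start it if not.
echo "(verify.sh): checking to see if scripts are running..." > verify.txt
output="$(pgrep -f -c myprogram.py)"
echo "(verify.sh): assigned the output correctly as $output" >> verify.txt
if [[ $output -eq 0 ]]; then
    echo "(verify.sh): entered the if loop" >> verify.txt
    python /home/User/myprogram.py &
    echo "(verify.sh): started myprogram.py" >> verify.txt
fi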

Related

Logging to a file and adding timestamps to every single command in a bash script

I'm writing a bash script that integrates with several modules and calls several scripts (Python, JS, etc.). I want to log everything (from bash and from the Python/JS scripts) to one single log file, and add timestamps to it.
So, say my bash script is called bash-script.sh, my Python script is called python-script.py and my JS script is js-script.js. The log file is out.log.
Now, I'll split this question into two.
Part I:
To log every single command in the bash script and add timestamps to it, bash-script.sh begins with the following:
#!/bin/bash
source ~/.bashrc
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1> out.log 2>&1
exec > >(while IFS= read -r line; do printf '%s %s %s\n' "$(date --rfc-3339=seconds)" "[Bash-script]" "$line"; done 3>out.log)
The problem is, the script is adding timestamps to everything, including newlines. In addition, since I'm calling external scripts, it is adding some symbols that make the whole thing super ugly. Here's an example of the output:
2022-05-18 16:37:01+00:00 [Bash-script] Bash script started!
2022-05-18 16:37:01+00:00 [Bash-script] Stopping all services
2022-05-18 16:37:02+00:00 [Bash-script] ● srv1.service - Service1
2022-05-18 16:37:02+00:00 [Bash-script] Drop-In: /etc/systemd/system/
2022-05-18 16:37:06+00:00 [Bash-script]
2022-05-18 16:37:06,232 [MainThread] INFO [python-script] In python-script.py
I want to get rid of the newlines and the symbols (●).
In addition, I have a function WaitForServices that I run after starting a list of services; it basically waits (by sleeping x seconds at a time) until all services are completely up. Until then, it keeps printing waiting for services x.
In the out.log file, these 'waiting' prints appear without a timestamp. Output example:
*I'm not even sure the done 3>out.log part makes sense; please correct me if it's wrong.
Part II
Since I want to integrate with other scripts such as Python and JS, and write to the same log, I'm redirecting the output with >>, and I'm not sure that is good practice. I also want the bash script to exit completely if the Python or JS script returns an error (an exit code different from 0). That's how I currently do it:
bash-script.sh:
#!/bin/bash
source ~/.bashrc
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1> out.log 2>&1
exec > >(while IFS= read -r line; do printf '%s %s %s\n' "$(date --rfc-3339=seconds)" "[Bash-script]" "$line"; done 3>out.log)
function exitCode {
    if [ "$?" != "0" ]; then
        echo Exiting...
        exit
    fi
}
.....code
.....code
PYTHONPATH=.. /path-to-virtual-env/python python-script.py >> out.log ; exitCode $?
.....more code
sudo node /path-to-js-file/js-script.js >> out.log ; exitCode $?
.....more code
Now, I'm pretty sure it is stupid but I couldn't find any other way to achieve it. I am using the logging lib in the python script but when I specify filename=out.log (which is in the same dir) nothing really happens.
logging.basicConfig(filename='out.log',format="%(asctime)s %(levelname)s [%(module)s] %(message)s", level=logging.INFO, force=True)
Thank you very much for any help
I'm not sure if this would work, but what if you had another bash script which ran the first bash script and piped the output to something like ets, which would automatically timestamp every line?
/path/to/bash-script.sh 2>&1 | ets -f "[%F %T] [bash-script]"
/path/to/python-script.py 2>&1 | ets -f "[%F %T] [python-script]"
/path/to/js-script.js 2>&1 | ets -f "[%F %T] [js-script]"
The current road you're going down is very confusing and complex. This obviously doesn't get rid of blank lines, etc. You'd need a separate script to parse that out later, I'm thinking.
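If you do go that way, a small wrapper sketch could look like the one below (the run_logged helper and the sed filter for blank lines are my own additions, and it assumes ets is installed; set -e plus pipefail covers the "exit if a script fails" part of Part II):
#!/bin/bash
# Hypothetical wrapper: timestamp every line with ets, drop blank lines,
# and stop as soon as any of the scripts exits non-zero.
set -e -o pipefail

run_logged() {
    local tag=$1; shift
    # sed deletes blank lines before ets prepends the timestamp and tag
    "$@" 2>&1 | sed '/^[[:space:]]*$/d' | ets -f "[%F %T] [$tag]" >> out.log
}

run_logged bash-script   /path/to/bash-script.sh
run_logged python-script /path/to/python-script.py
run_logged js-script     /path/to/js-script.js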

Pass variable from Python to Bash

I am writing a bash script in which a small Python script is embedded. I want to pass a variable from Python to bash. After some searching, I only found methods based on os.environ.
I just cannot make it work. Here is my simple test.
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second, however it still outputs first. What is wrong with my script? Also is there any way to pass variable without export?
Summary
Thanks for all the answers. Here is my summary.
A Python script embedded inside bash runs as a child process, which by definition cannot affect the parent bash environment.
The solution is to have Python print out assignment strings and eval them subsequently in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval $assignment_string
echo $a
echo $b
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print('second')") ; echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) compose code on-the-fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why.
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ sets it only inside the Python process that runs the code you provided. When that process finishes, it goes away, and so does the environment variable.
You can pretty much achieve the same thing with a shell script instead, and just source it so the variables take effect in the current shell.
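A minimal sketch of that source-based idea (the /tmp/vars.sh path is just a placeholder I picked):
python - > /tmp/vars.sh <<'EOF'
# Python writes plain shell assignments to stdout
print('myvar=second')
EOF
. /tmp/vars.sh    # source the generated assignments into the current shell
echo "$myvar"     # prints: second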
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print("second")
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
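If you do need more than one value back, one possible workaround (my own sketch, not part of the answer above) is to print the values on a single line and read them into separate variables:
output=$(python - <<'EOF'
print("1 2")
EOF
)
read -r first second <<< "$output"
echo "$first"     # prints: 1
echo "$second"    # prints: 2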
You can make Python print a value and pass it to bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be running the other commands from Python using subprocess:
import subprocess
x = 400
subprocess.call(["echo", str(x)])
But this is more of a temporary workaround. The other solutions are closer to what you are looking for.
Hope I was able to help!

Bash if else then in one line with bash version of sys.exit()

I am running this bash command:
xe pbd-unplug uuid=$PBDUUID
If the result exists, i.e. if the variable PBDUUID is not empty, then I would like to run another command:
xe sr-forget uuid=$SRUUID
However, if the variable is blank, then I would like to print an error message
Error: No PBD.
and for the script to exit immediately (similar to sys.exit() in Python).
Is there a way to combine this if else then into one line? Also, what is the bash equivalent of sys.exit()?
Additional Information/Comment:
Regarding comment by Dilettant, yes that approach (if [ -z ${PBDUUID} ]; then) will also work. I was not aware of it. Thanks for this. That seems quite intuitive.
if xe pbd-unplug uuid="$PBDUUID"; then xe sr-forget "uuid=$SRUUID"; else echo "Error: No PBD."; exit 1; fi
More readably, that is:
if xe pbd-unplug uuid="$PBDUUID"; then
    xe sr-forget "uuid=$SRUUID"
else
    echo "Error: No PBD." >&2
    exit 1
fi
BTW, if your goal is to check whether a variable is blank, that would look more like the following:
if [ -n "$PBDUUID" ]; then
    xe pbd-unplug uuid="$PBDUUID" && xe sr-forget "uuid=$SRUUID"
else
    echo "Error: No PBD." >&2
    exit 1
fi
If exit doesn't do as you intend, then this code is presumably running in a subshell, and thus exit is exiting that subshell rather than your script as a whole. See SubShell or the "Actions that Create a Subshell" section of the processtree bash-hackers.org page.

Propagate python script exit code to calling shell script

I have the following in a Python script (using Python 3.4), which runs if there is any failure:
exit(1)
The purpose of this python script is to print() some value, so that it may be assigned to a shell variable like so:
#!/bin/bash
set -e
result=$(python my-script.py)
echo $?
However, if the script fails, the echo above returns 0. I want set -e to immediately interrupt and fail the script if the python script fails (i.e. returns non zero exit code).
How can I make this work? I tried set -eo pipefail as well in my bash script and I've noticed no difference.
I'm running Ubuntu 15.
EDIT
My python script (verbatim) just in case it is the suspect here...
import re
import sys
version_regex = r'(?:(?:release|hotfix)\/|^)([\d.]+)-\d+-g[a-f\d]+$'
result = re.search(version_regex, sys.argv[1])
if not result:
exit(1)
print(result.group(1))
Note that I have tried both sys.exit(1) and exit(1) in my experiments but I never saw my bash script fail.
EDIT 2
parsed_version=$(python parse-git-describe.py $describe_result; echo $?)
echo $parsed_version
echo $?
The above bash script gives me the following output:
1
0
Note that the script parse-git-describe.py is the same as the python script provided earlier.
EDIT 3
Apparently local causes this to break. EDIT 2 above was wrong, that is the result with local inside a shell function:
foo()
{
    local parsed_version=$(python parse-git-describe.py $describe_result; echo $?)
    echo $parsed_version
    echo $?
}
Result is:
1
0
But if I remove local it works fine?
foo()
{
    parsed_version=$(python parse-git-describe.py $describe_result; echo $?)
    echo $parsed_version
    echo $?
}
Result is:
1
1
Why does local matter?
You can set the exit code by finishing your script with a call to sys.exit(code).
local can only be used within a function in bash. (So your bash script code above doesn't work either.)
Why don't you call your Python script from the bash script, check the return value, and then jump out of your script if necessary?
$ python -c "exit(0)"
$ [ $? == 0 ] || echo "fail"
$ python -c "exit(1)"
$ [ $? == 0 ] || echo "fail"
fail
You can easily adapt this in your shell script, I guess.
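Adapted to the script in the question, that check could look something like this (just a sketch; the error message is my own):
#!/bin/bash
set -e
# The || branch only runs when my-script.py exits non-zero,
# so the script reports the failure and stops right there.
result=$(python my-script.py) || { echo "my-script.py failed" >&2; exit 1; }
echo "$result"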
Why does local matter??
The local keyword specifies the variable's scope. The following sample script should illustrate:
#!/bin/bash
HELLO=Hello                 # global variable, HELLO

function hello {
    local HELLO=World       # local variable with same name; different scope
    echo $HELLO             # this is the local, not the global!
}

echo $HELLO                 # print the global
hello                       # function call!
echo $HELLO                 # global unchanged!
Running this code from the shell (placed in test.sh script) produces:
➜ Projects ./test.sh
Hello
World
Hello
When you write local parsed_version=$(...), the exit status you see afterwards in $? is that of the local builtin itself (which succeeds), not that of the command inside $(...). If you need the command's exit code, declare the variable with local first and do the assignment in a separate statement.
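A sketch of that separation, reusing the function from the question (with slightly tightened quoting):
foo()
{
    local parsed_version
    # Assignment in its own statement: $? now reflects the Python script, not local.
    parsed_version=$(python parse-git-describe.py "$describe_result")
    local status=$?
    echo "$parsed_version"
    echo "$status"
}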

Python script to Batch file

I have a batch file that runs a python script. I am running Python 3.2. I want to send a variable like an integer or string from the python script back to the batch file, is this possible?
I know I can accept command line arguments in the Python script with sys.argv. Was hoping there was some feature that allows me to do the reverse.
In your Python script, just write to standard out: sys.stdout.write(...)
I'm not sure what scripting language you are using; maybe you could elaborate on that. For now I'll assume you are using bash (a Unix shell).
So, in your shell script you can capture the output of the Python script into a variable like this:
# run the script and store the output in $val
val=$(python your_python_script.py)
# print $val
echo "$val"
EDIT: it turns out it is a Windows batch file
python your_python_script.py > tmpFile
set /p myvar= < tmpFile
del tmpFile
echo %myvar%
If an int is enough for you, then you can use
sys.exit(value)
in your python script. That exits the application with a status code of value
In your batch file you can then read it as the %errorlevel% environment variable.
You can't "send" a string. You can print it out and have the calling process capture it, but you can only directly return numbers from 0 through 255.
Ignacio is dead on. The only thing you can return is your exit status. What I've done previously is have the python script (or EXE in my case) output the next batch file to be run, then you can put in whatever values you'd like and run it. The batch file that calls the python script then calls the batch file you create.
You can try this batch script for this issue, as an example:
@echo off
REM %1 - This is the parameter we pass with the desired return code for the Python script that will be captured by the ErrorLevel env. variable.
REM A value of 0 is the default exit code, meaning it has all gone well. A value greater than 0 implies an error
REM and this value can be captured and used for any error control logic and handling within the script
set ERRORLEVEL=
set RETURN_CODE=%1
echo (Before Python script run) ERRORLEVEL VALUE IS: [ %ERRORLEVEL% ]
echo.
call python -c "import sys; exit_code = %RETURN_CODE%; print('(Inside python script now) Setting up exit code to ' + str(exit_code)); sys.exit(exit_code)"
echo.
echo (After Python script run) ERRORLEVEL VALUE IS: [ %ERRORLEVEL% ]
echo.
And when you run it a couple of times with different return code values you can see the expected behaviour:
PS C:\Scripts\ScriptTests> & '\TestPythonReturnCodes.cmd' 5
(Before Python script run) ERRORLEVEL VALUE IS: [ 0 ]
(Inside python script now) Setting up exit code to 5
(After Python script run) ERRORLEVEL VALUE IS: [ 5 ]
PS C:\Scripts\ScriptTests> & '\TestPythonReturnCodes.cmd' 3
(Before Python script run) ERRORLEVEL VALUE IS: [ 0 ]
(Inside python script now) Setting up exit code to 3
(After Python script run) ERRORLEVEL VALUE IS: [ 3 ]
PS C:\Scripts\ScriptTests
