shell script not working with nohup - python

I am trying to run a shell script with the nohup command. The shell script takes an array of files, runs a python program on each file in a loop, and appends the output to a file. This works fine on the server, but if I try to use the nohup command it does not work. I have successfully run other programs using nohup on this server, just not this script.
#!/bin/sh
ARRAY=(0010.dat 0020.dat 0030.dat)
rm batch_results.dat
touch batch0.dat
touch batch_results.dat
for file in ${ARRAY[@]}
do
python fof.py $file > /dev/null
python mdisk5.py > ./batch0.dat
tail -1 batch0.dat
tail -1 batch0.dat >> batch_results.dat
done
The program works fine when I run it while staying connected to the server, for example:
./batch.sh > /dev/null &
./batch.sh > ./output.txt &
However, when I try to run it with the nohup command,
nohup ./batch.sh > /dev/null &
if I exit the server and come back, the output file (batch_results.dat) does not contain any data.
I am sure I am missing some simple fix or command in here. Any ideas?
Edit:
The program fof.py produces two files that are used as input for mdisk5.py.
When I exit the server while running nohup, these two files are produced, but only for the first input file '0010.dat'. The output files batch0.dat and batch_results.dat remain empty.

Here's your problem:
#!/bin/sh
sh does not support arrays. Either change your shebang line to invoke a shell that does support arrays, such as bash, or use a plain, whitespace-separated string of your data files in a variable, like:
DAT_FILES="0010.dat 0020.dat 0030.dat"
for file in $DAT_FILES
do
...
done
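Putting both fixes together, a minimal sketch of the corrected script (file names taken from the question; this assumes bash is available at /bin/bash):
#!/bin/bash
ARRAY=(0010.dat 0020.dat 0030.dat)
rm -f batch_results.dat              # -f: don't fail if the file doesn't exist yet
touch batch0.dat batch_results.dat
for file in "${ARRAY[@]}"
do
    python fof.py "$file" > /dev/null
    python mdisk5.py > ./batch0.dat
    tail -1 batch0.dat >> batch_results.dat
done
Launched the same way as before, nohup ./batch.sh > /dev/null & should then populate batch_results.dat for every input file.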

Related

Python script runs on command line but not from .sh file

I'm attempting to create a .sh file to batch a number of runs of a neural network on Python whilst on holidays.
At the moment I have been calling this from the command line:
python neural_network_trainer.py [args]
I now have a .sh script written:
#!/bin/bash
python neural_network_trainer.py [args]
# Repeated with varied args
That I am attempting to call in the same terminal as the original command line was running:
./august_hols.sh
I get the following error:
File "/data/Python-3.6.9/lib/python3.6/site.py", line 177
file=sys.stderr)
^
SyntaxError: invalid syntax
Where the Python install is in /data (for reasons).
Running which python on the command line reports the correct Python path, set via an alias in ~/.bashrc:
alias python=/data/Python-3.6.9/bin/python3
But running which python inside the script, between the Bash shebang and the first python call, reports /bin/python.
I've attempted to set the alias again at the start of the .sh script, to no avail. I'm scratching my head, as this is the exact process I have used elsewhere, albeit not on this precise PC. I can copy the exact command from the top of the bash file into the terminal and it runs fine, yet calling ./august_hols.sh gives the above Python error.
Where is Bash getting that path from, and why is it not using my expected route through ~/.bashrc?
A Bash sub-shell does not inherit aliases from the main shell.
You can source the script (run it in the main shell) instead of executing it (running it in a sub-shell):
source script.sh
EDIT:
Solution 2:
Run bash as the login shell so ~/.bashrc is executed, so your alias is loaded before your script.
The shell also needs to be interactive for aliases to work: alias expansion is enabled by default only in interactive shells, and a script runs non-interactively by default.
bash --login -i script.sh
Solution 3:
Similar to the above, except alias expansion is enabled explicitly:
bash --login -O expand_aliases script.sh
Have you tried:
python=/data/Python-3.6.9/bin/python3 ./[your_bash].sh
In your .sh, do this:
#!/usr/bin/env bash
export PATH=/data/Python-3.6.9/bin:$PATH
exec python neural_network_trainer.py "$@"
Aliases are tricky.
A perhaps nastier solution:
mapfile < <(declare -p | grep -m 1 BASH_ALIASES) && bash script.sh "${MAPFILE[@]}"
Within your script you will need:
shopt -s expand_aliases        # alias expansion is off by default in non-interactive shells
eval "$1"                      # re-create BASH_ALIASES from the caller's declare -p output
echo ${BASH_ALIASES[python]}
python --version
How about this:
#!/bin/bash
/data/Python-3.6.9/bin/python3 neural_network_trainer.py [args]
# Repeated with varied args

Write background command output (stdout) to file during execution

I have a command that takes a long time that I like to run in the background like this:
python3 script.py -f input.key -o output >> logs/script.log 2>&1 &
This works perfectly in the sense that the command is indeed in the background and I can check the output and potential errors later.
The main problem is that the output is only appended after the command has completely finished, whereas I would like up-to-date log messages so I can check the progress.
So currently the log would be empty and then suddenly at 08:30 two lines would appear:
[08:00] Script starting...
[08:30] Script finished!
Instead, I would like to have output saved to file before the command is completely finished.
Since you are calling a Python script you would want to use the -u option, which forces the stdout and stderr streams to be unbuffered.
$ python3 -u script.py -f input.key -o output >> logs/script.log 2>&1 &
You can check the log periodically using cat, or follow it in (near) real time by combining cat with watch:
$ watch cat logs/script.log
↳ https://docs.python.org/3.7/using/cmdline.html#cmdoption-u
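If adding -u to the command line is inconvenient (for instance, when the invocation is buried in another script), the documented PYTHONUNBUFFERED environment variable has the same effect:
$ PYTHONUNBUFFERED=1 python3 script.py -f input.key -o output >> logs/script.log 2>&1 &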

Redirect nohup to stdout on a script

I have a .sh script that runs a lot of Python scripts via nohup.
I use nohup to run them in the background and to avoid having to press Enter when the "nohup: ignoring input and appending output to 'nohup.out'" message appears.
nohup python script1.py 2> /dev/null &
nohup python script2.py 2> /dev/null &
[...]
When I run this .sh, I get the two Python scripts running in the background, ok.
These Python scripts generate log output, which I direct to sys.stdout:
stream_config = logging.StreamHandler(sys.stdout)  # <-- here
stream_config.setFormatter(formatter)
importer_logger.addHandler(stream_config)
Due to my implementation, I want to launch those with nohup but send the stdout to PID 1.
I can do this easily without using nohup as follows:
python tester.py > /proc/1/fd/1
But how can I combine that "don't press Enter to continue" behaviour with delivering the output to PID 1's stdout?
I tried these with no luck:
nohup python tester.py > /proc/1/fd/1 2> /dev/null &
nohup python tester.py 2> /proc/1/fd/1 &
Thanks in advance.
You can fix your command by using the shell to indirectly start your program so that you can redirect its output rather than the output of nohup:
nohup bash -c "exec python tester.py > /proc/1/fd/1" 2> /dev/null &
This isn't the best option but it at least corrects the issue that made your attempts fail.
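The same pattern generalizes to any redirection target; for example, appending to an ordinary log file (the path here is illustrative):
nohup bash -c 'exec python tester.py >> /var/log/tester.log' 2> /dev/null &
Because exec replaces the intermediate bash with the python process, no extra shell is left running.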

Shell script: time and python into a file

I need to write a bash script that executes a python program, and I need to output the execution time and the results to the same file.
I CAN NOT edit the python code.
As there are multiple tests, I want to execute them in the background.
I've tried this
#!/bin/bash
$(time python3 program.py file1 > solAndTimeFile1.txt &)
but it didn't work at all, it only outputs the python program results in the solAndTimeFile1.txt and the time is shown in the terminal.
I've also tried this:
#!/bin/bash
$(time python3 program.py file1 > solAndTimeFile1.txt >> solAndTimeFile1.txt &)
Same output, and it makes even less sense to me.
Put your command into curly braces so that the redirection applies to the whole group, including the timing report that the time keyword writes to the shell's stderr. To redirect both stdout and stderr to a file, use &>file. See man bash for further information.
{ time python3 program.py file1; } &>solAndTimeFile1.txt &
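For the multiple tests mentioned in the question, a minimal sketch (the input file names here are hypothetical) runs each one in the background with its own combined output file:
#!/bin/bash
# Each test runs in the background; its results and timing go to its own file.
for f in file1 file2 file3; do
    { time python3 program.py "$f"; } &> "solAndTime_$f.txt" &
done
wait   # block until all background tests have finished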

Have python script run in background of unix

I have a python script that I want to execute in the background on my unix server. The catch is that I need the python script to wait for the previous step to finish before moving onto the next task, yet I want my job to continue to run after I exit.
I think I can set it up as follows but would like confirmation:
An excerpt of the script looks like this, where command 2 depends on the output of command 1, since command 1 writes an edited executable file to the same directory. I would like to point out that commands 1 and 2 do not have nohup/& included.
subprocess.call('unix command 1 with options', shell=True)
subprocess.call('unix command 2 with options', shell=True)
When I initiate my python script like so:
% nohup python python_script.py &
will my script continue to run in the background, given that I explicitly did not put nohup/& on the unix commands inside the script but instead ran the python script itself in the background?
Yes: by running your python script with nohup (no hangup), your script won't be killed when the connection is severed, and the trailing & runs it in the background.
You can still view the output of your script; nohup pipes stdout to the nohup.out file. You can babysit the output in real time by tailing that file:
$ tail -f nohup.out
A quick note about the nohup.out file:
nohup.out    The output file of the nohup execution if standard output is a terminal and if the current directory is writable.
Or redirect the output yourself, append & to run the python script in the background, and tail your own log:
$ nohup python python_script.py > my_output.log &
$ tail -f my_output.log
You can use nohup:
chmod +x /path/to/script.py
nohup python /path/to/script.py &
Or
Instead of closing your terminal, use logout. Logging out does not generate a SIGHUP, so the shell won't send SIGHUP to any of its children.
