nohup not logging in nohup.out - python

I am running a web2py Python script and want to log its output. I am using the following command:
nohup python /var/www/web2py/web2py.py -S cloud -M -N -R applications/cloud/private/process.py >>/var/log/web2pyserver.log 2>&1 &
The process is running but it is not logging to the file. I have also tried without nohup, but the result is the same.
The default nohup logging to nohup.out is not working either.
Any suggestion what might be going wrong?

Nothing to worry about. The Python process run under nohup was actually writing to the file in buffered (batch) mode, so I could see the output only after quite some time, not instantaneously.

nohup will try to create nohup.out in the directory you run it from. Can you create a file in that folder?

If you've got commas at the end of your print statements (which suppress the newline), there's a good chance it's due to buffering. You can flush explicitly in your code (e.g. sys.stdout.flush()), or, when you run it under nohup, just add the -u option and you'll disable stdin/stdout/stderr buffering.

Don't worry about this; it is because of the buffering mechanism. Running your Python script with the -u flag will solve the problem:
nohup python -u code.py > code.log &
or just
nohup python -u code.py &
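For what it's worth, here is a minimal sketch of the same idea from inside the script itself (the loop and file names are illustrative, not from the question): either flush stdout explicitly, or rely on -u and skip the flush calls.
import sys, time

for i in range(5):
    print('tick %d' % i)
    sys.stdout.flush()  # push the buffered output to nohup.out / the log file right away
    time.sleep(1)

# Equivalent without the explicit flush calls: run the script unbuffered, e.g.
#   nohup python -u code.py >> code.log 2>&1 &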

Related

continue program even after logout [duplicate]

I have a Python script bgservice.py and I want it to run all the time, because it is part of the web service I am building. How can I make it keep running even after I log out of SSH?
Run nohup python bgservice.py & to get the script to ignore the hangup signal and keep running. Output will be put in nohup.out.
Ideally, you'd run your script with something like supervise so that it can be restarted if (when) it dies.
If you've already started the process, and don't want to kill it and restart under nohup, you can send it to the background, then disown it.
Ctrl+Z (suspend the process)
bg (restart the process in the background)
disown %1 (assuming this is job #1, use jobs to determine)
Running a Python Script in the Background
First, you need to add a shebang line in the Python script which looks like the following:
#!/usr/bin/env python3
This path is necessary if you have multiple versions of Python installed, and /usr/bin/env will ensure that the first Python interpreter in your $PATH environment variable is used. You can also hardcode the path of your Python interpreter (e.g. #!/usr/bin/python3), but this is less flexible and not portable to other machines. Next, you'll need to set the permissions of the file to allow execution:
chmod +x test.py
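For reference, a minimal test.py (a hypothetical stand-in for your own script, not from the original answer) could look like this:
#!/usr/bin/env python3
# test.py - hypothetical long-running script used in the commands below
import time

while True:
    print('still running...')
    time.sleep(60)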
Now you can run the script with nohup which ignores the hangup signal. This means that you can close the terminal without stopping the execution. Also, don’t forget to add & so the script runs in the background:
nohup /path/to/test.py &
If you did not add a shebang to the file you can instead run the script with this command:
nohup python /path/to/test.py &
The output will be saved in the nohup.out file, unless you specify the output file like here:
nohup /path/to/test.py > output.log &
nohup python /path/to/test.py > output.log &
If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
# doesn't create nohup.out
nohup command >/dev/null 2>&1
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
# runs in background, still doesn't create nohup.out
nohup command >/dev/null 2>&1 &
You can find the process and its process ID with this command:
ps ax | grep test.py
# or
# list of running processes Python
ps -fA | grep python
ps stands for process status
If you want to stop the execution, you can kill it with the kill command:
kill PID
You could also use GNU screen which just about every Linux/Unix system should have.
If you are on Ubuntu/Debian, its enhanced variant byobu is rather nice too.
You might consider turning your python script into a proper python daemon, as described here.
python-daemon is a good tool that can be used to run Python scripts as a background daemon process rather than a forever-running script. You will need to modify existing code a bit, but it's plain and simple.
If you are facing problems with python-daemon, there is another utility, supervisor, that will do the same for you, but in this case you won't have to write any code (or modify existing code), as it is an out-of-the-box solution for daemonizing processes.
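As a rough sketch of the python-daemon route (assuming the package is installed and that bgservice.py exposes its main loop as a run() function, an illustrative name rather than something from the question):
import daemon
from bgservice import run  # hypothetical entry point in bgservice.py

log = open('/var/log/bgservice.log', 'a')
with daemon.DaemonContext(stdout=log, stderr=log):
    run()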
Alternate answer: tmux
ssh into the remote machine
type tmux at the command prompt
start the process you want inside tmux, e.g. python3 main.py
leave the tmux session with Ctrl+b then d
It is now safe to exit the remote machine. When you come back use tmux attach to re-enter tmux session.
If you want to start multiple sessions, name each session using Ctrl+b then $, then type your session name.
To list all sessions, use tmux list-sessions.
To attach to a running session, use tmux attach-session -t <session-name>.
You can nohup it, but I prefer screen.
Here is a simple solution inside Python using a decorator:
import os, time

def daemon(func):
    def wrapper(*args, **kwargs):
        if os.fork():
            return
        func(*args, **kwargs)
        os._exit(os.EX_OK)
    return wrapper

@daemon
def my_func(count=10):
    for i in range(count):
        print('parent pid: %d' % os.getppid())
        time.sleep(1)

my_func(count=10)
# still in the parent process
time.sleep(2)
# after 2 seconds the function my_func lives on on its own
You can of course put the contents of your bgservice.py in place of my_func.
Try this:
nohup python -u <your file name>.py >> <your log file>.log &
You can run the above command inside screen and then detach from the screen session.
Now you can tail the logs of your Python script with: tail -f <your log file>.log
To kill your script, you can use the ps aux and kill commands.
The zsh shell has an option to make all background processes run with nohup.
In ~/.zshrc add the lines:
setopt nocheckjobs #don't warn about bg processes on exit
setopt nohup #don't kill bg processes on exit
Then you just need to run a process like so: python bgservice.py &, and you no longer need to use the nohup command.
I know not many people use zsh, but it's a really cool shell which I would recommend.
If what you need is that the process should run forever no matter whether you are logged in or not, consider running the process as a daemon.
supervisord is a great out-of-the-box solution that can be used to daemonize any process. It comes with a controlling utility, supervisorctl, that can be used to monitor the processes being run by supervisor.
You don't have to write any extra code or modify existing scripts to make this work. Moreover, the thorough documentation makes the process much simpler.
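As a rough illustration only (the program name and paths are placeholders, not taken from this answer), a supervisord program section typically looks something like this:
[program:bgservice]
command=/usr/bin/python3 /path/to/bgservice.py
autostart=true
autorestart=true
stdout_logfile=/var/log/bgservice.log
stderr_logfile=/var/log/bgservice.err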
After scratching my head for hours around python-daemon, supervisor is the solution that worked for me in minutes.
Hope this helps someone trying to make python-daemon work.
You can also use Yapdi:
Basic usage:
import yapdi

daemon = yapdi.Daemon()
retcode = daemon.daemonize()

# This would run in daemon mode; output is not visible
if retcode == yapdi.OPERATION_SUCCESSFUL:
    print('Hello Daemon')

What are the alternatives to "python3 sample_program.py &" via ssh?

I am running a Python script, sample_program.py, via ssh. I log into the machine and run
python3 sample_program.py &
and log off with the command 'exit'. Unfortunately, the script stops running after a few minutes.
What else could I use to run Python scripts remotely without keeping the terminal open?
nohup
nohup python3 sample_program.py &
is the simplest way (man nohup):
nohup - run a command immune to hangups, with output to a non-tty
and IMHO it is installed everywhere.
at
You can use the at command. The at utility executes commands at a later time: it reads commands from standard input and groups them together as an at-job, to be executed later.
For more information, options, and examples, see the Ubuntu Manpage Repository.
Example:
echo "python3 sample_program.py" | at now +8 hours
You can also use convenient shorthands, like tomorrow or noon, as in
echo "tweet fore" | at teatime
Independently of any terminal
ssh root@remoteserver '/root/backup.sh </dev/null >/var/log/root-backup.log 2>&1 &'
You need to close all file descriptors that are connected to the ssh socket, because the ssh session won't close as long as some remote process has the socket open. If you aren't interested in the script's output (presumably because the script itself takes care of writing to a log file), redirect it to /dev/null (but note that this will hide errors such as not being able to start the script).
Using nohup has no useful effect here. nohup arranges for the program it runs not to receive a HUP signal if the program's controlling terminal disappears, but here there is no terminal in the first place, so nothing is going to send a SIGHUP to the process out of the blue. Also, nohup redirects standard output and standard error (but not standard input) to a file, but only if they're connected to a terminal, which, again, they aren't.
You can set a cron job.
For example if now the time is 14:39:00 and today is friday, 30 august, you can add the following cron job (to be executed after 8 hours) in your crontab file using crontab -e command:
39 22 30 8 5 /path/to/python3 /path/to/sample_program.py
Add the shebang to the start of your scripts!
#!/usr/bin/python3
Give it permissions to execute.
chmod +x python3
Execute remotely!
sudo nohup ./python3 >/dev/null 2>&1 &
This way it will run as a background process and detach from the terminal, and you will not be writing an unnecessary nohup.out file.
You DO NOT even need the .py file extension on Linux, nor do you need to use more characters than needed:
python3 python3.py
is just the same as
./python3
It just needs the shebang and to be executable.

Python stdout logging: terminal vs bash file

I am not an expert in Bash or Python, so this question might seem silly.
I have a Python script called learn.py, and I noticed two different behaviours of its standard output when redirected to a log file.
If I call this from terminal, I can see the log file size growing while the script is running.
$ ./learn.py > file.log
However, if I create a bash file for the same purpose:
#!/bin/bash
./learn.py > file.log
the script starts (I checked with pgrep) but it does not seem to run, as the log file stays empty. Am I missing something?
I solved it using the logging facility for Python, by inserting
import logging
logging.basicConfig(filename='file.log',level=logging.INFO)
and replacing every occurrence of print "..." with
logging.info("...")
The final Bash script:
#!/bin/bash
./learn.py
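Putting that together, a minimal learn.py along those lines (the loop and messages are illustrative, not from the original script) might look like:
#!/usr/bin/env python
import logging

logging.basicConfig(filename='file.log', level=logging.INFO)

for epoch in range(10):
    # ... training work goes here ...
    logging.info("finished epoch %d", epoch)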
You can use nohup; also, add & to make it run in the background:
#!/bin/bash
nohup python learn.py >> file.log &

"Command not found" when using python for shell scripting

I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, so it is simply executing your script using the shell.
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
An option is to set the interpreter to python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternative way to do this without the -S option and have SGE execute the code based on the interpreter in the shebang; however, this solution might be enough for your needs.
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
Also works:
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"

fabric can NOT call the remote script with nohup

On the remote server, I have a script test.sh:
#!/bin/bash
echo "I'm here!"
nohup sleep 100&
Locally, I run fab runtest to call the remote test.sh:
def runtest():
    run('xxxx/test.sh')
I can get the output "I'm here!", but I cannot find the sleep process on the remote server.
What did I miss?
Thanks!
Is it possible to run nohup inside the script on the remote machine?
I checked the answer here and the Fabric FAQ, and also got hints from "fabric appears to start apache2 but doesn't"; combining them worked for me.
You can keep your test.sh without changes, and add pty=False with related shell redirection.
from fabric.api import *

def runtest():
    run("nohup /tmp/test.sh >& /dev/null < /dev/null &", pty=False)
At least, it works for me.
According to the Fabric FAQ, you can no longer effectively do this. Instead you should use tmux, screen, dtach, or, even better, the python-daemon package:
import daemon
from spam import do_main_program

with daemon.DaemonContext():
    do_main_program()
We ran into this problem and found that you can use nohup in a command, but not in the script itself.
For example, run('nohup xxxx/test.sh') works.
