Through Fabric, I am trying to start a celerycam process using the nohup command below. Unfortunately, nothing happens. I can start the process manually with the same command, but not through Fabric. Any advice on how I can solve this?
def start_celerycam():
    '''Start celerycam daemon'''
    with cd(env.project_dir):
        virtualenv('nohup bash -c "python manage.py celerycam --logfile=%scelerycam.log --pidfile=%scelerycam.pid &> %scelerycam.nohup &> %scelerycam.err" &' % (env.celery_log_dir, env.celery_log_dir, env.celery_log_dir, env.celery_log_dir))
I'm using Erich Heine's suggestion to use 'dtach' and it's working pretty well for me:
def runbg(cmd, sockname="dtach"):
    return run('dtach -n `mktemp -u /tmp/%s.XXXX` %s' % (sockname, cmd))
This was found here.
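For the celerycam question above, a hypothetical usage might look like this (a sketch that omits the virtualenv activation and reuses the question's env variables):

def start_celerycam():
    '''Start celerycam detached via dtach'''
    with cd(env.project_dir):
        # dtach keeps the process alive after the SSH channel closes
        runbg('python manage.py celerycam --pidfile=%scelerycam.pid' % env.celery_log_dir)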
As I experimented, the solution is a combination of two factors:
- run the process as a daemon: nohup ./command &> /dev/null &
- use pty=False for Fabric's run
So, your function should look like this:
from fabric.api import run

def background_run(command):
    command = 'nohup %s &> /dev/null &' % command
    run(command, pty=False)
And you can launch it with:
execute(background_run, your_command)
This is an instance of this issue. Background processes are killed when the command ends. Unfortunately, CentOS 6 doesn't support pty-less sudo commands.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, and background processes are therefore put in their own process group. As a result, they are not terminated when the command ends.
For even more information see this link.
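As a sketch, a Fabric task using that workaround might look like the following ('servicename' is the placeholder from the issue thread):

from fabric.api import sudo

def start_service():
    # set -m enables job control, so the daemon ends up in its own
    # process group and survives the end of the SSH command.
    sudo('set -m; service servicename start')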
You just need to run:
run("(nohup yourcommand >& /dev/null < /dev/null &) && sleep 1")
dtach is the way to go. It's a piece of software you need to install, like a lightweight version of screen.
This is a better version of the "dtach" method found above; it will install dtach if necessary. It's to be found here, where you can also learn how to get the output of the process running in the background:
from fabric.api import run
from fabric.api import sudo
from fabric.contrib.files import exists


def run_bg(cmd, before=None, sockname="dtach", use_sudo=False):
    """Run a command in the background using dtach

    :param cmd: The command to run
    :param before: The command to run before the dtach. E.g. exporting
                   environment variable
    :param sockname: The socket name to use for the temp file
    :param use_sudo: Whether or not to use sudo
    """
    if not exists("/usr/bin/dtach"):
        sudo("apt-get install dtach")
    if before:
        cmd = "{}; dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(
            before, sockname, cmd)
    else:
        cmd = "dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(sockname, cmd)
    if use_sudo:
        return sudo(cmd)
    else:
        return run(cmd)
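A hypothetical call for the original celerycam question could then be (paths assumed):

# 'before' changes into the project directory before dtach launches the daemon
run_bg('python manage.py celerycam --pidfile=/tmp/celerycam.pid',
       before='cd %s' % env.project_dir)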
May this help you, like it helped me to run omxplayer via Fabric on a remote Raspberry Pi!
You can use:
run('nohup /home/ubuntu/spider/bin/python3 /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py > /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py.log 2>&1 &', pty=False)
nohup did not work for me, and I did not have tmux or dtach installed on all the boxes I wanted to use this on, so I ended up using screen, like so:
run("screen -d -m bash -c '{}'".format(command), pty=False)
This tells screen to start a bash shell in a detached terminal that runs your command.
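Wrapped up as a reusable helper, this might look like the following sketch ('run_in_screen' is a made-up name, and the celerycam command is just an example):

from fabric.api import run

def run_in_screen(command):
    # -d -m starts screen detached; bash -c runs the quoted command inside it.
    run("screen -d -m bash -c '{}'".format(command), pty=False)

run_in_screen('python manage.py celerycam --pidfile=/tmp/celerycam.pid')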
You could be running into this issue.
Try adding pty=False to the sudo command. (I assume virtualenv is calling sudo or run somewhere?)
This worked for me:
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
Edit: I had to make sure the pid file was removed first so this was the full code:
# Create new celerycam
sudo('rm celerycam.pid', warn_only=True)
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
I was able to circumvent this issue by running nohup ... & over ssh in a separate local shell script. In fabfile.py:
@task
def startup():
    local('./do-stuff-in-background.sh {0}'.format(env.host))
and in do-stuff-in-background.sh:
#!/bin/sh
set -e
set -o nounset
HOST=$1
ssh $HOST -T << HERE
nohup df -h 1>>~/df.log 2>>~/df.err &
HERE
Of course, you could also pass in the command and standard output / error log files as arguments to make this script more generally useful.
(In my case, I didn't have admin rights to install dtach, and neither screen -d -m nor pty=False / sleep 1 worked properly for me. YMMV, especially as I have no idea why this works...)
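For instance, the fabfile side of that generalization might pass the extra arguments like this (a sketch; the script would then read them as $2, $3 and $4):

@task
def startup(cmd='df -h', out='~/df.log', err='~/df.err'):
    # Hypothetical generalization of the helper script above.
    local('./do-stuff-in-background.sh {0} "{1}" {2} {3}'.format(env.host, cmd, out, err))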
I want to do the following thing:
if condition:
    cmd = "ssh machine1 && sudo su - && df -h PathThatRequiresRootPriv | grep ..."
    proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, env=os.environ)
    (out_space, err) = proc.communicate()
    if err:
        print err
        log.warning('%s', err)
        exit(1)
But I am clearly missing something, because the program doesn't do anything.
Thank you in advance for your help.
You are building commands in the form cmd1 && cmd2 && cmd3. The three commands are executed one at a time, on the local machine, unless one of them returns false, and from your title that is not what you expect. The construct sudo su - behaves the same way: it expects its commands on its own standard input, not from the next command in the chain.
The correct way here would be:
if condition:
    loc_cmd = "ssh machine1"
    rem_cmd = "sudo -i 'df -h PathThatRequiresRootPriv | grep ...'"
    proc = subprocess.Popen(loc_cmd, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, env=os.environ)
    (out_space, err) = proc.communicate(rem_cmd)
Said differently, you should only execute ssh on the local machine, pass sudo -i to the remote side as a request to execute a command after simulating an initial login, and finally pass the pipeline as the parameter to sudo.
You should look at Fabric if you want to use Python, or at Ansible.
In Fabric you can do different things on a remote server, like this:
from fabric.api import *

def check_disk_space(username, passw='none'):
    host = '%s@%s' % (env.user, env.host)
    env.passwords[host] = 'rootPass'
    # run as the user
    run('df -h')
    # run via sudo
    sudo('df -h')

host = 'anyuser@10.10.10.101'
execute(check_disk_space, hosts=[host], username='anyuser', passw='')
Both support 'become' methods for executing remote commands through sudo.
When I use docker run in interactive mode, I am able to run the commands I want in order to test some Python stuff.
root@pydock:~# docker run -i -t dockerfile/python /bin/bash
[ root@197306c1b256:/data ]$ python -c "print 'hi there'"
hi there
[ root@197306c1b256:/data ]$ exit
exit
root@pydock:~#
I want to automate this from python using the subprocess module so I wrote this:
run_this = "print('hi')"
random_name = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20))
command = 'docker run -i -t --name="%s" dockerfile/python /bin/bash' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'cat <<\'PYSTUFF\' | timeout 0.5 python | head -n 500000 \n%s\nPYSTUFF' % run_this
output = subprocess.check_output([command],shell=True,stderr=subprocess.STDOUT)
command = 'exit'
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'docker ps -a | grep "%s" | awk "{print $1}" | xargs --no-run-if-empty docker rm -f' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
This is supposed to create the container, run the python command in the container, then exit and remove the container. It does all of this, except the command is run on the host machine and not in the docker container. I guess docker is switching shells or something like that. How do I run a python subprocess from the new shell?
It looks like you are expecting the second command cat <<... to send input to the first command. But the two subprocess commands have nothing to do with each other, so this doesn't work.
Python's subprocess library, and the popen command that underlies it, offer a way to get a pipe to stdin of the process. This way, you can send in the commands you want directly from Python and don't have to attempt to get another subprocess to talk to it.
So, something like:
from subprocess import Popen, PIPE

# Note: -t is dropped because a pty can't be allocated when stdin is a pipe.
p = Popen('docker run -i --name="%s" dockerfile/python /bin/bash' % random_name,
          shell=True, stdin=PIPE)
p.communicate("timeout 0.5 python | head -n 500000\n%s\n" % run_this)
(I'm not a Python expert; apologies for errors in string-forming. Adapted from this answer)
You actually need to spawn a new child in the new shell you are opening. So after creating the docker container, run docker enter, or try the same operation with pexpect instead of subprocess. pexpect spawns a new child, and that way you can send commands.
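A minimal pexpect sketch along those lines (assuming pexpect is installed; the image name is taken from the question, and the prompt pattern is a guess):

import pexpect

# Spawn the interactive container and drive it through its pty.
child = pexpect.spawn('docker run -i -t dockerfile/python /bin/bash')
child.expect(r'\$')                       # wait for the container's shell prompt
child.sendline('python -c "print \'hi there\'"')
child.expect(r'\$')                       # wait for the prompt to come back
print(child.before)                       # everything printed before that prompt
child.sendline('exit')
child.close()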
I have a very strange issue that I can't seem to figure out.
When I execute a python script containing the following lines from inside an SSH terminal (PuTTY), it works fine. But the moment I run the script via crontab, or even with nohup python myscript >/dev/null 2>&1 &, it doesn't seem to execute these commands.
subprocess.call('rsync -avr /path/to/folder/. --include "delta.*" --exclude "*" -e "ssh -o StrictHostKeyChecking=no -i /path/to/key.pem" ec2-user#'+server+':/path/to/folder/', shell=True)
local('ssh -t -o StrictHostKeyChecking=no -i /path/to/key.pem ec2-user#'+server+' "sudo /usr/bin/indexer -c /path/to/sphinx.conf --merge main delta --rotate"')
Basically, all the above does is sync a folder of new sphinx search engine updates to a remote server; the second line then runs a remote ssh command to force the search engine to rotate the updates into production.
I do have fabric installed (hence the local command) but to avoid having to fab a second file I was hoping a single line of code could allow me to execute sudo commands on a remote server.
Can someone help me out?
I found the answer: for ssh commands in a script run in the background, you need to have -t -t to force a pseudo-terminal.
Reference:
Pseudo-terminal will not be allocated because stdin is not a terminal
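Applied to the call above, the only change is the doubled -t (server and paths as in the question):

# -t -t forces pty allocation even though stdin is not a terminal,
# which is what background/cron execution lacks.
local('ssh -t -t -o StrictHostKeyChecking=no -i /path/to/key.pem ec2-user@' + server +
      ' "sudo /usr/bin/indexer -c /path/to/sphinx.conf --merge main delta --rotate"')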
I am trying to execute a script on a remote host using a detached screen session. I tried out the example Fabric gives and unfortunately couldn't get it to work.
from fabric.api import run

def yes():
    run('screen -d -m "yes"')
Executing fab yes on my local machine correctly connects it to the remote host and says the command has been run, however nothing is executed on the remote host. Trying screen -d -m "yes" on either machine works as expected.
If anyone could point out what I'm doing wrong I'd greatly appreciate it. Also, on a side note, why are there quotes around the yes in the command? Would it work without the quotes? Thanks!
run('screen -d -m yes; sleep 1') works.
Not sure if Fabric or screen is to blame for this behaviour, though.
Although AVB's answer is perfect, I'll add a small tip which may help someone like me: if you want to run more than one command, put them in an executable file.
This will not work:
run('screen -d -m "./ENV/bin/activate; python run.py; sleep 1"')
So create a run.sh file:
#!/bin/bash
source ENV/bin/activate
python run.py
And use it like run('screen -d -m ./run.sh; sleep 1')
Use it like this:
run('sudo screen -d -m python app.py && sleep 1', pty=True)
screen -d -m
Start a screen session in detached mode.
I have googled a lot, and the Fabric FAQ also says to use screen or dtach for this, but I couldn't find out how to implement it.
Below is my non-working code; the shell script will not execute as expected because it is a nohup task:
def dispatch():
    run("cd /export/workspace/build/ && if [ -f spider-fetcher.zip ]; then mv spider-fetcher.zip spider-fetcher.zip.bak; fi")
    put("/root/build/spider-fetcher.zip", "/export/workspace/build/")
    run("cd /export/script/ && sh ./restartCrawl.sh && echo 'finished'")
I've managed to do it in two steps:
Start tmux session on remote server in detached mode:
run("tmux new -d -s foo")
Send command to the detached tmux session:
run("tmux send -t foo.0 ls ENTER")
Here, '-t' determines the target session ('foo'), and 'foo.0' gives the number of the pane in which the 'ls' command is to be executed.
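Put together for the dispatch task from the question, a hedged sketch might be (the session name 'crawl' is arbitrary):

def dispatch():
    # Start a detached tmux session, then send the long-running restart
    # command into its first pane so it survives the SSH disconnect.
    run("tmux new -d -s crawl")
    run("tmux send -t crawl.0 'cd /export/script && sh ./restartCrawl.sh' ENTER")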
You can just prepend screen to the command you want to run:
run("screen long running command")
Fabric doesn't keep state the way something like expect would, though: each run/sudo/etc. is its own separate command, executed without knowing the state of the previous one. E.g. run("cd /var"); run("pwd") will not print /var but the home directory of the user who has logged into the box.
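If you need the directory change to carry over, Fabric's cd context manager is the usual way (a minimal example):

from fabric.api import run, cd

def show_dir():
    # cd() prefixes every run() inside the block with "cd /var && ...",
    # so this prints /var rather than the login user's home directory.
    with cd('/var'):
        run('pwd')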