Switching user in Fabric - python

I have a problem when using Fabric to mimic my SSH workflow to deploy my web application.
Here's my usual flow of commands when I SSH to a server:
SSH using root user: ssh root@1.2.3.4
Switch to web user: su - web
Change directory: cd /srv/web/prod/abc_project
Start virtualenv: workon abc_env
Perform git pull: git pull origin master
Run a script: build_stuff -m build
Run another script: ./run
I tried to write this as a deploy script in Fabric, but I get dropped into a shell when su - web runs, and I have to hit Ctrl-D to let the script continue. I am also unable to activate my virtualenv, because su - web successfully switches the user to web, but the Ctrl-D (needed so the Fabric script can continue) logs out of that user and back to root.
Here's my script:
from fabric.api import env, run, puts, settings, cd, prefix, roles

env.user = 'root'

@roles('web')
def deploy():
    dev_path = '/srv/web/prod'
    app_path = '/srv/web/prod/rhino'
    workon = 'workon rhino_env'

    with prefix('su - web'):
        puts('Switched to `web` user')

        with settings(warn_only=True):
            run('kill -9 `cat /srv/web/run/rhino/rhino.pid`')
            puts('Stopped rhino...')

        with cd(app_path):
            run('git reset --hard HEAD')
            puts('Discarded all untracked and modified files')
            run('git checkout master')
            run('git pull origin master')

        users = run('users')
        puts('Output from `users` command: %s' % users)

        run(workon)
        run('build_assets -m build')
        run('cd %(dev_path)s; chown -R web:ebalu rhino' % {'dev_path': dev_path})
        run('cd %(app_path)s; ./run' % {'app_path': app_path})
        pid = run('cat /srv/web/run/rhino/rhino.pid')
        puts('Rhino started again with pid: %s.' % pid)
...there's one more thing: no, I can't log in as web initially; I have to log in as root. It is the web user that has the virtualenv, not the root user.

First of all, you should use sudo when executing commands as another user. Second, workon sets environment variables for the current shell. Since Fabric invokes a new shell for every command, you should run workon rhino_env in every command where you need the virtualenv (i.e. as a prefix). With these edits your code should look like this:
from fabric.api import env, run, sudo, puts, settings, cd, prefix, roles

env.user = 'root'

@roles('web')
def deploy():
    dev_path = '/srv/web/prod'
    app_path = '/srv/web/prod/rhino'
    workon = 'workon rhino_env; '

    with settings(warn_only=True):
        run('kill -9 `cat /srv/web/run/rhino/rhino.pid`')
        puts('Stopped rhino...')

    with cd(app_path):
        sudo('git reset --hard HEAD', user='web')
        puts('Discarded all untracked and modified files')
        sudo('git checkout master', user='web')
        sudo('git pull origin master', user='web')

    users = run('users')
    puts('Output from `users` command: %s' % users)

    with prefix(workon):
        sudo('build_assets -m build', user='web')

    with cd(dev_path):
        run('chown -R web:ebalu rhino')

    with cd(app_path):
        sudo('./run', user='web')

    pid = run('cat /srv/web/run/rhino/rhino.pid')
    puts('Rhino started again with pid: %s.' % pid)

The way I achieve this is with
from fabric.api import settings

with settings(user='otheruser'):
    ...
You will be prompted for the password of otheruser, though only once. So it is not equivalent to sudo su otheruser, where root logs in to the user account without a password, but it is a simple way to switch between users in your script, typing each password only once.
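A minimal sketch of the whole pattern, assuming Fabric 1.x and a hypothetical otheruser account:

from fabric.api import env, run, settings

env.hosts = ['1.2.3.4']  # hypothetical host
env.user = 'root'

def deploy():
    run('whoami')  # runs as root
    with settings(user='otheruser'):
        # Fabric reconnects as otheruser; you are prompted for
        # otheruser's password once, then it is cached in memory
        run('whoami')  # runs as otheruser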

One possible solution is to use the sudo operation instead of changing the remote user with su.
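For example, a minimal sketch assuming Fabric 1.x, with the web user and project path taken from the question:

from fabric.api import env, sudo, cd

env.user = 'root'

def deploy():
    with cd('/srv/web/prod/abc_project'):
        # sudo -u web runs each command as the web user without
        # the interactive shell that su - web opens
        sudo('git pull origin master', user='web')
        sudo('./run', user='web')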

Is there a python edge software update tool?

So I have a simple app written in Python and running on a set of (10) Raspberry Pis.
It is a folder with one runnable script.
I want to have, on my external server with a public IP, some kind of CI/CD-like service that would deploy updates to all edge nodes and restart my app on them.
Internet access is rare on the edge devices, thus I want to push updates when I press a button on the server.
Is there such a thing for Python programs that are meant to run on edge devices?
As I understand it, the main problem is to update and run some script on multiple Raspberry Pi boards, correct?
There are a lot of ready-made solutions like dokku or piku. Both allow you to do git push deployments to your own servers (manually).
Or you can develop your own solution using GitHub webhooks or some HTML form (for manual push) and a Flask web server that will do the CI/CD steps internally.
You'll need to run a script like the one below on each node/board, and configure a webhook with a URL similar to http://your-domain-or-IP.com:8000/deploy-webhook, but with a different port per node.
Or you can open that page manually from a browser, or create a separate page that lets you do that asynchronously, as you wish.
from flask import Flask
import subprocess

app = Flask(__name__)

script_proc = None
src_path = '~/project/src/'

def bash(cmd):
    # run the command through the shell and wait for it to finish,
    # so the deploy steps execute in order
    subprocess.Popen(cmd, shell=True).wait()

def pull_code(path):
    bash('git -C {path} reset --hard'.format(path=path))
    bash('git -C {path} clean -df'.format(path=path))
    bash('git -C {path} pull -f'.format(path=path))
    # or, if you just need to copy files to the remote machine
    # (example of a remote path: "pi@192.168.1.1:~/project/src/"):
    # bash('scp -r {src_path} {dst_path}'.format(src_path=src_path, dst_path=path))

def installation(python_path):
    bash('{python_path} -m pip install -r requirements.txt'.format(python_path=python_path))

def stop_script():
    global script_proc
    if script_proc:
        script_proc.terminate()

def start_script(python_path, script_path, args=None):
    global script_proc
    script_proc = subprocess.Popen(
        '{} {} {}'.format(python_path, script_path, ' '.join(args) if args else ''),
        shell=True
    )

@app.route('/deploy-webhook')
def deploy_webhook():
    project_path = '~/project/some_project_path'
    script_path = 'script1.py'
    python_path = 'venv/bin/python'
    pull_code(project_path)
    installation(python_path)
    stop_script()
    start_script(python_path, script_path)
    return 'Deployed'
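On the server side, the "button" can then be a tiny script that calls each node's webhook in turn. A minimal sketch, assuming the requests package is installed and using hypothetical node URLs:

import requests

# hypothetical node URLs; one port per node, as described above
NODES = [
    'http://your-domain-or-IP.com:8000/deploy-webhook',
    'http://your-domain-or-IP.com:8001/deploy-webhook',
]

def push_updates():
    for url in NODES:
        try:
            resp = requests.get(url, timeout=60)
            print(url, resp.status_code, resp.text)
        except requests.RequestException as exc:
            print(url, 'failed:', exc)

if __name__ == '__main__':
    push_updates()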
If you don't need a user interface and use Linux, I suggest using a bash script.
I wrote a simple bash script to push an update and restart the script on a set of Raspberry Pis. Please configure passwordless (key-based) SSH login beforehand.
#!/bin/bash
listOfIps=(
    192.168.1.100
    192.168.1.101
    192.168.1.102
    192.168.1.103
)
username="pi"
destDir="work/"
pythonScriptName="fooScript.py"

for i in "${listOfIps[@]}"
do
    echo "will copy folder \"$1\" with content to ip: ${i} and perform"
    echo "scp -r $1 ${username}@${i}:${destDir}"
    scp -r "$1" ${username}@${i}:${destDir}
    echo "will kill all python scripts unfriendly"
    ssh ${username}@${i} "pkill python"
    echo "will restart my python script ${pythonScriptName} in dir ${destDir}"
    # nohup + redirects so the script keeps running after the ssh session closes
    ssh ${username}@${i} "nohup python3 ${destDir}/${pythonScriptName} >/dev/null 2>&1 &"
done
exit 0
Save the code in a file copyToAll.sh, edit username, destDir and your script name, and make it executable:
chmod 755 copyToAll.sh
Call it with:
./copyToAll.sh myFileToSend

SSHing from within a python script and run a sudo command having to give sudo password

I am trying to SSH into another host from within a python script and run a command that requires sudo.
I'm able to ssh from the python script as follows:
import subprocess
import sys
import json

HOST = "hostname"
# Ports are handled in ~/.ssh/config since we use OpenSSH
COMMAND = "sudo command"

ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                       shell=False,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
result = ssh.stdout.readlines()
if result == []:
    error = ssh.stderr.readlines()
    print(error)
else:
    print(result)
But I want to run a command like this after sshing:
extract_response = subprocess.check_output(['sudo -u username internal_cmd',
                                            '-m', 'POST',
                                            '-u', 'jobRun/-/%s/%s' % (job_id, dataset_date)])
return json.loads(extract_response.decode('utf-8'))[0]['id']
How do I do that?
Also, I don't want to be providing the sudo password every time I run this sudo command; for that, I have added this command (i.e., internal_cmd from above) at the end of visudo on the new host I'm trying to SSH into. But still, when typing this command directly in the terminal like this:
ssh -t hostname sudo -u username internal_cmd -m POST -u/-/1234/2019-01-03
I am being prompted to give the password. Why is this happening?
You can pipe the password in by using the -S flag, which tells sudo to read the password from standard input.
echo 'password' | sudo -S [command]
You may need to play around with how you fit it into the ssh command, but this should do what you need.
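For instance, a minimal sketch of wiring that into the subprocess call from the question (the inline password is purely illustrative; see the warning below):

import subprocess

HOST = "hostname"
# -S makes the remote sudo read the password from its standard input
COMMAND = "sudo -S -u username internal_cmd -m POST"

ssh = subprocess.Popen(["ssh", HOST, COMMAND],
                       stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.PIPE)
# send the password, followed by a newline, on stdin
out, err = ssh.communicate(input=b"password\n")
print(out.decode())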
Warning: you may know this already... but never store your password directly in your code, especially if you plan to push the code to something like GitHub. If you are unaware of this, look into using environment variables or storing the password in a separate file.
If you don't want to worry about where to store the sudo password, you might consider adding the script user to the sudoers list with sudo access to only the command you want to run, along with the NOPASSWD option. See the sudoers(5) man page.
You can further restrict command access by prepending a command="..." option to the beginning of your authorized_keys entry. See the sshd(8) man page. A sketch of both is shown below.
If you can, disable SSH password authentication to require only SSH key authentication. See the sshd_config(5) man page.
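Here is what those two lines might look like (usernames, paths and the key are hypothetical):

# /etc/sudoers (edit via visudo): let scriptuser run only internal_cmd
# as username, with no password prompt
scriptuser ALL=(username) NOPASSWD: /usr/local/bin/internal_cmd

# ~/.ssh/authorized_keys on the server: force this key to run one command
command="/usr/local/bin/internal_cmd" ssh-ed25519 AAAA... scriptuser@client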

How fabric works with 'sudo su user'

My request is simple:
ssh to a remote server with user0
switch user to user1 using: 'sudo su user1'
list all items in current folder
My expected code:
def startRedis():
    run('sudo su - user1')
    print(run('ls'))
However, it ends with: out: user1@server:~$
and waits for my interactive command forever, never executing the second line. It seems sudo su opened a new shell.
Can anyone help solving this simple task?
You can set the sudo_user property in env; this way Fabric will switch to the desired user.
Official doc: http://docs.fabfile.org/
The password for switching user can be specified in env itself, to avoid getting a prompt when the method is invoked.
fabfile.py
from fabric.api import env, sudo

env.sudo_user = 'user1'
env.password = '***'

def list_items():
    sudo('ls')
Run the command below and specify the hosts after -H:
fab -H host1 list_items

fabric difference sudo() run('sudo cmd')

I'm wondering what the difference is between the function sudo() and the function run('sudo -u user smth').
In the docs there is:
sudo is identical in every way to run, except that it will always wrap
the given command in a call to the sudo program to provide superuser
privileges.
But at times, sudo('cmd') prompts me for a password, yet if I switch to run('sudo cmd') it works without prompting me for anything. Does anything change between the two? (I remember someone on SO saying that sudo() and run('sudo cmd') are not for the same use, but I can't find it again.)
I found these two differences:
1: Fabric maintains an in-memory password.
2: sudo() accepts additional user and group arguments.
First, Fabric gets the password from its cache when using sudo(), so you do not need to enter it again. But if you use run('sudo cmd'), you need to enter the password for each 'sudo cmd'.
Second, if you want to execute a command not as root but as another user or group, like www, you just need to set env.sudo_user = 'www' or call sudo('cmd', user='www'). The former executes every sudo() as www; the latter executes this single cmd as www. With run() you would have to write run("sudo -u 'www' cmd") yourself.
from fabric.api import sudo, run, env

env.hosts = ['host_ip', ]
env.user = 'user_name'
env.sudo_user = 'sudo_user'

def test_1():
    run('sudo pwd')

def test_2():
    sudo('pwd')
$ fab -I --show=debug test_1 test_2
Initial value for env.password: # enter password
Commands to run: test_1, test_2
Parallel tasks now using pool size of 1
[ip_address] Executing task 'test_1'
[ip_address] run: /bin/bash -l -c "sudo pwd"
[ip_address] out: [sudo] password for billy: # needs to enter password here
[ip_address] out: /home/billy
[ip_address] out:
Parallel tasks now using pool size of 1
[ip_address] Executing task 'test_2'
[ip_address] sudo: sudo -S -p 'sudo password:' -u "root" /bin/bash -l -c "pwd"
[ip_address] out: sudo password: # only prompt, do not need enter password
[ip_address] out: /home/billy
[ip_address] out:
Done.
Disconnecting from ip_address... done.
Since Fabric 2, you can invoke sudo via run(), which will prompt for the password unless you use the auto-responder, details here. Note that the sudo command usually caches the password remotely, so next invocations of sudo during the same connection will not prompt for password.
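A minimal sketch of that run()-plus-auto-responder approach, using invoke's Responder (host and password are hypothetical):

from fabric import Connection
from invoke import Responder

conn = Connection('user@some-host')  # hypothetical host

# answer sudo's password prompt automatically; pty=True is needed
# so the prompt reaches the watcher
sudopass = Responder(pattern=r'\[sudo\] password',
                     response='my-password\n')
conn.run('sudo service apache2 restart', pty=True, watchers=[sudopass])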
However, the Fabric sudo() helper makes using sudo much easier, details here. You need to ensure that the sudo.password configuration value is filled in (via config object, config file, environment variable, or --prompt-for-sudo-password). Here's how I do it with the keyring module:
from fabric import task
import keyring

@task
def restart_apache(connection):
    # set the password with keyring.set_password('some-host', 'some-user', 'passwd')
    connection.config.sudo.password = keyring.get_password(connection.host, 'some-user')
    connection.sudo('service apache2 restart')
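You can then invoke the task as usual, for example (note that Fabric 2 exposes underscored task names with dashes by default):
fab -H some-host restart-apache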

fabric restarting my supervisord service kills the service

I have a Tornado web application running under supervisord. I manually start, stop and restart the supervisord service over SSH and it works fine. I am using Fabric from my local machine to do the following:
Run unit tests
commit/push to my git server
pull changes from development server
restart the Supervisord service to update the application
I get no errors when running the fabfile, but my server is down after I run it. The following is my fabfile code:
from fabric.api import *
from fabric.context_managers import settings

def prepare_deploy():
    local('py.test')
    local('git add .')
    x = raw_input('what would you like your commit message to be? ')
    local('git diff --quiet --exit-code --cached || git commit -m "' + x + '"')
    local('git push origin master')

def dev():
    prepare_deploy()
    x = raw_input('Please enter the location of your keyFile for the dev server ')
    if x == '':
        key = 'key_file'
    else:
        key = x
    with settings(
            host_string='dev_server_name',
            user='user_name',
            key_filename=key,  # note: lowercase key_filename; 'Key_filename' is silently ignored
            use_ssh_config=True):
        code_dir = '~/path/to/code/'
        with cd(code_dir):
            run("git pull origin master")
            run("sudo python setup.py install")
            run("sudo service supervisord restart")
After this is done, my web application is down. Any ideas on why this is happening?
Supervisor is a tool to manage services; there is no need to restart it just to restart something under its control.
It comes with a command-line tool to manage processes, supervisorctl. You can use it as a CLI or as an interactive shell.
If you want to restart a service, run supervisorctl restart <service name> (with appropriate rights, so probably using sudo). If you changed the service's configuration, use supervisorctl update to restart the affected processes. This way you also get to use supervisor's logfile in case your process doesn't start.
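In the fabfile above, that would mean replacing the last run() line with something like this (the program name myapp is hypothetical; use the name from your [program:...] section):
run("sudo supervisorctl restart myapp")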
Supervisor has a bug in its init.d script. Don't do restart; do stop and then start.
