You know that on EC2 there is no password associated with the "ubuntu" user. With the following lines, if I try to run:
fab development install_dir
I get:
[ec2-46-51-132-252.eu-west-1.compute.amazonaws.com] sudo: chown -R webadmin:webadmin /var/www
[ec2-46-51-132-252.eu-west-1.compute.amazonaws.com] Login password:
I tried adding shell=False to the sudo call (as suggested in "Can I prevent fabric from prompting me for a sudo password?"), but it doesn't change anything.
Any idea? Thanks a lot!
from fabric.api import env, sudo
from fabric.contrib.files import exists

def development():
    env.envname = 'development'
    env.user = 'ubuntu'
    env.group = 'ubuntu'
    env.chuser = 'webadmin'
    env.chgroup = 'webadmin'
    env.hosts = ['ec2-***.eu-west-1.compute.amazonaws.com']
    env.envname_abriev = 'dev'
    env.key_filename = '/home/xx/.ssh/xx.pem'
    env.postgresql_version = '9.0'

def install_dir():
    if not exists('/var/www'):
        sudo('mkdir /var/www')
    sudo('chown -R %s:%s /var/www' % (env.chuser, env.chgroup))
Download (or create) a keypair file from AWS. Then create a file called fabfile.py and set its contents as follows:
from fabric.context_managers import cd
from fabric.operations import sudo
from fabric.api import run, env
import os

HOME = os.getenv('HOME')

env.user = 'ubuntu'
env.hosts = ['PUBLICDNS.ap-southeast-1.compute.amazonaws.com',
             'ANOTHERSERVER.compute.amazonaws.com']  # can add multiple instances
env.key_filename = [
    '%s/<your-keypair-file>.pem' % HOME
]  # assuming the keypair file is located in HOME

# example code we want to run on the remote machine
def update():
    with cd('/var/www'):
        sudo('svn update')
    with cd('/var/www/cache'):
        run('rm -rf *')
    sudo('service lighttpd restart')
To run the file, type fab update in the terminal. You need to specify the keypair file associated with your EC2 instance when running the fab command:
Usage: fab [options] <command>[:arg1,arg2=val2,host=foo,hosts='h1;h2',...] ...
Options:
  -R ROLES, --roles=ROLES
                        comma-separated list of roles to operate on
  -i KEY_FILENAME       path to SSH private key file. May be repeated.
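For example, reusing the placeholder keypair name from above:

fab -i ~/<your-keypair-file>.pem update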
I'm trying to download all private repos in my organisation.
I have a script that I want to run once a day using Fargate.
The problem I'm experiencing when running it is below:
Warning: Permanently added the RSA host key for IP address '192.30.253.113' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I understand the error, and in my Dockerfile I add an SSH key to the image:
FROM python:3.6
RUN mkdir /backup
WORKDIR /backup
ADD . /backup/
RUN mkdir /root/.ssh/
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa
RUN python3 -m pip install -r requirements.txt
This is the script where I try to download all the repos and upload them to an S3 bucket:
import os
import shutil
from datetime import date

import boto3
from github import Github

TOKEN = os.environ['TOKEN']
DATE = str(date.today())

def archive(zipname, directory):
    return shutil.make_archive(zipname, 'zip', root_dir=directory,
                               base_dir=None)

def assume_role(role_to_assume, duration=900):
    sts_client = boto3.client('sts')
    assumed_role = sts_client.assume_role(
        RoleArn=role_to_assume,
        RoleSessionName='session',
        DurationSeconds=duration
    )
    credentials = assumed_role['Credentials']
    return (credentials['AccessKeyId'], credentials['SecretAccessKey'],
            credentials['SessionToken'])

def upload_to_s3(key, file_name, access_role):
    access_key, secret_key, session_token = assume_role(access_role)
    s3 = boto3.resource(
        's3',
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        aws_session_token=session_token
    )
    s3.Bucket('zego-github-backup').put_object(
        Key=key,
        Body=file_name
    )
    print('Uploaded')

def login_github():
    g = Github(TOKEN)
    org = g.get_organization("Organisation").get_repos()
    role = "arn:aws:iam::7893729191287:role/Github_backup"
    for repo in org:
        repo_name = repo.name
        key = f"{repo_name} {DATE}.zip"
        ssh_url = repo.ssh_url
        os.system(f"GIT_SSH_COMMAND=\"ssh -o StrictHostKeyChecking=no\" git clone --depth 1 {ssh_url}")
        archive(f"{repo_name} {DATE}", repo_name)
        archived_file = open(key, 'rb')
        upload_to_s3(key, archived_file, role)
        shutil.rmtree(repo_name)
        os.remove(f"{repo_name} {DATE}.zip")
    print("Done")

login_github()
What am I doing wrong? Or am I missing some steps?
Not sure if I missed anything from your script, but I didn't see you starting the ssh-agent anywhere and then adding the key to it. From GitHub's guide:
$ eval "$(ssh-agent -s)"
$ ssh-add /root/.ssh/id_rsa
Hopefully that helps!
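One thing to watch in the script above: each os.system() call runs in its own shell, so an agent started in one call is gone by the next. A minimal sketch (assuming the key path from the Dockerfile; the repo URL is hypothetical) that chains the agent setup and the clone in a single shell invocation:

import os

ssh_url = 'git@github.com:Organisation/some-repo.git'  # hypothetical repo URL
# start the agent, add the key and clone in the same shell, so the
# SSH_AUTH_SOCK/SSH_AGENT_PID variables are still set when git runs
os.system(
    'eval "$(ssh-agent -s)" && ssh-add /root/.ssh/id_rsa && '
    f'git clone --depth 1 {ssh_url}'
)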
There is a very simple-to-use Python tool that automatically backs up an organisation's repositories in .zip format, saving public and private repositories and all their branches. If you want to do your backups every so often, all you have to do is run the tool from a crontab on your server. It works with the GitHub API; if you have Python on your AWS instance, the tool will be very useful: https://github.com/BuyWithCrypto/OneGitBackup
I have written a simple script in which I invoke ssh to one of my lab devices' IPs through os.system. Now the problem for me is how to supply a password. Can I do it via Python?
Below is the code that I have been using:
import os
os.system("ssh 192.168.1.100")
I tried to understand how the os module works, but so far I am not able to supply the password argument. How do I supply a password to this program via Python?
My environment is Bash.
Install sshpass, then launch the command with os.system like this:
os.system("sshpass -p 'yourpassword' ssh -o StrictHostKeyChecking=no yourusername@hostname")
Or you can use a Fabric script to copy your public key onto the system; then you will no longer need a password to connect:

from fabric.api import env, sudo, run

def copy_pub_key(ip, user, password, pub_key):
    env.host_string = ip
    env.user = user
    env.password = password
    if user == "root":
        sudo('echo \'{pub_key}\' >> /root/.ssh/authorized_keys'.format(
            pub_key=pub_key
        ))
    else:
        sudo('echo \'{pub_key}\' >> /home/{user}/.ssh/authorized_keys'.format(
            pub_key=pub_key,
            user=user
        ))
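A hypothetical invocation, passing the task arguments on the fab command line and reading your public key from disk:

fab copy_pub_key:1.2.3.4,web,secret,"$(cat ~/.ssh/id_rsa.pub)"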
I'm wondering what the difference is between the function sudo() and the function run('sudo -u user smth').
The docs say:
sudo is identical in every way to run, except that it will always wrap
the given command in a call to the sudo program to provide superuser
privileges.
But at times sudo('cmd') prompts me for a password, yet if I switch to run('sudo cmd') it works without prompting me for anything. Does anything change between the two? (I remember someone on SO saying that sudo() and run('sudo cmd') are not for the same use, but I can't find it again.)
I found these two differences:
1: Fabric maintains an in-memory password.
2: sudo() accepts additional user and group arguments.
First, Fabric gets the password from its cache when using sudo(), so you do not need to enter a password. But if you use run('sudo cmd'), you need to enter the password for each 'sudo cmd'.
Second, if you want to execute a command not as root but as another user or group such as www, you just need to set env.sudo_user = 'www' or call sudo('cmd', user='www'). The first executes every sudo() as www; the second executes only that single cmd as www. With run() you would have to write run("sudo -u 'www' cmd") yourself.
from fabric.api import sudo, run, env

env.hosts = ['host_ip', ]
env.user = 'user_name'
env.sudo_user = 'sudo_user'

def test_1():
    run('sudo pwd')

def test_2():
    sudo('pwd')
$ fab -I --show=debug test_1 test_2
Initial value for env.password: # enter password
Commands to run: test_1, test_2
Parallel tasks now using pool size of 1
[ip_address] Executing task 'test_1'
[ip_address] run: /bin/bash -l -c "sudo pwd"
[ip_address] out: [sudo] password for billy: # needs to enter password here
[ip_address] out: /home/billy
[ip_address] out:
Parallel tasks now using pool size of 1
[ip_address] Executing task 'test_2'
[ip_address] sudo: sudo -S -p 'sudo password:' -u "root" /bin/bash -l -c "pwd"
[ip_address] out: sudo password: # only prompt, do not need enter password
[ip_address] out: /home/billy
[ip_address] out:
Done.
Disconnecting from ip_address... done.
Since Fabric 2, you can invoke sudo via run(), which will prompt for the password unless you use the auto-responder (a sketch follows below); details are in the Fabric documentation. Note that the sudo command usually caches the password remotely, so subsequent invocations of sudo during the same connection will not prompt for the password.
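A minimal sketch of that auto-responder approach, assuming Fabric 2's Connection and the Responder watcher from the invoke package (host name and password are placeholders):

from fabric import Connection
from invoke import Responder

def restart_apache():
    c = Connection('user@host')
    # watch the output for sudo's prompt and answer it automatically
    sudopass = Responder(
        pattern=r'\[sudo\] password',
        response='yourpassword\n',
    )
    c.run('sudo service apache2 restart', pty=True, watchers=[sudopass])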
However, the Fabric sudo() helper makes using sudo much easier; it is likewise covered in the documentation. You need to ensure that the sudo.password configuration value is filled in (via a config object, config file, environment variable, or --prompt-for-sudo-password). Here's how I do it with the keyring module:
from fabric import task
import keyring

@task
def restart_apache(connection):
    # set the password beforehand with keyring.set_password('some-host', 'some-user', 'passwd')
    connection.config.sudo.password = keyring.get_password(connection.host, 'some-user')
    connection.sudo('service apache2 restart')
I want to deploy my code to localhost and to my live server; for this automation I used Fabric. My basic fabfile looks like:
from fabric.api import env, require, sudo

def localhost():
    "Use the local virtual server"
    env.hosts = ['127.0.0.1']
    env.user = 'user'
    env.path = '/var/www/html/{}'.format(env['project_name'])
    env.virtualhost_path = env.path

def webserver():
    "Use the actual webserver"
    env.hosts = ['www.example.com']
    env.user = 'username'
    env.path = '/var/www/html/{}'.format(env['project_name'])
    env.virtualhost_path = env.path

def setup():
    require('hosts', provided_by=[localhost])
    require('path')
    sudo("apt-get update -y")
    sudo("apt-get install git -y")
    sudo("apt-get install postgresql libpq-dev python-dev python-pip -y")
    sudo("apt-get install redis-server -y")
    sudo("apt-get install nginx -y")
    sudo('aptitude install -y python-setuptools')
    sudo('apt-get install python-pip')
    sudo('pip install virtualenv virtualenvwrapper')
For now I only want to deploy to my local machine. When I do this it gives me an error saying:
The command 'setup' failed because the following required environment variable was not defined:
hosts
Try running the following command prior to this one, to fix the problem:
localhost
What does provided_by=[localhost] do here? I guess it should provide information like hosts and user from localhost.
Why am I getting this error?
I'm not sure why that doesn't work, other than that it's not one of the documented ways host lists get constructed. Your options for setting the host value are:
Specify env.hosts = ['127.0.0.1'] globally in your fabfile
Pass the host to fab: fab -H 127.0.0.1 setup
Call the localhost task first: fab localhost setup
Use the @hosts decorator on your setup function (see the sketch below)
See http://docs.fabfile.org/en/1.10/usage/execution.html#how-host-lists-are-constructed
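For instance, a minimal sketch of the decorator option (Fabric 1 API, with a command borrowed from your setup task):

from fabric.api import hosts, sudo

@hosts('127.0.0.1')
def setup():
    # the host list comes from the decorator, so no require/provided_by is needed
    sudo('apt-get update -y')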
fabric.operations.require(*keys, **kwargs):
Check for given keys in the shared environment dict and abort if not
found... The optional keyword argument provided_by may be a list of functions or function names or a single function or function name which the user should be able to execute in order to set the key or keys; it will be included in the error output if requirements are not met.
http://docs.fabfile.org/en/1.10/api/core/operations.html?highlight=require#fabric.operations.require
That's why you get the error message saying to run localhost first, then setup:
fab localhost setup
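In other words, provided_by only names the task(s) that can supply the missing key, so they can be suggested in the error output; it does not run them for you. A stripped-down sketch of the pattern:

from fabric.api import env, require, sudo

def localhost():
    env.hosts = ['127.0.0.1']

def setup():
    # aborts with a hint to run `localhost` first if env.hosts is not set
    require('hosts', provided_by=[localhost])
    sudo('apt-get update -y')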
I have a problem when using Fabric to mimic my SSH workflow to deploy my web application.
Here's my usual flow of commands when I SSH to a server:
SSH using the root user: ssh root@1.2.3.4
Switch to web user: su - web
Change directory: cd /srv/web/prod/abc_project
Start virtualenv: workon abc_env
Perform git pull: git pull origin master
Run a script: build_stuff -m build
Run another script: ./run
I tried to write this as a deploy script in Fabric, but I get a shell prompt when su - web is entered, and I have to hit Ctrl-D to continue the script. I am also unable to activate my virtualenv, because su - web successfully switches the user to web, but the Ctrl-D (needed so the Fabric script can continue) logs out of that user and back to root.
Here's my script:
from fabric.api import env, roles, prefix, settings, run, cd, puts

env.user = 'root'

@roles('web')
def deploy():
    dev_path = '/srv/web/prod'
    app_path = '/srv/web/prod/rhino'
    workon = 'workon rhino_env'
    with prefix('su - web'):
        puts('Switched to `web` user')
        with settings(warn_only=True):
            run('kill -9 `cat /srv/web/run/rhino/rhino.pid`')
            puts('Stopped rhino...')
        with cd(app_path):
            run('git reset --hard HEAD')
            puts('Discarded all untracked and modified files')
            run('git checkout master')
            run('git pull origin master')
        users = run('users')
        puts('Output from `users` command: %s' % users)
        run(workon)
        run('build_assets -m build')
        run('cd %(dev_path)s; chown -R web:ebalu rhino' % {'dev_path': dev_path})
        run('cd %(app_path)s; ./run' % {'app_path': app_path})
        pid = run('cat /srv/web/run/rhino/rhino.pid')
        puts('Rhino started again with pid: %s.' % pid)
...there's one more thing: no, I can't log in as web initially; I have to log in as root. It is the web user that has the virtualenv, not the root user.
First of all, you should use sudo when executing commands as another user. Second, workon sets environment variables for the current shell only. Since Fabric invokes a new shell for every command, you should run workon rhino_env in every command where you need the virtualenv (i.e. as a prefix). With these edits your code should look like this:
from fabric.api import env, roles, prefix, settings, run, sudo, cd, puts

env.user = 'root'

@roles('web')
def deploy():
    dev_path = '/srv/web/prod'
    app_path = '/srv/web/prod/rhino'
    workon = 'workon rhino_env; '
    with settings(warn_only=True):
        run('kill -9 `cat /srv/web/run/rhino/rhino.pid`')
        puts('Stopped rhino...')
    with cd(app_path):
        sudo('git reset --hard HEAD', user='web')
        puts('Discarded all untracked and modified files')
        sudo('git checkout master', user='web')
        sudo('git pull origin master', user='web')
    users = run('users')
    puts('Output from `users` command: %s' % users)
    with prefix(workon):
        sudo('build_assets -m build', user='web')
    with cd(dev_path):
        run('chown -R web:ebalu rhino')
    with cd(app_path):
        sudo('./run', user='web')
    pid = run('cat /srv/web/run/rhino/rhino.pid')
    puts('Rhino started again with pid: %s.' % pid)
The way I achieve this is with:

from fabric.api import settings

with settings(user='otheruser'):
    ...
You will be prompted for the password of otheruser, though only once. So it is not equivalent to sudo su otheruser, where root logs in to the user account without a password, but it is a simple way to switch between users in your script, typing each password only once.
One possible solution is to use the sudo operation instead of changing the remote user with su.
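A minimal sketch of that approach (Fabric 1 API), running a single command as the web user with no interactive su session to escape from:

from fabric.api import sudo

def run_app():
    # equivalent to `sudo -u web ./run`; no Ctrl-D needed
    sudo('./run', user='web')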