Connecting to EC2 using a keypair (.pem file) via Fabric - Python

Does anyone have a Fabric recipe that shows how to connect to EC2 using a .pem file?
I tried writing it in the manner described here:
Python Fabric run command returns "binascii.Error: Incorrect padding"
but I'm running into an encoding issue when I execute the run() function.

To use the pem file I generally add the pem to the ssh agent, then simply refer to the username and host:
ssh-add ~/.ssh/ec2key.pem
fab -H ubuntu@ec2-host deploy
or specify the env information (without the key) like the example you linked to:
env.user = 'ubuntu'
env.hosts = ['ec2-host']
and run as normal:
fab deploy

Without addressing your encoding issue, you might put your EC2 stuff into an ssh config file:
~/.ssh/config
or, if global:
/etc/ssh/ssh_config
There you can specify your host, IP address, user, identity file, etc., so it's a simple matter of:
ssh myhost
Example:
Host myhost
    User ubuntu
    HostName 174.129.254.215
    IdentityFile ~/.ssh/mykey.pem
For more details: man ssh_config

Another thing you can do is set the key_filename in the env variable: https://stackoverflow.com/a/5327496/1729558
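For example, a minimal sketch assuming Fabric 1.x (the host name and key path are placeholders based on the question):
from fabric.api import env, run

env.user = 'ubuntu'
env.hosts = ['ec2-host']                  # placeholder host
env.key_filename = '~/.ssh/ec2key.pem'    # keypair file, handed straight to the SSH layer

def deploy():
    run("uname -a")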

Related

Ansible with Github: Permission denied (Publickey)

I'm trying to understand the GitHub ssh configuration with Ansible (I'm working on the Ansible: Up & Running book). I'm running into two issues.
Permission denied (publickey) -
When I first ran the ansible-playbook mezzanine.yml playbook, I got a permission denied:
failed: [web] => {"cmd": "/usr/bin/git ls-remote '' -h refs/heads/HEAD", "failed": true, "rc": 128}
stderr: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
msg: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
FATAL: all hosts have already failed -- aborting
Ok, fair enough, I see several people have had this problem. So I jumped to appendix A on running Git with SSH, and it said to run the ssh-agent and add the id_rsa key:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa
Output: Identity added. I ran ssh-add -l to check and got the long fingerprint: 2048 e3:fb:... But I got the same output. So I checked the GitHub docs on SSH key generation and troubleshooting, which recommended updating the ssh config file on my host machine:
Host github.com
    User git
    Port 22
    Hostname github.com
    IdentityFile ~/.ssh/id_rsa
    TCPKeepAlive yes
    IdentitiesOnly yes
But this still provides the same error. So at this point, I start thinking it's my rsa file, which leads me to my second problem.
Key Generation Issues - I tried to generate an additional key to use, because the GitHub test threw another "Permission denied (publickey)" error.
Warning: Permanently added the RSA host key for IP address '192.30.252.131' to the list of known hosts.
Permission denied (publickey).
I followed the Github instructions from scratch and generated a new key with a different name.
ssh-keygen -t rsa -b 4096 -C "me@example.com"
I didn't enter a passphrase and saved it to the .ssh folder with the name git_rsa.pub. I ran the same test and got the following:
$ ssh -i ~/.ssh/git_rsa.pub -T git@github.com
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for '/Users/antonioalaniz1/.ssh/git_rsa.pub' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: ~/.ssh/github_rsa.pub
Permission denied (publickey).
I checked the permissions and did a chmod 700 on the file, and I still get Permission denied (publickey). I even attempted to enter the key into my GitHub account, but first got a message that the key file needs to start with ssh-rsa. So I started researching and hacking: I tried entering just the long string from the file (it started with --BEGIN PRIVATE KEY--, but I omitted that part after it failed); however, GitHub won't accept it, saying it's invalid.
This is my Ansible command in the YAML file:
- name: check out the repository on the host
  git: repo={{ repo_url }} dest={{ proj_path }} accept_hostkey=yes
  vars:
    repo_url: git@github.com:lorin/mezzanine-example.git
This is my ansible.cfg file with ForwardAgent configured:
[defaults]
hostfile = hosts
remote_user = vagrant
private_key_file = .vagrant/machines/default/virtualbox/private_key
host_key_checking = False
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes
The box is an Ubuntu Trusty64 using Mac OS. If anyone could clue me into the file permissions and/or Github key generation, I would appreciate it.
I suspect the key permissions issue is because you are passing the public key instead of the private key as the argument to "ssh -i". Try this instead:
ssh -i ~/.ssh/git_rsa -T git@github.com
(Note that it's git_rsa and not git_rsa.pub).
If that works, then make sure it's in your ssh-agent. To add:
ssh-add ~/.ssh/git_rsa
To verify:
ssh-add -l
Then check that Ansible respects agent forwarding by doing:
ansible web -a "ssh-add -l"
Finally, check that you can reach GitHub via ssh by doing:
ansible web -a "ssh -T git#github.com"
You should see something like:
web | FAILED | rc=1 >>
Hi lorin! You've successfully authenticated, but GitHub does not provide shell access.
I had the same problem, it took me some time, but I have found the solution.
The problem is the URL is incorrect.
Just try to change it to:
repo_url: git://github.com/lorin/mezzanine-example.git
I ran into this issue and discovered it by turning verbosity up on the ansible commands (very very useful for debugging).
Unfortunately, ssh often throws error messages that don't quite point you in the right direction ("permission denied" is very generic, though to be fair it is often thrown when there is a file permission issue, so perhaps not quite so generic). Anyway, running the Ansible test command with verbosity on helps recreate the issue as well as verify when it is solved:
ansible -vvv all -a "ssh -T git@github.com"
Again, the setup I use (and a typical one) is to load your ssh key into the agent on the control machine and enable forwarding.
The steps are found here: GitHub's helpful ssh docs
It also stuck out to me that when I ssh'd to the box itself via the vagrant ssh command and ran the test, it succeeded. So I had narrowed it down to how Ansible was forwarding the connection. For me, what eventually worked was setting
[paramiko_connection]
record_host_keys = False
in addition to the other config option that controls host key verification,
host_key_checking = False
which essentially adds
-o StrictHostKeyChecking=no
and
-o UserKnownHostsFile=/dev/null
to the ssh args for you,
found here:
Ansible issue 9442
Again, this was on Vagrant VMs; on actual servers, host key verification deserves more careful consideration.
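Putting those settings together, the relevant ansible.cfg sections look something like this (a sketch appropriate for disposable Vagrant VMs, not production hosts):
[defaults]
host_key_checking = False

[paramiko_connection]
record_host_keys = False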
Hope this helps

How to do multihop ssh with fabric

I have a NAT gateway, and behind it various servers.
From my local machine I need to ssh to the NAT, and from the NAT ssh again to the other machines:
Local --> NAT (abcuser@publicIP with key 1) --> server1 (xyzuser@localIP with key 2)
The NAT has its own ssh key, and each of the servers has a different ssh key.
How can I accomplish this kind of multihop ssh using Fabric?
I tried using the env.roledefs feature, but it doesn't seem to be working.
I am also not sure how to define two ssh keys. I know we can define a list of keys with env.key_filename, but will it try each key against each server? How can I be more specific and match a key to one server only?
I have tried running this command from my local machine:
fab deploy -g 'ec2-user@54.251.151.39' -i '/home/aman/Downloads/aws_oms.pem'
and my script is:
from __future__ import with_statement
from fabric.api import local, run, cd, env, execute

env.hosts = ['ubuntu@10.0.0.77']
env.key_filename = ['/home/ec2-user/varnish_cache.pem']

def deploy():
    run("uname -a")
It's possible. The example below double-hops to 10.0.0.2 (and lists files) via the gateway hop 10.0.0.1. Basically, you simply nest the connections with the gateway parameter.
# coding: utf-8
from fabric import Connection
path = '/'
conn1 = Connection(host='user1@10.0.0.1', connect_kwargs={'password': '***'})
conn2 = Connection(host='user2@10.0.0.2', connect_kwargs={'password': '***'}, gateway=conn1)
result = conn2.run(f'''cd {path} && ls -al''', hide=True)
conn2.close()
conn1.close()
msg = "Ran {0.command!r} on {0.connection.host}, got stdout:\n{0.stdout}"
print(msg.format(result))
Please remember to run the SSH connection manually once beforehand, to introduce the servers to each other (i.e. to accept the host keys)!
Install via
pip3 install --upgrade fabric
pip3 install cryptography==2.4.2 # optional to hide some annoying warnings
http://docs.fabfile.org/en/latest/concepts/networking.html
Requires Python 3.6+ (for the f-string).
In order to connect to remote hosts via an intermediate server, you can use the --gateway command-line option:
http://docs.fabfile.org/en/latest/usage/fab.html#cmdoption-g
Or, alternatively, set the env.gateway variable inside your fabfile:
http://docs.fabfile.org/en/latest/usage/env.html#gateway
For more detailed information, see:
http://docs.fabfile.org/en/stable/concepts/networking.html#ssh-gateways
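With the hosts from the question, that might look like the following. This is a minimal sketch assuming Fabric 1.x; the key paths are placeholders, and when env.key_filename lists several keys, each one is simply tried in turn against every host:
from fabric.api import env, run

env.gateway = 'abcuser@publicIP'    # the NAT hop (key 1)
env.hosts = ['xyzuser@localIP']     # the server behind the NAT (key 2)
env.key_filename = ['~/.ssh/key1.pem', '~/.ssh/key2.pem']   # placeholder paths, tried in turn

def deploy():
    run("uname -a")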

Python ssh tunneling over multiple machines with agent

A little context is in order for this question: I am making an application that copies files/folders from one machine to another in python. The connection must be able to go through multiple machines. I quite literally have the machines connected in serial so I have to hop through them until I get to the correct one.
Currently, I am using python's subprocess module (Popen). As a very simplistic example I have
import subprocess
# need to set strict host checking to no since we connect to different
# machines over localhost
tunnel_string = "ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string.split())
# Do work, copy files etc. over ssh on localhost with port 9999
proc.terminate()
My question:
When doing it like this, I cannot seem to get agent forwarding to work, which is essential in something like this. Is there a way to do this?
I tried using the shell=True keyword in Popen like so
tunnel_string = "eval `ssh-agent` && ssh-add && ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string, shell=True)
# etc
The problem with this is that the names of the machines are given by user input, meaning a user could easily inject malicious shell code. A second problem is that a new ssh-agent process is started every time I make a connection.
I have a nice function in my bashrc which identifies already-running ssh-agents, sets the appropriate environment variables, and adds my ssh key, but of course subprocess cannot reference functions defined in my bashrc. I tried setting executable="/bin/bash" together with shell=True in Popen, to no avail.
You should give Fabric a try.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
The program below will give you a test run.
First install Fabric with pip install fabric, then save the code below in fabfile.py.
from fabric.api import *

env.hosts = ['server-url-or-IP']   # change to your server
env.user = 'your_username'         # username for the server
env.password = 'your_password'     # password

def run_interactive():
    with settings(warn_only=True):
        cmd = 'clear'
        while cmd != 'stop fabric':
            run(cmd)
            cmd = raw_input('Command to run on server: ')
Change to the directory containing your fabfile and run fab run_interactive; each command you enter will then be run on the server.
I tested your first simplistic example and agent forwarding worked. The only thing that I can see that might cause problems is that the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are not set correctly in the shell from which you execute your script. You might use ssh -v to get a better idea of where things are breaking down.
Try setting up a SSH config file: https://linuxize.com/post/using-the-ssh-config-file/
I am frequently required to tunnel through a bastion server, and I use a configuration like the one below in my ~/.ssh/config file. Just change the host and user names. This also presumes that you have entries for these host names in your /etc/hosts file.
Host my-bastion-server
    Hostname my-bastion-server
    User user123
    AddKeysToAgent yes
    UseKeychain yes
    ForwardAgent yes

Host my-target-host
    HostName my-target-host
    User user123
    AddKeysToAgent yes
    UseKeychain yes
I then gain access with syntax like:
ssh my-bastion-server -At 'ssh my-target-host -At'
And I issue commands against my-target-host like:
ssh my-bastion-server -AT 'ssh my-target-host -AT "ls -la"'
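If, as in the original question, the host names come from user input, one way to avoid shell=True on the local side is to quote only the nested command yourself. A sketch, reusing the placeholder host names from above (shlex.quote is Python 3.3+; on Python 2 use pipes.quote):
import shlex
import subprocess

bastion = "my-bastion-server"   # user-supplied values (placeholders)
target = "my-target-host"
remote_cmd = "ls -la"

# The outer ssh is invoked without a local shell; only the bastion's shell
# re-parses the inner string, whose pieces are safely quoted.
inner = "ssh -AT {} {}".format(shlex.quote(target), shlex.quote(remote_cmd))
subprocess.check_call(["ssh", "-AT", bastion, inner])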

"Operation not Permitted" for Redis

I am developing on a Mac which already has Redis installed. By default it doesn't have a redis.conf, so the default settings were used when I ran $ redis-server:
# Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
I am trying to use redis-py and have the following
import redis
r = redis.Redis('localhost')
r.set('foo','bar')
r.get('foo')
but I got the following error:
redis.exceptions.ResponseError: operation not permitted
I also tried $ redis-cli ping in the terminal, but then I get the following error:
(error) ERR operation not permitted
I suppose that since there is no redis.conf, the default settings don't include a password, right? Anyway, I also tried to create a redis.conf:
$ echo "requirepass foobared" >> redis.conf
$ redis-server redis.conf
then on another window
$ redis-cli
redis 127.0.0.1:6379> AUTH foobared
(error) ERR invalid password
I also modified the second line of the Python script to:
r = redis.StrictRedis(host='localhost', port=6379, db=0, password='foobared')
but then I got
redis.exceptions.ResponseError: invalid password
What could I be doing wrong? Thanks
Without any redis.conf file, Redis uses the default built-in configuration, so the default database file (dump.rdb) and the append-only file are created in the directory the server was started from. Maybe the user running Redis does not have write permission on this directory.
So either grant that permission, or define another working directory using a config file.
You'll find a default config file for Redis 2.6 here.
You must modify this line in the file:
# The working directory.
dir ./
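For example, you could point Redis at a directory your user can certainly write to (the path is just an illustration; config directives can also be passed on the command line):
$ mkdir -p ~/redis-data
$ redis-server --dir ~/redis-data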

ssh python fabric

I don't actually understand how to do this.
I have access to a local computer, for example 192.168.1.101, as some_user. From that computer I have access (via VPN) to another machine, 10.0.132.17, and only from there can I reach the computer 10.0.132.15 where I need to deploy my script.
So I need to:
$ ssh some_user@192.168.1.101 -> ssh another_user@10.0.132.17 -> ssh another_user@10.0.132.15
Can I somehow do ssh some_user@192.168.1.101 -p 2222 and get access to another_user@10.0.132.15?
Or can I somehow set an env variable in Python Fabric?
An alternative to an explicit tunnel is to set up ssh to transparently forward through your proxies. Put something like the following in your ~/.ssh/config:
Host proxy_midstage
    User another_user
    HostName 10.0.132.17
    ProxyCommand ssh -q some_user@192.168.1.101 nc %h %p

Host proxy_final
    User another_user
    HostName 10.0.132.15
    ProxyCommand ssh -q proxy_midstage nc %h %p
Then the command ssh proxy_final will jump you straight to the deployment server. Presumably Fabric can use that, though I'm not positive.
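Fabric 1.x can, via env.use_ssh_config. A minimal sketch, assuming a Fabric version recent enough to honor ProxyCommand from the ssh config, where proxy_final is the Host alias defined above:
from fabric.api import env, run

env.use_ssh_config = True     # let Fabric read ~/.ssh/config (User, HostName, ProxyCommand)
env.hosts = ['proxy_final']   # the alias from the config above

def deploy():
    run("uname -a")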
