SSH through multiple hosts with Python Fabric - python

I don't really understand how to do this.
I have access to a local computer, say 192.168.1.101, as some_user. From that computer I have access to another machine (via VPN), 10.0.132.17, and only from there can I reach 10.0.132.15, where I need to deploy my script.
So I need to:
$ ssh some_user@192.168.1.101 -> ssh another_user@10.0.132.17 -> ssh another_user@10.0.132.15
Can I somehow do ssh some_user@192.168.1.101 -p 2222 and end up with access to another_user@10.0.132.15?
Or is there some env variable I can set in Python Fabric to achieve this?

An alternative to using an explicit tunnel is to set up ssh to transparently forward through your proxies. Put something like the following in your ~/.ssh/config:

Host proxy_midstage
    User another_user
    HostName 10.0.132.17
    ProxyCommand ssh -q some_user@192.168.1.101 nc %h %p

Host proxy_final
    User another_user
    HostName 10.0.132.15
    ProxyCommand ssh -q proxy_midstage nc %h %p

Then the command ssh proxy_final will jump you straight to the deployment server. Presumably Fabric can use that, though I'm not positive.
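Fabric 1.x can be told to honour the same ~/.ssh/config entries via env.use_ssh_config, so a fabfile can simply target the proxy_final alias. A minimal sketch under that assumption (the deploy task body is made up):

from fabric.api import env, run, task

env.use_ssh_config = True      # let Fabric read ~/.ssh/config, ProxyCommand included
env.hosts = ['proxy_final']    # the alias defined above

@task
def deploy():
    # runs on 10.0.132.15, reached via 192.168.1.101 and 10.0.132.17
    run('hostname')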

Related

Connect to database through double SSH tunnel using Python Paramiko

Here is my ssh command that establishes a tunnel to a Postgres database by proxying through a bastion host:
ssh -i C:/public/keys/my_key.pem -o "ProxyCommand ssh -W %h:%p username1@bastion_host.com" username2@ssh_host_ip -N -L 12345:postgres_host.com:5432 ssh_host_ip
I want to convert it into a Python script using the sshtunnel utility, but I'm having a hard time figuring out what to pass where in the utility, as depicted:
in example 4 at this link: https://github.com/pahaz/sshtunnel#example-4
or this link: Double SSH tunnel within Python
I went through a few posts on Stack Overflow but did not see a straightforward way of doing it. Developers are using agent forwarding as a substitute for ProxyCommand. Any straightforward conversion of the above command to sshtunnel, Paramiko, or any other Pythonic approach would be really helpful.
Based on Connecting to PostgreSQL database through SSH tunneling in Python, the following should do:

from sshtunnel import SSHTunnelForwarder
from sqlalchemy import create_engine

# First hop: from this machine to the bastion, forwarding to the SSH host's port 22
with SSHTunnelForwarder(
        'bastion_host',
        ssh_username="username1", ssh_password="password1",
        remote_bind_address=('ssh_host_ip', 22)) as bastion:

    # Second hop: through the first tunnel on to the Postgres host
    with SSHTunnelForwarder(
            ('127.0.0.1', bastion.local_bind_port),
            ssh_username="username2", ssh_pkey="C:/public/keys/my_key.pem",
            remote_bind_address=('postgres_host', 5432)) as ssh:

        engine = create_engine(
            'postgresql://<db_username>:<db_password>@127.0.0.1:' +
            str(ssh.local_bind_port) + '/database_name')
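For completeness, a sketch of how the resulting engine might be used. It has to run while both tunnels are still up, i.e. inside the inner with block, and assumes a reasonably recent SQLAlchemy:

from sqlalchemy import text

# run this while both tunnels are still up (i.e. inside the inner `with` block)
with engine.connect() as conn:
    print(conn.execute(text("SELECT version()")).scalar())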

How to initiate SSH connection from within a Fabric run command

I have a remote server, say 1.2.3.4, which is running a docker container that serves SSHD mapped to port 49222 on the docker host, so to connect to it manually I would do:
workstation$ ssh 1.2.3.4 -t "ssh root@localhost -p 49222" and arrive at the docker container's SSH command prompt root@f383b4f71eeb:~#
If I run a fabric command which triggers run('ssh root@localhost -p 49222'), then I am instead asked for the root password. However, it does not accept the root password, which I know to be correct, so I suspect the password prompt is coming from the host and not the docker container.
I defined the following task in my fabfile.py:

@task
def ssh():
    env.forward_agent = True
    with settings(output_prefix=False, forward_agent=True):
        run('ssh root@localhost -p 49222')
And in the remote server's sshd_config I needed to set:
AllowAgentForwarding yes
In addition, output_prefix=False is useful for removing the [hostname] run: prefix that Fabric adds to the start of every line, which gets annoying on every line of a remote shell.
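If you want a fully interactive session rather than a single run() call, Fabric 1.x also provides open_shell(), which copes better with prompts; a minimal sketch (the task name is made up):

from fabric.api import env, task
from fabric.operations import open_shell

@task
def container_shell():
    env.forward_agent = True
    # opens an interactive shell on the remote host; from there you can
    # type: ssh root@localhost -p 49222
    open_shell()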

Python ssh tunneling over multiple machines with agent

A little context is in order for this question: I am making an application that copies files/folders from one machine to another in Python. The connection must be able to go through multiple machines. I quite literally have the machines connected in serial, so I have to hop through them until I get to the correct one.
Currently, I am using Python's subprocess module (Popen). As a very simplistic example I have:
import subprocess
# need to set strict host checking to no since we connect to different
# machines over localhost
tunnel_string = "ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string.split())
# Do work, copy files etc. over ssh on localhost with port 9999
proc.terminate()
My question:
When doing it like this, I cannot seem to get agent forwarding to work, which is essential in something like this. Is there a way to do this?
I tried using the shell=True keyword in Popen like so
tunnel_string = "eval `ssh-agent` && ssh-add && ssh -oStrictHostKeyChecking=no -L9999:127.0.0.1:9999 -ACt machine1 ssh -L9999:127.0.0.1:22 -ACt -N machineN"
proc = subprocess.Popen(tunnel_string, shell=True)
# etc
The problem with this is that the names of the machines are given by user input, meaning they could easily inject malicious shell code. A second problem is that I then have a new ssh-agent process running every time I make a connection.
I have a nice function in my bashrc which identifies already running ssh-agents, sets the appropriate environment variables and adds my ssh key, but of course subprocess cannot reference functions defined in my bashrc. I tried setting executable="/bin/bash" together with shell=True in Popen, to no avail.
You should give Fabric a try.
It provides a basic suite of operations for executing local or remote
shell commands (normally or via sudo) and uploading/downloading files,
as well as auxiliary functionality such as prompting the running user
for input, or aborting execution.
The program below will give you a test run.
First install Fabric with pip install fabric, then save the code below in fabfile.py:
from fabric.api import *

env.hosts = ['server url/IP']   # change to your server
env.user = 'username'           # username for the server
env.password = 'password'       # password for the server

def run_interactive():
    with settings(warn_only=True):
        cmd = 'clear'
        while cmd != 'stop fabric':
            run(cmd)
            cmd = raw_input('Command to run on server: ')

Change to the directory containing your fabfile and run fab run_interactive; each command you enter will then be run on the server.
I tested your first simplistic example and agent forwarding worked. The only thing that I can see that might cause problems is that the environment variables SSH_AGENT_PID and SSH_AUTH_SOCK are not set correctly in the shell you execute your script from. You might use ssh -v to get a better idea of where things are breaking down.
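As for the shell-injection worry in the question: if you stay with subprocess, passing the argv as a list (no shell=True) and quoting only the user-supplied pieces sidesteps it. A rough sketch with hypothetical host values, using shlex.quote (pipes.quote on Python 2):

import shlex
import subprocess

machine1 = "user@host1"   # user-supplied, hypothetical
machineN = "user@hostN"

# The outer ssh gets a real argv, so no local shell ever parses the host names;
# only the nested ssh command is a single string, with its host quoted.
inner = "ssh -L9999:127.0.0.1:22 -ACt -N {}".format(shlex.quote(machineN))
proc = subprocess.Popen([
    "ssh", "-oStrictHostKeyChecking=no",
    "-L9999:127.0.0.1:9999", "-ACt", machine1,
    inner,
])

# ... do work over localhost:9999 ...
proc.terminate()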
Try setting up an SSH config file: https://linuxize.com/post/using-the-ssh-config-file/
I frequently am required to tunnel through a bastion server and I use a configuration like so in my ~/.ssh/config file. Just change the host and user names. This also presumes that you have entries for these host names in your hosts (/etc/hosts) file.
Host my-bastion-server
    Hostname my-bastion-server
    User user123
    AddKeysToAgent yes
    UseKeychain yes
    ForwardAgent yes

Host my-target-host
    HostName my-target-host
    User user123
    AddKeysToAgent yes
    UseKeychain yes
I then gain access with syntax like:
ssh my-bastion-server -At 'ssh my-target-host -At'
And I issue commands against my-target-host like:
ssh my-bastion-server -AT 'ssh my-target-host -AT "ls -la"'
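If you later need those Host entries from Python rather than from the ssh command line, paramiko can parse the same file; a small sketch, assuming the config above is in place:

import os
import paramiko

# parse ~/.ssh/config and resolve one of the aliases defined above
config = paramiko.SSHConfig()
with open(os.path.expanduser("~/.ssh/config")) as f:
    config.parse(f)

entry = config.lookup("my-target-host")
print(entry.get("hostname"), entry.get("user"), entry.get("identityfile"))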

Blocking certain ip's if exceeds 'tries per x'

I have a server that has to handle lots of TCP requests from GPRS modules, and I think it would be handy to set up something to protect the server from excessive requests from certain IPs.
Now I want to make something (within Python) that will check how many times a certain IP tries to connect, and if this exceeds a given number of tries, that IP will be blocked for a given amount of time (or forever).
I am wondering if there are libraries for this, or how I should tackle this problem in my code.
Don't tackle this from your code - this is what a firewall is designed to do.
Using iptables it's trivial:
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --update --seconds 600 --hitcount 2 -j DROP
The above means "drop anything that makes more than 2 connection attempts in 10 minutes at port $PORT"
If you decide you do want to handle this in code, you don't need a separate library (although using one will probably be more efficient); you can add something like the following to your connection handler:

from collections import defaultdict, deque
from datetime import datetime, timedelta

TIMELIMIT = timedelta(minutes=10)    # window in which connections are counted
MAX_CONNECTIONS_PER_TIMELIMIT = 2    # connections allowed per window

floodlog = defaultdict(deque)

def checkForFlood(clientIP):
    """Check how many times clientIP has connected within TIMELIMIT, and block it if more than MAX_CONNECTIONS_PER_TIMELIMIT."""
    now = datetime.now()
    clientFloodLog = floodlog[clientIP]
    clientFloodLog.append(now)
    if len(clientFloodLog) > MAX_CONNECTIONS_PER_TIMELIMIT:
        earliestLoggedConnection = clientFloodLog.popleft()
        if now - earliestLoggedConnection < TIMELIMIT:
            blockIP(clientIP)   # blocking mechanism is up to you
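To show where such a check could plug in, here is a sketch using the standard library's socketserver: verify_request() runs before the handler, so a flooding client can be refused there (BLOCKED and blockIP are hypothetical glue, not part of the snippet above):

import socketserver

BLOCKED = set()

def blockIP(clientIP):
    BLOCKED.add(clientIP)   # or shell out to iptables here instead

class GuardedTCPServer(socketserver.TCPServer):
    def verify_request(self, request, client_address):
        ip = client_address[0]
        checkForFlood(ip)            # the function defined above
        return ip not in BLOCKED     # False = drop the connection, handler never runs

# usage: GuardedTCPServer(("0.0.0.0", 9999), MyHandler).serve_forever()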
As Burhan Khalid said, you don't want to try this in your code. It's not very performant, and that's what firewalls are made for.
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport $PORT -i eth0 -m state --state NEW -m recent --update --seconds 600 --hitcount 2 -j DROP
This example is very useful but not very handy: the problem is that you're also limiting good/trusted connections.
You need to be more flexible. On a Linux-based OS you can use fail2ban. It's a very handy tool for protecting your services from brute-force attacks using dynamic iptables rules. On Debian/Ubuntu you can install it with apt-get; on CentOS you need to use a third-party repository.
Log every connection into a logfile:
[Jun 3 03:52:23] server [pid]: Connect from 1.2.3.4
[Jun 3 03:52:23] server [pid]: Failed password for $USER from 1.2.3.4 port $DST
[Jun 3 03:52:23] server [pid]: Connect from 2.3.4.5
[Jun 3 03:52:23] server [pid]: Successful login from 2.3.4.5
Now monitor this file with fail2ban and define a regex to distinguish successful from failed logins. Tell fail2ban how long it should block the IP for you and whether you would like an email notification.
The documentation is very good, so have a look at how to configure fail2ban to monitor your logfile: fail2ban docu
You don't have to watch only for failed logins. You can also watch for port scans. And the biggest win: don't secure only your application; also protect your SSH, HTTP, etc. logins from being brute-forced! ;)
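If your service does not already write such a logfile, the standard logging module can produce lines in roughly that shape for fail2ban to match; a minimal sketch (the exact format string is an assumption, so adjust your fail2ban regex to whatever you actually log):

import logging

logging.basicConfig(
    filename="connections.log",
    format="[%(asctime)s] server [%(process)d]: %(message)s",
    datefmt="%b %d %H:%M:%S",
    level=logging.INFO,
)

def log_connect(ip):
    logging.info("Connect from %s", ip)

def log_failed_login(user, ip, port):
    logging.info("Failed password for %s from %s port %s", user, ip, port)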
For a pure Python solution, I think you could reuse something I developed for the same problem, but from the client's point of view: avoiding issuing more than 'x tries per sec' to a service provider.
The code is available on GitHub: you can probably reuse most of it, but you'll need to replace the time.sleep call with your 'blacklisting' mechanism.

Connecting to EC2 using keypair (.pem file) via Fabric

Does anyone have a Fabric recipe that shows how to connect to EC2 using a .pem file?
I tried writing it in this manner:
Python Fabric run command returns "binascii.Error: Incorrect padding"
But I'm running into an encoding issue when I execute the run() function.
To use the pem file I generally add the pem to the ssh agent, then simply refer to the username and host:
ssh-add ~/.ssh/ec2key.pem
fab -H ubuntu#ec2-host deploy
or specify the env information (without the key) like the example you linked to:
env.user = 'ubuntu'
env.hosts = [
    'ec2-host'
]
and run as normal:
fab deploy
Without addressing your encoding issue, you might put your EC2 stuff into an ssh config file:
~/.ssh/config
or, if global:
/etc/ssh_config
There you can specify your host, IP address, user, identity file, etc., so it's a simple matter of:
ssh myhost
Example:
Host myhost
    User ubuntu
    HostName 174.129.254.215
    IdentityFile ~/.ssh/mykey.pem
For more details: man ssh_config
Another thing you can do is set key_filename on Fabric's env: https://stackoverflow.com/a/5327496/1729558
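A minimal sketch of that key_filename approach with Fabric 1.x (the host name is a placeholder):

from fabric.api import env, run, task

env.user = 'ubuntu'
env.hosts = ['ec2-host']                 # placeholder host
env.key_filename = '~/.ssh/ec2key.pem'   # point Fabric at the .pem directly

@task
def deploy():
    run('uname -a')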
