Unable to connect to Redis server deployed on Amazon EC2 on port 6379 - Python

I have set up a Redis server on an AWS EC2 instance, following https://medium.com/@andrewcbass/install-redis-v3-2-on-aws-ec2-instance-93259d40a3ce
I am running the following Python script on another EC2 instance:
import redis

try:
    conn = redis.Redis(host=<private ip address>, port=6379, db=1)
    user = {"Name": "Pradeep", "Company": "SCTL", "Address": "Mumbai", "Location": "RCP"}
    conn.hmset("pythonDict", user)
    conn.hgetall("pythonDict")
except Exception as e:
    print(e)
In the security groups of the Redis server, I have allowed inbound traffic on port 6379.
While running the above script, I get the following error:
Error 111 connecting to 172.31.22.71:6379. Connection refused.
I have already tried changing the bind value in the conf file, as suggested by a few answers to similar questions on Stack Overflow, but it didn't work.

Assuming your other instance is within the same subnet as the Redis instance, my suggestion would be to review a couple of things:
Make sure that, among your security group inbound rules, you have the Redis port opened to the subnet, for example:
6379 (REDIS) 172.31.16.0/20
From within your Redis configuration (e.g. /etc/redis/redis.conf), in case this hasn't been done, either bind the server to the private IP (bind 172.31.22.71) or simply comment out any existing localhost binding, then restart Redis.
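After making those changes, a quick way to confirm them from the other instance is a short redis-py check. This is a minimal sketch (not from the original answer), assuming the private IP from the question and the default port:

import redis

# Assumes Redis has been restarted after binding to the private IP and the
# security group now allows inbound TCP 6379 from this subnet.
conn = redis.Redis(host="172.31.22.71", port=6379, db=1, socket_connect_timeout=5)
try:
    conn.ping()  # raises ConnectionError if the bind or security group is still wrong
    print("Redis is reachable")
except redis.exceptions.ConnectionError as e:
    print("Still refused:", e)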

Related

redisai Client password/auth process

I am trying to connect to a redisai server through the redisai-py Client. The server is password protected and the Client is passed host, port, and password as arguments. However, the client times out on a tensorset/get even though it returns a connection object.
import redisai
r = redisai.Client(host='<host>', port=<port>, password='<password>')
In redis-cli, you would run:
redis-cli
auth <password>
...
which works just fine. There doesn't seem to be a way to perform this action through a redisai-py Client despite it extending the StrictRedis class. Since the Client won't connect without authentication, I cannot access the data.
The solution to accessing the redisai database involved creating inbound port rules scoped directly to the VNet that the Azure VM nodes were located on.
When connecting with the redisai Client, the private IP address is used and the port argument is left out.
import redisai
r = redisai.Client(host=<Private IP>)
r.ping()
# PONG
(The primary node and worker inbound port rules were shown as screenshots in the original answer.)
However, this does not solve the issue of the client hanging during authentication when the redisai database is exposed but requires a password.
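One debugging step, not part of the original answer, is to authenticate with plain redis-py against the same endpoint, since redisai.Client extends it; if this also hangs, the problem is in the network path or the server's auth setup rather than in redisai-py. The host and password below are placeholders:

import redis

# Hypothetical placeholders; substitute the actual private IP and password.
r = redis.Redis(host="<Private IP>", port=6379, password="<password>",
                socket_connect_timeout=5)
print(r.ping())  # True only if both the network path and AUTH succeed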

Cannot connect remotely to PostgreSQL database hosted on Amazon EC2 instance

I have a PostgreSQL database hosted on my EC2 instance. I am trying to connect to it from my local computer using Python:
import psycopg2

try:
    connection = psycopg2.connect(user="postgres",
                                  password="<password>",
                                  host="ec2-***-***-***-***.***-***-1.compute.amazonaws.com",
                                  port="5432",
                                  database="<db_name>")
    cursor = connection.cursor()
    cursor.execute(f"SELECT * FROM <tablename>;")
    record = cursor.fetchall()
    print(record, "\n")
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL", error)
finally:
    try:
        if connection:
            cursor.close()
            connection.close()
            print("PostgreSQL connection is closed")
    except:
        print("no work")
But I get
Error while connecting to PostgreSQL could not connect to server: Connection timed out (0x0000274C/10060)
Is the server running on host "ec2-***-***-***-***.***-***-1.compute.amazonaws.com" (**.**.**.***) and accepting
TCP/IP connections on port 5432?
no work
My pg_hba.conf file looks like this
# TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     trust
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             0.0.0.0/0               md5
# IPv6 local connections:
host    all             all             ::1/128                 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            ident
#host    replication     postgres        ::1/128                 ident
and my postgresql.conf file looks like
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*'          # what IP address(es) to listen on;
                                # comma-separated list of addresses;
                                # defaults to 'localhost'; use '*' for all
                                # (change requires restart)
port = 5432                     # (change requires restart)
I have an EC2 security group (shown as a screenshot in the original post).
What am I doing wrong? I am a complete newbie, any help appreciated!
Connection timeout usually indicates that you cannot reach the host or are being blocked by a firewall.
First try to ping the host from your machine. If it pings all right but the connection still times out, then there is probably a firewall between you and your EC2 instance blocking port 5432. Your EC2 security group screenshot looks right to me.
Are you behind a firewall that might be blocking outbound sessions from your local computer to the Internet?
After some troubleshooting over chat, we found that the AWS Security Group allowing TCP/5432 inbound was not assigned to the EC2 instance.
There are many possible causes, so I would start with a trusted DB client like DBeaver and attempt to make the connection from your local machine to rule out python issues.
Depending on your setup, you may have a second incoming firewall (iptables, etc) running inside your ec2 instance that needs to be configured or disabled.
Log into the EC2 console and see if you can connect to the server locally: run a client like psql inside the EC2 instance and attempt to connect with localhost:5432 as the target.
You may also need to enable connection logging in postgresql.conf so that the server generates log entries; those can tell you whether the server is being reached, and what the problem is.
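A quick way to separate "network blocked" from "PostgreSQL refusing" before touching any config is a raw TCP check from the local machine. This is a minimal sketch (not from the original answers), reusing the redacted hostname and port from the question:

import socket

host = "ec2-***-***-***-***.***-***-1.compute.amazonaws.com"  # as in the question
port = 5432

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((host, port))
    print("TCP connect succeeded: the port is reachable; look at pg_hba.conf/authentication next")
except socket.timeout:
    print("Timed out: a security group, NACL, or local firewall is dropping the traffic")
except ConnectionRefusedError:
    print("Refused: the host is reachable but nothing is listening on 5432 (check listen_addresses)")
finally:
    s.close()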

Translate Oracle SQL Developer SSH Host w/ Local Port Forward Connection to Python

I'm trying to create a Python connection to a remote server through an SSH jump host (one I've successfully created in Oracle SQL Developer) but can't replicate it in Python. I can connect to the SSH host successfully but fail to forward the port to the remote server, due to a timeout or an error opening tunnels. It is safe to assume my code is incorrect rather than a server issue. I also need a solution that doesn't use the "with SSHTunnelForwarder() as server:" approach, because I need a continuous session similar to the OSD/cx_Oracle session rather than a batch processing function.
Similar examples provided here (and elsewhere) using paramiko, sshtunnel, and cx_Oracle haven't worked for me. Many other examples don't require (or at least clearly specify) separate login credentials for the remote server. I expect the critical unclear piece is which local host + port to use, which my SQL Developer connection doesn't require explicitly (although I've tried using the ports OSD chooses, not at the same time).
The closest match, I think, was the best answer to paramiko-port-forwarding-around-a-nat-router.
OSD Inputs
SSH Host
- host = proxy_hostname
- port = proxy_port = 22
- username = proxy_username
- password = proxy_password
Local Port Forward
- host = remote_hostname
- port = remote_port = 1521
- automatically assign local port = True
Connection
- username = remote_username
- password = remote_password
- connection type = SSH
- SID = remote_server_sid
Python Code
i.e., analogous code from paramiko-port-forwarding-around-a-nat-router
import paramiko
from paramiko import SSHClient

# Instantiate a client and connect to the proxy server
proxy_client = SSHClient()
proxy_client.connect(
    proxy_hostname,
    port=proxy_port,
    username=proxy_username,
    password=proxy_password)

# Get the client's transport and open a `direct-tcpip` channel passing
# the destination hostname:port and the local hostname:port
transport = proxy_client.get_transport()
dest_addr = (remote_hostname, remote_port)
local_addr = ('localhost', 55587)
channel = transport.open_channel("direct-tcpip", dest_addr, local_addr)

# Create a NEW client and pass this channel to it as the `sock` (along
# with whatever credentials you need to auth into your REMOTE box)
remote_client = SSHClient()
remote_client.connect(
    'localhost',
    port=55587,
    username=remote_username,
    password=remote_password,
    sock=channel)
Rather than a connection to the remote server, I get:
transport.py in start_client()
SSHException: Error reading SSH protocol banner
Solution
Finally figured out a solution! It is analogous to OSD's automatic local port assignment and doesn't require SSHTunnelForwarder's with statement. Hope it can help someone else. Use the question's OSD input variables with:
from sshtunnel import SSHTunnelForwarder
import cx_Oracle

server = SSHTunnelForwarder(
    (proxy_hostname, proxy_port),
    ssh_username=proxy_username,
    ssh_password=proxy_password,
    remote_bind_address=(remote_hostname, remote_port))
server.start()

db = cx_Oracle.connect('%s/%s@%s:%s/%s' % (remote_username, remote_password,
                                           'localhost', server.local_bind_port,
                                           remote_server_sid))
# do something with db

server.close()

Python paramiko, Connection Timed Out Error While Establishing SSH Connection

I am working on a project in Python that will create an Amazon EC2 instance and establish SSH and SFTP connections to transfer files and commands between my machine and the EC2 instance.
So I began to code, and wrote the function that creates an EC2 instance using the boto3 library:
# creating a file named sefa.pem that will store the private key
outfile = open('sefa.pem', 'w')
keypair = ec2.meta.client.create_key_pair(KeyName='sefakeypair') # creates key pair
keyout= str(keypair['KeyMaterial']) # reads the key material
outfile.write(keyout) # writes the key material in sefa.pem
# creates the instance finally
response = ec2.create_instances(ImageId='ami-34913254', MinCount=1, MaxCount=1, InstanceType='t2.micro')
After that, I need to establish an SSH connection between my machine and the EC2 instance to send commands, and I also need to transfer files back and forth between my machine and the EC2 instance.
After some research, I found out that there is a Python library called paramiko for establishing SSH and SFTP connections between my computer and the EC2 instance.
I tried to establish an SSH connection between my computer and the EC2 instance, but I have been facing the "[Errno 110] Connection timed out" error for a day. I have been searching the internet for hours, but I couldn't find anything useful.
Here is the code that raises the "Connection timed out" error:
con = paramiko.SSHClient() # ssh client using paramiko library
con.set_missing_host_key_policy(paramiko.AutoAddPolicy()) # automatically add unknown host keys
k = paramiko.RSAKey.from_private_key_file("sefa.pem") # k reads sefa.pem and stores private key
time.sleep(30) # added this because ec2 should do 2/2 checks before connecting
print("connecting")
con.connect(hostname=PUB_DNS, username="ubuntu", pkey=k, look_for_keys=True) # HERE IS THE ERROR, I CAN'T CONNECT
print("connected")
stdin, stdout, stderr = con.exec_command('echo "TEST"')
print(stdout.readlines())
con.close()
I cannot go any further without establishing a connection between my machine and the EC2 instance.
Do you have any suggestions to solve this problem?
Is there any alternative library to paramiko?
I managed to solve the problem; it was with my EC2 instance. These steps solved the issue:
Make sure that the instance's security group allows inbound SSH so you can reach the SSH daemon (see the sketch after this list).
Make sure that you have the keypair that you created while creating the instance.
Make sure that you execute chmod 400 keypair.pem
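For the first point, here is a minimal boto3 sketch for adding the inbound SSH rule; the security group ID is hypothetical, and you may want a narrower CIDR than 0.0.0.0/0:

import boto3

ec2_client = boto3.client('ec2')

# Hypothetical group ID; use the security group actually attached to the instance.
ec2_client.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '0.0.0.0/0', 'Description': 'SSH access'}],
    }])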
I was facing the same error and here is how I solved it:
1. Install openssh-client on your client VM and openssh-server on your server VM.
2. Don't execute ssh@ip_address, as this will log you in to your host VM, and then the IP on both your client and server will be the same; this is the main reason for the error.
3. Instead, use
ssh-keyscan ip_address >> ~/.ssh/known_hosts
so the host key is in known_hosts and the original IP remains.

SSH server routes tunnel by user

Diagram of what I'm trying to accomplish:
$ sftp joe@gatewayserver.horse

SSHCLIENT_JOE --------> GATEWAY_SERVER (does logic by username to determine
                            |           socket to forward the connection to.)
                            |
                            |        127.0.0.1:1030  CONTAINER_SSHD_SALLY
                            \ ---->  127.0.0.1:1031  CONTAINER_SSHD_JOE
                                     127.0.0.1:1032  CONTAINER_SSHD_MRAYMOND
This seems closest to what I'm trying to do:
paramiko server mode port forwarding
http://bitprophet.org/blog/2012/11/05/gateway-solutions/
But instead of the client doing a ProxyCommand or requesting a "direct-tcpip" channel, I want the forwarding to be done by the server, invisibly for the client.
I have been trying to do this with a paramiko server by taking the Transport object of the connecting client and making a direct-tcpip channel on behalf of the client, but I'm running into roadblocks.
I'm using https://github.com/paramiko/paramiko/blob/master/demos/demo_server.py as a template:
# There's a ServerInterface class definition that overrides check_channel_request
# (allowing for direct-tcpip and session), and the other expected overrides like
# check_auth_password, etc. that I'm leaving out for brevity.
try:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', 2200))
except Exception as e:
    print('*** Bind failed: ' + str(e))
    traceback.print_exc()
    sys.exit(1)

try:
    sock.listen(100)
    print('Listening for connection ...')
    client, addr = sock.accept()
except Exception as e:
    print('*** Listen/accept failed: ' + str(e))
    traceback.print_exc()
    sys.exit(1)

print('Got a connection!')

try:
    t = paramiko.Transport(client)
    t.add_server_key(host_key)
    server = Server()
    try:
        t.start_server(server=server)
    except paramiko.SSHException:
        print('*** SSH negotiation failed.')
        sys.exit(1)

    # Waiting for authentication.. returns an unwanted channel object since
    # it isn't the right "kind" of channel
    unwanted_chan = t.accept(20)

    dest_addr = ("127.0.0.1", 1030)   # target container port
    local_addr = ("127.0.0.1", 1234)  # arbitrary port on gateway server

    # Trying to put words in the client's mouth here.. fails
    # What should I do?
    print(" Attempting creation of direct-tcpip channel on client Transport")
    tunnel_chan = t.open_channel("direct-tcpip", dest_addr, local_addr)
    print("tunnel_chan created.")

    tunnel_client = SSHClient()
    tunnel_client.load_host_keys(host_key)
    print("attempting connection using tunnel_chan")
    tunnel_client.connect("127.0.0.1", port=1234, sock=tunnel_chan)
    stdin, stdout, stderr = tunnel_client.exec_command('hostname')
    print(stdout.readlines())
except Exception as e:
    print('*** Caught exception: ' + str(e.__class__) + ': ' + str(e))
    traceback.print_exc()
    try:
        t.close()
    except:
        pass
    sys.exit(1)
current output:
Read key: bc1112352a682284d04f559b5977fb00
Listening for connection ...
Got a connection!
Auth attempt with key: 5605063f1d81253cddadc77b2a7b0273
Attempting creation of direct-tcpip channel on client Transport
*** Caught exception: <class 'paramiko.ssh_exception.ChannelException'>: (1, 'Administratively prohibited')
Traceback (most recent call last):
  File "./para_server.py", line 139, in <module>
    tunnel_chan = t.open_channel("direct-tcpip", dest_addr, local_addr)
  File "/usr/lib/python2.6/site-packages/paramiko/transport.py", line 740, in open_channel
    raise e
ChannelException: (1, 'Administratively prohibited')
We currently have a straightforward SFTP server where clients connect and are chroot-ed to their respective FTP directories.
We want to move the clients into LXC containers but don't want to alter how they connect to SFTP (since they are probably using GUI FTP clients like FileZilla). I also don't want to create a bridge interface and assign new IPs to all the containers, so the containers don't have separate IPs from the host; they share the same network space.
The client containers' sshds would bind to separate ports on localhost. That way they can have unique ports, and the logic of which port is chosen could conceptually be moved out to a simple server on the physical host.
This is more of a proof-of-concept, and general curiosity on my part.
As I mentioned in a comment above, I don't know anything about paramiko, but I can comment on this from an OpenSSH perspective, and perhaps you can translate the concepts to what you need.
NAT was mentioned in comments. NAT is something done at a lower level than SSH, not something that would be set up on the basis of an SSH login (SOCKS5 notwithstanding). You'd implement it in your firewall, not in your SSH configuration. The way ProxyCommand works is to negotiate the SSH connection, then hand the client to the next hop saying "Here, negotiate with this guy too." It's something implemented right inside the SSH protocol.
You may not be totally out of luck.
A standard ProxyCommand setup might look like this, with the target port specified on the client side:
host joecontainer
User joe
ProxyCommand ssh -x -a -q -Wlocalhost:1031 gatewayserver.horse
An older fashioned version of this might have used Netcat:
host joecontainer
User joe
ProxyCommand ssh -x -a -q gatewayserver.horse nc localhost 1031
The idea here is that nc localhost 1031 is the command which provides SSH access to the "next hop" in the SSH chain. You could run any command here as long as the result of that command is a connection to an SSH daemon.
But you want the port selection to be handled by the GATEWAY rather than by the client. And therein lies a bit of a crunch, because the SSH daemon only uses the target username to select which user account's authorized_keys file to read. It's the keys which are important, not the user. By the time the server gets around to running a script or command associated with a user, the SSH negotiation is complete, and it's too late to forward the connection on to the next hop.
So ... you might consider having everyone connect to a common user, and then have the port selection done on the basis of SSH key. This is the way, for example, gitolite handles users. In your case, Joe and Sally could both connect to common@gatewayserver.horse using their DSA or RSA key.
The fun part is that all your port selection gets handled within the "common" user's .ssh/authorized_keys file. The file would look something like this:
command="/usr/bin/nc localhost 1030",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa ... sally@office
command="/usr/bin/nc localhost 1031",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa ... joe@home
You can read about this under the "AUTHORIZED_KEYS FILE FORMAT" section of the sshd(8) man page.
To use this technique, we still need a client-side ProxyCommand, but because port selection happens server-side based on the client's key, everyone can use exactly the same ProxyCommand:
host mycontainer
ProxyCommand ssh -xaq common@gatewayserver.horse
Sally and Joe will run ssh-keygen to create a key pair if they haven't already. They'll send you the public key which you'll add to ~common/.ssh/authorized_keys using the format above.
When Joe connects using his key, the ssh server only runs the nc command associated with his key. And because of the ProxyCommand, that netcat's output is interpreted as the "next hop" for SSH.
I've tested this with sftp (running on my eventual target, akin to your container) and it appears to work for me.
SSH is magic. :-)
Attempting creation of direct-tcpip channel on client Transport
*** Caught exception: <class '[...]'>: (1, 'Administratively prohibited')
The container ssh server is rejecting your direct-tcpip channel request because it has been configured to refuse these requests. I gather the intent here is to proxy SFTP sessions to the correct container? And I imagine the container SSH server has been configured in the usual fashion to only permit these people to do SFTP? SFTP sessions go through a session channel, not a direct-tcpip channel.
I'm not a python coder and can't give you the specific paramiko code, but your relay agent should open a session channel to the container server and invoke the "sftp" subsystem. And if possible, your relay agent should only do this when the remote client requested an SFTP session, not for other types of channel requests.
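Not from the original answer, but here is a rough paramiko sketch of that last suggestion: the relay opens its own session channel to the per-user container sshd and invokes the "sftp" subsystem. The port and credentials are hypothetical; a real relay would choose the port from the authenticated username and shuttle bytes between this channel and the remote client's channel instead of using SFTPClient locally.

import paramiko

# Relay-side connection to the container sshd listening on localhost
# (hypothetical port/credentials, for illustration only).
container = paramiko.SSHClient()
container.set_missing_host_key_policy(paramiko.AutoAddPolicy())
container.connect("127.0.0.1", port=1031, username="joe", password="<password>")

# Open a session channel (not direct-tcpip) and request the "sftp" subsystem.
chan = container.get_transport().open_session()
chan.invoke_subsystem("sftp")

# The channel now speaks SFTP; wrapping it shows the subsystem is up.
sftp = paramiko.SFTPClient(chan)
print(sftp.listdir("."))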
