I am trying to connect to a redisai server through the redisai-py Client. The server is password protected, and the Client is passed host, port, and password as arguments. However, the client times out on tensorset/tensorget calls even though the constructor returns a client object.
import redisai
r = redisai.Client(host='<host>', port=<port>, password='<password>')
In redis-cli, you would run:
redis-cli
auth <password>
...
which works just fine. There doesn't seem to be a way to perform this action through a redisai-py Client despite it extending the StrictRedis class. Since the Client won't connect without authentication, I cannot access the data.
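For context, the `auth <password>` step in redis-cli is just an ordinary Redis command sent over the wire in RESP format. A dependency-free sketch of that encoding (illustrative only; `encode_resp_command` is a hypothetical helper, not part of redisai-py):

```python
def encode_resp_command(*parts):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [b"*%d\r\n" % len(parts)]
    for part in parts:
        data = part.encode() if isinstance(part, str) else part
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

# The wire form of redis-cli's `auth secret`:
encode_resp_command("AUTH", "secret")
# b'*2\r\n$4\r\nAUTH\r\n$6\r\nsecret\r\n'
```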
The solution to accessing the redisai database involved creating inbound port rules scoped to the VNet that the Azure VM nodes were located on.
When connecting with the redisai Client, the private IP address is used and the port argument is left out.
import redisai
r = redisai.Client(host=<Private IP>)
r.ping()
# PONG
The primary node inbound port rules and the worker inbound port rule were configured accordingly (screenshots not reproduced here).
However, this does not solve the original issue of the client hanging during authentication when the redisai database is exposed but password protected.
I'm trying to create a Python connection to a remote server through an SSH jump host (a connection I've successfully created in Oracle SQL Developer) but can't replicate it in Python. I can connect to the SSH host successfully but fail to forward the port to the remote server, due to a timeout or an error opening tunnels, so it's safe to assume my code is incorrect rather than a server issue. I also need a solution that doesn't use the "with SSHTunnelForwarder() as server:" approach, because I need a continuous session, similar to the OSD/cx_Oracle session, rather than a batch-processing function.
Similar examples provided here (and elsewhere) using paramiko, sshtunnel, and cx_Oracle haven't worked for me. Many other examples don't require (or at least don't clearly specify) separate login credentials for the remote server. I expect the critical unclear piece is which local host + port to use, which my SQL Developer connection doesn't require explicitly (although I've tried using the ports OSD chooses, just not while OSD is running).
The closest match, I think, was the best answer from paramiko-port-forwarding-around-a-nat-router.
OSD Inputs
SSH Host
- host = proxy_hostname
- port = proxy_port = 22
- username = proxy_username
- password = proxy_password
Local Port Forward
- host = remote_hostname
- port = remote_port = 1521
- automatically assign local port = True
Connection
- username = remote_username
- password = remote_password
- connection type = SSH
- SID = remote_server_sid
Python Code
i.e., analogous code from paramiko-port-forwarding-around-a-nat-router
from paramiko import SSHClient

# Instantiate a client and connect to the proxy server
proxy_client = SSHClient()
proxy_client.connect(
    proxy_hostname,
    port=proxy_port,
    username=proxy_username,
    password=proxy_password)

# Get the client's transport and open a `direct-tcpip` channel, passing
# the destination hostname:port and the local hostname:port
transport = proxy_client.get_transport()
dest_addr = (remote_hostname, remote_port)
local_addr = ('localhost', 55587)
channel = transport.open_channel("direct-tcpip", dest_addr, local_addr)

# Create a NEW client and pass this channel to it as the `sock` (along
# with whatever credentials you need to auth into your REMOTE box)
remote_client = SSHClient()
remote_client.connect(
    'localhost',
    port=55587,
    username=remote_username,
    password=remote_password,
    sock=channel)
Rather than a connection to the remote server, I get:
transport.py in start_client()
SSHException: Error reading SSH protocol banner
Solution
Finally figured out a solution! It is analogous to OSD's automatic local port assignment and doesn't require SSHTunnelForwarder's with statement. Hope it can help someone else; use the question's OSD input variables with...
from sshtunnel import SSHTunnelForwarder
import cx_Oracle

# Forward a local port (assigned automatically) to the remote Oracle server
server = SSHTunnelForwarder(
    (proxy_hostname, proxy_port),
    ssh_username=proxy_username,
    ssh_password=proxy_password,
    remote_bind_address=(remote_hostname, remote_port))
server.start()

# Connect through the tunnel: user/password@host:port/sid
db = cx_Oracle.connect('%s/%s@%s:%s/%s' % (
    remote_username, remote_password, 'localhost',
    server.local_bind_port, remote_server_sid))
# do something with db
server.close()
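cx_Oracle accepts an EZConnect-style connect string of the form `user/password@host:port/sid`. A standalone sketch of assembling it, with hypothetical placeholder values standing in for the question's variables (no database needed):

```python
# Hypothetical stand-ins for the question's variables.
remote_username = "scott"
remote_password = "tiger"
local_bind_port = 55587        # at runtime this comes from server.local_bind_port
remote_server_sid = "ORCL"

# EZConnect-style string: user/password@host:port/sid
dsn = '%s/%s@%s:%s/%s' % (remote_username, remote_password,
                          'localhost', local_bind_port, remote_server_sid)
print(dsn)  # scott/tiger@localhost:55587/ORCL
```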
Following is a Python-based RESTful library client (recommended by HPE: https://developer.hpe.com/platform/ilo-restful-api/home) that uses the Redfish REST API (https://github.com/HewlettPackard/python-ilorest-library) to connect to a remote HPE iLO 5 server on ProLiant DL360 Gen10 hardware:
#!/usr/bin/python
import redfish

iLO_host = "https://xx.xx.xx.xx"
username = "admin"
password = "xxxxxx"

# Create a REST object
REST_OBJ = redfish.redfish_client(base_url=iLO_host, username=username,
                                  password=password, default_prefix='/redfish/v1')
# Log into the server and create a session
REST_OBJ.login(auth="session")
# HTTP GET request
response = REST_OBJ.get("/redfish/v1/systems/1", None)
print(response)
REST_OBJ.logout()
I am getting RetriesExhaustedError when creating the REST object. However, I can successfully SSH to the server from the VM (RHEL 7.4) where I am running this script. The authentication details are given correctly. I verified that the web server is enabled (both ports 443 and 80) in the iLO Security - Access settings. Also, on my VM the firewalld service has been stopped and iptables is flushed. But still the connection could not be established. What else can I try?
I found the root cause: the Python code was failing SSL certificate verification. Setting the environment variable PYTHONHTTPSVERIFY=0 before running the script turned verification off and solved the problem.
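If the environment variable can't be set before the interpreter starts, certificate verification can also be disabled from inside the script via the ssl module. This is a sketch of the common workaround (it relies on the private `_create_unverified_context` helper, so treat it as a debugging aid rather than a production fix):

```python
import ssl

# Replace the default HTTPS context factory with one that skips
# certificate verification (process-wide; do this before any HTTPS call).
ssl._create_default_https_context = ssl._create_unverified_context
```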
This is a very old topic, but perhaps it helps other people who have a similar issue when accessing the iLO in any way, not just over Python:
You most likely need to update the firmware of your server so that its TLS support is updated. You will most likely need to use an old browser to do this, as modern versions of Mozilla/Chrome will not work with old TLS versions. I have had luck with Konqueror.
I have Neo4J running in a Docker container in which I have mapped the internal container ports 7473 and 7687 to their respective host ports 7473 and 7687; 7474 is exposed but not mapped.
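For reference, a port mapping like the one described corresponds to a docker run invocation along these lines (a sketch assuming the official neo4j image; volume, auth, and other options omitted):

```shell
docker run -p 7473:7473 -p 7687:7687 neo4j
```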
The Neo4J server configuration regarding networking:
# Bolt connector
dbms.connector.bolt.enabled=true
#dbms.connector.bolt.tls_level=OPTIONAL
dbms.connector.bolt.listen_address=0.0.0.0:7687
# HTTP Connector. There must be exactly one HTTP connector.
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=0.0.0.0:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
dbms.connector.https.listen_address=0.0.0.0:7473
I was able to login to Neo4J's webclient through the browser and change the default password.
Regarding the Python code here's the line where I create the client.
self.client = py2neo.Graph(host=ip_address,
                           username=username,
                           password=password,
                           secure=use_secure,
                           bolt=use_bolt)
As soon as I execute a query like this one:
node = Node("FooBar", foo="bar")
self.client.create(node)
I get the following Unauthorized exception.
py2neo.database.status.Unauthorized: https://localhost:7473/db/data/
Any idea on why this may be happening?
The solution was to call a separate authentication method provided by the library like this:
auth_port = str(self._PORT_HTTPS if use_secure else self._PORT_HTTP)
py2neo.authenticate(":".join([ip_address, auth_port]), username, password)
It took me a while to get to this because, at first, I thought the authentication was done automatically in the constructor, and then I wasn't able to make the authentication method work because I was using the bolt port instead of the HTTP(S) port.
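The port-selection logic above can be captured in a small helper; a sketch using the default Neo4j ports from the configuration shown earlier (`auth_target` is a hypothetical name, not py2neo API):

```python
_PORT_HTTP, _PORT_HTTPS = 7474, 7473

def auth_target(ip_address, use_secure):
    """Build the host:port string passed to py2neo.authenticate
    (the HTTP or HTTPS port, not the bolt port)."""
    port = _PORT_HTTPS if use_secure else _PORT_HTTP
    return ":".join([ip_address, str(port)])

print(auth_target("localhost", True))   # localhost:7473
print(auth_target("localhost", False))  # localhost:7474
```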
I have set up a Redis server on an AWS EC2 instance following https://medium.com/@andrewcbass/install-redis-v3-2-on-aws-ec2-instance-93259d40a3ce
I am running the following Python script on another EC2 instance:
import redis

try:
    conn = redis.Redis(host=<private ip address>, port=6379, db=1)
    user = {"Name": "Pradeep", "Company": "SCTL", "Address": "Mumbai", "Location": "RCP"}
    conn.hmset("pythonDict", user)
    conn.hgetall("pythonDict")
except Exception as e:
    print(e)
In the security groups of the Redis server, I have allowed inbound traffic on port 6379.
While running the above script, I am getting the following error:
Error 111 connecting to 172.31.22.71:6379. Connection refused.
I have already tried changing the bind value in the conf file, as suggested by a few answers to similar questions on Stack Overflow, but it didn't work.
Assuming your other instance is within the same subnet as the Redis instance, my suggestion would be to review a couple of things:
Make sure that among your security group inbound rules, you have the Redis port opened for the subnet, like:
6379 (REDIS) 172.31.16.0/20
From within your Redis configuration (e.g. /etc/redis/redis.conf), in case this hasn't been done, either bind the server to the private IP (bind 172.31.22.71) or simply comment out any existing localhost binding, then restart Redis.
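Before digging further into bind or security-group settings, it can help to confirm basic TCP reachability from the client instance. A minimal sketch using only the standard library (`can_reach` is a hypothetical helper; the IP below is the question's private address):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. can_reach("172.31.22.71", 6379) should become True once the
# security group rule and the Redis bind address are both correct.
```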
Recently I implemented a small captive portal in Python. I redirect users to the login page from DNS requests. All worked fine until I realized that when the DNS server is manually changed on the client system to a public DNS, it totally bypasses the captive portal. My problem is how to redirect users even when the DNS server has been changed, or how to block all outgoing DNS requests that are not using the default DNS.
I was thinking that listening on port 53 with Twisted would capture all requests.
This is a very simple example of how I am doing it:
from twisted.internet.protocol import DatagramProtocol
from twisted.internet import reactor

class UDP(DatagramProtocol):
    def datagramReceived(self, datagram, addr):
        print(datagram, addr)

port = 53
max_byte = 512
reactor.listenUDP(port, UDP(), '', max_byte)
reactor.run()
Am I doing it wrong?
I also tried to block remote port 53 with the firewall on the main machine providing Internet connectivity, but that doesn't work either.
If users are bypassing your captive portal by changing DNS, the issue is that they can route DNS requests around the portal, and therefore there's nothing you can do in the portal. You need to create routing rules which redirect all port 53 traffic on your network to your DNS server, regardless of where they're trying to send it.
The bad news is, you can't do this with Twisted. You need to do this in your router's operating system, using something like iptables.
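On a Linux-based router this kind of redirection is typically a NAT rule rather than application code. A sketch of the usual iptables approach, assuming the portal's resolver listens on 192.168.1.1 and clients sit behind eth1 (both are placeholders for your own network):

```shell
# Rewrite every DNS query leaving the LAN so it lands on the local resolver,
# no matter which DNS server the client has configured.
iptables -t nat -A PREROUTING -i eth1 -p udp --dport 53 -j DNAT --to-destination 192.168.1.1:53
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 53 -j DNAT --to-destination 192.168.1.1:53
```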