I am having an issue connecting to on-prem Oracle databases from AWS EKS. The error I get is:
"ORA-12545: Connect failed because target host or object does not exist".
The exact same script, from the exact same Docker image, works fine when run from any other platform (just not EKS).
I verified that the firewall rules are in place by obtaining a shell on the pod and using telnet to the target. We do get a basic TCP connection, but any attempt to connect with the cx_Oracle client fails with the error above.
Below is my cx_Oracle code. I don't think there is an issue with how the connection string is built, because the same code works fine on other platforms; only the EKS container seems to have problems making a connection.
import cx_Oracle

dsn_tns = cx_Oracle.makedsn(host, port, service_name=service_name)
connection = cx_Oracle.connect(user, password, dsn_tns, encoding='utf-8')
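ORA-12545 means the client failed to resolve a host. A common cause inside EKS is that the database listener redirects the client to a second hostname (for example a RAC SCAN or VIP name) that the pod's DNS cannot resolve, which is why the initial telnet succeeds but the Oracle handshake fails. A minimal check to run from inside the pod, assuming you can get the redirect hostname from the DBA or a listener trace (the hostname here is a placeholder):

```python
import socket

def check_resolution(hostname):
    # Returns the resolved IPv4 address, or an explanation if DNS lookup
    # fails -- an unresolvable hostname is the usual trigger for ORA-12545.
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror as exc:
        return f"cannot resolve {hostname!r}: {exc}"
```

If the redirect hostname does not resolve in the pod, adding it via `hostAliases` in the pod spec or fixing cluster DNS is the usual remedy.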
Related
I have a client computer (currently Windows 10, soon to be Ubuntu Server) connected through a LAN Ethernet interface to a remote DB server using SQLAlchemy. The same client also has a Wi-Fi interface connected to the Internet to provide external access to the client. A Python script on the client reads the remote DB server and updates a local DB with records of interest.
Everything works fine while the Wi-Fi interface is disconnected, but SQLAlchemy's engine.connect() throws a connection error as soon as the Wi-Fi becomes active.
The question is how to force the connection to be done through the Ethernet interface for the following commands:
engine = create_engine(url)
engine.connect()
I am expecting some sort of default routing configuration for the SQLAlchemy engine, or a workaround that does not involve SQLAlchemy.
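SQLAlchemy has no routing knob of its own: the OS picks the outgoing interface from its routing table based on the destination IP. The usual non-SQLAlchemy workaround is a static route for the DB server's address via the Ethernet gateway (e.g. `ip route add <db_ip>/32 via <eth_gateway>` on Linux). If you want to see the socket-level mechanism instead, it is the `source_address` bind shown below; note this is a sketch of the mechanism, not something psycopg2 or SQLAlchemy expose directly, and `connect_from` is a hypothetical helper name:

```python
import socket

def connect_from(host, port, source_ip, timeout=5):
    # Binding the outgoing socket to the Ethernet interface's own IP forces
    # the kernel to send the traffic out through that interface.
    return socket.create_connection((host, port), timeout=timeout,
                                    source_address=(source_ip, 0))
```

In practice the static route is the cleaner fix, since it applies to every connection the script makes without touching the code.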
I have whitelisted the EC2 instance's IP address, but without a 0.0.0.0/0 rule I can't connect to the Redshift database from psycopg2. What can be done when we want to allow requests from a particular IP address only?
Is this a bug that only appears when you connect programmatically?
Edit:
I tried whitelisting my local IP address, ran the program on my local machine, and it connected. But I still can't connect from my application hosted on the EC2 instance.
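If the EC2 instance and the Redshift cluster are in the same VPC, a common approach is to authorize the EC2 instance's security group as the source instead of an IP address, which also avoids surprises when the instance's egress IP differs from its public IP. A config fragment sketching this with the AWS CLI (both group IDs are hypothetical placeholders):

```shell
# Allow inbound Redshift traffic (port 5439) on the cluster's security group
# from any instance that carries the EC2 instance's security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0redshift0example \
    --protocol tcp \
    --port 5439 \
    --source-group sg-0ec2instance0ex
```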
I'm working on a simple Python program to query a Redshift cluster using psycopg2. When I run the code on my local machine it works as expected: it creates the connection, it runs the queries and I get the expected outcome. However, I loaded it on my EC2 instance because I want to schedule several runs a week and the execution fails with the following error:
psycopg2.OperationalError: could not connect to server: Connection timed out
Is the server running on host "xxxx" and accepting
TCP/IP connections on port 5439?
Considering that the code works without problems on the local machine, and that the security settings should be the same for EC2, do you have any suggestions and/or workarounds?
Thanks a lot.
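A "Connection timed out" (rather than "connection refused") from psycopg2 almost always means packets are being dropped before they reach the cluster, i.e. a security-group or routing problem rather than a code problem. A small probe like the one below, run from the EC2 instance against the cluster endpoint on port 5439, can confirm that (`can_reach` is a hypothetical helper name):

```python
import socket

def can_reach(host, port, timeout=5):
    # Plain TCP probe: a timeout suggests a security group or routing block,
    # while a successful connect means the service is answering on that port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the probe times out from EC2 but succeeds locally, check that the cluster's security group allows inbound 5439 from the instance's security group or private IP, and that the cluster is marked publicly accessible if you are connecting over the Internet.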
I am currently unable to connect to an MSSQL server running in a Docker container on my Mac laptop using any CLI tools or pyodbc. Connecting to and interacting with the database via pyodbc is the goal. Strangely, using Azure Data Studio I can connect without issue.
I followed these two tutorials to install SQL Server on my Mac and then restore an old backup of an existing server:
https://database.guide/how-to-install-sql-server-on-a-mac/
https://database.guide/how-to-restore-a-sql-server-database-on-a-mac-using-azure-data-studio/
Microsoft's suggested sqlcmd statement did not work for me
sqlcmd -S <ip_address>,1433 -U SA -P '<YourNewStrong!Passw0rd>'
that was documented here: https://learn.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker?view=sql-server-2017&pivots=cs1-bash
When I first installed Docker, I did not increase the memory allocation to 4 GB. I did so after my container was running, then restarted both the container and the Mac itself. My understanding is that the memory should now be available to the container.
I have tried mssql, sqlcmd, and pyodbc with various connection parameters, but none of them work for me.
I CAN connect through Azure Data Studio with the following connection information:
Connection Type: Microsoft SQL Server
Server: localhost,1401
Authentication type: Sql Login
Username: sa
password:
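For reference, the ADS settings above translate into a pyodbc connection string roughly like the one built below. This is a sketch: the driver name is an assumption (match whatever driver is actually installed, e.g. as reported by `odbcinst -j`), and `mssql_conn_str` is a hypothetical helper name:

```python
def mssql_conn_str(server, port, user, password, database='master',
                   driver='ODBC Driver 17 for SQL Server'):
    # Builds the keyword-style connection string pyodbc expects; note the
    # comma (not a colon) between host and port, matching sqlcmd's -S syntax.
    return (f'DRIVER={{{driver}}};SERVER={server},{port};'
            f'DATABASE={database};UID={user};PWD={password}')
```

Usage would then be `pyodbc.connect(mssql_conn_str('localhost', 1401, 'sa', 'TestPW123$'))`, mirroring the working ADS settings.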
Based on my success interacting with the database through ADS, I've been assuming that there are errors with the connection parameters I'm passing to the CLI tools, but at this point I think I've tried just about every permutation I can think of. I've included some connection attempts and the errors they throw.
I have read many other GitHub and Stack Overflow tickets; the usual causes of this issue seem to be people not running their container or not using a sufficiently complicated password. Neither appears to apply in my circumstance.
$ sqlcmd -S 172.17.0.2,1433 -U SA -P <TestPW123$>
Result:
SqlState HYT00, Login timeout expired
HResult 0x102, Level 11, State 0
TCP Provider: Error code 0x102
HResult 0x102, Level 11, State 0
A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.
$ sqlcmd -S localhost,1433 -U SA -P <TestPW123$>
RESULT:
sqlcmd(992,0x112ef45c0) malloc: can't allocate region
*** mach_vm_map(size=18446744073709527040) failed (error code=3)
$ mssql -s localhost -o 1433 -u sa -p <TestPW123$>
Result:
Connecting to localhost...
Error: Failed to connect to localhost:1433 - connect ECONNREFUSED 127.0.0.1:1433
I ended up solving this by creating a new Docker container and restoring the database to it. I was never able to figure out what the original issue was, but the CLI tools are all working now.
I followed the Microsoft tutorial carefully, and when trying to connect from outside the container, I used the internal network IP of my laptop. In my case it was 192.168.0.4, not the 127.0.0.2 IPs that some tutorials referenced. I also wrapped my password in single quotes on the successful connection. Here's the successful command:
$ sqlcmd -S 192.168.0.4,1433 -U SA -P 'TestPW123$'
Hopefully this helps someone else.
I have a snippet of code that allows me to connect to my psql DB via ssh in Python. It works perfectly on Ubuntu 18.10 (via VirtualBox) but fails every time on Windows with an error that it can't reach the remote host and port.
I've been developing a user interface that can query data from a remote DB (logs etc.) and visualize it.
All of the development has been done using Spyder3 on Ubuntu 18.10. I never had an issue until I tried to execute the same code on Windows 10.
I tried telnet to both localhost:port and the remote host:port (via ssh), and both work. Having looked up all the possible answers on Stack Overflow and elsewhere, I still haven't been able to fix the issue. The fact that it works in one environment and not the other, on the same machine, tells me it's some sort of environment setting, but I don't know what it could be.
The code:
import logging

import psycopg2
from sshtunnel import SSHTunnelForwarder

logging.basicConfig(level=logging.DEBUG)

PORT = 5432
REMOTE_HOST = '111.222.111.222'
REMOTE_SSH_PORT = 22

server = SSHTunnelForwarder(
    (REMOTE_HOST, REMOTE_SSH_PORT),
    ssh_username='username',
    ssh_password='password',
    remote_bind_address=('localhost', PORT),
    local_bind_address=('localhost', PORT),
)
server.start()

conn = psycopg2.connect(database='db_name', user='db_username',
                        password='db_password', host='127.0.0.1', port='5432')
curs = conn.cursor()
Expected:
A successful connection to ssh and subsequent successful log-in to the database. This works on Ubuntu 18.10 via VirtualBox on the same machine.
Actual result:
2019-01-02 10:54:51,489 ERROR Problem setting SSH Forwarder up: Couldn't open tunnel localhost:5432 <> localhost:5432 might be in use or destination not reachable
I realized that my local postgres (psql) service was interfering with the port mapping as it was also using port 5432. Once I disabled the service, it worked like a charm.
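An alternative to disabling the local service is to avoid the collision entirely by letting the OS pick a free local port for the tunnel. A sketch, assuming the same sshtunnel/psycopg2 setup as in the question (`free_port` is a hypothetical helper name):

```python
import socket

def free_port():
    # Bind to port 0 so the OS assigns an unused TCP port, then release it;
    # the returned number can be used for the tunnel's local bind address.
    with socket.socket() as s:
        s.bind(('127.0.0.1', 0))
        return s.getsockname()[1]
```

You would then pass `local_bind_address=('localhost', free_port())` to SSHTunnelForwarder and connect psycopg2 with `port=server.local_bind_port` instead of a hard-coded 5432, so a local Postgres on 5432 can keep running.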
I may be wrong, but I think remote_bind_address should be set to your server's private IP, as this is the address the remote machine uses to communicate back to your machine:
remote_bind_address=(<PRIVATE_SERVER_IP>, PORT)