I prepared two VM instances with Compute Engine on GCP.
ServerA: data processing; reads from and writes to MySQL on ServerB.
ServerB: MySQL server (f1-micro; this is not Cloud SQL, but a normal VM instance).
I am trying to connect over SSH from A to B in order to read/write the DB on ServerB with the code below.
The code below raises the following error:
error: ERROR | Problem setting SSH Forwarder up: Couldn't open tunnel localhost:3306 <> localhost:3306 might be in use or destination not reachable
sshtunnel.HandlerSSHTunnelForwarderError: An error occurred while opening tunnels.
from sshtunnel import SSHTunnelForwarder
import mysql.connector

# SSH connection
with SSHTunnelForwarder(
    ('PublicIP of ServerA', 22),
    ssh_pkey=SSH_PKEY_PATH,
    ssh_username=SSH_USER,
    remote_bind_address=('localhost', 3306),
    local_bind_address=('localhost', 3306)
) as ssh:
    try:
        # DB connection
        connection = mysql.connector.connect(
            host='localhost',
            port=3306,
            user=MYSQL_USER,
            passwd=MYSQL_PASS,
            db=MYSQL_DB,
            charset='utf8'
        )
        # print(connection.is_connected())

        # Get cursor
        cur = connection.cursor()
        sql = "use dbname"
        cur.execute(sql)
        for i in range(len(sqlList)):
            print("DB Access:" + str(sqlList[i]))
            sql = str(sqlList[i])
            # sql = 'create table test (id int, content varchar(32))'
            cur.execute(sql)
            sqlOUTPUT = cur.fetchall()
            # rows = cur.fetchall()
            # for row in rows:
            #     print(row)
    except mysql.connector.Error as err:
        print("Something went wrong: {}".format(err))
        connection.rollback()
        raise err
    finally:
        # Close cursor
        cur.close()
        # Commit
        connection.commit()
        # Close DB connection
        connection.close()
    return sqlOUTPUT
But at `local_bind_address=('localhost', MYSQL_PORT)` the error above occurs, even though the same code with the same private key goes through from the shell of B and from my local VS Code environment.
I don't understand why it works with the same code from the shell and from VS Code but not on GCE.
Any help?
You might be able to debug this further and rule out any issues with sshtunnel if you try to create the tunnel outside of the script from the client VM, with:
$ gcloud compute ssh server-a --zone=your-zone --ssh-flag='-NL 3306:127.0.0.1:3306' &
Then attempt a connection with:
$ mysql -h 127.0.0.1
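If the tunnel created this way works, one thing worth trying in the script is letting sshtunnel pick a free local port instead of forcing local port 3306, which the error suggests may already be in use. This is only a minimal sketch, assuming the tunnel should target the machine that actually runs MySQL and reusing the placeholder names from the question:

from sshtunnel import SSHTunnelForwarder
import mysql.connector

# Omitting local_bind_address lets sshtunnel choose a free ephemeral port.
with SSHTunnelForwarder(
    ('PublicIP of the MySQL host', 22),
    ssh_pkey=SSH_PKEY_PATH,
    ssh_username=SSH_USER,
    remote_bind_address=('127.0.0.1', 3306)
) as tunnel:
    connection = mysql.connector.connect(
        host='127.0.0.1',
        port=tunnel.local_bind_port,  # whatever port sshtunnel assigned
        user=MYSQL_USER,
        passwd=MYSQL_PASS,
        db=MYSQL_DB,
    )
    # ... run queries as before ...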
import MySQLdb

db = MySQLdb.connect(
    host='12.34.567.891',
    user='root',
    passwd='',
    db='testdb',
    port="something-that-works"
)
Very simple: can I somehow make it so that it connects only to the IP '12.34.567.891'? Google is forwarding the port to 80, but you can't request port 80 or it ends up in an endless loop.
port=null or port=None will cause an error.
I have no issues connecting from my CLI mysql client.
Thank you,
I expected to be able to connect to the server with no issues if I am able to do so from my CLI. I need some way to send the connection request to the raw IP with no port. It may be that python-mysql can't do this.
3306 is the default MySQL port and it seems that you are using MySQL, so that should work. https://cloud.google.com/sql/docs/mysql/connect-overview
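Concretely, with MySQLdb that usually just means omitting the port argument so the client falls back to the default, rather than passing None. A minimal sketch reusing the placeholder values from the question:

import MySQLdb

# Leaving out `port` makes the client use the MySQL default (3306);
# passing None/null is what raises the error.
db = MySQLdb.connect(
    host='12.34.567.891',
    user='root',
    passwd='',
    db='testdb'
)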
You will have an easier time connecting with the Cloud SQL Python Connector, a library built purely for connecting to Cloud SQL with Python.
It looks like this:
import pymysql
import sqlalchemy
from google.cloud.sql.connector import Connector

# build connection
def getconn() -> pymysql.connections.Connection:
    with Connector() as connector:
        conn = connector.connect(
            "project:region:instance",  # Cloud SQL instance connection name
            "pymysql",
            user="my-user",
            password="my-password",
            db="my-db-name"
        )
        return conn

# create connection pool
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)

# insert statement
insert_stmt = sqlalchemy.text(
    "INSERT INTO my_table (id, title) VALUES (:id, :title)",
)

# interact with Cloud SQL database using connection pool
with pool.connect() as db_conn:
    # insert into database
    db_conn.execute(insert_stmt, id="book1", title="Book One")

    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()

    # Do something with the results
    for row in result:
        print(row)
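For reference, the connector is typically installed with something like:
$ pip install "cloud-sql-python-connector[pymysql]"
(check the library's documentation for the exact package name and extras).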
I am trying to achieve the same thing as in the earlier question "psycopg2: How to execute vacuum postgresql query in python script"; however, the recommendation to open an autocommit connection includes a link which is broken.
The code below runs without error, BUT the table is not vacuumed.
How does this need to be written to call the VACUUM FULL correctly?
#!/usr/bin/python
import psycopg2
from config import config

def connect():
    """ Connect to the PostgreSQL database server """
    conn = None
    try:
        # read connection parameters
        params = config()

        # connect to the PostgreSQL server
        conn = psycopg2.connect(**params)
        conn.autocommit=1

        # create a cursor
        cur = conn.cursor()

        # execute Vacuum Full
        cur.execute('Vacuum Full netsuite_display')

        # close the communication with the PostgreSQL
        cur.close()
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
    finally:
        if conn is not None:
            conn.close()
            print('Database connection closed.')

if __name__ == '__main__':
    connect()
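For reference, another commonly suggested way to get an autocommit connection for VACUUM in psycopg2 is to set the isolation level explicitly before creating the cursor. A minimal sketch, assuming the same config() helper and table name as above:

import psycopg2
from psycopg2 import extensions
from config import config

conn = psycopg2.connect(**config())
# VACUUM cannot run inside a transaction block, so switch the
# connection to autocommit before issuing it.
conn.set_isolation_level(extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute('VACUUM FULL netsuite_display')
cur.close()
conn.close()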
I am writing a Python script that will read data from a SQL Server database. For this I have used pyodbc to connect to SQL Server on Windows (my driver is ODBC Driver 17 for SQL Server).
My script works fine, but I need to use a connection pool instead of a single connection to manage resources more effectively. However, the documentation for pyodbc only mentions pooling without providing examples of how connection pooling can be implemented. Any ideas how this can be done in Python while connecting to SQL Server? I have only found solutions for PostgreSQL that use psycopg2, but obviously that does not work for me.
At the moment my code looks like this:
import sys
import time

import pyodbc

def get_limited_rows(size):
    try:
        server = 'here-is-IP-address-of-server'
        database = 'here-is-my-db-name'
        username = 'here-is-my-username'
        password = 'here-is-my-password'
        conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+password)
        cursor = conn.cursor()
        print('Connected to database')
        select_query = 'select APPN, APPD from MAIN'
        cursor.execute(select_query)
        while True:
            records = cursor.fetchmany(size)
            if not records:
                cursor.close()
                sys.exit("Completed")
            else:
                for record in records:
                    print(record)
                time.sleep(10)
    except pyodbc.Error as error:
        print('Error reading data from table', error)
    finally:
        if (conn):
            conn.close()
            print('Database connection closed')
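For reference, pyodbc itself does not ship a pool object (its pooling flag only toggles ODBC driver-level pooling), so one common workaround is to manage a small pool yourself. A minimal sketch, assuming the same connection string as above and a hypothetical run_query helper:

import queue
import pyodbc

CONN_STR = 'DRIVER={ODBC Driver 17 for SQL Server};SERVER=...;DATABASE=...;UID=...;PWD=...'
POOL_SIZE = 5

# Pre-open a fixed number of connections and keep them in a queue.
pool = queue.Queue(maxsize=POOL_SIZE)
for _ in range(POOL_SIZE):
    pool.put(pyodbc.connect(CONN_STR))

def run_query(sql):
    conn = pool.get()          # borrow a connection (blocks if none are free)
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        rows = cursor.fetchall()
        cursor.close()
        return rows
    finally:
        pool.put(conn)         # always return the connection to the pool

print(run_query('select APPN, APPD from MAIN'))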
I am using PyMySQL to connect to a database running on localhost. I can access the database just fine using the username/password combination in both the command line and Adminer, so the database does not appear to be the problem here.
My code is as follows. However, when using the host="127.0.0.1" option, I get an OperationalError with Errno 111. Using the same code but connecting via the socket MariaDB runs on works fine.
import pymysql.cursors
from pprint import pprint

# This causes an OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'localhost' ([Errno 111] Connection refused)")
# connection = pymysql.connect(
#     host="127.0.0.1",
#     port=3306,
#     user="root",
#     password="S3kr37",
#     db="my_test",
# )

# This works.
connection = pymysql.connect(
    user="root",
    password="S3kr37",
    db="my_test",
    unix_socket="/var/lib/mysql/mysql.sock"
)

try:
    with connection.cursor() as cursor:
        sql = "select * from MySuperTable"
        cursor.execute(sql)
        results = cursor.fetchall()
        pprint(results)
finally:
    connection.close()
What am I doing wrong?
PS: Note that this question has the same problem, but the solution offered is the socket. That is not good enough: I want to know why I cannot use the hostname as the documentation suggests.
Error code 2003 (CR_CONN_HOST_ERROR) is returned by the client library when the client wasn't able to establish a TCP connection to the server.
First you should check whether you can connect to your server via telnet or the mysql command line client.
If not, check the server configuration file:
does the server run on port 3306?
is IPv4 disabled?
is skip-networking enabled?
is bind-address activated (with another IP)?
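For reference, these settings usually live in the [mysqld] section of my.cnf (or a file under /etc/mysql/ on many distributions); a typical configuration that allows TCP connections on 127.0.0.1 looks something like this:

[mysqld]
port         = 3306
bind-address = 127.0.0.1    # or 0.0.0.0 to listen on all interfaces
# skip-networking           # must NOT be enabled if you want TCP connections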
I have trouble connecting to Hive running on a remote server through my Python script.
I'm using the same script (with different server details, of course) to connect to Hive running on my localhost and am able to connect.
I'm starting the server on localhost from the command line with:
hive --service hiveserver2
That starts the server, and then I run the Python script.
Script to connect to Hive running on localhost:
import pyhs2

conn = pyhs2.connect(host='localhost', port=10000, authMechanism='PLAIN', user='hive', password='', database='default')
with conn.cursor() as cur:
    cur.execute("show databases")
    for i in cur.fetch():
        print i
Using the above code, I am able to access the DB in Hive on localhost.
I'm using the code below to connect to the remote server; here I'm not doing anything on the command line to start the remote server.
Script to connect to Hive running on the remote server:
conn = pyhs2.connect(host='<my remote server Ip>', port=<port no>, authMechanism='PLAIN', user='<usernameToConnectToRemoteServer>', password="<remoteServerPassword>", database='default')
with conn.cursor() as cur:
    cur.execute("show databases")
    for i in cur.fetch():
        print i
and this returns the message:
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes.
I've tried to google and find a solution as much as I can, but all I see are examples for connecting to localhost. Please help me resolve this.
Try SSHing to your remote machine and then connecting to Hive like below:
import paramiko
import traceback

def hive_query_executor():
    dns_name = ''
    conn_obj = paramiko.SSHClient()
    conn_obj.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        conn_obj.connect(dns_name, username="hadoop",
                         key_filename='')  # or password
        hive_query = "select * from abc limit 10;"
        query_execute_command = 'hive -e "' + hive_query + '"'
        std_in, std_out, std_err = conn_obj.exec_command(query_execute_command)
        print std_out.read()  # wait for the query to finish and show its output
        conn_obj.close()
    except:
        print "Error :" + str(traceback.format_exc())
        exit(0)

hive_query_executor()