Accessing MySQL database from Python hosted on cloud

I have a MySQL server installed locally, and Python code that accesses the database and executes a simple query:
from mysql.connector import connect
from mysql.connector import ProgrammingError

DB = {
    'user': 'andrei',
    'password': 'qwertttyy',
    'host': 'localhost',
    'port': '3306',
    'db': 'my_database'
}

class Connection:
    instance = None
    def __new__(cls):
        if not cls.instance:
            try:
                cls.instance = connect(**DB)
            except:
                raise
        return cls.instance

def executeDQL(query):
    cnx = Connection()
    cursor = cnx.cursor()
    try:
        cursor.execute(query)
        return cursor.fetchall()
    except ProgrammingError as err:
        print('You have an error in your MySQL syntax. Please check and retry')
        return []

if __name__ == '__main__':
    while True:
        query = input('Enter a SQL query: ')
        for row in executeDQL(query):
            print(row)
If I go out there, find a cloud MySQL hosting service, and pay for it, would access be as easy as changing the DB mapping to the new connection info?
I think it should be, because the connection would still run over standard TCP/IP; it just happens that, locally, the traffic comes back to the same machine that sent it. My guess is that, under the hood, the data is packed following TCP/IP rules down to the IP layer, and the IP packets travel from the Python process through the OS networking API to the MySQL server listening on the port, without further processing down into the network access layer, since the packets never leave the machine. As I understand it, the purpose of the access layer of the TCP/IP stack is to abstract the physical road the data takes.
Did I say something coherent in my guessing?
If I'm wrong, how can I put a MySQL server in the cloud?

Yes, how you connect to the database would not change. It is as simple as changing the host name and providing whatever credentials you need (access token, user info, etc.). The way you insert data doesn't change once you make a connection to the DB.
Here is a good script which should provide some info: https://gist.github.com/kirang89/7161185
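For illustration, a minimal sketch of the change; the cloud host name and credentials below are hypothetical placeholders for whatever your provider hands you:
from mysql.connector import connect

# Same code as the local version; only the DB mapping changes.
DB = {
    'user': 'andrei',
    'password': 'a-strong-password',                  # placeholder
    'host': 'mysql-1234.cloud-provider.example.com',  # was 'localhost'
    'port': '3306',
    'db': 'my_database'
}

cnx = connect(**DB)
print(cnx.is_connected())  # True once the remote server accepts the connection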

Related

Issue Connecting to GCP SQL with python-mysql: don't want to use a port

import MySQLdb

db = MySQLdb.connect(
    host='12.34.567.891',
    user='root',
    passwd='',
    db='testdb',
    port="something-that-works")
Very simple: can I somehow make it connect using only the IP '12.34.567.891'? Google is forwarding the port to 80, but you can't request port 80 or it ends up in an endless loop.
Passing port=None (or null) causes an error.
I have no issues connecting from my CLI mysql client.
Thank you,
I expected to be able to connect to the server without issues, since I can do so from my CLI. I need some way to send the connection request to the raw IP with no port; it may be that python-mysql can't do this.
3306 is the default MySQL port and it seems that you are using MySQL, so that should work. https://cloud.google.com/sql/docs/mysql/connect-overview
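A minimal sketch of that suggestion, keeping the placeholder IP from the question; note that MySQLdb expects port as an integer, not a string:
import MySQLdb

db = MySQLdb.connect(
    host='12.34.567.891',  # placeholder IP from the question
    user='root',
    passwd='',
    db='testdb',
    port=3306)             # MySQL's default port, as an int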
You will have an easier time connecting with the Cloud SQL Python Connector, a library built specifically for connecting to Cloud SQL with Python.
It looks like this:
import pymysql
import sqlalchemy
from google.cloud.sql.connector import Connector

# build connection
def getconn() -> pymysql.connections.Connection:
    with Connector() as connector:
        conn = connector.connect(
            "project:region:instance",  # Cloud SQL instance connection name
            "pymysql",
            user="my-user",
            password="my-password",
            db="my-db-name"
        )
        return conn

# create connection pool
pool = sqlalchemy.create_engine(
    "mysql+pymysql://",
    creator=getconn,
)

# insert statement
insert_stmt = sqlalchemy.text(
    "INSERT INTO my_table (id, title) VALUES (:id, :title)",
)

# interact with Cloud SQL database using connection pool
with pool.connect() as db_conn:
    # insert into database
    db_conn.execute(insert_stmt, id="book1", title="Book One")

    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()

    # Do something with the results
    for row in result:
        print(row)

Does psycopg2.connect inherit the proxy set in this context manager?

I have a Django app below that uses a proxy to connect to an external Postgres database. I had to replace another package with psycopg2, and it works fine locally, but it doesn't work when I move onto our production server, which is a Heroku app using QuotaguardStatic for proxy purposes. I'm not sure what's wrong here.
For some reason, the psycopg2.connect part returns an error showing a different IP address. Is it not inheriting the proxy set in the context manager? What would be the right way to make it do so?
import os
import logging

import psycopg2
import requests
from psycopg2.extras import RealDictCursor

from apps.proxy.socks import Socks5Proxy

logger = logging.getLogger(__name__)

PROXY_URL = os.environ['QUOTAGUARDSTATIC_URL']

with Socks5Proxy(url=PROXY_URL) as p:
    public_ip = requests.get("http://wtfismyip.com/text").text
    print(public_ip)  # prints the expected IP address
    print('end')
    try:
        # EXTERNAL_DB_* values come from settings elsewhere in the app
        connection = psycopg2.connect(user=EXTERNAL_DB_USERNAME,
                                      password=EXTERNAL_DB_PASSWORD,
                                      host=EXTERNAL_DB_HOSTNAME,
                                      port=EXTERNAL_DB_PORT,
                                      database=EXTERNAL_DB_DATABASE,
                                      cursor_factory=RealDictCursor  # to access query results like a dictionary
                                      )  # , ssl_context=True
    except psycopg2.DatabaseError as e:
        logger.error('Unable to connect to Illuminate database')
        raise e
Error is:
psycopg2.OperationalError: FATAL: no pg_hba.conf entry for host "12.345.678.910", user "username", database "databasename", SSL on
Basically, the IP address 12.345.678.910 does not match what was printed at the beginning of the context manager where the proxy is set. Do I need to set the proxy some other way so that the psycopg2 connection uses it?

pyodbc can't connect to database

I'm using the pyodbc library from here, and I'm connecting this way:
conn = pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};Server=(localdb)\MSSQLLocalDB;Integrated Security=true; database = online_banking; autocommit = True')
I use MSSQLLocalDB because it's the default instance name for SQL Server 2014, and the latest version of Python 2.7.
However, I can't run even a simple query; every one of them raises an error saying there is no such object, or in this particular case, no such database:
cursor.execute('use online_banking;')
The full error:
pyodbc.Error: ('08004', "[08004] [Microsoft][SQL Server Native Client 11.0][SQL Server]Database 'online_banking' does not exist. Make sure that the name is entered correctly. (911) (SQLExecDirectW)")
So what is wrong here?
There is only one instance installed, with these databases (.mdf). As you can see (screenshot omitted), there is only one engine, and selecting that engine lets me see the online_banking DB.
Update 1: The database was created this way:
CREATE DATABASE [online_banking]
ON PRIMARY
( NAME = N'online_banking', FILENAME = N'C:\...\online_banking.mdf' ,
SIZE = 512000KB , MAXSIZE = UNLIMITED, FILEGROWTH = 30%)
LOG ON
( NAME = N'online_banking_log', FILENAME = N'C:\...\online_banking_log.ldf' ,
SIZE = 1024KB , MAXSIZE = 20GB , FILEGROWTH = 10%)
GO
Update 2: I've used the built-in sqlcmd tool.
Running sqlcmd -S (LocalDB)\MSSQLLocalDB -i C:\Users\1.sql -E showed that
MSSQLLocalDB doesn't have my database.
However, sqlcmd -S localhost -i C:\Users\1.sql -E completed successfully.
I'm totally confused: I've installed only one server, and moreover SQL Server Management Studio sees only one local server, with my online_banking DB. This looks really weird to me.
Trying to use this connection string in Python
conn = pyodbc.connect(r'DRIVER={SQL Server Native Client 11.0};Server=localhost;Integrated Security=true; database = online_banking; autocommit = True')
causes the error below:
pyodbc.Error: ('28000', '[28000] [Microsoft][SQL Server Native Client 11.0][SQL Server]\x... "". (18456) (SQLDriverConnect); [01S00] [Microsoft][SQL Server Native Client 11.0]\xcd\xe5\xe....xe8\xff (0); [28000] [Microsoft][SQL Server Native Client 11.0][SQL Server]\xce...ff "". (18456); [01S00] [Microsoft][SQL Server Native Client 11.0]\xcd\xe.... (0)'
Update 3: The specified .mdf should be attached, got it.
I tried several ways, always with errors (whether or not the database was specified in the connection string):
conn = pyodbc.connect(
    r'Driver={SQL Server Native Client 11.0};Server=(localdb)\MSSQLLocalDB;'
    r'AttachDbFilename=C:\Program Files\Microsoft SQL Server\MSSQL12.SQLSERVERINSAF\MSSQL\DATA\online_banking.mdf;'
    r'Trusted_Connection=Yes;Integrated Security=true;database=online_banking;')
error: A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
I found out that this may be related to the parent server having already attached this DB, but I failed to solve it.
Update 4:
I tried simple code from here to see if "online_banking" shows up in the list of databases for that instance, but faced another error:
pyodbc.Error: ('08001', '[08001] [Microsoft][SQL Server Native Client 11.0]\ - unreadable error
In addition, according to SSMS, the online_banking database seems to have already been attached.
As it turns out, the database in question was already attached to the default instance of SQL Server on the local machine, so all that was needed to connect was:
import pyodbc

conn_str = (
    r"Driver={SQL Server Native Client 11.0};"
    r"Server=(local);"
    r"Database=online_banking;"
    r"Trusted_Connection=yes;"
)
conn = pyodbc.connect(conn_str)
There were two main points of confusion:
Q: What is the name of a SQL Server "default instance"?
A: It doesn't have one.
When referring to a SQL Server instance by name, a default instance simply goes by the name of the machine, while a named instance is identified by MachineName\InstanceName. So, on a server named PANORAMA:
If we install a "default instance" of SQL Server we refer to it as PANORAMA.
If we install a "named instance" called "SQLEXPRESS" we refer to it as PANORAMA\SQLEXPRESS.
If we are referring to a SQL server instance on the local machine we can use (local) instead of PANORAMA.
Q: Do (local) and (localdb) mean the same thing?
A: NO.
(local) and (local)\InstanceName refer to "real" server-based instances of SQL Server. These are the instances that have been around since SQL Server was first released. They run as a service and are able to accept network connections and do all of the things we expect a database server to do.
(localdb) and (localdb)\InstanceName references – with (localdb) usually capitalized as (LocalDB) for clarity – are used to connect to "SQL Server LocalDB" instances. These are temporary local SQL Server instances primarily intended for developers. For details see the following MSDN blog post:
SQL Express v LocalDB v SQL Compact Edition
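To make the distinction concrete, here is a hypothetical pair of pyodbc connection strings targeting the same database name under each kind of instance:
import pyodbc

# A "real" server-based default instance on the local machine.
conn_server = pyodbc.connect(
    r"Driver={SQL Server Native Client 11.0};"
    r"Server=(local);Database=online_banking;Trusted_Connection=yes;"
)

# A SQL Server LocalDB developer instance; only the Server value differs.
conn_localdb = pyodbc.connect(
    r"Driver={SQL Server Native Client 11.0};"
    r"Server=(localdb)\MSSQLLocalDB;Database=online_banking;"
    r"Trusted_Connection=yes;"
)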
It could possibly be a security issue. You are using integrated security, so it will use the security credentials of the Windows login that the client program runs under. If that user, or a group the user belongs to, does not have at least public access to the database, it will appear as if the database does not exist. Either ensure that the user (or one of its groups) is set up with a login and has at least public access to your database, or use SQL Server authentication and send a username and password in your connection string.
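For the SQL Server authentication route, a minimal sketch; the login and password are hypothetical and would have to be created on the server first:
import pyodbc

conn_str = (
    r"Driver={SQL Server Native Client 11.0};"
    r"Server=(local);"
    r"Database=online_banking;"
    r"UID=my_sql_login;"  # hypothetical SQL Server login
    r"PWD=my_password;"   # its password
)
conn = pyodbc.connect(conn_str)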

Python connect to MySQL database on web server

I wanted to know the process of connecting to a MySQL database that is hosted on a web server.
I have a basic free web server for testing on 000webhost, on which I created a MySQL database.
I have the credentials for the database, which I will pretend are:
host - mysql.webhost000.com
user - dummy_user
password - dummy_password
database - dummy_database
and I have a Python script executing from my local computer with internet access:
import MySQLdb

db = MySQLdb.connect(host="mysql.webhost000.com",
                     port=3306,
                     user="dummy_user",
                     passwd="dummy_password",
                     db="dummy_database")
I was hoping it would connect as long as I have the right credentials, but when I execute the script it just hangs, and once I kill it I see the error:
Can't connect to MySQL server on 'mysql.webhost000.com' (4)
Am I missing some steps?
There are two possible problems, and I'm not able to recreate the first one. One is that the
host="mysql.webhost000.com"
is incorrect and throwing an error; the host could be listed another way. The other thing I noticed is that this is usually how I set up my connection script:
import MySQLdb

def connect():
    db = MySQLdb.connect(host="mysql.webhost000.com",
                         port=3306,
                         user="dummy_user",
                         passwd="dummy_password",
                         db="dummy_database")
    c = db.cursor()  # the original read conn.cursor(), but the variable is named db
    return c, db
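A usage sketch for the helper above, assuming the connection succeeds:
c, db = connect()
c.execute("SELECT VERSION()")
print(c.fetchone())
db.close()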

Python and Django OperationalError (2006, 'MySQL server has gone away')

Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using Python 2.7 with Django runfcgi with nginx. Here is my original code:
views.py
import MySQLdb

DBNAME = "test"
DBIP = "localhost"
DBUSER = "django"
DBPASS = "password"

db = MySQLdb.connect(DBIP, DBUSER, DBPASS, DBNAME)
cursor = db.cursor()

def list(request):
    statement = "SELECT item from table where selected = 1"
    cursor.execute(statement)
    results = cursor.fetchall()
I have tried the following, but it still does not work:
views.py
import MySQLdb

class DB:
    conn = None
    DBNAME = "test"
    DBIP = "localhost"
    DBUSER = "django"
    DBPASS = "password"
    def connect(self):
        # note: these are class attributes, so they need the self. prefix
        self.conn = MySQLdb.connect(self.DBIP, self.DBUSER,
                                    self.DBPASS, self.DBNAME)
    def cursor(self):
        try:
            return self.conn.cursor()
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            return self.conn.cursor()

db = DB()
cursor = db.cursor()

def list(request):
    cursor = db.cursor()
    statement = "SELECT item from table where selected = 1"
    cursor.execute(statement)
    results = cursor.fetchall()
Currently, my only workaround is to do MySQLdb.connect() in each function that uses MySQL. Also, I noticed that when using Django's manage.py runserver, I would not have this problem, while nginx would throw these errors. I doubt that I am timing out on the connection, because list() is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break, and is there any fix for this?
Edit: I realized that I recently wrote a piece of middleware to daemonize a function, and this was the cause of the problem. However, I cannot figure out why. Here is the code for the middleware:
import threading

def process_request_handler(sender, **kwargs):
    t = threading.Thread(target=dispatch.execute,
                         args=[kwargs['nodes'], kwargs['callback']],
                         kwargs={})
    t.setDaemon(True)
    t.start()
    return

process_request.connect(process_request_handler)
Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL, and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max_allowed_packet setting in MySQL.
The default value is 1048576.
To see the current value, run the following SQL:
SELECT @@max_allowed_packet;
To temporarily set a new value, run the following SQL:
SET GLOBAL max_allowed_packet=10485760;
To fix the problem more permanently, create a /etc/my.cnf file with at least the following:
[mysqld]
max_allowed_packet = 16M
After editing /etc/my.cnf, you'll need to restart MySQL or restart your machine if you don't know how.
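The same check and temporary change can also be made from Python; a sketch with placeholder credentials (SET GLOBAL needs the SUPER privilege, and the change is lost on server restart):
import MySQLdb

db = MySQLdb.connect("localhost", "user", "password", "test")
cursor = db.cursor()
cursor.execute("SELECT @@max_allowed_packet")
print(cursor.fetchone())  # (1048576,) by default
cursor.execute("SET GLOBAL max_allowed_packet = 10485760")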
As per the MySQL documentation, your error message is raised when the client can't send a question to the server, most likely because the server itself has closed the connection. In the most common case the server will close an idle connection after a (default) of 8 hours. This is configurable on the server side.
The MySQL documentation gives a number of other possible causes which might be worth looking into to see if they fit your situation.
An alternative to calling connect() in every function (which might end up needlessly creating new connections) would be to investigate using the ping() method on the connection object; this tests the connection with the option of attempting an automatic reconnect. I struggled to find some decent documentation for the ping() method online, but the answer to this question might help.
Note, automatically reconnecting can be dangerous when handling transactions as it appears the reconnect causes an implicit rollback (and appears to be the main reason why autoreconnect is not a feature of the MySQLdb implementation).
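A sketch of the ping() approach with MySQLdb; passing True asks the client library to reconnect automatically if the server has gone away, with the transaction caveat noted above:
import MySQLdb

db = MySQLdb.connect("localhost", "django", "password", "test")  # placeholder credentials

def get_cursor():
    # mysql_ping() tests the connection; with reconnect enabled it reopens
    # a dropped connection. Beware: a reconnect implicitly rolls back any
    # open transaction.
    db.ping(True)
    return db.cursor()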
This might be due to DB connections getting copied into your child processes from the main process. I faced the same error when using Python's multiprocessing library to spawn different processes. The connection objects are copied to child processes during forking, which leads to MySQL OperationalErrors when making DB calls in a child process.
Here's a good reference to solve this: Django multiprocessing and database connections
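A sketch of the usual fix: close the inherited connections at the start of each child process so Django opens fresh ones on the next query (the worker body is hypothetical):
from multiprocessing import Process
from django.db import connections

def worker():
    # Connections copied from the parent at fork time are not safe to share;
    # closing them forces a fresh connection on first use.
    connections.close_all()
    do_db_work()  # hypothetical function making ORM calls

p = Process(target=worker)
p.start()
p.join()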
For me this was happening in debug mode.
So I tried persistent connections in debug mode; check out the link: Django - Documentation - Databases - Persistent connections.
In settings:
'default': {
    'ENGINE': 'django.db.backends.mysql',
    'NAME': 'dbname',
    'USER': 'root',
    'PASSWORD': 'root',
    'HOST': 'localhost',
    'PORT': '3306',
    'CONN_MAX_AGE': None
},
Check whether you are allowed to create a MySQL connection object in one thread and then use it in another.
If that's forbidden, use threading.local for per-thread connections:
import threading
import MySQLdb

class Db(threading.local):
    """ thread-local db object """
    con = None
    def __init__(self, ...options...):
        super(Db, self).__init__()
        self.con = MySQLdb.connect(...options...)

db1 = Db(...)

def test():
    """safe to run from any thread"""
    cursor = db1.con.cursor()  # note: the instance is db1
    cursor.execute(...)
This error is mysterious because MySQL doesn't report why it disconnects; it just goes away.
It seems there are many causes of this kind of disconnection. One I just found is that if the query string is too large, the server will disconnect. This probably relates to the max_allowed_packet setting.
I've been struggling with this issue too. I don't like the idea of increasing the timeout on the MySQL server. Autoreconnect with CONN_MAX_AGE doesn't work either, as mentioned. Unfortunately, I ended up wrapping every method that queries the database like this:
from django.db import connection
from django.db.utils import InterfaceError, OperationalError

def do_db(callback, *args, **kwargs):
    try:
        return callback(*args, **kwargs)
    except (OperationalError, InterfaceError):
        # Connection has gone away; filter by message or error code
        # if you need to catch other errors separately.
        connection.close()
        return callback(*args, **kwargs)

do_db(User.objects.get, id=123)  # instead of User.objects.get(id=123)
As you can see, I prefer catching the exception to pinging the database before every query, because the exception path is hit only rarely. I would expect Django to reconnect automatically, but they seem to have declined that feature request.
This error may occur when you try to use the connection after a time-consuming operation that doesn't go to the database. Since the connection is not used for some time, MySQL timeout is hit and the connection is silently dropped.
You can try calling close_old_connections() after the time-consuming non-DB operation so that a new connection is opened if the connection is unusable. Beware, do not use close_old_connections() if you have a transaction.
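A sketch of that pattern; the long-running parser and the model are hypothetical stand-ins:
from django.db import close_old_connections

def import_file(path):
    rows = parse_for_hours(path)  # hypothetical long non-DB operation
    # The server may have silently dropped the idle connection by now;
    # discard unusable connections so the next query opens a fresh one.
    close_old_connections()
    MyModel.objects.bulk_create(rows)  # hypothetical model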
The most common cause of this warning is that your application has reached the wait_timeout value of MySQL.
I had the same problem with a Flask app.
Here's how I solved:
$ grep timeout /etc/mysql/mysql.conf.d/mysqld.cnf
# https://support.rackspace.com/how-to/how-to-change-the-mysql-timeout-on-a-server/
# wait = timeout for application session (tdm)
# interactive = timeout for keyboard session (terminal)
# 7 days = 604800s / 4 hours = 14400s
wait_timeout = 604800
interactive_timeout = 14400
Observation: if you query the variables via MySQL batch mode, the values appear as configured. But if you run SHOW VARIABLES LIKE 'wait%'; or SHOW VARIABLES LIKE 'interactive%'; from an interactive session, the value configured for interactive_timeout appears for both variables. I don't know why, but the fact is that the values configured for each variable in /etc/mysql/mysql.conf.d/mysqld.cnf will be respected by MySQL.
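A quick way to compare both scopes from Python, with placeholder credentials; a non-interactive client like this one gets its session timeout from wait_timeout, while interactive clients such as the mysql shell have it seeded from interactive_timeout, which may explain the observation above:
import MySQLdb

db = MySQLdb.connect("localhost", "user", "password", "test")
cursor = db.cursor()
cursor.execute("SHOW SESSION VARIABLES LIKE 'wait_timeout'")
print(cursor.fetchone())
cursor.execute("SHOW GLOBAL VARIABLES LIKE 'wait_timeout'")
print(cursor.fetchone())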
How old is this code? Django has had databases defined in settings since at least .96. Only other thing I can think of is multi-db support, which changed things a bit, but even that was 1.1 or 1.2.
Even if you need a special DB for certain views, I think you'd probably be better off defining it in settings.
SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness:
http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic
From there,
from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()
And a test to make sure the above works:
from sqlalchemy import create_engine

e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True)
c1 = e.connect()
c2 = e.connect()
c3 = e.connect()
c1.close()
c2.close()
c3.close()
# pool size is now three.

print "Restart the server"
raw_input()

for i in xrange(10):
    c = e.connect()
    print c.execute("select 1").fetchall()
    c.close()
I had this problem and did not have the option to change my configuration. I finally figured out that the problem was occurring 49,500 records into my 50,000-record loop, because that was about the time I was trying again (after a long gap) to hit my second database.
So I changed my code so that every few thousand records, I touched the second database again (with a count() of a very small table), and that fixed it. No doubt "ping" or some other means of touching the database would work, as well.
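A sketch of that workaround; the loop body, model, and interval are hypothetical stand-ins:
BATCH = 2000  # touch the second DB every few thousand records

for i, record in enumerate(records):
    process(record)  # hypothetical per-record work against the first DB
    if i % BATCH == 0:
        # A cheap query against a tiny table keeps the second database's
        # connection from idling past its timeout.
        SmallTable.objects.using('second_db').count()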
Firstly, you should check the MySQL session and global values of wait_timeout and interactive_timeout. Secondly, your client should try to reconnect to the server within those timeout values.
