How can I tell whether the MongoDB server is up and running from Python? I currently use
try:
con = pymongo.Connection()
except Exception as e:
...
Or is there a better way, using pymongo functions?
For newer versions of PyMongo, from the MongoClient docs:
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient()
try:
# The ismaster command is cheap and does not require auth.
client.admin.command('ismaster')
except ConnectionFailure:
print("Server not available")
You can initialize MongoClient with serverSelectionTimeoutMS to avoid waiting around 20 seconds before the code raises an exception:
client = MongoClient(serverSelectionTimeoutMS=500) # wait 0.5 seconds in server selection
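Putting the two together could look like this (a minimal sketch; note that ServerSelectionTimeoutError is the subclass of ConnectionFailure raised when server selection times out, so the earlier except clause would also catch it):

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Fail fast: give up on server selection after 0.5 seconds.
client = MongoClient(serverSelectionTimeoutMS=500)
try:
    client.admin.command('ismaster')  # cheap command, no auth required
    print("Server available")
except ServerSelectionTimeoutError:
    print("Server not available")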
Yes, try/except is a good (Pythonic) way to check if the server is up. However, it's best to catch the specific exception (ConnectionFailure):
try:
con = pymongo.Connection()
except pymongo.errors.ConnectionFailure:
...
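(Note: pymongo.Connection was removed in PyMongo 3.0. With its replacement, MongoClient, the constructor connects lazily and does not raise for an unreachable server, so you have to issue a command to actually test it. A minimal sketch:)

from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

try:
    con = MongoClient()
    con.admin.command('ping')  # forces a round trip to the server
except ConnectionFailure:
    ...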
Add the following imports:
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError, OperationFailure
Create the connection with MongoDB from Python:
mongoClient = MongoClient("mongodb://usernameMongo:passwordMongo@localhost:27017/?authMechanism=DEFAULT&authSource=database_name", serverSelectionTimeoutMS=500)
Validations
try:
    if mongoClient.admin.command('ismaster')['ismaster']:
        return "Connected!"
except OperationFailure:
    return "Database not found."
except ServerSelectionTimeoutError:
    return "MongoDB Server is down."
Hey guys,
I have an executable Python script, say get_data.py (located in project_x/src/), which works properly when started by python get_data.py. It gets data (a list of IDs which are necessary for further calculations) from a database via mysql.connector and then processes this data in parallel (via multiprocessing) using pool.map.
BUT it is supposed to be started by an .exe file (located in project_x/exec/) [EDIT: this .exe uses the PHP command exec() to directly address my Python script]. This is not working properly: it ends up in the try-except block (in wrapper_fun), catching an (unknown) error, and it does not terminate when I delete the try-except statements.
Do you have any idea what could be going wrong? I would appreciate any hint. I tried logging, but there seems to be a permission problem. My guess is that the connection to the db cannot be established and therefore there are no IDs.
def calculations(id):
    do_something...

def wrapper_fun(id):
    try:
        calculations(id)
    except Exception:
        return False
if __name__ == "__main__":
    import multiprocessing
    import pandas as pd
    import mysql.connector
    from mysql.connector import Error

    host_name = <secret_1>
    user_name = <secret_2>
    passt = <secret_3>

    connection = None
    try:
        connection = mysql.connector.connect(
            host=host_name,
            user=user_name,
            passwd=passt
        )
    except Error as err:
        print(f"Error: '{err}'")

    # query (the SQL string) is defined elsewhere in the script
    d = pd.read_sql_query(query, connection, coerce_float=False)
    connection.close()
    id_s = list(d.ids)
    with multiprocessing.Pool() as pool:
        results = pool.map(wrapper_fun, id_s)
...
I have a Python application that reads from MySQL/MariaDB, uses that to fetch data from an API and then inserts the results into another table.
I had set up a module with a function that connects to the database and returns the connection object, which is passed to other functions/modules. However, I believe this might not be the correct approach. The idea was to have a small module that I could just call whenever I needed to connect to the db.
Also note that I am using the same connection object during loops (passing it to the db_update module within the loop) and call close() when all is done.
I am also getting some warnings from the db sometimes; those mostly happen at the point where I call db_conn.close(), so I guess I am not handling the connection or session/engine correctly. Also, the connection IDs in the log warning keep increasing, which is another hint that I am doing it wrong.
[Warning] Aborted connection 351 to db: 'some_db' user: 'some_user' host: '172.28.0.3' (Got an error reading communication packets)
Here is some pseudo code that represents the structure I currently have:
################
## db_connect.py
################
# imports ...
from sqlalchemy import create_engine
def db_connect():
# get env ...
db_string = f"mysql+pymysql://{db_user}:{db_pass}#{db_host}:{db_port}/{db_name}"
try:
engine = create_engine(db_string)
except Exception as e:
return None
db_conn = engine.connect()
return db_conn
################
## db_update.py
################
# imports ...
def db_insert(db_conn, api_result):
# ...
ins_qry = "INSERT INTO target_table (attr_a, attr_b) VALUES (:a, :b);"
ins_qry = text(ins_qry)
ins_qry = ins_qry.bindparams(a = value_a, b = value_b)
try:
db_conn.execute(ins_qry)
except Exception as e:
print(e)
return None
return True
################
## main.py
################
from sqlalchemy import text
from db_connect import db_connect
from db_update import db_insert
def run():
try:
db_conn = db_connect()
if not db_conn:
return False
except Exception as e:
print(e)
qry = "SELECT *
FROM some_table
WHERE some_attr IN (:some_value);"
qry = text(qry)
search_run_qry = qry.bindparams(
some_value = 'abc'
)
result_list = db_conn.execute(qry).fetchall()
for result_item in result_list:
## do stuff like fetching data from api for every record in the query result
api_result = get_api_data(...)
## insert into db:
db_ins_status = db_insert(db_conn, api_result)
## ...
    db_conn.close()
run()
EDIT: Another question:
a) Is it OK, in a loop that does an update on every iteration, to use the same connection, or would it be wiser to instead pass the engine to the run() function and call db_conn = engine.connect() and db_conn.close() just before and after each update?
b) I am thinking about using ThreadPoolExecutor instead of the loop for the API calls. Would this have implications for how to use the connection, i.e. can I use the same connection for multiple threads that are doing updates to the same table?
Note: I am not using the ORM feature, mostly because I have a strong DWH/SQL background (though not so much as a DBA) and I am used to writing even complex SQL queries. I am thinking about switching to just using the PyMySQL connector for that reason.
Thanks in advance!
Yes, you can return/pass a connection object as a parameter, but what is the aim of the db_connect method other than testing the connection? As I see it, db_connect has no real purpose here, so I would recommend doing it the way I have done it before.
I would like to share a code snippet from one of my project.
def create_record(sql_query: str, data: tuple):
    try:
        # mysql_obj is a configured connector object from my project
        connection = mysql_obj.connect()
        db_cursor = connection.cursor()
        db_cursor.execute(sql_query, data)
        connection.commit()
        return db_cursor, connection
    except Exception as error:
        print(f'Connection failed error message: {error}')
and then using it like this for another of my needs (fetch_data is a similar method from the same project):
db_cursor, connection, query_data = fetch_data(sql_query, query_data)
and after all my needs are done, closing the connection with this method:
def close_connection(connection, db_cursor):
"""
This method used to close SQL server connection
"""
db_cursor.close()
connection.close()
and the method call:
close_connection(connection, db_cursor)
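Putting it together, a typical call sequence could look like this (a rough sketch; the table and values are made up):

# Hypothetical usage of the helpers above.
sql_query = "INSERT INTO users (name, email) VALUES (%s, %s)"
data = ("Alice", "alice@example.com")

db_cursor, connection = create_record(sql_query, data)
# ... any further work with db_cursor ...
close_connection(connection, db_cursor)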
I am not sure if I can share my GitHub, but please check this link. In model.py you can see the database methods, and to see how they are called, check main.py.
Best,
Hasan.
I'm using Python with asyncpg to interact with my PostgreSQL database.
After some time, if I don't interact with it and then try to do so, I get a "connection is closed" error. Is this a server-side config or client-side config issue?
How do I solve it?
The database server automatically closes idle connections after some time (for security and resource reasons).
So I suggest you open the connection to the db just before running queries with asyncpg, and then close the connection again right afterwards.
Furthermore, you can manage the possible errors that you get when the connection is closed by properly raising exceptions.
Take a look at this example:
import asyncpg

# Note: this snippet must run inside an async function, since it uses await.
# 'connection' is assumed to be defined already (e.g. a variable that may be None).
print("I'm going to run a query with asyncpg")
# if the connection to db is not opened, then open it
if not connection:
    # try to open the connection to db
    try:
        connection = await asyncpg.connect(
            host=YOUR_DATABASE_HOST,
            user=YOUR_DATABASE_USER,
            password=YOUR_DATABASE_PASS,
            database=YOUR_DATABASE_DB_NAME
        )
    except (Exception, asyncpg.ConnectionFailureError) as error:
        print("Error while connecting to db: {}".format(error))
else:
    # connection already up and running
    pass
QUERY_STRING = """
INSERT INTO my_table(field_1, field_2)
VALUES ($1, $2);
"""
try:
    await connection.execute(QUERY_STRING, value_to_assign_to_field_1, value_to_assign_to_field_2)
    return None
# except (Exception, asyncpg.UniqueViolationError) as integrity_error:
#     print("You are violating a unique constraint.")
except (Exception, asyncpg.ConnectionFailureError) as error:
    print("Connection to db has failed. Cannot add data.")
    return "{}".format(error)
finally:
    if connection:
        print("Closing the connection.")
        await connection.close()  # Connection.close() is a coroutine in asyncpg
So I have been making websites using PHP with a MySQL database, phpMyAdmin, and XAMPP. I am trying to switch from PHP to Python. All of the tutorials seem to be using SQLite instead of MySQL. As far as I understand, SQLite is serverless and can't hold certain data types like datetime, but I need datetime in my website. How would I connect to MySQL with a Python Flask project, or is there a different way I need to do this?
You need to use a client library like PyMySQL.
To install pymysql use:
pip install PyMySQL
Then use this function; it will return the DB object:
import pymysql

def make_connection():
    db = None  # returned as None if the connection fails
    try:
        db = pymysql.connect(host='localhost',
                             user='root',
                             password='',
                             db='DatabaseName',
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
    except Exception as error:
        print(error)
    return db
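For example, in a Flask view you could use it like this (a rough sketch; the route and table are hypothetical):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/users')
def list_users():
    db = make_connection()
    if db is None:
        return jsonify({'error': 'database unavailable'}), 500
    try:
        with db.cursor() as cursor:
            # DictCursor returns each row as a dict
            cursor.execute("SELECT id, name, created_at FROM users")
            rows = cursor.fetchall()
        return jsonify(rows)
    finally:
        db.close()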
As @tirth-mehta mentioned, if you want to connect without any ORM, you should use a client library. And if it feels too painful to remember to close the connection in every function call, you could use a decorator like this:
import pymysql

DB_CONFIG = {'host': '127.0.0.1',
             'database': 'dbname',
             'user': 'root',
             'password': ''}

def connect(func):
    def _connect(*args, **kwargs):
        conn = pymysql.connect(**DB_CONFIG)
        rv = None  # avoids a NameError in the return below if func raises
        try:
            rv = func(conn, *args, **kwargs)
        except Exception as e:
            print(e)
        else:
            conn.commit()
        finally:
            conn.close()
        return rv
    return _connect
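A decorated function then receives the connection as its first argument (a small sketch; the query and table are made up):

@connect
def count_users(conn):
    # conn is injected by the decorator and closed automatically
    with conn.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM users")
        return cursor.fetchone()[0]

print(count_users())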
I am trying to set up RethinkDB with Tornado. This is my db setup:
db_connection = r.connect(RDB_HOST, RDB_PORT)  # connecting to the RethinkDB server
This is just for cross-checking that the database and table exist:
def dbSetup():
    print(PROJECT_DB, db_connection)
    try:
        r.db_create(PROJECT_DB).run(db_connection)
        print('Database setup completed.')
    except RqlRuntimeError:
        try:
            r.db(PROJECT_DB).table_create(PROJECT_TABLE).run(db_connection)
            print('Table creation completed')
        except RqlRuntimeError:
            print('Table already exists. Nothing to do')
        print('App database already exists. Nothing to do')
    db_connection.close()
But the try block for db_create is throwing an AttributeError: 'Future' object has no attribute '_start'. I am unable to figure out what seems to be the problem here.
RethinkDB has a native async client for Tornado. The problem in your case is that connect returns only a Future that has to be resolved (yielded), since it is asynchronous. And that Future object has nothing like _start, nor anything like a RethinkDB connection object. Here is an example of how to do it:
import rethinkdb as r
from tornado import ioloop, gen

r.set_loop_type("tornado")

@gen.coroutine
def dbSetup():
    db_connection = yield r.connect(RDB_HOST, RDB_PORT)
    yield r.db_create(PROJECT_DB).run(db_connection)
    print('Database setup completed.')

ioloop.IOLoop.current().run_sync(dbSetup)
More information at https://rethinkdb.com/blog/async-drivers/