Basic Locking in HTTP Server with Python

I am implementing a basic HTTP server that queries a PostgreSQL database.
To handle multiple requests I need the server to create a thread for each one, which I am currently doing like this:
class MyServer(ThreadingMixIn, HTTPServer):
    def __init__(self, server_address, handler_class):
        super().__init__(server_address=server_address, RequestHandlerClass=handler_class)
        self.KEEP_ALIVE = True

    def run(self):
        while self.KEEP_ALIVE:
            self.handle_request()
and, for example, my do_DELETE method looks like this:
def do_DELETE(self):
    # Get ID from the request path
    config_id = self.path[2:]
    # Connect to DB
    connection_db, cursor = self.connect_to_DB()
    with connection_db:
        # Check if the config to delete exists
        cursor.execute('SELECT * FROM configuration WHERE id=%s', (config_id, ))
        res = cursor.fetchall()
        # If the config to delete exists, the operation is done. Otherwise an error message is sent back
        if len(res) != 0:
            cursor.execute('DELETE FROM configuration WHERE id=%s', (config_id,))
            response_code = 200
            answer = OPERATION_SUCCESSFUL
        else:
            # Send error message back
            response_code = 400
            answer = NO_SUCH_ID_ERROR
    content_type = 'text/plain'
    self.reply(response_code=response_code, content_type=content_type, answer=answer)
I would like to make it so that, in the do_DELETE method, the part with the two queries to the DB is atomic, i.e. it should lock on a resource visible to all the threads... but I have no idea how to do it. Can you help me?
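One way to do this, sketched below under some assumptions (psycopg2 as the driver and a handler structure like yours; the name DB_LOCK is mine): share a single threading.Lock at module level, because a new handler instance is created for every request, and hold it around the check-and-delete. A single DELETE checked via cursor.rowcount would give the same atomicity without a Python-level lock, but the lock matches what you describe:

import threading
from http.server import BaseHTTPRequestHandler

# One lock shared by every handler thread; module-level so all
# handler instances (one per request) see the same object.
DB_LOCK = threading.Lock()

class MyHandler(BaseHTTPRequestHandler):
    def do_DELETE(self):
        config_id = self.path[2:]
        connection_db, cursor = self.connect_to_DB()
        with DB_LOCK:                # serializes the two queries across threads
            with connection_db:      # commits on success, rolls back on error
                cursor.execute('SELECT * FROM configuration WHERE id=%s', (config_id,))
                if cursor.fetchall():
                    cursor.execute('DELETE FROM configuration WHERE id=%s', (config_id,))
                    response_code, answer = 200, OPERATION_SUCCESSFUL
                else:
                    response_code, answer = 400, NO_SUCH_ID_ERROR
        self.reply(response_code=response_code, content_type='text/plain', answer=answer)

Since each thread opens its own connection, the lock only prevents two threads from interleaving the SELECT and the DELETE; for a lock-free alternative you could issue the DELETE directly and inspect cursor.rowcount to decide between 200 and 400.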

Related

Python SQLAlchemy - can I pass the connection object between functions?

I have a Python application that reads from MySQL/MariaDB, uses that to fetch data from an API, and then inserts the results into another table.
I had set up a module with a function to connect to the database and return the connection object, which is then passed to other functions/modules. However, I believe this might not be the correct approach. The idea was to have a small module that I could just call whenever I needed to connect to the DB.
Also note that I am using the same connection object during loops (and within the loop passing it to the db_update module) and call close() when all is done.
I am also getting some warnings from the DB sometimes; those mostly happen at the point where I call db_conn.close(), so I guess I am not handling the connection or session/engine correctly. Also, the connection IDs in the log warnings keep increasing, so that is another hint that I am doing it wrong.
[Warning] Aborted connection 351 to db: 'some_db' user: 'some_user' host: '172.28.0.3' (Got an error reading communication packets)
Here is some pseudo code that represents the structure I currently have:
################
## db_connect.py
################
# imports ...
from sqlalchemy import create_engine

def db_connect():
    # get env ...
    db_string = f"mysql+pymysql://{db_user}:{db_pass}@{db_host}:{db_port}/{db_name}"
    try:
        engine = create_engine(db_string)
    except Exception as e:
        return None
    db_conn = engine.connect()
    return db_conn
################
## db_update.py
################
# imports ...
from sqlalchemy import text

def db_insert(db_conn, api_result):
    # ...
    ins_qry = "INSERT INTO target_table (attr_a, attr_b) VALUES (:a, :b);"
    ins_qry = text(ins_qry)
    ins_qry = ins_qry.bindparams(a=value_a, b=value_b)
    try:
        db_conn.execute(ins_qry)
    except Exception as e:
        print(e)
        return None
    return True
################
## main.py
################
from sqlalchemy import text
from db_connect import db_connect
from db_update import db_insert

def run():
    try:
        db_conn = db_connect()
        if not db_conn:
            return False
    except Exception as e:
        print(e)
    qry = """SELECT *
             FROM some_table
             WHERE some_attr IN (:some_value);"""
    qry = text(qry)
    search_run_qry = qry.bindparams(
        some_value='abc'
    )
    result_list = db_conn.execute(search_run_qry).fetchall()
    for result_item in result_list:
        ## do stuff like fetching data from the api for every record in the query result
        api_result = get_api_data(...)
        ## insert into db:
        db_ins_status = db_insert(db_conn, api_result)
        ## ...
    db_conn.close()

run()
EDIT: Two more questions:
a) Is it OK, in a loop that does an update on every iteration, to use the same connection, or would it be wiser to instead pass the engine to the run() function and call db_conn = engine.connect() and db_conn.close() just before and after each update?
b) I am thinking about using ThreadPoolExecutor instead of the loop for the API calls. Would this have implications for how to use the connection, i.e. can I use the same connection for multiple threads that are doing updates to the same table?
Note: I am not using the ORM feature, mostly because I have a strong DWH/SQL background (though not so much as a DBA) and I am used to writing even complex SQL queries. I am thinking about switching to just using the PyMySQL connector for that reason.
Thanks in advance!
Yes, you can return/pass the connection object as a parameter, but what is the aim of the db_connect method, other than testing the connection? As far as I can see, db_connect serves no other purpose, so I would recommend doing it the way I have done it before.
I would like to share a code snippet from one of my projects.
def create_record(sql_query: str, data: tuple):
    try:
        connection = mysql_obj.connect()
        db_cursor = connection.cursor()
        db_cursor.execute(sql_query, data)
        connection.commit()
        return db_cursor, connection
    except Exception as error:
        print(f'Connection failed error message: {error}')
and then using the same pattern for another of my needs:
db_cursor, connection, query_data = fetch_data(sql_query, query_data)
and, after everything is done, closing the connection with this method:
def close_connection(connection, db_cursor):
    """
    This method is used to close the SQL server connection.
    """
    db_cursor.close()
    connection.close()
and the corresponding call:
close_connection(connection, db_cursor)
I am not sure whether I can share my GitHub, but please check this link: under model.py you can see the database methods, and main.py shows how they are called.
Best,
Hasan.
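Regarding the EDIT questions a) and b): a common pattern (a sketch, not the only correct answer) is to keep a single module-level Engine, which owns a connection pool, and check out a short-lived connection per unit of work. Each thread then gets its own connection, which also answers b), since sharing one connection across threads is unsafe. Assuming SQLAlchemy 1.4+ and an api_results list standing in for your API loop:

from concurrent.futures import ThreadPoolExecutor
from sqlalchemy import create_engine, text

# One Engine per process; it owns the connection pool.
engine = create_engine("mysql+pymysql://db_user:db_pass@db_host:3306/db_name",
                       pool_pre_ping=True)

def insert_result(api_result):
    # Each task checks out its own pooled connection and returns it
    # automatically when the block exits; begin() commits or rolls back.
    with engine.begin() as conn:
        conn.execute(
            text("INSERT INTO target_table (attr_a, attr_b) VALUES (:a, :b)"),
            {"a": api_result["a"], "b": api_result["b"]},
        )

api_results = [{"a": 1, "b": 2}]  # placeholder for your API loop results
with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(insert_result, api_results)

Checking a connection out of the pool is cheap, so opening one per update is not the overhead it would be with raw, unpooled connections.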

Python Flask MySQL: How to persistently store connection pool object?

After a long search, I could not find an answer to my question or whether what I desire is even possible. My question concerns a MySQL connection implementation for a Flask API. What I want to implement is as follows:
When the Flask app is started, a create_db_connection method is called, which creates a number of MySQL connections in a pooling object.
For each incoming request, a get_connection method is called to get one connection from the pool.
And of course, when the request ends, a close_connection method is called to close the connection and mark it as available in the pool.
The problem I'm having concerns persistently storing the connection pool so that it can be re-used for each request.
create_db_connection method:
def create_db_connection():
    print("-----INITIALISING-----")
    db_pool = mysql.connector.pooling.MySQLConnectionPool(pool_name="BestLiar_Public_API",
                                                          pool_size=10,
                                                          autocommit=True,
                                                          pool_reset_session=True,
                                                          user='user',
                                                          password='pass',
                                                          host='host',
                                                          database='db')
    print("-----DB POOL INITIALISED-----")
    # SOLUTION TO PERSISTENTLY SAVE db_pool OBJECT
get_connection method:
def __enter__(self):
    try:
        # SOLUTION TO FETCH THE db_pool OBJECT > NAMED AS db_pool IN LINE BELOW
        self.con = db_pool.get_connection()
        self.cur = self.con.cursor(dictionary=True)
        if self.con.is_connected():
            return {'cur': self.cur, 'con': self.con}
        else:
            raise NoConnectionError("No database connection", "Pool connection not connected")
    except mysql.connector.PoolError:
        raise SystemOverload("Too many requests, could not process", "No pool connection available")
    except:
        raise NoConnectionError("No database connection", "Unknown reason")
close_connection method:
def __exit__(self, type, value, traceback):
    if self.con:
        self.cur.close()
        self.con.close()
I have tried storing the db_pool object as a global variable (undesirable) and have tried the Flask global object (which only works for one request).
Does anyone have the key to the solution?
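One way that has worked for this kind of setup (a sketch; the module name db.py and class name PooledCursor are mine): since each Flask worker is a single process, a pool created once at module import time is effectively persistent for the life of the process, and the context-manager class can simply reference that module-level object. Assuming mysql-connector-python:

# db.py -- imported once per process; every request reuses the same pool
import mysql.connector.pooling

db_pool = mysql.connector.pooling.MySQLConnectionPool(pool_name="BestLiar_Public_API",
                                                      pool_size=10,
                                                      autocommit=True,
                                                      pool_reset_session=True,
                                                      user='user',
                                                      password='pass',
                                                      host='host',
                                                      database='db')

class PooledCursor:
    def __enter__(self):
        # Borrow one connection from the shared pool for this request.
        self.con = db_pool.get_connection()
        self.cur = self.con.cursor(dictionary=True)
        return {'cur': self.cur, 'con': self.con}

    def __exit__(self, exc_type, exc_value, traceback):
        self.cur.close()
        self.con.close()  # returns the connection to the pool rather than closing it

Request handlers then use "with PooledCursor() as db:"; Python caches modules after the first import, so every request in the process sees the same db_pool.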

TWS IB Gateway (version 972/974) Client keeps disconnecting

I am trying to connect with the IB API to download some historical data. I have noticed that my client connects to the API, but then disconnects automatically after a very short period (a few seconds).
Here's the log in the server:
socket connection for client{10} has closed.
Connection terminated.
Here's my main code for starting the app:
class TestApp(TestWrapper, TestClient):
    def __init__(self):
        TestWrapper.__init__(self)
        TestClient.__init__(self, wrapper=self)
        self.connect(config.ib_hostname, config.ib_port, config.ib_session_id)
        self.session_id = int(config.ib_session_id)
        self.thread = Thread(target=self.run)
        self.thread.start()
        setattr(self, "_thread", self.thread)
        self.init_error()

    def reset_connection(self):
        pass

    def check_contract(self, name, exchange_name, security_type, currency):
        self.reset_connection()
        ibcontract = IBcontract()
        ibcontract.secType = security_type
        ibcontract.symbol = name
        ibcontract.exchange = exchange_name
        ibcontract.currency = currency
        return self.resolve_ib_contract(ibcontract)

    def resolve_contract(self, security):
        self.reset_connection()
        ibcontract = IBcontract()
        ibcontract.secType = security.security_type()
        ibcontract.symbol = security.name()
        ibcontract.exchange = security.exchange()
        ibcontract.currency = security.currency()
        return self.resolve_ib_contract(ibcontract)

    def get_historical_data(self, security, duration, bar_size, what_to_show):
        self.reset_connection()
        resolved_ibcontract = self.resolve_contract(security)
        data = self.get_IB_historical_data(resolved_ibcontract.contract, duration, bar_size, what_to_show)
        return data

def create_app():
    test_app = TestApp()
    return test_app
Any suggestions on what could be the problem? I can show more error messages from the debug if needed.
If you can connect without issue only by changing the client ID, that typically indicates that the previous connection was not properly closed and TWS thinks it's still open. To disconnect an API client you should call the EClient.disconnect function explicitly, overridden in your example as:
test_app.disconnect()
Though it's not necessary to disconnect/reconnect after every task; you can just leave the connection open for extended periods.
You may sometimes encounter problems if an API function, such as reqHistoricalData, is called immediately after connection. It's best to pause briefly after initiating a connection and wait for a callback such as nextValidId to ensure the connection is complete before proceeding.
http://interactivebrokers.github.io/tws-api/connection.html#connect
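A minimal sketch of that pause, assuming the standard ibapi package (the ready event and the timeout value are illustrative, not part of the API):

import threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class App(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, wrapper=self)
        self.ready = threading.Event()  # set once the handshake completes

    def nextValidId(self, orderId):
        # First callback after a successful connection; it is now safe
        # to send requests such as reqHistoricalData.
        self.ready.set()

app = App()
app.connect("127.0.0.1", 7497, clientId=10)
threading.Thread(target=app.run, daemon=True).start()
app.ready.wait(timeout=10)  # block until the connection is confirmed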
I'm not sure what the function init_error() is intended for in your example since it would always be called when a TestApp object is created (whether or not there is an error).
Installing the latest version of TWS API (v 9.76) solved the problem.
https://interactivebrokers.github.io/#

AWS Lambda & MySQL Connection Handling

I am currently using AWS Lambda (Python 3.6) to talk to a MySQL database. I also have Slack commands triggering the queries to the database. On occasion, I have noticed that I can change things directly through MySQL Workbench and then trigger a query through Slack which returns old values. I currently connect to MySQL outside of the Python handler, like this:
BOT_TOKEN = os.environ["BOT_TOKEN"]
ASSET_TABLE = os.environ["ASSET_TABLE"]
REGION_NAME = os.getenv('REGION_NAME', 'us-east-2')
DB_NAME = os.environ["DB_NAME"]
DB_PASSWORD = os.environ["DB_PASSWORD"]
DB_DATABASE = os.environ["DB_DATABASE"]
RDS_HOST = os.environ["RDS_HOST"]
port = os.environ["port"]
try:
    conn = pymysql.connect(RDS_HOST, user=DB_NAME, passwd=DB_PASSWORD, db=DB_DATABASE, connect_timeout=5, cursorclass=pymysql.cursors.DictCursor)
    cursor = conn.cursor()
except:
    sys.exit()
The MySQL connection is made outside of any definition, at the very top of my program. When Slack sends a command, I call another definition that then queries MySQL. This works okay sometimes, but other times it can send back old data that has not been updated. The whole layout is like this:
imports
SQL connections
SQL query definitions
handler definition
I tried moving the MySQL connection portion inside of the handler, but then the SQL query definitions do not recognize my cursor (out of scope, I guess).
So my question is: how do I handle this MySQL connection? Is it best to keep the MySQL connection outside of any definitions? Should I open and close the connection each time? Why is my data stale? Will Lambda ALWAYS run the entire routine, or can it try to split the load between servers? (I swear I read somewhere that I cannot rely on Lambda to always read my entire routine; sometimes it just reads the handler.)
I'm pretty new to all this, so any suggestions are much appreciated. Thanks!
Rest of the code if it helps:
################################################################################################################################################################################################################
# Slack Lambda handler.
################################################################################################################################################################################################################
################################################################################################################################################################################################################
# IMPORTS
###############
import sys
import os
import pymysql
import urllib.parse
import urllib.request
import math
################################################################################################################################################################################################################
################################################################################################################################################################################################################
# Grab data from AWS environment.
###############
BOT_TOKEN = os.environ["BOT_TOKEN"]
ASSET_TABLE = os.environ["ASSET_TABLE"]
REGION_NAME = os.getenv('REGION_NAME', 'us-east-2')
DB_NAME = os.environ["DB_NAME"]
DB_PASSWORD = os.environ["DB_PASSWORD"]
DB_DATABASE = os.environ["DB_DATABASE"]
RDS_HOST = os.environ["RDS_HOST"]
port = os.environ["port"]
################################################################################################################################################################################################################
################################################################################################################################################################################################################
# Attempt SQL connection.
###############
try:
    conn = pymysql.connect(RDS_HOST, user=DB_NAME, passwd=DB_PASSWORD, db=DB_DATABASE, connect_timeout=5, cursorclass=pymysql.cursors.DictCursor)
    cursor = conn.cursor()
except:
    sys.exit()
################################################################################################################################################################################################################
# Define the URL of the targeted Slack API resource.
SLACK_URL = "https://slack.com/api/chat.postMessage"
################################################################################################################################################################################################################
# Function Definitions.
###############
def get_userExistance(user):
    # Return a truthy row if the Slack user already exists in slackDB.users.
    statement = f"SELECT 1 FROM slackDB.users WHERE userID LIKE '%{user}%' LIMIT 1"
    cursor.execute(statement, args=None)
    userExists = cursor.fetchone()
    return userExists

def set_User(user):
    # Insert a new Slack user ID into slackDB.users.
    statement = f"INSERT INTO `slackDB`.`users` (`userID`) VALUES ('{user}');"
    cursor.execute(statement, args=None)
    conn.commit()
    return
################################################################################################################################################################################################################
################################################################################################################################################################################################################
# Slack command interactions.
###############
def lambda_handler(data, context):
    # Slack challenge answer.
    if "challenge" in data:
        return data["challenge"]

    # Grab the Slack channel data.
    slack_event = data['event']
    slack_userID = slack_event['user']
    slack_text = slack_event['text']
    channel_id = slack_event['channel']
    slack_reply = ""

    # Check sql connection.
    try:
        conn = pymysql.connect(RDS_HOST, user=DB_NAME, passwd=DB_PASSWORD, db=DB_DATABASE, connect_timeout=5, cursorclass=pymysql.cursors.DictCursor)
        cursor = conn.cursor()
    except pymysql.OperationalError:
        connected = 0
    else:
        connected = 1

    # Ignore bot messages.
    if "bot_id" in slack_event:
        slack_reply = ""
    else:
        # Start data sift.
        if slack_text.startswith("!addme"):
            if get_userExistance(slack_userID):
                slack_reply = f"User {slack_userID} already exists"
            else:
                slack_reply = f"Adding user {slack_userID}"
                set_User(slack_userID)

    # We need to send back three pieces of information:
    data = urllib.parse.urlencode(
        (
            ("token", BOT_TOKEN),
            ("channel", channel_id),
            ("text", slack_reply)
        )
    )
    data = data.encode("ascii")

    # Construct the HTTP request that will be sent to the Slack API.
    request = urllib.request.Request(
        SLACK_URL,
        data=data,
        method="POST"
    )
    # Add a header mentioning that the text is URL-encoded.
    request.add_header(
        "Content-Type",
        "application/x-www-form-urlencoded"
    )
    # Fire off the request!
    urllib.request.urlopen(request).read()

    # Everything went fine.
    return "200 OK"
################################################################################################################################################################################################################
All of the code outside the lambda handler is only run once per container. All code inside the handler is run every time the lambda is invoked.
A Lambda container lasts between 10 and 30 minutes, depending on usage. A new Lambda invocation may or may not run on an already-running container.
It's possible you are invoking a Lambda in a container that is over 5 minutes old, where your connection has timed out.
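If that is the case, one common workaround (a sketch of the idea, not the only fix) is to keep the connection in module scope so warm containers can reuse it, but validate it at the start of every invocation; pymysql's Connection.ping(reconnect=True) reconnects in place when the socket has gone stale:

conn = None  # created lazily, reused across warm invocations

def get_connection():
    global conn
    if conn is None:
        conn = pymysql.connect(host=RDS_HOST, user=DB_NAME, passwd=DB_PASSWORD,
                               db=DB_DATABASE, connect_timeout=5,
                               cursorclass=pymysql.cursors.DictCursor)
    else:
        # Reconnects in place if the container reused a dead socket.
        conn.ping(reconnect=True)
    return conn

def lambda_handler(event, context):
    cursor = get_connection().cursor()
    # ... run queries and conn.commit() as before ...

Committing after reads also matters here: with InnoDB's default REPEATABLE READ isolation, a connection that never commits keeps seeing the same snapshot, which could also explain the stale-data symptom.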

Python check large quantity of requests

I'm using a Python script to check if the requested user exists.
using:
import MySQLdb
from flask import Flask, request, abort

app = Flask(__name__)

try:
    db = MySQLdb.connect('xxx1', 'my_username', 'my_password', 'my_db_name')
    db1 = MySQLdb.connect('xxx2', 'my_username', 'my_password', 'my_db_name')
    db2 =
    db3 =
except MySQLdb.OperationalError as e:
    print "Caught an exception : " + str(e)

@app.route('/')
@app.route('/<path:path>')
def page(path=''):
    user = request.args.get('user', None)
    if not mac:
        abort(403)
    cursor = db.cursor()
    query = 'Select ID from f_member where Name=%s'
    db.commit()
    cursor.execute(query, (user, ))
    row = cursor.fetchone()
    cursor.close()
    # cursor.db1 here
    if row == None and row1 == None:
        abort(403)
    return 'OK', 200

if __name__ == '__main__':
    app.run(host=host, port=port)
Then I have 5 nginx servers with this:
location = /auth {
    proxy_pass http://xxx.xxx$request_uri;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Real-IP $remote_addr;
}
So the thing is, this script checks if the user is found in one of the databases, and if so, grants access to the page.
The problem is that my user list is now up to 5k users, and when I run the .py script it runs fine at first (even returning 403 errors to people who are trying to connect), but then broken-pipe errors start to show up.
It seems like it is getting overloaded. Is there a better way to handle my script so it runs better and more efficiently?
You may use a dictionary as a username/ID map in your Python program. Basically, when the program starts it makes a query for all the users and populates the map. Afterwards, every 20 seconds or so, it makes a query to get the "changes" in f_member and updates the dictionary. Username lookups always happen in this map first. Only if a user name is not found in the map does the program make a DB query (and if the user detail is found in the DB, it updates the local map as well).
If you don't have millions of users in the table, this approach will work. Otherwise use an LRU cache.
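A rough sketch of that idea (the periodic-refresh scheduling is omitted; the column names and function names are illustrative, following the query in the question):

import threading

user_cache = {}          # Name -> ID, shared by all request handlers
cache_lock = threading.Lock()

def refresh_cache(db):
    # Run at startup and then every ~20 seconds; a real version would
    # fetch only the rows changed since the last refresh.
    cursor = db.cursor()
    cursor.execute('SELECT Name, ID FROM f_member')
    with cache_lock:
        user_cache.update(dict(cursor.fetchall()))
    cursor.close()

def lookup(db, name):
    with cache_lock:
        if name in user_cache:
            return user_cache[name]
    # Cache miss: fall back to the DB and remember the result.
    cursor = db.cursor()
    cursor.execute('Select ID from f_member where Name=%s', (name, ))
    row = cursor.fetchone()
    cursor.close()
    if row:
        with cache_lock:
            user_cache[name] = row[0]
        return row[0]
    return None

The page() handler would then call lookup(db, user) instead of issuing one query per request.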
So, after a lengthy comment thread, it appears your Flask instances might be competing for DB resources. There's also another hypothesis: saving your connections off in global scope could have some bad side effects (I could be wrong about this, but I'd be concerned about timeouts, not closing the connections, etc.). Here's how I might rewrite it:
import MySQLdb
from flask import Flask, request, abort

app = Flask(__name__)

def get_db_connection_args():
    try:
        db_args = { 'host':'xxx1', 'user':'my_username', 'passwd':'my_password', 'db':'my_db_name' }
        db1_args = { 'host':'xxx2', 'user':'my_username', 'passwd':'my_password', 'db':'my_db_name' }
        db2_args = { 'host':'xxx3', 'user':'my_username', 'passwd':'my_password', 'db':'my_db_name' }
        db3_args = { 'host':'xxx4', 'user':'my_username', 'passwd':'my_password', 'db':'my_db_name' }
    except MySQLdb.OperationalError as e:
        print "Caught an exception : " + str(e)
    return (db_args, db1_args, db2_args, db3_args)

@app.route('/')
@app.route('/<path:path>')
def page(path=''):
    user = request.args.get('user', None)
    # I don't know what mac is...but it was in your original code.
    if not mac:
        abort(403)
    found = False
    db_connection_args = get_db_connection_args()
    for db_connection_arg_dict in db_connection_args:
        if not found:
            db_conn = MySQLdb.connect(**db_connection_arg_dict)
            try:
                cursor = db_conn.cursor()
                cursor.execute('Select ID from f_member where Name=%s', (user, ))
                row = cursor.fetchone()
                if row:
                    found = True
            finally:
                db_conn.close()
    if found:
        return 'OK', 200
    abort(403)

if __name__ == '__main__':
    app.run(host=host, port=port)
