Mocking database call in a way that it checks the SQL query - python

I have code that executes queries against Redshift like this:
import os
import psycopg2

def send_sql_query(source, sql_query, lst=None):
    connection = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port="5439",
        dbname="dbname",
        user=os.environ["REDSHIFT_USERNAME"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        cursor = connection.cursor()
        cursor.execute(sql_query, lst)
        sql_results = cursor.fetchall()
        return sql_results
    finally:
        if connection:
            connection.close()
I would like to mock the method so that it receives the sql_query, holds fake DB data (preferably in JSON), executes the SQL against that fake data, and returns the result.
Using mock.return_value and mock.side_effect will not help, because I want to verify that the SQL query is correct. Writing code that just returns canned results doesn't really check the SQL query.
Is there a framework in python for it?

Testing the SQL requires a SQL engine. Since different databases use different dialects and you use PostgreSQL as your main database, you should install a PostgreSQL instance in your dev environment, load it with fake data, and redirect your queries there while testing.
Since you already use environment variables to store the database connection details, you just have to set up a test environment that points to the test database.
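For example, with pytest you could point the environment variables at a local test instance before exercising the function. This is a minimal sketch, not part of the original answer: the host, user, password, module path and seeded row are assumptions, and because the question's function hard-codes port 5439 and dbname, the test database would have to match those (or the function would need to read them from the environment too).
import pytest

from myapp.queries import send_sql_query  # hypothetical module path

@pytest.fixture(autouse=True)
def redshift_test_env(monkeypatch):
    # Point the connection settings at the local test database instead of Redshift.
    monkeypatch.setenv("REDSHIFT_HOST", "localhost")
    monkeypatch.setenv("REDSHIFT_USERNAME", "test_user")
    monkeypatch.setenv("REDSHIFT_PASSWORD", "test_password")

def test_send_sql_query_returns_expected_rows():
    # Assumes the test database was seeded with one matching row beforehand.
    rows = send_sql_query("source", "SELECT id, name FROM users WHERE id = %s", [1])
    assert rows == [(1, "Alice")]
This way the real SQL is executed by a real PostgreSQL engine, so a wrong query fails the test instead of being papered over by a mock.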

Related

SQLite: How to enable explain plan with the Python API?

I'm using Python (and Peewee) to connect to a SQLite database. My data access layer (DAL) is a mix of peewee ORM and SQL-based functions. I would like to enable EXPLAIN PLAN for all queries upon connecting to the database and toggle it via configuration or CLI parameter ... how can I do that using the Python API?
from playhouse.db_url import connect
self._logger.info("opening db connection to database, creating cursor and initializing orm model ...")
self.__db = connect(url)
# add support for a REGEXP and POW implementation
# TODO: this should be added only for the SQLite case and doesn't apply to other vendors.
self.__db.connection().create_function("REGEXP", 2, regexp)
self.__db.connection().create_function("POW", 2, pow)
self.__cursor = self.__db.cursor()
self.__cursor.arraysize = 100
# what shall I do here to enable EXPLAIN PLANs?
Automatically showing query plans is a feature of the sqlite interactive shell, not of the Python API. To get the query plans from Python, you need to request them explicitly. This is not quite straightforward with Peewee because it uses parameterized queries, but you can get the SQL executed by Peewee in a couple of ways.
# Print all queries to stderr.
import logging
logger = logging.getLogger('peewee')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
Or for an individual query:
query = SomeModel.select()
sql, params = query.sql()
# To get the query plan:
curs = db.execute_sql('EXPLAIN ' + sql, params)
print(curs.fetchall()) # prints query plan
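If you want to toggle this via configuration or a CLI parameter, as the question asks, one possible approach (a sketch, not from the original answer; EXPLAIN_ENABLED, db and SomeModel are assumed names) is a small helper that only runs EXPLAIN QUERY PLAN when a flag is set:
# Sketch: conditionally print SQLite query plans for individual peewee queries.
# EXPLAIN_ENABLED would come from your configuration or a CLI flag.
EXPLAIN_ENABLED = True

def explain(query, db):
    if not EXPLAIN_ENABLED:
        return
    sql, params = query.sql()
    curs = db.execute_sql('EXPLAIN QUERY PLAN ' + sql, params)
    for row in curs.fetchall():
        print(row)

# Usage:
# explain(SomeModel.select(), db)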

Python Pycharm and SQL Server connection

I would like to use data from SQL Server in PyCharm using Python. I have my database connection set up in PyCharm, but I'm not sure how to access this data within my Python code. I would like to query the data within the Python code (similar to what I would do in R using the RODBC package).
Any suggestions on what to do or where to look would be much appreciated.
I have been having issues with this (database / Python) over the last few days while learning it. I am working in Flask, but that doesn't really seem to matter.
I did get this to work; it is not exactly what you ask, but it might give you a start:
import MySQLdb

def database():
    db = MySQLdb.connect(host="localhost", port=3306, user="root", passwd="admin", db="echo")
    cursor = db.cursor()
    cursor.execute("INSERT INTO `post` (`hello`) VALUES (null), ('hello_world')")
    db.commit()
    db.close()
I had to set up my database from the command line. It's not pretty or intuitive, but it should get you started.
If you want to work with Python objects rather than SQL, I'd use SqlAlchemy and reflection.
from sqlalchemy import MetaData, create_engine
from sqlalchemy.orm import Session
from sqlalchemy.ext.automap import automap_base
engine = create_engine('mysql+mysqldb://...', pool_recycle=3600)
metadata = MetaData()
metadata.reflect(bind=engine)
session = Session(engine)
Base = automap_base(metadata=metadata)
Base.prepare()
# assuming I have a table named 'users'
Users = Base.classes.users
someUsers = session.query(Users).filter(Users.name.in_(['Jack', 'Bob', 'YakMan'])).all()
import mysql.connector

connection = mysql.connector.connect(user='root', password='daniela', host='localhost', database='girrafe')
mycursor = connection.cursor()
There is a concept called ORM (Object Relational Mapping) in Python, which can be used for database connections. One of the modules you need to import for this purpose is SQLAlchemy.
First, you will need to install sqlalchemy by:
pip install sqlalchemy
Now, for the database connection, SQLAlchemy provides an Engine class that is responsible for database connectivity. We create an Engine object to establish the connection.
from sqlalchemy import create_engine, MetaData, select

engine = create_engine("mysql://user:pwd@localhost/dbname", echo=True)
connection = engine.connect()
The process of reading the database and creating metadata is called Reflection.
metadata = MetaData()
metadata.reflect(bind=engine)
student = metadata.tables['Student']  # assuming the database already has a table named Student
query = select([student])
result = connection.execute(query)
rows = result.fetchall()  # this works much like a SELECT * query
In this way, you can manipulate data through other queries too, using sqlalchemy!

stored proc not committing with sqlalchemy/pyodbc

I am using sqlalchemy/pyodbc to connect to a MS SQL 2012 server. I chose sqlalchemy because of the direct integration with pandas dataframes using .read_sql and .to_sql.
At a high level, my code is:
df = dataframe.read_sql("EXEC sp_getsomedata")
<do some stuff here>
finaldf.to_sql("loader_table", engine,...)
This part works great, very easy to read, etc. The problem is that I have to run a final stored proc to insert the data from the loader table into the live table. Normally, sqlalchemy knows to commit after INSERT/UPDATE/DELETE, but doesn't want to do the commit for me when I run this final stored proc.
After having tried multiple approaches, I see the transaction in the db sitting uncommitted. I know sqlalchemy is very flexible and I am using about 3% of its functionality, what is the simplest way to get this working? I think I need to be using sqlalchemy core and not ORM. I saw examples using sessionmaker, but I think that monopolizes the engine object and doesn't let pandas access it.
connection = engine.connect()
transaction = connection.begin()
connection.execute("EXEC sp_doLoaderStuff")
transaction.commit()
connection.close()
I have tried calling .execute from the connection level, from a cursor level, and even using the .raw_connection() method without success.
connection = engine.raw_connection()
connection.autocommit = True
cursor = connection.cursor()
cursor.execute("EXEC sp_doLoaderStuff")
connection.commit()
connection.close()
Any ideas what I am missing here?
Completely self-inflicted. The correct working code using the raw_connection() method is:
connection = engine.raw_connection()
cursor = connection.cursor()
cursor.execute("EXEC sp_doLoaderStuff")
connection.commit()
connection.close()
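For completeness, a Core-level alternative (a sketch, not from the original answer; the connection URL is a placeholder) is to let a transaction block own the commit instead of calling it yourself:
# Sketch: run the stored proc inside an explicit transaction block.
# engine.begin() commits on success and rolls back on error.
from sqlalchemy import create_engine, text

engine = create_engine("mssql+pyodbc://user:pass@mydsn")  # placeholder URL
with engine.begin() as conn:
    conn.execute(text("EXEC sp_doLoaderStuff"))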

Accessing test PostgreSQL database from python

I can't seem to correctly connect to and pull from a test PostgreSQL database in Python. I installed PostgreSQL using Homebrew. Here's how I have been accessing the database table and values from the terminal:
xxx-macbook:~ xxx$ psql
psql (9.4.0)
Type "help" for help.
xxx=# \dn
List of schemas
Name | Owner
--------+---------
public | xxx
(1 row)
xxx=# \connect postgres
You are now connected to database "postgres" as user "xxx".
postgres=# SELECT * from test.test;
coltest
-----------
It works!
(1 row)
But when I try to access it from Python using the code below, it doesn't work. Any suggestions?
########################################################################################
# Importing variables from PostgreSQL database via SQL commands
import psycopg2

db_conn = psycopg2.connect(database='postgres',
                           user='xxx')
cursor = db_conn.cursor()

# querying the database
result = cursor.execute("""
    Select * From test.test
""")
print "Result: ", result
>>> Result: None
It should say: Result: It works!
You need to fetch the results.
From the docs:
The execute() method returns None. If a query was executed, the returned values can be retrieved using fetch*() methods.
Example:
result = cursor.fetchall()
For reference:
http://initd.org/psycopg/docs/cursor.html#execute
http://initd.org/psycopg/docs/cursor.html#fetch
Note that (unlike psql) psycopg2 wraps everything in transactions. So if you intend to make persistent changes to the database (INSERT, UPDATE, DELETE, ...), you need to commit them explicitly. Otherwise changes will be rolled back automatically when the connection object is destroyed. Read more on that topic here:
http://initd.org/psycopg/docs/usage.html
http://initd.org/psycopg/docs/usage.html#transactions-control
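For example (a sketch; the table, column and value are just the ones shown in the question's psql session):
# Sketch: persisting an INSERT explicitly with psycopg2.
cursor.execute("INSERT INTO test.test (coltest) VALUES (%s)", ("It works!",))
db_conn.commit()  # without this, the INSERT is rolled back when the connection closes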

Database does not update automatically with MySQL and Python

I'm having some trouble updating a row in a MySQL database. Here is the code I'm trying to run:
import MySQLdb
conn=MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor=conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
cursor.execute("SELECT Co_num FROM compinfo WHERE ID=100")
results = cursor.fetchall()
for row in results:
    print row[0]
print "Number of rows updated: %d" % cursor.rowcount
cursor.close()
conn.close()
The output I get when I run this program is:
4
Number of rows updated: 1
It seems like it's working but if I query the database from the MySQL command line interface (CLI) I find that it was not updated at all. However, if from the CLI I enter UPDATE compinfo SET Co_num=4 WHERE ID=100; the database is updated as expected.
What is my problem? I'm running Python 2.5.2 with MySQL 5.1.30 on a Windows box.
I am not certain, but I am going to guess you are using an InnoDB table and you haven't done a commit. I believe MySQLdb enables transactions automatically.
Call conn.commit() before calling close.
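Applied to the script from the question, that looks like this (a sketch; host and credentials are the placeholders from the question):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor = conn.cursor()
cursor.execute("UPDATE compinfo SET Co_num=4 WHERE ID=100")
conn.commit()  # persist the UPDATE so other connections (and the CLI) can see it
cursor.close()
conn.close()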
From the FAQ: Starting with 1.2.0, MySQLdb disables autocommit by default
MySQLdb has autocommit off by default, which may be confusing at first. Your connection exists in its own transaction and you will not be able to see the changes you make from other connections until you commit that transaction.
You can either do conn.commit() after the update statement as others have pointed out, or disable this functionality altogether by setting conn.autocommit(True) right after you create the connection object.
You need to commit changes manually or turn auto-commit on.
The reason SELECT returns the modified (but not persisted) data is because the connection is still in the same transaction.
I've found that Python's connector automatically turns autocommit off, and there doesn't appear to be any way to change this behaviour. Of course you can turn it back on, but then looking at the query logs, it stupidly does two pointless queries after connect to turn autocommit off then back on.
Connector/Python Connection Arguments
Turning on autocommit can be done directly when you connect to a database:
import mysql.connector as db
conn = db.connect(host="localhost", user="root", passwd="pass", db="dbname", autocommit=True)
MySQLConnection.autocommit Property
Or separately:
import MySQLdb
conn = MySQLdb.connect(host="localhost", user="root", passwd="pass", db="dbname")
cursor = conn.cursor()
conn.get_autocommit() # will return **False**
conn.autocommit(True) # will make it True
conn.get_autocommit() # Should return **True** now
cursor = conn.cursor()
Explicitly committing the changes is done with
conn.commit()
I had to execute SET autocommit=true in my MySQL Workbench script.
