I am having problems loading data into an Access database. For testing purposes I built a little convert function that takes every dataset from an HDF file and writes it into the .accdb. Without the @event.listens_for(engine, "before_cursor_execute") hook it works, but very slowly. With it, I get odd behavior: only one empty table (from the first df) is created in the database and execution finishes. The for loop is never completed and no error is raised.
Maybe it’s because the sqlalchemy-access package doesn’t support fast_executemany, but I couldn’t find any related information about it. Does anyone have some input on how I can solve this, or on a faster way to write the data into the database?
Big thanks!
import urllib
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine, event
# PATHS
HOME = Path(__file__).parent
DATA_DIR = HOME / 'output'
FILE_ACCESS = DATA_DIR / 'db.accdb'
FILE_HDF5 = DATA_DIR / 'Data.hdf'
# FUNCTIONS
def convert_from_hdf_to_accb():
    # https://github.com/gordthompson/sqlalchemy-access/wiki/Getting-Connected
    driver = '{Microsoft Access Driver (*.mdb, *.accdb)}'
    conn_str = 'DRIVER={};DBQ={};'.format(driver, FILE_ACCESS)
    conn_url = "access+pyodbc:///?odbc_connect={}".format(urllib.parse.quote_plus(conn_str))

    # https://medium.com/analytics-vidhya/speed-up-bulk-inserts-to-sql-db-using-pandas-and-python-61707ae41990
    # https://github.com/pandas-dev/pandas/issues/15276
    # https://stackoverflow.com/questions/48006551/speeding-up-pandas-dataframe-to-sql-with-fast-executemany-of-pyodbc
    engine = create_engine(conn_url)

    @event.listens_for(engine, "before_cursor_execute")
    def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
        if executemany:
            cursor.fast_executemany = True

    with pd.HDFStore(path=FILE_HDF5, mode="r") as store:
        for key in store.keys():
            df = store.get(key)
            df.to_sql(name=key, con=engine, index=False, if_exists='replace')

    print(' IT NEVER REACHES AND DOESNT RAISE AN ERROR :( ')
# EXECUTE
if __name__ == "__main__":
    convert_from_hdf_to_accb()
"Maybe it’s because the sqlalchemy-access package doesn’t support fast_executemany"
That is true. pyodbc's fast_executemany feature requires that the driver support an internal ODBC mechanism called "parameter arrays", and the Microsoft Access ODBC driver does not support them.
See also
https://github.com/mkleehammer/pyodbc/wiki/Driver-support-for-fast_executemany
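In practical terms, the fix is to drop the event listener and let pandas issue ordinary executemany INSERTs. A minimal sketch reusing the engine and HDF store from the question (the chunksize value is only an assumption, used to keep individual batches modest):
# Sketch: same loop as in the question, but without the
# fast_executemany listener, which the Access driver cannot use.
with pd.HDFStore(path=FILE_HDF5, mode="r") as store:
    for key in store.keys():
        df = store.get(key)
        df.to_sql(name=key, con=engine, index=False,
                  if_exists='replace', chunksize=1000)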
Related
I am quite new to Python, so any advice or link will help.
I have created two Python scripts:
Main.py, which calls SQLcon.py.
SQLcon.py, which only creates the connection to the SQL server and downloads data based on multiple queries.
Later, Main.py reads/creates pandas dataframes from the Excel files downloaded by SQLcon, does calculations, and so on.
The file with the SQL connection and the queries, SQLcon.py, has the structure below.
Problems:
A) Quite a lot of queries are run and quite a lot of temporary files are created.
B) I do not want to keep the SQL-related code in the Main file.
Wanted outcome:
I want to use dfX = pd.read_sql_query(qryX, engine) (or similar) in the main file and get rid of the part that saves/reads Excel files.
Also, it would be nice to keep one connection open during all these queries, as multiple re-connections will slow down the code.
I am not sure how to start...
I am thinking of putting the main SQL connection into a function and calling it from Main...
But that would create multiple re-connections...
import sqlalchemy as sa  # and other imports

load_dotenv()
# .env passwords and etc.
'''...'''

# creating SQL connection via sqlalchemy
connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
engine = sa.create_engine(connection_url)
engine.echo = False

# creating dfs
df1 = pd.read_sql_query(qry1, engine)
dfA = pd.read_sql_query(qryA, engine)
dfZ = pd.read_sql_query(qryZ, engine)
engine.dispose()  # not sure if dispose() is needed

# saving dfs
df1.to_excel(r'C:\Test\df1_tbl_Data.xlsx', index=False)
dfA.to_excel(r'C:\Test\dfA_tbl_Data.xlsx', index=False)
dfZ.to_excel(r'C:\Test\dfZ_tbl_Data.xlsx', index=False)
Consider building a collection of your data pulls in a user-defined function. Then call it whenever needed from Main or other scripts:
SQLcon.py

import sqlalchemy as sa
# and other imports

load_dotenv()
# .env passwords and etc.
'''...'''

def pull_data():
    # creating SQL connection via sqlalchemy
    connection_url = URL.create(
        "mssql+pyodbc",
        query={"odbc_connect": connection_string}
    )
    engine = sa.create_engine(connection_url)
    engine.echo = False

    # creating dfs
    df_dict = {
        "df1": pd.read_sql_query(qry1, engine),
        "dfA": pd.read_sql_query(qryA, engine),
        "dfZ": pd.read_sql_query(qryZ, engine)
    }

    # releasing engine
    engine.dispose()

    return df_dict
Main.py (import above as a module)
from SQLcon import pull_data
...
# CALL AS NEEDED
df_dict = pull_data()
# ACCESS DICT ELEMENTS
df_dict["df1"]
df_dict["dfA"]
df_dict["dfZ"]
...
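If the re-connection concern from the question matters, a variation (a sketch, assuming the same connection_string and queries as above) is to create the engine once at module level in SQLcon.py and let pull_data() reuse it; the engine's connection pool then serves all the read_sql_query calls without re-negotiating a login per call:
# SQLcon.py -- sketch with a single module-level engine
import sqlalchemy as sa
# pd, URL, load_dotenv, connection_string and the queries as above

engine = sa.create_engine(
    URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})
)

def pull_data():
    # reuses the engine's pooled connections on every call
    return {
        "df1": pd.read_sql_query(qry1, engine),
        "dfA": pd.read_sql_query(qryA, engine),
        "dfZ": pd.read_sql_query(qryZ, engine),
    }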
I am setting up a new computer at work, and after installing Anaconda and the various other packages I have on my other computer, I am attempting to run some code that works fine on that machine.
However, when trying to use SQLAlchemy to import into Redshift, I am getting a new error that I can't find anything about via Google:
'SQLTable' object has no attribute 'insert_statement'
This appears to be some issue with pandas.io.sql, but I have no clue what.
Here is the code block:
import io
import time

from pandas.io.sql import SQLTable

def _execute_insert(self, conn, keys, data_iter):
    print("Using monkey-patched _execute_insert")
    data = [dict((k, v) for k, v in zip(keys, row)) for row in data_iter]
    conn.execute(self.insert_statement().values(data))

SQLTable._execute_insert = _execute_insert

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy import text

dbschema = 'xref'
engine = create_engine('not_showing_you_this_part',
                       connect_args={'options': '-csearch_path={}'.format(dbschema)})

# test
from sqlalchemy import event, create_engine

@event.listens_for(engine, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    if executemany:
        cursor.fast_executemany = True
        cursor.commit()
# end test

api_start_time = time.time()
print('starting SQL query')

# change yh to the dataframe you want to upload
# under name = : enter in the name of the table you want to create or append to
df.to_sql(name='computer_test', con=engine, if_exists='append', index=False)
print('sql insert took: ' + str((time.time() - api_start_time)) + ' seconds')
For reference, the monkey-patch part is from "How to speed up insertion from pandas.DataFrame .to_sql".
I kept searching for the answer, only to see that a gentleman had already answered your question in the comment section.
I was using very similar code for connecting and inserting to Redshift, and the mistake I was making was to use the line below:
conn.execute(self.insert_statement().values(data))
Replace the above with the code below:
conn.execute(self.table.insert().values(data))
Shoutout to https://stackoverflow.com/users/6560549/supershoot for answering it in the comments.
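For completeness, here is the monkey-patch from the question with that one line corrected (everything else unchanged):
from pandas.io.sql import SQLTable

def _execute_insert(self, conn, keys, data_iter):
    print("Using monkey-patched _execute_insert")
    data = [dict((k, v) for k, v in zip(keys, row)) for row in data_iter]
    # SQLTable no longer exposes insert_statement(); build the INSERT
    # from the underlying table object instead
    conn.execute(self.table.insert().values(data))

SQLTable._execute_insert = _execute_insert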
I am attempting to use SQL Server 2017 FILESTREAM from Python. All of the functionality I use goes through SQLAlchemy, so I am trying to find a way to do it there, since I haven't found any implementation within SQLAlchemy or other libraries (I may have missed something; if so, please point me to a working and tested implementation).
I have decided to approach this through the DLL, based on https://github.com/VisionMark/django-mssql-filestream/blob/master/sql_filestream/win32_streaming_api.py . However, my call to OpenSqlFilestream fails and returns -1 instead of a file handle. I have no idea what the issue is or how to fix it.
from ctypes import c_char, sizeof, windll
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import msvcrt
import os

msodbcsql = windll.LoadLibrary(r"C:\Windows\System32\msodbcsql17.dll")

engine = create_engine("mssql+pyodbc://user:pass@test/test?TrustedConnection=yes+driver=ODBC Driver+17+for+SQL+Server")
maker = sessionmaker(bind=engine)
session = maker()

## first query should begin a transaction
path = session.execute("SELECT file_stream.PathName() FROM test_filetable").fetchall()[0][0]
## this returns str like "\\\\test\\*"
context = session.execute("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()").fetchall()[0][0]
## returns bytes
_context = (c_char*len(context)).from_buffer_copy(context)

## This call fails
handle = msodbcsql.OpenSqlFilestream(
    path,              # FilestreamPath
    0,                 # DesiredAccess
    0,                 # OpenOptions
    _context,          # FilestreamTransactionContext
    sizeof(_context),  # FilestreamTransactionContextLength
    0                  # AllocationSize
)
## this returns -1 instead of a handle

## Never reached, but this should create a usable file object
desc = msvcrt.open_osfhandle(handle, os.O_RDONLY)
_file = os.fdopen(desc, 'r')
All of the queries work and return (as far as I understand) correct data.
How do I obtain FILESTREAM access to a file on SQL Server 2017 from Python (3.7)?
Edit: The objects I read go up to gigabytes in size, and the process only needs stream access.
My guess is that your issue is related to
the fact that a SQLAlchemy Session is much more than just a raw DB API Connection, and/or
the transaction context is not appropriate for your invocation of OpenSqlFilestream
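Purely to illustrate the first point (a sketch, not a tested FILESTREAM solution): on the pyodbc side you would reach for the raw DBAPI connection, so that the transaction that produced GET_FILESTREAM_TRANSACTION_CONTEXT() is still open and is the same one in effect when OpenSqlFilestream is called:
# Sketch only: raw pyodbc connection from the SQLAlchemy engine,
# keeping both queries and the OpenSqlFilestream call in one transaction.
raw_conn = engine.raw_connection()   # DBAPI (pyodbc) connection, autocommit off
cursor = raw_conn.cursor()
cursor.execute("SELECT file_stream.PathName() FROM test_filetable")
path = cursor.fetchone()[0]
cursor.execute("SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()")
context = cursor.fetchone()[0]
# ... call OpenSqlFilestream with path/context here, read the data, then ...
raw_conn.commit()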
For what it's worth, the following works for me with CPython 3.7.2 and pythonnet 2.4.0:
import clr
clr.AddReference("System.Data")
from System.Data import IsolationLevel
from System.Data.SqlClient import SqlCommand, SqlConnection
from System.Data.SqlTypes import SqlFileStream
from System.IO import File, FileAccess, FileOptions
# adapted from c# code at
# https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/filestream-data
connection_string = r"Data Source=(local)\SQLEXPRESS;Initial Catalog=myDB;Integrated Security=True"
con = SqlConnection(connection_string)
con.Open()
sql = """\
SELECT Photo.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT()
FROM employees WHERE EmployeeID = 1"""
cmd = SqlCommand(sql, con)
tran = con.BeginTransaction(IsolationLevel.ReadCommitted)
cmd.Transaction = tran
rdr = cmd.ExecuteReader()
rdr.Read()
path = rdr.GetString(0)
transaction_context = rdr.GetSqlBytes(1).Buffer
rdr.Close()
allocation_size = 0
input_stream = SqlFileStream(path, transaction_context,
                             FileAccess.Read, FileOptions.SequentialScan, allocation_size)
output_stream = File.Create(r"C:\Users\Gord\Desktop\photo.bmp")
input_stream.CopyTo(output_stream)
output_stream.Close()
input_stream.Close()
tran.Commit()
con.Close()
I'm trying to do some testing on our JDBC driver using Python.
After initially figuring out JPype, I eventually managed to connect the driver and execute SELECT queries like so (reproducing a generalized snippet):
from __future__ import print_function
from jpype import *
#Start JVM, attach the driver jar
jvmpath = 'path/to/libjvm.so'
classpath = 'path/to/JDBC_Driver.jar'
startJVM(jvmpath, '-ea', '-Djava.class.path=' + classpath)
# Magic line 1
driver = JPackage('sql').Our_Driver
# Initiating a connection via DriverManager()
jdbc_uri = 'jdbc:our_database://localhost:port/database'
conn = java.sql.DriverManager.getConnection(jdbc_uri, 'user', 'passwd')
# Executing a statement
stmt = conn.createStatement()
rs = stmt.executeQuery ('select top 10 * from some_table')
# Extracting results
while rs.next():
    ''' Magic #2 - rs.getStuff() only works inside a while loop '''
    print(rs.getString('col_name'))
However, I've failed to do batch inserts, which is what I wanted to test. Even when executeBatch() returned a jpype int[], which should indicate a successful insert, the table was not updated.
I then decided to try out py4j.
My plight - I'm having a hard time figuring out how to do the same thing as above. It is said py4j does not start a JVM on its own, and that the Java code needs to be prearranged with a GatewayServer(), so I'm not sure it's even feasible.
On the other hand, there's a library named py4jdbc that does just that.
I tinkered through the dbapi.py code but didn't quite understand the flow, and am pretty much jammed.
If anyone understands how to load a JDBC driver from a .jar file with py4j and can point me in the right direction, I'd be very grateful.
Add a commit after adding the records and before retrieving:
conn.commit()
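In the JPype flow from the question, that looks roughly like this (a sketch; the table name and values are made up):
# JDBC batch insert via JPype, with the commit that was missing
stmt = conn.createStatement()
stmt.addBatch("insert into some_table values (1, 'a')")
stmt.addBatch("insert into some_table values (2, 'b')")
update_counts = stmt.executeBatch()  # int[] of affected-row counts
conn.commit()                        # without this the rows are never persisted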
I met a similar problem in Airflow. I used the Teradata JDBC jars and jaydebeapi to connect to the Teradata database and execute SQL:
[root@myhost transfer]# cat test_conn.py
import jaydebeapi
from contextlib import closing
jclassname='com.teradata.jdbc.TeraDriver'
jdbc_driver_loc = '/opt/spark-2.3.1/jars/terajdbc4-16.20.00.06.jar,/opt/spark-2.3.1/jars/tdgssconfig-16.20.00.06.jar'
jdbc_driver_name = 'com.teradata.jdbc.TeraDriver'
host='my_teradata.address'
url='jdbc:teradata://' + host + '/TMODE=TERA'
login="teradata_user_name"
psw="teradata_passwd"
sql = "SELECT COUNT(*) FROM A_TERADATA_TABLE_NAME where month_key='202009'"
conn = jaydebeapi.connect(jclassname=jdbc_driver_name,
                          url=url,
                          driver_args=[login, psw],
                          jars=jdbc_driver_loc.split(","))

with closing(conn) as conn:
    with closing(conn.cursor()) as cur:
        cur.execute(sql)
        print(cur.fetchall())
[root@myhost transfer]# python test_conn.py
[(7734133,)]
[root@myhost transfer]#
In py4j, with your respective JDBC uri:
from py4j.java_gateway import JavaGateway
# Open JVM interface with the JDBC Jar
jdbc_jar_path = '/path/to/jdbc_driver.jar'
gateway = JavaGateway.launch_gateway(classpath=jdbc_jar_path)
# Load the JDBC driver class
jdbc_class = "com.vendor.VendorJDBC"
gateway.jvm.Class.forName(jdbc_class)

# Initiate connection
jdbc_uri = "jdbc://vendor:192.168.x.y:zzzz;..."
con = gateway.jvm.java.sql.DriverManager.getConnection(jdbc_uri)

# Run a query
sql = "select this from that"
stmt = con.createStatement()
rs = stmt.executeQuery(sql)
while rs.next():
    rs.getInt(1)
    rs.getFloat(2)
    # ...
rs.close()
stmt.close()
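When you are done, it may also be worth closing the connection and shutting down the JVM that launch_gateway() started; a short sketch:
con.close()
gateway.shutdown()  # stops the gateway JVM started by JavaGateway.launch_gateway()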
I have a Pyramid / SQLAlchemy / MySQL Python app.
When I execute a raw SQL INSERT query, nothing gets written to the DB.
When using the ORM, however, I can write to the DB. I read the docs, read up about the ZopeTransactionExtension, and read a good deal of SO questions, all to no avail.
What hasn't worked so far:
transaction.commit() - nothing is written to the DB. I do realize this statement is necessary with ZopeTransactionExtension but it just doesn't do the magic here.
dbsession().commit - doesn't work since I'm using ZopeTransactionExtension
dbsession().close() - nothing written
dbsession().flush() - nothing written
mark_changed(session) -
File "/home/dev/.virtualenvs/sc/local/lib/python2.7/site-packages/zope/sqlalchemy/datamanager.py", line 198, in join_transaction
    if session.twophase:
AttributeError: 'scoped_session' object has no attribute 'twophase'
What has worked but is not acceptable because it doesn't use scoped_session:
engine.execute(...)
I'm looking for how to execute raw SQL with a scoped_session (dbsession() in my code)
Here is my SQLAlchemy setup (models/__init__.py):

from sqlalchemy import engine_from_config
from sqlalchemy.orm import scoped_session, sessionmaker

_dbsession = None

def dbsession():
    assert (_dbsession is not None)
    return _dbsession

def init_engines(settings, _testing_workarounds=False):
    import zope.sqlalchemy
    extension = zope.sqlalchemy.ZopeTransactionExtension()
    global _dbsession
    _dbsession = scoped_session(
        sessionmaker(
            autoflush=True,
            expire_on_commit=False,
            extension=extension,
        )
    )
    engine = engine_from_config(settings, 'sqlalchemy.')
    _dbsession.configure(bind=engine)
Here is a Python script I wrote to isolate the problem. It resembles the real-world environment in which the problem occurs. All I want is for the script below to insert the data into the DB:
# -*- coding: utf-8 -*-
import sys
import transaction

from pyramid.paster import setup_logging, get_appsettings
from sc.models import init_engines, dbsession
from sqlalchemy.sql.expression import text

def __main__():
    if len(sys.argv) < 2:
        raise RuntimeError()
    config_uri = sys.argv[1]
    setup_logging(config_uri)
    aa = init_engines(get_appsettings(config_uri))

    session = dbsession()
    session.execute(text("""INSERT INTO
        operations (description, generated_description)
        VALUES ('hello2', 'world');"""))
    print list(session.execute("""SELECT * from operations""").fetchall())  # prints inserted data

    transaction.commit()
    print list(session.execute("""SELECT * from operations""").fetchall())  # doesn't print inserted data

if __name__ == '__main__':
    __main__()
What is interesting is that if I do:
session = dbsession()
session.execute(text("""INSERT INTO
operations (description, generated_description)
VALUES ('hello2', 'world');"""))
op = Operation(generated_description='aa', description='oo')
session.add(op)
then the first print outputs the raw SQL inserted row ('hello2' 'world'), and the second print prints both rows, and in fact both rows are inserted into the DB.
I cannot comprehend why using an ORM insert alongside raw SQL "fixes" it.
I really need to be able to call execute() on a scoped_session to insert data into the DB using raw SQL. Any advice?
It has been a while since I mixed raw SQL with SQLAlchemy, but whenever you mix them, you need to be aware of what happens behind the scenes with the ORM. First, check the autocommit flag: if the Zope transaction is not configured correctly, the ORM insert might be triggering a commit.
Actually, after looking at the zope docs, it seems manual execute statements need an extra step. From their readme:
By default, zope.sqlalchemy puts sessions in an 'active' state when they are
first used. ORM write operations automatically move the session into a
'changed' state. This avoids unnecessary database commits. Sometimes it
is necessary to interact with the database directly through SQL. It is not
possible to guess whether such an operation is a read or a write. Therefore we
must manually mark the session as changed when manual SQL statements write
to the DB.
>>> session = Session()
>>> conn = session.connection()
>>> users = Base.metadata.tables['test_users']
>>> conn.execute(users.update(users.c.name=='bob'), name='ben')
<sqlalchemy.engine...ResultProxy object at ...>
>>> from zope.sqlalchemy import mark_changed
>>> mark_changed(session)
>>> transaction.commit()
>>> session = Session()
>>> str(session.query(User).all()[0].name)
'ben'
>>> transaction.abort()
It seems you aren't doing that, and so the transaction.commit does nothing.
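Applied to the script in the question, the missing step would be a mark_changed() call on the actual Session before transaction.commit(). A sketch (assuming, per the traceback in the question, that dbsession() hands back the scoped_session proxy, so the underlying Session is obtained by calling it):
from zope.sqlalchemy import mark_changed

session = dbsession()
session.execute(text("""INSERT INTO
    operations (description, generated_description)
    VALUES ('hello2', 'world');"""))

# mark_changed() expects a real Session object; calling the
# scoped_session proxy returns the Session it manages
mark_changed(session())

transaction.commit()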