Speed up insertion of pandas dataframe using fast_executemany Python pyodbc

I am trying to insert data contained in a .csv file from my pc to a remote server. The values are inserted in a table that contains 3 columns, namely Timestamp, Value and TimeseriesID. I have to insert approximately 3000 rows at a time, therefore I am currently using pyodbc and executemany.
My code up to now is the one shown below:
with contextlib.closing(pyodbc.connect(connection_string, autocommit=True)) as conn:
    with contextlib.closing(conn.cursor()) as cursor:
        cursor.fast_executemany = True  # new in pyodbc 4.0.19
        # Insert values into the DataTable table
        insert_df = df[["Time (UTC)", column]]
        insert_df["id"] = timeseriesID
        insert_df = insert_df[["id", "Time (UTC)", column]]
        sql = "INSERT INTO %s (%s, %s, %s) VALUES (?, ?, ?)" % (
            sqltbl_datatable, 'TimeseriesId', 'DateTime', 'Value')
        params = [i.tolist() for i in insert_df.values]
        cursor.executemany(sql, params)
Since I am using pyodbc 4.0.19, I have set the fast_executemany option to True, which is supposed to speed things up. However, for some reason I do not see any great improvement when I enable it. Is there an alternative way to speed up the insertion of my file?
Moreover, regarding the performance of the code shown above, I noticed that when I disable autocommit=True and instead call cursor.commit() at the end of my code, the data is imported significantly faster. Is there a specific reason for this that I am not aware of?
Any help would be greatly appreciated :)

Regarding the cursor.commit() speed-up that you are noticing: when you use autocommit=True you are asking the driver to execute one database transaction per INSERT. This means the code resumes only after the database confirms the data is stored on disk. When you instead call cursor.commit() after the numerous INSERTs, you are effectively executing one database transaction, and in the interim the data is kept in RAM (it may be written to disk, but not all of it at the moment you instruct the database to finalize the transaction).
Finalizing a transaction typically entails updating tables on disk, updating indexes, flushing logs, syncing copies, etc., which is costly. That is why you observe such a speed-up between the two scenarios you describe.
When going the faster way, please note that until you execute cursor.commit() you cannot be 100% sure that the data is in the database, so you may need to reissue the query in case of an error (any partial transaction will be rolled back).
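For illustration, here is a minimal sketch of that single-transaction variant, assuming the connection_string, sqltbl_datatable and params variables built in the question (a sketch, not a drop-in replacement for your script):
import contextlib
import pyodbc

# Sketch only: connection_string, sqltbl_datatable and params are assumed to be
# the same objects built in the question above.
with contextlib.closing(pyodbc.connect(connection_string, autocommit=False)) as conn:
    with contextlib.closing(conn.cursor()) as cursor:
        cursor.fast_executemany = True  # pyodbc >= 4.0.19
        sql = ("INSERT INTO %s (TimeseriesId, DateTime, Value) VALUES (?, ?, ?)"
               % sqltbl_datatable)
        cursor.executemany(sql, params)
        conn.commit()  # one transaction for the whole batch of ~3000 rows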

Related

Heroku Database: Insert operation

I am confused about inserting data into my Postgres database on Heroku.
Here's the thing:
I have created a connection to the database, then
cursor = conn.cursor()
cursor.execute("INSERT INTO users(username, useremail, userpass) VALUES ('"+_name+"','"+_email+"','"+_password+"')")
After executing, I checked the sql status by
print(cursor.statusmessage)
it returns,
INSERT 0 1
but on executing
data = cursor.fetchall()
it throws an error:
File "/Users/abc/PycharmProjects/testSkillNetwork/app.py",
line 75, in signUp
data = cursor.fetchall().
ProgrammingError: no results to fetch
So, I am unable to understand why there are 'no results' when the insertion is successful.
Any help will be appreciated. Thanks.
You need to issue a SELECT query in order to retrieve data from the database.
cursor.execute("SELECT * FROM users")
cursor.fetchall()
This should give you some results.
Also, you should commit the transaction once you have finished inserting data, otherwise it will be lost. Use:
conn.commit()
Another, bigger issue is that the way you construct your queries is vulnerable to SQL injection. Rather than using string concatenation, you should use parameterised queries:
cursor.execute("INSERT INTO users(username, useremail, userpass) VALUES (%s, %s, %s)", (_name,_email,_password))
With this style the database adapter will substitute the placeholders (%s) with the values from the tuple of arguments passed to cursor.execute(). Not only is this safer, it is also a lot easier to read and maintain.
I am not sure which driver you are using to connect to the database, but assuming it is psycopg2, which is one of the most popular, what you're observing is normal behaviour. From the documentation:
A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
An INSERT statement produces no result, other than an error in case of failure. If you want to obtain the rows that you've just inserted, query the database again:
cur.execute("SELECT * FROM users;")
cur.fetchall()
and this will give you the rows.
Aside from this, as the basic usage documentation and the section on parameterized queries explain, never use Python string concatenation when building your queries, because it makes them vulnerable to SQL injection attacks.
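Putting both corrections together, here is a minimal sketch assuming psycopg2 and the users table from the question (DATABASE_URL is a placeholder for your Heroku connection string):
import psycopg2

conn = psycopg2.connect(DATABASE_URL)  # placeholder connection string
cursor = conn.cursor()

# parameterised INSERT: the adapter substitutes the %s placeholders safely
cursor.execute(
    "INSERT INTO users (username, useremail, userpass) VALUES (%s, %s, %s)",
    (_name, _email, _password))
conn.commit()  # persist the insert

# an INSERT produces no result set, so query the table to read rows back
cursor.execute("SELECT * FROM users")
data = cursor.fetchall()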

Importing a database with psycopg2 results in a table with fewer lines than expected [duplicate]

I am using the psycopg2 module in Python to read from a Postgres database, and I need to perform some operation on every row of a column in a table that has more than 1 million rows.
I would like to know whether cur.fetchall() would fail or cause my server to go down (since my RAM might not be big enough to hold all that data).
q="SELECT names from myTable;"
cur.execute(q)
rows=cur.fetchall()
for row in rows:
doSomething(row)
What is the smarter way to do this?
The solution Burhan pointed out reduces the memory usage for large datasets by only fetching single rows:
row = cursor.fetchone()
However, I noticed a significant slowdown when fetching rows one by one. I access an external database over an internet connection, which might be a reason for it.
Using a server-side cursor and fetching batches of rows proved to be the most performant solution. You can change the SQL statements (as in alecxe's answer), but there is also a pure Python approach using the feature provided by psycopg2:
cursor = conn.cursor('name_of_the_new_server_side_cursor')
cursor.execute(""" SELECT * FROM table LIMIT 1000000 """)
while True:
    rows = cursor.fetchmany(5000)
    if not rows:
        break
    for row in rows:
        # do something with row
        pass
You can find more about server-side cursors in the psycopg2 wiki.
Consider using server side cursor:
When a database query is executed, the Psycopg cursor usually fetches
all the records returned by the backend, transferring them to the
client process. If the query returned an huge amount of data, a
proportionally large amount of memory will be allocated by the client.
If the dataset is too large to be practically handled on the client
side, it is possible to create a server side cursor. Using this kind
of cursor it is possible to transfer to the client only a controlled
amount of data, so that a large dataset can be examined without
keeping it entirely in memory.
Here's an example:
cursor.execute("DECLARE super_cursor BINARY CURSOR FOR SELECT names FROM myTable")
while True:
cursor.execute("FETCH 1000 FROM super_cursor")
rows = cursor.fetchall()
if not rows:
break
for row in rows:
doSomething(row)
fetchall() pulls the entire remaining result set into memory at once (it is fetchmany() that honours the arraysize limit), so to prevent a massive memory hit you can either fetch rows in manageable batches or simply step through the cursor until it is exhausted:
row = cur.fetchone()
while row:
    # do something with row
    row = cur.fetchone()
Here is the approach to use for a simple server-side cursor with the speed of fetchmany-style batching.
The principle is to use a named cursor in psycopg2 and give it a good itersize so that it loads many rows at once, as fetchmany would, but with a single for rec in cursor loop that performs the batched fetching implicitly.
With this approach I run queries over 150 million rows from a multi-billion-row table within 1 hour, using about 200 MB of RAM.
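A minimal sketch of that approach, assuming a psycopg2 connection and the myTable / doSomething names from the question (the cursor name and itersize value are arbitrary):
import psycopg2

conn = psycopg2.connect(conn_string)       # conn_string is a placeholder
cur = conn.cursor(name='my_named_cursor')  # a named cursor is server-side
cur.itersize = 20000                       # rows transferred per round trip

cur.execute("SELECT names FROM myTable")
for row in cur:          # iterates row by row, fetching itersize rows at a time
    doSomething(row)

cur.close()
conn.close()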
EDIT: using fetchmany (along with fetchone() and fetchall()), even with a row limit (arraysize), will still send the entire result set, keeping it client-side (stored in the underlying C library, I think libpq) for any additional fetchmany() calls, etc. Without using a named cursor (which would require an open transaction), you have to resort to using LIMIT in the SQL together with an ORDER BY, then analysing the results and augmenting the next query with WHERE (ordered_val = %(last_seen_val)s AND primary_key > %(last_seen_pk)s OR ordered_val > %(last_seen_val)s).
This is misleading on the library's part, to say the least, and there should be a blurb about it in the documentation. I don't know why it's not there.
I'm not sure a named cursor is a good fit unless you need to scroll forward/backward interactively; I could be wrong here.
The fetchmany loop is tedious but I think it's the best solution here. To make life easier, you can use the following:
from functools import partial
from itertools import chain

# chain.from_iterable is available in Python 2.6+
from_iterable = chain.from_iterable

# util function
def run_and_iterate(curs, sql, parms=None, chunksize=1000):
    if parms is None:
        curs.execute(sql)
    else:
        curs.execute(sql, parms)
    # call curs.fetchmany(chunksize) repeatedly until it returns an empty list
    chunks_until_empty = iter(partial(curs.fetchmany, chunksize), [])
    return from_iterable(chunks_until_empty)

# example scenario
for row in run_and_iterate(cur, 'SELECT * FROM waffles_table WHERE num_waffles > %s', (10,)):
    print 'lots of waffles: %s' % (row,)
As I was reading the comments and answers, I thought I should clarify something about fetchone and server-side cursors for future readers.
With normal (client-side) cursors, Psycopg fetches all the records returned by the backend and transfers them to the client process, where the whole record set is buffered in the client's memory. This happens as soon as you execute a query like curs.execute('SELECT * FROM ...').
This question also confirms that.
All the fetch* methods are there for accessing this stored data.
Q: So how does fetchone help us memory-wise?
A: It fetches only one record from the stored data and creates a single Python object to hand to your Python code, while fetchall fetches that data and creates n Python objects, handing them all to you in one chunk.
So if your table has 1,000,000 records, this is what's going on in memory:
curs.execute --> whole 1,000,000 result set + fetchone --> 1 Python object
curs.execute --> whole 1,000,000 result set + fetchall --> 1,000,000 Python objects
Of course fetchone helps, but we still have all the records in memory on the client. This is where server-side cursors come into play:
PostgreSQL also has its own concept of cursor (sometimes also called
portal). When a database cursor is created, the query is not
necessarily completely processed: the server might be able to produce
results only as they are needed. Only the results requested are
transmitted to the client: if the query result is very large but the
client only needs the first few records it is possible to transmit
only them.
...
their interface is the same, but behind the scene they
send commands to control the state of the cursor on the server (for
instance when fetching new records or when moving using scroll()).
So you won't get the whole result set in one chunk.
The drawback:
The downside is that the server needs to keep track of the partially
processed results, so it uses more memory and resources on the server.
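To make the difference concrete, a short sketch (the connection string is a placeholder; myTable and doSomething come from the question):
import psycopg2

conn = psycopg2.connect(conn_string)  # placeholder

# client-side cursor: execute() buffers the full result set in client memory
client_cur = conn.cursor()

# server-side (named) cursor: rows stay on the server and are streamed on demand
server_cur = conn.cursor(name='streaming_cursor')
server_cur.itersize = 5000  # batch size per network round trip

server_cur.execute("SELECT names FROM myTable")
for row in server_cur:
    doSomething(row)  # only a small window of rows is in client memory at once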

How to commit an UPDATE in raw SQL in Python

cursor = connection.cursor()
cursor.execute("UPDATE public.rsvp SET status=TRUE WHERE rsvp_id=%s", [rsvp_id])
cursor.execute("SELECT status, rsvp_id FROM public.rsvp WHERE rsvp_id=%s", [rsvp_id])
row = cursor.fetchall()
When I execute this in my Django project, I get the row returned as I expect to see it, but when I later run a SELECT query for the same row, it appears as though the UPDATE was never really run. In my schema, the column "status" defaults to NULL, and after this runs I still see NULL in my table.
You didn't specify what database you're dealing with, which may change the answer somewhat. However, with most database connections you need to finish with connection.commit() to really save the changes to the database. This applies to both UPDATE and INSERT operations. Failing to commit() usually results in the actions being rolled back.
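For example, with the cursor from the question, a minimal sketch (assuming connection is the same DB-API/Django connection object used above):
cursor = connection.cursor()
cursor.execute("UPDATE public.rsvp SET status=TRUE WHERE rsvp_id=%s", [rsvp_id])
connection.commit()  # finalize the transaction so the UPDATE is actually saved

# re-reading the row now reflects the committed change
cursor.execute("SELECT status, rsvp_id FROM public.rsvp WHERE rsvp_id=%s", [rsvp_id])
row = cursor.fetchall()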

Combine inserts into one transaction Python SQLite3

I am trying to insert thousands of rows into SQLite3, but the time it takes is way too long. I've heard speed is greatly increased if the inserts are combined into one transaction. However, I cannot seem to get SQLite3 to skip checking that the file is written to the hard disk.
This is a sample:
if repeat != 'y':
    c.execute('INSERT INTO Hand (number, word) VALUES (null, ?)', [wordin[wordnum]])
    print wordin[wordnum]
data.commit()
This is what I have at the beginning:
data = connect('databasenew')
data.isolation_level = None
c = data.cursor()
c.execute('begin')
However, it does not seem to make a difference. A way to increase the insert speed would be much appreciated.
According to the SQLite documentation, a transaction started with BEGIN should be ended with COMMIT:
Transactions can be started manually using the BEGIN command. Such
transactions usually persist until the next COMMIT or ROLLBACK
command. But a transaction will also ROLLBACK if the database is
closed or if an error occurs and the ROLLBACK conflict resolution
algorithm is specified. See the documentation on the ON CONFLICT
clause for additional information about the ROLLBACK conflict
resolution algorithm.
So, your code should be like this:
data = connect('databasenew')
data.isolation_level = None  # manage transactions explicitly
c = data.cursor()
c.execute('begin')
if repeat != 'y':
    c.execute('INSERT INTO Hand (number, word) VALUES (null, ?)', [wordin[wordnum]])
    print wordin[wordnum]
data.commit()        # no-op while isolation_level is None
c.execute('commit')  # ends the transaction started by 'begin'
https://stackoverflow.com/a/3689929/1147726 answers the question. execute('begin') does not have any effect. Apparently, a connection.commit() is sufficient.
(In case someone is still looking for an answer to this)
You should use executemany if you are just doing thousands of inserts in succession.
Look at What is the optimized way to insert large number of records (more than 40,000) in sqlite3
I just struggled with a LOT (on the order of millions) of execute calls that were taking about 30 minutes to complete; switching to executemany brought it down to about 10 minutes.
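For example, a minimal sketch using the table and names from the original question (wordin is assumed to be the full list of words to insert):
from sqlite3 import connect

data = connect('databasenew')
c = data.cursor()

# one parameter tuple per row; the whole batch runs inside a single transaction
rows = [(word,) for word in wordin]
c.executemany('INSERT INTO Hand (number, word) VALUES (null, ?)', rows)

data.commit()  # a single commit for all of the inserts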
You can use executemany, see this SO question: python sqlite question - Insert method

How to update records in SQLAlchemy in a loop

I am trying to use SQLSoup, the SQLAlchemy extension, to update records in a SQL Server 2008 database. I am using pyodbc for the connections. There are a number of issues which make it hard to find a relevant example.
I am reprojecting a geometry field in a very large table (2 million+ records), so many of the standard ways of updating fields cannot be used. I need to extract the coordinates from the geometry field as text, convert them, and pass them back in. All of this is fine, and all the individual pieces are working.
However, I want to execute a SQL UPDATE statement on each row while looping through the records one by one. I assume this places locks on the record set, or that the connection is in use, because if I use the code below it hangs after successfully updating the first record.
Any advice on how to create a new connection, reuse the existing one, or accomplish this another way is appreciated.
s = select([text("%s as fid" % id_field),
            text("%s.STAsText() as wkt" % geom_field)],
           from_obj=[feature_table])

rs = s.execute()
for row in rs:
    new_wkt = ReprojectFeature(row.wkt)
    update_value = "geometry :: STGeomFromText('%s',%s)" % (new_wkt, "3785")
    update_sql = ("update %s set GEOM3785 = %s where %s = %i" %
                  (full_name, update_value, id_field, row.fid))
    conn = db.connection()
    conn.execute(update_sql)
    conn.close()  # or not - no effect..
Updated working code now looks like this. It works fine on a few records, but hangs on the whole table, so I guess it is reading in too much data.
db = SqlSoup(conn_string)
# create outer query
Session = sessionmaker(autoflush=False, bind=db.engine)
session = Session()
rs = session.execute(s)
for row in rs:
    # create update sql...
    session.execute(update_sql)
session.commit()
I now get connection busy errors.
DBAPIError: (Error) ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
It looks like this could be a problem with the ODBC driver - http://sourceitsoftware.blogspot.com/2008/06/connection-is-busy-with-results-for.html
Further Update:
On the server, using Profiler, it shows the SELECT statement and then the first UPDATE statement "starting", but neither completes.
If I set the Select statement to return the top 10 rows, then it does complete and the updates run.
SQL: Batch Starting Select...
SQL: Batch Starting Update...
I believe this is an issue with pyodbc and the SQL Server drivers. If I remove SQLAlchemy and execute the same SQL with pyodbc, it also hangs, even if I create a new connection object for the updates.
I also tried the SQL Server Native Client 10.0 driver, which is meant to allow MARS (Multiple Active Result Sets), but it made no difference. In the end I resorted to "paging the results" and updating the batches using pyodbc and SQL (see below); however, I thought SQLAlchemy would have been able to do this for me automatically.
Try using a Session.
rs = s.execute() then becomes rs = session.execute(s), and you can replace the last three lines with session.execute(update_sql). I'd also suggest configuring your Session with autocommit off and calling session.commit() at the end.
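A minimal sketch of that suggestion, reusing db and s from the question (build_update_sql is a hypothetical helper standing in for the per-row UPDATE construction shown above):
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=db.engine, autoflush=False)  # autocommit is off by default
session = Session()

rs = session.execute(s)
for row in rs:
    update_sql = build_update_sql(row)  # hypothetical: build the UPDATE as in the question
    session.execute(update_sql)

session.commit()  # commit once at the end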
May I suggest that when your process hangs you run sp_who2 on the SQL Server box and see what is happening. Check for blocked SPIDs and see if anything suggests what is going on. If you do find a SPID that is blocking others, you can run DBCC INPUTBUFFER(spid) and see if that tells you what query it executed. Otherwise you can also attach SQL Profiler and trace your calls.
In some cases it can also be parallelism on the SQL Server that causes blocking. Unless this is a data warehouse, I suggest turning your max degree of parallelism (MAXDOP) down to 1. Let me know, and when I check this again in the morning, if you still need help I'll be glad to assist.
Until I find another solution I am using a single connection and custom SQL to return sets of records, and updating these in batches. I don't think what I am doing is a particularly unique case, so I am not sure why I cannot handle multiple result sets simultaneously.
The code below works, but is very, very slow:
cnxn = pyodbc.connect(conn_string, autocommit=True)
cursor = cnxn.cursor()

# get total recs in the database
s = "select count(fid) as count from table"
count = cursor.execute(s).fetchone().count

# choose number of records to update in each iteration
batch_size = 100
counter = 0  # initialise the update counter

for i in range(1, count, batch_size):
    # sql to bring back relevant records in each batch
    s = """SELECT fid, wkt FROM (SELECT ROW_NUMBER() OVER(ORDER BY FID ASC) AS 'RowNumber',
                  FID,
                  GEOM29902.STAsText() as wkt
           FROM %s) features
           WHERE RowNumber >= %i AND RowNumber <= %i""" % (full_name, i, i + batch_size)
    rs = cursor.execute(s).fetchall()
    for row in rs:
        new_wkt = ReprojectFeature(row.wkt)
        # ...create update sql statement for the record
        cursor.execute(update_sql)
        counter += 1

cursor.close()
cnxn.close()
