Pandas MySQL last inserted id with SQLAlchemy create_engine - Python

I am trying to insert pandas DataFrame data into a MySQL database using SQLAlchemy. The table table_test has one AUTO_INCREMENT column. I want to get the value of the AUTO_INCREMENT column and use it in a later part of the program.
Below is my code:
from sqlalchemy import create_engine
mysqldb = create_engine("mysql://test:password@localhost/test")
df.to_sql(con=mysqldb, name='table_test',index=False, if_exists='append')
print (mysqldb.insert_id())
However, the last print line gives me an error.
File "test.py", line 208, in StagingCountLesserThanTarget
mysqldb.insert_id() AttributeError: 'Engine' object has no attribute 'insert_id'
How do I get the last inserted id in MySQL using SQLAlchemy?
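One way to do this (a sketch, assuming a single-row append: MySQL's LAST_INSERT_ID() is scoped per connection and, for a multi-row insert, returns the id of the first row of the batch) is to run the insert and the lookup on the same connection:
from sqlalchemy import create_engine, text
mysqldb = create_engine("mysql://test:password@localhost/test")
with mysqldb.connect() as conn:
    # LAST_INSERT_ID() is tracked per connection, so the insert and
    # the lookup must share this connection.
    df.to_sql(con=conn, name='table_test', index=False, if_exists='append')
    last_id = conn.execute(text("SELECT LAST_INSERT_ID()")).scalar()
    print(last_id)
    conn.commit()  # commit explicitly (required with 2.0-style connections)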

Related

How can I drop this table using SQLAlchemy?

I am trying to drop a table called 'New'. I currently have the following code:
import pandas as pd
import sqlalchemy
sqlcon = sqlalchemy.create_engine('mssql://ABSECTDCS100TL/AdventureWorks?driver=ODBC+Driver+17+for+SQL+Server')
df = pd.read_sql_query('SELECT * FROM DimReseller', sqlcon)
df.to_sql('New',sqlcon,if_exists='append', index=False)
sqlalchemy.schema.New.drop(bind=None, checkfirst=False)
I am receiving the error:
AttributeError: module 'sqlalchemy.schema' has no attribute 'New'
Any ideas on what I'm missing here? Thanks.
You can reflect the table into a Table object and then call its drop method:
from sqlalchemy import Table, MetaData

# Reflect the existing table's definition from the database,
# then drop it.
tbl = Table('New', MetaData(), autoload_with=sqlcon)
tbl.drop(sqlcon, checkfirst=False)
If you want to delete the table using raw SQL, you can do this:
from sqlalchemy import text
with sqlcon.connect() as conn:
    # Follow the identifier quoting convention for your RDBMS
    # to avoid problems with mixed-case names.
    conn.execute(text("""DROP TABLE "New" """))
    # Commit if necessary
    conn.commit()

Pandas 0.24 read_sql operational errors

I just upgraded to Pandas 0.24.0 from 0.23.4 (Python 2.7.12), and many of my pd.read_sql queries are breaking. It looks like something related to MySQL, but it's strange that these errors only occur after updating my pandas version. Any ideas what's going on?
Here's my MySQL table:
CREATE TABLE `xlations_topic_update_status` (
`run_ts` datetime DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
Here's my query:
import pandas as pd
from sqlalchemy import create_engine
db_engine = create_engine('mysql+mysqldb://<><>/product_analytics', echo=False)
pd.read_sql('select max(run_ts) from product_analytics.xlations_topic_update_status', con = db_engine).values[0][0]
And here's the error:
OperationalError: (_mysql_exceptions.OperationalError) (1059, "Identifier name 'select max(run_ts) from product_analytics.xlations_topic_update_status;' is too long") [SQL: 'DESCRIBE `select max(run_ts) from product_analytics.xlations_topic_update_status;`']
I've also gotten this for other more complex queries, but won't post them here.
According to the documentation, the first argument is either a string (a table name) or a SQLAlchemy Selectable (a select or text object). In other words, pd.read_sql() is delegating to pd.read_sql_table() here and treating the entire query string as a table identifier, which is why the error shows a DESCRIBE statement run against the query text.
Wrap your query string in a text() construct first:
stmt = text('select max(run_ts) from product_analytics.xlations_topic_update_status')
pd.read_sql(stmt, con = db_engine).values[0][0]
This way pd.read_sql() will delegate to pd.read_sql_query() instead. Another option is to call it directly.
Try using pd.read_sql_query(sql, con) instead of pd.read_sql(...).
So:
pd.read_sql_query('select max(run_ts) from product_analytics.xlations_topic_update_status', con = db_engine).values[0][0]

Fetch csv using Python mysql connector

I am trying to fetch a .csv file stored in a MySQL database as a BLOB.
I first tried with a .txt file and everything worked perfectly, but I now have an issue when it comes to the .csv file: though the SELECT query works in MySQL, it doesn't seem to work from Python. This error keeps appearing:
SystemError: <built-in method fetch_row of _mysql_connector.MySQL object at 0x00000172B02080B0> returned a result with an error set
Here is my code:
import mysql.connector
cnx = mysql.connector.connect(user='user', password='password', database='test')
cursor = cnx.cursor()
query = ("SELECT * FROM blob_test where id=1")
cursor.execute(query)
test_file = cursor.fetchone()
FYI, I am just testing with a simple table with 3 columns: (id SMALLINT, name VARCHAR, file BLOB).
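One possible workaround (an assumption, not a confirmed fix) is to force the connector's pure-Python implementation with use_pure=True, since the SystemError above is raised inside the C extension (_mysql_connector). The BLOB column then comes back as bytes that can be written straight to a file:
import mysql.connector

# use_pure=True selects the pure-Python implementation instead of the
# C extension that raised the SystemError above.
cnx = mysql.connector.connect(user='user', password='password',
                              database='test', use_pure=True)
cursor = cnx.cursor()
cursor.execute("SELECT * FROM blob_test WHERE id=1")
row = cursor.fetchone()  # row is (id, name, file)
with open('retrieved.csv', 'wb') as f:  # 'retrieved.csv' is an arbitrary name
    f.write(row[2])  # the BLOB column holds the raw csv bytes
cnx.close()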

Python SQLAlchemy: psycopg2.ProgrammingError relation already exists?

I have repeatedly tried to create a table MYTABLENAME with SQLAlchemy in Python. I deleted all tables through my SQL client DBeaver, but I am still getting an error that the table exists:
Traceback (most recent call last):
  File "/home/hhh/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/home/hhh/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
psycopg2.ProgrammingError: relation "ix_MYTABLENAME_index" already exists
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "ix_MYTABLENAME_index" already exists
[SQL: 'CREATE INDEX "ix_MYTABLENAME_index" ON "MYTABLENAME" (index)']
Creating tables and inserting into them succeeds when I use a unique name, but the second time around I get the error above despite having deleted the tables in DBeaver.
Small example:
from datetime import date
from sqlalchemy import create_engine
import numpy as np
import pandas as pd

def storePandasDF2PSQL(myDF_):
    # Store results as a Pandas DataFrame in a PostgreSQL database.
    #
    # Example:
    # df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
    # dbName = date.today().strftime("%Y%m%d")+"_TABLE"
    # engine = create_engine('postgresql://hhh:yourPassword@localhost:1234/hhh')
    # df.to_sql(dbName, engine)
    df = myDF_
    dbName = date.today().strftime("%Y%m%d")+"_TABLE"
    engine = create_engine('postgresql://hhh:yourPassword@localhost:1234/hhh')
    # ERROR: NameError: name 'table' is not defined
    # table.declarative_base.metadata.drop_all(engine)  # Drop all tables
    # TODO: This step causes errors because SQLAlchemy thinks the
    # TODO: table still exists even though it was deleted
    df.to_sql(dbName, engine)
What is the proper way to clean up the backend such as some hanging index in order to recreate the table with fresh data? In other words, how to solve the error?
The issue might be on the SQLAlchemy side: it believes the index still exists because the tables were deleted outside of SQLAlchemy, so it was never notified of the deletion. There is a SQLAlchemy way of dropping the tables, through the declarative base's metadata (where Base = declarative_base()):
Base.metadata.drop_all(engine)
This keeps SQLAlchemy informed about the deletions.
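Tables created with df.to_sql() have no declarative models behind them, so here is a minimal sketch of the same idea, assuming the goal is simply to drop whatever currently exists in the database (indexes are dropped together with their tables) and reusing the connection URL from the question:
from sqlalchemy import MetaData, create_engine

engine = create_engine('postgresql://hhh:yourPassword@localhost:1234/hhh')
meta = MetaData()
meta.reflect(bind=engine)   # load the tables that actually exist in the database
meta.drop_all(bind=engine)  # drop them all, indexes included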
This answer does not address reusing the same table names, and hence does not cover cleaning up the SQLAlchemy metadata.
Instead of reusing the table names, append the execution time to the end of the table name:
import time
dbName = date.today().strftime("%Y%m%d")+"_TABLE_"+str(time.time())
dbTableName = dbName
That way your SQL development environment (e.g. an SQL client locking the connection or specific tables) does not matter as much. Closing DBeaver while running the Python/SQLAlchemy code can also help.

SQLite Python Blaze - Attempting to create a table after dropping a table of same name returns old schema

I am trying to work out why the schema of a dropped table comes back when I attempt to create a table of the same name using a different set of column names.
After dropping the table, I can confirm in an SQLite explorer that the table has disappeared. But when I try to load the new file via odo, it returns the error "Column names of incoming data don't match column names of existing SQL table", and I can see that the same table has been re-created in the database using the previously dropped schema. I attempted a VACUUM statement after dropping the table, but the issue remains.
I can create the table fine using a different table name, but I am totally confused as to why I can't reuse the previously dropped table name.
import sqlite3
import pandas as pd
from odo import odo, discover, resource, dshape
conn = sqlite3.connect(dbfile)
c = conn.cursor()
c.execute("DROP TABLE <table1>")
c.execute("VACUUM")
importfile = pd.read_csv(csvfile)
odo(importfile, 'sqlite:///<db_path>::<table1>')
ValueError: Column names of incoming data don't match column names of existing SQL table Names in SQL table:
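The following shows, using plain sqlite3, that dropping and re-creating a table with the same name works as expected, which suggests the stale schema is being cached on the odo side rather than in SQLite itself: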
import sqlite3

conn = sqlite3.connect('test.db')
cursor = conn.cursor()
table = """CREATE TABLE IF NOT EXISTS TABLE1 (
    id integer PRIMARY KEY,
    name text NOT NULL
);"""
cursor.execute(table)
conn.commit()  # Save table into database.
cursor.execute("DROP TABLE TABLE1")
conn.commit()  # Save that table has been dropped.
cursor.execute(table)
conn.commit()  # Save that table has been created.
conn.close()
