Python SQLite3 global database connection

Can anyone tell me if this will work: you open a SQLite database connection to your database file in the main of your Python application, and then reference that variable as a global connection each time you need to execute against the database in a function.
As seen below. Will this even work?
import sqlite3 as lite

con = lite.connect(database)

def db_add_records(_number, _data):
    global con
    cur = con.cursor()
    cur.execute("whatever SQL you think of")
    con.commit()
Instead of creating a new connection to the database inside each function every time, as I have seen some people do. Is that cleaner?
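For reference, a minimal runnable sketch of the pattern being asked about, with a hypothetical file name and table; note that global is only required if a function rebinds the connection name, not when it merely uses it:
import sqlite3 as lite

con = lite.connect("app.db")  # hypothetical database file
con.execute("CREATE TABLE IF NOT EXISTS records (number INTEGER, data TEXT)")

def db_add_records(_number, _data):
    # The module-level connection is visible here without "global",
    # since the name is only read, never reassigned.
    cur = con.cursor()
    cur.execute("INSERT INTO records (number, data) VALUES (?, ?)", (_number, _data))
    con.commit()

db_add_records(1, "example")
con.close()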

Related

SQLAlchemy: Get the database name from connection

I wonder if it is possible to get the database name after a connection. I know that it is possible with the engine created by the function 'create_engine' (here), and I would like to have the same possibility after a connection.
from sqlalchemy import create_engine, inspect
engine = create_engine('mysql+mysqldb://login:pass@localhost/MyDatabase')
print(engine.url.database)  # prints the database name from an engine
con = engine.connect()
I looked at the inspector tool, but there is no way to retrieve the database name like:
db_name = inspect(con.get_database_name())
Maybe it is not possible. Any idea?
Thanks a lot!
For MySQL, executing select DATABASE() as name_of_current_database should be sufficient. For SQL Server, it would be select DB_NAME() as name_of_current_database. I do not know of any inherently portable way of doing this that will work irrespective of the backend.
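A minimal sketch of that suggestion for the MySQL case, reusing the question's connection URL (the credentials themselves are placeholders):
from sqlalchemy import create_engine, text

engine = create_engine('mysql+mysqldb://login:pass@localhost/MyDatabase')
with engine.connect() as con:
    # SELECT DATABASE() returns the schema the connection is currently using
    db_name = con.execute(text("SELECT DATABASE() AS name_of_current_database")).scalar()
    print(db_name)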

PyMysql Update query within function using global cursor

I am trying to run an update query within a function, using a global cursor that has been set in a
"with MySQLdb.connect" statement in the main body. For some reason the update queries in the main body work, but the queries in the functions don't :(
Is there a way to get the error? None is being generated.
import MySQLdb
import sys

def updateFunction(data):
    global cur
    cur.execute("UPDATE1")
    sys.exit(0)

if __name__ == "__main__":
    data = "sample data, not important"
    with MySQLdb.connect(host="localhost", user="user", passwd="pass", db="db") as cur:
        cur.execute("UPDATE2")
        updateFunction(data)
In that example, UPDATE2 would run and UPDATE1 wouldn't.
The problem was that update queries need to be committed, and that has to be done through connection.commit().
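A sketch of the fix under that diagnosis: keep a reference to the connection itself so the function can commit after the update. The table and column names here are placeholders, not from the original code:
import MySQLdb

def updateFunction(conn, data):
    cur = conn.cursor()
    cur.execute("UPDATE some_table SET payload = %s", (data,))
    conn.commit()  # without this, the update is never made permanent

if __name__ == "__main__":
    conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="db")
    updateFunction(conn, "sample data, not important")
    conn.close()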

Python mysql doesn't see data change in database

I need some help with Python and MySQL.
I have the following code, which executes in an infinite loop:
db = MySQLdb.connect("127.0.0.1", "user", "password", "dbname")

while True:
    cursor = db.cursor()
    cursor.execute("SELECT * FROM requests WHERE status <> 'Finished'")
    all_pending_requests = cursor.fetchall()
    cursor.close()
That works fine the first time I run it. But then I go to a tool like MySQL Workbench, or type it in the terminal myself, and update some rows, setting their status to something that is not "Finished". The next time the loop executes it should return those rows, but I get nothing. Do you know why this is happening?
Thanks for help.
I am not certain, but I would assume you are using the InnoDB storage engine in MySQL and MySQLdb version >= 1.2.0. You need to commit before the changes are reflected. As of version 1.2.0, MySQLdb disables auto-commit by default; confirmation of this is here. Try adding db.commit() as the last line in the loop.
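A sketch of the loop with that fix applied; committing at the end of each iteration closes the current transaction, so the next SELECT sees rows changed by other sessions:
import MySQLdb

db = MySQLdb.connect("127.0.0.1", "user", "password", "dbname")

while True:
    cursor = db.cursor()
    cursor.execute("SELECT * FROM requests WHERE status <> 'Finished'")
    all_pending_requests = cursor.fetchall()
    cursor.close()
    db.commit()  # end the transaction so the next iteration gets a fresh snapshot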

How to read a database created with python with R

I've created a database with the python package sqlite3.
import sqlite3
conn=sqlite3.connect('foo.sqlite')
c=conn.cursor()
c.execute('CREATE TABLE foo (bar1 int, bar2 int)')
conn.commit()
conn.close
Then for statistical purposes I try to read this database with R (I use the R package RSQLite)
library('RSQLite')
drv=dbDriver('SQLite')
foo=dbConnect(drv,'foo.sqlite')
If I want to list the table I've just created with Python
dbListTables(foo)
R says that the database is empty:
character(0)
Am I doing something wrong, or can R not read a database created with Python?
Thanks for your help
Try closing your database connection in Python, rather than just referencing the close method without calling it:
conn.close()
Spot the difference? Then it all works for me.
> dbListTables(foo)
[1] "foo"
although it all works for me even if I don't close the connection, and even if I've not quit python after the commit. So, umm...
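For completeness, the corrected Python snippet implied by that answer; the only change is the parentheses on close(), which actually invoke the method:
import sqlite3

conn = sqlite3.connect('foo.sqlite')
c = conn.cursor()
c.execute('CREATE TABLE foo (bar1 int, bar2 int)')
conn.commit()
conn.close()  # conn.close without parentheses only references the method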

Procedure execute inside SQL Developer, but not inside a script

I had a procedure that was not working.
Whether I ran "BEGIN proc_name; END;" in SQL Developer or via the script, I got the same error.
I've fixed the procedure, and now that same command works fine in SQL Developer, but the script still returns an error.
When I try:
...
sql = """EXEC proc_name"""
con = connection.cursor()
con.execute(sql)
...
I get DatabaseError: ORA-00900: invalid SQL statement, but that is probably because of this: Problem with execute procedure in PL/SQL Developer, and I'm not really worried about it.
What is really making me curious is when I try:
...
sql = """BEGIN proc_name;END;"""
con = connection.cursor()
con.execute(sql)
...
I get the same error that I had before fixing the procedure.
Do you have any idea what is going on?
PS: This is a python script using cx_Oracle and I'm using Oracle 10g.
Try using the callproc() or callfunc() method on the cursor, instead of execute(). They are not exactly Py DB API compatible, but should do the job for cx_Oracle...
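A minimal cx_Oracle sketch of that suggestion, assuming proc_name takes no arguments and using hypothetical connection details:
import cx_Oracle

connection = cx_Oracle.connect("user/password@localhost/XE")  # hypothetical credentials
cur = connection.cursor()
cur.callproc("proc_name")  # wraps the call in an anonymous BEGIN ... END; block for you
connection.commit()
cur.close()
connection.close()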
