I have a stored procedure in Postgres called sales, and it works well from pgAdmin:
CALL sales();
However, when I call it from Python:
import psycopg2
conn = psycopg2.connect(host=...)
cur = conn.cursor()
cur.callproc('sales')
conn.commit()
I get the following error message:
psycopg2.ProgrammingError: sales() is a procedure
LINE 1: SELECT * FROM sales()
^
HINT: To call a procedure, use CALL.
Assuming your procedure is called sales, you just need to "call" it e.g. CALL sales()
https://www.postgresql.org/docs/11/sql-call.html
I see what you are getting at; the Python tutorial here is misleading:
"Calling a PostgreSQL stored procedure in Python steps"
http://www.postgresqltutorial.com/postgresql-python/call-stored-procedures/
Essentially, callproc is currently outdated (written for Postgres 10 and below) and still treats procedures as functions. So unless they update this, you will need to execute your own SQL in this instance, like so:
cur.execute("CALL sales();")
or, if the sales procedure requires inputs:
cur.execute("CALL sales(%s, %s);", (val1, val2))
Try this code to call a PostgreSQL stored procedure from a Python script:
import time
import psycopg2
from sqlalchemy import create_engine
from urllib.parse import quote
# define your PostgreSQL connection here:
host = "Provide Host Name"
dbname = "Provide Database Name"
user = "Provide User"
password = "Provide Password"
engine = create_engine('postgresql://{}:{}@{}:5432/{}'.format(user, quote(password), host, dbname))
conn = engine.raw_connection()
cur = conn.cursor()
# this is the code to call the stored procedure:
cur.execute('''CALL storedProcedureName()''')
conn.commit()  # mandatory, because we want to commit the changes to the DB
cur.close()
conn.close()
time.sleep(60)  # timeout (optional)
I am getting an "Unread result found" error when executing the SQL commands shown below. The Python code is in a Docker container, and so is the MySQL DB. I have truncated the code to highlight where the issue is.
import mysql.connector
from datetime import datetime
import requests
import json
import math
import os
import logging
# Use logging.info() to output info to the console
logging.basicConfig(level=logging.INFO)
# Connecting to the MySQL Docker image
cnx = mysql.connector.connect(user='test', password='test', host='db', database='VisualDB')
logging.info(cnx.is_connected())
# Use mycursor for pointing to tables and making queries
mycursor = cnx.cursor()
userRows = mycursor.execute("SELECT * FROM user;")
cnx.commit()
logging.info(userRows)
sensorRows = mycursor.execute("SELECT * FROM SENSOR;")
cnx.commit()
logging.info(sensorRows)
This is the error I get: mysql.connector.errors.InternalError: Unread result found
I did a bunch of commenting out to confirm these lines are the issue; I also sometimes get it on cnx.commit().
Is there something wrong with my image itself? mysql:latest
Also the database is created using a Dockerfile in the "db" folder pointing to a sql file
Thank you in advance for any advice you can provide!
mycursor.execute("SELECT * FROM user;") does not return the rows; they remain buffered on the cursor, and logging.info(userRows) does not consume them, so the next call to mycursor.execute raises that error.
Consume the result set before issuing the next query, e.g. with mycursor.fetchall().
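For example, a corrected sketch of the snippet above:
mycursor.execute("SELECT * FROM user;")
userRows = mycursor.fetchall()  # consume the result set before the next execute()
logging.info(userRows)
mycursor.execute("SELECT * FROM SENSOR;")
sensorRows = mycursor.fetchall()
logging.info(sensorRows)
Note that plain SELECTs do not need cnx.commit().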
I have a sqlite db in my home dir.
stephen@stephen-AO725:~$ pwd
/home/stephen
stephen@stephen-AO725:~$ sqlite db1
SQLite version 2.8.17
Enter ".help" for instructions
sqlite> select * from test
...> ;
3|4
5|6
sqlite> .quit
When I try to connect from a Jupyter notebook with SQLAlchemy and pandas, something does not work.
db=sqla.create_engine('sqlite:////home/stephen/db1')
pd.read_sql('select * from db1.test',db)
~/anaconda3/lib/python3.7/site-packages/sqlalchemy/engine/default.py in do_execute(self, cursor, statement, parameters, context)
578
579 def do_execute(self, cursor, statement, parameters, context=None):
--> 580 cursor.execute(statement, parameters)
581
582 def do_execute_no_params(self, cursor, statement, context=None):
DatabaseError: (sqlite3.DatabaseError) file is not a database
[SQL: select * from db1.test]
(Background on this error at: http://sqlalche.me/e/4xp6)
I also tried:
db=sqla.create_engine('sqlite:///~/db1')
same result
Personally, just to complete the code of @Stephen with the modules required:
# 1.-Load module
import sqlalchemy
import pandas as pd
#2.-Turn on database engine
dbEngine=sqlalchemy.create_engine('sqlite:////home/stephen/db1.db') # ensure this is the correct path for the sqlite file.
#3.- Read data with pandas
pd.read_sql('select * from test',dbEngine)
#4.- I also want to add a new table from a dataframe in sqlite (a small one)
df_todb.to_sql(name = 'newTable',con= dbEngine, index=False, if_exists='replace')
Another way to read is using the sqlite3 library, which may be more straightforward:
#1. - Load libraries
import sqlite3
import pandas as pd
# 2.- Create your connection.
cnx = sqlite3.connect('/home/stephen/db1.db')  # sqlite3.connect takes a plain file path, not a URL
cursor = cnx.cursor()
# 3.- Query and print all the tables in the database engine
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
print(cursor.fetchall())
# 4.- READ TABLE OF SQLITE CALLED test
dfN_check = pd.read_sql_query("SELECT * FROM test", cnx) # we need real name of table
# 5.- Now I want to delete all rows of this table
cnx.execute("DELETE FROM test;")
# 6. -COMMIT CHANGES! (mandatory if you want to save these changes in the database)
cnx.commit()
# 7.- Close the connection with the database
cnx.close()
Please let me know if this helps!
import sqlalchemy
engine = sqlalchemy.create_engine('sqlite:///db1.db')
Note that you need three slashes in sqlite:/// in order to use a relative path for the DB. If you want an absolute path, use four slashes: sqlite:////
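For instance:
from sqlalchemy import create_engine

rel_engine = create_engine('sqlite:///db1.db')                # relative: ./db1.db
abs_engine = create_engine('sqlite:////home/stephen/db1.db')  # absolute: /home/stephen/db1.db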
The issue is a lack of backward compatibility, as noted by @Everila: Anaconda installs its own sqlite, which is SQLite 3.x, and that sqlite cannot load databases created by SQLite 2.x.
After creating the db with SQLite 3, the code works fine:
db=sqla.create_engine('sqlite:////home/stephen/db1')
pd.read_sql('select * from test',db)
which confirms the 4 slashes are needed.
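If you want to check which format a file is in, you can inspect its 16-byte header; SQLite 3 files start with the string "SQLite format 3":
with open('/home/stephen/db1', 'rb') as f:
    header = f.read(16)
print(header)  # b'SQLite format 3\x00' for a 3.x database; anything else is another format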
None of the SQLAlchemy solutions worked for me with Python 3.10.6 and SQLAlchemy 2.0.0b4; it could be a beta issue, or version 2.0.0 changed things. @corina-roca's solution was close, but not right, as you need to pass a connection object, not an engine object. That's what the documentation says, but it didn't actually work for me. After a bit of experimentation, I discovered that engine.raw_connection() works, although you get a warning on the CLI. Here are my working examples:
The sqlite3 one works out of the box, but it's not ideal if you are thinking of changing databases later:
import sqlite3
conn = sqlite3.connect("/home/stephen/db1")  # plain file path, not a URL
df = pd.read_sql_query('SELECT * FROM test', conn)
df.head()
# works, no problem
sqlalchemy lets you abstract your db away
from sqlalchemy import create_engine, text
engine = create_engine("sqlite:////home/stephen/db1")
conn = engine.connect() # <- this is also what you are supposed to
# pass to pandas... it doesn't work
result = conn.execute(text("select * from test"))
for row in result:
    print(row)  # outside pandas, this works, proving that
                # the connection is established
conn = engine.raw_connection() # with this workaround, it works; but you
# get a warning UserWarning: pandas only
# supports SQLAlchemy connectable ...
df = pd.read_sql_query(sql='SELECT * FROM test', con=conn)
df.head()
I tried inserting the values into the DB through Python. However, I do not get any error, but I also do not see the DB updating. Please advise.
#!/usr/bin/python
import MySQLdb
val = MySQLdb.connect(host='localhost', user='root', passwd='root123',
                      db='expenses')

def access_db(val):
    access = val.cursor()
    sql = """Insert into monthly values (2,'Food',1000)"""
    access.execute(sql)
    val.commit()
    val.close()
Output from DB after the script execution:
MariaDB[expenses]> select * from monthly;
SL_no Type Amount
1 Fuel 500
I do not find the second entry in Db.
I don't think you are calling the access_db() function anywhere.
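A minimal fix is to invoke it at the end of the script:
access_db(val)  # actually run the INSERT and the commit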
I've been following this 5 part tutorial from this YouTube playlist: https://www.youtube.com/playlist?list=PLQVvvaa0QuDezJh0sC5CqXLKZTSKU1YNo
I'm using Jupyter notebook with Python with the following code:
import sqlite3
import time
import datetime
import random
conn = sqlite3.connect("tutorial2.db")
c = conn.cursor()
Then I create several functions.
def create_table():
    c.execute('CREATE TABLE IF NOT EXISTS stuffToPlot (unix REAL, datestamp TEXT, keyword TEXT, value REAL)')

def data_entry():
    c.execute("INSERT INTO stuffToPlot VALUES (145123542, '2016-01-03', 'Python', 7)")
    conn.commit()
    c.close()
    conn.close()
create_table()
data_entry()
It works fine the first time, and generates a db file in C:\Users\Michael
However, when I try to run only the create_table() function again, I get the following error:
ProgrammingError: Cannot operate on a closed cursor.
Anyone able to help resolving this issue would be greatly appreciated!
The error is pretty explicit: You cannot run queries on closed cursors. Here it's even worse since you have also closed the connection at the first call of the data_entry() function.
I would advise opening a cursor for each query, closing it after completing the query, and only closing the connection at the end of your script:
import sqlite3
import time
import datetime
import random
conn = sqlite3.connect("tutorial2.db")
def create_table():
    c = conn.cursor()
    c.execute('CREATE TABLE IF NOT EXISTS stuffToPlot (unix REAL, datestamp TEXT, keyword TEXT, value REAL)')
    c.close()

def data_entry():
    c = conn.cursor()
    c.execute("INSERT INTO stuffToPlot VALUES (145123542, '2016-01-03', 'Python', 7)")
    conn.commit()
    c.close()
create_table()
data_entry()
conn.close()
By moving the conn.close() statement after you have completed all of your queries, and opening the cursors only when you need them, the error won't occur anymore.
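As a variant, here is a sketch of the same code using context managers, so the cursors are closed automatically and the connection commits on success (contextlib.closing is from the standard library):
import sqlite3
from contextlib import closing

conn = sqlite3.connect("tutorial2.db")

def create_table():
    with closing(conn.cursor()) as c:
        c.execute('CREATE TABLE IF NOT EXISTS stuffToPlot (unix REAL, datestamp TEXT, keyword TEXT, value REAL)')

def data_entry():
    with conn, closing(conn.cursor()) as c:  # "with conn" commits on success
        c.execute("INSERT INTO stuffToPlot VALUES (145123542, '2016-01-03', 'Python', 7)")

create_table()
data_entry()
conn.close()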
EDIT : What is happening in your video is the following:
He first executes the whole script once.
He comments out the line that creates the table.
He executes the whole script a second time.
I think you are probably entering the commands in a Python interactive session, which is not equivalent to what he is doing in the video: when he re-executes the script, a new connection and cursor are created, whereas if you only call the function again, the cursor and connection are already closed, which causes the error.
While writing a script to convert raw data for MySQL import, I have so far worked with a temporary text file, which I later imported manually using the LOAD DATA INFILE ... command.
Now I have included the import command in the Python script:
db = mysql.connector.connect(user='root', password='root',
host='localhost',
database='myDB')
cursor = db.cursor()
query = """
LOAD DATA INFILE 'temp.txt' INTO TABLE myDB.values
FIELDS TERMINATED BY ',' LINES TERMINATED BY ';';
"""
cursor.execute(query)
cursor.close()
db.commit()
db.close()
This works, but temp.txt has to be in the database directory, which isn't suitable for my needs.
The next approach is dropping the temporary file and committing the rows directly:
db = mysql.connector.connect(user='root', password='root',
host='localhost',
database='myDB')
sql = "INSERT INTO values(`timestamp`,`id`,`value`,`status`) VALUES(%s,%s,%s,%s)"
cursor=db.cursor()
for line in lines:
    mode, year, julian, time, *values = line.split(",")
    del values[5]
    date = datetime.strptime(year+julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    for i, value in enumerate(values[:20], 1):
        args = (timestamp, str(i+28), value, mode)
        cursor.execute(sql, args)
db.commit()
This works as well, but takes around four times as long, which is too much. (The same for loop was used in the first version to generate temp.txt.)
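A batched variant using cursor.executemany (an untested sketch of the same loop) might reduce the per-statement overhead, but I expect a file-based load to remain faster:
rows = []
for line in lines:
    mode, year, julian, time, *values = line.split(",")
    del values[5]
    date = datetime.strptime(year+julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    rows.extend((timestamp, str(i+28), value, mode) for i, value in enumerate(values[:20], 1))
cursor.executemany(sql, rows)  # sends the rows in batches instead of one execute() per row
db.commit()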
My conclusion is that I need a file and the LOAD DATA INFILE command to be faster. To be free to choose where the text file is placed, the LOCAL option seems useful. But with MySQL Connector (1.1.7) there is this known error:
mysql.connector.errors.ProgrammingError: 1148 (42000): The used command is not allowed with this MySQL version
So far I've seen that using MySQLdb instead of MySQL Connector can be a workaround. Activity on MySQLdb, however, seems low, and Python 3.3 support will probably never come.
Is LOAD DATA LOCAL INFILE the way to go and if so is there a working connector for python 3.3 available?
EDIT: After development the database will run on a server, script on a client.
I may have missed something important, but can't you just specify the full filename in the first chunk of code?
LOAD DATA INFILE '/full/path/to/temp.txt'
Note the path must be a path on the server.
To use LOAD DATA LOCAL INFILE with any accessible file, you have to set the LOCAL_FILES client flag while creating the connection:
import mysql.connector
from mysql.connector.constants import ClientFlag
db = mysql.connector.connect(client_flags=[ClientFlag.LOCAL_FILES], <other arguments>)
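For example (a sketch; the connection details are placeholders, and the MySQL server must also allow this via its local_infile setting):
import mysql.connector
from mysql.connector.constants import ClientFlag

db = mysql.connector.connect(user='root', password='root', host='localhost',
                             database='myDB',
                             client_flags=[ClientFlag.LOCAL_FILES])
cursor = db.cursor()
cursor.execute("LOAD DATA LOCAL INFILE '/path/on/client/temp.txt' "
               "INTO TABLE myDB.`values` "  # backticks, since values is a reserved word
               "FIELDS TERMINATED BY ',' LINES TERMINATED BY ';'")
db.commit()
cursor.close()
db.close()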