Error AttributeError: 'Connection' object has no attribute 'fetchall' - python

I have the following SQL database details:
import os
import sqlalchemy as sch
from config import Config
db_uri = os.environ["SQLALCHEMY_DATABASE_URI"] + os.environ["DB_NAME"]
In the env file I have these:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://*****:*****@instance-amazonaws.com:3306/'
DB_NAME = 'DB_NAME'
Now:
db_engine = sch.create_engine(Config.db_uri)
db_connection = db_engine.connect()
When I try this query:
db_connection.execute("select count(*) from table")
query_result = db_connection.fetchall()
It gives the following error:
AttributeError: 'Connection' object has no attribute 'fetchall'
What is the problem here?

Wild guess:
query = db_connection.execute("select count(*) from table")
query_result = query.fetchall()

This might work with SQLAlchemy
import sqlalchemy as sch
import pprint

with db_engine.begin() as conn:
    qry = sch.text("select count(*) from table")
    resultset = conn.execute(qry)
    results_as_dict = resultset.mappings().all()
    pprint.pprint(results_as_dict)
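If only the single count value is needed, the result object can also be reduced with scalar(); a minimal sketch, assuming the db_engine above and SQLAlchemy 1.4 or newer:
with db_engine.begin() as conn:
    # execute() returns a result object; scalar() pulls the first column of the first row
    row_count = conn.execute(sch.text("select count(*) from table")).scalar()
    print(row_count)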

Related

Python Error: Not all parameters were used in the SQL statement

Trying to read a CSV and save the information to a MySQL database.
I get several errors:
ProgrammingError: Not all parameters were used in the SQL statement
AttributeError: 'MySQLCursor' object has no attribute 'rollback'
Which I think is connected to:
cursor.executemany(sql_insert,records)
cursor.commit();
cursor.rollback()
import mysql.connector as sql_db
import pypyodbc as odbc
import pyodbc
import pandas as pd
import csv
df = pd.read_csv(r'C:\Users\xxx\Documents\Python Scripts\Databases\Testfiles\test.csv',sep=";")
columns = ['Id', 'Name', 'Url', 'ImageUrl', 'MaxNumberOfRooms', 'MinNumberOfRooms', 'MaxArea', 'MaxPrice']
df_data = df[columns]
records = df_data.values.tolist()
mydb = sql_db.connect(
    host="127.0.0.1",
    user="Admin",
    password="AdminPassword",
    database="dbTest"
)
sql_insert = """
INSERT INTO TestTable
VALUES (%s,%s,%s,%s,%s,%s,%s,%s)
"""
try:
    cursor = mydb.cursor()
    cursor.executemany(sql_insert, records)
    mydb.commit()
except Exception as e:
    mydb.rollback()
    print(str(e))
finally:
    cursor.close()
    mydb.close()
Try -
VALUES (?,?,?,?,?,?,?,?,GETDATE())
with Name, Url, ImageUrl, MaxNumberOfRooms, MinNumberOfRooms, MaxArea, MaxPrice etc. being your variables.
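If the error instead comes from a mismatch between the eight %s placeholders and the columns of TestTable, another common fix is to name the target columns explicitly so the placeholder count matches each record. A sketch only, with the column names assumed from the columns list in the question:
sql_insert = """
INSERT INTO TestTable
    (Id, Name, Url, ImageUrl, MaxNumberOfRooms, MinNumberOfRooms, MaxArea, MaxPrice)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
"""
cursor = mydb.cursor()
cursor.executemany(sql_insert, records)  # each record must have exactly eight values
mydb.commit()  # commit() and rollback() live on the connection, not the cursor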

How to execute multiple queries in pandas?

How to execute the following queries with sqlalchemy?
import pandas as pd
import urllib
from sqlalchemy import create_engine
from sqlalchemy.types import NVARCHAR
params = urllib.parse.quote_plus(r'DRIVER={SQL Server};SERVER=localhost\SQLEXPRESS;Trusted_Connection=yes;DATABASE=my_db;autocommit=true;MultipleActiveResultSets=True')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = create_engine(conn_str, encoding='utf-8-sig')
with engine.connect() as con:
    con.execute('Declare @latest_date nvarchar(8);')
    con.execute('SELECT @latest_date = max(date) FROM my_table')
    df = pd.read_sql_query('SELECT * from my_db where date = @latest_date', conn_str)
However, an error occurred:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Must declare the scalar variable "@latest_date". (137) (SQLExecDirectW)')
How to solve this problem?
Thanks.
You don't need to declare a variable or use that many queries; you can do it with just one query:
SELECT *
FROM my_db
WHERE date = (SELECT max(date) FROM my_db)
And then you can use it like this; I use backticks because date is a reserved word:
with engine.connect() as con:
    query = "SELECT * FROM my_db WHERE `date` = (SELECT max(`date`) FROM my_db)"
    df = pd.read_sql(query, con=con)
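Note that backticks are MySQL quoting; against the SQL Server connection from the question, the same single-query idea would use T-SQL square brackets if quoting is wanted at all. A sketch, reusing the engine and my_table defined in the question:
query = "SELECT * FROM my_table WHERE [date] = (SELECT max([date]) FROM my_table)"
df = pd.read_sql_query(query, engine)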

How to import all the tables from a Postgres schema using python

I know I can do it manually using sqlalchemy and pandas:
from sqlalchemy import create_engine
import pandas.io.sql as psql

dbschema = 'myschema'
engine = create_engine('postgresql://XX:YY@localhost:5432/DB',
                       connect_args={'options': '-csearch_path={}'.format(dbschema)})
df = psql.read_sql('Select * from myschema."df"', con=engine)
But is it possible to do a loop and get all the tables?
I tried something like:
tables = engine.table_names()
print(tables)
['A', 'B']
for table in tables:
    table = psql.read_sql('Select * from myschema."%(name)s"', con=engine, params={'name': table})
I get this message:
LINE 1: Select * from myschema.'A'
I guess the problem is caused by my quotes but I am not so sure.
EDIT:
So I tried the example here: Passing table name as a parameter in psycopg2
import psycopg2
from psycopg2 import sql

try:
    conn = psycopg2.connect("dbname='DB' user='XX' host='localhost' password='YY'")
except:
    print('I am unable to connect to the database')
print(conn)
cur = conn.cursor()
for table in tables:
    table = cur.execute(sql.SQL("Select * from myschema.{}").format(sql.Identifier(table)))
But my tables are 'None' so I am doing something wrong but I can't see what.
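For reference, cursor.execute() in psycopg2 always returns None; the rows stay on the cursor until they are fetched. A minimal sketch of how the loop could collect them, assuming the conn and tables objects above:
results = {}
for table in tables:
    cur.execute(sql.SQL("Select * from myschema.{}").format(sql.Identifier(table)))
    results[table] = cur.fetchall()  # execute() returns None; fetchall() returns the rows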

SQL Alchemy Parametrized Query, binding table name as parameter gives error

I am using a parametrized query utilizing the text() object in SQLAlchemy and am getting different results.
Working example:
import sqlalchemy as sqlal
from sqlalchemy.sql import text
db_table = 'Cars'
id_cars = 8
query = text("""SELECT *
FROM Cars
WHERE idCars = :p2
""")
self.engine.execute(query, {'p2': id_cars})
Example that produces sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax"):
import sqlalchemy as sqlal
from sqlalchemy.sql import text
db_table = 'Cars'
id_cars = 8
query = text("""SELECT *
FROM :p1
WHERE idCars = :p2
""")
self.engine.execute(query, {'p1': db_table, 'p2': id_cars})
Any idea on how I can run the query with a dynamic table name that is also protected from SQL injection?
I use PostgreSQL and the psycopg2 backend. I was able to do it using:
import sqlalchemy
from psycopg2 import sql

connection: sqlalchemy.engine.Connection
connection.connection.cursor().execute(
    sql.SQL('SELECT * FROM {} where idCars = %s').format(sql.Identifier(db_table)),
    (id_cars, )
)
For PyMySQL it's not supported.
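If the rows are actually needed, the same raw DBAPI cursor can be kept and read afterwards; a sketch assuming the connection, db_table and id_cars from above:
cur = connection.connection.cursor()
cur.execute(
    sql.SQL('SELECT * FROM {} where idCars = %s').format(sql.Identifier(db_table)),
    (id_cars, )
)
rows = cur.fetchall()  # execute() itself returns None; the rows come from the cursor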
You could just use the benefits of writing it in Python.
Libraries to use:
import sqlalchemy
from sqlalchemy import create_engine, MetaData, Table, func, event
from sqlalchemy.sql import text
from urllib.parse import quote_plus
Connection (which I did not see in your example; here a connection to SQL Azure):
params = quote_plus(r'...')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine_azure = create_engine(conn_str, echo=True)
Your example:
db_table = 'Cars'
id_cars = 8
query = text('SELECT * FROM ' + db_table + ' WHERE idCars = ' + str(id_cars))
connection = engine_azure.connect()
connection.execute(query)
connection.close()
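Since the question specifically asks for protection from SQL injection, note that plain string concatenation offers none. One possible guard, sketched here under the assumption of the engine_azure object above (the ValueError is just illustrative), is to whitelist the table name against the tables the engine actually knows and bind the value:
from sqlalchemy import inspect

known_tables = inspect(engine_azure).get_table_names()
if db_table not in known_tables:
    raise ValueError('Unknown table: {}'.format(db_table))

query = text('SELECT * FROM ' + db_table + ' WHERE idCars = :id')
with engine_azure.connect() as connection:
    rows = connection.execute(query, {'id': id_cars}).fetchall()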

How to debug cursor.execute (psycopg2) is NoneType

I am trying to run SQL against my Postgres DB.
The connection object I got through:
import psycopg2
conn_string = "host='localhost' port='5432' dbname='postgres' user='postgres' password='mysecretpassword'"
conn = psycopg2.connect(conn_string)
seems to be OK
result = cursor.execute(
"""
select
*
from
planet_osm_point limit 10
""")
Result is NoneType, so something must be wrong?
What have I done wrong? How can I debug this?
cursor.execute() only executes the query, it does not fetch any data. In order to receive data, you will need to call cursor.fetchall() or cursor.fetchone().
import psycopg2

conn_string = "host='localhost' port='5432' dbname='postgres' user='postgres' password='mysecretpassword'"
conn = psycopg2.connect(conn_string)
cursor = conn.cursor()
cursor.execute(
    """
    select *
    from planet_osm_point
    limit 10
    """)
result = cursor.fetchall()
