Python psycopg2 error when changing environment to OS X

I have this error when I perform the following task:
results = db1.executeSelectCommand(siteSql, (),)
TypeError: unbound method executeSelectCommand() must be called with dbConnn instance as first argument (got str instance instead)
My code is as follows:
class dbConnn:
    db_con = None
    execfile("/Users/usera/Documents/workspace/testing/src/db/db_config.py")

    def executeSelectCommand(self, sql, ip):
        # psycopg connection here.
I use this class here:
from db import dbConnections
db1 = dbConnections.dbConnn
siteSql = 'select post_content from post_content_ss order by RANDOM() limit 500' #order by year,month ASC'
results = db1.executeSelectCommand(siteSql, (),)
On Windows there doesn't seem to be a problem with this. God, it must be something really elementary, but I can't find it.

db1 = dbConnections.dbConnn
Here you assign the class dbConnn itself to the variable db1. You probably wanted to create a new instance instead:
db1 = dbConnections.dbConnn()
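With an actual instance bound to db1, executeSelectCommand then receives it implicitly as self. A minimal sketch of the corrected usage, reusing the question's names:
db1 = dbConnections.dbConnn()  # note the parentheses: this creates an instance
results = db1.executeSelectCommand(siteSql, ())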

How can I call stored procedures of SQL Server with SQLAlchemy?

Engines and Connections have an execute() method you can use for arbitrary SQL statements, and so do Sessions. For example:
results = sess.execute('myproc ?, ?', [param1, param2])
You can use outparam() to create output parameters if you need to (or, for bind parameters, use bindparam() with the isoutparam=True option).
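For completeness, here is a rough sketch of the outparam() route. As far as I know this is mainly wired up for Oracle (cx_Oracle); the procedure name my_proc and the parameters inval/outval are made up for illustration:
from sqlalchemy import create_engine, text, outparam, Integer

engine = create_engine("oracle+cx_oracle://user:pass@dsn")  # placeholder URL
# declare :outval as an output bind parameter on the textual statement
stmt = text("BEGIN my_proc(:inval, :outval); END;").bindparams(
    outparam("outval", type_=Integer))
with engine.connect() as conn:
    result = conn.execute(stmt, {"inval": 5})
    print(result.out_parameters["outval"])  # value written by the procedure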
Context: I use Flask-SQLAlchemy with MySQL and without ORM mapping. Usually, I use:
# in the init method
_db = SQLAlchemy(app)
# ... somewhere in my code ...
_db.session.execute(query)
Calling stored procedures is not supported out of the box: callproc is not generic, but specific to the MySQL connector.
For stored procedures without out params, it is possible to execute a query like
_db.session.execute(sqlalchemy.text("CALL my_proc(:param)"), {'param': 'something'})
as usual. Things get more complicated when you have out params...
One way to use out params is to access the underlying connector through engine.raw_connection(). For example:
conn = _db.engine.raw_connection()
# do the call. The actual parameter does not matter, could be ['lala'] as well
results = conn.cursor().callproc('my_proc_with_one_out_param', [0])
conn.close() # commit
print(results) # will print (<out param result>)
This is nice since we are able to access the out parameter, BUT this connection is not managed by the flask session. This means that it won't be committed/aborted as with the other managed queries... (problematic only if your procedure has side-effect).
Finally, I ended up doing this:
# do the call and store the result in a local MySQL variable;
# the name does not matter, as long as it is prefixed by @
_db.session.execute('CALL my_proc_with_one_out_param(@out)')
# do another query to get back the result
result = _db.session.execute('SELECT @out').fetchone()
The result will be a tuple with one value: the out param. This is not ideal, but the least dangerous: if another query fails during the session, the procedure call will be aborted (rollback) as well.
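If you need this in more than one place, the two steps can be wrapped in a small helper. This is only a sketch, assuming the same Flask-SQLAlchemy _db handle as above and a trusted procedure name:
def call_proc_with_one_out_param(proc_name):
    # run the CALL, storing the out param in a MySQL user variable...
    _db.session.execute('CALL {}(@out)'.format(proc_name))
    # ...then read the variable back within the same session
    return _db.session.execute('SELECT @out').fetchone()[0]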
Just execute a procedure object created with func:
from sqlalchemy import create_engine, func
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://', echo=True)
print(engine.execute(func.upper('abc')).scalar())  # using the engine

session = sessionmaker(bind=engine)()
print(session.execute(func.upper('abc')).scalar())  # using the session
The easiest way to call a stored procedure in MySQL using SQLAlchemy is the callproc method of a cursor obtained from Engine.raw_connection(). callproc requires the procedure name and the parameters for the stored procedure being called.
def call_procedure(function_name, params):
    connection = cloudsql.Engine.raw_connection()
    try:
        cursor = connection.cursor()
        cursor.callproc(function_name, params)
        results = list(cursor.fetchall())
        cursor.close()
        connection.commit()
        return results
    finally:
        connection.close()
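A hypothetical call, with a placeholder procedure name and parameter list:
results = call_procedure('my_stored_proc', [42, 'abc'])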
Supposing you already have a session created with sessionmaker(), you can use the following function:
def exec_procedure(session, proc_name, params):
    sql_params = ",".join(["@{0}={1}".format(name, value) for name, value in params.items()])
    sql_string = """
        DECLARE @return_value int;
        EXEC @return_value = [dbo].[{proc_name}] {params};
        SELECT 'Return Value' = @return_value;
    """.format(proc_name=proc_name, params=sql_params)
    return session.execute(sql_string).fetchall()
Now you can execute your stored procedure 'MyProc' with parameters simply like this:
params = {
    'Foo': foo_value,
    'Bar': bar_value
}
exec_procedure(session, 'MyProc', params)
Out of desperate need for a project of mine, I wrote a function that handles Stored Procedure calls.
Here you go:
import sqlalchemy as sql


def execute_db_store_procedure(database, types, sql_store_procedure, *sp_args):
    """Execute the stored procedure and return the response table.

    Attention: No injection checking!!!
    Only works with the CALL syntax as of yet (TODO: other databases).

    Attributes:
        database -- the database
        types -- tuple of strings of SQLAlchemy type names.
            Each type describes the type of the argument
            with the same number.
            List: http://docs.sqlalchemy.org/en/rel_0_7/core/types.html
        sql_store_procedure -- string of the stored procedure to be executed
        sp_args -- arguments passed to the stored procedure
    """
    if not len(types) == len(sp_args):
        raise ValueError("types tuple must be the length of the sp args.")
    # Construct the type list for the given types
    # See
    # http://docs.sqlalchemy.org/en/latest/core/sqlelement.html?highlight=expression.text#sqlalchemy.sql.expression.text
    # sp_args (and their types) are numbered from 0 to len(sp_args)-1
    type_list = [sql.sql.expression.bindparam(
                     str(no), type_=getattr(sql.types, typ)())
                 for no, typ in zip(range(len(types)), types)]
    try:
        # Adapts to the number of arguments given to the function
        sp_call = sql.text("CALL `%s`(%s)" % (
            sql_store_procedure,
            ", ".join([":%s" % n for n in range(len(sp_args))])),
            bindparams=type_list)
        # raise ValueError("%s\n%s" % (sp_call, type_list))
        with database.engine.begin() as connection:
            return connection.execute(
                sp_call,
                # Don't do this at home, kids...
                **dict((str(no), arg)
                       for (no, arg) in zip(range(len(sp_args)), sp_args)))
    except sql.exc.DatabaseError:
        raise
It works with the CALL syntax, so MySQL should work as expected. MSSQL uses EXEC instead of CALL and a slightly different syntax, I guess. So making it server-agnostic is up to you, but it shouldn't be too hard.
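As a rough, untested sketch, the statement construction above might change like this for SQL Server, with the rest of the function staying the same:
# MSSQL variant: EXEC instead of CALL; parameters are still passed as binds
sp_call = sql.text(
    "EXEC [dbo].[%s] %s" % (
        sql_store_procedure,
        ", ".join(":%s" % n for n in range(len(sp_args)))),
    bindparams=type_list)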
Another workaround:
import sqlalchemy
import pandas as pd

# param1..param3 are ordinary Python variables; jdbc holds the connection URL
query = f'call Procedure ("{param1}", "{param2}", "{param3}")'
sqlEngine = sqlalchemy.create_engine(jdbc)
conn = sqlEngine.connect()
df = pd.read_sql(query, conn, index_col=None)
I had a stored procedure for PostgreSQL with the following signature:
CREATE OR REPLACE PROCEDURE inc_run_count(
    _host text,
    _org text,
    _repo text,
    _rule_ids text[]
)
After quite a bit of trial and error, I found that this is how to call the procedure from Python 3:
from typing import List

def update_db_rule_count(rule_ids: List[str], host: str, org: str, repo: str):
    param_dict = {"host": host, "org": org, "repo": repo,
                  "rule_ids": f'{{ {",".join(rule_ids)} }}'}
    # AnalyticsSession is presumably the project's session factory
    with AnalyticsSession() as analytics_db:
        analytics_db.execute('call inc_run_count(:host, :org, :repo, :rule_ids)', param_dict)
        analytics_db.commit()

Python - Instantiate object from SQLite cursor

I'm running into an error when I try to instantiate an object from a SQLite cursor, and despite my research I couldn't find a solution.
Premise: I cannot use SQLAlchemy or anything of that sort.
Assumption: The database (SQLite) works, it contains a table named table_cars, and the table is populated with data in its single column: name.
So, I have a class, let's say:
class Car():
    def __init__(self, name):
        self.name = name

    @classmethod
    def from_cursor(cls, c):
        car = cls(c(0))
        # this line breaks when called from the function below.
And I also have a db module, with the following function:
def get_cars_from_db():
    sql = 'SELECT * FROM table_cars;'
    conn = get_conn()
    cur = conn.cursor()
    cur.execute(sql)
    data = cur.fetchall()
    # at this point, if I print the cursor, I can see all the data; so far so good.
    cars = [Car.from_cursor(c) for c in data]
    # the line above causes the code to break
    return cars
The code breaks with the following error:
TypeError: 'tuple' object is not callable
What am I doing wrong here?
You can use cls(c[0]) or cls(*c) to unpack the tuple into function arguments.
It's also worth specifying an exact order of columns in your query:
select name from table_cars
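Put together, the classmethod could look like this (a minimal corrected version of the question's code, with the argument renamed to row for clarity):
class Car:
    def __init__(self, name):
        self.name = name

    @classmethod
    def from_cursor(cls, row):
        # row is a tuple from fetchall(); index it rather than calling it
        return cls(row[0])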

Why Does SQLAlchemy Label Columns in Query

When I make a query in SQLAlchemy, I noticed that the queries use the AS keyword for each column. It sets the alias_name = column_name for every column.
For example, if I run the command print(session.query(DefaultLog)) (where DefaultLog is my table object), it returns:
SELECT default_log.id AS default_log_id, default_log.msg AS default_log_msg, default_log.logger_time AS default_log_logger_time, default_log.logger_line AS default_log_logger_line, default_log.logger_filepath AS default_log_logger_filepath, default_log.level AS default_log_level, default_log.logger_name AS default_log_logger_name, default_log.logger_method AS default_log_logger_method, default_log.hostname AS default_log_hostname
FROM default_log
Why does it use an alias = original name? Is there some way I can disable this behavior?
Thank you in advance!
Query.statement:
The full SELECT statement represented by this Query.
The statement by default will not have disambiguating labels applied
to the construct unless with_labels(True) is called first.
Using this model:
class DefaultLog(Base):
    id = sa.Column(sa.Integer, primary_key=True)
    msg = sa.Column(sa.String(128))
    logger_time = sa.Column(sa.DateTime)
    logger_line = sa.Column(sa.Integer)
print(session.query(DefaultLog).statement) shows:
SELECT defaultlog.id, defaultlog.msg, defaultlog.logger_time, defaultlog.logger_line
FROM defaultlog
print(session.query(DefaultLog).with_labels().statement) shows:
SELECT defaultlog.id AS defaultlog_id, defaultlog.msg AS defaultlog_msg, defaultlog.logger_time AS defaultlog_logger_time, defaultlog.logger_line AS defaultlog_logger_line
FROM defaultlog
You asked:
Why does it use an alias = original name?
From Query.with_labels docs:
...this is commonly used to disambiguate columns from multiple tables which have the same name.
So if you want to issue a single query that calls upon multiple tables, there is nothing stopping those tables having columns that share the same name.
Is there some way I can disable this behavior?
Also from the Query.with_labels docs:
When the Query actually issues SQL to load rows, it always uses column
labeling.
All of the methods that retrieve rows (get(), one(), one_or_none(), all() and iterating over the Query) route through the Query.__iter__() method:
def __iter__(self):
    context = self._compile_context()
    context.statement.use_labels = True
    if self._autoflush and not self._populate_existing:
        self.session._autoflush()
    return self._execute_and_instances(context)
... where this line hard codes the label usage: context.statement.use_labels = True. So it is "baked in" and can't be disabled.
You can execute the statement without labels:
session.execute(session.query(DefaultLog).statement)
... but that takes the ORM out of the equation.
It is possible to hack the SQLAlchemy Query class to not add labels, but be aware that this will break when a table is used twice in the query, for example in a self join or a join through another table.
from sqlalchemy.orm import Query


class MyQuery(Query):
    def __iter__(self):
        """Patch to disable auto labels"""
        context = self._compile_context(labels=False)
        context.statement.use_labels = False
        if self._autoflush and not self._populate_existing:
            self.session._autoflush()
        return self._execute_and_instances(context)
And then use it as in mtth's answer:
sessionmaker(bind=engine, query_cls=MyQuery)
Printing an SQLAlchemy query is tricky and produces output that is not human-friendly: not only the columns but also the bind params end up in odd places.
Here's how to do it correctly:
qry = session.query(SomeTable)
compiled = qry.statement.compile(dialect=session.bind.dialect, compile_kwargs={"literal_binds": True})
print(compiled)
Here's how to fix it for all your future work:
from sqlalchemy.orm import Query


class MyQuery(Query):
    def __str__(self):
        dialect = self.session.bind.dialect
        compiled = self.statement.compile(dialect=dialect, compile_kwargs={"literal_binds": True})
        return str(compiled)
To use:
session = sessionmaker(bind=engine, query_cls=MyQuery)()

Celery and SQLAlchemy - This result object does not return rows. It has been closed automatically

I have a Celery project connected to a MySQL database. One of the tables is defined like this:
class MyQueues(Base):
    __tablename__ = 'accepted_queues'

    id = sa.Column(sa.Integer, primary_key=True)
    customer = sa.Column(sa.String(length=50), nullable=False)
    accepted = sa.Column(sa.Boolean, default=True, nullable=False)
    denied = sa.Column(sa.Boolean, default=True, nullable=False)
Also, in the settings I have
THREADS = 4
And I am stuck in a function in code.py:
def load_accepted_queues(session, mode=None):
    # make query
    pool = session.query(MyQueues.customer, MyQueues.accepted, MyQueues.denied)

    # filter conditions
    if (mode == 'XXX'):
        pool = pool.filter_by(accepted=1)
    elif (mode == 'YYY'):
        pool = pool.filter_by(denied=1)
    elif (mode is None):
        pool = pool.filter(
            sa.or_(MyQueues.accepted == 1, MyQueues.denied == 1)
        )

    # generate a dictionary with data
    for i in pool:  # <---------- line 90 in the error
        l.update({i.customer: {'customer': i.customer, 'accepted': i.accepted, 'denied': i.denied}})
When running this I get an error:
[20130626 115343] Traceback (most recent call last):
File "/home/me/code/processing/helpers.py", line 129, in wrapper
ret_value = func(session, *args, **kwargs)
File "/home/me/code/processing/test.py", line 90, in load_accepted_queues
for i in pool: #generate a dictionary with data
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2341, in instances
fetch = cursor.fetchall()
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3205, in fetchall
l = self.process_rows(self._fetchall_impl())
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3174, in _fetchall_impl
self._non_result()
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3179, in _non_result
"This result object does not return rows. "
ResourceClosedError: This result object does not return rows. It has been closed automatically
So mainly it is the part
ResourceClosedError: This result object does not return rows. It has been closed automatically
and sometimes also this error:
DBAPIError: (Error) (, AssertionError('Result length not requested
length:\nExpected=1. Actual=0. Position: 21. Data Length: 21',))
'SELECT accepted_queues.customer AS accepted_queues_customer,
accepted_queues.accepted AS accepted_queues_accepted,
accepted_queues.denied AS accepted_queues_denied \nFROM
accepted_queues \nWHERE accepted_queues.accepted = %s OR
accepted_queues.denied = %s' (1, 1)
I cannot reproduce the error reliably, as it normally happens when processing a lot of data. I tried changing THREADS = 4 to 1 and the errors disappeared. Anyway, that is not a solution, as I need the number of threads to stay at 4.
Also, I am confused about the need to use
for i in pool: #<---------- line 90 in the error
or
for i in pool.all(): #<---------- line 90 in the error
and could not find a proper explanation of it.
All together: any advice to get around these difficulties?

All together: any advice to get around these difficulties?
Yes. You absolutely cannot use a Session (or any objects associated with that Session), or a Connection, in more than one thread simultaneously, especially with MySQL-Python, whose DBAPI connections are very thread-unsafe. You must organize your application so that each thread deals with its own dedicated MySQL-Python connection (and therefore SQLAlchemy Connection / Session / objects associated with that Session), with no leakage to any other thread.
Edit: alternatively, you can use mutexes to limit access to the Session/Connection/DBAPI connection to just one of those threads at a time, though this is less common because the high degree of locking needed tends to defeat the purpose of using multiple threads in the first place.
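One common way to give each thread its own Session is scoped_session. A minimal sketch, with a placeholder connection URL:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('mysql://user:pass@localhost/dbname')  # placeholder
Session = scoped_session(sessionmaker(bind=engine))

def worker():
    session = Session()  # each thread gets its own Session object
    try:
        # ... run this thread's queries on `session` only ...
        session.commit()
    finally:
        Session.remove()  # discard the thread-local Session when the thread is done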
I got the same error while querying a SQL Server stored procedure using SQLAlchemy.
In my case, adding SET NOCOUNT ON to the stored procedure fixed the problem.
ALTER PROCEDURE your_procedure_name
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for your procedure here
    SELECT *
    FROM your_table_name;
END;
Check out this article for more details
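For reference, once SET NOCOUNT ON is in place, the call from the Python side can be as simple as this sketch (the connection URL and procedure name are placeholders):
from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://user:pass@my_dsn')  # placeholder URL
with engine.connect() as conn:
    rows = conn.execute(text('EXEC your_procedure_name')).fetchall()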
I was using an INSERT statement. Adding
RETURNING id
at the end of the query worked for me, as per this issue.
That being said, it's a pretty odd solution; maybe it's something that was fixed in later versions of SQLAlchemy (I am using 1.4.39).
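A minimal sketch of that pattern with a textual INSERT, assuming a database that supports RETURNING (e.g. PostgreSQL); the table and column names are made up:
from sqlalchemy import text

result = session.execute(
    text('INSERT INTO my_table (name) VALUES (:name) RETURNING id'),
    {'name': 'example'})
new_id = result.scalar()  # the id produced by the INSERT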
This error occurred for me when I used a Python variable and passed it in an UPDATE statement using pandas pd.read_sql().
Solution:
I simply used mycursor.execute() instead of pd.read_sql()
import mysql.connector and from sqlalchemy import create_engine
Before:
pd.read_sql("UPDATE table SET column = 1 WHERE column = '%s'" % variable, dbConnection)
After:
mycursor.execute("UPDATE table SET column = 1 WHERE column = '%s'" % variable)
Full code:
import mysql.connector
from sqlalchemy import create_engine
import pandas as pd

# Database connection setup
sqlEngine = create_engine('mysql+pymysql://root:root@localhost/db name')
dbConnection = sqlEngine.connect()

db = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="db name")
mycursor = db.cursor()

variable = "Alex"
mycursor.execute("UPDATE table SET column = 1 WHERE column = '%s'" % variable)
For me, I got this error when I forgot to pass the table class name to the select() function, i.e. query = select().where(Assessment.created_by == assessment.created_by). I only had to fix it by adding the class of the table I want to get entries from, like so:
query = select(Assessment).where(
    Assessment.created_by == assessment.created_by)

Python call constructor in a member function

Let's take for example this class, which extends MySQLdb's connection object.
class DBHandler(mysql.connections.Connection):
    def __init__(self, cursor=None):
        if cursor == None:
            cursor = 'DictCursor'
        super(DBHandler, self).__init__(host=db_host,
                                        user=db_user,
                                        passwd=db_pass,
                                        db=db,
                                        cursorclass=getattr(mysql.cursors, cursor))

    def getall(self, q, params=None):
        try:
            cur = self.cursor()
            cur.execute(q, params)
            res = cur.fetchall()
            return res
        except mysql.OperationalError:
            # this is the line in question
            pass

    def execute(self, q, params):
        cur = self.cursor()
        cur.execute(q, params)
        self.commit()
        return cur.lastrowid
This thing is largely a convenience to get simpler access to common required queries.
On the line marked with the comment, is it possible in Python to re-call the object's constructor, even though this is a member function? I use this example because it would effectively re-establish the connection in the event it is dropped on timeout before a query is run.
I'm aware of MySQLdb's ping() method; this is really just a question of capability. In Python, is it possible to call a constructor from within a member function called on an instance, to re-initialize that instance? Thanks!
Yes, you can, although it would be preferable to extract your initialization code into another method (a def init(self):).
This is because __init__ is not really the constructor of the object; it is more the "initializer" of your instance. The real constructor is the __new__ method, which is responsible for creating the instance.
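Applied to the question's class, that suggestion could look roughly like this. It is only a sketch: it reuses the question's module-level settings and assumes re-running the connection setup on a dropped connection is acceptable:
class DBHandler(mysql.connections.Connection):
    def __init__(self, cursor=None):
        self.init(cursor)

    def init(self, cursor=None):
        # extracted initialization, callable again from member functions
        super(DBHandler, self).__init__(
            host=db_host, user=db_user, passwd=db_pass, db=db,
            cursorclass=getattr(mysql.cursors, cursor or 'DictCursor'))

    def getall(self, q, params=None):
        try:
            cur = self.cursor()
            cur.execute(q, params)
            return cur.fetchall()
        except mysql.OperationalError:
            self.init()  # re-run the initializer to reconnect, then retry once
            cur = self.cursor()
            cur.execute(q, params)
            return cur.fetchall()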
