PostgreSQL problem in Django - python

I have a Django application and I'm using PostgreSQL. I try to execute the following line in one of my tests:
print BillingUser.objects.all()
And I get the following error:
"current transaction is aborted, commands ignored until end of transaction block."
My PostgreSQL log:
ERROR: duplicate key value violates unique constraint "billing_rental_wallet_id_key"
STATEMENT: INSERT INTO "billing_rental" ("wallet_id", "item_id", "end_time", "time", "value", "index", "info") VALUES (61, 230, E'2010-02-11 11:01:01.092336', E'2010-02-01 11:01:01.092336', 10.0, 1, NULL)
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: INSERT INTO "billing_timeable" ("creation_date", "update_date") VALUES (E'2010-02-01 11:01:01.093504', E'2010-02-01 11:01:01.093531')
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT "billing_timeable"."id", "billing_timeable"."creation_date", "billing_timeable"."update_date", "billing_billinguser"."timeable_ptr_id", "billing_billinguser"."username", "billing_billinguser"."pin", "billing_billinguser"."sbox_id", "billing_billinguser"."parental_code", "billing_billinguser"."active" FROM "billing_billinguser" INNER JOIN "billing_timeable" ON ("billing_billinguser"."timeable_ptr_id" = "billing_timeable"."id") LIMIT 21
How can I fix that?
Thanks, Arshavski Alexander.

Ok... looking at the PostgreSQL log, it does look like you are doing an invalid insert that aborts the transaction... now, looking at your code, I think the problem lies here:
at lines 78-81
currency = Currency.objects.all()[2]
if not Wallet.objects.filter(user=user):
wallet = Wallet(user=user, currency=currency)
wallet.save()
You will create a wallet for the current user, but then on line 87-88 you wrote:
user.wallet.amount = 12.0
user.wallet.save()
However, since you save the wallet after retrieving the user, the user instance does not know that you have already created a wallet for it, and with a OneToOne relationship this causes the error you're seeing... I think what you should do is add a line after 81:
currency = Currency.objects.all()[2]
if not Wallet.objects.filter(user=user):
wallet = Wallet(user=user, currency=currency)
wallet.save()
user.wallet = wallet
That should solve the issue....

You insert data in some of your test functions. After the invalid insert, the DB connection is in a failed state: you need to roll back the transaction or turn transactions off completely. See the Django docs on transactions and on testing transactional behavior.
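The rollback requirement can be sketched with the stdlib sqlite3 module (standing in for psycopg2 here; PostgreSQL is stricter in that, after the failed statement, every subsequent command is rejected until you roll back):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wallet (user_id INTEGER UNIQUE)")
conn.execute("INSERT INTO wallet VALUES (1)")
conn.commit()

try:
    # Second insert violates the UNIQUE constraint, like the
    # duplicate on billing_rental_wallet_id_key in the log above.
    conn.execute("INSERT INTO wallet VALUES (1)")
except sqlite3.IntegrityError:
    # Roll back so the connection is usable again; with PostgreSQL
    # this step is mandatory before any further statement runs.
    conn.rollback()

# After the rollback the connection accepts queries again.
rows = conn.execute("SELECT user_id FROM wallet").fetchall()
```

In the Django test case above, the equivalent is rolling back (or using a TransactionTestCase) after the insert that raised the duplicate-key error.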

From the log it looks like you are trying to insert a row with a duplicate key, which throws an error, after which the rest of your code can't use the DB connection anymore. Fix that query and it should work.

Related

Unexpected behaviour in sqlalchemy query

INTRO -
In the Python snippet below, I'm querying and fetching users' results from MySQL with all(), then iterating over the result and appending an address to each user's list of related addresses; nothing special.
PROBLEM -
Until now, I have always done it like this:
address = Address(...)
users = db.query(User).all()
for user in users:
    user.addresses.append(address)
    db.add(user)
db.commit()
so why, when using the "SQL Expression Language", do I need to iterate the result this way?
(when omitting the model notation it throws "Could not locate column in row for column")
stmt = select(User).join(Country, User.country_id == Country.id).where(Country.iso_code == iso_code).outerjoin(User.addresses)
related_users: Optional[List[User]] = db.execute(stmt).all()
if related_users:
    address_in_db = self.get_or_create_address(...)
    for user in related_users:
        user.User.addresses.append(address_in_db)  # why is this not user.addresses.append(address)?
        db.add(user.User)  # same here, why not just user?
    db.commit()
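As an aside, the distinction this question runs into exists in the plain DB-API too: execute() hands back row objects, not model instances, which is why the entity has to be unwrapped first (in SQLAlchemy 1.4+, db.execute(stmt).scalars().all() does that unwrapping). A stdlib sqlite3 sketch of the same idea:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada')")
conn.commit()

# Raw execution returns a row (a plain tuple here), not a User object:
# there is no row.addresses to append to until the ORM entity is
# extracted from the row.
row = conn.execute("SELECT id, name FROM users").fetchone()
```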

Errors adding a new column to a table in MySQL with Python

I'm new to MySQL and databases in general; however, I'm having an issue when I try to add a new integer column to my table. To add the new column I'm doing this:
import mysql.connector

mydb = mysql.connector.connect(
    # host, user, password and database
)
mycursor = mydb.cursor(buffered=True)

# some stuff to get the variable domain
mycursor.execute('ALTER TABLE domainsMoreUsed ADD {} INTEGER(10)'.format(domain))  # domain is a string
but I get this error:
raise errors.get_mysql_exception(exc.errno, msg=exc.msg,
mysql.connector.errors.ProgrammingError: 1064 (42000): You
have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right
syntax to use near 'in INTEGER(10)' at line 1
I get the same error above also trying:
mycursor.execute('ALTER TABLE domainsMoreUsed ADD %s INTEGER(10)' % domain)
Instead when I use:
mycursor.execute('ALTER TABLE domainsMoreUsed ADD %s INTEGER(10)', (domain))
I get:
raise ValueError("Could not process parameters")
ValueError: Could not process parameters
I read some posts from other users about the same error, but I couldn't find what I need. I'm pretty sure the SQL syntax is correct.
I'm using MySQL 8.0 with Python 3.8.3 on Windows 10.
Thank you in advance for your help.
What is the string domain set to? The error message syntax to use near 'in INTEGER(10)' at line 1 implies it is "in", which is a reserved word in MySQL. If you want to use a reserved word as a table or column name, you need to wrap it in backticks: " ` " (left of '1' on the top row of your keyboard).
Change your query like this:
mycursor.execute('ALTER TABLE domainsMoreUsed ADD `{}` INTEGER(10)'.format(domain))
(The parameter-binding variant cannot be fixed the same way: DB-API placeholders insert quoted values, not identifiers, so the column name has to go into the SQL text itself.)
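Because the identifier has to be spliced into the SQL text, escaping any backticks embedded in it keeps that splice safe. A minimal sketch (the quote_identifier helper is made up here, not part of mysql.connector):

```python
def quote_identifier(name: str) -> str:
    """Wrap a MySQL identifier in backticks, doubling any backticks
    inside it, so reserved words like `in` become usable names."""
    return "`" + name.replace("`", "``") + "`"

# A reserved word becomes a legal column name once quoted:
sql = "ALTER TABLE domainsMoreUsed ADD {} INTEGER".format(quote_identifier("in"))
```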

SQL compilation error-- Snowsql validation using python

I am following this tutorial: https://docs.snowflake.com/en/sql-reference/functions/validate.html to try to 'return errors by query ID and save the results to a table for future reference'.
However, for a seamless transfer I don't want to keep supplying the job id manually, as that would require going to the Snowflake console, opening History, and copying the job id into the Python code.
Instead I want to use just the table name (which is a variable) together with last_query_id() to get the list of errors. Is there any way I can achieve this?
import snowflake.connector

tableName = 'F58155'
ctx = snowflake.connector.connect(
    user='*',
    password='*',
    account='*')
cs = ctx.cursor()
ctx.cursor().execute("USE DATABASE STORE_PROFILE_LANDING")
ctx.cursor().execute("USE SCHEMA PUBLIC")
try:
    ctx.cursor().execute("PUT file:///temp/data/{tableName}/* @%{tableName}".format(tableName=tableName))
except Exception:
    pass
ctx.cursor().execute("truncate table {tableName}".format(tableName=tableName))
ctx.cursor().execute("COPY INTO {tableName} ON_ERROR = 'CONTINUE' ".format(tableName=tableName,
    FIELD_OPTIONALLY_ENCLOSED_BY='""', sometimes=',', ERROR_ON_COLUMN_COUNT_MISMATCH='TRUE'))
I have tried the validate function below... it is giving me an error on this line; the error is:
"SQL compilation error:
syntax error line 1 at position 74 unexpected 'tableName'.
syntax error line 1 at position 83 unexpected '}'."
ctx.cursor().execute("create or replace table save_copy_errors as select * from table(validate({tableName},'select last_query_id()'))")
ctx.close()
The line
ctx.cursor().execute("create or replace table save_copy_errors as select * from table(validate({tableName},'select last_query_id()'))")
should be replaced with these two:
job_id = ctx.cursor().execute("select last_query_id()").fetchone()[0]
ctx.cursor().execute(f"create or replace table save_copy_errors as select * from table(validate({tableName}, job_id=>'{job_id}'))")
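The string-building is where the original attempt went wrong: the non-f-string passed the literal text {tableName} to Snowflake. The statement the fixed code produces can be checked offline; a sketch (the job id value below is made up; in reality it comes from the select last_query_id() call):

```python
tableName = "F58155"
job_id = "01a2b3c4-0000-0000-0000-000000000000"  # assumed example value

# The f-string interpolates both the table name and the fetched job id,
# so Snowflake receives real values rather than the text "{tableName}".
stmt = ("create or replace table save_copy_errors as select * from "
        f"table(validate({tableName}, job_id=>'{job_id}'))")
```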

Not able to pass datetime object into postgresql using psycopg2

I need to pass some insert data into Postgres; the data also contains a timestamp. I am using psycopg2 for this.
I got the same error as the one in this question and tried to follow the answer there: Passing a datetime into psycopg2
My code which doesn't work:
recv_data = {"datetime": datetime.datetime(2019, 12, 5, 12, 56, 34, 617607),
             "temperature": 40, "humidity": 80}
insert_stmt = "INSERT INTO temp_humidity (temperature,humidity,datetime) VALUES (%s,%s,%s)"
data = (recv_data["temperature"], recv_data["humidity"], recv_data["datetime"])
print(insert_stmt)
cursor.execute(insert_stmt, data)
connection.commit()
ERROR:
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: INSERT INTO temp_humidity (temperature,humidity,datetime) VALUES (42,79,'2019-12-05T05:55:45.135111'::timestamp)
Any solution would be appreciated.
The timestamp seems to be OK,
SELECT '2019-12-05T05:55:45.135111'::timestamp;
timestamp
----------------------------
2019-12-05 05:55:45.135111
(1 row)
The problem seems to be elsewhere, maybe in a constraint check? Note that "current transaction is aborted" is a follow-on error: the statement that originally failed appears earlier in the log.
Can you obtain the precise error message from the psycopg2 exception or the PostgreSQL logs?

Celery and SQLAlchemy - This result object does not return rows. It has been closed automatically

I have a celery project connected to a MySQL databases. One of the tables is defined like this:
class MyQueues(Base):
    __tablename__ = 'accepted_queues'

    id = sa.Column(sa.Integer, primary_key=True)
    customer = sa.Column(sa.String(length=50), nullable=False)
    accepted = sa.Column(sa.Boolean, default=True, nullable=False)
    denied = sa.Column(sa.Boolean, default=True, nullable=False)
Also, in the settings I have
THREADS = 4
And I am stuck in a function in code.py:
def load_accepted_queues(session, mode=None):
    # make query
    pool = session.query(MyQueues.customer, MyQueues.accepted, MyQueues.denied)

    # filter conditions
    if mode == 'XXX':
        pool = pool.filter_by(accepted=1)
    elif mode == 'YYY':
        pool = pool.filter_by(denied=1)
    elif mode is None:
        pool = pool.filter(
            sa.or_(MyQueues.accepted == 1, MyQueues.denied == 1)
        )

    # generate a dictionary with data
    for i in pool:  # <---------- line 90 in the error
        l.update({i.customer: {'customer': i.customer, 'accepted': i.accepted, 'denied': i.denied}})
When running this I get an error:
[20130626 115343] Traceback (most recent call last):
File "/home/me/code/processing/helpers.py", line 129, in wrapper
ret_value = func(session, *args, **kwargs)
File "/home/me/code/processing/test.py", line 90, in load_accepted_queues
for i in pool: #generate a dictionary with data
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2341, in instances
fetch = cursor.fetchall()
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3205, in fetchall
l = self.process_rows(self._fetchall_impl())
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3174, in _fetchall_impl
self._non_result()
File "/home/me/envs/me/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 3179, in _non_result
"This result object does not return rows. "
ResourceClosedError: This result object does not return rows. It has been closed automatically
So mainly it is the part
ResourceClosedError: This result object does not return rows. It has been closed automatically
and sometimes also this error:
DBAPIError: (Error) (, AssertionError('Result length not requested
length:\nExpected=1. Actual=0. Position: 21. Data Length: 21',))
'SELECT accepted_queues.customer AS accepted_queues_customer,
accepted_queues.accepted AS accepted_queues_accepted,
accepted_queues.denied AS accepted_queues_denied \nFROM
accepted_queues \nWHERE accepted_queues.accepted = %s OR
accepted_queues.denied = %s' (1, 1)
I cannot reproduce the error reliably, as it normally happens when processing a lot of data. I tried changing THREADS = 4 to 1 and the errors disappeared. Still, that is not a solution, as I need to keep the number of threads at 4.
Also, I am confused about whether I need to use
for i in pool: #<---------- line 90 in the error
or
for i in pool.all(): #<---------- line 90 in the error
and could not find a proper explanation of the difference.
All together: any advice on how to get past these difficulties?
All together: any advice on how to get past these difficulties?
Yes. You absolutely cannot use a Session (or any objects associated with that Session), or a Connection, in more than one thread simultaneously, especially with MySQL-Python, whose DBAPI connections are very thread-unsafe. You must organize your application so that each thread deals with its own dedicated MySQL-Python connection (and therefore its own SQLAlchemy Connection / Session / objects associated with that Session), with no leakage to any other thread.
Edit: alternatively, you can make use of mutexes to limit access to the Session/Connection/DBAPI connection to just one of those threads at a time, though this is less common because the high degree of locking needed tends to defeat the purpose of using multiple threads in the first place.
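One common way to organize "one dedicated connection per thread" is a threading.local; a minimal stdlib sketch, with sqlite3 standing in for MySQL-Python:

```python
import sqlite3
import threading

_local = threading.local()

def get_conn():
    # Each thread lazily opens its own connection on first use;
    # nothing is ever shared between threads.
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(":memory:")
    return _local.conn

results = []

def worker():
    conn = get_conn()
    # Each thread sees only its own private in-memory database.
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    results.append(conn.execute("SELECT sum(x) FROM t").fetchone()[0])

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With SQLAlchemy the same idea is packaged as scoped_session, which keeps one Session per thread behind a shared facade.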
I got the same error while making a query to SQL-Server procedure using SQLAlchemy.
In my case, adding SET NOCOUNT ON to the stored procedure fixed the problem.
ALTER PROCEDURE your_procedure_name
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for your procedure here
    SELECT *
    FROM your_table_name;
END;
Check out this article for more details
I was using an INSERT statement. Adding
RETURNING id
at the end of the query worked for me, as per this issue.
That being said, it's a pretty odd workaround; maybe this is something fixed in later versions of SQLAlchemy. I am using 1.4.39.
This error occurred for me when I ran an UPDATE statement built from a Python variable through pandas' pd.read_sql(), which expects a query that returns rows.
Solution:
I simply used mycursor.execute() (from mysql.connector) instead of pd.read_sql().
Before:
pd.read_sql("UPDATE table SET column = 1 WHERE column = '%s'" % variable, dbConnection)
After:
mycursor.execute("UPDATE table SET column = 1 WHERE column = '%s'" % variable)
Full code:
import mysql.connector
from sqlalchemy import create_engine
import pandas as pd

# Database Connection Setup
sqlEngine = create_engine('mysql+pymysql://root:root@localhost/db name')
dbConnection = sqlEngine.connect()

db = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="db name")
mycursor = db.cursor()

variable = "Alex"
mycursor.execute("UPDATE table SET column = 1 WHERE column = '%s'" % variable)
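A side note on the '%s' % variable interpolation: real DB-API parameter binding lets the driver do the quoting and avoids SQL injection. A sketch with stdlib sqlite3 (which uses ? placeholders, whereas mysql.connector uses %s), with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (flag INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (0, 'Alex')")
conn.commit()

variable = "Alex"
# Let the driver bind the value instead of %-formatting it into the SQL.
conn.execute("UPDATE users SET flag = 1 WHERE name = ?", (variable,))
conn.commit()
flag = conn.execute("SELECT flag FROM users WHERE name = ?", (variable,)).fetchone()[0]
```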
For me, this error occurred when I forgot to pass the table class name to the select() query: query = select().where(Assessment.created_by == assessment.created_by). I only had to fix this by adding the class name of the table I want to get entries from, like so:
query = select(Assessment).where(
Assessment.created_by == assessment.created_by)
