I need to pass some insert data into postgres which also contains a timestamp. I am using psycopg2 for the same.
I got the same error as the one asked about in this question, and tried to follow the answer there: Passing a datetime into psycopg2
My code which doesn't work:
recv_data = {"datetime": datetime.datetime(2019, 12, 5, 12, 56, 34, 617607),
             "temperature": 40, "humidity": 80}
insert_stmt = "INSERT INTO temp_humidity (temperature,humidity,datetime) VALUES (%s,%s,%s)"
data = (recv_data["temperature"], recv_data["humidity"], recv_data["datetime"])
print(insert_stmt)
cursor.execute(insert_stmt, data)
connection.commit()
ERROR:
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: INSERT INTO temp_humidity (temperature,humidity,datetime) VALUES (42,79,'2019-12-05T05:55:45.135111'::timestamp)
Any solution would be appreciated.
The timestamp seems to be OK,
SELECT '2019-12-05T05:55:45.135111'::timestamp;
timestamp
----------------------------
2019-12-05 05:55:45.135111
(1 row)
The problem seems to be elsewhere, maybe in a constraint check?
Can you obtain the precise error message from the psycopg2 exception or the PostgreSQL logs?
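For reference, "current transaction is aborted" means an earlier statement in the same transaction already failed; the INSERT shown is merely being ignored. A minimal sketch of how to surface the underlying error from the psycopg2 exception (reusing the names from your snippet; the diag attribute needs psycopg2 2.5+):
import psycopg2

try:
    cursor.execute(insert_stmt, data)
    connection.commit()
except psycopg2.Error as e:
    connection.rollback()            # clear the aborted transaction state
    print(e.pgcode)                  # SQLSTATE code of the real failure
    print(e.pgerror)                 # full server error message
    print(e.diag.message_primary)    # primary message only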
I am following this tutorial: https://docs.snowflake.com/en/sql-reference/functions/validate.html
to try to 'return errors by query ID and save the results to a table for future reference'.
However, for a seamless transfer I don't want to have to supply the job ID every time, since that would require going to the Snowflake console, opening History, finding the job ID, and copying it into the Python code.
Instead I want to use just the table name (which is a variable) together with last_query_id() to get the list of errors. Is there any way I can achieve this?
import snowflake.connector
tableName='F58155'
ctx = snowflake.connector.connect(
    user='*',
    password='*',
    account='*')
cs = ctx.cursor()
ctx.cursor().execute("USE DATABASE STORE_PROFILE_LANDING")
ctx.cursor().execute("USE SCHEMA PUBLIC")
try:
    ctx.cursor().execute("PUT file:///temp/data/{tableName}/* @%{tableName}".format(tableName=tableName))
except Exception:
    pass
ctx.cursor().execute("truncate table {tableName}".format(tableName=tableName))
ctx.cursor().execute("COPY INTO {tableName} ON_ERROR = 'CONTINUE' ".format(
    tableName=tableName, FIELD_OPTIONALLY_ENCLOSED_BY='""', sometimes=',',
    ERROR_ON_COLUMN_COUNT_MISMATCH='TRUE'))
I have tried the validate function below; it is giving me an error on this line.
The error is: "SQL compilation error:
syntax error line 1 at position 74 unexpected 'tableName'.
syntax error line 1 at position 83 unexpected '}'."
ctx.cursor().execute("create or replace table save_copy_errors as select * from
table(validate({tableName},'select last_query_id()'))");
ctx.close()
The line
ctx.cursor().execute("create or replace table save_copy_errors as select * from
table(validate({tableName},'select last_query_id()'))");
should be replaced with these two
job_id = ctx.cursor().execute("select last_query_id()").fetchone()[0]
ctx.cursor().execute(f"create or replace table save_copy_errors as select * from "
                     f"table(validate({tableName}, job_id=>'{job_id}'))")
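One caveat: last_query_id() returns the ID of the most recent statement in the session, so capture it immediately after the COPY INTO, before any other statement runs. A sketch of the full sequence (same ctx and tableName as above; sfqid is the connector attribute holding the query ID of the last statement executed on that cursor, which avoids the extra round trip):
cur = ctx.cursor()
cur.execute("COPY INTO {tableName} ON_ERROR = 'CONTINUE'".format(tableName=tableName))
job_id = cur.sfqid  # query ID of the COPY INTO just executed on this cursor
ctx.cursor().execute(f"create or replace table save_copy_errors as select * from "
                     f"table(validate({tableName}, job_id=>'{job_id}'))")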
The book named "Practical Programming: 2nd Edition" has conflicting code. This is the start of my code:
import sqlite3
con = sqlite3.connect('stackoverflow.db')
cur = conn.cursor()
To commit, would I use con.commit() or cur.commit(), or are there different times to use each? The book shows con.commit() in one place and cur.commit() in another, while the documentation shows con.commit().
I took unutbu's advice and tried it myself.
Sample code:
import sqlite3
con = sqlite3.connect('db.db')
cur = con.cursor()
data = [('data', 3), ('data2', 69)]
cur.execute('CREATE TABLE Density(Name TEXT, Number INTEGER)')
for i in data:
    cur.execute('INSERT INTO Density VALUES (?, ?)', (i[0], i[1]))
cur.commit()
PyCharm Run:
Traceback (most recent call last):
File "/Users/User/Library/Preferences/PyCharmCE2018.1/scratches/scratch_2.py", line 13, in <module>
cur.commit()
AttributeError: 'sqlite3.Cursor' object has no attribute 'commit'
Error in textbook. cur.commit() does not exist.
Thanks unutbu and s3n0
con.commit() and conn.commit() are the same thing: both names refer to a Connection object, they are simply named differently in the two snippets. What matters is calling .commit() on the connection, not the name the programmer happens to give the object.
You can use whatever names you like for the connection and cursor objects (con and cur, as you asked) when calling their methods. For example:
db = sqlite3.connect('/tmp/filename.db')
cursor = db.cursor()
cursor.execute("CREATE TABLE ....
.... some DB-API 2.0 commands ....
")
db.commit()
Please check the webpage https://docs.python.org/3/library/sqlite3.html again.
You forgot to copy these two lines from it:
import sqlite3
conn = sqlite3.connect('example.db')
And then the code continues (copied straight from the docs):
c = conn.cursor()
# Create table
c.execute('''CREATE TABLE stocks
(date text, trans text, symbol text, qty real, price real)''')
# Insert a row of data
c.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
# Save (commit) the changes
conn.commit()
# We can also close the connection if we are done with it.
# Just be sure any changes have been committed or they will be lost.
conn.close()
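Worth adding (standard sqlite3 behavior, not shown in the quoted docs snippet): the connection can also be used as a context manager, which commits automatically if the block succeeds and rolls back if it raises:
import sqlite3

conn = sqlite3.connect('example.db')
with conn:  # commits on success, rolls back on exception
    conn.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
conn.close()  # the context manager does not close the connection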
I think if you're using a specific cursor to commit changes, in your case it should be cur.connection.commit(), since every sqlite3 cursor keeps a reference to the connection that created it.
You can always commit on the connection object at the end of your code, whether it's named db, con, or conn.
But when your code gets complicated, you'll have different functions performing different operations on the database. If you only ever commit on the connection, then when there is a bug you'll have a hard time finding which function failed. If instead you create a specific cursor for each operation, the traceback will show you which cursor went wrong when something fails.
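A minimal sketch of the cur.connection.commit() pattern (the table name is made up for illustration; Cursor.connection is the standard sqlite3 attribute pointing back at the owning connection):
import sqlite3

con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('CREATE TABLE notes (body TEXT)')
cur.execute("INSERT INTO notes VALUES ('hello')")
cur.connection.commit()  # no cur.commit() exists; this reaches the connection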
To #s3n0 & #DanielYu's point they can be handled two different ways. I had to list these out to better understand the overlap:
Connection Objects
backup
close
commit
create_aggregate
create_collation
create_function
cursor
enable_load_extension
execute
executemany
executescript
in_transaction
interrupt
isolation_level
iterdump
load_extension
rollback
row_factory
set_authorizer
set_progress_handler
set_trace_callback
text_factory
total_changes
Cursor objects
arraysize
close
connection
description
execute
executemany
executescript
fetchall
fetchmany
fetchone
lastrowid
rowcount
setinputsizes
setoutputsize
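One thing these lists make obvious (a fact from the sqlite3 docs, not from the answers above): commit() and rollback() exist only on the Connection, while execute()/executemany()/executescript() appear on both because the Connection versions are shortcuts that create a temporary cursor. A small sketch (table name invented for illustration):
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (x INTEGER)')   # shortcut: implicit cursor
con.execute('INSERT INTO t VALUES (1)')
con.commit()                                # commit lives on the connection only
print(con.execute('SELECT x FROM t').fetchone())  # (1,)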
I get the following error while executing my code. The error does not occur immediately; it occurs randomly after 2-7 hours. Until then, the online feeds are streamed and written to the DB without any problem.
Error message:
Traceback (most recent call last):
File "C:\Python27\MySQL_finalversion\RSS_common_FV.py", line 78, in <module>
main()
File "C:\Python27\MySQL_finalversion\RSS_common_FV.py", line 63, in main
feed_iii = feed_load_iii(feed_url_iii)
File "C:\Python27\MySQL_finalversion\RSS_common_FV.py", line 44, in feed_load_iii
in feedparser.parse(feed_iii).entries]
IndexError: list index out of range
Here you can find my Code:
import feedparser
import MySQLdb
import time
from cookielib import CookieJar
db = MySQLdb.connect(host="localhost", # your host, usually localhost
user="root", # your username - SELECT * FROM mysql.user
passwd="****", # your password
db="sentimentanalysis_unicode",
charset="utf8") # name of the data base
cur = db.cursor()
cur.execute("SET NAMES utf8")
cur.execute("SET CHARACTER SET utf8")
cur.execute("SET character_set_connection=utf8")
cur.execute("DROP TABLE IF EXISTS feeddata_iii")
sql_iii = """CREATE TABLE feeddata_iii(III_ID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(III_ID),III_UnixTimesstamp integer,III_Timestamp varchar(255),III_Source varchar(255),III_Title varchar(255),III_Text TEXT,III_Link varchar(255),III_Epic varchar(255),III_CommentNr integer,III_Author varchar(255))"""
cur.execute(sql_iii)
def feed_load_iii(feed_iii):
return [(time.time(),
entry.published,
'iii',
entry.title,
entry.summary,
entry.link,
(entry.link.split('=cotn:')[1]).split('.L&id=')[0],
(entry.link.split('.L&id=')[1]).split('&display=')[0],
entry.author)
for entry
in feedparser.parse(feed_iii).entries]
def main():
feed_url_iii = "http://www.iii.co.uk/site_wide_discussions/site_wide_rss2.epl"
feed_iii = feed_load_iii(feed_url_iii)
print feed_iii[1][1]
for item in feed_iii:
cur.execute("""INSERT INTO feeddata_iii(III_UnixTimesstamp, III_Timestamp, III_Source, III_Title, III_Text, III_Link, III_Epic, III_CommentNr, III_Author) VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)""",item)
db.commit()
if __name__ == "__main__":
while True:
main()
time.sleep(240)
If you need further information - please feel free to ask. I need your help!
Thanks and Regards from London!
In essence, your program is insufficiently resilient to poorly-formatted data.
Your code makes very explicit assumptions about the structure of the data, and is unable to cope when the data is not structured that way. You need to detect the cases where the data is incorrectly formatted and take some other action in those cases.
A rather sloppy way to do this would be simply to trap the exception that's currently being raised, which you could do with (something like):
try:
    feed_iii = feed_load_iii(feed_url_iii)
except IndexError:
    # do something to report or handle the data format problem
    pass
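A more targeted approach would be to validate each entry before parsing it, so that one malformed link skips only that entry instead of discarding the whole feed. A sketch under the same assumptions as the question's code (the marker strings '=cotn:' and '.L&id=' come from the original feed_load_iii):
def feed_load_iii_safe(feed_url):
    rows = []
    for entry in feedparser.parse(feed_url).entries:
        link = entry.get('link', '')
        # Skip entries whose link lacks the expected markers, instead of
        # letting split(...)[1] raise IndexError for the whole feed.
        if '=cotn:' not in link or '.L&id=' not in link:
            continue
        epic = link.split('=cotn:')[1].split('.L&id=')[0]
        comment_nr = link.split('.L&id=')[1].split('&display=')[0]
        rows.append((time.time(), entry.published, 'iii', entry.title,
                     entry.summary, link, epic, comment_nr, entry.author))
    return rows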
I am new to Python 2.6. I have been trying to fetch a datetime value, stored in yyyy-mm-dd hh:mm:ss format, back into my Python program. On checking the column value in Python I get the error: 'buffer' object has no attribute 'decode'. I want to use the strptime() function to split up the date data, but I can't find how to convert a buffer to a string. The following is a sample of my code:
conn = sqlite3.connect("mrp.db.db", detect_types=sqlite3.PARSE_DECLTYPES)
cursor = conn.cursor()
qryT = """
SELECT dateDefinitionTest FROM t
WHERE IDproject = 4 AND IDstatus = 5
ORDER BY priority, setDate DESC
"""
rec = (4,4)
cursor.execute(qryT,rec)
resultsetTasks = cursor.fetchall()
cursor.close()  # closing the cursor
for item in resultsetTasks:
    taskDetails = {}
    _f = item[10].decode("utf-8")
The exception I get is:
'buffer' object has no attribute 'decode'
I am not exactly sure what your problem may be. The following is a working example of what you are trying to achieve, which hopefully will help you:
#!/usr/bin/env python
import datetime
import sqlite3
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cursor = conn.cursor()
cursor.execute("CREATE TABLE t (dateDefinitionTest DATETIME)")
cursor.execute("INSERT INTO t VALUES (?)", (datetime.datetime.now(),))
query = "SELECT dateDefinitionTest FROM t"
cursor.execute(query)
for row in cursor.fetchall():
    dt = datetime.datetime.strptime(row[0], "%Y-%m-%d %H:%M:%S.%f")
    print(repr(dt))
    print(dt.strftime("%Y-%m-%d %H:%M:%S.%f"))
cursor.close()
which outputs:
datetime.datetime(2012, 11, 11, 16, 40, 26, 788966)
2012-11-11 16:40:26.788966
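As an aside (standard library behavior, assuming CPython's sqlite3 module): the strptime() call is only needed because the column is declared DATETIME. The default converters registered for PARSE_DECLTYPES are keyed on the declared type names date and timestamp, so declaring the column TIMESTAMP instead returns datetime.datetime objects directly:
import datetime
import sqlite3

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cursor = conn.cursor()
cursor.execute("CREATE TABLE t2 (created TIMESTAMP)")  # TIMESTAMP, not DATETIME
cursor.execute("INSERT INTO t2 VALUES (?)", (datetime.datetime.now(),))
cursor.execute("SELECT created FROM t2")
print(type(cursor.fetchone()[0]))  # <type 'datetime.datetime'>, no strptime needed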
Your problem's root cause is treating the 'buffer' object you get from the SQLite database as a 'string' object: the string object has a decode() method, but the 'buffer' object has no such method.
What you need to do is simple: convert the 'buffer' object to a string object. It's not difficult; you only need to add one line to your code:
tempString=str(item[10])
_f = tempString.decode("utf-8")
I encountered the same problem today, and Google directed me here, where I found no suitable answer yet, so I am providing one.
Whether a record comes back from sqlite as a buffer or a string depends on how the db table was constructed, the sqlite version, and the sqlite db plugin's version, so your test result may differ from Pedro Romano's. But either way, just add the line tempString = str(item[10]); it forces the value to be treated as a string.
I have a Django application and I'm using postgres. I try to execute the following line in one of my tests:
print BillingUser.objects.all()
And I get the following error:
"current transaction is aborted, commands ignored until end of transaction block."
My postresql log:
ERROR: duplicate key value violates unique constraint "billing_rental_wallet_id_key"
STATEMENT: INSERT INTO "billing_rental" ("wallet_id", "item_id", "end_time", "time", "value", "index", "info") VALUES (61, 230, E'2010-02-11 11:01:01.092336', E'2010-02-01 11:01:01.092336', 10.0, 1, NULL)
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: INSERT INTO "billing_timeable" ("creation_date", "update_date") VALUES (E'2010-02-01 11:01:01.093504', E'2010-02-01 11:01:01.093531')
ERROR: current transaction is aborted, commands ignored until end of transaction block
STATEMENT: SELECT "billing_timeable"."id", "billing_timeable"."creation_date", "billing_timeable"."update_date", "billing_billinguser"."timeable_ptr_id", "billing_billinguser"."username", "billing_billinguser"."pin", "billing_billinguser"."sbox_id", "billing_billinguser"."parental_code", "billing_billinguser"."active" FROM "billing_billinguser" INNER JOIN "billing_timeable" ON ("billing_billinguser"."timeable_ptr_id" = "billing_timeable"."id") LIMIT 21
How can I fix that?
Thanks, Arshavski Alexander.
OK... looking at the PostgreSQL log, it does look like you are doing an invalid insert that aborts the transaction. Now, looking at your code, I think the problem lies at lines 78-81:
currency = Currency.objects.all()[2]
if not Wallet.objects.filter(user=user):
    wallet = Wallet(user=user, currency=currency)
    wallet.save()
You create a wallet for the current user, but then on lines 87-88 you wrote:
user.wallet.amount = 12.0
user.wallet.save()
However, because you save the wallet after retrieving the user, the user object does not know that you already created a wallet for it, and with a OneToOne relationship this causes the duplicate-key error you're seeing. I think what you should do is add a line after line 81:
currency = Currency.objects.all()[2]
if not Wallet.objects.filter(user=user):
    wallet = Wallet(user=user, currency=currency)
    wallet.save()
    user.wallet = wallet
That should solve the issue....
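As a side note (a common Django idiom, not part of the original code): get_or_create expresses the same intent more compactly, assuming the same Wallet model with its OneToOne user field:
# Fetch the existing wallet or create one; 'created' tells us which happened.
wallet, created = Wallet.objects.get_or_create(user=user,
                                               defaults={'currency': currency})
user.wallet = wallet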
You insert data in some of your test functions. After an invalid insert, the DB connection is in a failed state: you need to roll back the transaction or turn transaction management off completely. See the Django docs on transactions and on testing them.
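A minimal sketch of the rollback approach (rental.save() is a hypothetical stand-in for whichever insert violates the unique constraint; django.db.transaction is the standard module):
from django.db import IntegrityError, transaction

try:
    rental.save()  # hypothetical: the insert that violates the unique constraint
except IntegrityError:
    transaction.rollback()  # clear the aborted state so later queries work
print BillingUser.objects.all()  # the query from your test now succeeds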
From the log it looks like you are trying to insert an item with a duplicate ID, which throws an error, after which the rest of your code can't access the DB anymore. Fix that query and it should work.