Using SQLite, Python, and Multithreading

I'm desperately trying to get this code to work from about 70 threads. They won't run it at exactly the same time, but pretty close together. All I really want is a way of saying: try to insert this, and if you can't, back off for a while and try again; just do it without breaking the database. I'm using no options when creating the database except the filename. The problem is I'm getting lots of "disk I/O error" and "database disk image is malformed" errors. I'm trying to run this in a transaction, so if anything goes wrong it should roll back. I've tried the isolation_level=None option on the connection, which didn't really help. I'm using the Python sqlite3 module.
Here's the code:
from random import uniform
from time import sleep

update_simulations_end_time_sql = """update simulations set end_time=?, completion_status =? where id=?;"""

def __set_time(sql_command, data):
    retries = 0
    while retries < 5:
        try:
            with create_tables.create_connection() as conn:
                cur = conn.cursor()
                cur.execute("begin")
                cur.execute(sql_command, data)
            return
        except Exception as e:
            print(f"__set_time has failed with {sql_command}")
            print(e)
            sleep_time = uniform(0.1, 4)
            print(f"Sleeping for {sleep_time}")
            sleep(sleep_time)
            retries += 1
    raise Exception(f"__set_time failed after {retries} retries")
Here are the options SQLite was compiled with:
sqlite> SELECT * FROM pragma_compile_options;
COMPILER=gcc-9.4.0
ENABLE_COLUMN_METADATA
ENABLE_DBSTAT_VTAB
ENABLE_FTS3
ENABLE_FTS3_PARENTHESIS
ENABLE_FTS3_TOKENIZER
ENABLE_FTS4
ENABLE_FTS5
ENABLE_JSON1
ENABLE_LOAD_EXTENSION
ENABLE_PREUPDATE_HOOK
ENABLE_RTREE
ENABLE_SESSION
ENABLE_STMTVTAB
ENABLE_UNKNOWN_SQL_FUNCTION
ENABLE_UNLOCK_NOTIFY
ENABLE_UPDATE_DELETE_LIMIT
HAVE_ISNAN
LIKE_DOESNT_MATCH_BLOBS
MAX_SCHEMA_RETRY=25
MAX_VARIABLE_NUMBER=250000
OMIT_LOOKASIDE
SECURE_DELETE
SOUNDEX
THREADSAFE=1
USE_URI
If anyone has any ideas on how to solve this, I would be amazingly grateful.

In Python 3.11 you will be able to inspect sqlite3.threadsafety, which now reflects how the underlying SQLite library was compiled, and set check_same_thread accordingly:
import sqlite3

if sqlite3.threadsafety == 3:
    # DB-API level 3: the underlying SQLite build is serialized
    # (THREADSAFE=1, as in the compile options above), so connections
    # may be shared freely across threads.
    check_same_thread = False
else:
    check_same_thread = True

conn = sqlite3.connect(":memory:", check_same_thread=check_same_thread)
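Building on that, here is a minimal sketch of how the original retry scenario might then be handled with a single shared connection. The WAL journal mode, the busy timeout, and the Python-level lock are common additions for many concurrent writers, not something from the original answer, and the database filename is an assumption:

import sqlite3
import threading

# Sketch: one connection shared by all worker threads. This requires a
# serialized (threadsafety == 3) build, as checked above.
conn = sqlite3.connect("simulations.db", timeout=30, check_same_thread=False)
conn.execute("PRAGMA journal_mode=WAL;")  # readers no longer block the writer
write_lock = threading.Lock()

def set_time(sql_command, data):
    # Serialize writes at the Python level so threads never collide
    # inside the sqlite3 C layer.
    with write_lock:
        with conn:  # commits on success, rolls back on exception
            conn.execute(sql_command, data)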

Related

Accessing SQLite DB with Python and getting malformed DBs

I have some Python code that copies a SQLite db across sftp. However, it is a highly active db, so much of the time I end up with a malformed db. I'm thinking of these possible options, but I don't know how to implement them because I am newer to Python.
An alternate method of getting the SQLite db copied?
Maybe there is a way to query the SQLite file from the device? I'm not sure that would work, since SQLite is more of a local db; I don't know how I could query it remotely the way I could with MySQL etc.
Create a loop? I could call the function again in the exception handler, but I'm not sure how to retry the rest of the code (a retry sketch follows the code below).
Also, I'm thinking the malformed db issue could occur in other sections too? Maybe I need to run a PRAGMA quick_check?
This is commonly what I am seeing. The other catch is why I am seeing it as often as I am, because if I load the SQLite file from my main machine, the queries run fine.
(venv) dulanic@mediaserver:/opt/python_scripts/rpi$ cd /opt/python_scripts/rpi ; /usr/bin/env /opt/python_scripts/rpi/venv/bin/python /home/dulanic/.vscode-server/extensions/ms-python.python-2021.2.636928669/pythonFiles/lib/python/debugpy/launcher 37599 -- /opt/python_scripts/rpi/rpdb.py
An error occurred: database disk image is malformed
This is my current code:
#!/usr/bin/env python3
import psycopg2, sqlite3, sys, paramiko, os, socket, time

scpuser = os.getenv('scpuser')
scppw = os.getenv('scppw')
sqdb = os.getenv('sqdb')
sqlike = os.getenv('sqlike')
pgdb = os.getenv('pgdb')
pguser = os.getenv('pguser')
pgpswd = os.getenv('pgpswd')
pghost = os.getenv('pghost')
pgport = os.getenv('pgport')
pgschema = os.getenv('pgschema')
database = r"./pihole.db"
pihole = socket.gethostbyname('pi.hole')
tabnames = []
tabgrab = ''

def pullsqlite():
    sftp.get('/etc/pihole/pihole-FTL.db', 'pihole.db')
    sftp.close()

# SFTP pull config
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname=pihole, username=scpuser, password=scppw)
sftp = ssh_client.open_sftp()
# Pull SQLite
pullsqlite()
# Load sqlite tables to list
consq = sqlite3.connect(sqdb)
cursq = consq.cursor()
cursq.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name in ({sqlike})")
tabgrab = cursq.fetchall()
# postgres connection
conpg = psycopg2.connect(database=pgdb, user=pguser, password=pgpswd,
                         host=pghost, port=pgport)
# Load data to postgres from sqlite
for item in tabgrab:
    tabnames.append(item[0])
start = time.perf_counter()
for table in tabnames:
    curpg = conpg.cursor()
    if table == 'queries':
        curpg.execute(f"SELECT max(id) FROM {table};")
        max_id = curpg.fetchone()[0]
        cursq.execute(f"SELECT * FROM {table} where id > {max_id};")
    else:
        cursq.execute(f"SELECT * FROM {table};")
    try:
        rows = cursq.fetchall()
    except sqlite3.Error as e:
        print("An error occurred:", e.args[0])
    colcount = len(rows[0])
    pholder = ('%s,' * colcount)[:-1]
    try:
        curpg.execute(f"SET search_path TO {pgschema};")
        curpg.executemany(f"INSERT INTO {table} VALUES ({pholder}) ON CONFLICT DO NOTHING;", rows)
        conpg.commit()
        print(f'Inserted {len(rows)} rows into {table}')
    except psycopg2.DatabaseError as e:
        print(f'Error {e}')
        sys.exit(1)
if 'start' in locals():
    elapsed = time.perf_counter() - start
    print(f'Time {elapsed:0.4}')
consq.close()
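For the retry idea mentioned above, one possible shape is to re-pull the file and run PRAGMA quick_check before trusting the copy. This is a sketch only: pullsqlite() comes from the code above (note it closes the sftp session, so it would need reopening between attempts), and the retry count and backoff are assumptions:

import sqlite3, time

def pull_until_healthy(max_retries=5):
    """Re-pull the db and verify it before using it (hypothetical helper)."""
    for attempt in range(max_retries):
        pullsqlite()
        con = sqlite3.connect("pihole.db")
        try:
            # quick_check returns the single row ('ok',) on a healthy file
            if con.execute("PRAGMA quick_check;").fetchone()[0] == "ok":
                return
        except sqlite3.DatabaseError:
            pass  # copy is malformed; fall through and retry
        finally:
            con.close()
        time.sleep(2 ** attempt)  # back off before re-pulling
    raise RuntimeError("never got a healthy copy of the database")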

How to make a button insert data into my database?

I'm writing a little Python product registration program, and the database just doesn't open. I tried to open it with the sqlite3 library and, by suggestion, tried QSqlDatabase, but nothing works. What could this error be? How do I solve it and connect?
I changed the database connection:
path = r'C:\Users\Daniel\Desktop\Sistema NaruHodo\Banco de Dados'
conn = sqlite3.connect(path+r'\produtos.db')
cursor = conn.cursor()
Error:
C:\Users\Daniel\AppData\Local\Programs\Python\Python37\pythonw.exe "C:/Users/Daniel/Desktop/Sistema NaruHodo/cadastroprodutos.py"
Process finished with exit code -1073740791 (0xC0000409)
Here is the def I'm using to try to get the field data and insert it into the database.
def addProduto(self, produtotext, estoquetext, precocustotext, precovendatext, fornecedorcomboBox):
    path = r'C:\Users\Daniel\Desktop\Sistema NaruHodo\Banco de Dados'
    conn = sqlite3.connect(path + r'\produtos.db')
    produto = str(self.produtotext.text())
    estoque = float(self.estoquetext.text())
    precocusto = float(self.precocustotext.text())
    precovenda = float(self.precovendatext.text())
    fornecedor = str(self.fornecedorcomboBox.currentText())
    conn.execute(f"""INSERT INTO produ VALUES (null, {produto}, {estoque}, {precocusto}, {precovenda}, {fornecedor})""")
    conn.commit()
It looks like you're trying to display the database in a table. I recommend using a QSqlDatabase to create a connection to the database (it does support an SQLite driver). Using this instead of sqlite3 allows for better integration with the PyQt5 framework. Specifically, it can be easily hooked up with a QSqlTableModel and then displayed with a QTableView. For this approach, I suggest you familiarise yourself with model/view programming. It may seem more involved and complex than the QTableWidget you are currently using, but it's well worth it for the easy database integration it offers.
Excuse some of the doc links being for PySide2; the PyQt5 docs aren't exactly complete unfortunately.
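A minimal sketch of that model/view approach, assuming the database path from the question and a table name of "produtos" (both assumptions):

import sys
from PyQt5.QtSql import QSqlDatabase, QSqlTableModel
from PyQt5.QtWidgets import QApplication, QTableView

app = QApplication(sys.argv)

# Open the SQLite file through Qt's own driver
db = QSqlDatabase.addDatabase("QSQLITE")
db.setDatabaseName(r"C:\Users\Daniel\Desktop\Sistema NaruHodo\Banco de Dados\produtos.db")
if not db.open():
    raise RuntimeError(db.lastError().text())

# The model wraps the table; the view displays (and can edit) it
model = QSqlTableModel()
model.setTable("produtos")  # table name assumed
model.select()

view = QTableView()
view.setModel(model)
view.show()
sys.exit(app.exec_())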
I solved the problem. QSqlDatabase gave a driver error and wouldn't accept the connection, so I went back to sqlite3. It was still crashing even when connected, and I found the problem was really in the function, where I hadn't converted the values to strings:
def addProduto(self):
    self.banco = sqlite3.connect('Vendas.db')
    self.cursor = self.banco.cursor()
    Nome = self.produtotext.text()
    Quant = self.estoquetext.text()
    Valor = self.precocustotext.text()
    Preco = self.precovendatext.text()
    Forn = self.fornecedorcomboBox.currentText()
    if Nome != '' or Quant != '' or Valor != '' or Preco != '' or Forn != '':
        self.banco.execute(f"""INSERT INTO Produtos VALUES ('{Nome}', {Quant}, {Valor}, {Preco}, '{Forn}')""")
        self.banco.commit()
        self.LoadDatabase()
        self.limparcampos()
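As a side note, a parameterized INSERT sidesteps both the string-conversion problem and SQL injection. A sketch using the names from the solution above (it also uses all(), on the assumption that every field should be filled in, where the original used or):

import sqlite3

def addProduto(self):
    self.banco = sqlite3.connect('Vendas.db')
    dados = (
        self.produtotext.text(),
        self.estoquetext.text(),
        self.precocustotext.text(),
        self.precovendatext.text(),
        self.fornecedorcomboBox.currentText(),
    )
    if all(campo != '' for campo in dados):
        # ? placeholders let sqlite3 handle quoting and type conversion
        self.banco.execute("INSERT INTO Produtos VALUES (?, ?, ?, ?, ?)", dados)
        self.banco.commit()
        self.LoadDatabase()
        self.limparcampos()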

Python's pymysql lib stops replying randomly

I am using pymysql to save data to a MySQL db. I have a permanent flow of tick data from a financial market, and I use cur.executemany to insert 10 lines at a time. It worked fine for the first 20-30 lines; then it stops writing and doesn't throw any exception.
self.queue.append((timestamp, side, size, price))
if len(self.queue) >= 10:
    try:
        self.logger.info("Writing 10 lines to sql..")
        conn = pymysql.connect(host='localhost', port=3306, user='root', passwd='*****', db='sys')
        cur = conn.cursor()
        sqlQ = """ INSERT den_trades (date2 , side, size, price) VALUES (%s,%s,%s,%s)"""
        cur.executemany(sqlQ, self.queue)
        conn.commit()
        conn.close()
        self.queue = []
    except Exception as e:
        self.logger.warning("Exception while cur.executemany... sys.exc_info()[0]: {}".format(sys.exc_info()[0]))
        self.logger.warning("e.message ".format(e.message))
        template = "An exception of type {0} occurred. Arguments:\n{1!r}"
        message = template.format(type(e).__name__, e.args)
        self.logger.warning(message)
        conn.rollback()
I am trying to catch exceptions, but not a single warning appears.
The strange thing is that when the problem appears, "Writing 10 lines to sql.." still logs fine for every tick, and self.queue keeps growing, so self.queue = [] never happens. How can that be? The first line of the try block still runs, but the last line never does; if so, there should be an exception somewhere, right?
One more thing: I have another script running fine on the same machine that saves 1000 lines at a time through pymysql.
Could that be a problem?
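Not an answer from the original thread, but one way to rule out a silently swallowed error is to log the full traceback with logger.exception and catch BaseException so nothing escapes unrecorded. A sketch, with the table and credentials taken from the question and connect_timeout added as an assumption (a hanging connect would match the "no log lines at all" symptom):

import logging
import pymysql

logger = logging.getLogger(__name__)

def flush_queue(queue):
    try:
        conn = pymysql.connect(host='localhost', port=3306, user='root',
                               passwd='*****', db='sys',
                               connect_timeout=5)  # raise instead of blocking forever
        try:
            cur = conn.cursor()
            cur.executemany(
                "INSERT den_trades (date2, side, size, price) VALUES (%s,%s,%s,%s)",
                queue)
            conn.commit()
        finally:
            conn.close()
        return []  # caller assigns this back to self.queue
    except BaseException:
        # logger.exception records the full traceback; note that e.message,
        # used in the original handler, does not exist on Python 3 exceptions
        # and would itself raise inside the except block
        logger.exception("Writing %d rows to MySQL failed", len(queue))
        raise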

SQLAlchemy, scoped_session - raw SQL INSERT doesn't write to DB

I have a Pyramid / SQLAlchemy, MySQL python app.
When I execute a raw SQL INSERT query, nothing gets written to the DB.
When using ORM, however, I can write to the DB. I read the docs, I read up about the ZopeTransactionExtension, read a good deal of SO questions, all to no avail.
What hasn't worked so far:
transaction.commit() - nothing is written to the DB. I do realize this statement is necessary with ZopeTransactionExtension but it just doesn't do the magic here.
dbsession().commit - doesn't work since I'm using ZopeTransactionExtension
dbsession().close() - nothing written
dbsession().flush() - nothing written
mark_changed(session) -
File "/home/dev/.virtualenvs/sc/local/lib/python2.7/site-packages/zope/sqlalchemy/datamanager.py", line 198, in join_transaction
if session.twophase:
AttributeError: 'scoped_session' object has no attribute 'twophase'
What has worked but is not acceptable because it doesn't use scoped_session:
engine.execute(...)
I'm looking for how to execute raw SQL with a scoped_session (dbsession() in my code)
Here is my SQLAlchemy setup (models/__init__.py)
def dbsession():
    assert (_dbsession is not None)
    return _dbsession

def init_engines(settings, _testing_workarounds=False):
    import zope.sqlalchemy
    extension = zope.sqlalchemy.ZopeTransactionExtension()
    global _dbsession
    _dbsession = scoped_session(
        sessionmaker(
            autoflush=True,
            expire_on_commit=False,
            extension=extension,
        )
    )
    engine = engine_from_config(settings, 'sqlalchemy.')
    _dbsession.configure(bind=engine)
Here is a python script I wrote to isolate the problem. It resembles the real-world environment of where the problem occurs. All I want is to make the below script insert the data into the DB:
# -*- coding: utf-8 -*-
import sys
import transaction
from pyramid.paster import setup_logging, get_appsettings
from sc.models import init_engines, dbsession
from sqlalchemy.sql.expression import text

def __main__():
    if len(sys.argv) < 2:
        raise RuntimeError()
    config_uri = sys.argv[1]
    setup_logging(config_uri)
    aa = init_engines(get_appsettings(config_uri))
    session = dbsession()
    session.execute(text("""INSERT INTO
        operations (description, generated_description)
        VALUES ('hello2', 'world');"""))
    print list(session.execute("""SELECT * from operations""").fetchall())  # prints inserted data
    transaction.commit()
    print list(session.execute("""SELECT * from operations""").fetchall())  # doesn't print inserted data

if __name__ == '__main__':
    __main__()
What is interesting, if I do:
session = dbsession()
session.execute(text("""INSERT INTO
operations (description, generated_description)
VALUES ('hello2', 'world');"""))
op = Operation(generated_description='aa', description='oo')
session.add(op)
then the first print outputs the raw SQL inserted row ('hello2' 'world'), and the second print prints both rows, and in fact both rows are inserted into the DB.
I cannot comprehend why using an ORM insert alongside raw SQL "fixes" it.
I really need to be able to call execute() on a scoped_session to insert data into the DB using raw SQL. Any advice?
It has been a while since I mixed raw sql with sqlalchemy, but whenever you mix them, you need to be aware of what happens behind the scenes with the ORM. First, check the autocommit flag. If the zope transaction is not configured correctly, the ORM insert might be triggering a commit.
Actually, after looking at the zope docs, it seems manual execute statements need an extra step. From their readme:
By default, zope.sqlalchemy puts sessions in an 'active' state when they are
first used. ORM write operations automatically move the session into a
'changed' state. This avoids unnecessary database commits. Sometimes it
is necessary to interact with the database directly through SQL. It is not
possible to guess whether such an operation is a read or a write. Therefore we
must manually mark the session as changed when manual SQL statements write
to the DB.
>>> session = Session()
>>> conn = session.connection()
>>> users = Base.metadata.tables['test_users']
>>> conn.execute(users.update(users.c.name=='bob'), name='ben')
<sqlalchemy.engine...ResultProxy object at ...>
>>> from zope.sqlalchemy import mark_changed
>>> mark_changed(session)
>>> transaction.commit()
>>> session = Session()
>>> str(session.query(User).all()[0].name)
'ben'
>>> transaction.abort()
It seems you aren't doing that, and so the transaction.commit does nothing.
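Applied to the script from the question, the fix would look roughly like this (a sketch; the key detail is passing the real Session, obtained by calling the scoped_session, to mark_changed, which is why mark_changed(session) on the proxy raised the 'twophase' AttributeError above):

from zope.sqlalchemy import mark_changed

session = dbsession()
session.execute(text("""INSERT INTO
    operations (description, generated_description)
    VALUES ('hello2', 'world');"""))
# session is a scoped_session proxy; calling it returns the underlying
# Session object that mark_changed expects
mark_changed(session())
transaction.commit()  # the INSERT is now part of the managed transaction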

Sybase sybpydb queries not returning anything

I am currently connecting to a Sybase 15.7 server using sybpydb. It seems to connect fine:
import sys
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib')
sys.path.append('/dba/sybase/ase/15.7/OCS-15_0/lib')
import sybpydb
conn = sybpydb.connect(user='usr', password='pass', servername='serv')
This works fine; changing any of my connection details results in a connection error.
I then select a database:
curr = conn.cursor()
curr.execute('use db_1')
However, now when I try to run queries, it always returns None:
print curr.execute('select * from table_1')
I have tried running the use and select queries in the same execute, I have tried including go commands after each, I have tried using curr.connection.commit() after each, all with no success. I have confirmed, using dbartisan and isql, that the same queries I am using return entries.
Why am I not getting results from my queries in python?
EDIT:
Just some additional info. In order to get the sybpydb import to work, I had to change two environment variables. I added the lib paths (the same ones that I added to sys.path) to $LD_LIBRARY_PATH, i.e.:
setenv LD_LIBRARY_PATH "$LD_LIBRARY_PATH":dba/sybase/ase/15.7/OCS-15_0/python/python26_64r/lib:/dba/sybase/ase/15.7/OCS-15_0/lib
and I had to change the SYBASE path from 12.5 to 15.7. All this was done in csh.
If I print conn.error(), after every curr.execute(), I get:
("Server message: number(5701) severity(10) state(2) line(0)\n\tChanged database context to 'master'.\n\n", 5701)
I completely understand where you might be confused by the documentation. It doesn't seem to be on par with other db extensions (e.g. psycopg2).
When connecting with most standard db extensions you can specify a database. Then, when you want to get the data back from a SELECT query, you either use fetch (an ok way to do it) or the iterator (the more pythonic way to do it).
import sybpydb as sybase

conn = sybase.connect(user='usr', password='pass', servername='serv')
cur = conn.cursor()
cur.execute("use db_1")
cur.execute("SELECT * FROM table_1")
print "Query Returned %d row(s)" % cur.rowcount
for row in cur:
    print row

# Alternate less-pythonic way to read query results
# for row in cur.fetchall():
#     print row
Give that a try and let us know if it works.
Python 3.x working solution:
import sybpydb

try:
    conn = sybpydb.connect(dsn="Servername=serv;Username=usr;Password=pass")
    cur = conn.cursor()
    cur.execute('select * from db_1..table_1')
    # table header
    header = tuple(col[0] for col in cur.description)
    print('\t'.join(header))
    print('-' * 60)
    res = cur.fetchall()
    for row in res:
        line = '\t'.join(str(col) for col in row)
        print(line)
    cur.close()
    conn.close()
except sybpydb.Error:
    for err in cur.connection.messages:
        print(f'Error {err[0]}, Value {err[1]}')
