python sqlite3: getting error "sqlite3.OperationalError: database is locked"

Can you help me? I wrote this code:
class Favorits(object):
    def __init__(self, graph_):
        self.graph_ = graph_
        self.conn3 = sqlite3.connect('C:/C/V2.db')

    def add_(self):
        c3 = self.conn3.cursor()
        c3.execute("SELECT COUNT(*) FROM fav")
        t_count = c3.fetchall()
        self.conn3.commit()
        t_count = t_count[0][0]
        to_add_rus = self.graph_.text_rus.get('1.0', 'end')
        to_add_eng = self.graph_.text_eng.get('1.0', 'end')
        to_add_esp = self.graph_.text_esp.get('1.0', 'end')
        c3.execute("INSERT INTO fav VALUES(?,?,?,?)", (t_count + 1, to_add_rus, to_add_eng, to_add_esp))
        self.conn3.commit()

    def rem_(self):
        c4 = self.conn3.cursor()
        idx = self.graph_.word_.f_0_to_remove
        idx = idx[0]
        print(idx)
        c4.execute("DELETE FROM fav WHERE id_=?", (idx,))
        self.conn3.commit()
This is a class which I use to add and remove rows from the db (using Tkinter as the GUI).
So basically I'm trying to make two different connections to the same db via two different cursors (in order to be able to add and remove words from it). And I constantly get this error:
self.conn3.commit()
sqlite3.OperationalError: database is locked
I have already tried different options, making two different cursors, etc. Nothing helps.

You need to close the database connection once you are done with it. Otherwise the connection persists and keeps its lock, so SQLite reports that the database is locked when another connection tries to write.
You can close the connection with self.conn3.close().
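For example, a minimal sketch (reusing the path and table from the question) that opens the connection, commits, and closes it in one place, so no lock lingers between GUI actions:

import sqlite3

def add_fav(row):
    conn = sqlite3.connect('C:/C/V2.db')
    try:
        with conn:  # commits the transaction on success, rolls back on error
            conn.execute("INSERT INTO fav VALUES(?,?,?,?)", row)
    finally:
        conn.close()  # releases the lock held by this connection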

Related

Odoo 10: How to know if record is already in database or a new one?

I'm working on computing the average of x records, and I don't want to include the last one (the record where I trigger the action). I can trigger the action on an existing record or on a new one (not yet in the database).
Here is my code:
@api.one
@api.depends('stc')
def _compute_average_gross(self):
    if self.stc:
        base_seniority = 12
        match_seniority = self.seniority.split()
        total_seniority = int(match_seniority[0]) + int(match_seniority[2]) * 12
        if total_seniority < 12:
            base_seniority = total_seniority if total_seniority else 1  # avoid dividing by 0
        # if the hr.payslip is already in db
        if self._origin.id:
            limit = 13
            # could be self.env.cr.execute()
            sum_sbr = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit)[1:].mapped('line_ids').filtered(lambda x: x.code == 'SBR').mapped('amount'))
            sum_average_gross = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit)[1:].mapped('average_gross'))
        else:
            limit = 12
            # could be self.env.cr.execute()
            sum_sbr = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit).mapped('line_ids').filtered(lambda x: x.code == 'SBR').mapped('amount'))
            sum_average_gross = sum(self.search([('employee_id', '=', self.employee_id.id)], order='create_date desc', limit=limit).mapped('average_gross'))
        self.average_gross = round((sum_sbr + sum_average_gross) / base_seniority, 2)
With that I get an error that self doesn't have _origin; I tried origin but got the same error. I also tried self.context['params'].get('id'), but it doesn't work as expected.
Could you help me?
To check whether the record is not yet saved in the database, do this:
if isinstance(self.id, models.NewId):
    # record is not saved in the database
    # do your logic

# record is saved in the database
if not isinstance(self.id, models.NewId):
    # ....
For all who are coming to this after the accepted answer: the correct solution should be this:
if isinstance(self.id, models.NewId) and not self._origin:
    # record is not saved in the database
    # do your logic

# record is saved in the database
if not isinstance(self.id, models.NewId) or self._origin:
    # ....
I'm not sure whether _origin already existed in Odoo 10, but I needed the same thing in Odoo 13.
I haven't tested this on a single record, but with res.partner and the partner contacts (field child_ids) the problem is that if you open an existing contact and change any field, Odoo transfers the existing record into a new record and you get a false positive, since record.id is a new ID while the origin exists in the DB.
I haven't tested the copy functionality, but I'm fairly sure Odoo correctly resets the origin on the new record, so my answer should be right.
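Put together, a minimal sketch of how that check could drive the asker's compute method (the hr.payslip inheritance and field names are assumed from the question's context, and the 13 vs 12 limit is the asker's own logic):

from odoo import api, fields, models

class HrPayslip(models.Model):
    _inherit = 'hr.payslip'

    average_gross = fields.Float(compute='_compute_average_gross')

    @api.depends('stc')
    def _compute_average_gross(self):
        for record in self:
            # True when this payslip already exists in the database
            saved = not isinstance(record.id, models.NewId) or record._origin
            limit = 13 if saved else 12
            # ... the rest of the asker's aggregation logic goes here ...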

Bulk INSERT IGNORE using Flask-SQLAlchemy

I'm trying to update a database using API-gathered data, and I need to make sure all tables are being updated.
Sometimes I will receive data that's already in the database, so I want to do an INSERT IGNORE.
My current code is something like this:
def update_orders(new_orders):
    entries = []
    for each_order in new_orders:
        shipping_id = each_order['id']
        title = each_order['title']
        price = each_order['price']
        code = each_order['code']
        source = each_order['source']
        phone = each_order['phone']
        category = each_order['delivery_category']
        carrier = each_order['carrier_identifier']
        new_entry = Orders(
            id=shipping_id,
            title=title,
            code=code,
            source=source,
            phone=phone,
            category=category,
            carrier=carrier,
            price=price
        )
        entries.append(new_entry)
    if len(entries) == 0:
        print('No new orders.')
        return
    else:
        print('New orders:', len(entries))
        db.session.add_all(entries)
        db.session.commit()
This works well when I'm creating the database from scratch, but it will give me an error if there's duplicate data, and I'm not able to commit the inserts.
I've been reading for a while, and found a workaround that uses prefix_with:
print('New orders:', len(entries))
if len(entries) == 0:
    print('No new orders.')
else:
    insert_command = Orders.__table__.insert().prefix_with('OR IGNORE').values(entries)
    db.session.execute(insert_command)
    db.session.commit()
The problem is that values(entries) receives a bunch of ORM objects:
<shop.database.models.Orders object at 0x11986def0>, i.e. the class instances in memory rather than plain column values.
Does anybody have a suggestion for approaching this problem?
Feel free to suggest a different approach, or just an adjustment.
Thanks a lot.
What database are you using? Under MySQL, "INSERT OR IGNORE" is not valid syntax; one should use "INSERT IGNORE" instead. I had the same situation and got my query to work with the following:
insert_command = Orders.__table__.insert().prefix_with(' IGNORE').values(entries)
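The values()/execute() call wants plain column values rather than ORM instances, so another way around the error is to build dictionaries instead of Orders objects. A sketch, assuming SQLite (where OR IGNORE is valid; use IGNORE on MySQL) and the column names from the question's model:

def update_orders(new_orders):
    # plain dicts keyed by column name, not ORM objects
    entries = [
        {
            'id': o['id'],
            'title': o['title'],
            'price': o['price'],
            'code': o['code'],
            'source': o['source'],
            'phone': o['phone'],
            'category': o['delivery_category'],
            'carrier': o['carrier_identifier'],
        }
        for o in new_orders
    ]
    if not entries:
        print('No new orders.')
        return
    print('New orders:', len(entries))
    stmt = Orders.__table__.insert().prefix_with('OR IGNORE')
    db.session.execute(stmt, entries)  # executemany-style bulk insert
    db.session.commit()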

Python Flask and SQLAlchemy, selecting all data from a column

I am attempting to query all rows for a column called show_id. I would then like to compare each potential item to be added to the DB with the results. The simplest way I can think of doing that is by checking whether each show is in the results, and if so, skipping it. However, the results from the snippet below are returned as objects, so this check fails.
Is there a better way to create the query to achieve this?
shows_inDB = Show.query.filter(Show.show_id).all()
print(shows_inDB)
Results:
<app.models.user.Show object at 0x10c2c5fd0>,
<app.models.user.Show object at 0x10c2da080>,
<app.models.user.Show object at 0x10c2da0f0>
Code for the entire function:
def save_changes_show(show_details):
    """
    Save the changes to the database
    """
    try:
        shows_inDB = Show.query.filter(Show.show_id).all()
        print(shows_inDB)
        for show in show_details:
            # Check the show isn't already in the DB
            if show['id'] in shows_inDB:
                print(str(show['id']) + ' Already Present')
            else:
                # Add show to DB
                tv_show = Show(
                    show_id=show['id'],
                    seriesName=str(show['seriesName']).encode(),
                    aliases=str(show['aliases']).encode(),
                    banner=str(show['banner']).encode(),
                    seriesId=str(show['seriesId']).encode(),
                    status=str(show['status']).encode(),
                    firstAired=str(show['firstAired']).encode(),
                    network=str(show['network']).encode(),
                    networkId=str(show['networkId']).encode(),
                    runtime=str(show['runtime']).encode(),
                    genre=str(show['genre']).encode(),
                    overview=str(show['overview']).encode(),
                    lastUpdated=str(show['lastUpdated']).encode(),
                    airsDayOfWeek=str(show['airsDayOfWeek']).encode(),
                    airsTime=str(show['airsTime']).encode(),
                    rating=str(show['rating']).encode(),
                    imdbId=str(show['imdbId']).encode(),
                    zap2itId=str(show['zap2itId']).encode(),
                    added=str(show['added']).encode(),
                    addedBy=str(show['addedBy']).encode(),
                    siteRating=str(show['siteRating']).encode(),
                    siteRatingCount=str(show['siteRatingCount']).encode(),
                    slug=str(show['slug']).encode()
                )
                db.session.add(tv_show)
                db.session.commit()
    except Exception:
        print(traceback.print_exc())
I have decided to use the method above and extract the data I wanted into a list, comparing each show to the list.
show_compare = []
shows_inDB = Show.query.filter().all()
for item in shows_inDB:
    show_compare.append(item.show_id)
for show in show_details:
    # Check the show isn't already in the DB
    if show['id'] in show_compare:
        print(str(show['id']) + ' Already Present')
    else:
        # Add show to DB
For querying a specific column value, have a look at this question: Flask SQLAlchemy query, specify column names. This is the example code given in the top answer there:
result = SomeModel.query.with_entities(SomeModel.col1, SomeModel.col2)
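Applied to the Show model from the question, a minimal sketch that fetches only the show_id column and collects the values into a set for fast membership tests:

existing_ids = {row.show_id for row in Show.query.with_entities(Show.show_id).all()}

for show in show_details:
    if show['id'] in existing_ids:
        print(str(show['id']) + ' Already Present')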
The crux of your problem is that you want to create a new Show instance if that show doesn't already exist in the database.
Querying the database for all shows and looping through the result for each potential new show might become very inefficient if you end up with a lot of shows in the database, and finding an object by identity is what an RDBMS does best!
This function will check to see if an object exists, and create it if not. Inspired by this answer:
def add_if_not_exists(model, **kwargs):
    if not model.query.filter_by(**kwargs).first():
        instance = model(**kwargs)
        db.session.add(instance)
So your example would look like:
def add_if_not_exists(model, **kwargs):
if not model.query.filter_by(**kwargs).first():
instance = model(**kwargs)
db.session.add(instance)
for show in show_details:
add_if_not_exists(Show, id=show['id'])
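One caveat, assuming the usual Flask-SQLAlchemy setup from the question: add_if_not_exists only stages the object with db.session.add, so commit once after the whole loop to actually write the new shows:

for show in show_details:
    add_if_not_exists(Show, id=show['id'])
db.session.commit()  # write all newly staged shows in one transaction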
If you really want to query all shows upfront, instead of putting all of the ids into a list you could use a set, which will speed up your inclusion test.
E.g.:
show_compare = {item.show_id for item in Show.query.all()}
for show in show_details:
    # ... same as your code

python cassandra: get a big result of select * as a generator (without storing the result in RAM)

I want to get all the data in the Cassandra table "user".
I have 840000 users and I don't want to load all of them into a Python list.
I want to get the users in batches of 100.
In the Cassandra docs https://datastax.github.io/python-driver/query_paging.html
I see I can use fetch_size, but in my Python code I have a database object that contains all the CQL instructions:
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

class Database:
    def __init__(self):
        self.cluster = Cluster(['192.168.1.1', '192.168.1.2'])
        self.session = self.cluster.connect()

    def get_users(self):
        users_list = []
        query = "SELECT * FROM users"
        statement = SimpleStatement(query, fetch_size=10)
        for user_row in self.session.execute(statement):
            users_list.append(user_row.name)
        return users_list
Currently get_users returns a very big list of user names,
but I want to turn get_users into a "generator":
I don't want to get all user names in one list from a single call to get_users; I want to call get_users many times and get back a list of at most 100 users on each call.
For example:
list1 = database.get_users()
list2 = database.get_users()
...
listn = database.get_users()
list1 contains the first 100 users from the query,
list2 contains the second 100 users,
and listn contains the last elements of the query (<= 100).
Is this possible?
Thanks in advance for your answers.
According to Paging Large Queries:
Whenever there are no more rows in the current page, the next page
will be fetched transparently.
So if you execute your code like this, you will still get the whole result set, but it is paged in a transparent manner.
In order to achieve what you need, use callbacks. You can also find some sample code at the link above.
I added below the full code for reference.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from threading import Event

class PagedResultHandler(object):
    def __init__(self, future):
        self.error = None
        self.finished_event = Event()
        self.future = future
        self.future.add_callbacks(
            callback=self.handle_page,
            errback=self.handle_error)

    def handle_page(self, rows):
        for row in rows:
            process_row(row)
        if self.future.has_more_pages:
            self.future.start_fetching_next_page()
        else:
            self.finished_event.set()

    def handle_error(self, exc):
        self.error = exc
        self.finished_event.set()

def process_row(user_row):
    print(user_row.name, user_row.age, user_row.email)

cluster = Cluster()
session = cluster.connect()
query = "SELECT * FROM myschema.users"
statement = SimpleStatement(query, fetch_size=5)
future = session.execute_async(statement)
handler = PagedResultHandler(future)
handler.finished_event.wait()
if handler.error:
    raise handler.error
cluster.shutdown()
Moving to the next page is done in handle_page when start_fetching_next_page is called.
If you replace that if statement with just self.finished_event.set(), you will see that the iteration stops after the first 5 rows, as defined by fetch_size.
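If you'd rather keep the synchronous get_users style from the question, the same transparent paging also makes a plain generator straightforward: iterating over the ResultSet pulls one page at a time, so a sketch like the following (the method name and batch size are assumptions, not part of the driver) never holds much more than one batch of names in memory:

def get_users_in_batches(self, batch_size=100):
    query = "SELECT * FROM users"
    statement = SimpleStatement(query, fetch_size=batch_size)
    batch = []
    # the driver fetches the next page transparently while we iterate
    for user_row in self.session.execute(statement):
        batch.append(user_row.name)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # the final, possibly shorter, batch
        yield batch

Each iteration of "for names in database.get_users_in_batches():" then sees a list of at most 100 user names.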

How do I call a function after a specified period of time?

I am trying to build a learner which will call a function and store the weights into the DB. The problem is that it takes 30 to 60 seconds to learn, so if I want to store the weights I have to wait. I decided to call the function with a threading timer, which should call the function after the specified time period.
Example of code:
def learn(myConnection):
    '''
    Derive all the names, images where state = 1
    Learn and Store
    Delete all the columns where state is 1
    '''
    id = 0
    with myConnection:
        cur = myConnection.cursor()
        cur.execute("Select name, image FROM images WHERE state = 1")
        rows = cur.fetchall()
        for row in rows:
            print "%s, %s" % (row[0], row[1])
            name = 'images/Output%d.jpg' % (id,)
            names = row[0]
            with open(name, "wb") as output_file:
                output_file.write(row[1])
            unknown_image = face_recognition.load_image_file(name)
            unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
            # here I give a timer and call the function
            threading = Timer(60, storeIntoSQL(names, unknown_encoding))
            threading.start()
            id += 1
The thing that did not work with this is that it behaves as if I did not specify the timer: it does not wait 60 seconds, it just runs as if I called the function directly. Any ideas on how I can make this work, or what alternatives I can use? PS: I have already tried time.sleep; it just blocks the main thread, and I need the project to keep running while this is training.
Example of the function that is being called:
def storeIntoSQL(name, unknown_face_encoding):
    print 'i am printing'
    # connect to the database
    con = lite.connect('users2.db')
    # store the new person into the database
    with con:
        cur = con.cursor()
        # get the new id
        cur.execute("SELECT DISTINCT id FROM Users ")
        rows = cur.fetchall()
        newId = len(rows) + 1
        # store into the database
        query = "INSERT INTO Users VALUES (?,?,?)"
        cur.executemany(query, [(newId, name, r,) for r in unknown_face_encoding])
I was also told that MUTEX synchronization could help, where I make one thread work only after the other thread has finished its job, but I am not sure how to implement it and am open to any suggestions.
I would suggest using Python's threading library and putting a time.sleep(60) inside your function or in a wrapper function. For example:
import time
import threading

def delayed_func(name, unknown_face_encoding):
    time.sleep(60)
    storeIntoSQL(name, unknown_face_encoding)

timer_thread = threading.Thread(target=delayed_func, args=(name, unknown_face_encoding))
timer_thread.start()
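Separately, note that the original call Timer(60, storeIntoSQL(names, unknown_encoding)) executes storeIntoSQL immediately and passes its return value to Timer, which is why no delay was observed. threading.Timer (like threading.Thread) takes the callable and its arguments separately; a minimal sketch:

from threading import Timer

# pass the function itself plus its arguments; do not call it here
t = Timer(60, storeIntoSQL, args=(names, unknown_encoding))
t.start()  # storeIntoSQL(names, unknown_encoding) runs about 60 seconds later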
