Unable to use same SQLite connection across multiple objects in Python

I'm working on a Python desktop app using wxPython and SQLite. The SQLite db is basically being used as a save file for my program so I can save, back up, and reload the data being entered. I've created separate classes for parts of my UI to make it easier to manage from the "main" window. The problem I'm having is that each control needs to access the database, but the filename, and therefore the connection name, needs to be dynamic. I originally created a DBManager class that hardcoded a class variable with the connection string, which worked but didn't let me change the filename. For example:
import sqlite3

class DBManager:
    conn = sqlite3.Connection('my_file.db')

# This could then be passed to other objects as needed
class Control1:
    file = DBManager()

class Control2:
    file = DBManager()
etc.
However, I'm running into a lot of problems trying to create this object with a dynamic filename while also using the same connection across all controls. Here are some of the things I've tried...
class DBManager:
    conn = None

    def __init__(self):
        pass

    def __init__(self, filename):
        self.conn = sqlite3.Connection(filename)

class Control1:
    file = DBManager()

class Control2:
    file = DBManager()
The above doesn't work because Python doesn't allow overloading constructors, so I always have to pass a filename. I tried adding some code to the constructor to act differently based upon whether the filename passed was blank or not.
class DBManager:
    conn = None

    def __init__(self, filename):
        if filename != '':
            self.conn = sqlite3.Connection(filename)

class Control1:
    file = DBManager('')

class Control2:
    file = DBManager('')
This ran, but the controls only ever had an empty connection; the conn object was None. It seems like I can't change a class variable after it's been created? Or am I just doing something wrong?
I've thought about creating one instance of DBManager that I then pass into each control, but that would be a huge mess if I need to load a new DB after starting the program. Also, it's just not as elegant.
So, I'm looking for ideas on achieving the one-connection path with a dynamic filename. For what it's worth, this is entirely for personal use, so it doesn't really have to follow "good" coding convention.

Explanation of your last example
You get None in the last example because you are instantiating DBManager in Control1 and Control2 with empty strings as input, and the DBManager constructor has an if-statement saying that a connection should not be created if filename is just an empty string. This means the self.conn instance variable is never set, so any reference to conn resolves to the conn class variable, which is indeed set to None.
self.conn would create an instance variable, accessible only through that specific object.
DBManager.conn refers to the class variable, and that is what you want to update.
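As a minimal illustration of the difference (plain Python, nothing assumed beyond the class above):

class DBManager:
    conn = None

mgr = DBManager()
mgr.conn = 'instance value'     # creates an instance attribute on mgr only
print(DBManager.conn)           # None: the class variable is untouched
DBManager.conn = 'class value'  # this is what updates the shared variable
print(DBManager().conn)         # 'class value', now seen by every instance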
Example solution
If you only want to keep one connection, you would need to do it with e.g. a class variable, and update the class variable every time you interact with a new db.
import sqlite3
from sqlite3 import Connection

class DBManager:
    conn = None

    def __init__(self, filename):
        if filename != '':
            self.filename = filename

    def load(self) -> Connection:
        DBManager.conn = sqlite3.Connection(self.filename)  # updating class variable with new connection
        print(DBManager.conn, f" used for {self.filename}")
        return DBManager.conn

class Control1:
    db_manager = DBManager('control1.db')
    conn = db_manager.load()

class Control2:
    db_manager = DBManager('control2.db')
    conn = db_manager.load()

if __name__ == "__main__":
    control1 = Control1()
    control2 = Control2()
Running this would output the below. Note that the class variable conn refers to a different memory address after each control is instantiated, showing that it's updated.
<sqlite3.Connection object at 0x10dc1e1f0> used for control1.db
<sqlite3.Connection object at 0x10dc1e2d0> used for control2.db

Related

With Peewee, how to check if an SQLite file has been created vs filled without creating a table. If I import, it seems the table is created?

First, I'd like to check if the file exists, and I've used os.path for this:
import os
from os.path import exists

def check_db_exist():
    try:
        file_exists = exists('games.db')
        if file_exists:
            file_size = os.path.getsize('games.db')
            if file_size > 3000:
                return True, file_size
            else:
                return False, 'too small'
        else:
            return False, 'does not exist'
    except:
        return False, 'error'
I have a separate file for my models and for creating the database. My concern is that if I import the class for the database, it instantiates the sql file.
Moreover, pywebview wipes all variables when displaying my html.
If I were to run this check as I load my page, I couldn't access the variable saying whether the sqlite file exists.
from peewee import SqliteDatabase, Model, CharField, IntegerField

db = SqliteDatabase('games.db')

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
    longpath = CharField()
    i_d = IntegerField()

    class Meta:
        database = db
This creates the table, so checking if the file exists is useless.
Then, if I uncomment the first line in this file, the database gets created; otherwise all of my db variables are unusable. I must be missing a really obvious function to solve my problems.
# db = SqliteDatabase('games.db')

def add_game(game, exe, path, longpath, i_d):
    try:
        Game.create(game=game, exe=exe, path=path, longpath=longpath, i_d=i_d)
    except:
        pass

def loop_insert(lib):
    db.connect()
    for i in lib[0]:
        add_game(i.name, i.exe, i.path, i.longpath, i.id)
    db.close()

def initial_retrieve():
    db.connect()
    vals = ''
    for games in Game.select():
        val = js.Import.javascript(str(games.game), str(games.exe), str(games.path), games.i_d)
        vals = vals + val
    storage = vals
    db.close()
    return storage
Should I just import the file at a different point, whenever I feel comfortable? I haven't seen that done often, so I didn't want to be improper in formatting.
Edit: Maybe more like this?
def db():
    db = SqliteDatabase('games.db')
    return db

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
file 2:
from sqlmodel import db, Game

def add_game(game, exe, path, longpath, i_d):
    try:
        Game.create(game=game, exe=exe, path=path, longpath=longpath, i_d=i_d)
    except:
        pass

def loop_insert(lib):
    db.connect()
    for i in lib[0]:
        add_game(i.name, i.exe, i.path, i.longpath, i.id)
    db.close()
I am not sure if this answers your question, since it seems to involve multiple processes and/or processors, but in order to check for the existence of a database file, I have used the following:
import os

DATABASE = 'dbfile.db'

if os.path.isfile(DATABASE) is False:
    # Create the database file here
    pass
else:
    # connect to database here
    db.connect()
I would suggest using sqlite's user_version pragma:
db = SqliteDatabase('/path/to/db.db')

version = db.pragma('user_version')
if not version:  # Assume does not exist/newly-created.
    # do whatever.
    db.pragma('user_version', 1)  # Set user version.
from reddit:
me: To the original challenge, there's a reason I want to know whether the file exists. Maybe it's flawed at the premises; I'll explain and you can fill in there.
This script will run on multiple machines I don't have access to. At the entry point of a first-time use case, I will be porting data from a remote location; if it's the first time the script runs on that machine, it goes down a different workflow than a repeated opening.
Akin to grabbing all pc programs vs appending and reading from the last session. How would you suggest quickly understanding if that process has started and finished from a previous session?
Checking if the sqlite file had been created made the most intuitive sense, and then adjusting by byte size. lmk
them:
This is a good question!
How would you suggest quickly understanding if that process has started and finished from a previous session?
If the first thing your program does on a new system is download some kind of fixture data, then the way I would approach it is to load the DB file as normal, have Peewee ensure the tables exist, and then do a no-clause SELECT on one of them (either through the model, or directly on the database through the connection if you want.) If it's empty (you get no results) then you know you're on a fresh system and you need to make the remote call. If you get results (you don't need to know what they are) then you know you're not on a fresh system.
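In code, that check might look something like this (a sketch assuming the Game model and SqliteDatabase('games.db') from the question; is_fresh_install is an illustrative name):

from peewee import SqliteDatabase

db = SqliteDatabase('games.db')

def is_fresh_install():
    db.connect(reuse_if_open=True)
    db.create_tables([Game], safe=True)  # no-op if the table already exists
    fresh = not Game.select().exists()   # no-clause SELECT: any rows at all?
    db.close()
    return fresh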

ZODB broken instance

I am trying to persist an object reference using only ZODB in a FileStorage database.
I made a test to analyze how it behaves, but the object appears to be broken when it is loaded.
The test consists of:
creating an object in one script and writing it to the database;
reading that object from the same database in another script and using it there.
(The original post included screenshots of the console output of zodb1.py and zodb2.py.)
zodb1.py
import ZODB
from ZODB.FileStorage import FileStorage
import persistent
import transaction

storage = FileStorage('ODB.fs')
db = ZODB.DB(storage)
connection = db.open()
ODB = connection.root()
print(ODB)

class Instrument(persistent.Persistent):
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def __str__(self):
        return f'Instrument - {self.name}, ID: {self.address}'

camera = Instrument(name='Logitech', address='CAM0')
ODB['camera'] = camera
ODB._p_changed = True
transaction.commit()
print(ODB)

ob = ODB['camera']
print(ob)
print(dir(ob))
zodb2.py
import ZODB, ZODB.FileStorage
import persistent
import transaction

connection = ZODB.connection('ODB.fs')
ODB = connection.root()
print(ODB)

ob = ODB['camera']
print(ob)
print(dir(ob))
Am I missing something important? I've read ZODB's documentation and I see no other configuration step or another way to approach this.
Thank you in advance.
I think that the problem you see is because zodb2.py has no knowledge of the Instrument class defined in zodb1.py.
I guess that if you moved your class to a separate module and imported it in both zodb1 and zodb2, you would not see a broken object.
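For instance (a sketch; the module name instruments.py is illustrative):

# instruments.py -- shared by both scripts
import persistent

class Instrument(persistent.Persistent):
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def __str__(self):
        return f'Instrument - {self.name}, ID: {self.address}'

Both zodb1.py and zodb2.py would then start with "from instruments import Instrument", so the unpickler in zodb2.py can resolve the class instead of returning a broken object.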

Reusing a single session for routing connections with SQLAlchemy between master & read replicas

We needed to route our database requests to either a writer master database or a set of read replicas.
We found a blog post by Mike Bayer suggesting how to do so using SQLAlchemy. We replicated the solution, but it did not work with our existing tests for various reasons.
We went with the approach below instead; it reuses one session rather than creating new ones that stack up:
from flask_sqlalchemy import SignallingSession, get_state
from werkzeug.utils import cached_property

class ExplicitRoutingSession(SignallingSession):
    _name = None

    def get_bind(self, mapper=None, clause=None):
        # If reader and writer binds are not configured,
        # connect using the default SQLALCHEMY_DATABASE_URI
        if not self.binds_setup:
            return super().get_bind(mapper, clause)
        return self.load_balance(mapper, clause)

    def load_balance(self, mapper=None, clause=None):
        # Use the explicit name if present
        if self._name and not self._flushing:
            bind = self._name
            self._name = None
            self.app.logger.debug(f"Connecting -> {bind}")
            return get_state(self.app).db.get_engine(self.app, bind=bind)
        # Everything else goes to the writer engine
        else:
            self.app.logger.debug("Connecting -> writer")
            return get_state(self.app).db.get_engine(self.app, bind='writer')

    def using_bind(self, name):
        self._name = name
        return self

    @cached_property
    def binds_setup(self):
        binds = self.app.config['SQLALCHEMY_BINDS'] or {}
        return all([k in binds for k in ['reader', 'writer']])
So far it has worked well for us. We assume we might lose some functionality, such as db savepoints, by not having stacked sessions... but we'd like to know whether there are stability risks or other unforeseen issues, beyond the lost features, with such an approach?
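For reference, calling code would pick the reader roughly like this (a sketch; User is an illustrative model, not from the post):

# Route the next query to the read replicas; _name is cleared after one use
users = db.session.using_bind('reader').query(User).all()

# Anything not explicitly routed falls through to the writer
db.session.add(User(name='example'))
db.session.commit()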
Notes:
We are also using flask-sqlalchemy.
This is from an open source notification platform and you can browse the code/branch yourself.

How can I optionally populate instance of an object from pickle file?

Full code here: link
Relevant code is:
class Session():
    ...
    def load(self, create_date=None):
        if create_date:
            self = pickle.load(open(file_path(create_date), 'rb'))
        else:
            self = pickle.load(open(file_path(), 'rb'))
I have defined a simple class "Session" (it's a container for everything else my app will do).
I have a method for creating a fresh session, and saving it to a pickle.
I intend to have one Session = one Day.
So if the user re-opens the app on a particular day, it checks for an existing session and reloads it from the pickle.
My current code throws an error:
AttributeError: 'Session' object has no attribute 'create_date'
I believe the line that isn't working properly is 44:
self = pickle.load(open(file_path(), 'rb'))
But I have a working variation on Line 12. (Not ideal, outside the class)
How can I load this existing pickle data and populate it into the "active_session" instance?
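For what it's worth, assigning to self inside load only rebinds the local name; it doesn't touch the object the caller is holding, which is why active_session never gets the pickled attributes. A common pattern instead (a sketch, reusing the file_path helper from the original code) is a classmethod that returns the loaded instance:

import pickle

class Session:
    @classmethod
    def load(cls, create_date=None):
        path = file_path(create_date) if create_date else file_path()
        with open(path, 'rb') as f:
            return pickle.load(f)  # returns the unpickled Session object

# usage: rebind the caller's reference instead of mutating self
active_session = Session.load()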

Understanding Class inheritance to DRY up some code

I am using the cloudant python library to connect to my cloudant account.
Here is the code I have so far:
import cloudant

class WorkflowsCloudant(cloudant.Account):
    def __init__(self):
        super(WorkflowsCloudant, self).__init__(settings.COUCH_DB_ACCOUNT_NAME,
                                                auth=(settings.COUCH_PUBLIC_KEY,
                                                      settings.COUCH_PRIVATE_KEY))

@blueprint.route('/<workflow_id>')
def get_single_workflow(account_id, workflow_id):
    account = WorkflowsCloudant()
    db = account.database(settings.COUCH_DB_NAME)
    doc = db.document(workflow_id)
    resp = doc.get().json()
    if resp['account_id'] != account_id:
        return error_helpers.forbidden('Invalid Account')
    return jsonify(resp)
This Flask controller will have CRUD operations inside of it, but with the current implementation, I will have to set the account and db variables in each method before performing operations on the document I want to view/manipulate. How can I clean up (or DRY up) my code so that I only have to call to my main WorkflowsCloudant class?
I don't know cloudant, so I may be totally off base, but I believe this answers your question:
Delete the account, db, and doc lines from get_single_workflow.
Add the following lines to __init__ (inside __init__ the account is self, so workflow_id has to become a constructor parameter):

db = self.database(settings.COUCH_DB_NAME)
self.doc = db.document(workflow_id)

Change the resp line in get_single_workflow to:

resp = WorkflowsCloudant(workflow_id).doc.get().json()
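Put together, the suggestion would look roughly like this (a sketch; the cloudant calls are taken from the question, and settings, blueprint, and error_helpers are assumed from the surrounding app):

class WorkflowsCloudant(cloudant.Account):
    def __init__(self, workflow_id):
        super(WorkflowsCloudant, self).__init__(settings.COUCH_DB_ACCOUNT_NAME,
                                                auth=(settings.COUCH_PUBLIC_KEY,
                                                      settings.COUCH_PRIVATE_KEY))
        db = self.database(settings.COUCH_DB_NAME)
        self.doc = db.document(workflow_id)

@blueprint.route('/<workflow_id>')
def get_single_workflow(account_id, workflow_id):
    resp = WorkflowsCloudant(workflow_id).doc.get().json()
    if resp['account_id'] != account_id:
        return error_helpers.forbidden('Invalid Account')
    return jsonify(resp)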
