First I'd like to check whether the file exists, and I've used os.path for that:
import os
from os.path import exists

def check_db_exist():
    try:
        file_exists = exists('games.db')
        if file_exists:
            file_size = os.path.getsize('games.db')
            if file_size > 3000:
                return True, file_size
            else:
                return False, 'too small'
        else:
            return False, 'does not exist'
    except OSError:
        return False, 'error'
I keep my models and the database creation in a separate file. My concern is that merely importing the class for the database instantiates the SQL file.
Moreover, pywebview wipes all my variables when it displays my HTML.
So if I run this check as I load my page, I can no longer access the True/False variable saying whether the SQLite file exists.
from peewee import SqliteDatabase, Model, CharField, IntegerField

db = SqliteDatabase('games.db')

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
    longpath = CharField()
    i_d = IntegerField()

    class Meta:
        database = db
This creates the table, so checking whether the file exists is useless.
And if I uncomment the first line in the file below, the database gets created on import; otherwise all of my db. calls are unusable. I must be missing a really obvious function that solves this.
# db = SqliteDatabase('games.db')

def add_game(game, exe, path, longpath, i_d):
    try:
        Game.create(game=game, exe=exe, path=path, longpath=longpath, i_d=i_d)
    except Exception:
        pass  # ignore failed inserts

def loop_insert(lib):
    db.connect()
    for i in lib[0]:
        add_game(i.name, i.exe, i.path, i.longpath, i.id)
    db.close()
def initial_retrieve():
    db.connect()
    vals = ''
    for games in Game.select():
        val = js.Import.javascript(str(games.game), str(games.exe), str(games.path), games.i_d)
        vals = vals + val
    storage = vals
    db.close()
    return storage
Should I just import the file at a different point, wherever it feels right? I haven't seen that done often, so I didn't want to be improper about the layout.
edit: Maybe more like this?
def db():
    db = SqliteDatabase('games.db')
    return db

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
file 2:
from sqlmodel import db, Game

def add_game(game, exe, path, longpath, i_d):
    try:
        Game.create(game=game, exe=exe, path=path, longpath=longpath, i_d=i_d)
    except Exception:
        pass  # ignore failed inserts

def loop_insert(lib):
    db.connect()
    for i in lib[0]:
        add_game(i.name, i.exe, i.path, i.longpath, i.id)
    db.close()
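For what it's worth, peewee also supports deferred initialization, which may be the "really obvious function" here: construct SqliteDatabase with None and importing the models module no longer touches the file; you call init() later, after running whatever existence checks you want. A minimal sketch of what file 1 could look like under that approach:

from peewee import SqliteDatabase, Model, CharField, IntegerField

# No filename yet, so importing this module does not create games.db.
db = SqliteDatabase(None)

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
    longpath = CharField()
    i_d = IntegerField()

    class Meta:
        database = db

def init_db(filename='games.db'):
    # Bind the deferred database to a real file, then create tables.
    db.init(filename)
    db.connect()
    db.create_tables([Game], safe=True)
    db.close()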
I am not sure whether this answers your question, since it seems to involve multiple processes and/or processors, but in order to check for the existence of a database file, I have used the following:
import os

DATABASE = 'dbfile.db'

if not os.path.isfile(DATABASE):
    # Create the database file here
    pass
else:
    # Connect to the existing database here
    db.connect()
I would suggest using SQLite's user_version pragma:
db = SqliteDatabase('/path/to/db.db')

version = db.pragma('user_version')
if not version:  # Assume does not exist / newly created.
    # do whatever.
    db.pragma('user_version', 1)  # Set user version.
from reddit:
me: To the original challenge: there's a reason I want to know whether the file exists. Maybe it's flawed at the premises; I'll explain and you can fill in from there.
This script will run on multiple machines I don't have access to. At the entry point of a first-time use case, I will be porting data from a remote location; if it's the first time the script runs on that machine, it goes down a different workflow than a repeated opening.
Akin to grabbing all PC programs versus appending to and reading from the last session. How would you suggest quickly determining whether that process has started and finished in a previous session?
Checking whether the SQLite file had been created made the most intuitive sense, then adjusting for byte size. lmk
them:
This is a good question!
"How would you suggest quickly understanding if that process has started and finished from a previous session."
If the first thing your program does on a new system is download some kind of fixture data, then the way I would approach it is to load the DB file as normal, have Peewee ensure the tables exist, and then do a no-clause SELECT on one of them (either through the model, or directly on the database through the connection if you want.) If it's empty (you get no results) then you know you're on a fresh system and you need to make the remote call. If you get results (you don't need to know what they are) then you know you're not on a fresh system.
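In peewee terms, that check might look something like this (a sketch, assuming the Game model and db object from the question above):

def is_first_run():
    # A fresh install has the tables but no rows in them.
    db.connect(reuse_if_open=True)
    db.create_tables([Game], safe=True)  # no-op when the table already exists
    fresh = not Game.select().exists()
    db.close()
    return fresh

if is_first_run():
    # First session on this machine: pull the data from the remote source.
    ...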
I am trying to persist an object reference using only ZODB, in a FileStorage database.
I made a test to analyze how it performs, but when the object is loaded it appears to be broken.
The test consists of:
create an object in one script and write it to the database;
in another script, read that object from the same database and use it there.
(zodb1.py and zodb2.py console output attached as images in the original post)
zodb1.py
import ZODB
from ZODB.FileStorage import FileStorage
import persistent
import transaction

storage = FileStorage('ODB.fs')
db = ZODB.DB(storage)
connection = db.open()
ODB = connection.root()
print(ODB)

class Instrument(persistent.Persistent):
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def __str__(self):
        return f'Instrument - {self.name}, ID: {self.address}'

camera = Instrument(name='Logitech', address='CAM0')
ODB['camera'] = camera
ODB._p_changed = True
transaction.commit()
print(ODB)

ob = ODB['camera']
print(ob)
print(dir(ob))
zodb2.py
import ZODB, ZODB.FileStorage
import persistent
import transaction

connection = ZODB.connection('ODB.fs')
ODB = connection.root()
print(ODB)

ob = ODB['camera']
print(ob)
print(dir(ob))
Am I missing something important? I've read ZODB's documentation and I see no other configuration step or another way to approach this.
Thank you in advance.
I think the problem you see is that zodb2.py has no knowledge of the Instrument class defined in zodb1.py.
I suspect that if you moved the class to a separate module and imported it in both zodb1 and zodb2, you would not see a broken object.
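Something like this layout should do it (a sketch; instruments.py is a name I'm making up):

# instruments.py -- shared by both scripts
import persistent

class Instrument(persistent.Persistent):
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def __str__(self):
        return f'Instrument - {self.name}, ID: {self.address}'

Then zodb1.py and zodb2.py would both do from instruments import Instrument, and the unpickler can resolve the class by its module path (instruments.Instrument) when the object is loaded, instead of looking for it in __main__.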
We needed to route our database requests either to a writer master database or to a set of read replicas.
We found a blog post by Mike Bayer suggesting how to do this with SQLAlchemy. We replicated the solution, but it did not play well with our existing tests for various reasons.
We went with the approach below instead, which reuses one session rather than creating new ones that stack up:
from flask_sqlalchemy import SignallingSession, get_state
from werkzeug.utils import cached_property

class ExplicitRoutingSession(SignallingSession):
    _name = None

    def get_bind(self, mapper=None, clause=None):
        # If reader and writer binds are not configured,
        # connect using the default SQLALCHEMY_DATABASE_URI
        if not self.binds_setup:
            return super().get_bind(mapper, clause)
        return self.load_balance(mapper, clause)

    def load_balance(self, mapper=None, clause=None):
        # Use the explicit name if present
        if self._name and not self._flushing:
            bind = self._name
            self._name = None
            self.app.logger.debug(f"Connecting -> {bind}")
            return get_state(self.app).db.get_engine(self.app, bind=bind)
        # Everything else goes to the writer engine
        else:
            self.app.logger.debug("Connecting -> writer")
            return get_state(self.app).db.get_engine(self.app, bind='writer')

    def using_bind(self, name):
        self._name = name
        return self

    @cached_property
    def binds_setup(self):
        binds = self.app.config['SQLALCHEMY_BINDS'] or {}
        return all(k in binds for k in ['reader', 'writer'])
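For context, a call site looks roughly like this (the Notification model name is just an example, and this assumes the custom session class has been wired into flask-sqlalchemy, e.g. by overriding SQLAlchemy.create_session):

# Reads that can tolerate replica lag opt in explicitly;
# everything else, including flushes, goes to the writer.
notifications = db.session().using_bind('reader').query(Notification).all()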
So far it works well for us. We assume we might lose some functionality, such as db savepoints, by not having stacked sessions, but we'd like to know whether this approach carries stability risks or other unforeseen problems beyond the lost features.
Notes:
We are also using flask-sqlalchemy.
This is from an open source notification platform and you can browse the code/branch yourself.
Full code here: link
Relevant code is:
class Session():
    ...
    def load(self, create_date=None):
        if create_date:
            self = pickle.load(open(file_path(create_date), 'rb'))
        else:
            self = pickle.load(open(file_path(), 'rb'))
I have defined a simple class, Session (it's a container for everything else my app will do).
I have a method for creating a fresh session and saving it to a pickle.
I intend to have one Session = one day, so if the user re-opens the app on a particular day, it checks for an existing session and reloads it from the pickle.
My current code throws an error:
AttributeError: 'Session' object has no attribute 'create_date'
I believe the line that isn't working properly is line 44:
self = pickle.load(open(file_path(), 'rb'))
But I have a working variation on line 12 (not ideal, since it sits outside the class).
How can I load this existing pickle data and populate it into the "active_session" instance?
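One way to do it: assigning to self inside a method only rebinds a local name; it does not change the instance the caller is holding. A classmethod that returns the unpickled object sidesteps that entirely (a sketch, reusing your file_path helper):

import pickle

class Session:
    @classmethod
    def load(cls, create_date=None):
        path = file_path(create_date) if create_date else file_path()
        with open(path, 'rb') as f:
            return pickle.load(f)  # the Session instance saved earlier

active_session = Session.load()

The caller gets the loaded instance back as the return value, rather than the method trying to mutate self in place.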
I am using the cloudant python library to connect to my cloudant account.
Here is the code I have so far:
import cloudant

class WorkflowsCloudant(cloudant.Account):
    def __init__(self):
        super(WorkflowsCloudant, self).__init__(settings.COUCH_DB_ACCOUNT_NAME,
                                                auth=(settings.COUCH_PUBLIC_KEY,
                                                      settings.COUCH_PRIVATE_KEY))

@blueprint.route('/<workflow_id>')
def get_single_workflow(account_id, workflow_id):
    account = WorkflowsCloudant()
    db = account.database(settings.COUCH_DB_NAME)
    doc = db.document(workflow_id)
    resp = doc.get().json()
    if resp['account_id'] != account_id:
        return error_helpers.forbidden('Invalid Account')
    return jsonify(resp)
This Flask controller will have CRUD operations inside it, but with the current implementation I have to set the account and db variables in each method before performing operations on the document I want to view or manipulate. How can I clean up (or DRY up) my code so that I only have to call my main WorkflowsCloudant class?
I don't know cloudant, so I may be totally off base, but I believe this answers your question:
Delete the account, db, and doc lines from get_single_workflow.
Add the following lines to __init__ (which would then need to take workflow_id as a parameter):

    db = self.database(settings.COUCH_DB_NAME)
    self.doc = db.document(workflow_id)

Change the resp line in get_single_workflow to:

    resp = WorkflowsCloudant(workflow_id).doc.get().json()
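An alternative that keeps the constructor document-agnostic is to give WorkflowsCloudant a small helper and have each handler call that (a sketch; get_document is a name I've made up):

import cloudant

class WorkflowsCloudant(cloudant.Account):
    def __init__(self):
        super(WorkflowsCloudant, self).__init__(
            settings.COUCH_DB_ACCOUNT_NAME,
            auth=(settings.COUCH_PUBLIC_KEY, settings.COUCH_PRIVATE_KEY))
        # One database handle shared by all helpers.
        self.db = self.database(settings.COUCH_DB_NAME)

    def get_document(self, doc_id):
        return self.db.document(doc_id).get().json()

@blueprint.route('/<workflow_id>')
def get_single_workflow(account_id, workflow_id):
    resp = WorkflowsCloudant().get_document(workflow_id)
    if resp['account_id'] != account_id:
        return error_helpers.forbidden('Invalid Account')
    return jsonify(resp)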