I want to initialize two databases with totally different models in my database.py file.
database.py
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine1 = create_engine(uri1)
engine2 = create_engine(uri2)
session1 = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine1))
session2 = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine2))

Base = declarative_base(name='Base')
Base.query = session1.query_property()

LogBase = declarative_base(name='LogBase')
LogBase.query = session2.query_property()
and the two model structures:
models.py
class MyModel(Base):
    pass
models2.py
class MyOtherModel(LogBase):
    pass
Back in database.py, I want to create/initialize the databases after importing the models:
# this does init the database correctly
def init_db1():
    import models
    Base.metadata.create_all(bind=engine1)

# this init function does not work properly
def init_db2():
    import models2
    LogBase.metadata.create_all(bind=engine2)
If I change the import in the second init function, it does work:
def init_db2():
    from models2 import *
    LogBase.metadata.create_all(bind=engine2)
but there is a warning:
database.py:87: SyntaxWarning: import * only allowed at module level
Everything works properly and the databases are initialized, but the warning tells me that something is wrong with it.
If someone can explain to me why the first attempt isn't correct, I would be grateful. Thanks.
You are indeed discouraged from using the from ... import * syntax inside functions, because that makes it impossible for Python to determine what the local names are for that function, breaking scoping rules. In order for Python to make things work anyway, certain optimizations have to be disabled and name lookup is a lot slower as a result.
I cannot reproduce your problem otherwise. Importing just models2 makes sure that everything defined in that module is executed so that the LogBase class has a registry of all declarations. There is no reason for that path to fail while the models.py declarations for Base do work.
For the purposes of SQLAlchemy and declarative table metadata, there is no difference between the import models2 and the from models2 import * syntax; only their effect on the local namespace differs. In both cases, the models2 top-level code is run and the classes are defined, but in the latter case the top-level names from the module are also added to the local namespace as direct references, as opposed to just a reference to the module object being added.
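You can convince yourself of this by inspecting the metadata after the plain import; a minimal sketch (the exact table names depend on your model definitions):

def init_db2():
    import models2  # executes models2's top-level code exactly once
    # every model defined in models2 is now registered with the metadata:
    print(sorted(LogBase.metadata.tables))  # e.g. ['myothermodel']
    LogBase.metadata.create_all(bind=engine2)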
Related
I've implemented a BaseScanner inside common/base/scanner.py that is subclassed by Scanner inside stash/scanner.py, jira/scanner.py, etc.
Now, the problem is that BaseScanner needs to access the ORM models in e.g. stash/db/models.py, depending on where it's subclassed (in stash, jira, etc.):
# common package
common/base/scanner.py
# stash package
stash/scanner.py
stash/db/models.py
# jira package
jira/scanner.py
jira/db/models.py
...
Is it an anti-pattern to provide a module as an argument to a class when instantiating it, like I do here in main.py?
import stash, jira
...
if args.command == 'stash':
    import stash.db.models as models
    scanner = stash.Scanner(args, models, ...)
    scanner.run()
and then to access the different ORM models from inside BaseScanner, like self.models.Scan, self.models.Match, etc.?
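Inside BaseScanner, that access then looks roughly like this (a simplified sketch; attribute names assumed):

# common/base/scanner.py (sketch)
class BaseScanner:
    def __init__(self, args, models, **kwargs):
        self.args = args
        self.models = models  # the injected module, e.g. stash.db.models

    def scan(self):
        # ORM classes are always reached through the injected module
        scan = self.models.Scan()
        match = self.models.Match(scan=scan)
        ...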
If it's an anti-pattern, what could an alternative solution be?
I would like to use an object-oriented programming style when coding with Peewee. Unfortunately, the docs only give hints using what are essentially global variables to handle the DB connection. When I try to take advantage of Model and Controller objects (the View isn't important at this moment), I get an error, probably because the two cross-reference each other:
ImportError: cannot import name 'Application' from 'Application'
(C:\ [... src ...] )
Peewee requires you to put the database handle in an abstract base class definition, like this:
class BaseModel(Model):
    class Meta:
        database = SqliteDatabase('../res/db.db', pragmas={'foreign_keys': 1})
Well, the problem is, I cannot keep the DB handle like that. I'm preparing my app to be a standalone Windows application with a service module. For this reason, I need to store the absolute path of the db file in a config file. Consequently, before the model starts loading the database, I need to load the configuration files from the controller.
What I did was push the DB handle into a static field in the controller:
from Application import Application
from peewee import *

class BaseModel(Model):
    class Meta:
        database = Application.database
As you see, the DB handle is taken from the abstract Application controller. Application is a base controller from which GuiApp and ServiceApp derive. Both descendants use the same DB, so keeping the handle as a static field looks convenient to me.
Now, please do take a look at my Application class:
import logging.handlers
from peewee import SqliteDatabase
import datetime
import threading
from Windows import *

class Application:
    database = SqliteDatabase(None)

    def __init__(self, appname):
        # (...)
        Application.database.init(
            config_app_src + 'db.db',
            pragmas={'foreign_keys': 1})
        Application.database.connect()
        # !!!
        from Entities import RouterSettings, BalanceEntry
        # !!!
        Application.database.create_tables([RouterSettings, BalanceEntry],
                                           safe=True)
The problem is, when I put the Peewee entity import right before the place I start using the entities, I mean, inside the __init__ method, I somehow lose access to them from other parts of my app. It forces me to put these import statements in every controller method in order to get proper access to the entity models.
On the other hand, when I put the entity import at the top of the controller module, I get the cross-referencing error I quoted above.
To sum up, I'm looking for an OOP way to manage an app with Peewee models. Do you know any way to do that? Or do I have to use a global database variable in the app init?
Thanks to @coleifer, I decided to make another class for database handling:
from peewee import SqliteDatabase

class DbHandler:
    database = SqliteDatabase(None)

    @staticmethod
    def start(dbSrc):
        # imported here to avoid a circular import with the models module
        from Entities import RouterSettings, BalanceEntry
        DbHandler.database.init(
            dbSrc + '\\res\\SIMail.db',
            pragmas={'foreign_keys': 1})
        DbHandler.database.connect()
        DbHandler.database.create_tables([RouterSettings, BalanceEntry],
                                         safe=True)
Well, eventually it looks quite similar to global variables, but I think this solution fits my needs. Most importantly, I managed to get rid of the cross-referencing.
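For reference, the models then point at the handler's deferred database, and the controller calls start() once the config is known; a rough sketch (module/file names assumed):

# Entities.py (sketch)
from peewee import Model
from DbHandler import DbHandler

class BaseModel(Model):
    class Meta:
        database = DbHandler.database  # the deferred SqliteDatabase(None)

# later, in the controller, once the configuration has been loaded:
# DbHandler.start(app_src)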
I'm rebuilding a former Django REST API project as a GraphQL one. I now have queries & mutations working properly.
Most of my learning came from looking at existing Graphene-Django & Graphene-Python code samples. There seem to be a lot of inconsistencies amongst them.
In some it was suggested that the GraphQL queries should be placed in schema.py whereas the mutations should be placed in mutation.py.
What I'm thinking makes more sense is to instead have these two files hold their respective code:
- queries.py
- mutations.py
I'm relatively new to Django & Python though so want to be sure that I'm not violating any conventions.
Interested in your thoughts!
Robert
There aren't any conventions yet, since GraphQL is a fairly new alternative to REST. Thus, "conventions" are being created as we speak.
However, since schema is a more generally-defined term, you may rename it to queries.
This is my project structure:
django_proj/
    manage.py
    requirements.txt
    my_app/
        __init__.py
        migrations/
        admin.py
        schema/
            __init__.py
            schema.py     # holds the class Query. The GraphQL endpoints, if you like
            types.py      # holds the DjangoObjectType classes
            inputs.py     # holds the graphene.InputObjectType classes (for defining input to a query or mutation)
            mutations.py  # holds the mutations (what else?!)
So the schema.py (__init__) could be renamed to queries.py if you like. There isn't much difference between these two words.
I liked nik_m's answer so much I wrote some code to generate the template structure from inside the Django shell. I want to enforce some consistency as I create these files over and over again. I'm putting the code here in case someone else finds it useful.
import os
from django.conf import settings

def schema_setup(app_name):
    """
    Sets up a default schema file structure.
    """
    SCHEMA_DIRECTORY_NAME = 'schema'
    app_directory = os.path.join(settings.PROJECT_DIR, app_name)
    if not os.path.exists(app_directory):
        raise Exception("Can't find app directory {}".format(app_directory))
    schema_directory = os.path.join(app_directory, SCHEMA_DIRECTORY_NAME)
    if os.path.exists(schema_directory):
        raise Exception("Schema directory {} already exists.".format(schema_directory))
    os.makedirs(schema_directory)
    mutation_class = "{}Mutation".format(app_name.title())
    query_class = "{}Query".format(app_name.title())
    init_txt = "from .mutations import {}\nfrom .queries import {}\n".format(mutation_class, query_class)
    fields_txt = "# Insert common fields here.\nimport graphene\n"
    inputs_txt = "# Insert graphene.InputObjectType classes.\nimport graphene\n"
    mutations_txt = "# Insert graphql mutations here.\nimport graphene\n\nclass {}(graphene.AbstractType):\n    pass\n".format(mutation_class)
    queries_txt = "# Insert graphql queries here.\nimport graphene\n\nclass {}(graphene.AbstractType):\n    pass\n".format(query_class)
    types_txt = "# Insert DjangoObjectType classes here.\nimport graphene\nfrom graphene_django.types import DjangoObjectType\n"
    for fname, file_text in [("__init__.py", init_txt),
                             ("fields.py", fields_txt),
                             ("inputs.py", inputs_txt),
                             ("mutations.py", mutations_txt),
                             ("queries.py", queries_txt),
                             ("types.py", types_txt),
                             ]:
        with open(os.path.join(schema_directory, fname), "w") as output_file:
            output_file.write(file_text)
        print("Created {}".format(fname))
From inside the Django shell, run it like schema_setup("my_app")
Note:
- This assumes you set PROJECT_DIR in your settings like PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))
- In your top level schema, import like from my_app.schema import MyAppQuery, MyAppMutation
- I've gone back and forth on "query" vs. "queries" and "mutation" vs. "mutations" -- as of this moment, the graphene documentation isn't consistent
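For completeness, the top-level schema can then combine the per-app classes generated above; a minimal sketch (assuming a single app named my_app and the graphene.AbstractType style used in the generated files):

# project-level schema.py (sketch)
import graphene
from my_app.schema import MyAppQuery, MyAppMutation

class Query(MyAppQuery, graphene.ObjectType):
    pass

class Mutation(MyAppMutation, graphene.ObjectType):
    pass

schema = graphene.Schema(query=Query, mutation=Mutation)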
I have a library via which I dynamically load classes. It's exposed as
mylib.registry.<className>
registry is an instance of a class that contains a dictionary mapping class names (strings) to module names, and a __getattr__ hook that dynamically loads a class when it is requested. A user can thus refer to any class without having to deal with the module locations (there is a global namespace for class names, but not module names).
For example, the entries:
{'X': 'mylib.sublib.x',
 'Y': 'mylib.sublib.y'}
could then be used like:
import mylib
x = mylib.registry.X()
y = mylib.registry.Y()
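A simplified sketch of how such a registry might be implemented (module layout assumed):

# registry implementation (sketch)
import importlib

class Registry:
    # class-name -> module-name mapping, as above
    _modules = {
        'X': 'mylib.sublib.x',
        'Y': 'mylib.sublib.y',
    }

    def __getattr__(self, name):
        # only called for attributes not found through normal lookup
        try:
            module_name = self._modules[name]
        except KeyError:
            raise AttributeError(name)
        module = importlib.import_module(module_name)
        return getattr(module, name)

# mylib/__init__.py then exposes a single instance:
registry = Registry()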
That's the background. On top of this, these objects are sqlalchemy ORM classes which have relationships to one another. Let's assume here that X has a one-to-many with Y.
Assume thus this definition.
class X(Base):
    y_id = Column(Integer, ForeignKey('y.id'))
    y = relationship('Y')

class Y(Base):
    xs = relationship('X')
These are in separate files, and each imports the top-level registry.
So here's the issue -- how do I actually get this to resolve without loading every class up front?
The example above doesn't work, because if I import only X via the registry, then Y isn't in the sqlalchemy class registry, and thus the relationship breaks.
If I import the registry itself and then refer to the classes directly, then the modules don't load because of interdependencies.
I tried using a lambda to defer loading, but this too fails with an error about a missing 'strategy'.
What approaches have others used here? If I'm missing something obvious, let me know. It's been a long day.
Thanks.
You should never use relationships on two sides that don't know about each other. Normally, this is avoided by using backref, but in your case this creates problems, because you want each side to be aware of its relationship by itself. The trick here is the back_populates keyword, offered by relationship:
y = relationship("Y", back_populates="xs")
and
xs = relationship("X", back_populates="y")
Applying these will make them aware of each other. However, it will not solve your importing problem. Normally, you could now just from x import X on the Y side. But the other way around won't work because it will create a circular import.
The trick is simple: Put the import after the class you want to import. Because the strings in relationship are evaluated lazily, you can import the class after the relationship was defined. So for X do this:
class X(Base):
    __tablename__ = 'x'

    id = Column(Integer, primary_key=True)
    y_id = Column(ForeignKey('y.id'))

    y = relationship("Y", back_populates="xs")

from y import Y
And the other way around for the Y as well (this is not required, but creates more symmetry). I'd also put a comment there to explain this, to avoid someone moving the import to the top and breaking the program.
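For symmetry, the Y side would follow the same pattern (a sketch mirroring the above):

class Y(Base):
    __tablename__ = 'y'

    id = Column(Integer, primary_key=True)

    xs = relationship("X", back_populates="y")

# imported after the class definition to avoid the circular import
from x import X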
Is it possible to include libraries/packages in only one location?
class Sample(db.Model):
    randomText = db.StringProperty(multiline=True)
--
from google.appengine.ext import db
from project.models import Sample

class SampleHandler():
    def get(self):
        xamp = Sample.all()
Since the handler already imports db from the google.appengine.ext package and then imports the model, I'd assume you don't have to import it again in the model itself. However, it looks like I have to anyway.
Anyone care to explain?
You need to import modules where they are used.
If your models module uses the google.appengine.ext.db module, you need to import it there, not in your handler module.
Importing things creates a reference to that 'thing' in your module namespace, so that the code there can find it when using it. db is the local name by which you get to use the object defined in google.appengine.ext.
If your handler uses the same object, it needs to import that still. If, by importing models, all names used by models suddenly were available in your handler module too, you'd end up with name conflicts and hard-to-debug errors all over the place.
Vice versa, if importing google.appengine.ext.db only in your handler module and not in your models module were to work, you'd need to import all the dependencies of a given module together with the module itself. This quickly becomes unworkable, as you'd need to document all the things your models module requires just to be able to use it.
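Concretely, each module imports exactly what it uses itself; a minimal sketch of the two files from the question (file paths assumed):

# project/models.py
from google.appengine.ext import db  # db is used directly below, so it is imported here

class Sample(db.Model):
    randomText = db.StringProperty(multiline=True)

# handler module
from project.models import Sample  # the handler only uses Sample, not db

class SampleHandler():
    def get(self):
        xamp = Sample.all()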