Graphene-Django Filenaming Conventions - python

I'm rebuilding a former Django REST API project as a GraphQL one. I now have queries & mutations working properly.
Most of my learning came from looking at existing Graphene-Django & Graphene-Python code samples. There seem to be a lot of inconsistencies amongst them.
In some it was suggested that the GraphQL queries should be placed in schema.py whereas the mutations should be placed in mutation.py.
What I'm thinking makes more sense is to instead have these two files hold their respective code:
- queries.py
- mutations.py
I'm relatively new to Django & Python though so want to be sure that I'm not violating any conventions.
Interested in your thoughts!
Robert

There aren't any strong conventions yet, since GraphQL is a fairly new alternative to REST; the "conventions" are being created as we speak.
However, since "schema" is a generically defined term, you can safely rename that file to queries.py.
This is my project structure:
django_proj/
    manage.py
    requirements.txt
    my_app/
        __init__.py
        migrations/
        admin.py
        schema/
            __init__.py
            schema.py     # holds the Query class -- the GraphQL endpoints, if you like
            types.py      # holds the DjangoObjectType classes
            inputs.py     # holds the graphene.InputObjectType classes (for defining input to a query or mutation)
            mutations.py  # holds the mutations (what else?!)
So schema.py (the module imported in __init__.py) could be renamed to queries.py if you like. There isn't much difference between the two names.

I liked nik_m's answer so much I wrote some code to generate the template structure from inside the Django shell. I want to enforce some consistency as I create these files over and over again. I'm putting the code here in case someone else finds it useful.
import os

from django.conf import settings


def schema_setup(app_name):
    """
    Sets up a default schema file structure.
    """
    SCHEMA_DIRECTORY_NAME = 'schema'
    app_directory = os.path.join(settings.PROJECT_DIR, app_name)
    if not os.path.exists(app_directory):
        raise Exception("Can't find app directory {}".format(app_directory))
    schema_directory = os.path.join(app_directory, SCHEMA_DIRECTORY_NAME)
    if os.path.exists(schema_directory):
        raise Exception("Schema directory {} already exists.".format(schema_directory))
    os.makedirs(schema_directory)
    mutation_class = "{}Mutation".format(app_name.title())
    query_class = "{}Query".format(app_name.title())
    init_txt = "from .mutations import {}\nfrom .queries import {}\n".format(mutation_class, query_class)
    fields_txt = "# Insert common fields here.\nimport graphene\n"
    inputs_txt = "# Insert graphene.InputObjectType classes.\nimport graphene\n"
    mutations_txt = "# Insert graphql mutations here.\nimport graphene\n\nclass {}(graphene.AbstractType):\n    pass\n".format(mutation_class)
    queries_txt = "# Insert graphql queries here.\nimport graphene\n\nclass {}(graphene.AbstractType):\n    pass\n".format(query_class)
    types_txt = "# Insert DjangoObjectType classes here.\nimport graphene\nfrom graphene_django.types import DjangoObjectType\n"
    for fname, file_text in [("__init__.py", init_txt),
                             ("fields.py", fields_txt),
                             ("inputs.py", inputs_txt),
                             ("mutations.py", mutations_txt),
                             ("queries.py", queries_txt),
                             ("types.py", types_txt),
                             ]:
        with open(os.path.join(schema_directory, fname), "w") as output_file:
            output_file.write(file_text)
        print("Created {}".format(fname))
From inside the Django shell, run it like schema_setup("my_app").
Note:
- This assumes you set PROJECT_DIR in your settings like PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))
- In your top-level schema, import like from my_app.schema import MyAppQuery, MyAppMutation
- I've gone back and forth on "query" vs. "queries" and "mutation" vs. "mutations"; as of this moment, the graphene documentation isn't consistent either
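For anyone who wants to try the generator outside a Django project, here is a trimmed sketch of it, assuming a plain base_dir argument in place of settings.PROJECT_DIR (the name schema_setup_standalone and the reduced file list are mine, not part of the original answer):

```python
import os
import tempfile


def schema_setup_standalone(base_dir, app_name):
    """Trimmed variant of schema_setup() that takes the parent
    directory explicitly instead of reading settings.PROJECT_DIR."""
    app_directory = os.path.join(base_dir, app_name)
    os.makedirs(app_directory, exist_ok=True)
    schema_directory = os.path.join(app_directory, "schema")
    os.makedirs(schema_directory)
    # Note the quirk of str.title(): "my_app".title() == "My_App",
    # so the generated classes are named My_AppQuery / My_AppMutation.
    mutation_class = "{}Mutation".format(app_name.title())
    query_class = "{}Query".format(app_name.title())
    files = {
        "__init__.py": "from .mutations import {}\nfrom .queries import {}\n".format(
            mutation_class, query_class),
        "queries.py": "import graphene\n\nclass {}(graphene.AbstractType):\n    pass\n".format(
            query_class),
        "mutations.py": "import graphene\n\nclass {}(graphene.AbstractType):\n    pass\n".format(
            mutation_class),
    }
    for fname, text in files.items():
        with open(os.path.join(schema_directory, fname), "w") as fh:
            fh.write(text)
    return schema_directory


out = schema_setup_standalone(tempfile.mkdtemp(), "my_app")
print(sorted(os.listdir(out)))  # ['__init__.py', 'mutations.py', 'queries.py']
```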

Related

Creating Custom SessionStore for a django project

If I want to subclass a module such as django.contrib.sessions.backends.db.SessionStore so that I can override a lot of its default behaviour, what is the standard way of doing this in Django?
Suppose I create a file called mydb.py:
from django.contrib.sessions.backends.db import SessionStore as DBSessionStore
class SessionStore(DBSessionStore):
...
If my project has the structure below, is it best practice to put mydb.py in a backends directory under the project's folder?
myproject
myproject/manage.py
myproject/myproject
myproject/myproject/wsgi.py
myproject/myproject/__init__.py
myproject/myproject/settings.py
myproject/myproject/urls.py
myproject/db.sqlite3
myproject/myapp
myproject/myapp/tests.py
myproject/myapp/admin.py
myproject/myapp/__init__.py
myproject/myapp/models.py
myproject/myapp/apps.py
myproject/myapp/migrations
myproject/myapp/migrations/__init__.py
myproject/myapp/views.py
myproject/myapp/urls.py
myproject/backends
myproject/backends/__init__.py
myproject/backends/mydb.py
myproject/__init__.py
Is settings.SESSION_ENGINE = 'backends.db' a reasonable standard to avoid namespace collisions? Is it a general rule of Django configuration that the current project is included in the Python search path?
You should refer to it by the module path of the file:
SESSION_ENGINE = 'python.path.mydb'
The Django docs are missing this little detail. In
https://docs.djangoproject.com/en/2.2/_modules/django/contrib/sessions/middleware/ at line 12 (as of Django 1.9), you can find this:
self.SessionStore = engine.SessionStore
So it literally takes the SessionStore class from the engine you provided in settings.SESSION_ENGINE.
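To see that resolution in action without Django, here is a minimal sketch that mimics what the middleware does: import the module named by SESSION_ENGINE and grab its SessionStore attribute. The module name session_backend and its contents are invented for illustration:

```python
import importlib
import os
import sys
import tempfile

# Write a stand-in "engine" module to disk, the way your mydb.py would exist.
backend_dir = tempfile.mkdtemp()
with open(os.path.join(backend_dir, "session_backend.py"), "w") as f:
    f.write(
        "class SessionStore:\n"
        "    def __init__(self, session_key=None):\n"
        "        self.session_key = session_key\n"
    )
sys.path.insert(0, backend_dir)

# This mirrors django/contrib/sessions/middleware.py:
#   engine = import_module(settings.SESSION_ENGINE)
#   self.SessionStore = engine.SessionStore
SESSION_ENGINE = "session_backend"  # your dotted path, e.g. 'backends.mydb'
engine = importlib.import_module(SESSION_ENGINE)
store = engine.SessionStore(session_key="abc123")
print(type(store).__name__)  # SessionStore
```

This is why any importable dotted path works as SESSION_ENGINE: Django only needs the module to expose a SessionStore class.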

Validate models on application startup

Question: Is there a way to fetch models from DB at startup and crash if some validation checks fail?
Explanation:
I've got the following model (simplified):
class Configuration(models.Model):
    related = models.ForeignKey('other_app.Model')
    service_classes = models.CharField(...)

    def validate(self):
        valid = ...  # checks service_classes against settings.SERVICE_CLASSES - see below
        if not valid:
            raise ValidationException("invalid")
service_classes contains a comma-separated list of service classes identifiers, i.e. service_class1,service_class2.
Service classes basically look like that:
# module service.py
class ServiceClass1(object):
    ID = 'service_class1'

class ServiceClass2(object):
    ID = 'service_class2'
Django settings file contains a list of paths to service classes that are enabled for this instance:
SERVICE_CLASSES = [
    'service.ServiceClass1',
    'service.ServiceClass2',
]
However, since SERVICE_CLASSES can be modified after Configuration rows have already been created, the two can get out of sync, and bad things could happen if Configuration.service_classes contains IDs of classes not found in SERVICE_CLASSES. So, essentially, what I want to do is execute the following code at application startup:
for config in Configuration.objects.all():
    config.validate()  # and let it throw an exception to fail fast
The Django docs suggest using AppConfig.ready for startup initialization code, but with an explicit instruction to avoid accessing the DB in any way there. I tried it anyway and it crashed with OperationalError - most likely the DB is not ready by that time, since ready() is executed before syncdb has a chance to create the database (not sure about that, though).
I've also tried placing it in models.py - that crashes with ValueError: Related model 'other_app.Model' cannot be resolved - and it looks right, since that app is not configured yet when models.py is processed.
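Whatever hook ends up running the check, the validation itself can be factored into a pure function that never touches the DB, which also makes it unit-testable. A minimal sketch follows; the helper name check_service_ids is made up, and in real use the known IDs would be collected by importing the paths in SERVICE_CLASSES and reading each class's ID attribute:

```python
def check_service_ids(service_classes_field, known_ids):
    """Return the set of IDs referenced by a Configuration row that are
    not present in the currently enabled service classes."""
    wanted = {s.strip() for s in service_classes_field.split(",") if s.strip()}
    return wanted - set(known_ids)


# IDs as they would be gathered from the classes listed in SERVICE_CLASSES.
known = ["service_class1", "service_class2"]

print(check_service_ids("service_class1,service_class2", known))  # set()
print(check_service_ids("service_class1,service_class9", known))  # {'service_class9'}
```

Configuration.validate() could then raise only when the returned set is non-empty, and the same function can be exercised in plain unit tests without a database.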

How to create initial revision for test objects when using django-reversion in test case

I'm creating some initial tests as I play with django-reversion. I'd like to be able to test that some of my api and view code correctly saves revisions. However, I can't get even a basic test to save a deleted version.
import reversion
from django.db import transaction
from django import test

from myapp import models


class TestRevisioning(test.TestCase):
    fixtures = ['MyModel']

    def testDelete(self):
        object1 = models.MyModel.objects.first()
        with transaction.atomic():
            with reversion.create_revision():
                object1.delete()
        self.assertEquals(reversion.get_deleted(models.MyModel).count(), 1)
This fails when checking the length of the deleted QuerySet with:
AssertionError: 0 != 1
My hypothesis is that I need to create the initial revisions of my model (do the equivalent of ./manage.py createinitialrevisions). If this is the issue, how do I create the initial revisions in my test? If that isn't the issue, what else can I try?
So, the solution is pretty simple. I saved my object under revision control.
# imports same as question
class TestRevisioning(test.TestCase):
    fixtures = ['MyModel']

    def testDelete(self):
        object1 = models.MyModel.objects.first()
        # set up initial revision
        with reversion.create_revision():
            object1.save()
        # continue with the remainder of the test as per the question.
        # ... etc.
I tried to override _fixture_setup(), but that didn't work. Another option would be to loop over the MyModel objects in the __init__(), saving them under reversion control.
Is 'MyModel' the name of the file with your fixtures?
If not, what you are probably missing is the data creation.
You can use fixtures (a file, not the name of your model) or factories.
There's a whole chapter in the Django documentation on providing initial data for models: https://docs.djangoproject.com/en/1.7/howto/initial-data/
Hope it helps

SQLAlchemy creating two databases in one file with two different models

I want to initialize two databases with totally different models in my database.py file.
database.py
engine1 = create_engine(uri1)
engine2 = create_engine(uri2)
session1 = scoped_session(sessionmaker(autocommit=False,autoflush=False,bind=engine1))
session2 = scoped_session(sessionmaker(autocommit=False,autoflush=False,bind=engine2))
Base = declarative_base(name='Base')
Base.query = session1.query_property()
LogBase = declarative_base(name='LogBase')
LogBase.query = session2.query_property()
and the two model structures:
models.py
class MyModel(Base):
    pass
models2.py
class MyOtherModel(LogBase):
    pass
Back to database.py, where I want to create/initialize the databases after importing the models:
# this does init the database correctly
def init_db1():
    import models
    Base.metadata.create_all(bind=engine1)

# this init function does not work properly
def init_db2():
    import models2
    LogBase.metadata.create_all(bind=engine2)
If I change the import in the second init function, it does work:
def init_db2():
    from models2 import *
    LogBase.metadata.create_all(bind=engine2)
but there is a warning:
database.py:87: SyntaxWarning: import * only allowed at module level
Everything does work properly and the databases are initialized, but the warning tells me that something is wrong with it.
If someone can explain to me why the first attempt isn't correct, I would be grateful. Thanks.
You are indeed discouraged from using the from ... import * syntax inside functions, because that makes it impossible for Python to determine what the local names are for that function, breaking scoping rules. In order for Python to make things work anyway, certain optimizations have to be disabled and name lookup is a lot slower as a result.
I cannot reproduce your problem otherwise. Importing just models2 makes sure that everything defined in that module is executed so that the LogBase class has a registry of all declarations. There is no reason for that path to fail while the models.py declarations for Base do work.
For the purposes of SQLAlchemy and declarative table metadata, there is no difference between the import models2 and the from models2 import * syntax; only their effect on the local namespace differs. In both cases, the models2 top-level code is run, classes are defined, etc. but in the latter case then the top-level names from the module are added to the local namespace as direct references, as opposed to just a reference to the module object being added.
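The key point, that a bare import runs the module's top-level code (which is what registers declarative classes with their base), is easy to demonstrate without SQLAlchemy. This sketch (the module name regmod is invented) writes out a module whose top level appends class names to a registry, then imports it plainly:

```python
import os
import sys
import tempfile

# Create a module on disk whose top-level code has a visible side effect,
# standing in for SQLAlchemy's declarative class registration.
moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "regmod.py"), "w") as f:
    f.write(
        "REGISTRY = []\n"
        "class A:\n"
        "    pass\n"
        "REGISTRY.append(A.__name__)\n"  # runs at import time
        "class B:\n"
        "    pass\n"
        "REGISTRY.append(B.__name__)\n"
    )
sys.path.insert(0, moddir)

import regmod  # plain import: top-level code runs, classes get registered

print(regmod.REGISTRY)  # ['A', 'B']
```

Whether you write import regmod or from regmod import *, the registry ends up identical; only the names bound in the importing scope differ, which is exactly the answer's point.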

Why would I put code in __init__.py files?

I am looking for what type of code I would put in __init__.py files and what the best practices related to this are. Or is it a bad practice in general?
Any reference to known documents that explain this is also very much appreciated.
Libraries and frameworks usually use initialization code in __init__.py files to neatly hide internal structure and provide a uniform interface to the user.
Let's take the example of Django forms module. Various functions and classes in forms module are defined in different files based on their classification.
forms/
__init__.py
extras/
...
fields.py
forms.py
widgets.py
...
Now, if you were to create a form, you would have to know in which file each function is defined, and your code to create a contact form would have to look something like this (which is inconvenient and ugly):
class CommentForm(forms.forms.Form):
    name = forms.fields.CharField()
    url = forms.fields.URLField()
    comment = forms.fields.CharField(widget=forms.widgets.Textarea)
Instead, in Django you can just refer to various widgets, forms, fields etc. directly from the forms namespace.
from django import forms

class CommentForm(forms.Form):
    name = forms.CharField()
    url = forms.URLField()
    comment = forms.CharField(widget=forms.Textarea)
How is this possible? To make this possible, Django adds the following statements to the forms/__init__.py file, which import all the widgets, forms, fields, etc. into the forms namespace:
from widgets import *
from fields import *
from forms import *
from models import *
As you can see, this simplifies your life when creating forms, because you no longer have to worry about where each function/class is defined; you just use all of them directly from the forms namespace. This is just one example, but you can see patterns like this in other frameworks and libraries.
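You can reproduce the same trick in miniature. This sketch (the package name minipkg and its files are invented) builds a two-file package on disk whose __init__.py re-exports a function, so callers import it straight from the package root:

```python
import os
import sys
import tempfile

pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "minipkg")
os.makedirs(pkg_dir)

# The actual implementation lives in a submodule.
with open(os.path.join(pkg_dir, "file1.py"), "w") as f:
    f.write("def add(a, b):\n    return a + b\n")

# __init__.py lifts add() into the package namespace,
# the way django/forms/__init__.py does for its submodules.
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .file1 import add\n")

sys.path.insert(0, pkg_root)

from minipkg import add  # callers never need to know it lives in file1.py
print(add(2, 3))  # 5
```

Note the relative import (from .file1 import ...) in __init__.py; the bare from widgets import * form quoted above is Python 2-era Django and won't work as an implicit relative import on Python 3.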
One of the best practices in that area is to import all needed classes from your library (look at mongoengine, for example). So, a user of your library can do this:
from coollibrary import OneClass, SecondClass
instead of
from coollibrary.package import OneClass
from coollibrary.anotherpackage import SecondClass
Also, it is good practice to include a version constant in __init__.py.
For convenience: other users will not need to know your functions' exact location.
your_package/
    __init__.py
    file1.py
    file2.py
    ...
    fileN.py
# in __init__.py
from file1 import *
from file2 import *
...
from fileN import *
# in file1.py
def add():
    pass
then others can call add() with
from your_package import add
without knowing about file1, instead of
from your_package.file1 import add
Put initialization code there. For example, configuring logging (this should be put in the top-level __init__.py):
import logging.config
logging.config.dictConfig(Your_logging_config)
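A concrete minimal sketch of such a config (the logger name, format, and levels here are arbitrary choices, not anything mandated by the stdlib):

```python
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "simple"},
    },
    # Everything propagates to the root logger's console handler.
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING_CONFIG)

log = logging.getLogger("your_package")
log.info("package initialized")  # INFO your_package: package initialized
```

Doing this once at package import time means every submodule can just call logging.getLogger(__name__) and inherit the configuration.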
