How to move graphene resolve methods to different files?

I have the following code. Query is my root schema.
If I have only one profile, it's fine to keep the resolve method inside Query. But what if the schema grows too big?
Is there any way to move resolve_profile inside the Profile object type?
import graphene

class Profile(graphene.ObjectType):
    first_name = graphene.String()
    last_name = graphene.String()

class Query(graphene.ObjectType):
    profile = graphene.Field(Profile)

    def resolve_profile(self, info):
        return ...

No, you can't move resolve_profile into Profile, but there is another technique for handling a large schema: split your query across multiple files and have Query inherit from each of them. In this example, I've broken Query into AQuery, BQuery and CQuery:
class Query(AQuery, BQuery, CQuery, graphene.ObjectType):
    pass
And then you could define AQuery in a different file like this:
class AQuery(graphene.ObjectType):
    profile = graphene.Field(Profile)

    def resolve_profile(self, info):
        return ...
and put other code in BQuery and CQuery.
You can also use the same technique to split up your mutations.
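For example (a sketch; AMutation and BMutation are hypothetical classes defined in their own files, each with its own mutation fields):
class Mutation(AMutation, BMutation, graphene.ObjectType):
    pass

# The split query and mutation roots come together when building the schema.
schema = graphene.Schema(query=Query, mutation=Mutation)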

Related

Flask SQLAlchemy: add multiple rows

I am using flask-restful. This is the resource class I want to insert with:
class OrderHistoryResource(Resource):
    model = OrderHistoryModel
    schema = OrderHistorySchema
    order = OrderModel
    product = ProductModel

    def post(self):
        value = req.get_json()
        data = cls.schema(many=True).load(value)
        data.insert()
In my model:
def insert(self):
    db.session.add(self)
    db.session.commit()
My schema:
from config.ma import ma
from model.orderhistory import OrderHistoryModel

class OrderHistorySchema(ma.ModelSchema):
    class Meta:
        model = OrderHistoryModel
        include_fk = True
Example data I want to insert:
[
    {
        "quantity": 99,
        "flaskSaleStatus": true,
        "orderId": "ORDER_64a79028d1704406b6bb83b84ad8c02a_1568776516",
        "proId": "PROD_9_1568779885_64a79028d1704406b6bb83b84ad8c02a"
    },
    {
        "quantity": 89,
        "flaskSaleStatus": true,
        "orderId": "ORDER_64a79028d1704406b6bb83b84ad8c02a_1568776516",
        "proId": "PROD_9_1568779885_64a79028d1704406b6bb83b84ad8c02a"
    }
]
This is what I get when the insert method runs:
TypeError: insert() takes exactly 2 arguments (0 given)
Or is there another way to do this?
Edit: I realised marshmallow-sqlalchemy loads directly to model instances.
You need to work through the OrderHistoryModel instances in your list: use add_all to add them to the session, then commit in bulk - see the docs.
Should be something like:
db.session.add_all(data)
db.session.commit()
See this post for a brief discussion of why add_all is best when you have complex ORM relationships.
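For reference, here is a sketch of the explicit loop that add_all saves you from writing (not part of the original answer):
# Equivalent to db.session.add_all(data): stage each instance, commit once.
for instance in data:
    db.session.add(instance)
db.session.commit()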
Also - I'm not sure you need to have all your models/schemas as class variables; it's fine to have them imported (or just present in the same file, as long as they're declared before the resource class).
You are calling insert on a list, because data is a list of OrderHistoryModel instances.
Also, the post method doesn't need to be a classmethod, and you probably had an error there as well (cls is not defined inside a plain method).
Since data is a list of model instances, you can use the db.session.add_all method to add them to the session in bulk.
def post(self):
    value = req.get_json()
    data = self.schema(many=True).load(value)
    db.session.add_all(data)
    db.session.commit()

How to map an existing python class to a Django model

I'm writing a web scraper to get information about customers and appointment times to visit them. I have a class called Job that stores all the details about a specific job. (Some of its attributes are custom classes too, e.g. Client.)
class Job:
    def __init__(self, id_=None, client=Client(None), appointment=Appointment(address=Address(None)),
                 folder=None, notes=None, specific_reqs=None, system_notes=None):
        self.id = id_
        self.client = client
        self.appointment = appointment
        self.notes = notes
        self.folder = folder
        self.specific_reqs = specific_reqs
        self.system_notes = system_notes

    def set_appointment_date(self, time, time_format):
        pass

    def set_appointment_address(self, address, postcode):
        pass

    def __str__(self):
        pass
My scraper works great as a stand-alone app, producing one instance of Job for each page of data scraped.
I now want to save these instances to a Django database.
I know I need to create a model to map the Job class onto, but that's where I get lost.
The Django docs (https://docs.djangoproject.com/en/2.1/howto/custom-model-fields/) say that in order to use my Job class in a Django model I don't have to change it at all. That's great - just what I want - but I can't follow how to create a model that maps to my Job class.
Should it be something like
from django.db import models
import Job, Client

class JobField(models.Field):
    description = "Job details"

    def __init__(self, *args, **kwargs):
        kwargs['id_'] = Job.id_
        kwargs['client'] = Client(name=name)
        ...
        super().__init__(*args, **kwargs)

class Job(models.Model):
    job = JobField()
And then I'd create a job using something like
Job.objects.create(id_=10101, name="Joe bloggs")
What I really want to know is am I on the right lines? Or (more likely) how wrong is this approach?
I know there must be a big chunk of something missing here but I can't work out what.
By "mapping" I'm assuming you want to automatically generate a Django model that can be migrated into the database. Theoretically that is possible if you know what field types you have, but that information isn't really present in your code.
What you need to do is define a Django model, as exemplified in https://docs.djangoproject.com/en/2.1/topics/db/models/.
Basically you have to create in a project app's models.py the following class:
from django.db import models

class Job(models.Model):
    # on_delete is required for ForeignKey in Django 2.x
    client = models.ForeignKey(to=SomeClientModel, on_delete=models.CASCADE)
    appointment = models.DateTimeField()
    notes = models.CharField(max_length=250)
    folder = models.CharField(max_length=250)
    specific_reqs = models.CharField(max_length=250)
    system_notes = models.CharField(max_length=250)
I don't know what data types you actually have there; you'll have to figure that out yourself and cross-reference them against https://docs.djangoproject.com/en/2.1/ref/models/fields/#model-field-types. This was just an example so you can see how to define the model.
After you have these figured out, you can call Job.objects.create(...) with your data.
You don't need to add an id field, because Django creates one by default for all models.
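As a rough sketch of that last step, assuming the field types above and that scraped_job is one of your scraper's Job instances (the attribute names and the client lookup here are assumptions):
# Hypothetical conversion of one scraped Job into a database row.
client, _ = SomeClientModel.objects.get_or_create(name=scraped_job.client.name)
Job.objects.create(
    client=client,
    appointment=scraped_job.appointment.time,  # assumed attribute
    notes=scraped_job.notes or "",
    folder=scraped_job.folder or "",
    specific_reqs=scraped_job.specific_reqs or "",
    system_notes=scraped_job.system_notes or "",
)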

How to use peewee's Using as a decorator to dynamically specify a database?

Despite the numerous recipes and examples in peewee's documentation, I have not been able to find how to accomplish the following:
For finer-grained control, check out the Using context manager / decorator. This allows you to specify the database to use with a given list of models for the duration of the wrapped block.
I assume it would go something like...
db = MySQLDatabase(None)

class BaseModelThing(Model):
    class Meta:
        database = db

class SubModelThing(BaseModelThing):
    '''imagine all the fields'''
    class Meta:
        db_table = 'table_name'

runtime_db = MySQLDatabase('database_name.db', fields={'''imagine field mappings here'''}, **extra_stuff)

@Using(runtime_db, [SubModelThing])
@runtime_db.execution_context()
def some_kind_of_query():
    '''imagine the queries here'''
but I have not found examples, so an example would be the answer to this question.
Yeah, there's not a great example of using the Using or execution_context decorators, so the first thing is: don't use the two together. It doesn't appear to break anything; it just seems redundant. Logically that makes sense, as both decorators cause the specified models' calls in the block to run in a single connection/transaction.
The only(/biggest) difference between the two is that Using allows you to specify the particular database that the connection will be using - useful for master/slave (though the Read slaves extension is probably a cleaner solution).
If you run with two databases and try using execution_context on the 'second' database (in your example, runtime_db), nothing will happen with the data. A connection will be opened at the start of the block and closed at the end, but no queries will be executed on it, because the models are still using their original database.
The code below is an example. Every run should result in only 1 row being added to each database.
from peewee import *

db = SqliteDatabase('other_db')
db.connect()
runtime_db = SqliteDatabase('cmp_v0.db')
runtime_db.connect()

class BaseModelThing(Model):
    class Meta:
        database = db

class SubModelThing(BaseModelThing):
    first_name = CharField()

    class Meta:
        db_table = 'table_name'

db.create_tables([SubModelThing], safe=True)
SubModelThing.delete().where(True).execute()  # clean out previous runs

with Using(runtime_db, [SubModelThing]):
    runtime_db.create_tables([SubModelThing], safe=True)
    SubModelThing.delete().where(True).execute()

@Using(runtime_db, [SubModelThing], with_transaction=True)
def execute_in_runtime(throw):
    SubModelThing(first_name='asdfasdfasdf').save()
    if throw:  # to demo transaction handling in Using
        raise Exception()

# Create an instance in the 'normal' database
SubModelThing.create(first_name='name')

try:  # try to create but throw during the transaction
    execute_in_runtime(throw=True)
except:
    pass  # failure is expected, no row should be added

execute_in_runtime(throw=False)  # create a row in the runtime_db

print('db row count: {}'.format(len(SubModelThing.select())))
with Using(runtime_db, [SubModelThing]):
    print('Runtime DB count: {}'.format(len(SubModelThing.select())))

Query statement in get_by_id (ndb, GAE)

I'm using Google App Engine with webapp2 and Python.
I have a User model with a deleted field:
class User(ndb.Model):
    first_name = ndb.StringProperty()
    last_name = ndb.StringProperty()
    email = ndb.StringProperty()
    deleted = ndb.BooleanProperty(default=False)
I'd like to get a User object by calling User.get_by_id() but I would like to exclude objects that have deleted field True. Is it possible to do this with the normal get_by_id() function?
If not, could I override it?
Or should I create a custom class method, something like get_by_id_2(), that does a normal .get() query like this: User.query(User.key.id() == id, User.deleted == False).get()?
Would you recommend something else instead?
A query is significantly slower than a get, and is subject to eventual consistency. You should probably use the normal get_by_id and check deleted afterwards. You certainly could wrap that up in a method:
@classmethod
def get_non_deleted(cls, id):
    entity = cls.get_by_id(id)
    if entity and not entity.deleted:
        return entity
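Usage would then look something like this (a sketch; the id value is made up):
user = User.get_non_deleted(12345)
if user is None:
    # either no such User exists, or it is flagged as deleted
    ...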

How can I add class attributes to Models in Django?

I'm using Django to build an internal webapp where devices, and analysis reports on those devices, are managed.
Currently an abstract Analysis is defined like this:
class Analysis(models.Model):
    project = models.ForeignKey(Project)
    dut = models.ForeignKey(Dut)  # Device Under Test
    date = models.DateTimeField()
    raw_data = models.FileField(upload_to="analysis")
    public = models.BooleanField()

    @property
    def analysis_type(self):
        s = str(self.__class__)
        # Take the last component of the dotted class name and strip the trailing "'>".
        class_name = s.split('.')[-1][:-2]
        return AnalysisType.objects.get(name=class_name)

    class Meta:
        abstract = True
There are then a number of different analysis types that can be done on a device, with different resulting data.
class ColorAnalysis(Analysis):
    value1 = models.FloatField()
    value2 = models.FloatField()
    ...

class DurabilityAnalysis(Analysis):
    value1 = models.FloatField()
    value2 = models.FloatField()
    ...

...
Each such analysis is created from an Excel sheet posted by the operator. There exists an Excel template the operator fills in for each analysis type.
(The issue here is not whether data input should be done via a web form; there are lots of reasons to choose the Excel path.)
On a page on the website all analysis types should be listed along with a link to the corresponding Excel sheet template used to report that analysis.
Currently I have defined something like
class AnalysisType(models.Model):
    name = models.CharField(max_length=256)
    description = models.CharField(max_length=1024, blank=True)
    template = models.FileField(upload_to="analysis_templates")
but when I thought about how to link this data to the different analysis result model classes, I realised that what I want is to add this data as class attributes to each analysis type.
The problem is that class attributes are already used by the Django magic to define the data of each instance.
How do I add "class attributes" to Django models? Any other ideas on how to solve this?
EDIT:
I have now added the analysis_type property, which looks up the class name. This requires no manual addition of a variable to each subclass. It works fine, but still requires manually adding an AnalysisType entry corresponding to each subclass. It would be nice if this could be handled by the class system as well. Any ideas?
How about a property or method that returns an AnalysisType dependent on an attribute in the particular Analysis subclass?
class Analysis(models.Model):
    ...

    @property
    def analysis_type(self):
        return AnalysisType.objects.get(name=self.analysis_type_name)

class ColorAnalysis(Analysis):
    analysis_type_name = 'color'

class DurabilityAnalysis(Analysis):
    analysis_type_name = 'durability'
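If you also want the AnalysisType entry created automatically rather than added by hand (as your edit asks), a variation of the property using get_or_create could work - a sketch, with an assumed empty default description:
@property
def analysis_type(self):
    # Creates the AnalysisType row on first access instead of
    # requiring a manual entry per subclass.
    analysis_type, _ = AnalysisType.objects.get_or_create(
        name=self.analysis_type_name,
        defaults={'description': ''},
    )
    return analysis_type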
