I'm using Google App Engine with webapp2 and Python.
I have a User model with a deleted field:
class User(ndb.Model):
    first_name = ndb.StringProperty()
    last_name = ndb.StringProperty()
    email = ndb.StringProperty()
    deleted = ndb.BooleanProperty(default=False)
I'd like to get a User object by calling User.get_by_id(), but I would like to exclude objects whose deleted field is True. Is it possible to do this with the normal get_by_id() function?
If not, could I override it?
Or should I create a custom class method, something like get_by_id_2(), that does a normal .get() query like this: User.query(User.key.id()==id, User.deleted==False).get()?
Would you recommend something else instead?
A query is significantly slower than a get, and is subject to eventual consistency. You should probably use the normal get_by_id and check deleted afterwards. You certainly could wrap that up in a method:
@classmethod
def get_non_deleted(cls, id):
    entity = cls.get_by_id(id)
    if entity and not entity.deleted:
        return entity
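Usage is then a single, strongly consistent get (the id 42 and the handler context here are just illustrative):
user = User.get_non_deleted(42)
if user is None:
    # Missing and soft-deleted users look the same to the caller.
    self.abort(404)  # assuming this runs inside a webapp2 RequestHandler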
Problem: I get a ValidationError when trying to perform a .save() while appending a value to an EmbeddedDocumentListField, because required fields that already exist on the document are reported as missing.
Note that at this point the User document has already been created as part of the signup process, so it already has an email and password in the DB.
My classes:
class User(gj.Document):
    email = db.EmailField(required=True, unique=True)
    password = db.StringField(required=True)
    long_list_of_thing_1s = db.EmbeddedDocumentListField("Thing1")
    long_list_of_thing_2s = db.EmbeddedDocumentListField("Thing2")

class Thing1(gj.EmbeddedDocument):
    some_string = db.StringField()

class Thing2(gj.EmbeddedDocument):
    some_string = db.StringField()
Trying to append a new EmbeddedDocument to the EmbeddedDocumentListField in my User class in the Thing2 Resource endpoint:
class Thing2(Resource):
    def post(self):
        try:
            body = request.get_json()
            user_id = body["user_id"]
            user = UserModel.objects.only("long_list_of_thing_2s").get(id=user_id)

            some_string = body["some_string"]
            new_thing_2 = Thing2Model()
            new_thing_2.some_string = some_string

            user.long_list_of_thing_2s.append(new_thing_2)
            user.save()

            return 201
        except Exception as exception:
            raise InternalServerError
On hitting this endpoint I get the following error on the user.save():
mongoengine.errors.ValidationError: ValidationError (User:603e39e7097f3e9a6829f422) (Field is required: ['email', 'password'])
I think this is because of the .only("long_list_of_thing_2s")
But I am specifically using UserModel.objects.only("long_list_of_thing_2s") because I don't want to be inefficient by bringing the entire UserModel into memory when I only want to append something to the long_list_of_thing_2s.
Is there a different way I should be going about this? I am relatively new to Flask and Mongoengine so I am not sure what all the best practices are when going about this process.
You are correct, this is due to the .only() and is a known "bug" in MongoEngine.
Unless your Model is really large, using .only() will not make a big difference, so I'd recommend using it only if you observe actual performance issues.
If you do have to keep the .only() for whatever reason, you can make use of the push atomic operator. An advantage of the push operator is that in case of race conditions (concurrent requests) it deals gracefully with the different updates; that is not the case with a regular .save(), which overwrites the whole list.
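A minimal sketch of that approach, reusing the names from the question (UserModel, Thing2Model, long_list_of_thing_2s):
# Atomic $push: the append happens server-side, so the document is never
# loaded, re-validated, or re-saved, and the required email/password
# fields are left untouched.
new_thing_2 = Thing2Model(some_string=some_string)
UserModel.objects(id=user_id).update_one(
    push__long_list_of_thing_2s=new_thing_2
)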
I have the following code; Query is my root schema.
If I have only one profile it's fine to have the resolve method inside Query, but what if the schema is too big?
Is there any way to move resolve_profile inside the Profile object type?
import graphene

class Profile(graphene.ObjectType):
    firstName = graphene.String()
    lastName = graphene.String()

class Query(graphene.ObjectType):
    profile = graphene.Field(Profile)

    def resolve_profile(self, info):
        return ...
No, you can't move resolve_profile into Profile, but there is another technique for handling a large schema: you can split your query across multiple files and have Query inherit from each of the resulting classes. In this example, I've broken Query into AQuery, BQuery and CQuery:
class Query(AQuery, BQuery, CQuery, graphene.ObjectType):
    pass
And then you could define AQuery in a different file like this:
class AQuery(graphene.ObjectType):
    profile = graphene.Field(Profile)

    def resolve_profile(self, info):
        return ...
and put other code in BQuery and CQuery.
You can also use the same technique to split up your mutations.
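For completeness, here is a sketch of how the pieces could be wired together; BQuery and CQuery are hypothetical stubs, while AQuery and Profile are the classes from above:
import graphene

class BQuery(graphene.ObjectType):
    # Placeholder field; real fields and resolvers would live here.
    ping = graphene.String()

    def resolve_ping(self, info):
        return "pong"

class CQuery(graphene.ObjectType):
    version = graphene.String()

    def resolve_version(self, info):
        return "1.0"

class Query(AQuery, BQuery, CQuery, graphene.ObjectType):
    pass

schema = graphene.Schema(query=Query)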
Question
How can I build a Model that stores one field in the database, and then retrieves other fields from an API behind-the-scenes when necessary?
Details:
I'm trying to build a Model called Interviewer that stores an ID in the database, and then retrieves name from an external API. I want to avoid storing a copy of name in my app's database. I also want the fields to be retrieved in bulk rather than per model instance because these will be displayed in a paginated list.
My first attempt was to create a custom Model Manager called InterviewManager that overrides get_queryset() in order to set name on the results like so:
class InterviewerManager(models.Manager):
    def get_queryset(self):
        query_set = super().get_queryset()
        for result in query_set:
            result.name = 'Mary'
        return query_set

class Interviewer(models.Model):
    # ID provided by API, stored in database
    id = models.IntegerField(primary_key=True, null=False)

    # Fields provided by API, not in database
    name = 'UNSET'

    # Custom model manager
    interviewers = InterviewerManager()
However, it seems like the hardcoded value of Mary is only present if the QuerySet is not chained with subsequent calls. I'm not sure why. For example, in the django shell:
>>> list(Interviewer.interviewers.all())[0].name
'Mary' # Good :)
>>> Interviewer.interviewers.all().filter(id=1).first().name
'UNSET' # Bad :(
My current workaround is to build a cache layer inside of InterviewerManager that the model accesses like so:
class InterviewerManager(models.Manager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.api_cache = {}

    def get_queryset(self):
        query_set = super().get_queryset()
        for result in query_set:
            # Mock querying a remote API
            self.api_cache[result.id] = {
                'name': 'Mary',
            }
        return query_set

class Interviewer(models.Model):
    # ID provided by API, stored in database
    id = models.IntegerField(primary_key=True, null=False)

    # Custom model manager
    interviewers = InterviewerManager()

    # Fields provided by API, not in database
    @property
    def name(self):
        return Interviewer.interviewers.api_cache[self.id]['name']
However, this doesn't feel like idiomatic Django. Is there a better solution for this situation?
Thanks
Why not just make the API call in the name property?
@property
def name(self):
    name = get_name_from_api(self.id)
    return name
If that isn't possible, check whether you can manipulate a GET request so that you send a list of IDs and receive all the names at once; failing that, the easy way is to do it in a loop.
I would recommend building a so-called proxy: load the records into a dataframe/dict, save this variable's data (with pickle, for example) and use it when necessary. It reduces load times and is reasonably efficient.
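As a sketch of that proxy idea, assuming a hypothetical bulk call get_names_from_api(ids) that returns an {id: name} dict:
import pickle

CACHE_PATH = "name_cache.pkl"

def get_names(ids):
    # Load the pickled cache if it exists, otherwise start empty.
    try:
        with open(CACHE_PATH, "rb") as f:
            cache = pickle.load(f)
    except FileNotFoundError:
        cache = {}

    # Fetch only the ids we have not seen before, in one bulk call.
    missing = [i for i in ids if i not in cache]
    if missing:
        cache.update(get_names_from_api(missing))  # hypothetical bulk endpoint
        with open(CACHE_PATH, "wb") as f:
            pickle.dump(cache, f)

    return {i: cache[i] for i in ids}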
For the life of me I cannot figure this out and I'm having trouble finding information on it.
I have a Django view which accepts an argument that is a primary key (e.g. URL/problem/12) and loads a page with information from that argument's model.
I want to mock models used by my view for testing but I cannot figure it out, this is what I've tried:
@patch('apps.problem.models.Problem',)
def test_search_response(self, problem, chgbk, dispute):
    problem(problem_id=854, vendor_num=100, chgbk=122)
    request = self.factory.get(reverse('dispute_landing:search'))
    request.user = self.user
    request.usertype = self.usertype

    response = search(request, problem_num=12)
    self.assertTemplateUsed('individual_chargeback_view.html')
However - I can never get the test to actually find the problem number, it's as if the model does not exist.
I think that's because when you mock the entire model class, the model effectively doesn't exist: all of the functions that would create or save it have been mocked out. If Problem is just a mock model class that hasn't been modified in any way, it knows nothing about interacting with the database, the ORM, or anything that could be discoverable from within your search() method.
One approach you could take rather than mocking models themselves would be to create FactoryBoy model factories. Since the test database is destroyed with each test run, these factories are a great way to create test data:
http://factoryboy.readthedocs.io/en/latest/
You could spin up a ProblemFactory like so:
class ProblemFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Problem

    problem_id = factory.Faker("pyint")
    vendor_num = factory.Faker("pyint")
    chgbk = factory.Faker("pyint")
Then use it to create a model that actually exists in your database:
def test_search_response(self):
    problem = ProblemFactory(problem_id=854, vendor_num=100, chgbk=122)
    request = self.factory.get(reverse('dispute_landing:search', kwargs={'problem_id': problem.id}))
    request.user = self.user
    request.usertype = self.usertype

    response = search(request, problem_num=854)
    self.assertTemplateUsed('individual_chargeback_view.html')
I'm building an online store on App Engine, and I'm creating a model that will hold the store's settings in the db. The code looks something like this:
class StoreSettings(ndb.Model):
    name = ndb.StringProperty()
    homepageTitle = ndb.StringProperty()
    metaKeywords = ndb.StringProperty()
    metaDescription = ndb.StringProperty()
    timezone = ndb.IntegerProperty()
    currency = ndb.StringProperty()
Is there an easy way to make the StoreSettings class to be a singleton?
Thanks
When you initialize your settings you can provide a key name; then, when you need to retrieve the entity, use the get_or_insert method: if it doesn't exist it will be created, otherwise it will be retrieved.
settings_db = StoreSettings.get_or_insert(
    'my_settings',
    name='yourname',
    ...
)
Or, if you create the object when your application starts, you can just get it by its key name:
settings_db = StoreSettings.get_by_id('my_settings')
Keep the same key: whenever you put an entity in the Datastore, it needs a key. If you create a second entity with the same key, it overwrites the previous one.
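A short sketch of that behaviour, using the StoreSettings model from the question and 'my_settings' as the fixed key name:
# Writing with a fixed key name always targets the same entity, so the
# model behaves as a de facto singleton.
settings = StoreSettings(id='my_settings', name='My Store')
settings.put()

settings = StoreSettings(id='my_settings', name='Renamed Store')
settings.put()  # overwrites the entity above; no duplicate is created

current = StoreSettings.get_by_id('my_settings')  # current.name == 'Renamed Store'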