I am using GAE NDB with Python 2.7. Here is the code for my two models:
class A(ndb.Model):
    def X(self, value):
        # some statements to return a value
        return range

    def Y(self, value):
        # some statements to return a value
        return range

    def Z(self, value):
        # some statements to return a value
        return range

    property_1 = ndb.IntegerProperty(default=0, indexed=False)
    property_2 = ndb.IntegerProperty(default=0, indexed=False)
    property_3 = ndb.IntegerProperty(default=0, indexed=False)
    property_4 = ndb.IntegerProperty(indexed=False)

    # Computed values
    computed_property_1 = ndb.ComputedProperty(lambda e: e.X(e.property_1))
    computed_property_2 = ndb.ComputedProperty(lambda e: e.Y(e.property_2))
    computed_property_3 = ndb.ComputedProperty(lambda e: e.Z(e.property_3))

    date_added = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
    date_modified = ndb.DateTimeProperty(auto_now=True, indexed=False)

class B(ndb.Model):
    property_5 = ndb.IntegerProperty()
    property_6 = ndb.StructuredProperty(A)
    date_added = ndb.DateTimeProperty(auto_now_add=True, indexed=False)
    date_modified = ndb.DateTimeProperty(auto_now=True, indexed=False)
My query code:
qry_1 = B.query(B.property_5 == input_value)  # or B.query(B.property_6.computed_property_2 == input_value)
record_list = qry_1.fetch()
1. When I perform the above query on entities of model B, would any write operation be performed (especially for the ComputedProperty and the DateTimeProperty with auto_now)?
2. If yes, would it be rate-limited to 1 write per second (I think that is the limit for free apps)?
3. If yes, and if I have 50 entities matching the query, would it first complete the write operations before completing the query and returning the matched entity set (any estimate of the query completion time)?
4. Would any of the above answers change if I replace the following line in class B
property_6 = ndb.StructuredProperty(A)
with
property_6 = ndb.StructuredProperty(A, repeated=True)
Queries perform no write operations, and the same applies to the two variations with StructuredProperty. auto_now_add and auto_now are only set during write operations. I'm not 100% sure, but as far as I understand the docs, computed properties are also only updated at write time (I haven't used them yet).
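For what it's worth, a minimal sketch of how you could verify this yourself, assuming an existing B entity whose key is b_key (a hypothetical variable):

b_before = b_key.get()
B.query(B.property_5 == input_value).fetch()
b_after = b_key.get()
# the query only reads; auto_now and ComputedProperty values are
# refreshed only when put() is called
assert b_before.date_modified == b_after.date_modified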
I am trying to calculate a value from GraphQL. I am sending a mutation to my Django models, but before saving it I want to calculate this value with an if statement (if the value is greater than 10, divide by 2; if it is less than 10, multiply by 2).
I don't know where to add this function.
Here is my mutation in schema.py:
class CreatePrice(graphene.Mutation):
    price = graphene.Field(PriceType)

    class Arguments:
        price_data = PriceInput(required=True)

    @staticmethod
    def mutate(root, info, price_data):
        price = Price.objects.create(**price_data)
        return CreatePrice(price=price)

class Mutation(graphene.ObjectType):
    create_product = CreateProduct.Field()
    create_price = CreatePrice.Field()

schema = graphene.Schema(query=Query, mutation=Mutation)
And here is my Django model. The base price is a calculated value, and function_name has two options (*2 or /2, depending on the initial value).
class Price(models.Model):
    base_price = models.CharField(max_length=20)
    function_name = models.CharField(max_length=20, choices=PROMO_FUNCTION)

    def __str__(self):
        return self.price_name
P.S. Sorry for bad English. Thanks!
I don't know why you are using a CharField for base_price, but given that model, I suggest you do this:
@staticmethod
def mutate(root, info, price_data):
    if int(price_data.base_price) >= 10:
        price_data.base_price = str(int(price_data.base_price) / 2)
    else:
        price_data.base_price = str(int(price_data.base_price) * 2)
    price = Price(base_price=price_data.base_price, function_name=price_data.function_name)
    price.save()
    return CreatePrice(price=price)
You can also create records in the database by instantiating an object and calling its save() method, as shown above.
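For reference, a mutation defined this way could be exercised roughly like this (the camel-cased field names and the sample values are assumptions derived from the schema above, not tested):

result = schema.execute('''
    mutation {
        createPrice(priceData: {basePrice: "8", functionName: "PROMO"}) {
            price {
                basePrice
            }
        }
    }
''')
print(result.data)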
This is a detailed question asking how best to implement and manage Database caching in my web-application with memcached. This question uses Python/Django to illustrate the data-models and usage, but the language is not really that relevant. I'm really more interested in learning what the best strategy to maintain cache-coherency is. Python/Django just happens to be the language I'm using to illustrate this question.
RULES OF MY APPLICATION:
I have a 3 x 3 grid of cells of integers.
The size of this grid may increase or decrease in the future. Our solution must scale.
There is a cumulative score for each row that is calculated by summing (value * Y-coord) for each cell in that row.
There is a cumulative score for each column that is calculated by summing (value * X-coord) for each cell in that column.
The values in the cells change infrequently, but those values and the scores are read frequently.
I want to use memcached to minimize my database accesses.
I want to minimize/eliminate storing duplicate or derived information in my database.
The image below shows an example of the state of my grid.
MY CODE:
import memcache

mc = memcache.Client(['127.0.0.1:11211'], debug=0)

class Cell(models.Model):
    x = models.IntegerField(editable=False)
    y = models.IntegerField(editable=False)
    # Whenever this value is updated, the keys for the row and column need to be
    # invalidated. But not sure exactly how I should manage that.
    value = models.IntegerField()

class Row(models.Model):
    y = models.IntegerField()

    @property
    def cumulative_score(self):
        # I need to do some memcaching here.
        # But not sure the smartest way to do it.
        return sum(map(lambda p: p.x * p.value, Cell.objects.filter(y=self.y)))

class Column(models.Model):
    x = models.IntegerField()

    @property
    def cumulative_score(self):
        # I need to do some memcaching here.
        # But not sure the smartest way to do it.
        return sum(map(lambda p: p.y * p.value, Cell.objects.filter(x=self.x)))
SO HERE IS MY QUESTION:
You can see that I have set up a memcached instance. Of course I know how to insert/delete/update keys and values in memcached. But given my code above, how should I name the keys appropriately? Fixed key names won't work, since there must be an individual key for each row and each column. And critically, how can I ensure that the appropriate keys (and only the appropriate keys) are invalidated when the values in the cells are updated?
How do I manage the cache invalidations whenever anyone updates Cell.value, so that database accesses are minimized? Isn't there some Django middleware that can handle this bookkeeping for me? The documents I have seen don't address that.
# your client, be it memcache or redis, assigned to a `client` variable;
# I think both of them support set() without a TTL for permanent values.
class Cell(models.Model):
    x = models.IntegerField(editable=False)
    y = models.IntegerField(editable=False)
    value = models.IntegerField()

    def save(self, *args, **kwargs):
        super(Cell, self).save(*args, **kwargs)
        # recompute after the new value is stored, so the cache is never stale
        Cell.cache("row", self.y)
        Cell.cache("column", self.x)

    @staticmethod
    def score(dimension, number):
        return client.get(dimension + str(number)) or Cell.cache(dimension, number)

    @staticmethod
    def cache(dimension, number):
        if dimension == "row":
            val = sum([c.y * c.value for c in Cell.objects.filter(y=number)])
            client.set(dimension + str(number), val)
            return val
        if dimension == "column":
            val = sum([c.x * c.value for c in Cell.objects.filter(x=number)])
            client.set(dimension + str(number), val)
            return val
        raise Exception("No such dimension: " + str(dimension))
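A quick usage sketch, assuming the client variable above is configured (the coordinates are just examples):

row_score = Cell.score("row", 2)        # served from the cache, recomputed on a miss
column_score = Cell.score("column", 1)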
If you want to cache individual row/column combinations, you should append the object id to the key name.
Given x and y variables:

key = 'x={}_y={}'.format(x, y)

I would use the table name and just append the id; the row id could just be the table PK, and the column id could just be the column name, like this:

key = '{}_{}_{}'.format(table_name, obj.id, column_name)

In any case I suggest considering caching the whole row instead of individual cells, as sketched below.
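A minimal sketch of that last idea, reusing the mc client from the question (the key scheme and the get_row helper are just illustrative):

def get_row(y):
    key = 'row_{}'.format(y)
    row = mc.get(key)
    if row is None:
        # cache all cells of the row under a single key
        row = list(Cell.objects.filter(y=y).values('x', 'y', 'value'))
        mc.set(key, row)
    return row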
The Cell object can invalidate cached values for its Row and Column when the model object is saved.
(Row and Column are plain objects here, not Django models, but of course you can change that if you need to store them in the database for some reason.)
import memcache

mc = memcache.Client(['127.0.0.1:11211'], debug=0)

class Cell(models.Model):
    x = models.IntegerField(editable=False)
    y = models.IntegerField(editable=False)
    # Whenever this value is updated, save() invalidates the cached
    # row and column scores.
    value = models.IntegerField()

    def invalidate_cache(self):
        Row(self.y).invalidate_cache()
        Column(self.x).invalidate_cache()

    def save(self, *args, **kwargs):
        super(Cell, self).save(*args, **kwargs)
        self.invalidate_cache()

class Row(object):
    def __init__(self, y):
        self.y = y

    @property
    def cache_key(self):
        return "row_{}".format(self.y)

    @property
    def cumulative_score(self):
        score = mc.get(self.cache_key)
        # mc.get returns None on a miss; checking `not score` would
        # wrongly treat a legitimate score of 0 as a miss
        if score is None:
            score = sum(map(lambda p: p.x * p.value, Cell.objects.filter(y=self.y)))
            mc.set(self.cache_key, score)
        return score

    def invalidate_cache(self):
        mc.delete(self.cache_key)

class Column(object):
    def __init__(self, x):
        self.x = x

    @property
    def cache_key(self):
        return "column_{}".format(self.x)

    @property
    def cumulative_score(self):
        score = mc.get(self.cache_key)
        if score is None:
            score = sum(map(lambda p: p.y * p.value, Cell.objects.filter(x=self.x)))
            mc.set(self.cache_key, score)
        return score

    def invalidate_cache(self):
        mc.delete(self.cache_key)
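A quick usage sketch of the classes above (the coordinates are just an example):

row_score = Row(2).cumulative_score      # first access computes and caches
cell = Cell.objects.get(x=1, y=2)
cell.value = 7
cell.save()                              # drops row_2 and column_1 from memcached
row_score = Row(2).cumulative_score      # recomputed from the database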
So I am able to generate a random id using uuid.
So far so good.
But when I save to the database, I get the same value every time.
def f():
    d = uuid4()
    s = d.hex
    return s[0:16]

class Q(models.Model):
    a = models.CharField(max_length=150)
    b = models.IntegerField(max_length=25)
    c = models.IntegerField(max_length=32, default=0)
    d = models.ManyToManyField(Ans, related_name='aa')
    e = models.CharField(max_length=18, default=f(), unique=True)

class Ans(models.Model):
    sub = models.CharField(max_length=150)
And I'm inserting like this:

def ins(request):
    t = random.randint(0, 1000)
    p = Q(a=t, b=0, c=0)
    p.save()
    return HttpResponse('Saved')
Just curious what the hell is happening here.
Side note: if I set unique=False on e, I get 2-3 rows with the same e value before a new UUID value appears.
You should not call the function that you are passing to default:
e = models.CharField(max_length=18, default=f, unique=True)
FYI, according to the docs, you should pass a value or a callable:

"The default value for the field. This can be a value or a callable object. If callable it will be called every time a new object is created."
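A short sketch of the difference, with illustrative field names: default=f() calls f once, when the module is imported, so every new row shares that single string, while default=f stores the callable and Django calls it for each new object.

from uuid import uuid4

def f():
    return uuid4().hex[:16]

e_bad = models.CharField(max_length=18, default=f(), unique=True)   # one value shared by all rows
e_good = models.CharField(max_length=18, default=f, unique=True)    # fresh value per row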
I am creating a Django app/website and am having trouble with some Boolean results I don't understand.
In my models, I have an Article class with two methods:
import time

class Article(models.Model):
    # some vars
    basetime = models.IntegerField()
    duration = models.IntegerField()
    has_begun = models.BooleanField()

    def remainingTime(self):
        if (self.basetime + self.duration) - time.time() >= 0:
            return (self.basetime + self.duration) - time.time()
        else:
            return -1

    def stillAvailable(self):
        if self.remainingTime() >= 0:
            return True
        return False
And in my views I have a check function:
def check(request, i):
    try:
        article = Article.objects.get(pk=i)
    except Article.DoesNotExist:
        return ccm(request)
    if (article.stillAvailable):
        return test(request, article.remainingTime)
    else:
        return quid(request)
When a page calls check, my browser displays the test page, and the argument article.remainingTime is -1 (which is the correct value for what I want to do).
My problem is: if article.remainingTime returns -1, then article.stillAvailable should return False, and so the check function should return quid(request).
I don't see why the Django/Python interpreter evaluates article.stillAvailable as True.
If anyone can help, that'd be very appreciated :P
You are using

if (article.stillAvailable):

as an attribute, rather than calling it as a method. Since the attribute (a bound method object) always exists, it is always evaluated as truthy. You just need to add the parentheses to call the method.
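With the parentheses added, check would look like this:

def check(request, i):
    try:
        article = Article.objects.get(pk=i)
    except Article.DoesNotExist:
        return ccm(request)
    if article.stillAvailable():                       # call the method
        return test(request, article.remainingTime())
    else:
        return quid(request)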
I'm trying to create a Google App Engine data model with the following attributes:
store a (string, value) pair in BigTable
if the (string, value) pair DOES NOT exist, create the record
if the (string, value) pair DOES exist, update the record, incrementing a counter
Code:
class stringListRecord(db.Model):
    type = db.StringProperty()
    value = db.StringProperty()
    refs = db.IntegerProperty(default=1)

    def __init__(self, *args, **kw):
        key = db.GqlQuery("SELECT __key__ FROM stringListRecord WHERE type = :1 AND value = :2",
                          kw['type'], kw['value']).get()
        if key is not None:
            kw['key'] = key
        db.Model.__init__(self, *args, **kw)

    def increment_counter(self, key):
        obj = db.get(key)
        obj.refs += 1
        db.Model.put(obj)

    def put(self):
        if self.key() is not None:
            self.increment_counter(self.key())
            # db.run_in_transaction(self.increment_counter, self.key())
        else:
            db.Model.put(self)
When I run the commented-out code, i.e. db.run_in_transaction(), I get:

Only ancestor queries are allowed inside transactions.

Is there a better way to get this functionality out of GAE?
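Not a definitive answer, but a common way around this restriction is to derive a deterministic key_name from the (type, value) pair, so the lookup inside the transaction becomes a get by key, which is allowed. A rough sketch, assuming the custom __init__ and put overrides above are dropped (increment is a hypothetical helper, not part of your model):

def increment(type_, value):
    # deterministic key name: the pair itself identifies the entity
    key_name = u'{0}|{1}'.format(type_, value)

    def txn():
        rec = stringListRecord.get_by_key_name(key_name)
        if rec is None:
            rec = stringListRecord(key_name=key_name, type=type_, value=value)
        else:
            rec.refs += 1
        rec.put()
        return rec

    return db.run_in_transaction(txn)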