I have a project deployed on Google App Engine and I am facing a rollback issue. I want to roll back just like in a relational DB, where we commit before the return or at the end of a method. How do we achieve this in NDB?
Please find the code snippet below for clarity:
class PersonTable(ndb.Model):
    personId = ndb.StringProperty()
    personName = ndb.StringProperty()
    personAddress = ndb.StringProperty()
    personOldReference = ndb.StringProperty()
    scopeReference = ndb.StringProperty()

class ScopeTable(ndb.Model):
    child = ndb.StringProperty()
    isPerson = ndb.BooleanProperty()
    parent = ndb.StringProperty()
@endpoints.method(ZGScopeRequest, ZGScopeResponse, name='scope', path='scope', http_method='POST')
def scope(self, request):
    scope = request.scope
    person = request.personName
    entries = scope.split('.')
    length = len(entries)
    for entry in entries:
        newId = GenerateRandomId('SCOPE')
        child = mobileappmodels.ScopeTable.query(mobileappmodels.ScopeTable.child.IN([entry])).get()
        if child:
            pass
        else:
            index = entries.index(entry)
            parent = entries[index - 1]
            if entry == 'PersonName':
                entry = person
                fetchPerson = mobileappmodels.PersonTable.gql("""WHERE personName = :1""", entry).fetch()
                update = fetchPerson[0].key.get()
                update.scopeReference = newId
                update.put()
                save = mobileappmodels.ScopeTable(child=person, isPerson=True, parent=parent, id=newId)
                save.put()
            elif entry == 'Global':
                save = mobileappmodels.ScopeTable(child=entry, isPerson=False, parent='', id=newId)
                save.put()
            else:
                save = mobileappmodels.ScopeTable(child=entry, isPerson=False, parent=parent, id=newId)
                save.put()
    return ZGScopeResponse(message="Uploaded")
Suppose my function fails just before the return and after the 2nd put. How do I roll back?
Help would be really appreciated, as I am new to Google App Engine and NDB.
Not sure I understand. How could it fail after the last put and before the return?
Perhaps you want to run the code in a transaction:
https://cloud.google.com/appengine/docs/standard/python/ndb/transactions
You can run it in a transaction and raise ndb.Rollback if needed.
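A minimal sketch of what that could look like, reusing the models from the question (save_scope and its arguments are illustrative, not from the original code). Only ancestor queries are allowed inside a Datastore transaction, so the key lookup happens outside and the atomic writes happen inside; raising ndb.Rollback aborts the transaction quietly, and any other uncaught exception rolls it back too:

from google.appengine.ext import ndb

@ndb.transactional(xg=True)  # xg=True: cross-group, needed when touching several entity groups
def save_scope(person_key, person, parent, new_id):
    update = person_key.get()  # gets/puts by key are allowed inside transactions
    if update is None:
        raise ndb.Rollback()   # abort quietly: nothing below is written
    update.scopeReference = new_id
    update.put()
    ScopeTable(child=person, isPerson=True, parent=parent, id=new_id).put()
    # If an exception escapes this function, both puts are undone together.

# Look the key up with a normal (non-transactional) query first:
person_key = PersonTable.gql("""WHERE personName = :1""", person).get(keys_only=True)
if person_key:
    save_scope(person_key, person, parent, newId)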
Related
Hello, I'm new to Datastore and to Python. I have a basic question, but it will help me understand Google Cloud better.
I have 4 entities; let's say I have a parent (match) with children: team, player and event.
class Team(ndb.Model):
    d_name = ndb.StringProperty()
    d_side = ndb.StringProperty()

class Player(ndb.Model):
    d_name = ndb.StringProperty()
    date_of_birth = ndb.StringProperty()
    d_position = ndb.StringProperty()
    d_teamKey = ndb.StringProperty()

class Match(ndb.Model):
    d_competition_name = ndb.StringProperty()
    d_date = ndb.StringProperty()
    d_pool = ndb.StringProperty()
    d_season = ndb.StringProperty()
    d_team1Key = ndb.StringProperty()
    d_team2Key = ndb.StringProperty()
    d_winning_teamKey = ndb.StringProperty()
    d_match_id = ndb.StringProperty()
    d_match_day = ndb.IntegerProperty()

class Event(ndb.Expando):
    d_teamKey = ndb.StringProperty()
    d_playerKey = ndb.StringProperty()
I know that the query to get all the matches on day 4 is:
q = ndb.gql("SELECT * FROM Match WHERE d_match_day = 4")
But how can I search these matches' children so that I get all the players who played on day 4?
Thank you!
Add another property to Match: a StructuredProperty, which is a list of Players (and/or Teams):

players = ndb.StructuredProperty(Player, repeated=True)
teams = ndb.StructuredProperty(Team, repeated=True)

Then you can query for d_match_day == 4 and pull the list of players and/or teams.
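A rough sketch of that, assuming the Match model from the question gains the repeated players property above:

for match in Match.query(Match.d_match_day == 4):
    for player in match.players:  # players are stored inline on each match
        print(player.d_name)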
Wait... you add the player as a child to EACH match he plays? That seems like an inefficient design. A child can only have one parent (unless I grossly misunderstood ancestor queries, which is possible, to be fair).
Anyway, in one single query I don't think that's doable. I would start by getting the keys from Match where d_match_day = 4, and from there do a "SELECT * FROM Player WHERE match_key = ..." using the list you just created. (You might need to change match_key to match your actual ancestor key, but the gist of it is there.)
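A sketch of that two-step approach, assuming (as the question suggests) each Player entity was stored with its Match as Datastore parent, so an ancestor query stands in for the hypothetical match_key property:

# Step 1: keys of all matches on day 4
match_keys = Match.query(Match.d_match_day == 4).fetch(keys_only=True)

# Step 2: one ancestor query per match, collecting the child players
players = []
for match_key in match_keys:
    players.extend(Player.query(ancestor=match_key).fetch())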
I'm a newbie trying to get my model's primary-key UUID instantiated automatically before the object is stored in the DB; I would not like to commit objects to the DB just to make the UUID available.
The short snippets below are from the actual code I have.
I think I need to attach the initialization to some SQLAlchemy hook, but I don't know which one or how.
I have a UUID helper as follows:
class GUID(TypeDecorator):
    impl = types.LargeBinary
    ...
Then in the tables I use:
class Row(Model, Base):
    __tablename__ = "Row"
    id = Column(GUID(), primary_key=True, default=uuid.uuid4)
    row_text = Column(Unicode, index=True)
    original_row_index = Column(Integer)
When I run this test:
def test_uuid():
    row_text = "Just a plain row."
    irow = 0
    row = Row(row_text, irow)
    row.save()
    row.commit()
    if row.id is None:
        print("row.id == None")
    else:
        print("row.id set")
    row2 = Row(row_text, irow)
    row2.save()
    if row2.id is None:
        print("row2.id == None")
    else:
        print("row2.id set")
it prints
row.id set
row2.id == None
The Model class I use is as follows:
class Model():
    def __init__(self):
        pass

    def save(self):
        db = Db.instance()
        db.session.add(self)

    def commit(self):
        db = Db.instance()
        db.session.commit()
I suppose that you should use not the commit method but the flush method: see Difference between flush and commit.
Flush does not store the information to disk, but it does generate all the changes to the primary keys:
Primary key attributes are populated immediately within the flush() process as they are generated and no call to commit() should be required
Link to original
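A minimal sketch of that, reusing the question's Model helpers; the flush method is an assumed addition alongside save and commit, not part of the original code:

class Model():
    # ... existing save() and commit() from the question ...
    def flush(self):
        db = Db.instance()
        db.session.flush()  # populates client-side defaults such as the uuid4 primary key, without committing

row2 = Row(row_text, irow)
row2.save()   # session.add(row2)
row2.flush()  # row2.id is now set; nothing has been permanently written yet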
I am using GAE Python. I have two root entities:
class X(ndb.Model):
    subject = ndb.StringProperty()
    grade = ndb.StringProperty()

class Y(ndb.Model):
    identifier = ndb.StringProperty()
    name = ndb.StringProperty()
    school = ndb.StringProperty()
    year = ndb.StringProperty()
    result = ndb.StructuredProperty(X, repeated=True)
Since Google stores our data across several data centers, we might not get the most recent data when we do a query as shown below (in case some changes have been put):
def post(self):
    identifier = self.request.get('identifier')
    name = self.request.get('name')
    school = self.request.get('school')
    year = self.request.get('year')
    qry = Y.query(ndb.AND(Y.name == name, Y.school == school, Y.year == year))
    record_list = qry.fetch()
My question: how should I modify the above fetch operation to always get the latest data?
I have gone through the related Google help doc but could not understand how to apply it here.
Based on hints from Isaac's answer, would the following be the solution (would latest_record_data contain the latest data of the entity)?
def post(self):
    identifier = self.request.get('identifier')
    name = self.request.get('name')
    school = self.request.get('school')
    year = self.request.get('year')
    qry = Y.query(ndb.AND(Y.name == name, Y.school == school, Y.year == year))
    record_list = qry.fetch()
    record = record_list[0]
    latest_record_data = record.key.get()
There are a couple of ways on App Engine to get strong consistency; the most common are using gets instead of queries, and using ancestor queries.
To use a get in your example, you could encode the name into the entity key:
class Y(ndb.Model):
    result = ndb.StructuredProperty(X, repeated=True)

def put(name, result):
    Y(key=ndb.Key(Y, name), result=result).put()

def get_records(name):
    record_list = ndb.Key(Y, name).get()
    return record_list
An ancestor query uses similar concepts to do something more powerful. For example, fetching the latest record with a specific name:
import time

class Y(ndb.Model):
    result = ndb.StructuredProperty(X, repeated=True)

    @classmethod
    def put_result(cls, name, result):
        # Don't use integers for the last field in the key. (one weird trick)
        key = ndb.Key('name', name, cls, str(int(time.time())))
        cls(key=key, result=result).put()

    @classmethod
    def get_latest_result(cls, name):
        qry = cls.query(ancestor=ndb.Key('name', name)).order(-cls.key)
        latest = qry.fetch(1)
        if latest:
            return latest[0]
The "ancestor" is the first pair of the entity's key. As long as you can pass a key containing at least that first pair into the query, you'll get strong consistency.
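A quick usage sketch under the answer's scheme (the name and result values are made up for illustration):

# Write: the full key is ('name', 'alice', 'Y', '<timestamp>'); the ancestor pair is ('name', 'alice').
Y.put_result('alice', [X(subject='Math', grade='A')])

# Read: the ancestor query sees that write immediately (strongly consistent).
latest = Y.get_latest_result('alice')
if latest:
    print(latest.result)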
I want to define a custom string as an ID, so I created the following model:
class WikiPage(ndb.Model):
    id = ndb.StringProperty(required=True, indexed=True)
    content = ndb.TextProperty(required=True)
    history = ndb.DateTimeProperty(repeated=True)
Based on this SO thread, I believe this is right.
Now I try to query by this id:
entity = WikiPage.get_by_id(page) # page is an existing string id, passed in as an arg
This is based on the NDB API.
This however isn't returning anything -- entity is None.
It only works when I run the following query instead:
entity = WikiPage.query(WikiPage.id == page).get()
Am I defining my custom key incorrectly, or misusing get_by_id() somehow?
get_by_id() looks the entity up by its key ID, not by a property that happens to be named id; your id property is just a regular indexed property. Set the string as the key's ID when you create the entity (and give your own property a different name). Example:
class WikiPage(ndb.Model):
    your_id = ndb.StringProperty(required=True)
    content = ndb.TextProperty(required=True)
    history = ndb.DateTimeProperty(repeated=True)

entity = WikiPage(id='hello', your_id='hello', content=...., history=.....)
entity.put()

entity = WikiPage.get_by_id('hello')
or
key = ndb.Key('WikiPage','hello')
entity = key.get()
entity = WikiPage.get_by_id(key.id())
and this still works:
entity = WikiPage.query(WikiPage.your_id == 'hello').get()
How can I update a row's information?
For example, I'd like to alter the name column of the row that has id 5.
Retrieve an object using the tutorial shown in the Flask-SQLAlchemy documentation. Once you have the entity that you want to change, change the entity itself. Then, db.session.commit().
For example:
admin = User.query.filter_by(username='admin').first()
admin.email = 'my_new_email@example.com'
db.session.commit()
user = User.query.get(5)
user.name = 'New Name'
db.session.commit()
Flask-SQLAlchemy is based on SQLAlchemy, so be sure to check out the SQLAlchemy Docs as well.
There is a method update on BaseQuery object in SQLAlchemy, which is returned by filter_by.
num_rows_updated = User.query.filter_by(username='admin').update(dict(email='my_new_email@example.com'))
db.session.commit()
The advantage of using update over changing the entity comes when there are many objects to be updated.
If you want to give add_user permission to all the admins,
rows_changed = User.query.filter_by(role='admin').update(dict(permission='add_user'))
db.session.commit()
Notice that filter_by takes keyword arguments (use only one =) as opposed to filter which takes an expression.
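A quick side-by-side of the two styles, using the same User model:

User.query.filter_by(username='admin')        # keyword argument, one '='
User.query.filter(User.username == 'admin')   # column expression, '=='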
This does not work if you modify a pickled attribute of the model. Pickled attributes should be replaced in order to trigger updates:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from pprint import pprint

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/users.db'
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), unique=True)
    data = db.Column(db.PickleType())

    def __init__(self, name, data):
        self.name = name
        self.data = data

    def __repr__(self):
        return '<User %r>' % self.name

db.create_all()

# Create a user.
bob = User('Bob', {})
db.session.add(bob)
db.session.commit()

# Retrieve the row by its name.
bob = User.query.filter_by(name='Bob').first()
pprint(bob.data)  # {}

# Modifying data is ignored.
bob.data['foo'] = 123
db.session.commit()
bob = User.query.filter_by(name='Bob').first()
pprint(bob.data)  # {}

# Replacing data is respected.
bob.data = {'bar': 321}
db.session.commit()
bob = User.query.filter_by(name='Bob').first()
pprint(bob.data)  # {'bar': 321}

# Modifying data is ignored.
bob.data['moo'] = 789
db.session.commit()
bob = User.query.filter_by(name='Bob').first()
pprint(bob.data)  # {'bar': 321}
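As an aside (not part of the original answer), SQLAlchemy can also track in-place changes if you wrap the column type with sqlalchemy.ext.mutable.MutableDict; a sketch, with a hypothetical model name:

from sqlalchemy.ext.mutable import MutableDict

class TrackedUser(db.Model):  # hypothetical variant of the User model above
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), unique=True)
    # In-place mutations of this dict now mark the row as dirty, so commit() picks them up.
    data = db.Column(MutableDict.as_mutable(db.PickleType()))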
Just assigning the value and committing will work for all data types except JSON and pickled attributes. Since the pickled type is explained above, I'll note down a slightly different but easy way to update JSON columns.
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), unique=True)
    data = db.Column(db.JSON)

    def __init__(self, name, data):
        self.name = name
        self.data = data
Let's say the model is like above.
user = User("Jon Dove", {"country":"Sri Lanka"})
db.session.add(user)
db.session.flush()
db.session.commit()
This will add the user to the MySQL database with the data {"country": "Sri Lanka"}.
Modifying that data in place is ignored. My code that didn't work is as follows:

user = User.query.filter(User.name == 'Jon Dove').first()
data = user.data
data["province"] = "south"
user.data = data
db.session.merge(user)
db.session.flush()
db.session.commit()
Instead of going through the painful work of copying the JSON into a new dict (not just assigning it to a new variable as above), which should have worked, I found a simpler way: there is a way to flag the system that the JSON has changed.
Following is the working code.
from sqlalchemy.orm.attributes import flag_modified

user = User.query.filter(User.name == 'Jon Dove').first()
data = user.data
data["province"] = "south"
user.data = data
flag_modified(user, "data")
db.session.merge(user)
db.session.flush()
db.session.commit()
This worked like a charm.
There is another method proposed along with this method here
Hope I've helped someone.
In models.py, define the serializers:
from datetime import date, datetime
import json

def default(o):
    if isinstance(o, (date, datetime)):
        return o.isoformat()

def get_model_columns(instance, exclude=[]):
    columns = instance.__table__.columns.keys()
    columns = list(set(columns) - set(exclude))
    return columns

class User(db.Model):
    __tablename__ = 'user'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    .......
    ####
    def serializers(self):
        cols = get_model_columns(self)
        dict_val = {}
        for c in cols:
            dict_val[c] = getattr(self, c)
        return json.loads(json.dumps(dict_val, default=default))
In the REST API, we can update the record dynamically by passing the JSON data into the update query:
class UpdateUserDetails(Resource):
    @auth_token_required
    def post(self):
        json_data = request.get_json()
        user_id = current_user.id
        try:
            instance = User.query.filter(User.id == user_id)
            data = instance.update(dict(json_data))
            db.session.commit()
            updateddata = instance.first()
            msg = {"msg": "User details updated successfully", "data": updateddata.serializers()}
            code = 200
        except Exception as e:
            print(e)
            msg = {"msg": "Failed to update the user details! Please contact your administrator."}
            code = 500
        return msg, code
I was looking for something a little less intrusive than @Ramesh's answer (which was good) but still dynamic. Here is a solution attaching an update method to a db.Model object.
You pass in a dictionary and it will update only the columns that you pass in.
class SampleObject(db.Model):
    id = db.Column(db.BigInteger, primary_key=True)
    name = db.Column(db.String(128), nullable=False)
    notes = db.Column(db.Text, nullable=False)

    def update(self, update_dictionary: dict):
        for col_name in self.__table__.columns.keys():
            if col_name in update_dictionary:
                setattr(self, col_name, update_dictionary[col_name])
        db.session.add(self)
        db.session.commit()
Then in a route you can do:

obj = SampleObject.query.filter(SampleObject.id == id).first()
obj.update(update_dictionary=request.get_json())
Update the columns in Flask (note: admin.save() here assumes a custom save helper on the model; it is not built into Flask-SQLAlchemy):

admin = User.query.filter_by(username='admin').first()
admin.email = 'my_new_email@example.com'
admin.save()
To use the update method (which updates the entry outside of the session), you have to query the object in steps like this:
query = db.session.query(UserModel)
query = query.filter(UserModel.id == user_id)
query.update(user_dumped)
db.session.commit()