I want a Flask route that deletes all instances of a SQLAlchemy model, VisitLog. I call VisitLog.query.delete() and then redirect back to the page, but the old entries are still present, with no error raised. Why weren't they deleted?
@app.route('/log')
def log():
    final_list = VisitLog.query.all()
    return render_template('log.html', loging=final_list)

@app.route('/logclear')
def logclear():
    VisitLog.query.delete()
    return redirect("log.html", code=302)
Just like other write operations, you must commit the session after executing a bulk delete; otherwise the change is discarded when the session is rolled back. (Separately, redirect() expects a URL such as url_for('log'), not a template filename.)
VisitLog.query.delete()
db.session.commit()
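The effect can be sketched outside Flask with plain SQLAlchemy (in-memory SQLite; the VisitLog model here is a minimal stand-in for the Flask-SQLAlchemy one):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class VisitLog(Base):  # minimal stand-in for the Flask-SQLAlchemy model
    __tablename__ = 'visit_log'
    id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([VisitLog(), VisitLog()])
session.commit()

session.query(VisitLog).delete()  # bulk delete, still uncommitted
session.rollback()                # discarding the session: the rows come back
after_rollback = session.query(VisitLog).count()

session.query(VisitLog).delete()
session.commit()                  # now the delete is persisted
after_commit = session.query(VisitLog).count()
```

Here after_rollback is 2 because the uncommitted delete was undone, while after_commit is 0: the commit is what makes the bulk delete permanent.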
Hi, is there any way that I can insert a row into the db without using the session? A simple example:
try:
    db.session.add(user1)
    # Here I want to insert a row into the db, but I can't do it with the
    # session: if I commit here, it will commit all inserts, so my
    # transaction won't work.
    db.session.add(user2)
except:
    db.session.rollback()
else:
    db.session.commit()
Thank you.
If you want to commit changes independently of the default db.session, there are a couple of possibilities.
1. If you need an actual session, create one using SQLAlchemy and use it for your log entries:
from sqlalchemy import orm
...

@app.route('/')
def index():
    model = MyModel(name='M1')
    db.session.add(model)
    with orm.Session(db.engine).begin() as log_session:
        # Session.begin() will commit automatically on exiting the block.
        log = MyLog(message='hello')
        log_session.add(log)
    return ''
2. If you are just inserting entries into the log table, you can connect directly using the engine:
import sqlalchemy as sa
...

@app.route('/')
def index():
    model = MyModel(name='M1')
    db.session.add(model)
    log_table = sa.Table('my_log', db.metadata, autoload_with=db.engine)
    with db.engine.begin() as conn:
        conn.execute(log_table.insert(), {'message': 'hello'})
    db.session.rollback()
    return ''
You could also send a raw SQL statement using the mechanism in (2.) by replacing log_table.insert() with sa.text(sql_string).
However you choose to do this, be aware that:
- Due to transaction isolation, your two transactions may have different views of the data in the database.
- You are responsible for making sure these additional sessions/transactions/connections are rolled back, committed, and closed as necessary.
- You are responsible for handling problem scenarios, for example if an error causes db.session to roll back, potentially making the log messages invalid.
I've created in my view.py (HTML redirector) a thread responsible for updating instances that hold data from the DB. The idea is that every 15 seconds the data from the DB is refreshed in these instances, so when the user refreshes the HTML page the data is available.
Code:

# Defining the thread function
def set_global_vars():
    while True:
        global ALL_SERVERS_FROM_DB
        time.sleep(15)
        ALL_SERVERS_FROM_DB = ServersManipulator(Server().query_all())

set_vars_thread = threading.Thread(target=set_global_vars)
set_vars_thread.start()

# Rendering the servers page using ALL_SERVERS_FROM_DB
@page.route('/servers', methods=['GET', 'POST'])
def servers(server='ALL'):
    return render_template('servers.html', server_man=ALL_SERVERS_FROM_DB, server=server)
The problem is that if I add a new entry that should affect ALL_SERVERS_FROM_DB, it isn't reflected on the HTML page that uses this instance to populate a table.
Hope I was clear and that someone can help me.
Kind Regards
Well, it turns out I must recreate my DB engine session every time I query the DB; otherwise it seems to use some kind of cache.
def query_all(self):
    """
    Purpose:
        Retrieves all entries related to this class from the database.
    """
    global DBSession
    DBSession = sessionmaker(bind=DBENGINE)()  # <- added this to make it work
    result = DBSession.query(type(self)).order_by(type(self).id)
    DBSession.close()
    return result
Before, I was only starting the DBSession when the object was instantiated.
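An alternative to building a new session for every query is to expire the session's cached state so that the next attribute access re-fetches from the database. A sketch (the Server model is illustrative, and the raw UPDATE through the same session stands in for a write made outside the session, which the ORM's cache would not see):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Server(Base):  # hypothetical model for illustration
    __tablename__ = 'server'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50))

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Server(id=1, name='srv-a'))
session.commit()

srv = session.get(Server, 1)
first = srv.name           # 'srv-a', now cached on the instance

# Raw SQL bypasses the ORM's cached state, standing in for an
# update made outside this session.
session.execute(sa.text("UPDATE server SET name = 'srv-b' WHERE id = 1"))

stale = srv.name           # still 'srv-a': served from the cached value
session.expire_all()       # mark all cached state as stale
fresh = srv.name           # triggers a re-fetch from the database
```

After expire_all(), the next attribute access issues a fresh SELECT, so fresh holds the updated value without constructing a new session.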
Regards
I wonder how SQLAlchemy tracks changes that are made outside of SQLAlchemy (a manual change, for example).
Until now, I used to put db.session.commit() before each read of a value that can be changed outside of SQLAlchemy. Is this bad practice? If so, is there a better way to make sure I get the latest value? I created the small script below to check, and apparently SQLAlchemy can detect external changes without db.session.commit() being called each time.
Thanks,
P.S.: I really want to understand how all the magic behind SQLAlchemy works. Does anyone have a pointer to some docs explaining the behind-the-scenes work of SQLAlchemy?
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# Use SQLite so this example can be run anywhere.
# On MySQL, the same behaviour is observed.
basedir = os.path.abspath(os.path.dirname(__file__))
db_path = os.path.join(basedir, "app.db")
app.config["SQLALCHEMY_DATABASE_URI"] = 'sqlite:///' + db_path
db = SQLAlchemy(app)

# A small class to use in the test
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100))

# Create all the tables and some fake data
db.create_all()
user = User(name="old name")
db.session.add(user)
db.session.commit()

@app.route('/')
def index():
    """The scenario: the first request returns "old name" as expected.
    Then I modify the name of User 1 to "new name" directly on the database.
    On the next request, "new name" is returned.
    My question is: how does SQLAlchemy know that the value has changed?
    """
    # Previously, I always called db.session.commit()
    # to make sure that the latest value is fetched.
    # Without db.session.commit(),
    # SQLAlchemy can still see the change made to User.name.
    # print "refresh db"
    # db.session.commit()
    u = User.query.filter_by(id=1).first()
    return u.name

app.run(debug=True)
The "cache" of a session is a dict in its identity map (session.identity_map.dict) that only caches objects for the duration of "a single business transaction", as answered here: https://stackoverflow.com/a/5869795.
Different server requests have different identity maps; it is not a shared object.
In your scenario, you hit the server with two separate requests. The second time, the identity map is a new one (you can easily check this by printing its address) and has nothing cached, so the session queries the database and returns the updated value. It does not "track changes" the way you might think.
So, to answer your question: you don't need to call session.commit() before a query unless you have already queried the same object within the same server request.
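The identity-map behavior can be sketched with plain SQLAlchemy (in-memory SQLite; closing the session plays the role of the request boundary):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(100))

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as setup:
    setup.add(User(id=1, name='old name'))
    setup.commit()

session = Session()
a = session.get(User, 1)   # hits the database
b = session.get(User, 1)   # served from the identity map, no new query
same_object = a is b       # one Python object per row, per session

session.close()            # like a new request: identity map discarded
c = session.get(User, 1)   # re-fetches from the database
fresh_object = c is not a
```

Within one session, both lookups return the very same Python object; after the session is closed, the identity map is empty and the next lookup builds a fresh object from a new query.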
Hope it helps.
I am working on a Google App Engine (GAE) project in Python which has the following structure:
class LoginHandler(webapp2.RequestHandler):
    def get(self):
        ...  # check user -> DB access
    def post(self):
        ...  # check user -> DB access

class SignupHandler(webapp2.RequestHandler):
    def get(self):
        ...  # check user -> DB access
    def post(self):
        ...  # check user -> DB access

class Site1Handler(webapp2.RequestHandler):
    def get(self):
        ...  # check user -> DB access
    def post(self):
        ...  # check user -> DB access

class Site2Handler(webapp2.RequestHandler):
    def get(self):
        ...  # check user -> DB access
    def post(self):
        ...  # check user -> DB access

class ...

application = webapp2.WSGIApplication([('/login', LoginHandler),
                                       ('/signup', SignupHandler),
                                       ('/site1', Site1Handler),
                                       ('/site2', Site2Handler),
                                       ...,
                                      ],
                                      debug=True)
Every user who wants to use this application has to be logged in.
Therefore on the login-site and the signup-site a cookie value with an user_id is set.
So let's imagine this app has 100 URLs and the corresponding 100 Site...Handlers implemented.
Then for every get()/post() call I first get the user_id from the cookie and check in the database whether this user exists and is valid.
So if the user clicks through 20 sites, the app accesses the database 20 times to validate the user.
I am sure there is a better way and I would be glad if someone could show me how to do this.
I have already seen someone inherit his own handler from webapp2.RequestHandler, which would then look like:
class MyHandler(webapp2.RequestHandler):
    def initialize(self, *a, **kw):
        webapp2.RequestHandler.initialize(self, *a, **kw)
        uid = self.request.cookies.get('user_id')
        self.user = uid and User.all().filter('userid =', uid).get()

class LoginHandler(MyHandler):
    def get(self):
        ...  # if self.user is valid -> OK
    def post(self):
        ...  # if self.user is valid -> OK
...
And here it is getting confusing for me.
Consider two or more people accessing the application concurrently: will User1 then see User2's data because self.user was initialized with User2's data?
I also considered using a global variable to save the current user, but the same problem arises if two users access the app concurrently.
I also found the webapp2.registry functionality, which seems to me like a global dictionary, with the same problem when two or more users access the app at the same time.
Could someone please show me how to do it right? I am very new to gae and very happy for every hint in the right direction.
(Maybe Memcache is the solution, but I am more interested in a review of this "check if user is valid" pattern. What would be best practice here?)
Assuming that you are using NDB and validating your user by getting a User object via a key/ID, the object is automatically cached in Memcache as well as in the current instance's local memory, so your route handlers won't call Datastore on every single request; this happens automatically, with no extra coding required. If you get the user object via a query instead, the result won't be cached automatically, but you can cache it manually: verify the user via the cache first, and only query Datastore when there is no cache entry, caching the result for the next request.
See more here.
If you are using webapp2's sessions with signed/secure cookies, then the data in those cookies, including the fact that the user is validated (which you set when validating the user the first time), can be trusted, as long as you use a long, randomly generated secret_key that is kept secret. Just as with the cache, you first check whether the user is validated in the cookie, and if not, you ask Datastore and save the result in the session cookie for the next request. See more here.
Either way, you don't have to repeat your validation code in every single handler as you do in your example. One way of fixing it is to use decorators, which make reusing your validation as simple as placing @login_required before your get method. See more info here, and take a look at the webapp2_extras.appengine.users file to get an idea of how to write your own, similar decorator.
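A minimal, framework-agnostic sketch of such a decorator (the self.user attribute and the redirect method are assumptions modeled on the webapp2 handlers above; FakeHandler only stands in for webapp2.RequestHandler so the sketch runs on its own):

```python
import functools

def login_required(handler_method):
    """Redirect to /login when the handler has no valid user set."""
    @functools.wraps(handler_method)
    def wrapper(self, *args, **kwargs):
        if getattr(self, 'user', None) is None:
            return self.redirect('/login')   # hypothetical redirect helper
        return handler_method(self, *args, **kwargs)
    return wrapper

class FakeHandler:
    """Stand-in for webapp2.RequestHandler, for illustration only."""
    def __init__(self, user=None):
        self.user = user

    def redirect(self, path):
        return ('redirect', path)

    @login_required
    def get(self):
        return ('ok', self.user)
```

With this in place, FakeHandler().get() short-circuits into the redirect, while FakeHandler(user='alice').get() runs the wrapped method; the per-handler validation boilerplate collapses into one decorator line.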
I'm trying to make a function to delete a record in my database with flask and the extension for SQLAlchemy. Problem is, instead of deleting just one row, it deletes all of them. Can someone tell me what's wrong with my code?
@app.route('/admin/delete/<int:page_id>', methods=['GET', 'POST'])
@requires_auth
def delete_page(page_id):
    page = Page.query.get(page_id)
    if not page:
        abort(404)
    if page.children:
        flash('You can not delete a page with child pages. '
              'Delete them, or assign them a different parent.', 'error')
        return redirect(url_for('admin_page'))
    if request.method == 'POST':
        Page.query.get(page_id).query.delete()
        db.session.commit()
        flash('Page was deleted successfully', 'success')
        return redirect(url_for('admin_page'))
    return render_template('admin_delete.html', page_title=page.title, page_id=page_id)
Thanks in advance!
I suspect that this line does not do what you think:
Page.query.get(page_id).query.delete()
You're getting a single instance (which you already did before), and by accessing query on it you actually issue a new query over all objects, without any filter, and therefore delete all of them.
Probably what you want to do is this:
db.session.delete(page)
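The difference can be sketched with plain SQLAlchemy (in-memory SQLite; the Page model here is a minimal stand-in for the one in the question): session.delete() marks only the given instance for deletion, unlike Query.delete() on an unfiltered query.

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Page(Base):  # minimal stand-in for the model in the question
    __tablename__ = 'page'
    id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([Page(id=1), Page(id=2), Page(id=3)])
session.commit()

page = session.get(Page, 2)
session.delete(page)   # marks only this one instance for deletion
session.commit()

remaining = [p.id for p in session.query(Page).order_by(Page.id)]
```

Only row 2 is gone; the other pages survive, which is the behavior the route in the question actually wants.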