Django ORM access optimization with Filter - python

I've implemented a method in my Django app that checks whether a user has a specific permission or is part of a group that contains that permission.
def user_has_perm(user, *permissions_to_check):
    permission_check = True
    permissions_not_found = []
    user_groups = user.groups.all().prefetch_related('permissions')
    for permission in permissions_to_check:
        content_type, permission_codename = permission.split('.')
        if not user.has_perm(permission) and not user_groups.filter(
                permissions__content_type__model__icontains=content_type,
                permissions__codename__icontains=permission_codename).exists():  # Goes down from Groups to the single user permission
            permission_check = False
            permissions_not_found.append(permission)
    return permission_check, permissions_not_found
Now, everything works like a charm, but Django Debug Toolbar is complaining that the query is duplicated as many times as there are groups to check.
For me it's a bottleneck, because some users will have 50 associated groups, and I really don't know how to optimize this query...
Any suggestions?
Thanks

As mentioned by Klaus, this is a database killer. If there are lots of permissions (even without any apps of your own, there will be quite a few), you could potentially find yourself executing hundreds of queries in your user_has_perm function. You could reduce that with caching:
from django.core.cache import cache

def user_has_perm(user, *permissions_to_check):
    # Create a cache key from user.id and permissions_to_check,
    # maybe something as simple as "".join(permissions_to_check).
    cache_key = f"user_has_perm:{user.id}:" + ",".join(permissions_to_check)
    perms = cache.get(cache_key)
    if perms:
        return perms
    permission_check = True
    permissions_not_found = []
    user_groups = user.groups.all().prefetch_related('permissions')
    for permission in permissions_to_check:
        content_type, permission_codename = permission.split('.')
        if not user.has_perm(permission) and not user_groups.filter(
                permissions__content_type__model__icontains=content_type,
                permissions__codename__icontains=permission_codename).exists():  # Goes down from Groups to the single user permission
            permission_check = False
            permissions_not_found.append(permission)
    cache.set(cache_key, (permission_check, permissions_not_found), 10**5)
    return permission_check, permissions_not_found
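On top of the caching, here is a hedged sketch of collapsing the per-permission group lookups into a single query. It mirrors the content_type__model/codename filter from the question but uses exact (case-insensitive) matches instead of icontains; user.get_all_permissions() would be the built-in way to get the combined "app_label.codename" strings:

def user_has_perm(user, *permissions_to_check):
    # One query: every "<content type model>.<codename>" string reachable
    # through the user's groups.
    group_perms = {
        f"{model}.{codename}".lower()
        for model, codename in user.groups.values_list(
            "permissions__content_type__model", "permissions__codename"
        )
        if codename
    }
    permissions_not_found = [
        perm for perm in permissions_to_check
        if not user.has_perm(perm) and perm.lower() not in group_perms
    ]
    return not permissions_not_found, permissions_not_found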


With Peewee, how can I check if an SQLite file has been created vs. filled, without creating a table? If I import my models, it seems the table is created.

First I'd like to check if the file exists, and I've used os.path for this:
import os
from os.path import exists

def check_db_exist():
    try:
        file_exists = exists('games.db')
        if file_exists:
            file_size = os.path.getsize('games.db')
            if file_size > 3000:
                return True, file_size
            else:
                return False, 'too small'
        else:
            return False, 'does not exist'
    except:
        return False, 'error'
I have a separate file for my models and for creating the database. My concern is that if I import the class for the database, it instantiates the SQL file.
Moreover, pywebview wipes all variables when displaying my HTML.
If I were to run this check as I load my page, I couldn't access the true/false variable for whether the SQLite file exists.
db = SqliteDatabase('games.db')

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
    longpath = CharField()
    i_d = IntegerField()

    class Meta:
        database = db
This creates the table, so checking if the file exists is useless.
Then, if I uncomment the first line in the file below, the database gets created; otherwise all of my db.* calls are unusable. I must be missing a really obvious function to solve my problems.
# db = SqliteDatabase('games.db')

def add_game(game, exe, path, longpath, i_d):
    try:
        Game.create(game=game, exe=exe, path=path, longpath=longpath, i_d=i_d)
    except:
        pass

def loop_insert(lib):
    db.connect()
    for i in lib[0]:
        add_game(i.name, i.exe, i.path, i.longpath, i.id)
    db.close()

def initial_retrieve():
    db.connect()
    vals = ''
    for games in Game.select():
        val = js.Import.javascript(str(games.game), str(games.exe), str(games.path), games.i_d)
        vals = vals + val
    storage = vals
    db.close()
    return storage
Should I just import the file at a different point, whenever I feel comfortable? I haven't seen that done often, so I didn't want to be improper in formatting.
Edit: Maybe more like this?
def db():
    db = SqliteDatabase('games.db')
    return db

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
file 2:
from sqlmodel import db, Game

def add_game(game, exe, path, longpath, i_d):
    try:
        Game.create(game=game, exe=exe, path=path, longpath=longpath, i_d=i_d)
    except:
        pass

def loop_insert(lib):
    db.connect()
    for i in lib[0]:
        add_game(i.name, i.exe, i.path, i.longpath, i.id)
    db.close()
I am not sure if this answers your question, since it seems to involve multiple processes and/or processors, but in order to check for the existence of a database file, I have used the following:
DATABASE = 'dbfile.db'

if os.path.isfile(DATABASE) is False:
    # Create the database file here
    pass
else:
    # connect to database here
    db.connect()
I would suggest using SQLite's user_version pragma:

db = SqliteDatabase('/path/to/db.db')

version = db.pragma('user_version')
if not version:  # Assume does not exist/newly-created.
    # do whatever.
    db.pragma('user_version', 1)  # Set user version.
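A minimal sketch of how this could tie into the Game model from the question (my assumption, not part of the answer); a brand-new SQLite file reports user_version as 0:

db = SqliteDatabase('games.db')

if not db.pragma('user_version'):   # 0 on a freshly created database file
    db.create_tables([Game])        # CREATE TABLE IF NOT EXISTS under the hood
    # ...port the initial data from the remote location here...
    db.pragma('user_version', 1)    # mark this file as initialised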
from reddit:
me: To the original challenge, there's a reason I want to know whether the file exists. Maybe it's flawed at the premises; I'll explain and you can fill in from there.
This script will run on multiple machines I don't have access to. At the entry point of a first-time use case, I will be porting data from a remote location; if it's the first time the script runs on that machine, it goes down a different workflow than a repeated opening.
Akin to grabbing all PC programs vs. appending and reading from the last session. How would you suggest quickly determining whether that process has started and finished in a previous session?
Checking if the SQLite file has been created made the most intuitive sense, and then adjusting by byte size. Let me know.
them:
This is a good question!
How would you suggest quickly understanding if that process
has started and finished from a previous session.
If the first thing your program does on a new system is download some kind of fixture data, then the way I would approach it is to load the DB file as normal, have Peewee ensure the tables exist, and then do a no-clause SELECT on one of them (either through the model, or directly on the database through the connection if you want.) If it's empty (you get no results) then you know you're on a fresh system and you need to make the remote call. If you get results (you don't need to know what they are) then you know you're not on a fresh system.
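A minimal sketch of that suggestion, assuming the Game model from the question; create_tables() uses CREATE TABLE IF NOT EXISTS, so it is safe to call on every start-up:

from peewee import SqliteDatabase, Model, CharField, IntegerField

db = SqliteDatabase('games.db')

class Game(Model):
    game = CharField()
    exe = CharField()
    path = CharField()
    longpath = CharField()
    i_d = IntegerField()

    class Meta:
        database = db

def is_fresh_install():
    db.connect(reuse_if_open=True)
    db.create_tables([Game])            # no-op if the table already exists
    fresh = not Game.select().exists()  # empty table => first run on this machine
    db.close()
    return fresh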

Restricting view by model permissions fails in django

Very basically, I have a Member model and want to restrict searching members to allowed users. I use the following in the corresponding view:
if request.user.has_perm('view_member'):
    context['user_list'] = get_user(q)
Sadly this does not work, even if
a) I give a user this permission via the admin interface
b) I grant it in my tests, which look a bit like the following:
def test_user_search_view(self):
    self.client.login(username='testuser0', password='12345')  # this is a user that was given the permission (see below)
    response = self.client.post(reverse('library:search'), data={'q': "Müller"})
    # Check our user is logged in
    self.assertEqual(str(response.context['user']), 'testuser0')  # Works
    self.assertContains(response, "Max")  # Fails, but Max should be there
For further testing I used PyCharm's debugger to step into the test. In the console I then executed the following:
>>> permission_view_user = Permission.objects.get(codename='view_member')
>>> test_user0.user_permissions.add(permission_view_user)
>>> test_user0.save() # For good luck
>>> user = get_object_or_404(User, pk=test_user0.pk)
>>> user.has_perm(permission_view_user)
False
I would expect True here.
I forgot the app name 🙈
It should be
if request.user.has_perm('appname.view_member'):
    context['user_list'] = get_user(q)
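As a side note on the console experiment above (my observation, not part of the answer): has_perm() expects the "app_label.codename" string rather than a Permission object, and permissions are cached on the user instance, so the user should be re-fetched after granting. A sketch, assuming the app label is library (as the URL namespace in the test suggests):

from django.contrib.auth.models import Permission, User

permission_view_member = Permission.objects.get(codename='view_member')
test_user0.user_permissions.add(permission_view_member)

# Re-fetch so the permission cache on the instance is not stale.
test_user0 = User.objects.get(pk=test_user0.pk)

test_user0.has_perm('library.view_member')   # True: the full "<app_label>.<codename>" string
test_user0.has_perm(permission_view_member)  # False: a Permission object never matches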

Cloud Firestore - how retrieve document infos of nested subcollection?

The Firestore DB has a structure like this:
collection: users
documents with id a user_id
each document has a sub-collection: games
each sub-collection has documents with id a game_id
My problem is that I can't loop over the first collection (---> for user in ref_users:) even though my Firestore DB is already full of data organized as described above. Can someone help me understand how to retrieve the info from these documents? Thanks a lot.
from flask import Flask, render_template
from google.cloud import firestore

app = Flask(__name__)
db = firestore.Client()

@app.route('/', methods=['GET'])
def index():
    ref_users = db.collection(u'users').get()
    games = []
    for user in ref_users:
        print(f'{user.id} => {user.to_dict()}')
        ref_game = db.collection(u'users').document(f'{user.id}').collection(u'games').get()
        for rg in ref_game:
            print(f'{rg.id} => {rg.to_dict()}')
            tmp = rg.to_dict()
            tmp['game_id'] = rg.id
            tmp['user_id'] = user.id
            games.append(tmp)
    return render_template('game_list.html', title='Game list', games=games)
I implemented this code on my side with some minor changes and it's working fine. So I think that you are maybe searching for a solution to decrease computational complexity.
I think that collection_group might be helpful. Please check the general documentation and the Python reference.
I have created code for the same result as the sample (just the index part). It has one loop fewer:
def index():
    games = []
    ref_game = db.collection_group(u'games').get()
    for rg in ref_game:
        print(f'{rg.id} => {rg.to_dict()}')
        tmp = rg.to_dict()
        tmp['game_id'] = rg.id
        tmp['user_id'] = rg.reference.parent.parent.id
        games.append(tmp)
    return str(games)
I would use a more sophisticated name for this subcollection, like userGames or something, because collection_group works on every collection with this name in the whole database. If your system grows, there is a chance that you will use the same name games in some other path not related to this logic. Then it will be included in this collection_group and may cause unexpected issues.

How to replace search method with sql query in odoo

When I have too many products and I use the following filter, the loading time is long. So I want to replace it with a query. How can I change my search into a query? I know basic queries, but I can't convert this search into one because I don't know how to handle with_context in a query.
class valuation_report(models.Model):
    _inherit = "stock.valuation.layer"

    def _get_current_user(self):
        for r in self:
            r.user_id = self.env.user

    def _search_branch(self, operator, value):
        warehouse_id = self.env['stock.warehouse'].search([('branch_id', '=', self.env.user.branch_id.id)])
        product_ids = self.env['product.product'].with_context(warehouse=warehouse_id.ids).search([]).filtered(lambda p: p.qty_available > 0)
        return [('product_id', 'in', product_ids.ids)]

    user = fields.Many2one('res.users', compute=_get_current_user, search=_search_branch)
You can do two things:
Put log_level = debug_sql in your Odoo configuration file. This way you will see the generated SQL in the logs, which will help you understand what with_context is doing.
If you really need to transform everything into SQL, use self.env.cr.execute("SELECT ... FROM ...") and then rows = self.env.cr.dictfetchall().
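To illustrate the second point, a rough, hedged sketch of that pattern, not a drop-in replacement: it only keeps products with a positive on-hand quantity in internal locations, so the warehouse/branch restriction from the original search would still have to be added to the WHERE clause. Table and column names assume a standard recent Odoo schema:

def _search_branch(self, operator, value):
    self.env.cr.execute("""
        SELECT sq.product_id
          FROM stock_quant sq
          JOIN stock_location sl ON sl.id = sq.location_id
         WHERE sl.usage = 'internal'
      GROUP BY sq.product_id
        HAVING SUM(sq.quantity) > 0
    """)
    rows = self.env.cr.dictfetchall()
    return [('product_id', 'in', [r['product_id'] for r in rows])]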

How can i lock some rows for inserting in Django?

I want to batch create users for the admin. The trouble is that make_password is a time-consuming task, so if I only returned the created user-password list once all the new users were created, the front-end user would be kept waiting for a long time. That's why I would like to do something like the code shown below. Then I hit a problem: since I can't figure out how to lock the user_id_list I'm creating, someone registering during the thread's runtime will cause a Duplicate Key Error or something like that.
So, I am looking forward to your good solutions.
def BatchCreateUser(self, request, context):
    """Batch create users."""
    num = request.get('num')
    pwd = request.get('pwd')
    pwd_length = request.get('pwd_length') or 10
    latest_user = UserAuthModel.objects.latest('id')  # retrieve the latest registered user id
    start_user_id = latest_user.id + 1  # the first user id to create
    end_user_id = latest_user.id + num  # the last user id to create
    user_id_list = [i for i in range(start_user_id, end_user_id + 1)]  # user ids to create
    raw_passwords = generate_cdkey(num, pwd_length, False)  # generate passwords
    Thread(target=batch_create_user, args=(user_id_list, raw_passwords)).start()  # run this time-consuming task in a thread
    user_password_list = list(map(list, zip(*[user_id_list, raw_passwords])))  # return ids and passwords immediately so the front user is not kept waiting
    return {'results': user_password_list}
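For what it's worth, one hedged sketch of a way around this (an assumption, not a tested answer): instead of predicting the future ids from latest('id'), create placeholder rows first inside a transaction and let the database assign the ids, then hash the real passwords in the background. This assumes UserAuthModel can be created with defaults, and batch_set_passwords is a hypothetical helper standing in for the original batch_create_user:

from threading import Thread
from django.db import transaction

def batch_create_user_safe(num, raw_passwords):
    # Reserve the rows up front: the database hands out the ids, so a user who
    # registers while the thread is still hashing simply gets the next free id.
    with transaction.atomic():
        users = [UserAuthModel.objects.create() for _ in range(num)]
    user_id_list = [u.id for u in users]

    # Hash and store the real passwords in the background, as in the question.
    Thread(target=batch_set_passwords, args=(user_id_list, raw_passwords)).start()

    return list(zip(user_id_list, raw_passwords))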
