So I have a class named Player defined like below:
class Player:
    def insert(self, score):
        # implementation of score insert

    def retrieve(self, player_id):
        # implementation of retrieval
Now I want to modify the insert and retrieve methods so that a username is passed in, and the insert or retrieval only happens if that username has the admin role (I have a user class). So I modified the methods like below:
def insert(self, score, username):
    u = UserModel()
    if not u.is_admin_user(username):
        print('User does not have access to retrieve data')
        return False
    # implementation of score insert
This works fine; however, I have various other models with many different methods that all need this restriction, and I would need to add this snippet everywhere. So is there something in Python OOP I can use for this? I am thinking in terms of an interface or a decorator or something like that. I am not very strong with OOP...
Any help is much appreciated.
I think the best you can do without too much of a redesign is a simple function definition for your check.
def check_permissions(username):
    u = UserModel()
    if not u.is_admin_user(username):
        raise PermissionError('User does not have access to retrieve data')

class Player:
    def insert(self, score, username):
        check_permissions(username)
        # implementation of score insert

    def retrieve(self, player_id, username):
        check_permissions(username)
        # implementation of retrieval
This way you don't need to repeat the whole authentication logic in every place you want to add the check; you only need to add an argument and a single line of code to every method that needs it.
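Since you mentioned decorators: you could also wrap the same check in a decorator, so each protected method only needs one extra line above its definition. A minimal sketch, reusing the check_permissions function above and assuming the username is always passed as a keyword argument:

import functools

def require_admin(method):
    @functools.wraps(method)
    def wrapper(self, *args, username, **kwargs):
        check_permissions(username)  # raises PermissionError for non-admin users
        return method(self, *args, username=username, **kwargs)
    return wrapper

class Player:
    @require_admin
    def insert(self, score, username):
        pass  # implementation of score insert

# usage: player.insert(100, username='alice')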
That being said, I feel like your structure could use a bit of a redesign, moving the authentication somewhere else entirely, to avoid breaking good code practices.
I'm very new to coding, and I'm still confused about how to approach this. I want to make an employee database that can, depending on user input 1: look up an employee, 2: add an employee, 3: change an existing employee's info, and 4: list all employees.
I can get the first and last specifications to work (albeit messily), but I'm having trouble with 2 and 3. I'm also unsure about the right way to store my employee instances, or whether I should store them in a dictionary instead. Any pointers on how to start steps 2 and 3 would be great!
I've tried looking up a few different solutions, as well as reading articles about classes and/or dictionaries, but I'm unfortunately lost.
Defining the class:
class Employee:
    def __init__(self, name, salary, department, title):
        self.name = name
        self.salary = salary
        self.department = department
        self.title = title

    def tell(self):
        print('{}, {}, {}, {}'.format(self.name, self.salary, self.department, self.title), end="")

    def add_employee(self, addname, addsalary, adddepartment, addtitle):
        self.name, self.salary, self.department, self.title.append(addname,
            addsalary, adddepartment, addtitle)
The above is nested under the class; the last method is an attempt to make an add_employee function that adds employee details from user input to the pre-set instances. I don't think this code is correct, either.
The following is a function outside of the class, as well as the three pre-set employee instances; my guess is this is also bad code (oof):
def add_employee_input():
    Employee.add_employee(input("Enter name:"), input("Enter salary:"),
                          input("Enter department:"), input("Enter title:"))
    print("The full roster is as follows:")
    emp1.tell()
    emp2.tell()
    emp3.tell()
    emp4.tell()
emp1 = Employee('Angela', '40000', 'Department for Penguin Research', 'Penguinologist')
emp2 = Employee('Mr. Rogers', '60000', 'Admin', 'Manager')
emp3 = Employee('Lucie', '50000', 'Department for Snail Research', 'Snailologist')
emp4 = Employee(addname, addsalary, adddepartment, addtitle)
I know the last line is wrong; I don't know if I can directly insert a variable into a class instance like this. And I doubt the function is correct, either.
If I try to run the code without the add_employee function blocked off, I get an error message saying the "addname", "addsalary", etc. variables are undefined. This makes sense to me, since I'm trying to use local class variables in an outside function. I don't know how to do it otherwise, but I'm sure there's a way.
Your function needs an employee whose different parts it can set:
def add_employee_input(employee: Employee):
    employee.name = input("Name:")
    # and so on

# main
myEmployee = Employee(...)  # needs an empty constructor
add_employee_input(myEmployee)
But as you can see, your Employee needs parameters (because you have no empty constructor), so add:
class Employee:
    def __init__(self):
        self.name = "DefaultName"
        # and so on
Good job so far! Let me give some suggestions on how to tackle this problem.
When you specified the add_employee function in the class, that's what's called an instance method. Note that you're taking a self parameter; this means that the Employee instance should already exist when you use it.
Writing Employee.add_employee(...) asks the Employee class itself to run the function. But the function is defined for instances, not for the class! Look up the @classmethod and @staticmethod decorators for creating methods that don't require instantiation.
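For illustration, here is a rough sketch of what a classmethod constructor could look like (from_input is a hypothetical name, not something defined in the question):

class Employee:
    def __init__(self, name, salary, department, title):
        self.name = name
        self.salary = salary
        self.department = department
        self.title = title

    @classmethod
    def from_input(cls):
        # called on the class itself, so no instance has to exist beforehand
        return cls(input("Enter name:"), input("Enter salary:"),
                   input("Enter department:"), input("Enter title:"))

new_employee = Employee.from_input()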
Now, in theory you could use those decorators and handle everything inside the Employee class, but here's the thing: Should a regular employee know how to create itself? Also, what happens if one day you decide to not use objects for database entries, but an actual database like MySQL? These are software engineering issues, but consider the following.
There are various ways to solve this. I suggest creating a new class, EmployeeDatabase, that abstracts away all the database handling. Methods on that object will handle each requirement, like this:
class EmployeeDatabase:
    def __init__(self):
        # TODO: Initialize database.
        pass

    def lookup_by_name(self, name):
        # TODO: Look up employee by name, and return all the available information.
        pass

    def add_employee(self, name, salary, department, title):
        # TODO: Add employee to the database.
        pass

    def list_all(self):
        # TODO: List all employees in the database.
        pass

    def change_employee(self, pk):
        # TODO: Change an employee's information.
        pass
I want to note two things. First, a database usually uses an identifier for each row, so I suggest assigning a primary key, pk, to each employee record. Second, this liberates you to use the implementation you want. A possible implementation would be using a list:
def __init__(self):
    self._database = []
In this case, the primary key would just be the position of the employee in the list.
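For example, looking an employee up by primary key could then simply index into the list (lookup_by_pk is a hypothetical method on EmployeeDatabase, added here only to illustrate the idea):

def lookup_by_pk(self, pk):
    # pk is just the position of the record in the internal list
    return self._database[pk]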
The _ at the start of the instance variables is just a convention to say that you're not supposed to touch them from outside the instance without a method. Don't worry about it.
To add a new employee as a tuple:
def add_employee(self, name, salary, department, title):
    self._database.append((name, salary, department, title))
I suggest that the interactive input functions stay outside the class definition. To use the class, you would do:
employee_database = EmployeeDatabase()

def add_employee_input(database):
    database.add_employee(
        input('Enter name:'),
        input('Enter salary:'),
        input('Enter department:'),
        input('Enter title:'),
    )

add_employee_input(employee_database)
# The output for interactive use could be something like
# "Created new employee with ID 1"
Can you implement the other functions?
Some last pointers:
To look up an employee by name, you would need to check the names one by one in the database. This is clearly inefficient, so databases use what's called an index. Of course, you could just use a dictionary where the key is the name, but what happens if two people have the same name?
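One way to make the index idea concrete (a sketch; the _name_index attribute is my own addition, not part of the answer above) is to map each name to a list of primary keys, so duplicate names simply collect several keys:

from collections import defaultdict

class EmployeeDatabase:
    def __init__(self):
        self._database = []
        self._name_index = defaultdict(list)  # name -> list of primary keys

    def add_employee(self, name, salary, department, title):
        self._database.append((name, salary, department, title))
        pk = len(self._database) - 1
        self._name_index[name].append(pk)
        return pk

    def lookup_by_name(self, name):
        # a duplicated name just maps to several records
        return [self._database[pk] for pk in self._name_index[name]]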
I used a tuple to represent an employee in the list. You could also keep using the Employee class idea, storing instances in the list. A more elegant solution would be to use a NamedTuple, which lets you access each element as an attribute, like you would with a class instance. It's part of the Python standard library.
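A minimal sketch of the NamedTuple option (EmployeeRecord is just an illustrative name):

from typing import NamedTuple

class EmployeeRecord(NamedTuple):
    name: str
    salary: int
    department: str
    title: str

rec = EmployeeRecord('Angela', 40000, 'Department for Penguin Research', 'Penguinologist')
print(rec.name, rec.title)  # attribute access, like a class instance
print(rec[0])               # still behaves like a plain tuple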
I want to have a base class called MBUser that has some predefined properties, ones that I don't want to be changed. If the client wants to add properties to MBUser, it is advised that MBUser be subclassed, and any additional properties be put in there.
The API code won't know if the client actually subclasses MBUser or not, but it shouldn't matter. The thinking went that we could just get MBUser by id. So I expected this to work:
def test_CreateNSUser_FetchMBUser(self):
    from nsuser import NSUser
    id = create_unique_id()
    user = NSUser(id=id)
    user.put()
    # changing MBUser.get.. to NSUser.get makes this test succeed
    get_user = MBUser.get_by_id(id)
    self.assertIsNotNone(get_user)
Here NSUser is a subclass of MBUser. The test fails.
Why can't I do this?
What's a work around?
Models are defined by their "kind", and a subclass is a different kind, even if it seems the same.
The point of subclassing is not to share values, but to share the "schema" you've created for a given "kind".
A kind map is created on base class ndb.Model (it seems like you're using ndb since you mentioned get_by_id) and each kind is looked up when you do queries like this.
For subclasses, the kind is just defined as the class name:
@classmethod
def _get_kind(cls):
    return cls.__name__
I just discovered GAE has a solution for this. It's called the PolyModel:
https://developers.google.com/appengine/docs/python/ndb/polymodelclass
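A rough sketch of what that could look like (untested; it assumes the ndb polymodel module from the App Engine SDK, and the properties shown are hypothetical):

from google.appengine.ext import ndb
from google.appengine.ext.ndb import polymodel

class MBUser(polymodel.PolyModel):
    # predefined properties that clients should not change
    name = ndb.StringProperty()

class NSUser(MBUser):
    # client-specific additions go in the subclass
    favorite_color = ndb.StringProperty()

# All entities in the hierarchy share the root kind, so fetching through the
# base class should now find entities stored via the subclass:
#   NSUser(id=some_id).put()
#   MBUser.get_by_id(some_id)  # returns the NSUser entity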
I've been reading about ways to implement authorization (and authentication) in my newly created Pyramid application, and I keep bumping into the concept called a "Resource". I am using python-couchdb in my application and not using an RDBMS at all, hence no SQLAlchemy. If I create a Product object like so:
class Product(mapping.Document):
    item = mapping.TextField()
    name = mapping.TextField()
    sizes = mapping.ListField()
Can someone please tell me if this is also called a resource? I've been reading the entire documentation of Pyramid, but nowhere does it explain the term resource in plain simple English (maybe I'm just stupid). If this is the resource, does this mean I just stick my ACL stuff in here like so:
class Product(mapping.Document):
    __acl__ = [(Allow, AUTHENTICATED, 'view')]
    item = mapping.TextField()
    name = mapping.TextField()
    sizes = mapping.ListField()

    def __getitem__(self, key):
        return <something>
If I were to also use traversal, does this mean I add the __getitem__ method to my python-couchdb Product class/resource?
Sorry, it's just really confusing with all the new terms (I came from Pylons 0.9.7).
Thanks in advance.
I think the piece you are missing is the traversal part. Is Product the resource? Well, it depends on what your traversal produces; it could produce products...
Perhaps it might be best to walk this through from the view back to how it gets configured when the application is created.
Here's a typical view.
@view_config(context=Product, permission="view")
def view_product(context, request):
    pass  # would do stuff
So this view gets called when context is an instance of Product, AND if the __acl__ attribute of that instance grants the "view" permission. So how would an instance of Product become context?
This is where the magic of traversal comes in. The very logic of traversal is simply a dictionary of dictionaries. So one way that this could work for you is if you had a URL like
/product/1
Somehow, some resource needs to be traversed by the segments of the URL to determine a context so that a view can be determined. What if we had something like...
class ProductContainer(object):
    """
    container = ProductContainer()
    container[1]
    >>> <Product(1)>
    """
    def __init__(self, request, name="product", parent=None):
        self.__name__ = name
        self.__parent__ = parent
        self._request = request

    def __getitem__(self, key):
        p = db.get_product(id=key)
        if not p:
            raise KeyError(key)
        else:
            p.__acl__ = [(Allow, Everyone, "view")]
            p.__name__ = key
            p.__parent__ = self
            return p
Now this is covered in the documentation, and I'm attempting to boil it down to the basics you need to know. The ProductContainer is an object that behaves like a dictionary. The __name__ and __parent__ attributes are required by Pyramid in order for the URL generation methods to work right.
So now we have a resource that can be traversed. How do we tell Pyramid to traverse ProductContainer? We do that through the Configurator object.
config = Configurator()
config.add_route(name="product",
                 path="/product/*traverse",
                 factory=ProductContainer)
config.scan()

application = config.make_wsgi_app()
The factory parameter expects a callable, and it hands it the current request. It just so happens that ProductContainer.__init__ will do that just fine.
This might seem a little much for such a simple example, but hopefully you can imagine the possibilities. This pattern allows for very granular permission models.
If you don't want/need a very granular permission model such as row-level ACLs, you probably don't need traversal; instead you can use routes with a single root factory.
class RootFactory(object):
    def __init__(self, request):
        self._request = request
        self.__acl__ = [(Allow, Everyone, "view")]  # TODO: add more ACLs
@view_config(permission="view", route_name="orders")
def view_product(context, request):
    order_id, product_id = request.matchdict["order_id"], request.matchdict["product_id"]
    pass  # do what you need to with the input; the security check already happened
config = Configurator(root_factory=RootFactory)
config.add_route(name="orders",
                 path="/order/{order_id}/products/{product_id}")
config.scan()

application = config.make_wsgi_app()
Note: I wrote the code examples from memory; obviously you need all the necessary imports, etc. In other words, this isn't going to work as a straight copy/paste.
Have you worked through http://michael.merickel.org/projects/pyramid_auth_demo/ ? If not, I suspect it may help. The last section http://michael.merickel.org/projects/pyramid_auth_demo/object_security.html implements the pattern you're after (note the example "model" classes inherit from nothing more complex than object).
Here's the situation. I've got an app with multiple users, and each user has a group/company they belong to. There is a company field on all models, meaning there's a corresponding company_id column in every table in the DB. I want to transparently enforce that, when a user tries to access any object, they are always restricted to objects within their "domain", e.g. their group/company. I could go through every query and add a filter that says .filter(company=user.company), but I'm hoping there's a better way to do it at a lower level so it's transparent to whoever is coding the higher-level logic.
Does anyone have experience with this and/or can point me to a good resource on how to approach it? I'm assuming this is a fairly common requirement.
You could do something like this:
from django.db import models
from django.db.models.query import QuerySet

class DomainQuerySet(QuerySet):
    def applicable(self, user=None):
        if user is None:
            return self
        else:
            return self.filter(company=user.company)

class DomainManager(models.Manager):
    def get_query_set(self):
        return DomainQuerySet(self.model)

    def __getattr__(self, name):
        return getattr(self.get_query_set(), name)

class MyUser(models.Model):
    company = models.ForeignKey('Company')
    objects = DomainManager()

MyUser.objects.applicable(user)
Since we are using querysets, the query is chainable so you could also do:
MyUser.objects.applicable().filter(**kwargs)
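And because the manager lives in one place, any other model that carries a company column can reuse it, which keeps the restriction transparent to the higher-level code (the Product model here is purely illustrative):

class Product(models.Model):
    company = models.ForeignKey('Company')
    objects = DomainManager()

# only products belonging to the requesting user's company
Product.objects.applicable(request.user)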
I'm working on a website where I sell products (one class Sale, one class Product). Whenever I sell a product, I want to save that action in a History table and I have decided to use the observer pattern to do this.
That is: my class Sales is the subject and the History class is the observer, whenever I call the save_sale() method of the Sales class I will notify the observers. (I've decided to use this pattern because later I'll also send an email, notify the admin, etc.)
This is my subject class (the Sales class extends from this)
class Subject:
    _observers = []

    def attach(self, observer):
        if not observer in self._observers:
            self._observers.append(observer)

    def detach(self, observer):
        try:
            self._observers.remove(observer)
        except ValueError:
            pass

    def notify(self, **kargs):
        for observer in self._observers:
            observer.update(self, **kargs)
On the view I do something like this:
sale = Sale()
sale.user = request.user
sale.product = product

h = History()     # here I create the observer
sale.attach(h)    # here I add the observer to the subject
sale.save_sale()  # inside this method I call notify()
This is the update() method on History:
def update(self, subject, **kargs):
    self.action = "sale"
    self.username = subject.user.username
    self.total = subject.product.total
    self.save(force_insert=True)
It works fine the first time, but when I try to make another sale, I get an error saying I can't insert into History because of a primary key constraint.
My guess is that when I call the view the second time, the first observer is still in the Subject class, and now I have two history observers listening to the Sales, but I'm not sure if that's the problem (gosh I miss the print_r from php).
What am I doing wrong? When do I have to "attach" the observer? Or is there a better way of doing this?
BTW: I'm using Django 1.1 and I don't have access to install any plugins.
This may not be an acceptable answer since it's more architecture related, but have you considered using signals to notify the system of the change? It seems that you are trying to do exactly what signals were designed to do. Django signals have the same end-result functionality as Observer patterns.
http://docs.djangoproject.com/en/1.1/topics/signals/
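For reference, a rough sketch of what that could look like here (untested; it connects a handler to Django's post_save signal for Sale, and the History field names are taken from the update() method in the question, so adjust them to your actual model). connect() is used rather than the receiver decorator shortcut, since that decorator only appeared in later Django versions:

from django.db.models.signals import post_save

def record_sale_history(sender, instance, created, **kwargs):
    if created:
        History.objects.create(
            action="sale",
            username=instance.user.username,
            total=instance.product.total,
        )

post_save.connect(record_sale_history, sender=Sale)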
I think this is because _observers = [] acts like a static, shared field: every instance of Subject mutates the same _observers list, which has unwanted side effects. Initialize this variable in the constructor instead:
class Subject:
    def __init__(self):
        self._observers = []
@Andrew Sledge's answer indicates a good way of tackling this problem. I would like to suggest an alternate approach.
I had a similar problem and started out using signals. They worked well, but I found that my unit tests became slower because the signals were called each time I loaded an instance of the associated class from a fixture. This added tens of seconds to the test run. There is a workaround, but I found it clumsy: I defined a custom test runner and disconnected my functions from the signals before loading fixtures, then reconnected them afterwards.
Finally I decided to ditch signals altogether and overrode the appropriate save() methods of my models instead. In my case, whenever an Order is changed, a row is automatically created in an OrderHistory table, among other things. To do this I added a function that creates an instance of OrderHistory and called it from within the Order.save() method. This also made it possible to test save() and the function separately.
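A minimal sketch of that save()-override idea (the Order fields and the OrderHistory model shown here are hypothetical, just to show the shape of it):

from django.db import models

class OrderHistory(models.Model):
    order_id = models.IntegerField()
    status = models.CharField(max_length=20)

class Order(models.Model):
    status = models.CharField(max_length=20)

    def save(self, *args, **kwargs):
        super(Order, self).save(*args, **kwargs)
        self._record_history()

    def _record_history(self):
        # kept as a separate method so it can be unit-tested on its own
        OrderHistory.objects.create(order_id=self.pk, status=self.status)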
Take a look at this SO question. It has a discussion about when to override save() versus when to use signals.
Thank you all for your answers. Reading about signals gave me another perspective, but I don't want to use them, for learning purposes (I wanted to use the observer pattern in web development :P). In the end, I solved it by doing something like this:
class Sales(models.Model, Subject):
    ...

    def __init__(self):
        self._observers = []     # reset observers
        self.attach(History())   # attach a History observer
        ...

    def save(self):
        super(Sales, self).save()
        self.notify()  # notify all observers
Now every time I call save(), the observers will be notified, and if I need to, I can add or remove an observer.
What do you think? Is this a good way to solve it?