I am starting an App Engine application and have defined some simple models I will need. I want to write tests for my application (this would be the first time I've done that), but I can't see what I should be testing for.
I already looked at how to do this (https://developers.google.com/appengine/docs/python/tools/localunittesting), but I just don't know what to test.
Here is my code so far:
class User(db.Model):
    email = db.EmailProperty()
    name = db.StringProperty()

class Service(db.Model):
    name = db.StringProperty(required=True)

class UserService(db.Model):
    user_id = db.ReferenceProperty(User,
                                   required=True,
                                   collection_name='user_services')
    service_id = db.ReferenceProperty(Service,
                                      required=True)
    access_token = db.StringProperty(required=True)
    refresh_token = db.StringProperty(required=True)

class LocalServer(db.Model):
    authentication_token = db.StringProperty(required=True)

class Task(db.Model):
    user_service_id = db.ReferenceProperty(UserService,
                                           required=True,
                                           collection_name='tasks')
    local_server_id = db.ReferenceProperty(LocalServer,
                                           required=True,
                                           collection_name='tasks')
    creation_date = db.DateTimeProperty(auto_now_add=True,
                                        required=True)
    completion_date = db.DateTimeProperty(required=True)
    number_of_files = db.IntegerProperty(required=True)
    status = db.StringProperty(required=True,
                               choices=('created', 'validated', 'in_progress', 'done'))
Quoting Wikipedia:
Intuitively, one can view a unit as the smallest testable part of an application.
Now, I don't know exactly what your application is supposed to do, but in general you don't have to test each specific class/model. What does this mean? Well, you don't need to test a behavior like "what happens when I add two users and then filter them by a specific name?". You don't have to test that, because you would be testing a GAE function, .filter(). And why should you test it? :) Google pays its developers for that!
But what if you write your own "filter" method? What if you customize the filter() method? Then you must test it.
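To make that concrete, here is a minimal sketch of a unit test for your own logic rather than GAE's. The helper active_users() is hypothetical (it is not from your code); the point is that custom behavior you wrote yourself is what deserves a test:

    import unittest

    # Hypothetical helper: custom logic layered on top of the datastore API --
    # the kind of code that *is* worth unit testing, unlike the built-in filter().
    def active_users(users):
        """Return users that have an email set, sorted by name."""
        return sorted(
            (u for u in users if u.get("email")),
            key=lambda u: u["name"],
        )

    class ActiveUsersTest(unittest.TestCase):
        def test_skips_users_without_email(self):
            users = [
                {"name": "bob", "email": "bob@example.com"},
                {"name": "eve", "email": None},
                {"name": "amy", "email": "amy@example.com"},
            ]
            self.assertEqual(
                [u["name"] for u in active_users(users)],
                ["amy", "bob"],
            )

With the GAE testbed set up as in the local unit testing docs, the same idea applies to methods on your models.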
I suggest you read this answer. The question is about Django models, but it is actually valid for every framework and programming language.
Related
I'm writing a web scraper to get information about customers and appointment times to visit them. I have a class called Job that stores all the details about a specific job. (Some of its attributes are custom classes too, e.g. Client.)
class Job:
    def __init__(self, id_=None, client=Client(None),
                 appointment=Appointment(address=Address(None)), folder=None,
                 notes=None, specific_reqs=None, system_notes=None):
        self.id = id_
        self.client = client
        self.appointment = appointment
        self.notes = notes
        self.folder = folder
        self.specific_reqs = specific_reqs
        self.system_notes = system_notes

    def set_appointment_date(self, time, time_format):
        pass

    def set_appointment_address(self, address, postcode):
        pass

    def __str__(self):
        pass
My scraper works great as a standalone app, producing one instance of Job for each page of data scraped.
I now want to save these instances to a Django database.
I know I need to create a model to map the Job class onto, but that's where I get lost.
The Django docs (https://docs.djangoproject.com/en/2.1/howto/custom-model-fields/) say that in order to use my Job class in a Django model I don't have to change it at all. That's great, just what I want, but I can't follow how to create a model that maps to my Job class.
Should it be something like
from django.db import models
import Job, Client

class JobField(models.Field):
    description = "Job details"

    def __init__(self, *args, **kwargs):
        kwargs['id_'] = Job.id_
        kwargs['client'] = Client(name=name)
        ...
        super().__init__(*args, **kwargs)

class Job(models.Model):
    job = JobField()
And then I'd create a job using something like
Job.objects.create(id_=10101, name="Joe bloggs")
What I really want to know is am I on the right lines? Or (more likely) how wrong is this approach?
I know there must be a big chunk of something missing here but I can't work out what.
By "mapping" I'm assuming you want to automatically generate a Django model that can be migrated into the database. Theoretically that is possible if you know what field types you have, but from that code you don't really have that information.
What you need to do is to define a Django model like exemplified in https://docs.djangoproject.com/en/2.1/topics/db/models/.
Basically you have to create in a project app's models.py the following class:
from django.db import models

class Job(models.Model):
    client = models.ForeignKey(to=SomeClientModel, on_delete=models.CASCADE)
    appointment = models.DateTimeField()
    notes = models.CharField(max_length=250)
    folder = models.CharField(max_length=250)
    specific_reqs = models.CharField(max_length=250)
    system_notes = models.CharField(max_length=250)
I don't know what data types you actually have there; you'll have to figure that out yourself and cross-reference it with https://docs.djangoproject.com/en/2.1/ref/models/fields/#model-field-types. This was just an example to help you understand how to define it.
After you have these figured out, you can call Job.objects.create(...yourdata).
You don't need to add an id field, because Django creates one by default for all models.
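If it helps, one way to bridge the two worlds is a small adapter that flattens a scraped job into keyword arguments for objects.create(). This is only a sketch: ScrapedJob stands in for the scraper's own Job class, and the field names are taken from the question, not from any real model:

    # Sketch: flatten a scraped job into kwargs for Job.objects.create(**fields).
    class ScrapedJob:
        """Stand-in for the scraper's Job class (attribute names from the question)."""
        def __init__(self, notes=None, folder=None, specific_reqs=None,
                     system_notes=None):
            self.notes = notes
            self.folder = folder
            self.specific_reqs = specific_reqs
            self.system_notes = system_notes

    def job_to_fields(job):
        """Map scraped attributes onto the Django model's field names."""
        return {
            "notes": job.notes,
            "folder": job.folder,
            "specific_reqs": job.specific_reqs,
            "system_notes": job.system_notes,
        }

In a view or management command you would then call Job.objects.create(**job_to_fields(scraped)), handling the client foreign key separately.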
I am currently working on a small Flask app that will connect to an API and process the data pulled from it.
Users are able to log in to my Flask app and also define their credentials to interact with the API. Any given user may have one or more API credentials associated with their user.
I've created db models to store user and API credentials in the database as follows.
I'm using the Flask-Login module, which has a "current_user" object that provides me with the User model of the currently logged-in user across my entire app.
Models:
class User(UserMixin, db.Model):
    __tablename__ = 'users'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(128), index=True, unique=True)
    firstname = db.Column(db.String(55))
    lastname = db.Column(db.String(55))
    password = db.Column(db.String(128))
    creds = db.relationship('ApiCredential', backref='owner', lazy='dynamic')

class ApiCredential(db.Model):
    __tablename__ = 'api_credentials'
    id = db.Column(db.Integer, primary_key=True)
    site = db.Column(db.String(140))
    user = db.Column(db.String(140))
    username = db.Column(db.String(100), index=True, unique=True)
    password = db.Column(db.String(55))
    base = db.Column(db.String(100))
    owner_id = db.Column(db.Integer, db.ForeignKey('users.id'))
    active = db.Column(db.Boolean)
I would like to know how to create a similar "global variable" for my API credentials that is specific to the logged-in user, not to all users of the application.
NOTE: It seems as though "current_user" is something called a local proxy, which I am not at all familiar with, and I cannot seem to find any decent documentation or explanation of what it is and how to use it for my needs.
You're in for a fun ride, at the end of which you might choose to do something less magic.
First, it helps to understand how current_user works. The source (as of this moment) is here. It's a werkzeug.local.LocalProxy, which wraps a lambda that calls flask_login.utils._get_user.
LocalProxy is pretty cool, and understanding it is a great way to level-up on Python, but flask-login uses a thin slice of it. The source (as of this moment) is here. It looks scary, but if you trace the code, it merely invokes the local arg (the lambda from flask-login).
That gets us back to _get_user (here, as of this moment), which loads a user if there isn't one (in the top of the current request context), and then returns the user from the top of the request context.
You could follow that pattern and write a package that exports current_credentials. You'd follow the same approach, using a werkzeug.local.LocalProxy to wrap a lambda that invokes, say, _get_credentials. The trick would be to import current_user from flask-login and use it in _get_credentials to get the user with which to construct the query that joins to your ApiCredential table.
Or you could take a simple approach and write a utility method for your views to use, which would use current_user to get the user and then do the join to get the API credentials for that user.
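The utility-method approach might look like this sketch. The name credentials_for is an assumption; the user is passed in explicitly (in a view you'd pass current_user), which also keeps it trivial to test, and creds is the relationship defined on the User model above:

    # Sketch of the utility-function approach.  In a view, call
    # credentials_for(current_user); `creds` is the relationship on User.
    def credentials_for(user):
        """Return the API credentials owned by `user` (empty list if anonymous)."""
        if user is None or not getattr(user, "is_authenticated", False):
            return []
        return list(user.creds)

The LocalProxy version of current_credentials would wrap essentially this same lookup in a lambda evaluated per request.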
One method could be to create a new route in your Flask app. When the user requests the page, you can check which user it is; once you know, you can query your ApiCredential model and filter by current_user.id.
creds = ApiCredential.query.filter_by(owner_id=current_user.id).first_or_404()
Then you can do as you please with the information stored in your API Credentials table.
I don't see why you would want to replicate the user loader functionality. That's the function you will have included at some point when you set up Flask-Login: a very short function that returns the current user object from the user model.
You could display or return your API keys: display them on a page as HTML, or for more autonomous jobs use jsonify to return the keys as a JSON string.
I'm sorry this doesn't directly answer your question, but I hope my suggestion might lead you to a slightly less complex answer and you can continue developing your web app. Perhaps it would be worth revisiting at a later date.
Note: this is off the top of my head. Since filter_by() takes keyword arguments (a single =), if you don't want a 404 on a missing row the line would be:
creds = ApiCredential.query.filter_by(owner_id=current_user.id).first()
Furthermore, you may not want to use .first() if they have multiple API credentials stored in the table.
In which case this would be more suitable:
creds = ApiCredential.query.filter_by(owner_id=current_user.id).all()
I'm using Google App Engine with webapp2 and Python.
I have a User model with a deleted field:
class User(ndb.Model):
    first_name = ndb.StringProperty()
    last_name = ndb.StringProperty()
    email = ndb.StringProperty()
    deleted = ndb.BooleanProperty(default=False)
I'd like to get a User object by calling User.get_by_id(), but I want to exclude objects whose deleted field is True. Is it possible to do this with the normal get_by_id() function?
If not, could I override it?
Or should I create a custom class method, something like get_by_id_2(), that does a normal .get() query like this: User.query(User.key.id() == id, User.deleted == False).get()?
Would you recommend something else instead?
A query is significantly slower than a get, and is subject to eventual consistency. You should probably use the normal get_by_id and check deleted afterwards. You certainly could wrap that up in a method:
@classmethod
def get_non_deleted(cls, id):
    entity = cls.get_by_id(id)
    if entity and not entity.deleted:
        return entity
I am working on a webapp in Flask and using a services layer to abstract database querying and manipulation away from the views and API routes. It's been suggested that this makes testing easier because you can mock out the services layer, but I am having trouble figuring out a good way to do this. As a simple example, imagine that I have three SQLAlchemy models:
models.py
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String)

class Group(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)

class Transaction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    from_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    to_id = db.Column(db.Integer, db.ForeignKey('user.id'))
    group_id = db.Column(db.Integer, db.ForeignKey('group.id'))
    amount = db.Column(db.Numeric(precision=2))
There are users and groups, and transactions (which represent money changing hands) between users. Now I have a services.py with a bunch of functions for things like checking whether certain users or groups exist, checking whether a user is a member of a particular group, etc. I use these services in an API route which is sent JSON in a request and uses it to add transactions to the db, something like this:
routes.py
import services

@app.route("/addtrans")
def addtrans():
    # get the values out of the json in the request
    args = request.get_json()
    group_id = args['group_id']
    from_id = args['from']
    to_id = args['to']
    amount = args['amount']

    # check that both users exist
    if not services.user_exists(to_id) or not services.user_exists(from_id):
        return "no such users"

    # check that the group exists
    if not services.group_exists(group_id):
        return "no such group"

    # add the transaction to the db
    services.add_transaction(from_id, to_id, group_id, amount)
    return "success"
The problem comes when I try to mock out these services for testing. I've been using the mock library, and I'm having to patch the functions from the services module in order to get them to be redirected to mocks, something like this:
mock = Mock()
mock.user_exists.return_value = True
mock.group_exists.return_value = True

@patch("services.user_exists", mock.user_exists)
@patch("services.group_exists", mock.group_exists)
def test_addtrans_route(self):
    assert "success" in routes.addtrans()
This feels bad for any number of reasons. One, patching feels dirty; two, I don't like having to patch every service method I'm using individually (as far as I can tell there's no way to patch out a whole module).
I've thought of a few ways around this:
1. Reassign routes.services so that it refers to my mock rather than the actual services module, something like routes.services = mymock.
2. Have the services be methods of a class which is passed as a keyword argument to each route, and simply pass in my mock in the test.
3. Same as (2), but with a singleton object.
I'm having trouble evaluating these options and thinking of others. How do people who do Python web development usually mock services when testing routes that make use of them?
You can use dependency injection or inversion of control to get code that is much simpler to test.
replace this:
def addtrans():
    ...
    # check that both users exist
    if not services.user_exists(to_id) or not services.user_exists(from_id):
        return "no such users"
    ...
with:
def addtrans(services=services):
    ...
    # check that both users exist
    if not services.user_exists(to_id) or not services.user_exists(from_id):
        return "no such users"
    ...
What's happening:
- you are aliasing a global as a local (that's not the important point)
- you are decoupling your code from services while expecting the same interface
- mocking the things you need is much easier
e.g.:
class MockServices:
    def user_exists(self, id):
        return True
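With the services object passed in as a parameter, the test simply hands the route a stub with the same interface, and no patching is needed. A self-contained sketch (addtrans here is cut down to just the existence check from the route above):

    # Sketch: inject a stub instead of patching the services module.
    class MockServices:
        def user_exists(self, user_id):
            return True

        def group_exists(self, group_id):
            return True

    def addtrans(from_id, to_id, services):
        # Reduced version of the route: only the existence check.
        if not services.user_exists(to_id) or not services.user_exists(from_id):
            return "no such users"
        return "success"

    assert addtrans(1, 2, services=MockServices()) == "success"

Swapping in a stub whose user_exists returns False exercises the failure branch the same way.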
Some resources:
https://github.com/ivan-korobkov/python-inject
http://code.activestate.com/recipes/413268/
http://www.ninthtest.net/aglyph-python-dependency-injection/
You can patch out the entire services module at the class level of your tests. The mock will then be passed into every method for you to modify.
@patch('routes.services')
class MyTestCase(unittest.TestCase):

    def test_my_code_when_services_returns_true(self, mock_services):
        mock_services.user_exists.return_value = True
        self.assertIn('success', routes.addtrans())

    def test_my_code_when_services_returns_false(self, mock_services):
        mock_services.user_exists.return_value = False
        self.assertNotIn('success', routes.addtrans())
Any attribute access on a mock gives you another mock object. You can do things like assert that a function was called with mock_services.return_value.some_method.return_value as an argument. It can get kind of ugly, so use it with caution.
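A quick self-contained illustration of that behavior, using only the standard library (no Flask involved): attribute access on a Mock yields another Mock, and every call is recorded for later assertions:

    from unittest import mock

    # A stand-in for the patched `services` module.
    services = mock.Mock()
    services.user_exists.return_value = True

    result = services.user_exists(42)

    assert result is True
    services.user_exists.assert_called_once_with(42)

This is exactly what the class-level @patch hands to each test method as mock_services.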
I would also raise a hand for using dependency injection for such needs. You can use Dependency Injector to describe the structure of your application using inversion of control container(s), making it look like this:
"""Example of dependency injection in Python."""

import logging
import sqlite3

import boto3

import example.main
import example.services

import dependency_injector.containers as containers
import dependency_injector.providers as providers


class Core(containers.DeclarativeContainer):
    """IoC container of core component providers."""

    config = providers.Configuration('config')
    logger = providers.Singleton(logging.Logger, name='example')


class Gateways(containers.DeclarativeContainer):
    """IoC container of gateway (API clients to remote services) providers."""

    database = providers.Singleton(sqlite3.connect, Core.config.database.dsn)
    s3 = providers.Singleton(
        boto3.client, 's3',
        aws_access_key_id=Core.config.aws.access_key_id,
        aws_secret_access_key=Core.config.aws.secret_access_key)


class Services(containers.DeclarativeContainer):
    """IoC container of business service providers."""

    users = providers.Factory(example.services.UsersService,
                              db=Gateways.database,
                              logger=Core.logger)
    auth = providers.Factory(example.services.AuthService,
                             db=Gateways.database,
                             logger=Core.logger,
                             token_ttl=Core.config.auth.token_ttl)
    photos = providers.Factory(example.services.PhotosService,
                               db=Gateways.database,
                               s3=Gateways.s3,
                               logger=Core.logger)


class Application(containers.DeclarativeContainer):
    """IoC container of application component providers."""

    main = providers.Callable(example.main.main,
                              users_service=Services.users,
                              auth_service=Services.auth,
                              photos_service=Services.photos)
Having this will give you a chance to override particular implementations later:
Services.users.override(providers.Factory(example.services.UsersStub))
Hope it helps.
@patch("dao.qualcomm_transaction_service.QualcommTransactionService.get_max_qualcomm_id", 20)
def test_lambda_handler():
    lambda_handler(event, None)
Following your example, I used mocking so that my method returns 20 whenever get_max_qualcomm_id is called while testing my lambda function locally. But on reaching the above method I get an exception: 'int' type object is not callable. Please let me know what the problem is here.
This is the actual call I am trying to mock:
last_max_id = QualcommTransactionService().get_max_qualcomm_id(self.subscriber_id)
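That error is consistent with how patch treats its second positional argument: the attribute is replaced with that exact object, so patching with the int 20 means the "method" is the number 20, and calling it fails. A minimal sketch with a stand-in class (QualcommTransactionService here is a dummy, not the real DAO) shows one fix, patching with a Mock whose return_value is 20:

    from unittest import mock

    class QualcommTransactionService:
        """Stand-in for the real service class (name taken from the comment above)."""
        def get_max_qualcomm_id(self, subscriber_id):
            raise RuntimeError("would hit the real database")

    # @patch(..., 20) replaces the method with the int 20, so calling it raises
    # "'int' object is not callable".  Patching with a Mock keeps it callable:
    with mock.patch.object(QualcommTransactionService, "get_max_qualcomm_id",
                           mock.Mock(return_value=20)):
        assert QualcommTransactionService().get_max_qualcomm_id(1) == 20

In decorator form that would be @patch("...", Mock(return_value=20)), or @patch("...", return_value=20) with the mock passed into the test as an argument.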
I would like to save contacts from the Google Contacts API v3 to the App Engine datastore. I want to know the best way to store this data, and especially whether a model already exists!
Looking at the sources of gdata, I found a very interesting starting point. But it models the data only in Python, not for the datastore.
Question: Does this model already exist in Python?
If not:
Question: What is the best way to start from scratch?
A beginning example of a contact in Python:
class Contact(db.Model):
    """
    https://developers.google.com/gdata/docs/2.0/elements?hl=fr#gdContactKind
    https://developers.google.com/appengine/docs/python/datastore/datamodeling
    """
    content = db.Text()
    """atom:content Notes about the contact."""
    link = db.ListProperty(Link, indexed=False, default=[])
    """atom:link* Links to related information. Specifically, atom:link[@rel='alternate'] links to an HTML page describing the contact."""
    title = db.StringProperty()
    """atom:title Contact's name. This field is read only. To modify the contact's name, see gd:name."""
    email = db.ListProperty(Email, indexed=False, default=[])
    """gd:email* Email addresses."""
    """etc..."""

class Link(db.Model):
    """
    Link
    """
    link = db.LinkProperty()

class Email(db.Model):
    """
    Email
    """
    email_address = db.EmailProperty()

class EmailImParent(db.Model):
    address = db.StringProperty()
    label = db.StringProperty()
    rel = db.StringProperty()
    primary = db.StringProperty()

class Email(db.Model, EmailImParent):
    """
    The gd:email element.
    """
    email = db.EmailProperty()
    display_name = db.StringProperty()
I think everyone is rolling their own. It is easy enough to do; you can take a look at AE-BaseApp/Python and check out the code from the GitHub link. I have some new code to be pushed in the near future that contains some improvements to the contact model. (The updated code is currently broken due to hacking to get logins working on both http and https.)