How can I test web2py units that are intended to be run on GAE?
I learned to run tests using this method:
python web2py.py -S api -M -R applications/api/tests/test.py
But how do I run tests with dev_appserver.py & web2py.py?
Unless you are testing requests, all you need to do to run unit tests is to have the Google App Engine SDK on your Python path.
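In case it helps, here is a minimal sketch of a test bootstrap that does this; the SDK path is an assumption, so point it at wherever your SDK actually lives:

import sys

SDK_PATH = '/usr/local/google_appengine'  # hypothetical install location
sys.path.insert(0, SDK_PATH)

# dev_appserver ships with the SDK; fix_sys_path() puts the SDK's bundled
# libraries on sys.path so app imports resolve inside plain unit tests.
import dev_appserver
dev_appserver.fix_sys_path()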
I found that setup a little bit annoying, so I built testable-appengine to automate it (and add a couple of nice things to the virtualenv it builds). It also has some interesting examples of how you can install third-party libraries for deployment alongside the application. I'd love to see how it works with web2py.
I have a Meteor project that includes Python scripts in the private folder of our project. We can easily run them from Meteor using exec; we just don't know how to install Python modules on the Galaxy server that is hosting our app. The scripts run fine on localhost since the modules are installed on our computers, but Galaxy doesn't appear to offer a command line or any other way to install these modules. We tried creating our own command line by calling exec commands on the Meteor server, but it was unable to find any modules. For example, when we tried to install pip, the server logged "Unable to find pip".
Basically, we can run the Python scripts, but since they rely on modules, Galaxy throws errors, and we aren't sure how to install those modules. Any ideas?
Thanks!
It really depends on how horrible you want to be :)
No matter what, you'll need a well-specified requirements.txt or setup.py. Once you can confirm your scripts run on something other than a development machine, perhaps by using a virtualenv, you have a few options:
I would recommend hosting your Python scripts as their own independent app. This sounds horrible, but in reality, with Flask, you can basically make them executable over the Internet with very, very little IT (see the sketch below these options). Indeed, Flask is supported as a first-class citizen in Google App Engine.
Alternatively, you can poke at what version of Linux the Meteor containers are running and ship a binary built with PyInstaller in your private directory.
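For the Flask option, here is a minimal sketch of wrapping a script in an HTTP endpoint; the myscript module and its run() function are hypothetical stand-ins for your existing script logic:

from flask import Flask, jsonify, request

import myscript  # hypothetical module wrapping your existing script logic

app = Flask(__name__)

@app.route('/run', methods=['POST'])
def run_script():
    # Hand the request payload to the script and return its result as JSON.
    result = myscript.run(request.get_json(force=True))
    return jsonify(result=result)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)

Your Meteor code can then call this endpoint with an ordinary HTTP request instead of exec, which sidesteps the module-installation problem on Galaxy entirely.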
I want to modify the Dockerfile of a Google App Engine managed VM that uses a standard runtime (python27).
I want to do this to add a C++ library that needs to be called to implement an HTTP request. This library is pretty much the only addition I need to the sandboxed python27 runtime.
The documentation makes it quite clear that this is possible:
Each standard runtime uses a default Dockerfile, which is supplied by the SDK. You can extend and enhance a standard runtime by adding new docker commands to this file.
Elsewhere they say that the Dockerfile of a standard runtime will be generated in the project directory:
When you use gcloud to run or deploy a managed VM application based on a standard runtime (in this case Python27), the SDK will create a minimal Dockerfile using the standard runtime as a base image. You'll find this Dockerfile in your project directory...
This is the one I am supposed to modify according to the same page:
Later steps in this tutorial will show you how to extend the capabilities of your runtime environment by adding instructions to the Dockerfile.
The problem is that when I run my application on the dev server, I cannot find the Dockerfile anywhere, so I can't make any changes to it.
Has anyone managed to modify the standard runtime Dockerfile for Google App Engine? Any help would be appreciated.
To use google-api-python-client I had the same issue, because I needed pycrypto. I always got the error:
CryptoUnavailableError: No crypto library available
To solve this I created an instance start handler that installs all the needed libs. It's ugly, but it works.
app.yaml:
handlers:
- url: /_ah/start
  script: start_handler.app
start_handler.py:
import webapp2
import logging
import os

class StartHandler(webapp2.RequestHandler):
    def execute(self, cmd):
        # Run a shell command and log its combined stdout/stderr.
        logging.info(os.popen("%s 2>&1" % cmd).read())

    def get(self):
        # Only install packages on real instances, not on the development server.
        if not os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
            self.execute("apt-get update")
            self.execute("apt-get -y install build-essential libssl-dev libffi-dev python-dev")
            self.execute("pip install cryptography")
            self.execute("pip install pyopenssl")

app = webapp2.WSGIApplication([
    ('/_ah/start', StartHandler)
], debug=True)
It seems like the Dockerfile is generated only when using gcloud preview app run and not dev_appserver.py, which was what I was using.
I am, however, not able to modify the Dockerfile and run a custom managed VM. But that is a separate error (--custom_entrypoint related).
This whole situation is a nightmare fueled by atrocious documentation and support. A warning for other developers considering Google App Engine.
Turns out, extending the Dockerfile in your app does not work the way the documentation purports (Link). In fact, if there is a Dockerfile present, you will get the following error:
"ERROR: (gcloud.preview.app.deploy) There is a Dockerfile in the current directory, and the runtime field in /[...]/app.yaml is currently set to [runtime: python27]. To use your Dockerfile to build a custom runtime, set the runtime field in [...]/app.yaml to [runtime: custom]. To continue using the [python27] runtime, please omit the Dockerfile from this directory"
The only way I've been able to use a customized Dockerfile is using a custom runtime.
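For illustration, a minimal sketch of the custom-runtime setup that the error message asks for; the base image is the python27-compatible one Google published for Managed VMs, and the C++ library package name is a hypothetical placeholder:

app.yaml:
runtime: custom
vm: true

Dockerfile:
# python-compat mimics the sandboxed python27 runtime.
FROM gcr.io/google_appengine/python-compat
# Install the extra C++ library (hypothetical package name).
RUN apt-get update && apt-get install -y libexample-dev
ADD . /app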
Google has a really good GitHub example for deploying Django to a managed VM using a custom Python runtime (here).
Since you're using the custom runtime you'll have to implement health checking yourself. However, if you need to access Google APIs, Google has an example of how to set that up on GitHub (here).
For help implementing health checking, or integrating with Google APIs you can follow the Google Compute Engine, Getting Started series of tutorials (here).
I'm wondering if appcfg.py in GAE can be smart enough to not upload Python modules that actually should be pre-installed for GAE apps.
E.g., in my workflow I'm developing a pure Django app in a standard Django environment, which is not aware of the httplib2 module. To keep the project self-contained I'm using virtualenv and pip to install missing modules. And since I couldn't make GAE's appcfg.py happy in the virtualenv sandbox, I simply created symlinks to all custom virtualenv modules in the root folder. So, for example, I ended up with a symlink httplib2 --> ../src/lib/site-packages/httplib2.
Then, to check whether httplib2 had actually been uploaded to GAE, I used the appcfg.py download_app command and, bingo, httplib2 was sitting there, eating my quota for no reason.
I could think of tweaking sys.path etc., but it would be interesting to know if there are cleaner solutions...
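For what it's worth, the skip_files directive in app.yaml is the documented way to keep matched files out of the upload. A sketch, assuming the symlinked httplib2 layout described above (note that specifying skip_files replaces the default patterns, so repeat any defaults you still want):

skip_files:
- ^(.*/)?.*\.pyc$
- ^httplib2/.*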
I am creating an app on Google App Engine and am wondering if there are ways to do automated testing with Python.
Thanks!
We generally don't test very much. We once had an "80% test coverage" rule but found it didn't make us better or faster. Most code and data structures we use are designed quite defensively, so there is seldom harm that can't be undone. Our users prefer fast turnaround times to 100% uptime.
We have two apps set up: app.appspot.com and app-test.appspot.com. The whole codebase is designed to ensure app-test.appspot.com never changes state in external systems.
Occasionally we copy the data from app.appspot.com to app-test.appspot.com. It can get messy because ID generation counters for the datastore don't get updated, but it works well enough.
We develop on both systems. Frontend development is done mostly on app.appspot.com and experiments with the backend are done on app-test.appspot.com.
We have three branches: master, rc, and production. rc gets updated from master, and production from rc. rc is deployed daily to rc.app.appspot.com by our operations team. production is deployed weekly to production.app.appspot.com (which is also reachable via another app name).
Developers usually deploy to dev-whoami.app.appspot.com for experimenting. We use the development server very little because we need a lot of data from the datastore.
Now to testing: we mostly use acceptance tests. We have a little framework called resttest_dsl which we use to describe tests like this:
client.GET('/').responds_access_denied()
client.GET('/', auth='user').responds_html()
client.GET('/admin').responds_access_denied()
client.GET('/admin', auth='user').responds_access_denied()
client.GET('/admin', auth='admin').responds_html()
client.GET('/artikel/').responds_with_html_to_valid_auth()
client.GET('/artikel/?q=Ratzfratz', auth='user').responds_html()
client.GET('/api/ic/v3/a/14600/03/zukunft.json').responds_with_json_to_valid_auth()
client.GET('/kunden/SC50313/o/SO1194829/', auth='user').responds_html()
client.GET('/api/masterdata/artikel/v2/artnr/14600/bild=s210').redirects_to('...')
Hostname and credentials have defaults but can be overwritten by environment variables. Most errors we have ever fixed have a regression test in there. We use Makefiles to drive the whole thing, e.g.:
deploy:
	appcfg.py update -V dev-`whoami` -A app .
	TESTHOST=dev-`whoami`.app.appspot.com make resttest
	open http://dev-`whoami`.app.appspot.com/
Deployment always happens from the central git repository like this:
deploy_production:
	rm -Rf tmp
	mkdir tmp
	(cd tmp ; git clone git@github.com:user/app.git)
	(cd tmp/app ; git checkout production ; make dependencies)
	(cd tmp/app ; git show-ref --hash=7 refs/remotes/origin/production > version.txt)
	appcfg.py update -V "v`cat tmp/app/version.txt`" -A app tmp/app
	(cd tmp/app ; TESTHOST="v`cat version.txt`".app.appspot.com make resttest)
	appcfg.py update -V production -A app tmp/app
	appcfg.py backends -V production -A app tmp/app update
We first deploy to a version tagged with the current revision on App Engine. We then run resttest.py against this freshly deployed version. On failure, make stops execution. If no failure occurred, the "production" version is deployed.
We also run mandatory pep8, pyflakes, and pylint checks on source code check-in.
All in all we have very simple-minded tests, but we run them a lot, and against production code and data. For us this catches most of the errors we make, with relatively little effort.
I use gaeunit - http://code.google.com/p/gaeunit/ - which may or may not suit your needs, but once it's going it's pretty easy to add to. I also added an XML output so that I can feed the results back into a JUnit analyser, so my Jenkins can report back after code check-ins that nothing broke.
David Robinson's answer refers to unit testing during development.
If you are looking for automated user (production) testing using Python, go for Selenium RC or Selenium WebDriver (an improved, standalone version).
You can do wonders with Selenium RC.
Refer to http://seleniumhq.org/projects/webdriver/
I'd like to write some Python unit tests for my Google App Engine app. How can I set that up? Does someone happen to have some sample code which shows how to write a simple test?
GAEUnit is a unit test framework that helps to automate testing of your Google App Engine application.
Update: The Python SDK now provides a testbed module that makes service stubs available for unit testing. Documentation here.
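For a flavour of what that looks like, here is a minimal testbed-based test, loosely following the documented pattern; the TestModel entity is a hypothetical example:

import unittest

from google.appengine.ext import db
from google.appengine.ext import testbed

class TestModel(db.Model):
    # Hypothetical model used only to exercise the datastore stub.
    name = db.StringProperty()

class DatastoreTestCase(unittest.TestCase):
    def setUp(self):
        # Activate the testbed and the datastore stub before each test.
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()

    def tearDown(self):
        # Restore the original stubs so tests stay isolated.
        self.testbed.deactivate()

    def test_insert_entity(self):
        TestModel(name='hello').put()
        self.assertEqual(1, len(TestModel.all().fetch(2)))

if __name__ == '__main__':
    unittest.main()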
Google's Python SDK now allows for this via the unittest module. More details here.
One note that you might find useful: to actually execute the tests, you should use NoseGAE. From the command line, use:
$ sudo easy_install nose
$ sudo easy_install NoseGAE
(you can alternatively use pip for a virtual environment installation)
Then cd into your app's source directory and run:
$ nosetests --with-gae
That will run all the unit tests for your app.
One working solution is using the following combination (as described in http://www.cuberick.com/2008/11/unit-test-your-google-app-engine-models.html):
Nose
Nose GAE
GAE Testbed
Since GAE is based on webhooks, it is easy to set up your own testing framework for all relevant URLs in your app.yaml. You can test against a sample dataset on the development server (start the dev server with the --datastore_path option) and assert writes to the datastore or webhook responses.
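As an illustration of that approach, here is a small sketch that hits a running dev server and asserts on the responses; the base URL and the expected content are assumptions:

import unittest
import urllib2

class WebhookSmokeTest(unittest.TestCase):
    # Assumes dev_appserver.py is running with --datastore_path pointing
    # at a file that holds your sample dataset.
    BASE_URL = 'http://localhost:8080'

    def test_homepage_renders(self):
        body = urllib2.urlopen(self.BASE_URL + '/').read()
        self.assertIn('<html', body.lower())

if __name__ == '__main__':
    unittest.main()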