Running unit tests on production Google App Engine - Python

Has anyone done tests on production (or on staging) as opposed to local tests on the dev server? Is it a bad idea to try?
At first glance, App Engine and unit tests aren't a great fit. App
Engine requests can only be driven by http or xmpp. Unit tests are
typically initiated via command-line or IDE. App Engine requests are
limited to 30 seconds. A unit test suite may contain thousands of
tests that take far longer than 30 seconds to execute. How do we
bridge the gap?
Is there a Python equivalent to Cloud Cover?
I would love my app to have a webpage of checkboxes that lets me select which tests to run and displays recent results for each test (preferably without me writing my own version of unittest / unittest2).
While some of my tests may be local only, I think I may need to run some of these tests on production as well. I may also have additional "live-only" tests.
I guess my concern is how to run the local tests on live without having to maintain two separate sets of tests, and also how to run some tests on live without messing up the live data in the datastore (yes, some tests may use stubs or mocks, but I might want to check the production datastore or a staged version of it).
I haven't tried running a unit test on live. I assume unittest would log results via stdout to the administration console, which probably wouldn't be as useful as having the results show up on the webpage used to run the tests.
I would also want to set up staging before production by changing the version number in app.yaml (combined with Namespaces, Versions and Multitenancy...). I could run tests on staging as well.
Anyone got a basic approach I should try?

Check out aeta. It runs tests in the task queue, and you can access these tests from the web interface or the command line.
For testing the live datastore without messing up your data, you could try using a staging or testing server.

Have you tried the remote_api console? It lets you run unit tests from your local directory directly against the live App Engine runtime.
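Roughly, something like this, run from the project directory; the app id, handler path, and the "tests" directory are placeholders for your own setup, and it assumes the remote_api handler is enabled in app.yaml:

    # run_live_tests.py -- rough sketch only; app id, handler path and test
    # directory are placeholders for your own setup.
    import getpass
    import unittest

    from google.appengine.ext.remote_api import remote_api_stub


    def auth_func():
        return raw_input('Email: '), getpass.getpass('Password: ')


    # After this call, datastore calls made by the tests go to the live app.
    remote_api_stub.ConfigureRemoteApi(
        None,                       # read the app id from the server
        '/_ah/remote_api',          # the handler enabled in app.yaml
        auth_func,
        'your-app-id.appspot.com')

    suite = unittest.defaultTestLoader.discover('tests')
    unittest.TextTestRunner(verbosity=2).run(suite)

Keep in mind everything still executes on your local machine; only the datastore (and other API) calls are proxied to the live app, so this is closer to testing against live data than testing on the live runtime itself.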

Related

Best way to architect test development in Python so that tests can be run either with a mock or without a mock?

I'm starting a fresh Python project and I want to write unit and integration tests with mocking and stubbing. However, I would like to run these tests during the build pipeline against actual services by spawning the dependent services in Docker containers. What is the best way to architect my project so that I can easily enable and disable mocking, such that:
tests are run with mocks in local branches
tests are run with actual services (with mocks disabled) in CI build pipeline
I'm using Python 3 and pytest for this.
I asked this question on the Software Quality Assurance & Testing Stack Exchange site and received an answer that works. This can be achieved by inheriting two different test classes from one abstract test class and running one or the other depending on the environment the tests are run in.
More details in this answer:
https://sqa.stackexchange.com/a/44745/45222
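In outline, the pattern looks something like this; the client class, module names, and the CI environment variable are placeholders, not part of the linked answer:

    # test_payment_client.py -- sketch of the "abstract base test class" pattern.
    import abc
    import os
    from unittest import mock

    import pytest


    class PaymentClientContract(abc.ABC):
        """Tests written once, against whatever client the subclass provides."""

        @abc.abstractmethod
        def make_client(self):
            ...

        def test_charge_returns_receipt(self):
            client = self.make_client()
            receipt = client.charge(amount=100)
            assert receipt['status'] == 'ok'


    class TestWithMock(PaymentClientContract):
        """Always runs: the dependency is replaced by a mock."""

        def make_client(self):
            client = mock.Mock()
            client.charge.return_value = {'status': 'ok'}
            return client


    @pytest.mark.skipif(os.environ.get('CI') != 'true',
                        reason='real service only available in the CI pipeline')
    class TestWithRealService(PaymentClientContract):
        """Runs only in CI, where the real service container has been started."""

        def make_client(self):
            from myproject.payment import PaymentClient  # placeholder import
            return PaymentClient(base_url=os.environ['PAYMENT_URL'])

pytest only collects the Test-prefixed subclasses, so the shared tests run once per concrete client, and the real-service variant is skipped outside the pipeline.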

How could I run unit tests in Supervisor context?

I am building a complex Python application that distributes data between very different services, devices, and APIs. Obviously, there is a lot of private authentication information. I am handling it by passing it as environment variables to the Supervisor-managed process, using the environment= keyword in the configuration file.
I also have a test that checks whether all API authentication information is set up correctly and whether the external APIs are available. Currently I am using nosetests as the test runner.
Is there a way to run the tests in the Supervisor context without brute-force parsing the Supervisor configuration file within my test runner?
I decided to use Celery, which was already installed on my machine. My API queries are wrapped as tasks and sent to Celery. Given this setup, I created my test runner as just another task that runs the API tests.
The web application tests do not need the stored credentials, but they run fine in the Celery context as well.
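A stripped-down sketch of that test-runner task; the broker URL and the path to the API tests are placeholders for my real configuration:

    # tasks.py -- the worker is started by Supervisor, so it already carries the
    # credential environment variables, and nosetests inherits them from it.
    import subprocess

    from celery import Celery

    app = Celery('tasks', broker='redis://localhost:6379/0')


    @app.task
    def run_api_tests():
        proc = subprocess.Popen(['nosetests', 'tests/api'],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output, _ = proc.communicate()
        return {'returncode': proc.returncode,
                'output': output.decode('utf-8', 'replace')}

Calling run_api_tests.delay() then executes the suite inside the Supervisor-managed worker, so nothing outside that environment needs the credentials.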

System/Integration Testing a Flask Web application

I have a web application which uses Flask and talks to a database at the backend. It also uses Amazon AWS S3. I have written unit tests for this package.
The question is that I want to write integration tests where I test the external dependencies as well. I have been reading about integration tests and system tests. Do I create a new package, let's say FooSystemTests or FooIntegrationTests, or should they be part of my application package? I am planning to make this part of my deployment process. My plan was that the integration tests would exercise my external dependencies, and the system tests would check things like what I get back when I go to a route (testing the system as a black box). Also, I read about Selenium testing; should that be system or integration?
Any thoughts/ideas will be very helpful.
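For the black-box part, I was imagining something along these lines, with create_app and the route being placeholders for the real application factory and URLs:

    # foo_system_tests/test_routes.py -- sketch of a black-box style route check.
    import pytest

    from foo import create_app


    @pytest.fixture
    def client():
        app = create_app()
        app.config['TESTING'] = True
        with app.test_client() as client:
            yield client


    def test_home_page_renders(client):
        response = client.get('/')
        assert response.status_code == 200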

Switching branches while running python unit tests

This is more of a general question about safety in testing than a specific code problem. Let's say I have a feature branch in my git repository; I always run a suite of unit tests before I merge back into develop or master. But these unit tests often take a while (on the order of an hour), so I generally kick off the tests and then switch branches so I can work on coding other things... I'm assuming this is safe because the .pyc files are already created?
I recommend that you offload test execution to a proper continuous-integration system such as Jenkins or Travis. Switching out the entire source tree in the middle of a test run is bound to cause weird problems.
Also consider that your test suite likely contains both unit tests and integration tests. Unit tests should be fast! A runtime of 0.1 seconds is a slow unit test. Tests that touch the filesystem, communicate with a database, send packets over the network, and so on are integration tests. You might scale those back to running once or twice a day. See Working Effectively With Legacy Code by Michael Feathers.
If proper CI is not a preferred option for whatever reason and you are OK with some scripting, you could write a script that copies a git revision (git archive, or git-new-workdir plus a checkout there) and runs the tests in that location while you keep working on your changes. This worked fine for me with a large Java project.
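For the Python case in the question, a rough version of such a script; the pytest invocation is just a placeholder for whatever command runs your suite:

    # snapshot_and_test.py -- export a revision to a throwaway directory and run
    # the tests there, so switching branches in the real working tree is safe.
    import subprocess
    import sys
    import tempfile


    def main(revision='HEAD'):
        workdir = tempfile.mkdtemp(prefix='test-snapshot-')
        archive = subprocess.Popen(['git', 'archive', revision],
                                   stdout=subprocess.PIPE)
        subprocess.check_call(['tar', '-x', '-C', workdir], stdin=archive.stdout)
        archive.wait()
        return subprocess.call([sys.executable, '-m', 'pytest'], cwd=workdir)


    if __name__ == '__main__':
        sys.exit(main(*sys.argv[1:]))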

How to build web client/system tests on an App Engine project?

Is there a common/best practice way to build a client-side javascript test platform for GAE python?
I know the GAE SDK has a Testbed API, and there's nose-gae. Both of these seem to be designed for unit testing of server-side code. I'm currently running on django-nonrel and using Django's unit test framework for this.
I'm looking for a similar testing framework where I can load fixture data, but run a real HTTP server so I can run Selenium tests against it. This way I can test my JavaScript AJAX calls against fixed data.
I'm currently achieving this by having Django's test framework launch the GAE SDK dev_appserver on a separate thread after loading fixtures. This is able to handle HTTP calls from a browser. It mostly works for me, but there are a few cases with flaky threading issues. The worst part is that it's rather hacky and calls deep into the SDK to launch dev_appserver. This is going to break with dev_appserver2.
I'm wondering if there's a better way to solve this. With all the GAE devs out there, someone else must be running system tests against fixed data?
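For reference, this is roughly the shape I would prefer: dev_appserver launched as a plain subprocess rather than by calling into SDK internals; the port, datastore path, and fixture step are made up:

    # Sketch only -- not what I run today, just the structure I am aiming for.
    import subprocess
    import time
    import unittest

    from selenium import webdriver


    class AjaxSystemTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            cls.server = subprocess.Popen([
                'dev_appserver.py',
                '--port=8081',
                '--datastore_path=/tmp/test_datastore',
                '.',
            ])
            time.sleep(5)  # crude; polling the port until it accepts connections is better
            # ...load fixture data into the test datastore here...
            cls.browser = webdriver.Firefox()

        @classmethod
        def tearDownClass(cls):
            cls.browser.quit()
            cls.server.terminate()

        def test_ajax_call_returns_fixture_data(self):
            self.browser.get('http://localhost:8081/some-page')
            # assertions against the rendered page / AJAX responses go here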
