My current project has a policy of 100% code coverage from its unit tests. Our continuous integration service will not allow developers to push code without 100% coverage.
As the project has grown, so has the time to run the full test suite. While developers typically run a subset of tests relevant to the code they are changing, they will usually do one final full run before submitting to CI, and the CI server itself also runs the full test suite.
Unit tests by their nature are highly parallelizable, as they are self-contained and stateless from test to test. They return only two pieces of information: pass/fail and the lines of code covered. A map/reduce solution seems like it would work very well.
Are there any Python testing frameworks that will run tests across a cluster of machines with code coverage and combine the results when finished?
I don't know of any testing frameworks that will run tests distributed across a group of machines, but nose has support for parallelizing tests on the same machine using multiprocessing.
At minimum, that might be a good place to start for building a distributed testing framework.
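For instance (a hedged sketch; the flag values are just examples), nose's multiprocess plugin fans the suite out over local worker processes:

    # Run the suite in 4 worker processes; kill any test running over 2 minutes.
    nosetests --processes=4 --process-timeout=120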
I don't think there is a framework that exactly matches your needs.
I know py.test has the xdist plugin, which adds distributed test executors. You could build your CI infrastructure on top of it.
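For example (a hedged sketch; the host names and package name are placeholders), xdist can push tests to ssh-reachable machines, and coverage.py can merge per-process coverage data afterwards:

    # Distribute tests to two hosts over ssh, rsyncing the package across.
    pip install pytest pytest-xdist
    py.test -d --tx ssh=user@host1//python=python \
            --tx ssh=user@host2//python=python \
            --rsyncdir mypackage .

    # For coverage, run each worker under coverage.py's parallel mode, copy
    # the resulting .coverage.<host>.<pid> data files back to one machine,
    # then merge and report:
    coverage run -p -m pytest
    coverage combine
    coverage report -m

You would still have to script collecting the coverage data files from each host, but coverage combine does the reduce step for you.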
Not exactly what you are looking for, but the closest thing I can recall comes from the Hadoop groups: using JUnit for testing with Hadoop. Here is the mail. As mentioned in the mail, search for the GridUnit papers.
Unit testing with Hadoop in a distributed way is very interesting. Any framework around this would be very useful, but developing one shouldn't be very difficult. If you're interested, let me know.
I understand that we use CI to test software after any changes are made to it. It will kick off unit tests and system-level tests as soon as someone checks in.
Now, where do the unit and functional test scripts we wrote fit in here?
Am I right that CI won't have any built-in tests (unit, functional, system)? "We" write all those test scripts, but have CI kick them off?
CI is intended to be a system which provides a framework for all your unit tests and functional tests. The CI server will kick off the unit tests, build the project, run the functional tests, and take whatever actions you specify.
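For instance (a hedged sketch; the paths and the functional-test script name are placeholders), a CI job for a Python project might be configured to run something like:

    # Commands the CI server runs on every check-in; you write the tests,
    # the CI just invokes them and reports the outcome.
    pip install -r requirements.txt        # build step: install dependencies
    python -m unittest discover -s tests   # your unit tests
    python tests/functional/run_all.py     # your functional tests (hypothetical script)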
This is more of a general question about safety in testing than a specific code problem. Let's say I have a feature branch in my git repository; I always run a suite of unit tests before I merge back into develop or master. But these unit tests often take a while (on the order of an hour), so I generally kick off the tests and then change branches in the repository so I can work on coding other things. I'm assuming this is safe because the .pyc files are already created?
I recommend that you offload test execution to a proper continuous-integration system such as Jenkins or Travis. Switching out the entire source tree in the middle of a test run is bound to cause weird problems.
Also consider that your test suite likely contains both unit tests and integration tests. Unit tests should be fast! A runtime of 0.1 seconds is a slow unit test. Tests that touch the filesystem, communicate with a database, send packets over the network, and so on are integration tests. You might scale those back to running once or twice a day. See Working Effectively With Legacy Code by Michael Feathers.
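If you do use pytest, one hedged way to separate the two kinds of test is a custom marker (the marker name here is our own choice, and should be registered under [pytest] markers in pytest.ini to avoid warnings):

    # test_example.py -- a sketch assuming pytest.
    import pytest

    def add(a, b):
        return a + b

    def test_add():
        # Pure in-memory logic: a fast unit test.
        assert add(2, 3) == 5

    @pytest.mark.integration
    def test_writes_file(tmp_path):
        # Touches the filesystem: by Feathers' definition, an integration test.
        path = tmp_path / "out.txt"
        path.write_text("hello")
        assert path.read_text() == "hello"

Then pytest -m "not integration" gives you the fast loop for everyday work, and the full suite runs once or twice a day.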
If proper CI is not a preferred option for whatever reason and you are OK with some scripting, you could write a script that copies a git revision (e.g. with git archive, or git-new-workdir plus a checkout there) and runs the tests in that location while you keep going with further changes. This worked fine for me on a large Java project.
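A hedged sketch of the same idea using git worktree, a modern built-in alternative to git-new-workdir (the paths are illustrative):

    # Check out a detached snapshot of the current revision in a second
    # directory, run the tests there, then clean up.
    git worktree add --detach ../myproject-test HEAD
    (cd ../myproject-test && python -m pytest)
    git worktree remove ../myproject-test

Because the tests run on a frozen copy of the tree, switching branches in your main working directory can't interfere with them.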
Has anyone run tests on production (or on staging), as opposed to local tests on the dev server? Is it a bad idea to try?
At first glance, App Engine and unit tests aren't a great fit. App Engine requests can only be driven by HTTP or XMPP. Unit tests are typically initiated via the command line or an IDE. App Engine requests are limited to 30 seconds. A unit test suite may contain thousands of tests that take far longer than 30 seconds to execute. How do we bridge the gap?
Is there a Python equivalent to Cloud Cover?
I would love my app to have a webpage of checkboxes which allows me to select which tests to run and displays recent results for each test (preferably without me writing my own version of unittest/unittest2).
While some of my tests may be local-only, I think I may need to run some of them on production as well. I may also have additional "live-only" tests.
I guess my concern is how to run the local tests on live without having to maintain two separate sets of tests, and also how to run some tests on live without messing up the live data in the datastore. (Yes, some tests may use stubs or mocks, but I might want to check the production datastore, or a staged version of it?)
I haven't tried running a unit test on live. I assume unittest's stdout would end up in the administration console logs, which probably wouldn't be as useful as having the results show up on the webpage used for running the tests.
I would also want to set up staging before production by changing the version in app.yaml (combined with Namespaces, Versions and Multitenancy...). I could run tests on staging as well.
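For example (a hedged sketch; "staging" is an arbitrary version name and "myapp" a placeholder app id), the appcfg tool can deploy a non-default version that tests can target:

    # Upload the current code as version "staging" without touching the default.
    appcfg.py -V staging update .
    # It is then reachable at http://staging.myapp.appspot.com
    # (or https://staging-dot-myapp.appspot.com over SSL).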
Anyone got a basic approach I should try?
Check out aeta. It runs tests in the task queue, and you can access these tests from the web interface or the command line.
For testing the live datastore without messing up your data, you could try using a staging or testing server.
Have you tried the remote_api console? It will allow you to run unit tests from your local directory straight against the live App Engine runtime.
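A hedged sketch of wiring that up from a local Python 2 script (the hostname is a placeholder); once configured, datastore calls made by your tests go to the live app, so point it at a staging version rather than the default:

    from google.appengine.ext.remote_api import remote_api_stub
    import getpass

    def auth_func():
        # remote_api prompts for admin credentials.
        return raw_input('Email: '), getpass.getpass('Password: ')

    remote_api_stub.ConfigureRemoteApi(
        None,                 # infer the app id from the server
        '/_ah/remote_api',    # the handler mapped in app.yaml
        auth_func,
        'myapp.appspot.com')  # placeholder hostname

    # From here, db/ndb calls in your unit tests hit the remote datastore.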
I am trying to do some automated functional testing in VMware using AutoIt scripts. The VMs are controlled by a little Python script on the host machine (it deploys test files into the VMs, executes them, and collects the result data). But it now looks like a lot of work to make this script manage and execute a whole series of test cases.
Is there a testing framework (preferably Python) that executes tests, monitors their progress (failed/passed/timeout), and controls VMware? Thanks a lot!
Cheers,
Zhe
There are lots of continuous integration tools that may do what you want.
One tool implemented in Python that may fit your needs is Buildbot: it can manage running builds and tests across multiple machines and consolidate the results.
I am starting to work on a hobby project with a Python codebase and I would like to set up some form of continuous integration (i.e. running a battery of test-cases each time a check-in is made and sending nag e-mails to responsible persons when the tests fail) similar to CruiseControl or TeamCity.
I realize I could do this with hooks in most VCSes, but that requires that the tests run on the same machine as the version control server, which isn't as elegant as I would like. Does anyone have any suggestions for a small, user-friendly, open-source continuous integration system suitable for a Python codebase?
We run Buildbot with Trac at work. I haven't used it much since my codebase isn't part of the release cycle yet, but we run the tests on different environments (OS X/Linux/Windows), it sends emails, and it's written in Python.
One possibility is Hudson. It's written in Java, but there's integration with Python projects:
Hudson embraces Python
I've never tried it myself, however.
(Update, Sept. 2011: After a trademark dispute Hudson has been renamed to Jenkins.)
I second the Buildbot with Trac integration. You can find more information about the integration on the Buildbot website. At my previous job, we wrote and used the plugin they mention (tracbb).
What the plugin does is rewrite all of the Buildbot URLs so you can use Buildbot from within Trac (e.g. http://example.com/tracbb).
The really nice thing about Buildbot is that the configuration is written in Python. You can integrate your own Python code directly to the configuration. It's also very easy to write your own BuildSteps to execute specific tasks.
We used BuildSteps to get the source from SVN, pull the dependencies, publish test results to WebDAV, etcetera.
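To give a flavor (a hedged sketch against the modern buildbot.plugins API; the repository URL, builder name, and worker name are placeholders), a master.cfg fragment defining a checkout-and-test builder looks roughly like this:

    # master.cfg fragment; `c` is the BuildmasterConfig dict Buildbot expects.
    from buildbot.plugins import steps, util

    factory = util.BuildFactory()
    factory.addStep(steps.Git(repourl='https://example.com/myproject.git'))
    factory.addStep(steps.ShellCommand(
        command=['pip', 'install', '-r', 'requirements.txt'],
        name='install deps'))
    factory.addStep(steps.ShellCommand(
        command=['python', '-m', 'pytest'],
        name='run tests'))

    c['builders'] = [
        util.BuilderConfig(name='tests', workernames=['worker1'],
                           factory=factory),
    ]

Because this is ordinary Python, a custom BuildStep (for example, one that publishes results somewhere) is just a subclass you import into the same file.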
I wrote an X10 interface so we could send signals with build results. When the build failed, we switched on a red lava lamp. When the build succeeded, a green lava lamp switched on. Good times :-)
We use both Buildbot and Hudson for Jython development. Both are useful, but have different strengths and weaknesses.
Buildbot's configuration is pure Python and quite simple once you get the hang of it (look at the epydoc-generated API docs for the most current info). Buildbot makes it easier to define non-testing tasks and distribute the testers. However, it really has no concept of individual tests, just textual, HTML, and summary output, so if you want to have multi-level browsable test output and so forth you'll have to build it yourself, or just use Hudson.
Hudson has terrific support for drilling down from overall results into test suites and individual tests; it is also great for comparing test output between builds. But the distributed (master/slave) setup is comparatively complicated, because you need a Java environment on the slaves too, and Hudson is less tolerant of flaky network links between the master and slaves.
So, to get the benefits of both tools, we run a single instance of Hudson, which catches the common test failures, then we do multi-platform regression with Buildbot.
Here are our instances:
Jython Hudson
Jython buildbot
We are using Bitten, which is integrated with Trac. And it's Python-based.
TeamCity has some Python integration.
But TeamCity:
is not open source
is not small, but rather feature-rich
is free for small to mid-sized teams.
I have very good experiences with Travis-CI for smaller code bases.
The main advantages are:
setup is done in less than half a screen of config file (see the sketch after these lists)
you can do your own installation or just use the free hosted version
semi-automatic setup for github repositories
no account needed on website; login via github
Some limitations:
Python is not supported as a first-class language (as of the time of writing, but you can use pip and apt-get to install Python dependencies; see this tutorial)
code has to be hosted on github (at least when using the official version)
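To illustrate how short the setup is (a hedged sketch; the package names are placeholders, and since Python was not first-class at the time, dependencies are installed by hand):

    # .travis.yml
    install:
      - sudo apt-get install -y python-numpy    # system packages via apt-get
      - pip install -r requirements.txt         # Python packages via pip
    script:
      - python -m unittest discover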