Switching branches while running Python unit tests

This is more of a general question about safety in testing than a specific code problem. Let's say I have a feature branch in my Git repository, and I always run a suite of unit tests before I merge back into develop or master. But these unit tests often take a while (on the order of an hour), so I generally kick off the tests and then change branches in my repository so I can work on other things. I'm assuming this is safe because the .pyc files are already created?

I recommend that you offload test execution to a proper continuous-integration system such as Jenkins or Travis. Switching out the entire source tree in the middle of a test run is bound to cause weird problems: Python imports modules lazily, so a test that triggers an import partway through the run can pick up source from the other branch, and a cached .pyc is recompiled as soon as its source file changes.
Also consider that your test suite likely contains both unit tests and integration tests. Unit tests should be fast; a runtime of 0.1 seconds is a slow unit test. Tests that touch the filesystem, communicate with a database, send packets over the network, and so on are integration tests, and you might scale those back to running once or twice a day. See Working Effectively with Legacy Code by Michael Feathers.
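If you go that route, pytest markers are a low-effort way to split the suite. The following is a hedged sketch of my own (the marker name and tests are illustrative, not from the answer above); the fast pre-merge run then becomes pytest -m "not integration":

# Sketch: separate slow integration tests from fast unit tests with a
# pytest marker (register it in pytest.ini: markers = integration).
import sqlite3

import pytest


@pytest.mark.integration
def test_order_round_trips_through_a_real_database(tmp_path):
    # Touches the filesystem and a database, so it is an integration test.
    conn = sqlite3.connect(str(tmp_path / "orders.db"))
    conn.execute("CREATE TABLE orders (id INTEGER)")
    conn.execute("INSERT INTO orders VALUES (1)")
    assert conn.execute("SELECT id FROM orders").fetchone() == (1,)


def test_order_ids_are_summed():
    # Pure in-memory logic: a unit test, cheap enough to run on every merge.
    assert sum([1, 2, 3]) == 6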

If proper CI is not an option for whatever reason and you are OK with some scripting, you could write a script that copies a snapshot of the revision under test (git archive, or git-new-workdir plus a checkout there) and runs the tests in that location while you keep working on your changes. This worked fine for me on a large Java project.
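With modern Git the snapshot step can be a worktree; here is a minimal sketch, assuming a sibling directory is acceptable (the path and test command are placeholders):

# Freeze the current revision in a second checkout and test it there,
# leaving the main working copy free for branch switching.
git worktree add ../myproj-tests HEAD
(cd ../myproj-tests && python -m pytest)
git worktree remove ../myproj-tests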

Related

Best way to architect test development in Python so that tests can be run either with a mock or without a mock?

I'm starting a fresh Python project and I want to write unit and integration tests with mocking and stubbing. However, in the build pipeline I would like to run these tests against the actual services, spawned as Docker containers. What is the best way to architect my project so that I can easily enable and disable mocking, so that:
tests are run with mocks in local branches
tests are run with actual services (with mocks disabled) in CI build pipeline
I'm using Python 3 and pytest for my purposes.
I asked this question on the Software Quality Assurance & Testing Stack Exchange site and received an answer that works. This can be achieved by inheriting two different test classes from one abstract test class and running one or the other depending on the environment the tests are run in.
More details in this answer:
https://sqa.stackexchange.com/a/44745/45222
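For concreteness, a minimal sketch of that pattern with pytest; the client and class names are illustrative, not taken from the linked answer:

# The shared tests live in a base class that pytest does not collect
# (its name does not start with "Test"); each subclass supplies a client.
import os
from unittest import mock

import pytest


class PaymentClient:
    """Placeholder for the real client that talks to the live service."""
    def __init__(self, base_url):
        self.base_url = base_url

    def charge(self, amount):
        raise NotImplementedError("real HTTP call goes here")


class PaymentContractTests:
    """Abstract: subclasses must set self.client in setup_method."""
    def test_charge_returns_a_receipt_id(self):
        assert "id" in self.client.charge(amount=100)


class TestWithMocks(PaymentContractTests):
    # Runs everywhere, including local branches.
    def setup_method(self, method):
        self.client = mock.Mock()
        self.client.charge.return_value = {"id": "fake-receipt"}


@pytest.mark.skipif(not os.environ.get("CI"),
                    reason="live service only available in the CI pipeline")
class TestAgainstLiveService(PaymentContractTests):
    # Runs only in CI, where the dockerized service has been started.
    def setup_method(self, method):
        self.client = PaymentClient("http://localhost:8080")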

Running unit tests on production Google App Engine

Has anyone run tests on production (or on staging) as opposed to local tests on the dev server? Is it a bad idea to try?
At first glance, App Engine and unit tests aren't a great fit. App Engine requests can only be driven by HTTP or XMPP. Unit tests are typically initiated via the command line or an IDE. App Engine requests are limited to 30 seconds, while a unit test suite may contain thousands of tests that take far longer than that to execute. How do we bridge the gap?
Is there a Python equivalent to cloud cover?
I would love my app to have a webpage of checkboxes that lets me select which tests to run and displays recent results for each test (preferably without me writing my own version of unittest/unittest2).
While some of my tests may be local-only, I think I may need to run some of them on production too. I may also have additional "live only" tests.
I guess my concern is how to run the local tests on live without having to maintain two separate sets of tests, and how to run some tests on live without messing up the live data in the datastore. (Yes, some tests may use stubs or mocks, but I might want to check the production datastore, or a staged version of it.)
I haven't tried running a unit test on live; I assume unittest would log results via stdout to the administration console, which probably wouldn't be as useful as having the results show up on a webpage that is used for running the tests.
I would also want to set up staging before production by changing the version number in app.yaml (combined with Namespaces, Versions and Multitenancy). I could run tests on staging as well.
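For reference, on the legacy Python runtime that staging switch is a one-line change in app.yaml; a sketch with placeholder names (the staging version is then served at staging-dot-myapp.appspot.com, separate from the default serving version):

# Legacy app.yaml: same code, deployed under a throwaway version.
application: myapp
version: staging
runtime: python27
api_version: 1
threadsafe: true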
Anyone got a basic approach I should try?
Check out aeta. It runs tests in the task queue, and you can access these tests from the web interface or the command line.
For testing the live datastore without messing up your data, you could try using a staging or testing server.
Have you tried the remote_api console? It will allow you to run unit tests from your local directory straight against the live App Engine runtime.
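As a sketch of that remote_api route (legacy Python SDK; the OAuth helper, app ID, and test directory here are my assumptions, so check the API against your SDK version):

# Run the local suite against the live app through remote_api.
import unittest
from google.appengine.ext.remote_api import remote_api_stub

remote_api_stub.ConfigureRemoteApiForOAuth(
    'your-app-id.appspot.com', '/_ah/remote_api')
suite = unittest.defaultTestLoader.discover('tests')
unittest.TextTestRunner(verbosity=2).run(suite)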

Mercurial: running remote regression tests automatically on every commit

I commit every time I make some changes that I think might work: I don't do extensive testing before a commit. Also, my commits will soon be automatically pushed to a remote repository. (I'm the only developer, and I have to add features or rewrite parts of the code many times a day.)
I'd like to set up a remote computer to run regression tests automatically whenever I commit anything; and then email me back the differences report.
What's the easiest way to set this up?
All my code is in Python 3. My own system is Windows 7, ActiveState Python, TortoiseHG, and Wing IDE. I can set up the remote computer as either Linux or Windows. The application is all command-line, with text input and output.
Use a continuous integration server such as Buildbot or Jenkins and configure it to monitor the repository, then run the tests with that. Buildbot is written in Python, so you should feel right at home with it.
If you feel it's wasteful to make Buildbot or Jenkins poll the repository (even though hg pull uses very few resources when there are no new changesets), then you can configure a changegroup hook in the repository to trigger a build in the CI server.
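A sketch of such a hook, assuming Jenkins with its "trigger builds remotely" option enabled; host, job name, and token are placeholders:

# .hg/hgrc on the repository that receives the pushes
[hooks]
changegroup.ci = curl -s "http://ci-server:8080/job/myproject/build?token=SECRET"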
I would recommend setting up Buildbot. You can have it watch a remote repository (Mercurial is supported) and automatically kick off a build when the repository changes. In your case, a build would just be running your test suite.
Its waterfall display allows you to see which builds failed and when, in relation to commits from the repository. It can even notify you, with the offending commit, when something breaks.
Jenkins is another option, supporting most of the same features. There are even cloud hosting options, like ShiningPanda that can host it for you, and they offer free licensing for open-source projects.

Distributed unit testing and code coverage in Python

My current project has a policy of 100% code coverage from its unit tests. Our continuous integration service will not allow developers to push code without 100% coverage.
As the project has grown, so has the time to run the full test suite. While developers typically run a subset of tests relevant to the code they are changing, they will usually do one final full run before submitting to CI, and the CI server itself also runs the full test suite.
Unit tests by their nature are highly parallelizable, as they are self-contained and stateless from test to test. They return only two pieces of information: pass/fail and the lines of code covered. A map/reduce solution seems like it would work very well.
Are there any Python testing frameworks that will run tests across a cluster of machines with code coverage and combine the results when finished?
I don't know of any testing frameworks that will run tests distributed across a group of machines, but nose has support for parallelizing tests on the same machine using multiprocessing. At a minimum, that might be a good place to start for building a distributed testing framework.
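The nose invocation for that is short; a sketch, with the process count and timeout as placeholders:

# Fan the tests out across eight local worker processes.
nosetests --processes=8 --process-timeout=600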
I think there is no framework that matches your needs exactly.
I know py.test has the xdist plugin, which adds distributed test executors; you could build your CI infrastructure on top of it.
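A sketch of an xdist run that pushes tests to two remote machines over ssh; the hosts, interpreter paths, and package name are placeholders, and the per-machine coverage files would still have to be gathered and merged afterwards (for example with coverage combine):

py.test -d --tx ssh=user@build1//python=python3 \
        --tx ssh=user@build2//python=python3 \
        --rsyncdir mypkg mypkg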
Not exactly what you are looking for, but the closest thing I can recall, from the Hadoop groups, is using JUnit for testing with Hadoop. Here is the mail; as mentioned there, search for the GridUnit papers.
Unit testing with Hadoop in a distributed way is very interesting. Any framework around this would be very useful, but developing one shouldn't be very difficult. If you're interested, let me know.

Remembering to run tests before commit

We have a decent set of unit tests on our code, and those unit tests run in under 2 minutes. We also use TeamCity to do a build and to run the tests after each check-in. However, we still get issues where a developer "forgets" to run all the tests before a commit, resulting in a TeamCity failure which, if the check-in was done at 6 PM, may stay broken overnight.
"Forgets" is a generic term; there are a couple of other common reasons why even remembering to run the tests could still end in a TeamCity failure, such as:
-> A developer only checks in some of the modified files in his/her workspace.
-> A file was modified outside of Eclipse, such that Eclipse's Team Synchronize perspective does not detect it as dirty.
How do you deal with this in your organization?
We are thinking of introducing a "check-in procedure" for developers: an automated tool that runs all the unit tests and then commits all of the "dirty" files in your workspace. Have you had any experience with such a process? Are you aware of any tools that may facilitate it? Our dev environment is Python, using Eclipse's PyDev plugin.
In one of the teams I worked on before, we had an agreement that anyone who broke the tests bought bacon sandwiches for the whole team the next morning. It's extreme, but it works perfectly!
I think it is more of a social problem than a deficiency of the automated systems.
Yes, you can improve the systems in place, but they will be no match for someone thinking through the implications of their commit and testing it before they hit commit.
For Mercurial you can use hooks that run the tests and only allow the commit or push on success. This can make each push take a while, but the developer has to run those tests anyway.
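A sketch of such a hook in .hg/hgrc: pretxncommit is a standard Mercurial hook that aborts the commit when its command exits non-zero (the test command itself is a placeholder):

[hooks]
pretxncommit.tests = python -m pytest -q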
Or you can have your own set of shell scripts that run the tests and only then run the commit command. For example, for Django and SVN it could look as simple as this:
./manage.py test && svn commit "$@"
Or there is another way: anyone who commits code that doesn't pass the tests pays some sum. Soon people will remember to test, for they won't like the notion of paying money ;-)
TeamCity has some support for pretested commit; if your development team is using a supported IDE, you might look into that.
In my company, we don't worry about it too much - our pattern looks something like this.
(a) each developer has their own project configuration in TeamCity, with roots pointing to their own sandbox. They are allowed to do anything they like here.
(b) the development team has an integration sandbox, where all changes are delivered. A project encapsulates the configurations which monitor this branch in the source control system. Dev leads get to make up the rules here, but that rule is almost always "it must stay green". I'd have to look at the exact percentage of clean builds - it's not a perfect record, but it's high enough that I've never been tempted to insist that the developers be more disciplined about running tests.
(c) the actual delivery comes from a Main stream, which Shall Stay Green (tm). The dev lead is responsible for delivering a clean snapshot of integration to the main stream on a well-defined schedule. This project is the one that actually generates the installers that are delivered to testing, the bits that go into escrow, etc.
One reason that you might get away with a more aggressive policy than we do is that our build cycle is slow - on the order of four hours. If our cycle were an order of magnitude smaller, with a poor success rate, I might argue a different case.
For Git, you can use a pre-commit hook, as described here: http://francoisgaudin.com/2011/02/16/run-django-tests-automatically-before-committing-on-git/
Run Django tests automatically before committing on Git
Since I often forget to run unit tests before committing, I spend a lot of time looking for the bad commit when I find regressions 3 commits later.
However it's really easy to automatically run tests before each commit. In .git/hooks/pre-commit, put:
#!/bin/sh
python manage.py test
exit $?
Then chmod 755 this file and it's done. I really love git :-)
Do not forget to source your virtualenv before committing.
Note that tests are run on your working tree and not on the commit itself, so if you commit only a part of your working tree, the hook may fail while your commit would pass the tests.
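To address that last caveat, a common variant (a sketch of my own, not from the linked post) stashes the unstaged changes so the tests see exactly what is being committed:

#!/bin/sh
# Test the staged state, not the whole working tree.
git stash --quiet --keep-index --include-untracked
python manage.py test
status=$?
git stash pop --quiet
exit $status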
