I understand that we use CI to test software after any changes are made to it. It will kick off unit tests and system-level tests as soon as someone checks in.
Now, where do the unit and functional test scripts we wrote fit in here?
Am I right that CI won't have any built-in tests (unit, functional, system)? "We" write all those test scripts, but CI kicks them off?
CI is intended to be a system that provides a framework for all of your unit and functional tests. The CI server will kick off the unit tests, build the project, run the functional tests, and take whatever actions you specify.
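To make that concrete, here is a minimal sketch (in Python, with placeholder commands) of what a CI job typically executes on every check-in: nothing built in, just your own test scripts run in sequence, with the build marked failed on the first error.

```python
#!/usr/bin/env python3
"""Minimal sketch of a CI job: run the project's own test scripts in order
and fail the build on the first error. The commands are placeholders --
substitute whatever your project actually uses."""
import subprocess
import sys

STEPS = [
    ["python", "-m", "pytest", "tests/unit"],        # unit tests you wrote
    ["python", "setup.py", "build"],                 # build step
    ["python", "-m", "pytest", "tests/functional"],  # functional tests you wrote
]

for step in STEPS:
    if subprocess.run(step).returncode != 0:
        sys.exit(1)  # the "appropriate action": mark the build red
```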
Suppose you have Projects A and B in their respective repositories, each with its own unit test workflow. Also suppose Project B depends on package-a, defined by Project A. Now, suppose there is a PR on Project A that increases the minor or patch version of package-a. In that case, we want an integration test workflow from Project A to run the unit test workflow of Project B, while making sure that Project B installs the version of package-a that is being worked on. The integration test workflow from Project A should fail if the unit test workflow from Project B fails.
This is complicated by the fact that Project B might need a special environment setup such as starting a database server for executing its tests. Project A shouldn't need to know how to do this environment setup.
I know of the repository_dispatch and workflow_dispatch events in GitHub Actions. The problem I see with them is that there is no mechanism to communicate back to Project A's integration test workflow about what happened with Project B's unit test workflow.
The solution may modify both Project A and Project B.
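For what it's worth, one possible shape of a solution, sketched in Python against the GitHub REST API (the repo names, token variable, event type, and payload key below are all assumptions): Project A fires a repository_dispatch at Project B, then polls B's workflow runs and fails if B's suite fails. Project B keeps ownership of its own environment setup inside its dispatched workflow.

```python
"""Hedged sketch: trigger Project B's tests from Project A's integration
workflow and propagate the result back. Repo names, the event type, and
the client_payload key are made-up placeholders."""
import os
import time

import requests

API = "https://api.github.com/repos/example-org/project-b"
HEADERS = {
    "Authorization": f"token {os.environ['GH_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# 1. Fire the dispatch. B's workflow reads client_payload to install the
#    in-progress version of package-a; B still owns its environment setup.
requests.post(
    f"{API}/dispatches",
    headers=HEADERS,
    json={
        "event_type": "package-a-integration",
        "client_payload": {"package_a_ref": os.environ["GITHUB_SHA"]},
    },
).raise_for_status()

# 2. Poll B's most recent repository_dispatch run until it completes.
#    (Picking "most recent" is racy if several dispatches overlap; good
#    enough for a sketch.)
time.sleep(15)  # give GitHub a moment to create the run
while True:
    runs = requests.get(
        f"{API}/actions/runs",
        headers=HEADERS,
        params={"event": "repository_dispatch", "per_page": 1},
    ).json()["workflow_runs"]
    if runs and runs[0]["status"] == "completed":
        break
    time.sleep(30)

# 3. Fail A's integration job if B's unit tests did not succeed.
if runs[0]["conclusion"] != "success":
    raise SystemExit("Project B's unit tests failed")
```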
I'm starting a fresh Python project, and I want to write unit and integration tests with mocking and stubbing. However, I would like to run these tests against actual services during the build pipeline, by spawning the dependent services in Docker containers. What is the best way to architect my project so that I can easily enable and disable mocking, so that:
tests are run with mocks in local branches
tests are run with actual services (with mocks disabled) in CI build pipeline
I'm using Python 3 and pytest.
I asked this question on the "Software Quality Assurance & Testing" Stack Exchange site and received an answer that works. This can be achieved by inheriting two different test classes from one abstract test class and running one or the other depending on the environment the tests are run in.
More details in this answer:
https://sqa.stackexchange.com/a/44745/45222
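In outline, the pattern looks like this (a minimal pytest sketch; the client fixture, module names, and the CI environment variable are assumptions, not part of the linked answer):

```python
import os
from unittest import mock

import pytest

IN_CI = os.getenv("CI") == "true"  # assumption: the build pipeline sets CI=true

class PaymentContract:
    """Shared test cases; not collected directly because the class name
    doesn't start with 'Test'. Subclasses supply the client fixture."""

    def test_charge_succeeds(self, client):
        assert client.charge(100) == "ok"

@pytest.mark.skipif(IN_CI, reason="mocked tests run on local branches only")
class TestPaymentMocked(PaymentContract):
    @pytest.fixture
    def client(self):
        stub = mock.Mock()
        stub.charge.return_value = "ok"
        return stub

@pytest.mark.skipif(not IN_CI, reason="live tests run in the build pipeline only")
class TestPaymentLive(PaymentContract):
    @pytest.fixture
    def client(self):
        # Hypothetical real client pointed at the Docker-spawned service.
        from payment_client import PaymentClient
        return PaymentClient(url=os.environ["PAYMENT_SERVICE_URL"])
```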
This is more of a general question about safety in testing than a specific code problem. Let's say I have a feature branch in my Git repository; I always run a suite of unit tests before I merge back into develop or master. But these unit tests often take a while (on the order of an hour), so I generally kick off the tests and then change branches in my repository so I can work on other things... I'm assuming this is safe because the .pyc files are already created?
I recommend that you offload test execution to a proper continuous-integration system such as Jenkins or Travis. Switching out the entire source tree in the middle of a test run is bound to cause weird problems.
Also consider that your test suite likely contains both unit tests and integration tests. Unit tests should be fast! A runtime of 0.1 seconds is a slow unit test. Tests that touch the filesystem, communicate with a database, send packets over the network, and so on are integration tests. You might scale those back to running once or twice a day. See Working Effectively With Legacy Code by Michael Feathers.
If proper CI is not a preferred option for whatever reason and you are OK with some scripting, you could write a script that copies a given Git revision to a separate location (e.g. with `git archive`, or a separate working tree via `git worktree` or the `git-new-workdir` script) and executes the tests there while you keep working on your changes. This worked fine for me on a large Java project.
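A rough Python version of that script, assuming `git worktree` is available and pytest is the test runner:

```python
#!/usr/bin/env python3
"""Sketch: snapshot the current HEAD into a throwaway worktree and run the
suite there, so the main working tree stays free for further edits."""
import subprocess
import sys
import tempfile

def run_tests_on_head() -> int:
    scratch = tempfile.mkdtemp(prefix="test-snapshot-")
    # Check out the current commit into a separate, detached worktree.
    subprocess.run(["git", "worktree", "add", "--detach", scratch, "HEAD"], check=True)
    try:
        # Run the tests against the snapshot, not the live working tree.
        return subprocess.run(["python", "-m", "pytest"], cwd=scratch).returncode
    finally:
        subprocess.run(["git", "worktree", "remove", "--force", scratch], check=True)

if __name__ == "__main__":
    sys.exit(run_tests_on_head())
```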
Has anyone run tests on production (or on staging), as opposed to local tests on the dev server? Is it a bad idea to try?
At first glance, App Engine and unit tests aren't a great fit. App Engine requests can only be driven by HTTP or XMPP. Unit tests are typically initiated via the command line or an IDE. App Engine requests are limited to 30 seconds. A unit test suite may contain thousands of tests that take far longer than 30 seconds to execute. How do we bridge the gap?
Is there a Python equivalent to Cloud Cover?
I would love my app to have a web page of checkboxes that allows me to select which tests to run and displays recent results for each test (preferably without me writing my own version of unittest/unittest2).
While some of my tests may be local-only, I think I may need to run some of them on production as well. I may also have additional "live-only" tests.
I guess my concern is how to run the local tests on live without having to maintain two separate sets of tests, and also how to run some tests on live without messing up the live data in the datastore. (Yes, some tests may use stubs or mocks, but I might want to check the production datastore, or a staged version of it.)
I haven't tried running a unit test on live; I assume unittest would log results via stdout to the administration console, which probably wouldn't be as useful as having the results show up on the web page used for running the tests.
I would also want to set up staging before production by changing the version number in app.yaml (combined with Namespaces, Versions and Multitenancy). I could run tests on staging as well.
Anyone got a basic approach I should try?
Check out aeta. It runs tests in the task queue, and you can access these tests from the web interface or the command line.
For testing the live datastore without messing up your data, you could try using a staging or testing server.
Have you tried the remote_api console? It allows you to run the unit tests in your local directory directly against the live App Engine runtime.
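For example, with the legacy Python 2 SDK the stub is attached roughly like this before the tests run (check the signature against your SDK version; the hostname below is a placeholder):

```python
# Legacy Python 2 App Engine SDK -- verify against your SDK's documentation.
import getpass

from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    return raw_input("Email: "), getpass.getpass("Password: ")

remote_api_stub.ConfigureRemoteApi(
    None,                       # fetch the app id from the server
    "/_ah/remote_api",          # path where remote_api is mounted
    auth_func,
    "your-app-id.appspot.com",  # or a staging version's hostname
)
# From here on, datastore calls in your local tests hit the live datastore.
```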
My current project has a policy of 100% code coverage from its unit tests. Our continuous integration service will not allow developers to push code without 100% coverage.
As the project has grown, so has the time to run the full test suite. While developers typically run a subset of tests relevant to the code they are changing, they will usually do one final full run before submitting to CI, and the CI server itself also runs the full test suite.
Unit tests by their nature are highly parallelizable, as they are self-contained and stateless from test to test. They return only two pieces of information: pass/fail and the lines of code covered. A map/reduce solution seems like it would work very well.
Are there any Python testing frameworks that will run tests across a cluster of machines with code coverage and combine the results when finished?
I don't know of any testing frameworks that will run tests distributed across a group of machines, but nose has support for parallelizing tests on the same machine using multiprocessing.
At minimum, that might be a good place to start for creating a distributed testing framework.
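For reference, nose's multiprocess plugin is driven by a couple of command-line flags; the programmatic equivalent looks roughly like this (the test path is a placeholder):

```python
# Equivalent to: nosetests --processes=4 --process-timeout=120 tests/
import nose

nose.main(argv=["nosetests", "--processes=4", "--process-timeout=120", "tests/"])
```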
I don't think there is a framework that matches your needs exactly.
I know py.test has the xdist plugin, which adds distributed test executors. You could build your CI infrastructure on top of it.
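A minimal sketch of driving it from Python (the worker count and test path are placeholders):

```python
# Fan tests out to 8 local worker processes via pytest-xdist. Per the xdist
# docs, execnet specs such as "--tx ssh=build-host//python=python3" together
# with "-d" can also push workers onto other machines.
import sys

import pytest

sys.exit(pytest.main(["-n", "8", "tests/"]))
```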
Not exactly what you are looking for, but the closest I can recall, from the Hadoop groups, is using JUnit for testing with Hadoop. Here is the mail. As mentioned in the mail, search for the GridUnit papers.
Unit testing with Hadoop in a distributed way is very interesting. Any frameworks around this would be very useful, but developing one shouldn't be very difficult. If you're interested, let me know.