Running nose tests with a regular Python script

I have a Python script which takes a config file on the command line and produces an output.
I am trying to see how I can use nosetests to run all these files.
I read through the nosetests info on Google, but I could not follow how to run them with the config file.
Any ideas on where I could get started?

Something like this should work:
import sys
import nose

def test_me():
    assert True

if __name__ == '__main__':
    module_name = sys.modules[__name__].__file__
    config_name = 'nose.cfg'
    result = nose.run(
        argv=[sys.argv[0],
              module_name,
              '--config=' + config_name]
    )
You can also pass your own config instance, as described in the docs for the nose.run() arguments.
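If you go the config-instance route, here is a minimal sketch of building one and handing it to nose.run(); the files/plugins keywords follow nose's documented programmatic-usage example, but treat this as untested:
import nose
from nose.config import Config, all_config_files
from nose.plugins.manager import DefaultPluginManager

# Load the standard config files plus our own nose.cfg, then run nose
# with that Config instead of passing --config on argv.
config = Config(files=all_config_files() + ['nose.cfg'],
                plugins=DefaultPluginManager())
nose.run(config=config)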

I didn't have to do any of that. nosetests by itself executes any test file whose name begins with "test_" and ends with ".py". Make sure you pass "--exe" if the files are executable; if not, you can skip that option. The nosetests help page on the wiki really helps.

Related

How do I configure PyCharm's Coverage checker to recognize .coveragerc?

I have a .coveragerc file in the root of my project. It tells coverage.py to omit my project's migrations directories:
[run]
omit = *migrations*
When I run coverage.py at the command line, the config I put in .coveragerc is obeyed.
However, PyCharm does not recognize it. Is there a setting that I'm missing?
If it turns out there's no way for PyCharm to recognize .coveragerc, I'd be happy with even just a way to omit those directories.
You can make PyCharm use the .coveragerc by putting it into the working directory you run your tests from.
There is a feature request for this at https://youtrack.jetbrains.com/issue/PY-16945
The feature request was implemented and works in version 2018.1.
There is a different way to make PyCharm ignore certain files and folders:
In the Settings, choose Project: ... - Project Structure. Here you can mark folders as Excluded or exclude files specifically.
PyCharm's code coverage report ignores all those excluded files, too.
I found myself in a situation where I needed this badly. My Travis runs were passing and so was Coveralls, but I was unable to get things working in PyCharm.
The solution is a bit hacky, but hopefully it will help people:
In my root project directory, I have a .coveragerc:
[run]
omit = ./venv
concurrency = multiprocessing
parallel = True
source = HookTest
data_file = .cache/.coverage
And I "hacked" a little run_coverage.py of PyCharm : (pycharm-2016.3.2/helpers/coverage_runner/run_coverage.py )
Starting at
argv = []
Replace everything with :
argv = []
for arg in sys.argv:
    if arg.startswith('-m') and arg[2:]:
        argv.append(arg[2:])
    else:
        argv.append(arg)

cwd = os.getcwd()
rcfile = cwd + "/.coveragerc"
if os.path.exists(rcfile):
    print("Loading rcfile")
    i = argv.index("run") + 1
    argv = argv[:i] + ["--rcfile={}".format(rcfile)] + argv[i:]

sys.argv = argv
try:
    main()
finally:
    if run_cov:
        os.chdir(cwd)
        if os.getenv('COVERAGE_COMBINE'):
            main(["combine"])
        main(["xml", "-o", coverage_file + ".xml", "--ignore-errors"])
To make this run with python setup.py test, I created a run configuration in PyCharm that uses said setup.py, with test as a parameter and COVERAGE_COMBINE as a global environment variable. It might not be the best solution of all time, but at least it allows me to stop using HTML output while working locally :)
It does not work in the latest PyCharm version, 2018.3.4.
The only way I succeeded in making it work was to hack run_coverage.py as @PonteIneptique did.
Here is the only modification I had to make:
run_xml = os.getenv('PYCHARM_RUN_COVERAGE_XML')
argv = ["xml", "-o", coverage_file + ".xml", "--ignore-errors"]
rcfile = cwd + "/.coveragerc"
if os.path.exists(rcfile):
    print("Loading rcfile\n")
    argv += ["--rcfile", rcfile]

if run_xml:
    os.chdir(cwd)
    main(argv)
else:
    try:
        main()
    finally:
        if run_cov:
            os.chdir(cwd)
            main(argv)
Be sure to set the working directory in your run configuration to the one containing the .coveragerc file, too.
PyCharm's developers should update their code to support the .coveragerc file from the GUI.
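If patching PyCharm's helper feels too fragile, coverage.py's own Python API honors an rcfile as well; a minimal sketch using the documented coverage.Coverage API (paths are illustrative):
import coverage

cov = coverage.Coverage(config_file='.coveragerc')  # same file the CLI reads
cov.start()
# ... run your code or invoke your test suite here ...
cov.stop()
cov.save()
cov.xml_report(outfile='coverage.xml', ignore_errors=True)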

How to execute a python test using httpretty/sure

I'm new to Python testing, so don't hesitate to point out the obvious.
Basically I want to do some RESTful tests using Python, and I found the httpretty and sure libraries, which look really nice.
I have a Python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")
    response = requests.get("http://localhost:8090/test.json")
    expect(response.json()).to.equal({"status": "ok"})
Which is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty
My question is: how do I simply run this test to see it either passing or failing? I tried just executing it using ./pythonFile, but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
Python 3 includes similar functionality in the unittest module, which is also available as a backport for Python 2 called unittest2. You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")
        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now provide output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your method?
Or does the decorator mean you don't have to call your method explicitly?
If I call your method, it seems like it works. If I change the value on one side of the expect, it complains properly about the values not matching.
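To make that concrete, here is a sketch of the question's script with an explicit call added, so a plain python run actually executes the test (keeping the original RestTest name):
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")
    response = requests.get("http://localhost:8090/test.json")
    expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    RestTest()   # explicit call; an AssertionError here means the test failed
    print('OK')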

Can tests with pytest fixtures be run interactively?

I have some tests written using pytest and fixtures, e.g.:
import os
import shutil
import tempfile

import pytest

class TestThing:
    @pytest.fixture()
    def temp_dir(self, request):
        my_temp_dir = tempfile.mkdtemp()

        def fin():
            shutil.rmtree(my_temp_dir)
        request.addfinalizer(fin)
        return my_temp_dir

    def test_something(self, temp_dir):
        with open(os.path.join(temp_dir, 'test.txt'), 'w') as f:
            f.write('test')
This works fine when the tests are invoked from the shell, e.g.
$ py.test
but I don't know how to run them from within a python/ipython session; trying e.g.
tt = TestThing()
tt.test_something(tt.temp_dir())
fails because temp_dir requires a request object to be passed on. So, how does one invoke a fixture with a request object injected?
Yes. You don't have to manually assemble any test fixtures or anything like that. Everything runs just like calling pytest in the project directory.
Method 1:
This is the best method because it gives you access to the debugger if your test fails.
In the ipython shell, use:
ipython> run -m pytest prj/
This will run all your tests under the prj/ directory.
It gives you access to the debugger and lets you set breakpoints if you have an
import ipdb; ipdb.set_trace() in your program (https://docs.pytest.org/en/latest/usage.html#setting-breakpoints).
Method 2:
Use !pytest while in the test directory. This won't give you access to the debugger. However, if you use
ipython> !pytest --pdb
and a test fails, it will drop you into the debugger (a subshell), so you can run your post-mortem analysis (https://docs.pytest.org/en/latest/usage.html#dropping-to-pdb-python-debugger-on-failures).
Using these methods you can even run individual modules/test functions/test classes in ipython (https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests):
ipython> run -m pytest prj/tests/test_module1.py::TestClass1::test_function1
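As a related option, pytest also exposes a programmatic entry point, so you can run a selected test from a plain python/ipython session without a shell escape; a minimal sketch using pytest.main():
import pytest

# The list holds exactly the arguments you would pass to pytest on the
# command line; the return value is pytest's exit code.
exit_code = pytest.main(['prj/tests/test_module1.py::TestClass1::test_function1', '--pdb'])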
You can bypass the pytest.fixture decorator and directly call the wrapped test function:
tmp = tt.temp_dir.__pytest_wrapped__.obj(request=...)
Accessing internals is bad, but when you need it...
The best method I have, which is far from ideal, is to just %run the test file, then manually assemble the fixtures, then simply call the tests. The problem with this is tracking down the modules where the default fixtures are defined and then calling them in their order of dependencies.
You can use two cells for this:
first:
def test_something():
    assert True
second:
from tempfile import mktemp

test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(_i)  # write the last cell's input (IPython's _i)
!pytest $test_file
You can also do this in one cell, but you won't have code highlighting:
from tempfile import mktemp

test_code = """
def test_something():
    assert True
"""
test_file = mktemp('.py', 'test_')
open(test_file, 'w').write(test_code)
!pytest $test_file
The simple answer is that you don't want to run py.test interactively from Python. Most people set up some integration with their text editor or IDE to run py.test and parse its output. But really it's a command-line tool, and that is how it should be used.
As a side note, you may want to check out the built-in tmpdir fixture: http://pytest.org/latest/tmpdir.html It seems like you're re-inventing it.
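For comparison, a sketch of the question's test rewritten with the built-in tmpdir fixture, which creates a fresh temporary directory per test and cleans old ones up for you:
import os

class TestThing:
    def test_something(self, tmpdir):
        # tmpdir is a py.path.local object injected by pytest
        with open(os.path.join(str(tmpdir), 'test.txt'), 'w') as f:
            f.write('test')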

PyCharm and unittest won't run

I have a problem with PyCharm 3.0.1: I can't run basic unit tests.
Here is my code:
import unittest
from MysqlServer import MysqlServer

class MysqlServerTest(unittest.TestCase):
    def setUp(self):
        self.mysqlServer = MysqlServer("ip", "username", "password", "db", port)

    def test_canConnect(self):
        self.mysqlServer.connect()
        self.fail()

if __name__ == '__main__':
    unittest.main()
Here is all the stuff PyCharm gives me:
Unable to attach test reporter to test framework or test framework quit unexpectedly
It also says:
AttributeError: class TestLoader has no attribute '__init__'
And the event log:
2:14:28 PM Empty test suite
The problem is when I run the Python file manually (with PyCharm, as a script):
Ran 1 test in 0.019s
FAILED (failures=1)
Which is expected, since I make the test fail on purpose. I am a bit clueless about what is going on.
Here is more information:
Settings -> Python Integrated Tools -> Package requirements file: <PROJECT_HOME>/src/test
Default test runner: Unittests
pyunit 1.4.1 is installed
EDIT: The same thing happens with the basic usage example from unittest:
import unittest

class IntegerArithmeticTestCase(unittest.TestCase):
    def testAdd(self):  # test method names begin with 'test'
        self.assertEquals((1 + 2), 3)
        self.assertEquals(0 + 1, 1)

    def testMultiply(self):
        self.assertEquals((0 * 10), 0)
        self.assertEquals((5 * 8), 40)

if __name__ == '__main__':
    unittest.main()
Although this wasn't the case with the original poster, I'd like to note that another thing that will cause this is test functions that don't begin with the word 'test':
class TestSet(unittest.TestCase):
    def test_will_work(self):
        pass

    def will_not_work(self):
        pass
This is probably because you did not set your testing framework correctly in the settings dialog.
Definitely a PyCharm thing; repeating from above:
Run -> Edit Configurations.
Select the stale instances of the test and press the red minus button.
I had the exact same problem. It turned out that recognizing an individual test was related to the file name: in my case, test_calculate_kpi.py was not recognized as a test by PyCharm, but when renamed to test_calculate_kpis.py it was immediately recognized.
4 steps to generate HTML reports with PyCharm and most default test runners (py.test, nosetest, unittest, etc.):
Make sure you give your test methods a 'test' prefix (as stated before by others), e.g. def test_run1().
The widespread example code from the test report package's documentation is:
import unittest
import HtmlTestRunner

class TestGoodnessOfFitTests(unittest.TestCase):
    def test_run1(self):
        ...

if __name__ == '__main__':
    unittest.main(testRunner=HtmlTestRunner.HTMLTestRunner(output='t.html'))
This code is usually located in a test class file which contains all the unit tests and the "main"-catcher code. This code continuously yielded the warning Empty test suite for me. Therefore, remove all the if __name__ ... code so that the file only contains the TestGoodnessOfFitTests class. Then, additionally,
create a new main.py in the same directory as the test class file with the following code:
import unittest
import HtmlTestRunner

test_class = TestGoodnessOfFitTests()
unittest.main(module=test_class,
              testRunner=HtmlTestRunner.HTMLTestRunner(output='t.html'))
Remove your old run configurations, right-click on your main.py and press Run 'main'. Verify the correct settings under Preferences -> Python Integrated Tools -> Default test runner (in my case, py.test and nose worked).
Output:
Running tests...
----------------------------------------------------------------------
test_gaussian_dummy_kolmogorov_cdf_1 (tests.evaluation_tests.TestGoodnessOfFitTests) ... OK (1.033877)s
----------------------------------------------------------------------
Ran 1 test in 0:00:01
OK
Generating HTML reports...
I had the same problem too; I felt the workspace was not properly refreshed, and even File -> Synchronize (Ctrl+Alt+Y) wasn't the solution. I just renamed my test Python file, tried executing the code again, and it started working fine.
I had the same problem. The file was named test_exercise_detectors.py (note the plural "detectors") and it was in a package named test_exercise_detectors. Changing the name of the file to test_exercise_detector.py (singular "detector") fixed the problem.
Adding
if __name__ == "__main__":
    unittest.main()
fixed the issue for me.

Nose test script with command line arguments

I would like to be able to run a nose test script which accepts command line arguments. For example, something along these lines:
test.py
import nose, sys

def test():
    # do something with the command line arguments
    print sys.argv

if __name__ == '__main__':
    nose.runmodule()
However, whenever I run this with a command line argument, I get an error:
$ python test.py arg
E
======================================================================
ERROR: Failure: ImportError (No module named arg)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/loader.py", line 368, in loadTestsFromName
module = resolve_name(addr.module)
File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nose-0.11.1-py2.6.egg/nose/util.py", line 334, in resolve_name
module = __import__('.'.join(parts_copy))
ImportError: No module named arg
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Apparently, nose tries to do something with the arguments passed in sys.argv. Is there a way to make nose ignore those arguments?
Alright, I hate "why would you want to do that?" answers just as much as anyone, but I'm going to have to make one here. I hope you don't mind.
I'd argue that what you're trying to do isn't within the scope of the nose framework. Nose is intended for automated tests. If you have to pass in command-line arguments for the test to pass, then it isn't automated. Now, what you can do is something like this:
import sys

class test_something(object):
    def setUp(self):
        sys.argv[1] = 'arg'
        del sys.argv[2]  # remember that -s is in sys.argv[2], see below

    def test_method(self):
        print sys.argv
If you run that, you get this output:
[~] nosetests test_something.py -s
['/usr/local/bin/nosetests', 'arg']
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
(Remember to pass in the -s flag if you want to see what goes on stdout)
However, I'd probably still recommend against that, as it's generally a bad idea to mess with global state in automated tests if you can avoid it. What I would likely do is adapt the code I want to test to take an argv list. Then you can pass in whatever you want during testing and pass in sys.argv in production.
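A minimal sketch of that refactor (the function and argument names are illustrative):
import sys

def run(argv=None):
    # Accept the argument list explicitly; fall back to the real
    # command line only in production.
    if argv is None:
        argv = sys.argv[1:]
    return argv  # ... do the real work with argv here ...

def test_run_with_fake_args():
    # The test supplies its own argv, so nose never sees these arguments.
    assert run(['--config', 'test.cfg']) == ['--config', 'test.cfg']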
UPDATE:
The reason why I need to do it is because I am testing multiple implementations of the same library. To test that those implementations are correct, I use a single nose script that accepts as a command line argument the library that it should import for testing.
It sounds like you may want to try your hand at writing a nose plugin. It's pretty easy to do. Here are the latest docs.
You could use another means of getting stuff into your code:
import os
print os.getenv('KEY_THAT_MIGHT_EXIST', default_value)
Then just remember to set your environment before running nose.
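Applied to the questioner's case of selecting a library implementation, that could look like the sketch below (the TEST_LIBRARY variable name and some_function are made up for illustration):
import os
import importlib

# Pick the implementation before running nose, e.g.:
#   TEST_LIBRARY=fast_impl nosetests test_library.py
lib = importlib.import_module(os.getenv('TEST_LIBRARY', 'default_impl'))

def test_implementation():
    assert lib.some_function() is not None  # hypothetical API under test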
I think that is a perfectly acceptable scenario. I also needed to do something similar in order to run tests against different environments (dev, qa, prod, etc.), and I needed the right URLs and configurations for each one.
The solution I found was to use the nose-testconfig plugin (link here). It is not exactly passing command line arguments, but rather creating a config file with all your parameters and then passing this config file as an argument when you execute your nose tests.
The config file has the following format:
[group1]
env=qa
[urlConfig]
address=http://something
[dbConfig]
user=test
pass=test
And you can read the arguments using:
from testconfig import config
print(config['dbConfig']['user'])
For now I am using the following hack:
args = sys.argv[1:]
sys.argv = sys.argv[0:1]
which just reads the arguments into a local variable and then deletes all the additional arguments from sys.argv so that nose does not get confused by them.
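In context, the hack goes at module import time, before nose parses the arguments; a sketch based on the question's test.py:
import nose, sys

args = sys.argv[1:]       # stash the real command line arguments
sys.argv = sys.argv[0:1]  # hide them so nose does not try to import 'arg'

def test():
    print(args)           # the stashed arguments are still available here

if __name__ == '__main__':
    nose.runmodule()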
Just running nose and passing in parameters will not work, as nose will attempt to interpret the arguments as nose parameters, so you get the problems you are seeing.
I do not think nose supports parameter passing directly yet, but the nose plug-in nose-testconfig allows you to write tests like the one below:
from testconfig import config

def test_os_specific_code():
    os_name = config['os']['type']
    if os_name == 'nt':
        pass  # some nt specific tests
    else:
        pass  # tests for any other os
