How does django-nose differ from the default Django test runner?

I've been seeing and reading about a lot of people using nose to run their Django tests, but I haven't been able to figure out the added benefits of doing so. If someone could fill me in on what nose is and how it adds more to a Django project, it would be helpful.
I haven't been able to find a good document/article outlining these points.
Thank you

I was curious about this too, and it seems the main advantage of django-nose (which is built on the Python nose library) is test discovery.
In addition, from http://readthedocs.org/docs/nose/en/latest/testing.html:
you can also write simple test functions, as well as test classes that are not subclasses of unittest.TestCase. nose also supplies a number of helpful functions for writing timed tests, testing for exceptions, and other common use cases. See Writing tests and Testing tools for more.
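For illustration, here is a small sketch of what that looks like in practice; the helpers come from nose.tools, and the test names and bodies are made up. Put it in a module whose name matches nose's test pattern (e.g. test_math.py) and nose discovers it with no TestCase subclassing:
from nose.tools import raises, timed

def test_addition():
    # a bare function: nose finds it by name, no unittest.TestCase needed
    assert 1 + 1 == 2

@raises(ZeroDivisionError)
def test_divide_by_zero():
    # passes only if the expected exception is raised
    1 / 0

@timed(0.5)
def test_is_fast():
    # fails if the body takes longer than half a second
    sum(range(1000))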
From what I understand from other Python developers on freenode IRC, the Trial test runner from the Twisted framework has similar features to nose.
I am still not entirely convinced about using django-nose for Django development, but I'm giving it a shot and will report back if I find out more!

There are a lot more features overall, but I think one major reason people use nose/django-nose is that it allows you to very easily do code coverage:
python manage.py test myapp --with-coverage --cover-package=myapp

Python - Difference between SonarQube vs pylint

I'm evaluating test framework, lint, and code coverage options for a new Python project I'll be working on.
I've chosen pytest for the testing needs. After reading a bunch of resources, I'm confused about when to use SonarQube, SonarLint, pylint, and coverage.py.
Are SonarLint and pylint comparable? When would I use SonarQube?
I need to be able to use this in a Jenkins build. Thanks for helping!
SonarLint and pylint are comparable, in a way.
SonarLint is a code linter and pylint is too. I haven't used SonarLint, but it seems that it analyzes the code a bit more deeply than pylint does. From my experience, pylint only follows a set of rules (which you can modify, by the way), while SonarLint goes a bit further and analyzes the inner workings of your code. They are both static analysis tools, however.
SonarQube, on the other hand, does a bit more. SonarQube is a platform used in CI/CD that runs static linters, but it also shows you code smells and does a security analysis. All of what I'm saying is based purely on their website.
If you would like to run CI/CD workflows or scripts, you would use SonarQube, but for local coding, SonarLint is enough. Pylint is the traditional way, though.
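As a concrete illustration of modifying that set of rules, you can disable individual pylint checks on the command line or in a .pylintrc (mypackage is a placeholder; the message names are standard pylint checks):
pylint --disable=missing-docstring,invalid-name mypackage
or, equivalently, in .pylintrc:
[MESSAGES CONTROL]
disable = missing-docstring,
          invalid-name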
Nicholas has a great summary of pylint vs SonarLint.
(Personally, I use SonarLint.)
Although the question is older, I thought I'd answer the other part of it in case anyone else has the same question; the internet being eternal and all.
Coverage.py, as it sounds, measures code coverage for your package. SonarQube then takes the report that coverage.py produces and formats it in the way the Sonar team decided was necessary. Coverage.py is needed if you want to use SonarQube for code coverage; however, if you just want the code smells from SonarQube, it is not needed.
You were also asking about when to use SonarQube, coverage.py, and Jenkins.
In Jenkins, you would create a pipeline with several stages, something along the following lines (a sketch of the last two steps follows the list):
Check out code (done automatically as the first step by Jenkins)
Build the code as it is intended to be used by the user/developer
Run unit tests
Run coverage.py
Run SonarQube
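For the last two stages, the shell steps often boil down to something like the following sketch. It assumes pytest as the runner and the sonar-scanner CLI with SonarQube's Python analyzer; the property name and paths should be adjusted to your setup:
coverage run -m pytest
coverage xml -o coverage.xml
sonar-scanner -Dsonar.python.coverage.reportPaths=coverage.xml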

Adding unit testing to a large Python project

I started learning Python as I developed a project about a year ago. Since then, the project has become a (quite large) stable and useful tool for me. The project's arrangement is like so:
main.py
../functions/func1.py
../functions/func2.py
../functions/func3.py
../functions/func4.py
...
../functions/funcN.py
where the main.py file calls the rest of the functions sequentially.
The issue is that I did not write a single unit test for any of the functions. Not one.
I did not pay much attention to testing since at first I was just learning and eventually it got out of hand.
I want to correct this and add the proper unit tests, the question is: which testing method should I use for my project?
I've seen many different methods described:
unittest
doctest
pytest
nose
tox
unittest2
mock
but I've no idea if one of those is more suited to something like my project than the rest.
unittest is already in Python's standard library and is the most standard choice; just start with that. (unittest2 is simply a backport of its newer features for older Python versions.)
Think of nose as a set of extensions; use it when you want something not already in unittest. It's quite popular as well.
doctest puts tests into docstrings. I don't like it too much, but use it if you want to.
mock is a library supporting a testing paradigm; use it when interacting with interfaces/objects is not trivial.
tox runs your tests under different Python environments.
As an addition, continuous-integration tools like Travis/Jenkins allow you to run tox or sets of unit tests automatically; they're often used for multi-user projects, so everybody can see the test results on each commit.
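To make the "just start with that" advice concrete for the layout above, a first test file might look like the sketch below. It assumes functions/ is an importable package (i.e. contains an __init__.py) and that func1.py defines a callable func1; the names and expected values are placeholders, not taken from the actual project:
# test_func1.py
import unittest

from functions.func1 import func1  # hypothetical import path

class TestFunc1(unittest.TestCase):
    def test_basic_case(self):
        # replace with a real input/expectation from your project
        self.assertEqual(func1(2), 4)

if __name__ == "__main__":
    unittest.main()
Run it with python -m unittest discover, which picks up any file named test*.py.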

What are the real benefits of using Nose for tests?

Is there anything special about using Nose for tests? From what I have heard, the reasons most people use Nose are:
because it gives you a report
because it shows you the time it took for the tests
How is that any better than using simple Bash like below?
tests.py:
assert test1()
assert test2()
assert test3()
print("No errors")
runtests:
#!/bin/sh
(time python tests.py) > log 2>&1
exit $?
The benefit of using a standard tool is that you are more likely to find third-party tools which build on top of the tool. So for just running a test, it doesn't matter what you use, but as soon as you start having many components in a Jenkins rig, having multiple different tools with different output formats and conventions makes it a real problem to maintain and develop monitoring and reporting.
For shell scripts (which I imagine is part of the question because you used the bash tag and wrote your script in sh), it's not like Nose is "the standard", and if you have multiple tools in different languages, it might not be possible to standardize on a single tool / framework / convention (TAP for Perl, Nose for Python, JUnit or whatever for Java ...)
One benefit which you didn't mention is that the framework takes care of a lot of the footwork for you. A single file with tests could be managed (with some pain) by hand, but once we start talking dozens of files with hundreds or thousands of test cases, you want a decent platform for managing those and let you focus on the actual testing instead of reinventing the wheels that the framework puts there for you to use.
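To make the third-party-tooling point concrete: nose, for example, can emit JUnit-style XML that Jenkins consumes natively, something the hand-rolled script above would have to reinvent. A sketch (real nose flags; the output path is a placeholder):
nosetests --with-xunit --xunit-file=results.xml
Jenkins' JUnit report publisher can then track failures across every build without any custom parsing.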

Is there a way to check feature deprecation against a Django version?

As some features get deprecated with new versions of Django, is there a way to check for that on an existing project's code, say on GitHub?
Could a tool do that? Is there a way to detect it through test cases?
Would it be possible to do the same against a Python version?
I guess one way could be to run the code through a specific version of Django/Python using tox and then check for errors.
I am just looking for something more elegant or direct, something that says "Note: this feature has been deprecated", the kind of thing that can be done in a strongly typed language like Java.
If one wanted to build such a tool, what could be a starting point for that, if possible?
This is how I got tox to run one project of mine against Django 1.6, 1.7 and 1.8 with deprecation warnings on:
[tox]
envlist = {py27,py34}-{django16,django17,django18}

[testenv]
basepython =
    py27: python2.7
    py34: python3.4
deps =
    django16: Django>=1.6,<1.7
    django17: Django>=1.7,<1.8
    django18: Django>=1.8,<1.9
commands =
    python -Wmodule ./manage.py test
The -Wmodule argument causes Python to output each deprecation warning the first time it occurs in a module, which was good enough for me. I was able to deal with instances where I used from django.core.cache import get_cache, which will be gone in Django 1.9.
In cases where -Wmodule outputs too much, you might want to be more selective. Python's documentation gives the lowdown on how to use this argument. I've purposely not used -Werror because this would not just make the individual tests fail but would make my test suite fail to execute any test because the suite uses deprecated features.
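For instance, to surface only deprecation warnings, you can narrow the filter by category; the -W fields are action:message:category:module:lineno, and empty fields match anything:
python -W module::DeprecationWarning ./manage.py test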
I think this would have to be done in the unit tests for your project.
If your tests exercise code that is deprecated in your version of Django, you will get warnings. If you've jumped to a version of Django where the feature has already been removed, you'll get exceptions and failed tests, of course.
You can tell the Python interpreter to promote warnings to exceptions, which would cause tests to fail.
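That can be done from the command line, or in code if you want to be selective about which categories become errors; a sketch of both (not specific to any test framework):
python -W error::DeprecationWarning ./manage.py test
or, somewhere that runs before your tests:
import warnings
warnings.simplefilter("error", DeprecationWarning)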
Instructions here to apply the same trick to the popular nosetests test framework.
If you already know (from the Django docs) that some code you're writing will need to change depending on the Django version it runs under (e.g. you're distributing a reusable Django app), I would suggest a form of feature detection using try ... except.
For example, here I wanted to conditionally use the new transaction.atomic feature from Django >= 1.6.
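The original snippet isn't reproduced here, but a minimal sketch of that kind of feature detection, falling back to the closest pre-1.6 equivalent, might be:
try:
    from django.db.transaction import atomic  # Django >= 1.6
except ImportError:
    # older Django: alias the nearest equivalent under the new name
    from django.db.transaction import commit_on_success as atomic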
As you anticipated, I then run the tests against different versions of Django with the help of Tox.

Use pytest with Django without using the pytest-django 3rd party app

Is it possible to use pytest with Django without using the pytest-django 3rd party app?
I tried to set this up, but kept running into random errors, like pytest not being able to find DJANGO_SETTINGS_MODULE. Then I fixed the path, but the normal python manage.py runserver couldn't find DJANGO_SETTINGS_MODULE. I'm running:
Pytest 2.5.4
Python 3.4.0
Django 1.6.2
If it is possible, would you be able to provide a setup example of where to put the tests/ directory, etc., within the project so pytest works?
Thanks
Hmm, py.test 2.5.4 does not exist afaik.
Anyway, assuming you mean to ask whether it is possible to avoid the pytest-django plugin to test Django using py.test: the short answer is no.
The long answer is yes, but it is extremely difficult to get it all to work, and you will basically end up writing at least a minimal version of pytest-django in your conftest.py file. The pytest-django plugin was created specifically to work around all the weirdness that Django does with its global state and other hidden magic.
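For a sense of what "minimal" means here, even the barest conftest.py has to point Python at your settings before anything imports Django models ("myproject.settings" is a placeholder for your own settings module):
# conftest.py
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

# On Django >= 1.7 the app registry must also be populated:
# import django
# django.setup()
And that still leaves out everything pytest-django does around test databases, fixtures, and client helpers.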
OTOH, looking at the pytest-django source would probably help you kickstart such an effort. However, you might consider thinking about what it is about pytest-django that you don't like, and maybe file an enhancement request to improve it.
