PyCharm can't find Django test

I'm running into this same issue, but I can't find a solution. The dummy Django project I created to reproduce it can be found here.
This is my test configuration:
This is the project structure:
Trying to run the test results in:
AttributeError: 'super' object has no attribute 'run_tests'
Process finished with exit code 137
Empty test suite.
Obviously, running this from the shell gives no errors:
(django)mosquito@mosquito-X550LD ~/python_projects/django_test $ python manage.py test
Creating test database for alias 'default'...
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Destroying test database for alias 'default'...
Any idea what I'm doing wrong?
Thanks,
Alejandro

In the project configuration, add another environment variable of
DJANGO_SETTINGS_MODULE = django_test.settings
You can do this by clicking on the "..." icon
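For context on why this variable matters, here is a minimal sketch: a project's manage.py normally sets the settings module itself before any Django machinery loads, but PyCharm's test runner bypasses manage.py, so the variable has to be supplied in the run configuration instead. The module path django_test.settings below is the one from this question; substitute your own project's settings module.

```python
import os

# manage.py typically sets the settings module like this before any
# Django code runs; a test runner that skips manage.py never executes
# this line, hence the environment variable in the run configuration.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_test.settings")

print(os.environ["DJANGO_SETTINGS_MODULE"])
```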

Related

Django Run a Test By Its Filename

I have a django application with a test called test_thing.py
Ive been running my test as such: python3 manage.py test --pattern="test_thing.py"
Is there a way that I can run the test by its filename?
I tried python3 manage.py test apiv2/tests/test_thing.py but I get an error:
TypeError: 'NoneType' object is not iterable
But when I run with the --pattern it works.
Try using the dotted module path instead of the file path:
python manage.py test apiv2.tests.test_thing
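If converting file paths to dotted test labels by hand gets tedious, a small helper can do it. This is purely an illustrative sketch, not part of Django; the function name is made up:

```python
def path_to_test_label(path):
    """Convert a test file path like 'apiv2/tests/test_thing.py'
    into the dotted label Django's test command expects."""
    if path.endswith(".py"):
        path = path[:-3]          # drop the .py extension
    return path.replace("/", ".")  # slashes become dots

print(path_to_test_label("apiv2/tests/test_thing.py"))  # apiv2.tests.test_thing
```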

Django test runner not respecting unittest.TestCase?

For one of my functional tests, I decided to use unittest.TestCase instead of a Django test class because it was convenient when cleaning up the test to have direct access to my local development database in the test itself.
Running the test in isolation like so passes as I'd expect:
$ python manage.py test functional_tests.test_functionality
System check identified no issues (0 silenced).
...
----------------------------------------------------------------------
Ran 3 tests in 0.040s
OK
When I try to run all tests at the same time, however, that test specifically errors out, complaining that an object DoesNotExist, as though it were using the Django test database:
$ python manage.py test functional_tests
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
..................E..
======================================================================
ERROR: some_functional_test (functional_tests.test_functionality.FunctionalTest)
----------------------------------------------------------------------
Traceback (most recent call last):
... etc.
app.models.Object.DoesNotExist: Object matching query does not exist.
----------------------------------------------------------------------
Ran 21 tests in 0.226s
FAILED (errors=1)
Destroying test database for alias 'default'...
I assume the error comes from my trying to use Object.objects.latest('created') when no Objects exist in Django's test database.
Is there some way to prevent Django from wrapping all tests in whatever it is about the test runner that prevents my test from accessing an Object directly?
First a little explanation.
By default, when you run ./manage.py test, the Django test runner performs a few steps: it creates a test database (adding a test_ prefix to each database name from your settings), runs migrations, runs the tests, and then destroys the test database (more details on the runner's steps can be found here).
A good explanation of how Django treats the test database is described here.
In your case, when you run the unittest.TestCase in isolation, no test database is created:
$ python manage.py test functional_tests.test_functionality
System check identified no issues (0 silenced).
...
----------------------------------------------------------------------
Ran 3 tests in 0.040s
OK
(note: no log line about creating a test database)
That's because no django.test.TestCase was involved. We can see this in the sources: a plain unittest.TestCase has no databases attribute, while django.test.TestCase does.
But when you run the whole module (python manage.py test functional_tests), there are apparently some django.test.TestCase tests in the suite, which is why a new test database is created:
$ python manage.py test functional_tests
Creating test database for alias 'default'... # <-- THIS ONE
##<skipped for readability>
Destroying test database for alias 'default'...
And, as you mentioned, the tests failed because no objects had been prepared for them.
Solution
At this point I see a few options to solve this:
Prepare test data explicitly (with fixtures, or manually in the tests or their setUp methods), so the tests are independent of the current state of the database.
Explicitly use the desired database.
Run the tests with the --keepdb option (i.e. ./manage.py test --keepdb; it will reuse the existing database and will not destroy it after the test run) and set the test database name in your settings to the same name as your working database (in that case the test_ prefix will not be appended).
Since you don't want django.test.TestCase behaviour, don't use it at all: replace those test cases with unittest.TestCase and no test database will be created.
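A pure-unittest sketch of the first option above: prepare the data the test needs in setUp instead of relying on whatever happens to be in the development database. The dictionaries here are stand-ins for model instances, so this runs without Django at all:

```python
import unittest

class FunctionalTest(unittest.TestCase):
    def setUp(self):
        # Create the records this test depends on explicitly, so the
        # test passes regardless of the database's current state.
        self.objects = [{"name": "a", "created": 1},
                        {"name": "b", "created": 2}]

    def test_latest(self):
        # Stand-in for a lookup like Object.objects.latest('created').
        latest = max(self.objects, key=lambda o: o["created"])
        self.assertEqual(latest["name"], "b")
```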

Django can't destroy and create test databases properly

When I try to run my unittest, this is what I get:
python manage.py test dbank --settings=databank_web.settings.dqs.dev_hooman
Creating test database for alias 'default'...
Creating test database for alias 'global'...
Creating test database for alias 'optin_db'...
Creating test database for alias 'vpd3'...
Creating test database for alias 'user_db'...
Creating test database for alias 'vpd1'...
Creating test database for alias 'vpd2'...
.
----------------------------------------------------------------------
Ran 1 test in 0.327s
OK
Destroying test database for alias 'default'...
Warning: Table 'mysql.proc' doesn't exist
It couldn't destroy the database. It gets better when I rerun the test:
python manage.py test dbank --settings=databank_web.settings.dqs.dev_hooman
Creating test database for alias 'default'...
Creating test database for alias 'global'...
Got an error creating the test database: (1007, "Can't create database 'test_dqs12_full2'; database exists")
Type 'yes' if you would like to try deleting the test database 'test_dqs12_full2', or 'no' to cancel: yes
Destroying old test database 'global'...
Got an error recreating the test database: Table 'mysql.proc' doesn't exist
Any idea why this is going wrong?
Running latest homebrew + mysql-5.6.21 + Django 1.5.5
I have finally found the reason.
Our application has around 7 databases. When I was about to drop them manually to get a clean state, I accidentally also dropped the mysql database. This database should never be deleted, as it contains vital information, including the root user.
When I realised this, I tried to mitigate by reinstalling MySQL via brew.
brew uninstall mysql
brew install mysql
This worked, and the mysql database showed up again when I ran show databases; however, the problem persisted. That's when I came here for help.
As it turns out, things are a bit more complicated than that with brew.
This is how I got it working again:
1) brew uninstall mysql
2) sudo rm -r /usr/local/var/mysql/
3) brew install mysql
4) unset TMPDIR
5) mysql_install_db --verbose --user="$(whoami)" --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp
6) mysql.server restart
7) mysql_secure_installation
Now it works. Hope this helps other fellow mac users. Thanks.
Well, there are two errors indicating that the mysql.proc table is missing. You might be able to regenerate it by running mysql_upgrade.

DJANGO_SETTINGS_MODULE is defined, project settings do import via `python manage.py `, yet django-pytest does not find the django settings

Synopsis of problem:
I want to use django-pytest to test my Django scripts, yet py.test complains that it cannot find DJANGO_SETTINGS_MODULE. I have followed the django-pytest documentation for setting DJANGO_SETTINGS_MODULE, but I am not getting the expected behavior from py.test.
I would like help troubleshooting this problem. Thank you in advance.
References—django-pytest documentation
Setting the DJANGO_SETTINGS_MODULE in the virtualenv postactivate script:
In a new shell, $DJANGO_SETTINGS_MODULE is not set. I expect it to be empty, and it is.
In: echo $DJANGO_SETTINGS_MODULE
Out:
Contents of .virtualenv/browsing/bin/postactivate
As documented in the django-pytest documentation, one may set $DJANGO_SETTINGS_MODULE by exporting it in the postactivate script for virtualenv.
export DJANGO_SETTINGS_MODULE=automated_browsing.settings
At terminal cli:
As expected, $DJANGO_SETTINGS_MODULE is defined.
# activate the virtualenv
In: workon browsing
In: echo $DJANGO_SETTINGS_MODULE
Out: automated_browsing.settings
As expected, the settings module is set as shown when the server runs:
# cd into django project
In: cd ../my_django/automated_browsing
# see if server runs
In: python manage.py runserver
Out: # output
Validating models...
0 errors found
September 02, 2014 - 10:45:35
Django version 1.6.6, using settings 'automated_browsing.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
In: ^C
As NOT expected, py.test cannot find it.
In: echo $DJANGO_SETTINGS_MODULE
Out: automated_browsing.settings
In: py.test tests/test_browser.py
Out: …
_pytest.config.UsageError: Could not import settings
'automated_browsing.settings'
(Is it on sys.path? Is there an import error in the settings file?)
No module named automated_browsing.settings
installed packages
Django==1.6.6
EasyProcess==0.1.6
PyVirtualDisplay==0.1.5
South==1.0
argparse==1.2.1
ipython==2.2.0
lxml==3.3.6
psycopg2==2.5.4
py==1.4.23
pytest-django==2.6.2
pytest==2.6.1
selenium==2.42.1
test-pkg==0.0
wsgiref==0.1.2
Had the same problem today. In project/settings/ you have to create an __init__.py file; that is how Django recognizes the folder as a package.
In my case I had typed __inti__.py, which was wrong. I corrected __inti__.py --> __init__.py and it works fine.
Hope it helps.
V.
Inspired by this answer, I have solved my problem.
According to the django-pytest documentation for managing the python path:
By default, pytest-django tries to find Django projects by
automatically looking for the project’s manage.py file and adding its
directory to the Python path.
I assumed that executing py.test tests/test_browser.py inside of the django project containing manage.py would handle the PYTHONPATH. This was an incorrect assumption in my situation.
I added the following to .virtualenv/$PROJECT/bin/postactivate:
export PYTHONPATH=$PYTHONPATH:$HOME/development/my_python/my_django/automated_browsing
And now py.test works as expected.
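The discovery rule quoted from the documentation can be illustrated with a rough sketch. This mimics (and is not) pytest-django's actual code; the function names are made up:

```python
import os
import sys

def find_project_root(start_dir):
    """Walk upward from start_dir until a directory containing
    manage.py is found, and return that directory (or None).
    pytest-django adds such a directory to the Python path; the
    point of this question is that discovery only helps when
    manage.py actually sits on the walked path."""
    d = os.path.abspath(start_dir)
    while True:
        if os.path.isfile(os.path.join(d, "manage.py")):
            return d
        parent = os.path.dirname(d)
        if parent == d:  # reached the filesystem root without a hit
            return None
        d = parent

def add_to_pythonpath(root):
    # Same effect as: export PYTHONPATH=$PYTHONPATH:<root>
    if root and root not in sys.path:
        sys.path.insert(0, root)
```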

How to see which tests were run during Django's manage.py test command

After test execution finishes using Django's manage.py test command, only the number of tests that ran is printed to the console.
(virtualenv) G:\Project\>python manage.py test
Creating test database for alias 'default'...
True
..
----------------------------------------------------------------------
Ran 2 tests in 0.017s
OK
Destroying test database for alias 'default'...
Is there any way to see:
which tests were actually executed
from what module
in what order
I haven't found any solution in the doc.
You can pass -v 2 to the test command:
python manage.py test -v 2
After running this command you'll get something like this (I'm using Django 2; feel free to ignore the migration/database output):
Creating test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Operations to perform:
  Synchronize unmigrated apps: messages, staticfiles
  Apply all migrations: admin, auth, contenttypes, sessions
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
Running migrations:
  Applying contenttypes.0001_initial... OK
  ...
  Applying sessions.0001_initial... OK
System check identified no issues (0 silenced).
test_equal_hard (polls.tests.TestHard) ... ok      <-- that's your tests!
test_equal_simple (polls.tests.TestSimple) ... ok  <-- that's your tests!
By the way, -v stands for verbosity (you can also use --verbosity=2):
python manage.py test --verbosity=2
Here's the excerpt from the python manage.py test --help:
-v {0,1,2,3}, --verbosity {0,1,2,3}
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
Nigel's answer is great and definitely the lowest-barrier-to-entry option. However, you can get even better feedback with django_nose (and it's not that difficult to set up ;).
The below is from: BDD with Python
First: install some requirements:
pip install nose pinocchio django_nose
Then add the following to settings.py
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
NOSE_ARGS = ['--with-spec', '--spec-color']
Then run your tests as per normal:
python manage.py test
Output should look something like this:
Note: the docstrings under your tests can be used to give even better output than just the name.
e.g.:
def test_something(self):
    """Something should happen"""
    ...
Will output "Something should happen" when running the test.
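That behaviour comes from stock unittest, not from nose: a verbose runner asks each test for its shortDescription(), which returns the first line of the test method's docstring. A quick self-contained check:

```python
import unittest

class ExampleTest(unittest.TestCase):
    def test_something(self):
        """Something should happen"""
        self.assertTrue(True)

# The verbose runner prints this description next to the test name.
t = ExampleTest("test_something")
print(t.shortDescription())  # Something should happen
```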
For extra points: You can also generate / output your code coverage:
pip install coverage
Add the following to your NOSE_ARGS in settings.py: '--with-coverage', '--cover-html', '--cover-package=.', '--cover-html-dir=reports/cover'
e.g.:
NOSE_ARGS = ['--with-spec', '--spec-color',
             '--with-coverage', '--cover-html',
             '--cover-package=.', '--cover-html-dir=reports/cover']
Then you'll get a nice code-coverage summary when you run python manage.py test, as well as a neat HTML report in reports/cover.