Tornado. Django-like test runner and test database - python

I like Django unit tests because they create and drop a test database on each run.
What ways exist to create a test database for Tornado?
UPD: I'm interested in creating a PostgreSQL test database on each test run.

I found the easiest way is just to use a SQL dump for the test database: create a database, populate it with fixtures, and write it out to a file. Then simply call load_fixtures before you run your tests (or whenever you want to reset the DB). This method can certainly be improved, but it's been good enough for my needs.
import os
import unittest2
import tornado.database

settings = dict(
    db_host="127.0.0.1:5432",
    db_name="testdb",
    db_user="testdb",
    db_password="secret",
    db_fixtures_file=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'fixtures.sql'),
)

def load_fixtures():
    """Fixtures are stored in a SQL dump; replaying it resets the database."""
    # psql has no --password=<value> flag; pass the password via PGPASSWORD instead.
    os.system("PGPASSWORD=%s psql %s --username=%s < %s" % (
        settings['db_password'], settings['db_name'],
        settings['db_user'], settings['db_fixtures_file']))
    # NB: tornado.database is a thin wrapper around MySQLdb; for PostgreSQL
    # you would return a psycopg2 connection here instead.
    return tornado.database.Connection(
        host=settings['db_host'], database=settings['db_name'],
        user=settings['db_user'], password=settings['db_password'])
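To produce fixtures.sql in the first place, something along these lines should work (a sketch reusing the settings dict above; pg_dump's --clean flag adds DROP statements so replaying the dump resets existing tables):
def dump_fixtures():
    # Write schema and data from the hand-populated database to the fixtures file.
    # PGPASSWORD supplies the password non-interactively, as in load_fixtures().
    os.system("PGPASSWORD=%s pg_dump %s --username=%s --clean > %s" % (
        settings['db_password'], settings['db_name'],
        settings['db_user'], settings['db_fixtures_file']))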

Related

How to backup Peewee database (SqliteQueueDatabase) programmatically?

I'm using Peewee in one of my projects. Specifically, I'm using SqliteQueueDatabase, and I need to create a backup (i.e. another *.db file) without stopping my application. I saw that there are two methods that could work for me (backup and backup_to_file), but they're methods from CSqliteExtDatabase, and SqliteQueueDatabase is a subclass of SqliteExtDatabase. I've found solutions for manually creating a dump of the file, but I need a *.db file (not a *.csv file, for example). I couldn't find any similar question or relevant answer.
Thanks!
You can just import the backup_to_file() helper from playhouse._sqlite_ext and pass it your connection and a filename:
from playhouse._sqlite_ext import backup_to_file

db = SqliteQueueDatabase('...')
conn = db.connection()  # get the underlying pysqlite connection
backup_to_file(conn, 'dest.db')
Also, if you're using pysqlite3, there are backup methods available on the connection itself.
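For illustration, here is a sketch of that route using the stdlib sqlite3 module, where Connection.backup() has been available since Python 3.7 ('live.db' and 'backup.db' are placeholder names):
import sqlite3

# Open the source database and an empty destination file, then copy
# everything across with SQLite's online backup API (no CSV dump needed).
src = sqlite3.connect('live.db')
dst = sqlite3.connect('backup.db')
with dst:
    src.backup(dst)
src.close()
dst.close()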

How do I load my test yaml file prior to running my Django test?

I'm using Django and Python 3.7. I would like some test data loaded before running my tests. I thought specifying a "fixtures" element in my test would do this, but it doesn't seem to be loaded. I created the file, mainpage/fixtures/test_data.yaml, with this content:
- model: mainpage.website
  pk: 1
  fields:
    path: /testsite

- model: mainpage.article
  pk: 1
  fields:
    website: 1
    title: 'mytitle'
    path: '/test-path'
    url: 'http://www.mdxomein.com/path'
    created_on:
      type: datetime
      columnDefinition: TIMESTAMP DEFAULT CURRENT_TIMESTAMP

- model: mainpage.articlestat
  pk: 1
  fields:
    article: 1
    elapsed_time_in_seconds: 300
    hits: 2
I specify the fixture in my test file below ...
from django.test import TestCase
from mainpage.models import ArticleStat, Article
import unittest

class TestModels(unittest.TestCase):
    fixtures = ['/mainpage/fixtures/test_data.yaml', ]

    # Test saving an article stat that hasn't previously
    # existed
    def test_add_articlestat(self):
        id = 1
        article = Article.objects.filter(id=id)
        self.assertTrue(article, "A pre-condition of this test is that an article exist with id=" + str(id))
        articlestat = ArticleStat(article=article, elapsed_time_in_seconds=250, votes=25, comments=15)
        articlestat.save()
        article_stat = ArticleStat.objects.get(article=article)
        self.assertTrue(article_stat, "Failed to save article stat properly.")
But it doesn't appear any of the test data is loaded when my test is run ...
(venv) localhost:mainpage_project davea$ cd /Users/davea/Documents/workspace/mainpage_project; source ./venv/bin/activate; python manage.py test
test activated!
Creating test database for alias 'default'...
/Users/davea/Documents/workspace/mainpage_project/venv/lib/python3.7/site-packages/django/db/models/fields/__init__.py:1421: RuntimeWarning: DateTimeField Article.front_page_first_appeared_date received a naive datetime (2019-01-30 17:02:31.329751) while time zone support is active.
RuntimeWarning)
System check identified no issues (0 silenced).
F
======================================================================
FAIL: test_add_articlestat (mainpage.tests.TestModels)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/davea/Documents/workspace/mainpage_project/mainpage/tests.py", line 15, in test_add_articlestat
self.assertTrue(article, "A pre-condition of this test is that an article exist with id=" + str(id))
AssertionError: <QuerySet []> is not true : A pre-condition of this test is that an article exist with id=1
----------------------------------------------------------------------
Ran 1 test in 0.001s
I've tried changing the file name to something that doesn't exist at all to see if I get a different error, but I don't, so I don't think this "fixtures" convention is working at all. How do I get my test data loaded prior to running my test?
In order to use fixtures that way, TransactionTestCase.fixtures needs to be set.
The magic that loads fixtures happens in TransactionTestCase. This makes it so that test classes which subclass TransactionTestCase (e.g. django.test.TestCase) also load the fixtures specified in the fixtures attribute.
Your current TestModels test class subclasses unittest.TestCase and therefore does nothing with the fixtures setup.
from django.test import TestCase

class TestModels(TestCase):
    fixtures = ['test_data.yaml', ]
Usually it is fine to set just the name of the fixture file rather than the entire path to it.
If you need fixtures to be discovered in a custom folder, that folder can be specified in the FIXTURE_DIRS setting.
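For example (a sketch; BASE_DIR is the variable from a standard generated settings.py, and the directory matches this question's layout):
# settings.py
import os

FIXTURE_DIRS = [
    os.path.join(BASE_DIR, 'mainpage', 'fixtures'),
]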
You should (as @OluwafemiSule states in his answer) use django.test.TestCase so that the class attribute fixtures is taken into account.
Could you implement your test and load fixtures using unittest.TestCase?
Though it is not recommended, you could load your fixtures programmatically in your setUp method like this:
from django.core.management import call_command

def setUp(self):
    call_command('loaddata', 'initial_data.yml', app_label='myapp')
but this approach will require you to roll back any changes you make to the loaded data if you want to use it in subsequent tests.
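One way to do that rollback by hand (a sketch of my own, not part of the answer above) is to flush the test database after each test so the next setUp() reloads the fixtures into a clean state:
from django.core.management import call_command

def tearDown(self):
    # flush removes all data from every table in the test database,
    # so the next test's setUp() starts from a clean slate.
    call_command('flush', interactive=False)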
The right way
From the Django testing docs, in the section Fixtures in Unit Tests, you can read:
The big thing that the Django Testcase does for you in regards to fixtures is that it maintains a consistent state for all of your tests. Before each test is run, the database is flushed: returning it to a pristine state (like after your first syncdb). Then your fixture gets loaded into the DB, then setUp() is called, then your test is run, then tearDown() is called. Keeping your tests insulated from each other is incredibly important when you are trying to make a good test suite.
That's why you should be using django.test.TestCase.
Are your fixtures good?
Another important tip: you can use the testserver command to check whether your fixtures are good:
$ django-admin testserver mydata.json
This runs a Django development server (as in runserver) using data from the given fixture(s). Read about the testserver command and you'll find it is a very useful tool.
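With the fixture from this question, that would look something like this (assuming manage.py sits at the project root and the fixture is discoverable):
$ python manage.py testserver test_data.yaml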

Pytest-django: cannot delete db after tests

I have a Django application, and I'm trying to test it using pytest and pytest-django. However, quite often, when the tests finish running, I get the error that the database failed to be deleted: DETAIL: There is 1 other session using the database.
Basically, the minimum test code that I could narrow it down to is:
@pytest.fixture
def make_bundle():
    a = MyUser.objects.create(key_id=uuid.uuid4())
    return a

class TestThings:
    def test_it(self, make_bundle):
        all_users = list(MyUser.objects.all())
        assert_that(all_users, has_length(1))
Every now and again the tests will fail with the above error. Is there something I am doing wrong? Or how can I fix this?
The database that I am using is PostgreSQL 9.6.
I am posting this as an answer because I need to post a chunk of code and because this worked. However, this looks like a dirty hack to me, and I'll be more than happy to accept anybody else's answer if it is better.
Here's my solution: basically, add the raw SQL that kicks all other users out of the given DB to the method that destroys the test database, and do that by monkeypatching. To ensure that the monkeypatching happens before the tests, add it to the root conftest.py file as an autouse fixture:
import pytest
from django.db.backends.base import creation  # module holding the class to monkeypatch

def _destroy_test_db(self, test_database_name, verbosity):
    """
    Internal implementation - remove the test db tables.
    """
    # Remove the test database to clean up after
    # ourselves. Connect to the previous database (not the test database)
    # to do so, because it's not allowed to delete a database while being
    # connected to it.
    with self.connection._nodb_connection.cursor() as cursor:
        cursor.execute(
            "SELECT pg_terminate_backend(pg_stat_activity.pid) "
            "FROM pg_stat_activity "
            "WHERE pg_stat_activity.datname = '{}' "
            "AND pid <> pg_backend_pid();".format(test_database_name)
        )
        cursor.execute("DROP DATABASE %s"
                       % self.connection.ops.quote_name(test_database_name))

@pytest.fixture(autouse=True)
def patch_db_cleanup():
    creation.BaseDatabaseCreation._destroy_test_db = _destroy_test_db
Note that the kicking-out code may depend on your database engine, and the method that needs monkeypatching may be different in different Django versions.

testing postgres db python

I don't understand how to test my repositories.
I want to be sure that I really saved an object with all of its parameters into the database, and that when I execute my SQL statement I really receive what I'm supposed to.
But I cannot put "CREATE TABLE test_table" in the setUp method of a unittest case, because it would be created multiple times (tests of the same test case are run in parallel). So as long as I create two methods in the same class that need to work on the same table, it won't work (the table names clash).
Likewise, I cannot put "CREATE TABLE test_table" in setUpModule, because then the table is created once, but since tests are run in parallel there is nothing to prevent inserting the same object multiple times into my table, which breaks the uniqueness constraint of some field.
Likewise, I cannot "CREATE SCHEMA some_random_schema_name" in every method, because I would need to globally "SET search_path TO ..." for a given database, so every method run in parallel would be affected.
The only way I see is to "CREATE DATABASE" for each test, with a unique name, and establish an individual connection to each database. This looks extremely wasteful. Is there a better way?
Also, I cannot use SQLite in memory because I need to test PostgreSQL.
The best solution for this is to use the testing.postgresql module. This fires up a database in user-space, then deletes it again at the end of the run. You can put the following in a unittest suite - either in setUp, setUpClass or setUpModule - depending on what persistence you want:
import testing.postgresql

def setUp(self):
    self.postgresql = testing.postgresql.Postgresql(port=7654)
    # Get the url to connect to with psycopg2 or equivalent
    print(self.postgresql.url())

def tearDown(self):
    self.postgresql.stop()
If you want the database to persist between/after tests, you can run it with the base_dir option set to a directory - this will prevent its removal after shutdown:
name = "testdb"
port = "5678"
path = "/tmp/my_test_db"
testing.postgresql.Postgresql(name=name, port=port, base_dir=path)
Outside of testing it can also be used as a context manager, where it will automatically clean up and shut down when the with block is exited:
with testing.postgresql.Postgresql(port=7654) as psql:
    # do something here
    pass
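For instance, connecting to the throwaway database with psycopg2 might look like this (a minimal sketch; it assumes psycopg2 is installed alongside testing.postgresql):
import psycopg2
import testing.postgresql

with testing.postgresql.Postgresql() as postgresql:
    # url() yields a DSN such as postgresql://postgres@127.0.0.1:<port>/test
    conn = psycopg2.connect(postgresql.url())
    with conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())
    conn.close()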

How to force django to print each executed sql query

I have some function written in Python. I want to know all the SQL queries that were executed within this function. Is there a way to write something like:
def f():
    start_to_print_queries()
    # ...
    # many many python code
    # ...
    stop_to_print_queries()
?
You can use Django's testing tools to capture the queries on a connection. Assuming the default connection, something like this should work:
from django.db import connection
from django.test.utils import CaptureQueriesContext

def f():
    with CaptureQueriesContext(connection) as queries:
        # ...
        # many many python code
        # ...
    print(len(queries.captured_queries))
Note that this will only work in debug mode (settings.DEBUG = True), because it relies on the engine capturing the queries. If you are using more than one connection, simply substitute the connection you are interested in.
If you are interested in the details of the queries, queries.captured_queries holds them: the SQL code, the parameters and the timing of each request.
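For example, a quick way to print them (a sketch; MyModel is a placeholder for one of your models):
from django.db import connection
from django.test.utils import CaptureQueriesContext

with CaptureQueriesContext(connection) as ctx:
    list(MyModel.objects.all())  # any code that hits the database

for query in ctx.captured_queries:
    print(query['time'], query['sql'])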
Also, if you need to count queries while building test cases, you can simply assert the number, like this:
def test_the_function_queries(self):
    with self.assertNumQueries(42):  # check that the_function does 42 queries
        the_function()
If the test fails, Django will print all the queries for you to examine.
I would recommend the excellent django-debug-toolbar package. It allows you to interactively examine the SQL statements executed in a view, and even provides profiling information.
You can get it from pip:
pip install django-debug-toolbar
Include it in your settings.INSTALLED_APPS:
INSTALLED_APPS = (
    # ...
    'django.contrib.staticfiles',
    # ...
    'debug_toolbar',
)
When running your project with DEBUG=True, you should see a DjDT button in the top right corner.
Expanding the SQL tab will give you a detailed list of the sql queries.
