I'm trying to write a unit test that checks whether the correct error message is returned when the database connection hits an exception. I've tried to use connection.creation.destroy_test_db(':memory:') but it didn't work as I expected. I suppose I should either remove the tables or somehow cut the DB connection. Is either of those possible?
I found my answer in the presentation Testing and Django by Carl Meyer. Here is how I did it:
from django.db import DatabaseError
from django.test import TestCase
from django.test.client import Client
import mock

class NoDBTest(TestCase):

    cursor_wrapper = mock.Mock()
    cursor_wrapper.side_effect = DatabaseError

    @mock.patch("django.db.backends.util.CursorWrapper", cursor_wrapper)
    def test_no_database_connection(self):
        # form_data and the message lookup come from surrounding test code
        # that the original answer does not show
        response = self.client.post('/signup/', form_data)
        self.assertEqual(message, 'An error occured with the DB')
Sounds like this is a job for mocking. For example, if you are using MySQL, you can put a side_effect on the connect method, like this:
from django.test import TestCase
from mock import patch
import MySQLdb

class DBTestCase(TestCase):

    def test_connection_error(self):
        with patch.object(MySQLdb, 'connect') as connect_method:
            connect_method.side_effect = Exception("Database Connection Error")
            # your assertions here
Hope that helps.
Since December 2021 there has been the library Django Mockingbird.
With it you can mock the object that would be retrieved from the DB.
from djangomockingbird import mock_model

@mock_model('myapp.myfile.MyModel')
def test_my_test():
    some_test_query = MyModel.objects.filter(bar='bar').filter(foo='foo').first()
    # some more code
    # assertions here
I was looking for Django's actual HTTP response code in the case of a database connection timeout when using pymysql. The following test confirmed it's a 401 Unauthorized when pymysql raises an OperationalError.
from unittest.mock import patch

import pymysql
from django.test import TestCase, Client

class TestDatabaseOutage(TestCase):
    client = None

    def setUp(self):
        self.client = Client()

    def test_database_connection_timeout_returns_401(self):
        with patch.object(pymysql, 'connect') as connect_method:
            message = "Can't connect to MySQL server on 'some_database.example.com' ([Errno 110] Connection timed out)"
            connect_method.side_effect = pymysql.OperationalError(2003, message)
            response = self.client.get('/')
            self.assertEqual(response.status_code, 401)
I'm running a Django project with Peewee in Python 3.6 and trying to track down what's wrong with the connection pooling. I keep getting the following error on the development server (for some reason I never experience this issue on my local machine):
Lost connection to MySQL server during query
The repro steps are reliable and are:
Restart Apache on the instance.
Go to my Django page and press a button which triggers a DB operation.
Works fine.
Wait exactly 10 minutes (I've tested enough to get the exact number).
Press another button to trigger another DB operation.
Get the lost connection error above.
The code is structured such that I have all the DB operations inside an independent Python module which is imported into the Django module.
In the main class constructor I'm setting up the DB as such:
from playhouse.pool import PooledMySQLDatabase

def __init__(self, host, database, user, password, stale_timeout=300):
    self.mysql_db = PooledMySQLDatabase(host=host, database=database, user=user,
                                        password=password, stale_timeout=stale_timeout)
    db_proxy.initialize(self.mysql_db)
Every call that needs to reach the DB is done like this:
def get_user_by_id(self, user_id):
    db_proxy.connect(reuse_if_open=True)
    user = User.get(User.user_id == user_id)
    db_proxy.close()
    return {'id': user.user_id, 'first_name': user.first_name,
            'last_name': user.last_name, 'email': user.email}
I looked at the wait_timeout value on the MySQL instance and its value is 3600, so that doesn't seem to be the issue (and I tried changing it anyway just to see).
Any ideas on what I could be doing wrong here?
Update:
I found that the /etc/my.cnf configuration file for MySQL has the wait-timeout value set to 600, which matches what I'm experiencing. I don't know why this value doesn't show when I run SHOW VARIABLES LIKE 'wait_timeout'; on the MySQL DB (that returns 3600), but it does seem likely the issue is coming from the wait timeout.
Given this I tried setting the stale timeout to 60, assuming that if it's less than the wait timeout it might fix the issue but it didn't make a difference.
You need to be sure you're recycling the connections properly: when a request begins you open a connection, and when the response is delivered you close it. The pool is most likely not recycling the connection because you're never putting it back in the pool, so it looks like it's still "in use". This can easily be done with middleware and is described here:
http://docs.peewee-orm.com/en/latest/peewee/database.html#django
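The open-on-request / close-on-response pattern can be sketched as function-style Django middleware along these lines. This is a minimal sketch, not the exact code from the peewee docs; _FakeDB stands in for the PooledMySQLDatabase instance only so the example is self-contained.

```python
class _FakeDB:
    # Stand-in for PooledMySQLDatabase so the sketch runs without peewee.
    def __init__(self):
        self._closed = True

    def connect(self, reuse_if_open=False):
        self._closed = False

    def is_closed(self):
        return self._closed

    def close(self):
        self._closed = True

database = _FakeDB()  # in real code: PooledMySQLDatabase(...)

def PeeweeConnectionMiddleware(get_response):
    def middleware(request):
        database.connect(reuse_if_open=True)   # draw a connection from the pool
        try:
            return get_response(request)
        finally:
            if not database.is_closed():
                database.close()               # hand the connection back to the pool
    return middleware
```

Because the close happens in a finally block, the connection is returned to the pool even when the view raises, which is exactly the recycling the answer describes.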
I finally came up with a fix which works for my case, after trying numerous ideas. It's not ideal but it works. This post on Connection pooling pointed me in the right direction.
I created a Django middleware class and configured it to be the first in the list of Django middleware.
from peewee import OperationalError
from playhouse.pool import PooledMySQLDatabase

database = PooledMySQLDatabase(None)

class PeeweeConnectionMiddleware(object):

    CONN_FAILURE_CODES = [2006, 2013]

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if database.database:  # Is the DB initialized?
            response = None
            try:
                database.connect(reuse_if_open=True)
                with database.atomic() as transaction:
                    try:
                        response = self.get_response(request)
                    except:
                        transaction.rollback()
                        raise
            except OperationalError as exception:
                if exception.args[0] in self.CONN_FAILURE_CODES:
                    database.close_all()
                    database.connect()
                    response = None
                    with database.atomic() as transaction:
                        try:
                            response = self.get_response(request)
                        except:
                            transaction.rollback()
                            raise
                else:
                    raise
            finally:
                if not database.is_closed():
                    database.close()
            return response
        else:
            return self.get_response(request)
I'm trying to unit test a class that connects to a database. To avoid hardcoding a database, I'd like to assert that the mysql.connector.connect method is called with adequate values.
from mysql.connector import connect
from mysql.connector import Error

from discovery.database import Database

class MariaDatabase(Database):

    def connect(self, username, password):
        """
        Don't forget to close the connection!
        :return: connection to the database
        """
        try:
            return connect(host=str(self.target),
                           database=self.db_name,
                           user=username,
                           password=password)
        except Error:
            raise  # the snippet posted in the question ends here
I've read documentation around mocks and similar problems (notably this one: Python Unit Test: How to unit test the module which contains database operations?), which I thought would solve my problem, but mysql.connector.connect keeps being called instead of the mock.
I don't know what I could do to unit test this class.
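A likely reason the real connect keeps being called is that the patch targets mysql.connector.connect instead of the name the module actually looks up. Since the module does `from mysql.connector import connect`, the mock has to target `discovery.database.connect` (patch where the name is used). A self-contained sketch of the idea; the stand-in class and all names here are illustrative, and patching the current module plays the role of mock.patch('discovery.database.connect') in the real project:

```python
import sys
from unittest import mock

def connect(**kwargs):
    # stand-in for mysql.connector.connect; never reached in the test
    raise RuntimeError("real connect should not run in tests")

class MariaDatabase:
    # mirrors the shape of the class in the question
    def __init__(self, target, db_name):
        self.target = target
        self.db_name = db_name

    def connect(self, username, password):
        return connect(host=str(self.target),
                       database=self.db_name,
                       user=username,
                       password=password)

# Patch `connect` in the module where MariaDatabase looks it up --
# the equivalent of mock.patch('discovery.database.connect').
with mock.patch.object(sys.modules[__name__], 'connect') as fake_connect:
    fake_connect.return_value = 'fake-connection'
    db = MariaDatabase('db.example.com', 'inventory')
    conn = db.connect('alice', 's3cret')
    fake_connect.assert_called_once_with(host='db.example.com',
                                        database='inventory',
                                        user='alice',
                                        password='s3cret')
```

assert_called_once_with is what checks that connect was called with "adequate values" without ever touching a real database.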
I'm a newbie in Python and I'm having a problem with mockito.
My production code looks like this:
from stompest.config import StompConfig
from stompest.sync import Stomp

class Connector:

    def sendMessage(self):
        message = {'message'}
        dest = '/queue/foo'
        self._send(message, dest)  # (this call is implied; the posted snippet omitted it)

    def _send(self, message='', dest=''):
        config = StompConfig(uri="tcp://localhost:61613")
        client = Stomp(config)
        client.connect()
        client.send(body=message, destination=dest, headers='')
        client.disconnect()
As you see, I would like to send a message using the STOMP protocol. In my test I would like to verify that when I invoke the sendMessage method of the Connector class, the send method from the Stompest library is invoked exactly once.
My unit test looks like:
import unittest

from mockito import *
import stompest
from stompest.config import StompConfig
from stompest.sync import Stomp

from Connector import Connector

class test_Connector(unittest.TestCase):

    def test_shouldInvokeConnectMethod(self):
        stomp_config = StompConfig(uri="tcp://localhost:61613")
        mock_stomp = mock(Stomp(stomp_config))
        connector = Connector()
        connector.sendMessage()
        verify(mock_stomp, times=1).connect()
When I run the test in debug mode I see that the connect() method is invoked, and the send method as well, but the test result is:
Failure
Traceback (most recent call last):
File "C:\development\systemtest_repo\robot_libraries\test_Connector.py", line 16, in test_shouldInvokeConnectMethod
verify(mock_stomp, times=1).connect()
File "C:\Python27\lib\site-packages\mockito\invocation.py", line 111, in __call__
verification.verify(self, len(matched_invocations))
File "C:\Python27\lib\site-packages\mockito\verification.py", line 63, in verify
raise VerificationError("\nWanted but not invoked: %s" % (invocation))
VerificationError:
Wanted but not invoked: connect()
What did I do wrong?
You don't actually call the connect method on the mock object - you just check that it was called. This is what the error says as well: Wanted but not invoked: connect(). Perhaps adding a call to mock_stomp.connect() before the call to verify will fix this:
mock_stomp = mock(Stomp(stomp_config))
# call the connect method first...
mock_stomp.connect()
connector = Connector()
connector.sendMessage()
# ...then check it was called
verify(mock_stomp, times=1).connect()
If you are instead trying to check that the mock is called from Connector, you probably need at least to pass the mock_stomp object in via dependency injection. For example:
class Connector:

    def __init__(self, stomp):
        self.stomp = stomp

    def sendMessage(self, msg):
        self.stomp.connect()
        # etc ...
and in your test
mock_stomp = mock(Stomp(stomp_config))
connector = Connector(mock_stomp)
connector.sendMessage()
verify(mock_stomp, times=1).connect()
Otherwise, I don't see how the connect() method could be invoked on the same instance of mock_stomp that you are basing your assertions on.
How can I tell from Python whether the MongoDB server is up and running? I currently use:
try:
    con = pymongo.Connection()
except Exception as e:
    ...
Or is there a better way in pymongo functions I can use?
For newer versions of pymongo, from the MongoClient docs:
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient()
try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command('ismaster')
except ConnectionFailure:
    print("Server not available")
You can init MongoClient with serverSelectionTimeoutMS to avoid waiting 20 seconds or so before the code raises an exception:
client = MongoClient(serverSelectionTimeoutMS=500) # wait 0.5 seconds in server selection
Yes, try/except is a good (Pythonic) way to check if the server is up. However, it's best to catch the specific exception (ConnectionFailure):
try:
    con = pymongo.Connection()
except pymongo.errors.ConnectionFailure:
    ...
Add the following imports:
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError, OperationFailure
Create the connection with MongoDB from Python:
mongoClient = MongoClient("mongodb://usernameMongo:passwordMongo@localhost:27017/?authMechanism=DEFAULT&authSource=database_name", serverSelectionTimeoutMS=500)
Validations
try:
    if mongoClient.admin.command('ismaster')['ismaster']:
        return "Connected!"
except OperationFailure:
    return "Database not found."
except ServerSelectionTimeoutError:
    return "MongoDB Server is down."
I'm trying to create tests for a Tornado code base I'm picking up. I get the project to run fine but the first test I've written is getting a connection refused error.
Here's the code:
import unittest, os, os.path, sys, urllib

import tornado.options
from tornado.options import options
from tornado.testing import AsyncHTTPTestCase

APP_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
sys.path.append(os.path.join(APP_ROOT, '..'))
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)))

from main import Application

app = Application()

def clear_db(app=None):
    os.system("mysql -u user --password=pw --database=testdb < %s"
              % (os.path.join(APP_ROOT, 'db', 'schema.sql')))

class TestHandlerBase(AsyncHTTPTestCase):

    def setUp(self):
        clear_db()
        super(TestHandlerBase, self).setUp()

    def get_app(self):
        return app

    def get_http_port(self):
        return 5000

class TestRootHandler(TestHandlerBase):

    def test_redirect(self):
        response = self.fetch(
            '/',
            method='GET',
            follow_redirects=False)
        print response
        self.assertTrue(response.headers['Location'].endswith('/login'))
This is the response I get:
HTTPResponse(_body=None, buffer=None, code=599,
effective_url='http://localhost:5000/',
error=HTTPError('HTTP 599: [Errno 61] Connection refused',),
headers={}, reason='Unknown',
request=<tornado.httpclient.HTTPRequest object at 0x10c363510>,
request_time=0.01304006576538086, time_info={})
Any idea on what might be causing the error? Is there a step I'm missing to get everything running for the test? Thanks!!!
Don't override get_http_port. A new HTTP server with a new port is set up for each test, so it won't be 5000 every time, even if that's what you have in your settings.
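With the override removed, a test case can be as small as the sketch below; the handler and app here are illustrative stand-ins for the question's Application, and self.fetch automatically targets whatever port the test case bound.

```python
import tornado.web
from tornado.testing import AsyncHTTPTestCase

class RootHandler(tornado.web.RequestHandler):
    def get(self):
        self.redirect('/login')

class TestRootHandler(AsyncHTTPTestCase):

    def get_app(self):
        # No port is configured anywhere: AsyncHTTPTestCase binds an
        # unused one for each test.
        return tornado.web.Application([(r'/', RootHandler)])

    def test_redirect(self):
        # self.fetch builds the URL against the randomly chosen test port
        response = self.fetch('/', follow_redirects=False)
        self.assertEqual(response.code, 302)
        self.assertTrue(response.headers['Location'].endswith('/login'))
```

Because fetch takes a path rather than a full URL, the test never needs to know the port at all.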
I agree with the answer by Cole Maclean
If you need to configure a custom URL, then override the below method of AsyncHTTPTestCase:
def get_url(self, path):
    url = 'http://localhost:8080' + path
    return url
With this override, requests will use http://localhost:8080 as the base URL by default.