python unittests mocking mysql multiple times causing errors

I am writing unittests for a program where the majority of functions are boilerplate code that runs some MySQL queries with no real return values. To test these, I have written tests that check for the query in the mocked cursor:
@mock.patch('mysql.connector.connect')
def test_query1(self, mock_conn):
    test_query_data = 100
    import app
    a = app.query1(test_query_data)
    mock_cursor = mock_conn.return_value.cursor.return_value
    self.assertEqual(mock_cursor.execute.call_args[0],
                     ('SELECT id FROM table WHERE data=%s limit 1;', (100,)))
This test on its own works fine, but when I have other tests structured the exact same way, the patching of the MySQL connection breaks, causing an exception in the assert statement:
Traceback (most recent call last):
File "c:\users\sirwill\appdata\local\programs\python\python38\lib\site-packages\mock\mock.py", line 1346, in patched
return func(*newargs, **newkeywargs)
File "C:\Users\sirwill\python_project\tests.py", line 69, in test_insert_event
self.assertEqual(mock_cursor.execute.call_args[0], ('SELECT id FROM table WHERE data=%s limit 1;', (100,)))
TypeError: 'NoneType' object is not subscriptable
I have tried deleting the module and re-importing it, with no change in the result.

For anyone else having this issue: the answer was to reload the library when importing it into the test, using
importlib.reload(app)
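A minimal sketch of what that looks like per test, assuming the application module is named app as in the question. The reload re-executes app's module body while the patch is active, so app binds to the current test's mock instead of a connection cached under an earlier test's mock:

import importlib
import unittest
from unittest import mock

import app


class QueryTests(unittest.TestCase):
    @mock.patch('mysql.connector.connect')
    def test_query1(self, mock_conn):
        # Re-execute app's imports under the active patch so app uses
        # this test's mock rather than a previously cached connection.
        importlib.reload(app)
        app.query1(100)
        mock_cursor = mock_conn.return_value.cursor.return_value
        mock_cursor.execute.assert_called_with(
            'SELECT id FROM table WHERE data=%s limit 1;', (100,))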

Related

How do I use unittest to catch CapabilityDisabledError exceptions?

I have a Flask project on GAE and I'd like to start adding try/except blocks around database writes in case the datastore has problems. These will definitely fire when there's a real error, but I'd like to mimic that error in a unittest so I can be confident of what will really happen during an outage.
For example, my User model:
class User(ndb.Model):
    guser = ndb.UserProperty()
    user_handle = ndb.StringProperty()
and in other view/controller code:
def do_something():
    try:
        User(guser=users.get_current_user(), user_handle='barney').put()
    except CapabilityDisabledError:
        flash('Oops, database is down, try again later', 'danger')
        return redirect(url_for('registration_done'))
Here's a gist of my test code: https://gist.github.com/iandouglas/10441406
In a nutshell, GAE allows us to use capabilities to temporarily disable the stubs for memcache, datastore_v3, etc., and in the main test method:
def test_stuff(self):
    # this test ALWAYS passes, making me believe the datastore is temporarily down
    self.assertFalse(capabilities.CapabilitySet('datastore_v3').is_enabled())
    # but this write to the datastore always SUCCEEDS, so the exception never gets
    # thrown, therefore this "assertRaises" always fails
    self.assertRaises(CapabilityDisabledError,
                      lambda: User(guser=self.guser, pilot_handle='foo').put())
I read another post recommending wrapping the User.put() call in a lambda, which results in this traceback:
Traceback (most recent call last):
File "/home/id/src/project/tests/integration/views/test_datastore_offline.py", line 28, in test_stuff
self.assertRaises(CapabilityDisabledError, lambda: User(
AssertionError: CapabilityDisabledError not raised
If I remove the lambda: portion, I get this traceback instead:
Traceback (most recent call last):
File "/home/id/src/project/tests/integration/views/test_datastore_offline.py", line 31, in test_stuff
pilot_handle_lower='foo'
File "/usr/lib/python2.7/unittest/case.py", line 475, in assertRaises
callableObj(*args, **kwargs)
TypeError: 'Key' object is not callable
Google's tutorials show how to turn these capabilities on and off for unit testing, and other tutorials show which exceptions could get thrown when their services are offline or experiencing intermittent issues, but there is no tutorial showing how the two might work together in a unit test.
Thanks for any ideas.
The datastore stub does not support returning a CapabilityDisabledError, so enabling the error in the capabilities stub will not affect calls to the datastore.
As a separate note, if you are using the High Replication Datastore, you'll never experience the CapabilityDisabledError because it does not have scheduled downtime.
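Since the stub won't raise this error itself, one way to still exercise the except branch is to inject the exception with a mock rather than through the capabilities stub. A sketch under that assumption (test_put_during_outage is a hypothetical test name; the import path is where the SDK defines the error, and the patch target is the question's User model):

from mock import patch  # the 'mock' package on Python 2.7

from google.appengine.runtime.apiproxy_errors import CapabilityDisabledError


def test_put_during_outage(self):
    # Patch put() to raise the error the datastore stub never produces,
    # so assertRaises (and any surrounding try/except) actually fires.
    with patch.object(User, 'put', side_effect=CapabilityDisabledError):
        self.assertRaises(CapabilityDisabledError,
                          lambda: User(guser=self.guser, user_handle='foo').put())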

errors with gae-sessions and nose

I'm running into a few problems with adding gae-sessions to a relatively mature GAE app. I followed the readme carefully and also looked at the demo.
First, just adding the gaesessions directory to my app causes the following error when running tests with nose and nose-gae:
Exception ImportError: 'No module named threading' in <bound method local.__del__ of <_threading_local.local object at 0x103e10628>> ignored
All the tests run fine, so it's not a big problem, but it suggests that something isn't right.
Next, if I add the following two lines of code:
from gaesessions import get_current_session
session = get_current_session()
And run my tests, then I get the following error:
Traceback (most recent call last):
File "/Users/.../unit_tests.py", line 1421, in testParseFBRequest
data = tasks.parse_fb_request(sr)
File "/Users/.../tasks.py", line 220, in parse_fb_request
session = get_current_session()
File "/Users/.../gaesessions/__init__.py", line 36, in get_current_session
return _tls.current_session
File "/Library/.../python2.7/_threading_local.py", line 193, in __getattribute__
return object.__getattribute__(self, name)
AttributeError: 'local' object has no attribute 'current_session'
This error does not happen on the dev server.
Any suggestions on fixing the above would be greatly appreciated.
I ran into the same problem. The problem seems to be that the GAE testbed behaves differently from the development server. I don't know the specifics, but I ended up solving it by adding:
def setUp(self):
    testbed.Testbed().activate()
    # after activating the testbed:
    from gaesessions import Session, set_current_session
    set_current_session(Session())
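For completeness, a sketch of how that sits in a full test case (assuming the standard google.appengine.ext.testbed import; which stub initializers you need depends on what the app uses):

import unittest

from google.appengine.ext import testbed


class SessionTests(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub()
        self.testbed.init_memcache_stub()
        # gaesessions reads from thread-local state, so seed it per test:
        from gaesessions import Session, set_current_session
        set_current_session(Session())

    def tearDown(self):
        self.testbed.deactivate()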

Database errors in Django when using threading

I am working on a Django web application which needs to query a PostgreSQL database. When implementing concurrency using the Python threading interface, I am getting DoesNotExist errors for the queried items. Of course, these errors do not occur when performing the queries sequentially.
Let me show a unit test which I wrote to demonstrate the unexpected behavior:
class ThreadingTest(TestCase):
    fixtures = ['demo_city',]

    def test_sequential_requests(self):
        """
        A very simple request to the database, made sequentially.
        A fixture for the cities has been loaded above, so there should be
        six cities in the testing database now. We will make a request for
        each one of the cities sequentially.
        """
        for number in range(1, 7):
            c = City.objects.get(pk=number)
            self.assertEqual(c.pk, number)

    def test_threaded_requests(self):
        """
        Now, to test the threaded behavior, we will spawn a thread to
        retrieve each city from the database.
        """
        threads = []
        cities = []

        def do_requests(number):
            cities.append(City.objects.get(pk=number))

        [threads.append(threading.Thread(target=do_requests, args=(n,))) for n in range(1, 7)]
        [t.start() for t in threads]
        [t.join() for t in threads]
        self.assertNotEqual(cities, [])
As you can see, the first test performs some database requests sequentially, and these indeed work with no problem. The second test, however, performs exactly the same requests, but each request is spawned in a thread. This actually fails, raising a DoesNotExist exception.
The output of the execution of this unit tests is like this:
test_sequential_requests (cesta.core.tests.threadbase.ThreadingTest) ... ok
test_threaded_requests (cesta.core.tests.threadbase.ThreadingTest) ...
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/jose/Work/cesta/trunk/src/cesta/core/tests/threadbase.py", line 45, in do_requests
cities.append(City.objects.get(pk=number))
File "/home/jose/Work/cesta/trunk/parts/django/django/db/models/manager.py", line 132, in get
return self.get_query_set().get(*args, **kwargs)
File "/home/jose/Work/cesta/trunk/parts/django/django/db/models/query.py", line 349, in get
% self.model._meta.object_name)
DoesNotExist: City matching query does not exist.
... other threads returns a similar output ...
Exception in thread Thread-6:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/usr/lib/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/jose/Work/cesta/trunk/src/cesta/core/tests/threadbase.py", line 45, in do_requests
cities.append(City.objects.get(pk=number))
File "/home/jose/Work/cesta/trunk/parts/django/django/db/models/manager.py", line 132, in get
return self.get_query_set().get(*args, **kwargs)
File "/home/jose/Work/cesta/trunk/parts/django/django/db/models/query.py", line 349, in get
% self.model._meta.object_name)
DoesNotExist: City matching query does not exist.
FAIL
======================================================================
FAIL: test_threaded_requests (cesta.core.tests.threadbase.ThreadingTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jose/Work/cesta/trunk/src/cesta/core/tests/threadbase.py", line 52, in test_threaded_requests
self.assertNotEqual(cities, [])
AssertionError: [] == []
----------------------------------------------------------------------
Ran 2 tests in 0.278s
FAILED (failures=1)
Destroying test database for alias 'default' ('test_cesta')...
Remember that all this is happening against a PostgreSQL database, which is supposed to be thread safe, not SQLite or similar. The tests were also run using PostgreSQL.
At this point, I am totally lost about what can be failing. Any idea or suggestion?
Thanks!
EDIT: I wrote a little view just to check whether it works outside of the tests. Here is the code of the view:
def get_cities(request):
    queue = Queue.Queue()

    def get_async_cities(q, n):
        city = City.objects.get(pk=n)
        q.put(city)

    threads = [threading.Thread(target=get_async_cities, args=(queue, number)) for number in range(1, 5)]
    [t.start() for t in threads]
    [t.join() for t in threads]
    cities = list()
    while not queue.empty():
        cities.append(queue.get())
    return render_to_response('async/cities.html', {'cities': cities},
                              context_instance=RequestContext(request))
(Please do not take into account the folly of writing application logic inside the view code. Remember that this is only a proof of concept and would never be in the real app.)
The result is that the code works fine: the requests are made successfully in threads, and the view shows the cities when its URL is called.
So, I think making queries using threads will only be a problem when you need to test the code. In production, it will work without any problem.
Any useful suggestions for testing this kind of code successfully?
Try using TransactionTestCase:
class ThreadingTest(TransactionTestCase):
TestCase keeps data in memory and doesn't issue a COMMIT to the database. Probably the threads are trying to connect directly to the DB while the data is not committed there yet. See the description here:
https://docs.djangoproject.com/en/dev/topics/testing/?from=olddocs#django.test.TransactionTestCase
TransactionTestCase and TestCase are identical except for the manner
in which the database is reset to a known state and the ability for
test code to test the effects of commit and rollback. A
TransactionTestCase resets the database before the test runs by
truncating all tables and reloading initial data. A
TransactionTestCase may call commit and rollback and observe the
effects of these calls on the database.
It becomes clearer from this part of the documentation:
class LiveServerTestCase(TransactionTestCase):
    """
    ...
    Note that it inherits from TransactionTestCase instead of TestCase because
    the threads do not share the same transactions (unless if using in-memory
    sqlite) and each thread needs to commit all their transactions so that the
    other thread can see the changes.
    """
Now, the transaction has not been committed inside a TestCase, hence the changes are not visible to the other thread.
This sounds like it's an issue with transactions. If you're creating elements within the current request (or test), they're almost certainly in an uncommitted transaction that isn't accessible from the separate connection in the other thread. You probably need to manage your transactions manually to get this to work.
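A minimal sketch of the suggested change, reusing the City model and fixture from the question (the expected count of six matches the fixture described above):

import threading

from django.test import TransactionTestCase

# City is the model from the question; import it from wherever it lives.


class ThreadingTest(TransactionTestCase):
    # TransactionTestCase commits the fixture data, so connections opened
    # by other threads can actually see the six City rows.
    fixtures = ['demo_city']

    def test_threaded_requests(self):
        cities = []

        def do_requests(number):
            cities.append(City.objects.get(pk=number))

        threads = [threading.Thread(target=do_requests, args=(n,)) for n in range(1, 7)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        self.assertEqual(len(cities), 6)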

Import web2py's DAL to be used with Google Cloud SQL on App Engine

I want to build an app on App Engine which uses Cloud SQL as its backend database instead of App Engine's own datastore facility (which doesn't support common SQL operations such as JOIN).
Cloud SQL has a DB-API and hence I was looking for a lightweight Data Abstraction Layer (DAL) to help easily manipulate the cloud databases. A little research revealed that web2py has a pretty neat DAL which is compatible with Cloud SQL.
Since I don't actually need the whole full-stack web2py framework, I copied the dal.py file out of the /gluon folder into a simple testing app's main directory and included these lines in my app:
from dal import DAL, Field
db = DAL('google:sql://myproject:myinstance/mydatabase')
However, this generated an error after I deployed the app and tried to run it.
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 701, in __call__
handler.get(*groups)
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/helloworld2.py", line 13, in get
db=DAL('google:sql://serangoon213home:rainman001/guestbook')
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 5969, in __init__
raise RuntimeError, "Failure to connect, tried %d times:\n%s" % (attempts, tb)
RuntimeError: Failure to connect, tried 5 times:
Traceback (most recent call last):
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 5956, in __init__
self._adapter = ADAPTERS[self._dbname](*args)
File "/base/data/home/apps/jarod-helloworld/2.357593994022416181/dal.py", line 3310, in __init__
self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
File "/base/python_runtime/python_dist/lib/python2.5/_threading_local.py", line 199, in __getattribute__
return object.__getattribute__(self, name)
AttributeError: 'local' object has no attribute 'folder'
It looks like it was due to an error with the 'folder' attribute, which is assigned by the statement:
self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
Does anyone know what this attribute does and how can I resolve this problem?
folder is a parameter in the DAL constructor. It points to the folder where you store DBs (SQLite). Thus, I don't think that's the problem in your case. I would check the connection string again.
From the web2py docs:
The DAL can be used from any Python program simply by doing this:
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite',folder='path/to/app/databases')
i.e. import the DAL, Field, connect and specify the folder which contains the .table files (the app/databases folder).
To access the data and its attributes we still have to define all the tables we are going to access with db.define_tables(...).
If we just need access to the data but not to the web2py table attributes, we can get away without re-defining the tables by simply asking web2py to read the necessary info from the metadata in the .table files:
from gluon import DAL, Field
db = DAL('sqlite://storage.sqlite', folder='path/to/app/databases',
         auto_import=True)
This allows us to access any db.table without need to re-define it.
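Given the traceback, one workaround worth trying is to pass folder explicitly so the adapter never falls back to thread.folder, which is only set when running inside web2py. A sketch, using the question's placeholder project, instance, and database names:

from dal import DAL, Field

# 'myproject:myinstance' and 'mydatabase' are the question's placeholders.
# Passing folder= short-circuits the failing line
#     self.folder = folder or '$HOME/'+thread.folder.split('/applications/',1)[1]
# because 'folder' is truthy, so thread.folder is never touched.
db = DAL('google:sql://myproject:myinstance/mydatabase',
         folder='path/to/app/databases')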

PermanentTaskFailure: 'module' object has no attribute 'Migrate'

I'm using Nick Johnson's Bulk Update library on google appengine (http://blog.notdot.net/2010/03/Announcing-a-robust-datastore-bulk-update-utility-for-App-Engine). It works wonderfully for other tasks, but for some reason with the following code:
from google.appengine.ext import db
from myapp.main.models import Story, Comment
import bulkupdate

class Migrate(bulkupdate.BulkUpdater):
    DELETE_COMPLETED_JOBS_DELAY = 0
    DELETE_FAILED_JOBS = False
    PUT_BATCH_SIZE = 1
    DELETE_BATCH_SIZE = 1
    MAX_EXECUTION_TIME = 10

    def get_query(self):
        return Story.all().filter("hidden", False).filter("visible", True)

    def handle_entity(self, entity):
        comments = entity.comment_set
        for comment in comments:
            s = Story()
            s.parent_story = comment.story
            s.user = comment.user
            s.text = comment.text
            s.submitted = comment.submitted
            self.put(s)

job = Migrate()
job.start()
I get the following error in my logs:
Permanent failure attempting to execute task
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 258, in post
run(self.request.body)
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/deferred/deferred.py", line 122, in run
raise PermanentTaskFailure(e)
PermanentTaskFailure: 'module' object has no attribute 'Migrate'
It seems quite bizarre to me. Clearly that class is right above the job, they're in the same file, and job.start() is clearly being called. Why can't it see my Migrate class?
EDIT: I added this update job in a newer version of the code, which isn't the default. I invoke the job with the correct URL (http://version.myapp.appspot.com/migrate). Is it possible this is related to the fact that it isn't the 'default' version served by App Engine?
It seems likely that your declaration of the 'Migrate' class is in the handler script (e.g., the one directly invoked by app.yaml). A limitation of deferred is that you can't use it to call functions defined in the handler module.
Incidentally, my bulk update library is deprecated in favor of App Engine's mapreduce support; you should probably use that instead.
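A sketch of the usual fix under that explanation: move the class into a regular importable module and keep only the kick-off in the handler script (migrations.py is a hypothetical module name):

# migrations.py -- a normal importable module, NOT the script app.yaml invokes
import bulkupdate
from myapp.main.models import Story


class Migrate(bulkupdate.BulkUpdater):
    def get_query(self):
        return Story.all().filter("hidden", False).filter("visible", True)

    def handle_entity(self, entity):
        # same body as in the question
        ...


# In the handler script (the one app.yaml points at):
# from migrations import Migrate
# Migrate().start()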
