I have a function-based view in Django that receives an ID for a model instance, retrieves a file path from it, and deletes the file using os.remove:
image = Images.objects.get(id=image_id)
os.remove(image.file)
The image_id is valid and is part of my fixture.
What's the best way to write a test for this view without manually creating a file each time I run the tests?
Is there a way to change the behavior of the os.remove function for tests?
Yes. It's called mocking, and there is a Python library for it: mock. It is available in the standard library as unittest.mock for Python 3.3+, or as the standalone mock package for earlier versions.
So you would do something like this:
from mock import patch
...
@patch('mymodel_module.os.remove')
def test_my_method(self, mocked_remove):
    call_my_model_method()
    self.assertTrue(mocked_remove.called)
(where mymodel_module is the models.py where your model is defined, and which presumably imports os.)
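Putting it together for the view in the question, a test could look something like this (a sketch: the view module myapp.views, the URL, and the fixture name are all hypothetical):
from django.test import TestCase
from mock import patch

class DeleteImageViewTest(TestCase):
    fixtures = ['images.json']  # hypothetical fixture containing image_id

    @patch('myapp.views.os.remove')  # patch os.remove where the view looks it up
    def test_view_removes_file(self, mocked_remove):
        self.client.get('/images/1/delete/')  # hypothetical URL for the view
        self.assertTrue(mocked_remove.called)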
I created a custom module that overrides the message_new method of the mail.thread model to allow creating quotations from incoming emails, by setting the values of the required fields ("partner_id" etc.) based on the content of the incoming email. Everything worked correctly, with my method being called instead of the original one as expected.
I'm now trying to move that code into another custom module. I placed the Python file within the custom module folder, added the import to the module's __init__.py file, and added "mail" to the depends section of the __openerp__.py file, just as I did before.
But with the new custom module installed, the message_new method isn't being overridden and the original method from mail.thread is called instead. This custom module inherits the sale.order and sale.order.line models, and the changes it makes to those models are being executed, so I have no idea why mail.thread isn't being affected. The only difference between this new custom module and the old one is that the new module inherits and changes several models at once rather than only mail.thread.
Has anyone experienced problems inheriting a model and overriding its methods like this before?
Update:
Based on the answer to this question I guess it's a bug:
How to inherit Mail.Thread AbstractModel and override function from this class in Odoo?
The workaround in that question didn't work exactly as-is for me, but removing the message_new_orig declaration and its subsequent call, like so, did:
from openerp.addons.mail.mail_thread import mail_thread

def message_new(self, cr, uid, msg_dict, custom_values=None, context=None):
    # put custom code here
    # ...
    return res_id

# install override
mail_thread.message_new = message_new
The setup of the problem is simple enough:
a user selects a language preference (this preference can be read from the user’s session);
based on this choice, load the appropriate .mo from the available translations;
(no separate domains are set up, if it makes any difference)
Problem: since this has to happen outside the scope of the Flask app, the app cannot be instantiated and the @babel.localeselector decorator cannot be used. Instead, I use a simple function based on the webapp2 i18n extension which, using Babel’s support function, loads a given translation and returns a translation instance (Translations: "PROJECT VERSION"). (inb4 ‘why not use webapp2 already?’: too many libs already.)
From this point on, it is not clear to me what to do with this instance. How can I get Babel to use this specific instance? (At the moment, it always uses the default one; no 'best_match' involved.)
Solved by just using the Flask app in exactly the way I wanted to avoid: on every request there is a callback to the app instance through the localeselector decorator, and the language is set beforehand in an attribute on flask.g. Basically by the books, I guess.
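For reference, a minimal sketch of that "by the books" setup, assuming the classic Flask-Babel localeselector API (the handler names and the 'lang' session key are illustrative):
from flask import Flask, g, session
from flask_babel import Babel

app = Flask(__name__)
app.secret_key = 'dev'  # sessions need a secret key
babel = Babel(app)

@app.before_request
def set_language():
    # the user's stored preference, read from the session
    g.lang = session.get('lang', 'en')

@babel.localeselector
def get_locale():
    # Flask-Babel calls this on every request to pick the translation
    return g.lang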
I have two modules. One is the core of the website based on web.py (let's name it code.py), and the other is an add-on module (addon.py). Using web.py, for each page that the website serves there should be a class definition in the core, like this:
class Page:
    def GET(self):
        variable = "Hello!"
        return render.page_template(variable)  # Here, it returns the rendered template to the user

    def POST(self):
        post_variables = web.input()
        pass  # Doing something with those variables, maybe writing to a database...
Now I really need to move that class definition from code.py to addon.py. I can refer to the class definition as addon.Page instead of simply Page, and the Page.GET function still works well... But there's one problem with POST. It seems that at each call of the POST function, web.input() in the core module is set as a storage object containing all the variables. And if my class definition is stored in addon, the core simply calls addon.Page.POST() (I see no way to change this behaviour). POST() tries to get web.input()... and fails, of course: web is not imported in addon.py, and even if it were, I assumed there wouldn't be any value from the running web server, just an empty dictionary, since it would be just another instance of the module. So I don't know...
One solution would be putting some kind of function in addon.Page.POST() that goes one level down to code.py, executes web.input() there, and returns the result back to addon.py: some kind of access to the parent module's namespace (like doing import __main__ and accessing __main__.web.input(), which, as I know, is discouraged).
Or, for example, putting some kind of C-like pointer that would be shared between the modules, like:
* in code.py there's a definition so that all calls to code.addon.web_input() get routed to code.web.input()
* in addon.py you simply call addon.web_input() to get the info from code.web.input()
What do I do in this situation? There will be multiple addons, each with its class definitions stored in the addon itself, and I should be able to add new modules and connect and disconnect existing modules easily, without any need to modify code.py. I believe this is possible in Python... Maybe web.py's source needs modifying then?
I guess I'll turn my comment into an answer, since it seems to have solved your issue.
Modules that are imported are cached in Python. That means that when you import a module like web (the main web.py module) from multiple other modules, they'll all get the same module object, with the same contents.
So, probably all you need to do is import web at the top of your addon.py module.
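In other words, a sketch of the fix (module and class names as in the question; the handler bodies are illustrative):
# addon.py -- importing web here yields the *same* module object the
# core already uses, thanks to Python's module cache (sys.modules)
import web

class Page:
    def GET(self):
        return "Hello from the addon!"

    def POST(self):
        # web.input() works here too: the request state lives in the
        # shared web module, not in code.py
        post_variables = web.input()
        return "Got %d variables" % len(post_variables)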
I have a utility function in my Django project that takes a queryset, gets some data from it, and returns a result. I'd like to write some tests for this function. Is there any way to 'mock' a QuerySet? I'd like to create an object that doesn't touch the database, that I can provide with a list of values to use (i.e. some fake rows), and that will then act just like a queryset, allowing field lookups/filter/get/all etc.
Does anything like this exist already?
For an empty QuerySet, I'd go simply for using none(), as keithhackbarth has already stated.
However, to mock a QuerySet that will return a list of values, I prefer to use a Mock with a spec of the model's manager. As an example (Python 2.7 style; I've used the external mock library), here's a simple test where the queryset is filtered and then counted:
from django.test import TestCase
from mock import Mock

from .models import Example


def queryset_func(queryset, filter_value):
    """
    An example function to be tested
    """
    return queryset.filter(stuff=filter_value).count()


class TestQuerysetFunc(TestCase):

    def test_happy(self):
        """
        `queryset_func` filters provided queryset and counts result
        """
        m_queryset = Mock(spec=Example.objects)
        m_queryset.filter.return_value = m_queryset
        m_queryset.count.return_value = 97

        result = queryset_func(m_queryset, '__TEST_VALUE__')

        self.assertEqual(result, 97)
        m_queryset.filter.assert_called_once_with(stuff='__TEST_VALUE__')
        m_queryset.count.assert_called_once_with()
However, to fulfil the question, instead of setting a return_value for count, this could easily be adjusted to return a list of model instances from all().
Note that chaining is handled by setting the filter to return the mocked queryset:
m_queryset.filter.return_value = m_queryset
This would need to be applied for any queryset method used in the function under test, e.g. exclude, etc.
Of course you can mock a QuerySet, you can mock anything.
You can create an object yourself, and give it the interface you need, and have it return any data you like. At heart, mocking is nothing more than providing a "test double" that acts enough like the real thing for your tests' purposes.
The low-tech way to get started is to define an object:
class MockQuerySet(object):
    pass
then create one of these and hand it to your test. The test will fail, likely with an AttributeError. That will tell you what you need to implement on your MockQuerySet. Repeat until your object is rich enough for your tests.
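One possible next iteration, as a sketch: back the fake with a plain list and implement only the handful of methods the code under test actually calls (the field-lookup handling here is deliberately naive):
class MockQuerySet(object):
    def __init__(self, rows):
        self._rows = list(rows)  # the fake rows supplied by the test

    def all(self):
        return self

    def filter(self, **kwargs):
        # naive exact-match lookups; real querysets support far more
        matches = [row for row in self._rows
                   if all(getattr(row, field) == value
                          for field, value in kwargs.items())]
        return MockQuerySet(matches)

    def count(self):
        return len(self._rows)

    def __iter__(self):
        return iter(self._rows)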
I am having the same issue, and it looks like some nice person has written a library for mocking QuerySets. It is called mock-django, and the specific code you will need is here: https://github.com/dcramer/mock-django/blob/master/mock_django/query.py. I think you can then just patch your model's objects attribute to return one of the QuerySetMock objects that you have set up to return something expected!
For this I use Django's .none() function.
For example:
class Location(models.Model):
    name = models.CharField(max_length=100)

mock_locations = Location.objects.none()
This is the method used frequently in Django's own internal test cases. Based on comments in the code:
Calling none() will create a queryset that never returns any objects and no
query will be executed when accessing the results. A qs.none() queryset
is an instance of ``EmptyQuerySet``.
Try out the django_mock_queries library, which lets you mock out the database access and still use some of the Django QuerySet features like filtering.
Full disclosure: I contributed some features to the project.
Have you looked into FactoryBoy? https://factoryboy.readthedocs.io/en/latest/orms.html
It's a fixtures replacement tool with support for the Django ORM; factories basically generate ORM-like objects (either in memory or in a test database).
Here's a great article for getting started: https://www.caktusgroup.com/blog/2013/07/17/factory-boy-alternative-django-testing-fixtures/
A first piece of advice would be to split the function in two parts: one that creates the queryset and one that manipulates its output. That way, testing the second part is straightforward.
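For instance, a sketch of the split (model and field names are illustrative):
def locations_for_city(city):
    # the thin ORM part: needs a database to test
    return Location.objects.filter(city=city)

def location_names(rows):
    # the pure part: accepts any iterable, so tests can pass a plain list
    return sorted(set(row.name for row in rows))
location_names can then be tested with a list of simple stub objects; only locations_for_city ever touches the database.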
For the database problem, I investigated whether Django uses SQLite in memory, and found that recent versions of Django do use an in-memory SQLite database for tests. From the Django unittest page:
When using the SQLite database engine the tests will by default use an
in-memory database (i.e., the database will be created in memory,
bypassing the filesystem entirely!).
Mocking the QuerySet object will not make you exercise its full logic.
You can mock like this:
@patch('django.db.models.query.QuerySet')
def test_returning_distinct_records_for_city(self, mock_qs):
    # call the code under test here, then check the mock was used:
    self.assertTrue(mock_qs.called)
Not that I know of, but why not use an actual queryset? The test framework is all set up to allow you to create sample data within your test, and the database is re-created on every test, so there doesn't seem to be any reason not to use the real thing.
What is the best idea to fill up data into a Django model from an external source?
E.g. I have a model Run, and run data in an XML file which changes weekly.
Should I create a view and call that view's URL from a curl cronjob (with the advantage that the data can be loaded at any time, not only when the cronjob runs), or create a Python script and install that script as a cronjob (with the DJANGO_SETTINGS_MODULE variable set up before executing the script)?
There is an excellent way to do maintenance-like jobs in the project environment: write a custom manage.py command. It picks up all the environment configuration for you, so you can concentrate on the concrete task.
And of course you can call it directly from cron.
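A minimal sketch of such a command, assuming Django 1.8+ argument handling (the app name, command name, and body are hypothetical); it would live at myapp/management/commands/load_runs.py:
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Load weekly Run data from an XML file"

    def add_arguments(self, parser):
        parser.add_argument('xml_path')

    def handle(self, *args, **options):
        # parse options['xml_path'] and update the Run model here
        self.stdout.write("loaded %s" % options['xml_path'])
Cron can then run it like any other command: python manage.py load_runs /path/to/runs.xml.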
You don't need to create a view; you should just trigger a Python script with the appropriate Django environment settings configured. Then call your models directly, the way you would if you were using a view, process your data, add it to your model, and then .save() the model to the database.
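A sketch of such a standalone script, assuming Django 1.7+ (the settings module, model, and field names are hypothetical):
import os
import django

# point at your settings before importing any models
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
django.setup()  # Django 1.7+; older versions only needed the env variable

from myapp.models import Run  # safe to import models only after setup()

def main():
    run = Run(name='weekly import')  # hypothetical fields
    run.save()  # persists exactly as it would from a view

if __name__ == '__main__':
    main()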
I've used cron to update my DB using both a script and a view. From cron's point of view it doesn't really matter which one you choose. As you've noted, though, it's hard to beat the simplicity of firing up a browser and hitting a URL if you ever want to update at a non-scheduled interval.
If you go the view route, it might be worth considering a view that accepts the XML file itself via an HTTP POST. If that makes sense for your data (you don't give much information about that XML file), it would still work from cron, but could also accept an upload from a browser -- potentially letting the person who produces the XML file update the DB by themselves. That's a big win if you're not the one making the XML file, which is usually the case in my experience.
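A sketch of what that upload view could look like (the view name and the parsing step are placeholders):
from django.http import HttpResponse, HttpResponseNotAllowed
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt  # cron/curl clients won't send a CSRF token
def upload_runs(request):
    if request.method != 'POST':
        return HttpResponseNotAllowed(['POST'])
    xml_bytes = request.body  # the raw XML payload
    # ... parse xml_bytes and update the Run model here ...
    return HttpResponse('ok')
From cron it could be driven with something like curl -X POST --data-binary @runs.xml http://example.com/upload-runs/.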
"create a python script and install that script as a cron (with DJANGO _SETTINGS _MODULE variable setup before executing the script)?"
First, be sure to declare your Forms in a separate module (e.g. forms.py)
Then, you can write batch loaders that look like this. (We have a LOT of these.)
from myapp.forms import MyObjectLoadForm
from myapp.models import MyObject
import xml.etree.ElementTree as ET

def xmlToDict(element):
    return dict(
        field1=element.findtext('tag1'),
        field2=element.findtext('tag2'),
    )

def loadRow(aDict):
    f = MyObjectLoadForm(aDict)
    if f.is_valid():
        f.save()

def parseAndLoad(someFile):
    doc = ET.parse(someFile).getroot()
    for tag in doc.getiterator("someTag"):
        loadRow(xmlToDict(tag))
Note that there is very little unique processing here -- it just uses the same Form and Model as your view functions.
We put these batch scripts in with our Django application, since it depends on the application's models.py and forms.py.
The only "interesting" part is transforming your XML row into a dictionary so that it works seamlessly with Django's forms. Other than that, this command-line program uses all the same Django components as your view.
You'll probably want to add option parsing and logging to make a complete command-line app out of this. You'll also notice that much of the logic is generic; only the xmlToDict function is truly unique. We call these "Builders" and have a class hierarchy so that our Builders are all polymorphic mappings from our source documents to Python dictionaries.
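A sketch of such a command-line wrapper around parseAndLoad (the flag names are illustrative):
import argparse
import logging

def main():
    parser = argparse.ArgumentParser(description="Load XML files into MyObject")
    parser.add_argument('files', nargs='+', help="XML files to load")
    parser.add_argument('-v', '--verbose', action='store_true')
    args = parser.parse_args()
    logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
    for name in args.files:
        logging.info("loading %s", name)
        parseAndLoad(name)

if __name__ == '__main__':
    main()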