I want to use Django's setUpTestData to prepare some heavy data shared between multiple unit tests. Instances of my model are created with some file fields.
However, from one test to another, the file contents are not renewed, which causes side effects between my tests.
Here is a minimal example:
models.py
from django.db import models

class MyModel(models.Model):
    my_file = models.FileField(upload_to="tests/")
test.py
from django.core.files.base import ContentFile
from django.test import TestCase

from core.models import MyModel

class Test(TestCase):
    @classmethod
    def setUpTestData(cls):
        cls.instance = MyModel()
        cls.instance.my_file.save("file.txt", ContentFile("Hello from setUpTestData"))

    def test_a(self):
        with open(self.instance.my_file.path, "r") as fp:
            self.assertEqual(fp.read(), "Hello from setUpTestData")
        self.instance.my_file.save("file.txt", ContentFile("Modified data"))

    def test_b(self):
        with open(self.instance.my_file.path, "r") as fp:
            self.assertEqual(fp.read(), "Hello from setUpTestData")
        self.instance.my_file.save("file.txt", ContentFile("Modified data"))
Running either test alone works, but running one after the other fails:
AssertionError: 'Modified datatUpTestData' != 'Hello from setUpTestData'
- Modified datatUpTestData
+ Hello from setUpTestData
How can I ensure that the files are correctly reset? Am I affected by these lines from the documentation?
Objects assigned to class attributes in setUpTestData() must support creating deep copies with copy.deepcopy()
I feel like FileField should be handled by Django by default, but it doesn't work. What should I do? Should I override __deepcopy__ on my models? Modifying my code for testing purposes feels like a bad pattern.
I've found a solution using setUp and tearDown (this needs import shutil and from django.conf import settings):
    def setUp(self):  # copy the original files aside
        shutil.copytree(settings.MEDIA_ROOT, "/tmp/tests", dirs_exist_ok=True)

    def tearDown(self):  # restore the original files
        shutil.copytree("/tmp/tests", settings.MEDIA_ROOT, dirs_exist_ok=True)
I have multiple files in a project:
db.py: sets up and handles the firebase_admin connection
model.py: an abstract base class for model CRUD
session.py, config.py, ...: classes that extend the models
main.py: imports the files and coordinates them
main.py basically:
import db

class App():
    ...

main = App()
main.db = db.start(options)
model.py basically:
from abc import ABC, abstractmethod

class Model(ABC):
    ...

    @abstractmethod
    def gen_key(self):
        pass

    def create(self):
        main.db.users_ref.set({
            self.gen_key(): self.output_data()
        })
but model.py cannot access main.
I tried
global main
main = App()
in main.py; that doesn't work.
I also tried importing main in model.py, but got a circular import error.
Do I need to pass main to every single instance? That seems troublesome.
I made model.py because I'm trying to make the different parts of Firebase access more consistent. I learned this kind of thing from the PHP Yii framework; it's for MySQL, but the approach makes sense to me, so I copied it.
You can see a gen_key method in Model. I want every "table" class to have a gen_key method.
For example, Session.gen_key will return self.date,
and Config.gen_key will return self.type + "_" + self.date.
With that, I can just define the key rule and run:
record = Session()
record.date = today
record.save()
Then things would automatically organise themselves in Firebase.
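For concreteness, here is a rough sketch of the pattern I have in mind, building on the Model base class above (field names like date and type, and output_data, are just illustrative), assuming model.py can somehow reach main:
from datetime import date

class Session(Model):
    def gen_key(self):
        # key rule for this "table": just the date
        return str(self.date)

    def output_data(self):
        # illustrative payload for this record
        return {"date": str(self.date)}

class Config(Model):
    def gen_key(self):
        # key rule for this "table": type and date combined
        return self.type + "_" + str(self.date)

record = Session()
record.date = date.today()
record.create()  # Model.create() writes {gen_key(): output_data()} through main.db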
How can I set the default locale in Python's factory_boy for all of my Factories?
The docs say that one should set it with factory.Faker.override_default_locale, but that does nothing to my fakers...
import factory

from app.models import Example
from custom_fakers import CustomFakers

# I use custom fakers; these are indeed added
factory.Faker.add_provider(CustomFakers)
# But the default locale is not changed
factory.Faker.override_default_locale('es_ES')

class ExampleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Example

    name = factory.Faker('first_name')
>>> from example import ExampleFactory
>>> e1 = ExampleFactory()
>>> e1.name
u'Chad'
The Faker.override_default_locale() is a context manager, although it's not very clear from the docs.
As such, to change the default locale for a part of a test:
with factory.Faker.override_default_locale('es_ES'):
    ExampleFactory()
For the whole test:
@factory.Faker.override_default_locale('es_ES')
def test_foo(self):
    user = ExampleFactory()
For all the tests (Django):
# settings.py
TEST_RUNNER = 'myproject.testing.MyTestRunner'

# myproject/testing.py
import factory
import django.test.runner
from django.conf import settings
from django.utils import translation

class MyTestRunner(django.test.runner.DiscoverRunner):
    def run_tests(self, test_labels, extra_tests=None, **kwargs):
        with factory.Faker.override_default_locale(translation.to_locale(settings.LANGUAGE_CODE)):
            return super().run_tests(test_labels, extra_tests=extra_tests, **kwargs)
More on it here.
Update: as I said, this solution is suboptimal:
factory.Faker._DEFAULT_LOCALE is a private field
fake() and faker() use the private interface
fake() doesn't work since factory-boy==3.1.0
if I were to use faker, I'd use it directly, not via factory-boy
You should generally prefer the other answer. Leaving this one for posterity.
Not a good solution, but for now it's as good as it gets. You can change the variable that holds the value:
import factory
factory.Faker._DEFAULT_LOCALE = 'xx_XX'
Moreover, you can create a file like this (app/faker.py):
import factory
from faker.providers import BaseProvider

factory.Faker._DEFAULT_LOCALE = 'xx_XX'

def fake(name):
    return factory.Faker(name).generate({})

def faker():
    return factory.Faker._get_faker()

class MyProvider(BaseProvider):
    def category_name(self):
        return self.random_element(category_names)
    ...

factory.Faker.add_provider(MyProvider)

category_names = [...]
Then, once you import the file, the locale changes. Also, you get your providers and an easy way to use factory_boy's faker outside of the factories:
from app.faker import fake, faker
print(fake('random_int'))
print(faker().random_int())
I'm having the same issue as you. As a temporary workaround, try passing the locale to factory.Faker directly.
For example:
name = factory.Faker('first_name', locale='es_ES')
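For instance, plugged into the ExampleFactory from the question (a sketch; note that this per-declaration locale only affects that one attribute, not any other Faker declarations):
import factory

from app.models import Example

class ExampleFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = Example

    # locale applies to this declaration only
    name = factory.Faker('first_name', locale='es_ES')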
With Django, you can simply insert the following lines in <myproject>/settings.py:
import factory
factory.Faker._DEFAULT_LOCALE = 'fr_FR'
Further to @xelnor's answer, if you're using pytest (instead of Django's manage.py test), add a hookwrapper on the pytest_runtestloop hook in your conftest.py (reusing the imports from the runner example above, plus import pytest) to set the default locale for all the tests:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtestloop(session):
    with factory.Faker.override_default_locale(translation.to_locale(settings.LANGUAGE_CODE)):
        outcome = yield
I have a @memoize decorator in my models, which caches some details on the model itself to avoid multiple database calls when a method is called many times (especially in templates). However, since I store the objects and refer to them in tests, this breaks things.
For example, if I read mygroup.subscribers, add a subscriber, and read it again, it returns an incorrect number of subscribers, since the first result has been memoized.
How can I monkey-patch that decorator to do nothing from my tests.py? I haven't found a way to do it cleanly, since the models get loaded first.
At the beginning of your memoize implementation, check whether you are in testing mode, as per this answer:
from django.core import mail

# at the beginning of your memoize
if hasattr(mail, 'outbox'):
    ...  # return without memoizing
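For illustration, a rough sketch of how that check could sit inside a simple instance-caching memoize decorator (this assumes a particular shape for the decorator, and the cache ignores call arguments, which matches the argument-less model methods described in the question):
import functools

from django.core import mail

def memoize(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # Django's test runner attaches a dummy outbox to django.core.mail,
        # so its presence is a cheap "are we running under the test runner?" check.
        if hasattr(mail, 'outbox'):
            return func(self, *args, **kwargs)  # skip caching in tests
        cache_attr = '_memoized_' + func.__name__
        if not hasattr(self, cache_attr):
            setattr(self, cache_attr, func(self, *args, **kwargs))
        return getattr(self, cache_attr)
    return wrapper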
You can disable your decorator in your test runner; the test environment is set up before the models are loaded.
For example:
from django.test.simple import DjangoTestSuiteRunner

from utils import decorators

class PatchTestSuiteRunner(DjangoTestSuiteRunner):
    def setup_test_environment(self, **kwargs):
        super(PatchTestSuiteRunner, self).setup_test_environment(**kwargs)
        self.__orig_memoize = decorators.memoize
        decorators.memoize = lambda x: x

    def teardown_test_environment(self, **kwargs):
        decorators.memoize = self.__orig_memoize
        super(PatchTestSuiteRunner, self).teardown_test_environment(**kwargs)
Then put in your settings.py:
TEST_RUNNER = 'test.PatchTestSuiteRunner'
And the tests can be run without memoization:
# myapp/models.py
from utils.decorators import memoize

class TestObject(object):
    def __init__(self, value):
        self.value = value

    @memoize
    def get_value(self):
        return self.value

# myapp/test.py
from django.test import TestCase

from .models import TestObject

class NoMemoizeTestCase(TestCase):
    def test_memoize(self):
        t = TestObject(0)
        self.assertEqual(t.get_value(), 0)
        t.value = 1
        self.assertEqual(t.get_value(), 1)
Note that although we restore the original decorator in the test runner's teardown_test_environment, memoization will not be restored on already-decorated functions. It could be restored by using a more complex testing decorator, but that is probably not required in standard use cases.
I'm trying to build a sort of "mini Django model" for working with Django and MongoDB, without using the Django-nonrel distribution (I don't need ORM access for these...).
So, what I'm trying to do is mimic the standard behaviour or "implementation" of Django's default models. This is what I've got so far:
File "models.py" (the base)
from django.conf import settings
import pymongo

class Model(object):
    @classmethod
    def db(cls):
        return pymongo.Connection(settings.MONGODB_CONF['host'], settings.MONGODB_CONF['port'])

    class objects(object):
        @classmethod
        def all(cls):
            db = Model.db()  # not using this yet... not even sure it's the best way to do it
            print Model.collection
File "mongomodels.py" (the implementation)
from mongodb import models

class ModelTest1(models.Model):
    database = 'mymongodb'
    collection = 'mymongocollection1'

class ModelTest2(models.Model):
    database = 'mymongodb'
    collection = 'mymongocollection2'
File "views.py" (the view)
from mongomodels import ModelTest1, ModelTest2
print ModelTest1.objects.all() #Should print 'mymongocollection1'
print ModelTest2.objects.all() #Should print 'mymongocollection2'
The problem is that it's not accessing the attributes of ModelTest1, but those of the original Model. What's wrong?
You must give objects some sort of link to the class that contains it. Currently, you are just hard-coding it to use Model's attributes. Because you are not instantiating these classes, you will have to use either a decorator or a metaclass to create the objects class for you in each subclass of Model.
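A rough sketch of the metaclass route, written in the same Python 2 style as the question (one possible shape, not a drop-in implementation):
import pymongo
from django.conf import settings

class ModelMeta(type):
    def __init__(cls, name, bases, attrs):
        super(ModelMeta, cls).__init__(name, bases, attrs)

        class objects(object):
            model = cls  # link back to the concrete class being created

            @classmethod
            def all(manager):
                db = manager.model.db()
                print manager.model.collection

        cls.objects = objects

class Model(object):
    __metaclass__ = ModelMeta

    @classmethod
    def db(cls):
        return pymongo.Connection(settings.MONGODB_CONF['host'],
                                  settings.MONGODB_CONF['port'])

With this, each subclass gets its own objects class at definition time, so ModelTest1.objects.all() sees 'mymongocollection1' and ModelTest2.objects.all() sees 'mymongocollection2'.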
I want to convert a string to my existing entity. Is it possible to write a convertToEntity() function as below?
class Personel(db.Model):
    name = db.StringProperty()

class IsEntityExists(webapp.RequestHandler):
    def get(self):
        entity = "Personal"
        Entity = entity.convertToEntity()
        Entity.all()
I wonder if the question is just asking to somehow look up the model class given its name, when it has already been imported. You can do this easily (but only when it has already been imported!), as follows:
cls = db.class_for_kind("Personel")
... cls.all() ...
The equivalent in NDB:
cls = ndb.Model._kind_map["Personel"]
... cls.query() ...
Good luck!
PS. No, it won't do spell correction. :-)
Only if you build a loader for your models... for example:
from app import model_loader

class IsEntityExists(webapp.RequestHandler):
    def get(self):
        Entity = model_loader("Personal")
        Entity.all()
The model_loader function would search the folder structure (the Python modules) for the requested model. For example, say you have this folder structure:
models/
    personal.py
    other_model.py
    user.py
Then model_loader("Personal") would import personal.py and extract the "Personal" class from that module, allowing you to do whatever you want with that class, if it finds and loads it.
Of course you would have to code the loader.
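A minimal sketch of such a loader, assuming each module under a models package is named after the lowercased class it defines (as in the listing above):
import importlib

def model_loader(name):
    # e.g. model_loader("Personal") -> import models/personal.py -> class Personal
    try:
        module = importlib.import_module('models.' + name.lower())
    except ImportError:
        return None
    return getattr(module, name, None)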
However, if the class (the defined model) is in the same file as the code, you can look it up in the module's global namespace:
def load_model(name):
    # globals() holds the module-level names, including the model classes
    # defined in this file; locals() inside this function would not.
    try:
        return globals()[name]
    except KeyError:
        return None