Define variable for testing only in Python

This is probably a relatively simple question, but I have struggled to find an answer elsewhere.
I am working on a small Python project, and I would like to do this in a test driven way. File operations in a defined folder in the User's home directory are quite essential to the programme, but I would like these to happen in a separate temporary folder for the tests.
Is there a way to set a variable in my app that is different if the app realises it is being run as part of the test suite? My current workarounds (a def some_function(self, test=False) parameter, or lots of @patch decorators) are not very elegant...
I'm thinking of something along the lines of:
def return_home_folder():
    if testing:
        home = os.getcwd() + '/testrun'
    else:
        home = os.path.expanduser('~')
    returnvalue = home + '/appname/'
    return returnvalue

IMHO it is not a good idea to have a function behave differently in test vs. production. The whole point of tests is to foretell how the program will behave in production, and changing the behaviour kinda defeats that.
(Unit Testing is different from a "dry run", but that's a separate issue.)
I'd go for something like this:
def get_app_folder():
    return os.path.join(os.path.expanduser("~"), "appname")

def test_get_app_folder():
    assert get_app_folder().startswith(os.path.expanduser("~"))
    assert get_app_folder().endswith("appname")
The unit tests themselves aren't overly instructive, but they show how you can work around the need for a "testing mode" altogether.
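If some calling code really does need the folder to live somewhere else during a test, passing the base directory in as a parameter keeps production behaviour unchanged. A minimal sketch (the tmp_path fixture assumes you run the tests with pytest):
import os

def get_app_folder(base_dir=None):
    # In production the default is the real home directory.
    if base_dir is None:
        base_dir = os.path.expanduser("~")
    return os.path.join(base_dir, "appname")

def test_get_app_folder_honours_base_dir(tmp_path):
    # The test injects a throwaway directory instead of flipping a flag.
    assert get_app_folder(str(tmp_path)) == os.path.join(str(tmp_path), "appname")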

You could define your environment in an environment variable:
$ export MY_APP_ENVIRONMENT=test
Read it in a settings.py module:
import os
ENVIRONMENT = os.environ['MY_APP_ENVIRONMENT']
_base_dir_map = {
    'test': os.path.join(os.getcwd(), 'testrun'),
    'prod': os.path.expanduser('~'),
}
HOME_DIR = os.path.join(_base_dir_map[ENVIRONMENT], 'appname')
Then, everywhere (tests and app code), you would use settings.HOME_DIR:
import os
from my_app import settings
file_path = os.path.join(settings.HOME_DIR, 'filename')
...
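Note that settings.py as written raises a KeyError if MY_APP_ENVIRONMENT is unset, so the test harness has to set the variable before settings is imported. One way to do that (assuming pytest; the conftest.py placement is my assumption) is:
# conftest.py - imported by pytest before the test modules import settings
import os

os.environ.setdefault('MY_APP_ENVIRONMENT', 'test')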
Hope this works for you or gets you on a track to something that does.

Related

Check name of running script file in module

I have 2 app files that import the same module:
# app1.py
import settings as s
# ... more code ...

# app2.py
import settings as s
# ... more code ...
Inside the module I need to check whether the first or the second app is running:
# settings.py
# pseudocode
if running app1.py:
    print('app1')
else:
    print('app2')
I looked at the inspect module but have no idea how to do it.
I am also open to any better solution.
EDIT: I feel a bit foolish (I guess it is easy)
I tried:
var = None

def foo(a):
    var = a
    print(var)
but var is still None.
I'm not sure it is possible for an importee to know who imported it. Even if it were, it sounds like a code smell to me.
Instead, delegate the decision about what actions to take to app1 and app2, rather than having settings make that decision.
For example:
settings.py
def foo(value):
    if value == 'app1':
        ...  # do something
    else:
        ...  # do something else
app1.py
from settings import foo
foo('app1')
And so on.
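For completeness, app2.py would do the same with its own name:
# app2.py
from settings import foo
foo('app2')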
To assign within a function and have the assignment reflected in a global variable, use the global statement. Example:
A.py
var = None

def foo(a):
    global var
    var = a

def print_var():
    print(var)
test.py
import A
A.print_var()
A.foo(123)
A.print_var()
Output:
None
123
Note that globals aren't recommended in general as a programming practice, so use them as little as possible.
I think your current approach is not the best way to solve your issue. You can solve this, too, by modifying settings.py slightly. You have two possible ways to go: either the solution of coldspeed, or using delegates. Either way, you have to store the code of your module inside functions.
Another way to solve this issue would be (depending on how many lines of code depend on the app name) to pass a function/delegate to the function as a parameter, like this:
# settings.py
def theFunction(otherParameters, callback):
    # do something
    callback()

# app1.py
from settings import theFunction

def clb():
    print("called from settings.py")
    # do something app-specific here

theFunction(otherParameters, clb)
This appears to be a cleaner solution compared to the inspect solution, as it allows a better separation of the two modules.
Whether you should choose the first or the second version depends highly on the scope of the application; maybe you could provide us with more information about the broader issue you are trying to solve.
As others have said, perhaps this is not the best way to achieve it. If you do want to though, how about using sys.argv to identify the calling module?
app:
import settings as s
settings:
import sys
import os

print(sys.argv[0])
# \\path\\to\\app.py
print(os.path.split(sys.argv[0])[-1])
# app.py
Of course, this gives you the file that was originally run from the command line, so if this is part of a further nested set of imports it won't work for you.
This works for me.
import inspect
import os

curframe = inspect.currentframe()
calframe = inspect.getouterframes(curframe, 1)
if os.path.basename(calframe[1][1]) == 'app1.py':
    print('app1')
else:
    print('app2')

Pytest and Dynamic fixture modules

I am writing functional tests using pytest for a software that can run locally and in the cloud. I want to create 2 modules, each with the same module/fixture names, and have pytest load one or the other depending if I'm running tests locally or in the cloud:
/fixtures
/fixtures/__init__.py
/fixtures/local_hybrids
/fixtures/local_hybrids/__init__.py
/fixtures/local_hybrids/foo.py
/fixtures/cloud_hybrids
/fixtures/cloud_hybrids/__init__.py
/fixtures/cloud_hybrids/foo.py
/test_hybrids/test_hybrids.py
foo.py (both of them):
import pytest
@pytest.fixture()
def my_fixture():
    return True
/fixtures/__init__.py:
if True:
    import local_hybrids as hybrids
else:
    import cloud_hybrids as hybrids
/test_hybrids/test_hybrids.py:
from fixtures.hybrids.foo import my_fixture
def test_hybrid(my_fixture):
    assert my_fixture
The last code block doesn't work, of course, because import fixtures.hybrids looks at the file system instead of the "fake" namespace set up in __init__.py. from fixtures import hybrids does work, but then you cannot use the fixtures directly, as the names would involve dot notation.
I realize that I could play with pytest_generate_tests to alter the fixtures dynamically (maybe?), but I'd really hate managing each fixture manually from within that function... I was hoping the dynamic import (if x, import this, else import that) was standard Python; unfortunately it clashes with the fixtures mechanism:
import fixtures

def test(fixtures.hybrids.my_fixture):  # of course it doesn't work :)
    ...
I could also import each fixture function one after the other in __init__; more legwork, but still a viable option to fool pytest and get fixture names without dots.
Show me the black magic. :) Can it be done?
I think in your case it's better to define a fixture, called environment or some other nice name.
This fixture can be just a getter from os.environ['KEY'], or you can add a custom command line argument like here,
then use it like here,
and the final use is here.
What I'm trying to say is that you need to switch your thinking to dependency injection: everything should be a fixture. In your case (and in my plugin as well), the runtime environment should be a fixture, which is checked in all other fixtures that depend on the environment.
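A minimal sketch of that idea (the environment variable name and the example dependent fixture are my own invention):
import os
import pytest

@pytest.fixture(scope='session')
def environment():
    # 'local' is the assumed default when the variable is unset.
    return os.environ.get('TEST_ENVIRONMENT', 'local')

@pytest.fixture()
def database_url(environment):
    # Fixtures that care about the runtime environment depend on the
    # environment fixture instead of checking global state themselves.
    if environment == 'cloud':
        return 'postgres://cloud-host/db'
    return 'sqlite:///local.db'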
You might be missing something here: If you want to re-use those fixtures you need to say it explicitly:
from fixtures.hybrids.foo import my_fixture
@pytest.mark.usefixtures('my_fixture')
def test_hybrid(my_fixture):
    assert my_fixture
In that case you could tweak pytest as follows:
from local_hybrids import local_hybrids_fixture
from cloud_hybrids import cloud_hybrids_fixture

fixtures_to_test = {
    "local": None,
    "cloud": None,
}

@pytest.mark.usefixtures("local_hybrids_fixture")
def test_add_local_fixture(local_hybrids_fixture):
    fixtures_to_test["local"] = local_hybrids_fixture

@pytest.mark.usefixtures("cloud_hybrids_fixture")
def test_add_cloud_fixture(cloud_hybrids_fixture):
    fixtures_to_test["cloud"] = cloud_hybrids_fixture

def test_on_fixtures():
    if cloud_enabled:
        fixture = fixtures_to_test["cloud"]
    else:
        fixture = fixtures_to_test["local"]
    ...
If there are better solutions around I am also interested ;)
I don't really think there is a "good way" of doing that in Python, but it is still possible with a little hacking. You can add the subfolder containing the fixtures you want to use to sys.path and import them directly. A quick-and-dirty version looks like this:
for your fixtures/__init__.py:
if True:
    import local as hybrids
else:
    import cloud as hybrids

def update_path(module):
    from sys import path
    from os.path import join, pardir, abspath
    mod_dir = abspath(join(module.__file__, pardir))
    path.insert(0, mod_dir)

update_path(hybrids)
and in the client code (test_hybrids/test_hybrids.py):
import fixtures
from foo import spam
spam()
In other cases you can use much more elaborate tricks to fake-move all modules/packages/functions etc. from your cloud/local folder directly into the fixtures package's __init__.py. Still, I don't think it is worth the effort.
One more thing: black magic is not the best thing to rely on. I would recommend using dotted notation with "from Y import X"; that is a much more stable solution.
Use the pytest plugins feature and put your fixtures in separate modules. Then at runtime select which plug-in you’ll be drawing from via a command line argument or an environment variable. It needs to be something global because you need to place different pytest_plugins list assignments based on the global value.
Take a look at the section Conditional Plugins from this repo https://github.com/jxramos/pytest_behavior/tree/main/conditional_plugins
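A rough sketch of that conditional selection in a top-level conftest.py (the environment variable and module paths are placeholders for your layout):
# conftest.py
import os

# pytest loads the plugins named here; pick the set at startup.
if os.environ.get('HYBRIDS_ENV') == 'cloud':
    pytest_plugins = ['fixtures.cloud_hybrids.foo']
else:
    pytest_plugins = ['fixtures.local_hybrids.foo']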

Automatically delete MEDIA_ROOT between tests

I was wondering if it were possible, and preferably not too difficult, to use Django DiscoverRunner to delete my media directory between every test, including once at the very beginning and once at the very end. I was particularly interested in the new attributes "test_suite" and "test_runner" that were introduced in Django 1.7 and was wondering if they would make this task easier.
I was also wondering how I can make the test-specific MEDIA_ROOT a temporary directory; currently I have a regular MEDIA_ROOT called "media" and a testing MEDIA_ROOT called "media_test", and I use rmtree in the setUp and tearDown of every test class that involves the media directory. The way I specify which MEDIA_ROOT to use is in my test.py settings file; currently I just have:
MEDIA_ROOT = normpath(join(DJANGO_ROOT, 'media_test'))
Is there a way I can set MEDIA_ROOT to a temporary directory named "media" instead?
This question is a bit old, my answer is from Django 2.0 and Python 3.6.6 or later. Although I think the technique works on older versions too, YMMV.
I think this is a much more important question than it gets credit for! When you write good tests, it's only a matter of time before you need to whip up or generate test files. Either way, you're in danger of polluting the file system of your server or developer machine. Neither is desirable!
I think the write-up on this page is a best practice. I'll copy/paste the code snippet below if you don't care about the reasoning (more notes afterwards):
----
First, let’s write a basic, really basic, model
from django.db import models

class Picture(models.Model):
    picture = models.ImageField()
Then, let's write a really, really basic test.
from PIL import Image
import tempfile
from django.test import TestCase
from .models import Picture
from django.test import override_settings

def get_temporary_image(temp_file):
    size = (200, 200)
    color = (255, 0, 0)  # JPEG has no alpha channel, so use RGB
    image = Image.new("RGB", size, color)
    image.save(temp_file, 'jpeg')
    return temp_file

class PictureDummyTest(TestCase):
    @override_settings(MEDIA_ROOT=tempfile.TemporaryDirectory(prefix='mediatest').name)
    def test_dummy_test(self):
        temp_file = tempfile.NamedTemporaryFile()
        test_image = get_temporary_image(temp_file)
        # test_image.seek(0)
        picture = Picture.objects.create(picture=test_image.name)
        print("It Worked!, ", picture.picture)
        self.assertEqual(len(Picture.objects.all()), 1)
----
I made one important change to the code snippet: TemporaryDirectory().name. The original snippet used gettempdir(). The TemporaryDirectory function creates a new folder with a system-generated name every time it is called. That folder will be removed by the OS - but we don't know when! This way, we get a new folder each run, so there is no chance of name conflicts. Note I had to add the .name element to get the name of the generated folder, since MEDIA_ROOT has to be a string. Finally, I added prefix='mediatest' so all the generated folders are easy to identify in case I want to clean them up in a script.
Also potentially useful to you is how the settings override can easily be applied to a whole test class, not just one test function. See this page for details.
Also note that in the comments after that article, some people show an even easier way to get a temp file name using NamedTemporaryFile, without worrying about the media settings (only valid for tests that don't use the media settings!).
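For reference, applying the override at class level might look like this (a sketch based on the snippet above, not taken from the article itself):
import tempfile
from django.test import TestCase, override_settings

# One temporary directory shared by every test in the class.
TEMP_MEDIA = tempfile.TemporaryDirectory(prefix='mediatest')

@override_settings(MEDIA_ROOT=TEMP_MEDIA.name)
class PictureClassLevelTest(TestCase):
    def test_upload(self):
        ...  # anything saved under MEDIA_ROOT lands in TEMP_MEDIA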
The answer by Richard Cooke works but leaves the temporary directories lingering in the file system, at least on Python 3.7 and Django 2.2. This can be avoided by using a combination of setUpClass, tearDownClass and overriding the settings in the test methods. For example:
import tempfile

from django.test import TestCase

class ExampleTestCase(TestCase):
    temporary_dir = None

    @classmethod
    def setUpClass(cls):
        cls.temporary_dir = tempfile.TemporaryDirectory()
        super(ExampleTestCase, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.temporary_dir = None
        super(ExampleTestCase, cls).tearDownClass()

    def test_example(self):
        with self.settings(MEDIA_ROOT=self.temporary_dir.name):
            # perform a test
            pass
This way the temporary files are removed right away, and you don't need to worry about the name of the temporary directory either. (Of course, if you want, you can still pass the prefix argument when calling tempfile.TemporaryDirectory.)
One solution I have found that works is to simply delete it in setUp/tearDown. I would prefer finding some way to make it apply to all tests automatically, instead of having to put the logic in every test file that involves media, but I have not figured out how to do that yet.
The code I use is:
from shutil import rmtree

from django.conf import settings
from django.test import TestCase

class MyTests(TestCase):
    def setUp(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)

    def tearDown(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)
The reason I do it in both setUp and tearDown is that if I only had it in setUp I might end up with a lingering media_test directory. Even though it won't be checked in to GitHub by accident (it's in the .gitignore), it still takes up unnecessary space in my project explorer, and I prefer not having it sit there. If I only had it in tearDown, I would risk problems if I quit the tests partway through and a later run executed a test involving media while the media from the terminated run still lingered.
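One way to at least avoid repeating the logic in every test file (my own refactor, not a full "apply to all tests automatically" solution) is to move it into a shared base class:
from shutil import rmtree

from django.conf import settings
from django.test import TestCase

class MediaTestCase(TestCase):
    # Test classes that touch media inherit from this instead of TestCase.
    def setUp(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)

    def tearDown(self):
        rmtree(settings.MEDIA_ROOT, ignore_errors=True)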
Something like this?
TESTING_MODE = True
...
MEDIA_ROOT = os.path.join(DJANGO_ROOT, 'media_test' if TESTING_MODE else 'media')

Test if code is executed from within a py.test session

I'd like to connect to a different database if my code is running under py.test. Is there a function to call or an environment variable that I can test that will tell me if I'm running under a py.test session? What's the best way to handle this?
A simpler solution I came to:
import sys

if "pytest" in sys.modules:
    ...
Pytest runner will always load the pytest module, making it available in sys.modules.
Of course, this solution only works if the code you're trying to test does not use pytest itself.
There's also another way documented in the manual:
https://docs.pytest.org/en/latest/example/simple.html#pytest-current-test-environment-variable
Pytest sets the environment variable PYTEST_CURRENT_TEST while tests are running.
Checking the existence of said variable should reliably allow one to detect if code is being executed from within the umbrella of pytest.
import os

if "PYTEST_CURRENT_TEST" in os.environ:
    # We are running under pytest; act accordingly...
    ...
Note: this method works only while an actual test is being run; the detection will not work when modules are imported during pytest collection.
A solution came from RTFM, although not in an obvious place. The manual also had an error in code, corrected below.
Detect if running from within a pytest run
Usually it is a bad idea to make application code behave differently
if called from a test. But if you absolutely must find out if your
application code is running from a test you can do something like
this:
# content of conftest.py

def pytest_configure(config):
    import sys
    sys._called_from_test = True

def pytest_unconfigure(config):
    import sys  # this import was missing from the manual
    del sys._called_from_test
and then check for the sys._called_from_test flag:
if hasattr(sys, '_called_from_test'):
    # called from within a test run
    ...
else:
    # called "normally"
    ...
accordingly in your application. It's also a good idea to use your own application module rather than sys for holding the flag.
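Following that advice, the flag could live in a module of your own instead of sys. A sketch (myapp.runtime is a hypothetical module defining called_from_test = False):
# content of conftest.py
import myapp.runtime  # hypothetical application module

def pytest_configure(config):
    myapp.runtime.called_from_test = True

def pytest_unconfigure(config):
    myapp.runtime.called_from_test = False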
Working with pytest==4.3.1 the methods above failed, so I just went old school and checked with:
import os
import sys

script_name = os.path.basename(sys.argv[0])
if script_name in ['pytest', 'py.test']:
    print('Running with pytest!')
While the hack explained in the other answer (http://pytest.org/latest/example/simple.html#detect-if-running-from-within-a-pytest-run) does indeed work, you could probably design the code in such a way you would not need to do this.
If you design the code to take the database to connect to as an argument somehow, via a connection or something else, then you can simply inject a different argument when running the tests than when the application drives it. Your code will end up with less global state and will be more modular and reusable. So to me it sounds like an example where testing drives you to design the code better.
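A small sketch of that injection idea (the schema and function names are only illustrative):
import sqlite3

def fetch_user_names(connection):
    # The code under test receives a connection instead of creating one.
    return [row[0] for row in connection.execute('SELECT name FROM users')]

def test_fetch_user_names():
    # The test injects a throwaway in-memory database.
    connection = sqlite3.connect(':memory:')
    connection.execute('CREATE TABLE users (name TEXT)')
    connection.execute("INSERT INTO users VALUES ('alice')")
    assert fetch_user_names(connection) == ['alice']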
This could be done by setting an environment variable inside the testing code. For example, given a project
conftest.py
mypkg/
    __init__.py
    app.py
tests/
    test_app.py
In test_app.py you can add
import os
os.environ['PYTEST_RUNNING'] = 'true'
And then you can check inside app.py:
import os

if os.environ.get('PYTEST_RUNNING', '') == 'true':
    print('pytest is running')

How to concatenate multiple Python source files into a single file?

(Assume that: application start-up time is absolutely critical; my application is started a lot; my application runs in an environment in which importing is slower than usual; many files need to be imported; and compilation to .pyc files is not available.)
I would like to concatenate all the Python source files that define a collection of modules into a single new Python source file.
I would like the result of importing the new file to be as if I imported one of the original files (which would then import some more of the original files, and so on).
Is this possible?
Here is a rough, manual simulation of what a tool might produce when fed the source files for modules 'bar' and 'baz'. You would run such a tool prior to deploying the code.
__file__ = 'foo.py'

import sys

def _module(_name):
    import types
    mod = types.ModuleType(_name)
    mod.__file__ = __file__
    sys.modules[_name] = mod
    return mod

def _bar_module():
    def hello():
        print 'Hello World! BAR'
    mod = _module('foo.bar')
    mod.hello = hello
    return mod

bar = _bar_module()
del _bar_module

def _baz_module():
    def hello():
        print 'Hello World! BAZ'
    mod = _module('foo.bar.baz')
    mod.hello = hello
    return mod

baz = _baz_module()
del _baz_module
And now you can:
from foo.bar import hello
hello()
This code doesn't take account of things like import statements and dependencies. Is there any existing code that will assemble source files using this, or some other technique?
This is a very similar idea to the tools used to assemble and optimise JavaScript files before sending them to the browser, where the latency of multiple HTTP requests hurts performance. In this Python case, it's the latency of importing hundreds of Python source files at startup which hurts.
If this is on Google App Engine as the tags indicate, make sure you are using this idiom:
def main():
    ...  # do stuff

if __name__ == '__main__':
    main()
Because GAE doesn't restart your app every request unless the .py has changed, it just runs main() again.
This trick lets you write CGI-style apps without the startup performance hit.
App Caching
If a handler script provides a main() routine, the runtime environment also caches the script. Otherwise, the handler script is loaded for every request.
I think that due to the precompilation of Python files and some system caching, the speed-up you eventually get won't be measurable.
Doing this is unlikely to yield any performance benefits. You're still importing the same amount of Python code, just in fewer modules - and you're sacrificing all modularity for it.
A better approach would be to modify your code and/or libraries to only import things when needed, so that a minimum of required code is loaded for each request.
Leaving aside the question of whether this technique would actually speed things up in your environment, and assuming you are right, here is what I would do.
I would make a list of all my modules, e.g.
my_files = ['foo', 'bar', 'baz']
I would then use os.path utilities to read all lines in all files under the source directory and write them all into a new file, filtering out all import foo|bar|baz lines, since all the code now lives in a single file.
Of course, finally appending the main() from __init__.py (if there is one) at the tail of the file.
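A rough sketch of that concatenation step (the source directory layout and output file name are assumptions):
import os
import re

my_files = ['foo', 'bar', 'baz']
# Drop imports that refer to the modules being merged in.
skip = re.compile(r'^\s*import (foo|bar|baz)\b')

with open('combined.py', 'w') as out:
    for name in my_files:
        with open(os.path.join('src', name + '.py')) as source:
            for line in source:
                if not skip.match(line):
                    out.write(line)
        out.write('\n')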
