I'm using VS Code + pytest, while the test cases are written with unittest, for example:
class MyTest(unittest.TestCase):
    def testEnvVar(self):
        account = os.getenv("ACCOUNT")
It fails to read ACCOUNT. However, os.getenv("ACCOUNT") returns the right result if it's executed with python test.py directly:
# test.py
import os
print(os.getenv("ACCOUNT"))
which confirms the "ACCOUNT" environment variable is already set.
If executed by pytest tests/test.py, the environment variable cannot be read either, so it's caused by pytest. I know pytest does some tricks (for example, it captures all stdout/stderr output), but I don't know what exactly it does to environment variables. The same goes for tox (in tox you have to set passenv = *, so the test environment can inherit all environment variables from the shell where tox runs).
I totally understand this is a trick that any test-related tool could pull; I just don't know how to disable it in pytest. So please don't suggest that I forgot to set the variable, or that my code is wrong, etc.
I know how to hack around this, by using mock or by adding .env vars in the VS Code launch file. However, that would expose the account/password in files or settings. Since the account and password are secrets, I don't want to expose them in any file.
I guess this is a very common requirement and pytest should already honor it. So why does this still happen?
The easiest way is to use https://github.com/MobileDynasty/pytest-env. But I don't think you should test the environment variables in your unit tests.
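If the goal is just to exercise the code that reads the variable, one option (a minimal sketch, not part of the original answer) is to patch os.environ with a dummy value inside the test, so no real secret has to live in any file or setting:

import os
import unittest
from unittest import mock

class MyTest(unittest.TestCase):
    @mock.patch.dict(os.environ, {"ACCOUNT": "dummy-account"})
    def testEnvVar(self):
        # the code under test sees the placeholder value, not a real secret
        self.assertEqual(os.getenv("ACCOUNT"), "dummy-account")

if __name__ == "__main__":
    unittest.main()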
By default, pytest inflates the error traceback massively and prints some redundant information to the stdout stream. Considering that I'm using PyCharm, it is really confusing to see code snippets out of context when they are already available in the IDE and debugging interface.
As a result, I intend to set the pytest traceback to native permanently. However, according to the documentation, the only way to do so is to add an extra command-line argument when launching the test runner:
-tb=native
I would like my tests to always use the native traceback regardless of how they are run. Is it possible to use a TestCase API to do so?
Thanks a lot for your help.
You can add this option to the pytest.ini file and it will be picked up automatically by pytest. For your specific case, a pytest.ini with the following contents should work:
[pytest]
addopts = --tb=native
Note the double hyphens with tb; I am using pytest 4.6.4 and that is how it works for me.
Also, refer to the pytest docs for another alternative: modifying the PYTEST_ADDOPTS environment variable.
I'm not sure how you can do this using pytest, nor am I familiar with this package. With that being said, you can always create a bash function to accomplish this:
function pyt() {
    pytest --tb=native "$@"
}
The "$@" symbol passes all arguments following pyt to the function (kind of like *args in Python), so running pyt arg1 arg2 ... argn will be the same as running
pytest --tb=native arg1 arg2 ... argn
If you are unfamiliar with creating bash shortcuts, see this question.
Update
I misunderstood and thought the OP was calling pytest from the CLI. Instead of creating the pyt function, if you override pytest directly, PyCharm might invoke your bash version of it instead (I'm not really sure, though).
That being said, yaniv's answer seems superior to this, if it works.
I know unit tests and write them daily.
They get executed during development and CI.
Now I have a test which I would like to ensure on the production system:
PYTHONIOENCODING must be "utf8"
Above I used the verb "test"; this means I want to check the state. This question is not about how to do this.
AFAIK the unittest framework can't help me here, since it only gets executed during development and CI.
How can I solve this in the Python world without re-inventing the wheel?
The above is only an example. There are several other things besides PYTHONIOENCODING which I would like to check.
Another use case for these checks: a few days ago we had an issue on the production server. The command-line tool convert gets used, and some versions are broken and produce wrong results. I would like to write a simple check to ensure that the convert tool on the production server is not broken.
Straightforward approach (Checking)
Put this near the start of the code:
import os
if os.environ.get('PYTHONIOENCODING', '').lower() not in {'utf-8', 'utf8'}:
    raise EnvironmentError("Environment variable $PYTHONIOENCODING must be set to 'utf8'")
Alternative solution (Ensuring)
In one of the projects I code for, there's a "startup script", so instead of running python3 main.py, we run this in production:
bash main.sh
whose content is rather simple:
#!/bin/bash
export PYTHONIOENCODING=utf8
exec /usr/bin/env python3 main.py
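A rough Python equivalent of the same idea (an assumption, not part of the original answer): set the variable and then exec a fresh interpreter, since PYTHONIOENCODING only takes effect when a Python process starts up.

import os

# set the variable for the new process, then replace the current process with a
# fresh interpreter so the setting actually applies to stdin/stdout/stderr
os.environ["PYTHONIOENCODING"] = "utf8"
os.execvp("python3", ["python3", "main.py"])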
testinfra
If you want to write and run tests against the deployment infrastructure, you can use the testinfra plugin for pytest. For example, a test for the simple requirement of validating an environment variable on the target machine could look like this:
def test_env_var(host):
    assert host.run_expect((0,), 'test "$PYTHONIOENCODING" == "utf8"')
This infrastructure test suite can be developed in a separate project and invoked before the actual deployment takes place (for example, we invoke the infra tests right after the Docker image is built; if the tests fail, the image is not uploaded to our private image repository or deployed to prod).
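In the same style, a hypothetical extra check for the convert requirement from the question could look like the sketch below (the broken version string is a made-up placeholder; host.exists and host.run are regular testinfra host methods):

def test_convert_not_broken(host):
    # the convert binary must be on PATH
    assert host.exists("convert")
    # and must not be one of the releases known to produce wrong results
    version = host.run("convert --version").stdout
    assert "6.9.7-0" not in version  # placeholder for the broken version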
I have a script go.py, which needs to run unit tests (with nose) on several different modules, each with their own virtual envs.
How can I activate each virtual env prior to testing, and deactivate it after?
i.e. I want to do this (pseudo-code):
for fn in functions_to_test:
    activate(path_to_env)
    run_test(fn)
    deactivate()
activate()
Inside a virtual env, there is ./bin/activate_this.py
This does what I want. So in go.py I say
import os
activate_this_file = os.path.join(env_dir, 'bin/activate_this.py')
execfile(activate_this_file, dict(__file__=activate_this_file))
run_test()
I've currently got run_test() working using
suite_x = TestLoader().loadTestsFromName(test_module + ":" + test_class)
r = run(suite = suite_x, argv = [sys.argv[0], "--verbosity=0", "-s"])
deactivate()
This is the part that I can't figure out.
What's the deactivation equivalent of env/bin/activate_this.py?
Broader Context
Each module will be uploaded to AWS by go.py as a lambda function. (Where 'lambda function' has a specific meaning in an AWS context, and is not related to lambda x:foo(x))
I want go.py to run unit tests on each lambda function, inside their respective virtual env (since they'll be executed in those virtual envs once deployed to AWS). Each lambda function uses different libraries, hence they have different virtual envs.
The activate_this.py script is not meant to be used to switch virtual environments in the middle of a computation. It is meant to be used as early as possible at the start of your process and never touched again. If you look at the content of the script, you'll see that it does not take care to record anything for a future deactivation. Once the activate_this.py script has run, the state the interpreter was in before the script started is lost. Moreover, the documentation also warns (with emphasis added):
Also, this cannot undo the activation of other environments, or modules that have been imported. You shouldn’t try to, for instance, activate an environment before a web request; you should activate one environment as early as possible, and not do it again in that process.
Instead of the approach you were hoping to use, I'd have the orchestrator spawn (with subprocess) the Python interpreter that is specific to the virtual environment to be used, and pass it the test runner ("nosetests", presumably) with the arguments it needs to find the tests to run in that environment.
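A minimal sketch of that idea (the directory layout, module names, and the use of python -m nose are assumptions, not part of the original answer):

import os
import subprocess

def run_tests_in_env(env_dir, test_target):
    # use the interpreter that belongs to this virtualenv, so only its own
    # site-packages ends up on the path
    python = os.path.join(env_dir, "bin", "python")
    return subprocess.call([python, "-m", "nose", test_target, "--verbosity=0"])

for env_dir, test_target in [("envs/func_a", "tests/test_func_a.py"),
                             ("envs/func_b", "tests/test_func_b.py")]:
    code = run_tests_in_env(env_dir, test_target)
    print(env_dir, "OK" if code == 0 else "FAILED")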
There isn't an easy, complete, and generic way to do this. The reason is that activate_this.py does not just modify the module search path; it also performs site configuration with site.addsitedir(), which may execute sitecustomize or usercustomize in the same Python process. In comparison, the shell-script version of activate simply modifies environment variables and lets each Python process re-execute the site customization itself, so cleanup is significantly easier.
How to work around this issue? There are several possibilities:
You might want to run your tests under tox. This is the solution I think would be most preferred.
If you are sure that none of the packages in your virtualenv perform irreversible sitecustomize/usercustomize work, you can write a deactivate() that undoes virtualenv's modifications to sys.path, os.environ, and sys.prefix, or one that remembers those values in activate() so deactivate() can undo them (a sketch follows this list).
You can fork or create a subprocess in activate() before execfile("activate_this.py"). To deactivate the virtual environment, simply return to the parent process. You'll have to figure out how the child process can return the test results so the parent/main process can compile the final test report.
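A rough sketch of the second option (an assumption about your setup; it only works if nothing done by sitecustomize/usercustomize needs to be undone):

import os
import sys

def activate(env_dir):
    # snapshot the interpreter state before running activate_this.py
    saved = {"path": list(sys.path),
             "prefix": sys.prefix,
             "environ": dict(os.environ)}
    activate_this = os.path.join(env_dir, 'bin', 'activate_this.py')
    # execfile as in the question (Python 2); on Python 3 use
    # exec(open(activate_this).read(), dict(__file__=activate_this))
    execfile(activate_this, dict(__file__=activate_this))
    return saved

def deactivate(saved):
    # restore the snapshot taken in activate()
    sys.path[:] = saved["path"]
    sys.prefix = saved["prefix"]
    os.environ.clear()
    os.environ.update(saved["environ"])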
I recently started playing with pytest and I use pytest.main() to run the tests. However, it seems that pytest caches the tests: any changes made to my module or to the tests get ignored. I am unable to run pytest from the command line, so pytest.main() is my only option; this is because I'm writing Python on my iPad.
I have googled this extensively and was only able to find one similar issue, with advice to run pytest from the command line. Any help would be greatly appreciated.
Thanks,
Pytest doesn't cache anything. A module (file) is read once and only once per instance of a Python interpreter.
There is a reload built-in, but it almost never does what you hope it will do.
So if you are running
import pytest
...
while True:
    import my_nifty_app
    my_nifty_app.be_nifty()
    pytest.main()
my_nifty_app.py will be read once and only once even if it changes on disk. What you really need is something like
exit_code = pytest.main()
sys.exit(exit_code)
which will end that instance of the interpreter, which is the only way to ensure your source files get re-read.
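If your environment allows spawning a new interpreter at all (an assumption; some iPad Python apps do not), running pytest in a child process is one way to get a fresh import of everything on each run:

import subprocess
import sys

# each call starts a new interpreter, so all modules are re-read from disk
subprocess.call([sys.executable, "-m", "pytest", "tests/"])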
I am developing a Python package using a text editor and IPython. Each time I change any of the module code I have to restart the interpreter to test this. This is a pain since the classes I am developing rely on a context that needs to be re-established on each reload.
I am aware of the reload() function, but this appears to be frowned upon (it is also no longer a built-in as of Python 3.0) and moreover it rarely works, since the modules almost always have multiple references.
My question is - what is the best/accepted way to develop a Python module/package so that I don't have to go through the pain of constantly re-establishing my interpreter context?
One idea I did think of was using the if __name__ == '__main__': trick to run a module directly so the code is not imported. However this leaves a bunch of contextual cruft (specific to my setup) at the bottom of my module files.
Ideas?
A different approach may be to formalise your test driven development, and instead of using the interpreter to test your module, save your tests and run them directly.
You probably know of the various ways to do this with Python. I imagine the simplest way to start in this direction is to copy and paste what you do in the interpreter into a docstring as a doctest, and add the following to the bottom of your module:
if __name__ == "__main__":
    import doctest
    doctest.testmod()
Your informal test will then be repeated every time the module is called directly. This has a number of other benefits. See the doctest docs for more info on writing doctests.
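For illustration (the function and values here are made up), a docstring with a pasted interpreter session looks like this:

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()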
IPython does allow reloads; see the %run magic function in the IPython docs, or, if modules beneath the one you are running have changed, the recursive dreload() function.
If you have a complex context, is it possible to create it in another module, or assign it to a global variable which will stay around since the interpreter is not restarted?
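For example (an IPython session sketch; mypkg and setup_context.py are placeholder names, and dreload is assumed to be importable from IPython.lib.deepreload):

%run setup_context.py            # re-execute the setup script in the current namespace

from IPython.lib.deepreload import reload as dreload
import mypkg
dreload(mypkg)                   # recursively reload mypkg and its submodules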
How about using nose with nosey to run your tests automatically in a separate terminal every time you save your edits to disk? Set up all the state you need in your unit tests.
You could create a Python script that sets up your context and run it with
python -i context-setup.py
-i    When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command. It does not read the $PYTHONSTARTUP file. This can be useful to inspect global variables or a stack trace when a script raises an exception.
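A tiny illustration of such a script (the package, class, and function names are placeholders for whatever your real setup needs):

# context-setup.py
import mypackage

ctx = mypackage.create_context()   # the expensive setup, done once
obj = mypackage.MyClass(ctx)       # objects you want available at the prompt

# run with:  python -i context-setup.py
# you land in an interactive session with ctx and obj already defined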