I know unit tests and write them daily.
They get executed during development and CI.
Now I have a check which I would like to enforce on the production system:
PYTHONIOENCODING must be "utf8"
Above I used the verb "test"; by this I mean I want to check the state of the system. This question is not about how to do that particular check.
AFAIK the unittest framework can't help me here, since it only gets executed during development and CI.
How can I solve this in the Python world without re-inventing the wheel?
The above is only an example. There are several other things besides PYTHONIOENCODING which I would like to check.
Next use case for these checks: some days ago we had an issue on the production server. Our code uses the command-line tool convert, and some versions of it are broken and produce wrong results. I would like to write a simple check to ensure that the convert tool on the production server is not broken.
Straightforward approach (Checking)
Put this near the start of the code:
import os
if os.environ.get('PYTHONIOENCODING', '').lower() not in {'utf-8', 'utf8'}:
    raise EnvironmentError("Environment variable $PYTHONIOENCODING must be set to 'utf8'")
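The same check pattern extends to the convert example from the question. Below is a sketch; the version numbers in BROKEN_CONVERT_VERSIONS are placeholders, to be replaced with the releases you actually know to be broken:

```python
import re
import subprocess

# Placeholder versions -- fill in the releases that are actually broken.
BROKEN_CONVERT_VERSIONS = {"6.9.10-23", "7.0.8-11"}

def convert_version(output):
    """Extract the version number from `convert -version` output."""
    match = re.search(r"ImageMagick (\S+)", output)
    if match is None:
        raise EnvironmentError("Could not determine the version of 'convert'")
    return match.group(1)

def check_convert():
    """Raise if the installed convert is one of the known-broken versions."""
    result = subprocess.run(
        ["convert", "-version"], capture_output=True, text=True, check=True
    )
    version = convert_version(result.stdout)
    if version in BROKEN_CONVERT_VERSIONS:
        raise EnvironmentError(f"convert {version} is known to be broken")

if __name__ == "__main__":
    check_convert()
```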
Alternative solution (Ensuring)
In one of the projects I code for, there's a "startup script", so instead of running python3 main.py, we run this in production:
bash main.sh
whose content is rather simple:
#!/bin/bash
export PYTHONIOENCODING=utf8
exec /usr/bin/env python3 main.py
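Whichever variant you use, you can also verify at runtime that the setting actually took effect, since PYTHONIOENCODING controls the encoding of the standard streams. A minimal sketch:

```python
import sys

def is_utf8(encoding):
    """Return True if the given encoding name denotes UTF-8."""
    return (encoding or "").lower().replace("-", "") == "utf8"

if __name__ == "__main__":
    if not is_utf8(sys.stdout.encoding):
        raise EnvironmentError("stdout is not UTF-8; check PYTHONIOENCODING")
```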
testinfra
If you want to write and run tests against the deployment infrastructure, you can use the testinfra plugin for pytest. For example, a test for a simple requirement such as validating an environment variable on the target machine could look like:
def test_env_var(host):
    host.run_expect((0,), 'test "$PYTHONIOENCODING" = "utf8"')
This infrastructure test suite can be developed in a separate project and invoked before the actual deployment takes place (for example, we invoke the infra tests right after the Docker image is built; if the tests fail, the image is not uploaded to our private image repository or deployed to prod).
Related
I've been looking for a way of validating/linting my Jenkinsfile, but so far without success; at least not using tox, PyCharm, or anything outside of Visual Studio Code, for example (I did see some examples for that editor, more or less).
Does anyone know of a way to do this? I would like to perform some simple checks, like:
return a warning if an environment variable inside the file is used but isn't declared (so I know I have to check whether it is set at the server level, for example).
Creating some custom checks would be a huge plus: e.g. warn if strings without variables use single quotes instead of double quotes.
Jenkins can validate, or "lint", a Declarative Pipeline from the command line before actually running it. This can be done using a Jenkins CLI command.
Linting via the CLI with SSH
# ssh (Jenkins CLI)
# JENKINS_SSHD_PORT=[sshd port on controller]
# JENKINS_HOSTNAME=[Jenkins controller hostname]
ssh -p $JENKINS_SSHD_PORT $JENKINS_HOSTNAME declarative-linter < Jenkinsfile
More info available at: Linter
Additional reference: Validate Jenkinsfile
I am using VS Code + pytest, while the test cases are written with unittest, for example:
import os
import unittest

class MyTest(unittest.TestCase):
    def testEnvVar(self):
        account = os.getenv("ACCOUNT")
It fails to read ACCOUNT. However, os.getenv("ACCOUNT") will get the right result if it's executed directly with python test.py:
# test.py
import os
print(os.getenv("ACCOUNT"))
which shows that the "ACCOUNT" environment variable is already set.
If it is executed with pytest tests/test.py, the environment variable cannot be read either, so it must be caused by pytest. I know pytest does some tricks (for example, it captures all stdout/stderr output), but I don't know what exactly it does to environment variables. The same goes for tox (in tox you have to set passenv = *, so the test environment can inherit all environment variables from the shell in which tox runs).
I mean, I totally understand that this is a trick any test-related tool could pull; I just don't know how to disable it in pytest. So please don't suggest that I forgot to set the variable, that my code is wrong, etc.
I know how to work around this, by using mock or adding the variables to a .env file referenced from the VS Code launch configuration. However, that would expose the account/password in some file or setting. Since the account and password are secrets, I don't want to expose them in any file.
I guess this is a very common requirement and pytest should already have honored it. So why does this still happen?
The easiest way is to use https://github.com/MobileDynasty/pytest-env. But I don't think you should test the environment variables in your unit tests.
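If the worry is only about secrets leaking into files, one alternative (a sketch; the helper name require_env is made up) is to fail fast in conftest.py when a variable is missing, so the secret itself stays in the shell environment and never lands in a file:

```python
import os

def require_env(name):
    """Return the value of an environment variable, failing loudly if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Required environment variable {name!r} is not set")
    return value

# In conftest.py you could call require_env("ACCOUNT") at import time, so a
# missing secret aborts the whole test session with a clear message.
```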
I have a Python script in my project. Depending on whether it was run manually or via Jenkins, it needs to behave differently.
The only solution I came up with so far was to set an environment variable via Jenkins and check this var in the script.
Has anyone had a similar problem and managed to solve it differently?
If you want to avoid setting your own env variable but still prefer using an existing one, you can use the EnvInject plugin (https://plugins.jenkins.io/envinject/#EnvInjectPlugin-BuildCauses), which exposes the following environment variables:
BUILD_CAUSE=MANUALTRIGGER
BUILD_CAUSE_MANUALTRIGGER=true
in case of a manual trigger. It internally checks which cause(s) triggered the build; there can be many, such as SCMTriggerCause, TimerTriggerCause, etc.
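On the Python side, the check then boils down to reading those variables. A sketch (passing the environment as a parameter is my own addition, to keep the functions testable):

```python
import os

def triggered_manually(environ=None):
    """True if the Jenkins build was started by a user (via EnvInject's variables)."""
    env = os.environ if environ is None else environ
    return env.get("BUILD_CAUSE_MANUALTRIGGER") == "true"

def running_under_jenkins(environ=None):
    """Heuristic: Jenkins itself sets JENKINS_URL for every build."""
    env = os.environ if environ is None else environ
    return "JENKINS_URL" in env
```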
In a Django project, it is possible to create unit tests to verify what we have done so far. The principle is simple: we execute the command python3 manage.py test in the shell. When an error is detected in the program, the shell displays it and stops the process. However, the procedure has a drawback: if we have several errors, we have to correct one and restart the whole process, which can take several minutes depending on the program. Is there a way to restart the process where the error was detected instead of restarting the whole procedure?
EDIT :
In fact, another problem I have is how to retain the database instead of recreating it. How could I do such a thing?
If you want to automatically run only the failing tests, you need to use a third-party test runner like Nose or create your own. But it's not worth it, because ...
You can specify particular tests to run by supplying any number of "test labels" to ./manage.py test. Each test label can be a full Python dotted path to a package, module, TestCase subclass, or test method. For instance:
# Run just one test method
$ ./manage.py test animals.tests.AnimalTestCase.test_animals_can_speak
Source: https://docs.djangoproject.com/en/1.10/topics/testing/overview/
This approach can be used to re-run only the tests that have failed.
Please note that third-party test runners will probably recreate the database every time you run the tests, even when running only the failing ones. On the other hand, the default Django test runner has the --keepdb option, which allows the database to be reused. For more details see: https://stackoverflow.com/a/37100979/267540
I really love python because I love interactive development. There's one area where python appears to fall short, however, and that's in the area of automatically reloading changed files. Basically, what I want to have happen is to be able to modify a python file on-disk and then have my running python instance automatically reload the changed module to allow me to immediately access my changes in the REPL so I can test them out. Basically, I want some sort of watch command.
I happen to use the bpython shell because I think it's the best one available, but this feature is so important to me that I'd be willing to switch to any other python shell that does it right. Is it possible?
Something like tail -f in python + reload().
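A minimal polling-based sketch of such a watch, using only the standard library (IPython ships something similar built in, via its autoreload extension):

```python
import importlib
import os
import time

def reload_if_changed(module, last_mtime):
    """Reload `module` if its source file changed; return the current mtime."""
    mtime = os.path.getmtime(module.__file__)
    if mtime != last_mtime:
        importlib.reload(module)
    return mtime

def watch(module, interval=1.0):
    """Poll the module's source file and reload it whenever it changes."""
    mtime = os.path.getmtime(module.__file__)
    while True:
        time.sleep(interval)
        mtime = reload_if_changed(module, mtime)
```

In a REPL you would run watch(mymodule) in a background thread, or simply call reload_if_changed by hand between edits.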
If you're trying to "test out" your code, perhaps you should be looking into automated unit tests instead of testing your code repeatedly and manually. It'll allow you to test more code more quickly and waste less of your precious, precious development time.
Personally, I use unittest with py.test as the runner.