I'm trying to use Tox to test specific versions of Python and Django, but also include a general Pip requirements file of additional dependencies to use for all cases.
As the Tox docs explain, you do the first like:
deps =
    django15: Django>=1.5,<1.6
    django16: Django>=1.6,<1.7
    py33-mysql: PyMySQL     ; use if both py33 and mysql are in an env name
    py26,py27: urllib3      ; use if any of py26 or py27 are in an env name
    py{26,27}-sqlite: mock  ; mocking sqlite in python 2.x
and you do the second like:
deps = -r{toxinidir}/pip-requirements.txt
       -r{toxinidir}/pip-requirements-test.txt
but how do you combine these?
If I try to define multiple deps sections, Tox gives me the error "duplicate name 'deps'", but I don't see a way to combine the dictionary and list notations for deps.
I also tried:
deps =
    -r{toxinidir}/pip-requirements.txt
    -r{toxinidir}/pip-requirements-test.txt
    django15: Django>=1.5,<1.6
    django16: Django>=1.6,<1.7
and although that doesn't give me any parsing error, when I go to run a test, I get the error:
ERROR: py27-django15: could not install deps [-r/usr/local/myproject/pip-requirements.txt, -r/usr/local/myproject/pip-requirements-test.txt, Django>=1.5,<1.6]; v = InvocationError('/usr/local/myproject/.tox/py27-django15/bin/pip install -r/usr/local/myproject/pip-requirements.txt -r/usr/local/myproject/pip-requirements-test.txt Django>=1.5,<1.6 (see /usr/local/myproject/.tox/py27-django15/log/py27-django15-1.log)', 1)
presumably because it's interpreting the requirements file as a literal Python package name.
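Ideally I'd also like to be able to apply the factor conditions to requirements files themselves, something like this (the per-Django requirements file here is hypothetical, just to illustrate the shape I'm after):
deps =
    -r{toxinidir}/pip-requirements.txt
    -r{toxinidir}/pip-requirements-test.txt
    django15: -r{toxinidir}/pip-requirements-django15.txt
    django16: Django>=1.6,<1.7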
I am trying to switch a project from using setup.py to PEP518. I have written the following minimal pyproject.toml:
[build-system]
requires = ["cython", "setuptools", "wheel", "oldest-supported-numpy"]
build-backend = "setuptools.build_meta"
I need some custom installation logic relying on setup.py, so I cannot currently switch to a purely declarative setting.
Notably, my setup.py contains an import numpy, which I use to add numpy.get_include() to the include directories of an extension. I can build the sdist / wheel using python -m build, which works as intended (it provides a build environment by installing the dependencies before calling into setup.py).
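For context, the relevant part of my setup.py looks roughly like this (a simplified sketch; the extension name and source paths are placeholders):
import numpy
from setuptools import Extension, setup
from Cython.Build import cythonize

setup(
    ext_modules=cythonize([
        Extension(
            "mypackage._native",                 # hypothetical extension name
            ["src/mypackage/_native.pyx"],       # hypothetical Cython source
            include_dirs=[numpy.get_include()],  # this is why numpy must be importable at build time
        )
    ]),
)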
I also have a test suite which I run using tox. However, when I run tox in my project I see the following error:
GLOB sdist-make: /project/setup.py
ERROR: invocation failed (exit code 1), logfile: /project/.tox/log/GLOB-0.log
...
File "/project/setup.py", ...
ModuleNotFoundError: No module named 'numpy'
So, by default tox does not install the build dependencies before building the sdist to be used for testing later, causing everything to fail.
Therefore, as suggested in the tox example, I added
[tox]
isolated_build = True
[testenv]
commands = pytest
to the top of tox.ini, which should enable the isolated build. However, when I now execute tox, all I get is
___ summary ___
congratulations :)
so nothing is actually built / tested (as opposed to a non-isolated build with numpy installed). Is this the expected behavior? How can I actually build and run tests in an isolated environment?
OK, so as it turns out, isolated builds require an envlist like this to work properly (as opposed to an ordinary build, which defaults to using the current Python environment):
[tox]
isolated_build = True
envlist = py310
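Putting it together, a minimal tox.ini for this kind of project looks roughly like this (pytest comes from the [testenv] above; the explicit pytest dependency line is an assumption):
[tox]
isolated_build = True
envlist = py310

[testenv]
deps = pytest
commands = pytest
With envlist set, tox first builds the sdist in an isolated environment (installing the build-system requires from pyproject.toml), then installs it into the py310 env and runs the commands.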
Currently I have the following:
[gh-actions]
python =
    3.7: py37
    3.8: py38
    3.9: py39
    3.10: py310
    pypy-3.7: pypy3
    pypy-3.8: pypy3
[tox]
minversion = 1.9
envlist =
    lint
    py{37,38,39,py3}-django22-{sqlite,postgres}
    py{37,38,39,310,py3}-django32-{sqlite,postgres}
    py{38,39,310,py3}-django40-{sqlite,postgres}
    py310-djangomain-{sqlite,postgres}
    docs
    examples
    linkcheck
toxworkdir = {env:TOX_WORKDIR:.tox}
[testenv]
deps =
    Pillow
    SQLAlchemy
    mongoengine
    django22: Django>=2.2,<2.3
    django32: Django>=3.2,<3.3
    django40: Django>=4.0,<4.1
    djangomain: https://github.com/django/django/archive/main.tar.gz
    py{37,38,39,310}-django{22,32,40,main}-postgres: psycopg2-binary
    py{py3}-django{22,32,40,main}-postgres: psycopg2cffi
I need to install a different psycopg2 depending on CPython vs PyPy. I've tried all kinds of combinations and nothing works; it all ends in failure. I can't get any of the *-postgres envs to install.
What am I doing wrong?
The issue is that you do not run the correct environments in your GitHub Actions.
For example, in your tox.ini you create an env with the name:
py37-django22-alchemy-mongoengine-postgres
Then you define the requirements as follows:
py{37,38,39,310}-postgres: psycopg2-binary
Which means: install psycopg2-binary when the env name contains both the py37 and postgres factors. This matches the above env! So far so good.
But in your GitHub Actions workflow you run:
- python-version: "3.7"
tox-environment: django22-postgres
... which does not contain the py37 factor - so no match - no installation.
The sqlite tests succeed because sqlite comes along with Python.
I would suggest that you have a look at the Django projects in the jazzband GitHub organization. They all make heavy use of tox factors (the parts separated by dashes) and they also use GitHub Actions - mostly via https://github.com/ymyzk/tox-gh-actions, which I would recommend, too.
Basically you just run tox in GitHub Actions and let the plugin do the heavy lifting of matching the GitHub Actions Python versions to tox environments.
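For illustration, with tox-gh-actions the workflow essentially reduces to something like this (a sketch, not your exact workflow; the action versions are only examples):
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8", "3.9", "3.10"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox tox-gh-actions
      - run: tox  # tox-gh-actions selects the matching envs via the [gh-actions] mapping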
Disclaimer:
I am one of the tox maintainers and you earn a prize for the most complex factor setup I have ever seen :-)
The issue was never tox or tox's configuration.
The issue was GitHub Actions: when you use tox-environment, or python-version plus tox-environment, tox-gh-actions won't parse it correctly, so it never matches.
This is what I removed. This is what tox.ini and the GitHub Actions workflow look like now.
As the title says, I want to run multiple conda environments from one Flask app, such that certain pages use one version of packages and the others use a different version.
Alternatively, I could run two apps concurrently, in which case I would need to be able to properly redirect from one to the other.
I scoured the internet and didn't find anything. Any ideas or documentation on where to get started?
EDIT: I was told this was a bad idea and asked to elaborate on the problem rather than my attempted solution.
The problem is that I am trying to interact with two different ML models that were built with different versions of scikit-learn. I can't recreate the models because they were given to me by a coworker. Additionally, I am doing some name matching using fuzzywuzzy, which is causing conflicts with other packages I need.
You can do what you are asking by installing both versions to different locations (so they don't overwrite each other) and then renaming the package, as this seems to be your only option.
Take the following example: I am going to set up two virtual environments; in the first I'll install scikit-learn 0.22.2 and in the second I'll install 0.20.4, then rename the package directories so Python can differentiate them, and print the versions ($ denotes something to enter on the command line):
$ python3 -m venv sk1
$ source sk1/bin/activate
$ pip3 install scikit-learn==0.22.2 # install to venv 1
$ deactivate # leave
$ python3 -m venv sk2
$ source sk2/bin/activate
$ pip3 install scikit-learn==0.20.4 # install to venv 2
$ deactivate
# rename the packages so the two copies can coexist
$ mv ./sk1/lib/python3.7/site-packages/sklearn ./sk1/lib/python3.7/site-packages/sklearn0222
$ mv ./sk2/lib/python3.7/site-packages/sklearn ./sk2/lib/python3.7/site-packages/sklearn0204
# add both site-packages directories to your PYTHONPATH so the renamed packages are importable
$ export PYTHONPATH=$PYTHONPATH:$(pwd)/sk1/lib/python3.7/site-packages
$ export PYTHONPATH=$PYTHONPATH:$(pwd)/sk2/lib/python3.7/site-packages
Now let's go into the Python interpreter and import them:
$ python3
>>> import sklearn0222 as sk0222
>>> import sklearn0204 as sk0204
>>> sk0222.__version__
'0.22.2'
>>> sk0204.__version__
'0.20.4'
This will use each package's version-specific code, but you have to be SUPER CAREFUL when referencing each, and you cannot use both packages within the same module. So in mymodule1.py you can import sklearn0222 and use its submodules, and in mymodule2.py you can import sklearn0204 and use its submodules, but if you try to use both in the same module of your program the second will not be recognized.
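For example, kept strictly separate (a sketch that assumes the PYTHONPATH setup above):
# mymodule1.py -- only ever touches the 0.22.2 copy
import sklearn0222 as sk0222
print(sk0222.__version__)  # '0.22.2'

# mymodule2.py -- only ever touches the 0.20.4 copy
import sklearn0204 as sk0204
print(sk0204.__version__)  # '0.20.4'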
Again, this is a bad idea but this is a way to get what you are looking for.
Sorry this is a long question. See the sentence in bold at the bottom for the TL;DR version.
I've spent many hours trying to track down a problem where pylint sometimes doesn't report all the errors in a module. Note that it does find some errors (e.g. long lines), just not all of them (e.g. missing docstrings).
I'm running pylint 1.7.2 on Ubuntu 16.04. (The version available from apt was 1.5.2 but installing via pip gives 1.7.2.)
We typically run pylint from tox, with a tox.ini that looks something like this (this is a cut-down version):
[tox]
envlist = py35
[testenv]
setenv =
    MODULE_NAME=our_module
ignore_errors = True
deps =
    -r../requirements.txt
whitelist_externals = bash
commands =
    pip install --editable=file:///{toxinidir}/../our_other_module
    pip install -e .
    bash -c \'set -o pipefail; pylint --rcfile=../linting/pylint.cfg our_module | tee pylint.log\'
Amongst other things, the ../requirements.txt file contains a line for pylint==1.7.2.
The behaviour is like this:
[wrong] When the line that imports our_other_module is present, pylint appears to complete successfully and not report any warnings, even though there are errors in the our_module code that it should pick up.
[correct] When that line is commented out, pylint generates the expected warnings.
As part of tracking this down I took two copies of the .tox folder with and without the module import, naming them .tox-no-errors-reported and .tox-with-errors-reported respectively.
So now, even without sourcing their respective tox virtualenvs, I can do the following:
$ .tox-no-errors-reported/py35/bin/pylint --rcfile=../linting/pylint.cfg our_module -- reports no linting warnings
$ .tox-with-errors-reported/py35/bin/pylint --rcfile=../linting/pylint.cfg our_module -- reports the expected linting warnings
(where I just changed the pylint script's #! line in each case to reference the python3.5 inside that specific .tox directory instead of the unrenamed .tox)
By diffing .tox-no-errors-reported and .tox-with-errors-reported, I've found that they are very similar. But I can make the "no errors" version start to report errors by removing the path to our_other_module from .tox-no-errors-reported/py35/lib/python3.5/site-packages/easy-install.pth.
So my question is: why is pylint using easy_install at runtime, and what is it picking up from our other component that causes it to fail to report some errors?
As I understand it, pylint has dependencies on astroid and logilab-common, but including these in the requirements.txt doesn't make any difference.
One possible reason for the surprising pylint behavior is the --editable option.
it creates a special .egg-link file in the deployment directory, that links to your project’s source code. And, ..., it will also update the easy-install.pth file to include your project’s source code
The .pth file then affects sys.path, which affects the module import logic of astroid, and this is deeply buried in the call stack of pylint.expand_files via pylint.utils.expand_modules. pylint also identifies the module part and function names in the AST using astroid.modutils.get_module_part.
To test the theory, you can try calling some of the affected astroid functions manually:
import sys
from astroid import modutils

print(sys.path)
print(modutils.get_module_part('your_package.sub_package.module'))
print(modutils.file_from_modpath(['your_package', 'sub_package', 'module']))
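You can also compare sys.path between your two virtualenvs directly, e.g.:
$ .tox-no-errors-reported/py35/bin/python -c "import sys; print(sys.path)"
$ .tox-with-errors-reported/py35/bin/python -c "import sys; print(sys.path)"
The first one should show the extra entry that easy-install.pth adds for our_other_module.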
I am using simple entry points to make a custom script, with this in setup.py:
entry_points = {
    'my_scripts': ['combine_stuff = mypackage.mymod.test:foo']
}
where mypackage/mymod/test.py contains:
import argh
from argh import arg

@arg("myarg", help="Test arg.")
def foo(myarg):
    print "Got: ", myarg
When I install my package using this (in the same directory as setup.py):
pip install --user -e .
the entry points do not seem to get processed at all. Why is that?
If I install with distribute's easy_install instead, like:
easy_install --user -U .
then the entry points get processed and it creates:
$ cat mypackage.egg-info/entry_points.txt
[my_scripts]
combine_stuff = mypackage.mymod.test:foo
but no actual script called combine_stuff gets placed anywhere in my bin dirs (like ~/.local/bin/). It just doesn't seem to get made. What is going wrong here? How can I get it to make an executable script, and ideally work with pip too?
The answer was to use console_scripts instead of my_scripts. It wasn't clear that the entry point group name was anything other than an internal label for the programmer.
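For reference, the working form uses the standard console_scripts group (which is what pip and setuptools actually generate executables for):
entry_points = {
    'console_scripts': ['combine_stuff = mypackage.mymod.test:foo']
}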