I'm trying to integrate a flake8 plugin I wrote into a project as a local plugin (i.e. not a PyPI package), as explained here. The project uses a virtual env, both locally and in a GitHub workflow. Since flake8 is invoked from within the virtual env it can't find the plugin, which lives in a folder under the project root. When I manually copy the plugin code into the virtual env folder it integrates nicely and flake8 is able to find and execute it.
The solution smells like some kind of pre-commit config/hook, but I can't find any reference in the docs for this use case. Currently flake8 is configured in .pre-commit-config.yaml like so:
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.2.0
    hooks:
      - id: debug-statements
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
    hooks:
      - id: flake8
        additional_dependencies: ['dlint']
Is there a way to use my non-packaged flake8 plugin with a virtual env locally / in a github workflow?
you'll want to configure paths to make sure that flake8 puts the appropriate directories on sys.path to discover your plugin
this is mentioned in the docs you linked, just a little bit further down:
However, if you are working on a project that isn’t set up as an installable package, or Flake8 doesn’t run from the same virtualenv your code runs in, you may need to tell Flake8 where to import your local plugins from. You can do this via the paths option in the local-plugins section of your config:
[flake8:local-plugins]
extension =
    MC1 = myflake8plugin:MyChecker1
paths =
    ./path/to
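for context, a plugin registered this way can be a plain module on one of those paths. A minimal sketch (the module and class names mirror the MC1 = myflake8plugin:MyChecker1 entry above; the lambda rule is just an illustrative check):

# myflake8plugin.py -- minimal local plugin sketch
import ast

class MyChecker1:
    name = "myflake8plugin"
    version = "0.1.0"

    def __init__(self, tree):
        self.tree = tree

    def run(self):
        # yield (line, col, message, type) for each finding; the "MC1"
        # message prefix must match the code registered in the config
        for node in ast.walk(self.tree):
            if isinstance(node, ast.Lambda):
                yield node.lineno, node.col_offset, "MC1 lambda not allowed here", type(self)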
note that if your local plugin has dependencies of its own, those will also need to be listed in additional_dependencies in your pre-commit configuration
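for example (some-plugin-dependency here is a hypothetical stand-in for whatever your plugin imports):

- repo: https://github.com/PyCQA/flake8
  rev: 4.0.1
  hooks:
    - id: flake8
      additional_dependencies: ['dlint', 'some-plugin-dependency']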
disclaimer: I'm the current flake8 maintainer, and I created pre-commit
It turns out the paths property was configured wrong; it should have been

paths =
    . flake8_plugins/

instead of

paths =
    .flake8_plugins/
One of our repositories relies on another first-party one. Because we're in the middle of a migration from a privately hosted GitLab to Azure, some of our dependencies aren't available in GitLab, which is where the problem comes up.
Our pyproject.toml file has this poetry group:
# pyproject.toml
[tool.poetry.group.cli.dependencies]
cli-parser = { path = "../cli-parser" }
In GitLab CI, this cannot resolve. We therefore want to run the pipelines without this dependency; no code that relies on this library is actually run, and no files from it are imported. So we factored it out into a separate poetry group. In the GitLab CI configuration, that looks like this:
# .gitlab-ci.yml
install-poetry-requirements:
  stage: install
  script:
    - /opt/poetry/bin/poetry --version
    - /opt/poetry/bin/poetry install --without cli --sync
As shown, Poetry is instructed to omit the cli dependency group. However, it still crashes on it:
# Gitlab CI logs
$ /opt/poetry/bin/poetry --version
Poetry (version 1.2.2)
$ /opt/poetry/bin/poetry install --without cli --sync
Directory ../cli-parser does not exist
If I comment out the cli-parser line in pyproject.toml, it will install successfully (and the pipeline passes), but we cannot do that because we need it in production.
I can't find another way to tell poetry to omit this library. Is there something I missed, or is there a workaround?
Good, Permanent Solution
As finswimmer mentioned in a comment, Poetry 1.4 should handle this perfectly well. If you're reading this after Poetry 1.4 has been released, the configuration in the question should work as-is.
Hacky, Bad, Temporary Solution
Since the original problem was in GitLab CI pipelines, I used a workaround there. Right before the install command, I run the following:
sed -i '/cli-parser/d' pyproject.toml
This modifies the project's pyproject.toml in place to remove the line containing my module, which prevents Poetry from ever parsing the dependency.
See the sed man page for more information on how this works.
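In the pipeline from the question, that looks roughly like this (same job as above, with the sed line added before the install):

install-poetry-requirements:
  stage: install
  script:
    - sed -i '/cli-parser/d' pyproject.toml  # strip the unresolvable path dependency
    - /opt/poetry/bin/poetry --version
    - /opt/poetry/bin/poetry install --without cli --sync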
Keep in mind that if your pipeline has any permanent effects, such as building your package into an installable wheel or producing build artifacts used elsewhere, this WILL break your setup.
Recently I started using the sphinx_autodoc_typehints and sphinx_autodoc_defaultargs extensions for Sphinx via my project's conf.py. These don't seem to be part of the default Sphinx installation on Read the Docs (where Sphinx is at v1.8.5), because my build fails with the extension error shown here:
Could not import extension sphinx_autodoc_typehints (exception: No module named
'sphinx_autodoc_typehints')
I understand I somehow have to tell readthedocs to get sphinx_autodoc_typehints (and later sphinx_autodoc_defaultargs as well) from PyPI. Or is there a way I can install packages on readthedocs myself?
Since I use pbr for package management, I have a requirements.txt that Read the Docs knows about. I don't want to list the Sphinx extensions there, because then every user of my package would have to install them. Is there no other way of telling Read the Docs which extensions to use?
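For reference, the extensions are enabled in docs/conf.py roughly like this (a sketch; my actual extension list contains more entries):

# docs/conf.py (excerpt)
extensions = [
    "sphinx.ext.autodoc",
    "sphinx_autodoc_typehints",
    "sphinx_autodoc_defaultargs",
]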
Following the comment of Steve Piercy, I found a way to have two requirements.txt files. Read the Docs' advanced settings (on the website) allow for only one requirements.txt.
Read the Docs' preferred way is a .readthedocs.yaml file, which needs to live in the root folder. Following https://docs.readthedocs.io/en/stable/config-file/v2.html, this is how the file now looks:
version: 2

sphinx:
  configuration: docs/conf.py

python:
  version: 3.7
  install:
    - requirements: requirements.txt
    - requirements: docs/requirements.txt
and the docs/requirements.txt looks like this:
sphinx==3.4.3
sphinx_autodoc_typehints==1.12.0
sphinx_autodoc_defaultargs==0.1.2
On the advanced settings page I had to manually set the location of Sphinx's conf.py, even though it is in a standard location. Without this setting my build would still fail.
I'm running Python 3.9.1 and I have successfully installed Poetry version 1.1.4.
When I try to add requests ($ poetry add requests) I get
RuntimeError
Poetry could not find a pyproject.toml file in C:\...
I have just installed it and I am not sure if I have missed something.
Could anyone advise please?
You have to create a pyproject.toml first. Go into your project folder, run poetry init, and follow the instructions. Alternatively, you can run poetry new myproject to create a basic folder structure along with a pyproject.toml. Also have a look at the docs.
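For example, starting from scratch (an illustrative session):

$ poetry new myproject    # scaffolds myproject/ including a pyproject.toml
$ cd myproject
$ poetry add requests     # now succeeds, since pyproject.toml exists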
In my case, I was in a Docker container with a legacy Python version (3.6). I had to use pip instead of conda, and therefore I installed Poetry to keep the dependencies right.
bash-4.4# docker exec -it MY_CONTAINER bash
opens a shell inside the container.
Now, turning to the question itself, which is not a Docker question: in the next command, you might need to write /usr/local/bin/poetry instead of just poetry.
bash-4.4# poetry init
This command will guide you through creating your pyproject.toml config.
Package name []: test
Version [0.1.0]: 1.0.0
Description []: test
Author [None, n to skip]: n
License []:
Compatible Python versions [^3.6]:
Would you like to define your main dependencies interactively? (yes/no) [yes] no
Would you like to define your development dependencies interactively? (yes/no) [yes] no
Generated file
[tool.poetry]
name = "test"
version = "0.1.0"
description = "test"
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.6"

[tool.poetry.dev-dependencies]

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
Do you confirm generation? (yes/no) [yes]
A quick side remark, which should be clear to most: if you press Enter at a field with filled brackets, Poetry simply uses what is written in the brackets. For example, pressing Enter at Version [0.1.0]: sets the version to 0.1.0, unless you type your own. In other words, the brackets [] do not mean you have to enter a list; they just show the default that is used when you press Enter.
After this, I could run:
bash-4.4# poetry add pandas
Another Docker side note: it turned out that apk-based images (Alpine containers) with legacy Python 3.6 cannot handle basic packages like pandas well, with or without Poetry, see Installing pandas in docker Alpine. I had to switch to a newer Python version. You can also install Poetry already by means of the Dockerfile rather than in the container's bash, see Integrating Python Poetry with Docker.
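A minimal sketch of that Dockerfile approach (base image tag and Poetry version are illustrative):

FROM python:3.6-slim
RUN pip install "poetry==1.1.4"
WORKDIR /app
COPY pyproject.toml poetry.lock ./
# install into the system interpreter; the container is the isolation layer
RUN poetry config virtualenvs.create false && poetry install --no-dev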
Wrapping up:
It was strange to me that I had to enter things that only matter when publishing a self-written package (see Package name []), when all I expected was a general setup of a package manager for many packages as a whole. In the end, I just followed the menu and entered some irrelevant placeholders. The right Python version, the only important part of the pyproject.toml file here, was already suggested automatically. This was all that was needed.
I just updated my project to Python 3.7 and I'm seeing this error when I run mypy on the project: error: "Type[datetime]" has no attribute "fromisoformat"
datetime does have a function fromisoformat in Python 3.7, but not in previous versions of Python. Why is mypy reporting this error, and how can I get it to analyze Python 3.7 correctly?
Things I've tried so far:
Deleting .mypy_cache (which has a suspicious looking subfolder titled 3.6)
Reinstalling mypy with pip install --upgrade --force-reinstall mypy
To reproduce:
Create a python 3.6 project
install mypy 0.761 (latest) in the project venv
scan the project with mypy (mypy .)
update the project to python 3.7
add a file with this code in it:
from datetime import datetime
datetime.fromisoformat('2011-11-04 00:05:23.283')
scan the project again (mypy .) [UPDATE: this actually works fine. It was rerunning my precommit hooks without reinstalling pre-commit on the new Python version venv that was causing the problems.]
You are running mypy under an older version of Python. mypy defaults to the version of Python that is used to run it.
You have two options:
You can change the Python language version with the --python-version command-line option:
This flag will make mypy type check your code as if it were run under Python version X.Y. Without this option, mypy will default to using whatever version of Python is running mypy.
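For example, to check the current directory as if it were running under Python 3.7:

mypy --python-version=3.7 .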
I'd put this in the project mypy configuration file; the equivalent of the command-line switch is named python_version; put it in the global [mypy] section:
[mypy]
python_version = 3.7
Install mypy into the virtualenv of your project, so that it uses the exact same Python version.
Note that if you see this issue (and didn't accidentally set --python-version on the command line or in a configuration file), you are almost certainly not running mypy from your project venv.
The solution: run mypy with the --python-version flag; in my case that was --python-version=3.7.
If you use pre-commit, you can also add this as an argument in a precommit check in your .pre-commit-config.yaml. Mine looks like this:
repos:
  ...
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v0.750  # Use the sha / tag you want to point at
    hooks:
      - id: mypy
        args: [--python-version=3.7]
If you run mypy often from the command line, you can also add it to a config file as described here: https://mypy.readthedocs.io/en/stable/config_file.html.
Another note: if mypy is reporting errors when your pre-commit hooks run, but not when it is run by itself from the project venv, then you need to either
add python-version as an argument like I do above, or
reinstall pre-commit in the new project venv (with the correct python version)
I hadn't worked on my Django project for some days, and now that I'm back I can't work on it. When I debug or run it in Eclipse/Aptana I get the error "Error: No module named staticfiles".
I have even updated Aptana to today's updates, with no luck.
I have uninstalled Django, deleted all its files, and reinstalled it.
If I import django with Python in cmd (on Windows), it is in the place I expect it to be.
But if I delete the 'django.contrib.staticfiles', entry from INSTALLED_APPS in settings.py, everything works fine, except that I have no access to the static files, as expected.
Around that time I installed Google App Engine + Python 2.5; could this be the problem, and how do I solve it?
Thank you very much.
Here are the steps I'd take to find the problem:
verify that it's working correctly on the command line (cmd.exe on Windows), just to remove the issues associated with Aptana. You need to run something like C:\Path\to\Python2.6\python.exe manage.py runserver (NB: any management command that reads your settings.py will do). If this gives the same error, then you haven't got Django 1.3.1 installed in Python 2.6 (you could install it, or you could set up a fresh virtualenv, see below)
once you've got it working on the command line, you just have to make sure that Aptana is using the correct interpreter path. Check that you've defined it correctly in your global preferences (the workspace settings, PyDev Python interpreter) and that the specific project is using it (check in the project settings that it uses the Python interpreter you just defined)
NB: Django 1.3.1 can run on Python 2.5, but the next version of Django will not.
Here is how I would avoid this in the future:
use virtualenv [1] to avoid being dependent on the arbitrary history of your installation (once you've installed virtualenv into ANY version of Python, you can specify which Python to use when you create the virtualenv: virtualenv -p C:\Path\to\Python2.6\python.exe)
use virtualenv --no-site-packages to ensure you have no local dependencies
use pip [2] to install all your Python packages (problems may occur with packages containing binary content; use easy_install for those)
use pip freeze > requirements.txt to record your dependencies (and add this file to your source code control); see the sketch after the footnotes below
[1] http://pypi.python.org/pypi/virtualenv
[2] http://pypi.python.org/pypi/pip
NB pip and easy_install are automatically installed into your new virtualenv
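Putting those steps together, a typical session might look like this (paths and versions are illustrative):

C:\> pip install virtualenv
C:\> virtualenv -p C:\Path\to\Python2.6\python.exe --no-site-packages env
C:\> env\Scripts\activate
(env) C:\> pip install Django==1.3.1
(env) C:\> pip freeze > requirements.txt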
Use {% load static %} instead of {% load staticfiles %}; in newer versions of Django, the syntax for loading static files changed.
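A typical template then looks like this (the stylesheet path is just an example):

{% load static %}
<link rel="stylesheet" href="{% static 'css/style.css' %}">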