How to run a test suite that comes in an sdist? - python

I would like to be able to run the test suite after "mypackage" has been installed via pip.
I created the mypackage-version.tar.gz file myself, using python setup.py sdist. mypackage-version.tar.gz does contain the tests/ directory (and the .egg-info/SOURCES.txt inside it lists all the files under tests/ too).
But after I've run pip install mypackage-version.tar.gz inside a dedicated brand new virtual environment, there is no tests/ directory anywhere in the virtual environment:
$ find ./myvenv/ -name "*tests*"
./myvenv/lib/python3.6/site-packages/pip/_vendor/webencodings/__pycache__/tests.cpython-36.pyc
./myvenv/lib/python3.6/site-packages/pip/_vendor/webencodings/tests.py
Reading the pip documentation and the output of pip install --help, I just can't figure out whether there's any way to ask pip to install the tests along with the rest.
I've also tried unpacking the tests/ directory manually (along with pytest.ini), but the tests do not start because pytest complains about not finding the mypackage module (ModuleNotFoundError: No module named 'mypackage'), even though both pytest and mypackage show up in pip list... and both are the ones from the virtual environment (as the which command confirms), as are pip and python (which can import mypackage).
Yet manually unpacking looks hacky, so before digging further into this, I'd like to know what the right way to go is: using pip? Manually unpacking? Anything else? Am I missing something obvious?
EDIT: in the end, I could run the manually unpacked tests using python -m pytest. But this remains a workaround, and I'd still like to know whether there's a more proper (and automated) way to install and run the test suite.
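For what it's worth, the difference between the two invocations is documented behaviour: python -m pytest works like plain pytest except that it also prepends the current directory to sys.path, which is the usual reason one of them finds a module the other doesn't. A quick sketch (the unpack location is hypothetical):
$ cd unpacked-sdist/     # wherever tests/, pytest.ini and the mypackage/ sources were unpacked
$ pytest                 # does not put "." on sys.path
$ python -m pytest       # prepends "." to sys.path, so ./mypackage becomes importable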

Related

What is the preferred way to develop a python package without using setup.py

I am developing a Python package, and I don't want to keep running pip install . to reinstall my package every time I change something. Using the -e or --editable flag doesn't seem to work unless I have a setup.py file, nor does --no-use-pep517. I have a pyproject.toml instead, as is preferred nowadays if I am not mistaken. So, what is the preferred way to do this nowadays?
My package is just a CLI script, but it imports some functions from another file in the same directory called utils.py. When developing, I can't just run the script manually from the terminal, because then I get name_of_package is not a package from the line
from name_of_package.utils import function. Whereas if I just have
from utils import function, I can run the script from the terminal, but when I pip install it, it says there is no module named utils.
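For reference, the layout I'm describing is roughly this (a sketch; script.py stands in for the actual CLI script):
name_of_package
+- __init__.py
+- script.py
+- utils.py
with script.py being the entry point that needs function from utils.py.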
I did install poetry and my dependencies, ran poetry shell, and then tried to run my script with poetry run /path/to/script.py, but I kept getting an error that my package wasn't a package.
If you want to keep using setuptools as the "build back-end", you can replace the setup.py script with a declarative setup.cfg configuration file and still be able to do "editable" installations (independently of whether or not you have a pyproject.toml file).
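For example, a minimal setup.cfg along these lines is enough (a sketch; the metadata values are placeholders):
[metadata]
name = name_of_package
version = 0.1.0

[options]
packages = find:
With that in place, python -m pip install --editable . works with a reasonably recent pip, even without a setup.py.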
There is now PEP 660, which standardizes editable installations. The following tools support PEP 660:
PDM
Flit
Hatch
Poetry
Besides setuptools-based projects, any project that uses a PEP 660 build back-end should be installable as editable by pip (python -m pip install --editable .).
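For example, with flit as the back-end, a pyproject.toml roughly like this should be enough (a sketch; name and version are placeholders, and PEP 660 support needs a sufficiently recent flit_core):
[build-system]
requires = ["flit_core >=3.4"]
build-backend = "flit_core.buildapi"

[project]
name = "name_of_package"
version = "0.1.0"
After that, python -m pip install --editable . goes through the standardized PEP 660 hooks.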

How does tox install my poetry project on its virtual environment?

I'm using tox with poetry with pyenv and I'm getting quite confused.
I'm using pyenv local 3.6.9 3.7.10 to set several Python versions in my myprojectroot folder. On top of that, I use poetry to manage dependencies and the mypackage build. Finally, I use tox to run automatic tests against several Python versions.
My problem is that tox creates, for each version (say 3.6.9), a virtual environment located in the myproject/.tox directory. To that end, it installs all dependencies listed by poetry in that virtual env, but it also installs mypackage! (I checked in the .tox folder.)
Questions:
tox usually installs packages with pip, yet I use poetry here. How can it install my package then? Does it build the wheel with poetry and install it afterwards?
Does it pick up modifications to my local code, or should I run tox -r?
I recently moved my test folder configuration to the following layout:
pyproject.toml
src
+- mypackage
   +- __init__.py
   +- mypackage.py
tests
+- test_mypackage.py
and I need to run pytest whenever I modify mypackage. How do I do that?
What's the link with skipsdist=True?
Thanks for your help!
ad 1) tox does not build a wheel, but an sdist (source distribution). If you want to build a wheel instead, you need to install the tox-wheel plugin: https://github.com/ionelmc/tox-wheel
But your idea is right: poetry builds the sdist, and tox uses pip under the hood to install it.
ad 2) tox notices changes to your source code, so there is no need to run tox -r. The tox documentation is a bit sparse on this topic; meanwhile, have a look at https://github.com/tox-dev/tox/issues/2003#issuecomment-815832363.
ad 3) pytest does test discovery on its own, so it should be able to find the tests. A command like pytest or poetry run pytest should be enough in your case. Disclaimer: I do not use poetry, so maybe you'd need to be more explicit about the path. The official poetry documentation on tox suggests the following command: poetry run pytest tests/, see https://python-poetry.org/docs/faq/#is-tox-supported
ad 4) You can read more about skipsdist=True at https://tox.readthedocs.io/en/latest/config.html#conf-skipsdist - it tells tox whether to build an sdist or not. If you do not build an sdist, your tests will not test the built package, but only the source code (if you point pytest at it). This may not be what you want - depending on whether you develop an app or a library, or other circumstances I do not know.
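Putting it together, a minimal tox.ini for a poetry-managed project could look like this (a sketch, assuming pyproject.toml declares poetry-core as the build back-end):
[tox]
envlist = py36, py37
# isolated_build makes tox build the sdist via the pyproject.toml back-end
isolated_build = true

[testenv]
deps = pytest
commands = pytest tests/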

pip install editable working dir to custom path using requirements.txt

Short version:
Is it possible to use the -e flag in requirements.txt together with a path that determines where the editable package gets checked out?
First approach
requirements.txt:
-e git+https://github.com/snake-soft/imap-storage.git#egg=imap-storage
Pro: Automated install
Contra: Editable directory is inside virtualenv src folder (not in workspace)
Second approach (Edit: don't use this unless you know what you're doing; see the bottom)
If I clone the repo and install it like this (with the virtualenv activated):
cd /home/user/workspace
git clone https://github.com/snake-soft/imap-storage.git
cd imap-storage
pip install -e .
This gives the structure I want:
workspace
├── imap-storage
├── django-project # uses imap-storage module
I have what I want: the repository (imap-storage) lies parallel to the django-project that uses it.
It is importable because it is installed inside the virtualenv.
Pro: Editable directory is inside my workspace
Contra: Not automated, not intuitive
Goal
pip install -r requirements.txt to install module from git (like first approach)
Module is in pythonpath of virtualenv -> importable
Editable working dir of the module is in my workspace (like second approach)
PS: Or is my thinking completely wrong, and should I go for something different entirely?
Why did I ask such a crazy question?
I thought I could make my life a little bit easier when both (the package and the Django project that uses it) lie editable inside my workspace, because I work on them in parallel.
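For what it's worth, pip itself has a --src option ("directory to check out editable projects into"); as far as I know it cannot be written inside requirements.txt, but it can be passed on the command line:
$ pip install --src ~/workspace -r requirements.txt
With the requirements.txt from the first approach, this should check imap-storage out into ~/workspace/imap-storage in editable mode - roughly the combination asked for above.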
My conclusion
I experimented a bit with the second approach and, in the end, decided to prefer the first approach.
Reason
With both methods, PyDev won't show it as an installed package.
When you mix both methods like this:
install the package via requirements.txt (with the -e switch)
uninstall it
clone it into e.g. ~/workspace/
install it with pip install -e . inside the package directory
then you will end up in a bad situation:
the 'virtualenv/src/' directory won't be deleted and is still recognized as the source of the package inside PyDev,
while a running Django instance that uses the package runs the package code from '~/workspace/'.
Suggestion
Use the first approach, import that source directory ('virtualenv/src/') as a project in PyDev, and make a link in the file manager of your choice.
It will save you from a complicated mistake.

Install local dist package into virtualenv

I have a pytest test, let's call it test.py. I used to run this test outside of virtualenv; now I'm trying to run it inside a virtualenv sandbox.
The project is structured like this:
~/project/test # where test.py and all virtualenv files live
~/project/mylibrary
test.py imports from mylibrary. In the past, this worked because I had the code in ~/project/mylibrary installed into /usr/lib/python2.7/dist-packages/mylibrary.
I can't run virtualenv with the --system-site-packages flag. I also can't move the code from ~/project/mylibrary into the ~/project/test folder. How can I get access to the code in mylibrary inside my virtualenv?
You don't need to do anything special - as long as you are working inside a virtualenv, python setup.py install will automatically install packages into
$VIRTUAL_ENV/lib/python2.7/site-packages
rather than your system-wide
/usr/lib/python2.7/dist-packages
directory.
In general it's better to use pip install mylibrary/, since that way you can neatly uninstall the package with pip uninstall mylibrary.
If you're installing a working copy of some code that you're developing, it might be a good idea to install it in "editable" mode using pip install -e mylibrary/, which creates a link to your source directory so that your installed module gets updated as you edit the code.
The easiest way would be to add the directory containing the library to your sys.path.
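For example, at the top of test.py (a quick sketch; it assumes ~/project/mylibrary is the package directory itself, so its parent has to go on the path):
import os
import sys

# make ~/project (the directory containing mylibrary/) importable
sys.path.insert(0, os.path.expanduser('~/project'))

import mylibrary
Note that this only affects the running process; installing into the virtualenv, as described above, is the more durable fix.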

Making a neat, installable Python library with Click

I'm trying to make a command line tool with Click in Python, and I can't seem to find any documentation on packaging the library into something that's installable. Is there any way to do this? At the moment I'm just using a virtual environment and installing it for testing using the commands listed in the docs (http://click.pocoo.org/4/setuptools/#testing-the-script):
$ virtualenv venv
$ . venv/bin/activate
$ pip install --editable .
I'm relatively new to Click so forgive me if I'm missing something painfully obvious.
If you've followed the Setuptools Integration steps in the article you linked to, you're most of the way there. Try installing the package as if it came from pip (maybe in a different virtualenv):
$ virtualenv deploy
$ source deploy/bin/activate
$ pip install .
Then you can invoke your command as normal - it'll be installed under the bin directory of the virtualenv. It's a good idea to try the command from somewhere other than the project directory, to make sure you don't have a dependency on being inside it (which is where you've probably been running it during testing).
Once you're happy that it installs correctly and all the imports work as expected, you can proceed to register your package with PyPI (the Python Package Index). You can read about this in the Python Docs.
That's about it really - setuptools/Click does most of the heavy lifting.
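For reference, the Setuptools Integration section of the Click docs boils down to a setup.py along these lines (a sketch; yourscript and cli are the placeholder names the docs use):
from setuptools import setup

setup(
    name='yourscript',
    version='0.1',
    py_modules=['yourscript'],
    install_requires=['Click'],
    entry_points={
        'console_scripts': [
            # "yourscript" becomes the executable in the virtualenv's bin directory;
            # "cli" is the Click command function defined in yourscript.py
            'yourscript = yourscript:cli',
        ],
    },
)
After pip install ., the yourscript executable lands in the virtualenv's bin directory, which is what makes it invokable "as normal".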
