My project has django-heroku in its Pipfile as a package.
django-heroku has gunicorn in its Pipfile as a dev-package. See: https://github.com/heroku/django-heroku/blob/master/Pipfile
I would expect that after running pipenv install --dev in my project, I could then run pipenv run gunicorn.
But it throws the following error:
Error: the command gunicorn could not be found within PATH or Pipfile's [scripts].
If dev dependencies aren't available, what's the point of install --dev?
One answer is that the "dev dependencies" of package X are the packages someone would need if they were developing (as opposed to using) package X.
I would expect that after running pipenv install --dev in my project, ...
If you use pipenv install --dev in your project, pipenv should install all the packages that are required to develop your project.
If it recursively installed all dev dependencies all the way down, it might pull in Python profiling packages, test runners, etc., that other packages need for development. Those wouldn't necessarily be appropriate for someone developing your project.
As an example, if my project listed pytest as a dev dependency, I would be unhappy if pipenv installed nose, which could be listed as a dev dependency in some other, out-of-date package.
If developers of your package need gunicorn, you should list it explicitly as a dev dependency of your project.
I believe the Pipfile you've linked to is only relevant to the development of this package.
However, when the package is installed, it usually relies on setup.py:
REQUIRED = [
    'dj-database-url>=0.5.0', 'whitenoise', 'psycopg2', 'django'
]
As you can see, gunicorn is missing.
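For context, here is a simplified sketch of how that setup.py wires those requirements in (this shows the shape of the file, not the actual django-heroku source):

# setup.py (simplified sketch, not the real django-heroku file)
from setuptools import setup

REQUIRED = [
    'dj-database-url>=0.5.0', 'whitenoise', 'psycopg2', 'django'
]

setup(
    name='django-heroku',
    install_requires=REQUIRED,  # this is all that consumers of the package get
    # gunicorn lives only in the package's own Pipfile [dev-packages],
    # so it is never installed for projects that merely depend on django-heroku
)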
I'm using tox with poetry and pyenv, and I'm getting quite confused.
I'm using pyenv local 3.6.9 3.7.10 to set several Python versions in my myprojectroot folder. On top of that, I use poetry to manage dependencies and the mypackage build. Finally, I use tox to run automated tests against several Python versions.
My problem is that tox creates, for (let's say) version 3.6.9, a virtual environment located in the myproject/.tox directory. To that end, it installs all the dependencies listed by poetry in that virtual env, but it also installs mypackage! (I checked in the .tox folder.)
Questions:
tox usually installs packages with pip. Yet I use poetry here. How can it install my package then? Does it build a wheel with poetry and install it afterwards?
Does it pick up modifications to my local directory code? Should I run tox -r?
I recently moved my test folder; the project configuration is now:
pyproject.toml
src
+- mypackage
   +- __init__.py
   +- mypackage.py
tests
+- test_mypackage.py
and I need to run pytest when modifying mypackage. How do I do that?
What's the link with skipsdist=True?
Thanks for your help!
ad 1) tox does not build a wheel, but an sdist (source distribution). If you want to build a wheel, you need to install the tox-wheel plugin: https://github.com/ionelmc/tox-wheel
But your idea is right: poetry builds the sdist, and pip is used under the hood to install it.
ad 2) tox notices changes to your source code, so there is no need to run tox -r. The tox documentation lacks a bit of info on this topic; meanwhile, have a look at https://github.com/tox-dev/tox/issues/2003#issuecomment-815832363.
ad 3) pytest does test discovery on its own, so it should be able to find the tests. A command like pytest or poetry run pytest should be enough in your case. Disclaimer: I do not use poetry, so maybe you'd need to be a bit more explicit about the path. The official poetry documentation on tox suggests the following command: poetry run pytest tests/, see https://python-poetry.org/docs/faq/#is-tox-supported (a minimal example test module is sketched after ad 4 below).
ad 4) You can read more about skipsdist=True at https://tox.readthedocs.io/en/latest/config.html#conf-skipsdist - it tells tox whether to build an sdist or not. If you do not build an sdist, your tests will not test the built package, but only the source code (if you point pytest at it). This may not be what you want, depending on whether you develop an app or a library, or other circumstances I do not know.
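Coming back to ad 3, here is a minimal test module that pytest would discover under the layout above (the import and the assertion are placeholders, not taken from your project):

# tests/test_mypackage.py (illustrative placeholder)
from mypackage import mypackage  # resolvable once tox has installed the package into its venv

def test_import_smoke():
    assert mypackage is not None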
I have written a Python utility script that relies on the Debian package python3-apt:
import apt
...
def get_packages():
    cache = apt.Cache()
    for pkg in cache:
        if pkg.installed and pkg.name in PACKAGE_LIST:
            yield pkg.name
I am now expanding the script into a project, with the eventual intent of making it available on PyPI and/or as a Debian package itself.
I use virtualenvs to isolate my Python development environments. What package name (or path) do I need to add to my virtualenv so that I can call import apt from within that environment?
So far I have tried:
apt on PyPI. Strange old release.
vext. Does not currently support apt.
other things on PyPI that start with "apt". None of them are a simple intermediary for python3-apt.
You can achieve this with pipenv as follows (similar instructions should work for other venv managers):
pipenv --site-packages # see note 1
PIP_IGNORE_INSTALLED=1 pipenv install # see note 2
You are more likely to run this as:
pipenv --site-packages
PIP_IGNORE_INSTALLED=1 pipenv install -e . --dev
# treats codebase as a package, also installs dev dependencies
Note 1: We must access system packages (aka site packages) so that we can import apt.
Note 2: ...but we prefer virtualenv packages to system packages. See
https://pipenv.pypa.io/en/latest/advanced/#working-with-platform-provided-python-components for details.
Comments:
This means that all other system packages not defined in your Pipfile are also available in your venv. You must remember that they aren't necessarily available to other developers using the same codebase. If you have a basic CI environment, it should catch this.
This approach will work for other packages not supported by vext.
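A quick way to sanity-check that the system bindings are actually visible from inside the venv is a small diagnostic like the following (a sketch; run it with pipenv run python):

# check_apt.py (diagnostic sketch)
try:
    import apt  # provided by the Debian package python3-apt via the system site-packages
except ImportError as exc:
    raise SystemExit('python3-apt is not visible; recreate the venv with --site-packages') from exc

print(apt.__file__)  # should point into /usr/lib/python3/dist-packages/apt/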
I develop a CLI tool in Python for in-house use.
I would like to introduce pipenv to my project to manage "dependencies of dependencies", because I encountered a bug caused by a difference between the production and development environments.
However, my CLI tool is installed as a package (httpie and ansible take this strategy).
So, I have to specify all dependencies in setup.py.
How should I carry the "dependencies of dependencies" from Pipfile.lock over into setup.py?
(Or should I take another approach?)
It is suggested that you do this the other way around: instead of referencing dependencies in the Pipfile, list them in setup.py and reference them in the Pipfile with
pipenv install -e .
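A minimal sketch of what that can look like (the package name, dependencies, and entry point below are placeholders, not taken from your project):

# setup.py (sketch; names and version pins are placeholders)
from setuptools import setup, find_packages

setup(
    name='mycli',
    version='0.1.0',
    packages=find_packages(),
    install_requires=[
        'requests>=2.20',  # loose, abstract runtime requirements belong here
        'click',
    ],
    entry_points={'console_scripts': ['mycli = mycli.cli:main']},
)

pipenv install -e . then resolves these abstract requirements and records the exact versions of every transitive dependency in Pipfile.lock, so the lock file gives you reproducible environments while setup.py remains the single list of what the package needs.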
I'm trying to deploy my Django site on Heroku, and thus I need a requirements.txt file with the packages Heroku has to install for me. I understand Django is one of the necessary packages. Unfortunately, Django isn't included in the file when I run pip freeze > requirements.txt. Why is this? I'm not sure what to show you so you can tell me what's going wrong; let me know and I'll add it. FYI, the site runs just fine on my local computer, so Django is definitely installed.
Sounds like you are working in a virtual environment, yet your Django dependency is installed globally. Check which Python packages are installed globally and uninstall Django (you probably don't need it globally). Then install it into your virtual environment. Now the freeze command should output Django as well.
General note: most packages should be installed into your project's virtual environment. There are only a few packages where it makes sense to install them globally (e.g. AWS management tools).
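If you want to confirm where the mismatch is, here is a small diagnostic you can run with the same interpreter you use for pip freeze (a sketch, nothing project-specific):

# diagnose.py (sketch)
import sys
print(sys.prefix)        # points inside your virtualenv when it is active

import django            # an ImportError here already tells you Django is missing from the venv
print(django.__file__)   # otherwise this shows whether Django comes from the venv or the global site-packages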
I have a Python project that has a few dependencies (defined under install_requires in setup.py). My ops people require a package to be self-contained and only depend on a Python installation. The litmus test would be that they're able to get a zip file and then unzip and run it without an internet connection.
Is there an easy way to package an install including dependencies? It is acceptable if I have to build on the OS/architecture that it will eventually be run on.
For what it's worth, I've tried both setup.py build and setup.py sdist, but they don't seem to fit the bill since they do not include dependencies. I've also considered virtualenv (which could be installed if absolutely necessary), but that has hard-coded paths, which makes it less than ideal.
There are a few nuances to how pip works. Unfortunately, using --prefix vendor to store all the dependencies of the project doesn't work if any of those dependencies, or dependencies of dependencies, are already installed somewhere pip can find them: it will skip those and only install the rest to your vendor folder.
In the past I've used virtualenv's --no-site-packages option to solve this issue. At one company we would ship the whole virtualenv, which includes the python binary. In the interest of only shipping the dependencies, you can combine using a virtualenv with the --prefix switch on pip to give yourself a clean environment that installs to the right place.
I'll provide an example script that creates a temporary virtualenv, activates it, then installs the dependencies to a local vendor folder. This is handy if you are running in CI.
#!/bin/bash
tempdir=$(mktemp -d -t project.XXX) # create a temporary directory
trap "rm -rf $tempdir" EXIT # ensure it is cleaned up
# create the virtualenv and exclude packages outside of it
virtualenv --python=$(which python2.7) --no-site-packages $tempdir/venv
# activate the virtualenv
source $tempdir/venv/bin/activate
# install the dependencies as above
pip install -r requirements.txt --prefix=vendor
In most cases you should be able to "vendor" all the dependencies. It's basically a crude version of virtualenv.
For example, look at how the requests package includes chardet and urllib3 in its own source tree. Here's an example script that should do the initial downloading and copying for you: https://gist.github.com/proppy/1136723
Once you have the dependencies installed, you can reference them with from .some.namespace import dependency_name to make sure that you're using your local versions.
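For instance, if you copy a dependency's source into a vendor subpackage of your own project (the layout and names below are illustrative), the relative import looks like this:

# mypackage/__init__.py  (illustrative; assumes urllib3's source was copied to
# mypackage/vendor/urllib3/ and that mypackage/vendor/__init__.py exists)
from .vendor import urllib3  # always resolves to the vendored copy, never a system-wide install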
It's possible to do this with recent versions of pip (I'm using 8.1.2). On the build machine:
pip install -r requirements.txt --prefix vendor
Then run it:
PYTHONPATH=vendor/lib/python2.7/site-packages python yourapp.py
(This is basically an expansion of @valentjedi's comment. Thanks!)
Let's say you have a Python module app.py with its dependencies listed in a requirements.txt file.
First, install all your dependencies into an appdeps folder:
python -m pip install -r requirements.txt --target=./appdeps
Then, in your app.py module, add this dependency folder to the Python path:
# app.py
import os
import sys

# make the vendored appdeps folder importable regardless of the working directory,
# and give it precedence over any globally installed copies
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), 'appdeps'))

# rest of your module normally
# ...
This will work the same way as if you were running the script from a venv with all the dependencies installed inside it.