Custom post-install script not running with pip - Python

Please, before flagging this as a duplicate: I have tried a bunch of solutions, including one from here, but with no luck.
I have created a simple tool to do some tasks and was able to package it successfully.
When installing it, I get the desired effect with python setup.py install, but pip install package_name just installs the package without running the post-installation script.
Here is part of my code:
setup.py
from distutils.core import setup
from app.scripts import *

setup(
    # Application name
    name="my-app-name",
    version="my-app-version",
    author="my-name",
    author_email="my-email",
    packages=['app'],
    include_package_data=True,
    license='MIT',
    url="https://my-url",
    description="description",
    install_requires=["flake8"],
    cmdclass={
        "install": Post_install,
    },
)
scripts.py
from distutils.command.install import install
import os

class Post_install(install):
    @staticmethod
    def func():
        return True

    def run(self):
        install.run(self)
        # Post-install actions
        if Post_install.func():
            print("Bingo")
        else:
            print("Failed")
Thanks :)
PS I run pip install after uploading the package.

Install the package directly from your GitHub repository:
pip install -vvv git+url/for/github/repo@my-branch
You mentioned in the chat that you'd like to add this package to your requirements.txt file. See this question for details:
-e git://github.com/path/to/project
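For instance, a requirements.txt entry for an editable install from a specific branch could look like the following sketch (repository path, branch, and egg name are placeholders):
-e git+https://github.com/path/to/project.git@my-branch#egg=my-app-name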
Former answer (rejected by the OP):
I managed to recreate the issue you're having. It seems to be a matter of pip install silencing or redirecting output (as indicated by an answer to this question).
The solution is to add the option -vvv after pip install. I'm guessing the v stands for verbose.

Related

How can I check that the installed packages match Pipfile.lock using pipenv?

In my tests, I would like to run a command to make sure that the installed packages in my virtual environment match the packages found in Pipfile.lock.
Is there a command like this?
pipenv checkifinstalled || exit 1
This problem can be reduced to two steps:
1. Convert Pipfile.lock into a requirements.txt file (in the format generated by pip freeze). This is easily done by running pipenv requirements (or pipenv requirements --dev). (Note that this command is supported in pipenv >= 2022.4.8.)
2. Check that the installed packages match the generated requirements.txt file. Solutions to this are found under this question: Check if my Python has all required packages.
Implementation:
Here is how I put it all together in a test:
import pkg_resources
import subprocess
import unittest

class DependencyChecks(unittest.TestCase):

    def test_requirements_installed(self):
        # Regenerate the requirements list from Pipfile.lock
        requirements_lines = subprocess.check_output(
            ["pipenv", "requirements", "--dev"], text=True
        ).splitlines()
        # Skip index URL lines such as "-i https://pypi.org/simple"
        req_lines = [line for line in requirements_lines if not line.startswith("-i ")]
        requirements = pkg_resources.parse_requirements(req_lines)
        for requirement in requirements:
            req = str(requirement)
            with self.subTest(requirement=req):
                # Raises an exception if the requirement is not satisfied
                pkg_resources.require(req)
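Assuming the test lives in a file such as test_dependencies.py (a hypothetical name), it runs like any other unittest test case:
python -m unittest test_dependencies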

Specify setup time dependency with `--global-option` for a python package

I'm trying to package a Python library that has setup-time (and also run-time) dependencies: setup.py imports those modules so that they can tell the setup process where to find the C headers they provide:
from distutils.extension import Extension
from pybedtools.helpers import get_includes as pybedtools_get_includes
from pysam import get_include as pysam_get_include

# [...]

extensions = [
    Extension(
        "bam25prime.libcollapsesam", ["bam25prime/libcollapsesam.pyx"],
        include_dirs=pysam_get_include()),
    Extension(
        "bam25prime.libcollapsebed", ["bam25prime/libcollapsebed.pyx"],
        include_dirs=pybedtools_get_includes(),
        language="c++"),
]

# [...]
However, one of the dependencies (pybedtools) needs to be installed with a specific --global-option pip option (see at the end of the post what happens when the option is not provided).
If I understand correctly, the currently up-to-date way to automatically have some dependencies available before setup.py is used is to indicate them in the [build-system] section of a pyproject.toml file.
I tried the following pyproject.toml:
[build-system]
requires = [
"pysam",
"pybedtools # git+https://github.com/blaiseli/pybedtools.git#fix_missing_headers --global-option='cythonize'",
]
build-backend = "setuptools.build_meta"
(By the way, it took me quite some time to figure out how to specify the build-backend; that part of the documentation is not easily discoverable.)
However, this generates the following error upon pip install:
ERROR: Invalid requirement: "pybedtools # git+https://github.com/blaiseli/pybedtools.git#fix_missing_headers --global-option='cythonize'"
Hint: It looks like a path. File 'pybedtools # git+https://github.com/blaiseli/pybedtools.git#fix_missing_headers --global-option='cythonize'' does not exist.
How can I correctly specify options for dependencies?
If I simply specify the package and its URL ("pybedtools # git+https://github.com/blaiseli/pybedtools.git#fix_missing_headers"), the install fails as follows:
Exception:
Cython-generated file 'pybedtools/cbedtools.cpp' not found.
Please install Cython and run
python setup.py cythonize
It was while trying to tackle the above error that I found out about the --global-option pip option.
I can manually run pip install --global-option="cythonize" git+https://github.com/blaiseli/pybedtools.git#fix_missing_headers and the install works, provided that the package's dependencies are already installed; otherwise their installation fails because of an unrecognized "cythonize" option (which is another issue...).
Note that this option is only needed when installing "from source" (for instance when installing from a fork on github, as is my case here).
Same thing as in your other question: I suspect cythonize is a setuptools command, not a global option.
If that is indeed the case, you would be better off setting an alias in your setup.cfg. If you run python setup.py alias install cythonize install, this should add the following to your setup.cfg:
[aliases]
install = cythonize install
When running pip install later, pip will honor this alias and the cythonize command will be executed right before the install command.
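With that alias in the fork's setup.cfg, installing the fork directly should then trigger cythonize before install; a sketch, assuming the alias has been committed to the fix_missing_headers branch:
pip install git+https://github.com/blaiseli/pybedtools.git#fix_missing_headers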

Running post install code from PyPi via pip

I am trying to run a block of code after my Python package has been downloaded from PyPI.
I've set up a custom cmdclass in my setuptools.setup:
from setuptools import find_packages, setup
from setuptools.command.install import install

class CustomInstallCommand(install):
    def run(self):
        print("Here is where I would be running my code...")
        install.run(self)

setup(
    name='packagename',
    packages=find_packages(),
    version='0.1',
    description='',
    author='',
    cmdclass={
        'install': CustomInstallCommand,
    },
    author_email='',
    url='',
    keywords=[],
    classifiers=[],
)
This works great when I run python setup.py install, which outputs my print statement. However, when I build the tar.gz package (using python setup.py sdist) and try to install it via pip (pip install dist/mypackage-0.1.tar.gz), the print statement is never printed. I have also tried uploading the built package to PyPI and installing it from there with pip.
I have looked at a similar question asked on Stack Overflow, but the solution did not work.
pip install does run your custom command; it just hides all standard output from setup.py. To increase the verbosity level and see your command's output, try running
pip install -v ...
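For the sdist built in the question, that would be (path taken from the question; adjust it to your package):
pip install -v dist/mypackage-0.1.tar.gz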

Is it possible to use pip to install the latest tag?

One can use pip to install a specific tag:
pip install -e git+ssh://git@github.com/{usr}/{repo}.git@{tag}#egg={egg}
However, I can't seem to find a way to make it point to the latest release (which would be releases/latest) rather than to the HEAD of master. Is it at all possible?
One constraint, it has to use ssh.
If you are using Python packages, here is one way to do this:
setup.py
import setuptools
import urllib.request

deps = [
    {
        'name': 'gunicorn',
        'url': 'github.com/benoitc/gunicorn',
    },
]

for i in range(len(deps)):
    # GitHub redirects <repo>/releases/latest to <repo>/releases/tag/<latest tag>;
    # follow the redirect and take the last path component as the tag name.
    tag_url = urllib.request.urlopen(f"https://{deps[i]['url']}/releases/latest").geturl()
    latest_tag = tag_url.split('/')[-1]
    # Rewrite the entry as a direct-reference requirement pinned to that tag
    deps[i] = f"{deps[i]['name']} @ git+ssh://git@{deps[i]['url']}@{latest_tag}"

setuptools.setup(
    install_requires=deps,
)
And then install the package locally:
python -m pip install .

Pip freeze does not show repository paths for requirements file

I've created an environment and added a package django-paramfield via git:
$ pip install git+https://bitbucket.org/DataGreed/django-paramfield.git
Downloading/unpacking git+https://bitbucket.org/DataGreed/django-paramfield.git
Cloning https://bitbucket.org/DataGreed/django-paramfield.git to /var/folders/9Z/9ZQZ1Q3WGMOW+JguzcBKNU+++TI/-Tmp-/pip-49Eokm-build
Unpacking objects: 100% (29/29), done.
Running setup.py egg_info for package from git+https://bitbucket.org/DataGreed/django-paramfield.git
Installing collected packages: paramfield
Running setup.py install for paramfield
Successfully installed paramfield
Cleaning up...
But when I want to create a requirements file, I see only the package name:
$ pip freeze
paramfield==0.1
wsgiref==0.1.2
How can I make it output the whole string git+https://bitbucket.org/DataGreed/django-paramfield.git instead of just the package name? The package isn't on PyPI.
UPD: Perhaps it has something to do with setup.py? Should I change it somehow to reflect the repo URL?
UPD2: I found quite a similar question on Stack Overflow, but the author was not sure how he had managed to resolve the issue, and the accepted answer unfortunately doesn't give a good hint, though judging from the author's commentary it has something to do with the setup.py file.
UPD3: I've tried passing download_url in setup.py and installing the package via pip with this URL, but the problem persists.
A simple but working workaround is to install the package with the -e flag, like pip install -e git+https://bitbucket.org/DataGreed/django-paramfield.git#egg=django-paramfield.
Then pip freeze shows the full source path of the package. It's not the best way (this should really be fixed in pip), but it works. The trade-off of -e (the editable flag) is that pip clones the git/hg repo into /path/to/venv/src/packagename and runs python setup.py develop, instead of cloning into a temp dir, running python setup.py install, and removing the temp dir once the package is set up.
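With an editable VCS install like that, the corresponding pip freeze line looks roughly like this (the exact pinned commit will vary):
-e git+https://bitbucket.org/DataGreed/django-paramfield.git@<commit-hash>#egg=django-paramfield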
Here's a script that will do that:
#!/usr/bin/env python
from subprocess import check_output
from pkg_resources import get_distribution

def download_url(package):
    # Read the Download-URL field from the package's PKG-INFO metadata
    dist = get_distribution(package)
    for line in dist._get_metadata('PKG-INFO'):
        if line.startswith('Download-URL:'):
            return line.split(':', 1)[1]

def main(argv=None):
    import sys
    from argparse import ArgumentParser
    argv = argv or sys.argv
    parser = ArgumentParser(
        description='show download urls for installed packages')
    parser.parse_args(argv[1:])
    for package in check_output(['pip', 'freeze'], text=True).splitlines():
        print('{}: {}'.format(package, download_url(package) or 'UNKNOWN'))

if __name__ == '__main__':
    main()
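Saved as, say, pip_download_urls.py (a hypothetical filename), the script prints one line per installed package:
python pip_download_urls.py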
This is an old question, but I have just worked through this same issue; here is the resolution.
Simply add the path to the repo (git in my case) to the requirements file instead of the package name, i.e.:
...
celery==3.0.19
# chunkdata isn't available on PyPI
https://github.com/aaronmccall/chunkdata/zipball/master
distribute==0.6.34
...
Worked like a charm when deploying on Heroku.
