I am trying to run a block of code after my Python package has been downloaded from PyPI.
I've set up a custom cmdclass in my setuptools.setup:
from setuptools import find_packages, setup
from setuptools.command.install import install

class CustomInstallCommand(install):
    def run(self):
        print("Here is where I would be running my code...")
        install.run(self)

setup(
    name='packagename',
    packages=find_packages(),
    version='0.1',
    description='',
    author='',
    cmdclass={
        'install': CustomInstallCommand,
    },
    author_email='',
    url='',
    keywords=[],
    classifiers=[],
)
This works great when I run python setup.py install, which outputs my print statement. However, when I build the tar.gz package (using python setup.py sdist) and try to install via pip (pip install dist/mypackage-0.1.tar.gz), the statement is never printed. I have also tried uploading the built package to PyPI and pip installing from there.
I have looked at a similar question asked on Stack Overflow, but the solution did not work.
pip install does run your custom command; it just hides all standard output from setup.py. To increase the verbosity level and see your command's output, try running:
pip install -v ...
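If you want the message to survive even without -v, one option (a sketch, not the only approach) is to have the custom command write to a log file instead of relying on stdout; the log path below is a made-up placeholder:

```python
import logging
from setuptools.command.install import install

# Hypothetical log path -- pick one that suits your package.
LOG_PATH = "/tmp/packagename-install.log"

class CustomInstallCommand(install):
    """Variant of the custom command that logs to a file, so the
    message is recorded even when pip swallows setup.py's stdout."""

    def run(self):
        logging.basicConfig(filename=LOG_PATH, level=logging.INFO)
        logging.info("Here is where I would be running my code...")
        install.run(self)
```

After the install you can inspect the file with tail /tmp/packagename-install.log regardless of pip's verbosity settings.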
I have a root image, tagged repo_a/image_a. Its Dockerfile looks like this:

FROM some_repo/some_image
# ...
RUN pip3 install https://$authToken:x-oauth-basic@github.com/library_a/archive/v0.0.1.tar.gz
# ...
library_a is a custom Python library using setuptools for installation. Its setup.py looks like:

setuptools.setup(
    name='library_a',
    version=...,
    author=...,
    author_email=...,
    description=...,
    url=...,
    entry_points={
        'console_scripts': [
            'luigim = ....luigim:main',
            'luigip = ....luigip:main'
        ]
    },
    include_package_data=True,
    packages=setuptools.find_packages(),
    install_requires=[
        'luigi==3.0.3',
        'boto3==1.18.52',
        'influxdb',
        'boto',
        'paramiko',
        'termcolor',
        'retrying'
    ]
)
Now I am using this root image to create another image, tagged repo_a/custom_image. Its Dockerfile looks like this:

FROM repo_a/image_a
# ...
RUN pip3 install https://$authToken:x-oauth-basic@github.com/library_b/archive/v0.0.45.tar.gz
RUN pip3 install https://$authToken:x-oauth-basic@github.com/library_c/archive/v1.0.43.tar.gz
# ...
Both library_b and library_c are custom Python libraries that also use setuptools for installation. Their setup.py files look like:

setuptools.setup(
    name='library_b',  # or 'library_c'
    version=...,
    author=...,
    author_email=...,
    description=...,
    url=...,
    include_package_data=True,
    packages=setuptools.find_packages(),
    install_requires=[
        'luigi==3.0.3'
    ]
)
This builds successfully, but my problem is that library_a, installed in repo_a/image_a, is no longer installed in repo_a/custom_image. To be more specific:
Libraries library_b and library_c are properly installed in the final image - I can see both in the output of pip3 freeze. Unfortunately, I cannot use them, since they depend on library_a, which does not show up in the pip3 freeze output.
If I remove the pip3 installs from the second Dockerfile, library_a can be found in the final image - I can see library_a in the output of pip3 freeze and can successfully import it in a python3 REPL.
So what happens here? It seems like the second set of pip3 installs shadows the first pip3 install run by repo_a/image_a. Why is that? What am I doing wrong?
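For what it's worth, a quick way to see at which stage library_a disappears is to run a small diagnostic script inside each image. This is just a sketch; the library names are the ones from the question and may not match your real distribution names:

```python
# Run inside each image, e.g.:
#   docker run --rm repo_a/image_a python3 diag.py
import importlib.metadata as md

# Collect the names of all installed distributions ("" guards
# against broken metadata entries with a missing Name field).
installed = sorted((d.metadata["Name"] or "") for d in md.distributions())

for name in ("library_a", "library_b", "library_c"):
    print(name, "installed" if name in installed else "MISSING")
```

Comparing the output between repo_a/image_a and repo_a/custom_image narrows down which build step removes the package.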
Hello, I'm trying to add a custom step to my Python 3 package. What I wish for is to execute a make command in the root directory of my Python package. However, when I install my package with pip3 install ., the CWD (current working directory) changes to /tmp/pip-req-build-<something>.
Here is what I have:
from setuptools import setup, find_packages
from setuptools.command.develop import develop
from setuptools.command.install import install
import subprocess, os

class CustomInstall(install):
    def run(self):
        # subprocess.run(['make', '-C', 'c_library/'])  # doesn't work: c_library/ doesn't exist in the changed path
        subprocess.run('./getcwd.sh')
        install.run(self)

setup(
    cmdclass={
        'install': CustomInstall
    },
    name='my-py-package',
    version='0.0.1',
    ....
)
Now, what's interesting to me is that getcwd.sh does get executed, and inside of it I have the following. It also prints the TMP location.
#!/bin/bash
SCRIPT=`realpath $0`
SCRIPTPATH=`dirname $SCRIPT`
echo $SCRIPTPATH > ~/Desktop/my.log
Is there a way to get the path from where the pip install . was run?
Python 3.8.5, Ubuntu 20.04, Pip3 pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
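One workaround (a sketch, with a made-up variable name) is to pass the original directory into the build explicitly through an environment variable, since pip copies the source tree into the temp directory before running setup.py:

```python
import os

# Invoke pip as:  MY_SRC_DIR=$(pwd) pip3 install .
# MY_SRC_DIR is an arbitrary, made-up name; we fall back to the
# current (temp) directory when the variable isn't set.
source_dir = os.environ.get("MY_SRC_DIR", os.getcwd())
print(source_dir)
```

Inside the custom install command, source_dir could then be used as the -C argument for make, assuming the caller remembers to set the variable.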
I'm trying to prepare setup.py that will install all necessary dependencies, including google-cloud-pubsub. However, python setup.py install fails with
pkg_resources.UnknownExtra: googleapis-common-protos 1.6.0b6 has no such extra feature 'grpc'
The weird thing is that I can install those dependencies through pip install in my virtualenv.
How can I fix it or get around it? I use Python 2.7.15.
Here's a minimal configuration to reproduce the problem:
setup.py

from setuptools import setup

setup(
    name='example',
    install_requires=['google-cloud-pubsub']
)
In your setup.py, use the following:

from setuptools import setup

setup(
    name='example',
    install_requires=['google-cloud-pubsub', 'googleapis-common-protos==1.5.3']
)

Pinning googleapis-common-protos to 1.5.3 keeps you off the 1.6.0b6 beta release that declares no 'grpc' extra. That seems to get around it.
Please, before flagging as a duplicate: I have tried a bunch of solutions, including one here, but with no luck.
I have created a simple tool to do some tasks and was able to package it successfully.
When trying to install it, I get the desired effect when I use python setup.py install, but pip install package_name just installs the package with no post-installation script.
Here is part of my code:
setup.py

from distutils.core import setup
from app.scripts import *

setup(
    # Application name
    name="my-app-name",
    version="my-app-version",
    author="my-name",
    author_email="my-email",
    packages=['app'],
    include_package_data=True,
    license='MIT',
    url="https://my-url",
    description="description",
    install_requires=["flake8"],
    cmdclass={
        "install": Post_install
    }
)
scripts.py

from distutils.command.install import install
import os

class Post_install(install):
    @staticmethod
    def func():
        return True

    def run(self):
        install.run(self)
        # Post-install actions
        if Post_install.func():
            print("Bingo")
        else:
            print("Failed")
Thanks :)
PS I run pip install after uploading the package.
Install the package directly from your GitHub repository:
pip install -vvv git+url/for/github/repo@my-branch
You mentioned in the chat that you'd like to add this package to your requirements.txt file. See this question for details:
-e git+https://github.com/path/to/project#egg=<package-name>
Former answer (rejected by the OP):
I managed to recreate the issue you're having. It seems to be a matter of pip install silencing or redirecting output (as indicated by an answer to this question).
The solution is to add the option -vvv after pip install. I'm guessing the v stands for verbose.
I've created an environment and added a package django-paramfield via git:
$ pip install git+https://bitbucket.org/DataGreed/django-paramfield.git
Downloading/unpacking git+https://bitbucket.org/DataGreed/django-paramfield.git
Cloning https://bitbucket.org/DataGreed/django-paramfield.git to /var/folders/9Z/9ZQZ1Q3WGMOW+JguzcBKNU+++TI/-Tmp-/pip-49Eokm-build
Unpacking objects: 100% (29/29), done.
Running setup.py egg_info for package from git+https://bitbucket.org/DataGreed/django-paramfield.git
Installing collected packages: paramfield
Running setup.py install for paramfield
Successfully installed paramfield
Cleaning up...
But when I want to create a requirements file, I see only the package name:
$ pip freeze
paramfield==0.1
wsgiref==0.1.2
How can I make it output the whole string git+https://bitbucket.org/DataGreed/django-paramfield.git instead of just the package name? The package isn't on PyPI.
UPD: Perhaps it has something to do with setup.py? Should I change it somehow to reflect the repo URL?
UPD2: I found quite a similar question on Stack Overflow, but the author was not sure how he managed to resolve the issue, and the accepted answer unfortunately doesn't give a good hint, though judging from the author's commentary it has something to do with the setup.py file.
UPD3: I've tried passing download_url in setup.py and installing the package via pip with this URL, but the problem persists.
A simple but working workaround is to install the package with the -e flag, like pip install -e git+https://bitbucket.org/DataGreed/django-paramfield.git#egg=django-paramfield.
Then pip freeze shows the full source path of the package. It's not the best way; it should be fixed in pip, but it works. The trade-off of -e (editable flag) is that pip clones the git/hg repo into /path/to/venv/src/packagename and runs python setup.py develop, instead of cloning it into a temp dir, running python setup.py install, and removing the temp dir after the package is set up.
Here's a script that will do that:
#!/usr/bin/env python
from subprocess import check_output
from pkg_resources import get_distribution

def download_url(package):
    dist = get_distribution(package)
    for line in dist._get_metadata('PKG-INFO'):
        if line.startswith('Download-URL:'):
            return line.split(':', 1)[1].strip()

def main(argv=None):
    import sys
    from argparse import ArgumentParser
    argv = argv or sys.argv
    parser = ArgumentParser(
        description='show download urls for installed packages')
    parser.parse_args(argv[1:])
    for package in check_output(['pip', 'freeze']).decode().splitlines():
        print('{}: {}'.format(package, download_url(package) or 'UNKNOWN'))

if __name__ == '__main__':
    main()
This is an old question, but I have just worked through this same issue and found a resolution.
Simply add the path to the repo (git in my case) to the requirements file instead of the package name, i.e.:

...
celery==3.0.19
# chunkdata isn't available on PyPI
https://github.com/aaronmccall/chunkdata/zipball/master
distribute==0.6.34
...

Worked like a charm deploying on Heroku.