I have a root image which is tagged as repo_a/image_a. Its Dockerfile looks like this:
FROM some_repo/some_image
# ...
RUN pip3 install https://$authToken:x-oauth-basic@github.com/library_a/archive/v0.0.1.tar.gz
# ...
library_a is a custom Python library that uses setuptools for installation. Its setup.py looks like this:
import setuptools

setuptools.setup(
    name='library_a',
    version=...,
    author=...,
    author_email=...,
    description=...,
    url=...,
    entry_points={
        'console_scripts': [
            'luigim = ....luigim:main',
            'luigip = ....luigip:main'
        ]
    },
    include_package_data=True,
    packages=setuptools.find_packages(),
    install_requires=[
        'luigi==3.0.3',
        'boto3==1.18.52',
        'influxdb',
        'boto',
        'paramiko',
        'termcolor',
        'retrying'
    ]
)
Now I am using this root image to create another image, tagged as repo_a/custom_image. Its Dockerfile looks like this:
FROM repo_a/image_a
# ...
RUN pip3 install https://$authToken:x-oauth-basic@github.com/library_b/archive/v0.0.45.tar.gz
RUN pip3 install https://$authToken:x-oauth-basic@github.com/library_c/archive/v1.0.43.tar.gz
# ...
Both library_b and library_c are custom Python libraries that also use setuptools for installation. Their setup.py files look like this:
import setuptools

setuptools.setup(
    name='library_b',  # or 'library_c'
    version=...,
    author=...,
    author_email=...,
    description=...,
    url=...,
    include_package_data=True,
    packages=setuptools.find_packages(),
    install_requires=[
        'luigi==3.0.3'
    ]
)
This builds successfully, but my problem is that library_a, which is installed in repo_a/image_a, is no longer present in repo_a/custom_image. To be more specific:
library_b and library_c are properly installed in the final image - I can see both in the output of pip3 freeze. Unfortunately, I cannot use them, since they depend on library_a, which does not show up in the pip3 freeze output.
If I remove the pip3 installs from the second Dockerfile, library_a can be found in the final image - I can see library_a in the output of pip3 freeze and can successfully import it in a python3 REPL.
So what is happening here? It seems as though the second set of pip3 installs shadows the first pip3 install run by repo_a/image_a. Why is that? What am I doing wrong?
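To narrow this down, here is a small diagnostic sketch (not from the original post; it assumes the image tags above): ask each image's interpreter directly whether it can see the distribution and where it lives.

# Run inside each image, e.g. via:
#   docker run --rm -it repo_a/image_a python3
#   docker run --rm -it repo_a/custom_image python3
import pkg_resources

try:
    dist = pkg_resources.get_distribution('library_a')
    print(dist.project_name, dist.version, 'at', dist.location)
except pkg_resources.DistributionNotFound:
    print('library_a is not visible to this interpreter')

If library_a shows up here but not in pip3 freeze, the two commands would be talking to different interpreters or site-packages directories, which would point at the base and derived images using different Python environments.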
Related
Hello, I'm trying to add a custom step to my Python 3 package. What I wish for is to execute a make command in the root directory of my Python package. However, when I install my package with pip3 install ., the CWD (current working directory) changes to /tmp/pip-req-build-<something>.
Here is what I have:
from setuptools import setup, find_packages
from setuptools.command.develop import develop
from setuptools.command.install import install
import subprocess, os

class CustomInstall(install):
    def run(self):
        # subprocess.run(['make', '-C', 'c_library/'])  # this doesn't work, as c_library/ doesn't exist in the changed path
        subprocess.run('./getcwd.sh')
        install.run(self)

setup(
    cmdclass={
        'install': CustomInstall
    },
    name='my-py-package',
    version='0.0.1',
    ....
)
Now, what's interesting to me is that getcwd.sh gets executed; inside of it I have the following, and it prints the /tmp location:
#!/bin/bash
SCRIPT=`realpath $0`
SCRIPTPATH=`dirname $SCRIPT`
echo $SCRIPTPATH > ~/Desktop/my.log
Is there a way to get the path from where the pip install . was run?
Python 3.8.5, Ubuntu 20.04, pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
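As far as I know, pip does not expose the directory where pip install . was invoked. What does work (a minimal sketch, not from the original post) is to resolve paths relative to setup.py itself: pip copies the entire source tree, c_library/ included, into /tmp/pip-req-build-<something>, so __file__ still points at a tree that contains it.

from setuptools import setup
from setuptools.command.install import install
import os
import subprocess

# Directory containing this setup.py; valid even after pip copies the
# source tree to /tmp/pip-req-build-<something> and changes the CWD.
HERE = os.path.dirname(os.path.abspath(__file__))

class CustomInstall(install):
    def run(self):
        # Point make at the copied tree instead of relying on the CWD.
        subprocess.run(['make', '-C', os.path.join(HERE, 'c_library')], check=True)
        install.run(self)

setup(
    cmdclass={'install': CustomInstall},
    name='my-py-package',
    version='0.0.1',
)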
I wanted to try a GitHub project named deformable kernels and followed the steps described in its README.md file:
conda env create -f environment.yml
cd deformable_kernels/ops/deform_kernel;
pip install -e .;
The structure of deformable_kernels/ops/deform_kernel is shown here:
.
├── csrc
│   ├── filter_sample_depthwise_cuda.cpp
│   ├── filter_sample_depthwise_cuda.h
│   ├── filter_sample_depthwise_cuda_kernel.cu
│   ├── nd_linear_sample_cuda.cpp
│   ├── nd_linear_sample_cuda.h
│   └── nd_linear_sample_cuda_kernel.cu
├── functions
│   ├── filter_sample_depthwise.py
│   ├── __init__.py
│   └── nd_linear_sample.py
├── __init__.py
├── modules
│   ├── filter_sample_depthwise.py
│   └── __init__.py
└── setup.py
And the content of setup.py is shown here:
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='filter_sample_depthwise',
    ext_modules=[
        CUDAExtension(
            'filter_sample_depthwise_cuda',
            [
                'csrc/filter_sample_depthwise_cuda.cpp',
                'csrc/filter_sample_depthwise_cuda_kernel.cu',
            ]
        ),
    ],
    cmdclass={'build_ext': BuildExtension}
)

setup(
    name="nd_linear_sample",
    ext_modules=[
        CUDAExtension(
            "nd_linear_sample_cuda",
            [
                "csrc/nd_linear_sample_cuda.cpp",
                "csrc/nd_linear_sample_cuda_kernel.cu",
            ],
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
When I install this directory using the command pip install -e ., it fails with:
Obtaining file:///home/xxx/Downloads/deformable_kernels/deformable_kernels/ops/deform_kernel
ERROR: More than one .egg-info directory found in /tmp/pip-pip-egg-info-pta6z__q
So I tried separating the two setup() calls into different setup.py files. That worked, but I didn't get a Python file; instead, a .so file was generated.
Does anyone know how to solve a problem like this?
Check your pip version. I've had the same error (when installing other things in dev mode with pip) and downgrading to pip version 20.0.2 worked. Unsure why, but I've seen other folks on the internet solve the problem similarly.
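Separate setup.py files aside, another option (a sketch, not from the original answer) is to keep a single setup() call and register both extensions in one ext_modules list, so that only one .egg-info directory is ever generated:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

# One distribution, two extension modules; the name below is a
# hypothetical combined distribution name.
setup(
    name='deform_kernel',
    ext_modules=[
        CUDAExtension(
            'filter_sample_depthwise_cuda',
            [
                'csrc/filter_sample_depthwise_cuda.cpp',
                'csrc/filter_sample_depthwise_cuda_kernel.cu',
            ],
        ),
        CUDAExtension(
            'nd_linear_sample_cuda',
            [
                'csrc/nd_linear_sample_cuda.cpp',
                'csrc/nd_linear_sample_cuda_kernel.cu',
            ],
        ),
    ],
    cmdclass={'build_ext': BuildExtension},
)

As for the .so file: that is expected. CUDAExtension builds compiled extension modules, and a .so is what Python imports for those, the same way it imports a .py module.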
I'm trying to prepare a setup.py that will install all necessary dependencies, including google-cloud-pubsub. However, python setup.py install fails with:
pkg_resources.UnknownExtra: googleapis-common-protos 1.6.0b6 has no such extra feature 'grpc'
The weird thing is that I can install those dependencies through pip install in my virtualenv.
How can I fix it or get around it? I use Python 2.7.15.
Here's a minimal configuration to reproduce the problem:
setup.py
from setuptools import setup

setup(
    name='example',
    install_requires=['google-cloud-pubsub']
)
In your setup.py, use the following:
from setuptools import setup

setup(
    name='example',
    install_requires=['google-cloud-pubsub', 'googleapis-common-protos==1.5.3']
)
That seems to get around it: pinning googleapis-common-protos to the stable 1.5.3 release avoids resolving the 1.6.0b6 pre-release, whose metadata is missing the 'grpc' extra.
I have previously created a Python package and uploaded it to PyPI. The package depends upon two other packages, defined within the setup.py file:
from setuptools import setup
from dominos.version import Version

def readme():
    with open('README.rst') as file:
        return file.read()

setup(name='dominos',
      version=Version('0.0.1').number,
      author='Tomas Basham',
      url='https://github.com/tomasbasham/dominos',
      license='MIT',
      packages=['dominos'],
      install_requires=[
          'ratelimit',
          'requests'
      ],
      include_package_data=True,
      zip_safe=False)
As both of these were already installed within my virtualenv, this package ran fine.
Now, trying to consume this package within another Python application (and within a separate virtualenv), I have defined the following requirements.txt file:
dominos==0.0.1
geocoder==1.13.0
For reference, dominos is the package I uploaded to PyPI. Now, running pip install --no-cache-dir -r requirements.txt fails because the dependencies of dominos are missing:
ImportError: No module named ratelimit
Surely pip should be resolving these dependencies since I have defined them in the setup.py file of dominos. Clarity on this would be great.
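One thing worth checking (a diagnostic sketch, not from the original post): pip resolves dependencies from the metadata baked into the distribution that was uploaded, not from the setup.py in the repository, so you can inspect what the installed dominos package actually declares:

import pkg_resources

# Prints the requirements recorded in the installed distribution's
# metadata; if the list is empty, install_requires never made it into
# the package that was published to PyPI.
dist = pkg_resources.get_distribution('dominos')
print([str(req) for req in dist.requires()])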
One can use pip to install a specific tag:
pip install -e git+ssh://git@github.com/{usr}/{repo}.git@{tag}#egg={egg}
However, I can't seem to find a way to make it point to the latest release (which would be releases/latest), and not just to the HEAD of master. Is it at all possible?
One constraint: it has to use SSH.
If you are using Python packages, here is one way to do this:
setup.py
import setuptools
import urllib.request

deps = [
    {
        'name': 'gunicorn',
        'url': 'github.com/benoitc/gunicorn',
    },
]

for i in range(len(deps)):
    # GitHub redirects /releases/latest to the page of the newest tag.
    tag_url = urllib.request.urlopen(f"https://{deps[i]['url']}/releases/latest").geturl()
    latest_tag = tag_url.split('/')[-1]
    deps[i] = f"{deps[i]['name']} @ git+ssh://git@{deps[i]['url']}@{latest_tag}"

setuptools.setup(
    install_requires=deps,
)
And then install the package locally:
python -m pip install .
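Note that with this approach the latest tag is resolved over the network at build time, so installing the package requires both HTTPS access to GitHub (for the releases/latest redirect) and SSH credentials (for the git clone).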