I'm trying to convert a project that I had installing fine with distribute over to a newer setuptools based installer. For some reason I can't get setuptools to run at all. When I run my setup.py I get errors from distutils about unsupported options which are all the extension options provided by setuptools. I can't figure out why setuptools isn't taking care of these correctly. This is on a Debian Wheezy system running Python 2.7.
I created a simple test case to demonstrate the problem. It is a standalone script that I want installed as a utility with an executable wrapper script:
foo.py

#!/usr/bin/python

def main():
    print 'Foo main() ran'

if __name__ == '__main__':
    main()
setup.py

from setuptools import setup

setup(name='foo',
      version='1.0',
      py_modules=['foo'],
      entry_points={
          'console_scripts': ['foo = foo:main']  # setuptools extension
      },
      include_package_data=True  # another setuptools extension
      )
The version of setuptools in the Debian package archive is 0.6.24, which predates the merger with distribute, and I would prefer to use something that retains some of the distribute heritage. I've been installing various versions of setuptools with pip. The latest, 4.0.1, won't install properly, but most anything from 1.4 to 3.8 works:
$ sudo pip install setuptools==3.8
...
$ python
Python 2.7.3 (default, Mar 13 2014, 11:03:55)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from setuptools import version
>>> version.__version__
'3.8'
The setuptools package/egg is placed in /usr/local/lib/python2.7/dist-packages which is normal on a Debian system when using pip.
When I try to install with my setup.py I get the following errors:
$ sudo python setup.py install
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'entry_points'
  warnings.warn(msg)
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'include_package_data'
  warnings.warn(msg)
running install
running build
running build_py
running install_lib
running install_egg_info
Removing /usr/local/lib/python2.7/dist-packages/foo-1.0.egg-info
Writing /usr/local/lib/python2.7/dist-packages/foo-1.0.egg-info
Distutils installs foo.py to /usr/local/lib/python2.7/dist-packages/, but obviously the setuptools-generated wrapper script is absent. I would have expected setuptools to do its subclassing/monkeypatching thing to take care of anything not supported by distutils.
I remember setuptools being cranky years ago which is why I settled on using distribute. It's disappointing that it still doesn't "just work". Is there anything obvious that I'm missing? I have a suspicion that this has something to do with Debian's use of the dist-packages directory in place of site-packages but that was never an issue with distribute.
Well it looks like the problem is that setuptools doesn't like being installed into /usr/local/lib/....
I forced pip to put it into /usr/lib/... with:
sudo pip install --install-option="--prefix=/usr/lib/python2.7/dist-packages" \
setuptools==3.8
setup.py installs cleanly after that.
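For anyone hitting a similar mismatch, a quick diagnostic is to check which setuptools the interpreter actually imports and from where (a sketch; works on both Python 2 and 3):

```python
# Diagnostic sketch: confirm which setuptools Python actually imports.
# If __file__ points somewhere other than where pip just installed it,
# setup.py is silently picking up a different (possibly broken) copy.
import setuptools

print(setuptools.__version__)  # should match the version pip installed
print(setuptools.__file__)     # reveals dist-packages vs. site-packages
```

If the printed path is not the copy pip reported installing, the two installations are shadowing each other, which matches the /usr/local/lib vs /usr/lib behaviour above.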
Related
I have the following package structure:
setup.py
/cpp_graph_test
    fastgraph.pyx
    graph.cpp
    graph.h
    graph.pxd
    heap.cpp
    heap.h
    __init__.py
I've created a setup.py as follows:
from setuptools import Extension, setup
from Cython.Build import cythonize

sourcefiles = ['cpp_graph_test/fastgraph.pyx', 'cpp_graph_test/graph.cpp', 'cpp_graph_test/heap.cpp']

extensions = [
    Extension(
        name="cpp_graph_test.fastgraph",
        sources=sourcefiles,
        extra_compile_args=['-O3']
    )
]

setup(
    name='cpp_graph_test',
    packages=['cpp_graph_test'],
    ext_modules=cythonize(extensions, language_level=3, include_path=["cpp_graph_test"]),
    version='0.0.1'
)
I install, and things seem to go fine...
$ sudo pip3 install .
Processing /home/le_user/Documents/cpp_graph_test
Preparing metadata (setup.py) ... done
Building wheels for collected packages: cpp-graph-test
Building wheel for cpp-graph-test (setup.py) ... done
Created wheel for cpp-graph-test: filename=cpp_graph_test-0.0.1-cp310-cp310-linux_x86_64.whl size=641411 sha256=8e3b20a0bec7a8f5a739deba272289bce370ab99abae02a3787d3f10718b03c9
Stored in directory: /tmp/pip-ephem-wheel-cache-ofm34m2v/wheels/c1/03/6a/f746b1b945b60e93aa67ee67e8e6a2c4537c0a87dbb72ffa34
Successfully built cpp-graph-test
Installing collected packages: cpp-graph-test
Attempting uninstall: cpp-graph-test
Found existing installation: cpp-graph-test 0.0.1
Uninstalling cpp-graph-test-0.0.1:
Successfully uninstalled cpp-graph-test-0.0.1
Successfully installed cpp-graph-test-0.0.1
However, this fails...
$ python3
Python 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from cpp_graph_test.fastgraph import FastGraph
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'cpp_graph_test.fastgraph'
I can import the cpp_graph_test package in its entirety, but if I try to use pkgutil to list all the modules in the package, there are none.
Is there something wrong with my setup.py file?
EDIT: When I run the install with pip, the pyx file is translated to C++, and I get binary generated in the build directory, so the Cython build is running, and apparently not creating errors.
EDIT2: While experimenting, I got this error: ImportError: cannot import name 'FastGraph' from 'cpp_graph_test' (/home/le_user/Documents/cpp_graph_test/cpp_graph_test/__init__.py) This error makes it seem like Python is looking at the raw code for the package instead of looking at the actual built, installed package after the pip install. It's like there's some weird symlink somewhere or something...
EDIT3: I can sudo pip3 uninstall cpp_graph_test and it'll tell me "skipping cpp_graph_test as it is not installed." But then I can start a Python shell (from any folder) and say import cpp_graph_test and it'll be successful. Not sure how to uninstall a package that's already uninstalled but that lives on anyway?
I am trying to use tox to test a graphics package I am working on. One of its dependencies is pycairo, so when I set up my tox.ini file, I specify it under deps like so:
[testenv]
deps =
    pycairo
    ...(some other packages)
and while my tests work fine on Windows, when I test the package on macOS the run always fails with the following error while pip installs pycairo:
pip3 install pycairo
Collecting pycairo
Using cached pycairo-1.20.1.tar.gz (344 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing wheel metadata: started
Preparing wheel metadata: finished with status 'done'
Collecting pygame
Downloading pygame-2.0.1-cp39-cp39-macosx_10_9_intel.whl (6.9 MB)
Building wheels for collected packages: pycairo
Building wheel for pycairo (PEP 517): started
Building wheel for pycairo (PEP 517): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /Users/appveyor/projects/cpython-cmu-graphics-0l7rb/.tox/py39/bin/python /Users/appveyor/projects/cpython-cmu-graphics-0l7rb/.tox/py39/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/5s/g225f6nd6jl4g8tshbh1ltk40000gn/T/tmpnqn0c3o6
cwd: /private/var/folders/5s/g225f6nd6jl4g8tshbh1ltk40000gn/T/pip-install-1vu11s7g/pycairo_6159cae3f6b14ec3a8681d1238fa6919
Complete output (12 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.15-x86_64-3.9
creating build/lib.macosx-10.15-x86_64-3.9/cairo
copying cairo/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/cairo
copying cairo/__init__.pyi -> build/lib.macosx-10.15-x86_64-3.9/cairo
copying cairo/py.typed -> build/lib.macosx-10.15-x86_64-3.9/cairo
running build_ext
Requested 'cairo >= 1.15.10' but version of cairo is 1.12.14
Command '['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for pycairo
Failed to build pycairo
ERROR: Could not build wheels for pycairo which use PEP 517 and cannot be installed directly
I established that the main reason I'm getting this error is because wheels and Cairo binary files are not provided for the pip installation of pycairo on MacOS. (It's worth noting that I'm running my MacOS tests via a remote VM) As such, I tried to install cairo first using Homebrew like so:
brew install cairo
However, whenever I retry the tests, I still get the same error message. I read on another SO post that you should brew install pkg-config as well, so in addition to the brew installation above, I also did:
brew install pkg-config
And still ended up with the same error message when I retried the tests. Frustrated, I once again took to Stack Overflow and discovered that you can directly install pycairo (as well as its dependencies, like cairo) with one single brew install command:
brew install py3cairo
Now, when I SSH into the Mac VM, running the test files directly works, but because tox runs tests inside virtual environments, tox can't access this version of pycairo.
Now, one nasty, probably-horrible-practice, brute-force solution I found was to print out the path of the pycairo directory using this small Python script:
import os
import cairo
print(os.path.dirname(cairo.__file__))
And then I cp'd that directory into a virtual environment and found that it actually allows you to run import cairo without getting an error.
cp -r <path>/cairo venv3.9/lib/python3.9/site-packages
Python 3.9.1 (default, Dec 26 2020, 00:12:24)
[Clang 12.0.0 (clang-1200.0.32.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cairo
>>>
However, not surprisingly, this doesn't seem to work with any other Python minor version that I'm testing, and I wouldn't be surprised if this breaks the library in other ways I haven't discovered yet. So that's not really an acceptable solution either.
What can I do to make my tests run properly? In my tests I just want to simulate an environment that already has all the package dependencies installed, but with pycairo it doesn't seem like there's a way for me to access the package.
I just need this to work in tox for testing purposes only. I don't anticipate anyone using our package inside of a virtual environment, so our users should just be able to install py3cairo via brew directly to their system in the worst case.
Most likely, I need a way to install cairo and pkg-config such that pip inside a virtual environment can access those files and still build the Python bindings. But I'm also open to any other suggestions that would just allow my tox tests to run. Does anyone have any thoughts on how to fix this?
Requested 'cairo >= 1.15.10' but version of cairo is 1.12.14
Your issue is not about package discoverability but about an out-of-date version. If the brew-installed version of cairo is newer than 1.15.10, then you might have a separate, older cairo installation lying around which gets preferred over your brew-installed version.
To reproduce the issue, I did the following:
brew install cairo
python -m venv cairo
source cairo/bin/activate
pip install pycairo
which worked as expected (Python 3.9.1, pip 20.2.3).
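If brew's cairo is new enough but the build inside tox still picks up the old one, a hedged workaround is to point pkg-config at Homebrew's .pc files and have tox pass that variable through to the test environment (the prefix path is an assumption; confirm with `brew --prefix`):

```ini
# tox.ini fragment (sketch): forward PKG_CONFIG_PATH into the test env
# so pycairo's source build can locate Homebrew's cairo.pc.
[testenv]
passenv =
    PKG_CONFIG_PATH
deps =
    pycairo
```

Before invoking tox, export the variable in the shell, e.g. `export PKG_CONFIG_PATH="$(brew --prefix)/lib/pkgconfig:$PKG_CONFIG_PATH"`, and verify with `pkg-config --modversion cairo` that the reported version is at least 1.15.10.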
My pyflakes portion of flake8 is not running for my global python instance (/usr/bin/python, not virtualenv).
flake8 --version
2.2.3 (pep8: 1.5.7, mccabe: 0.2.1) CPython 2.7.5 on Darwin
It doesn't seem like pyflakes is getting attached to flake8. pip freeze confirms that pyflakes==0.8.1 is installed. I installed it into my global site-packages ($ sudo pip install flake8).
However, when running inside of a virtualenv, pyflakes is in the list and flake8 works as expected.
I had a similar problem with conda's flake8. Here are some debugging notes:
flake8 registers the pyflakes checker in its setup.py file:
setup(
    ...
    entry_points={
        'distutils.commands': ['flake8 = flake8.main:Flake8Command'],
        'console_scripts': ['flake8 = flake8.main:main'],
        'flake8.extension': [
            'F = flake8._pyflakes:FlakesChecker',
        ],
    },
    ...
)
When checking a file, flake8 loads the registered entry points for 'flake8.extension' and registers found checkers:
...
for entry in iter_entry_points('flake8.extension'):
    checker = entry.load()
    pep8.register_check(checker, codes=[entry.name])
...
conda's flake8 seems to have problems writing those entry points.
from pkg_resources import iter_entry_points
list(iter_entry_points('flake8.extension'))
returns an empty list for me, which is why pyflakes won't be registered and thus does not work, even though it is installed and importable.
Updating setuptools and installing via pip install flake8 fixes the problem for me.
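On Python 3, the same diagnostic can be written against the standard library instead of pkg_resources (a sketch; importlib.metadata exists since Python 3.8, and extension_points is a hypothetical helper name):

```python
# Sketch: list registered flake8 extension entry points with the stdlib.
from importlib import metadata

def extension_points(group="flake8.extension"):
    eps = metadata.entry_points()
    # entry_points() returns a selectable object on Python 3.10+ and a
    # dict of lists on 3.8/3.9; handle both shapes.
    if hasattr(eps, "select"):
        return list(eps.select(group=group))
    return list(eps.get(group, []))

for ep in extension_points():
    print(ep.name, "->", ep.value)
```

An empty result reproduces the symptom above: the checker is installed and importable, but never registered.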
I have a setup.py that looks a bit (okay, exactly) like this:
#!/usr/bin/env python

from setuptools import setup
import subprocess
import distutils.command.build_py

class BuildWithMake(distutils.command.build_py.build_py):
    """
    Build using make.
    Then do the default build logic.
    """
    def run(self):
        # Call make.
        subprocess.check_call(["make"])

        # Keep installing the Python stuff
        distutils.command.build_py.build_py.run(self)

setup(name="jobTree",
      version="1.0",
      description="Pipeline management software for clusters.",
      author="Benedict Paten",
      author_email="benedict@soe.ucsc.edu",
      url="http://hgwdev.cse.ucsc.edu/~benedict/code/jobTree.html",
      packages=["jobTree", "jobTree.src", "jobTree.test", "jobTree.batchSystems",
                "jobTree.scriptTree"],
      package_dir={"": ".."},
      install_requires=["sonLib"],
      # Hook the build command to also build with make
      cmdclass={"build_py": BuildWithMake},
      # Install all the executable scripts somewhere on the PATH
      scripts=["bin/jobTreeKill", "bin/jobTreeStatus",
               "bin/scriptTreeTest_Sort.py", "bin/jobTreeRun",
               "bin/jobTreeTest_Dependencies.py", "bin/scriptTreeTest_Wrapper.py",
               "bin/jobTreeStats", "bin/multijob", "bin/scriptTreeTest_Wrapper2.py"])
It installs the package perfectly fine when run with ./setup.py install. However, it does this whether or not the "sonLib" package is installed, ignoring the dependency.
Is this expected behavior? Should a setup.py install blithely proceed if the dependencies are not installed, leaving it up to pip or whatever to install them beforehand? If not, and setup.py install ought to fail when dependencies are absent, what am I doing wrong?
EDIT: Some version information:
Python 2.7.2 (default, Jan 19 2012, 21:40:50)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import setuptools
>>> setuptools.__version__
'0.6c12'
>>>
The default install command for Distutils setup doesn't know anything about dependencies. If you are running that, you're right that dependencies will not be checked.
Just going by what you've shown in the setup.py, though, you are using setuptools for the setup function. The setuptools install command is declared to run easy_install, which in turn does check and download dependencies.
It is possible you are explicitly invoking the Distutils install, by specifying install --single-version-externally-managed.
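If you want setup.py to fail loudly when a dependency is absent, regardless of which install command ends up running, one option is an explicit pre-flight check before calling setup(). This is only a sketch using pkg_resources (which ships with setuptools); require_or_die is a hypothetical helper, and sonLib is the dependency from the question:

```python
# Sketch: abort setup.py early if a required distribution is missing,
# rather than relying on the install command to resolve dependencies.
import sys
import pkg_resources

def require_or_die(*dists):
    missing = []
    for dist in dists:
        try:
            pkg_resources.require(dist)
        except Exception as exc:  # DistributionNotFound, VersionConflict, ...
            missing.append("%s (%s)" % (dist, exc))
    if missing:
        sys.exit("Missing dependencies: " + "; ".join(missing))

# In setup.py, before calling setup():
# require_or_die("sonLib")
```

This trades setuptools' automatic dependency resolution for a hard, early failure, which can be preferable on systems where easy_install behaves inconsistently.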
Due to lack of support for some libraries I want to use, I moved some Python development from Windows to Linux development. I've spent most of the day messing about getting nowhere with dependencies.
The question
Whenever I pick up Linux, I usually run into some kind of dependency issue, usually with development libraries, whether they're installed via apt-get, easy_install or pip. I can waste days on what should be simple tasks, spending longer getting libraries to work than writing code. Where can I learn about strategies for dealing with these kinds of issues, rather than aimlessly googling for someone who's come across the same problem before?
An example
Just one example: I wanted to generate some QR codes. So, I thought I'd use github.com/bitly/pyqrencode which is based on pyqrcode.sourceforge.net but supposedly without the Java dependencies. There are others (pyqrnative, github.com/Arachnid/pyqrencode) but that one seemed like the best bet for my needs.
So, I found the package on pypi and thought using that would make life easier:
(I've perhaps made life more difficult for myself by using virtualenv to keep things neat and tidy.)
(myenv3)mat@ubuntu:~/myenv3$ bin/pip install pyqrencode
Downloading/unpacking pyqrencode
Downloading pyqrencode-0.2.tar.gz
Running setup.py egg_info for package pyqrencode
Installing collected packages: pyqrencode
Running setup.py install for pyqrencode
building 'qrencode' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c qrencode.c -o build/temp.linux-i686-2.7/qrencode.o
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/qrencode.o -lqrencode -o build/lib.linux-i686-2.7/qrencode.so
Successfully installed pyqrencode
Cleaning up...
(I guess I probably ran sudo apt-get install libqrencode-dev at some point prior to that, too.)
So then I tried to run the test script:
(myenv3)mat@ubuntu:~/myenv3$ python test_qr.py
Traceback (most recent call last):
File "test_qr.py", line 1, in <module>
from qrencode import Encoder
File "qrencode.pyx", line 1, in init qrencode (qrencode.c:1520)
ImportError: No module named ImageOps
:(
Well, investigations revealed that ImageOps appears to be part of PIL...
(myenv3)mat@ubuntu:~/myenv3$ pip install pil
Downloading/unpacking pil
Downloading PIL-1.1.7.tar.gz (506Kb): 122Kb downloaded
Operation cancelled by user
Storing complete log in /home/mat/.pip/pip.log
(myenv3)mat@ubuntu:~/myenv3$ bin/pip install pil
Downloading/unpacking pil
Downloading PIL-1.1.7.tar.gz (506Kb): 506Kb downloaded
Running setup.py egg_info for package pil
WARNING: '' not a valid package name; please use only .-separated package names in setup.py
Installing collected packages: pil
Running setup.py install for pil
WARNING: '' not a valid package name; please use only .-separated package names in setup.py
building '_imaging' extension
gcc ...
building '_imagingmath' extension
gcc ...
--------------------------------------------------------------------
PIL 1.1.7 SETUP SUMMARY
--------------------------------------------------------------------
version 1.1.7
platform linux2 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2]
--------------------------------------------------------------------
*** TKINTER support not available
*** JPEG support not available
*** ZLIB (PNG/ZIP) support not available
*** FREETYPE2 support not available
*** LITTLECMS support not available
--------------------------------------------------------------------
To add a missing option, make sure you have the required
library, and set the corresponding ROOT variable in the
setup.py script.
To check the build, run the selftest.py script.
...
Successfully installed pil
Cleaning up...
Hmm, PIL's installed but hasn't picked up the libraries I installed with sudo apt-get install libjpeg62 libjpeg62-dev libpng12-dev zlib1g zlib1g-dev earlier. I'm not sure how to tell pip to feed the library locations to setup.py. Googling suggests a variety of ideas which I've tried, but none of them seem to help much other than to send me round in circles.
Ubuntu 11.04: Installing PIL into a virtualenv with PIP suggests using the pillow package instead, so let's try that:
(myenv3)mat@ubuntu:~/myenv3$ pip install pillow
Downloading/unpacking pillow
Downloading Pillow-1.7.5.zip (637Kb): 637Kb downloaded
Running setup.py egg_info for package pillow
...
Installing collected packages: pillow
Running setup.py install for pillow
building '_imaging' extension
gcc ...
--------------------------------------------------------------------
SETUP SUMMARY (Pillow 1.7.5 / PIL 1.1.7)
--------------------------------------------------------------------
version 1.7.5
platform linux2 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2]
--------------------------------------------------------------------
*** TKINTER support not available
--- JPEG support available
--- ZLIB (PNG/ZIP) support available
--- FREETYPE2 support available
*** LITTLECMS support not available
--------------------------------------------------------------------
To add a missing option, make sure you have the required
library, and set the corresponding ROOT variable in the
setup.py script.
To check the build, run the selftest.py script.
...
Successfully installed pillow
Cleaning up...
Well, we seem to have the JPEG and PNG support this time, yay!
(myenv3)mat@ubuntu:~/myenv3$ python test_qr.py
Traceback (most recent call last):
File "test_qr.py", line 1, in <module>
from qrencode import Encoder
File "qrencode.pyx", line 1, in init qrencode (qrencode.c:1520)
ImportError: No module named ImageOps
Still no ImageOps though. Now I'm stumped: is ImageOps missing from pillow, or is it a different problem that was also there with pil?
I see two separate problems here:

1. Keeping track of all the Python modules you need for your project.
2. Keeping track of all the dynamic libraries you need for the Python modules in your project.

For the first problem, I have found that buildout is a good help, although it takes a little while to grasp.
In your case, I would start by creating a directory for my new project. I would then go into that directory and download bootstrap.py
wget http://python-distribute.org/bootstrap.py
I would then create a buildout.cfg file:
[buildout]
parts = qrproject
        python
eggs = pyqrencode

[qrproject]
recipe = z3c.recipe.scripts
eggs = ${buildout:eggs}
entry-points = qrproject=qrprojectmodule:run
extra-paths = ${buildout:directory}

# This is a simple way of creating an interpreter that will have
# access to all the eggs / modules that this project uses.
[python]
recipe = z3c.recipe.scripts
interpreter = python
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}
In this buildout.cfg I'm referencing the module qrprojectmodule (in entry-points under [qrproject]). This will create a bin/qrproject that runs the function run in the module qrprojectmodule. So I will also create the file qrprojectmodule.py:
import qrencode

def run():
    print "Entry point for qrproject. Happily imports qrencode module"
Now it's time to run bootstrap.py with the python binary you want to use:
python bootstrap.py
Then run the generated bin/buildout
bin/buildout
This will create two additional binaries in the bin/ directory: bin/qrproject and bin/python. The former is your project's main binary. It will be created automatically each time you run buildout and will have all the modules and eggs you want loaded. The latter is simply a convenient way to get a Python prompt where all your modules and eggs are loaded, for easy debugging. The nice thing here is that bin/buildout will automatically install any Python eggs that your eggs (in your case pyqrencode) have specified as dependencies.
Actually, you will probably get a compilation error in the step where you run bin/buildout. This is because you need to address problem 2: all the dynamic libraries must be available on your system. On Linux, it's usually best to get help from your packaging system. I'm going to assume you're using a Debian derivative such as Ubuntu here.
The pyqrencode web site specifies that you need the libqrencode library for pyqrencode to work. So I used my package manager to search for that:
$ apt-cache search libqrencode
libqrencode-dev - QR Code encoding library -- development
libqrencode3 - QR Code encoding library
qrencode - QR Code encoder into PNG image
In this case, I want the -dev package, as that installs linkable libraries and header files required to compile python C-modules. Also, the dependency system in the package manager will make sure that if I install libqrencode-dev, I will also get libqrencode3, as that is required at runtime, i.e. after compilation of the module.
So, I install the package:
sudo apt-get install libqrencode-dev
Once that has completed, rerun bin/buildout and the pyqrencode module will (hopefully) compile and install successfully. Now try to run bin/qrproject
$ bin/qrproject
Entry point for qrproject. Happily imports qrencode module
Success! :-)
So, in summary:
Use buildout to automatically download and install all the python modules/eggs you need for your project.
Use your system's package manager to install any dynamic (C) libraries required by the python modules you use.
Be aware that in many cases there are already packaged versions of your python modules available in the package system. For example, pil is available by installing the python-imaging package on Ubuntu. In this case, you don't need to install it via buildout, and you don't need to worry about libraries being available - the package manager will install all dependencies required for the module to run. Doing it via buildout can however make it easier to distribute your project and make it run on other systems.
Your story reminds me of my early experiences with Linux, and why I love APT.
There is no universal solution to your general problem; the best you can do is to take advantage of the work of others. The Debian packagers do a great job of flagging the dependencies of packages, so apt-get will pull in what you need. So, my strategy is simply to avoid building and installing stuff on my own, and to use apt-get wherever possible.
Note that Ubuntu is based on Debian and thus gains the benefit of the work of the Debian packagers. I haven't used Fedora but I hear that the packages are not as well-organized as the ones from Debian.