Setup.py woes: WARNING: '' not a valid package name

For my Python project, I keep my source code in the directory src. Thus, for my project's setup.py script:
from setuptools import setup
setup(name='pyIAST',
      ...
      package_dir={'': 'src'},
      packages=[''])
so that it looks for src/IAST.py, where my code resides. For example, there is a function plot_isotherms() in my IAST.py script, so the user can call it after installation:
import IAST
IAST.plot_isotherms()
Everything works great, but there is an annoying warning when I python setup.py install or use pip install pyIAST from PyPi:
WARNING: '' not a valid package name; please use only .-separated package names in setup.py
How do I make this go away?
My project is here. I'm also a bit confused as to why I name my package pyIAST, yet the user still types import IAST for my package to import.

One way to clear that warning is to change your first line to:
from setuptools import setup, find_packages
and then change your packages line to:
packages=find_packages(),
The setup install will no longer generate a warning.
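Putting the pieces together, a setup.py for a src layout might look like the sketch below. Note this is an illustration, not the project's actual file: the where='src' argument and the src/pyiast/ package directory are assumptions about the layout.

```python
from setuptools import setup, find_packages

setup(
    name='pyIAST',                        # distribution name used by pip / PyPI
    package_dir={'': 'src'},              # root package directory is src/
    packages=find_packages(where='src'),  # discovers e.g. src/pyiast/__init__.py
)
```

The distinction matters: name= is what users type after pip install, while the package directory name under src/ is what they type after import.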
You can run the following two commands to see that your isotherm method is now available:
import pyiast  # (notice this is not IAST)
dir(pyiast)
['BETIsotherm', 'InterpolatorIsotherm', 'LangmuirIsotherm', 'ModelIsotherm', 'QuadraticIsotherm', 'SipsIsotherm', '_MODELS', '_MODEL_PARAMS', '_VERSION', '__author__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', 'iast', 'np', 'plot_isotherm', 'print_selectivity', 'reverse_iast', 'scipy']
It can be called using pyiast.plot_isotherm()
You may need to update your setuptools. You can check what version you have with:
import setuptools; print("setuptools version:", setuptools.__version__)
You can update it with:
sudo pip install --upgrade setuptools

Related

How to check version of builtin python modules, for example "logging"? [duplicate]

I installed the Python modules construct and statlib using setuptools:
sudo apt-get install python-setuptools
sudo easy_install statlib
sudo easy_install construct
How do I check their versions from the command line?
Use pip instead of easy_install.
With pip, list all installed packages and their versions via:
pip freeze
On most Linux systems, you can pipe this to grep (or findstr on Windows) to find the row for the particular package you're interested in.
Linux:
pip freeze | grep lxml
lxml==2.3
Windows:
pip freeze | findstr lxml
lxml==2.3
For an individual module, you can try the __version__ attribute. However, there are modules without it:
python -c "import requests; print(requests.__version__)"
2.14.2
python -c "import lxml; print(lxml.__version__)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: 'module' object has no attribute '__version__'
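On modern Python you can combine both approaches in one small helper: try the module's __version__ attribute first, and fall back to installed-distribution metadata. This is a sketch with an illustrative function name, and the fallback only works when the import name matches the distribution name:

```python
import importlib
from importlib.metadata import version, PackageNotFoundError  # Python >= 3.8

def get_version(module_name):
    """Return a module's version from __version__, else from package metadata."""
    mod = importlib.import_module(module_name)
    v = getattr(mod, "__version__", None)
    if v is None:
        try:
            # assumes the import name equals the distribution name
            v = version(module_name)
        except PackageNotFoundError:
            pass
    return v
```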
Lastly, as the commands in your question are prefixed with sudo, it appears you're installing to the global Python environment. I strongly advise taking a look at Python virtual environment managers, for example virtualenvwrapper.
You can try
>>> import statlib
>>> print(statlib.__version__)
>>> import construct
>>> print(construct.__version__)
This is the approach recommended by PEP 396. But that PEP was never accepted and has been deferred. In fact, there appears to be increasing support amongst Python core developers to recommend not including a __version__ attribute, e.g. in Remove importlib_metadata.version.
Python >= 3.8:
If you're on Python >= 3.8, you can use a module from the built-in library for that. To check a package's version (in this example construct) run:
>>> from importlib.metadata import version
>>> version('construct')
'4.3.1'
Python < 3.8:
Use pkg_resources module distributed with setuptools library. Note that the string that you pass to get_distribution method should correspond to the PyPI entry.
>>> import pkg_resources
>>> pkg_resources.get_distribution('construct').version
'2.5.2'
Side notes:
Note that the string that you pass to the get_distribution method should be the package name as registered in PyPI, not the module name that you are trying to import. Unfortunately, these aren't always the same (e.g. you do pip install memcached, but import memcache).
If you want to apply this solution from the command line you can do something like:
python -c \
"import pkg_resources; print(pkg_resources.get_distribution('construct').version)"
Use pip show to find the version!
# In order to get the package version, execute the below command
pip show YOUR_PACKAGE_NAME | grep Version
You can use pip show YOUR_PACKAGE_NAME - which gives you all details of package. This also works in Windows.
grep Version is used in Linux to filter out the version and show it.
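A pure-Python equivalent of pip show PACKAGE | grep Version, using importlib.metadata (Python >= 3.8; the helper name here is my own):

```python
from importlib.metadata import metadata, PackageNotFoundError  # Python >= 3.8

def show_version(dist_name):
    """Return the Version field of an installed distribution, or None."""
    try:
        return metadata(dist_name)["Version"]
    except PackageNotFoundError:
        return None
```

Unlike grep, this raises no shell dependency, so it works identically on Windows and Linux.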
The better way to do that is:
For the details of a specific package
pip show <package_name>
It details out the package_name, version, author, location, etc.
$ pip show numpy
Name: numpy
Version: 1.13.3
Summary: NumPy: array processing for numbers, strings, records, and objects.
Home-page: http://www.numpy.org
Author: NumPy Developers
Author-email: numpy-discussion@python.org
License: BSD
Location: c:\users\prowinjvm\appdata\local\programs\python\python36\lib\site-packages
Requires:
For more details: >>> pip help
pip may need to be updated for this to work:
pip install --upgrade pip
On Windows the recommended command is:
python -m pip install --upgrade pip
In Python 3, print is a function and needs parentheses:
>>> import celery
>>> print(celery.__version__)
3.1.14
module.__version__ is a good first thing to try, but it doesn't always work.
If you don't want to shell out, and you're using pip 8 or 9, you can still use pip.get_installed_distributions() to get versions from within Python:
The solution here works in pip 8 and 9, but in pip 10 the function has been moved from pip.get_installed_distributions to pip._internal.utils.misc.get_installed_distributions to explicitly indicate that it's not for external use. It's not a good idea to rely on it if you're using pip 10+.
import pip
pip.get_installed_distributions()  # -> [distribute 0.6.16 (...), ...]
[
    pkg.key + ': ' + pkg.version
    for pkg in pip.get_installed_distributions()
    if pkg.key in ['setuptools', 'statlib', 'construct']
]  # -> nicely filtered list of ['setuptools: 3.3', ...]
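Since pip's internal API keeps moving, a more stable route on Python >= 3.8 is importlib.metadata.distributions(), which yields the same listing without touching pip internals. A sketch:

```python
from importlib.metadata import distributions  # Python >= 3.8

# map each installed distribution's name to its version
installed = {dist.metadata["Name"]: dist.version for dist in distributions()}

# filter for particular packages, analogous to the pip-based list comprehension
wanted = {name: ver for name, ver in installed.items()
          if name in ('setuptools', 'statlib', 'construct')}
```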
The previous answers did not solve my problem, but this code did:
import sys
for name, module in sorted(sys.modules.items()):
    if hasattr(module, '__version__'):
        print(name, module.__version__)
Use dir() to find out if the module has a __version__ attribute at all.
>>> import selenium
>>> dir(selenium)
['__builtins__', '__doc__', '__file__', '__name__',
'__package__', '__path__', '__version__']
>>> selenium.__version__
'3.141.0'
>>> selenium.__path__
['/venv/local/lib/python2.7/site-packages/selenium']
You can try this:
pip list
This will output all the packages with their versions.
In the Python 3.8 version, there is a new metadata module in the importlib package, which can do that as well.
Here is an example from the documentation:
>>> from importlib.metadata import version
>>> version('requests')
'2.22.0'
Some modules don't have a __version__ attribute, so the easiest way is to check in the terminal: pip list
If the methods in previous answers do not work, it is worth trying the following in Python:
import modulename
modulename.version
modulename.version_info
See Get the Python Tornado version
Note that .version worked for me on a few others besides Tornado as well.
First add the python and pip executables to your environment variables, so that you can run them from the command prompt. Then start Python.
Then import the package:
import scrapy
Then print the version:
print(scrapy.__version__)
This should work.
Assuming we are using Jupyter Notebook (if using Terminal, drop the exclamation marks):
if the package (e.g., xgboost) was installed with pip:
!pip show xgboost
!pip freeze | grep xgboost
!pip list | grep xgboost
if the package (e.g. caffe) was installed with Conda:
!conda list caffe
I suggest opening a Python shell in the terminal (in the Python version you are interested), importing the library, and getting its __version__ attribute.
>>> import statlib
>>> statlib.__version__
>>> import construct
>>> construct.__version__
Note 1: We must regard the Python version. If we have installed different versions of Python, we have to open the terminal in the Python version we are interested in. For example, opening the terminal with Python 3.8 can (surely will) give a different version of a library than opening with Python 3.5 or Python 2.7.
Note 2: We avoid using the print function, because its behavior depends on Python 2 or Python 3. We do not need it, and the terminal will show the value of the expression.
This answer is for Windows users. As suggested in all other answers, you can use the statements as:
import module_name  # replace module_name with the actual module's name
print(module_name.__version__)
But, there are some modules which don't print their version even after using the method above. So, you can simply do:
Open the command prompt.
Navigate to the file address/directory by using cd (file address) where you've kept your Python and all supporting modules installed. If you have only one Python interpreter on your system, the PyPI packages are normally visible in the directory/folder: Python → Lib → site-packages.
Use the command pip install [module name] and hit Enter.
This will show you a message like "Requirement already satisfied: file address\folder name (with version)".
For example, I had to know the version of a pre-installed module named "Selenium-Screenshot"; it correctly showed as 1.5.0.
Go to a terminal, like the PyCharm terminal.
Now write py or python and hit Enter.
Now you are inside Python in the terminal and can try this:
# import <name_of_the_library>
import kivy
# If the library has the __version__ attribute, this will show the version
kivy.__version__  # then hit Enter to see the version
# Output >> '2.1.0'
But if the above doesn't work for you, you can use pip to show information about the library, including its version:
pip show <name_of_the_library>
Example:
pip show pyperclip
Output:
Name: pyperclip
Version: 1.8.2
Summary: A cross-platform clipboard module for Python. (Only handles plain text for now.)
Home-page: https://github.com/asweigart/pyperclip
Author: Al Sweigart
Author-email: al@inventwithpython.com
License: BSD
Location: c:\c\kivymd\virt\lib\site-packages
Requires:
Required-by:
There is another way that can help you show all the libraries in the project and their versions:
pip freeze
# I used the above command in a terminal inside my project this is the output
certifi==2021.10.8
charset-normalizer==2.0.12
docutils==0.18.1
idna==3.3
Kivy==2.1.0
kivy-deps.angle==0.3.2
kivy-deps.glew==0.3.1
kivy-deps.sdl2==0.4.5
Kivy-Garden==0.1.5
kivymd # file:///C:/c/kivymd/KivyMD
Pillow==9.1.0
Pygments==2.12.0
pyperclip==1.8.2
pypiwin32==223
pywin32==303
requests==2.27.1
urllib3==1.26.9
And of course you can use the command below to show all libraries and their versions:
pip list
In summary:
conda list
(It will provide all the libraries along with version details.)
And:
pip show tensorflow
(It gives complete library details.)
After scouring the Internet, trying to figure out how to ensure the version of a module I’m running (apparently python_is_horrible.__version__ isn’t a thing in Python 2?) across operating systems and Python versions... literally none of these answers worked for my scenario...
Then I thought about it a minute and realized the basics... after ~30 minutes of fails...
assumes the module is already installed and can be imported
Python 3.7
>>> import sys,sqlite3
>>> sys.modules.get("sqlite3").version
'2.6.0'
>>> ".".join(str(x) for x in sys.version_info[:3])
'3.7.2'
Python 2.7
>>> import sys,sqlite3
>>> sys.modules.get("sqlite3").version
'2.6.0'
>>> ".".join(str(x) for x in sys.version_info[:3])
'2.7.11'
Literally that’s it...
(See also How do I get the version of an installed module in Python programmatically?)
I found it quite unreliable to use the various tools available (including the best one, pkg_resources, mentioned in Jakub Kukul's answer), as most of them do not cover all cases. For example:
built-in modules
modules not installed but just added to the python path (by your IDE for example)
two versions of the same module available (one in python path superseding the one installed)
Since we needed a reliable way to get the version of any package, module or submodule, I ended up writing getversion. It is quite simple to use:
from getversion import get_module_version
import foo
version, details = get_module_version(foo)
See the documentation for details.
This works in Jupyter Notebook on Windows, too! As long as Jupyter is launched from a Bash-compliant command line such as Git Bash (Mingw-w64), the solutions given in many of the answers can be used in Jupyter Notebook on Windows systems with one tiny tweak.
I'm running Windows 10 Pro with Python installed via Anaconda, and the following code works when I launch Jupyter via Git Bash (but does not when I launch from the Anaconda prompt).
The tweak: Add an exclamation mark (!) in front of pip to make it !pip.
>>> !pip show lxml | grep Version
Version: 4.1.0
>>> !pip freeze | grep lxml
lxml==4.1.0
>>> !pip list | grep lxml
lxml 4.1.0
>>> !pip show lxml
Name: lxml
Version: 4.1.0
Summary: Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API.
Home-page: http://lxml.de/
Author: lxml dev team
Author-email: lxml-dev@lxml.de
License: BSD
Location: c:\users\karls\anaconda2\lib\site-packages
Requires:
Required-by: jupyter-contrib-nbextensions
A Python program to list all packages (you can copy the output to a requirements.txt file):
from pip._internal.utils.misc import get_installed_distributions

print_log = ''
for module in sorted(get_installed_distributions(), key=lambda x: x.key):
    print_log += module.key + '~=' + module.version + '\n'
print(print_log)
The output would look like:
asn1crypto~=0.24.0
attrs~=18.2.0
automat~=0.7.0
beautifulsoup4~=4.7.1
botocore~=1.12.98
To get a list of non-standard (pip) modules imported in the current module:
[{pkg.key: pkg.version} for pkg in pip.get_installed_distributions()
 if pkg.key in set(sys.modules) & set(globals())]
Result:
>>> import sys, pip, nltk, bs4
>>> [{pkg.key : pkg.version} for pkg in pip.get_installed_distributions() if pkg.key in set(sys.modules) & set(globals())]
[{'pip': '9.0.1'}, {'nltk': '3.2.1'}, {'bs4': '0.0.1'}]
Note:
This code was put together from solutions both on this page and from How to list imported modules?
For situations where field __version__ is not defined:
try:
    from importlib import metadata
except ImportError:
    import importlib_metadata as metadata  # python <= 3.7

metadata.version("package")
Alternatively, and like it was already mentioned:
import pkg_resources
pkg_resources.get_distribution('package').version
Here's a small Bash program to get the version of any package in your Python environment. Just copy this to your /usr/bin and provide it with executable permissions:
#!/bin/bash
packageName=$1
python -c "import ${packageName} as package; print(package.__version__)"
Then you can just run it in the terminal, assuming you named the script py-check-version:
py-check-version whatever_package
And in case your production system is hardened beyond comprehension so it has neither pip nor conda, here is a Bash replacement for pip freeze:
ls /usr/local/lib/python3.8/dist-packages | grep info | awk -F "-" '{print $1"=="$2}' | sed 's/.dist//g'
(make sure you update your dist-packages folder to your current python version and ignore inconsistent names, e.g., underscores vs. dashes).
Sample printout:
Flask==1.1.2
Flask_Caching==1.10.1
gunicorn==20.1.0
[..]
I myself work in a heavily restricted server environment and unfortunately none of the solutions here works for me. There may be no global solution that fits all, but I figured out a swift workaround by reading the terminal output of pip freeze within my script and storing the module labels and versions in a dictionary.
import os

os.system('pip freeze > tmpoutput')
with open('tmpoutput', 'r') as f:
    modules_versions = f.read()
module_dict = {item.split("==")[0]: item.split("==")[-1]
               for item in modules_versions.split("\n") if "==" in item}
Retrieve your module's version by passing the module label key, e.g.:
>>> module_dict["seaborn"]
'0.9.0'
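A variation on the same idea that avoids the temporary file, using subprocess to capture the output of pip freeze directly (this still assumes pip is available in the environment):

```python
import subprocess
import sys

# run `pip freeze` for the current interpreter and capture its output
out = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    capture_output=True, text=True,
).stdout

# parse "name==version" lines into a dict, skipping editable/VCS entries
module_dict = {
    name: ver
    for name, sep, ver in (line.partition("==") for line in out.splitlines())
    if sep
}
```

Using sys.executable -m pip also guarantees you query the same interpreter the script runs under, which matters when several Pythons are installed.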
Building on Jakub Kukul's answer I found a more reliable way to solve this problem.
The main problem with that approach is that it requires the packages to be installed "conventionally" (which excludes using pip install --user), or to be on the system PATH at Python initialisation.
To get around that you can use pkg_resources.find_distributions(path_to_search). This basically searches for distributions that would be importable if path_to_search was in the system PATH.
We can iterate through this generator like this:
import pkg_resources

avail_modules = {}
distros = pkg_resources.find_distributions(path_to_search)
for d in distros:
    avail_modules[d.key] = d.version
This will return a dictionary having modules as keys and their version as value. This approach can be extended to a lot more than version number.
Thanks to Jakub Kukul for pointing in the right direction.
You can first install some package like this and then check its version:
pip install package
import package
print(package.__version__)
It should give you the package version.

Two pip-installed modules have the same name, how to select which one is loaded?

I'm writing a python program that relies on a specific module, rtmidi. Thing is, at least two different packages in PyPI have a module with that name, rtmidi and python-rtmidi. They offer almost the same functionalities but have different syntax.
When only the "right" package is installed everything works fine. But if both packages are installed, using import rtmidi loads the "wrong" module and my program crashes. The only way to get it working again is to uninstall both packages then re-install the right one. Of course, since the user might rely on the other module for other programs, I can't expect them to do that.
Trying to identify the module with rtmidi.__name__ gives the same result with both packages.
So my question, how do I go about resolving this name clash problem? Is there a best-practice way to handle this?
You can't have them both installed (if relying on default pip behavior). Here I'll demonstrate with a couple simple pure-Python packages that have an import-name conflict: coveralls and python-coveralls.
# in a fresh environment
pip install python-coveralls
python -c "import coveralls; print(dir(coveralls)); print(coveralls.__file__)"
# ['__author__', '__builtins__', '__cached__', '__classifiers__', '__copyright__',
# '__doc__', '__docformat__', '__file__', '__license__', '__loader__', '__name__',
# '__package__', '__path__', '__spec__', '__version__', 'parse_args', 'wear']
# /path/to/lib/python3.8/site-packages/coveralls/__init__.py
These are the contents of python-coveralls. Note that the actual __init__.py file is located in the site-packages/coveralls/ directory.
pip install coveralls
python -c "import coveralls; print(dir(coveralls)); print(coveralls.__file__)"
# ['Coveralls', '__all__', '__builtins__', '__cached__', '__doc__', '__file__',
# '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__',
# 'api', 'exception', 'git', 'reporter', 'version']
# /path/to/lib/python3.8/site-packages/coveralls/__init__.py
These are the contents of coveralls. python-coveralls has been overwritten. Note that this is the same file. Anything in the old __init__.py is gone.
pip uninstall coveralls
python -c "import coveralls; print(dir(coveralls)); print(coveralls.__file__)"
# ['__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__',
# '__spec__']
# None
There's still a ghost of the package there, but its contents are gone (except things that all packages have internally). Notice that the file is now None: there is no __init__.py file for this package.
You can un-bork your environment by also running pip uninstall python-coveralls and then reinstalling whichever you want.
Solution
You do have to require that your users only use the package that is in your requirements, but that's why we use virtual environments. Alternatively, a user that wants to directly use the other package can change the install location (and thus the name used when loading) with the --target option to pip (but that won't help other apps that use the other library).
In practice, it's probably best to think of this as part of your installation process. You either need to (a) tell your users what requirements they need to install; or (b) have some automated process that gets the right requirements.
I'd say best practice is (b). A common way of doing this is to use setuptools, and defining the install_requires argument to the setup function. This refers to the package name on PyPI, not the import name. The setuptools quickstart is a good place to start.
The following is based on the comments by hoefling and the answer by dwhswenson (the latter explains well what the problem is).
Flag it to the project owners so they change the names of their modules. In theory this is by far the best solution as not only it will solve your problem but also save others from it, which is in the best interest of the module's devs. In practice it opens a whole can of worms as to which one should change name.
Or include the module in your project, aka vendoring, and import it using the path my_project/_vendor/the_module (or add that path first in sys.path). If the module is just a .py file and the license permits it, this is the simplest way to resolve the issue. This wasn't doable in my case because the module I needed includes some C code that needs to be compiled, and including the whole source code then requiring the user to compile it seemed like big ask. So I had to do this:
Or tell the user to install the module in your project's folder. Using pip with the option --target followed by a folder path inside your project, you can get your own copy of the module which won't overwrite the system one. You can then check if that folder exists before importing. If it does, import the module from there, if not you can check if the system-installed module is the right one, and prompt the user if it isn't.
And here's a rough example of how I implemented the code in my project. It replaces import rtmidi:
import os

if os.path.isdir('my_project/rtmidi'):
    from .rtmidi.rtmidi import _rtmidi as rtmidi
else:
    try:
        import rtmidi
    except ImportError:
        print('Please run: pip3 install rtmidi')
        exit()
    try:
        # something that only works with the right module:
        test = rtmidi.MidiMessage()
    except AttributeError:
        print('Please run: pip3 install --target=./my_project/rtmidi rtmidi')
        exit()
Note: I should probably just try to import the local folder module instead of using a conditional statement, and use the path my_project/_vendor/rtmidi.
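When debugging which of two same-named modules a bare import would pick up, importlib.util.find_spec shows the winning file without actually importing anything. A small helper (the name is mine):

```python
import importlib.util

def module_origin(name):
    """Return the file a bare `import name` would load, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# e.g. module_origin("rtmidi") would reveal whether the vendored copy or the
# site-packages copy wins on the current sys.path
```

Unlike checking rtmidi.__name__ (which, as the question notes, is identical for both packages), the origin path distinguishes the two installations.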
I think you should find the one you don't want and delete it, then try importing again.
You can find installed packages in the site-packages folder.
If the package names are different, import by the package name; if they are the same, you can change one package's name.

How to add dependencies inside setup.py file?

How do I add dependencies inside a setup.py file? For example, I am writing this script on a VM and want to check whether certain dependencies, like the JDK or Docker, are present, and if they aren't installed, install them automatically on the VM using this script.
Please do tell me as soon as possible, as it is required for my project.
In its simplest form, you can add (Python) dependencies that can be installed via pip as follows:
from setuptools import setup

setup(
    ...
    install_requires=["install-jdk", "docker>=4.3"],
    ...
)
Alternatively, write a requirements.txt file and then use it:
with open("requirements.txt") as requirements_file:
    requirements = requirements_file.readlines()
requirements = [x.strip() for x in requirements]

setup(
    ...
    install_requires=requirements,
    ...
)
Whenever you execute python setup.py install, these dependencies will be checked against the libraries available on your VM, and if they are not available (or the versions mismatch) they will be installed (or replaced). More information can be found here.
Refer to https://github.com/boto/s3transfer/blob/develop/setup.py and check the requires variables.
You can also refer to many other open-source projects.
You can add dependencies using setuptools; however, it can only check dependencies that are Python packages.
Because of that, you could check the jdk and docker installations manually, before calling setup().
You could invoke the system as in the code below and check the response:
import os
os.system("java -version")
os.system("docker version --format \'{{.Server.Version}}\'")
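For checking whether an external tool exists at all, shutil.which is a more robust alternative to inspecting os.system output, since it returns the executable's path (or None) instead of relying on exit codes. A sketch, using the tool names from the question:

```python
import shutil

def is_installed(command):
    """Return True if an executable with this name is on the PATH."""
    return shutil.which(command) is not None

# e.g. before calling setup(), one could verify the external tools:
for tool in ("java", "docker"):
    if not is_installed(tool):
        print(f"warning: required tool '{tool}' not found on PATH")
```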

Setup.py, , setuptools, cmdclass - Custom commands not working

I am trying to create a directory upon a package installation. The function to create the directory, by itself, successfully creates it. Additionally, when I run "python3.7 setup.py install", the directory is created.
Why does this not work when using pip though? I don't see any errors. When I added print statements, I do not see them.
I have chosen to use setuptools' 'bdist_egg' function instead of the 'install' function for reasons found in here:
Running custom setuptools build during install
from sys import platform
from setuptools import setup
from os import mkdir, chmod, path
from setuptools.command.bdist_egg import bdist_egg as _bdist_egg

class OverrideInstall(_bdist_egg):
    def run(self):
        _bdist_egg.run(self)
        # create log directory
        log = "/var/log/FOO"
        mode = 0o777
        if not path.exists(log):
            mkdir(log)
            chmod(log, mode)

setup(
    name='cox-nams',
    version='FOO',
    description='FOO',
    <-- output omitted for brevity / security>
    cmdclass={"bdist_egg": OverrideInstall},
)
Apparently this is not supported with pip install.

How to install NodeBox for console

I'm working on OS X Mavericks and want to use the NodeBox modules in Python scripts.
The post about how to install the modules for console is from 2009 and doesn't work anymore as this refers to version 1.9.x (current is 3.0.40). Also the SVN source isn't there anymore. The sources are available at GitHub.
By cloning the project and running:
ant run
all I get is a build of the desktop version.
How do I properly install and run the up to date NodeBox modules in Python scripts?
As said in the docs here in section 2. Installing the NodeBox module:
If you want to use NodeBox from the command line, you will have to install it. We currently recommend using Subversion to grab a copy:
svn co http://dev.nodebox.net/svn/nodebox/trunk/ nodebox
...
cd src
python setup.py install
we should be installing the usual way from the source, but as you say the procedure is rather outdated. The source apparently moved from SVN to GitHub at https://github.com/nodebox/nodebox-pyobjc as mentioned on the download page and the source package structure changed too.
Let's grab the source and try to install it:
$ git clone https://github.com/nodebox/nodebox-pyobjc.git
$ cd nodebox-pyobjc
$ python nodebox/setup.py install
Traceback (most recent call last):
File "nodebox/setup.py", line 17, in <module>
import nodebox
ImportError: No module named nodebox
So setup.py needs to import the nodebox package, let's add the project root dir to Python path, so that the nodebox package can be found and try again:
$ export PYTHONPATH=$PYTHONPATH:.
$ python nodebox/setup.py install
...
clang: error: no such file or directory: 'nodebox/ext/cGeo.c'
clang: error: no input files
error: command 'clang' failed with exit status 1
Now it turns out some lib paths in setup.py are wrong, no one probably used this for some time while the libs moved around, but we can fix it:
# ext_modules = [
#     Extension('cGeo', ['nodebox/ext/cGeo.c']),
#     Extension('cPathmatics', ['nodebox/ext/cPathmatics.c']),
#     Extension('cPolymagic', ['nodebox/ext/gpc.c', 'nodebox/ext/cPolymagic.m'], extra_link_args=['-framework', 'AppKit', '-framework', 'Foundation'])
# ]
ext_modules = [
    Extension('cGeo', ['libs/cGeo/cGeo.c']),
    Extension('cPathmatics', ['libs/pathmatics/pathmatics.c']),
    Extension('cPolymagic', ['libs/polymagic/gpc.c', 'libs/polymagic/polymagic.m'], extra_link_args=['-framework', 'AppKit', '-framework', 'Foundation'])
]
Try install again:
$ python nodebox/setup.py install
...
running install_egg_info
Writing <python>/lib/python2.7/site-packages/NodeBox-1.9.7rc2-py2.7.egg-info
$ pip list
...
NodeBox (1.9.7rc2)
...
Now the package installed successfully and we should be able to use it:
$ python
>>> import nodebox
>>> dir(nodebox)
['__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', 'get_version']
>>> nodebox.__version__
'1.9.7rc2'
Also, you may still need to manually install some of the dependencies for everything to work correctly, as noted in setup.py itself:
# We require some dependencies:
# - PyObjC
# - psyco
# - py2app
# - cPathMatics (included in the "libs" folder)
# - polymagic (included in the "libs" folder)
# - Numeric (included in the "libs" folder)
# - Numpy (installable using "easy_install numpy")
I already created a pull request with fixed setup.py lib paths, see here.
Tested on OS X Mavericks (System Version: OS X 10.9.3 (13D65), Kernel Version: Darwin 13.2.0) using Homebrew Python 2.7.6.
