I know this question has been posted before, but I couldn't find a complete answer on how to do it.
I would like to use Python packages with C extensions, such as NumPy and Twisted, in an embedded system (platform architecture: 32-bit ARM running some Linux distribution).
Info: the tool chain is already configured.
I found these alternatives:
Using Docker
Using distutilscross (sounds the easiest, but I couldn't find documentation): https://pypi.python.org/pypi/distutilscross
Using a VM
Thank you in advance
Crossenv
But I keep running into multiarray issues whenever I work with NumPy.
Cross-compilations of packages that depend on NumPy fail with the same issue, so I modified PATH to get them building.
The NumPy I built also raises the error when imported in the target Python.
For the second issue:
$ sudo apt install python-numpy
is supposed to be the solution, but since I am developing for an embedded system I cannot use it (and have never tried to).
I came to the conclusion that I should cross-compile it myself, but then many more dependency issues appeared. Still, it is worth trying if the target is not a minimalist Linux and apt is available.
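When a cross-built extension such as NumPy's multiarray fails to import on the target, one common cause is that the shared object was compiled for the wrong architecture. As a rough diagnostic (the helper function and file path below are my own, not part of any tool mentioned above; it assumes a little-endian ELF target), you can read the ELF header of the built .so directly:

```python
import struct

def elf_machine(path):
    """Return the e_machine field of an ELF header (40 = ARM, 62 = x86-64)."""
    with open(path, "rb") as f:
        header = f.read(20)
    assert header[:4] == b"\x7fELF", "not an ELF file"
    # e_machine is a 16-bit field at offset 18 (little-endian assumed here)
    return struct.unpack_from("<H", header, 18)[0]

# Example (path is hypothetical):
# print(elf_machine("numpy/core/multiarray.so"))  # expect 40 on 32-bit ARM
```

If the value doesn't match your target, the wrong compiler was picked up during the build.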
Related
I have a Python script which uses NumPy and another third-party library. The third-party library is written in Python and has no bindings to other languages. It makes use of Cython, SciPy, NumPy, and Matplotlib. Though I only use a small subset of this library, it has no easy replacement (scientific software).
I'd like to use a computing server to run my program, since it takes over 10 hours to finish. Needless to say, there is no Python support there. So I see two possibilities: precompile my code for Unix, or convert it to C/C++.
What I tried:
shedskin: Doesn't work with unsupported libraries
cx_freeze et al.: Countless errors; it's difficult to make even simple programs work
PyInstaller: Doesn't work on openSUSE; it is not able to resolve the dependencies of third-party libraries
Nuitka: I get a memory error
Any suggestions on what to do are welcome.
Anaconda/Miniconda is a perfect fit for this problem. It installs locally to your user's home directory and installs all the binaries you need (with minimal effort to add extra custom packages). It's designed specifically with the Python science ecosystem (and all its annoying build dependencies) in mind.
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
export PATH=~/miniconda3/bin:$PATH
conda install numpy scipy matplotlib cython
You also get the nice side effect that setting up a new machine takes seconds to minutes rather than minutes to hours.
Once it's set up, it's also compatible with pip (i.e., it puts a local copy of pip alongside conda).
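As a quick sanity check after installation (the package names are taken from the conda command above), you can verify that the interpreter on your PATH resolves the new packages without actually importing them:

```python
import importlib.util

# find_spec returns None for a top-level name the interpreter cannot locate
for name in ("numpy", "scipy", "matplotlib", "Cython"):
    spec = importlib.util.find_spec(name)
    print(name, "ok" if spec else "MISSING")
```

If anything prints MISSING, your shell is probably still picking up a different python than the Miniconda one.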
Preface: I am a Mac/Unix user and am now a little lost with Windows.
Situation: I am trying to use Python on a school machine that has a 64-bit architecture and runs Windows 7. I have gotten the NetworkX module to work via python setup.py install, but I need the numerical libraries to be available as well.
Question: I have the identical output as this question elaborates and need to install numpy with correct dependencies. How do I do this with limited permissions?
Problems: The solution in the above link cannot be adopted in my case. I do not have Visual Studio 2008 and cannot install it due to permissions. Also, the linear algebra library that is required costs $500, which frankly is a deal breaker. I thought I could adopt this SO solution, but I do not have access to Bash. I also cannot run .exe files due to permissions. All the modules I have installed have been installed using python setup.py install. Any help or suggestions are VERY much appreciated.
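One avenue that may be worth checking under limited permissions (assuming the school machine's Python is 2.6 or newer) is the per-user site-packages directory, which `python setup.py install --user` writes to without needing admin rights. A small sketch to see whether it is available on your machine:

```python
import site

# The per-user site directory: `python setup.py install --user`
# writes packages here, and it lives under your own profile.
print(site.getusersitepackages())

# False means the administrator has disabled user site-packages,
# in which case this route won't help.
print(site.ENABLE_USER_SITE)
```

Whether numpy itself will build this way still depends on a compiler being present, so treat this only as a way to sidestep the permissions part of the problem.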
Could you install one of the scientific python distributions like Anaconda or Canopy? That might include everything you need. See http://scipy.org/install.html for a list of options.
I'm researching an appropriate (or at least straightforward) stack for ultimately getting skeleton information from a Kinect, via a Python API, on an OS X platform. Most of the information I am finding is quite spread out and disconnected.
While it seems perfectly obvious that a Windows-based stack would be Microsoft's own PyKinect on top of their Kinect SDK, I can't seem to figure out what works well in an OS X environment.
Here is the info that I have compiled so far:
libfreenect is the obvious source for the low-level drivers (this part is working just fine)
OpenNI offers the framework + NITE middleware to provide recognition (not Python)
PyOpenNI - Python bindings for OpenNI, with support for skeleton tracking and other advanced features
I have concluded that this is the most recommended stack to date. What I would like to achieve is simple skeleton data similar to what the windows SDK python wrapper gives you out of the box. Ultimately I will be using this in a PyQt-based app to draw the display, and then into Maya to apply the data.
My question is two parts, and I would accept an answer in either direction if it were the most appropriate...
Build issues for PyOpenNI
So far, I have been unable to successfully build PyOpenNI on either OS X Snow Leopard (10.6.8) or Lion (10.7.4). Both systems have up-to-date Xcode. I have noticed that the source files are hardcoded to expect python2.7, so on Snow Leopard I had to make sure it was installed and was the default version (I also tried a virtualenv).
On Snow Leopard, the cmake process was finding mismatched libs, headers, and binary for Python, and the make ultimately produced an .so that crashed with a 'mismatched interpreter' error.
On Lion, I also got mismatched interpreter crashes. But after I installed python2.7 via homebrew, it generated a new error:
ImportError: dlopen(./openni.so, 2): Symbol not found: _environ
Referenced from: /usr/local/lib/libpython2.7.dylib
Expected in: dynamic lookup
Are there any specific steps to building this on OS X that I am missing, such as environment variables to ensure it's pointing at the correct python2.7 libs? Does anyone have a successful build process for this platform?
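Before running cmake, it can help to confirm which interpreter and architecture you are actually about to build against (a quick diagnostic sketch of my own, not part of PyOpenNI's build):

```python
import platform
import sys

# The interpreter the build should target: on Lion with Homebrew this
# should point into /usr/local, not /System or /usr/bin.
print(sys.executable)
print(sys.prefix)

# Word size of the running interpreter, e.g. ('64bit', '') on OS X.
print(platform.architecture())
```

If sys.prefix points at the system framework while cmake links against the Homebrew one (or vice versa), you get exactly the 'mismatched interpreter' symptom described above.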
Alternate question
Is this still the most recommended stack for OSX?
Follow up
I've accepted my own answer as a temporary working solution. If someone can provide a better one, I will gladly accept it!
Update
Part of this process isn't necessary after a patch I submitted (information here). Since then I have also written up a more detailed blog post about installing the entire stack on OSX: Getting Started With Xbox360 Kinect On OSX
After hacking around on this a bit, I have found a working fix (though it doesn't address the issue at the build level). There is an existing issue with cmake where it does not properly detect Python frameworks other than the system framework (which causes the mismatch between the Python binary and the libs).
I first reinstalled python2.7 via Homebrew, adding the --framework flag.
After building the module, I noticed via otool that it was still linking against my system Python, and the system Python on Lion is fat (i386 and x86_64). I also noticed that libboost (Boost installed via Homebrew), which openni.so links against, was itself linked against the system Python instead of the Homebrew one. So I used the following to relink them:
install_name_tool -change \
/System/Library/Frameworks/Python.framework/Versions/2.7/Python \
/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Python \
openni.so
install_name_tool -change \
/System/Library/Frameworks/Python.framework/Versions/2.7/Python \
/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Python \
/usr/local/Cellar/boost/1.49.0/lib/libboost_python-mt.dylib
After doing this, I was able to import openni without any errors.
Here is the summary of the workaround:
python2.7 as framework, x86_64 (not fat)
libboost linked to the proper 64bit python
export CPPFLAGS="-arch x86_64"
cmake and make steps like normal
relink openni.so to the 64bit python
Ideally, someone would post a better answer than this one showing how to fix this during the build phase with environment variables, and not have to do a relink fix at the end.
I have a Linux VPS that uses an older version of Python (2.4.3). This version doesn't include the UUID module, but I need it for a project. My options are to upgrade to python2.6 or to find a way to make UUID work with the older version. I am a complete Linux newbie. I don't know how to upgrade Python safely, or how I could get the UUID module working with the already-installed version. Which is the better option, and how would I go about doing it?
The safest way to upgrade Python is to install it in a different location (away from the default system path).
To do this, download the source of python and do a
./configure --prefix=/opt
(assuming you want to install it to /opt, which is where most people install non-system-dependent software)
The reason why I say this is because some other system libraries may depend on the current version of python.
Another reason is that, as you are doing your own custom development, it is much better to have control over which versions of the libraries (or interpreters) you are using, rather than have an operating system patch break something that was working before. A controlled upgrade is better than having the application break on you all of a sudden.
The UUID module exists as a separate package for Python 2.3 and up:
http://pypi.python.org/pypi/uuid/1.30
So you can either install that into your Python 2.4, or install Python 2.6. If your distro doesn't have it, Python is quite simple to compile from source. Look through the requirements to make sure all the libraries you need/want are installed before compiling Python. That's it.
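Either way, once the backported package (or the 2.5+ standard library module) is importable, usage is the same; a minimal sketch:

```python
import uuid

# uuid1() is time/MAC-address based; uuid4() is random.
u = uuid.uuid4()

print(u)             # random each run, e.g. 9f1b4b6c-....
print(len(str(u)))   # the canonical hex form is always 36 characters
```

The backport on PyPI deliberately mirrors the standard library API, so code written against it keeps working unchanged after an upgrade to 2.6.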
The best solution is to install python2.6 into a directory of your choosing - it will give you access to many great features and better memory handling (the infamous Python 2.4 memory-leak problem).
I have several Pythons installed on my two computers, and I have found that the best solution is two directories:
$HOME/usr-32
$HOME/usr-64
depending on the operating system in use (I share $HOME between 32-bit and 64-bit versions of Linux).
In each I have one directory for every application/program, for example:
ls ~/usr-64/python-2.6.2/
bin include lib share
This completely avoids conflicts between versions and gives great portability (you can even use USB pendrives, etc.).
Python 2.6.2 in the previous example was installed with the option:
./configure --prefix=$HOME/usr-64/python-2.6.2
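After installing into a custom prefix like this and putting its bin directory first on PATH, you can confirm which install is actually running (the example path is the one from the configure line above):

```python
import sys

# With $HOME/usr-64/python-2.6.2/bin first on PATH, both of these
# should point into that prefix rather than the system install.
print(sys.executable)  # e.g. /home/you/usr-64/python-2.6.2/bin/python
print(sys.prefix)      # e.g. /home/you/usr-64/python-2.6.2
```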
The default Python install on OS X 10.5 is 2.5.1, built as a fat 32-bit (Intel and PPC) binary. I want to set up Apache and MySQL to run Django. In the past, I have run Apache and MySQL in 32-bit mode to match this install (even stripping out the 64-bit parts of Apache to make it work).
I want to upgrade Python to 64-bit. I am completely comfortable with compiling it from source, with one caveat: how do I match the way the default install is laid out? Especially with regard to site-packages being in /Library/Python/2.5/ and not the one buried inside the framework once I compile it.
Personally, I wouldn't worry about it until you see a problem. Messing with the default Python install on a *nix system can cause more trouble than it's worth. I can say from personal experience that you never truly appreciate what Python does for the *nix world until you have a problem with it.
You can also add a second python installation, but that also causes more problems than it's worth IMO.
So I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?
Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?
Hyposaurus,
It is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above.
The easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local
Another method is simply to download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time:
./configure --prefix=/usr/local/python64
make
sudo make install
Then you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive
Essentially, yes. I was not sure you could do it like that (the current version does not do it like that). When using the Python install script, however, there is no option (that I can find) to specify where to put directories and files (e.g., --prefix). I was hoping to match the current layout of Python-related files so as to avoid 'polluting' my machine with redundant files.
The short answer is: because I can. The long answer, expanding on what the OP said, is to be more compatible with Apache and MySQL/PostgreSQL. They are all 64-bit (Apache is a fat binary with ppc, ppc64, x86, and x86_64; the others are just straight 64-bit). MySQLdb and mod_python won't compile unless they are all running the same architecture. Yes, I could run them all in 32-bit (and have in the past), but that is much more work than compiling one program.
EDIT: You've pretty much convinced me, though, to just let the installer do its thing and to update the PATH to reflect this.