I recently purchased a new Mac laptop with the M1 chip. I've previously done work using TensorFlow and am trying to continue on my new laptop. I'm aware that Apple has a tensorflow_macos fork of TensorFlow, but unfortunately my project needs higher-order derivatives, which the fork is not currently capable of computing. Is there a way to run the non-Mac-specific version of TensorFlow on an M1?
If I attempt to import tensorflow from an arm terminal, I get the error:
python3
import tensorflow
zsh: illegal hardware instruction python3
The same happens if I run from a Rosetta terminal.
I've also tried running from PyCharm (with a Python 3.8 virtual environment), but I get a different error when trying to import tensorflow:
Process finished with exit code 132 (interrupted by signal 4: SIGILL)
I have Python 3.8.5 installed. I've seen on other related questions that there should be two versions of python3 installed by default on macOS (arm64 and x86_64), but it looks like I only have one version:
file $(which python3)
/Users/xxx/opt/miniconda3/bin/python3: Mach-O 64-bit executable x86_64
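For reference, a quick way to check from inside Python which architecture the running interpreter itself was built for (a small sketch; on macOS this typically prints arm64 for a native M1 build and x86_64 under Rosetta or for an Intel build like the miniconda one above):

```python
# Report the machine architecture the running interpreter was built for.
import platform

print(platform.machine())
```

If this prints x86_64 on an M1 machine, the interpreter is an Intel build running under Rosetta, which explains why arm-only wheels will not load into it.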
I'm inexperienced when it comes to handling different core architectures, so any advice would be appreciated. I'm happy to provide clarification for any specifics.
I have been a staunch user of Eclipse on Windows, mostly for developing Python code. Lately, I needed to do something with the packages xarray and netcdf4. I first used an old version of Eclipse, but when I encountered problems I installed the latest LiClipse, version 8.2.0 (64 bits), on my Windows 10 machine. I use Miniconda 3 py37_4.9.2 (64 bits) with Anaconda Navigator 2.1.2 to manage my Python environments.

I wrote a script of only a few lines. When I tried to import the package netcdf4 and pressed the debug button, I immediately got an import error saying that one of the netcdf4 DLLs could not be found. When I tried working without direct involvement of netcdf4 but only with xarray, I also got an import error that my packages were not configured correctly. When I started the script from the command line, outside Eclipse, there was no problem at all. BTW, I tried running with different Python versions: 3.6, 3.8 and 3.9, but that made no difference.

I suspect that PyDev does not work well together with the packages netcdf4 and xarray. Has anybody else experienced similar problems?
It seems like some environment variable isn't properly set when running from PyDev...
Do you have the flag to load conda environment variables set in the interpreter configuration?
Note: if it runs from the command line, you can compare the values in os.environ from one run to the other to find what may be different. In general, just making sure that the conda environment variables are loaded should do the trick; if it doesn't, comparing those values and setting whatever is needed in the Environment tab may help.
I'm working on a project built with Cython and have been having some interesting installation issues on some particular systems. The problem boils down to either a mismatch in Mac OS versions, or a mismatch in my understanding of Mac OS versions.
When installing a Cython library on my system, the compilation log shows that I am using Mac OS version 10.12, even though my system tells me I am on 10.13. An excerpt of the compilation log is below:
creating build/temp.macosx-10.12-x86_64-3.6
This is not a problem for me, but another user is having trouble: their system reports 10.13, but their compilation shows 10.7 (which requires some other workarounds).
Is it possible to update this build system? Or is there something about Mac OS that I'm misunderstanding?
Is there a way to tell from a Python script which build libraries will be used?
Thank you!
Edit:
Output of platform information:
$ python -c "import platform; print(platform.mac_ver())"
('10.13.6', ('', '', ''), 'x86_64')
$ python -c "import distutils.util; print(distutils.util.get_platform())"
macosx-10.12-x86_64
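The same two values can be printed from inside a script (a sketch; sysconfig.get_platform() is the non-deprecated counterpart of distutils.util.get_platform on modern Pythons, and platform.mac_ver() returns empty strings on non-macOS systems):

```python
# Compare the OS version the system reports with the platform tag
# the interpreter will use when building C extensions.
import platform
import sysconfig

print("System reports:", platform.mac_ver()[0] or "n/a (not macOS)")
print("Build platform tag:", sysconfig.get_platform())
```

A mismatch between the two, as in the output above, means the interpreter was compiled against an older SDK, and extensions it builds will be tagged for that older macOS version.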
My context
A similar issue came up with Big Sur (macOS 10.16), python3.6, and cython.
Checking the build directory of Cython (.pyxbld), I find temp.macOSX-10.7 builds and files, which are apparently incompatible with 10.16.
Some explanation of why this is going on
Under the hood, Cython uses distutils to determine the platform version: https://github.com/python/cpython/blob/master/Lib/distutils/util.py. So then, what's up with distutils?
After a lot of searching (including this post), I came across a really interesting and good post (although from 2014) that explains this phenomenon.
https://lepture.com/en/2014/python-on-a-hard-wheel
Basically, it has to do with how distutils was built: it has hard-coded platform tags (this also applies to wheel files).
Solution (kinda)
It seems that the solution is to:
- reinstall Python (and then all packages), and
- make sure that python -c "import distutils.util; print(distutils.util.get_platform())" outputs a correct version (e.g. macosx-10.12-x86_64).
Edit
Apparently, on Big Sur with Anaconda this solution does not work yet... (work in progress; I will update if a solution for Anaconda works) [Fri 20 Nov 2020]
I've been trying to understand this situation:
I want to use Python packages in Anaconda 3 that require glibc 2.14. As CentOS 6.x only ships glibc 2.12, I've compiled glibc 2.14 and installed it to /opt/glibc-2.14.
I'm installing Anaconda3. The test I run looks like this:
With the system default glibc it works:
/opt/anaconda3/bin/python -c "import pandas"
but with the compiled glibc
export LD_LIBRARY_PATH=/opt/glibc-2.14/lib/:$LD_LIBRARY_PATH
/opt/anaconda3/bin/python -c "import pandas"
it works on some machines... I installed over 20 VMs; on some machines it always works, on others it never works and I receive: Segmentation fault (core dumped). On most of the machines it doesn't work.
Does anyone have any idea why this strange situation occurs? Or has anyone experienced similar problems?
Does anyone have any idea why this strange situation occurs
As this answer explains, what you are doing is not supposed to work: you have a mismatch between ld-linux and libc.so.6.
After some more investigating, I've noticed that assigning more memory to the lab machines (from 2-4 GB to 6 GB and more) makes the segmentation fault go away. However, the problem still exists on a production machine with 32 GB of RAM. Really strange.
For now I found a workaround: new Python packages from Anaconda that are compatible with glibc 2.12 (they became available a few days ago), whose dependencies also do not require a newer glibc.
@Employed Russian:
Thanks, but this is probably not the problem with multiple glibc libraries on a single host. In my case, Python works with the additional glibc. The problem is that the segmentation fault shows up on random machines while only using the new glibc. Also, I'm using other Python packages that require glibc 2.14 to work, so I'm aware which version of glibc I'm using at any moment.
Also, if there were some kind of mismatch in the libraries, then it shouldn't work at all (...probably).
Also, as I mentioned at the beginning, I've noticed that the problem is related to memory (still not sure what is happening on the machine with 32 GB of RAM).
One more thing: I'm not compiling the Python packages myself, so changing compiler options of 'myapp' (a Python package) is not an option.
Appreciate your answers though.
Recently, I started to use Theano to build a basic BP (backpropagation) network. Theano is installed and my Theano-based network works well on my PC.
In order to share my code with my colleagues, I am looking for a way to package the Theano Python file into a single executable that can run under Windows without a Python environment.
I tried py2exe for the packaging work and found that the packaged exe only works on my PC. When I copy the exe to other PCs without Python, it does not work. The only warning message given is:
“WARNING (theano.configdefaults): g++ not detected !
Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.”
My working environment is:
Win10 64bit + Anaconda2
Could anyone give me some advice on generating an exe file from a Theano Python script?
Thank you a lot.
Try PyInstaller
PyInstaller is a program that freezes (packages) Python programs into stand-alone executables, under Windows, Linux, Mac OS X, FreeBSD, Solaris and AIX.
I use cx_Freeze to convert my Python scripts to .exe. It's a really good module. You can see it working here.
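For reference, a minimal cx_Freeze build script might look like this (a sketch, not a tested recipe: my_script.py and the metadata are placeholders, and cx_Freeze must be installed in the build environment; whether Theano's compiled C pieces freeze cleanly is a separate question):

```python
# setup.py - minimal cx_Freeze build script; run with: python setup.py build
from cx_Freeze import setup, Executable

setup(
    name="my_app",                 # placeholder project name
    version="0.1",
    description="Frozen Theano script",
    executables=[Executable("my_script.py")],  # placeholder entry script
)
```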
I have some problems loading .mat files in the Spyder environment. I updated my conda version and my operating system (I now have Windows 10) and my script doesn't work anymore.
This is the script:
import scipy.io as sio
# I do a lot of stuff here
file = "filename.mat"
a = sio.loadmat(file)
I have found that there is a problem with the loadmat routine, but I don't know what it is.
I have this information about the system:
OS: Windows 10, 64 bits.
Anaconda 2, python 2.7
Other related software installed: Visual Studio 2015
It worked before the operating system update; the previous version was Windows 7, 64 bits.
The error message when the load line executes is: "Apparently the core died unexpectedly. Use 'Restart the core' to continue using this terminal."
UPDATE 1:
I have reproduced the same situation on a laptop, i.e., updating Windows as well as Anaconda. The same problem occurs with scipy.io.loadmat.
UPDATE 2:
I installed an older version of Anaconda. IT WORKS!!... Does somebody know what to do now?
Regards
This is a bug in the new scipy. You should downgrade it by running this in the Command Prompt:
conda install scipy==0.16.0
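To confirm which version you ended up with after the downgrade, a quick check (assuming scipy imports at all):

```python
# Print the installed scipy version to verify the downgrade took effect.
import scipy

print(scipy.__version__)
```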