python/miniconda - proper use of environment variable paths

TL;DR: How should Miniconda environments be added to the Windows environment variables so that multiple conda environments run with minimal fuss?
Long Story/Background
I'm on Windows 10, got fed up with trying to use Python directly, and decided to give Miniconda a go. I'm running Python 3.8 with NumPy as the main installed package. Everything was fine in the console, but PyCharm hit the classic "Importing the numpy c-extensions failed" error. After trying a reinstall, I found another question whose fix was to add more folders to the system path. This worked only when the additional library paths
C:\Users\USERNAME\.conda\envs\num38
C:\Users\USERNAME\.conda\envs\num38\DLLs
C:\Users\USERNAME\.conda\envs\num38\Lib
C:\Users\USERNAME\.conda\envs\num38\Library
C:\Users\USERNAME\.conda\envs\num38\Library\bin
C:\Users\USERNAME\.conda\envs\num38\Scripts
were added directly to the system path, not wrapped in a secondary path variable (i.e. %num38_path%) that was itself placed on the path. I also tried supplying the secondary path as an environment variable in the PyCharm run configuration, but that didn't work either.
Why doesn't this secondary path method work?
I'm currently using only this one virtual environment, but if I want another conda environment in the future, would having these paths on the system path be an issue?
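For what it's worth, a quick way to see which of those directories the interpreter PyCharm launches actually has on its PATH is a small check like the following (the num38 location is just the one from the question; adjust as needed):

import os

# The directories from the question that the manual fix adds to PATH
env_root = os.path.expanduser(r"~\.conda\envs\num38")
expected = [
    env_root,
    os.path.join(env_root, "DLLs"),
    os.path.join(env_root, "Lib"),
    os.path.join(env_root, "Library"),
    os.path.join(env_root, "Library", "bin"),
    os.path.join(env_root, "Scripts"),
]

# Windows paths are case-insensitive, so normalise before comparing
path_entries = [os.path.normcase(p) for p in os.environ.get("PATH", "").split(os.pathsep)]
for d in expected:
    status = "on PATH" if os.path.normcase(d) in path_entries else "MISSING"
    print("%-8s %s" % (status, d))

Running this from the PyCharm run configuration shows whether the secondary %num38_path% variable was ever expanded into real directory entries or left as an unexpanded string.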

Related

How to use more than one conda environment in a project

I am working on a research project in which I need to use several scientific packages, each of which comes with its own requirements file listing the libraries it needs. I am writing Python in Jupyter notebooks using Anaconda on Windows 10.
Based on what I've read on the web, each project should have its own environment, so I created one (say project_env) using conda. In some parts of the project I need to use external scientific packages (call them 'bst' and 'MDN'), cloned from GitHub, each of which has its own specific dependencies.
My current practice is to install all of these dependencies in the same environment (project_env) and write the whole project in one notebook. However, as the project moves forward, things are getting more complicated and I am hitting conflicts between installed packages even when installing with conda. So I came up with the idea of keeping things apart as much as possible, i.e. creating two more environments for the external packages (bst_env and MDN_env) and using them whenever I need them in the project. Under this scenario I cannot keep all my project code in one Jupyter notebook, because as far as I know there is no way to switch between environments from inside a notebook. On the other hand, running different notebooks for different parts of the project is difficult and messy.
My question is: is there a way to use more than one environment from a notebook? If not, what would be the best practice for handling these environments in a project? Should I export variables from my source code (run in project_env) to the other environments (bst_env or MDN_env) and activate and run the corresponding environment and notebook every time, or is there a better way to do this?
I found this great package (nb_conda_kernels), which is exactly what I wanted. It lets you switch between environments (kernels) inside a Jupyter notebook just by selecting from a list of available environments.
As described at https://github.com/Anaconda-Platform/nb_conda_kernels, just run 'conda install nb_conda_kernels' in a conda terminal to install the package in the environment (kernel) from which you want to run the other environments (kernels); in my case (the question above) that is 'project_env'. Also make sure 'ipykernel' is installed in the external environments you want to use in your notebook (in my case 'bst_env' and 'MDN_env').
Now, while working in a notebook under environment "A", you can use dependencies installed in environments "B" or "C" simply by selecting those environments from the list of kernels in the Jupyter notebook.
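As a quick sanity check (not part of the original answer, just a common way to verify the setup), you can list the kernel specs Jupyter knows about; with ipykernel installed in bst_env and MDN_env and nb_conda_kernels active, entries for those environments should appear alongside the default kernel. Note that, depending on the nb_conda_kernels version, its dynamically discovered conda kernels may only be visible from inside the notebook server rather than through the plain kernel-spec manager:

# Requires jupyter_client, which ships with Jupyter
from jupyter_client.kernelspec import KernelSpecManager

specs = KernelSpecManager().find_kernel_specs()
for name, resource_dir in sorted(specs.items()):
    print(name, "->", resource_dir)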

Conflicts between multiple versions of conda

I use Miniconda to manage my Python environments on Windows 10. Additionally, I use software called ESRI ArcGIS Pro that comes bundled with its own versions of conda and Python, somewhat modified to work with that software. I must use ESRI's conda to manage environments that interact with this application.
I have the same setup on both my laptop and desktop, and until recently had no issues. Recently, however, ESRI's conda stopped working on my laptop. Any conda command (e.g. conda list, conda info --envs, conda create -n myenv, even just conda by itself) produces no output whatsoever. At first I suspected that PATH was set incorrectly, but I've checked that this is not the case (even calling ESRI's conda.exe with a full path still does not work). I then suspected that the conda.exe file itself was corrupted, but this also is not the case (I copied it to my desktop and it works fine there).
I suspect it may have something to do with my separate Miniconda installation. It doesn't seem to be an issue of environment variables being set incorrectly (again, checked against the working system), but I'm wondering whether there could be registry entries (perhaps set by my Miniconda install) causing this issue.
Any thoughts on why this might be the case? Or advice on how to proceed with diagnosing the issue?
EDIT:
Per merv's request, my conda environment variables:
CONDA_DEFAULT_ENV=C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3
CONDA_PREFIX=C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3
CONDA_PS1_BACKUP=$P$G
Clearly these paths differ from the usual ones because of the custom distribution.
To answer your other questions: no, other conda commands generate no output whatsoever either. As for activate, I don't have any other environments to try activating (the arcgispro-py3 env above is the name of the 'base' env that ships with the software), but deactivate does seem to work. Another slight difference worth mentioning is that conda activate ... is not a command in this special conda; you just use activate by itself, which as far as I can tell calls a shell script.
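One low-effort way to compare the working desktop against the broken laptop is to snapshot, on each machine, which conda the shell would resolve first and which conda-related variables are set; nothing here is specific to ESRI's build:

import os
import shutil

# The first conda found on PATH (None if nothing is found)
print("conda resolves to:", shutil.which("conda"))

# Every CONDA_* variable currently set in this shell
for key, value in sorted(os.environ.items()):
    if key.startswith("CONDA_"):
        print("%s=%s" % (key, value))

Differences between the two machines (a different conda resolving first, or extra CONDA_* variables left over from the Miniconda install) would point at an environment problem rather than a registry one.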

Python, site-packages partially on network file system?

CentOS 7 system, Python 2.7. The OS's installed Python has the directory
/usr/lib/python2.7/site-packages
and that is where a
python setup.py install
command would install a package. On a computing cluster I would like to install some packages so that they are visible from that directory but actually reside in an NFS-served directory here:
/usr/common/lib/python2.7/site-packages
That is, I do not want to have to run setup.py on each of the cluster nodes to do a local install on each, duplicating the package on every machine. The packages already installed locally must not be affected; some of those are used by the OS's own commands. The local ones must also keep working even if the network is down for some reason. I am not trying to set up a virtual environment; I am only trying to place a common set of packages in a different directory in such a way that the OS-supplied Python sees them.
It isn't clear to me what is the best way to do this. It seems like such a common problem that there must be a standard or preferred way of doing this, and if possible, that is the method I would like to use.
This command
/usr/bin/python setup.py install --prefix=/usr/common
would probably install into the target directory. However the "python" command on the cluster nodes will not know this package is present, and there is no "network" python program that corresponds to the shared site-packages.
After the network install one could make symlinks from the local to the shared directory for each of the files created. That would be acceptable, assuming that is sufficient.
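If the symlink route is taken, a small helper along these lines could create the links; this is only a sketch assuming the two directories mentioned above, and it links whole top-level entries rather than every individual file:

import os

SHARED = "/usr/common/lib/python2.7/site-packages"
LOCAL = "/usr/lib/python2.7/site-packages"

# Link each top-level entry (package directory, module, *.egg-info) from the
# shared site-packages into the local one, skipping anything that already
# exists locally so the OS-supplied packages are left untouched.
for name in os.listdir(SHARED):
    src = os.path.join(SHARED, name)
    dst = os.path.join(LOCAL, name)
    if os.path.exists(dst):
        print("skipping existing entry: %s" % name)
        continue
    os.symlink(src, dst)
    print("linked %s -> %s" % (dst, src))

The obvious drawback is that the script has to be re-run on every node whenever a new package lands in the shared directory, which is part of why the PYTHONPATH approach below ends up simpler.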
It looks like the PYTHONPATH environment variable might also work here, although I'm unclear about what it expects for "path" (the full path to site-packages, or just the /usr/common part).
EDIT: This does seem to work as needed, at least for the test case. The software package in question was installed using --prefix, as above. PYTHONPATH was not previously defined.
export PYTHONPATH=/usr/common/lib/python2.7/site-packages
python $PATH_TO_START_SCRIPT/start.py
ran correctly.
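For completeness, a quick check that the shared directory really ended up on the interpreter's import path (same paths as above; PYTHONPATH entries are prepended to sys.path at interpreter start-up):

import sys

SHARED = "/usr/common/lib/python2.7/site-packages"

print(SHARED in sys.path)   # expect True when PYTHONPATH is exported
for entry in sys.path:
    print(entry)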
Thanks.

Programmatically running Python programs in virtual environments (conda or venv)

I am planning to implement some functionality in Python to be used as part of a larger non-Python project, so as to take advantage of some Python libraries. I have done some scripting in Python before, but nothing this substantial.
From the advice I've gotten, it seems like we will definitely want to use a virtual environment to manage dependencies. I am exploring venv and conda and haven't committed to either yet, though it seems like conda would have the advantage of providing pre-built versions of Cython dependencies.
With both conda and venv, though, the documentation I've been finding seems to be oriented toward working interactively inside the environment. For our purposes, I want to be able to run the programs we've written in Python programmatically, without going through the system shell.
Is there an established, recommended way to do this?
I've been trying to look at what the Bash scripts to activate a virtual environment actually do, and it looks like they basically just set up some environment variables. Both add their virtual environment's bin directory to the beginning of the PATH, venv sets VIRTUAL_ENV, and conda sets a bunch of CONDA_ environment variables. Interestingly, it doesn't look like either sets, say, PYTHONPATH.
For programmatic use, is it sufficient to set these environment variables and then run the equivalent of python3 -m mymodule, or is there more setup that needs to be done? I would particularly like to know if this is documented anywhere, for conda, venv, or both: relying on having figured out what environment variables need to be set to what values seems a bit fragile.
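For what it's worth, the most common programmatic pattern is to skip activation entirely and run the environment's own interpreter; here is a minimal sketch (the environment location and module name are made up):

import subprocess
import sys

# Hypothetical location of the environment's interpreter; for a venv on
# Windows it is <env>\Scripts\python.exe, for a conda env <env>\python.exe,
# and on Linux/macOS it is <env>/bin/python in both cases.
if sys.platform == "win32":
    env_python = r"C:\path\to\env\python.exe"
else:
    env_python = "/path/to/env/bin/python"

# Running the env's own interpreter makes it pick up that env's
# site-packages without any activation script; activation mostly just
# edits PATH and sets VIRTUAL_ENV / CONDA_* for interactive convenience.
result = subprocess.run(
    [env_python, "-m", "mymodule"],   # "mymodule" is a placeholder
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stdout)
print(result.stderr)

One caveat for conda on Windows: some packages with native DLLs (NumPy, for example) also expect <env>\Library\bin on PATH, which is exactly the issue from the first question above, so in those cases it can still be necessary to prepend that directory to the PATH passed to the subprocess.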

System Python conflict between Anaconda and existing Python installation

I've been using a basic Python 3.4 install that I've been installing many modules into over the past month, but I've reached a point where pip is coming up short, so I'm going to install the full Anaconda on my system to go deeper into bokeh-server stuff.
I get a popup during the Windows 64-bit installer (Anaconda3-2.3.0-Windows-x86_64.exe) saying:
"A version of Python 3.4 (64-bit) is already at C:\Python34\. We recommend that if you want Anaconda registered as your system Python, you unregister this Python first. If you really know this is what you want, click OK, otherwise click cancel to continue."
I didn't find much documentation on this subject, and I'm not really sure how to "unregister" that installation of Python apart from uninstalling it entirely from Windows, which I imagine would accomplish such a thing. Is this basically telling me to check how my Python Launcher for Windows is set up after the Anaconda installation? I'm completely unfamiliar with this notion of Python system registration. Is it just a roundabout warning about which Python version takes precedence on the system path, or about which installation holds the file associations?
The solution is simply to uninstall Python (for example, run the original Python installer and select the uninstall option). The Python key in the Windows registry will be removed (which is what "unregister" means in this context).
Here is a link to a script that will unregister a Python installation (if you haven't come across it already). I personally have not dealt with anything like this; it seems like it should work, but you may have to tinker with some of the paths in the script to get things working. The links in nightuser's post will also probably fix the issue.
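For context on what "registered" means here: CPython installers record themselves in the registry locations described in PEP 514, so you can see what is currently registered with something like the following sketch (Windows only; the versions printed are whatever happens to be on the machine):

# Windows-only: list Python installs registered per PEP 514
import winreg

def list_pythoncore(root, root_name):
    try:
        key = winreg.OpenKey(root, r"Software\Python\PythonCore")
    except OSError:
        return
    with key:
        index = 0
        while True:
            try:
                version = winreg.EnumKey(key, index)
            except OSError:       # no more subkeys
                break
            print("%s: PythonCore\\%s" % (root_name, version))
            index += 1

list_pythoncore(winreg.HKEY_CURRENT_USER, "HKCU")
list_pythoncore(winreg.HKEY_LOCAL_MACHINE, "HKLM")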
Why not just remove your version of Python? You could do a pip freeze > requirements.txt with your current Python and add those packages to Anaconda, or create an Anaconda environment using them. Anaconda has greatly decreased the amount of time I spend setting things up.
You are getting that prompt because you have another version already installed. The safe way to handle this is to go to the directory of the existing version and run its uninstaller. Once the previous version is completely uninstalled, you can run the installation as normal and it should work.
You have already installed Python on your system, so the system can run your Python code. Anaconda can run your Python code as well. If you install Anaconda and expect to use the Python it provides, the system can get confused about which Python should handle the job. To avoid this confusion, there is always a path that points to the Python you want.
You can tell the OS which Python to use by changing the PATH environment variable in Windows: if you delete the old Python's path entry, that Python becomes invisible to the system. Changing the path is more convenient than uninstalling.
If you use an IDE like PyCharm, things are different. PyCharm may be set to Python 3.6 while your system has Python 3.5. In that case you need to change the interpreter path inside the IDE, or remove the Python the IDE points to.
Actually, instead of registering Anaconda as the system Python, you can install it first and then point both your system path and PyCharm at Anaconda's Python. That way PyCharm will use the Python provided by Anaconda, along with the packages and virtual environments you need; otherwise you can't use those packages or Anaconda's virtual environments.
(The original answer included screenshots: the system path pointing at Anaconda3's python36, and adding the interpreter path in PyCharm.)
