Azure Web app on Linux - Make Conda Installation persistent - python

I've been deploying a Node.js project that runs a Python script to an Azure Web App on Linux.
I've installed Conda in my /home/miniconda3 directory, but when I run the app it recognizes neither the path nor the installed packages, such as pandas. However, the installation is clearly present.
How can I make the Conda installation persistent?

Make sure you update conda and install Python:
conda update conda
conda install python=3.X
You can then activate the environment with one of the following commands:
source activate environment-name
conda activate environment-name
The commands below helped me in building a conda environment:
# build the conda environment
ENV ENV_PREFIX $PWD/env
RUN conda update --name base --channel defaults conda && \
    conda env create --prefix $ENV_PREFIX --file /tmp/environment.yml --force && \
    conda clean --all --yes
# run the postBuild script to install any JupyterLab extensions
RUN conda activate $ENV_PREFIX && \
    /usr/local/bin/postBuild.sh && \
    conda deactivate
You can have a look at the links mentioned by DeepDave.
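Another angle worth sketching: on App Service only /home survives restarts, so one common workaround is a startup script that puts the persistent Conda install on PATH before the app launches. This is a guess at a fix, not the platform's documented mechanism; the /home/miniconda3 path is taken from the question and the Node entry point is hypothetical:

```shell
#!/bin/sh
# startup.sh - assumed startup command for the Web App.
# Prepend the persistent Conda install (under /home, which survives
# restarts) to PATH so the Python the Node app spawns resolves to
# Conda's interpreter and sees its packages.
CONDA_DIR=/home/miniconda3
export PATH="$CONDA_DIR/bin:$PATH"
echo "PATH now starts with: ${PATH%%:*}"
# exec node server.js   # then start the Node app that runs the Python script
```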

Related

How do I create a conda env from a yml with no default packages?

I want to describe all my deps in a yml file:
I can do this and run this command: conda env create -f environment.yml
This installs a bunch of extra rubbish I don't want.
But I can run this command to create a minimal conda env: conda create --name test-layers python --no-default-packages
OK so now I want to use my yml config.
This doesn't work: conda create --name myenv -f environment.yml --no-default-packages --yes
Error: PackagesNotFoundError: The following packages are not available from current channels: - environment.yml
And this doesn't work: conda env create -f environment.yml --no-default-packages
Error: unrecognized arguments: --no-default-packages
How do I use a yml file and also not install a bunch of default packages?
Edit
I feel like I'm missing something because the tooling can't be this obtuse and stupid. I thought I would try to create the env first and then update from a file to see if at least that worked:
conda create --name myenv python --no-default-packages --yes
conda env update --name myenv --file environment.yml
And now my env has all those default packages I wanted to avoid! Am I seriously going to have to wrap this in a script that parses my environment.yaml and runs a command to install each dep and pip package myself?
Did you try using --no-deps? As in:
conda create --name myenv -f environment.yml --no-deps --yes

How do I create a docker container with both Python 2 and 3 available in Jupyter Notebooks?

I am trying to create a docker container that has anaconda and supports Jupyter notebooks with both python 2 and 3. I created a container based on the official anaconda python 3 container like so:
FROM continuumio/anaconda3:latest
WORKDIR /app/
COPY requirements.txt /app/
RUN pip install --upgrade pip && \
pip install -r requirements.txt
Once on the container, I am able to get python 2 and 3 working with Jupyter notebooks by entering the following commands:
conda create -y -n py2 python=2.7
conda activate py2
conda install -y notebook ipykernel
ipython kernel install --user
conda deactivate
Then when I go back to base and run jupyter kernelspec list I see:
(base) root@1683850aacf0:/app# jupyter kernelspec list
Available kernels:
python2 /root/.local/share/jupyter/kernels/python2
python3 /root/.local/share/jupyter/kernels/python3
and when I open a jupyter notebook server I see both python 2 and 3 options. This is the state that I would like to end up in. I tried to turn all these into docker commands like so:
RUN conda create -y -n py2 python=2.7
RUN conda activate py2
RUN conda install -y notebook ipykernel
RUN ipython kernel install --user
RUN conda deactivate
but running the command to activate or deactivate a conda environment (e.g. RUN conda activate py2) gives me an error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
Adding RUN conda init bash before the commands doesn't change the error message.
Also, based on this SO question I tried:
RUN conda create -y -n py3 python=3.7 ipykernel
RUN conda create -y -n py2 python=2.7 ipykernel
but after I build and enter the container I only see the python 3 environment:
(base) root@b301d8ab5f1e:/app# jupyter kernelspec list
Available kernels:
python3 /opt/conda/share/jupyter/kernels/python3
I can activate py2 and see that kernel, but not both:
(py2) root@b301d8ab5f1e:/app# jupyter kernelspec list
Available kernels:
python2 /opt/conda/envs/py2/share/jupyter/kernels/python2
What else should I try?
EDIT:
I tried specifying the shell as Adiii suggested with the following:
FROM continuumio/anaconda3:latest
WORKDIR /app/
COPY requirements.txt /app/
RUN pip install --upgrade pip && \
pip install -r requirements.txt
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
RUN conda create -y -n py2 python=2.7
RUN conda activate py2
RUN conda install -y notebook ipykernel
RUN ipython kernel install --user
RUN conda deactivate
This allows the container to build, but for some reason the Python 2.7 kernel was never registered, even though the py2 environment exists:
(base) root@31169f698f14:/app# jupyter kernelspec list
Available kernels:
python3 /root/.local/share/jupyter/kernels/python3
(base) root@31169f698f14:/app# conda info --envs
# conda environments:
#
base * /opt/conda
py2 /opt/conda/envs/py2
(base) root@31169f698f14:/app# conda activate py2
(py2) root@31169f698f14:/app# jupyter kernelspec list
Available kernels:
python3 /root/.local/share/jupyter/kernels/python3
From this issue, you need to specify the SHELL directive in the Dockerfile, like SHELL ["/bin/bash", "-c"]. The problem could be that the default shell for RUN commands is sh.
This is similar to the solutions above, but avoids some of the
boilerplate in every RUN command:
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
Then something like this should work as expected:
RUN conda activate my-env && conda info --envs
Or, to set the environment persistently (including for an interactive
shell) you could:
RUN echo "conda activate my-env" >> ~/.bashrc
Dockerfile
FROM continuumio/anaconda3:latest
WORKDIR /app/
RUN pip install --upgrade pip
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
RUN conda create -y -n py2 python=2.7
RUN conda activate py2
RUN conda install -y notebook ipykernel
RUN ipython kernel install --user
RUN conda deactivate
This was what ended up working:
RUN conda env create -f py2_env.yaml
RUN conda env create -f py3_env.yaml
RUN /bin/bash -c "conda init bash && source /root/.bashrc && conda activate py2 && conda install -y notebook ipykernel && ipython kernel install --user && conda deactivate"
RUN /bin/bash -c "conda init bash && source /root/.bashrc && conda activate py3 && conda install -y notebook ipykernel && ipython kernel install --user && conda deactivate"
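An alternative worth noting: conda run (available in reasonably recent conda releases) executes a command inside a named environment without activation, which may shrink the working recipe above and avoid the init/source gymnastics entirely. A sketch, assuming the same py2_env.yaml and py3_env.yaml files:

```dockerfile
FROM continuumio/anaconda3:latest
WORKDIR /app/
COPY py2_env.yaml py3_env.yaml /app/
RUN conda env create -f py2_env.yaml && \
    conda env create -f py3_env.yaml
# conda run executes the command inside the env, so no `conda activate`
# (and no SHELL/.bashrc setup) is needed in RUN steps.
RUN conda run -n py2 conda install -y notebook ipykernel && \
    conda run -n py2 python -m ipykernel install --user --name python2
RUN conda run -n py3 conda install -y notebook ipykernel && \
    conda run -n py3 python -m ipykernel install --user --name python3
```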

How to install packages from yaml file in Conda

I would like to have one YAML file that could serve both to create virtual environments and (most importantly) as a base for installing packages with conda into the global env. I am trying:
conda install --file ENV.yaml
But it is not working, since conda expects a pip-like format for the requirements. What command should I execute to install packages from my YAML file globally?
You want the conda-env command instead, specifically
conda env update -n my_env --file ENV.yaml
Read the conda env update --help for details.
If you wish to install this in the base env, then you would use
conda env update -n base --file ENV.yaml
Note that the base env isn't technically "global", but rather just the default env as well as where the conda Python package lives. All envs are isolated unless you are either using the --stack flag during activation to override the isolation or have - contra recommended practice - manually manipulated PATH to include an env.
If your conda env is already activated, use:
conda env update --file environment.yml
Or update a specific environment without activating it:
conda env update --name envname --file environment.yml
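For reference, a minimal environment.yml of the shape these commands expect is sketched below; the name, channels, and versions are illustrative (the name: field is overridden when you pass --name/-n explicitly):

```yaml
name: my_env
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pandas
  - pip
  - pip:
      - requests
```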

Docker and Conda: Differences when building the same container on Mac and on Ubuntu

I'm using to Docker to build a Python container with the intention of having a reproducible environment on several machines, which are a bunch of development Macbooks and several AWS EC2 servers.
The container is based on continuumio/miniconda3, i.e. Dockerfile starts with
FROM continuumio/miniconda3
A few days ago, on Ubuntu, the conda install and conda upgrade commands in the Dockerfile complained about a newer Conda version (4.4.11) being available:
==> WARNING: A newer version of conda exists. <==
current version: 4.4.10
latest version: 4.4.11
Please update conda by running
$ conda update -n base conda
If I ignore this, package installations quit with an error:
Downloading and Extracting Packages
The command '/bin/sh -c conda install -y pandas=0.22.0 matplotlib
scikit-learn=0.19.1 pathos lazy openpyxl pytables dill pydro psycopg2
sqlalchemy pyarrow arrow-cpp parquet-cpp scipy tensorflow keras
xgboost' returned a non-zero code: 1
When I add this conda update... step to the Dockerfile, things work again.
What's really annoying, however, is that the update that makes things run on Ubuntu does not work with Docker on Mac. I get the following error:
CondaEnvironmentNotFoundError: Could not find environment: base .
You can list all discoverable environments with `conda info --envs`.
Note that I get this error when I docker build the same Dockerfile that works on the Ubuntu machine, which kind of ruins the whole point of using Docker in the first place. On the Mac, the old version of the file (without conda update -n base conda) still runs fine and installs all packages.
Docker / Conda experts, any ideas?
Edit: Here's the full Dockerfile (the one that works in Ubuntu):
# Use an official Python runtime as a parent image
FROM continuumio/miniconda3
WORKDIR /app/dev/predictive.analytics
RUN apt-get update; \
apt-get install -y gcc tmux htop
RUN conda update -y -n base conda
RUN conda config --add channels babbel; \
conda config --add channels conda-forge;
RUN conda install -y pandas=0.22.0 matplotlib scikit-learn=0.19.1 pathos lazy openpyxl pytables dill pydro psycopg2 sqlalchemy pyarrow arrow-cpp parquet-cpp scipy tensorflow keras xgboost
RUN pip install recordclass sultan
RUN conda upgrade -y python
ENV DATA_DIR /host/data
ENV PYTHONPATH /host/predictive.analytics/python
ENV PATH="/host/predictive.analytics:${PATH}"
Perhaps you're using an outdated miniconda image on one of the build machines; try doing docker build --pull --no-cache.
Docker doesn't necessarily pull the latest image from the repository, so unless you pass --pull, some of your machines may be starting the build with an outdated base image.

Conda: Creating a virtual environment

I'm trying to create a virtual environment. I've followed steps from both Conda and Medium.
Everything works fine until I need to source the new environment:
conda info -e
# conda environments:
#
base * /Users/fwrenn/anaconda3
test_env /Users/fwrenn/anaconda3/envs/test_env
source ~/anaconda3/bin/activate test_env
_CONDA_ROOT=/Users/fwrenn/anaconda3: Command not found.
Badly placed ()'s.
I can't figure out the problem. Searching on here turns up solutions that say to add lines to your .bash_profile, but I don't work in Bash, only C shell (csh). It looks like it's unable to build the directory path in activate.
My particulars:
OS X
Output of python --version:
Python 3.6.3 :: Anaconda custom (64-bit)
Output of conda --version:
conda 4.4.7
I am not sure what causes the problem in your case, but the code below works for me without any issues (OS X, the same version of Conda as yours).
Creation of the environment
conda create -n test_env python=3.6.3 anaconda
Some explanation, since the documentation of conda create is not entirely clear:
-n test_env sets name of the environment to test_env
python=3.6.3 anaconda says that you want to use Python version 3.6.3 in this environment (exactly the one you have; you can use a different one if you need it) and the package anaconda. You can put all the things you need there, separated by spaces, e.g. sqlite matplotlib requests, and specify their versions the same way as for python.
Activation
conda activate test_env
Deactivation
conda deactivate
Getting rid of it
conda remove -n test_env --all
Check if Conda is installed
conda -V
Check if Conda is up to date
conda update conda
Create a virtual environment
conda create -n yourenvname python=x.x anaconda
Activate your virtual environment
source activate yourenvname
Install additional Python packages to a virtual environment
conda install -n yourenvname [package]
Deactivate your virtual environment
source deactivate
Delete the virtual environment
conda remove -n yourenvname --all
I was able to solve my problem. Executing the source activate test_env command wasn't picking up my .bash_profile, since I normally work in tcsh. Simply starting a subshell in Bash was enough to get activate working. I guess I assumed, incorrectly, that the activate command would start a child process in Bash and use Bash environment variables.
> conda info -e
> # conda environments:
> #
> base * ~/anaconda3
> test_env ~/anaconda3/envs/test_env
> bash
~$ source ~/anaconda3/bin/activate test_env
(test_env) ~$
(test_env) ~$ conda info -e
# conda environments:
#
test_env * ~/anaconda3/envs/test_env
root ~/anaconda3
