How to import torchreid in Google Colab?

I'm trying to install torchreid, a library for person re-identification in PyTorch. I've followed the steps mentioned in the git repository, but I'm getting this error.
#conda install
!wget -c https://repo.anaconda.com/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh
!chmod +x Miniconda3-4.5.4-Linux-x86_64.sh
!bash ./Miniconda3-4.5.4-Linux-x86_64.sh -b -f -p /usr/local
!conda install -q -y --prefix /usr/local python=3.6 ujson
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages')
!git clone https://github.com/KaiyangZhou/deep-person-reid.git
!cd deep-person-reid/
!conda create --name torchreid python=3.7
!conda activate torchreid
!pip install -r /content/deep-person-reid/requirements.txt
!conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
import torchreid
Error:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-4faf46a39b5d> in <module>()
----> 1 import torchreid
ModuleNotFoundError: No module named 'torchreid'

Run the command below:
# install torchreid (don't need to re-build it if you modify the source code)
python setup.py develop

First deactivate the conda environment 'torchreid' that you created, then activate it again and try the import.
conda deactivate
conda activate torchreid
Note: you are installing torchreid into a specific conda environment, which is not accessible from outside that environment.
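For a Colab notebook specifically, here is a condensed sketch of the whole flow using Colab's own Python instead of a separate conda environment (the paths and the version check are assumptions, and a runtime restart may be needed before the import succeeds):
!git clone https://github.com/KaiyangZhou/deep-person-reid.git
%cd deep-person-reid
!pip install -r requirements.txt
# install torchreid in develop mode so source edits don't require a re-build
!python setup.py develop
import torchreid
print(torchreid.__version__)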

Related

How can I install the FeniCS dolfin module?

So I'm trying to install FEniCS from the instructions here. I did the
pip3 install fenics-ffc --upgrade
inside my virtualenv and it worked, but when I try to import dolfin I get a ModuleNotFoundError. I'm not sure how to get dolfin installed. I did
pip install pybind11
to get pybind11 installed, then copied the code for the dolfin installation into my cmd window:
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)")
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr
mkdir dolfin/build && cd dolfin/build && cmake .. && make install && cd ../..
mkdir mshr/build && cd mshr/build && cmake .. && make install && cd ../..
cd dolfin/python && pip3 install . && cd ../..
cd mshr/python && pip3 install . && cd ../..
but it just spat out dozens of errors like:
FENICS_VERSION=$(python3 -c"import ffc; print(ffc.__version__)") 'FENICS_VERSION' is not recognized as an internal or external command, operable program or batch file.
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/dolfin Cloning into 'dolfin'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
git clone --branch=$FENICS_VERSION https://bitbucket.org/fenics-project/mshr Cloning into 'mshr'...
fatal: Remote branch $FENICS_VERSION not found in upstream origin
There were lots more errors after that too. Am I not supposed to paste the dolfin code into cmd? I don't know much about this stuff, so I'm unsure of how to get the dolfin module. I've previously only used pip to get my packages, but this doesn't work for dolfin as it doesn't appear to be on PyPI.
Do you have cmake? The docs say you need it. They also say to do this to install pybind11, not pip install pybind11:
For building optional Python interface of DOLFIN and mshr, pybind11 is needed since version 2018.1.0. To install it:
wget -nc --quiet https://github.com/pybind/pybind11/archive/v${PYBIND11_VERSION}.tar.gz
tar -xf v${PYBIND11_VERSION}.tar.gz && cd pybind11-${PYBIND11_VERSION}
mkdir build && cd build && cmake -DPYBIND11_TEST=off .. && make install
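This snippet assumes PYBIND11_VERSION is already defined in the shell; a minimal sketch with a hypothetical version value (check the FEniCS docs for the one matching your release):
PYBIND11_VERSION=2.2.3   # hypothetical value; use the version listed in the FEniCS docs
wget -nc --quiet https://github.com/pybind/pybind11/archive/v${PYBIND11_VERSION}.tar.gz
tar -xf v${PYBIND11_VERSION}.tar.gz && cd pybind11-${PYBIND11_VERSION}
mkdir build && cd build && cmake -DPYBIND11_TEST=off .. && make install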
Also, what is your OS?
So here is how you can install fenics 2019.1 using conda (miniconda):
Install Conda:
First go to https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html
and follow the installation instructions.
Create a conda environment for fenics:
Open a terminal and type:
conda create -n fenics
To activate the created environment "fenics", type:
conda activate fenics
If you want the fenics environment to be activated automatically every time you open a new terminal, then open your .bashrc file (it should be under /home/username/.bashrc) and add the line "source activate fenics" below the ">>> conda initialize >>>" block.
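For illustration, the end of ~/.bashrc would then look roughly like this (the initialize block is written by the installer; only the last line is added by hand):
# >>> conda initialize >>>
# ... lines managed by 'conda init' ...
# <<< conda initialize <<<
source activate fenics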
Install fenics:
Type all these commands:
conda install -c conda-forge h5py=*=*mpich*
conda install -c conda-forge fenics
pip install meshio
pip install matplotlib
pip install --upgrade gmsh
conda install -c conda-forge paraview
pip install scipy
The second command will take a while. I added a few nice-to-have programs like gmsh and paraview, which will help you create meshes and view your solutions.
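A quick way to confirm the installation afterwards (a sketch):
conda activate fenics
python -c "import dolfin; print(dolfin.__version__)"
python -c "import meshio"   # one of the pip extras installed above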

setup.py install within conda env via Docker

So I'm new-ish to Docker.
What I'm trying to do:
Create a docker image (that puts you into an anaconda environment and downloads some libraries via conda), and within that environment run a setup.py install so that the CPLEX library is also in the image.
Then build the image. Next I run the image and set a scratch directory that should hold all the Python programs I want to run.
This is my docker file:
# syntax=docker/dockerfile:1
from continuumio/miniconda3
RUN conda create --name opt python=3.7
ENV PATH /opt/conda/envs/mro_env/bin:$PATH
RUN /bin/bash -c "source activate opt"
RUN conda install -c conda-forge fenics
RUN conda install -c conda-forge scipy
RUN conda install -c conda-forge numpy
RUN conda install -c conda-forge matplotlib
RUN conda install -c conda-forge ipopt
RUN conda install -c conda-forge glpk
RUN conda install -c conda-forge pyomo
RUN conda install -c conda-forge pandas
RUN conda install -c conda-forge scikit-learn
ENTRYPOINT [ "python" ]
COPY CPLEX_Studio129/ /CPLEX_Studio129/
COPY . /codes/
RUN cd /CPLEX_Studio129/ && python cplex/python/3.7/x86-64_osx/setup.py install
WORKDIR /codes/
RUN cd /codes/
Next I build the image, and then run the image :
sudo docker build -t ex:latest .
docker run --rm -it --entrypoint /bin/bash ex
(base) root@bc9beca93d69:/codes# python test_cplex.py
Traceback (most recent call last):
File "/codes/test_cplex.py", line 1, in <module>
import cplex
ModuleNotFoundError: No module named 'cplex'
(base) root@bc9beca93d69:/codes# ls
For what it's worth, if I make /CPLEX_Studio129/ the WORKDIR, install the library, then start Python in that directory and try to import cplex, it works. But my hope was that it would work when I run a program that imports cplex from /codes/, where it cannot be found.
Any tips, and a brief explanation of where I went wrong, would be greatly appreciated.
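One thing worth checking (a debugging sketch; the env path below is an assumption): each RUN line starts a fresh shell, so the earlier source activate opt does not carry over to the RUN that executes setup.py, and cplex may have been installed into the base interpreter rather than the opt environment.
docker run --rm -it --entrypoint /bin/bash ex
# inside the container:
which python
python -c "import cplex; print(cplex.__file__)"
/opt/conda/envs/opt/bin/python -c "import cplex; print(cplex.__file__)"   # assumed env path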

MEEP install in Google Colab

I am trying to use Google Colab to install MEEP.
!wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -p ./anaconda
import os
os.environ['PATH'] += ":/content/anaconda/bin"
!conda create -n mp -c conda-forge pymeep
import sys
sys.path.append('/content/anaconda/envs/mp/lib/python3.8/site-packages/')
I copied the code from here: https://gist.github.com/venky18/e24df1e55502e2d6523881b3f71c0bff.
However, it produces an error message:
ImportError: /content/anaconda/envs/mp/lib/python3.9/site-packages/meep/_meep.so: undefined symbol: PyCMethod_New
How do I modify my code to install it correctly?
From here. This works currently:
!wget -c https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
!chmod +x Miniconda3-latest-Linux-x86_64.sh
!bash ./Miniconda3-latest-Linux-x86_64.sh -b -u -p /usr/local
import os
if os.path.isdir('/content/anaconda'): root_path = '/content/anaconda'
else: root_path = '/usr/local/'
os.environ['PATH'] += f":{root_path}/bin"
!conda create -n mp -c conda-forge pymeep python=3.7 -y
print(">> ", root_path)
import sys
sys.path.append(f'{root_path}/envs/mp/lib/python3.7/site-packages/')
I believe the problem with the code snippet you linked is that Colab's default Python version is currently 3.7.12, while conda now defaults to Python 3.8. Trying to use packages from a 3.8 install in a 3.7 install is asking for trouble. Adding python=3.7 to the conda create line forces conda to use Python 3.7, so the right packages are installed.
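A quick check in the next Colab cell (a sketch) to confirm the module now loads from the Python 3.7 environment:
import meep as mp
print(mp.__file__)   # should point under .../envs/mp/lib/python3.7/site-packages/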

Can't run 2 conda commands in a Dockerfile

I have a Dockerfile like:
FROM conda/miniconda3-centos7
WORKDIR /tmp
COPY app/ /tmp
RUN conda install gcc_linux-64
RUN conda install gxx_linux-64
CMD ["python", "Hello_World.py"]
The build gets stuck after the first RUN conda command. The error I get is:
WARNING: The conda.compat module is deprecated and will be removed in a future release.
==> WARNING: A newer version of conda exists. <==
current version: 4.6.11
latest version: 4.9.2
Please update conda by running
$ conda update -n base -c defaults conda
Removing intermediate container 277edb28a107
---> e6b51d71eac0
Step 7/8 : RUN conda install gxx_linux-64
---> Running in 94166fbfff2a
Traceback (most recent call last):
File "/usr/local/bin/conda", line 12, in <module>
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
The command '/bin/sh -c conda install gxx_linux-64' returned a non-zero code: 1
Can you please suggest a fix?
Adding conda update -n base -c defaults conda to your Dockerfile solves the problem.
You could also consider chaining commands with && to optimize the creation of docker images. Read more about it here.
An optimized Dockerfile would be:
FROM conda/miniconda3-centos7
WORKDIR /arnav
COPY app/ /arnav
RUN conda update -n base -c defaults conda \
&& conda install gcc_linux-64 && conda install gxx_linux-64
CMD ["python", "Hello_World.py"]

How do I create a docker container with both Python 2 and 3 available in Jupyter Notebooks?

I am trying to create a docker container that has anaconda and supports Jupyter notebooks with both python 2 and 3. I created a container based on the official anaconda python 3 container like so:
FROM continuumio/anaconda3:latest
WORKDIR /app/
COPY requirements.txt /app/
RUN pip install --upgrade pip && \
pip install -r requirements.txt
Once on the container, I am able to get python 2 and 3 working with Jupyter notebooks by entering the following commands:
conda create -y -n py2 python=2.7
conda activate py2
conda install -y notebook ipykernel
ipython kernel install --user
conda deactivate
Then when I go back to base and run jupyter kernelspec list I see:
(base) root@1683850aacf0:/app# jupyter kernelspec list
Available kernels:
python2 /root/.local/share/jupyter/kernels/python2
python3 /root/.local/share/jupyter/kernels/python3
and when I open a jupyter notebook server I see both python 2 and 3 options. This is the state that I would like to end up in. I tried to turn all these into docker commands like so:
RUN conda create -y -n py2 python=2.7
RUN conda activate py2
RUN conda install -y notebook ipykernel
RUN ipython kernel install --user
RUN conda deactivate
but running the command to activate or deactivate a conda environment (RUN conda activate py2) gives me an error:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- fish
- tcsh
- xonsh
- zsh
- powershell
Adding RUN conda init bash before the commands doesn't change the error message.
Also, based on this SO question I tried:
RUN conda create -y -n py3 python=3.7 ipykernel
RUN conda create -y -n py2 python=2.7 ipykernel
but after I build and enter the container I only see the python 3 environment:
(base) root@b301d8ab5f1e:/app# jupyter kernelspec list
Available kernels:
python3 /opt/conda/share/jupyter/kernels/python3
I can activate py2 and see that kernel, but not both:
(py2) root@b301d8ab5f1e:/app# jupyter kernelspec list
Available kernels:
python2 /opt/conda/envs/py2/share/jupyter/kernels/python2
What else should I try?
EDIT:
I tried specifying the shell as Adiii suggested with the following:
FROM continuumio/anaconda3:latest
WORKDIR /app/
COPY requirements.txt /app/
RUN pip install --upgrade pip && \
pip install -r requirements.txt
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
RUN conda create -y -n py2 python=2.7
RUN conda activate py2
RUN conda install -y notebook ipykernel
RUN ipython kernel install --user
RUN conda deactivate
This allows the image to build, but for some reason the Python 2.7 kernel was still not registered:
(base) root@31169f698f14:/app# jupyter kernelspec list
Available kernels:
python3 /root/.local/share/jupyter/kernels/python3
(base) root@31169f698f14:/app# conda info --envs
# conda environments:
#
base * /opt/conda
py2 /opt/conda/envs/py2
(base) root@31169f698f14:/app# conda activate py2
(py2) root@31169f698f14:/app# jupyter kernelspec list
Available kernels:
python3 /root/.local/share/jupyter/kernels/python3
From this issue, you need to specify the SHELL directive in the Dockerfile, like SHELL ["/bin/bash", "-c"]. The problem could be that the default shell for RUN commands is sh.
This is similar to the solutions above, but avoids some of the
boilerplate in every RUN command:
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
Then something like this should work as expected:
RUN conda activate my-env && conda info --envs
Or, to set the environment persistently (including for an interactive
shell) you could:
RUN echo "conda activate my-env" >> ~/.bashrc
Dockerfile
FROM continuumio/anaconda3:latest
WORKDIR /app/
RUN pip install --upgrade pip
ENV BASH_ENV ~/.bashrc
SHELL ["/bin/bash", "-c"]
RUN conda create -y -n py2 python=2.7
RUN conda activate py2
RUN conda install -y notebook ipykernel
RUN ipython kernel install --user
RUN conda deactivate
This was what ended up working:
RUN conda env create -f py2_env.yaml
RUN conda env create -f py3_env.yaml
RUN /bin/bash -c "conda init bash && source /root/.bashrc && conda activate py2 && conda install -y notebook ipykernel && ipython kernel install --user && conda deactivate"
RUN /bin/bash -c "conda init bash && source /root/.bashrc && conda activate py3 && conda install -y notebook ipykernel && ipython kernel install --user && conda deactivate"
