How to mount a local volume for my Docker container? - python

I am a newbie to Linux and Docker. I am using the command below to run the container:
sudo nvidia-docker run --gpus all -p 8888:8888 -it -v /home/pyman/PEYMAN:??????? 21bbc6c8f7ed
where /home/pyman/PEYMAN is my local directory
and 21bbc6c8f7ed is the image ID.
After running this command, the prompt changes to root@0ce2ee24bac0:/workspace#
Then I run jupyter notebook, and it prints two links, of which only the second opens Jupyter Notebook in the browser:
http://hostname:8888/?token=xxxxxxxxxxx
http://127.0.0.1:8888/?token=xxxxxxxxxxx
But I don't know what container_dir to put in place of ??????? in the first command, or how to find that directory. Is container_dir the same directory that Jupyter runs in?

The container_dir is the path inside the container where you'd like to see your mounted files. The directory does not even have to exist inside the container; you can pick almost any place to mount your files. If you work with Jupyter, it makes sense to put your files under the working directory:
docker run -v /home/pyman/PEYMAN:/workspace/myfiles
Once inside the container you will find /home/pyman/PEYMAN in /workspace/myfiles.
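Putting it together with the command from the question, a full invocation might look like this (the /workspace/myfiles target is only a suggestion; any absolute path inside the container works):
sudo nvidia-docker run --gpus all -p 8888:8888 -it -v /home/pyman/PEYMAN:/workspace/myfiles 21bbc6c8f7ed
Anything you save under /workspace/myfiles from Jupyter then lands in /home/pyman/PEYMAN on the host and survives the container being removed.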

Related

Is there a way to modify files inside docker via PyCharm?

I want to modify files inside a Docker container with PyCharm. Is there a possibility of doing such a thing?
What you want is called bind mounting, and it can be achieved by adding the -v parameter to your run command. Here's an example with an nginx image:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
The specific parameter obtaining this result is -v.
-v ~/nginxlogs:/var/log/nginx sets up a bindmount volume that links the /var/log/nginx directory from inside the Nginx container to the ~/nginxlogs directory on the host machine.
Docker uses a : to split the host’s path from the container path, and the host path always comes first.
In other words, the files that you edit on your local filesystem are synced to the folder inside the container immediately.
Source
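To see the bind mount in action, assuming the nginx container above is running, you can make a request and then watch the log file appear on the host side:
curl http://localhost:5000/
tail ~/nginxlogs/access.log
Each request handled inside the container shows up immediately in ~/nginxlogs on the host; the same mechanism exposes project files to PyCharm.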
Yes. There are multiple ways to do this, and you will need to have PyCharm installed inside the container.
The following set of instructions should work:
docker ps - This will show you details of running containers
docker exec -it *<name of container>* /bin/bash
At this point you will have a shell inside the container. If PyCharm is not installed, you will need to install it. The following should work:
sudo apt-get install pycharm-community
Good to go!
Note: The installation does not persist once the container is removed or the image is rebuilt. You should add the PyCharm installation step to your Dockerfile if you need to access it regularly.
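A minimal Dockerfile sketch of that step, assuming (as the answer above does) that pycharm-community is installable through the image's apt repositories; the base image here is just an example:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y pycharm-community
Rebuilding with docker build then bakes PyCharm into the image, so it is present in every container you start from it.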

Unable to delete the files given by Docker with mounted volume

I am running a Python script that downloads a CSV file from an API, using a docker run command.
I am using one Dockerfile to install all the OS level dependencies and requirements.
Once the image is built, I am using the following command:
docker run -v $(pwd)/Reports:/usr/src/app/Reports --rm ImgName python myScript.py -d 2015-11-25
As mentioned in the above command I have one directory named Reports.
I have mounted that directory with Docker.
The script executes successfully and downloads a CSV file, but the problem is that the downloaded CSV file is read-only; I am not able to delete it.
I need to have the flexibility to delete any file downloaded via script.
Note: When I run the script without Docker, i.e. python myScript.py, I can read, write and delete the file.
Any feedback will be appreciated.
You can run Docker as your own user with the following command:
docker run --user $(id -u):$(id -g) ...
This will make the container run as your user, and all the files will be created with the right owner (see the docker run docs). Don't forget to delete the files you already have there, or create a new folder for this.
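Applied to the command from the question, that would look like this (only the --user flag is new; everything else is unchanged):
docker run --user $(id -u):$(id -g) -v $(pwd)/Reports:/usr/src/app/Reports --rm ImgName python myScript.py -d 2015-11-25
The CSV files written into Reports are then owned by your host user instead of root, so you can read, write and delete them normally.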

How to persist my notebooks and data in my Docker image/container

I am new to Docker and somewhat confused about containers and images. I want to use Docker for TensorFlow development. All I need is an easy way to write Jupyter notebooks and use GPU-powered TensorFlow.
I already have the latest TensorFlow Jupyter Python 3 image. I run the image with
docker run --rm --runtime=nvidia -v -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
How can I make it so that my data, and the Jupyter notebooks I add and edit while working in that image, won't get lost after I exit the process? I know Docker images aren't meant to persist state, but I am so new to this that I just want something to work in with persistent data. Can someone guide me through this or point me to a resource that will answer all my prayers?
I would also like to move some stuff into the container that is going to be run, so that I can access some custom Python libs, because they contain some things that my notebooks need to import!
Side questions:
--rm removes the container or whatever; by default I run it without this flag, and still my data was lost.
-v is for volumes? I tried -v Bachelor:/app to mount a volume like so. It apparently doesn't make any difference, and I don't know how to use the volume Bachelor that I created. Instead, a multitude of unnamed volumes are created that are not usable whenever I run this.
-it also does something, but I have no idea what.
-p is the port number, right?
Use Docker volumes:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
Example:
docker run --runtime=nvidia -v ${SOURCE_FOLDER}:${DEST_FOLDER} -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
Change SOURCE_FOLDER and DEST_FOLDER accordingly (use absolute paths!).
Now if you navigate to localhost:8888 and create a notebook in DEST_FOLDER, it should also be available in SOURCE_FOLDER.
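For example, a concrete invocation might be (a sketch; the host path is hypothetical, and /tf is the directory the official TensorFlow Jupyter images serve notebooks from):
docker run --runtime=nvidia -v /home/me/Bachelor:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
Notebooks saved under /tf/notebooks then live in /home/me/Bachelor on the host and survive the container's removal. Your custom Python libs can be brought in the same way: mount them with a second -v flag and add their mount point to PYTHONPATH so your notebooks can import them.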
As for your side questions:
-it runs a container in interactive mode. You generally add /bin/bash after the run command, so you can start an interactive bash session inside the container.
--rm removes the container after it exits.
Those options aren't really necessary for your use case. Just remember to use docker ps and docker rm <ID> to clean up your container after you're done.

How can I make files shared with the host appear in the Docker container?

I'm trying to create a container to run a program. I'm using a pre-configured image and now I need to run the program. However, it's a machine learning program and I need a dataset from my computer to run.
The file is too large to be copied to the container. It would be best if the program running in the container searched the dataset in a local directory of my computer, but I don't know how I can do this.
Well, I have mounted the shared folder from my machine with docker run -it -v ~/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py:/ydk-py ydkdev/ydk-py, but none of the files in the ydk-py folder are shown in the container. Hiding them may be the safe, usually-desired behavior, but for development and instance setup it would be immensely useful to have access to the existing file structure.
docker run with -v will automatically mount sub-directories as well. In your case you are using a relative path; you need to use an absolute path instead, as per this documentation.
So change your command from
docker run -it -v ~/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py:/ydk-py ydkdev/ydk-py
to
docker run -it -v /home/<whatever user>/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py:/ydk-py ydkdev/ydk-py
it will work.
Make sure you have enough permissions on the directory that you are trying to mount.
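A quick sanity check for both points, assuming the path from the corrected command (adjust to your actual home directory):
ls -ld /home/<whatever user>/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py
If this fails, or the permissions shown don't give your user read access, fix that before retrying the mount.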

Setting up graph-tool on Docker Toolbox for Windows

I followed the graph-tool docker installation instructions here. I've set up Docker Toolbox (can't use Docker for Windows, not on Pro), and I've gotten jupyter running with the Docker image.
However, I need to access a notebook in my C: drive. For the sake of this post let's say the notebook is in C:\Users\Gab\Desktop. I've successfully moved into that location, but when I run the command docker run -p 8888:8888 -p 6006:6006 -it -u user -w /home/user tiagopeixoto/graph-tool bash, it opens a bash in /home/user, not in the directory I cd'd into previously.
From what I understand, the -w /home/user is what tells it where to open, but I'm not sure how to tell it to open in the Desktop folder.
How can I set things up properly so that I can run the command jupyter notebook --ip 0.0.0.0, and still be able to access the notebook I need?
Thanks!
Here's the deal with Docker. When you execute the docker run command, an entirely new subsystem is created that is separate from your host operating system (this is known as a Docker container). This subsystem runs the tiagopeixoto/graph-tool image, so graph-tool (and hence jupyter-notebook) and the working directory /home/user exist inside this system rather than on the host OS you use to run the Docker container (in your case, Windows). Unfortunately for you, the notebook you wish to view with jupyter-notebook isn't present inside the container; it is located elsewhere (on Windows, to be exact).
What you can do in this case is mount a folder of the host operating system into the container, such that this folder contains the notebook you wish to view:
Open a new command prompt and type in the following command: docker run -p 8888:8888 -p 6006:6006 -v /c/Users/Gab/Desktop:/mnt/temp/ -it -u user -w /home/user tiagopeixoto/graph-tool bash
The main difference to note here is the -v switch, which mounts C:\Users\Gab\Desktop from the host system into the Docker container at /mnt/temp/. Once that is done, try viewing the notebook in /mnt/temp with jupyter-notebook.
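From the bash prompt inside the container you can then point Jupyter at the mounted folder, reusing the command from the question:
cd /mnt/temp
jupyter notebook --ip 0.0.0.0
On Docker Toolbox, open the printed link in your Windows browser with 127.0.0.1 replaced by the Docker machine's IP (docker-machine ip will print it, typically 192.168.99.100).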
According to this post, there is a known issue related to mounting a volume on Windows, so please check this out as well: docker toolbox mount file on windows
