I'm currently working with a remote Jupyter notebook (through a Docker image) and am having an issue: a folder that exists in the directory where I'm running the notebook doesn't show up in the notebook's file tree.
Command I'm using to execute the notebook:
nvidia-docker run -it -p 8888:8888 --entrypoint /usr/local/bin/jupyter NAMEOFDOCKERIMAGE notebook --allow-root --ip=0.0.0.0 --no-browser
Command I'm using to access the notebook remotely:
ssh -N -f -L localhost:8888:localhost:8888 remote_user@remote_host
What's weird is that if I navigate to the notebook's working directory (on the remote host / server) and add a folder + files, the notebook will not reflect the changes (i.e. mkdir new_folder in the working directory will not add new_folder to the notebook's tree).
Would anyone know why this could be the case, and if so, how to "refresh" / "update" the tree?
Thanks so much for all and any help!
Docker containers have an isolated file system. This means that the program running in the container (jupyter notebook in your case) sees different folders than the ones you have in the host system.
If you want to give the container access to a folder on the host, you can use the -v option when running the container.
In your case, you should run the container with this command:
nvidia-docker run -it -p 8888:8888 -v /PATH_TO_HOST_FOLDER:/PATH_TO_CONTAINER_FOLDER --entrypoint /usr/local/bin/jupyter NAMEOFDOCKERIMAGE notebook --allow-root --ip=0.0.0.0 --no-browser
where:
PATH_TO_HOST_FOLDER is the path of the folder in the host system that you want to share with the container.
PATH_TO_CONTAINER_FOLDER is the mountpoint of the folder in the container file system (e.g., /home/username/work where username is the name of the user in the container).
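For example, a concrete invocation might look like this (the host path, mountpoint, and --notebook-dir value are assumptions you should adapt to your setup):
nvidia-docker run -it -p 8888:8888 -v $HOME/notebooks:/workspace --entrypoint /usr/local/bin/jupyter NAMEOFDOCKERIMAGE notebook --allow-root --ip=0.0.0.0 --no-browser --notebook-dir=/workspace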
The path in the container depends on the Docker image that you are using. If you do not know the path in the container, you can take a look at the container file system by running a bash inside the container with this command:
nvidia-docker run -it --entrypoint /bin/bash NAMEOFDOCKERIMAGE
After you run this command, you are in a bash inside the container, so you can see the inner filesystem with ls, pwd, etc.
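For instance, once inside, a few quick commands help you find a sensible mountpoint (the directories listed here are just guesses):
pwd          # where the shell starts
ls /         # top-level directories
ls /home     # per-user home directories, if any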
I am trying to mount a directory from the host into a container and at the same time run Jupyter from that directory. What am I doing wrong here that makes Docker complain that a file was not found?
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
I tried removing the Python version but still had the same problem. I searched extensively online and couldn't find an answer.
Basically I want to mount that directory, which is a git clone where I have my tensor files. At the same time, I want to run Jupyter Notebook where I can see the files and run them. With so many issues between the Apple M1 processor and TensorFlow, I thought going the Docker route would be better, but am I not surprised :)
Appreciate the help
The docker run command syntax is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The image name tensorflow/tensorflow:nightly-jupyter should come after the options (-v, -p, --name, etc.) and before the command.
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
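If you want the notebook server rather than a shell, you can also drop bash so the image's default command runs (the nightly-jupyter tag is expected to start Jupyter on port 8888 by itself):
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter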
I downloaded a python script and a docker image containing commands to install all the dependencies. How can I run the python script using the docker image?
Copy the Python file into the Docker image, then execute:
docker run image-name python PATH-OF-SCRIPT-IN-IMAGE/script.py
Or you can have the container run it at start-up by adding CMD ["python", "PATH-OF-SCRIPT-IN-IMAGE/script.py"] to the Dockerfile (note that RUN would execute the script at build time, not when the container starts).
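A minimal Dockerfile sketch of that approach (the base image and script path are assumptions):
FROM python:3.9
COPY script.py /app/script.py
CMD ["python", "/app/script.py"]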
How to copy container to host
docker cp <containerId>:/file/path/within/container /host/path/target
How to copy host to the container
docker cp /host/local/path/file <containerId>:/file/path/in/container/file
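For example, with a hypothetical container named worker:
docker cp worker:/app/output.csv ./output.csv      # container -> host
docker cp ./config.json worker:/app/config.json    # host -> container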
Run in interactive mode:
docker run -it image_name python filename.py
or if you want to mount the script from the host and publish a port:
docker run -it -v $(pwd)/filename.py:/filename.py -p 8888:8888 image_name python /filename.py
First, copy your Python script and any other required files into your Docker container.
docker cp /path_to_file <containerId>:/path_where_you_want_to_save
Second, open the container CLI using Docker Desktop and run your Python script.
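If you prefer the terminal over Docker Desktop, docker exec does the same job (the paths here are placeholders):
docker cp ./myscript.py <containerId>:/tmp/myscript.py
docker exec -it <containerId> python /tmp/myscript.py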
The best way, I think, is to make your own image that contains the dependencies and the script.
When you say you've been given an image, I'm guessing that you've been given a Dockerfile, since you talk about it containing commands.
Place the Dockerfile and the script in the same directory. Add the following lines to the bottom of the Dockerfile.
# Existing part of Dockerfile goes here
COPY my-script.py .
CMD ["python", "my-script.py"]
Replace my-script.py with the name of the script.
Then build and run it with these commands
docker build -t my-image .
docker run my-image
Attempt to run ray docker image on M1 results in
$ docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
> WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I've tried to use DOCKER_DEFAULT_PLATFORM=linux/amd64, but then nothing happens:
$ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
>
$ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The latest tag has digest 744f499644cc
The image has /bin/bash defined as the command to run when it starts. When you run it, you don't attach a TTY, so the container exits immediately.
I'm not familiar with the image, so I don't know the way to run it correctly and your port mappings confuse me a bit. But a way to run it is
docker run -it rayproject/ray:latest
That will put you at a prompt inside the container and you can explore the contents.
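Besides the DOCKER_DEFAULT_PLATFORM variable, docker run also accepts an explicit --platform flag, so one sketch that combines it with an interactive TTY would be:
docker run -it --platform linux/amd64 rayproject/ray:latest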
I have been struggling to add env variables into my container for the past 3 hrs :( I have looked through the docker run docs but haven't managed to get it to work.
I have built my image using docker build -t sellers_json_analysis . which works fine.
I then go to run it with: docker run -d --env-file ./env sellers_json_analysis
As per the docs: $ docker run --env-file ./env.list ubuntu bash but I get the following error:
docker: open ./env: no such file or directory.
The .env file is in my root directory
But when running docker run --help I am unable to find anything about env variables, though it does provide the following:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So I'm not sure whether I'm placing things incorrectly. I could add my variables to the Dockerfile, but I want to keep the repo public, as it's a project I would like to display.
Your problem is the wrong path: use either .env or ./.env. When you write ./env, it means a file named env in the current directory.
docker run -d --env-file .env sellers_json_analysis
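For reference, the env file is expected to contain one VAR=value pair per line, for example (these names are made up):
API_KEY=abc123
DB_HOST=localhost
DEBUG=true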
I have a Python script in my Docker container that needs to be executed, but I also need to have interactive access to the container once it has been created (with /bin/bash).
I would like to be able to create my container, have my script executed and be inside the container to see the changes/results that have occurred (no need to manually execute my python script).
The current issue I am facing is that if I use the CMD or ENTRYPOINT instructions in the Dockerfile I am unable to get back into the container once it has been created. I tried using docker start and docker attach, but I'm getting the error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stepped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my python script contains something like (It's irrelevant what it does, in this case it just creates a new file with text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:
root@66bddaa892ed# sudo docker run -i -t image /bin/bash
bash-4.1# ls
newfile.txt
bash-4.1# cat newfile.txt
Created new file with text
bash-4.1# exit
root@66bddaa892ed#
In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.
My way of doing it is slightly different, with some advantages.
It is actually a multi-session setup rather than a single script, but it could be even more usable in some scenarios:
# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>
# Now start it.
docker start new-container
# Now attach bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can use one session to tail logs and another to run actual commands.
BTW, when you detach from the last exec session, your container is still running, so it can keep performing operations in the background.
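For example, two parallel sessions against the same container (the log path is just an assumption):
docker exec -it new-container tail -f /var/log/app.log   # session 1: follow a log
docker exec -it new-container bash                       # session 2: run commands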
You can run a docker image, perform a script and have an interactive session with a single command:
sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"
The second bash will keep the interactive terminal session open, irrespective of the CMD command in the Dockerfile the image has been created with, since that CMD is overridden by the bash -c command above.
There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).
Assuming that the script has not yet been transferred from the Docker host to the docker image by an ADD Dockerfile command, we can map the volumes and run the script from there:
sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"
Example: assuming that the python script in the original question is already on the docker image, we can omit the -v option and the command is as simple as follows:
sudo docker run -it image bash -c "python myscript.py; bash"
Why not this?
docker run --name="scriptPy" -i -t image bash -c "python myscript.py"
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host
Or if you want it to stay on the container
docker run --name="scriptPy" -i -t image bash -c "python myscript.py"
docker start scriptPy
docker attach scriptPy
Hope it was helpful.
I think this is what you mean.
Note: This uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire up stdin/stdout/stderr to the terminal properly, but you could spend the time and use straight subprocess.Popen):
Output:
$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!
Dockerfile:
# Test Docker Image
FROM python:2
ADD myscript.py /usr/bin/myscript
RUN pip install fabric
CMD ["/usr/bin/myscript"]
myscript.py:
#!/usr/bin/env python
from __future__ import print_function
from fabric.api import local
with open("hello.txt", "w") as f:
    f.write("Hello World!")
print("Entering bash...")
local("/bin/bash")
print("Goodbye!")
Sometimes your Python script may call other files in your folder, like other Python scripts, CSV files, JSON files, etc.
I think the best approach is sharing the directory with your container, which makes it easier to create one environment that has access to all the required files.
Create a shell script:
sudo nano /usr/local/bin/dock-folder
Add this as its content:
#!/bin/bash
echo "IMAGE = $1"
## image name is the first param
IMAGE="$1"
## container name is created by combining the image name and a hash of the folder path
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"
# remove the container for this dir, if it exists
## docker rm removes the container
## pwd | md5sum gets a unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)" creates a unique name for the container based on the folder and image
## --force forces the container to be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
echo "## removing previous container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
# create a dedicated container for this folder based on the given image, with the folder mapped into it
## -it interactive mode
## pwd | md5sum gets a unique code for the current folder
## --name="${CONTAINER}" creates a container with a unique name based on the current folder and image
## -v "$(pwd)":/data creates a shared volume mapping the current folder to /data inside the container
## -w /data defines /data as the working dir of the container
## -p 80:80 some port mapping between the container and host (not required)
## "${IMAGE}" the name of the image used as the starting point (e.g., python)
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"
# start the container
docker start "${CONTAINER}"
# enter the container in interactive mode, with the shared folder available
docker exec -it "${CONTAINER}" bash
# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
echo "## removing container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
Add execution permission
sudo chmod +x /usr/local/bin/dock-folder
Then you can run the script from your project folder:
# creates, if it does not exist, a unique container for this folder and image, and attaches an interactive shell
dock-folder python
# destroy the container if it already exists and recreate it
dock-folder python --reset
# destroy the container after closing the interactive mode
dock-folder python --remove
This call will create a new Python container sharing your folder. This makes all the files in the folder, such as CSVs or binary files, accessible.
Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
One issue with this approach is reproducibility. That is, you may install something from your shell session that your application requires to run, but that change only happens inside your container. So anyone who tries to run your code will have to figure out what you did and repeat it.
So, if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change things in your container to be able to run your project, you probably need to create a Dockerfile that records those commands. That makes all the steps, from loading the container to making the required changes and loading the files, easy to replicate.
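For instance, a sketch of such a Dockerfile (the base image and packages are assumptions; substitute whatever you actually installed):
FROM python:3.9
RUN pip install pandas numpy
WORKDIR /data
CMD ["bash"]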