Is there a way to stop a command in a docker container - python

I have a docker container that is running a command. In the Dockerfile the last line is CMD ["python", "myprogram.py"]. This runs a Flask server.
There are scenarios when I update myprogram.py and need to kill the command, transfer the updated myprogram.py file to the container, and execute python myprogram.py again. I imagine this to be a common scenario.
However, I haven't found a way to do this. Since this is the only command in the Dockerfile, I can't seem to kill it. From the container's terminal, when I run ps aux I can see that python myprogram.py is assigned a PID of 1. But when I try to kill it with kill -9 1, it doesn't seem to work.
Is there a workaround to accomplish this? My goal is to be able to change myprogram.py on my host machine, transfer the updated myprogram.py into the container, and execute python myprogram.py again.

You could use a volume to mount your myprogram.py source code into your container, and just docker restart the container after each change.
To make a volume:
add a VOLUME directive in your Dockerfile and rebuild your image:
VOLUME /path/to/mountpoint
and use the -v option when running your image.
docker run -d -v /path/to/dir/to/mount:/path/to/mountpoint myimage
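For the asker's Flask scenario, a minimal sketch could look like this (the image name myimage, container name myflask, and container path /app are hypothetical):
docker run -d --name myflask -v /host/path/to/src:/app myimage
After editing myprogram.py on the host, the container picks up the new file with a restart:
docker restart myflask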
/!\ The steps above are only sufficient in a Linux environment. /!\
To use this with something else (like docker-machine on OS X), you must also make a mount point in the VM running Docker (probably VirtualBox).
You'll have the following scheme:
<Dir to share from your host (OSX)> <= (1) mounted on => <Mountpoint on VM> <= (2) mounted on => <Container mountpoint>
Mount (2) works exactly like the Linux case (in fact, it is a Linux case, since the VM runs Linux).
The only step added is mounting the directory you want to share from your host on your VM.
Here are the steps to mount the directory you want to share on the mountpoint in your VM, and then use it with your container:
1- First, stop the docker machine:
docker-machine stop <machine_name>
2- Add a shared folder to the VM:
VBoxManage sharedfolder add <machine_name> --name <mountpoint_name> --hostpath <dir_to_share>
3- Restart the docker machine:
docker-machine start <machine_name>
4- Create the mountpoint over SSH and mount the shared folder on it:
docker-machine ssh <machine_name> "sudo mkdir <mountpoint_in_vm>; sudo mount -t vboxsf <mountpoint_name> <mountpoint_in_vm>"
5- And then, to mount the directory in your container, run:
docker run -d -v <mountpoint_in_vm>:<mountpoint_in_container> myimage
And to clean all this up when you don't need it anymore:
6- Unmount in the VM:
docker-machine ssh <machine_name> "sudo umount <mountpoint_in_vm>; sudo rmdir <mountpoint_in_vm>"
7- Stop the VM:
docker-machine stop <machine_name>
8- Remove the shared folder:
VBoxManage sharedfolder remove <machine_name> --name <mountpoint_name>
Here is a script I made for study purposes; feel free to use it if it helps.
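A minimal sketch of what such a wrapper script might look like (docker-machine and VBoxManage must be on your PATH; every name below is a placeholder):
#!/bin/sh
# Share a host directory with the docker-machine VM (steps 1-4 above).
MACHINE=default
SHARE_NAME=myshare
HOST_DIR=/path/to/dir/to/share
VM_MOUNTPOINT=/mnt/myshare
docker-machine stop "$MACHINE"
VBoxManage sharedfolder add "$MACHINE" --name "$SHARE_NAME" --hostpath "$HOST_DIR"
docker-machine start "$MACHINE"
docker-machine ssh "$MACHINE" "sudo mkdir -p $VM_MOUNTPOINT; sudo mount -t vboxsf $SHARE_NAME $VM_MOUNTPOINT"
The cleanup (steps 6-8) can be wrapped the same way.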

There are scenarios when I update myprogram.py and need to kill the
command, transfer the updated myprogram.py file to the container, and
execute python myprogram.py again. I imagine this to be a common
scenario.
Not really. The common scenario is either:
Kill existing container
Build new image via your Dockerfile
Boot container from new image
Or:
Start container with a volume mount pointing at your source
Restart the container when you update your code
Either one works. The second is useful for development, since it has a slightly quicker turnaround.
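In commands, the two loops might look like this (the names myimage and myapp are placeholders):
Rebuild per change:
docker build -t myimage .
docker rm -f myapp
docker run -d --name myapp myimage
Volume mount for development:
docker run -d --name myapp -v "$(pwd)":/app myimage
docker restart myapp
(run the restart after each edit on the host; no rebuild needed)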

Related

Running tests without ssh'ing into my container

I have a Python API in a docker container, but I want to be able to run tests without sshing in and running the command, but I'm not really sure how I can do that via the command line. For example, I know to ssh in I do (via a script so I can ssh into any of my three containers):
docker exec -it gp-api ash
but when I want to run tests, I need to ssh in, go up a folder, and then run pytest. Not sure how to do that all from the docker command line.
As stated in the docs for docker exec, you can use the -w option to set the working directory for the command:
docker exec -w /your/working/directory container_name_or_id command
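For example, to run the asker's tests in the gp-api container (the directory /app is a hypothetical stand-in for wherever the tests actually live):
docker exec -w /app gp-api pytest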

Is there a way to modify files inside docker via PyCharm?

I want to modify files inside a docker container with PyCharm. Is there a possibility of doing such a thing?
What you want to obtain is called bind mounting, and it can be obtained by adding the -v parameter to your run command; here's an example with an nginx image:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
The specific parameter achieving this result is -v.
-v ~/nginxlogs:/var/log/nginx sets up a bindmount volume that links the /var/log/nginx directory from inside the Nginx container to the ~/nginxlogs directory on the host machine.
Docker uses a : to split the host’s path from the container path, and the host path always comes first.
In other words the files that you edit on your local filesystem will be synced to the Docker folder immediately.
Source
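Applied to a Python project, a hypothetical equivalent would be (~/myproject and myimage are placeholders):
docker run -d -v ~/myproject:/app myimage
Open ~/myproject in PyCharm on the host; anything you save there is immediately visible at /app inside the container.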
Yes. There are multiple ways to do this, and you will need to have PyCharm installed inside the container.
The following set of instructions should work:
docker ps - This will show you details of running containers
docker exec -it <name of container> /bin/bash
At this point you will have a shell inside the container. If PyCharm is not installed, you will need to install it. The following should work:
sudo apt-get install pycharm-community
Good to go!
Note: the installation is not persistent across Docker image builds. You should add the PyCharm installation step to your Dockerfile if you need to access it regularly.
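A hypothetical Dockerfile line for that, assuming your base image's package repositories actually provide the pycharm-community package (as the command above presumes):
RUN apt-get update && apt-get install -y pycharm-community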

How to persist my notebooks and data in my Docker image/container

I am new to Docker and I am somewhat confused about containers and images. I want to use Docker for TensorFlow development. All I need is an easy way to write Jupyter notebooks and use GPU-powered TensorFlow.
I have the latest TensorFlow Jupyter Python 3 image already. I run the image with
docker run --rm --runtime=nvidia -v -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
How can I make it so that the data I produce while working in that image (the Jupyter notebooks I add and edit) won't get lost after I exit the process? I know that Docker images aren't meant to persist state, but I am so new to this that I just want something to work in with persistent data. Can someone guide me through this or point to a resource which will answer all my prayers?
I would also like to move some stuff into the Container that is going to be run so that I can access some custom Python libs because they contain some things that my Notebooks need to import!
Side questions:
--rm removes the container or whatever? By default I run it without this flag, and still my data was lost.
-v is for volumes? I tried -v Bachelor:/app to mount a volume like so. It apparently doesn't make any difference. I don't know how to use the volume Bachelor that I created. Instead, a multitude of unnamed, unusable volumes get created whenever I run this.
-it also does something, no idea what.
-p is the port number, right?
Use Docker volumes:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
Example:
docker run --runtime=nvidia -v ${SOURCE_FOLDER}:${DEST_FOLDER} -p 8888:8888 tensorflow/tensorflow:latest-gpu-py3-jupyter
Change SOURCE_FOLDER and DEST_FOLDER accordingly (use absolute paths!).
Now if you navigate to localhost:8888 and create a notebook in DEST_FOLDER, it should also be available in SOURCE_FOLDER.
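The named volume Bachelor you already created can be used the same way; the catch is that the container-side path must be where Jupyter actually looks for notebooks (assumed here to be /tf, the working directory of recent tensorflow Jupyter images):
docker run --runtime=nvidia -it -p 8888:8888 -v Bachelor:/tf tensorflow/tensorflow:latest-gpu-py3-jupyter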
As for your side questions:
-it runs a container in interactive mode. You generally add /bin/bash after the run command, so you can start an interactive bash session inside the container.
--rm cleans up the container after it exits.
Those options aren't really necessary for your use case. Just remember to use docker ps and docker rm <ID> to clean up your container after you're done.

Setting up graph-tool on Docker Toolbox for Windows

I followed the graph-tool docker installation instructions here. I've set up Docker Toolbox (can't use Docker for Windows, not on Pro), and I've gotten jupyter running with the Docker image.
However, I need to access a notebook in my C: drive. For the sake of this post let's say the notebook is in C:\Users\Gab\Desktop. I've successfully moved into that location, but when I run the command docker run -p 8888:8888 -p 6006:6006 -it -u user -w /home/user tiagopeixoto/graph-tool bash, it opens a bash in /home/user, not in the directory I cd'd into previously.
From what I understand, the -w /home/user is what tells it where to open, but I'm not sure how to tell it to open in the Desktop folder.
How can I set things up properly so that I can run the command jupyter notebook --ip 0.0.0.0, and still be able to access the notebook I need?
Thanks!
Here's the deal with Docker. When you execute the docker run command, an entirely new subsystem is created that is separate from your host operating system (this is known as a Docker container). This subsystem runs the tiagopeixoto/graph-tool image, so graph-tool (and hence jupyter-notebook) and the working directory /home/user exist inside this system rather than in the host OS you use to run the Docker container (in your case, Windows). Unfortunately for you, the notebook that you wish to view with jupyter-notebook isn't present inside the container; it is located elsewhere (Windows, to be exact).
What you can do in this case is mount a folder of the host operating system into the container, such that this folder contains the notebook you wish to view:
Open a new command prompt and type in this command: docker run -p 8888:8888 -p 6006:6006 -v /c/Users/Gab/Desktop:/mnt/temp/ -it -u user -w /home/user tiagopeixoto/graph-tool bash
The main difference to note here is the -v switch, which mounts the C:\Users\Gab\Desktop directory from the host system into the Docker container at /mnt/temp/. Once that is done, try viewing the notebook present in /mnt/temp in jupyter-notebook.
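Inside the container you can then point Jupyter at the mounted folder (assuming the mount from the command above):
jupyter notebook --ip 0.0.0.0 --notebook-dir /mnt/temp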
According to this post, there is a known issue with mounting a volume on Windows, so please check this out as well: docker toolbox mount file on windows

Docker development workflow

What's the proper development workflow for code that runs in a Docker container?
Solomon Hykes said that the "official" workflow involves building and running a new Docker image for each Git commit. That makes sense, but what if I want to test a change before committing it to the Git repo?
I can think of two ways to do it:
Run the code on a local development server (e.g., the Django development server). Edit a file; test on the dev server; make a Git commit; rebuild the Docker image with the new code; test again on the local Docker container.
Don't run a local dev server. Instead, build and run a new Docker image each time I edit a file, and then test the change on local Docker container.
Both approaches are pretty inefficient. Is there a better way?
A more efficient way is to run a new container from the latest image that was built (which then has the latest code).
You could start that container with a bash shell so that you will be able to edit files from inside the container:
docker run -it <some image> bash -l
You would then run the application in that container to test the new code.
Another way to alter files in that container is to start it with a volume. The idea is to alter files in a directory on the docker host instead of messing with files from the command line inside the container itself:
docker run -it -v /home/joe/tmp:/data <some image>
Any file that you will put in /home/joe/tmp on your docker host will be available under /data/ in the container. Change /data to whatever path is suitable for your case and hack away.
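Putting the two together, a typical edit-test loop might look like this (the host path and app.py are illustrative):
docker run -it -v /home/joe/tmp:/data <some image> bash -l
python /data/app.py
Edit the files under /home/joe/tmp on the host with your usual editor, then re-run the command inside the container; no image rebuild is needed.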
