Running tests without ssh'ing into my container - python

I have a Python API in a Docker container, and I want to be able to run tests without ssh'ing in and running the command manually, but I'm not really sure how to do that from the command line. For example, I know that to ssh in I do this (via a script, so I can ssh into any of my three containers):
docker exec -it gp-api ash
But when I want to run tests, I need to ssh in, go up a folder, and then run pytest. I'm not sure how to do all of that from the docker command line.

As stated in the docs for docker exec, you can use the -w option to set the working directory for the command:
docker exec -w /your/working/directory container_name_or_id command
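For example, assuming the tests live one folder up at /app inside the container (a hypothetical path, adjust it to your layout), a small wrapper in the same spirit as your ssh script might look like this:

# run_tests.py -- sketch of a helper; the container name and /app path are assumptions
import subprocess
import sys

container = sys.argv[1] if len(sys.argv) > 1 else "gp-api"
# -w sets the working directory inside the container, so pytest runs from the right folder
subprocess.run(["docker", "exec", "-w", "/app", container, "pytest"], check=True)

Run it as python run_tests.py gp-api (or with either of the other two container names).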

Related

How can I leave my training process running after I exit the container and the ssh session?

I'm training a model on AWS, and my workflow is:
Connect to the EC2 instance via ssh
Start the Docker container if not running already
docker exec -it <container_name> bash
python train.py
I can use Ctrl+Z to suspend the Python process, but I cannot exit the container shell, because the training process is attached to it. I assume it will also die if I disconnect from ssh entirely (my laptop shuts down, I close the terminal, etc.).
I thought that running python train.py & would fix it, but the training process is still stopped.
What's the best/most common way of accomplishing this?
Your approach won't work, because when you exit the container shell the Python process will be killed.
You can either:
Run docker exec in detached mode with -d:
docker exec -d <container_name> python train.py
Configure your Docker image to have the Python script as the entrypoint:
ENTRYPOINT ["python", "train.py"]
Then you can docker run the container in detached mode with -d.
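If you drive Docker from Python anyway, the same detached run can be expressed with docker-py; a minimal sketch, where the image name is a placeholder:

import docker

client = docker.from_env()
# equivalent of `docker run -d my-train-image`; the ENTRYPOINT launches train.py
container = client.containers.run("my-train-image", detach=True)
# training now survives the ssh session; peek at progress whenever you reconnect
print(container.logs(tail=10).decode())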

Calling Docker command within Dockerized app

It's been asked before, but I haven't been able to find the answer so far. I have a script that is called via a Flask app. It's Dockerized, and I used docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using asciidoctor. As you can see below, it takes a variable path:
cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the printout in the terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't really do it that way. The way to communicate with the Docker daemon from within a container is to mount the host's Docker Unix socket into the container, using -v when starting the container or by adding it to the volumes section of your docker-compose.yml:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After doing that, you should be able to use the Docker SDK for Python (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. The command you initially wanted to execute should translate into a few simple API calls.
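For example, the Popen call from the question might translate roughly like this; a sketch that assumes the socket is mounted as above, and keeps a shell so the *.adoc glob still expands:

import docker

# from_env() talks to /var/run/docker.sock, which is now mounted into this container
client = docker.from_env()

path = "/some/host/dir"  # the same `path` variable as in the original script
# note: because the daemon resolves bind mounts, this path must exist on the host,
# not inside the Flask container
output = client.containers.run(
    "asciidoctor/docker-asciidoctor",
    command=["sh", "-c", "asciidoctor -D /documents *.adoc"],
    working_dir="/documents",
    volumes={path: {"bind": "/documents", "mode": "rw"}},
    name="manual-asciidoc-to-html",
    remove=True,
)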

How to create a daemon container with an interactive terminal using docker-py?

I'm using the Docker Python SDK, docker-py, which is quite convenient. I've looked through the documentation, but I still can't figure out how to create a daemon container with an interactive terminal; that is to say, the equivalent of the shell command docker run -dit image.
I know docker-py currently offers client.containers.run to run a container, and with the detach argument I can run it as a daemon. However, I want to start it with an interactive terminal.
This is because my later code will access the container from a remote server. Is there any way to create it directly with docker-py instead of using os.system('docker run -dit image')?
After swimming in the docs for a while, I figured it out. The equivalent of docker run -dit image in docker-py is:
client.containers.run(image, tty=True, stdin_open=True, detach=True)
This works. Thank you, David.
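For reference, a slightly fuller sketch, including a later exec into the running container (the image name is a placeholder):

import docker

client = docker.from_env()
# equivalent of `docker run -dit image`
container = client.containers.run("image", tty=True, stdin_open=True, detach=True)

# later, other code can run commands inside it, much like `docker exec`
exit_code, output = container.exec_run("echo hello")
print(exit_code, output.decode())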

Is there a way to stop a command in a docker container?

I have a docker container that is running a command. In the Dockerfile, the last line is CMD ["python", "myprogram.py"]. This runs a Flask server.
There are scenarios when I update myprogram.py and need to kill the command, transfer the updated myprogram.py file to the container, and execute python myprogram.py again. I imagine this to be a common scenario.
However, I haven't found a way to do this. Since this is the only command in the Dockerfile, I can't seem to kill it. From the container's terminal, when I run ps -aux I can see that python myprogram.py is assigned PID 1, but when I try to kill it with kill -9 1, nothing happens.
Is there a workaround to accomplish this? My goal is to be able to change myprogram.py on my host machine, transfer the updated myprogram.py into the container, and execute python myprogram.py again.
You could use a volume to mount your myprogram.py source into the container, and then just docker stop and docker restart the container. (Incidentally, kill -9 1 fails because the kernel will not deliver signals to a container's PID 1 from inside its own PID namespace unless the process has installed a handler, and SIGKILL cannot be handled.)
To make a volume:
Add a VOLUME directive in your Dockerfile and rebuild your image:
VOLUME /path/to/mountpoint
Then use the -v option when running your image:
docker run -d -v /path/to/dir/to/mount:/path/to/mountpoint myimage
Warning: the steps above are only enough for a Linux environment. To use this with something else (like Docker Machine on OS X), you must also create a mount point in the VM running Docker (probably VirtualBox).
You'll have the following scheme :
<Dir to share from your host (OSX)> <= (1) mounted on => <Mountpoint on VM> <= (2) mounted on => <Container mountpoint>
Step (2) works exactly like the Linux case (in fact, it is a Linux case).
The only added step is mounting the directory you want to share from your host onto the VM.
Here are the steps to mount the directory you want to share onto the mountpoint in your VM, and then to use it with your container:
1- First, stop the docker machine:
docker-machine stop <machine_name>
2- Add a shared folder to the VM:
VBoxManage sharedfolder add <machine_name> --name <mountpoint_name> --hostpath <dir_to_share>
3- Restart the docker machine:
docker-machine start <machine_name>
4- Create the mountpoint over ssh and mount the shared folder on it:
docker-machine ssh <machine_name> "sudo mkdir <mountpoint_in_vm>; sudo mount -t vboxsf <mountpoint_name> <mountpoint_in_vm>"
5- Then, to mount the directory in your container, run:
docker run -d -v <mountpoint_in_vm>:</path/to/mountpoint (in the container)> myimage
And to clean all this up when you don't need it anymore:
6- Unmount in the VM:
docker-machine ssh <machine_name> "sudo umount <mountpoint_in_vm>; sudo rmdir <mountpoint_in_vm>"
7- Stop the VM:
docker-machine stop <machine_name>
8- Remove the shared folder:
VBoxManage sharedfolder remove <machine_name> --name <mountpoint_name>
Here is a script I made for study purposes; feel free to use it if it helps you.
There are scenarios when I update myprogram.py and need to kill the command, transfer the updated myprogram.py file to the container, and execute python myprogram.py again. I imagine this to be a common scenario.
Not really. The common scenario is either:
Kill existing container
Build new image via your Dockerfile
Boot container from new image
Or:
Start container with a volume mount pointing at your source
Restart the container when you update your code
Either one works. The second is useful for development, since it has a slightly quicker turnaround.
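A minimal sketch of that second workflow with docker-py, where the image name and paths are placeholders:

import docker

client = docker.from_env()
# bind-mount the source directory so edits on the host are visible in the container
container = client.containers.run(
    "myimage",
    detach=True,
    volumes={"/host/path/to/src": {"bind": "/app", "mode": "rw"}},
)

# after editing myprogram.py on the host, restart the container to relaunch the CMD
container.restart()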

Running Disco in a Docker container

I need to run a Python script in a Docker container (I currently have execution of "disco_test.py" as my ENTRYPOINT command) that will utilize Disco (which, of course, needs to be running in that container). The problem is that I cannot seem to get Disco running, either with CMD or RUN in the Dockerfile, or from within the Python script itself (using the subprocess module).
If, however, I create an otherwise identical image with no ENTRYPOINT command, run it with docker run -i -t disco_test /bin/bash, and then open a Python shell, I can successfully get Disco running using the subprocess module (simply using call(["disco", "start"]) works). Upon exiting the Python shell, I can verify that Disco is still running properly (disco status reports "Master 0cfddb8fb0e4:8989 running"). When I attempt to start Disco the same way (using call(["disco", "start"])) from within "disco_test.py", which I execute as the ENTRYPOINT command, it doesn't work: it prints "Master 0cfddb8fb0e4:8989 started", but checking disco status afterwards always shows "Master 0cfddb8fb0e4:8989 stopped".
Is there something about how the ENTRYPOINT command is run that is preventing me from being able to get Disco running from within the corresponding Python script? Running "disco_test.py" on my machine (not in a Docker container) does indeed get Disco up and running successfully.
Any insights or suggestions would be greatly appreciated!
I would guess that Disco runs daemonized, so your entrypoint script exits immediately and the container stops with it. You could try these containers: dockerized disco. They use supervisor to run Disco.
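If you would rather stay with your own image than switch to supervisor, one hedged guess at a fix along the same lines is to keep the entrypoint process alive after starting Disco, so PID 1 never exits; a sketch, not tested against Disco:

# disco_test.py -- start Disco, then keep the container's main process alive
import subprocess
import time

subprocess.call(["disco", "start"])  # the same call that works interactively
# a container stops when its entrypoint process exits, so block here forever
while True:
    time.sleep(60)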
