How to run a command on a linked docker container? - python

I am using docker-compose with:
- an existing (python) app running inside a docker container.
- another (ruby) command-line app running in a docker container.
How do I 'connect' those two containers so that the python container can call the command-line app in the ruby container (and pass arguments via stdin/stdout)?

Options are available, but not great. If you're using a recent version of Docker Compose then both containers will be in the same Docker network and can communicate, so you could install sshd in the destination container and make ssh calls from the source container.
Alternatively, use Docker in Docker with the source container, so you can run docker exec inside the source and execute commands on the target container.
It's low-level communication though, and raising it to a service call or message passing would be better, if changing your apps is feasible.
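For the ssh route, a minimal sketch of the call from the python container using paramiko (the service name ruby-app, the credentials, and the /app/cli.rb entry point are all hypothetical; sshd must be installed and running in the ruby container):
import paramiko

# Connect to the ruby container over the shared compose network.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ruby-app", username="app", password="secret")

# Run the ruby CLI, passing data via stdin and reading the result from stdout.
stdin, stdout, stderr = client.exec_command("ruby /app/cli.rb --flag value")
stdin.write("input for the ruby app\n")
stdin.channel.shutdown_write()
print(stdout.read().decode())

client.close()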

Related

Calling Docker command within Dockerized app

It's been asked before but I haven't been able to find the answer so far. I have a script which is called via a Flask app. It's Dockerized and I used docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using asciidoctor. As you can see below, it takes a variable path:
from subprocess import Popen
import time

# Build the docker command; `path` is the host directory holding the .adoc sources
cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
           f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the printout in the terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't do it that way. The way to communicate with the docker daemon from within a container is to mount the Docker Unix socket from the host system into the container, either with the -v flag when starting the container or by adding it to the volumes section of your docker-compose file:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After doing that, you should be able to use the Docker API for Python (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. The command you initially wanted to execute converts into a few simple docker API calls.
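For example, a minimal sketch of the asciidoctor call above via docker-py (the image, paths, and container name are taken from the question; treat the exact keyword arguments as a sketch rather than a drop-in):
import docker

client = docker.from_env()
# Equivalent of the original `docker run -v '{path}':/documents/ ...` command.
client.containers.run(
    "asciidoctor/docker-asciidoctor",
    ["sh", "-c", "asciidoctor -D /documents *.adoc"],  # run through a shell so *.adoc expands
    volumes={path: {"bind": "/documents", "mode": "rw"}},
    name="manual-asciidoc-to-html",
    remove=True,
)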
Regards
Dominik

How can I debug `Error while fetching server API version` when running docker?

Context
I am running Apache Airflow, and trying to run a sample Docker container using Airflow's DockerOperator. I am testing using docker-compose and deploying to Kubernetes (EKS). Whenever I run my task, I receive the error: ERROR - Error while fetching server API version. The error happens both on docker-compose and on EKS (Kubernetes).
I guess your Airflow Docker container is trying to launch a worker on the same Docker machine where it is running. To do so, you need to give Airflow's container special permissions and, as you said, access to the Docker socket. This is called Docker in Docker (DIND). There is more than one way to do it; in this tutorial, 3 different ways are explained. It also depends on where those containers are run: Kubernetes, Docker machines, external services (like GitLab or GitHub), etc.
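For the docker-compose case, a minimal sketch of the operator once the socket is mounted into the Airflow container (the task id and image are placeholders, and the import path varies across Airflow versions):
from airflow.providers.docker.operators.docker import DockerOperator

run_sample = DockerOperator(
    task_id="run_sample_container",
    image="hello-world",                      # placeholder image
    docker_url="unix://var/run/docker.sock",  # talks to the host daemon via the mounted socket
    auto_remove=True,
)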

SSH to backend app

I have a lot of docker containers, and my idea is to have one ssh server so that when I type ssh <containerid>@myserver it actually does docker attach to the specific container. What I need is a way to run, after someuser@host logs in, a python script which makes a tunnel to the docker container, the same way git works over SSH.
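A rough sketch of one way to wire this up: have sshd hand every login to a ForceCommand python script that execs into the requested container. Everything below is a hypothetical illustration (the script path and the convention of passing the container id as the SSH username):
#!/usr/bin/env python3
# Hypothetical target for sshd_config's `ForceCommand /usr/local/bin/container_shell.py`.
# Assumes the container id arrives as the SSH username: ssh <containerid>@myserver
import os
import sys

container_id = os.environ.get("USER", "")
if not container_id:
    sys.exit("no container id supplied")

# Replace this process with an interactive shell in the container,
# much like git-shell takes over a git-over-SSH session.
os.execvp("docker", ["docker", "exec", "-it", container_id, "/bin/bash"])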

Docker development workflow

What's the proper development workflow for code that runs in a Docker container?
Solomon Hykes said that the "official" workflow involves building and running a new Docker image for each Git commit. That makes sense, but what if I want to test a change before committing it to the Git repo?
I can think of two ways to do it:
- Run the code on a local development server (e.g., the Django development server): edit a file; test on the dev server; make a Git commit; rebuild the Docker image with the new code; test again on the local Docker container.
- Don't run a local dev server. Instead, build and run a new Docker image each time I edit a file, and then test the change on the local Docker container.
Both approaches are pretty inefficient. Is there a better way?
A more efficient way is to run a new container from the latest image that was built (which then has the latest code).
You could start that container with a bash shell so that you are able to edit files from inside the container:
docker run -it <some image> bash -l
You would then run the application in that container to test the new code.
Another way to alter files in that container is to start it with a volume. The idea is to alter files in a directory on the docker host instead of messing with files from the command line inside the container itself:
docker run -it -v /home/joe/tmp:/data <some image>
Any file that you put in /home/joe/tmp on your docker host will be available under /data/ in the container. Change /data to whatever path is suitable for your case and hack away.

Docker - Run Container from Inside Container

I have two applications:
- a Python console script that does a short(ish) task and exits
- a Flask "frontend" for starting the console app by passing it command line arguments
Currently, the Flask project carries a copy of the console script and runs it using subprocess when necessary. This works great in a Docker container but they are too tightly coupled. There are situations where I'd like to run the console script from the command line.
I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to not have to run the console script container inside the Flask container, so that only one process runs per container. Plus I'll need to be able to pass the console script command line arguments.
Q: How can I spawn a container with a short lived task from inside a container?
You can just give the container access to execute docker commands. It will either need direct access to the docker socket, or it will need the various TCP environment variables and files (client certs, etc.). Obviously it will need a docker client installed in the container as well.
A simple example of a container that can execute docker commands on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image
It's important to note that this is not the same as running a docker daemon in a container. For that you need a solution like jpetazzo/dind.
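From the Flask side, a minimal sketch using docker-py once the socket is mounted as above (the image name and arguments are hypothetical, and the Flask image needs `pip install docker`):
import docker

client = docker.from_env()  # picks up the mounted /var/run/docker.sock

def run_console_script(args):
    # Spawn a short-lived sibling container on the host daemon and
    # pass the command line arguments straight through to the script.
    output = client.containers.run(
        "yourorg/console-script",  # hypothetical image for the console app
        command=args,              # e.g. ["--input", "jobs.csv"]
        remove=True,               # delete the container once the task exits
    )
    return output                  # the script's stdout, as bytes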
