This has been asked before, but I haven't been able to find the answer so far. I have a script which is called via a Flask app. It's Dockerized, and I use docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using asciidoctor. As you can see below, it takes a variable path:
cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the output in the terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't really do it that way. The way to communicate with the Docker daemon from within a container is to mount the host's Docker Unix socket into the container, either with the -v flag when starting the container or by adding it to the volumes section of your docker-compose.yml:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
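In context, a minimal docker-compose.yml could look like this (the service name and image here are placeholders for your own):
services:
  myapp:
    image: myapp:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock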
After doing that, you should be able to use the Docker SDK for Python (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. You should be able to convert the command you originally wanted to execute into a few simple API calls.
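For example, here is a minimal sketch of that conversion with docker-py. Note two assumptions: because the SDK talks to the host daemon, path must be a path that exists on the host, and the command is wrapped in sh -c so the *.adoc glob is expanded inside the container:
import docker

client = docker.from_env()  # connects through the mounted /var/run/docker.sock

client.containers.run(
    "asciidoctor/docker-asciidoctor",
    ["sh", "-c", "asciidoctor -D /documents *.adoc"],
    volumes={path: {"bind": "/documents", "mode": "rw"}},
    working_dir="/documents",  # run where the .adoc files are mounted
    remove=True,               # equivalent of --rm in the original command
)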
Regards
Dominik
Related
I have a Python API in a Docker container, and I want to be able to run its tests without shelling in and running the command manually, but I'm not really sure how to do that from the command line. For example, I know that to shell in I do (via a script, so I can get into any of my three containers):
docker exec -it gp-api ash
but when I want to run the tests, I need to shell in, go up a folder, and then run pytest. I'm not sure how to do all of that from the docker command line.
As stated in the docs for docker exec, you can use the -w option to set the working directory for the command:
docker exec -w /your/working/directory container_name_or_id command
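So in your case, something like this should work (assuming the tests live under /app; adjust the path to your project layout):
docker exec -w /app gp-api pytest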
So I'm trying to run this project using Docker. I followed the standard Docker workflow:
docker build -t orange .
docker run -p 8080:8080 orange
I used the following command to check that the docker image was indeed created.
docker image ls
However, after running these commands, there is still no site running on localhost:8080. Any tips on troubleshooting this?
EDIT: After using the right port, I'm getting a directory listing instead of the actual site.
Looking at the repository, it seems that the exposed port is 9999, not 8080. It also looks like you can use docker-compose; that is, you can run
docker-compose up --build
to spin up the server. You should then be able to reach it at localhost:9999.
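If you'd rather stick with plain docker run, map a host port to the container's 9999 instead; for example, to keep using 8080 on your host:
docker run -p 8080:9999 orange
The site would then be reachable at localhost:8080.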
I'm trying to run Django in a Docker container, using SQLite as the DB and the Django dev server. So far I was able to launch the Django server locally:
python .\manage.py runserver
I can build the Docker image using my Dockerfile:
docker build . -t pythocker
But when I run the image with docker run -p 8000:8000 pythocker, no output is shown and the server is not reachable; I have to kill the running container.
If I add the -it flag to the docker run command, the server runs and I can go to http://192.168.99.100:8000 and see the Django welcome page. Why is this flag mandatory here?
docker logs on the container gives nothing. I also tried adding custom logging inside manage.py, but it's not displayed in the console or in the docker logs.
I am using Docker Toolbox for Windows, as I only have a Windows home computer.
I am using docker-compose with:
an existing (python) app running inside a docker container.
another (ruby) command-line app running in a docker container.
How do I 'connect' those two containers so that the python container can call the command-line app in the ruby container? (and pass arguments via stdin/stdout)
Options are available, but none of them are great. If you're using a recent version of Docker Compose, both containers will be on the same Docker network and can communicate, so you could install sshd in the destination container and make ssh calls from the source container.
Alternatively, use Docker-in-Docker with the source container, so you can run docker exec inside the source and execute commands on the target container.
It's low-level communication, though; raising it to a service call or to message passing would be better, if changing your apps is feasible.
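As a rough sketch of the docker exec route from the python side, assuming the Docker socket is mounted into the python container and a docker CLI is installed there (the container name ruby-app and the script path /app/tool.rb are made up for illustration):
import subprocess

def run_ruby_tool(args, payload):
    # Runs the ruby CLI inside its container; -i keeps stdin open so we
    # can pipe 'payload' in and read the tool's stdout back.
    result = subprocess.run(
        ["docker", "exec", "-i", "ruby-app", "ruby", "/app/tool.rb", *args],
        input=payload, capture_output=True, text=True, check=True,
    )
    return result.stdout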
I have two applications:
a Python console script that does a short(ish) task and exits
a Flask "frontend" for starting the console app by passing it command line arguments
Currently, the Flask project carries a copy of the console script and runs it using subprocess when necessary. This works great in a Docker container, but the two apps are too tightly coupled. There are situations where I'd like to run the console script on its own from the command line.
I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to avoid running the console script container inside the Flask container, so that only one process runs per container. Plus, I'll need to be able to pass the console script its command-line arguments.
Q: How can I spawn a container with a short lived task from inside a container?
You can just give the container access to execute docker commands. It will either need direct access to the Docker socket, or it will need the various TCP environment variables and files (client certs, etc.). Obviously, it will also need a docker client installed in the container.
A simple example of a container that can execute docker commands on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image
It's important to note that this is not the same as running a docker daemon in a container. For that you need a solution like jpetazzo/dind.
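To tie it back to the question: with the socket mounted as above, the Flask container could spawn the console script as a short-lived sibling container via docker-py. A minimal sketch (the image name console-script is hypothetical):
import docker

def run_console_script(args):
    # 'args' is the console script's command line, e.g. ["--input", "x.txt"].
    client = docker.from_env()
    # Runs the sibling container on the host, waits for it to exit,
    # removes it (like --rm), and returns its stdout.
    return client.containers.run("console-script", args, remove=True)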