I have two applications:
a Python console script that does a short(ish) task and exits
a Flask "frontend" for starting the console app by passing it command line arguments
Currently, the Flask project carries a copy of the console script and runs it using subprocess when necessary. This works great in a Docker container but they are too tightly coupled. There are situations where I'd like to run the console script from the command line.
I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to not have to run the console script container inside the Flask container, so that only one process runs per container. Plus I'll need to be able to pass the console script command line arguments.
Q: How can I spawn a container with a short-lived task from inside a container?
You can just give the container access to execute docker commands. It will either need direct access to the Docker socket, or it will need the various TCP environment variables and files (client certs, etc.). Obviously it will also need a Docker client installed in the container.
A simple example of a container that can execute docker commands on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image
It's important to note that this is not the same as running a docker daemon in a container. For that you need a solution like jpetazzo/dind.
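Since both the Flask app and the console script here are Python, a minimal sketch of the Flask side using the Docker SDK for Python (docker-py) is shown below. It assumes the socket is mounted as above and that the console script has been packaged into its own image; the image name, script name, and arguments are placeholders.
import docker  # Docker SDK for Python: pip install docker

# talks to the host daemon through the mounted /var/run/docker.sock
client = docker.from_env()

# spawn a short-lived container and pass command-line arguments to the script;
# "console-script-image" and the argument values are placeholders
logs = client.containers.run(
    "console-script-image",
    ["python", "console_script.py", "--job-id", "42"],
    remove=True,   # clean the container up once the task exits
    detach=False,  # block until it finishes and return its output
)
print(logs.decode())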
I am successfully running a Django app that is hosted inside a Docker container. I changed something in my code on purpose to make it break. What I need is to somehow see the log of the running code as if I were running it locally on my computer. For example, I forgot to import a library, and when I run the code locally I get a message in the terminal like "ModuleNotFoundError: No module named 'somemodule'". But when I run the same code from inside the container I get no log; the container just fails to start.
My question is: How can I get a log for my script from inside the container, so I can debug my code?
So, what I wanted to do was somehow run/debug my own Python code inside the container in order to see its log.
I managed to do it using VS Code and the Remote - SSH and Remote - Containers extensions.
Remote - SSH
Remote - Containers
If the containers are hosted locally on your PC, you don't need the Remote - SSH extension.
It's been asked before, but I haven't been able to find the answer so far. I have a script which is called via a Flask app. It's Dockerized and I used docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using openscad. As you can see below, it takes a variable path:
cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the printout in the terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't really do it that way. The way to communicate with the Docker daemon from within a container is to add the Docker Unix socket from the host system to the container, either with the -v flag when starting the container or by adding it to the volumes section of your docker-compose file:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After doing that you should be able to use the Docker API for Python (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. You should be able to convert the command you originally wanted to execute into simple Docker API calls.
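For example, the docker run command from the question could become something like the sketch below. This is only a rough equivalent: it assumes the socket is mounted as shown, that path is the same host path the question interpolates into its command, and it wraps the call in sh -c so the *.adoc glob expands inside the container.
import docker  # pip install docker

client = docker.from_env()  # connects via the mounted /var/run/docker.sock

logs = client.containers.run(
    "asciidoctor/docker-asciidoctor",
    ["sh", "-c", "asciidoctor -D /documents *.adoc"],
    volumes={path: {"bind": "/documents", "mode": "rw"}},
    working_dir="/documents",
    name="manual-asciidoc-to-html",
    remove=True,
)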
Regards
Dominik
I am building a webapp (a simple Flask site) that uses Docker. I want my development code to not reside within Docker, but to be executed by the development environment (using python3) I have defined in my Dockerfile. I know that I can use the COPY . . syntax in a Dockerfile to copy my source code into the image for execution, but that violates my aim of separating the container from my source. Is there a way to have a Docker container read and execute the code that is in the directory I run the docker container run command from?
Right now my container uses the COPY command to copy all the source code into the container. It then uses the CMD instruction to automatically run the Flask app:
CMD [ "python", "flask_app/server.py" ]
(I'm storing all my Flask code in a directory called flask_app). I'm assuming this works because all of this has been copied into the container (according to the specifications given in the Dockerfile) and is executed when I run the container. I would like the container to instead access and execute flask_app/server.py without copying this information into itself -- is this possible? If so, how?
Instead of using COPY to move the code into the container, you'll use a "bind mount" (https://docs.docker.com/storage/bind-mounts/).
When you run the container, you'll do it with a command like this:
docker run --mount type=bind,source=<path_outside_container>,target=<path_inside_container> <image_tag>
For portability, I recommend putting this line in a script intended to be run from the repository root, and having the <path_outside_container> be "$(pwd)", so that it will work on other people's computers. You'll need to adjust <path_inside_container> and your CMD depending on where you want the code to live inside the container.
(Obviously you can also put whatever other options you'd like on the command, like -it, --rm, or -p <whatever>.)
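If you prefer to keep the wrapper in Python rather than shell, the same idea might look roughly like this; the image tag, container path, and port are placeholders.
import os
import subprocess

# run from the repository root so os.getcwd() points at the source tree
subprocess.run(
    [
        "docker", "run", "--rm", "-it",
        "--mount", f"type=bind,source={os.getcwd()},target=/app",
        "-p", "5000:5000",
        "my_flask_image",
    ],
    check=True,
)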
Suppose the following setup:
Website written in php / laravel
User uploads a file (either text / doc / pdf)
We have a docker container which contains a python script for converting text into a numpy array.
I want to take this uploaded data and pass it to the python script.
I can't find anything which explains how to pass dynamically generated inputs into a container.
Can this be done by executing, from inside the Laravel app, a shell script that passes the uploaded file as a variable to the Dockerfile's ENTRYPOINT?
Are there any other ways of doing this?
I would strongly recommend using TCP/IP for such purposes. This way, you benefit from the following:
You can detect whether your python service is online
You can move python container to another machine
Implementation is really simple. You can choose any framework, but Twisted suits me best, so you could implement your Python script as follows:
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver


class DataProcessor(LineReceiver):
    def lineReceived(self, line):
        # line contains your data
        pass


factory = Factory()
factory.protocol = DataProcessor
reactor.listenTCP(8080, factory)
reactor.run()
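To try the service out, a quick client sketch is below; the host and port are assumptions matching the listenTCP call above, and LineReceiver expects CRLF-terminated lines by default.
import socket

# connect to the DataProcessor service above and send one line of data
with socket.create_connection(("localhost", 8080)) as conn:
    conn.sendall(b"text to convert\r\n")  # LineReceiver's default delimiter is \r\n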
... a python script for ...
Just run it; don't package it into a Docker container. That's doubly true if its inputs and outputs are both local files, and it expects to do its thing and exit promptly: the filesystem isolation Docker provides works against you here.
This is, of course, technically possible. Depending on how exactly the support program container is set up, the "command" at the end of docker run will be visible to the Python script in sys.argv, like any other command-line options. You can use a docker run -v option to publish parts of the host's filesystem into the container. So you might be able to run something like
docker run --rm -v $PWD/files:/data \
converter_image \
python convert.py /data/in.txt /data/out.pkl
where all of the /data paths are in the container's private filesystem space.
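To make the sys.argv point concrete, a hypothetical convert.py might look like the sketch below; the real script is whatever the container ships, and the numpy "conversion" here is only a stand-in.
import pickle
import sys

import numpy as np

def main():
    # container-side paths, e.g. /data/in.txt and /data/out.pkl as above
    in_path, out_path = sys.argv[1], sys.argv[2]
    with open(in_path, "r", encoding="utf-8") as f:
        text = f.read()
    # stand-in conversion of the text into a numpy array
    arr = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
    with open(out_path, "wb") as f:
        pickle.dump(arr, f)

if __name__ == "__main__":
    main()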
There are two big caveats:
The host paths in the docker run -v option are paths specifically on the physical host. If your HTTP service is also running in a container you need to know some host-system path you can write to that's also visible in your container filesystem.
Running any docker command at all effectively requires root privileges. If any of the filenames or paths involved are dynamic, shell injection attacks can compromise your system. Be very careful with how you run this from a network-accessible script.
One way to do this would be to upload the files to a directory the Docker container has access to, and then poll that directory for new files using the Python script. You can access local directories from Docker containers using "bind mounts". Google something like "How to share data between a Docker container and the host system" to read more about bind mounts and sharing volumes.
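A minimal polling sketch for the container side is shown below; the watched directory and the poll interval are assumptions.
import time
from pathlib import Path

# placeholder for wherever the bind mount lands inside the container
watch_dir = Path("/data/uploads")
seen = set()

while True:
    for f in sorted(watch_dir.glob("*")):
        if f.is_file() and f not in seen:
            seen.add(f)
            print(f"new upload: {f}")  # hand the file to the text-to-numpy converter here
    time.sleep(2)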
I am using docker-compose with:
an existing (python) app running inside a docker container.
another (ruby) command-line app running in a docker container.
How do I 'connect' those two containers so that the python container can call the command-line app in the ruby container? (and pass arguments via stdin/stdout)
Options are available, but not great. If you're using a recent version of Docker Compose then both containers will be in the same Docker network and can communicate, so you could install sshd in the destination container and make ssh calls from the source container.
Alternatively, use Docker in Docker with the source container, so you can run docker exec inside the source and execute commands on the target container.
It's low-level communication though, and raising it to a service call or message passing would be better, if changing your apps is feasible.
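If you do go the docker exec route, a sketch of the Python side using docker-py follows. It assumes the Python container can reach the Docker daemon (for example via a mounted socket, as discussed earlier in this thread) and that the ruby container is named ruby_cli; the ruby command itself is a placeholder.
import docker  # pip install docker

client = docker.from_env()
ruby = client.containers.get("ruby_cli")  # the compose service's container

# run the command-line app inside the ruby container and capture its output
exit_code, output = ruby.exec_run(["ruby", "app.rb", "--input", "some-value"])
print(exit_code, output.decode())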