I'm working on a project in Python where I want to automate Docker container creation. I already have the project folder, which includes all the files required to create the image.
One of these files is create_image.sh, which contains:
docker build -t my_container:latest .
Currently I do:
sudo bash create_image.sh
But now I need to automate this process from Python.
I have tried:
import os
import subprocess
subprocess.check_call("bash -c '. create_image.sh'", shell=True)
But I get this error:
CalledProcessError: Command 'bash -c '. create_image.sh'' returned non-zero exit status 1.
EDIT:
The use case is to automate container creation through an API. I have the code in Flask and Python up to this point, where I got stuck on creating the images from the Dockerfile. The rest is automated from templates.
You can try:
subprocess.call(['sudo', 'bash', 'create_image.sh'])
which is the equivalent of
sudo bash create_image.sh
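If the script has to be run from the project directory, a sketch like the following (the path is a placeholder for wherever create_image.sh and the Dockerfile live) also fails loudly when the build does not succeed:
import subprocess

# Run the build script from the project folder; check_call raises
# CalledProcessError if `docker build` exits with a non-zero status.
subprocess.check_call(
    ["sudo", "bash", "create_image.sh"],
    cwd="/path/to/project_dir",  # placeholder for the folder containing the Dockerfile
)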
Note: Let me say that there are better ways of automating Docker container creation - please check docker-compose, which can build and start containers easily. If you can elaborate more on the use case, we could help you with a more elegant Docker solution. It might not be a Python problem.
EDIT:
Following the comments, it would be better to create a docker-compose file and use a Makefile to issue the docker commands. Inspiration - https://medium.com/#daniel.carlier/how-to-build-a-simple-flask-restful-api-with-docker-compose-2d849d738137
In case that's because your user can't run docker without sudo, it's probably better to grant them Docker API access by adding them to the docker group: https://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo
Simply add the user to the docker group:
sudo gpasswd -a $USER docker
Also, if you want to automate Docker operations from Python, I'd recommend using the Python library for Docker: How to build an Image using Docker API Python Client?
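For example, a minimal sketch with the docker-py SDK (the project path is a placeholder; the tag mirrors what create_image.sh does):
import docker

client = docker.from_env()

# Build the image through the Docker API instead of shelling out to bash.
image, build_logs = client.images.build(
    path="/path/to/project_dir",   # folder containing the Dockerfile
    tag="my_container:latest",
    rm=True,                       # remove intermediate containers after the build
)
for chunk in build_logs:
    print(chunk.get("stream", ""), end="")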
It's been asked before but I haven't been able to find the answer so far. I have a script which is called via a Flask app. It's Dockerized and I used docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using openscad. As you can see below, it takes a variable path:
cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the print out in Terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't really do it that way. The way to communicate with the Docker daemon from within a container is to add the Docker Unix socket from the host system to the container, either with the -v flag when starting the container or by adding it to the volumes section of your docker-compose file:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After doing that you should be able to use a Docker API client (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. You should be able to convert the command you initially wanted to execute into simple Docker API calls.
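As a rough sketch (not your exact code; `path` is the same variable from your original script, and the glob is run through a shell inside the container so it expands there):
import docker

# Connects through the socket bind-mounted above (/var/run/docker.sock).
client = docker.from_env()

client.containers.run(
    "asciidoctor/docker-asciidoctor",
    # Run via `sh -c` so the *.adoc glob is expanded inside the container.
    command=["sh", "-c", "asciidoctor -D /documents /documents/*.adoc"],
    volumes={path: {"bind": "/documents", "mode": "rw"}},
    name="manual-asciidoc-to-html",
    remove=True,   # same effect as --rm
)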
Regards
Dominik
I am starting to get the hang of Docker and am trying to containerize some of the applications I use. Thanks to the tutorial I was able to create Docker images and containers, but now I am trying to think about the most efficient and practical ways to do things.
To present my use case, I have a Python script (let's call it process.py) that takes a single .jpg image as input, does some operations on this image, and then outputs the processed .jpg image.
Normally I would run it with:
python process.py -i path_of_the_input_image -o path_of_the_output_image
Then, the way I connect input/output with my Docker container is the following. First I create the Dockerfile:
FROM python:3.6.8
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD python ./process.py -i ./input_output/input.jpg -o ./input_output/output.jpg
And then, after building the image, I run docker run, mapping a local folder to the input_output folder of the container:
docker run -v C:/local_folder/:/app/input_output my_docker_image
This seems to work, but it is not really practical, as I have to create a specific local folder and mount it into the Docker container. So here are the questions I am asking myself:
Is there a more practical way of doing things? To directly send one single input file and directly receive one single output file from the Docker container?
When I run the Docker image, what happens (if I understand correctly) is that it creates a Docker container that runs my program process.py once and then just sits there doing nothing. Even after it has finished running process.py, the container is still listed by the command "docker ps -a". Is this behaviour expected? Is there a way to automatically delete finished containers? Am I using docker run the right way?
Is there a more practical way of having a container running continuously, which I can query to run the program process.py on demand with a given input?
I have a python code (let's call it process.py) that takes as an input a single .jpg image, does some operations on this image, and then output the processed .jpg image.
That's most efficiently done without Docker; just run the python command you already have. If your application has interesting Python library dependencies, you can install them in a virtual environment to avoid conflicts with the system Python installation.
When I run the Docker image...
...the container runs its main command (the docker run command arguments, the Dockerfile CMD, possibly combined with an entrypoint from the same sources), and when that command exits, the container exits. It will still be listed in docker ps -a output, but as "Exited" (probably with status 0 for a successful completion). You can use docker run --rm to have the container automatically delete itself.
Is there a more practical way of having a container running continuously and on which I can query to run the program process.py on demand with a given input ?
Wrap it in a network service, like a Flask application. As long as this is running, you can use a tool like curl to do an HTTP POST with the input JPEG file as the body, and get the output JPEG file as the response. Avoid using local files and Docker together whenever that's an option (prefer network I/O for process inputs and outputs; prefer a database to local-file storage).
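A minimal sketch of that idea, assuming a hypothetical process_image(input_path, output_path) helper extracted from process.py:
import tempfile

from flask import Flask, Response, request

from process import process_image  # hypothetical helper wrapping process.py's logic

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    # Write the POSTed JPEG to a temp file, run the existing processing code,
    # and send the result back as the HTTP response body.
    with tempfile.NamedTemporaryFile(suffix=".jpg") as inp, \
         tempfile.NamedTemporaryFile(suffix=".jpg") as outp:
        inp.write(request.get_data())
        inp.flush()
        process_image(inp.name, outp.name)
        outp.seek(0)
        return Response(outp.read(), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
With that running in the container (and the port published), something like curl -X POST --data-binary @input.jpg http://localhost:8000/process -o output.jpg would retrieve the processed file.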
Why are volume mounts not practical?
I would argue that Dockerising your application is not practical, but you've chosen to do so for, presumably very good, reasons. Volume mounts are simply an extension to this. If you want to get data in/out of your container, the 'normal' way to do this is by using volume mounts as you have done. Sure, you could use docker cp to copy the files manually, but that's not really practical either.
As far as the process exiting goes: normally, once the main process exits, the container exits. docker ps -a shows stopped containers as well as running ones. You should see that it says Exited n minutes (hours, days, etc.) ago. This means that your container has run and exited correctly. You can remove it with docker rm <containerid>.
docker ps (no -a) will only show the running ones, btw.
If you use the --rm flag in your Docker run command, it will be removed when it exits, so you won't see it in the ps -a output. Stopped containers can be started again, but that's rather unusual.
Another solution might be to change your script to wait for incoming files and process them as they are received. Then you can leave the container running, and it will just process them as needed. If doing this, make sure that your idle loop has a sleep or something in it to ensure that you don't consume too many resources.
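A rough sketch of that pattern, again assuming a hypothetical process_image helper and a shared input_output volume (the directory names are made up for illustration):
import os
import time

from process import process_image  # hypothetical helper wrapping process.py's logic

WATCH_DIR = "/app/input_output/incoming"    # where new .jpg files arrive
DONE_DIR = "/app/input_output/processed"    # where results are written

while True:
    for name in os.listdir(WATCH_DIR):
        if not name.lower().endswith(".jpg"):
            continue
        src = os.path.join(WATCH_DIR, name)
        dst = os.path.join(DONE_DIR, name)
        process_image(src, dst)
        os.remove(src)     # so the same file is not processed twice
    time.sleep(1)          # idle sleep to avoid burning CPU in the loop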
I want to modify files inside a Docker container with PyCharm. Is there a possibility of doing such a thing?
What you want to obtain is called bind mounting, and it can be achieved by adding the -v parameter to your run command. Here's an example with an nginx image:
docker run --name=nginx -d -v ~/nginxlogs:/var/log/nginx -p 5000:80 nginx
The specific parameter that obtains this result is -v.
-v ~/nginxlogs:/var/log/nginx sets up a bindmount volume that links the /var/log/nginx directory from inside the Nginx container to the ~/nginxlogs directory on the host machine.
Docker uses a : to split the host’s path from the container path, and the host path always comes first.
In other words the files that you edit on your local filesystem will be synced to the Docker folder immediately.
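For reference, the same bind mount can be expressed through the docker-py SDK (a sketch mirroring the nginx command above):
import os

import docker

client = docker.from_env()
client.containers.run(
    "nginx",
    name="nginx",
    detach=True,                              # -d
    ports={"80/tcp": 5000},                   # -p 5000:80
    volumes={
        os.path.expanduser("~/nginxlogs"): {  # host path (must be absolute)
            "bind": "/var/log/nginx",         # container path
            "mode": "rw",
        }
    },
)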
Yes. There are multiple ways to do this, and you will need to have PyCharm installed inside the container.
The following set of instructions should work -
docker ps - This will show you details of running containers
docker exec -it <name of container> /bin/bash
At this point you will have a shell inside the container. If PyCharm is not installed, you will need to install it. The following should work -
sudo apt-get install pycharm-community
Good to go!
Note: The installation is not persistent across Docker image builds. You should add the PyCharm installation step to the Dockerfile if you need to access it regularly.
I'm using the Docker Python SDK docker-py, which is quite convenient. I've looked through the documentation, but I still can't figure out how to create a detached container with an interactive terminal; that is to say, in the shell this would be the command docker run -dit image.
I know docker-py offers client.containers.run to run a container, and with the detach argument I can run it as a daemon. However, I want to start it with an interactive terminal.
This is because my further code will access the container from a remote server. Is there any way to create it directly with docker-py instead of using os.system("docker run -dit image")?
After swimming in the docs for a while, I figured it out.
The equivalent of docker run -dit image in docker-py is client.containers.run(image, tty=True, stdin_open=True, detach=True). This works. Thank you, David.
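Put together as a minimal runnable sketch (the image name is a placeholder):
import docker

client = docker.from_env()

# Equivalent of `docker run -dit image`:
#   detach=True      -> -d
#   stdin_open=True  -> -i
#   tty=True         -> -t
container = client.containers.run(
    "image",
    detach=True,
    stdin_open=True,
    tty=True,
)
print(container.id)

# Later code can execute commands in the running container, e.g.:
exit_code, output = container.exec_run("echo hello")
print(exit_code, output.decode())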
I am currently developing a Spring Boot application which triggers a Python program via the CLI. I've used ProcessBuilder to do that and it's been working OK so far.
Now I'm trying to get the Spring Boot application and the Python program into a Docker container. Since I'm new to Docker I don't know the best way to do this. I've tried using COPY to copy the whole folder to create an image, but for some reason the pythonapp folder in the container is always empty.
Am I missing something or is there a better way to do this?
FROM openjdk:8u151-jdk-slim
EXPOSE 8080
ADD springbootapp-0.0.1.jar app.jar
COPY . /root/pythonapp
RUN sh -c 'touch /app.jar'
RUN apt-get update && apt-get install -y python \
python-gi \
gir1.2-gtk-3.0
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
Normally the idea of Docker is that one container does one thing and does it well. So it is usually not a good idea to put two things in one Docker container. Think about using two containers :-)
Other than that, it might be a good idea to add the files separately, or as a tar/zip file and extract it in the image.