I need to run a Python script in a Docker container (I currently have execution of "disco_test.py" as my ENTRYPOINT command) that will use Disco (which of course needs to be running in that container). The problem is I cannot seem to get Disco running either with CMD or RUN in the Dockerfile, or from within the Python script itself (using the subprocess module).
If, however, I create an otherwise identical image with no ENTRYPOINT command, run it with docker run -i -t disco_test /bin/bash and then open a Python shell, I can successfully get Disco running using the subprocess module (simply using call(["disco", "start"]) works). Upon exiting the Python shell I can indeed verify that Disco is still running properly (disco status reports "Master 0cfddb8fb0e4:8989 running"). When I attempt to start Disco in the same way (using call(["disco", "start"])) from within "disco_test.py", which I execute as the ENTRYPOINT command, it doesn't work! It will print "Master 0cfddb8fb0e4:8989 started", however checking disco status afterwards ALWAYS shows "Master 0cfddb8fb0e4:8989 stopped".
Is there something about how the ENTRYPOINT command is run that is preventing me from being able to get Disco running from within the corresponding Python script? Running "disco_test.py" on my machine (not in a Docker container) does indeed get Disco up and running successfully.
Any insights or suggestions would be greatly appreciated!
I would guess that it's running daemonized and exits immediately, stopping the container. You could try these dockerized disco containers; they use supervisor to run Disco.
Related
I am new to containers, so the questions below might sound naive.
There are two questions actually.
I have a non-web Python application fully tested in VS Code without any error; then I use the Dockerfile below to build it locally.
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./mycode.py"]
An image was built successfully, but running it ended with a TypeError. I have made sure that requirements.txt has the same dependencies as my project environment. The error message is "wrong tuple index", which gives me no clue where the problem in fully tested code could come from. I am stuck here with a weird feeling.
I then tried a buildpack with this Procfile:
worker: python mycode.py
An image was built successfully, but docker run could not launch the application, failing with the error below. I have no idea what else besides "worker:" could launch a non-web Python application in a Procfile. Stuck again!
ERROR: failed to launch: determine start command: when there is no
default process a command is required
I searched, but the results are all about web applications with "web:" in the Procfile. Any help on either question will be appreciated.
When you start the container, you'll need to pass it the worker process type like this:
$ docker run -it myapp worker
Then it should run the command you added to the Procfile.
A few other things:
Make sure you're using the heroku/python buildpack or another buildpack that includes Procfile detection.
Confirm in the build output that the worker process type was created
You can put your command as the web: process type if you do not want to add worker to your start command. There's nothing wrong with a web: process that doesn't actually run a web app.
Thanks @codefinger for the reminder. After several trials, I finally got my application launched with the command below:
docker run -it --name container_name image_name python mycode.py
In fact, the docker run command has this format:
docker run [options] image [command] [arg..]
I suspect that even if the buildpack image has no worker process, it is still possible to use the [command] argument to launch your application.
However, the image built by buildpack running successfully without any error leaves me with an increasingly weird feeling.
Same code and same requirements.txt file, nothing changed! Why does the image built by docker build give me a TypeError? So weird.
I am starting to get a handle on Docker and am trying to containerize some of the applications I use. Thanks to the tutorial I was able to create Docker images and containers, but now I am trying to think about the most efficient and practical ways to do things.
To present my use case, I have a Python script (let's call it process.py) that takes as input a single .jpg image, does some operations on this image, and then outputs the processed .jpg image.
Normally I would run it with:
python process.py -i path_of_the_input_image -o path_of_the_output_image
Then, the way I do the connection input/output with my docker is the following. First I create the docker file :
FROM python:3.6.8
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
CMD python ./process.py -i ./input_output/input.jpg -o ./input_output/output.jpg
And then, after building the image, I run docker run, mapping a local folder to the input_output folder of the container:
docker run -v C:/local_folder/:/app/input_output my_docker_image
This seems to work, but it is not really practical, as I have to create a specific local folder and mount it into the Docker container. So here are the questions I am asking myself:
Is there a more practical way of doing things? To directly send one single input file to, and directly receive one single output file from, a Docker container?
When I run the Docker image, what happens (if I understand correctly) is that it creates a Docker container that runs process.py once and then just sits there doing nothing. Even after process.py finishes, the container is still listed by "docker ps -a". Is this behaviour expected? Is there a way to automatically delete finished containers? Am I using docker run the right way?
Is there a more practical way of having a container running continuously and on which I can query to run the program process.py on demand with a given input ?
I have a python code (let's call it process.py) that takes as an input a single .jpg image, does some operations on this image, and then output the processed .jpg image.
That's most efficiently done without Docker; just run the python command you already have. If your application has interesting Python library dependencies, you can install them in a virtual environment to avoid conflicts with the system Python installation.
When I run the Docker image...
...the container runs its main command (docker run command arguments, the Dockerfile CMD, possibly combined with an ENTRYPOINT from the same sources), and when that command exits, the container exits. It will be listed in docker ps -a output, but as "Exited" (probably with status 0 for a successful completion). You can use docker run --rm to have the container automatically delete itself.
Is there a more practical way of having a container running continuously and on which I can query to run the program process.py on demand with a given input ?
Wrap it in a network service, like a Flask application. As long as this is running, you can use a tool like curl to do an HTTP POST with the input JPEG file as the body, and get the output JPEG file as the response. Avoid using local files and Docker together whenever that's an option (prefer network I/O for process inputs and outputs; prefer a database to local-file storage).
Why are volume mounts not practical?
I would argue that Dockerising your application is not practical, but you've chosen to do so for, presumably very good, reasons. Volume mounts are simply an extension to this. If you want to get data in/out of your container, the 'normal' way to do this is by using volume mounts as you have done. Sure, you could use docker cp to copy the files manually, but that's not really practical either.
As far as the process exiting goes, normally, once the main process exits, the container exits. docker ps -a shows stopped containers as well as running ones. You should see that it says Exited n minutes(hours, days etc) ago. This means that your container has run and exited, correctly. You can remove it with docker rm <containerid>.
docker ps (no -a) will only show the running ones, btw.
If you use the --rm flag in your Docker run command, it will be removed when it exits, so you won't see it in the ps -a output. Stopped containers can be started again, but that's rather unusual.
Another solution might be to change your script to wait for incoming files and process them as they are received. Then you can leave the container running, and it will just process them as needed. If doing this, make sure that your idle loop has a sleep or something in it to ensure that you don't consume too many resources.
I have a Python API in a docker container, but I want to be able to run tests without sshing in and running the command, but I'm not really sure how I can do that via the command line. For example, I know to ssh in I do (via a script so I can ssh into any of my three containers):
docker exec -it gp-api ash
but when I want to run tests, I need to ssh in, go up a folder, and then run pytest. Not sure how to do that all from the docker command line.
As stated in the docs for docker exec, you can use the -w option to set the working directory for the command.
docker exec -w /your/working/directory container_name_or_id command
I have a docker container running a bunch of python scripts. I am using HyperV as backend virtualization engine on Docker and running Docker for Windows.
The container builds just fine but when I start the container with:
docker run --memory 10240mb -it container_name
It runs the first few operations from the file, prints out the results, and then exits without an error. When I run:
docker logs --tail=50 container_id
I see just the same printouts as when I ran docker run. Funnily enough, the moment it exits is pretty random operation-wise (it might exit after the first 2 ops or sometimes 1 op), but it usually ends at about the same time, as if there were a timer letting it run for only 5 minutes, for example. The script runs fine on a different machine running VirtualBox and Docker-Machine.
Right click on the docker icon in the system tray
Click on advanced
Increase the memory setting to what you need; if you're not sure, try setting it somewhere close to the middle, depending on your system. You might go ahead and increase the CPU setting as well if you can.
Save your changes; Docker will restart.
Once that's done, you should be able to run your app without the --memory option.
I'm using the Docker Python SDK docker-py, which is quite convenient. I've looked through the documentation, but I still can't figure out how to create a daemonized container with an interactive terminal, that is to say, the equivalent of the shell command docker run -dit image.
I know docker-py offers client.containers.run to run a container, and with the detach argument I can run it as a daemon. However, I want to start it with an interactive terminal.
This is because my further code will access the container from a remote server. Is there any way to create it directly with docker-py instead of using os.system("docker run -dit image")?
After swimming in the docs for a while, I figured it out.
The equivalent of docker run -dit image in docker-py is client.containers.run(image, tty=True, stdin_open=True, detach=True). This works. Thank you, David.