How to run a container and execute Python on Docker in one command? - python

We have a very small Python script that we need to run in a container. The following two commands work fine; I want to merge them into one. The ultimate goal is that the container starts (based on the image), runs the Python script, and exits by itself.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d pythonimage:latest
docker exec -it mycontainer python3 /usr/src/app/subfolder/createfile.py
I tried -c "/bin/bash python3 /usr/src/app/subfolder/createfile.py", but this didn't work.

You can add a file called Dockerfile (no dot before or after the file name, exactly that name), based on your existing image, with the contents below:
FROM pythonimage:latest
ENV run=/usr/src/app/subfolder/createfile.py
CMD ["/bin/bash", "-c", "python3 ${run}"]
Then run:
docker build -t pythonimage2 .   # the tag should be a new name, like pythonimage2
Now run it:
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d pythonimage2:latest
Now the command the container executes on startup is python3 /usr/src/app/subfolder/createfile.py.
To avoid hard-coding the script path, you can put it in a variable and modify just that one line each time. You can even do this in your Dockerfile:
FROM pythonimage:latest
ENV path=/usr/src/app/subfolder
ENV file=$path/createfile.py
CMD ["/bin/bash", "-c", "python3 ${file}"]
Now you only need to modify the file variable. If any of your sub-directories change, you can remove that part from the path variable and add it to the file variable.
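If you prefer not to build a new image at all, a single docker run can also cover both of your original commands, assuming python3 is available on the image's PATH; --rm removes the container once the script exits:
docker run --rm --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ pythonimage:latest python3 /usr/src/app/subfolder/createfile.py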

Related

Why does a dockerized script behave differently when I docker run it than when I docker exec it?

I'm using a Python script to send a websocket notification,
as suggested here.
The script is _wsdump.py and I have a script script.sh that is:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I try to dockerizing this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
    pip install --upgrade pip wheel && \
    pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I see a strange behaviour:
if I execute docker run -it --entrypoint=/bin/bash mycontainer and then run script.sh by hand, everything works fine and I receive the notification.
if I run the container with docker run mycontainer, I see no errors but the notification never arrives.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
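With that one change, script.sh would look like:
#!/bin/sh
set -o allexport
. /root/.env set
env
# run in the foreground so the container stays up until the script finishes
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"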
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command built from the script's arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$#"
With this setup you can debug the container by replacing the command part (only), for example
docker run --rm your-image env
to print out its environment. The alternate command env replaces the Dockerfile CMD, but the ENTRYPOINT remains in place.
You install script.sh into the root directory /, but your ENTRYPOINT runs the relative path ./script.sh.
Try changing the ENTRYPOINT to reference the absolute path /script.sh instead.
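That is, in the Dockerfile:
ENTRYPOINT ["/script.sh"]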

Running multiple python main files off Docker image

I have created a Docker image with a Dockerfile, where the ENTRYPOINT is as follows:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python", "./myprojectmain.py", "--config", "./config.py"]
When I run it, I use the command:
docker run myproject
and all seems fine.
However, I have a secondary .py file in the root of the project called setup.py. The purpose of this file is to update some of the config and JSON files after getting some input from the user.
Is there a way to run this secondary file (setup.py), or do I need to create a whole new image (which seems ridiculous)?
Thanks
Well... if you have an image, you don't have to use the entrypoint; just run your script like this:
docker run image python /some/path/myscript.py
or
docker run image /bin/bash -c "cd /some/path && python myscript.py"
or, with an entrypoint that stops at python, pass the script to run as the command:
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "python"]
docker run myproject ./myprojectmain.py --config ./config.py
docker run myproject ./myproject2main.py --config ./config.py
You can straightforwardly provide an alternate command after the image name in the docker run command. It's harder to override the entrypoint, though. If you have both a command and an entrypoint then they are combined together into a single command.
This workflow is easiest if your Dockerfile has a CMD, and that's a complete runnable shell command. If you have an ENTRYPOINT at all, it is some kind of wrapper that does some initial setup and then runs the command it's given as additional arguments. In this particular setup, conda run with its arguments seems to meet that need and have the correct form, so you could say
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "myproject", "--"]
CMD ["python", "./myprojectmain.py", "--config", "./config.py"]
(Note that conda run seems to have some issues; you could probably simulate it using a custom entrypoint wrapper script or use a pip-based non-virtual-environment workflow instead.)
If you split the ENTRYPOINT and CMD like this, then you can run
docker run myproject \
python setup.py
The alternate python setup.py command will be appended to the conda run entrypoint command.
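Concretely, the command that ends up running inside the container would be roughly:
conda run --no-capture-output -n myproject -- python setup.py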
... update some of the config and json files ...
It's often a good idea to inject these into your container using a bind mount. Depending on how exactly the files get set up, you may be able to initialize them from the host environment, without Docker:
./setup.py
docker run -d -v $PWD/config:/app/config myproject
but if they are sensitive to the Docker environment in some way, you can do it in Docker too; make sure to mount the same configuration storage into both containers:
docker network create mynet
docker volume create config
docker run --rm --net mynet -v config:/app/config myproject ./setup.py
docker run -d -p 8000:8000 --net mynet -v config:/app/config myproject

Passing multiple parameters to docker container

I'm trying to pass two parameters to a Docker container for a Dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can pass a command into the container's shell like this:
docker run --rm -it -p 8080:8080 --memory=10g dash-example sh -c "$1 $2"
So it accepts arguments and any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
CMD ["./app.py"] # finds "python" via the shebang line
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
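For illustration only, a sketch of that alternate pattern (reusing the executable app.py from above; the default option values here are placeholders) would be:
# Dockerfile: the command itself is the ENTRYPOINT, its default options are the CMD
ENTRYPOINT ["./app.py"]
CMD ["-g", "6", "-v", "4"]
and the wrapper would then pass only the options, which replace the CMD:
docker run --rm -it -p 8080:8080 --memory=10g dash-example -g "$1" -v "$2"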

Running a Python script automatically when launching a Docker container

Is it possible to run a python script automatically upon starting a Docker container?
My command to attach to an image is:
docker run -i -t --entrypoint /bin/bash myimage -s
Is there a way to add an additional command that runs a script upon launching it?
I would prefer not to use a Dockerfile, as some of the Python modules I use are from private repos and need to be downloaded manually, so a Dockerfile would not completely build the image I want.
As a matter of fact there is. Just don't use --entrypoint. Instead:
docker run -it myimage /bin/bash -c /run.sh
Obviously, this assumes that the image itself contains a simple Bash script at the location /run.sh.
#!/bin/bash
command1
command2
command3
...
If you don't want that, you can mount the current folder inside the running container and run a local script:
docker run -it -v $(pwd):/mnt myimage /bin/bash -c /mnt/run.sh
ENTRYPOINT vs. CMD seems to be a common cause of confusion.
Think about it this way:
ENTRYPOINT is a way to hard-code behaviour that normally isn't changed at run time (although docker run --entrypoint can still override it).
CMD is the default way to supply a command to be run.
Docker containers can be set up to run as self-contained applications. If you're so inclined, you could create throwaway containers that accept command line arguments (a file for example), pull that in, work their magic and return you a processed file. Some people use this to set up build environments with different configurations and just run them on demand, not cluttering up their host machine.
However, your usage scenario feels tedious, since you are apparently doing the setup by hand. It would be easier to set the download credentials as environment variables, like this:
docker run -d -e "DEEP=purple" -e "LED=zeppelin" myimage /bin/bash -c /run.sh
You can then use those within the script as placeholders. This way, you get the best of both worlds. For added security, your run.sh should unset the environment variables once they have been used, like this:
#!/bin/bash
command1
command2
command3
...
unset DEEP
unset LED
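For example (the repository URL here is hypothetical, and treating DEEP/LED as download credentials is only an illustration), run.sh could consume the variables and then discard them like this:
#!/bin/bash
# clone a private repo using the injected credentials (hypothetical URL)
git clone "https://${DEEP}:${LED}@git.example.com/private/repo.git"
unset DEEP
unset LED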

Docker interactive mode and executing script

I have a Python script in my Docker container that needs to be executed, but I also need interactive access to the container once it has been created (with /bin/bash).
I would like to be able to create my container, have my script executed, and be inside the container to see the changes/results that have occurred (no need to manually execute my Python script).
The current issue I am facing is that if I use the CMD or ENTRYPOINT instructions in the Dockerfile, I am unable to get back into the container once it has been created. I tried using docker start and docker attach but I'm getting the error:
sudo docker start containerID
sudo docker attach containerID
"You cannot attach to a stepped container, start it first"
Ideally, something close to this:
sudo docker run -i -t image /bin/bash python myscript.py
Assume my Python script contains something like this (it's irrelevant what it does; in this case it just creates a new file with text):
open('newfile.txt','w').write('Created new file with text\n')
When I create my container I want my script to execute and I would like to be able to see the content of the file. So something like:
root@66bddaa892ed:/# sudo docker run -i -t image /bin/bash
bash4.1# ls
newfile.txt
bash4.1# cat newfile.txt
Created new file with text
bash4.1# exit
root@66bddaa892ed:/#
In the example above my python script would have executed upon creation of the container to generate the new file newfile.txt. This is what I need.
My way of doing it is slightly different, with some advantages.
It is actually a multi-session server rather than a script, but it could be even more usable in some scenarios:
# Just create interactive container. No start but named for future reference.
# Use your own image.
docker create -it --name new-container <image>
# Now start it.
docker start new-container
# Now attach bash session.
docker exec -it new-container bash
The main advantage is that you can attach several bash sessions to a single container. For example, I can exec one session with bash for tailing logs and in another session run the actual commands.
BTW, when you detach the last 'exec' session your container is still running, so it can perform operations in the background.
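For example (the log path is just illustrative):
# session 1: follow the application log
docker exec -it new-container bash -c 'tail -f /var/log/app.log'
# session 2: a separate interactive shell in the same container
docker exec -it new-container bash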
You can run a docker image, perform a script and have an interactive session with a single command:
sudo docker run -it <image-name> bash -c "<your-script-full-path>; bash"
The second bash keeps the interactive terminal session open, irrespective of the CMD command in the Dockerfile the image was created with, since the CMD is overwritten by the bash -c command above.
There is also no need to append a command like local("/bin/bash") to your Python script (or bash in the case of a shell script).
Assuming that the script has not yet been transferred from the Docker host to the docker image by an ADD Dockerfile command, we can map the volumes and run the script from there:
sudo docker run -it -v <host-location-of-your-script>:/scripts <image-name> bash -c "/scripts/<your-script-name>; bash"
Example: assuming that the python script in the original question is already on the docker image, we can omit the -v option and the command is as simple as follows:
sudo docker run -it image bash -c "python myscript.py; bash"
Why not this?
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker cp scriptPy:/path/to/newfile.txt /path/to/host
vim /path/to/host
Or if you want it to stay on the container
docker run --name="scriptPy" -i -t image /bin/bash python myscript.py
docker start scriptPy
docker attach scriptPy
Hope it was helpful.
I think this is what you mean.
Note: this uses Fabric (because I'm too lazy and/or don't have the time to work out how to wire up stdin/stdout/stderr to the terminal properly, but you could spend the time and use straight subprocess.Popen):
Output:
$ docker run -i -t test
Entering bash...
[localhost] local: /bin/bash
root@66bddaa892ed:/usr/src/python# cat hello.txt
Hello World!root@66bddaa892ed:/usr/src/python# exit
Goodbye!
Dockerfile:
# Test Docker Image
FROM python:2
ADD myscript.py /usr/bin/myscript
RUN pip install fabric
CMD ["/usr/bin/myscript"]
myscript.py:
#!/usr/bin/env python
from __future__ import print_function
from fabric.api import local
with open("hello.txt", "w") as f:
f.write("Hello World!")
print("Entering bash...")
local("/bin/bash")
print("Goodbye!")
Sometimes your Python script may need to call other files in your folder, like other Python scripts, CSV files, JSON files, etc.
I think the best approach is to share the directory with your container, which makes it easier to create one environment that has access to all the required files.
Create a script:
sudo nano /usr/local/bin/dock-folder
with this content:
#!/bin/bash
echo "IMAGE = $1"
## image name is the first param
IMAGE="$1"
## container name is created combining the image and the folder address hash
CONTAINER="${IMAGE}-$(pwd | md5sum | cut -d ' ' -f 1)"
echo "${IMAGE} ${CONTAINER}"
# remove the container for this dir, if it exists
## rm remove container command
## pwd | md5 get the unique code for the current folder
## "${IMAGE}-$(pwd | md5sum)" create a unique name for the container based in the folder and image
## --force force the container be stopped and removed
if [[ "$2" == "--reset" || "$3" == "--reset" ]]; then
echo "## removing previous container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
# create one special container for this folder based in the python image and let this folder mapped
## -it interactive mode
## pwd | md5 get the unique code for the current folder
## --name="${CONTAINER}" create one container with unique name based in the current folder and image
## -v "$(pwd)":/data create ad shared volume mapping the current folder to the /data inside your container
## -w /data define the /data as the working dir of your container
## -p 80:80 some port mapping between the container and host ( not required )
## "${IMAGE}" the image used as the starting point (e.g. python)
echo "## creating container ${CONTAINER} as ${IMAGE} image"
docker create -it --name="${CONTAINER}" -v "$(pwd)":/data -w /data -p 80:80 "${IMAGE}"
# start the container
docker start "${CONTAINER}"
# enter in the container, interactive mode, with the shared folder and running python
docker exec -it "${CONTAINER}" bash
# remove the container after exit
if [[ "$2" == "--remove" || "$3" == "--remove" ]]; then
echo "## removing container ${CONTAINER}"
docker rm "${CONTAINER}" --force
fi
Add execution permission
sudo chmod +x /usr/local/bin/dock-folder
Then, from your project folder, you can call the script:
# create, if it does not already exist, a unique container for this folder and image, and open an interactive session in it
dock-folder python
# destroy the container if it already exists and create a fresh one
dock-folder python --reset
# destroy the container after closing the interactive session
dock-folder python --remove
This call will create a new python container sharing your folder, which makes all the files in the folder, such as CSVs or binary files, accessible inside it.
Using this strategy, you can quickly test your project in a container and interact with the container to debug it.
One issue with this approach is reproducibility. You may install something from your shell session that is required for your application to run, but that change only happened inside your container, so anyone who tries to run your code will have to figure out what you did and do the same.
So, if you can run your project without installing anything special, this approach may suit you well. But if you had to install or change things in your container to be able to run your project, you probably need to create a Dockerfile to save those commands. That makes all the steps, from building the container to making the required changes and loading the files, easy to replicate.
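For instance, a minimal Dockerfile capturing such setup steps might look like this (the file names requirements.txt and main.py are placeholders for your own project files):
FROM python:3.8
WORKDIR /data
# record the dependencies you would otherwise install by hand inside the container
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]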
