I am trying to run two different commands when doing docker-compose up.
My command: parameter looks like:
gunicorn --reload analytics_api:api --workers=3 --timeout 10000 -b :8083 && exec python /analytics/model_download.py
But when I run this the container fails with the error:
gunicorn: error: unrecognized arguments: && exec python
/analytics/model_download.py
The second part of the command, python /analytics/model_download.py, is used to download some dependencies from a shared path to a directory inside the container.
I want to run it while the service is up, not during the build.
What is going wrong here?
One way is to have a startup shell script as the command, containing both entries:
startup.sh
#!/bin/sh
# start the download in the background if the other process need not wait for it
python /analytics/model_download.py &
# run gunicorn in the foreground; as long as it runs, the container stays alive
# (no keep-alive loop is needed when there is a foreground process)
exec gunicorn --reload analytics_api:api --workers=3 --timeout 10000 -b :8083
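The compose command then points at this script (assuming it is copied into the image, for example at /startup.sh, and marked executable):
command: /startup.sh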
Adding bash -c before the command solved the problem.
The command value will look like:
bash -c "python model_download.py && gunicorn --reload analytics_api:api -- workers=3 --timeout 10000 -b :8083"
I'm using a python script to send a websocket notification,
as suggested here.
The script is _wsdump.py, and I have a script script.sh that is:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I try to dockerize this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
    pip install --upgrade pip wheel && \
    pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I get a strange behaviour:
if I execute docker run -it --entrypoint=/bin/bash mycontainer and then run script.sh by hand, everything works fine and I receive the notification.
if I run the container with docker run mycontainer, I see no errors but the notification doesn't arrive.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
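That is, the last line of script.sh becomes:
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"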
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD ["python3", "/utils/_wsdump.py", "wss://mywebsocketserver:3000/message", "-t", "message"]
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command constructed by replaying the command-line arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$#"
With this in place you can debug the container setup by replacing the command part (only), like
docker run --rm your-image env
to print out its environment. The alternate command env will replace the Dockerfile CMD but the ENTRYPOINT will remain in place.
You install script.sh to the root dir /, but your ENTRYPOINT is defined to run the relative path ./script.sh.
Try changing ENTRYPOINT to reference the absolute path /script.sh instead.
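That is, in the Dockerfile above:
ENTRYPOINT ["/script.sh"]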
I'm trying to pass 2 parameters to a docker container for a dash app (via a shell script). Passing one parameter works, but two doesn't. Here's what happens when I pass two parameters:
command:
sudo sh create_dashboard.sh 6 4
Error:
creating docker
Running for parameter_1: 6
Running for parameter_2: 4
usage: app.py [-h] [-g parameter_1] [-v parameter_2]
app.py: error: argument -g/--parameter_1: expected one argument
The shell script:
echo "creating docker"
docker build -t dash-example .
echo "Running for parameter_1: $1 "
echo "Running for parameter_2: $2 "
docker run --rm -it -p 8080:8080 --memory=10g dash-example $1 $2
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
COPY src /app
EXPOSE 8080
ENTRYPOINT [ "python", "app.py", "-g", "-v"]
When I use this command:
sudo sh create_dashboard.sh 6
the docker container runs perfectly, with parameter_2 being None.
You can pass a command into a shell in the container like this (note that --memory=10g is a docker run option, so it belongs before the image name, and the image's ENTRYPOINT has to be overridden for the shell command to take effect):
docker run --rm -it -p 8080:8080 --memory=10g --entrypoint sh dash-example -c "python app.py -g $1 -v $2"
So it allows arguments and any other command.
When you docker run ... dash-example $1 $2, the additional parameters are interpreted as the "command" the container should run. Since your image has an ENTRYPOINT, the words of the command are just tacked on to the end of the words of the entrypoint (see Understand how CMD and ENTRYPOINT interact in the Dockerfile documentation). There's no way to cause the words of one command to be interspersed with the words of another; you are effectively getting a command line of
python app.py -g -v 6 4
The approach I'd recommend here is to not use an ENTRYPOINT at all. Make sure you can directly run the application script (its first line should be #!/usr/bin/env python3, it should be executable) and make the image's default CMD be to run the script:
FROM python:3.9
...
# RUN chmod +x app.py # if needed
# no ENTRYPOINT at all
CMD ["./app.py"] # finds "python" via the shebang line
Then your wrapper can supply a complete command line, including the options you need to run:
#!/bin/sh
docker run --rm -it -p 8080:8080 --memory=10g dash-example \
./app.py -g "$1" -v "$2"
(There is an alternate "container as command" pattern, where the ENTRYPOINT contains the command to run and the CMD its options. This can lead to awkward docker run --entrypoint command lines for routine debugging tasks, and if the command itself is short it doesn't really save you a lot. You'd still need to repeat the -g and -v options in the wrapper.)
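For reference, a minimal sketch of that "container as command" pattern, under the same assumptions (an executable app.py with a shebang line):
# end of Dockerfile, "container as command" variant
ENTRYPOINT ["./app.py"]
CMD []
The wrapper then passes only the options, and docker run appends them to the ENTRYPOINT:
docker run --rm -it -p 8080:8080 --memory=10g dash-example -g "$1" -v "$2"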
I am trying to do the following in a Makefile recipe:
1. Get the server container IP using a python script.
2. Build the command to run within the docker container.
3. Run the command in the docker container.
test:
SIP=$(shell python ./scripts/script.py get-server-ip)
CMD="iperf3 -c ${SIP} -p 33445"
docker exec server ${CMD}
I get this
$ make test
SIP=172.17.0.6
CMD="iperf3 -c -p 33445"
docker exec server
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
make: *** [test] Error 1
Each line of a make recipe runs in its own shell, and ${SIP} is expanded by make (not the shell) before the recipe runs, which is why it came out empty. Joining the lines into one shell command with ;\ and escaping the shell variables as $${...} fixes both. I ended up with something like this.
test:
	SERVER_IP=$(shell python ./scripts/script.py get-server-ip); \
SERVER_CMD="iperf3 -s -p ${PORT} -4 --logfile s.out"; \
CLIENT_CMD="iperf3 -c $${SERVER_IP} -p ${PORT} -t 1000 -4 --logfile c.out"; \
echo "Server Command: " $${SERVER_CMD}; \
echo "Client Command: " $${CLIENT_CMD}; \
docker exec -d server $${SERVER_CMD}; \
docker exec -d client $${CLIENT_CMD};
This seems to work ok. Would love to hear if there are other ways of doing this.
You could write something like this. Here I used a target-specific variable, assuming the IP address is required only in this rule. iperf_command is defined as a variable since the format looks fixed apart from the IP address, which is injected via the call function. Also, as the rule doesn't seem to produce its target as a file, I added a .PHONY declaration as well.
iperf_command = iperf3 -c $1 -p 33445
.PHONY: test
test: iperf_server_ip = $(shell python ./scripts/script.py get-server-ip)
test:
docker exec server $(call iperf_command,$(iperf_server_ip))
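With the IP from the example output above, make test would then run:
docker exec server iperf3 -c 172.17.0.6 -p 33445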
I use gunicorn to start my django app. For that I usually go into the directory where the manage.py file is located and then use this command:
gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi --workers=2
which I got from the official documentation (it uses a different settings file).
Now I want to write a script that does that, which I found here:
#!/bin/sh
GUNICORN=/usr/local/bin/gunicorn
ROOT=/path/to/folder/with/manage.py
PID=/var/run/gunicorn.pid
#APP=main:application
if [ -f $PID ]; then rm $PID; fi
cd $ROOT
exec $GUNICORN -c $ROOT/ gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi --pid=$PID #$APP
But I get this
usage: gunicorn [OPTIONS] [APP_MODULE]
gunicorn: error: unrecognized arguments: app.wsgi
when I execute it. Any idea how to write it so it will work?
And also, what is that PID?
Thanks!
OK, it's pretty simple: just create a file (sudo nano gunicorn.sh) with
#!/bin/sh
cd /path/to/folder/with/manage.py/
exec gunicorn --env DJANGO_SETTINGS_MODULE=app.my_settings app.wsgi
make it executable, and then execute it:
chmod +x gunicorn.sh
./gunicorn.sh
This is the code for my shell script :
#!/bin/bash
source /path/to/active
gunicorn_django -c /path/to/conf.py -D
When the above sh file is executed, it starts the gunicorn process, but it does not use the config file.
But if I execute the command directly from the command line, like
gunicorn_django -c path/to/conf.py -D
then it does use the config file.
Also, in the sh file, if I give options directly, like -w 3 --error-logfile etc., then it does pick up the options.
Use this script; it worked for me:
#!/bin/bash
source /path/to/active
gunicorn_django -c $(pwd)/path/to/conffilefrom/presentworkingdirectory -D
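That is, the -c path is built as an absolute path from the current working directory. A variant sketch, with hypothetical paths, that instead resolves the config file relative to the script's own location:
#!/bin/bash
# directory this script lives in, regardless of where it is invoked from
DIR="$(cd "$(dirname "$0")" && pwd)"
source /path/to/active
exec gunicorn_django -c "$DIR/conf.py" -D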