How to control host from docker container?
For example, how do I execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
And check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f); it should display "hello world".
PART 2 - Run commands through the pipe
On the host, instead of running tail -f, which just outputs whatever is sent as input, run this command that will execute the input as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after the ls -l output is displayed, the host stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is that if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash shebang
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
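Putting PART 3 and PART 4 together, here is a minimal sketch of what execpipe.sh could look like (same example paths as above; adjust them to your setup):

#!/bin/bash
# Listen on the named pipe forever; whatever is echoed into the pipe is executed on the host.
# Both stdout and stderr of each command are written to the output file.
while true; do
    eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt
done

And the matching crontab entry so it survives reboots:

@reboot /path/to/execpipe.sh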
PART 5 - Make it work with docker
If you are using both docker compose and a Dockerfile like I do, here is what I've done:
Let's assume you want to mount mypipe's parent folder as /hostpipe in your container.
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
- /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
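For orientation, here is roughly how that fragment could sit in a full docker compose file; the service name and image are placeholders, only the volumes entry comes from this answer:

version: '3'
services:
  myservice:                       # placeholder service name
    build: .                       # or image: your-image
    volumes:
      - /path/to/pipe:/hostpipe    # mount the pipe's parent folder into the container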
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483 ), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have a Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs")

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do this than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop)
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop)
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300)
Use a named pipe.
On the host OS, create a script that loops and reads commands, and then call eval on them.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
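Condensed into a sketch (example paths; see the detailed named-pipe answer above for the full walkthrough):

# on the host
mkfifo /path/to/pipe/mypipe
while true; do eval "$(cat /path/to/pipe/mypipe)"; done

# in the container, with /path/to/pipe mounted as /hostpipe
echo "touch /tmp/created_on_host" > /hostpipe/mypipe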
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, allowed only to execute that script via sudo (that can be done from the sudoers file).
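As a sketch of that recommendation (the user name and script path are placeholders, not from my original setup), the sudoers entry for such a locked-down account could look like this:

# /etc/sudoers.d/dockerhelper -- edit with visudo
# the otherwise unprivileged "dockerhelper" user may run exactly this one script as root, nothing else
dockerhelper ALL=(root) NOPASSWD: /usr/local/bin/host-task.sh

The container would then call: ssh -l dockerhelper ${HOSTNAME} "sudo /usr/local/bin/host-task.sh"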
UPDATE: Named Pipes
The solution I suggested above was just the one I used while I was still relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes. This seems to be a better solution.
However, nobody there mentioned anything about security. The script that will evaluate the commands sent through the pipe (the script that calls eval) must actually not use eval on the whole pipe output, but should handle specific cases and call the required commands according to the text sent; otherwise any command that can do anything could be sent through the pipe.
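For illustration, a host-side listener along those lines might look like the sketch below, where only a couple of explicitly whitelisted actions (the action names here are made up) are ever executed:

while true; do
    cmd="$(cat /path/to/pipe/mypipe)"
    case "$cmd" in
        restart-nginx) systemctl restart nginx ;;                            # allowed action
        renew-certs)   certbot renew ;;                                      # allowed action
        *)             echo "refused: $cmd" >> /var/log/pipe_refused.log ;;  # everything else is ignored
    esac
done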
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software, say a script to install docker-compose. You could do something like:
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by Luc Juggery.
All you need to do in order to gain a full shell into your Linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) allows running the process sh in the same isolation context as the process with PID 1.
The whole command will then provide an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
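For a one-off, non-interactive command on the host, the same trick can be used like this (a sketch; the touched file name is just an example):

docker run --rm --privileged --pid=host alpine:3.8 \
    nsenter -t 1 -m -u -n -i sh -c 'touch /tmp/created_from_container_via_nsenter'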
Write a simple Python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, and make an HTTP request to localhost:8080 to ask the Python server to run shell scripts with popen; either run curl or write code to make the HTTP request, e.g. curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class will handle any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        p_status = p.wait()
        (output, err) = p.communicate()
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the
    # incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ', PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
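To sketch how the pieces fit together (the file name, image and use of host networking are my assumptions, not part of the answer above): save the script as server.py, start it on the host with Python 2, and reach it from a container that shares the host network:

# on the host
python server.py &

# start a throwaway container on the host network (Linux hosts), so localhost:8080 inside it reaches the server above
docker run --rm -it --network host alpine:3.7 sh

# inside the container
apk add --no-cache curl
curl -d '{"foo":"bar"}' http://localhost:8080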
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
ci:
command: ...
image: ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container,
the docker server running on your host will see the request and provision the sibling container.
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
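As a quick illustration (the image names are placeholders; any image that ships the docker CLI will do):

# on the host: start a container that can talk to the host's Docker daemon
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh

# inside that container: these commands hit the HOST daemon, so new containers are siblings, not children
docker ps
docker run --rm alpine echo "provisioned by the host daemon from inside a container"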
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the docs for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part here is --network host, as this will execute from the host's (network) context:
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7
sh /test.sh
test.sh should contain whatever commands you need (ifconfig, netstat, etc.).
Now you will be able to get output from the host's context.
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario I just SSH into the host (via the host IP) from within a container, and then I can do anything I want on the host machine.
I found the answers using named pipes awesome. But I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by #Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple Ruby method:
def fifo_exec(cmd)
exec_in = '/path/to/pipe/exec_in'
exec_out = '/path/to/pipe/exec_out'
%x[ echo #{cmd} > #{exec_in} ]
%x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host; commands/data can be passed to it through Redis (or RabbitMQ). In the example below, this happens in a Django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
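If a full Celery worker is more than you need, the same queue idea can be approximated with Redis alone; a rough bash sketch (the queue name, Redis address and the use of eval are my own illustration, not from the linked article):

# on the host: block until a command arrives on the Redis list, then run it
while true; do
    cmd="$(redis-cli -h 127.0.0.1 blpop host_commands 0 | tail -n 1)"
    eval "$cmd"    # better: whitelist specific commands, as discussed in the named-pipe answers
done

# from the container (with the Redis port reachable from it):
redis-cli -h 172.17.0.1 rpush host_commands "touch /tmp/from_container"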
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service reachable using some protocol. I can't think of any proper use case for a Docker container getting the rights to execute arbitrary stuff on the host.
Background
The system is CentOS 7, which comes with Python 2.x. It has 1 GB of memory and a single core.
I installed Python 3.x, and I can run Python 3 code with python3.
The django-celery project is based on a Python 3.x virtualenv, and I have set it up with nginx, uwsgi and mariadb. At least I think so, since no errors happened.
I am trying to use supervisor to control the django-celery worker, like below:
command=env/bin/python project/manage.py celeryd -l INFO -n worker_%(process_num)s
numprocs=4
process_name=projects_worker_%(process_num)s
stdout_logfile=logfile.log
stderr_logfile=logfile_err.log
I have also configured celery events and celery beat; that part works well, no errors happened. The error comes from the worker part.
When I keep numprocs bigger than 1, it runs at first; when I do supervisorctl status, all processes are running.
But when I run the same command to see the status a few more times, some processes' status changes to starting.
So I tried more times, and I found that the workers' status always changes from running to starting and then from starting back to running, without stopping.
When I check supervisor's logfile at tmp/supervisor.log, it shows something like:
exit status 1; not expected
entered runnging state,process has stayed up for > than 1 seconds(startsecs)
'project_worker_0' with pid 2284
Maybe this shows why the workers change status all the time.
What's more, when I change numprocs to 1, the worker still fails. The worker's log shows me:
stale pidfile exists.Removing it
But I did not point the worker at any pidfile path. I only found the events' and beat's pidfiles at the / path, and no worker pidfile. I also tried find / -name '*.pid' to look for a pidfile for the worker or celeryd, but it did not exist.
Question
Firstly, I want to deploy the project, so is there any other way to deploy django-celery with the virtualenv's celery part?
If anyone can tell me how this phenomenon comes about, I would be happy to keep using supervisor to deploy the celery part. Can anyone help me with it?
PS
Any of your thoughts may be helpful to me, best wishes!
Finally, I solved this problem last night.
About the reason
I had made the project run successfully on a Windows 10 system, but did not check it again when I moved the project to CentOS 7.x. The command env/bin/python project/manage.py celeryd could not run successfully, so supervisor would start a process that failed soon after.
Why could the command not succeed? I had pip-installed all the packages needed, but it showed the error below:
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
I searched some blog posts about this error and got the answer:
export C_FORCE_ROOT='true' # in the CentOS environment
Actions to solve it (after meeting an error like this)
Add export C_FORCE_ROOT='true' to CentOS's environment file and source it.
Check that the command env/bin/python project/manage.py celeryd runs successfully.
Restart supervisord. Attention please! Not supervisorctl reload; that just reloads the .conf file, not the environment file. Instead, kill the supervisord -c xx.conf process (ps aux | grep supervisord and kill -9 process_number, be careful) and start it again.
Some URLs about this
The error when just running celeryd does not succeed (in Chinese)
So I have some code that I'm trying to run, and when it finishes it should display a message to the user saying what changes have happened. This works fine if I run it in a terminal, but when I add it to a cronjob nothing happens and no error is shown.
It seems to me that it is losing the display session, but I can't figure out how to solve it. Here is the show-message code:
def sendmessage(message):
    subprocess.Popen(['notify-send', message])
    return
I also tried this version:
def sendmessage(message):
    Notify.init("Changes")
    n = Notify.Notification.new(message)
    n.show()
    return
This version gives the error below (again only when run in the background):
cannot open display:
Activated service 'org.freedesktop.Notifications' failed: Process org.freedesktop.Notifications exited with status
I would be very thankful for any help or alternative.
I had a similar issue running a system daemon, where I wanted the user to be notified about a network bandwidth upload being exceeded.
The program was designed to run under systemd but at a push also under upstart.
You may get some mileage out of my configuration files:
systemd - bandwidth.service file
[Unit]
Description=Bandwidth traffic monitor
Documentation=man:bandwidth(7)
After=graphical.target
[Service]
Type=simple
Environment="DISPLAY=:0" "XAUTHORITY=/home/#USER#/.Xauthority"
PIDFile=/var/run/bandwidth.pid
ExecStart=/usr/bin/dbus-launch /usr/sbin/bandwidth_logd
ExecReload=/bin/kill -HUP $MAINPID
User=root
[Install]
WantedBy=graphical.target
upstart - bandwidth.conf file
#
# These are the scripts that run when a network appears.
# Use this on an upstart system in /etc/init
# test for systemd or upstart system with
# ps -p1 | grep systemd && echo systemd || echo upstart
# better
# ps -p1 | grep systemd >/dev/null && echo systemd || echo upstart
# Using upstart the script will need to daemonise which is a bugger
# so test for it and import Daemon_server.py
description "Bandwidth upstart events"
start on net-device-up # Start a daemon or run a script
stop on net-device-down # (Optional) Stop a daemon, scripts already self-terminate.
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect fork
#Set environment variables
env DISPLAY=":0"
export DISPLAY
env XAUTHORITY="/home/#USER#/.Xauthority"
export XAUTHORITY
script
# You can put shell script in here, including if/then and tests.
# replace #USER# with your name and ensure that you have .Xauthority in $HOME
/usr/sbin/bandwidth_logd
end script
You will note that in both configuration files the environment becomes key and #USER# is replaced by a real user name with a valid .Xauthority file in their $HOME directory.
In the Python code I use the following to emit the message (import notify2).
import notify2

def warning(msg):
    result = True
    try:
        notify2.init("Bandwidth")
        mess = notify2.Notification("Bandwidth", msg, '/usr/share/bandwidth/bandwidth.png')
        mess.set_urgency(2)
        mess.set_timeout(0)
        mess.show()
    except:
        result = False
    return result
I have a Django application on an Ubuntu server that is run with gunicorn and started/stopped with supervisor. I am migrating it to a new server running the same Ubuntu server OS. Everything had been working fine until now, when I try starting it with supervisor: it exits with ERROR (abnormal termination). I'm using the exact same configuration files on my new server as I was on the old one, and it's really bothering me why it's not working on the new server. I'll put some of my configs below, as well as part of the supervisor log.
/etc/supervisor/supervisor.conf
; supervisor config file
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0766 ; socket file mode (default 0700)
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
childlogdir=/var/log/supervisor ; ('AUTO' child log dir, default $TEMP)
loglevel=debug
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL for a unix socket
[include]
files = /etc/supervisor/conf.d/*.conf
/etc/supervisor/conf.d/beta.conf
[program:beta]
command = /var/www/beta/myapp/bin/gunicorn_start
user = eli
stdout_logfile = /var/web-data/logs/beta/gunicorn_supervisor.log
redirect_stderr = true
/var/www/beta/myapp/bin/gunicorn_start
#!/bin/bash
source /var/www/beta/env/bin/activate
exec gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi
/var/www/beta/myapp/bin/gunicorn_config.py
command = '/var/www/beta/env/bin/gunicorn'
pythonpath = '/var/www/beta/myapp'
bind = '127.0.0.1:9000'
workers = 1
user = 'eli'
/var/log/supervisor/supervisord.log
http://pastebin.com/fAGdJMKg
/var/web-data/logs/beta/gunicorn_supervisor.log
empty
My next step would be to just wipe the new server clean and start from scratch to see if that might solve my problem. It's really bothering me that, with two servers with the exact same configuration, one works and the other doesn't.
I have also tried changing the location of the socket file as well as adding user=eli to the [supervisord] block to no avail.
Lastly, if I run /var/www/beta/myapp/bin/gunicorn_start from the command line as eli it will run and I'm able to access my website. However when I sudo su and then run it, it will pause like it's running (~1sec) and then exit. It doesn't print anything to the console nor add anything to the log file.
The problem was solved by running /var/www/beta/myapp/bin/gunicorn_start line by line as root. The first line (source) worked fine, but it was the second line that was having trouble.
I then tried running gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi and it was doing the same thing but still not showing any errors. So then I ran gunicorn -c /var/www/beta/myapp/bin/gunicorn_config.py myapp.wsgi --preload and found that it was raising a KeyError for an environment variable that I had in my settings. Then it all made sense: when I was running this as root, it didn't have the same environment variables as my normal user did.
And granted, this was the only difference between my two servers: I had decided to try out environment variables for my Django secret key and database password. So I switched these to their actual values, then ran sudo supervisorctl start beta and it started perfectly fine.
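For anyone hitting the same thing: instead of hardcoding the values, the environment variables could also have been handed to the process by supervisor itself, e.g. (the variable names and values here are placeholders):

[program:beta]
command = /var/www/beta/myapp/bin/gunicorn_start
user = eli
environment = DJANGO_SECRET_KEY="changeme",DATABASE_PASSWORD="changeme"
stdout_logfile = /var/web-data/logs/beta/gunicorn_supervisor.log
redirect_stderr = true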
When trying to run python bottle on port 80 I get the following:
socket.error: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions
My goal is to run the web server on port 80 so the URLs will be nice and tidy, without any need to specify the port,
for example:
http://localhost/doSomething
instead of
http://localhost:8080/doSomething
Any ideas?
Thanks
Exactly as the error says: you need permissions to run something on port 80, and a normal user cannot do it. You can execute the bottle web app as root (or maybe www-data) and it should be fine as long as the port is free.
But taking security (and stability) into consideration, you should look at different ways of deploying it, for example nginx together with gunicorn.
Gunicorn Docs
Nginx Docs
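A minimal sketch of that kind of setup (the module name and port are placeholders): run the app on an unprivileged port with gunicorn, and let nginx, which is started as root, own port 80 and proxy to it:

# run the WSGI app (e.g. a bottle app exposed as "app" in myapp.py) on an unprivileged port
gunicorn -b 127.0.0.1:8080 myapp:app

# then configure nginx to listen on port 80 and proxy_pass to http://127.0.0.1:8080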
Check your system's firewall setting.
Check whether another application already uses port 80, using the following commands:
On unix: netstat -an | grep :80
On Windows: netstat -an | findstr :80
According to Windows Sockets Error Codes:
WSAEACCES 10013
Permission denied.
An attempt was made to access a socket in a way forbidden by its
access permissions. An example is using a broadcast address for sendto
without broadcast permission being set using setsockopt(SO_BROADCAST).
Another possible reason for the WSAEACCES error is that when the bind
function is called (on Windows NT 4.0 with SP4 and later), another
application, service, or kernel mode driver is bound to the same
address with exclusive access. Such exclusive access is a new feature
of Windows NT 4.0 with SP4 and later, and is implemented by using the
SO_EXCLUSIVEADDRUSE option.
Sometimes you don't need to install nginx; Python with gunicorn under supervisor is a viable alternative, but you need a few tricks to make it work.
I assume you already know how to install supervisor; afterwards, install the requirements again:
pip3 install virtualenv
mkdir /home/user/.envpython
virtualenv /home/user/.envpython
source /home/user/.envpython/bin/activate
cd /home/user/python-path/
pip3 install -r requirements
Create a supervisor file like this:
nano /etc/supervisord.d/python-file.conf
and edit it following this example; adapt the program section you need, and keep in mind that the plain python3 example runs on other ports > 1024:
;example with path python supervisor in background
[program:python]
environment=HOME="/home/user",USER="user"
user=user
directory = /home/user/python-path
command = python3 /home/user/python-path/main.py
priority = 900
autostart = true
autorestart = true
stopsignal = TERM
;redirect_stderr = true
stdout_logfile = /home/user/.main-python-stdout.log
stderr_logfile = /home/user/.main-python-stderr.log
;example with path python gunicorn supervisor and port 80
[program:gunicorn]
;environment=HOME="/home/user",USER="user"
;user=user
directory = /home/user/python-path
command = bash /home/user/.scripts/supervisor-initiate.sh
priority = 900
autostart = true
autorestart = true
stopsignal = TERM
;redirect_stderr = true
stdout_logfile = /home/user/.main-python-stdout.log
stderr_logfile = /home/user/.main-python-stderr.log
and create the file
nano /home/user/.scripts/supervisor-initiate.sh
with the following content
source /home/user/.envpython/bin/activate
cd /home/user/python-path
gunicorn -w 1 -t 120 -b 0.0.0.0:80 main:app
I assume your Python file is called main and that you create the app with Flask or Django and call it "app".
Then just restart the supervisord process:
systemctl restart supervisord
and you have the app running with gunicorn on port 80. I'm posting this because I searched for a very long time before finding this solution.
Hoping it works for anyone who needs it.