So I have a simple app written in Python, running on a set of 10 Raspberry Pis.
It is a folder with one runnable script.
I want to have, on my external server with a public IP, some kind of CI/CD-like service that would deploy updates to all edge nodes and restart my app on them.
Internet access is rare on the edge devices, so I want to push updates by pressing a button on the server.
Is there such a thing for Python programs that are meant to run on edge devices?
As I understand it, the main problem is to update and run a script on multiple Raspberry Pi boards, correct?
There are a lot of ready-made solutions like dokku or piku. Both allow you to do git push deployments to your own servers (manually).
Or you can develop your own solution, using GitHub webhooks or an HTML form (for manual pushes) and a Flask web server that performs the CI/CD steps internally.
You'll need to run a script like the one below on each node/board, and configure the webhook with a URL similar to http://your-domain-or-IP.com:8000/deploy-webhook, but with a different port per node.
Or you can open that page manually from a browser, or create a separate page that triggers all nodes at once; see the fan-out sketch after the code. Whatever you wish.
from flask import Flask
import subprocess

app = Flask(__name__)

script_proc = None
src_path = '~/project/src/'


def bash(cmd):
    # run the command through the shell (so '~' expands) and wait for it to finish
    subprocess.run(cmd, shell=True, check=True)


def pull_code(path):
    bash('git -C {path} reset --hard'.format(path=path))
    bash('git -C {path} clean -df'.format(path=path))
    bash('git -C {path} pull -f'.format(path=path))
    # or, if you just need to copy files to a remote machine instead of pulling
    # (example of a remote path: "pi@192.168.1.1:~/project/src/"):
    # bash('scp -r {src_path} {dst_path}'.format(src_path=src_path, dst_path=path))


def installation(python_path):
    bash('{python_path} -m pip install -r requirements.txt'.format(python_path=python_path))


def stop_script():
    global script_proc
    if script_proc:
        script_proc.terminate()


def start_script(python_path, script_path, args=None):
    global script_proc
    # Popen (not run) so the deployed script keeps running in the background
    script_proc = subprocess.Popen([python_path, script_path] + (args or []))


@app.route('/deploy-webhook')
def deploy_webhook():
    project_path = '~/project/some_project_path'
    script_path = 'script1.py'
    python_path = 'venv/bin/python'

    pull_code(project_path)
    installation(python_path)
    stop_script()
    start_script(python_path, script_path)
    return 'Deployed'


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)  # port must match the webhook URL for this node
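For the "press a button on the server" part, a minimal fan-out sketch that calls every node's webhook could look like the following. The endpoint list and the use of the requests library are assumptions; any HTTP client will do:

import requests

# hypothetical endpoints -- one port per node, as described above
NODES = [
    'http://your-domain-or-IP.com:8000/deploy-webhook',
    'http://your-domain-or-IP.com:8001/deploy-webhook',
]

def deploy_all():
    for url in NODES:
        try:
            resp = requests.get(url, timeout=30)
            print(url, resp.status_code, resp.text)
        except requests.RequestException as exc:
            print(url, 'failed:', exc)

if __name__ == '__main__':
    deploy_all()

You could wire deploy_all() to an HTML form or run it from a shell on the server, whichever fits your workflow.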
If you don't need a user interface and are using Linux, I would suggest a bash script.
I wrote a simple bash script to push an update and restart the script on a set of Raspberry Pis. Please configure passwordless SSH login with keys beforehand (e.g. ssh-keygen plus ssh-copy-id).
#!/bin/bash

listOfIps=(
192.168.1.100
192.168.1.101
192.168.1.102
192.168.1.103
)

username="pi"
destDir="work/"
pythonScriptName="fooScript.py"

for i in "${listOfIps[@]}"
do
    echo "will copy folder \"$1\" with content to ip: ${i} and perform"
    echo "scp -r $1 ${username}@${i}:${destDir}"
    scp -r "$1" ${username}@${i}:${destDir}

    echo "will kill all python scripts, unfriendly"
    ssh ${username}@${i} "pkill python"

    echo "will restart my python script ${pythonScriptName} in dir ${destDir}"
    # nohup plus the redirects lets the remote script keep running after ssh disconnects
    ssh ${username}@${i} "nohup python3 ${destDir}/${pythonScriptName} >/dev/null 2>&1 &"
done

exit 0
Save the code in a file copyToAll.sh, edit username, destDir and your script name, and make it executable:
chmod 755 copyToAll.sh
Then call it:
copyToAll.sh myFileToSend
So I have some code that I'm trying to run, and when it finishes it should display a message to the user saying what changes have happened. This works fine if I run it in a terminal, but when I add it to a cronjob nothing happens and no error is shown.
It seems to me that it is losing the display session, but I can't figure out how to solve it. Here is the show-message code:
import subprocess

def sendmessage(message):
    subprocess.Popen(['notify-send', message])
    return
I also tried this version:
import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify

def sendmessage(message):
    Notify.init("Changes")
    n = Notify.Notification.new(message)
    n.show()
    return
Which gives the following errors (again only when run in the background):
cannot open display:
Activated service 'org.freedesktop.Notifications' failed: Process org.freedesktop.Notifications exited with status
Would be very thankful for any help or alternative given.
I had a similar issue running a system daemon, where I wanted the user to be notified when a network bandwidth upload limit was exceeded.
The program was designed to run under systemd but at a push also under upstart.
You may get some mileage out of my configuration files:
systemd - bandwidth.service file
[Unit]
Description=Bandwidth traffic monitor
Documentation=man:bandwidth(7)
After=graphical.target
[Service]
Type=simple
Environment="DISPLAY=:0" "XAUTHORITY=/home/#USER#/.Xauthority"
PIDFile=/var/run/bandwidth.pid
ExecStart=/usr/bin/dbus-launch /usr/sbin/bandwidth_logd
ExecReload=/bin/kill -HUP $MAINPID
User=root
[Install]
WantedBy=graphical.target
upstart - bandwidth.conf file
#
# These are the scripts that run when a network appears.
# Use this on an upstart system in /etc/init
# test for systemd or upstart system with
# ps -p1 | grep systemd && echo systemd || echo upstart
# better
# ps -p1 | grep systemd >/dev/null && echo systemd || echo upstart
# Using upstart the script will need to daemonise which is a bugger
# so test for it and import Daemon_server.py
description "Bandwidth upstart events"
start on net-device-up # Start a daemon or run a script
stop on net-device-down # (Optional) Stop a daemon, scripts already self-terminate.
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect fork
#Set environment variables
env DISPLAY=":0"
export DISPLAY
env XAUTHORITY="/home/#USER#/.Xauthority"
export XAUTHORITY
script
# You can put shell script in here, including if/then and tests.
# replace #USER# with your name and ensure that you have .Xauthority in $HOME
/usr/sbin/bandwidth_logd
end script
You will note that in both configuration files the environment is key, and #USER# is replaced by a real user name with a valid .Xauthority file in their $HOME directory.
In the Python code I use the following to emit the message (using the notify2 module):
import notify2

def warning(msg):
    result = True
    try:
        notify2.init("Bandwidth")
        mess = notify2.Notification("Bandwidth", msg, '/usr/share/bandwidth/bandwidth.png')
        mess.set_urgency(2)   # critical urgency
        mess.set_timeout(0)   # never expire
        mess.show()
    except Exception:
        result = False
    return result
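If you call notify-send directly from cron instead, the same environment trick applies in Python. A minimal sketch, assuming display :0 and the default .Xauthority location (both are assumptions; adjust for your session):

import os
import subprocess

def sendmessage(message):
    # cron jobs and daemons have no X session, so point the process at the user's display
    env = dict(os.environ)
    env['DISPLAY'] = ':0'                                    # assumed display number
    env['XAUTHORITY'] = os.path.expanduser('~/.Xauthority')  # assumed auth file path
    subprocess.Popen(['notify-send', message], env=env)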
I've seen the post Unable to start service with nohup due to 'INFO spawnerr: unknown error making dispatchers for 'app_name': EACCES' and tried the answer, but it doesn't work.
I'm using the Amazon AMI, and since Amazon doesn't have apt-get, I had to use easy_install to install supervisor. Here is my /etc/supervisord.conf:
[program:awesome]
command = /srv/awesome/www/app.py
directory = /srv/awesome/www
user = ec2-user
startsecs = 3
redirect_stderr = true
stdout_logfile_maxbytes = 50MB
stdout_logfile_backups = 10
stdout_logfile = /srv/awesome/log/app.log
My app files are placed under /srv/awesome/www/ and the owner is set to ec2-user, which is the same user whoami reports. I first ran
supervisord -c /etc/supervisord.conf
which gave me
Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
I entered the command
sudo unlink /tmp/supervisor.sock
which resolved it, then I did
supervisorctl start awesome
which spawned the error. I've tried reloading, stopping and starting, but nothing works.
I switched to Ubuntu instead of the Amazon AMI and everything worked out.
Finally I migrated my development env from runserver to gunicorn/nginx.
It'd be convenient to replicate the autoreload feature of runserver in gunicorn, so the server restarts automatically when the source changes. Otherwise I have to restart the server manually with kill -HUP.
Any way to avoid the manual restart?
While this is an old question, you should know that ever since version 19.0 gunicorn has had the --reload option.
So no third-party tools are needed any more.
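For example, a minimal sketch of a Python config file (the module path myapp.wsgi and the bind address are placeholders; reload is the documented setting behind the --reload flag):

# gunicorn.conf.py -- minimal sketch; myapp.wsgi and the address are placeholders
reload = True            # restart workers when application code changes
bind = '127.0.0.1:8000'  # development address

Then start it with gunicorn -c gunicorn.conf.py myapp.wsgi, or simply pass --reload on the command line.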
One option would be to use --max-requests to limit each spawned process to serving only one request, by adding --max-requests 1 to the startup options. Every newly spawned process sees your code changes, and in a development environment the extra startup time per request should be negligible.
Bryan Helmig came up with this, and I modified it to use run_gunicorn instead of launching gunicorn directly, to make it possible to just cut and paste these three commands into a shell in your Django project root folder (with your virtualenv activated):
pip install watchdog -U
watchmedo shell-command --patterns="*.py;*.html;*.css;*.js" --recursive --command='echo "${watch_src_path}" && kill -HUP `cat gunicorn.pid`' . &
python manage.py run_gunicorn 127.0.0.1:80 --pid=gunicorn.pid
I use git push to deploy to production and set up git hooks to run a script. The advantage of this approach is you can also do your migration and package installation at the same time. https://mikeeverhart.net/2013/01/using-git-to-deploy-code/
mkdir -p /home/git/project_name.git
cd /home/git/project_name.git
git init --bare
Then create a script /home/git/project_name.git/hooks/post-receive.
#!/bin/bash
GIT_WORK_TREE=/path/to/project git checkout -f
source /path/to/virtualenv/activate
pip install -r /path/to/project/requirements.txt
python /path/to/project/manage.py migrate
sudo supervisorctl restart project_name
Make sure to chmod u+x post-receive and add the user to sudoers, allowing it to run sudo supervisorctl without a password. https://www.cyberciti.biz/faq/linux-unix-running-sudo-command-without-a-password/
From my local/development server, I set up a git remote that allows me to push to the production server:
git remote add production ssh://user_name@production-server/home/git/project_name.git
# initial push
git push production +master:refs/heads/master
# subsequent push
git push production master
As a bonus, you will get to see all the prompts as the script is running. So you will see if there is any issue with the migration/package installation/supervisor restart.
I'm using a django-gunicorn-nginx setup, following this tutorial: http://ijcdigital.com/blog/django-gunicorn-and-nginx-setup/ Up to the nginx setup, everything works. Then I installed supervisor, configured it, rebooted my server and checked: it shows 502 Bad Gateway. I'm using Ubuntu 12.04 LTS.
/etc/supervisor/conf.d/qlimp.conf
[program: qlimp]
directory = /home/nirmal/project/qlimp/qlimp.sh
user = nirmal
command = /home/nirmal/project/qlimp/qlimp.sh
stdout_logfile = /path/to/supervisor/log/file/logfile.log
stderr_logfile = /path/to/supervisor/log/file/error-logfile.log
Then I restarted supervisor, ran $ supervisorctl start qlimp, and got this error:
unix:///var/run/supervisor.sock no such file
Is there any problem in my supervisor setup?
Thanks!
That there is no socket file probably means that supervisord isn't running. A reason might be that your qlimp.conf file has some sort of error in it. If you do a
sudo service supervisor start
you can see whether or not this is the case. If supervisor is already running, it will say so. And if it hits an error, it will usually give you a more helpful error message than supervisorctl does.
I have met the same issue, and after several tries here is the solution:
1. First remove the apt-get version of supervisor:
sudo apt-get remove supervisor
2. Find and kill any supervisor process still running in the background:
sudo ps -ef | grep supervisor
3. Then get the newest version with easy_install or pip (the apt-get version was 3.0a8):
sudo easy_install supervisor==3.0b2
4. Echo the config file (needs root permission):
echo_supervisord_conf > /etc/supervisord.conf
5. Start supervisord:
sudo supervisord
6. Enter supervisorctl:
sudo supervisorctl
Everything is done. Have fun!
Try this
cd /etc/supervisor
sudo supervisord
sudo supervisorctl restart all
Are you sure that supervisord is installed and running? Is there a socket file present at /var/run/supervisor.sock?
The error indicates that supervisorctl, the control CLI, cannot reach the UNIX socket to communicate with supervisord, the daemon.
You could also check /etc/supervisor/supervisord.conf and make sure the socket path in the unix_http_server section matches the one the supervisorctl section points at, as in the snippet below.
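For instance, a matching pair might look like this (the socket path is illustrative; file and serverurl are the actual option names):

[unix_http_server]
file=/var/run/supervisor.sock

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock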
Note that this is an Ubuntu-level problem, not a problem with Python, Django or nginx, and as such this question probably belongs on ServerFault.
On Ubuntu 16+ this seems to be caused by the switch to systemd; this workaround may fix it on new servers:
# Make sure Supervisor comes up after a reboot.
$ sudo systemctl enable supervisor
# Bring Supervisor up right now.
$ sudo systemctl start supervisor
Then check the status of your program (iconic.conf is my example):
$ sudo supervisorctl status iconic
PS: Make sure gunicorn itself runs without any problem.
The error may be because you don't have the required privileges.
You may be able to fix it as follows: open your terminal, run vim /etc/supervisord.conf to edit the file, and search for the lines
[unix_http_server]
;file=/tmp/supervisor.sock ; (the path to the socket file)
;chmod=0700 ; socket file mode (default 0700)
and delete the semicolon at the start of ;file=/tmp/supervisor.sock and ;chmod=0700, then restart your supervisord. I suggest you try it.
Make sure that in /etc/supervisor.conf the following two sections exist:
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
You can do something like this:
sudo touch /var/run/supervisor.sock
sudo chmod 777 /var/run/supervisor.sock
sudo service supervisor restart
It definitely works; try it.
In my case, Supervisor was not running. To spot the issue I ran:
sudo systemctl status supervisor.service
The problem was that I had my logs pointing to a non-existing directory, so I just had to create it.
I hope it helps :)
touch /var/run/supervisor.sock
sudo supervisord -c /etc/supervisor/supervisord.conf
and afterwards
supervisorctl restart all
If you want to find the supervisord process:
ps -ef | grep supervisord
and if you want to kill it (2503 is the PID from the grep output):
kill -s SIGTERM 2503
Create a conf file and add the lines below.
Remember that in order to run under supervisor, you must disable Nginx's autostart on system boot, which was activated when you installed Nginx:
https://askubuntu.com/questions/177041/nginx-disable-autostart
Note: every process managed by supervisor must run in the foreground ("daemon off" mode for Nginx); otherwise supervisor cannot track it.
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autostart=true
autorestart=true
startretries=5
stopasgroup=true
stopsignal=QUIT
numprocs=1
startsecs=0
process_name=WebServer(Nginx)
stderr_logfile=/var/log/nginx/error.log
stderr_logfile_maxbytes=10MB
stdout_logfile=/var/log/nginx/access.log
stdout_logfile_maxbytes=10MB
sudo supervisorctl reread && sudo supervisorctl update
I have faced this error several times.
If the server is a newly created instance:
It might be due to a wrong configuration or a mistake made during setup, or because supervisor is not enabled.
Try restarting supervisor and reconnecting to the EC2 instance,
or
try reinstalling supervisor (see @Scen's answer),
or
try the approaches mentioned by @Yuvaraj Loganathan and @Dinesh Sunny. But most likely you will end up creating a new instance.
If the server was running perfectly for a long time but then suddenly stopped
and threw
unix:///var/run/supervisor.sock no such file on sudo supervisorctl status:
It may be due to high memory usage; in my case usage was at 99% of 7.69 GB. You can see this memory summary in the login banner after connecting to the EC2 instance via ssh or PuTTY.
You can upgrade your EC2 instance, or delete extra files such as logs (/var/log) and zips to free up space. But be careful not to delete any system files.
Then restart supervisor:
sudo service supervisor restart
and check sudo supervisorctl status.