Ubuntu systemd service for pygame - python

I have written some simple code with Python and pygame to display images on my monitor. When I run it manually, everything works fine. I tried to make it run at system startup with a systemd service. Here is my service:
[Unit]
Description=Starts pygame
[Service]
User=rplab
WorkingDirectory=/home/myuser/
ExecStart=/bin/bash /home/myuser/MyPygame.sh
KillMode=process
[Install]
WantedBy=multi-user.target
When the system boots, it starts the service, but unfortunately when I check it with systemctl status, it gives this error:
pygame.error: No available video device
It seems the service starts too early and cannot find my monitor. Is it possible to make the service start after the user logs in, so that it can find my monitor?

The service file needs to tell systemd that it should start after the user session and the graphical environment are up.
[Unit]
Description=Starts pygame
Wants=systemd-logind.service systemd-user-sessions.service display-manager.service
After=systemd-logind.service systemd-user-sessions.service display-manager.service
[Service]
....
....
[Install]
WantedBy=graphical.target
Make sure graphical.target is set as the default target:
$ systemctl set-default graphical.target
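Even with the ordering above, the process still needs to know which display to connect to. A minimal sketch of the [Service] section with the display environment filled in (the display number :0 and the .Xauthority path are assumptions; adjust them to your setup):
[Service]
User=rplab
# assumed values: the display and the X authority file location may differ on your machine
Environment=DISPLAY=:0
Environment=XAUTHORITY=/home/myuser/.Xauthority
ExecStart=/bin/bash /home/myuser/MyPygame.sh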

Related

How to get an X server for a Kivy GUI app at Raspberry Pi boot?

I have a Raspberry Pi 4 B+ and installed Ubuntu Server 20.04 LTS, 64-bit, with the Raspberry Pi Imager. I am trying to start a Kivy HMI on boot, but I get an error: could not connect to X server. What I have done so far:
First: I installed all the dependencies: python3 and Kivy (for Raspberry Pi 4; sources:
https://kivy.org/doc/stable/installation/installation-rpi.html# and
https://kivy.org/doc/stable/gettingstarted/installation.html#kivy-source-install)
Second: I created a service and enabled it, like this:
[Unit]
Description= Start IHM test
After=network.target
[Service]
ExecStart=/home/muh/teste/kivyStart.sh
Restart=on-abort
[Install]
WantedBy=multi-user.target
and a .sh file:
#!/usr/bin/bash
startKivy()
{
    # change into the project directory and launch the Kivy app
    cd /home/muh/teste/
    python3 main.py
}
startKivy
In the terminal:
systemctl enable kivyStart.service
sudo chmod +x /home/muh/kivyStart.sh
Third: I try to run my application (a simple chronometer) with:
sudo systemctl restart kivyStart.service
sudo systemctl status kivyStart.service
After reboot, the service is active but fails because it cannot start an X server. My Ubuntu install does not have a desktop; how do I get this server?
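One common approach for a server image without a desktop (a sketch, not verified on this exact Raspberry Pi setup) is to install a bare X server plus xinit and run the script inside its own X session:
sudo apt install --no-install-recommends xserver-xorg xinit
# start an X server on display :0 and run the Kivy script as its only client
xinit /home/muh/teste/kivyStart.sh -- :0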

Ubuntu Server 16.04 systemctl service with python script is running but not working

I have read a lot of posts on this site about how to implement a Python script as a service.
After fiddling around, I am at the point where the service is started via systemctl (and running), but the script is doing nothing...
My config file in /etc/systemd/system/:
[Unit]
Description=tg Bot
[Service]
Type=simple
User=user
WorkingDirectory=/home/user/tg_onduty/
ExecStart=/usr/bin/python3 /home/user/tg_onduty/on_duty.py
Restart=always
[Install]
WantedBy=multi-user.target
Output:
user#server:~$ sudo service tg_onduty status
● tg_onduty.service - Telegram OnDuty Bot
Loaded: loaded (/etc/systemd/system/tg_onduty.service; enabled; vendor preset
Active: active (running) since Thu 2018-02-15 11:28:20 CET; 2min 17s ago
Main PID: 1538 (python3)
Tasks: 9
Memory: 17.7M
CPU: 351ms
CGroup: /system.slice/tg_onduty.service
└─1538 /usr/bin/python3 /home/user/tg_onduty/on_duty.py
I have read https://unix.stackexchange.com/questions/339638/difference-between-systemd-and-terminal-starting-program/339645#339645 and understand that running the script via systemctl is different than running via CLI (via CLI /usr/bin/python3 /home/user/tg_onduty/on_duty.py is working).
My question is now:
How can I trace or see what's going wrong, or why the script seems to do nothing?
Via journalctl I only see: Feb 15 11:56:17 server systemd[1]: Started tg Bot.
Any help is appreciated.
Thanks,
David
Are you sure your script is not running at all? Try adding logging to your script to see whether something inside it is failing.
Add a simple action to the script, maybe echo something or create a dummy file. That is the easiest way to know whether it is working.
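If the script reports its progress with print(), one likely reason nothing shows up is Python's stdout buffering under systemd. A sketch of how to surface the output (the -u flag and PYTHONUNBUFFERED are standard Python options; the unit name is the one from this question):
[Service]
# disable output buffering so print() lines reach the journal immediately
Environment=PYTHONUNBUFFERED=1
ExecStart=/usr/bin/python3 -u /home/user/tg_onduty/on_duty.py
Then follow the output live with:
journalctl -u tg_onduty.service -f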

tensorflow gpu failed init from systemd service

I'm trying to host a Flask API which uses TensorFlow libraries. I installed the TensorFlow GPU library along with the CUDA and cuDNN libraries. I checked manually with the following command, which works fine:
/captcha/env/bin/gunicorn captcha:app -b 0.0.0.0:5124 -k gevent --worker-connections 1000
But when I run it from this systemd service, I get a TensorFlow GPU error.
The systemd service:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
PIDFile=/run/gunicorn/pid
User=root
Group=root
WorkingDirectory=/captcha/env
ExecStart=/captcha/env/bin/gunicorn captcha:app -b 0.0.0.0:5124 -k gevent --worker-connections 1000
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Error text in Log file:
Failed to load the native TensorFlow runtime.
See
https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Can anyone point out where I'm going wrong?
systemd strips most of the environment, and TensorFlow needs to know where to find CUDA; without LD_LIBRARY_PATH set, it fails.
There are probably a handful of ways to fix this, but this worked for me:
[Service]
Environment=LD_LIBRARY_PATH=/usr/local/cuda/lib64
ExecStart=/path/to/your/app
...
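An alternative that keeps the main unit file untouched (a sketch, assuming the unit is installed as gunicorn.service and that the CUDA libraries really are under /usr/local/cuda/lib64) is a drop-in override:
sudo systemctl edit gunicorn.service
# then add in the editor:
[Service]
Environment=LD_LIBRARY_PATH=/usr/local/cuda/lib64
# and restart the service:
sudo systemctl restart gunicorn.service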
I'm using PM2 to manage the TensorFlow/Flask API process.
http://pm2.keymetrics.io/
I create a shell file (run.sh) with the gunicorn command above as its content, then start it with:
pm2 start run.sh
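For reference, run.sh here would just wrap the command that already works manually (a sketch based on the command quoted above):
#!/bin/bash
# launch the Flask/TensorFlow API exactly as tested on the command line
/captcha/env/bin/gunicorn captcha:app -b 0.0.0.0:5124 -k gevent --worker-connections 1000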

How to reflect python changes in django, uwsgi and nginx setup

Hi, I have deployed Django using uWSGI and Nginx following this tutorial: http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
Everything is running fine, but I face a challenge when updating the Python code: I don't know the efficient way to deploy new changes.
After trial and error, I used the following commands to deploy:
git pull; sudo service uwsgi stop; sudo service nginx restart; sudo service uwsgi restart; /usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals
This command works, but I face the following problems:
uWSGI runs in the foreground. Every time I make changes, a new uWSGI instance starts running.
Due to the multiple uWSGI instances, my AWS server crashes from memory exhaustion.
I want to know what commands I should run to reflect changes in the Python code.
PS: in my previous Apache + Django setup, I only had to restart Apache. Is it possible to reflect changes by only restarting Nginx?
Try this:
git pull
python manage.py migrate # to run any migrations
sudo service uwsgi restart
Then press Ctrl + Z followed by bg and Enter.
This should keep the uWSGI process running in the background.
Please let me know if this works.
Please have a look at the following approach for running uWSGI in the background. Create an .ini file at /etc/uwsgi/sites/projectname.ini. It would look like this (for Ubuntu 16.04):
[uwsgi]
project = projectname
base = projectpath
chdir = %(base)/%(project)
home = %(base)/Env/%(project)
module = %(project).wsgi:application
master = true
processes = 5
socket = %(base)/%(project)/%(project).sock
chmod-socket = 666
vacuum = true
(For Ubuntu 16.04):
Then create the following systemd unit at /etc/systemd/system/uwsgi.service:
[Unit]
Description=uWSGI Emperor service
After=syslog.target
[Service]
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
Refresh the state of the systemd init system to pick up the new uWSGI service:
sudo systemctl daemon-reload
To start the service, run:
sudo systemctl start uwsgi
To make uWSGI start on reboot, you will also need:
sudo systemctl enable uwsgi
You can use the following to check its status:
systemctl status uwsgi
(For Ubuntu 14.04):
Create an upstart script for uWSGI:
sudo nano /etc/init/uwsgi.conf
Then add following lines in the above created file:
description "uWSGI application server in Emperor mode"
start on runlevel [2345]
stop on runlevel [!2345]
setuid user
setgid www-data
exec /usr/local/bin/uwsgi --emperor /etc/uwsgi/sites
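With the emperor managed as a service (systemd or upstart), reflecting a Python code change no longer requires the foreground command from the question. A typical deploy sketch, assuming the vassal file name used above:
git pull
python manage.py migrate
# touching a vassal's .ini makes the emperor gracefully reload just that app
touch /etc/uwsgi/sites/projectname.ini
# or restart the whole emperor
sudo systemctl restart uwsgi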

Start python flask webserver automatically after booting the system and keep it on till the end

I'm using Flask as a web server for my UI (it's a simple web interface which controls recording with GStreamer on Ubuntu, from a webcam and a framegrabber simultaneously; kind of a simple player).
Every time, I need to run the command "python main.py" manually from the command prompt to start the server.
I've tried the init.d approach, and also writing a simple shell script and launching it after every reboot, but it fails to keep the server up and running (it just invokes the server and then terminates it, I guess).
Is there any solution that would start the web server on every boot and keep it running?
I'd also like to configure my system to boot directly into the browser, so I don't want the user to have to do anything.
Any kind of suggestion/help is appreciated.
I'd like to suggest using supervisor; the documentation is here.
For a very simple demo, after you have installed it and finished the setup, create a new config file like this:
[program:flask_app]
command = python main.py
directory = /dir/to/your/app
autostart = true
autorestart = true
then
$ sudo supervisorctl update
Now you should be good to go. The Flask app will start every time you boot your machine. (Note: the distribution package is already integrated into the service management infrastructure; if you're using something else, see here.)
To check whether your app is running:
$ sudo supervisorctl status
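If the app starts but misbehaves, supervisor also captures its output; you can follow it with (assuming the program name flask_app from the config above):
$ sudo supervisorctl tail -f flask_app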
For production, you can use Nginx + uWSGI + supervisor. The Flask deployment documentation is here.
One well-documented solution is to use Gunicorn and an Nginx server:
Install the components and set up a Python virtualenv with the dependencies.
Create the wsgi.py file:
from myproject import application

if __name__ == "__main__":
    application.run()
That will be handled by Gunicorn:
gunicorn --bind 0.0.0.0:8000 wsgi
Configure Gunicorn by setting up a systemd unit file at /etc/systemd/system/myproject.service:
[Unit]
Description=Gunicorn instance to serve myproject
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myproject
Environment="PATH=/home/sammy/myproject/myprojectenv/bin"
ExecStart=/home/sammy/myproject/myprojectenv/bin/gunicorn --workers 3 --bind unix:myproject.sock -m 007 wsgi:application
[Install]
WantedBy=multi-user.target
Start the Gunicorn service now and enable it at boot:
sudo systemctl start myproject
sudo systemctl enable myproject
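The answer mentions Nginx but stops at the Gunicorn unit; a minimal server block that proxies to the socket above could look like this (a sketch using the tutorial's paths; replace your_domain with your own):
server {
    listen 80;
    server_name your_domain;

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/sammy/myproject/myproject.sock;
    }
}
Save it under /etc/nginx/sites-available/, symlink it into /etc/nginx/sites-enabled/, and restart Nginx with sudo systemctl restart nginx.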
