Multithreaded Python application in background - python

I have a Python application myapp on a Raspberry Pi, with a front end control panel made with Dash. I run the Dash app in its own thread so that I can use it to manipulate some settings inside myapp.
When I SSH into the Raspberry Pi, I want to start myapp in the background, close the remote shell window, and just let it spin and do its thing. After writing this question I found that I have to use nohup for this:
nohup python path/to/myapp.py &
For other Python apps, just
python path/to/other_app &
seems to suffice.
So I guess I already have the answer to my question. But while on the subject: is this the preferred, and only, solution?
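The nohup pattern can be sketched end to end like this; the stand-in script and the log/pid paths are illustrative, substitute your real myapp.py:

```shell
# Stand-in for the real myapp.py, so the sketch is self-contained.
APP=/tmp/myapp_demo.py
printf 'print("running")\n' > "$APP"

# nohup makes the process ignore the hangup signal sent when the SSH
# session closes; redirect stdout/stderr so the output is not lost.
nohup python3 "$APP" > /tmp/myapp.log 2>&1 &
echo $! > /tmp/myapp.pid   # save the PID so the app can be stopped later

wait "$(cat /tmp/myapp.pid)"   # demo only; normally you would just log out
```

Later, `kill "$(cat /tmp/myapp.pid)"` stops the app. A plain `python script.py &` often dies with the terminal because the shell forwards the hangup signal to its jobs, which is exactly what nohup suppresses.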

It depends a lot on what specific Linux variant you're running, but generally speaking the best way is to let your system's service manager handle it. In most cases these days that means systemd.
Create a service config file:
[Unit]
Description=My Python Service
[Service]
Type=simple
ExecStart=/usr/bin/python3 /path/to/my/python/service.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
Put this in /etc/systemd/system/ with a name like mypythonapp.service (that directory is for administrator-created units; /lib/systemd/system/ is managed by the package system).
Run sudo systemctl daemon-reload so systemd knows to look for the new file.
Run sudo systemctl enable mypythonapp.service to have the app start on boot.
Run sudo systemctl start mypythonapp.service to start the app immediately.
Now your script's output is captured by the journal, the app starts again when the system reboots, a Restart= directive (e.g. Restart=on-failure) lets systemd restart it if it crashes, and you don't have to kick it off manually.
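To check on the service afterwards, the journal holds its output; a quick sketch, using the unit name from the example above:

```shell
systemctl status mypythonapp.service    # current state, main PID, recent log lines
journalctl -u mypythonapp.service -f    # follow the service's output live
```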

Related

Autostarting Python scripts on boot using crontab on Raspbian

I am a pretty new Python programmer and am a little familiar with crontab. What I am trying to do is probably not best practice, but it is what I am most familiar with.
I have a Raspberry Pi with a couple of Python scripts that I want to run on boot and keep running in the background. They are infinite-loop programs, tested and working in a terminal, and they have been functioning for a couple of weeks. I am just getting tired of manually starting them whenever the Pi goes through a power cycle.
So I did sudo crontab -e and added this line as my only entry:
@reboot /usr/bin/python3 /usr/bin/script.py &
If I copy-paste this exactly (minus the @reboot), it runs successfully on the command line.
I am using the command pgrep -af python to check whether it is running. I normally see the two scripts running there, but not the one I am trying to add.
I am not sure where I am going wrong, or what the best method is to troubleshoot this. From the research I have been doing, it seems like it should work.
Thanks for your help
Kevin
You might find it easier to create a systemd service file for each program that you want to start when your Raspberry Pi boots. systemd comes with a few more tools to help you debug your configuration.
This is what an example systemd service file (located at /etc/systemd/system/myscript.service) would look like:
[Unit]
Description=My service
After=network.target
[Service]
ExecStart=/usr/bin/python3 /usr/bin/script.py
WorkingDirectory=/home/pi/myscript
StandardOutput=inherit
StandardError=inherit
Restart=always
User=pi
[Install]
WantedBy=multi-user.target
and then you can enable this program to run on boot with the command:
sudo systemctl enable myscript.service
These examples are from Raspberry Pi's documentation about systemd. Because systemd is widely used in the Linux world, you can also follow documentation and Stack Overflow answers written for other Linux distributions.
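Since the answer mentions debugging tools, here is a short sketch of the usual ones, using the unit name from the example above (run after editing the file):

```shell
sudo systemctl daemon-reload            # re-read unit files after any edit
sudo systemctl restart myscript.service
systemctl status myscript.service       # did it start, and why did it exit?
journalctl -u myscript.service -e       # jump to the end of the service's log
```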

How can I automatically run 2 python scripts at the same time in a virtual environment when booting up the Raspberry Pi?

Note: I'm new to everything, so bear with me.
I'm using an RPi 4B with Buster. My goal is to automatically run two Python scripts at the same time when the Pi first boots up. Both scripts are in a virtual environment. The first script, sensor.py, uses an ultrasonic distance sensor to continuously calculate the distance between the sensor and an object. The other is an object-recognition script from TensorFlow Lite, TFLite_detection_webcam.py, which identifies objects in a camera feed.
I can't use rc.local for autorunning, because the object-recognition script uses a picamera feed as input, which rc.local doesn't support. So my preferred option is autostart. I was able to get sensor.py to autorun by issuing sudo nano /etc/xdg/lxsession/LXDE-pi/autostart in the terminal and adding this line to the file: /home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/sensor.py. In this case, tflite1-env is the virtual environment being activated.
However, I don't know how to get the second script to run. To run it manually, I would issue the following in the terminal, and the camera feed would pop up on the screen in a window:
cd tflite1
source tflite1-env/bin/activate
python3 TFLite_detection_webcam.py --modeldir=TFLite_model
I've tried to get this script to run by adding this line to the autostart file: /home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/TFLite_detection_webcam.py --modeldir=TFLite_model, but it doesn't seem to work. I've also tried shell files, but every time I call one from the autostart file, for example by adding ./launch.sh at the bottom, nothing happens. Any help getting the second script to run at the same time as the first on startup would be greatly appreciated. Thanks in advance.
Use systemd. Set up systemd unit files in /etc/systemd/system (the file name must end in .service, and the [Install] section is what lets systemctl enable work), e.g.
kitkats-sensor.service
[Unit]
After=network.target
[Service]
ExecStart=/home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/sensor.py
WorkingDirectory=/home/pi/tflite1/
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
kitkats-tflite.service
[Unit]
After=network.target
[Service]
ExecStart=/home/pi/tflite1/tflite1-env/bin/python3 /home/pi/tflite1/TFLite_detection_webcam.py --modeldir=TFLite_model
WorkingDirectory=/home/pi/tflite1/
User=pi
Group=pi
[Install]
WantedBy=multi-user.target
Then enable the unit files with sudo systemctl enable kitkats-tflite and sudo systemctl enable kitkats-sensor (to have them autostart on boot), and sudo systemctl start kitkats-tflite (and likewise for the sensor) to start them right away.
You can then inspect them with systemctl status, and their logs are diverted to journalctl.

Application on Raspberry Pi (Linux) with OpenCV that autostarts

I have used OpenCV in my Windows applications in the past; there, an application would be built and installed as a Windows Service so that it could start automatically and keep running. The differences now are that I wrote those in compiled languages, and they ran on Windows.
Now, I am playing around with porting the application to run on Linux/Raspberry Pi. The application simply gets a video feed, does some object detection using OpenCV, and then sends the result via an HTTP web API.
One comment before my question (I am still getting familiar with this setup): it seems that Python is by far the language of choice for all of this. However, the end goal is for this device to be headless (no monitor or input devices, acting like an IoT device), so I don't need to, or rather can't, open a console and type commands.
So, the question: what is the equivalent of a Windows Service on the Raspberry Pi, so that my application starts on boot and runs as long as the device is on? And the subjective follow-up: is Python still a good choice given everything I have described, or would I be better off with a full-blown compiled app in C or C++?
Thanks!
If you are using Raspbian, then I would say the easiest tools are systemd (the daemon) and systemctl (the shell command).
To run your Python script as a daemon (a daemon is what Windows calls a "service"), create a configuration file with a .service extension and put it in /etc/systemd/system.
To get an idea of how to configure the file, you can take this example:
[Unit]
Description=Your service name
[Service]
ExecStart=/usr/bin/python3 <path to python script>
StandardOutput=null
[Install]
WantedBy=multi-user.target
Alias=<this_script_name>.service
Hope it helps!
Check out Supervisor: http://supervisord.org/. It should do what you need in terms of running your program on boot, restarting it if it crashes, and so on.
I don't have any experience with OpenCV, but web app frameworks like Flask (http://flask.pocoo.org/) make it very easy to expose an HTTP API with minimal code.
Good luck!
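For reference, a Supervisor program entry is a short INI section; a minimal sketch (the file location and paths are illustrative, e.g. /etc/supervisor/conf.d/myapp.conf):

```ini
[program:myapp]
; the program to supervise (paths are illustrative)
command=/usr/bin/python3 /home/pi/myapp.py
directory=/home/pi
; start when supervisord starts, and restart it if it crashes
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp.out.log
stderr_logfile=/var/log/myapp.err.log
```

After adding the file, sudo supervisorctl reread followed by sudo supervisorctl update picks up the new program.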

Run pigpiod daemon - Using python or Ubuntu boot

In order to use the pigpio module in Python (remote GPIO for the Raspberry Pi), pigpiod has to be loaded into memory on each RPi.
What is the right way to do it: during Ubuntu's boot, or as part of the Python script?
Since it needs sudo pigpiod, how is it done (in both the Ubuntu and Python cases)?
An alternative is to use the @reboot option within cron.
Run:
sudo crontab -e
then add the entry:
@reboot /pathtoexecutable
This will run the process every time the system boots.
I haven't used pigpiod, but I'm assuming it's a daemon (a long-running Linux process) that you want to start at boot. The standard way to do that on most modern Linux systems (including the Raspberry Pi, I think) is to use systemd. Give the following commands a try:
sudo systemctl start pigpiod    # start it now
sudo systemctl enable pigpiod   # start it on each boot
systemctl status pigpiod        # make sure it started
# https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs
journalctl -u pigpiod # Use this to see logs.
If systemctl complains that it can't find the service, you'll have to create a service file for it. This is a text file, placed in a directory systemd watches, that tells systemd how to daemonize the process. Here is a blog post where someone does this, and Google should find you others if that one doesn't help.
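If you end up writing the file yourself, a minimal sketch might look like this (the binary path is an assumption; check it with which pigpiod):

```ini
# /etc/systemd/system/pigpiod.service
[Unit]
Description=pigpio daemon
After=network.target

[Service]
# pigpiod forks into the background by default, so tell systemd to expect that
Type=forking
ExecStart=/usr/bin/pigpiod
Restart=on-failure

[Install]
WantedBy=multi-user.target
```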
Then you should be able to connect with Python.
Answered in gpiozero documentation

Difference between Daemon and Upscript for Gunicorn in Django Production

I am deploying a Django site to production, and for a week now I haven't been able to get the Gunicorn script in /etc/init/project.conf to bind to Nginx, no matter what I do inside the Django virtual environment, under the newly created user django, at location /home/django/project/bin/gunicorn.
I need to know whether I can run a site in production with a daemon. I understand that a daemon is simply a background process not attached to any tty. But if I create a pid by running a command from inside the virtualenv, like gunicorn --bind 127.0.0.1:9500 project.wsgi:application --config=/etc/gunicorn.d/gunicorn.py --name=project -p /tmp/project.pid, wouldn't it act as a service? My project without a virtual environment works just fine, but not with one. I am learning Linux, so I need expert advice. Can I launch a project like this?
The upstart script that I couldn't get working with the virtualenv is given below.
description "Gunicorn daemon for Django project"
start on (local-filesystems and net-device-up IFACE=eth0)
stop on runlevel [!12345]
# If the process quits unexpectedly, trigger a respawn
respawn
setuid django
setgid django
chdir /home/django
exec gunicorn \
--name=project\
--pythonpath=project\
--bind=127.0.0.1:9500 \
--config /etc/gunicorn.d/gunicorn.py \
project.wsgi:application
If someone can help me change it to work with the virtualenv, I would be thankful. Again: the same settings work just fine for my first project, which runs without a virtualenv; the only difference with my second site is that it runs from a virtualenv.
The point is that if you just run it like that, you don't have anything responsible for ensuring it remains up: if the process dies, or if you have to restart your server, you will have to re-run that command manually. That's what upstart, or supervisor, will do for you: monitor that it is indeed running, and bring it back if it isn't.
If you want more help with debugging your upstart script, you will also need to post any errors from its log.
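On the virtualenv question itself: upstart's exec does not source your shell profile or activate anything, so the usual fix is to point exec at the virtualenv's own gunicorn binary. A sketch, assuming the virtualenv is the /home/django/project directory mentioned in the question:

```
exec /home/django/project/bin/gunicorn \
    --name=project \
    --pythonpath=project \
    --bind=127.0.0.1:9500 \
    --config /etc/gunicorn.d/gunicorn.py \
    project.wsgi:application
```

Running the venv's bin/gunicorn directly is equivalent to activating the environment first, because that interpreter already has the virtualenv's site-packages on its path.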
