I'm working on an IoT project where multiple users run a Python package I maintain on a Raspberry Pi Zero. I ship the Raspberry Pis to the users with the software preloaded, but the project is still early in development and updates to the package happen frequently.
The problem is, many of the users are not up to the task of updating a Python package on a Raspberry Pi with a headless OS. I'd like to find a way to set the Pi to automatically upgrade the package with pip whenever I put out a new tagged version.
My initial thought was to use cron or systemd to run "sudo pip3 install my-package --upgrade" on startup. The major downside, though, is that pip takes a long time to run on a Raspberry Pi and using it this way seriously slows down boot time, even when there is no upgrade to install.
Is there a better way I haven't thought of?
You can set up an API or file server that the Raspberry Pi queries to check whether a newer version is available (you have to record which version is currently installed); only when there is a new version does the Pi apply the update.
It is not a particularly Pythonic solution, but it will work for your purpose.
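As a sketch of that check-before-upgrade idea: the script below asks a server for the latest tagged version and only runs pip when it differs from the installed one, so boot time is not slowed by a no-op pip run. The endpoint URL and package name are hypothetical; you would host a small JSON file reporting the latest version.

```python
"""Sketch of a check-before-upgrade script (hypothetical URL and package name)."""
import json
import subprocess
import sys
from importlib.metadata import version
from urllib.request import urlopen

PACKAGE = "my-package"                                       # your package's name
VERSION_URL = "https://example.com/my-package/latest.json"   # hypothetical endpoint

def needs_upgrade(installed: str, latest: str) -> bool:
    """Compare dotted version strings numerically, so '0.9.2' < '0.10.0'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(latest) > as_tuple(installed)

def upgrade_if_stale() -> None:
    # A cheap HTTP request replaces the expensive unconditional pip run.
    with urlopen(VERSION_URL, timeout=10) as resp:
        latest = json.load(resp)["version"]
    if needs_upgrade(version(PACKAGE), latest):
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--upgrade", PACKAGE]
        )

# upgrade_if_stale() would be called from a cron job or systemd unit at boot.
```

The version check could equally be a plain text file on the server; the key point is that pip is only invoked when an upgrade actually exists.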
Related
I am developing a Python application that is deployed to a Raspberry Pi. It uses the RPi package, which allows the application to connect to the GPIO of the Pi.
I am currently developing the application on Windows. Firstly, I would like to be able to run the application on my Windows PC; secondly, when I write unit tests I would like to run them autonomously and be able to change how the mock RPi behaves to simulate errors.
What are the best practices within Python for substituting a package, so that the application runs normally on the Pi using the real RPi package and automatically uses the mocked package elsewhere?
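One common pattern for this (a sketch, not from the question itself) is a conditional import: fall back to a stub when the real RPi package is unavailable. The FakeGPIO class below is hypothetical; it records calls and exposes a flag tests can flip to simulate errors.

```python
"""Conditional import: real RPi.GPIO on the Pi, a recording stub elsewhere.
FakeGPIO is a hypothetical stand-in, not part of any library."""
try:
    import RPi.GPIO as GPIO          # only importable on a Raspberry Pi
except ImportError:
    class FakeGPIO:
        """Minimal stub recording calls so tests can inspect and inject behavior."""
        BCM, OUT, IN, HIGH, LOW = "BCM", "OUT", "IN", 1, 0

        def __init__(self):
            self.calls = []
            self.fail_on_output = False   # a test flips this to simulate errors

        def setmode(self, mode):
            self.calls.append(("setmode", mode))

        def setup(self, pin, direction):
            self.calls.append(("setup", pin, direction))

        def output(self, pin, state):
            if self.fail_on_output:
                raise RuntimeError("simulated GPIO failure")
            self.calls.append(("output", pin, state))

    GPIO = FakeGPIO()

# Application code uses GPIO the same way in both environments.
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, GPIO.HIGH)
```

For finer-grained control in individual tests, the standard library's unittest.mock can patch the GPIO name per test instead of using a module-level stub.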
I'm developing a web system with Django 2.0.6 which at one point needs to read a QR code image. On my local machine I used pyzbar in my virtualenv for this, and it worked perfectly across several tests.
Then I needed to host the system on a shared server (I had already done this for another web system built with Django 2.0.6), and this time the hosting failed with an error blamed on the pyzbar library, claiming that the library was not properly installed. I asked the hosting company's support to look into the problem and was told the following: "I verified that the error when running indexWebScg.fcgi was occurring due to the lack of modules installed in your virtualenv. I installed the necessary modules; however, I noticed that one of the modules of your application is "zbar". This module is not compatible with our shared plans because it requires a server-level library called libzbar, which is not standard on our shared servers."
My question is: if everything I need is installed inside the virtualenv and uploaded to the server along with the rest of the system, why would this one library need to be installed at the server's root level? Did I install the "pyzbar" library incorrectly in my virtual environment?
To use pyzbar you need the native zbar library installed on the operating system itself:
sudo apt-get install libzbar0
pyzbar uses this library to decode QR codes and barcodes.
pyzbar works in your virtual environment locally, but a virtualenv only contains Python packages, so the link to libzbar0 breaks when you port the environment to a server that lacks the system library.
To make this work you have to install libzbar0 on your server.
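To see whether the system-level dependency is present before pyzbar fails at import time, you can ask the dynamic loader for the library by name using only the standard library. This is a diagnostic sketch; "zbar" is the shared-library name the check looks for.

```python
"""Check for the native zbar library that pyzbar depends on (stdlib only)."""
from ctypes.util import find_library

def zbar_available() -> bool:
    """Return True if a system-level libzbar can be found by the loader."""
    return find_library("zbar") is not None

if __name__ == "__main__":
    if zbar_available():
        print("libzbar found; pyzbar should be able to decode images.")
    else:
        print("libzbar missing; install it, e.g. 'sudo apt-get install libzbar0'.")
```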
At the moment I have a Python web application running on uWSGI with a frontend built in EmberJS. There is also a small Python script that controls I/O and the serial ports of the BeagleBone Black.
The system runs on Debian; packages are managed and installed via Ansible, and the applications are also updated via Ansible scripts. In other words, updates are currently done manually by launching the Ansible scripts over SSH.
I'm now looking for a strategy/method to update my Python applications in an easy way, ideally one our clients can use themselves (e.g. via a web interface). A good example is a router firmware update; I'm wondering how to apply a similar strategy to my Python applications.
I looked at Yocto, which would let me build my own Linux image, but I don't see how to include my applications in those builds, and I don't want to rebuild a complete image for hotfixes.
Has anyone with a similar project got some useful information to share about upgrade strategies/methods?
A natural strategy would be to make use of the package manager also used for the rest of the system. The various package managers of Linux distributions are not closed systems. You can create your own package repository containing just your application/scripts and add it as a package source on your target. Your "updater" would work on top of that.
This is also a route you can go when using yocto.
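As a sketch of what such an updater could look like on a Debian target, assuming you publish your application as a .deb in your own repository (the package name my-application is hypothetical), the web-interface handler would just drive apt:

```python
"""Sketch of an updater riding on top of apt, assuming your application is
packaged as a .deb in your own repository. Package name is hypothetical."""
import subprocess

PACKAGE = "my-application"   # the .deb package name in your own repo

def update_commands(package: str) -> list:
    """The two apt invocations the updater would run, in order."""
    return [
        ["apt-get", "update"],   # refresh package lists, your repo included
        ["apt-get", "install", "--only-upgrade", "-y", package],
    ]

def run_update(package: str = PACKAGE) -> None:
    for cmd in update_commands(package):
        subprocess.check_call(cmd)

# run_update() would be triggered by the client-facing web interface.
```

This keeps dependency handling, rollbacks, and version pinning in apt's hands rather than reimplementing them.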
Hi, folks.
I am writing a program in Python on a Raspberry Pi 2 with I2C modules attached. But writing I2C code directly on the Raspberry Pi is frustrating, because the Pi is very slow and I cannot use my favorite editor, Sublime Text 2. I think that if I could emulate I2C on my MacBook Air or Ubuntu laptop, I could write code faster and more efficiently.
Could you kindly advise me on a way to realize my wish?
What you really want is a way to deploy to the Raspberry Pi so that you can develop locally. There are a number of different solutions (git push/pull, scp, FTP, etc.). You should look into the Fabric API, which lets you seamlessly add deployment to your development cycle.
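A plain scp/ssh variant of the same idea can be sketched with only the standard library; the host, paths, and service name below are hypothetical placeholders for your own setup.

```python
"""Minimal push-to-Pi deploy step via scp/ssh. Host, paths, and the
service name are hypothetical."""
import subprocess

HOST = "pi@raspberrypi.local"        # hypothetical Pi address
REMOTE_DIR = "/home/pi/app"          # hypothetical target directory

def deploy_command(local_path: str, host: str = HOST,
                   remote_dir: str = REMOTE_DIR) -> list:
    """scp invocation that copies the project tree to the Pi."""
    return ["scp", "-r", local_path, f"{host}:{remote_dir}"]

def restart_command(host: str = HOST) -> list:
    """ssh invocation that restarts the app's service after copying."""
    return ["ssh", host, "sudo systemctl restart my-app.service"]

def deploy(local_path: str) -> None:
    subprocess.check_call(deploy_command(local_path))
    subprocess.check_call(restart_command())

# deploy("./src") would run from your editor or a save hook after each change.
```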
I want to deploy my Python application to my customers. I basically don't know much about Python application deployment, but my requirements/questions are:
1) Users can install it as long as they have internet access. Mac applications are hosted by the Apple App Store; Chrome extensions are hosted by Google. My question is whether there is a similar place that hosts Python applications and provides an update mechanism. If I have to do it on my own, is there an existing framework for it?
2) My application reads a USB device and acts as an HTTP server. I want the install package to be as small as possible, and I also need to bundle the Python runtime. What package size should I be expecting? 5 MB? 10 MB?
I have successfully used PyInstaller for my project:
https://github.com/pyinstaller/pyinstaller/wiki
My application is reasonably large, so the installer package is around 100 MB, which compresses to 60 MB. A lot of that is numpy, qt, scipy, and matplotlib.
We use a script to invoke pyinstaller which packages our main script and dependencies into a .app file. https://github.com/Erotemic/ibeis/blob/next/installers.py
If you are installing on a mac, this script in my repo will take a pyinstaller package and bundle it into a dmg.
https://github.com/Erotemic/ibeis/blob/next/_scripts/mac_dmg_builder.sh
If you host your program on your own server you can integrate an auto-update mechanism, but I don't know exactly how to do that. I just host my installers on Dropbox.
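A minimal auto-update check for a self-hosted installer could look like the sketch below, assuming you serve a plain-text file with the latest version number next to the installers; the URL and baked-in version are hypothetical.

```python
"""Sketch of a simple auto-update check against a self-hosted version file.
URL and CURRENT_VERSION are hypothetical."""
from urllib.request import urlopen

VERSION_URL = "https://example.com/myapp/latest-version.txt"
CURRENT_VERSION = "1.4.2"   # baked into the shipped build

def parse_version(text: str) -> tuple:
    """'1.10.3' -> (1, 10, 3), so comparison is numeric rather than lexical."""
    return tuple(int(part) for part in text.strip().split("."))

def update_available(current: str, latest: str) -> bool:
    return parse_version(latest) > parse_version(current)

def check_for_update() -> bool:
    with urlopen(VERSION_URL, timeout=10) as resp:
        latest = resp.read().decode("ascii")
    return update_available(CURRENT_VERSION, latest)

# If check_for_update() returns True, prompt the user to fetch the new installer.
```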