Building Python packages for a Raspberry Pi using Azure Pipelines - python

I have set up two pipelines for a Python package: one for Windows, the other for Linux. The Windows one works as expected. However, if I copy the Linux executable to a Raspberry Pi, it won't run: if I double-click it, nothing happens, and if I execute it from a terminal, I get "permission denied". If I build the Python package locally on the Raspberry Pi, everything works as expected.
So basically my question is: do I need to specifically target Linux ARM for my Python app to run on my Raspberry Pi? If so, how can I achieve this? When creating a pipeline, I can only choose between x86 and x64 architectures.
Repo can be found here.
This is the pipeline I use for building and publishing:
trigger:
- master

jobs:
- job: 'Raspberry'
  pool:
    name: arm32 # Already tried to use a self-hosted build agent, but didn't get it to work
  variables:
    python.version: '3.7'
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(python.version)'
  - script: |
      cd AzurePipelinesWithPython
      python -m pip install --upgrade pip
    displayName: 'Install dependencies'
  - script: pip install pyinstaller
    name: 'pyinstaller'
  - script: cd AzurePipelinesWithPython && pyinstaller --onefile --noconfirm --clean test.py
    name: 'build'
  - task: PublishBuildArtifacts@1
    inputs:
      pathtoPublish: './AzurePipelinesWithPython/dist/'
      artifactName: 'AzurePipelinesWithPython-raspi-$(python.version)'
Sorry for not being able to post an Azure DevOps repo; it belongs to our corporate subscription and isn't public.

So basically my question is, do I need to specifically target Linux
ARM for my Python app to run on my Raspberry?
No, you don't need to specifically target ARM for your app. In particular, I don't think removing that task would make much difference. If you're using a hosted agent, it contains different Python versions by default. And if you use a self-hosted agent, you just need to make sure a suitable version is installed on the local machine.
However, if I copy the Linux executable to a Raspberry, it won't run.
If I double click it, nothing happens and if I execute it using a
terminal, I get permission denied.
For the strange behavior you met, I suspect it doesn't come from the Azure DevOps side. I searched and found several similar issues like yours (see one, two, three, four).
So I would think it's more likely a common issue in the Linux environment. Try executing the program from a terminal after running chmod u+x file_name to give the file permission to run. If the issue persists, you can also use file xx.exe to check whether the executable was generated correctly for Linux. Hope all of the above points you in the right direction.
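As a concrete sketch of the two checks suggested above (demo-binary is a hypothetical stand-in; in practice you would run this against your PyInstaller output on the Pi):

```shell
# Demo with a stand-in binary (substitute your PyInstaller output for demo-binary)
cp /bin/ls ./demo-binary
chmod u+x ./demo-binary      # restore the execute bit, often lost when copying artifacts
file ./demo-binary           # prints the binary's CPU architecture, e.g. "x86-64" or "ARM"
```

If file reports x86-64 rather than ARM, the binary was built for the wrong architecture and no amount of chmod will make it run on a Raspberry Pi.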

Related

Why am I getting (gsutil): "C:\Users\user\AppData\Local\Programs\Python\Python37\python.exe": command not found

After installing the Google Cloud SDK and connecting to the desired Firebase project, I am receiving:
ERROR: (gsutil)
"C:\Users\user\AppData\Local\Programs\Python\Python37\python.exe":
command not found when running any gsutil command.
My current setup is:
windows 10
Google Cloud SDK 281.0.0
bq 2.0.53
core 2020.02.14
gsutil 4.47
python 3.7
My theory is that, while installed "correctly", Python doesn't have access to gsutil commands.
I had the same problem and was able to solve it by setting a new environment variable, CLOUDSDK_PYTHON. On Windows 10 you can do this from the command line in two ways:
Set an env variable for the current terminal session
set CLOUDSDK_PYTHON="C:\Users\user\AppData\Local\Programs\Python\Python37\python.exe"
Set a permanent env variable (note that setx takes a space, not an equals sign):
setx CLOUDSDK_PYTHON "C:\Users\user\AppData\Local\Programs\Python\Python37\python.exe"
The file path will probably be different for everyone, so first check where python.exe is located and use your own path. I hope this helps.
Run:
set CLOUDSDK_PYTHON=C:\Users\user\AppData\Local\Programs\Python\Python37\python.exe
Note: There should be no quotes around the python path like this "C:\Users\user\AppData\Local\Programs\Python\Python37\python.exe" or it would attempt to run the command with quotes, which we know won't work.
To see a list of components that are available and currently installed, run command:
gcloud components list
To update all installed components to the latest available version(282.0) of Cloud SDK, run command:
gcloud components update
You can also reinstall it following this document. While the Cloud SDK currently uses Python 2 by default, you can use an existing Python installation if necessary by unchecking the option 'Install Bundled Python'.
As was suggested above, reinstalling using bundled Python worked for me. I had incorrectly assumed from Google's docs that I should choose between the bundled and my current Python install, not realizing both could run without conflict.
Syntax needed to be a little different for me in CMD and/or PowerShell - also I installed Python via the Microsoft Store so the command for me was:
SETX CLOUDSDK_PYTHON "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.1520.0_x64__qbz5n2kfra8p0\python3.9.exe"
You can get the exact path by running the Python app from the Start menu and then reading the window title.

How to configure the Jenkins ShiningPanda plugin Python Installations

The Jenkins ShiningPanda plugin provides a Manage Jenkins - Configure System setting for Python installations..., which includes the ability to Install automatically. This should allow me to automatically set up Python on my slaves.
But I'm having trouble figuring out how to use it. When I use the Add Installer drop down it gives me the ability to
Extract .zip/.tar.gz
Run Batch Command
Run Shell Command
But I can't figure out how people use these options to install Python, especially as I need to install Python on Windows, Mac, & Linux.
Other Plugins like Ant provide an Ant installations... which installs Ant automatically. Is this possible with Python?
As far as my experiments with Jenkins and Python go, the ShiningPanda plugin doesn't install Python on slave machines; in fact, it uses the existing Python installation set in the Jenkins configuration to run Python commands.
In order to install Python on slaves, I would recommend using a Python virtual environment, which comes along with ShiningPanda and allows you to run Python commands inside it and then tear the environment down.
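On the agent itself, the virtualenv approach described above boils down to something like the following (a sketch; build-env is a hypothetical name, and a real job would run this as a build step):

```shell
# Create an isolated environment using whatever Python the agent already has
python3 -m venv build-env
# Use the venv's interpreter directly (activation is optional in scripts)
./build-env/bin/python -c 'import sys; print(sys.prefix)'
# ... run the build/tests with ./build-env/bin/python here ...
```

The environment is just a directory, so removing it after the build leaves the agent untouched.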
(This is a Windows-only answer. Perhaps someone can complement this with a Linux/Unix answer, which is probably even simpler.)
Here is how we're currently doing automatic Python installations on Jenkins, with the ShiningPanda Jenkins plugin, for Python 2.7 on Windows, installing to c:\Python27:
Download the Python Windows MSI installer from https://www.python.org/downloads/windows/, and put it on some central share.
If you're running a server-version of Windows, then make sure to set DisableMsi to 0, i.e., find or create registry key HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Installer and create a value called DisableMsi and set it to zero.
On Jenkins => Manage Jenkins => Global Tool Configuration, add a 'Python installation', choose 'Install automatically' and set the label to cover all agent systems to which this applies. Then choose 'Run Batch Command', and as batch command use something like this:
if not exist c:\Python27\python.exe (
  start /wait msiexec /qn /i \\some-central-system\some-share\python-2.7.14.amd64.msi /l*v python27-install-log.txt
)
(Notes on how this works: msiexec is the tool to run an MSI installer where '/i' means "install", and in the case of Python it does not require elevated permissions. /l*v does verbose logging. /qn is to make sure no UI is shown, and cmd.exe's start /wait makes sure that msiexec /i waits until the installation completes.)
That's it!
All of the above might very well work with other versions of Python as well.

Google Cloud SDK: set environment variable python --> Linux

ERROR: Python 3 is not supported by the Google Cloud SDK. Please use a Python 2.x version that is 2.6 or greater.
If you have a compatible Python interpreter installed, you can use it by setting the CLOUDSDK_PYTHON environment variable to point to it.
I guess the first question we should be asking is "with all the money google makes off of their customers why can't they hire someone to ensure that their cloud sdk works with python 3?"
How exactly to overcome this error on linux? What specific files need to be edited? and where should those files be located?
I searched around, a lot, and found this question about how to fix this on Windows, but the answer is not really that comprehensive.
Thus far I've attempted:
One source of documentation says to modify a file called app.yaml, but I searched using the command find . -name "app.yaml" and no such file exists.
Specifically, I'm using Arch Linux. I originally tried to use the AUR package, but it's dysfunctional.
So I installed from the documentation, making sure to edit the ./install.sh file, specifying python2 as per this discussion on the Google Groups. That doesn't work either: after running the command gcloud auth login I get the same error as posted above.
This is a very easy thing to solve. The native python command on the Arch command line is actually Python 3, while the SDK requires Python 2.7.
Just go to the google-cloud-sdk folder and open the install.sh file.
Change the CLOUDSDK_PYTHON="python" value to CLOUDSDK_PYTHON="python2.7"
Rerun the install with the command ./install.sh in the same folder and follow the prompts.
That's all.
I had the same issue, so I made a little change in dev_appserver.py. This file is in the following path:
google-cloud-sdk/bin
change the shebang from /usr/bin/env python to /usr/bin/env python2
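The shebang edit above can be done with a one-line sed. A sketch, demonstrated on a stand-in file (demo_script.py is hypothetical; in practice you would point the sed at google-cloud-sdk/bin/dev_appserver.py):

```shell
# Create a stand-in file with the original shebang
printf '#!/usr/bin/env python\nprint("hi")\n' > demo_script.py
# Rewrite only the first line: env python -> env python2 (GNU sed)
sed -i '1s|env python$|env python2|' demo_script.py
head -1 demo_script.py   # now reads: #!/usr/bin/env python2
```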
I see this almost every time I update gcloud SDK, especially when running dev_appserver.py <my app config yaml file>
I found that setting the CLOUDSDK_PYTHON env variable to python2 seems to silence the error, e.g. on macOS:
export CLOUDSDK_PYTHON=python2
Not sure why they simply cannot make this dev server compatible with Python 3 already
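The export shown above only lasts for the current session; a sketch of both the per-session and the persistent form (the ~/.bashrc path is an assumption, adjust for your shell):

```shell
# Point the SDK at Python 2 for the current shell session
export CLOUDSDK_PYTHON=python2
# To make it permanent, append the same line to your shell profile, e.g.:
#   echo 'export CLOUDSDK_PYTHON=python2' >> ~/.bashrc
echo "$CLOUDSDK_PYTHON"
```

Every gcloud/gsutil invocation in that session will then use the interpreter the variable names.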

Installing Anaconda on Amazon Elastic Beanstalk

I've added deploy commands to my Elastic Beanstalk deployment which download the Anaconda installer, and install it into /anaconda. Everything goes well, but I cannot seem to correctly modify the PATH of my instance to include /anaconda/bin as suggested by the Anaconda installation page. If I SSH into an instance and manually add it, everything works fine. But this is obviously not the correct approach, as machines will be added automatically by EB.
So my question is: how can I use Anaconda in my script?
A couple more details:
I've tried adding /anaconda/bin to the system PATH all ways I can think of. Pre/post deploy scripts, custom environment variables, etc. It seems that no matter what I do, the modifications don't persist to when the application is run.
I've tried to include Anaconda by adding it to sys.path: sys.path.append('/anaconda/bin')
to no avail. Using the following: sys.path.append('/anaconda/lib/python2.7/site-packages') allows me to import some packages but fails on import pandas. Strangely enough, if I SSH into the instance and run the application with their python (/opt/python/run/venv/bin/python2.7) it runs fine. Am I going crazy? Why does it fail on a specific import statement when run via EB?
Found the answer: import pandas was failing because matplotlib was failing to initialize, because it was trying to get the current user's home directory. Since the application is run via WSGI, the HOME variable is set to /home/wsgi but this directory doesn't exist. So, creating this directory via deployment command fixed this issue.
My overall setup to use Anaconda on Elastic Beanstalk is as follows:
.ebextensions/options.config contains:
commands:
  00_download_conda:
    command: 'wget http://repo.continuum.io/archive/Anaconda-2.0.1-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda-2.0.1-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
00_download_conda simply downloads Anaconda. See here for the latest Anaconda version download link. The test commands are EB's way of letting you execute the command only when the test passes (here, only when /anaconda doesn't exist yet); this just prevents downloading twice during development.
01_install_conda installs Anaconda with options -b -f -p /anaconda which allows it to be installed in the specified directory, without user input, and skips installation if it has already been installed.
02_create_home creates the missing directory.
And finally - to use Anaconda inside your python application: sys.path.append('/anaconda/lib/python2.7/site-packages')
Cheers!

RPM Post Install Execute with Python

I'm working on a deployment process for work and have run into a bit of a snag. It's more of a quality-of-life thing than anything else. I've been following Hynek Schlawack's excellent guide and have gotten pretty far. The long and short of what I'm trying to do is install a Python application along with a deployment of the Python version I'm currently using. I'm using fpm to build an RPM that will then be sent and installed on site.
As part of my deployment, I'd like to run some post-install scripts, which I can specify in fpm using "--post-install {SCRIPT_NAME}". This works all well and good when the script is an actual shell script. However, I'd really like to run a Python script as my post-install. I can specify an executable Python script, but it fails because I believe it is trying to execute the script as: bash my_python_script.py
Does anyone know if there is a way to execute a python script post-install of an RPM?
Thanks in advance!
In the spec file you can specify which interpreter the %post script is for by using the -p parameter, e.g. %post -p /usr/bin/perl.
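For those building with fpm rather than a raw spec file, one workaround is to make the --post-install script a tiny shell wrapper that immediately hands off to Python. A sketch with files under /tmp for illustration (a real package would install these paths via the RPM itself, and the file names are hypothetical):

```shell
# The real post-install logic, in Python
cat > /tmp/post_install.py <<'EOF'
print("python post-install ran")
EOF
# The wrapper passed to fpm via --post-install: rpm runs %post scripts
# with /bin/sh by default, so exec transfers control to the interpreter
cat > /tmp/post_install.sh <<'EOF'
#!/bin/sh
exec /usr/bin/env python3 /tmp/post_install.py
EOF
chmod +x /tmp/post_install.sh
/tmp/post_install.sh
```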
