Install Python on a self-hosted Windows build agent

I have installed a Windows agent and I need to be able to run Python scripts. I know I need to install Python, but I have no idea how.
I added Python files from standard installation to
$AGENT_TOOLSDIRECTORY/
Python/
3.8.2/
x64/
{tool files}
x64.complete
I restarted the agent, but what now? How do I get it listed under Capabilities?
What am I missing?
EDIT:
I need to run this YAML task:
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- task: BatchScript@1
  displayName: 'Run script make.bat'
  inputs:
    filename: make.bat
    arguments: html

I have set up a self-hosted agent on a Windows 10 laptop (for which I have admin access), and I'm running Azure DevOps Express 2020.
I found, downloaded and installed an agent according to the instructions at Download and configure the agent. I used vsts-agent-win-x64-2.170.1.zip and set this up to run as a service (I guess anyone running it manually needs to double-check that it's running at show time). I also ran the install command as admin in PowerShell!
To install a Python version I need to download the appropriate installer from the FTP site at Python.org, e.g. for 3.7.9 I've used python-3.7.9-amd64.exe.
I then run this from the command line (CMD run as administrator) without UI:
python-3.7.9-amd64.exe /quiet InstallAllUsers=0 TargetDir=$AGENT_TOOLSDIRECTORY\Python\3.7.9\x64 Include_launcher=0
(other install options are available in the Python docs)
Once this is complete (it runs in the background, so it takes longer than the initial command), you need to create an empty {platform}.complete file (as described here); in my case this is x64.complete.
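For example, a minimal sketch from the same elevated CMD prompt, assuming the install path used above (substitute the literal tools directory path if the variable isn't set in your shell):
rem create the empty marker file the agent looks for
type NUL > "%AGENT_TOOLSDIRECTORY%\Python\3.7.9\x64.complete"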
This then worked! I did restart the server for this first version, but I've added other python versions since without needing to. My pipeline task was simply:
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python $(python.version)'
  inputs:
    versionSpec: '$(python.version)'
(with a variable python.version set up as a list of versions: 3.7.9, 3.8.8)
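For reference, a hedged sketch of how such a version list is commonly declared as a matrix strategy; the job names Python379 and Python388 are my own placeholders:
strategy:
  matrix:
    Python379:
      python.version: '3.7.9'
    Python388:
      python.version: '3.8.8'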
One key element for me was the file structure: where the documentation says {tool files}, it means the python.exe file and the other common dirs such as Lib and Scripts. I initially installed these in a sub-dir, which didn't work. So it should look like this:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.7.9/
            x64/
                Doc/
                Lib/
                Scripts/
                python.exe
                ...etc...
            x64.complete
To be honest, I'm mostly relieved that this worked without too much trouble. I gave up trying to get Artifacts to work on-prem. In my limited experience, all of this is much easier, and better, on the cloud version. I haven't yet persuaded my employer to take that leap, however...

For this issue, in order to use the Python version installed on your on-premises machine, you either need to point to the physical python.exe path in a CMD task, or add the python.exe path to the PATH environment variable manually in a PowerShell task. For example:
To use the local Python in a PowerShell task:
$env:Path += ";c:\{local path to}\Python\Python38\;c:\{local path to}\Python\Python38\Scripts\"
python -V
To use Python in a CMD task:
c:\{local path to}\Python\Python38\python.exe -V
c:\{local path to}\Python\Python38\Scripts\pip.exe install
So, to run a Python script with a private agent, just make sure Python is installed locally, then point to the python.exe path. You can refer to this case for details.
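As a sketch, the PowerShell variant above could sit in a pipeline step like this (the {local path to} placeholders are carried over from above):
steps:
- powershell: |
    # prepend the local install to this step's PATH, then verify
    $env:Path += ";c:\{local path to}\Python\Python38\;c:\{local path to}\Python\Python38\Scripts\"
    python -V
  displayName: 'Run local Python'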

I added these four tasks before being able to execute Python in my pipeline with a vs2017-win2016 agent:
Use Python 3.x
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'
Use Pip Authenticate
steps:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
Use Commandline task
steps:
- script: |
    python -m pip install --upgrade pip setuptools wheel
  failOnStderr: true
  displayName: 'install pip for setup of python framework'
Use Commandline task
steps:
- script: 'pip install -r _python-test-harness/requirements.txt'
  failOnStderr: true
  displayName: 'install python framework project''s specific requirements'
Hope that helps

Related

Automated building of executables

I have a GUI program I'm managing, written in Python. For the sake of not having to worry about environments, it's distributed as an executable built with PyInstaller. I can run this build from a function defined in the module as MyModule.build() (because to me it makes more sense to manage that script alongside the project itself).
I want to automate this to some extent, such that when a new release is added on GitLab, it can be built and attached to the release by a runner. The approach I currently have is functional but hacky:
I use the GitLab API to download the source of the tag for the release. I run python -m pip install -r {requirements_path} and python -m pip install {source_path} in the runner's environment, then import and run the MyModule.build() function to generate an executable, which is then uploaded and linked to the release with the GitLab API.
Obviously the middle section is wanting. What are the best approaches for similar projects? Can the package and requirements be installed in a separate venv from the one the runner script is running in?
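On the venv question specifically: yes, the runner's script can create a separate venv and drive everything through that venv's interpreter without activating it. A minimal sketch assuming a Linux runner, with buildenv as an illustrative name and the {...} placeholders carried over from above:
python -m venv buildenv
# install into the venv by invoking its interpreter directly
buildenv/bin/python -m pip install -r {requirements_path}
buildenv/bin/python -m pip install {source_path}
# run the build entry point inside that venv
buildenv/bin/python -c "import MyModule; MyModule.build()"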
One workflow would be to push a tag to create your release. The following jobs have a rules: configuration so they only run on tag pipelines.
One job will build the executable file. Another job will create the GitLab release using the file created in the first job.
build:
  rules:
    - if: "$CI_COMMIT_TAG" # Only run when tags are pushed
  image: python:3.9-slim
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  cache: # https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - python -m pip install -r requirements.txt # package requirements
    - python -m pip install pyinstaller # build requirements
    - pyinstaller --onefile --name myapp mypackage/__main__.py
  artifacts:
    paths:
      - dist

create_release:
  rules:
    - if: "$CI_COMMIT_TAG"
  needs: [build]
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script: # zip/upload your binary wherever it should be downloaded from
    - echo "Uploading release!"
  release: # create GitLab release
    tag_name: $CI_COMMIT_TAG
    name: 'Release of myapp version $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
    assets: # link uploaded asset(s) to the release
      links:
        - name: 'release-zip'
          url: 'https://example.com/downloads/myapp/$CI_COMMIT_TAG/myapp.zip'
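One hedged way to replace the echo placeholder is to upload the binary to GitLab's generic package registry and point the asset link there; the package name myapp and the paths here are illustrative:
script:
  - |
    # upload dist/myapp to this project's generic package registry
    curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" \
         --upload-file dist/myapp \
         "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/myapp/${CI_COMMIT_TAG}/myapp"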

Issues trying to install Airflow locally

I'm new to Airflow and I'm trying to install it locally, following the instructions at the link below:
https://airflow.apache.org/docs/apache-airflow/stable/start/local.html
I'm running this code (as mentioned in the link):
# Airflow needs a home. `~/airflow` is the default, but you can put it
# somewhere else if you prefer (optional)
export AIRFLOW_HOME=~/airflow
# Install Airflow using the constraints file
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
# For example: 3.6
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
# For example: https://raw.githubusercontent.com/apache/airflow/constraints-2.2.5/constraints-3.6.txt
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# The Standalone command will initialise the database, make a user,
# and start all components for you.
airflow standalone
# Visit localhost:8080 in the browser and use the admin account details
# shown on the terminal to login.
# Enable the example_bash_operator dag in the home page
and getting this error:
File "C:\Users\F43555~1\AppData\Local\Temp/ipykernel_12908/3415008398.py", line 3
export AIRFLOW_HOME=~/airflow
^
SyntaxError: invalid syntax
Does anyone know how to deal with it?
I'm running on Windows 10, VS Code (Jupyter notebook).
Thanks!
Airflow is only supported on Linux, and it looks like you're trying to run this on a Windows machine. (The export line is a bash command; running it in a Jupyter/IPython cell is why the Python kernel raises a SyntaxError.)
If you want to install Airflow on Windows you'll need to use something like Windows Subsystem for Linux (WSL) or Docker. There are some examples around which show you how to do this on WSL (and loads using Docker) - here is one of them with WSL.
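As a minimal sketch, assuming a recent Windows 10 build where the wsl command is available (run from an elevated PowerShell prompt, not from a notebook cell):
wsl --install -d Ubuntu   # one-time setup; reboot when prompted
wsl                       # opens the Ubuntu shell
# inside that shell, the bash quick-start commands above run as written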

Why does RTD run "python env create" instead of "conda env create" when building?

I have a project on ReadTheDocs that I'm trying to build. I'm using a very basic .readthedocs.yaml file that reads:
# .readthedocs.yaml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the version of Python and other tools you might need
build:
  os: ubuntu-20.04
  tools:
    python: "3.9"
    # You can also specify other tool versions:
    # nodejs: "16"
    # rust: "1.55"
    # golang: "1.17"

# Build documentation in the docs/ directory with Sphinx
sphinx:
  builder: html
  configuration: docs/source/conf.py
  fail_on_warning: true

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
#   - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/requirements.txt

conda:
  environment: environment.yml
Unfortunately, the RTD build logs seem to tell me that after cloning and writing out the environment.yml file, the build process runs python env create --quiet --name develop --file environment.yml. This obviously fails with "no such file or directory" (Error 2) since, well, no such file or directory as env exists in the directory structure. Shouldn't RTD be running conda env create here? How do I make it do the right thing?
Thanks,
Eli
This problem is described in https://github.com/readthedocs/readthedocs.org/issues/8595
In summary, python: "3.9" now means that CPython 3.9 + venv are used, regardless of conda.environment.
If one wants to use a conda environment, python: "miniconda3-4.7" or python: "mambaforge-4.10" needs to be specified instead.
In the future, a better error message should be shown at least. Feel free to upvote the issue.
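Applied to the file above, that means swapping the tools entry; a sketch of the relevant part:
build:
  os: ubuntu-20.04
  tools:
    python: "miniconda3-4.7"   # conda-based tool instead of a CPython version
conda:
  environment: environment.yml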

Python Package publish in Azure DevOps Artifacts: 'D:\a\1\a' is empty. Nothing will be added to build artifact

A. I used twine to authenticate and publish my Python packages to an Azure Artifacts feed:
- task: CmdLine@2
  displayName: Build Artifacts
  inputs:
    script: |
      echo Building distribution package
      python -m pip install --upgrade twine build setuptools
      python -m build
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: ddey-feed
- script: |
    python -m twine upload -r "ddey-feed" --config-file $(PYPIRC_PATH) dist/*.whl
B. Although it ran successfully, I didn't get any package in Artifacts. I found the warning: 'D:\a\1\a' is empty. Nothing will be added to build artifact.
C. I did some research and decided to add an additional section which does a copy and publish:
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: |
      **/*
      !.git/**/*
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
  condition: succeededOrFailed()
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
  condition: succeededOrFailed()
Can anyone please comment on what else I can modify in the YAML file to get the package available in Artifacts?
Things tried after suggestions:
1. Added a tree command to print all the build folders, to confirm the file is generated.
2. After removing the source folder and letting it use the default source: successful build and consumed; the artifact is generated and I can see it from the pipeline.
Problem statement
In the Artifacts tab, I don't see the build available in any feed. How do I connect the build with a specific feed (ddey-feed)? I thought TwineAuthenticate was supposed to take care of it.
OK, I have finally resolved the whole issue and was able to deploy the package to the Artifacts feed.
Key learnings:
When creating the Artifacts feed, make sure to check permissions. Add Allow project-scoped builds, otherwise you will get a permission error when pushing the package from the Azure pipeline.
You need to define PYPIRC_PATH to point to where the .pypirc file resides. This can be done using an environment variable, as shown below:
- script: |
    echo "$(PYPIRC_PATH)"
    python -m twine upload -r ddey-feed --verbose --config-file $(PYPIRC_PATH) dist/*
  displayName: 'cmd to push package to Artifact Feed'
  env:
    PYPIRC_PATH: $(Build.SourcesDirectory)
Make sure the Twine Authenticate feed name matches the twine upload feed name. If the pipeline fails to push the package, you can try running the following command directly from your repo: twine upload -r ddey-feed --config-file ./.pypirc dist/* and it should successfully upload the build to Artifacts.
For debugging purposes, print the directories:
echo "Structure of work folder of this pipeline:"
tree $(Agent.WorkFolder)\1 /f
echo "Build.ArtifactStagingDirectory:"
echo "$(Build.ArtifactStagingDirectory)"
echo "Build.BinariesDirectory:"
echo "$(Build.BinariesDirectory)"
echo "Build.SourcesDirectory:"
echo "$(Build.SourcesDirectory)"
Summary of the components of the pipeline: make sure the indentation before the CopyFiles@2 task is correct.
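Putting those pieces together, the working sequence looks roughly like this sketch (feed name and paths as used above):
steps:
- script: |
    python -m pip install --upgrade twine build setuptools
    python -m build
  displayName: 'Build distribution package'
- task: TwineAuthenticate@1
  inputs:
    artifactFeed: ddey-feed
- script: |
    python -m twine upload -r ddey-feed --verbose --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Push package to Artifacts feed'
  env:
    PYPIRC_PATH: $(Build.SourcesDirectory)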

sphinx-build command not found in gitlab ci pipeline / python 3 alpine image

I want to specify a GitLab job that creates Sphinx HTML documentation.
I am using a Python 3 Alpine image (cannot specify which exactly).
The build stage within my .gitlab-ci.yml looks like this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - sphinx-build -b html docs/ public/
  only:
    - master
However, the pipeline fails: sphinx-build: command not found (same error for make html).
According to this tutorial, my .gitlab-ci.yml should be more or less correct.
What am I doing wrong? Is this issue related to the Alpine image I am using?
As @Yasen correctly noted, the path to sphinx-build was not contained in $PATH. However, adding command before sphinx-build did not solve the problem for me.
Anyway, I found the solution in the runner logs: the output of pip install -U sphinx produced the following warning:
WARNING: The scripts sphinx-apidoc, sphinx-autogen, sphinx-build and sphinx-quickstart are installed in 'some/path' which is not on PATH.
so I appended some/path to $PATH in the script step of the .gitlab-ci.yml:
script:
  - pip install -U sphinx
  - export PATH="$PATH:some/path"
  - sphinx-build -b html docs/ public/
Did the command pip install -U sphinx succeed? (You should be able to tell that from the CI job log.)
If so, you may need to specify the full path to sphinx-build, as Yasen said.
If it did not succeed, you should troubleshoot the installation of Sphinx.
Most likely the reason is that $PATH doesn't contain the path to sphinx-build.
TL;DR: try to use command.
Try this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - command sphinx-build -b html docs/ public/
  only:
    - master
Explanation
GitLab runners run in different ways.
Since GitLab CI uses runners, a runner's shell profile may differ from the one you commonly use.
So your runner may be configured without the directory that contains sphinx-build declared in $PATH.
Zsh/Bash startup files loading order (.bashrc, .zshrc etc.)
See this explanation:
The issue is that Bash sources from a different file based on what kind of shell it thinks it is in. For an “interactive non-login shell”, it reads .bashrc, but for an “interactive login shell” it reads from the first of .bash_profile, .bash_login and .profile (only). There is no sane reason why this should be so; it’s just historical.
What does command mean?
Since we don't know the path where sphinx-build is installed, you may use commands like which, type, etc.
As per this great answer (shell - Why not use "which"? What to use then? - Unix & Linux Stack Exchange), the author recommends using command <name>, or $(command -v <name>).
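A further alternative, not mentioned in the answers above but one that sidesteps the PATH lookup entirely, is invoking Sphinx as a module through the same interpreter pip installed it into:
script:
  - pip install -U sphinx
  # python -m sphinx is equivalent to sphinx-build, no PATH entry needed
  - python -m sphinx -b html docs/ public/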
