I'm trying to calculate paths for a pip install from an internal devpi server. I'm running a self-hosted runner on a Windows Server virtual machine. I'm trying to install the latest version of a pip package into the tool cache directory by calculating the path as follows:
- name: pip install xmlcli
  env:
    MYTOOLS: ${{ runner.tool_cache }}\mytools
  run: |
    echo "${{ runner.tool_cache }}"
    $env:MYTOOLS
    pip install --no-cache-dir --upgrade mytools.xmlcli --target=$env:MYTOOLS -i ${{secrets.PIP_INDEX_URL}}
    echo "XMLCLI={$env:MYTOOLS}\mytools\xmlcli" >> $GITHUB_ENV
- name: test xmlcli
  run: echo "${{ env.XMLCLI }}"
As you can see, I've had some noob issues trying to output the env variable on Windows. I came to the conclusion that under Windows the "run" command is executed via PowerShell, hence the "$env:MYTOOLS" usage.
The problem is that the echo "XMLCLI=..." back to $GITHUB_ENV doesn't seem to be working properly, as the "test xmlcli" step returns an empty string.
I'm pretty sure I tried several different iterations of the echo command, but haven't been successful.
Is there a video/doc/something that clearly lays out the usage of "path arithmetic" from within the GitHub Actions environment?
You need to append to $env:GITHUB_ENV, or you can change the shell used by your run step.
When using shell: pwsh, you can use:
"{environment_variable_name}={value}" >> $env:GITHUB_ENV
When using shell: powershell, use:
"{environment_variable_name}={value}" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
But in your case, if you're more familiar with bash, you can force the run step to always use bash:
- run: |
    # your stuff here
  shell: bash
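For instance, a minimal sketch of that bash variant (reusing MYTOOLS and the package name from the question; it assumes bash is available on the self-hosted Windows runner):
- name: pip install xmlcli (bash)
  shell: bash
  env:
    MYTOOLS: ${{ runner.tool_cache }}\mytools
  run: |
    pip install --no-cache-dir --upgrade mytools.xmlcli --target="$MYTOOLS" -i ${{ secrets.PIP_INDEX_URL }}
    # appending to the file behind $GITHUB_ENV makes XMLCLI available to later steps
    echo "XMLCLI=$MYTOOLS\mytools\xmlcli" >> "$GITHUB_ENV"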
See:
run: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
Workflow commands for actions: https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions?tool=bash
@jessehouwing gave me enough information to test a solution to my particular problem. Assuming Windows runners always run PowerShell, the answer is:
- name: pip install xmlcli
  env:
    MYTOOLS: ${{ runner.tool_cache }}\mytools
  run: |
    echo "${{ runner.tool_cache }}"
    $env:MYTOOLS
    pip install --no-cache-dir --upgrade mytools.xmlcli --target=$env:MYTOOLS -i ${{secrets.PIP_INDEX_URL}}
    "XMLCLI=$env:MYTOOLS\mytools\xmlcli" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
- name: test xmlcli
  run: echo "${{ env.XMLCLI }}"
Incidentally, I was using
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-environment-variable
as a reference; it didn't seem to give the PowerShell equivalent.
I have created a GitHub Actions pipeline to lint the most recently committed code. The test script works in my local environment, but not on the GitHub server. What peculiarity am I not noticing?
Here is my code for linting:
#!/usr/bin/env bash
LATEST_COMMIT=$(git rev-parse HEAD);
echo "Analyzing code changes under the commit hash: $LATEST_COMMIT";
FILES_UNDER_THE_LATEST_COMMIT=$(git diff-tree --no-commit-id --name-only -r $LATEST_COMMIT);
echo "Files under the commit:";
echo $FILES_UNDER_THE_LATEST_COMMIT;
MATCHES=$(echo $FILES_UNDER_THE_LATEST_COMMIT | grep '.py');
echo "Files under the commit with Python extension: $MATCHES";
echo "Starting linting...";
if echo $MATCHES | grep -q '.py';
then
echo $MATCHES | xargs pylint --rcfile .pylintrc;
else
echo "Nothing to lint";
fi
Here is my GitHub Actions config:
name: Pylint
on: [push]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10"]
steps:
- uses: actions/checkout#v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python#v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
if: "!contains(github.event.head_commit.message, 'NO_LINT')"
run: |
python -m pip install --upgrade pip
pip install pylint psycopg2 snowflake-connector-python pymssql
chmod +x .github/workflows/run_linting.sh
- name: Analysing all Python scripts in the project with Pylint
if: "contains(github.event.head_commit.message, 'CHECK_ALL')"
run: pylint --rcfile .pylintrc lib processes tests
- name: Analysing the latest committed changes Pylint
if: "!contains(github.event.head_commit.message, 'NO_LINT')"
run: .github/workflows/run_linting.sh
What I get in GitHub versus what I get on my computer (screenshots omitted): the script finds nothing to lint on the GitHub runner, but works as expected locally.
Here's your problem in a nutshell:
steps:
  - uses: actions/checkout@v3
By default, checkout@v2 and checkout@v3 make a shallow (depth 1), single-branch clone. Such a clone has exactly one commit in it: the most recent one.
As a consequence, this:
git diff-tree --no-commit-id --name-only -r $LATEST_COMMIT
produces no output at all. There's no parent commit available to compare against. (I'd argue that this is a bit of a bug in Git: git diff-tree should notice that the parent is missing due to the .git/shallow grafts file. However, git diff-tree traditionally produces an empty diff for a root commit, and without special handling in git diff-tree, the shallow clone makes git diff-tree think this is a root commit. Oddly, the user-oriented git diff would treat every file as added—still not what you want, but it would have actually worked.)
To fix this, force the fetch depth to be at least 2. Using fetch-depth: 0 will force a full (non-shallow) clone, but the reason for using a shallow, single-branch clone is to speed up the action by omitting unnecessary commits. As only the first two "layers" of commits are required here, fetch-depth: 2 provides the correct number.
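For example, a minimal sketch of the adjusted checkout step (fetch-depth is the actions/checkout input that controls the clone depth):
steps:
  - uses: actions/checkout@v3
    with:
      # fetch the last two commits so git diff-tree can see the parent
      fetch-depth: 2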
Side note: your bash code has every command terminated with a semicolon. This works fine, but is unnecessary (it reminds me of doing too much C or C++ programming).¹ Also, you can just run git diff-tree <options> HEAD: there's no need for a separate git rev-parse step here (though you might still want that in the echo); see the sketch after the footnote.
¹ As I switch from C to C++ to Go to Python to shell etc., I either put in too many or too few parentheses and semicolons, leading to C compiler errors from:
if x == 0 {
because my brain is in Go mode. 😀 When I switch back to hacking on the Go code, I put in too many parentheses, but gofmt removes them and there's no compile error.
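To make the side note concrete, here is a sketch of run_linting.sh that diffs HEAD directly, keeping rev-parse only for the echo (it assumes the fetch-depth: 2 checkout described above):
#!/usr/bin/env bash
echo "Analyzing code changes under the commit hash: $(git rev-parse HEAD)"
# with fetch-depth: 2 the parent commit exists, so diff-tree lists the changed files
MATCHES=$(git diff-tree --no-commit-id --name-only -r HEAD | grep '\.py$')
if [ -n "$MATCHES" ]; then
    echo "Files under the commit with Python extension: $MATCHES"
    echo "$MATCHES" | xargs pylint --rcfile .pylintrc
else
    echo "Nothing to lint"
fi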
I have the following buildspec.yml file:
version: 0.2
env:
  parameter-store:
    s3DestFileName: "/CodeBuild/s3DestFileName"
    s3SourceFileName: "/CodeBuild/s3SourceFileName"
    imgFileName: "/CodeBuild/imgFileName"
    imgPickleFileName: "/CodeBuild/imgPickleFileName"
phases:
  install:
    on-failure: ABORT
    runtime-versions:
      python: 3.7
    commands:
      - echo Entered the install phase. Downloading new assets to /tmp
      - aws s3 cp s3://xxxx/yyy/test.csv /tmp/test.csv
      - aws s3 cp s3://xxxx/yyy/test2.csv /tmp/test2.csv
      - ls -la /tmp
      - curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3 -
      - export PATH=$PATH:$HOME/.poetry/bin
      - poetry --version
      - cd ./create-model/ && poetry install
  build:
    on-failure: ABORT
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - ls -la
      - poetry run python3 knn.py
I'm using poetry to manage all my packages. I do not have any artifacts to be used.
This is the content of the knn.py file (or part of it, actually):
import pandas as pd
print("started...")
df = pd.read_csv('/tmp/n1.csv', index_col=False)
print("df read...")
print(df.head())
I don't see any errors in the logs. The install phase runs fine, but when the build phase is started, I see that it has invoked the knn.py file.
I've waited for almost 30 minutes, but all I see in the log is "started...".
I don't see any of the other print statements in the log. It probably is not progressing any further. I've tried using different AWS managed images, but it's still the same result.
This code runs perfectly fine if I run it locally on my machine.
Edit:
I tried the advanced build override and connected to the container using SSM. I installed pandas locally, ran read_csv(), and it worked. However, the command poetry run python3 knn.py from the buildspec.yml is still hanging.
I am trying to set up a project that uses the shiny new Jenkins Pipelines, more specifically a multibranch project.
I have a Jenkinsfile created in a test branch as below:
node {
    stage 'Preparing VirtualEnv'
    if (!fileExists('.env')) {
        echo 'Creating virtualenv ...'
        sh 'virtualenv --no-site-packages .env'
    }
    sh '. .env/bin/activate'
    sh 'ls -all'
    if (fileExists('requirements/preinstall.txt')) {
        sh 'pip install -r requirements/preinstall.txt'
    }
    sh 'pip install -r requirements/test.txt'
    stage 'Unittests'
    sh './manage.py test --noinput'
}
It's worth noting that preinstall.txt will update pip.
I am getting the error below:
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip'
It looks like it's trying to update pip in the global env instead of inside the virtualenv, and it looks like each sh step runs in its own context. How do I make them execute within the same context?
What you are trying to do will not work. Every time you call the sh command, Jenkins will create a new shell.
This means that if you source .env/bin/activate in one sh step, it will only be sourced in that shell session. The result is that in a new sh command you have to source the file again (if you take a closer look at the console output, you will see that Jenkins actually creates temporary shell files each time you run the command).
So you should either source the .env/bin/activate file at the beginning of each shell command (you can use triple quotes for multiline strings), like so:
if (fileExists('requirements/preinstall.txt')) {
    sh """
    . .env/bin/activate
    pip install -r requirements/preinstall.txt
    """
}
...
sh """
. .env/bin/activate
pip install -r requirements/test.txt
"""
}
stage("Unittests") {
    sh """
    . .env/bin/activate
    ./manage.py test --noinput
    """
}
or run it all in one shell
sh """
. .env/bin/activate
if [[ -f requirements/preinstall.txt ]]; then
pip install -r requirements/preinstall.txt
fi
pip install -r requirements/test.txt
./manage.py test --noinput
"""
Like Rik posted, virtualenvs don't work well within the Jenkins Pipeline environment, since a new shell is created for each command.
I created a plugin that makes this process a little less painful, which can be found here: https://wiki.jenkins.io/display/JENKINS/Pyenv+Pipeline+Plugin. It essentially just wraps each call in a way that activates the virtualenv prior to running the command. This in itself is tricky, as some methods of running multiple commands inline are split into two separate commands by Jenkins, causing the activated virtualenv to no longer apply.
I'm new to Jenkinsfiles. Here's how I've been working around the virtual environment issue. (I'm running Python 3 and Jenkins 2.73.1.)
Caveat: just to be clear, I'm not saying this is a good way to solve the problem, nor have I tested this enough to stand behind the approach, but here is what is working for me today:
I've been playing around with bypassing the venv 'activate' by calling the virtual environment's python interpreter directly. So instead of:
source ~/venv/bin/activate
one can use:
~/venv/bin/python3 my_script.py
I pass the path to my virtual environment's python interpreter via the shell's rc file (in my case, ~/.bashrc). In theory, every shell Jenkins calls should read this resource file. In practice, I must restart Jenkins after making changes to the shell resource file.
HOME_DIR=~
export VENV_PATH="$HOME_DIR/venvs/my_venv"
export PYTHON_INTERPRETER="${VENV_PATH}/bin/python3"
My Jenkinsfile looks similar to this:
pipeline {
    agent {
        label 'my_slave'
    }
    stages {
        stage('Stage1') {
            steps {
                // sh 'echo $PYTHON_INTERPRETER'
                // sh 'env | sort'
                sh "$PYTHON_INTERPRETER my_script.py"
            }
        }
    }
}
So when the pipeline is run, the sh step has the $PYTHON_INTERPRETER environment variable set.
Note that one shortcoming of this approach is that the Jenkinsfile no longer contains all the information necessary to run the script correctly. Hopefully this will get you off the ground.
I am trying to provision a CoreOS box using Ansible. First I bootstrapped the box using https://github.com/defunctzombie/ansible-coreos-bootstrap
This seems to work, but pip (located in /home/core/bin) is not added to the path. In a next step I am trying to run a task that installs docker-py:
- name: Install docker-py
  pip: name=docker-py
As pip's folder is not on the path, I set it using Ansible:
environment:
  PATH: /home/core/bin:$PATH
If I try to execute this task, I get the following error:
fatal: [192.168.0.160]: FAILED! => {"changed": false, "cmd": "/home/core/bin/pip install docker-py", "failed": true, "msg": "\n:stderr: /home/core/bin/pip: line 2: basename: command not found\n/home/core/bin/pip: line 2: /root/pypy/bin/: No such file or directory\n"}
What I'm asking is: where does /root/pypy/bin/ come from? It seems this is the problem. Any idea?
You can't use shell-style variable expansion when setting Ansible variables. In this statement...
environment:
  PATH: /home/core/bin:$PATH
...you are setting your PATH environment variable to the literal value /home/core/bin:$PATH. In other words, you are blowing away any existing value of $PATH, which is why you're getting "command not found" errors for basic things like basename.
Consider installing pip somewhere in your existing $PATH, modifying $PATH before calling ansible, or calling pip from a shell script:
- name: install something with pip
  shell: |
    PATH="/home/core/bin:$PATH"
    pip install some_module
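Alternatively, a sketch that keeps the environment: approach but expands the remote PATH from a gathered fact instead of the literal $PATH (this assumes fact gathering is enabled so ansible_env is populated):
- name: Install docker-py
  pip: name=docker-py
  environment:
    # ansible_env.PATH is the remote user's actual PATH, gathered as a fact
    PATH: "/home/core/bin:{{ ansible_env.PATH }}"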
The problem lies in the /home/core/bin/pip script, which is literally:
#!/bin/bash
LD_LIBRARY_PATH=$HOME/pypy/lib:$LD_LIBRARY_PATH $HOME/pypy/bin/$(basename $0) $@
When run as root by Ansible, the $HOME variable expands to /root and not to /home/core.
Change $HOME to /home/core and it should work.
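A sketch of the corrected wrapper with the home directory hard-coded (assuming the pypy tree really does live under /home/core):
#!/bin/bash
# hard-code the core user's home so the wrapper still resolves when run as root
LD_LIBRARY_PATH=/home/core/pypy/lib:$LD_LIBRARY_PATH /home/core/pypy/bin/$(basename $0) "$@"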
I have just installed Python 2.7 using MacPorts as:
sudo port install py27-numpy py27-scipy py27-matplotlib py27-ipython +notebook py27-pandas py27-sympy py27-nose
During the process it found some issues, mainly broken files related to py25-hashlib, which I managed to fix. Now everything seems OK. I tested a few programs and they run as expected. Currently, I have two versions of Python: 2.5 (the default, from when I worked at my former institution) and 2.7 (just installed):
which python
/usr/stsci/pyssg/Python-2.5.1/bin/python
which python2.7
/opt/local/bin/python2.7
The next move would be to set the new Python version, 2.7, as the default:
sudo port select --set python python27
sudo port select --set ipython ipython27
My question is: is there a way to go back to 2.5 in case something goes wrong?
I know that, a priori, nothing should go wrong. But I have a few data reduction and analysis routines that work perfectly with the 2.5 version, and I want to make sure I don't mess up before setting the default.
If you want to revert, you can modify your .bash_profile or other login shell initialization to fix $PATH so that it does not add "/Library/Frameworks/Python.framework/Versions/2.5/bin" and/or so that /usr/local/bin does not appear before /usr/bin on $PATH.
If you want to permanently remove the python.org-installed version, paste the following lines, up to and including the chmod, into a POSIX-compatible shell:
tmpfile=/tmp/generate_file_list
cat <<"NOEXPAND" > "${tmpfile}"
#!/bin/sh
version="${1:-"2.5"}"
file -h /usr/local/bin/* | grep \
"symbolic link to ../../../Library/Frameworks/Python.framework/"\
"Versions/${version}" | cut -d : -f 1
echo "/Library/Frameworks/Python.framework/Versions/${version}"
echo "/Applications/Python ${version}"
set -- Applications Documentation Framework ProfileChanges \
SystemFixes UnixTools
for package do
echo "/Library/Receipts/Python${package}-${version}.pkg"
done
NOEXPAND
chmod ug+x ${tmpfile}
...excerpted from a troubleshooting question on the Python website.
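As for reverting the port select commands from the question, a sketch (the available group names depend on what port select --list python reports on your machine):
# show the available Python selections and the active one
port select --list python
# deselect the MacPorts python so the previous interpreter is found on $PATH again
sudo port select --set python none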