So I am trying to create a pipeline on Bitbucket. On my local computer, I navigate to the folder with cd terraform/environments/dev and run terraform init without an issue. However, when I run the test pipeline on Bitbucket, it stops on the second step with
bash: terraform: command not found
How can I fix this? I believe I need to install Terraform on Bitbucket somehow, but I am not sure how to do so. Do I use Python pip commands? If so, how and why?
image: atlassian/default-image:2

pipelines:
  branches:
    test:
      - step:
          name: 'Navigate to Dev'
          script:
            - cd terraform/environments/dev
          condition:
            changesets:
              includePaths:
                - "terraform/modules"
                - "terraform/environments/dev"
      - step:
          name: 'Initialize Terraform'
          script:
            - terraform init
You need the correct image for your build agent. In this situation, the agent basically only needs terraform installed and accessible:
image: hashicorp/terraform
This will fix your issue. You can also, of course, pin the image tag to your specific version of Terraform.
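As a rough sketch of the adjusted pipeline (the 1.6 tag is only an illustrative version; folding the cd and terraform init into a single step is an assumption, since each Bitbucket step runs in a fresh container and the working directory does not carry over between steps):

image: hashicorp/terraform:1.6

pipelines:
  branches:
    test:
      - step:
          name: 'Initialize Terraform'
          condition:
            changesets:
              includePaths:
                - "terraform/modules"
                - "terraform/environments/dev"
          script:
            # Each Bitbucket step runs in its own container, so a cd from a
            # previous step does not persist; change directory and init here.
            - cd terraform/environments/dev
            - terraform init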
Related
I have a GUI program I'm managing, written in Python. For the sake of not having to worry about environments, it's distributed as an executable built with PyInstaller. I can run this build from a function defined in the module as MyModule.build() (because to me it makes more sense to manage that script alongside the project itself).
I want to automate this to some extent, such that when a new release is added on Gitlab, it can be built and attached to the release by a runner. The approach I currently have to this is functional but hacky:
I use the GitLab API to download the source of the tag for the release. I run python -m pip install -r {requirements_path} and python -m pip install {source_path} in the runner's environment. Then I import and run the MyModule.build() function to generate an executable, which is then uploaded and linked to the release with the GitLab API.
Obviously the middle section is wanting. What are the best approaches for similar projects? Can the package and requirements be installed in a separate venv from the one the runner script is running in?
One workflow would be to push a tag to create your release. The following jobs have a rules: configuration so they only run on tag pipelines.
One job will build the executable file. Another job will create the GitLab release using the file created in the first job.
build:
  rules:
    - if: "$CI_COMMIT_TAG" # Only run when tags are pushed
  image: python:3.9-slim
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
  cache: # https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies
    paths:
      - .cache/pip
      - venv/
  script:
    - python -m venv venv
    - source venv/bin/activate
    - python -m pip install -r requirements.txt # package requirements
    - python -m pip install pyinstaller # build requirements
    - pyinstaller --onefile --name myapp mypackage/__main__.py
  artifacts:
    paths:
      - dist
create_release:
  rules:
    - if: "$CI_COMMIT_TAG"
  needs: [build]
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script: # zip/upload your binary wherever it should be downloaded from
    - echo "Uploading release!"
  release: # create GitLab release
    tag_name: $CI_COMMIT_TAG
    name: 'Release of myapp version $CI_COMMIT_TAG'
    description: 'Release created using the release-cli.'
    assets: # link uploaded asset(s) to the release
      links:
        - name: 'release-zip'
          url: 'https://example.com/downloads/myapp/$CI_COMMIT_TAG/myapp.zip'
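To make the "zip/upload your binary" placeholder concrete, one option (a sketch, not part of the original answer; the myapp names are assumptions) is to push the PyInstaller output from the dist artifact to the project's generic package registry in the create_release script:

  script:
    # Upload the built binary from the `dist` artifact to the project's
    # generic package registry (curl may need installing first, depending
    # on what the release-cli image ships with).
    - |
      curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}" \
           --upload-file dist/myapp \
           "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/myapp/${CI_COMMIT_TAG}/myapp"

The assets:links:url can then point at that same packages/generic URL instead of the example.com placeholder.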
I'm working on a project where I'm trying to set up an HTTP server in C#. The responses from the server are tested using the pytest module.
This is what I've done so far:
- Define the API using the Swagger editor
- Generate base code using the Swagger generator
- Write some Python tests which send requests to the server and check whether the responses fulfill certain requirements
I now want to set up CI on GitLab before I start actually writing the functions that correspond to the routes I've defined earlier. I set up a runner on my local machine (it's later going to be on a dedicated server) using Docker.
As I am new to CI, there are a few questions I'm struggling with:
As I need both Python and .NET for testing, should I choose .NET as the base image and then install Python, or Python as the base image and then install .NET? Which would be easier? I tried the latter, but it doesn't seem very elegant...
Do I build before I push to the remote repository and include the /bin folder in my repository to execute those files, or would I rather build during CI and therefore not have to push anything but source code?
I know those questions are a little bit wild, but as I am new to CI and also to Docker, I'm looking for advice on how to follow best practices (if there are any).
The base image for a runner is just the default if you don't specify one in your .gitlab-ci.yml file. You can override the runner's default image by using a "pipeline default" image at the top of your .gitlab-ci.yml file (outside of any jobs), or you can specify the image for each job individually.
Using a "pipeline default" image:
image: python:latest

stages:
  - build

...
In this example, all jobs will use the python:latest image unless a job specifies its own image, like in this example:
stages:
  - build
  - test

Build Job:
  stage: build
  image: python:latest
  script:
    - ...
Here, this job overrides the runner's default.
image: python:latest

stages:
  - build
  - db_setup

Build Job:
  stage: build
  script:
    - # run some build steps

Database Setup Job:
  stage: db_setup
  image: mysql:latest
  script:
    - mysql -h my-host.example.com -u my-user -pmy-password -e "create database my-database;"
In this example, we have a "pipeline default" image that the "Build Job" uses since it doesn't specify its own image, but the "Database Setup Job" uses the mysql:latest image.
Here's an example where the runner's default image is ruby:latest
stages:
  - build
  - test

Build Job:
  stage: build
  script:
    - # run some build steps

Test Job:
  stage: test
  image: golang:latest
  script:
    - # run some tests
In this last example, the "Build Job" uses the runner's base image, ruby:latest, but the "Test Job" uses golang:latest.
For your second question, it's up to you, but the convention is to commit only source code, not dependencies or compiled resources, and to build during CI.
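For the first question, either direction works; a minimal sketch of one common approach (the mcr.microsoft.com/dotnet/sdk:6.0 tag, MyServer project name, and tests/ layout are assumptions for illustration) is to start from the .NET SDK image and add Python inside the job:

test:
  stage: test
  image: mcr.microsoft.com/dotnet/sdk:6.0 # Debian-based, so apt-get is available
  script:
    # Build the C# server from source in CI (no /bin folder committed).
    - dotnet build MyServer.sln -c Release
    # Add Python for the pytest-based API tests.
    - apt-get update && apt-get install -y python3 python3-pip
    - pip3 install -r tests/requirements.txt
    # Start the server in the background, give it a moment, then run the tests.
    - dotnet run --project MyServer --no-build -c Release &
    - sleep 5
    - python3 -m pytest tests/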
My goal is to deploy and run my Python script from GitHub to my virtual machine via Azure Pipelines. My azure-pipelines.yml looks like this:
jobs:
  - deployment: VMDeploy
    displayName: Test_script
    environment:
      name: deploymentenvironment
      resourceType: VirtualMachine
    strategy:
      rolling:
        maxParallel: 2 #for percentages, mention as x%
        preDeploy:
          steps:
            - download: current
            - script: echo initialize, cleanup, backup, install certs
        deploy:
          steps:
            - task: Bash@3
              inputs:
                targetType: 'inline'
                script: python3 $(Agent.BuildDirectory)/test_file.py
        routeTraffic:
          steps:
            - script: echo routing traffic
        postRouteTraffic:
          steps:
            - script: echo health check post-route traffic
        on:
          failure:
            steps:
              - script: echo Restore from backup! This is on failure
          success:
            steps:
              - script: echo Notify! This is on success
This returns an error:
/usr/bin/python3: can't find '__main__' module in '/home/ubuntu/azagent/_work/1/test_file.py'
##[error]Bash exited with code '1'.
If I place test_file.py in /home/ubuntu and replace the deployment script with script: python3 /home/ubuntu/test_file.py, the script runs smoothly.
If I move it out of the work directory with mv /home/ubuntu/azagent/_work/1/test_file.py /home/ubuntu, I find an empty folder named test_file.py, not a .py file.
The reason you cannot get the source is that download: current downloads artifacts produced by the current pipeline run, but you didn't publish any artifacts in the current pipeline.
As deployment jobs don't automatically check out source code, you need to either check out the source in your deployment job:
- checkout: self
or publish the sources as an artifact before downloading them:
- publish: $(Build.SourcesDirectory)
  artifact: Artifact_Deploy
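As a rough sketch of how the pieces fit together (the separate Build job and the $(Pipeline.Workspace)/Artifact_Deploy path are assumptions based on how downloaded pipeline artifacts are laid out, not part of the original answer):

jobs:
  - job: Build
    steps:
      - checkout: self
      - publish: $(Build.SourcesDirectory)
        artifact: Artifact_Deploy

  - deployment: VMDeploy
    dependsOn: Build
    environment:
      name: deploymentenvironment
      resourceType: VirtualMachine
    strategy:
      rolling:
        deploy:
          steps:
            # Deployment jobs download current-pipeline artifacts automatically,
            # but being explicit makes the artifact name and path obvious.
            - download: current
              artifact: Artifact_Deploy
            - task: Bash@3
              inputs:
                targetType: 'inline'
                script: python3 $(Pipeline.Workspace)/Artifact_Deploy/test_file.py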
I am trying to add a Python linting step to an Azure DevOps pipeline using Super-Linter. I followed the instructions here to add the following step to my pipeline:
jobs:
  - job: PythonLint
    displayName: Python Lint
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: |
          docker pull github/super-linter:latest
          docker run -e RUN_LOCAL=true -e VALIDATE_PYTHON_PYLINT=true -v $(System.DefaultWorkingDirectory):/tmp/lint github/super-linter
        displayName: 'Code Scan using GitHub Super-Linter'
The Super-Linter GitHub docs page describes passing environment variables to tell it which linters to run. I pass the variable VALIDATE_PYTHON_PYLINT to tell it to run pylint. When I run the pipeline, I get a load of errors of the form E0401: Unable to import 'pyspark.sql' (import-error).
Pylint's configuration page says the modules should be passed as command line arguments.
My question is, how can I configure Super-Linter to pass these arguments or otherwise tell pylint which modules to import?
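For what it's worth, one way to silence unresolved imports like pyspark is through pylint's own configuration file rather than command-line arguments; a minimal sketch (the .github/linters/.python-lint filename and location follow Super-Linter's documented defaults for a pylint config file, which is an assumption here; disable=import-error itself is standard pylint):

# .github/linters/.python-lint  (pylint configuration, location/filename assumed)
[MESSAGES CONTROL]
# pyspark is not installed inside the linter container, so its imports cannot
# be resolved; silence the import-error (E0401) check instead of installing Spark.
disable=import-error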
I have a Python script that I am trying to run as part of the GitLab Pages deployment of a Jekyll site. My site has blog posts with various tags, and the Python script generates the .md files for the tag pages. The script works perfectly fine when I run it manually in an IDE; however, I want it to be part of the GitLab CI deployment process.
Here is what my .gitlab-ci.yml setup looks like:
run:
  image: python:latest
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - public
  only:
    - master

pages:
  image: ruby:2.3
  stage: deploy
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  only:
    - master
However, it doesn't actually create the files that it's supposed to create. Here is the output from the "run" job:
...
Cloning repository...
Cloning into '/builds/username/projectname'...
Checking out 4c8a47fe as master...
Skipping Git submodules setup
$ python tag_generator.py
Tags generated, count 23
Uploading artifacts...
WARNING: public: no matching files
ERROR: No files to upload
Job succeeded
The script prints "Tags generated, count ___" once it's executed, so it is running; however, the files it's supposed to create aren't being created/uploaded into the right directory. There is a /tag directory in the root project folder, which is where they are supposed to go.
I realize that the issue must have something to do with the public folder; however, when I don't have
artifacts:
  paths:
    - public
it still doesn't create the files in the /tag directory, so it doesn't work whether I have the public artifact path or not, and I don't know what the problem is.
I FIGURED IT OUT!
the "build" for the project isn't made in the repo, gitlab clones the repo into another place, so I had to change the artifact path for the python job so that it's in the cloned "build" location, like so:
run:
  image: python:latest
  stage: test
  before_script:
    - python -V # Print out python version for debugging
    - pip install virtualenv
  script:
    - python tag_generator.py
  artifacts:
    paths:
      - /builds/username/projectname/tag
  only:
    - master
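As a side note (not part of the original self-answer, but consistent with GitLab's artifact rules): artifacts:paths are resolved relative to $CI_PROJECT_DIR, which is exactly that /builds/username/projectname clone directory, so a relative path works too and avoids hard-coding the namespace and project name:

run:
  image: python:latest
  script:
    - python tag_generator.py
  artifacts:
    paths:
      # Relative to $CI_PROJECT_DIR (/builds/<namespace>/<project>), so this
      # picks up the generated tag pages without hard-coding the clone path.
      - tag/
  only:
    - master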