Heroku Deployment, Yuglify No Such File with Django pipeline - python

Trying to run collectstatic on deployment, but running into the following error:
pipeline.exceptions.CompressorError: /usr/bin/env: yuglify: No such file or directory
When I run collectstatic manually, everything works as expected:
Post-processed 'stylesheets/omnibase-v1.css' as 'stylesheets/omnibase-v1.css'
Post-processed 'js/omnijs-v1.js' as 'js/omnijs-v1.js'
I've installed Yuglify globally. If I run 'heroku run yuglify', the interface pops up and runs as expected. I'm only running into an issue with deployment. I'm using the multibuildpack, with NodeJS and Python. Any help?
My package, just in case:
{
  "author": "One Who Sighs",
  "name": "sadasd",
  "description": "sadasd Dependencies",
  "version": "0.0.0",
  "homepage": "http://sxaxsaca.herokuapp.com/",
  "repository": {
    "url": "https://github.com/heroku/heroku-buildpack-nodejs"
  },
  "dependencies": {
    "yuglify": "~0.1.4"
  },
  "engines": {
    "node": "0.10.x"
  }
}
I should probably mention that yuglify is not in my requirements.txt, just in my package.json.

I ran into the same problem and ended up using a custom buildpack such as this one, and writing bash scripts to install node and yuglify:
https://github.com/heroku/heroku-buildpack-python
After setting the buildpack, I created a few bash scripts to install node and yuglify. The buildpack has hooks to call these post-compile scripts. Here's a good example of how to do this, which I followed:
https://github.com/nigma/heroku-django-cookbook
These scripts are placed under bin in your root folder.
In the post_compile script, I added a call to a script that installs yuglify.
post_compile script
if [ -f bin/install_nodejs ]; then
    echo "-----> Running install_nodejs"
    chmod +x bin/install_nodejs
    bin/install_nodejs

    if [ -f bin/install_yuglify ]; then
        echo "-----> Running install_yuglify"
        chmod +x bin/install_yuglify
        bin/install_yuglify
    fi
fi
install_yuglify script
#!/usr/bin/env bash
set -eo pipefail
npm install -g yuglify
If that doesn't work, you can have a look at this post:
Yuglify compressor can't find binary from package installed through npm
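If you would rather point django-pipeline at a project-local binary instead of installing yuglify globally, the linked post suggests overriding the compressor's binary path. A minimal sketch in settings.py, assuming yuglify was installed into node_modules via the package.json above (newer django-pipeline versions use a PIPELINE dict with a 'YUGLIFY_BINARY' key instead of the flat setting shown here):
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Path assumes yuglify was installed into the project's node_modules; adjust as needed.
PIPELINE_YUGLIFY_BINARY = os.path.join(BASE_DIR, 'node_modules', '.bin', 'yuglify')
This avoids relying on /usr/bin/env finding yuglify on the dyno's PATH at all.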

Related

Tailwindcss Not Updating In Browser after Class Change

I'm running Tailwind with React, but in order to see changes that I make to classNames I have to restart the server and run npm run start again to see the updates.
Is there something I need to change in my scripts so that the browser updates without having to restart the server?
package.json
"scripts": {
  "start": "npm run watch:css && react-scripts start",
  "build": "npm run watch:css && react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject",
  "watch:css": "postcss src/assets/tailwind.css -o src/assets/main.css"
}
All feedback is appreciated! Thanks
&& runs the commands one after another, so try the npm-run-all library to run the npm scripts in parallel.
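As a rough sketch of what that could look like (the start:react and build:css script names are made up here, and it assumes postcss-cli's --watch flag so the CSS keeps rebuilding; run-p is the parallel shorthand that npm-run-all installs):
"scripts": {
  "start": "run-p watch:css start:react",
  "start:react": "react-scripts start",
  "build": "npm run build:css && react-scripts build",
  "build:css": "postcss src/assets/tailwind.css -o src/assets/main.css",
  "watch:css": "postcss src/assets/tailwind.css -o src/assets/main.css --watch",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
}
With this, npm start runs the CSS watcher and the dev server side by side, so class changes get picked up without restarting.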

Better way to solve the "No module named X" error in VS code?

I am unfamiliar with VS Code.
While trying to run a task (Terminal --> Run Task... --> Dev All --> Continue without scanning the task output) associated with a given code workspace (module.code-workspace), I ran into the following error, even though uvicorn was installed on my computer:
/usr/bin/python: No module named uvicorn
I fixed this issue by setting the PATH directly into the code workspace. For instance, I went from:
"command": "cd ../myapi/ python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
to:
"command": "cd ../myapi/ && export PATH='/home/sheldon/anaconda3/bin:$PATH' && python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
While this worked, it seems like a clunky fix.
Is it a good practice to specify the path in the code workspace?
My understanding is that the launch.json file is meant to help debug the code. I tried creating such a file instead of fiddling with the workspace but still came across the same error.
In any case, any input on how to set the path in VS Code will be appreciated!
You can modify the tasks like this:
"tasks": [
{
"label": "Project Name",
"type": "shell",
"command": "appc ti project id -o text --no-banner",
"options": {
"env": {
"PATH": "<mypath>:${env:PATH}"
}
}
}
]
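Another option that avoids touching PATH at all is to call the interpreter by its full path inside the task command (reusing the anaconda path from the question; adjust to your own environment):
"command": "cd ../myapi/ && /home/sheldon/anaconda3/bin/python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
Note that the Python extension's python.defaultInterpreterPath workspace setting only controls which interpreter the extension and debugger use; it does not change what a plain shell task finds on PATH, which is likely why creating a launch.json did not help here.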

How to pass AWS credential when building Docker image in Jenkins?

Hi, I am working in Jenkins to build my AWS CDK project. I have created my Dockerfile as below.
FROM python:3.7.4-alpine3.10
ENV CDK_VERSION='1.14.0'
RUN mkdir /cdk
COPY ./requirements.txt /cdk/
COPY ./entrypoint.sh /usr/local/bin/
COPY ./aws /cdk/
WORKDIR /cdk
RUN apk -uv add --no-cache groff jq less
RUN apk add --update nodejs npm
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN npm install -g aws-cdk
RUN pip3 install -r requirements.txt
RUN ls -la
ENTRYPOINT ["entrypoint.sh"]
RUN cdk synth
RUN cdk deploy
In Jenkins I am building this Docker image as below.
stages {
  stage('Dev Code Deploy') {
    when {
      expression {
        return BRANCH_NAME == 'Develop'
      }
    }
    agent {
      dockerfile {
        additionalBuildArgs "--build-arg 'http_proxy=${env.http_proxy}' --build-arg 'https_proxy=${env.https_proxy}'"
        filename 'Dockerfile'
        args '-u root:root'
      }
    }
In the above code I am not supplying AWS credentials, so when cdk synth is executed I get the error: Need to perform AWS calls for account 1234567, but no credentials found. Tried: default credentials.
In Jenkins I have my AWS credentials stored, and I can access them like this:
steps {
  withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}"]]) {
    sh 'ls -la'
    sh "bash ./scripts/build.sh"
  }
}
But how can I pass this credentialsId when building the Docker image? Can someone help me figure it out? Any help would be appreciated. Thanks.
I was able to pass the credentials as below.
steps {
  script {
    node {
      checkout scm
      withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
        abc = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
        abc.inside {
          sh 'ls -la'
          sh "bash ./scripts/build.sh"
        }
      }
    }
  }
I have added the below code in build.sh:
cdk synth
cdk deploy
You should install the "Amazon ECR" plugin and restart Jenkins.
Configure the plugin with your credentials, and specify them in the pipeline.
You can find all the documentation here: https://wiki.jenkins.io/display/JENKINS/Amazon+ECR
If you're using a Jenkins pipeline, maybe you can try the withAWS step.
This should provide a way to access a Jenkins AWS credential and then pass it to the Docker environment while running the container.
ref:
https://github.com/jenkinsci/pipeline-aws-plugin
https://jenkins.io/doc/book/pipeline/docker/
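A rough sketch of how that might be wired up, assuming the pipeline-aws-plugin is installed and exposes the bound credentials as AWS_* environment variables inside the block; it reuses the credential id and build.sh from the question, and the region value is just a placeholder:
steps {
  script {
    withAWS(credentials: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}", region: 'us-east-1') {
      // withAWS is expected to export AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY for the
      // nested steps, so the cdk commands run by build.sh can pick them up
      def image = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
      image.inside {
        sh 'bash ./scripts/build.sh'
      }
    }
  }
}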

how to run nose2 tests in Jenkins build python

I am running a Jenkins pipeline where I have a Flask app to run and then test (using nose2).
The current Jenkinsfile I am using is:
pipeline {
  agent { docker { image 'python:3.8.0' } }
  stages {
    stage('build') {
      steps {
        withEnv(["HOME=${env.WORKSPACE}"]) {
          sh 'python --version'
          sh 'python -m pip install --upgrade pip'
          sh 'pip install --user -r requirements.txt'
          sh script:'''
            #!/bin/bash
            echo "This is start $(pwd)"
            cd ./flask/section5
            echo "This is $(pwd)"
            python create_table.py
            python myapp.py &
            nose2
          '''
        }
      }
    }
  }
}
Jenkins creates the Docker image and downloads the Python requirements. I get an error when running nose2 from the shell script, with this output:
+ pwd
+ echo This is start .jenkins/workspace/myflaskapptest_master
This is start .jenkins/workspace/myflaskapptest_master
+ cd ./flask/section5
+ pwd
+ echo This is .jenkins/workspace/myflaskapptest_master/flask/section5
This is .jenkins/workspace/myflaskapptest_master/flask/section5
+ python create_table.py
DONE
+ nose2+
python myapp.py
.jenkins/workspace/myflaskapptest_master#tmp/durable-1518e85e/script.sh: 8: .jenkins/workspace/myflaskapptest_master#tmp/durable-1518e85e/script.sh: nose2: not found
what am I missing?
I fixed it (i.e. the Jenkins build works) by changing the Jenkinsfile in this way:
sh script:'''
  #!/bin/bash
  echo "This is start $(pwd)"
  cd ./flask/section5
  echo "This is $(pwd)"
  python create_table.py
  python myapp.py &
  cd ./tests
  python test.py
'''
I would have preferred to run nose2 like I do on my machine; if anyone knows how to fix this, that would be great.
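One guess, given that the Jenkinsfile installs packages with pip install --user: the nose2 console script ends up in ~/.local/bin, which is not on PATH inside the container, so the shell cannot find it. Running nose2 through the interpreter sidesteps the PATH issue entirely (a sketch, untested against this exact setup):
sh script:'''
  #!/bin/bash
  cd ./flask/section5
  python create_table.py
  python myapp.py &
  # invoke nose2 as a module so the ~/.local/bin console script is not needed on PATH
  python -m nose2
'''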

Jenkinsfile and Python virtualenv

I am trying to set up a project that uses the shiny new Jenkins pipelines, more specifically a multibranch project.
I have a Jenkinsfile created in a test branch as below:
node {
  stage 'Preparing VirtualEnv'
  if (!fileExists('.env')) {
    echo 'Creating virtualenv ...'
    sh 'virtualenv --no-site-packages .env'
  }
  sh '. .env/bin/activate'
  sh 'ls -all'
  if (fileExists('requirements/preinstall.txt')) {
    sh 'pip install -r requirements/preinstall.txt'
  }
  sh 'pip install -r requirements/test.txt'
  stage 'Unittests'
  sh './manage.py test --noinput'
}
It's worth noting that preinstall.txt will update pip.
I am getting an error as below:
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip'
It looks like it's trying to update pip in the global env instead of inside the virtualenv, and it seems each sh step runs in its own context. How do I make them execute within the same context?
What you are trying to do will not work. Every time you call the sh command, Jenkins will create a new shell.
This means that if you use .env/bin/activate in an sh step, it will only be sourced in that shell session. The result is that in a new sh command you have to source the file again (if you take a closer look at the console output, you will see that Jenkins actually creates temporary shell files each time you run the command).
So you should either source the .env/bin/activate file at the beginning of each shell command (you can use triple quotes for multiline strings), like so:
if (fileExists('requirements/preinstall.txt')) {
  sh """
    . .env/bin/activate
    pip install -r requirements/preinstall.txt
  """
}
...
  sh """
    . .env/bin/activate
    pip install -r requirements/test.txt
  """
}
stage("Unittests") {
  sh """
    . .env/bin/activate
    ./manage.py test --noinput
  """
}
or run it all in one shell
sh """
. .env/bin/activate
if [[ -f requirements/preinstall.txt ]]; then
pip install -r requirements/preinstall.txt
fi
pip install -r requirements/test.txt
./manage.py test --noinput
"""
Like Rik posted, virtualenvs don't work well within the Jenkins Pipeline Environment, since a new shell is created for each command.
I created a plugin that makes this process a little less painful, which can be found here: https://wiki.jenkins.io/display/JENKINS/Pyenv+Pipeline+Plugin. It essentially just wraps each call in a way that activates the virtualenv prior to running the command. This in itself is tricky, as some methods of running multiple commands inline are split into two separate commands by Jenkins, causing the activated virtualenv to no longer apply.
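For reference, a minimal sketch of what using that plugin looks like, assuming it is installed on the Jenkins controller (the withPythonEnv step wraps the block so the nested sh calls run inside the generated virtualenv):
node {
  stage('Unittests') {
    withPythonEnv('python3') {
      // each sh below runs with the plugin-managed virtualenv active
      sh 'pip install -r requirements/test.txt'
      sh './manage.py test --noinput'
    }
  }
}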
I'm new to Jenkins files. Here's how I've been working around the virtual environment issue. (I'm running Python3, Jenkins 2.73.1)
Caveat: Just to be clear, I'm not saying this is a good way to solve the problem, nor have I tested this enough to stand behind this approach, but here is what is working for me today:
I've been playing around with bypassing the venv 'activate' by calling the virtual environment's python interpreter directly. So instead of:
source ~/venv/bin/activate
one can use:
~/venv/bin/python3 my_script.py
I pass the path to my virtual environment's python interpreter via the shell's rc file (in my case, ~/.bashrc). In theory, every shell Jenkins calls should read this resource file. In practice, I must restart Jenkins after making changes to the shell resource file.
HOME_DIR=~
export VENV_PATH="$HOME_DIR/venvs/my_venv"
export PYTHON_INTERPRETER="${VENV_PATH}/bin/python3"
My Jenkinsfile looks similar to this:
pipeline {
  agent {
    label 'my_slave'
  }
  stages {
    stage('Stage1') {
      steps {
        // sh 'echo $PYTHON_INTERPRETER'
        // sh 'env | sort'
        sh "$PYTHON_INTERPRETER my_script.py"
      }
    }
  }
}
So when the pipeline is run, the sh steps have the $PYTHON_INTERPRETER environment variable set.
Note that one shortcoming of this approach is that the Jenkinsfile no longer contains all the information necessary to run the script correctly. Hopefully this will get you off the ground.
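Another option along the same lines, and one that keeps everything in the Jenkinsfile, is to prepend the virtualenv's bin directory to PATH with withEnv; Jenkins' PATH+SOMETHING syntax prepends the value to PATH. A sketch, assuming the virtualenv lives in .env inside the workspace as in the question:
node {
  stage('Unittests') {
    // PATH+VENV prepends the virtualenv's bin directory, so python/pip resolve to the venv
    withEnv(["PATH+VENV=${env.WORKSPACE}/.env/bin"]) {
      sh 'pip install -r requirements/test.txt'
      sh './manage.py test --noinput'
    }
  }
}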
