Tailwind CSS Not Updating in Browser after Class Change - python

I'm running Tailwind with React, but in order to see the changes I make to classNames, I have to restart the server and run npm run start again.
Is there something I need to change in my scripts so that the browser updates without my having to restart the server?
package.json
"scripts": {
  "start": "npm run watch:css && react-scripts start",
  "build": "npm run watch:css && react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject",
  "watch:css": "postcss src/assets/tailwind.css -o src/assets/main.css"
}
All feedback is appreciated! Thanks

&& runs commands one after another, so the watch:css build happens only once, at startup, and later changes are never rebuilt. Try the npm-run-all library to run the npm scripts in parallel instead.
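For example, a minimal sketch of the scripts section (assuming npm-run-all is installed as a dev dependency; the start:react and build:css names are illustrative, and --watch is the postcss-cli flag that keeps rebuilding on changes):

"scripts": {
  "start": "npm-run-all --parallel watch:css start:react",
  "start:react": "react-scripts start",
  "build": "npm run build:css && react-scripts build",
  "build:css": "postcss src/assets/tailwind.css -o src/assets/main.css",
  "watch:css": "postcss src/assets/tailwind.css -o src/assets/main.css --watch"
}

The dev server and the CSS watcher now run side by side, so edited classes are rebuilt without a restart; build keeps a one-shot CSS step so it still terminates.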

Better way to solve the "No module named X" error in VS code?

I am unfamiliar with VS Code.
While trying to run a task (Terminal --> Run Task... --> Dev All --> Continue without scanning the task output) associated with a given code workspace (module.code-workspace), I ran into the following error, even though uvicorn was installed onto my computer:
/usr/bin/python: No module named uvicorn
I fixed this issue by setting the PATH directly in the code workspace. For instance, I went from:
"command": "cd ../myapi/ python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
to:
"command": "cd ../myapi/ && export PATH='/home/sheldon/anaconda3/bin:$PATH' && python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx"
While this worked, it seems like a clunky fix.
Is it a good practice to specify the path in the code workspace?
My understanding is that the launch.json file is meant to help debug the code. I tried creating such a file instead of fiddling with the workspace but still came across the same error.
In any case, any input on how to set the path in VS code will be appreciated!
You can modify the tasks like this:
"tasks": [
{
"label": "Project Name",
"type": "shell",
"command": "appc ti project id -o text --no-banner",
"options": {
"env": {
"PATH": "<mypath>:${env:PATH}"
}
}
}
]
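Applied to the uvicorn task from the question, that might look like the following sketch (the anaconda path and the host/port placeholders are the ones from the question; the "Dev All" label matches the task mentioned there):

{
  "label": "Dev All",
  "type": "shell",
  "command": "cd ../myapi && python -m uvicorn myapi.main:app --reload --host 127.x.x.x --port xxxx",
  "options": {
    "env": {
      "PATH": "/home/sheldon/anaconda3/bin:${env:PATH}"
    }
  }
}

This keeps the PATH change in the task definition instead of the command string, which is tidier than exporting it inline.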

Execute npm Script in a different Folder with python

Can someone tell me how I can run "npm run start" in an arbitrary folder from a Python script? But please with the "os" module and not "subprocess".
EDIT:
I need a Python script that goes to a specific folder and then executes
"npm run start". How can I do that?
You can run the command in a selected folder:
os.chdir("path/to/folder")    # changes the working directory of the Python process
os.system("npm run start")    # npm then runs in that folder
or
os.system("cd path/to/folder ; npm run start")   # ';' runs npm even if cd fails
os.system("cd path/to/folder && npm run start")  # '&&' runs npm only if cd succeeds
or
subprocess.run("npm run start", shell=True, cwd="path/to/folder")
subprocess.run(["npm", "run", "start"], cwd="path/to/folder")
and similarly with the other methods in subprocess. Note that os.chdir changes the working directory for the rest of the Python process, while the cwd argument only affects the child process.
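A minimal complete script along those lines (a sketch; "path/to/folder" is a placeholder for the real project directory):

#!/usr/bin/env python3
import os

os.chdir("path/to/folder")           # move the whole process into the project folder
status = os.system("npm run start")  # blocks until npm exits
# On POSIX, os.system returns a wait status; the high byte holds the exit code.
print("npm exited with code", status >> 8)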

Is there any way for Mac terminal to run npm start and flask run simultaneously?

I'm trying to run npm start and flask run at the same time in the terminal on Mac. I tried to do it with a shell script (.sh) like this:
./chinmap_flask.sh
cd INS/FYP/chinmap
export PORT=3001
npm start
==================================
chinmap_flask.sh
#!/bin/sh
cd ~
cd Desktop
cd INS/FYP/chinmap/src/Backend/frontendData
export FLASK_APP=../app
export FLASK_ENV=development
flask run
==================================
And this did not work either:
"scripts": {
"start": "react-scripts start && cd ~ && cd Desktop && ./chinmap_flask.sh"
},
The npm server started successfully; however, the terminal stops running the other commands after npm start. Are there any possible ways to run both of them in one shell script or a single terminal command? Thank you!
I don't think it is possible to run two commands at the same time in the foreground. You can try this, but I'm not sure it will work (note that && runs the commands one after the other, not simultaneously):
npm start && flask run
You could try adding a new tab in the terminal with Cmd + T. Run flask run in the first tab and npm start in the second. This won't run both commands in a single terminal command, but the Node server and the Flask server will both be running.
If you have trouble doing this, check out this post: https://superuser.com/questions/286157/how-can-i-open-a-new-tab-in-terminal-in-mac
Just add & at the end of any command to run it in the background and get your Terminal back for doing other stuff:
# Start flask in background
./chinmap_flask.sh &
# Set up and start npm in background
cd INS/FYP/chinmap
export PORT=3001
npm start &
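Putting the whole setup into one script (a sketch reusing the paths and environment variables from the question; the final wait keeps the script alive until both servers stop):

#!/bin/sh
# Start Flask in the background
cd ~/Desktop/INS/FYP/chinmap/src/Backend/frontendData
export FLASK_APP=../app
export FLASK_ENV=development
flask run &

# Start the React dev server in the background too
cd ~/Desktop/INS/FYP/chinmap
export PORT=3001
npm start &

# Wait for both background jobs so Ctrl+C stops everything
wait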

How to pass AWS credential when building Docker image in Jenkins?

Hi, I am working in Jenkins to build my AWS CDK project. I have created my Dockerfile as below.
FROM python:3.7.4-alpine3.10
ENV CDK_VERSION='1.14.0'
RUN mkdir /cdk
COPY ./requirements.txt /cdk/
COPY ./entrypoint.sh /usr/local/bin/
COPY ./aws /cdk/
WORKDIR /cdk
RUN apk -uv add --no-cache groff jq less
RUN apk add --update nodejs npm
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN npm install -g aws-cdk
RUN pip3 install -r requirements.txt
RUN ls -la
ENTRYPOINT ["entrypoint.sh"]
RUN cdk synth
RUN cdk deploy
In jenkins I am building this Docker image as below.
stages {
  stage('Dev Code Deploy') {
    when {
      expression {
        return BRANCH_NAME == 'Develop'
      }
    }
    agent {
      dockerfile {
        additionalBuildArgs "--build-arg 'http_proxy=${env.http_proxy}' --build-arg 'https_proxy=${env.https_proxy}'"
        filename 'Dockerfile'
        args '-u root:root'
      }
    }
  }
}
In the above code I am not supplying AWS credentials, so when cdk synth is executed I get the error: Need to perform AWS calls for account 1234567, but no credentials found. Tried: default credentials.
In Jenkins I have my AWS credentials and I can access them like this:
steps {
  withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}"]]) {
    sh 'ls -la'
    sh "bash ./scripts/build.sh"
  }
}
But how can I pass this credentialsId when building the Docker image? Can someone help me figure it out? Any help would be appreciated. Thanks.
I am able to pass credentials as below. withCredentials exposes the AWS keys as environment variables, and the sh steps run through abc.inside inherit them inside the container:
steps {
  script {
    node {
      checkout scm
      withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
        // Build the image, then run the build script inside it with the
        // AWS credentials available as environment variables
        abc = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
        abc.inside {
          sh 'ls -la'
          sh "bash ./scripts/build.sh"
        }
      }
    }
  }
}
I have added the below code in build.sh:
cdk synth
cdk deploy
You should install the "Amazon ECR" plugin and restart Jenkins. Configure the plugin with your credentials and specify them in the pipeline.
All the documentation can be found here: https://wiki.jenkins.io/display/JENKINS/Amazon+ECR
If you're using a Jenkins pipeline, maybe you can try the withAWS step. It provides a way to access a Jenkins AWS credential and pass it into the Docker environment while running the container.
References:
https://github.com/jenkinsci/pipeline-aws-plugin
https://jenkins.io/doc/book/pipeline/docker/
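A minimal sketch of that approach (the credentials id "my-aws-creds" and the region are placeholders, not values from the question; withAWS comes from the pipeline-aws-plugin linked above):

steps {
  // Exposes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY inside the block
  withAWS(credentials: 'my-aws-creds', region: 'us-east-1') {
    sh 'bash ./scripts/build.sh'  // cdk synth / cdk deploy can now find credentials
  }
}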

Heroku Deployment, Yuglify No Such File with Django pipeline

Trying to run collectstatic on deployment, but running into the following error:
pipeline.exceptions.CompressorError: /usr/bin/env: yuglify: No such file or directory
When I run collectstatic manually, everything works as expected:
Post-processed 'stylesheets/omnibase-v1.css' as 'stylesheets/omnibase-v1.css'
Post-processed 'js/omnijs-v1.js' as 'js/omnijs-v1.js'
I've installed Yuglify globally. If I run 'heroku run yuglify', the interface pops up and runs as expected. I'm only running into an issue with deployment. I'm using the multibuildpack, with NodeJS and Python. Any help?
My package, just in case:
{
  "author": "One Who Sighs",
  "name": "sadasd",
  "description": "sadasd Dependencies",
  "version": "0.0.0",
  "homepage": "http://sxaxsaca.herokuapp.com/",
  "repository": {
    "url": "https://github.com/heroku/heroku-buildpack-nodejs"
  },
  "dependencies": {
    "yuglify": "~0.1.4"
  },
  "engines": {
    "node": "0.10.x"
  }
}
I should maybe mention that Yuglify is not in my requirements.txt, just in my package.json.
I ran into the same problem and ended up using a custom buildpack such as this one, plus bash scripts to install node and yuglify:
https://github.com/heroku/heroku-buildpack-python
After setting the buildpack, I created a few bash scripts to install node and yuglify; the buildpack has hooks that call these post-compile scripts. Here's a good example of how to do this, which I followed:
https://github.com/nigma/heroku-django-cookbook
These scripts are placed under bin in your root folder.
In the post_compile script, I added a call to install yuglify.
post_compile script
if [ -f bin/install_nodejs ]; then
    echo "-----> Running install_nodejs"
    chmod +x bin/install_nodejs
    bin/install_nodejs
    if [ -f bin/install_yuglify ]; then
        echo "-----> Running install_yuglify"
        chmod +x bin/install_yuglify
        bin/install_yuglify
    fi
fi
install_yuglify script
#!/usr/bin/env bash
set -eo pipefail
npm install -g yuglify
If that doesn't work, you can have a look at this post:
Yuglify compressor can't find binary from package installed through npm
