I'm using Django-Pipeline to minify my JavaScript. When I push my project to Heroku and collectstatic runs, it gives me the error
pipeline.exceptions.CompressorError: /usr/bin/env: yuglify: No such file or directory
But when I run collectstatic manually, yuglify runs without issue. I'm unable to figure out what the problem is. What code should I even show you guys in this situation?
My solution was to add a yuglify install step to the scripts from this codebase: https://github.com/nigma/heroku-django-cookbook
Here's my code:
bin/install_yuglify
#!/usr/bin/env bash
set -eo pipefail
npm install -g yuglify
Then add the following to bin/post_compile (around line 23...)
if [ -f bin/install_yuglify ]; then
    echo "-----> Running install_yuglify"
    chmod +x bin/install_yuglify
    bin/install_yuglify
fi
And you should be good to go :)
You can see my code here, for reference: https://github.com/GK-12/rpi_csdt_community/tree/master/bin
Good luck!
I managed to solve the problem in a less painful way. Heroku provides you with buildpacks, which are essentially the environment your application is built on. By default you have the Python buildpack; that is why the system is able to run commands like python manage.py .... My solution is the following:
1) Add the Node.js buildpack as the first buildpack:
heroku buildpacks:add --index 1 heroku/nodejs
2) Add a package.json at the same path as requirements.txt
3) In the package.json, add yuglify as a dependency (a minimal sketch follows below)
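For reference, a minimal package.json might look like this; the name, version and the loose yuglify version range are placeholders of mine, not taken from the answer:
{
  "name": "myproject",
  "version": "1.0.0",
  "dependencies": {
    "yuglify": "*"
  }
}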
Another solution is to change your compressors and use Python-written ones.
PIPELINE['CSS_COMPRESSOR'] = 'pipeline.compressors.cssmin.CSSMinCompressor'
PIPELINE['JS_COMPRESSOR'] = 'pipeline.compressors.jsmin.JSMinCompressor'
pip install cssmin jsmin
I don't have a clear opinion on which is better, jsmin or yuglify.
I started a new project using Django. This project is built using Docker with a few containers and Poetry to install all dependencies.
When I first run docker-compose up -d, everything is installed correctly. Actually, I suppose this problem is not related to Docker.
After I run that command, I run docker-compose exec python make -f automation/local/Makefile, which has this content:
Makefile
.PHONY: all
all: install-deps run-migrations build-static-files create-superuser

.PHONY: build-static-files
build-static-files:
	python manage.py collectstatic --noinput

.PHONY: create-superuser
create-superuser:
	python manage.py createsuperuser --noinput --user=${DJANGO_SUPERUSER_USERNAME} --email=${DJANGO_SUPERUSER_USERNAME}@zitec.com

.PHONY: install-deps
install-deps: vendor

vendor: pyproject.toml $(wildcard poetry.lock)
	poetry install --no-interaction --no-root

.PHONY: run-migrations
run-migrations:
	python manage.py migrate --noinput
pyproject.toml
[tool.poetry]
name = "some-random-application-name"
version = "0.1.0"
description = ""
authors = ["xxx <xxx#xxx.com>"]
[tool.poetry.dependencies]
python = ">=3.6"
Django = "3.0.8"
docusign-esign = "^3.4.0"
[tool.poetry.dev-dependencies]
pytest = "^3.4"
django-debug-toolbar = "^2.2"
The debug toolbar is installed by adding the usual entries to settings.py (MIDDLEWARE / INSTALLED_APPS), plus DEBUG_TOOLBAR_CONFIG with a SHOW_TOOLBAR_CALLBACK value, as sketched below.
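For context, that wiring usually looks roughly like this in settings.py; this is a sketch of the standard django-debug-toolbar setup, not the asker's exact file:
# settings.py (sketch)
INSTALLED_APPS = [
    # ... the project's other apps ...
    "debug_toolbar",
]
MIDDLEWARE = [
    # ... the project's other middleware ...
    "debug_toolbar.middleware.DebugToolbarMiddleware",
]
DEBUG_TOOLBAR_CONFIG = {
    # show the toolbar unconditionally instead of relying on INTERNAL_IPS
    "SHOW_TOOLBAR_CALLBACK": lambda request: True,
}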
To be clear, EVERYTHING works after a fresh docker-compose up -d. The problem occurs after I stop the containers and start them again using the following commands:
docker-compose down
docker-compose up -d
When I try to access the project, it says Module debug_toolbar does not exist!
I have read all the related questions on this site, but nothing worked for me.
Has anyone encountered this problem before?
That sounds like normal behavior. A container has a temporary filesystem, and when the container is deleted any changes made in that filesystem are permanently lost. Deleting and recreating containers is extremely routine (even just changing environment: or ports: settings in the docker-compose.yml file would cause that to happen).
You should almost never install software in a running container. docker exec is an extremely useful debugging tool, but it shouldn't be the primary way you interact with your container. In both cases you're setting yourself up to lose work if you ever need to change a Docker-level setting or update the base image.
For this example, you can split the contents of that Makefile into two parts: the install-deps target (which installs Python packages and has no external dependencies) and the rest (which depends on a database being up). You need to run the installation part at image-build time, but the Dockerfile can't access a database, so the remainder needs to happen at container-startup time.
So in your image's Dockerfile, RUN the installation part:
RUN make install-deps
You will also need an entrypoint script that does the rest of the first-time setup, then runs the main container command. This can look like:
#!/bin/sh
make run-migrations build-static-files create-superuser
exec "$#"
Then run this in your Dockerfile:
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000
(I've recently seen a lot of Dockerfiles that have just ENTRYPOINT ["python3"]. Splitting ENTRYPOINT and CMD this way isn't especially useful; just move the python3 interpreter command into CMD.)
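Putting those pieces together, the relevant part of such a Dockerfile might look roughly like this; the base image, paths, and the POETRY_VIRTUALENVS_CREATE setting are assumptions of mine, not details from the question:
FROM python:3.8-slim
WORKDIR /app

# Install into the system site-packages instead of a Poetry-managed virtualenv
ENV POETRY_VIRTUALENVS_CREATE=false

# Dependency installation happens at image-build time and needs no database;
# this is the equivalent of the Makefile's install-deps target
COPY pyproject.toml poetry.lock ./
RUN pip install poetry && poetry install --no-interaction --no-root

# Copy the rest of the application code, including entrypoint.sh
COPY . .
RUN chmod +x entrypoint.sh

# Migrations, static files and the superuser are handled at container start-up
ENTRYPOINT ["./entrypoint.sh"]
CMD python3 manage.py runserver 0.0.0.0:8000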
I want to specify a GitLab job that creates a sphinx html documentation.
I am using a Python 3 alpine image (cannot specify which exactly).
The build stage within my .gitlab-ci.yml looks like this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - sphinx-build -b html docs/ public/
  only:
    - master
However, the pipeline fails with sphinx-build: command not found (same error for make html).
According to this tutorial, my .gitlab-ci.yml should be more or less correct.
What am I doing wrong? Is this issue related to the alpine image I am using?
As @Yasen correctly noted, the path to sphinx-build was not contained in $PATH. However, prefixing sphinx-build with command did not solve the problem for me.
Anyway, I found the solution in the runner logs: the output of pip install -U sphinx produced the following warning:
WARNING: The scripts sphinx-apidoc, sphinx-autogen, sphinx-build and sphinx-quickstart are installed in 'some/path' which is not on PATH.
so I added the directory to PATH in the script step of the .gitlab-ci.yml:
script:
  - pip install -U sphinx
  - export PATH="$PATH:some/path"
  - sphinx-build -b html docs/ public/
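An alternative that sidesteps $PATH entirely (my own suggestion, not part of the answer above) is to invoke Sphinx as a module, since pip installs the sphinx package even when its console scripts are not on PATH:
script:
  - pip install -U sphinx
  - python -m sphinx -b html docs/ public/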
Did the command pip install -U sphinx succeed? (You should be able to tell that from the CI job log.)
If so, you may need to specify the full path to sphinx-build, as Yasen said.
If it did not succeed, you should troubleshoot the installation of Sphinx.
Most likely the reason is that $PATH doesn't contain the path to sphinx-build.
TL;DR: try using command.
Try this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - command sphinx-build -b html docs/ public/
  only:
    - master
Explanation
GitLab runners run in a different environment
Since GitLab CI uses runners, the runner's shell profile may differ from the one you use locally.
So your runner may be configured without the directory that contains sphinx-build declared in its $PATH.
Zsh/Bash startup files loading order (.bashrc, .zshrc etc.)
See this explanation:
The issue is that Bash sources from a different file based on what kind of shell it thinks it is in. For an “interactive non-login shell”, it reads .bashrc, but for an “interactive login shell” it reads from the first of .bash_profile, .bash_login and .profile (only). There is no sane reason why this should be so; it’s just historical.
What does command mean?
Since we don't know the path where sphinx-build is installed, you can use commands like which, type, etc. to find it.
As per this great answer (shell - Why not use "which"? What to use then? - Unix & Linux Stack Exchange), the author recommends using command <name> or $(command -v <name>).
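If you would rather extend $PATH than hard-code it, the scripts directory can be asked from Python itself; this is my own sketch, not part of the answer above, and it assumes pip installed Sphinx into the default (non --user) scheme:
script:
  - pip install -U sphinx
  - export PATH="$PATH:$(python -c 'import sysconfig; print(sysconfig.get_path("scripts"))')"
  - sphinx-build -b html docs/ public/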
I am looking for a way to create a self-contained archive of all dependencies required to satisfy a Pipfile.lock. One way to achieve this would be to point PIPENV_CACHE_DIR at an empty temporary directory, run pipenv install, ship the contents of that directory, and use it on the offline machine.
E.g., this should work:
tmpdir=$(mktemp -d)
if [ -n "$offline" ]; then
tar -xf pipenv_cache.tar -C "$tmpdir"
fi
pipenv --rm
PIPENV_CACHE_DIR="$tmpdir" PIP_CACHE_DIR="$tmpdir" pipenv install
if [ -n "$online" ]; then
tar -cf pipenv_cache.tar -C "$tmpdir" .
fi
However, there are a number of problems with this script, one being that it can’t use the online machine’s cache, having to download everything every time instead.
The question is, is there a better way, that doesn’t involve a custom script? Maybe some documented community best practices?
Ideally, there would exist an interface like:
pipenv lock --create-archive <file_name>
pipenv install --from-archive <file_name>
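In the absence of such an interface, a rough equivalent can be assembled from plain pip's download and --no-index machinery; this is my own sketch, not an existing pipenv feature, and it assumes exporting the lock file to requirements format is acceptable:
# on the online machine: export the lock file and fetch the wheels/sdists it pins
pipenv lock -r > requirements.txt
pip download -r requirements.txt -d ./wheelhouse
tar -cf pipenv_cache.tar requirements.txt wheelhouse
# on the offline machine: install strictly from the archived files, inside the project's virtualenv
tar -xf pipenv_cache.tar
pipenv run pip install --no-index --find-links=./wheelhouse -r requirements.txt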
With some Shell scripting work, wheelfreeze can be made to do it.
To create the archive (in a Bash shell):
(. "$(pipenv --venv)"/bin/activate && wheelfreeze <(pipenv lock -r))
And to install from the archive:
wheelfreeze/install "$(pipenv --venv)"
Disclosure: I created wheelfreeze while trying to solve the problem – “to scratch my own itch”, as the saying goes.
I usually install Python packages through pip.
For Google App Engine, I need to install packages to another target directory.
I've tried:
pip install -I flask-restful --target ./lib
but it fails with:
must supply either home or prefix/exec-prefix -- not both
How can I get this to work?
Are you using OS X and Homebrew? The Homebrew Python page https://github.com/Homebrew/brew/blob/master/docs/Homebrew-and-Python.md calls out a known issue with pip and a workaround.
Worked for me.
You can make this "empty prefix" the default by adding a
~/.pydistutils.cfg file with the following contents:
[install]
prefix=
Edit: The Homebrew page was later changed to recommend passing --prefix on the command line, as discussed in the comments below. Here is the last version which contained that text. Unfortunately this only works for sdists, not wheels.
The issue was reported to pip, which later fixed it for --user. That's probably why the section has now been removed from the Homebrew page. However, the problem still occurs when using --target as in the question above.
I believe there is a simpler solution to this problem (Homebrew's Python on macOS) that won't break your normal pip operations.
All you have to do is to create a setup.cfg file at the root directory of your project, usually where your main __init__.py or executable py file is. So if the root folder of your project is: /path/to/my/project/, create a setup.cfg file in there and put the magic words inside:
[install]
prefix=
OK, now you should be able to run pip's commands for that folder:
pip install package -t /path/to/my/project/
This command will run gracefully for that folder only. Just copy setup.cfg to whatever other projects you might have. No need to write a .pydistutils.cfg in your home directory.
After you are done installing the modules, you may remove setup.cfg.
On OS X (Mac), assuming a project folder called /var/myproject:
cd /var/myproject
Create a file called setup.cfg and add
[install]
prefix=
Run pip install <packagename> -t .
Another solution* for Homebrew users is simply to use a virtualenv.
Of course, that may remove the need for the target directory anyway - but even if it doesn't, I've found --target works by default (as in, without creating/modifying a config file) when in a virtual environment.
*I say solution; perhaps it's just another motivation to meticulously use venvs...
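A quick way to check that claim yourself might look like this; the venv location and package name are placeholders of mine:
python3 -m venv .venv
. .venv/bin/activate
# inside the venv, --target works without a setup.cfg or ~/.pydistutils.cfg workaround
pip install flask-restful --target ./lib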
I hit errors with the other recommendations around --install-option="--prefix=lib". The only thing I found that worked is using PYTHONUSERBASE as described here.
export PYTHONUSERBASE=lib
pip install -I flask-restful --user
This is not exactly the same as --target, but it does the trick for me in any case.
As others mentioned, this is a known bug with pip & Python installed via Homebrew.
If you create a ~/.pydistutils.cfg file with the "empty prefix" instruction, it will fix this problem, but it will break normal pip operations.
Until this bug is officially addressed, one of the options would be to create your own bash script that would handle this case:
#!/bin/bash

name=''
target=''

while getopts 'n:t:' flag; do
  case "${flag}" in
    n) name="${OPTARG}" ;;
    t) target="${OPTARG}" ;;
  esac
done

if [ -z "$target" ]; then
  echo "Target parameter must be provided"
  exit 1
fi

if [ -z "$name" ]; then
  echo "Name parameter must be provided"
  exit 1
fi

# current workaround for homebrew bug
file="$HOME/.pydistutils.cfg"
touch "$file"
/bin/cat <<EOM >"$file"
[install]
prefix=
EOM
# end of current workaround for homebrew bug

pip install -I "$name" --target "$target"

# current workaround for homebrew bug
rm -f "$file"
# end of current workaround for homebrew bug
This script wraps your command and:
accepts name and target parameters
checks if those parameters are empty
creates ~/.pydistutils.cfg file with "empty prefix" instruction in it
executes your pip command with provided parameters
removes ~/.pydistutils.cfg file
This script can be changed and adapted to your needs, but you get the idea. It allows you to run your command without breaking pip. Hope it helps :)
If you're using virtualenv*, it might be a good idea to double check which pip you're using.
If you see something like /usr/local/bin/pip you've broken out of your environment. Reactivating your virtualenv will fix this:
VirtualEnv: $ source bin/activate
VirtualFish: $ vf activate [environ]
*: I use virtualfish, but I assume this tip is relevant to both.
I had a similar issue.
I used the --system flag to avoid the error, as I describe in another thread where I explain the specific case of my situation.
I'm posting this here hoping it can help anyone facing the same problem.
When I was developing and testing my project, I used to use virtualenvwrapper to manage the environment and run it:
workon myproject
python myproject.py
Of course, once I was in the right virtualenv, I was using the right version of Python, and other corresponding libraries for running my project.
Now, I want to use Supervisord to manage the same project as it is ready for deployment. The question is: what is the proper way to tell Supervisord to activate the right virtualenv before executing the script? Do I need to write a separate bash script that does this and call that script in the command field of the Supervisord config file?
One way to use your virtualenv from the command line is to use the python executable located inside of your virtualenv.
For me, I keep my virtualenvs in the .virtualenvs directory. For example:
/home/ubuntu/.virtualenvs/yourenv/bin/python
No need to workon.
For a supervisord.conf managing a Tornado app I do:
command=/home/ubuntu/.virtualenvs/myapp/bin/python /usr/share/nginx/www/myapp/application.py --port=%(process_num)s
Add your virtualenv/bin path to your supervisord.conf's environment:
[program:myproj-uwsgi]
process_name=myproj-uwsgi
command=/home/myuser/.virtualenvs/myproj/bin/uwsgi
    --chdir /home/myuser/projects/myproj
    -w myproj:app
environment=PATH="/home/myuser/.virtualenvs/myproj/bin:%(ENV_PATH)s"
user=myuser
group=myuser
killasgroup=true
startsecs=5
stopwaitsecs=10
First, run
$ workon myproject
$ dirname `which python`
/home/username/.virtualenvs/myproject/bin
Add the following
environment=PATH="/home/username/.virtualenvs/myproject/bin"
to the related supervisord.conf under the [program:blabla] section, as in the sketch below.
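Put together, the relevant section would look roughly like this; the program name, paths and command are placeholders of mine:
[program:myproject]
command=/home/username/.virtualenvs/myproject/bin/python myproject.py
directory=/home/username/projects/myproject
environment=PATH="/home/username/.virtualenvs/myproject/bin"
user=username
autorestart=true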