How can I use jinja2 with babel outside a Flask application?
Suppose I have a locale dir that is populated using the pybabel command. I want to load the translation files and translate my template files.
I found the solution. Here's how you can use jinja2/babel without Flask integration.
Preconditions
Preconditions are described just to complete the example; all of them can have other values or names.
You use a message domain named "html" (the domain is an arbitrary name; Babel's default is "messages").
There is a directory "i18n" with translated and compiled messages (e.g. a file i18n/cs/LC_MESSAGES/html.mo).
You prefer to render your templates using the "cs" or "en" locale.
The templates are located in the directory templates, which contains a jinja2 template named stack.html (i.e. the file templates/stack.html exists).
Code sample
from jinja2 import Environment, FileSystemLoader
from babel.support import Translations

locale_dir = "i18n"
msgdomain = "html"
list_of_desired_locales = ["cs", "en"]  # the first locale found wins

loader = FileSystemLoader("templates")
# Note: in Jinja2 >= 3.0, autoescape and with_ are built in, so only
# 'jinja2.ext.i18n' is needed there.
extensions = ['jinja2.ext.i18n', 'jinja2.ext.autoescape', 'jinja2.ext.with_']

# Pass the domain explicitly; otherwise Babel looks for "messages" catalogs.
translations = Translations.load(locale_dir, list_of_desired_locales, domain=msgdomain)
env = Environment(extensions=extensions, loader=loader)  # add any other env options if needed
env.install_gettext_translations(translations)
template = env.get_template("stack.html")
rendered_template = template.render()
The rendered_template now contains the rendered HTML content, in the "cs" locale if a catalog for it was found.
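For completeness, a hypothetical templates/stack.html could mark strings for translation with the {% trans %} tag and the _() function that jinja2.ext.i18n provides:
<h1>{% trans %}Hello, world!{% endtrans %}</h1>
<p>{{ _("This paragraph gets translated too.") }}</p>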
This works great! Thanks. Two dependencies worth noting:
I. jinja2 depends on MarkupSafe
II. Python babel depends on pytz
See these steps at http://tlphoto.googlecode.com/git/jinja2_i18n_howto.txt
1. Create the folder structure (no whitespace after the commas!):
mkdir -pv ./lang/{en_US,zh_CN,fa_IR,es_VE,de_DE,ja_JP}/LC_MESSAGES/
2. Extract (a sample babel.config mapping is sketched after these steps):
pybabel -v extract -F babel.config -o ./lang/messages.pot ./
3. Init/Update
3.1 Init:
pybabel init -l zh_CN -d ./lang -i ./lang/messages.pot
3.2 Update:
pybabel update -l zh_CN -d ./lang -i ./lang/messages.pot
4. Compile:
pybabel compile -f -d ./lang
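A minimal babel.config for step 2 might look like this (the paths are assumptions; adjust them to your project layout). Each section maps an extractor to a file pattern:
[python: **.py]
[jinja2: **/templates/**.html]
encoding = utf-8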
I want to specify a GitLab job that builds Sphinx HTML documentation.
I am using a Python 3 alpine image (cannot specify which exactly).
The build stage within my .gitlab-ci.yml looks like this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - sphinx-build -b html docs/ public/
  only:
    - master
However, the pipeline fails with sphinx-build: command not found (the same error occurs for make html).
According to this tutorial, my .gitlab-ci.yml should be more or less correct.
What am I doing wrong? Is this issue related to the alpine image I am using?
As @Yasen correctly noted, the path to sphinx-build was not contained in $PATH. However, prefixing sphinx-build with command did not solve the problem for me.
Anyway, I found the solution in the runner logs: the output of pip install -U sphinx produced the following warning:
WARNING: The scripts sphinx-apidoc, sphinx-autogen, sphinx-build and sphinx-quickstart are installed in 'some/path' which is not on PATH.
so I added an export PATH step to the script in the .gitlab-ci.yml, appending the directory rather than overwriting $PATH so the rest of the job still finds its tools:
script:
  - pip install -U sphinx
  - export PATH="$PATH:some/path"
  - sphinx-build -b html docs/ public/
Did the command pip install -U sphinx succeed? (You should be able to tell that from the CI job log.)
If so, you may need to specify the full path to sphinx-build, as Yasen said.
If it did not succeed, you should troubleshoot the installation of Sphinx.
Most likely the reason is that $PATH doesn't contain the path to sphinx-build.
TL;DR: try using command.
Try this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - command sphinx-build -b html docs/ public/
  only:
    - master
Explanation
GitLab runners run in a different environment
Since GitLab CI uses runners, a runner's shell profile may differ from the one you commonly use.
So your runner may be configured without the directory that contains sphinx-build declared in $PATH.
Zsh/Bash startup files loading order (.bashrc, .zshrc etc.)
See this explanation:
The issue is that Bash sources from a different file based on what kind of shell it thinks it is in. For an “interactive non-login shell”, it reads .bashrc, but for an “interactive login shell” it reads from the first of .bash_profile, .bash_login and .profile (only). There is no sane reason why this should be so; it’s just historical.
What does command mean?
Since we don't know the path where sphinx-build is installed, you can use commands like which, type, etc.
As per this great answer (shell - Why not use "which"? What to use then? - Unix & Linux Stack Exchange), the author recommends using command <name> or $(command -v <name>).
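For instance, you could check the lookup inside the job first; the path shown below is purely illustrative:
$ command -v sphinx-build
/usr/local/bin/sphinx-build
The actual location depends on the image and on where pip installed the scripts.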
I am using create-react-app to create a front end for a django based application.
How would I import the js bundle generated by create-react-app in a Django template?
The bundle filename is in the following format.
main.3cf06d58.js
The issue is that every time I rebuild the bundle, the content-based hash in the filename changes, which in turn breaks the static file import in my Django template:
<script type="text/javascript" src="{% static 'js/bundle/main.c86ade78.js' %}"></script>
Is there a way of setting custom Webpack bundle filenames in create-react-app? This setting doesn't seem to be available, as I have not ejected and therefore do not have access to the Webpack configuration file.
Probably the best option to keep the 'hashed names' and avoid cache issues is to use django-webpack-loader and webpack-bundle-tracker.
The first one provides a couple of new tags for Django templates, like {% render_bundle 'main' %}. This tag will be replaced at runtime with the path to your bundled main entry point defined in the webpack configuration.
The second one is a webpack plugin that outputs to disk a JSON file with some information about the bundles, such as the actual "hashed filename". This JSON is read by django-webpack-loader to figure out the filenames.
There is a full explanation of how it can be done in this post from the author of both plugins.
For more info you can check this series of posts:
Confident Asset Deployments with Webpack & Django, Part 1
Confident Asset Deployments with Webpack & Django, part 2
Confident Asset Deployments with Webpack & Django, part 3
Confident Asset Deployments with Webpack & Django, part 4
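As a rough sketch of how the pieces fit together on the Django side (the paths and bundle names here are assumptions; check both plugins' docs for the exact options):
# settings.py
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))  # as in a default settings.py

INSTALLED_APPS = [
    # ...
    "webpack_loader",
]

# Point django-webpack-loader at the stats file written by webpack-bundle-tracker.
WEBPACK_LOADER = {
    "DEFAULT": {
        "BUNDLE_DIR_NAME": "bundles/",  # relative to your static files dir
        "STATS_FILE": os.path.join(BASE_DIR, "webpack-stats.json"),
    }
}
In a template, {% load render_bundle from webpack_loader %} followed by {% render_bundle 'main' %} then resolves to whatever hashed filename the latest build produced.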
The best solution would be to access the webpack configuration file and give the bundle a static filename. Content-hashed names for static files are useful for browser caching, but if you don't need that, your best bet is probably to eject your create-react-app and tweak the webpack config.
To do this with create-react-app without ejecting, it could be possible to fork the react-scripts module and make such an adjustment there.
Another, less robust way of doing this, if you don't want to touch the webpack configuration for whatever reason, is to create a bash script.
This script goes in the same directory as your package.json and greps the bundle filenames from the output of the npm run build command, then copies the css and js bundles to the django static folder under respective css and js directories.
build-django-static.sh
#!/usr/bin/env bash
# Build, then pick the bundle paths out of create-react-app's output.
for bundle in $(npm run build | grep -o 'build/static/\S*')
do
    filename=$(basename "$bundle")
    extension="${filename##*.}"   # "js" or "css"
    outputpath="../core/static/${extension}/bundle.${extension}"
    cp "$bundle" "$outputpath"
    echo "copied $bundle to $outputpath"
done
Note - It is crucial to change the $outputpath variable to the correct path that points to your static django directory.
Then add a custom npm script to your package.json which calls this bash script.
"scripts": {
...
"build-django-static": "bash ./build-django-static.sh"
...
}
Then call the npm script by running the following command from the same directory as your package.json:
npm run build-django-static
I wish to add more python modules to my yocto/openembedded project, but I am unsure how to. I wish to add flask and its dependencies.
Some python packages have corresponding recipes in the meta folders, like the Enum class for example:
meta-openembedded/meta-python/recipes-devtools/python/python-enum34_1.1.6.bb
Unfortunately, lots of useful packages aren't available, though some might be needed for your python application. You may be used to installing missing packages with pip on an already-booted platform, but what if the target product is not connected to an IP network? The solution is to implement a new recipe and add it to the platform meta layer (at least). Here is an example recipe for the module keyboard, useful for intercepting key/button touch events:
Use the PyPi web site to identify if the package is available:
https://pypi.org/project/keyboard/
Download the archive available on the package description page:
https://github.com/boppreh/keyboard/archive/master.zip
Collect some useful information required to fill out a new recipe:
SUMMARY - can be obtained from the package description page
HOMEPAGE - the project URL on github or bitbucket or sourceforge, etc.
LICENSE - verify the license type
LIC_FILES_CHKSUM - obtained by executing md5sum on an existing LICENSE or README or PKG-INFO file located in the root of the package (preferably)
SRC_URI[md5sum] - the md5sum of the archive itself; it is used to discover and download the archive from the pypi server automatically by the support class pulled in with inherit pypi
PYPI_PACKAGE_EXT - if the package archive is not a tar.gz, you are required to supply the correct extension
Create the missing python-keyboard_0.13.1.bb recipe:
SUMMARY = "Hook and simulate keyboard events on Windows and Linux"
HOMEPAGE = "https://github.com/boppreh/keyboard"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://PKG-INFO;md5=9bc8ba91101e2f378a65d36f675c88b7"
SRC_URI[md5sum] = "d4b90e53bbde888e7b7a5a95fe580a30"
SRC_URI += "file://add_missing_CHANGES_md.patch"
PYPI_PACKAGE = "keyboard"
PYPI_PACKAGE_EXT = "zip"
inherit pypi
inherit setuptools
BBCLASSEXTEND = "native nativesdk"
The package has been patched by adding the
SRC_URI += "file://add_missing_CHANGES_md.patch"
directive to the recipe, because of a missing CHANGES.md file that the setup.py script uses to identify the package version (this step is optional). The patch itself has to be placed in a folder next to the recipe, named after the recipe but without the version:
python-keyboard
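So the layer would contain something like this (an illustrative layout, mirroring the recipe name above):
recipes-devtools/python/python-keyboard_0.13.1.bb
recipes-devtools/python/python-keyboard/add_missing_CHANGES_md.patch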
This question is old, but currently, in 2020, there is a python package called pipoe.
pipoe can generate the .bb recipes corresponding to python packages for you!
Usage:
$ pip3 install pipoe
$ pipoe -p requests
OR
$ pipoe -p requests --python python3
Now copy the generated .bb files to your layer and use them.
https://pypi.org/project/pipoe/
The OE layer index at layers.openembedded.org lists all known layers and the recipes they contain, so searching that should bring up the meta-python layer that you can add to your build and use recipes from.
In your Image recipe you can add a Python module by adding it to the IMAGE_INSTALL variable:
IMAGE_INSTALL += "python-numpy"
You can find possible modules for example by searching for them with wildcards:
find . -name "*python*numpy*bb"
which, run in the Yocto folder, brings up:
./poky/meta/recipes-devtools/python/python-numpy_1.7.0.bb
pipoe did not work for me either, so I ended up making this bash script. Someone else might find it useful.
You will need to change this in my script below (also note that the generated recipe hardcodes LICENSE = "MIT" and an example RDEPENDS entry; adjust those per module):
local my_layers_dir="my/layers/directory"
To run this script:
./pypi.sh <modulename>
#example:
./pypi.sh humanfriendly #this should generate the bb file for the humanfriendly python module
pypi.sh:
#!/bin/bash
set -ex
function argstovars()
{
for change in "$@"; do
set -- `echo $change | tr '=' ' '`
eval $1=$2
done
}
function main(){
local module=""
argstovars "$@"
local my_layers_dir="my/layers/directory"
local url_files="https://pypi.org/project/$module/#files"
mkdir -p /tmp/pypi
rm -fr /tmp/pypi/*
pushd /tmp/pypi
wget $url_files
local targz_url=$(cat index.html | grep https://files | grep tar.gz | sed -r "s/<a href=\"(.*)\">/\1/g")
wget $targz_url
local targz_file=$(ls | grep tar.gz)
local md5=$(md5sum $targz_file)
md5=${md5%% *}
local sha256=$(sha256sum $targz_file)
sha256=${sha256%% *}
tar -xf $targz_file
local module_with_version=$(echo "$targz_file" | sed -r "s/(.*)\.tar\.gz/\1/g")
pushd $module_with_version
local license_file=$(find . -name "LICENSE*")
local md5lic=$(md5sum $license_file)
md5lic=${md5lic%% *}
popd
popd
module_with_version="${module_with_version//-/_}"  # recipe names use underscores
mkdir -p "$my_layers_dir/$module"
pushd "$my_layers_dir/$module"
echo "SUMMARY = \"This is a python module for $module\"
HOMEPAGE = \"https://pypi.org/project/$module/\"
LICENSE = \"MIT\"
LIC_FILES_CHKSUM = \"file://$license_file;md5=$md5lic\"
SRC_URI[md5sum] = \"$md5\"
SRC_URI[sha256sum] = \"$sha256\"
PYPI_PACKAGE = \"$module\"
inherit pypi setuptools3
RDEPENDS_\${PN} += \" \
python3-psutil \
\"
" > "${module_with_version}.bb"
popd
}
time main "module=$1"
I've written a Django app, and now I want to make it easy to deploy on multiple servers.
The basic installation is:
Copy the app folder into the Django project folder
Add it to INSTALLED_APPS in settings.py
Run ./manage.py collectstatic
This particular app doesn't need to use the DB, but if it did, I'd use south and run ./manage.py migrate, but that's another story.
The part I'm having trouble with is #2. I don't want to have to manually edit this file every time. What's the easiest/most robust way to update that?
I was thinking I could use the inspect module to find the variable and then somehow append to it, but I'm not having any luck: inspect.getsourcelines won't find variables.
You can modify your settings.py using bash.
# set $SETTINGS_FILE to the full path of your django project's settings.py file
SETTINGS_FILE="/path/to/your/django/project/settings.py"
# checks that app $1 is in the django project settings file
is_app_in_django_settings() {
# checking that django project settings file exists
if [ ! -f $SETTINGS_FILE ]; then
echo "Error: The django project settings file '$SETTINGS_FILE' does not exist"
exit 1
fi
cat $SETTINGS_FILE | grep -Pzo "INSTALLED_APPS\s?=\s?\[[\s\w\.,']*$1[\s\w\.,']*\]\n?" > /dev/null 2>&1
# now $?=0 if app is in settings file
# $? not 0 otherwise
}
# adds app $1 to the django project settings
add_app2django_settings() {
is_app_in_django_settings $1
if [ $? -ne 0 ]; then
echo "Info. The app '$1' is not in the django project settings file '$SETTINGS_FILE'. Adding."
sed -i -e '1h;2,$H;$!d;g' -re "s/(INSTALLED_APPS\s?=\s?\[[\n '._a-zA-Z,]*)/\1 '$1',\n/g" $SETTINGS_FILE
# checking that app $1 successfully added to django project settings file
is_app_in_django_settings $1
if [ $? -ne 0 ]; then
echo "Error. Could not add the app '$1' to the django project settings file '$SETTINGS_FILE'. Add it manually, then run this script again."
exit 1
else
echo "Info. The app '$1' was successfully added to the django settings file '$SETTINGS_FILE'."
fi
else
echo "Info. The app '$1' is already in the django project settings file '$SETTINGS_FILE'"
fi
}
Use:
add_app2django_settings "my_app"
Here are my reasons why I think this would be wrong:
it is extra code complexity without any big need; adding one line to settings every time is not that bad, especially if you are doing steps #1 and #3 anyway.
it makes it less explicit which apps your project is using. When another developer works on your project, they might not know that your app is installed.
you should do steps #1 and #2 in your version control system, test the whole system, then commit the changes and only then deploy.
I think you have something wrong (from my point of view) in your development/deployment process if you are looking for such an "optimization". I think it is much easier and better to use INSTALLED_APPS.
If you are building something for public use and you want to make it as easy as possible to add modules, then this would be nice. In that case I would recommend packaging the project and its apps as python eggs and making use of entry points. Then you could deploy an app into the project like this:
pip install my-app-name
Without even doing steps #1-#3 by hand! Step #1 is done by pip, and steps #2 and #3 are done by setup hooks defined in your project.
Paste script is a good example of entry-points utilization:
# Install paste script:
pip install pastescript
# install django templates for pastescript:
pip install fez.djangoskel
# now paste script knows about fez.djangoskel because of entry-points
# start a new django project from fez's templates:
paste create -t django_buildout
Here is a portion of setup.py from fez.djangoskel package:
...
entry_points="""
[paste.paster_create_template]
django_buildout=fez.djangoskel.pastertemplates:DjangoBuildoutTemplate
django_app=fez.djangoskel.pastertemplates:DjangoAppTemplate
...
zc.buildout is another great tool which might make your deployments much easier. Python eggs play very nicely with buildout.
How would you go about internationalizing a Google App Engine webapp application using Babel? I am looking here for all the stages:
Marking the strings to be translated.
Extracting them.
Translating
Configuring your app to load the right language requested by the browser
1) Use _() (or gettext()) in your code and templates. Translated strings set in module globals or class definitions should use some form of lazy gettext(), because i18n won't be available when the modules are imported.
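A minimal lazy-gettext sketch for illustration (frameworks usually ship one, e.g. via the speaklater package); the point is that the catalog lookup happens when the string is rendered, not at import time. The get_translations() helper here is hypothetical:
from babel.support import Translations

def get_translations():
    # Hypothetical hook: a real app would return the Translations object
    # negotiated for the current request (see step 4).
    return Translations.load("./locale", ["en_US"])

class LazyString:
    """Defers the catalog lookup until the value is actually used."""
    def __init__(self, message):
        self.message = message
    def __str__(self):
        return get_translations().gettext(self.message)

def lazy_gettext(message):
    return LazyString(message)

GREETING = lazy_gettext("Welcome!")  # safe in module globals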
2) Extract all translations using pybabel. Here we pass two directories to be scanned: the templates dir and the app dir. This will create a messages.pot file in the /locale directory with all strings found in these directories. babel.cfg is the extraction configuration that varies depending on the template engine you use:
$ pybabel extract -F ./babel.cfg -o ./locale/messages.pot ./templates/ ./app/
3) Initialize a directory for each language. This is done only once. Here we initialize three translations, en_US, es_ES and pt_BR, and use the messages.pot file created on step 2:
$ pybabel init -l en_US -d ./locale -i ./locale/messages.pot
$ pybabel init -l es_ES -d ./locale -i ./locale/messages.pot
$ pybabel init -l pt_BR -d ./locale -i ./locale/messages.pot
Translate the messages. They will be in .po files in each translation directory.
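For example, a translated entry in ./locale/pt_BR/LC_MESSAGES/messages.po looks like this (illustrative content):
msgid "Welcome!"
msgstr "Bem-vindo!"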
After all locales are translated, compile them:
$ pybabel compile -f -d ./locale
Later, if new translations are added, repeat step 2 and update the catalogs using the new .pot file:
$ pybabel update -l pt_BR -d ./locale/ -i ./locale/messages.pot
Then translate the new strings and compile the translations again.
4) The strategy here may vary. For each request you must set the correct translations to be used, and you probably want to cache loaded translations for reuse in subsequent requests.
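A rough sketch of that per-request selection with caching (the negotiation below is deliberately naive, and the ./locale layout follows the steps above):
from babel.support import Translations

_translations_cache = {}  # locale -> Translations, reused across requests

def get_request_translations(accept_language_header):
    # Naive negotiation: take the browser's first preference, e.g. "pt-BR".
    first = (accept_language_header or "en_US").split(",")[0].strip()
    locale = first.split(";")[0].replace("-", "_")
    if locale not in _translations_cache:
        _translations_cache[locale] = Translations.load("./locale", [locale, "en_US"])
    return _translations_cache[locale]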