GAE webapp application internationalization with Babel - python

How would you go about internationalizing a Google App Engine webapp application using Babel? I am looking for all the stages:
Marking the strings to be translated.
Extracting them.
Translating them.
Configuring your app to load the right language requested by the browser.

1) Use _() (or gettext()) in your code and templates. Translated strings set in module globals or class definitions should use some form of lazy gettext(), because i18n won't be available yet when the modules are imported.
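For example, a minimal sketch assuming webapp2_extras.i18n (which ships a lazy_gettext; any deferred wrapper works the same way). The handler and strings are illustrative, not from the original answer:
from webapp2_extras.i18n import gettext as _
from webapp2_extras.i18n import lazy_gettext

# evaluated at import time -> must be lazy, i18n is not set up yet
DEFAULT_GREETING = lazy_gettext('Hello, world!')

def render_greeting(name):
    # evaluated per request -> plain _() is fine here
    return _('Welcome back, %(name)s!') % {'name': name}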
2) Extract all translations using pybabel. Here we pass two directories to be scanned: the templates dir and the app dir. This will create a messages.pot file in the /locale directory with all strings found in these directories. babel.cfg is the extraction configuration that varies depending on the template engine you use:
$ pybabel extract -F ./babel.cfg -o ./locale/messages.pot ./templates/ ./app/
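For reference, a minimal babel.cfg sketch for Jinja2 templates (an assumption; adjust the extractor and patterns to your template engine):
[python: **.py]
[jinja2: **.html]
extensions=jinja2.ext.autoescape,jinja2.ext.with_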
3) Initialize a directory for each language. This is done only once. Here we initialize three translations, en_US, es_ES and pt_BR, and use the messages.pot file created in step 2:
$ pybabel init -l en_US -d ./locale -i ./locale/messages.pot
$ pybabel init -l es_ES -d ./locale -i ./locale/messages.pot
$ pybabel init -l pt_BR -d ./locale -i ./locale/messages.pot
Translate the messages. They will be in .po files in each translation directory (the compiled .mo files are produced in the next step).
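For example, a translated entry in locale/es_ES/LC_MESSAGES/messages.po looks like this (the strings are illustrative):
#: templates/index.html:12
msgid "Hello, world!"
msgstr "¡Hola, mundo!"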
After all locales are translated, compile them:
$ pybabel compile -f -d ./locale
Later, if new translations are added, repeat step 2 and update them using the new .pot file:
$ pybabel update -l pt_BR -d ./locale/ -i ./locale/messages.pot
Then translate the new strings and compile the translations again.
4) The strategy here may vary. For each request you must set the correct translations to be used, and you probably want to cache loaded translations for reuse in subsequent requests. A sketch of that idea follows.
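One possible approach using the standard gettext module (a sketch under assumptions; the function names and the naive Accept-Language parsing are illustrative):
import gettext

_translations_cache = {}

def get_translations(locale):
    # load each catalog once per process and reuse it across requests
    if locale not in _translations_cache:
        _translations_cache[locale] = gettext.translation(
            'messages', localedir='locale', languages=[locale], fallback=True)
    return _translations_cache[locale]

def handle_request(accept_language_header):
    # naive parsing; real code should rank languages by their quality values
    locale = (accept_language_header or 'en_US').split(',')[0].strip().replace('-', '_')
    trans = get_translations(locale)
    _ = trans.gettext
    return _('Hello, world!')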

Related

Problems using debuild to upload a python/GTK program to Launchpad

[update, I found the solution, see answer below]
I made a GUI wrapper for protonvpn, a command-line program for Linux. dpkg -b gets me ProtonVPNgui.deb, which works fine. However, I have problems using debuild -S -sa to upload it to Launchpad.
As is, it won't build once uploaded with dput, cf. the error message.
I tried using debuild -i -us -uc -b to build a .deb file for local testing, but it returns:
dpkg-genchanges: error: binary build with no binary artifacts found; cannot distribute
Any ideas? This whole process is driving me nuts. (I use this tar.gz)
I figured it out myself. Here is how to create a .deb package locally for testing and upload the project to Launchpad:
Create a launchpad user account.
Install dh-python with the package manager
Create the package source dir
mkdir myscript-0.1
Copy your python3 script(s) (or the sample script below) to the source dir (don't use #!/usr/bin/python as the shebang; use #!/usr/bin/python3 or #!/usr/bin/python2 and edit accordingly below)
cp ~/myscript myscript-0.1
cd myscript-0.1
Sample script:
#!/usr/bin/python3
if __name__ == '__main__':
    print("Hello world")
Create the packaging skeleton (debian/*)
dh_make -s --createorig
Remove the example files
rm debian/*.ex debian/*.EX debian/README.*
Add eventual binary files to include, e.g. gettext .mo files
mkdir -p debian/source
echo debian/locales/es/LC_MESSAGES/base.mo > debian/source/include-binaries
Edit debian/control
Replace its content with the following text:
Source: myscript
Section: utils
Priority: optional
Maintainer: Name <email@address>
Build-Depends: debhelper (>= 9), python3, dh-python
Standards-Version: 4.1.4
X-Python3-Version: >= 3.2
Package: myscript
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}
Description: insert up to 60 chars description
insert long description, indented with spaces
debian/install must contain the script(s) to install (several files are possible: Python, Perl, etc., plus eventual .desktop files for start-menu shortcuts) as well as the target directories, one pair per line
echo myscript usr/bin > debian/install
Edit debian/rules
Replace its content with the following text:
#!/usr/bin/make -f
%:
	dh $@ --with=python3
Note: it's a TAB before dh $@, not four spaces!
Build the .deb package
debuild -us -uc
You will get a few Lintian warnings/errors but your package is ready to be used:
../myscript_0.1-1_all.deb
Prepare the upload to Launchpad, inserting your GPG fingerprint after -k
debuild -S -sa -k12345ABC
Upload to Launchpad
dput ppa:[your ppa name]/ppa myscript_0.1-1_source.changes
This is an update to askubuntu.com/399552. It may take a few error messages and some googling until you're ready. Cf. the ...orig.tar.gz file at Launchpad for the complete project.

sphinx-build command not found in gitlab ci pipeline / python 3 alpine image

I want to specify a GitLab job that creates a sphinx html documentation.
I am using a Python 3 alpine image (cannot specify which exactly).
The build stage within my .gitlab-ci.yml looks like this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - sphinx-build -b html docs/ public/
  only:
    - master
However, the pipeline fails with sphinx-build: command not found (same error for make html).
According to this tutorial, my .gitlab-ci.yml should be more or less correct.
What am I doing wrong? Is this issue related to the alpine image I am using?
As @Yasen correctly noted, the path to sphinx-build was not contained in $PATH. However, adding command in front of sphinx-build did not solve the problem for me.
Anyway, I found the solution in the runner logs: the output of pip install -U sphinx produced the following warning:
WARNING: The scripts sphinx-apidoc, sphinx-autogen, sphinx-build and sphinx-quickstart are installed in 'some/path' which is not on PATH.
so I appended some/path to $PATH in the script step of the .gitlab-ci.yml:
script:
  - pip install -U sphinx
  - export PATH="$PATH:some/path"
  - sphinx-build -b html docs/ public/
Did the command pip install -U sphinx succeed? (You should be able to tell that from the CI job log.)
If so, you may need to specify the full path to sphinx-build, as Yasen said.
If it did not succeed, you should troubleshoot the installation of Sphinx.
Most likely the reason is that $PATH doesn't contain the path to sphinx-build.
TL;DR: try to use command.
Try this:
pages:
  stage: build
  tags:
    - buildtag
  script:
    - pip install -U sphinx
    - command sphinx-build -b html docs/ public/
  only:
    - master
Explanation
GitLab runners run in different ways
Since GitLab CI uses runners, a runner's shell profile may differ from the one you commonly use.
So, your runner may be configured without the directory that contains sphinx-build declared in $PATH.
Zsh/Bash startup files loading order (.bashrc, .zshrc etc.)
See this explanation:
The issue is that Bash sources from a different file based on what kind of shell it thinks it is in. For an “interactive non-login shell”, it reads .bashrc, but for an “interactive login shell” it reads from the first of .bash_profile, .bash_login and .profile (only). There is no sane reason why this should be so; it’s just historical.
What does command mean?
Since we don't know the path where sphinx-build is installed, you may use commands like which, type, etc.
As per this great answer (shell - Why not use "which"? What to use then? - Unix & Linux Stack Exchange), the author recommends using command <name> or $(command -v <name>).
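For example (a sketch; the resolved path shown is illustrative):
$ command -v sphinx-build     # prints the resolved path if the command is found
/usr/local/bin/sphinx-build
$ command sphinx-build -b html docs/ public/   # runs it, bypassing shell aliases and functions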

Fortify scan for python project

How to generate a Fortify .fpr file for Python files?
A similar question is Fortify, how to start analysis through command, but it lists the steps for Java.
To generate reports for a Python project, -python-path has to be used.
I tried the following steps, but they did not work.
Step 1: Clean,build
sourceanalyzer -64 -Xms1024M -Xmx10000M -b -verbose -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/working/9999/working/sca.log -clean
Step 2: Scan: This step should generate fpr file
sourceanalyzer -b 9999 -verbose -Xms1024M -Xmx10000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/sca.log -python-path /path/to/python -f projec_999.fpr /local/proj/**/*.py
This did not generate any fpr file.
The second step gives the following warnings:
[warning]: The -f option has no effect without the -scan option
[warning]: You may need to add some arguments to the -python-path argument to SCA.
I am not sure if I am using the correct command.
How to make sure that all python files are being scanned in the directory and subdirectories?
Is there any option to add multiple python paths?
The first step you did only does the clean, not the build step.
To perform the translation step for Python you need to specify the directories for any Python references (-python-path) as well as the files to translate.
I am also not sure what you are doing with ProjectRoot and WorkingDirectory; you know these are used to store temp data/intermediate files for sourceanalyzer and not the location of your source code, correct?
Something like
sourceanalyzer -b <buildId> -python-path <directories> <files to scan>
<buildId> can be used to group different projects; you are somewhat doing this yourself with ProjectRoot and WorkingDirectory (I am not sure if you need them both; I can't remember and I no longer have access to test it out)
<directories> - this is where you can list out the directories that would normally be in your PYTHONPATH environment variable (you might be able to actually reference it here and save a lot of hassle). This is a comma-separated list on Windows and a colon-separated list on Linux
<files to scan> - this is where you specify the files you want to translate/scan. You can specify individual files or use wildcard characters (* and **/* [recursive])
A sample command would look like:
sourceanalyzer -b MyApp -python-path %PYTHONPATH% ./MyApp/**/*
The other options you are putting in can be used and it would look something like this:
sourceanalyzer -b MyApp -Xms1024M -Xmx10G -logfile /local/proj/working/9999/working/sca.log -python-path %PYTHONPATH% ./MyApp/**/*
It is at this step that you would check which files were translated from your program:
sourceanalyzer -b MyApp -show-files
Then you would perform the scan command
sourceanalyzer -b MyApp -logfile /local/proj/working/9999/working/sca.log -scan -f project.fpr
You may apply -python-path multiple times. This solves the problem of which separator to use. The list of needed directories may be obtained with Python:
import sys
print(sys.path)
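If you prefer to pass a single -python-path value, a one-liner can join the entries (a sketch; it uses a colon, the Linux separator mentioned above):
import sys
# print sys.path as one colon-separated string, skipping empty entries
print(":".join(p for p in sys.path if p))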

How do I add more python modules to my yocto/openembedded project?

I wish to add more python modules to my yocto/openembedded project but I am unsure how to? I wish to add flask and its dependencies.
Some Python packages have corresponding recipes in the meta folders, like the Enum class for example:
meta-openembedded/meta-python/recipes-devtools/python/python-enum34_1.1.6.bb
Unfortunately, lots of useful packages aren't available, but some might be needed for a Python application. Are you used to installing missing packages with pip on the already booted platform? But what if the target product has no IP network connection? The solution is to implement a new recipe and add it to the platform meta layer (at least). Here is an example recipe for the module keyboard, useful for intercepting key/button touch events:
use PyPi web site to identify if the package is available:
https://pypi.org/project/keyboard/
download archive available on the package description page:
https://github.com/boppreh/keyboard/archive/master.zip
collect some useful information required to fill out a new recipe:
SUMMARY - can be obtained from the package description page
HOMEPAGE - the project URL on GitHub, Bitbucket, SourceForge, etc.
LICENSE - verify the license type
LIC_FILES_CHKSUM - obtained by executing md5sum on an existing LICENSE, README, or PKG-INFO file located in the root of the package (preferably)
SRC_URI[md5sum] - the md5sum of the archive itself; it will be used to discover and download the archive on the PyPI server automatically with the help of inherit pypi
PYPI_PACKAGE_EXT - if the package is not a tar.gz, you must supply the correct extension
create missing python-keyboard_0.13.1.bb recipe:
SUMMARY = "Hook and simulate keyboard events on Windows and Linux"
HOMEPAGE = "https://github.com/boppreh/keyboard"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://PKG-INFO;md5=9bc8ba91101e2f378a65d36f675c88b7"
SRC_URI[md5sum] = "d4b90e53bbde888e7b7a5a95fe580a30"
SRC_URI += "file://add_missing_CHANGES_md.patch"
PYPI_PACKAGE = "keyboard"
PYPI_PACKAGE_EXT = "zip"
inherit pypi
inherit setuptools
BBCLASSEXTEND = "native nativesdk"
The package has been patched by adding the
SRC_URI += "file://add_missing_CHANGES_md.patch"
directive to the recipe, due to a missing CHANGES.md file used by the setup.py script to identify the package version (this step is optional). The patch itself has to be placed inside a folder next to the recipe, matching the recipe name but without the version:
python-keyboard
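The resulting layout might look like this (the layer path is an assumption):
meta-mylayer/recipes-devtools/python/
├── python-keyboard_0.13.1.bb
└── python-keyboard/
    └── add_missing_CHANGES_md.patch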
This question is old, but as of 2020 there is a Python package called pipoe.
pipoe can generate .bb recipes corresponding to Python packages for you!
Usage:
$ pip3 install pipoe
$ pipoe -p requests
OR
$ pipoe -p requests --python python3
Now copy the generated .bb files to your layer and use them.
https://pypi.org/project/pipoe/
The OE layer index at layers.openembedded.org lists all known layers and the recipes they contain, so searching that should bring up the meta-python layer that you can add to your build and use recipes from.
In your Image recipe you can add a Python module by adding it to the IMAGE_INSTALL variable:
IMAGE_INSTALL += "python-numpy"
You can find available modules, for example, by searching for them with wildcards:
find . -name "*python*numpy*bb"
Running this in the Yocto folder brings up:
./poky/meta/recipes-devtools/python/python-numpy_1.7.0.bb
pipoe did not work for me either, so I ended up making this bash script. Someone else might find it useful.
You will need to change this in my script below:
local my_layers_dir="my/layers/directory"
To run this script:
./pypi.sh <modulename>
#example:
./pypi.sh humanfriendly #this should generate the bb file for the humanfriendly python module
pypi.sh:
#!/bin/bash
set -ex

# parse key=value arguments into shell variables
function argstovars()
{
    for change in "$@"; do
        set -- `echo $change | tr '=' ' '`
        eval $1=$2
    done
}

function main(){
    local module=""
    argstovars "$@"

    local my_layers_dir="my/layers/directory"

    local url_files="https://pypi.org/project/$module/#files"
    mkdir -p /tmp/pypi
    rm -fr /tmp/pypi/*
    pushd /tmp/pypi

    # fetch the PyPI files page and extract the tar.gz download URL
    wget $url_files
    local targz_url=$(cat index.html | grep https://files | grep tar.gz | sed -r "s/<a href=\"(.*)\">/\1/g")
    wget $targz_url
    local targz_file=$(ls | grep tar.gz)

    # checksums of the archive for SRC_URI
    local md5=$(md5sum $targz_file)
    md5=${md5%% *}
    local sha256=$(sha256sum $targz_file)
    sha256=${sha256%% *}

    tar -xf $targz_file
    local module_with_version=$(echo "$targz_file" | sed -r "s/(.*)\.tar\.gz/\1/g")

    # checksum of the license file for LIC_FILES_CHKSUM
    pushd $module_with_version
    local license_file=$(find . -name "LICENSE*")
    local md5lic=$(md5sum $license_file)
    md5lic=${md5lic%% *}
    popd
    popd

    # recipe file names use '_' between package name and version
    module_with_version="${module_with_version//-/_}"
    mkdir -p "$my_layers_dir/$module"
    pushd "$my_layers_dir/$module"
    echo "SUMMARY = \"This is a python module for $module\"
HOMEPAGE = \"https://pypi.org/project/$module/\"
LICENSE = \"MIT\"
LIC_FILES_CHKSUM = \"file://$license_file;md5=$md5lic\"
SRC_URI[md5sum] = \"$md5\"
SRC_URI[sha256sum] = \"$sha256\"
PYPI_PACKAGE = \"$module\"
inherit pypi setuptools3
RDEPENDS_\${PN} += \"python3-psutil\"
" > "${module_with_version}.bb"
    popd
}

time main "module=$1"

How to use jinja2 and its i18n extension (using babel) outside flask

How can I use jinja2 with babel outside a flask application?
Suppose I have a locale dir which is already populated using the pybabel command; I want to load the translation files and translate my template files.
I found the solution. Here's how you can use jinja2/babel without flask integration.
Preconditions
Preconditions are described just to complete the example, all of them can have other values or names.
You use a message domain named "html" for messages (the domain is an arbitrary name; the default is "messages").
There is a directory "i18n" with translated and compiled messages (e.g. with a file i18n/cs/LC_MESSAGES/html.mo).
You prefer to render your templates using the "cs" or "en" locale.
The templates are located in the directory templates, and a jinja2 template named stack.html exists there, i.e. the file templates/stack.html exists.
Code sample
from jinja2 import Environment, FileSystemLoader
from babel.support import Translations
locale_dir = "i18n"
msgdomain = "html"
list_of_desired_locales = ["cs", "en"]
loader = FileSystemLoader("templates")
extensions = ['jinja2.ext.i18n', 'jinja2.ext.autoescape', 'jinja2.ext.with_']
translations = Translations.load(locale_dir, list_of_desired_locales, domain=msgdomain)
env = Environment(extensions=extensions, loader=loader) # add any other env options if needed
env.install_gettext_translations(translations)
template = env.get_template("stack.html")
rendered_template = template.render()
The rendered_template contains the rendered HTML content now, probably in "cs" locale.
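For completeness, a sketch of what templates/stack.html might contain so the i18n extension has something to translate (the strings are illustrative):
{# hypothetical templates/stack.html #}
<h1>{% trans %}Sign in{% endtrans %}</h1>
<p>{{ _("You are browsing the stack.") }}</p>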
This works great! Thanks. Two dependencies to note:
I. jinja2 dependency: MarkupSafe
II. Python babel dependency: pytz
See these steps at http://tlphoto.googlecode.com/git/jinja2_i18n_howto.txt
Create the folder structure (no whitespace after the commas!!!)
mkdir -pv ./lang/{en_US,zh_CN,fa_IR,es_VE,de_DE,ja_JP}/LC_MESSAGES/
Extract
pybabel -v extract -F babel.config -o ./lang/messages.pot ./
Init/Update
3.1 Init
pybabel init -l zh_CN -d ./lang -i ./lang/messages.pot
3.2 Update
pybabel update -l zh_CN -d ./lang -i ./lang/messages.pot
Compile
pybabel compile -f -d ./lang
