I am trying to build a GTK application on macOS (Monterey, 12.4) that includes both C GTK components and Python GTK components. I am following the instructions from both here and here. I had minimal issues with the first part (although for some reason jhbuild said cargo did not exist when building librsvg during the call to jhbuild build meta-gtk-osx-gtk3, despite .new_local/bin being at the front of PATH). The instructions there were simply:
sh gtk-osx-setup.sh
alias jhbuild="PATH=.new_local/bin:$PATH jhbuild"
jhbuild bootstrap-gtk-osx
jhbuild build meta-gtk-osx-bootstrap meta-gtk-osx-gtk3
In any case, the issue I am having now is with installing either set of Python bindings. Whichever one I attempt to build, jhbuild reports that the module cannot be found:
jhbuild#Cytocyberneticss-Mac-mini ~ % jhbuild build meta-gtk-osx-python-gtk3
Loading .env environment variables...
jhbuild build: A module called ''meta-gtk-osx-python-gtk3'' could not be found.
Usage: run_jhbuild.py [ -f config ] command [ options ... ]
jhbuild#Cytocyberneticss-Mac-mini ~ % jhbuild build meta-gtk-osx-python
Loading .env environment variables...
jhbuild build: A module called ''meta-gtk-osx-python'' could not be found.
Usage: run_jhbuild.py [ -f config ] command [ options ... ]
I do not have Homebrew or MacPorts installed, so neither of those could be getting in the way. I really am at a loss as to what the problem is here, when the other builds went fine. Any pointers would be greatly appreciated. Let me know if you need any more information about my setup.
As per the package maintainer:
I need to rewrite that Python wiki page, it's thoroughly out of date.
meta-gtk-osx-python-gtk3 got changed to meta-gtk-osx-python2-gtk3, and current versions of gtk-osx don't support gtk2. What's more, python2 is obsolete and its use is deprecated; you should use meta-gtk-osx-python3-gtk3. I haven't yet made a meta-gtk-osx-python3-gtk4 but you can easily do so in your own moduleset if your application is ready for it.
So simply use one of the following (preferring the python3 variant, since python2 is deprecated):
jhbuild build meta-gtk-osx-python3-gtk3
jhbuild build meta-gtk-osx-python2-gtk3
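If your application is ready for GTK4, a custom moduleset along these lines might work. The <metamodule> and <include> elements are standard jhbuild moduleset syntax, but the id and the dep package names below are assumptions; check the gtk-osx modulesets for the exact names.

<?xml version="1.0"?>
<moduleset>
  <!-- pull in the regular gtk-osx moduleset so the dep ids below resolve -->
  <include href="path-or-url-to-the-gtk-osx-moduleset"/>
  <metamodule id="meta-gtk-osx-python3-gtk4">
    <dependencies>
      <dep package="gtk-4"/>       <!-- assumed id for the GTK4 module -->
      <dep package="pygobject3"/>  <!-- assumed id for the Python 3 bindings -->
    </dependencies>
  </metamodule>
</moduleset>

You would then build it with jhbuild -m /path/to/custom.modules build meta-gtk-osx-python3-gtk4.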
This question will be irrelevant though as soon as the Wiki is updated...
I need to package a virtualenv as an RPM. I found a sample spec file for Plone here.
My project uses Python 2.7, which I have built from source for this purpose. I therefore changed parts of the spec file to:
/usr/local/bin/virtualenv-3.4 --no-site-packages --distribute %{_builddir}/usr/local/virtualenvs/%{shortname}
I'm getting the following error on rpmbuild -bb requirements.spec
+ /usr/sbin/prelink -u /var/lib/jenkins/rpmbuild/BUILDROOT/requirements-1.0-1.x86_64/usr/local/virtualenvs/requirements/bin/python
/usr/sbin/prelink: /var/lib/jenkins/rpmbuild/BUILDROOT/requirements-1.0-1.x86_64/usr/local/virtualenvs/requirements/bin/python does not have .gnu.prelink_undo section
I'm assuming I need to rebuild Python with prelinking enabled during ./configure. How can I do that?
I had a similar issue recently with a SPEC file that was also based on this example from Plone.
In my case I'm using python27 RPMs from the IUS repository and want to avoid building Python from source.
My workaround was to disable prelink completely in my SPEC file:
add this: %define __prelink_undo_cmd %{nil}
comment out this:
# # This avoids prelink & RPM helpfully breaking the package signatures:
# /usr/sbin/prelink -u $RPM_BUILD_ROOT/usr/local/virtualenvs/%{shortname}/bin/python
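Pieced together, the two changes sit in the SPEC file like this (a sketch based on the Plone example; %{shortname} is whatever that spec already defines):

# near the top, with the other %define lines: disable prelink for this package
%define __prelink_undo_cmd %{nil}

%install
# This avoids prelink & RPM helpfully breaking the package signatures:
# /usr/sbin/prelink -u $RPM_BUILD_ROOT/usr/local/virtualenvs/%{shortname}/bin/python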
I'm developing my own module in Python 2.7. It resides in ~/Development/.../myModule instead of /usr/lib/python2.7/dist-packages or /usr/lib/python2.7/site-packages. The internal structure is:
project-root-dir/
    server/
        __init__.py
        service.py
        http.py
    client/
        __init__.py
        client.py
client/client.py contains the PyCachedClient class. I'm having import problems:
project-root-dir$ python
Python 2.7.2+ (default, Jul 20 2012, 22:12:53)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from server import http
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "server/http.py", line 9, in <module>
from client import PyCachedClient
ImportError: cannot import name PyCachedClient
I didn't set PYTHONPATH to include my project-root-dir, so when server.http tries to import client.PyCachedClient, it tries to load it from a relative path and fails. My question is: how should I set all the paths/settings in a good, Pythonic way? I know I can run export PYTHONPATH=... in the shell each time I open a console and try to run my server, but I guess that's not the best way. If my module were installed via PyPI (or something similar), it would live under a /usr/lib/python... path and be found automatically.
I'd appreciate tips on best practices in python module development.
My Python development workflow
This is a basic process to develop Python packages that incorporates what I believe to be the best practices in the community. It's basic: if you're really serious about developing Python packages, there's still a bit more to it, and everyone has their own preferences, but it should serve as a template to get started and then learn more about the pieces involved. The basic steps are:
Use virtualenv for isolation
setuptools for creating an installable package and managing dependencies
python setup.py develop to install that package in development mode
virtualenv
First, I would recommend using virtualenv to get an isolated environment to develop your package(s) in. During development, you will need to install, upgrade, downgrade and uninstall dependencies of your package, and you don't want
your development dependencies to pollute your system-wide site-packages
your system-wide site-packages to influence your development environment
version conflicts
Polluting your system-wide site-packages is bad, because any package you install there will be available to all Python applications that use the system Python, even though you only needed that dependency for your small project. And maybe it was just installed in a new version that overrode the one in the system-wide site-packages and is incompatible with ${important_app}, which depends on it. You get the idea.
Having your system wide site-packages influence your development environment is bad, because maybe your project depends on a module you already got in the system Python's site-packages. So you forget to properly declare that your project depends on that module, but everything works because it's always there on your local development box. Until you release your package and people try to install it, or push it to production, etc... Developing in a clean environment forces you to properly declare your dependencies.
So, a virtualenv is an isolated environment with its own Python interpreter and module search path. It's based on a Python installation you previously installed, but isolated from it.
To create a virtualenv, first install the virtualenv package into your system-wide Python using easy_install or pip:
sudo pip install virtualenv
Notice this will be the only time you install something as root (using sudo), into your global site-packages. Everything after this will happen inside the virtualenv you're about to create.
Now create a virtualenv for developing your package:
cd ~/pyprojects
virtualenv --no-site-packages foobar-env
This will create a directory tree ~/pyprojects/foobar-env, which is your virtualenv.
To activate the virtualenv, cd into it and source the bin/activate script:
~/pyprojects $ cd foobar-env/
~/pyprojects/foobar-env $ . bin/activate
(foobar-env) ~/pyprojects/foobar-env $
Note the leading dot .; that's shorthand for the source shell command. Also note how the prompt changes: (foobar-env) means you're inside the activated virtualenv (and you always need to be for the isolation to work). So activate your env every time you open a new terminal tab or SSH session, etc.
If you now run python in that activated env, it will actually use ~/pyprojects/foobar-env/bin/python as the interpreter, with its own site-packages and isolated module search path.
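You can verify that from inside the activated env; the interpreter reports the virtualenv as its prefix (the output path below is just illustrative):

(foobar-env) ~/pyprojects/foobar-env $ python -c "import sys; print sys.prefix"
/home/you/pyprojects/foobar-env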
A setuptools package
Now for creating your package. Basically you'll want a setuptools package with a setup.py to properly declare your package's metadata and dependencies. You can do this on your own by following the setuptools documentation, or create a package skeleton using Paster templates. To use Paster templates, install PasteScript into your virtualenv:
pip install PasteScript
Let's create a source directory for our new package to keep things organized (maybe you'll want to split up your project into several packages, or later use dependencies from source):
mkdir src
cd src/
Now for creating your package, do
paster create -t basic_package foobar
and answer all the questions in the interactive interface. Most are optional and can simply be left at the default by pressing ENTER.
This will create a package (or more precisely, a setuptools distribution) called foobar. This is the name that
people will use to install your package using easy_install or pip install foobar
the name other packages will use to depend on yours in setup.py
what it will be called on PyPI
Inside, you almost always create a Python package (as in "a directory with an __init__.py") that's called the same. That's not required; the name of the top-level Python package can be any valid package name, but it's a common convention to name it the same as the distribution. And that's why it's important, but not always easy, to keep the two apart. Because the top-level Python package name is what
people (or you) will use to import your package using import foobar or from foobar import baz
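The distinction is easiest to see in a setup.py where the two names differ. A contrived sketch (both names made up for illustration):

from setuptools import setup

setup(
    name='foobar-client',    # the distribution: pip install foobar-client
    version='0.1',
    packages=['foobar'],     # the top-level package: import foobar
)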
So if you used the paster template, it will already have created that directory for you:
cd foobar/foobar/
Now create your code:
vim models.py
models.py
class Page(object):
    """A dumb object wrapping a webpage.
    """

    def __init__(self, content, url):
        self.content = content
        self.original_url = url

    def __repr__(self):
        return "<Page retrieved from '%s' (%s bytes)>" % (self.original_url, len(self.content))
And a client.py in the same directory that uses models.py:
client.py
import requests
from foobar.models import Page
url = 'http://www.stackoverflow.com'
response = requests.get(url)
page = Page(response.content, url)
print page
Declare the dependency on the requests module in setup.py:
install_requires=[
    # -*- Extra requirements: -*-
    'setuptools',
    'requests',
],
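For context, if you skipped the paster template, a minimal hand-written setup.py carrying that declaration could look like this (a sketch, not the exact paster output):

from setuptools import setup, find_packages

setup(
    name='foobar',
    version='0.1',
    packages=find_packages(),
    install_requires=[
        # -*- Extra requirements: -*-
        'setuptools',
        'requests',
    ],
)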
Version control
src/foobar/ is the directory you'll now want to put under version control:
cd src/foobar/
git init
vim .gitignore
.gitignore
*.egg-info
*.py[co]
git add .
git commit -m 'Create initial package structure.'
Installing your package as a development egg
Now it's time to install your package in development mode:
python setup.py develop
This will install the requests dependency and your package as a development egg. So it's linked into your virtualenv's site-packages, but still lives at src/foobar where you can make changes and have them be immediately active in the virtualenv without re-installing your package.
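You can see that link for yourself: inside the env, the imported package resolves straight to your source checkout rather than to a copy under site-packages (the path below is illustrative):

(foobar-env) $ python -c "import foobar; print foobar.__file__"
/home/you/pyprojects/foobar-env/src/foobar/foobar/__init__.py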
Now for your original question, importing using relative paths: My advice is, don't do it. Now that you've got a proper setuptools package, that's installed and importable, your current working directory shouldn't matter any more. Just do from foobar.models import Page or similar, declaring the fully qualified name where that object lives. That makes your source code much more readable and discoverable, for yourself and other people that read your code.
You can now run your code by doing python client.py from inside the package directory, or python src/foobar/foobar/client.py from anywhere inside your activated virtualenv; either way, your package is properly installed and your working directory no longer matters for the imports.
If you want to go one step further, you can even create a setuptools entry point for your CLI scripts. This will create a bin/something script in your virtualenv that you can run from the shell.
setuptools console_scripts entry point
setup.py
entry_points='''
# -*- Entry points: -*-
[console_scripts]
run-foobar = foobar.main:run_foobar
''',
client.py
def run_client():
    # ...
main.py
from foobar.client import run_client

def run_foobar():
    run_client()
Re-install your package to activate the entry point:
python setup.py develop
And there you go, bin/run-foobar.
Once you (or someone else) installs your package for real, outside the virtualenv, the entry point will be in /usr/local/bin/run-foobar or somewhere similar, where it will automatically be in $PATH.
Further steps
Creating a release of your package and uploading it to PyPI, for example using zest.releaser
Keeping a changelog and versioning your package
Learn about declaring dependencies
Learn about the differences between distribute, distutils, setuptools and distutils2
Suggested reading:
The Hitchhiker’s Guide to Packaging
The pip cookbook
So, you have two packages, the first with modules named:
server # server/__init__.py
server.service # server/service.py
server.http # server/http.py
The second with modules named:
client # client/__init__.py
client.client # client/client.py
If you assume both packages are on your import path (sys.path), and the class you want is in client/client.py, then in your server you have to do:
from client.client import PyCachedClient
You asked for a symbol out of client, not client.client, and from your description, that isn't where that symbol is defined.
I personally would consider making this one package (i.e., putting an __init__.py in the folder one level up and giving it a suitable Python package name), and having client and server be sub-packages of that package. Then (a) you could do relative imports if you wanted to (from ..client.client import something), and (b) your project would be more suitable for redistribution, not putting two very generic package names at the top level of the Python module hierarchy.
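For illustration, with a made-up top-level name pycached, the layout would become:

pycached/
    __init__.py
    server/
        __init__.py
        service.py
        http.py
    client/
        __init__.py
        client.py

and server/http.py could then import the class either absolutely or relative to the current package:

from pycached.client.client import PyCachedClient
# or, relative to pycached.server.http:
from ..client.client import PyCachedClient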
I am messing around with the combination of buildout and virtualenv to set up an isolated development environment in Python that allows reproducible builds.
There is a recipe for buildout that lets you integrate virtualenv into buildout:
tl.buildout_virtual_python
With this my buildout.cfg looks like this:
[buildout]
develop = .
parts = script
        virtualpython

[virtualpython]
recipe = tl.buildout_virtual_python
headers = true
executable-name = vp
site-packages = false

[script]
recipe = zc.recipe.egg:scripts
eggs = foo
python = virtualpython
This will deploy two executables into ./bin/:
vp
script
When I execute vp, I get an interactive, isolated Python session, as expected (it can't load any packages from the system).
What I would expect now is that if I run
./bin/script
the isolated Python interpreter is used. But it isn't; the script is not isolated the way vp is (meaning I can import libraries from the system level). However, I can run:
./bin/vp ./bin/script
Which will run the script in an isolated environment, as I wished. But there must be a way to specify this without chaining commands; otherwise buildout only solves half of the problems I hoped it would :)
Thanks for your help!
Patrick
You don't need virtualenv: buildout already provides an isolated environment, just like virtualenv.
As an example, look at the files buildout generates in the bin directory. They'll contain something like:
import sys
sys.path[0:0] = [
    '/some/thing1.egg',
    # and other things
]
So the sys.path gets completely replaced with what buildout wants to have on the path: the same isolation method as virtualenv.
zc.buildout 2.0 and later no longer provides the isolated environment.
But virtualenv 1.9 and later provides complete isolation (including not installing setuptools).
Thus the easiest way to get a buildout in a completely controlled environment is to run the following steps (here, for example, for Python 2.7):
cd /path/to/buildout
rm ./bin/python
/path/to/virtualenv-2.7 --no-setuptools --no-site-packages --clear .
./bin/python2.7 bootstrap.py
./bin/buildout
Preconditions:
bootstrap.py has to be a recent one matching the buildout version you are using. You'll find the latest at http://downloads.buildout.org/2/
if there are any version pins in your buildout, ensure they do not pin buildout itself or recipes/extensions to versions not compatible with zc.buildout 2 or later.
I had an issue running buildout using bootstrap on an Ubuntu server; since then I use virtualenv and buildout together. Simply create a virtualenv and install buildout in it. This way only virtualenv has to be installed into the system (in theory [1]).
$ virtualenv [options_you_might_need] virtual
$ source virtual/bin/activate
$ pip install zc.buildout
$ buildout -c <buildout.cfg>
Also tell buildout to put its scripts into the virtual/bin/ directory; that way the scripts appear on $PATH.
[buildout]
bin-directory = ${buildout:directory}/virtual/bin
...
[1]: In practice you will probably need to install eggs that require compilation, such as mysql or memcache bindings, at the system level.
I've never used that recipe before, but the first thing I would try is this:
[buildout]
develop = .
parts = script
        virtualpython

[virtualpython]
recipe = tl.buildout_virtual_python
headers = true
executable-name = vp
site-packages = false

[script]
recipe = zc.recipe.egg:scripts
eggs = foo
python = virtualpython
interpreter = vp
If that doesn't work, you can usually open up the scripts (in this case vp and script) in a text editor and see the Python paths they're using. If you're on Windows, there will usually be a file called <script_name>-script.py. In this case, that would be vp-script.py and script-script.py.
How can I make a setup.py file for my own script? I have to make my script global
(add it to /usr/bin) so that I can run it from the console by just typing: scriptName arguments.
OS: Linux.
EDIT:
Now my script is installable, but how can I make it global, so that I can run it from the console by just typing its name?
EDIT: This answer deals only with installing executable scripts into /usr/bin. I assume you have basic knowledge of how setup.py files work.
Create your script and place it in your project like this:
yourprojectdir/
    setup.py
    scripts/
        myscript.sh
In your setup.py file do this:
from setuptools import setup
# (distutils.core also provides setup(), but setuptools is the safer choice)

setup(
    # basic stuff here
    scripts=[
        'scripts/myscript.sh',
    ],
)
Then type
python setup.py install
Basically that's it. There's a chance that your script will land not exactly in /usr/bin, but in some other directory. If this is the case, type
python setup.py install --help
and search for the --install-scripts parameter and friends.
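For example, to force the scripts into /usr/bin (the path here is just an illustration; pick whatever your distribution expects):

python setup.py install --install-scripts=/usr/bin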
I know this question is quite old, but just in case, I'm posting how I solved the problem myself. I wanted to set up a package for PyPI that, when installed with pip, would install the script as a system-wide command, not just as a Python library.
setup(
    # rest of setup
    entry_points={
        'console_scripts': [
            '<app> = <package>.<app>:main'
        ]
    },
)
Details are in the setuptools documentation on entry points.
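Filled in with made-up names, a complete setup.py using that pattern could look like:

from setuptools import setup

setup(
    name='myapp',
    version='0.1',
    packages=['myapp'],
    entry_points={
        'console_scripts': [
            # installs a `myapp` executable that runs main() from myapp/cli.py
            'myapp = myapp.cli:main',
        ],
    },
)

After pip install, the myapp executable lands next to pip and python in the environment's bin directory, which is on $PATH.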