I have a specialized buildout package, say, my.buildout, and under src there is a MyProject package (which does not have a buildout.cfg of its own, but of course does have a setup.py). These are the relevant lines in buildout.cfg, in addition to sources:
develop = src/MyProject
auto-checkout = MyProject
When I run bootstrap and bin/buildout, src/MyProject is automatically checked out, and bin/pserve script (and many other scripts under bin) contains paths to all dependencies.
The Python interpreter comes from one of the virtual envs I have. Note that dependencies are installed by buildout under eggs, not in the virtual env.
I want PyCharm to know the whereabouts of both MyProject and the eggs, that is, everything that is normally available when the project is running.
I tried adding my.buildout as a project and set the correct Python interpreter. When I go into MyProject, dependencies are underlined in red.
I also tried adding MyProject as a project, with the same result.
I am aware of this answer:
PyCharm doesn't recognize Buildout dependencies
but I have set the interpreter to the one in the #! line of bin/pserve, and the "bin/py" script, which is used as a wrapper, naturally can't be added as an interpreter.
Is it possible to keep my.buildout and MyProject as they are, or is PyCharm's buildout support intended for a different buildout structure / development workflow?
(and sometimes there are many projects under development in src; I've simplified)
I am very new to PyCharm (trying it out), so may be missing something obvious.
Update: Of course, I've enabled Buildout support in the settings, and used the full path to the my.buildout/bin/buildout script.
Just to make it clearer: in the bin/pserve script (generated by buildout), which is used to run the app, a lot of paths are inserted into sys.path. PyCharm just does not read those. The question is how to make it aware of those paths.
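For reference, a buildout-generated console script like bin/pserve typically prepends all the egg locations to sys.path before importing the entry point, roughly like this (the paths, interpreter and entry point below are illustrative, not copied from my actual script):

#!/home/me/virtualenvs/myenv/bin/python

import sys
sys.path[0:0] = [
    '/path/to/my.buildout/src/MyProject',
    '/path/to/my.buildout/eggs/pyramid-X.Y-py2.7.egg',
    '/path/to/my.buildout/eggs/SomeDependency-X.Y-py2.7.egg',
    ]

import pyramid.scripts.pserve

if __name__ == '__main__':
    sys.exit(pyramid.scripts.pserve.main())

It is this sys.path[0:0] block that PyCharm needs to pick up.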
Update 2: And even more: when I give bin/py as the buildout script in the Settings, the project panel faithfully lists "Buildout Eggs" (from sys.path?), but still shows the "Package requirements ...." Install / Ignore suggestion.
Ok, so after adding .../bin/py (instead of bin/buildout, which contains only two paths) in the "Use paths from script" field under Settings > Build, Execution, Deployment > Buildout Support, I am getting lookups right!
(Even though PyCharm still suggests to install requirements)
The venv module (shipped with Python 3.3 or later) and virtualenv, which is still widely in use, allow you to install a project's dependencies not into the system-wide Python installation, but into a directory specific to that project.
One of the subdirectories of such a "virtual environment" contains a copy of the Python interpreter as well as "activate" and "deactivate" scripts - but this subdirectory is called Scripts on Windows, and bin on all other systems.
This is somewhat surprising. Why did they special-case Windows?
(Neither PEP 405, nor the venv or virtualenv sources (or docs) contain any explanation - a commit message in virtualenv refers to "convention")
"I think the commit message is the best you'll get. Everything else will be pure speculation." (Bryan's comment, refering to the commit message in virtualenv)
Most MS Windows programs have a GUI which is started by an icon or menu entry, so there is no need for a standardized location for binaries (which is then put on the $PATH) such as UNIX has. Also, the name bin wouldn't mean anything to Windows users the way it does to UNIX users.
Additionally, MS Windows has only very primitive package management (if you can even call it package management), so applications tend to be installed in their own directory tree where they won't interfere with each other.
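If code ever needs to locate that directory without hard-coding the Windows/Unix difference, the standard library already knows it; a minimal sketch:

import sys
import sysconfig

# the scripts directory of the environment this interpreter belongs to:
# <venv>\Scripts on Windows, <venv>/bin elsewhere
print(sysconfig.get_path("scripts"))
# the interpreter copy inside that environment
print(sys.executable)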
I'm using Python to develop a few company-specific applications. There is a custom shared module ("library") that describes some data and algorithms, and there are dozens of Python scripts that work with this library. There are quite a lot of these files, so they are organized in subfolders
myproject
    apps
        main_apps
            app1.py
            app2.py
            ...
        utils
            util1.py
            util2.py
            ...
    library
        __init__.py
        submodule1
            __init__.py
            file1.py
            ...
        submodule2
            ...
Users want to run these scripts by simply going, say, to myproject\apps\utils and launching "py util2.py some_params". Many of these users are developers, so quite often they want to edit the library and immediately re-run the scripts with the updated code. There are also some 3rd party libraries used by this project, and I want to make sure that everyone is using the same versions of these libs.
Now, there are two key problems I encountered:
how to reference (library) from (apps)?
how to manage 3rd party dependencies?
The first problem is well-familiar to many Python developers and has been asked on SO many times: it's quite difficult to instruct Python to import a package from "....\library". I tested several different approaches, but it seems that Python is reluctant to search for packages anywhere but the standard library locations or the folder of the script itself.
A relative import doesn't work since the script is not part of the library (and even if it were, this still wouldn't work when the script is executed directly, unless it's placed in the "root" project folder, which I'd like to avoid)
Placing a .pth file (as one might think from reading this document) in the script folder apparently doesn't have any effect - .pth files are only processed in site directories such as site-packages, not in arbitrary script folders
Of course, directly meddling with sys.path works, but boilerplate code like the following in each and every one of the script files looks quite terrible
import sys, os.path
# add the project root (two levels up from this script) to the import path
here = os.path.dirname(os.path.realpath(__file__))
module_root = os.path.abspath(os.path.join(here, '../..'))
sys.path.append(module_root)
import my_library
I realize that this happens because Python wants my library to be properly "installed", and that would indeed be the right way to go had this library been developed separately from the scripts that use it. But unfortunately that's not the case, and I think that re-doing the "installation" of the library each time it changes would be quite inconvenient and error-prone.
The second problem is straightforward. Someone adds a new 3rd party module to our app/lib and everyone else starts seeing import problems once they update their apps. Several branches of development, different moments when users run pip install, a few rollbacks - and everyone eventually ends up using different versions of 3rd party modules. In my case things are additionally complicated by the fact that many devs work a lot with older Python 2.x code, while I'd like to move on to Python 3.x.
While looking for a possible solution for my problems, I found a truly excellent virtual environments feature in Python. Things looked quite bright:
Create a venv for myproject
Distribute a requirements.txt file as part of the app and provide a script that populates the venv accordingly
Symlink my own library into the venv site-packages folder so it will always be detected by Python
This solution looked quite natural & robust. I'm explicitly setting up my own environment for my project and placing whatever I need into this venv, including my own lib, which I can still edit on the fly. And it does indeed work. But calling activate.bat to make this Python environment active, and another batch file to deactivate it, is a mess, especially on Windows. The boilerplate code that edits sys.path looks terrible, but at least it doesn't interfere with the UX the way this potential fix does.
So there's a question that I want to ask.
Is there a way to bind a particular Python venv to particular folders so the Python launcher will automatically use this venv for scripts from these folders?
Is there a better alternative way to handle this situation that I'm missing?
Environment for my project is Python 3.6 running on Windows 10.
I think that I finally found a reasonable answer. It's enough to just add a shebang line pointing to the Python interpreter in the venv, e.g.
#!../../venv/Scripts/python
The full project structure will look like this
myproject
    apps
        main_apps
            app1.py (with shebang)
            app2.py (with shebang)
            ...
        utils
            util1.py (with shebang)
            util2.py (with shebang)
            ...
    library
        __init__.py
        submodule1
            __init__.py
            file1.py
            ...
        submodule2
            ...
    venv
        (python interpreter, 3rd party modules)
        (symlink to library)
    requirements.txt
    init_environment.bat
and things work like this:
venv is a virtual python environment with everything that project needs
init_environment.bat is a script that populates the venv according to requirements.txt and places a symlink to my library into the venv's site-packages (a rough sketch of such a script follows this list)
all scripts start with a shebang line pointing (via a relative path) to the venv interpreter
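For illustration only, here is a minimal sketch of what such an init script could do, written as Python (say, init_environment.py) instead of a .bat file; the layout and the symlink-into-site-packages step are taken from the description above, everything else is an assumption:

import os
import subprocess
import venv

ROOT = os.path.dirname(os.path.abspath(__file__))
VENV_DIR = os.path.join(ROOT, 'venv')

# 1. Create the virtual environment (with pip) if it does not exist yet.
if not os.path.isdir(VENV_DIR):
    venv.EnvBuilder(with_pip=True).create(VENV_DIR)

# 2. Install the pinned 3rd party dependencies into the venv.
python = os.path.join(VENV_DIR, 'Scripts' if os.name == 'nt' else 'bin', 'python')
subprocess.check_call([python, '-m', 'pip', 'install', '-r',
                       os.path.join(ROOT, 'requirements.txt')])

# 3. Symlink the in-house library into the venv's site-packages
#    (on Windows this is the step that needs elevated privileges).
site_packages = subprocess.check_output(
    [python, '-c', "import sysconfig; print(sysconfig.get_paths()['purelib'])"],
    universal_newlines=True).strip()
link = os.path.join(site_packages, 'library')
if not os.path.exists(link):
    os.symlink(os.path.join(ROOT, 'library'), link, target_is_directory=True)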
There's a full custom environment with all the libs, including my own, and the scripts that use it all have very natural imports. The Python launcher will also automatically pick Python 3.6 as the interpreter & load the relevant modules whenever any user-facing script in my project is launched from the console or Windows Explorer.
Cons:
The relative shebang won't work if a script is called from another folder
Users will still have to manually run init_environment.bat to update the virtual environment according to requirements.txt
The init_environment script on Windows requires elevated privileges to make a symlink (but hopefully that strange MS decision will be fixed with the upcoming Win10 update in April 2017)
However, I can live with these limitations. Hope this will help others facing similar problems.
It would still be nice to hear other options (as answers) and criticism (as comments) too.
I'm currently moving a project between home and work using SVN. The IDE I'm using is PyCharm, which I find awesome. I get everything integrated into one tool.
PyCharm has the ability to create a setup.py from the virtualenv for me that I also commit to svn.
By default PyCharm is adding files to my svn repo with full recursion.
Should I also let PyCharm add the Include and Lib folders of my project, and the Scripts folder? I run version 2.6 at work and 2.7 at home, but I don't really want that to matter either, since code-wise it doesn't.
To me it seems better if that is updated on the other machine by running python setup.py.
The Include, Lib and Scripts folders, being part of the virtualenv, are not part of your project, so they should not be under VCS control. You might find "PyCharm: versioning .idea folder while keeping different interpreters across developers" interesting as well. In addition, you might want to take a look at the pip requirements file as a means to recreate the same environment for your project on different computers.
I have a Django project that uses a lot of 3rd party apps, so I wanted to decide between the following two approaches to manage my situation:
I can use [ virtualenv + pip ] along with pip freeze as requirements file to manage my project dependencies.
I don't have to worry about the apps, but I can't have them committed with my code to svn.
I can have a lib folder in my svn structure, have my apps sit there, and add that folder to sys.path
This way, my dependencies can be committed to svn, but I have to manage sys.path
Which way should I proceed ?
What are the pros and cons of each approach ?
Update:
Method 1 disadvantage: difficult to work with App Engine.
This has been an unanswered question (at least to me) so far. There has been some discussion on this recently:-
https://plus.google.com/u/0/104537541227697934010/posts/a4kJ9e1UUqE
Ian Bicking said this in the comment:-
I do think we can do something that incorporates both systems. I posted a recipe for handling this earlier, for instance (I suppose I should clean it up and repost it). You can handle libraries in a very similar way in Python, while still using the tools we have to manage those libraries. You can do that, but it's not at all obvious how to do that, so people tend to rely on things like reinstalling packages at deploy time.
http://tarekziade.wordpress.com/2012/02/10/defining-a-wsgi-app-deployment-standard/
The first approach seems the most common among Python devs. When I first started doing development in Django, it felt a bit weird, since in PHP it is quite common to check third-party libs into the project repo; but as Ian Bicking said in the linked post, PHP-style deployment leaves out things such as non-portable libraries. You don't want to package things such as mysqldb or PIL into your project - those are better handled by tools like pip or distribute.
So this is what I'm using currently.
All projects have a virtualenv directory at the project root. We name it .env and ignore it in VCS. The first thing a dev does when starting development is to initialize this virtualenv and install all the requirements specified in the requirements.txt file. I prefer having the virtualenv inside the project dir so that it's obvious to developers, rather than having it somewhere like $HOME/.virtualenv and then doing source $HOME/.virtualenv/project_name/bin/activate to activate the environment. Instead, developers interact with the virtualenv by invoking the env executables directly from the project root, such as:-
.env/bin/python
.env/bin/python manage.py runserver
To deploy, we have a fabric script that first exports our project directory, together with the .env directory, into a tarball, then copies the tarball to the live server, untars it into the deployment dir and does some other tasks like restarting the server, etc. When we untar the tarball on the live server, the fabric script makes sure to run virtualenv once again so that all the shebang paths in .env/bin get fixed. This means we don't have to reinstall dependencies on the live server. The fabric workflow for deployment looks like (a rough sketch of the fabfile follows the commands):-
fab create_release:1.1 # create release-1.1.tar.gz
fab deploy:1.1 # copy release-1.1.tar.gz to live server and do the deployment tasks
fab deploy:1.1,reset_env=1 # same as above but recreate virtualenv and re-install all dependencies
fab deploy:1.1,update_pkg=1 # only reinstall deps but do not destroy previous virtualenv like above
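For illustration, a rough sketch of what such a fabfile could look like with Fabric 1.x; the host name, paths and exact shell commands below are assumptions, not the author's actual script:

# fabfile.py - sketch only; adjust hosts, paths and commands to your setup
from fabric.api import cd, env, local, put, run

env.hosts = ['user@live-server']
DEPLOY_DIR = '/srv/project1'

def create_release(version):
    # pack the project dir, including .env, into release-<version>.tar.gz
    local('tar czf release-%s.tar.gz .' % version)

def deploy(version, reset_env=0, update_pkg=0):
    tarball = 'release-%s.tar.gz' % version
    put(tarball, '/tmp/%s' % tarball)
    with cd(DEPLOY_DIR):
        run('tar xzf /tmp/%s' % tarball)
        if int(reset_env):
            # recreate the virtualenv and reinstall all dependencies
            run('rm -rf .env && virtualenv .env')
            run('.env/bin/pip install -r requirements.txt')
        elif int(update_pkg):
            # reinstall deps into the existing virtualenv
            run('.env/bin/pip install -r requirements.txt')
        else:
            # just re-run virtualenv so the shebang paths in .env/bin get fixed
            run('virtualenv .env')
        # restart the server (placeholder; the real script does more here)
        run('touch /path/to/project1/app.wsgi')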
We also do not install the project src into the virtualenv using setup.py, but instead add its path to sys.path. So when deploying under mod_wsgi, we have to specify 2 paths in our vhost config for mod_wsgi, something like:-
WSGIDaemonProcess project1 user=joe group=joe processes=1 threads=25 python-path=/path/to/project1/.env/lib/python2.6/site-packages:/path/to/project1/src
In short:
We still use pip+virtualenv to manage dependencies.
We don't have to reinstall requirements when deploying.
We have to maintain the extra path in sys.path a bit.
Virtualenv and pip are fantastic for working on multiple django projects on one machine. However, if you only have one project that you are editing, it is not necessary to use virtualenv.
I'm developing a Python utility module to help with file downloads, archives, etc. I have a project set up in a virtual environment along with my unit tests. When I want to use this module on the same computer (essentially as "Production"), I move the files to the ~/dev/modules/mymodule directory
I keep all 3rd-party modules under ~/dev/modules/contrib. This contrib path is on my PYTHONPATH, but mymodule is NOT because I've noticed that if mymodule is on my PYTHONPATH, my unit tests cannot distinguish between the "Development" version and the "Production" version. But now if I want to use this common utility module, I have to manually add it to the PYTHONPATH.
This works, but I'm sure there's a better, more automated way.
What is the best way to have a Development and Production module on the same computer? For instance, is there a way to set PYTHONPATH dynamically?
You can add/modify Python paths via sys.path; just make sure that the first entry is the current directory ".", because some third-party modules rely on importing from the directory of the current module (a short sketch follows the link below).
More information on python paths:
http://djangotricks.blogspot.com/2008/09/note-on-python-paths.html
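A minimal sketch of that kind of sys.path manipulation (the contrib path is taken from the question, so treat it as an example):

import os
import sys

# keep the current directory first, as some third-party modules expect it
if sys.path[0] not in ('', '.'):
    sys.path.insert(0, '.')
# then append any extra locations, e.g. the shared modules directory
sys.path.append(os.path.expanduser('~/dev/modules/contrib'))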
I'm guessing by virtual environment you mean the virtualenv package?
http://pypi.python.org/pypi/virtualenv
What I'd try (and apologies if I've not understood the question right) is:
Keep the source somewhere that isn't referenced by PYTHONPATH (e.g. ~/projects/myproject)
Write a simple setuptools or distutils script for installing it (see Python distutils - does anyone know how to use it?); a minimal sketch follows this list
Use the virtualenv package to create a dev virtual environment with the --no-site-packages option - this way your "dev" version won't see any packages installed in the default python installation.
(Also make sure your PYTHONPATH doesn't have any of your source directories)
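A minimal setuptools script along those lines could look like this (the project name and automatic package discovery are assumptions):

# setup.py - minimal sketch
from setuptools import setup, find_packages

setup(
    name='myproject',
    version='0.1',
    packages=find_packages(),
)

With this in place, running python setup.py install inside the activated virtualenv puts the package into that environment's site-packages.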
Then, for testing:
Activate dev virtual environment
Run the install script (usually something like python setup.py build install). Your package ends up in /path/to/dev_virtualenv/lib/python2.x/site-packages/
Test, break, fix, repeat
And, for production:
Make sure dev virtualenv isn't activated
Run install script
All good to go, the "dev" version is hidden away in a virtual environment that production can't see...
...And there's no (direct) messing around with PYTHONPATH
That said, I write this with the confidence of someone who hasn't actually tried using virtualenv in anger, and the hope that I've vaguely understood your question... ;)
You could set the PYTHONPATH as a global environment variable pointing to your Production code, and then in any shell in which you want to use the Development code, change the PYTHONPATH to point to that code.
(Is that too simplistic? Have I missed something?)