Create Python .egg that installs from a git repository

I'm currently researching deployment techniques for our Python products. We manage our code with multiple git repositories already but want to improve the process of setting up and updating our servers. It seems that easy_install, .egg files and virtualenv are the best tools for doing this nowadays.
Here's the catch: We don't really do versioning; all our products have a master branch which is supposed to provide stable code all the time. If we want to update, we have to git pull the master branch on every server, for each product and all its dependencies.
This solution is very time-consuming and we want to improve it.
My idea was to create a virtualenv instance on all servers/installations and use easy_install to install and update our own packages, but I couldn't find a way to specify a git repository as a source for the source code.
Is there a way to achieve that? Did I miss something? Am I going in the wrong direction and this is a bad idea overall?
Thanks in advance,
Fabian

You can use pip instead of easy_install. It supports a number of ways to specify where to get a package from, one of them being a git repository, so you could install your package like this:
pip install git+https://my.git-repo.com/my_project.git
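pip's VCS support also lets you pin a branch, tag or commit and use SSH instead of HTTPS. A quick sketch, where my.git-repo.com and my_project are placeholders for your own host and package:
pip install git+ssh://git@my.git-repo.com/my_project.git
pip install git+https://my.git-repo.com/my_project.git@master#egg=my_project
The part after @ can be a branch name, tag or commit hash, and #egg= tells pip the project name before it has fetched the code.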

Related

python pip priority order with index-url and extra-index-url

I searched a bit but could not find a clear answer.
The goal is to have two pip indexes: a private index, which takes first priority, and the standard PyPI. The priority is there to prevent the security risk of code injection.
Say I have a library named lib, and I configure index-url = http://my_private_pypi_repo and extra-index-url = https://pypi.org/simple.
If I pip install lib and lib exists in both indexes, which index gets priority? Which one will it be installed from?
Also, if I pip install lib==0.0.2 but lib only exists in my private index at version 0.0.1, is it going to look at PyPI as well?
And what is a good way to ensure that certain libraries are only fetched from the private index if they exist there, and are never looked up on PyPI?
The short answer is: there is no prioritization and you probably should avoid using --extra-index-url entirely.
This is asked and answered here: https://github.com/pypa/pip/issues/5045#issuecomment-369521345
Question:
I have this in my pip.conf:
[global]
index-url = https://myregistry-xyz.com
extra-index-url = https://pypi.python.org/pypi
Let's assume packageX exists in both registries and I run pip install packageX.
I expect pip to install packageX from https://myregistry-xyz.com, but pip will use https://pypi.python.org/pypi instead.
If I switch the values for index-url and extra-index-url I get the same result. pypi is always prioritized.
Answer:
Packages are expected to be unique up to name and version, so two wheels with the same package name and version are treated as indistinguishable by pip. This is a deliberate feature of the package metadata, and not likely to change.
I would also recommend reading this discussion: https://discuss.python.org/t/dependency-notation-including-the-index-url/5659
Quite a lot of things are addressed in that discussion, some of them clearly out of scope for this question, but it is all very informative anyway.
The key takeaway for you is this:
Pip does not really prioritize one index over the other in theory. In practice, because of a coincidence in the way things are implemented in code, it might be that one is always checked first, but it is not a behavior you should rely on.
And what is a good way to ensure that certain libraries are only fetched from the private index if they exist there, and are never looked up on PyPI?
You should set up and curate your own package index (devpi, pydist, jfrog artifactory, sonatype nexus, etc.) and use it exclusively, meaning: never use --extra-index-url. This is the only way to have exact control over what gets downloaded. This custom repository might function mostly as a proxy for the public PyPI, except for a couple of dependencies.
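As a minimal sketch, a pip.conf that points exclusively at such a private index might look like this; the URL is a placeholder, and it assumes the index itself (e.g. a devpi instance) is configured to proxy PyPI for everything you have not published yourself:
[global]
index-url = https://pypi.internal.example.com/simple
There is deliberately no extra-index-url entry, so pip never falls back to the public PyPI on its own.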
Related:
pip: selecting index url based on package name?
The title of this question feels a bit like an instance of the XY problem[1]. If you elaborate on what you want to achieve and what your constraints are, we may be able to give you a better answer.
That said, sinoroc's suggestion to curate your own package index and use only that is a good one. A few other ideas also come to mind:
Update: It turns out pip may install distributions other than those listed in the constraints file, so this method should probably be considered insecure. Additionally, hash checking is somewhat broken on recent releases of pip.
Use a constraints file with hashes. This file can be generated using pip-tools, e.g. pip-compile --generate-hashes, assuming you have documented your dependencies in a file named requirements.in. You can then install packages like pip install -c requirements.txt some_package (see the command sketch after this list).
Pro: What may be installed is documented alongside your code in your VCS.
Con: Controlling what is downloaded the first time is either tricky or laborious.
Con: Hash checking can be slow.
Con: You run into issues more frequently than when not using hashes. Some can be worked around, others cannot; it is for instance not possible to combine -e file:// constraints with hashes.
Use an alternative packaging tool like pipenv. It works similarly to the previous suggestion.
Pro: Easy to use
Con: Harder to integrate into your workflow if it does not fit naturally.
Curate packages locally. Packages and dependencies can be downloaded like pip download --dest some_dir some_package and installed like pip install --no-index --find-links some_dir some_package (see the command sketch after this list).
Pro: What may be installed can be documented alongside your code, if you track the artifacts in VCS e.g. git lfs.
Con: Either all packages are downloaded or none are.
Use a hermetic build system. I know Bazel advertises this as a feature; I am not sure about others like Pants and Buck.
Pro: May be the ultimate solution if you want control over your builds.
Con: Does not integrate well with the open source Python ecosystem, as far as I know.
Con: A lot of overhead.
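To make the constraints/hashes and local-curation ideas more concrete, here is a rough sketch of the commands involved; some_package, requirements.in and vendor/ are placeholders, and because of the caveat in the update above the hashed pins are passed as a requirements file rather than a constraints file:
# Pin and hash everything declared in requirements.in (pip-tools)
pip-compile --generate-hashes requirements.in      # writes requirements.txt
pip install --require-hashes -r requirements.txt
# Curate locally: download once, then install offline from that directory
pip download --dest vendor/ some_package
pip install --no-index --find-links vendor/ some_package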
[1] https://en.wikipedia.org/wiki/XY_problem

Does the Python Package Manager **pip** have something like yarn link?

I want to fork a Python (pip) dependency that I am using and make some edits to it, and I don't want to risk a pip update/upgrade erasing my changes.
In the JavaScript world, an easy way to do what I want is the yarn link command.
Is there a command similar to yarn link when using python/pip?
So, I found out how to do this. Instead of doing a normal pip install, you can do the following:
Check out a clone of the forked package.
Then run this command:
pip install -e /path/to/the/package/on/local/file/system
This creates an editable install of the package in the folder of your choosing, so you can develop and make changes and see the effect of the changes immediately.
I'm sure seasoned python developers already know this. But I'm not in python everyday. I've been wanting to know how to do this for a long time. Finally figured it out. I hope this helps someone else!
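For completeness, the whole flow, with a placeholder fork URL and package name:
git clone https://github.com/yourname/some-forked-package.git
pip install -e ./some-forked-package
pip show some-forked-package    # the reported location should point at your clone
Because the install is editable, edits in the clone take effect on the next import without reinstalling.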

Python: incorporate git repo in project

What is best practice for incorporating git repositories that do not contain a setup.py into your own project? There are several ways I could imagine, but it is not clear which is best.
Copy the relevant code and include it in your own project
pro:
Only the relevant code is used
con:
the upstream git repo might be updated and your copy goes stale
need to do this for every project again
feels like stealing
Clone the repository, write a setup.py for it and install it with pip
pro:
easy
package can be updated
can use package like any normal pip package
con:
feels weird
Clone the repository and add the path to the project's search path
pro:
easy
package can be updated
con:
needing to adjust the search path also feels strange
In my opinion, you forgot the best option: Ask the original project maintainer to make the package available via pip. Since pip can install directly from git repositories this doesn't take more than a setup.py -- in particular, you don't need a PyPI account, you don't need to tag releases, etc.
If that's not possible then I would opt for your second option, i.e. provide my own setup.py file in a fork of the project. This makes incorporating upstream changes pretty easy (basically you simply git pull them from the upstream repo) and gives you all the benefits of package management (automatic installation, dependency management, etc.).
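A sketch of how little the setup.py needs to contain; their_package and the URLs are placeholders for the actual project:
from setuptools import setup, find_packages

setup(
    name="their-package",       # placeholder project name
    version="0.1.0",            # any version scheme you like for your fork
    packages=find_packages(),   # picks up the package directories in the repo
)
With that committed to your fork, installation becomes a one-liner:
pip install git+https://github.com/yourname/their-package-fork.git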

Easy-install live python libraries/scripts

I have a number of Python "script suites" (as I call them) which I would like to make easy to install for my colleagues. I have looked into pip, and that seems really nice, but in that workflow (as I understand it) I would have to publish a static version and update it on every change.
As it happens I am going to be adding and changing a lot of stuff in my script suites along the way, and whenever someone installs it, I would like them to get the newest version. With pip, that means that on every commit to my repository, I will also have to re-submit a package to the PyPI index. That's a lot of unnecessary work.
Is there any way to provide an easy cross-platform installation (via pip or otherwise) which pulls the files directly from my github repo?
I'm not sure if I understand your problem entirely, but you might want to use pip's editable installs[1].
Here's a brief, artificial example, assuming you use git as your VCS:
git clone url_to_myrepo.git path/to/local_repository
pip install [--user] -e path/to/local_repository
The installation of the package will reflect the state of your local repository. Therefore there is no need to reinstall the package with pip when the remote repository gets updated. Whenever you pull changes to your local repository, the installation will be up-to-date as well.
[1] http://pip.readthedocs.org/en/latest/reference/pip_install.html#editable-installs
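If your colleagues should not even need a local clone, pip can also install straight from the remote repository; with a placeholder GitHub URL and project name:
pip install git+https://github.com/youruser/your-script-suite.git
pip install -e git+https://github.com/youruser/your-script-suite.git#egg=your_script_suite
The first form installs whatever is currently on the default branch, so re-running it after you push is enough to pick up the newest version; the editable second form additionally keeps a checkout (under a src/ directory by default) that you can update with git pull.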

Is there a way to "version" my python distribution?

I'm working by myself right now, but am looking at ways to scale my operation.
I'd like to find an easy way to version my Python distribution, so that I can recreate it very easily. Is there a tool to do this? Or can I add /usr/local/lib/python2.7/site-packages/ (or whatever) to an svn repo? This doesn't solve the problems with PATHs, but I can always write a script to alter the path. Ideally, the solution would be to build my Python env in a VM, and then hand copies of the VM out.
How have other people solved this?
virtualenv + requirements.txt are your friend.
You can create several virtual Python installs for your projects, each containing exactly the library versions you need (tip: pip freeze outputs the exact library versions in requirements.txt format).
Find a good reference to virtualenv here: http://simononsoftware.com/virtualenv-tutorial/ (it's from this question Comprehensive beginner's virtualenv tutorial?).
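A minimal sketch of that workflow, with placeholder directory names:
# on the machine whose environment you want to reproduce
pip freeze > requirements.txt
# on any other machine
virtualenv env
source env/bin/activate            # on Windows: env\Scripts\activate
pip install -r requirements.txt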
Alternatively, if you just want to distribute your code together with libraries, PyInstaller is worth a try. You can package everything together in a static executable - you don't even have to install the software afterwards.
You want to use virtualenv. It lets you create an application-specific directory for installed packages. You can also use pip to generate a requirements.txt and install from it.
For the same goal, i.e. having the exact same Python distribution as my colleagues, I tried to create a virtual environment on a network drive, so that all of us would be able to use it without anybody making a local copy.
The idea was to share the same packages installed in a shared folder.
Outcome: Python ran so unbearably slowly that it could not be used. Installing a package was also very sluggish.
So it looks like there is no other way than using virtualenv and a requirements file. (Even if, unfortunately, it does not always work smoothly on Windows and requires manual installation of some packages and dependencies, at least at the time of writing.)
