I'm currently developing a CI process for a microservices Python application. It is built as follows: each microservice is packaged as a Docker image, and there are several of those. In addition, there's some common code which is packaged as a PyPI package and is consumed by the services.
For the sake of discussion, let's say we have a service called foo and that the common code is called lib.
In day-to-day development, we want foo to consume the latest version of lib. But once we want to release a version of foo, we want to merge the code to the main branch and record the exact version of lib in foo's requirements.txt.
The idea that came up is to work in the following manner: in foo, we'll have develop and master branches. On each push to develop, we'll build an image with the latest version of lib. When the developers merge the code to master, we run pip freeze > requirements.txt and push the result back to master, so that when we want to come back to this version, we'll have a requirements.txt pinned to a specific version of lib (and of the rest of the dependencies).
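For illustration, the release step on master could boil down to something like this minimal sketch (branch and file names are the ones used above; the surrounding CI plumbing is assumed):

import subprocess

def pin_requirements(path="requirements.txt"):
    # Capture the fully resolved environment, exactly as pip freeze prints it.
    frozen = subprocess.run(
        ["pip", "freeze"], check=True, capture_output=True, text=True
    ).stdout
    with open(path, "w") as f:
        f.write(frozen)

def commit_pins(path="requirements.txt", branch="master"):
    # Push the pinned file back to the release branch so the exact versions are recorded.
    subprocess.run(["git", "add", path], check=True)
    subprocess.run(["git", "commit", "-m", "Pin dependencies for release"], check=True)
    subprocess.run(["git", "push", "origin", branch], check=True)

if __name__ == "__main__":
    pin_requirements()
    commit_pins()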
That sounds OK overall, but let's add a complication:
Our lib, in turn, depends on another PyPI package; let's call it utils. The setup.py of lib contains an install_requires field which specifies utils. Again, in day-to-day development we want lib to consume the latest version of utils, but when we merge the code to main (in lib), we want to pin a specific version.
The question is: is there a way to automatically update the install_requires section of setup.py, the way pip freeze automatically updates requirements.txt?
But actually my broader question is: does this process make sense? Maybe we're missing something here?
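For what it's worth, there is no pip freeze equivalent for install_requires, but the pinning can be scripted. A rough sketch, under the assumption that lib keeps its top-level dependency names (e.g. utils) in a plain deps.txt file that setup.py reads into install_requires, so the release job can rewrite that file with exact versions taken from the current build environment:

from importlib.metadata import version  # Python 3.8+

def pin_top_level_deps(path="deps.txt"):
    # deps.txt is a hypothetical file holding bare dependency names, one per line.
    with open(path) as f:
        names = [line.strip() for line in f if line.strip() and not line.startswith("#")]
    # Replace each bare name with an exact pin based on what is installed right now.
    pinned = [f"{name}=={version(name)}" for name in names]
    with open(path, "w") as f:
        f.write("\n".join(pinned) + "\n")

if __name__ == "__main__":
    pin_top_level_deps()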
I ran into a problem when trying to figure out how to specify dependencies for a Python application which uses pyproject.toml.
What is the best practice when it comes to installing applications? A couple of approaches I am considering:
1. Have a requirements.txt with resolved and pinned dependencies, and declare dynamic dependencies in pyproject.toml which reference this requirements file.
2. Pin only top-level dependencies in pyproject.toml, without any requirements file.
3. Have a requirements.txt as in the first case, again referenced as dynamic dependencies in pyproject.toml, but also have optional dev dependencies containing the unpinned top-level dependencies.
What I am looking for is reproducibility, but also minimal manual work when developing the app. The third approach makes the most sense to me, as it would only require a pip freeze before pushing my new changes. I have used pyproject.toml only for developing Python libraries so far (well, this is actually the first app I am creating in Python in general).
I have been playing around with poetry, but I would be more interested in how to do this with a minimal setup, i.e. pip + pyproject.toml.
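For reference, the third approach can be expressed with setuptools' dynamic metadata; a minimal sketch (project name and file names are placeholders, and a reasonably recent setuptools is assumed):

[build-system]
requires = ["setuptools>=64"]
build-backend = "setuptools.build_meta"

[project]
name = "myapp"  # placeholder
version = "0.1.0"
dynamic = ["dependencies", "optional-dependencies"]

[tool.setuptools.dynamic]
dependencies = { file = ["requirements.txt"] }                  # pinned, produced by pip freeze
optional-dependencies.dev = { file = ["requirements-dev.txt"] } # unpinned top-level deps

With that layout, pip install . installs the pinned set, while pip install ".[dev]" additionally pulls in the unpinned development dependencies.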
I have a Python 3 (>=3.7) project that contains a complex web service. At the same time, I want to put a portion of the project (e.g. the API client) on PyPI to allow external applications to interface with it.
I would like to publish morphocluster while morphocluster.server (containing all service-related functionality) should be excluded. (That means that pip install morphocluster and pip install git+https://github.com/morphocluster/morphocluster/ should install morphocluster with all submodules except morphocluster.server.)
Moreover, morphocluster.server must be installable via a separate setup.py for installation in the service docker container.
Is that achievable in any way without splitting the project into distinct morphocluster and morphocluster_server packages?
I already had a look at namespace packages, but it seems that they don't allow functionality in the namespace itself.
Also setuptools.find_packages looked helpful, but wouldn't help with making morphocluster.server installable separately.
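For the exclusion half of this, the usual setuptools pattern is an exclude filter in find_packages; a minimal sketch (the version is a placeholder), which on its own does not solve making morphocluster.server separately installable:

from setuptools import setup, find_packages

setup(
    name="morphocluster",
    version="0.1.0",  # placeholder
    # Ship every subpackage except the service code.
    packages=find_packages(exclude=["morphocluster.server", "morphocluster.server.*"]),
)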
I have multiple shared internal libraries that many repositories depend on. Right now, these libraries live in the same git repository and are submoduled into each application when needed. At build time, I pip install the libraries. The problem I am facing is that these internal libraries also depend on each other, but those dependencies can't be resolved since the libraries are in local folders.
For example, I have a local library A that depends on B. This will NOT work:
setup(
    name='A_package',
    install_requires=[
        'B_package',  # source file in local folder
    ],
    ...
)
since pip tries to find B_package on PyPI.
I have searched for solutions; however, I can't seem to find a straightforward one such as:
install_requires=[
    '/commonlib/path/B_package',
],
This way, I could just pip install A_package and B_package would also be located and installed.
The reason I would like to have the shared library source code as a submodule is to make development easier, so engineers can modify and commit the libraries whenever needed. I am open to any other suggestions.
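For illustration, the closest real equivalent to the wished-for syntax above is a PEP 508 direct reference; a sketch with an illustrative absolute path (recent pip and setuptools generally accept this for local installs, but PyPI rejects packages whose requirements contain direct references):

from setuptools import setup

setup(
    name="A_package",
    install_requires=[
        # Direct reference to the local checkout of B; the path is illustrative.
        "B_package @ file:///commonlib/path/B_package",
    ],
)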
Cost of Publishing Packages vs. Speed of Development
The tradeoff between these two is the key here.
Using a Git Repo: Fast at Small Scale, Problems When It Grows
This is what I first tried at my company. We didn't use submodules; we just put the git repo under the directory where pip installs packages, like /Users/xxx/miniconda3/lib/python3.6/site-packages. pip will always treat that package as installed, and we synchronize the git repo to update the package.
This works great at small scale, but it brings problems when the project grows. When you use a git repo, you are tracking git revisions instead of PyPI package versions, so you need to maintain version dependencies manually. Suppose project A uses package B, and both have their own versions; how do you maintain the dependency? Two options:
Always use the latest version of B and keep B backward compatible. Easy for A, but it puts the burden on B.
Just as with a Python package version, record a git revision or tag in A; people then need to check out B every time they switch to a different branch in A. Painful for a large project with multiple branches.
And if you have multiple virtualenvs, you have extra work to do.
Publish Packages, But Minimize the Cost: The Way to Go
This is what I ended up with at my company. I set up our own PyPI server and GitLab CI to publish a package whenever a tag is pushed. It doesn't have the problems of the previous approach and still supports fast development iteration.
For Developer
$ git commit ...
$ git tag ...
$ git push && git push --tags
Tagging and pushing is all they need to do to publish a package; it's cheap. And we actually use bumpversion to manage the version instead of tagging manually.
For User
$ pip install -r requirements.txt
Every time they switch to another branch of A, or someone fixes a bug in B, they only need to run pip install again.
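With the internal PyPI server in place, the requirements file itself stays ordinary; it just needs to point pip at the extra index. A sketch (hostname, package names and versions are illustrative):

--extra-index-url https://pypi.internal.example.com/simple
B_package>=1.2   # internal library, served by the private index
requests>=2.20   # regular public dependencies still resolve from PyPI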
EDIT 2019.05.27
It's possible to do what you want with setup.py. When you run pip install, pip downloads the package, unpacks it and runs python setup.py install, so you can add custom logic in setup.py:
from setuptools import setup

install_requires = ['b', 'c', 'd']

# make sure it's on the Python path and has been checked out
if is_package_b_installed_as_git_repo():
    install_requires.remove('b')

setup(
    name='A_package',
    install_requires=install_requires,
    ...
)
I'm developing a test Django site, which I keep in a Bitbucket repository in order to be able to deploy it easily on a remote server and possibly share development with a friend. I use hg for version control.
The site depends on a 3rd party app (django-registration), which I needed to customize for my site, so I forked the original app and created a second repository for it (the idea being that this way I can keep up with updates on the original, which wouldn't be possible if I just pasted the code into my main site, plus add my own custom code). (You can see some more details in this question.)
My question is: how do I specify requirements in my setup.py file so that when I install my Django site I get the latest version of my fork of the 3rd party app (I use distribute rather than setuptools, in case that makes a difference)?
I have tried this:
install_requires = ['django', 'django-registration'],
dependency_links = ['https://myuser#bitbucket.org/myuser/django-registration#egg=django_registration']
but this gets me the latest named version from the original trunk (so not even the tip revision).
Using a pip requirements file, however, works well:
hg+https://myuser#bitbucket.org/myuser/django-registration#egg=django-registration
gets me the latest version from my fork.
Is there a way to get this same behaviour directly from the setup.py file, without having to first install the code for the site and then run pip install -r requirements.txt?
This question is very informative, but seems to suggest I should depend on version 'dev' of the 3rd party package, which doesn't work (I guess there would have to be a specific version tagged as dev for that).
Also, I'm a complete newbie at packaging / distribute / setuptools, so don't hold back spelling out the steps :)
Maybe I should change the setup.py file in my fork of the 3rd party app and make sure it mentions a version number. More generally, I'm curious to know: what is a source distribution, as opposed to simply having my code in a public repository? What would a binary distribution be in my case (an egg file?), and would that be any more practical for me when deploying remotely or having my friend deploy on his PC? I'd also like to know how to tag a version in my repository so that setup.py can refer to it; is it simply a version control tag (hg in my case)? Feel free to comment on any details you think are important for the beginner packager :)
Thanks!
put this:
dependency_links=['https://bitbucket.org/abraneo/django-registration/get/tip.tar.gz#egg=django-registration']
In dependency_links you have to pass a download URL like that one.
"abraneo" is someone who also forked this project; replace his name with yours.
Just curious how people are deploying their Django projects in combination with virtualenv.
More specifically, how do you keep your production virtualenvs correctly synched with your development machine?
I use git for SCM, but I don't have my virtualenv inside the git repo. Should I, or is it best to use pip freeze and then re-create the environment on the server using the freeze output? (If you do this, could you please describe the steps? I am finding very little good documentation on the unfreezing process. Is something like pip install -r freeze_output.txt possible?)
I just set something like this up at work using pip, Fabric and git. The flow is basically like this, and borrows heavily from this script:
In our source tree, we maintain a requirements.txt file. We'll maintain this manually.
When we do a new release, the Fabric script creates an archive based on whatever treeish we pass it.
Fabric will find the SHA for what we're deploying with git log -1 --format=format:%h TREEISH. That gives us SHA_OF_THE_RELEASE
Fabric will get the last SHA for our requirements file with git log -1 --format=format:%h SHA_OF_THE_RELEASE requirements.txt. This spits out the short version of the hash, like 1d02afc which is the SHA for that file for this particular release.
The Fabric script will then look into a directory where our virtualenvs are stored on the remote host(s).
If there is no directory named 1d02afc, a new virtualenv is created and set up with pip install -E /path/to/venv/1d02afc -r /path/to/requirements.txt
If there is an existing /path/to/venv/1d02afc, nothing is done.
The little magic part of this is passing whatever tree-ish you want to git, and having it do the packaging (from Fabric). By using git archive my-branch, git archive 1d02afc or whatever else, I'm guaranteed to get the right packages installed on my remote machines.
I went this route since I really didn't want extra virtualenvs floating around if the packages hadn't changed between releases. I also don't like the idea of having the actual packages I depend on in my own source tree.
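A rough sketch of that SHA-keyed virtualenv logic, written with plain subprocess calls rather than Fabric, and using python -m venv instead of the old pip install -E flag (paths are illustrative and the remote-execution plumbing is left out):

import os
import subprocess

VENV_ROOT = "/path/to/venv"  # illustrative

def requirements_sha(treeish):
    # Short SHA of the last commit that touched requirements.txt up to the given tree-ish.
    return subprocess.run(
        ["git", "log", "-1", "--format=format:%h", treeish, "--", "requirements.txt"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

def ensure_venv(sha):
    venv_path = os.path.join(VENV_ROOT, sha)
    if os.path.isdir(venv_path):
        return venv_path  # requirements unchanged since a previous release: reuse the env
    subprocess.run(["python", "-m", "venv", venv_path], check=True)
    subprocess.run(
        [os.path.join(venv_path, "bin", "pip"), "install", "-r", "requirements.txt"],
        check=True,
    )
    return venv_path

if __name__ == "__main__":
    ensure_venv(requirements_sha("HEAD"))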
I use this bootstrap.py: http://github.com/ccnmtl/ccnmtldjango/blob/master/ccnmtldjango/template/bootstrap.py
which expects a directory called 'requirements' that looks something like this: http://github.com/ccnmtl/ccnmtldjango/tree/master/ccnmtldjango/template/requirements/
There's an apps.txt, a libs.txt (which apps.txt includes; I just like to keep Django apps separate from other Python modules) and a src directory which contains the actual tarballs.
When ./bootstrap.py is run, it creates the virtualenv (wiping the previous one if it exists) and installs everything from requirements/apps.txt into it. I never install anything into the virtualenv otherwise. If I want to include a new library, I put the tarball into requirements/src/, add a line to one of the text files and re-run ./bootstrap.py.
bootstrap.py and requirements get checked into version control (along with a copy of pip.py, so I don't even need pip installed system-wide anywhere). The virtualenv itself isn't. The scripts I have that push out to production run ./bootstrap.py on the production server on each push. (bootstrap.py also goes to some lengths to ensure that it sticks to Python 2.5, since that's what we have on the production servers (Ubuntu Hardy), and my dev machine (Ubuntu Karmic) defaults to Python 2.6 if you're not careful.)
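The core of such a bootstrap script boils down to something like the following minimal sketch (a modernised outline of the same idea, not the linked bootstrap.py itself; paths follow the layout described above):

import os
import shutil
import subprocess

VENV_DIR = "ve"                         # illustrative virtualenv location
REQUIREMENTS = "requirements/apps.txt"  # as described above

def bootstrap():
    # Wipe any previous environment so the result reflects only the requirements files.
    if os.path.isdir(VENV_DIR):
        shutil.rmtree(VENV_DIR)
    subprocess.run(["python", "-m", "venv", VENV_DIR], check=True)
    pip = os.path.join(VENV_DIR, "bin", "pip")
    subprocess.run([pip, "install", "-r", REQUIREMENTS], check=True)

if __name__ == "__main__":
    bootstrap()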