I want to fork a Python (pip) dependency that I'm using and make some edits to it, without risking a pip update/upgrade erasing my changes.
In the JavaScript world, an easy way to do what I want is the yarn link command.
Is there a command similar to yarn link when using Python/pip?
So, I found out how to do this. Instead of doing a normal pip install, you can do the following:
Check out a repo of the forked package.
Then, run this command:
pip install -e /path/to/the/package/on/local/file/system
This creates an editable install of the package in the folder of your choosing, so you can develop, make changes, and see their effect immediately.
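For example, the full flow might look like this (the repository URL and folder name are placeholders for your own fork):
git clone https://github.com/yourname/somepackage.git
pip install -e ./somepackage
After this, import somepackage resolves to the working copy, so edits take effect the next time you start the interpreter, with no reinstall needed.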
I'm sure seasoned Python developers already know this, but I'm not in Python every day. I've been wanting to know how to do this for a long time and finally figured it out. I hope this helps someone else!
Related
Trying to find a way to get a tool like pylint (so we can use pyreverse). However, the conditions: restricted network, Docker is not acceptable, and we are NOT allowed to use pip, so please don't suggest it; it's a no-go. Is there a way to install it from the git source code? It seems there USED to be a setup.py file (in the old docs), but it no longer exists. Does anyone have any clue how this might be achieved? Manually getting all dependencies etc. is fine, as long as there's a process that does not use pip or Docker.
Let's say I have a standard Python package directory structure like here, and suppose I need to add a function to the package. More specifically, I want to do it by trial and error, by running test code. What is the correct workflow for this?
I currently do the following:
run sudo python setup.py install any time I make a change to the package,
source ~/.bashrc
open a python interpreter,
run the test code.
But this flow takes a lot of time just to check a modification via the test code, and I feel that I'm doing something wrong and that better ways exist.
I would comment, but alas I do not have enough reputation.
Use pip install -e path/to/package. This will install it "editably" and locally, so any change to the package code is effective immediately on your system. This way you can change and test code on the fly.
Further reading: https://stackoverflow.com/a/23075617/12164878
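To sketch the faster loop (the path and module name here are placeholders for your own package):
pip install -e path/to/package
python -c "import mypackage; print(mypackage.__file__)"
The printed path points into your working copy, so after editing the source you just restart the interpreter (or reload the module) and re-run the test code; there is no reinstall step.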
I have jupyter-notebook running on my own Mac with the calysto-processing library plugged in, so I can run Processing scripts in a notebook in a browser tab. But I am trying to run this all in Binder, so that I can share my Processing scripts with students during class. I created a GitHub repository and linked it to a Binder; the Binder builds and launches, but the only kernel available is Python 3.
I have read that I can include a bunch of configuration files, but I'm new to these, and I don't see any examples that bring in the calysto-processing kernel, so I'm unsure how to proceed.
Screenshot of my Binder with the Jupyter notebook and a Processing script; when you click on Kernels, the only kernel it shows is Python:
Any help would be appreciated.
Very good question. Ayman's suggestion is good.
I've just installed calysto_processing and noticed three things are necessary:
installing the calysto_processing package via pip,
running install on the calysto_processing package.
installing Processing.
First point should be easy with requirements.txt.
I'm unsure what the best option is for the second step (maybe a custom setup.py ?).
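Concretely, the first two steps come down to these commands (the second is the same install subcommand that reappears in the postBuild file below):
pip install calysto_processing
python -m calysto_processing install --user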
Step 3 feels the trickiest.
Installing Processing currently isn't supported with apt-get, so a Dockerfile might be the way forward (even though mybinder recommends that only as a last resort).
Let's assume a Dockerfile would contain all the steps to manually download/install Processing (and I'm not super experienced with Docker at the moment, btw); it would still need to be executed, which would require a windowing system to render the Processing window.
I don't know how well that plays with Docker, sounds like it's getting into virtual machine territory.
That being said, looking at the source code right here:
Processing is used only to validate the sketch and pull out syntax errors so they can be displayed.
ProcessingJS is used to actually render the Processing code in a <canvas/> element within the Jupyter notebook.
I'm not sure what the easiest way is to run the current calysto_processing on mybinder as-is.
My pragmatic (even hacky if you will) suggestion is to:
fork the project and remove the processing-java dependency (which means you might lose error checking)
install the cloned/tweaked version via pip/requirements.txt (pip can install a package from a GitHub repo)
Update: I have tried the above; you can run a test kernel here.
The source is here and the module is installed from this fork, which simply comments out the processing-java part.
In terms of the mybinder configuration, it boils down to the following (the resulting layout is sketched after the list):
create a binder folder in the repo containing the notebook
add requirements.txt which points to the tweaked version of calysto_processing stripped off the processing-java dependency: git+https://github.com/orgicus/calysto_processing.git#hotfix/PJS-only-test
add postBuild file which runs install on the calysto_processing module: python -m calysto_processing install --user
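Putting those steps together, the repository layout looks roughly like this (the file contents are exactly the ones quoted above):
binder/
    requirements.txt   -> contains: git+https://github.com/orgicus/calysto_processing.git#hotfix/PJS-only-test
    postBuild          -> contains: python -m calysto_processing install --user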
Notes
With this workaround, Java error checking is gone.
Although Processing syntax is used, it's executed as JavaScript and rendered in a <canvas/> using ProcessingJS: this means no processing-java libraries, no threads or other Java-specific features, (buggy or no) 3D, etc.; just basic Processing drawing sketches.
It might be worth looking into replacing ProcessingJS with p5.js, and checking out other JS notebooks (e.g. Observable or IJavascript).
I have a number of Python "script suites" (as I call them) which I would like to make easy to install for my colleagues. I have looked into pip, and that seems really nice, but in that model (as I understand it) I would have to submit a static version and update it on every change.
As it happens, I am going to be adding and changing a lot in my script suites along the way, and whenever someone installs them, I would like them to get the newest version. With pip, that means that on every commit to my repository, I would also have to re-submit a package to the PyPI index. That's a lot of unnecessary work.
Is there any way to provide an easy cross-platform installation (via pip or otherwise) which pulls the files directly from my github repo?
I'm not sure if I understand your problem entirely, but you might want to use pip's editable installs [1].
Here's a brief example. In this artificial example, let's suppose you use git as your VCS.
git clone url_to_myrepo.git path/to/local_repository
pip install [--user] -e path/to/local_repository
The installation of the package will reflect the state of your local repository. Therefore there is no need to reinstall the package with pip when the remote repository gets updated. Whenever you pull changes to your local repository, the installation will be up-to-date as well.
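For example, updating later is just a pull in the same clone, with no reinstall step:
cd path/to/local_repository
git pull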
[1] http://pip.readthedocs.org/en/latest/reference/pip_install.html#editable-installs
I'm currently researching deployment techniques for our Python products. We manage our code with multiple git repositories already but want to improve the process of setting up and updating our servers. It seems that easy_install, .egg files and virtualenv are the best tools for doing this nowadays.
Here's the catch: We don't really do versioning; all our products have a master branch which is supposed to provide stable code all the time. If we want to update, we have to git pull the master branch on every server, for each product and all its dependencies.
This solution is very time-consuming and we want to improve it.
My idea was to create a virtualenv instance on all servers/installations and use easy_install to install and update our own packages, but I couldn't find a way to specify a git repository as a source for the source code.
Is there a way to achieve that? Did I miss something? Am I going in the wrong direction and this is a bad idea overall?
Thanks in advance,
Fabian
You can use pip instead of easy_install; it supports a number of ways to specify where to get a package from, one of them being git. You could then install your package like this:
pip install git+https://my.git-repo.com/my_project.git
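pip's VCS URL syntax also lets you pin a branch, tag, or commit if you ever need a specific revision (same placeholder URL, with the standard @ref and #egg suffixes):
pip install git+https://my.git-repo.com/my_project.git@master#egg=my_project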