I installed some package via pip install something. I want to edit the source code for the package something. Where is it (on Ubuntu 12.04), and how do I make it reload each time I edit the source code and run it?
Currently I am editing the source code, and then running python setup.py again and again, which turns out to be quite a hassle.
You should never edit an installed package. Instead, install a forked version of the package.
If you need to edit the code frequently, DO NOT install the package via pip install something and then edit the code in '.../site-packages/...'.
Instead, put the source code under a development directory, and install it with
$ python setup.py develop
or
$ pip install -e path/to/SomePackage
Or use a VCS in the first place:
$ pip install -e git+https://github.com/lakshmivyas/hyde.git#egg=hyde
Put your changes in a version control system, and tell pip to install it explicitly.
Reference:
Edit mode
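To make the develop/editable commands above concrete, the development directory just needs a setup.py. A minimal sketch, assuming a hypothetical package named somepackage living in a directory of the same name next to setup.py:

# setup.py -- minimal sketch; "somepackage" and the version are placeholders.
from setuptools import setup, find_packages

setup(
    name="somepackage",
    version="0.1.0",
    packages=find_packages(),  # picks up the somepackage/ directory next to this file
)

With this in place, pip install -e . (or python setup.py develop) links the source tree into site-packages instead of copying it, so edits take effect the next time the code is run, without reinstalling.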
You can edit the files installed in /usr/local/lib/python2.7/dist-packages/. Do note that you will have to use sudo or become root.
The better option would be to use a virtual environment for your development. Then you can edit the files installed with your own permissions inside your virtual environment and only affect the current project.
In this case the files are in ./venv/lib/pythonX.Y/site-packages
The directory may be dist-packages or site-packages; you can read more in the answer to this question.
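If you just want to confirm where a given package actually lives, you can ask Python directly. A small sketch, where something is a placeholder for whichever package you installed:

# Prints the file the installed package is loaded from
# ("something" is a placeholder for the package name).
import importlib

mod = importlib.import_module("something")
print(mod.__file__)  # e.g. /usr/local/lib/python2.7/dist-packages/something/__init__.py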
Note that, as others have mentioned, this should only be done sparingly, for small tests or debugging, and you should be sure to revert your changes to prevent issues when upgrading the package.
To properly apply a change to the package (a fix or a new feature), go for the options described in the other answers: contribute to the repo or fork it.
I too needed to change some things inside a package. Taking inspiration from the previous answers, you can do the following.
Fork the package/repo to your GitHub account
Clone your forked version and create a new branch of your choice
Make changes and push the code to the new branch on your repository
You can then install it with pip install -e git+<repository_url>@<branch_name>#egg=<package_name>
There are certain things to consider if it's a private repository
If you are writing a custom module that you want to hot-reload, you can also put your running code inside the module. Then you can run it with python -m package.your_running_code. This way, you can change code in the package and see the result reflected in your running code immediately (see the sketch below).
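For illustration, a sketch of that layout with hypothetical names (mypackage, core, and run are placeholders):

# mypackage/run.py -- hypothetical entry point kept inside the package you are editing.
# Layout assumed:
#   mypackage/
#       __init__.py
#       core.py   <- the code you keep changing
#       run.py    <- this file, invoked with: python -m mypackage.run
from mypackage import core

if __name__ == "__main__":
    # Each invocation re-imports the package, so the latest edits are picked up.
    core.main()  # hypothetical entry function in core.py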
Related
Normally when I develop a Python package for personal use, I use python3 setup.py develop, and then perform pip3 install -e <path_to_package> within another virtualenv, allowing me to hack around with both at the same time. When I do gpip3 freeze I see the path to the package on my local machine:
-e /Users/myName/Documents/testpackage
When I store that package on GitHub and clone it back onto a local machine, I expect to be able to use setup.py develop the same way and keep developing the package on my local machine, regardless of whether or when I push back to GitHub. However, when I do gpip3 freeze, I see:
-e git+git@github.com:github_username/repo_name#-----latest_commit's_sha_code-----#egg=repo_name&subdirectory=xx/xx/testpackage
I would like my system to keep track of the local version instead of git's remote.
Note: I know how to commit and push local changes to GitHub and install the egg in local environments. My goal is to quickly test ideas with a development version of the package without continuously integrating.
Note 2: The GitHub address given in gpip3 freeze fails when I try it in an environment (FileNotFoundError: [Errno 2] No such file or directory: '/Users/myName/Documents/testenvironment/src/testpackage/setup.py')
But if I wanted pip3 to install the latest GitHub commit, I wouldn't be bothering with setup.py develop anyway.
Is there a way to signal to setup.py that I want it to ignore the remote in the cloned repo and pay attention only to the local path? Or is always referencing a remote, when one is present, the expected behavior of setuptools?
Update:
The wording of the output of gpip3 freeze after python3 setup.py develop when a remote isn't present (below) leads me to think that tracking a remote whenever possible may be the intended behavior:
# Editable Git install with no remote (testpackage ==0.0.1)
-e /Users/myName/Documents/testpackage
I have been working around this by running git remote remove origin when I want my local changes to be reflected in local environments without pushing a new commit, though this is not ideal.
My question was rooted in a misunderstanding of how to implement python3 setup.py develop.
My original method was:
1) python3 setup.py develop from within the package directory itself, which would install/link the egg globally
2) gpip3 freeze to get (I thought) the link to the egg (seeing all the extra git remote info here was confusing to me)
3) cd to another virtual environment, source bin/activate, then call pip3 install -e <link_copied_from_global_pip_freeze>
In fact there is no need to call python3 setup.py develop from within the package under development, or to use gpip3 freeze to get the egg link.
I can go directly to the virtual env and activate it, then use pip3 install -e <system_path_to_package_directory_containing_setup.py>. This will create an egg link in the package directory if it doesn't already exist. Edits within the package are reflected in the virtual environment as expected, and I can use Git version control freely within the package according to my needs without interference.
I assume there may be times to call python3 setup.py develop directly (setup.py develop --user also exists) but by not doing so I happen to avoid littering my global environment with extra packages.
Related info from a 2014 question in the Python Distutils thread:
Questioner writes:
For years, I've been recommending:
$ python setup.py develop
[...]
Having said that, I also notice that:
$ pip install -e .
does the same thing.
Should I be recommending one over the other?
Noah answers:
You should recommend using pip for it, mostly because as you said that will work even with packages that don't use setuptools :-) It also is required when doing a develop install with extras, though that requires a slightly more verbose syntax due to a bug in pip.
I'm working on a script in python that relies on several different packages and libraries. When this script is transferred to another machine, the packages it needs in order to run are sometimes not present or are older versions that do not have the same functionality and cause the script to fail.
I was considering using a virtual environment, but I can't find a way to have the script use the specific environment I design as its default, and in order to use the environment a user must manually activate it from the command line.
I've also looked into trying to check the versions of the packages installed on the machine, and if they are not sufficient then updating them from the script as described here:
Installing python module within code
Is there any easier/surefire way to make sure that the needed packages will always be available regardless of where it's run?
The normal approach is to create an installation script and have that manage your dependencies. Then when you move your project to a new environment your installer will check that all dependencies are present.
I recommend you check out setuptools: https://setuptools.readthedocs.io/en/latest/
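For illustration, a minimal setup.py sketch that declares dependencies so the installer can check for them; the package name and version pins are placeholders:

# setup.py -- hypothetical example of declaring dependencies with setuptools.
from setuptools import setup, find_packages

setup(
    name="myscript",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "requests>=2.20",  # placeholder pins; list whatever your script imports
        "numpy>=1.15",
    ],
)

Running pip install . in a new environment then pulls in any missing or too-old dependencies automatically.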
If you don't want to install dependencies whenever you need to use your script somewhere new, then you could package your script into a Docker container.
If the problem is ensuring the required packages are available in a new environment or virtual environment, you could use pip to generate a requirements.txt and check it into version control, or use a tool that does this for you, like pipenv.
If you would prefer to generate a requirements.txt by hand, you should:
Install your dependencies using pip
Run pip freeze > requirements.txt to generate a requirements.txt file
Check requirements.txt into your source management software
When you need to set up a new environment, use pip install -r requirements.txt
The solution that I've been using has been to include a custom library (folder with all of my desired packages) in the folder with my script, and I simply import them from there:
from Customlib import pkg1, pkg2,...
As long as the custom library and script stay together in the same folder, it will always have access to the right packages and the correct versions of those packages.
I'm not sure how robust this solution actually is or what possible bugs may arise from this if it is passed from machine to machine, but for now this seems to work.
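A small sketch of making that bundled folder importable even when the script is launched from another directory; Customlib and the package names come from the example above, and the path handling is an assumption:

# Put this at the top of the script, before importing from Customlib.
import os
import sys

HERE = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, HERE)  # ensure the Customlib folder next to this file is found first

from Customlib import pkg1, pkg2  # noqa: E402  (import after the sys.path tweak)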
I want to contribute some changes to a python package which is using github. I have forked it.
It is a library I am using in a project (in a python 3.5.1 virtual environment).
The documentation at https://pip.pypa.io/en/latest/reference/pip_install/#vcs-support tells me how to install from a GitHub fork, and it goes on to mention "editable installs" (https://pip.pypa.io/en/latest/reference/pip_install/#editable-installs), which basically do "development mode".
If it is a pure python package does it matter if I skip editable mode?
(Since there would be no build steps necessary, as I would only be changing Python code, this would mean I can keep using the same import statements.)
When you pip install without editable mode, the package is copied into your Python environment (such as env/lib/python3.5/site-packages). You can, of course, edit it right there, as it’s usually just a bunch of Python files, but that is inconvenient.
When you pip install with editable mode, pip only sets up a link from your environment to wherever the source code is. So, you can clone your GitHub fork into a convenient directory like ~/projects/libraryX, then do pip install -e ~/projects/libraryX, and keep editing the code at ~/projects/libraryX while your changes are immediately reflected in the environment where you installed it.
This all applies to pure Python packages.
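One caveat worth spelling out: an interpreter that is already running will not pick up your edits until the module is re-imported; fresh runs of your program see them automatically. A minimal sketch for an interactive session, using the hypothetical libraryX name from above:

# In a long-running interactive session while you edit ~/projects/libraryX:
import importlib
import libraryX  # hypothetical package installed with `pip install -e`

# ... edit the source in another window ...

importlib.reload(libraryX)  # re-executes the module so the edits take effect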
I like to edit Python modules installed with pip, but I do not know a good way to avoid conflicts between my local edits and the original code when I upgrade a module.
For example,
$ pip install some_module
$ vim ~/.../some_module/something.py # update the file
$ pip install --upgrade some_module
This causes trouble because of conflicts between the local edits and the original repository. (You can assume the original repo is on GitHub.)
I guess one alternative is forking the repository on GitHub and running pip install git+<repo_url>, but I'm not confident about that.
What is a good way to avoid this trouble?
You should not be editing the core files of a module. If you need to modify it, you should be extending (subclassing) it, overriding the functionality, and adding your own functions. That way your code is separate from the repo's code and won't be overwritten by an update or upgrade; a minimal sketch follows.
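For illustration, a minimal sketch of that approach, assuming a hypothetical some_module.Client class whose behaviour you want to change:

# my_client.py -- hypothetical example; some_module and its Client class stand in
# for whatever installed package you would otherwise have edited in place.
from some_module import Client


class PatchedClient(Client):
    """Overrides one method instead of editing the installed package."""

    def fetch(self, url):
        # Wrap the original implementation with your own behaviour.
        response = super().fetch(url)
        return self._normalize(response)

    def _normalize(self, response):
        # Your own helper, kept entirely outside the installed package.
        return response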
You could also use a virtual environment. A virtual environment is an isolated Python installation/environment; it makes it easy to manage dependencies and different versions of libraries and of Python itself.
This should get you started:
http://docs.python-guide.org/en/latest/dev/virtualenvs/
I would like to know if any of you has implemented an autoupdate feature for a Python app. My idea is to download the first 100-200 bytes (using requests?) from a GitHub URL containing the version tag. Ex.:
#!/usr/bin/env python
#######################
__author__ = 'xxx'
__program__ = 'theprogram'
__package__ = ''
__description__ = '''This program does things'''
__version__ = '0.0.0'
...
So if the version tag is greater than the one in the local module, the updater would download the whole file, replace it, and then (either way) run it.
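A minimal sketch of that check, assuming a hypothetical raw-file URL on GitHub and that the remote file begins with the same header shown above:

import re

import requests

# Hypothetical raw URL of the script on GitHub (placeholder account/repo/branch).
RAW_URL = "https://raw.githubusercontent.com/xxx/theprogram/master/theprogram.py"


def fetch_remote_version(url=RAW_URL, num_bytes=200):
    """Stream only the first bytes of the remote file and parse __version__."""
    resp = requests.get(url, stream=True, timeout=10)
    resp.raise_for_status()
    head = next(resp.iter_content(chunk_size=num_bytes)).decode("utf-8", "ignore")
    resp.close()
    match = re.search(r"__version__\s*=\s*'([^']+)'", head)
    return match.group(1) if match else None


def is_newer(remote, local):
    """Compare dotted version strings numerically, e.g. '0.1.0' beats '0.0.9'."""
    def as_tuple(v):
        return tuple(int(part) for part in v.split("."))
    return as_tuple(remote) > as_tuple(local)

If is_newer(fetch_remote_version(), __version__) is true, the updater would fetch the whole file, overwrite the local copy, and re-run it, as described.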
What is the best way to do this?
Thanks!
You can use pip programmatically to schedule updates for your modules in a cron job, so you won't need to request the version yourself; pip will update only when necessary.
pip install --upgrade yourpackage
or
pip install --upgrade git+https://github.com/youracc/yourepo.git
Also, as @riotburn pointed out, you should be using a virtualenv to isolate your environment, and you can roll back to a previous one if necessary. In that last scenario, you may find this virtualenv wrapper very helpful.
You should look into using virtualenv or conda for managing the dependencies used in your package. Both allow you to create isolated environments for installing specific versions of packages, as well as creating environments from a predefined list of dependencies. Conda also has the benefit of being a package manager like pip. If you were to not specify versions in this requirements file, it would install the latest versions. Then you could just write a bash script to automate the couple of commands needed for your use case.
Try reading up on Python environments:
http://conda.pydata.org/docs/using/envs.html
https://virtualenv.pypa.io/en/latest/