I have an environment into which I previously installed an editable package:
virtualenv venv
. venv/bin/activate
pip install -e ...
pip freeze | grep <pkg_name>
-e git+ssh://git@bitbucket.org/SPACE/REPO.git@HASH#egg=NAME&subdirectory=PATH
I copied the pip freeze result into a req.txt file and installed it into a new environment, and it works.
My question is: how can I make it pull the code to build and install not from a remote server, but from my local project (as happens when running pip install -e)?
It would obviously only work on my machine, assuming that project still is there, but this is what I want...
According to pip documentation (1, 2), yes, you can have an entry like this in your requirements.txt:
-e git+ssh://git@example.com/repo.git
Also, as chepner has pointed out, one can just specify a local URL:
-e file:///home/someone/repo
-e file:///C:/Users/someone/repo
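So, to answer the original question: in req.txt, replace the git+ssh entry with an editable entry pointing at your local checkout. A minimal sketch, reusing the REPO and PATH placeholders from the pip freeze line above and assuming the checkout lives under /home/someone (adjust to your layout):
-e file:///home/someone/REPO/PATH
A plain local path works as well:
-e /home/someone/REPO/PATH
As you noted, this will only work on machines where that checkout actually exists.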
I'm learning Python and have Anaconda installed, and I'm trying to familiarize myself with the process of getting an Eye-Color Detection project working.
I'm running into the following error after going through the README:
Eye-Color-Detection git:(master) ✗ sudo pip install -r requirements.txt
WARNING: The directory '/Users/{user}/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
When trying to update:
(tweet) ➜ Eye-Color-Detection git:(master) ✗ conda update --all
[Errno 13] Permission denied: '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py' -> '/Users/{user}/opt/anaconda3/envs/tweet/lib/python3.8/site-packages/wrapt/__init__.py.c~'
Q: How might I go about doing this correctly within the same Conda environment?
A sudo pip install is almost never what you really want. While in some cases it may "appear" to work and solve your immediate problem, more often than not you've just broken your system Python without knowing it.
In the context of that repo, I'd ignore the repo's README and do this:
$ git clone https://github.com/ghimiredhikura/Eye-Color-Detection
$ cd Eye-Color-Detection
Create a conda environment; change yourenvname as you like:
$ conda create -n yourenvname python=3.x
$ conda activate yourenvname
Install the dependencies and run the code
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
Fixing your conda environment may be difficult to debug, depending on what else you've sudo'd while attempting to resolve your issue. If you happen to be familiar with "regular" virtualenvs created using Python's built-in virtual environment tooling, you could also try this to get going:
$ python3 -m venv .venv --copies
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ python3 eye-color.py --input_path=sample/2.jpg --input_type=image
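Whichever route you take, it's worth confirming that pip and python actually resolve to the activated environment before installing anything (a generic sanity check, not specific to this repo):
$ which python3
$ pip --version
The location printed by pip --version should sit inside your environment, not the system Python.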
What you need to do is change the directory permissions to writable.
You can do that using this command:
$ sudo chmod 7777 /Users/{user}/Library/Caches/
To change permissions recursively:
$ sudo chmod -R 7777 /Users/{user}/Library/Caches/
Or you can take ownership of that directory using this command:
$ sudo chown OWNER:GROUP /Users/{user}/Library/Caches/
where OWNER is the username for your computer, which you can find in the terminal using this command:
$ whoami
GROUP is optional.
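Putting those together, a typical invocation for the cache directory named in the warning above would be (with {user} replaced by your actual username):
$ sudo chown -R "$(whoami)" /Users/{user}/Library/Caches/pip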
How can I run the make install target only if requirements.txt has changed?
I don't want to upgrade packages every time I run make install.
I found a workaround by creating a fake file, _requirements.txt.pyc, but it is ugly and dirty. It refuses to install the pip requirements a second time, because requirements.txt has no changes:
$ make install-pip-requirements
make: Nothing to be done for 'install-pip-requirements'.
But my goal is to do:
# first time,
$ make install # create virtual environment, install requirements
# second time
$ make install # detect existing virtual env and skip creating it,
# detect that requirements.txt has no changes
# and skip installing all python packages again
make: Nothing to be done for 'install'.
The Python package looks like:
.
├── Makefile
├── README.rst
├── lambda_handler.py
└── requirements.txt
I am using a file, Makefile, for some automation in Python:
/opt/virtual_env:
	# create virtual env if the folder does not exist
	python -m venv /opt/virtual_env

virtual: /opt/virtual_env

# if requirements.txt is modified, then execute pip install
_requirements.txt.pyc: requirements.txt
	/opt/virtual_env/bin/pip install --upgrade -r requirements.txt
	echo > _requirements.txt.pyc

requirements: SOME MAGIC OR SOME make flags
	pip install -r requirements.txt

install-pip-requirements: _requirements.txt.pyc

install: virtual requirements
I am sure there must be a better way to do this ;)
Not sure this will answer your question at this point, but the better way is to use a fully fledged Python pip project template.
We use cookiecutter to create a particular pip package with this cookiecutter template.
It has a Makefile which does not constantly re-install all the dependencies, and it makes use of Python tox, which allows running a project's tests in different Python envs automatically. You can still develop in a dev virtualenv, but we update it only when a new package is added; everything else is handled by tox.
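For illustration, a minimal tox.ini in that spirit might look like the sketch below (the Python versions and the pytest test runner are assumptions; adapt them to your project):
[tox]
envlist = py38, py39
[testenv]
deps =
    -rrequirements.txt
    pytest
commands = pytest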
But what you show so far is an attempt to write a Python build from scratch, which has already been done in numerous project templates. If you really want to understand what is going on there, you can analyze these templates.
As a follow-up: because you expect it to work with a Makefile, I'd suggest removing the --upgrade flag from the pip command. I suspect your requirements do not pin the versions that are needed for the project to work. In our experience, not putting versions there can badly break things. Thus our requirements.txt looks like:
configure==0.5
falcon==0.3.0
futures==3.0.5
gevent==1.1.1
greenlet==0.4.9
gunicorn==19.4.5
hiredis==0.2.0
python-mimeparse==1.5.2
PyYAML==3.11
redis==2.10.5
six==1.10.0
eventlet==0.18.4
Using the requirements without --upgrade makes pip simply verify what is in the virtualenv and what is not. Everything that satisfies the required version will be skipped (no download). You can also reference git versions in requirements like this:
-e git+http://some-url-here/path-to/repository.git@branch-name-OR-commit-id#egg=package-name-how-to-appear-in-pip-freeze
@Andrei.Danciuc, make just needs two files to compare; you can use any of the output files from running pip install.
For example, I usually use a "vendored" folder, so I can alias the path to the "vendored" folder instead of using a dummy file.
# Only run install if requirements.txt is newer than the vendored folder
vendored-folder := vendored

.PHONY: install
install: $(vendored-folder)

$(vendored-folder): requirements.txt
	rm -rf $(vendored-folder)
	pip install -r requirements.txt -t $(vendored-folder)
If you don't use a vendored folder, this code below should work for both virtualenv and global setups.
# Only run install if requirements.txt is newer than the SITE_PACKAGES location
.PHONY: install
SITE_PACKAGES := $(shell pip show pip | grep '^Location' | cut -f2 -d':')

install: $(SITE_PACKAGES)

$(SITE_PACKAGES): requirements.txt
	pip install -r requirements.txt
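If neither of those layouts fits, a cleaner variant of the stamp-file idea from the question is also possible; a minimal sketch (the stamp file name is arbitrary):
.PHONY: install
install: .requirements.stamp

.requirements.stamp: requirements.txt
	pip install -r requirements.txt
	touch .requirements.stamp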
How can one manage to install extras_require with pip when installing from a git repository?
I know that you can do pip install project[extra] when the project is on PyPI.
And you have to do pip install -e git+https://github.com/user/project.git#egg=project for a git repo, but I didn't manage to find out how to combine these two options.
This should work, per example #6
For remote repos:
pip install -e git+https://github.com/user/project.git#egg=project[extra]
And this for local ones (thanks to @Kurt-Bourbaki):
pip install -e .[extra]
As per @Kurt-Bourbaki:
If you are using zsh you need to escape square brackets or use quotes:
pip install -e .\[extra\]
# or
pip install -e ".[extra]"
Important to note: you should not have whitespace around or within the brackets, i.e. this will work: -e ".[extra1,extra2]" but this won't: -e ". [extra1, extra2]". This applies even to rows in a requirements.txt file, where the mistake is not so obvious. The worst thing about it is that when you have whitespace, the extras are just silently ignored.
It may not be obvious to some users (it wasn't to me), so I thought to highlight that extra in the following command
pip install -e ".[extra]"
needs to be replaced by the actual name of the extra requirements.
Example:
You add an options.extras_require section to your setup.cfg as follows:
[options.extras_require]
test =
    pre-commit>=2.10.1,<3.0
    pylint>=2.7.2,<3.0
    pytest>=6.2.2,<7.0
    pytest-pspec>=0.0.4,<1.0
Then you install the test extra as follows:
pip install -e ".[test]"
This also works when installing from a whl file, so for example you can do:
pip install path/to/myapp-0.0.1-py3-none-any.whl[extra1]
This is far from clear in the docs, and not particularly intuitive.
Using git + ssh to install packages with extras from private repositories:
pip install -e 'git+ssh://git@github.com/user/project.git#egg=project[extra1,extra2]'
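The same entry should also work as a line in a requirements.txt file (user/project and the extras are placeholders):
-e git+ssh://git@github.com/user/project.git#egg=project[extra1,extra2]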
I am attempting to install Portia, a Python app from GitHub: https://github.com/scrapinghub/portia
I use the following steps at the command line:
set up a new virtualenv 'portia' in the Mac terminal
git clone https://github.com/scrapinghub/portia.git
follow the README instructions:
cd slyd
pip install -r requirements.txt
run Portia
cd slyd
twistd -n slyd
But every time I attempt the last step to run the program, I get the following error:
ImportError: No module named scrapy
Any idea why this error is occurring? All previous steps seem to install correctly. Is it an error earlier in my install process?
Thanks!
I don't have the rep to upvote Alagappan's answer but he's correct. Also, if you're as inexperienced as I am, you may need further clarity on this.
You have to create, activate, and navigate into the virtualenv before installing anything (including cloning Portia from GitHub). Here's the whole thing working from start to finish:
1: cd to wherever you’d like to store your project...
and install virtualenv:
$ pip install virtualenv
2: Create the virtual environment. (I called mine “portia” but this can be anything.):
$ virtualenv portia
3: Activate the virtual environment you created (change the path to reflect the name you used here if not “portia”.):
$ source portia/bin/activate
At this point your terminal should display the virtualenv name in parentheses before the standard directory path prompt:
(name-of-virtualenv) [your-machine]:[current-directory]: [user]$
...and if you list the files within your pwd you’ll see the name of your virtualenv there.
4: cd into your virtualenv (“portia” for me):
$ cd portia
5: Now you can clone portia from github into your virtualenv...
$ git clone https://github.com/scrapinghub/portia
6: cd into the cloned portia/slyd...
$ cd portia/slyd
7/8: pip install twisted and Scrapy...
$ pip install twisted
$ pip install Scrapy
Your virtualenv should still be activated and you should still be in [virtualenv-name]/portia/slyd.
9: Install the requirements.txt:
$ pip install -r requirements.txt
10: Run slyd:
$ twistd -n slyd
--- No more scrapy error! ---
Another Installation Method For Portia: Using Vagrant
Here is the method that let me install Portia with ease. It works on Mac, Windows, and Linux. With a few commands and clicks, you'll get a fully functional web scraper.
Things Needed:
VirtualBox
Vagrant
Clone the repo for Portia or download the zip file.
Additional Steps To Take:
Install VirtualBox.
Install Vagrant.
Open your terminal and navigate to where you cloned the Portia repo or where you've extracted it (in case of a zip file).
Then run the command vagrant up - this will download and set up a VirtualBox guest VM for you, install all the necessary requirements for Portia, and install Portia from start to finish.
After the above process, you may now open your browser and navigate to
http://the-virtualbox-ip:8000/static/main.html
And you're set up.
It's quite simple: you just need to install the Python module scrapy, in the same way that the Twitter API requires setuptools:
pip install scrapy
I suspect the issue you are facing is because of the virtualenv. Once you set up a new virtual environment, you need to run the activate script in order to start using it. In your case you'll have to run the following command:
$ source portia/bin/activate
On successful activation, your prompt will look like:
(portia) $
Can you check if you activated your virtual environment before you installed the packages using pip? I believe doing so will fix your issue.
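A quick way to check (a generic sanity test, not specific to Portia): with the environment activated, confirm that pip resolves to it and can see scrapy:
$ which pip
$ pip show scrapy
which pip should print a path inside portia/bin, and pip show scrapy should print package details once scrapy is installed.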
I'm trying to install my application via pip to a virtualenv for testing.
It works fine for installing the default or tip, like so:
pip install -e hg+https://username@bitbucket.org/username/app_name#egg=app_name
But is there any way to point to a branch, rather than just getting the tip? I'm not sure whether this would be a Mercurial thing, a Bitbucket thing, or a pip thing.
Bitbucket allows downloading a tagged version of the code, but I can only get it to work while logged into the browser. I tried installing from a tag's tar.gz like so:
pip install https://username@bitbucket.org/username/app_name/get/bbc4286a75db.tar.gz
but even after entering my password it returns a 401 Unauthorized (it's a private repo).
From the official pip documentation, in the section VCS Support:
Mercurial
The supported schemes are: hg+http, hg+https, hg+static-http and
hg+ssh:
-e hg+http://hg.myproject.org/MyProject/#egg=MyProject
-e hg+https://hg.myproject.org/MyProject/#egg=MyProject
-e hg+ssh://hg@myproject.org/MyProject/#egg=MyProject
You can also specify a revision number, a revision hash, a tag name or
a local branch name:
-e hg+http://hg.myproject.org/MyProject/@da39a3ee5e6b#egg=MyProject
-e hg+http://hg.myproject.org/MyProject/@2019#egg=MyProject
-e hg+http://hg.myproject.org/MyProject/@v1.0#egg=MyProject
-e hg+http://hg.myproject.org/MyProject/@special_feature#egg=MyProject
The syntax is the same when specifying the repo at the command line:
pip install -e hg+http://hg.myproject.org/MyProject/@special_feature#egg=MyProject
and it also works without the -e option, starting from version 0.8.2.
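Applied to the Bitbucket repo from the question, that would look something like this (branch_name is a placeholder for the actual branch or tag name):
pip install -e hg+https://username@bitbucket.org/username/app_name@branch_name#egg=app_name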