Can you simply delete the directory from your python installation, or are there any lingering files that you must delete?
It varies based on the options that you pass to install and the contents of the distutils configuration files on the system/in the package. I don't believe that any files are modified outside of directories specified in these ways.
Notably, distutils does not have an uninstall command at this time.
It's also noteworthy that deleting a package/egg can cause dependency issues – utilities like easy_install attempt to alleviate such problems.
The three things that get installed that you will need to delete are:
Packages/modules
Scripts
Data files
Now on my linux system these live in:
/usr/lib/python2.5/site-packages
/usr/bin
/usr/share
But on a Windows system they are more likely to be entirely within the Python distribution directory. I have no idea about OS X, except that it is more likely to follow the Linux pattern.
Another timestamp-based hack (a combined sketch follows these steps):
Create an anchor: touch /tmp/ts
Reinstall the package to be removed: python setup.py install --prefix=<PREFIX>
Remove files that are more recent than the anchor file: find <PREFIX> -cnewer /tmp/ts | xargs rm -r
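Put together, and run from the package's source directory, that might look like the following sketch (with <PREFIX> as your install prefix; listing the matches before deleting is safer than piping straight into rm):
touch /tmp/ts
python setup.py install --prefix=<PREFIX>
# review what would be removed first ...
find <PREFIX> -cnewer /tmp/ts
# ... then actually delete it
find <PREFIX> -cnewer /tmp/ts | xargs rm -r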
Yes, it is safe to simply delete anything that distutils installed. That goes for installed folders or .egg files. Naturally anything that depends on that code will no longer work.
If you want to make it work again, simply re-install.
By the way, if you are using distutils also consider using the multi-version feature. It allows you to have multiple versions of any single package installed. That means you do not need to delete an old version of a package if you simply want to install a newer version.
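As a hedged illustration of multi-version installs (this relies on setuptools' easy_install rather than plain distutils; SomePackage is a placeholder name):
# install two versions side by side in multi-version mode (-m / --multi-version);
# neither is importable by default, so old versions never need to be deleted
easy_install -m "SomePackage==1.0"
easy_install -m "SomePackage==2.0"
# code then selects one at runtime, e.g. pkg_resources.require("SomePackage==2.0")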
In ubuntu 12.04, I have found that the only place you need to look by default is under
/usr/local/lib/python2.7/
And simply remove the associated folder and file, if there is one!
If this is for testing and/or development purposes, setuptools has a develop command that updates every time you make a change (so you don't have to uninstall and reinstall every time you make a change). And you can uninstall the package using this command as well.
If you do use this, anything that you declare as a script will be left behind as a lingering file.
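For instance (a sketch, assuming you run it from the project directory containing setup.py):
# link the project into site-packages so code changes take effect immediately
python setup.py develop
# later, remove the link again (declared scripts may be left behind, as noted above)
python setup.py develop --uninstall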
install --record + xargs rm
sudo python setup.py install --record files.txt
xargs sudo rm -rf < files.txt
This removes all files but leaves empty directories behind.
That is not ideal, but it should be enough to avoid package conflicts.
You can then finish the job manually by reading files.txt, or be braver and automate the empty-directory removal as well, as sketched below.
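A possible way to automate that last step (a sketch, assuming the files.txt produced by the --record run above; it only removes the directories that directly contained the recorded files, and only if they are now empty):
sudo xargs rm -f < files.txt
# collect the parent directories of the recorded files, deepest paths first,
# and remove the ones that are now empty (GNU rmdir)
sort -u files.txt | xargs -n1 dirname | sort -ur | sudo xargs rmdir --ignore-fail-on-non-empty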
A safe helper would be:
python-setup-uninstall() (
  sudo rm -f files.txt
  sudo python setup.py install --record files.txt && \
    xargs sudo rm -rf < files.txt
  sudo rm -f files.txt
)
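Usage would look something like this (the path is a placeholder; run it from the directory containing setup.py):
cd /path/to/package-source
python-setup-uninstall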
Tested in Python 2.7.6, Ubuntu 14.04.
For Python on Windows:
python -m pip uninstall "package_keyword"
uninstall **** (y/n)?
For Windows 7: Control Panel --> Programs --> Uninstall, then choose the Python package to remove.
I just uninstalled a python package, and even though I'm not certain I did so perfectly, I'm reasonably confident.
I started by getting a list of all python-related files, ordered by date, on the assumption that all of the files in my package will have more or less the same timestamp, and no other files will.
Luckily, I've got python installed under /opt/Python-2.6.1; if I had been using the Python that comes with my Linux distro, I'd have had to scour all of /usr, which would have taken a long time.
Then I just examined that list, and noted with relief that all the stuff that I wanted to nuke consisted of one directory, /opt/Python-2.6.1/lib/python2.6/site-packages/module-name/, and one file, /opt/Python-2.6.1/lib/python2.6/site-packages/module-x.x.x_blah-py2.6.egg-info.
So I just deleted those.
Here's how I got the date-sorted list of files:
find "$@" -printf '%T@ ' -ls | sort -n | cut -d' ' -f 2-
(I think that's got to be GNU "find", by the way; the flavor you get on OS X doesn't know about "-printf '%T@'".)
I use that all the time.
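For example, a hypothetical invocation against the install prefix mentioned above (the snippet expects the directories to scan as its arguments):
# list everything, oldest first; the newest entries at the bottom are
# usually the files from the package that was just installed
find /opt/Python-2.6.1 -printf '%T@ ' -ls | sort -n | cut -d' ' -f 2- | tail -n 40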
On Mac OS X, manually deleting these 2 directories under your pathToPython/site-packages/ will work:
{packageName}
{packageName}-{version}-info
for example, to remove pyasn1, which is a distutils installed project:
rm -rf lib/python2.7/site-packages/pyasn1
rm -rf lib/python2.7/site-packages/pyasn1-0.1.9-py2.7.egg-info
To find out where your site-packages is:
python -m site
What worked for me on Windows 10, using Python for Windows 3.9.6, was to just delete the folder containing the module inside the site-packages folder. In my case this was located at C:\Python396\Lib\site-packages.
After this, pip list no longer shows the module that I wanted to delete.
ERROR: flake8 3.7.9 has requirement pycodestyle<2.6.0,>=2.5.0, but you'll have pycodestyle 2.3.1 which is incompatible.
ERROR: nuscenes-devkit 1.0.8 has requirement motmetrics<=1.1.3, but you'll have motmetrics 1.2.0 which is incompatible.
Installing collected packages: descartes, future, torch, cachetools, torchvision, flake8-import-order, xmltodict, entrypoints, flake8, motmetrics, nuscenes-devkit
Attempting uninstall: torch
Found existing installation: torch 1.0.0
Uninstalling torch-1.0.0:
Successfully uninstalled torch-1.0.0
Attempting uninstall: torchvision
Found existing installation: torchvision 0.2.1
Uninstalling torchvision-0.2.1:
Successfully uninstalled torchvision-0.2.1
Attempting uninstall: entrypoints
Found existing installation: entrypoints 0.2.3
ERROR: Cannot uninstall 'entrypoints'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Then I type:
conda uninstall entrypoints
pip install --upgrade pycodestyle
pip install nuscenes-devkit
Done!
Our local python package server contains these files:
subprocess32-3.2.7-cp27-cp27mu-linux_x86_64.whl
subprocess32-3.5.0-cp27-none-linux_x86_64.whl
subprocess32-3.5.0rc1-cp27-none-linux_x86_64.whl
subprocess32-3.5.0.tar.gz
subprocess32-3.5.2.tar.gz
The file subprocess32-3.5.2.tar.gz is new.
Installing subprocess32 was successful before; since this new version exists, it fails. It fails because there is no gcc on the machine that tries to install subprocess32.
What can I do? I can think of these solutions:
remove subprocess32-3.5.2.tar.gz
make subprocess32-3.5.2 available as wheel
make gcc available on the machine
fix the dependency to subprocess32-3.5.0
But none of them really makes me happy, since each only solves my current problem. Some weeks later, the same thing can happen again.
Is there a way to tell pip to use a wheel even if this means to take an older version?
Background: there is no explicit dependency on the new version. Pip tries to take the latest version.
I use pip version 9.0.1.
If I understand correctly, your use case is to prohibit installation from source distributions (tar.gz, tar.bz2, zip) when installing a particular package, subprocess32. Do it with:
$ pip install subprocess32 --only-binary=subprocess32
The difference between --only-binary=pkgname and --only-binary=:all: is that in the first case, the source dists will be forbidden for pkgname only, while the latter prohibits source dists for all packages scheduled for installation, including dependencies. Multiple packages can be selected by comma-separating their names, e.g. --only-binary=spam,eggs,bacon.
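For instance, a hypothetical command restricting two packages at once (spam and eggs are placeholder names):
pip install spam eggs --only-binary=spam,eggs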
Permanent configuration
Entering the --only-binary option every time gets annoying pretty quickly. To apply it permanently, open pip.conf and add:
# ~/.pip/pip.conf
[global]
only-binary=subprocess32
Now issuing pip install subprocess32 will have the same effect as the above command - the latest binary wheel available for the target platform will be selected. If no binary wheel is eligible for installation, the command will fail.
Specifying binary requirement
You can also force the --only-binary option in the requirement file if you have one:
# requirements.txt
subprocess32 --only-binary=subprocess32
Now, when installing from the requirement file (via pip install -r requirements.txt), the latest binary wheel available for the target platform will be selected.
There are currently wheels for version 3.2.7 and 3.5.0 so you can try
pip install -U subprocess32==3.2.7
or
pip install -U subprocess32==3.5.0
You can also try to disable source distributions altogether:
pip install -U --only-binary=:all: subprocess32
My Python package contains a lot of files compiled by python-protobuf (python2-protobuf-2.5.0 on Arch Linux). I installed the package on an Ubuntu server 12.04.3 (which has python-protobuf-2.4.1), tried to run the code, and hit the following error:
from google.protobuf.internal import enum_type_wrapper
ImportError: cannot import name enum_type_wrapper
I think it's because the protobuf modules in my package are compiled by protobuf-2.5.0 and they do not work with protobuf-2.4.1.
I have no idea of the environments in which my code may run; the version of protobuf may vary. How can I make my package work with both protobuf 2.4 and 2.5?
(A possible way: include two different sets of protobuf libraries in my package (one compiled by 2.4.1, the other compiled by 2.5.0), get the google.protobuf version at runtime, and select which set of protobuf libraries to import. Is that possible?)
You need to specify the version of protobuf that your package works with in your setup.py, in the install_requires list: install_requires=['protobuf>=2.5.0']. With a Python package, you can put just the name, or pin the exact versions the package runs with using ==. I believe you can also exclude specific versions with !=.
If you are not packaging it with a setup.py, you should set up a virtualenv and put a requirements.txt file with all the specific Python packages and versions in the root of the project.
That might look like:
$ cd ../project
$ virtualenv project_venv
$ source project_venv/bin/activate
$ cd project
$ pip install 'protobuf>=2.5.0'
$ pip freeze > ./requirements.txt
Then someone you distribute to can activate their virtualenv and do:
$ pip install -r requirements.txt
Make sure your package will work from a fresh virtualenv by installing with that method. This is also good to check before installing via a setup.py. You want to make sure your requirements will get anyone working who just does a fresh sudo python setup.py install, or python setup.py install in a virtualenv context.
You can exit a virtualenv context with:
$ deactivate
Your best bet may be to include a copy of the protobuf runtime library with your package, maybe under a different package name. Then you can make sure that it matches the version of your generated code.
Another option is to invoke protoc as part of the installation process, so you get whatever version is available on the host.
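A rough sketch of that second option, assuming protoc is on the host's PATH and the .proto sources ship with the package (the paths and package name here are placeholders):
# regenerate the Python bindings at install time so they match the local protobuf runtime
protoc --proto_path=proto --python_out=mypackage/generated proto/*.proto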
I don't think packaging multiple versions of your generated code sounds like a good idea -- you'll just have problems again when the next protobuf release comes out.
I've just uploaded a new version of my package to PyPi (1.2.1.0-r4): I can download the egg file and install it with easy_install, and the version checks out correctly. But when I try to install using pip, it installs version 1.1.0.0 instead. Even if I explicitly specify the version to pip with pip install -Iv tome==1.2.1.0-r4, I get this message: Requested tome==1.2.1.0-r4, but installing version 1.1.0.0, but I don't understand why.
I double checked with parse_version and confirmed that the version string on 1.2.1 is greater than that on 1.1.0 as shown:
>>> from pkg_resources import parse_version as pv
>>> pv('1.1.0.0') < pv('1.2.1.0-r4')
True
>>>
So any idea why it's choosing to install 1.1.0 instead?
This is an excellent question. It took me forever to figure out. This is the solution that works for me:
Apparently, if pip can find a local version of the package, pip will prefer the local versions to remote ones. I even disconnected my computer from the internet and tried it again -- when pip still installed the package successfully, and didn't even complain, the source was obviously local.
The really confusing part, in my case, was that pip found the newer versions on pypi, reported them, and then went ahead and re-installed the older version anyway ... arggh. Also, it didn't tell me what it was doing, and why.
So how did I solve this problem?
You can get pip to give verbose output using the -v flag ... but one isn't enough. I RTFM-ed the help, which said you can do -v multiple times, up to 3x, for more verbose output. So I did:
pip install -vvv <my_package>
Then I looked through the output. One line caught my eye:
Source in /tmp/pip-build-root/ has version 0.0.11, which satisfies requirement <my_package>
I deleted that directory, after which pip installed the newest version from pypi.
Try forcing download the package again with:
pip install --no-cache-dir --upgrade <package>
Thanks to Marcus Smith, who does amazing work as a maintainer of pip, this was fixed in version 1.4 of pip, which was released on 2013-07-23.
Relevant information from the changelog for this version
Fixed a number of issues (#413, #709, #634, #602, and #939) related to
cleaning up and not reusing build directories. (Pull #865, #948)
I found here that there is a known bug in pip: it won't check the version if there's a build directory with unpacked sources. I checked this on my troublesome package, and after deleting its sources from the build directory, pip installed the required version.
If you are using a pip version that comes with a distribution package (e.g. Ubuntu's python-pip), you may need to install a newer pip version:
Update pip to latest version:
sudo pip install -U pip
In case of "virtualenv", skip "sudo":
pip install -U pip
The following command may be required if your shell reports something like -bash: /usr/bin/pip: No such file or directory after the pip update:
hash -d pip
Now install your package as usual:
pip install -U foo
or
pip install foo==package.version.here
I got the same issue when updating pika 0.9.5 to 0.9.8. The only way that worked was to install from the tarball: pip install https://pypi.python.org/packages/source/p/pika/pika-0.9.8.tar.gz.
In my case the Python version used (3.4) didn't satisfy Django 2.1's dependency requirements (Python >= 3.5).
In my case I had to delete the .pip folder in my home directory, and then I was able to get later versions of multiple libraries. Note that this was on Linux.
pip --version
pip 18.1 from /usr/lib/python2.7/site-packages/pip (python 2.7)
virtualenv --version
15.1.0
Just in case that anyone else hassles with upgrading torchtext (or probably any other torch library):
Although https://pypi.org/project/torchtext/ states that you could run pip install torchtext, I had to install it similarly to torch, by specifying --find-links aka -f:
pip install torchtext===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
What irritated me was that PyCharm pointed me to the new version, but couldn't find it when attempting to upgrade to it. I guess that PyCharm uses its own mechanism to spot new versions. Then, when invoking pip under the hood, it didn't find the new version without the --find-links option.
In my case I am pip installing a .tar.gz package from Artifactory that I make a lot of updates to. In order to overwrite my cached Python files and always grab/install the latest I was able to run:
pip install --no-cache-dir --force-reinstall <path/to/tar.gz>
You should see this re-download any necessary files and install those, instead of using your local cache.
10 years on and pip still fails to work as expected 😖.
I wasted a couple of hours now banging my head against the wall trying to find out why pip won't install a development version of my package. In my case, there are versions 0.0.4 and 0.0.5.dev1 in a private gitlab.com package registry (hence the --extra-index-url argument below), but I believe that's not relevant to the problem.
Following a lot of the advice on this page, I created a test venv in a far-away folder, cleared the pip cache, uninstalled the package in question, etc., first to rule out the most common problems:
$ pip cache purge && \
pip uninstall --yes my-package && \
pip install --extra-index-url "https://_:${GITLAB_PASSWORD_TOOLS_VAULTTOOLS}@gitlab.com/api/v4/projects/<project-id>/packages/pypi/simple" \
--no-cache-dir \
--pre \
--upgrade my-package
output (using empty lines to separate output for commands):
WARNING: No matching packages
Files removed: 0

Found existing installation: my-package 0.0.4
Uninstalling my-package-0.0.4:
Successfully uninstalled my-package-0.0.4

Looking in indexes: https://pypi.org/simple, https://_:****@gitlab.com/api/v4/projects/<project-id>/packages/pypi/simple
Collecting my-package
Downloading https://gitlab.com/api/v4/projects/<project-id>/packages/pypi/files/f07 ... 397/my_package-0.0.5.dev1-py3-none-any.whl (16 kB)
Downloading https://gitlab.com/api/v4/projects/<project-id>/packages/pypi/files/775 ... 70e/my_package-0.0.4-py3-none-any.whl (16 kB)
...
Successfully installed my-package-0.0.4
So pip does see the dev package version, but chooses the earlier one nonetheless.
In an attempt to figure out what's going on, I published a 0.0.5 version: the error persists, pip sees all three versions, but still installs 0.0.4.
In a further, increasingly desperate attempt, I removed any versions prior to 0.0.5* from the gitlab.com package registry.
Only now would pip bother to actually display some useful information:
$ (same command as above)
... (similar output as above) ...
ERROR: Cannot install my-package==0.0.5 and my-package==0.0.5.dev1 because these package versions have conflicting dependencies.
The conflict is caused by:
my-package 0.0.5 depends on my-other-package<0.2.5 and >=0.2.4
my-package 0.0.5.dev1 depends on my-other-package<0.2.5 and >=0.2.4
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
OK, so there is something wrong with my package dependencies. Thanks for letting me know.
Seriously - I tried hard for a couple of hours using all kinds of pip ... -vvv and/or fixed versions such as e.g. my-package==0.0.5.dev1 - but I did not manage to get any useful output out of pip - until I wiped the entire history from my package registry 🤬.
Hope this at least helps someone in the same situation.
I found that if you use microversions, pip doesn't seem to recognize them. For example, we couldn't get version 1.9.9.1 to upgrade.
In my case, someone had published the latest version of a package with python2, so attempting to pip3 install it grabbed an older version that had been built with python3.
Handy things to check when debugging this (a couple of them are sketched as commands after this list):
If pip install claims to not be able to find the version, see whether pip search can see it.
Take a look at the "Download Files" section on the PyPI repo -- the filenames might suggest what's wrong (in my case I saw -py2- there clear as day).
As suggested by others, try running pip install --no-cache-dir in case pip isn't bothering to ask the internet because it already has your answer locally.
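A couple of those checks as concrete commands (a sketch; somepackage and the version number are placeholders):
# bypass the local cache so pip has to ask the index again
pip install --no-cache-dir "somepackage==1.9.9.1"
# download (without installing) whatever pip would pick, then inspect the filename
pip download somepackage --no-deps -d /tmp/pip-check
ls /tmp/pip-check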
I had hidden unversioned files under the Git tab in PyCharm that were being installed with pip install . even though I didn't see the files anywhere else.
It took me a long time to find; I'm posting this in the hope that it'll help somebody else.
If you need the path for your package, do pip -v list. For an example, see this related post about using pip -e: Why is an old version of a package of my python library installing by itself with pip -e?
I am trying to install python-shapely with pip in Ubuntu 10.04. I got "Unknown or unsupported command 'install'" when I tried:
user@desktop:~$ pip install Shapely
I tried installing pip and got the following error:
user@desktop:~$ sudo apt-get install python-pip
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
python-pip
0 upgraded, 1 newly installed, 0 to remove and 396 not upgraded.
Need to get 0B/49.8kB of archives.
After this operation, 270kB of additional disk space will be used.
(Reading database ... 252574 files and directories currently installed.)
Unpacking python-pip (from .../python-pip_0.3.1-1ubuntu2.1_all.deb) ...
dpkg: error processing /var/cache/apt/archives/python-pip_0.3.1-1ubuntu2.1_all.deb (--unpack):
trying to overwrite '/usr/bin/pip', which is also in package pip 0:0.13-1
Errors were encountered while processing:
/var/cache/apt/archives/python-pip_0.3.1-1ubuntu2.1_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
I'd appreciate any comment/solution.
Thanks!
Did you install pip first, then get this error, then try to install python-pip?
If so, first remove pip (apt-get remove pip), then install python-pip instead and try again.
(I just had the same problem, not sure if python 2.7 uses pip and 2.6 uses python-pip? That might be the issue.)
The same happened to me. I'm running Ubuntu Lucid Lynx 10.04, and there's a packaging conflict: the pip package (Perl installation software) conflicts with the python-pip package. Both of them try to put a pip binary at /usr/bin/pip. You can do several things to solve the problem, so choose the one that fits your needs:
1.- Remove "the Perl pip" if you don't use it and install the Python pip (a possible command sequence is sketched after this list)
2.- Force installation of the Python pip with some "dpkg -f" or so, but this way your pip binary file will be overwritten
3.- Manually install either of the packages changing the binary name, i.e. you manually install the Python pip and instead of pip you just call the binary "python-pip"
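For option 1, the sequence might look like this (a sketch; it assumes you really don't need the Perl pip):
# remove the Perl pip that owns /usr/bin/pip, then install the Python one
sudo apt-get remove pip
sudo apt-get install python-pip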
Seems to be a broken download. Did you try easy_install?
sudo easy_install pip
The problem arises because pip exists in both Strawberry Perl and Python; if Perl's pip is hit first, this error comes up.
$ which pip
/cygdrive/c/strawberry/perl/bin/pip
Solution
1. C:\Python27\Scripts\pip install south
or
2. Keep the Python path before the Strawberry Perl path
or
3. Remove the Strawberry Perl path from the PATH variable...
Leave everything as it is and install the latest version of Python from https://www.python.org/downloads. It already contains pip, so open CMD from Start, change to the folder where Python is installed, and open the "Scripts" folder where pip is installed, e.g. C:\Python36-32\Scripts. Then run pip install module_name and enjoy.
Possibly you will have to open an administrator CMD: after typing cmd in Start, when you see CMD in the list, press CTRL+SHIFT+ENTER, press OK in the pop-up dialog, and you will have an administrative CMD.
I just updated Python to 2.6.4 on my Mac.
I installed from the dmg package.
The binary did not seem to correctly set my Python path, so I added '/usr/local/lib/python2.6/site-packages' in .bash_profile
>>> pprint.pprint(sys.path)
['',
'/Users/Bryan/work/django-trunk',
'/usr/local/lib/python2.6/site-packages',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload',
'/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages']
Apparently that is not all of the required paths, because I can't run IPython.
$ ipython
Traceback (most recent call last):
File "/usr/local/bin/ipython", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
I've done Google searches and I can't really figure out how to install pkg_resources or make sure it's on the path.
What do I need to do to fix this?
I encountered the same ImportError. Somehow the setuptools package had been deleted in my Python environment.
To fix the issue, run the setup script for setuptools:
curl https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py | python
If you have any version of distribute, or any setuptools below 0.6, you will have to uninstall it first.*
See Installation Instructions for further details.
* If you already have a working distribute, upgrading it to the "compatibility wrapper" that switches you over to setuptools is easier. But if things are already broken, don't try that.
[UPDATE] TL;DR pkg_resources is provided by either Distribute or setuptools.
[UPDATE 2] As announced at PyCon 2013, the Distribute and setuptools projects have re-merged. Distribute is now deprecated and you should just use the new current setuptools. Try this:
curl -O https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py
python ez_setup.py
Or, better, use a current pip as the high-level interface; it will use setuptools under the covers.
[Longer answer for OP's specific problem]:
You don't say in your question but I'm assuming you upgraded from the Apple-supplied Python (2.5 on 10.5 or 2.6.1 on 10.6) or that you upgraded from a python.org Python 2.5. In any of those cases, the important point is that each Python instance has its own library, including its own site-packages library, which is where additional packages are installed. (And none of them use /usr/local/lib by default, by the way.) That means you'll need to install those additional packages you need for your new python 2.6. The easiest way to do this is to first ensure that the new python2.6 appears first on your search $PATH (that is, typing python2.6 invokes it as expected); the python2.6 installer should have modified your .bash_profile to put its framework bin directory at the front of $PATH. Then install easy_install using setuptools following the instructions there. The pkg_resources module is also automatically installed by this step.
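For example, a quick sanity check along those lines (a sketch; the exact paths depend on your installation):
# confirm the python.org Python 2.6 and its tools come first on $PATH
which python2.6 easy_install
# confirm pkg_resources is importable from that interpreter
python2.6 -c "import pkg_resources; print(pkg_resources.__file__)"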
Then use the newly-installed version of easy_install (or pip) to install ipython.
easy_install ipython
or
pip install ipython
It should automatically get installed to the correct site-packages location for that python instance and you should be good to go.
If you upgraded your Python on Mac OS 10.7 and pkg_resources doesn't work, the simplest way to fix this is to just reinstall setuptools, as Ned mentioned above.
sudo pip install setuptools --upgrade
or sudo easy_install --upgrade setuptools
On my system (OSX 10.6) that package is at
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/pkg_resources.py
I hope that helps you figure out if it's missing or just not on your path.
The reason might be because the IPython module is not in your PYTHONPATH.
If you download IPython and then do
python setup.py install
The setup doesn't add the module IPython to your python path.
You might want to add it to your PYTHONPATH manually. It should work after you do:
export PYTHONPATH=/pathtoIPython:$PYTHONPATH
Add this line in your .bashrc or .profile to make it permanent.
I realize this is not related to OSX, but on an embedded system (Beagle Bone Angstrom) I had the exact same error message. Installing the following ipk packages solved it.
opkg install python-setuptools
opkg install python-pip
I got this error on Ubuntu, and the following worked for me:
I removed the Dropbox binaries and downloaded them again by running:
sudo rm -rf /var/lib/dropbox/.dropbox-dist
dropbox start -i
I encountered the same problem when I was working on an autobahn-related project.
1) I downloaded setuptools-0.9.8.tar.gz from https://pypi.python.org/packages/source/s/setuptools/ and extracted it.
2) Then I took the pkg_resources module and copied it to the folder where it was needed.
In my case that folder was C:\Python27\Lib\site-packages\autobahn
In my case, the package python-pygments was missing. You can fix it with the command:
sudo apt-get install python-pygments
If there is a problem with pandoc, you should install pandoc and pandoc-citeproc:
sudo apt-get install pandoc pandoc-citeproc
Try this only if you are OK with uninstalling Python.
I uninstalled Python using
brew uninstall python
then later installed it again using
brew install python
and then it worked!