I tried to install the EmoPy neural network on my computer (here is the link: https://github.com/thoughtworksarts/EmoPy). I tried both Ubuntu and Windows. The problem is with library versions: I get an error while installing from requirements.txt.
So my next step was to install all of the dependencies on my own, one by one. But then another error occurred, saying that some methods are not valid.
As cloning from GitHub was unsuccessful, I decided to try the pip installer. Unfortunately, the same version-conflict problem occurred.
So are there any possible solutions, or is this network simply too old and too difficult to install?
P.S. I use Python 3.6.6, as the documentation requires.
Here is a simple set of instructions. With Anaconda installed, create a conda env with Python 3.6, clone EmoPy from GitHub, and run:
pip install -r requirements.txt
There is a small quirk with dependency versions (one library needs the old scipy 1.0.0, another needs 1.0.1).
Then run:
pip install scikit-image==0.16.2
If you run the script examples\fermodel_example.py (this is the one that actually determines emotions from a photo; the others output some odd figures, which is not what we need, if I'm not mistaken), you will get several errors in the saving.py file from the keras library. Delete the .decode("utf-8") calls there one by one until it works.
You can fix the other examples too, of course, but there is little point yet.
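Putting the whole answer together, a minimal sketch of the steps (the environment name emopy is just my choice; adjust it to taste):

conda create -n emopy python=3.6
conda activate emopy
git clone https://github.com/thoughtworksarts/EmoPy.git
cd EmoPy
pip install -r requirements.txt
pip install scikit-image==0.16.2
python examples/fermodel_example.py

If the last step fails inside Keras's saving.py, apply the .decode("utf-8") removals described above and re-run.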
Overview: While running Python 3.6, after upgrading my arcgis package, my scripts no longer recognize many packages, and pip itself completely broke, making it impossible to upgrade or uninstall any packages.
Background Info: Fairly recently, when running a particular program of mine, I started seeing a deprecation message connected to the arcgis package. So I upgraded the arcgis package to see if that fixed it. It seemed to install correctly, but when I then tried to run my program, I got errors for other packages, like folium or requests. I then tried upgrading Python, and initially it worked: I used pip to install pandas and requests. But right after I installed arcgis, everything broke again. Now, when trying to uninstall arcgis (or do anything else pip-related), I get this error:
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\Users\myuserpath\AppData\Local\.certifi'
I've uninstalled Python, but it doesn't change anything; running pip install for any package results in this error. I tried reverting to Python 3.6, but that installer was no longer available from the Python site, only 3.9.
What could have been changed or affected by this arcgis installation?
There seem to be two primary issues you're dealing with. The first, as @BoarGules mentioned, is that arcgis does a 'full' install with all its dependencies, and that could be causing problems. Secondly, the newest requests library seems to have some issues as well, at least from what I've experienced. So let's get started fixing all this.
There are probably a few different ways to fix this, so this is just one of many. First, uninstall Python and delete the Python folder from your AppData folder; in your case, that would be the Python 3.9 folder. Re-install Python and check your site-packages folder, making sure it only contains the default Python packages. Then open a command prompt and do a pip install of something basic, like pandas. If that goes well, the first hurdle is over.
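As a rough sketch of that cleanup (the path assumes a default per-user install of Python 3.9; check where your installer actually put things), first uninstall Python from Settings, then remove the leftover folder:

rmdir /s /q "%LOCALAPPDATA%\Programs\Python\Python39"

After re-installing Python, verify site-packages and sanity-check pip:

dir "%LOCALAPPDATA%\Programs\Python\Python39\Lib\site-packages"
pip install pandas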
When it comes time to install arcgis again, you'll want to use this instead:
pip install arcgis --no-deps
This will prevent pip from pulling in duplicate copies of arcgis's dependencies, which seems to be what was happening. You will then also need to install these yourself:
pip install ujson
pip install requests_ntlm
Next, when you come to installing requests, use an older version, like this one:
pip install requests==2.20.0
That should get things back up and running.
I'm trying to install neural_renderer. Unfortunately, the original implementation only supports Python 2.7+ and PyTorch 0.4.0, so I'm using a fork that includes some fixes for compatibility with torch 1.7 (here). The main issue was using AT_CHECK(), which was not compatible with newer versions of PyTorch, and was replaced with TORCH_CHECK().
After running pip install neural_renderer_pytorch on the fixed version, using a virtual environment, I get the output (which I truncated to just the error):
/tmp/pip-install-[somestring]/neural-renderer-pytorch_[somelongstring]/neural_renderer/cuda/load_textures_cuda.cpp:15:23: error: ‘AT_CHECK’ was not declared in this scope; did you mean ‘DCHECK’?
15 | #define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
| ^~~~~~~~
with [somestring] and [somelongstring] being some alphanumeric strings that changed with each compilation.
It looks like AT_CHECK is still being used somewhere in the code, but I don't know where. I know this error is exactly what the fork fixed, so I assume the .cpp file is still cached somewhere from a previous compilation, but I can't figure out where exactly. I'm sure I'm on the pytorch1.7 branch and running pip in the right repository, with torch==1.7.0 installed.
What I've tried so far, to no avail:
running pip cache purge before attempting to install
running pip with --no-cache-dir
deleting the virtualenv I'm using and making a new one
deleting the entire repository and making a new one
This issue on GitHub suggested just using PyTorch 1.4.0. This worked (i.e. I created a Python3.7 environment and ran conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.0 -c pytorch, then installed), but my goal is still to compile it for PyTorch 1.7.0 (and a newer version of Python).
If you want to install the fork, you cannot use pip install neural_renderer_pytorch. This command installs the original one.
To install the fork, you have to clone it to your local machine and install it:
git clone https://github.com/ZhengZerong/neural_renderer
cd neural_renderer
pip install .
You can do it in just one go as well:
pip install git+https://github.com/ZhengZerong/neural_renderer.git
Don't forget to uninstall the original version first, or just start a new venv.
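For example, assuming the original package was installed under its PyPI name (which matches the name visible in the error path above), the cleanup-and-reinstall might look like:

pip uninstall neural-renderer-pytorch
pip install git+https://github.com/ZhengZerong/neural_renderer.git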
It's a great hassle when installing packages in a virtual environment: conda or pip downloads them again even when I already have them in my base environment. Since I have limited internet bandwidth and I expect to work with many different virtual environments, it will take a lot of time to download basic packages such as OpenCV and TensorFlow.
By default, pip caches anything it downloads, and will use the cached version whenever possible. This cache is shared between your base environment and all virtual environments. So unless you pass the --no-cache-dir option, pip downloading a package means it has not previously downloaded a compatible version of that package. If you already have that package installed in your base environment or another virtual environment and it downloads it anyway, this probably means one or more of the following is true:
You installed your existing version with a method other than pip.
There is a newer version available, and you didn't specify, for example, pip install pandas==1.1.5 (if that's the version you already have elsewhere). Pip will install the newest compatible version for your environment, unless you tell it otherwise.
The VE you're installing to is a different Python version (e.g. created with Pyenv), and needs a different build.
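To see what pip has actually cached (and confirm the behavior described above), recent versions of pip (20.1 and later) let you inspect the cache directly:

pip cache dir
pip cache list

The first prints the cache location shared by all your environments; the second lists the wheels pip has cached there.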
I'm less familiar with the specifics of conda, and I can't seem to find anything in its online docs that focuses on the default caching behavior. However, a how-to for modifying the cache location seems to assume that the default behavior is similar to how pip works. Perhaps someone else with more Anaconda experience can chime in as well.
So except for the caveats above, as long as you're installing a package with the same method you did last time, you shouldn't have to download anything.
If you want to simplify the process of installing all the same packages (that were installed via pip) in a new VE that you already have in another environment, pip can automate that too. Run pip freeze > requirements.txt in the first environment, and copy the resulting file to your newly created VE. There, run pip install -r requirements.txt and pip will install all the packages that were installed (via pip) in the first environment. (Note that pip freeze records version numbers as well, so this won't install newer versions that may be available -- whether this is a good or bad thing depends on your needs.)
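Concretely, the round trip is just two commands (requirements.txt is only a conventional name):

pip freeze > requirements.txt
pip install -r requirements.txt

with the first run in the old environment and the second in the new one.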
Hi everyone,
Because of my network speed, when I conda install some packages, some related packages cannot be downloaded completely. But conda will not install the packages that did download successfully without the other "related" packages (maybe "related" means the best match in version, but not necessarily).
For example, when I install pytorch, it wants numpy-1.14.2, but I have numpy-1.15.1. I don't actually need numpy 1.14.2 in practice.
So I am a little confused: how can I make conda try to install the packages that downloaded successfully, ignoring the ones that failed to download?
Thanks!
EricKani
From the conda documentation, there are two options that may help: https://docs.conda.io/projects/conda/en/latest/commands/install.html
--no-update-deps
Do not update or change already-installed dependencies.
--no-deps
Do not install, update, remove, or change dependencies. This WILL lead to broken environments and inconsistent behavior. Use at your own risk.
I believe by default conda tries with --no-update-deps first and then if that fails tries to update deps; giving it that option will make sure some version of each needed package is installed, if not necessarily the latest.
You could try --no-deps as well, which will literally prevent conda from installing ANYTHING other than the exact packages you tell it to, but things may not work with that.
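For instance, applied to the pytorch/numpy situation from the question (and assuming your conda version still supports these flags), the two variants would be:

conda install --no-update-deps pytorch
conda install --no-deps pytorch

The first leaves your existing numpy-1.15.1 untouched; the second installs pytorch and nothing else.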
I am still pretty new to Python, and I was wondering if anyone has had this problem before. I have read other threads, but I haven't seen this problem addressed yet. I need to install the GDAL module for Python. I have seen threads saying you need to install GDAL first and then it can be used in Python, but I have also seen others saying that conda install GDAL is enough. When I try the latter, I get this error. Any ideas?
I had the same problem two days ago trying to install GDAL on Debian Jessie.
The solution was using pygdal python package from PyPi.
Just read the instructions at PyPI and follow them; they are a bit different than one expects. In general:
install the required dependencies on your system (e.g. using apt-get install libgdal1-dev)
check which version of GDAL is installed
use pip to install pygdal with a version matching the installed GDAL library
The last step is a bit unusual, but it does the trick.
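On a Debian-like system, the three steps might look like this (the version-matching pattern follows the pygdal instructions on PyPI):

sudo apt-get install libgdal1-dev
gdal-config --version
pip install pygdal=="$(gdal-config --version).*"

where the last line pins pygdal to whatever GDAL version the second command reports.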
This works for Linux. For Windows, my colleagues claim there are ready-made binaries that can be installed.