Many compatibility problems with gluonts - python

I would like to install the gluonts package. The problem is that there are a great many compatibility issues, and I cannot solve each of them manually by trial and error. Is there a general way to resolve the whole thing automatically?
I am working in the PyCharm environment with Python. For virtual environments I am using Anaconda.
The packages I need in any case are Jupyter, NumPy, pandas, mxnet, matplotlib and, of course, gluonts.
I have also tried installing everything in a different order, but that didn't work either: installing gluonts first succeeded, but the problem then appeared when I tried to install mxnet.
Since I'm writing my bachelor's thesis and am accordingly under time pressure, I would be very grateful for your help.
My plan is to implement everything in Python 3.6. I've also tried switching to Python 3.7, but that didn't help either. Next I will try Python 3.5.
When solving the problem, note that I am not completely free in how I work, since I use a university computer and do not have full rights, etc.
EDIT
So I have now found a solution myself for how to get everything working reasonably.
Create a new environment with the Anaconda prompt, using Python 3.9 there; when creating it, run the following:
conda create -n env python=3.9 numpy=1.16.6
conda activate env
pip install mxnet gluonts jupyter
conda deactivate
Now you can select this as the environment in PyCharm as normal. You can also look at it in the Anaconda Navigator and see which packages it contains.
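A programmatic way to double-check this, instead of the Navigator, is to ask the interpreter itself which of the required packages it can see (a minimal sketch using only the standard library; it reports rather than fails when something is missing):

```python
import importlib.util

# Import names of the packages listed above
required = ["jupyter", "numpy", "pandas", "mxnet", "matplotlib", "gluonts"]

for name in required:
    # find_spec returns None if the package is not visible to this interpreter
    spec = importlib.util.find_spec(name)
    print(f"{name}: {'found' if spec else 'MISSING'}")
```

Run this inside the PyCharm interpreter you selected; anything reported MISSING was installed into a different environment.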

The general form is:
pip install gluonts
You should try this:
# support for mxnet models, faster datasets
pip install gluonts[mxnet,pro]
# support for torch models, faster datasets
pip install gluonts[torch,pro]
and then try this example:
import matplotlib.pyplot as plt

from gluonts.dataset.util import to_pandas
from gluonts.dataset.pandas import PandasDataset
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx import Trainer

dataset = get_dataset("airpassengers")

deepar = DeepAREstimator(prediction_length=12, freq="M", trainer=Trainer(epochs=5))
model = deepar.train(dataset.train)

# Make predictions
true_values = to_pandas(list(dataset.test)[0])
true_values.to_timestamp().plot(color="k")

prediction_input = PandasDataset([true_values[:-36], true_values[:-24], true_values[:-12]])
predictions = model.predict(prediction_input)

for color, prediction in zip(["green", "blue", "purple"], predictions):
    prediction.plot(color=f"tab:{color}")

plt.legend(["True values"], loc="upper left", fontsize="xx-large")
plt.show()
Here is the link to the gluonts 0.10.2 module:
https://pypi.org/project/gluonts/
Please state any errors that you see with this.

Related

Why is my Jupyter notebook running ipython kernel instead of python3? Problem with group bys

I've been running into this issue: my Jupyter Lab seems to be running "Python (ipykernel)" instead of just "Python 3". Essentially both of them are Python 3; however, I've found a few issues when using ipykernel (such as simple pandas groupby functions behaving differently), and I don't know why. I didn't feel the need to share any screenshots for this one; hope somebody can help me out, and if need be I can still post them.
I want it to be Python 3 and not ipykernel.
EDIT
Okay, so many of you are saying that it's not that. I've decided to provide screenshots to show that the outputs of my groupby function are completely different.
EXHIBIT A: Using Lab when it just says Python 3
EXHIBIT B: Using Lab when it says Python 3 (ipykernel)
SAME code, SAME dataset, different output? Why?
ipykernel is using Python 3. Jupyter (whether notebook, lab, or any other interface) is and always was using ipykernel as the default kernel for Python. You might be confused because the latest version added the (ipykernel) label (in this PR) to inform users which kernel they are using. There are other kernels like xeus-python.
It is highly, like really highly, unlikely that ipykernel is the source of your problems. You can try downgrading it to a previous version if you believe it worked better for you:
pip install "ipykernel<6"
but actually I would first recommend doing the opposite, that is, ensuring that you are running the latest ipykernel patch release with:
pip install -U ipykernel
Once you confirm that ipykernel is not the source of your problem, I would recommend asking a more detailed question with a reproducible example of the pandas code that you have a problem with.
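To see for yourself which interpreter and which ipykernel version a notebook is using, you can run a quick check in a cell (a minimal sketch; the ipykernel import is guarded, though inside Jupyter it is always available):

```python
import sys

print(sys.executable)  # path of the interpreter the kernel runs on
print(sys.version)     # full Python version string

try:
    import ipykernel
    print("ipykernel", ipykernel.__version__)
except ImportError:
    print("ipykernel is not installed in this environment")
```

If both kernels report the same executable and versions, the different groupby output almost certainly comes from the code or data, not the kernel.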

pytorch - package importing error...i have no idea

Image of code and pytorch and my error
I have no idea why this perfect baseline code is not running on my PC.
I fail to install the "processes" package.
I fail to use the modules of "tools".
Please help me.
The perfect baseline is below:
https://github.com/audio-captioning/dcase-2020-baseline
Please make sure that you are using the same virtual environment in PyCharm (or your IDE) as the one in which you installed the packages.
You can run which python to check the Python interpreter. Ensure that it is the same one into which you installed the packages. Also check with pip list that the installation is in place, and then try the same import statement in a Python terminal.
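The same check can be done from inside Python itself, which avoids any ambiguity about which interpreter the shell resolves (a minimal sketch):

```python
import sys

# The interpreter actually running this code; compare it with the
# interpreter configured in PyCharm and the one pip installed into
print(sys.executable)

# Root of the active (virtual) environment
print(sys.prefix)
```

If the path printed here differs from the interpreter selected in PyCharm, the IDE and pip are using two different environments.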

Numpy cannot be accessed on Jupyter. Is it a problem with the path and how do I fix that?

This is probably a very basic question but I have not been able to solve it for some time.
My goal is to start using Python with Jupyter Notebook for data analytics.
I first downloaded Python 3.7 on OS X 10.9.5. Then I tried to download Anaconda, which failed a few times. Then I downloaded Miniconda and used Wing101. After that I could download Anaconda. However, I did not get Anaconda Navigator to work.
Then I started using Jupyter Notebook from terminal. It works but there are a number of problems:
In Jupyter when I try to import pandas and numpy I get an error:
<ipython-input-1-baf368f80de7> in <module>
----> 1 import pandas as pd
      2 import numpy as np

~/anaconda3/lib/python3.7/site-packages/pandas/__init__.py in <module>
     17 if missing_dependencies:
     18     raise ImportError(
---> 19         "Missing required dependencies {0}".format(missing_dependencies))
     20 del hard_dependencies, dependency, missing_dependencies
     21

ImportError: Missing required dependencies ['numpy']
Numpy is installed though, but it is probably in the wrong place.
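One way to verify where things stand is to ask the interpreter directly which directories it searches and where it would load numpy from (a stdlib-only sketch that prints a message instead of failing if numpy is invisible):

```python
import sys
import importlib.util

print("Interpreter:", sys.executable)
print("Search path:")
for entry in sys.path:
    print(" ", entry)

# find_spec locates the package without importing it
spec = importlib.util.find_spec("numpy")
if spec is None:
    print("numpy is not visible to this interpreter")
else:
    print("numpy would be loaded from:", spec.origin)
```

If the reported location is under Downloads/ENTER while Jupyter runs from /anaconda3, the two installations are simply different environments.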
Another problem is that the Anaconda and Python files are all over my computer:
The Anaconda navigator is at:
/anaconda3
Pip 3.7 is at:
/Library/Frameworks/Python.framework/Versions/3.7/bin/
Numpy is at:
/Users/lsluyser/Downloads/ENTER/lib/python3.7/site-packages/pandas/compat/
Jupyter files are at:
/Users/lsluyser/Downloads/ENTER/lib/python3.7/site-packages/
and also at:
/anaconda3/lib/python3.7/site-packages
My question is:
What is the desired organization of the program files and how do I achieve this?
Should I move all files from Downloads to another folder?
Should numpy be put under /anaconda3/lib/python3.7/site-packages?
Can the fact that Anaconda navigator does not work have to do with its location?
Thank you very much in advance!
I suggest using Miniconda, which is a smaller alternative to Anaconda. Even if you don't, you should download the packages you need, such as numpy, from Anaconda Cloud, which should put the files in the proper location.
Generally (on Windows) the packages should be in the folder C:\Users\<>\Miniconda3\Lib\site-packages; also verify that the environment variable has the necessary paths.
If you're going to work in Python, you will soon realize the need for creating multiple python virtual environments on your computer.
This is because, when working in Python:
You will constantly run into situations that require you to install, upgrade, or downgrade some new module.
Each such install, upgrade, or downgrade could have some unwanted side-effect (something that was working earlier, stops working after the change).
By creating multiple virtual environments, you will be able to perform such installs, upgrades or downgrades within a specific environment, with no risk of affecting your other environments.
Tools such as Anaconda and Miniconda make it easy for you to create and manage such virtual environments.
Under the hood, the creation and management of the virtual environments is probably not much more than setting some environment variables.
I found this to be a good intro to the concept.
For your problem: yes, most likely your numpy problem can be solved by suitably setting environment variables, but I would suggest not attempting that.
Instead, use Anaconda or Miniconda to create an environment, and within that environment use Anaconda or Miniconda to install numpy. You will of course be prompted about any prerequisites that numpy needs.
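If Anaconda Navigator refuses to work, the standard library can create the same kind of isolated environment without conda at all (a minimal sketch; the name myenv and the temporary location are just examples):

```python
import os
import tempfile
import venv

# Create an isolated environment (equivalent to `python -m venv myenv`).
# with_pip=False just keeps this sketch fast; normally you would leave it True
# so pip is bootstrapped into the new environment.
target = os.path.join(tempfile.mkdtemp(), "myenv")
venv.create(target, with_pip=False)

# The environment has its own interpreter and site-packages directory;
# activate it from a shell with `source <target>/bin/activate` (macOS/Linux)
print(os.path.exists(os.path.join(target, "pyvenv.cfg")))  # True
```

Packages installed while the environment is active land inside it, keeping your system Python and Downloads folder untouched.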

ImportError: cannot import name 'moduleTNC' - python

I'm having a problem in Python when I try to import linear_model from the sklearn library: from sklearn import linear_model. I had installed it via pip, simply like this: pip install sklearn. I know that uninstalling and reinstalling sklearn is supposed to avoid this error, but it didn't work. I also installed it via conda, but opening IDLE (is that correct?) gives the same error.
How can I avoid it?
NB: if I use Jupyter from conda, it works as it should.
I had this same problem and resolved it with:
conda remove scipy scikit-learn -y
conda install scipy scikit-learn -y
I saw it here, where many other people said it solved their problems too.
With regards to the following error:
ImportError: cannot import name 'moduleTNC'
It can be solved by renaming moduletnc.cp36-win_amd64.pyd to moduleTNC.cp36-win_amd64.pyd in:
AppData\Roaming\Python\Python36\site-packages\scipy\optimize
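If you prefer to do the rename from Python rather than Explorer, a sketch (the base path assumes a per-user Python 3.6 install on Windows, matching the path above; on other setups nothing happens because the file isn't found):

```python
import os

# Path from the answer above; expanduser resolves the current user's profile
opt_dir = os.path.expanduser(
    r"~\AppData\Roaming\Python\Python36\site-packages\scipy\optimize"
)
src = os.path.join(opt_dir, "moduletnc.cp36-win_amd64.pyd")
dst = os.path.join(opt_dir, "moduleTNC.cp36-win_amd64.pyd")

if os.path.exists(src):
    os.rename(src, dst)
    print("renamed to", dst)
else:
    print("nothing to rename at", src)
```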
I can't mark this as a possible duplicate, so I'm just pasting it here. If that's the wrong behaviour, I'm sorry:
Import module works in Jupyter notebook but not in IDLE
The reason is that the library paths your pip/conda install used are not accessible to Python IDLE. You have to add those library paths to the PYTHONPATH environment variable. To do this, open My Computer > Properties > Advanced system settings.
Under environment variables, look for PYTHONPATH (create it if it does not exist) and at the end add the location of the installed libraries. Refer to this for more details on how to add locations to the variable. Once you do this you will be able to import the libraries. In order to know which locations Python searches for libraries, you can use:
import sys
print(sys.path)
This will give you a list of locations where Python searches for libraries. Once you edit the PYTHONPATH variable, those locations will be reflected here.
Refer to this also in order to know how to add a Python library path.
Note: the tutorial is a reference on how to edit the variable. I encourage you to find the location of the installed libraries and follow the same steps to add it.
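As a quick runtime alternative to editing the environment variable, you can also extend the search path from inside the script itself (a sketch; the directory shown is a hypothetical example, substitute wherever your libraries actually live):

```python
import sys

# Hypothetical example location of installed libraries
extra = r"C:\Users\me\AppData\Roaming\Python\Python36\site-packages"

if extra not in sys.path:
    sys.path.append(extra)

print(extra in sys.path)  # True: imports will now also search this folder
```

This only affects the current process, which makes it handy for testing whether a path problem is really the cause before changing system settings.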

One last issue getting DeepChem running with TensorFlow

I'm trying to install the DeepChem multi objective deep learning code on a MacBook Pro with a Python 2.7 environment. I did a manual install of DeepChem and its required components and all seemed to go well, no errors. When I tried to run the test data I got an error "Failed to load the native TensorFlow runtime". I then installed TensorFlow again in the VirtualEnv Python environment and this again went well. The Tensorflow validation worked OK. I tried the DeepChem test examples again and got the same error "Failed to load the native TensorFlow runtime". Can anyone please suggest a solution? Should I provide a complete error path?
Your question is really general and this is not really a tensorflow problem so much as a virtualenv/dependency question.
That said here are some possible fixes:
Are you sure you have activated the virtualenv? (It should show the environment name in parentheses before the bash prompt.) If not, use:
source activate ENV_NAME
If you are running from an IDE (like PyCharm), are you sure you are using the virtualenv as your interpreter? If not, follow this guide.
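Since this boils down to a virtualenv question, it can help to confirm from inside Python which environment is actually active (a minimal sketch; note that the classic virtualenv tool on Python 2.7 exposes sys.real_prefix rather than sys.base_prefix, so adapt accordingly):

```python
import sys

# Which interpreter is running, and which environment does it belong to?
print("interpreter:", sys.executable)
print("environment:", sys.prefix)

# In an activated venv-style environment, sys.prefix differs from the
# prefix of the base interpreter it was created from
print("inside a virtual environment:", sys.prefix != sys.base_prefix)
```

If this reports that no virtual environment is active, the TensorFlow you installed into the virtualenv is invisible to the interpreter running the DeepChem tests.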
Again, more information about your use case would be very helpful.
