UserWarning: [W094] Model 'en_training' (0.0.0) specifies an under-constrained spaCy version requirement: >=2.1.4.
This can lead to compatibility problems with older versions,
or as new spaCy versions are released, because the model may say it's compatible when it's not.
Consider changing the "spacy_version" in your meta.json to a version range,
with a lower and upper pin. For example: >=3.2.1,<3.3.0
spaCy version 3.2.1
Python version 3.9.7
OS Windows
For spaCy v2 models, the under-constrained requirement >=2.1.4 effectively means >=2.1.4,<2.2.0, so this model will only work with spaCy v2.1.x.
There is no way to convert a v2 model to v3. You can either use the model with v2.1.x or retrain it from scratch with your training data.
pip3 install spacy==2.1.4
This will install the required spaCy version.
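If you control the installed package, you can also make the pin explicit in the model's meta.json, as the warning suggests. A minimal sketch, assuming the model is named en_training as in the warning above; the path is an assumption and should be adjusted to your install location:

import json

# Hypothetical path to the packaged model's meta.json; adjust as needed.
meta_path = "en_training/meta.json"

with open(meta_path) as f:
    meta = json.load(f)

# Pin to the v2.1.x range the model was trained against.
meta["spacy_version"] = ">=2.1.4,<2.2.0"

with open(meta_path, "w") as f:
    json.dump(meta, f, indent=2)

This silences the W094 warning and makes the effective constraint explicit for anyone else installing the model.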
Related
I am using spaCy version 2.2.4 for named entity recognition and wish to use the same version for testing a custom spaCy relation extraction pipeline. Unfortunately, I am facing the below issue while running the custom relation extraction model with that spaCy version.
ModuleNotFoundError: No module named 'thinc.types'
I used the example from the spaCy GitHub repository to train the custom relation extraction pipeline. For training, I used spacy==3.1.4.
Now I need to connect two different models: the named entity recognition model is trained on spaCy v2, whereas the relation extraction model works fine with spaCy v3.
I did some debugging and here are my results:
I read in spaCy GitHub issue 7219 that, to use the relation extraction model with spaCy v2, you should use spacy-transformers==0.6.2. I did exactly that, but with no success. The spacy-transformers page on PyPI says it requires spacy>=3.0.
I did not stop researching there and went to another spaCy GitHub issue, 7910, which suggests thinc version 8.0.3. That version is not compatible with spacy==2.2.4.
The core issue is using spaCy v2 to test the custom spaCy relation extraction pipeline. If that is not possible, one solution would be to use the same spaCy version on both ends. I could implement that easily, but there is another complication: I am also using neuralcoref, which cannot be installed with spaCy v3. So any solution to this problem would help solve that one too.
I am also thinking about using different environments for (NER + coreference) and (relation extraction). Does this sound like a good solution?
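Separate environments can work if you bridge them with a small service. A sketch of the idea, where the spaCy v3 relation extraction pipeline runs behind a tiny Flask endpoint and the v2 environment (NER + neuralcoref) calls it over HTTP. The model path, port, and route are assumptions; doc._.rel is the custom attribute used in spaCy's relation extraction example project:

# rel_service.py -- run inside the spaCy v3 environment
from flask import Flask, request, jsonify
import spacy

app = Flask(__name__)
nlp = spacy.load("training/model-best")  # assumed path to your trained REL pipeline

@app.route("/rel", methods=["POST"])
def rel():
    doc = nlp(request.json["text"])
    # Tuple keys are not JSON-serializable, so stringify them.
    return jsonify({str(k): v for k, v in doc._.rel.items()})

app.run(port=5001)

From the spaCy v2 environment you would then call something like requests.post("http://localhost:5001/rel", json={"text": "Some sentence."}).json(). This keeps the two incompatible dependency sets fully isolated.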
I am trying to download en_vectors_web_lg, but keep getting the below error:
ERROR: Could not install requirement en-vectors-web-lg==3.0.0 from https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-3.0.0/en_vectors_web_lg-3.0.0-py3-none-any.whl#egg=en_vectors_web_lg==3.0.0 because of HTTP error 404 Client Error: Not Found for url: https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-3.0.0/en_vectors_web_lg-3.0.0-py3-none-any.whl for URL https://github.com/explosion/spacy-models/releases/download/en_vectors_web_lg-3.0.0/en_vectors_web_lg-3.0.0-py3-none-any.whl#egg=en_vectors_web_lg==3.0.0
Does spaCy still support en_vectors_web_lg?
I also just updated my spaCy to the latest version.
The naming conventions changed in v3 and the equivalent model is en_core_web_lg. It includes vectors and you can install it like this:
spacy download en_core_web_lg
I would not recommend downgrading to use the old vectors model unless you need to run old code.
If you are concerned about accuracy and have a decent GPU the transformers model, en_core_web_trf, is also worth considering, though it doesn't include word vectors.
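As a quick sanity check that the vectors are there, a minimal sketch (the similarity value will vary):

import spacy

nlp = spacy.load("en_core_web_lg")
doc = nlp("apple banana")
print(doc[0].vector.shape)        # (300,) -- word vectors are included
print(doc[0].similarity(doc[1]))  # cosine similarity between the two tokens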
It looks like en_vectors_web_lg is not supported by spaCy v3.0. The spaCy v3.0 installation guide offers en_core_web_trf instead, which is a transformer-based pipeline.
I have a question about scikit-learn models and backward compatibility.
I have a model (saved using joblib) created in Python 3.5 from scikit-learn 0.21.2, which I then analyze with the package shap version 0.30. Since I upgraded to Ubuntu 20.04 I have Python 3.8 (and newer versions of both scikit-learn and shap).
Because of the new package versions I cannot load the models with Python 3.8, so I made a virtual environment with Python 3.5 and the original package versions.
Now my question is: is there a way to re-dump the models with joblib so I can also open them with Python 3.8? I'd like to re-analyze the model with the newest version of shap (but of course shap has a scikit-learn version requirement that would break the joblib loading).
Alternatively, what other options do I have? (The only thing I do not want is to re-train the model).
There are no standard solutions within scikit-learn. If your model is supported, you can try sklearn-json.
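A minimal sketch with sklearn-json, assuming your estimator is one of the model types it supports (file names are illustrative):

import joblib
import sklearn_json as skljson

# In the Python 3.5 environment with scikit-learn 0.21.2:
model = joblib.load("model.joblib")
skljson.to_json(model, "model.json")  # plain-text, version-independent export

# Later, in the Python 3.8 environment:
model_new = skljson.from_json("model.json")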
Although this does not solve your current issue, in the future you can save your models in formats with fewer compatibility issues – see the Interoperable formats section of scikit-learn's Model persistence page.
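For example, a hedged sketch of exporting to ONNX with skl2onnx, one of the formats discussed there; n_features is an assumption and must match the number of features the model was trained on:

import joblib
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# This must run in the old environment where the model still loads.
model = joblib.load("model.joblib")
n_features = 10  # assumed input dimension
onx = convert_sklearn(
    model,
    initial_types=[("input", FloatTensorType([None, n_features]))],
)
with open("model.onnx", "wb") as f:
    f.write(onx.SerializeToString())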
I have this weird requirement where I have two modules:
chatbot (using Rasa)
image classifier
I want to create a single Flask Web service for both of them
But the issue is that Rasa uses TensorFlow 1.x while my image classifier module is built using TensorFlow 2.x.
I know I cannot have both versions of TensorFlow in one environment (if there's a way, I am not aware of it).
As Rasa doesn't support TensorFlow 2.x yet, the one option I'm left with is to downgrade TensorFlow for the image classifier.
Is there a way to tackle this issue without changing my existing code, or with minimal changes?
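One pattern that needs essentially no changes to your existing code: run each module as its own service in its own virtual environment, and put a thin Flask gateway in front. A sketch; the classifier port and routes are assumptions, while /webhooks/rest/webhook is Rasa's standard REST channel on its default port 5005:

# gateway.py -- thin front that forwards to two backends,
# each running in its own environment with its own TensorFlow version.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    # Rasa server (TF 1.x), started separately with `rasa run`.
    r = requests.post("http://localhost:5005/webhooks/rest/webhook", json=request.json)
    return jsonify(r.json())

@app.route("/classify", methods=["POST"])
def classify():
    # Image classifier service (TF 2.x), assumed to listen on port 5006.
    r = requests.post("http://localhost:5006/predict", json=request.json)
    return jsonify(r.json())

app.run(port=8000)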
I am using Python 3.5, installed and managed with Anaconda. I want to train an NGramModel (from nltk) on some text, but my installation cannot find the module nltk.model.
There are some possible answers to this question (pick the correct one, and explain how to do it):
A different version of nltk can be installed using conda, so that it contains the model module. This is not just an older version (one old enough to still have it would be too old for everything else), but a different version containing the model (or model2) branch of current nltk development.
The version of nltk mentioned in the previous point cannot be installed using conda, but can be installed using pip.
nltk.model is deprecated, better use some other package (explain which package)
there are better options than nltk for training an ngram model, use some other library (explain which library)
none of the above, to train an ngram model the best option is something else (explain what).
try
import nltk
nltk.download('all')
in your notebook
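If nltk.model still isn't found after that: the module was removed from NLTK some time ago, and since NLTK 3.4 the n-gram language models live in nltk.lm instead. A minimal sketch of training a bigram model there (the toy corpus is illustrative):

from nltk.lm import MLE
from nltk.lm.preprocessing import padded_everygram_pipeline

text = [["a", "b", "c"], ["a", "c", "b"]]  # tokenized sentences
n = 2  # bigram model
train_data, vocab = padded_everygram_pipeline(n, text)

lm = MLE(n)
lm.fit(train_data, vocab)
print(lm.score("b", ["a"]))  # P(b | a)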