import sys
import os
import argparse
from setup.settings import hparams
sys.path.append(os.path.realpath(os.path.dirname(__file__)))
sys.path.append(os.path.realpath(os.path.dirname(__file__)) + "/nmt")
from nmt import nmt
import tensorflow.compat.v1 as tf
# Modified autorun from nmt.py (bottom of the file)
# We want to use original argument parser (for validation, etc)
nmt_parser = argparse.ArgumentParser()
nmt.add_arguments(nmt_parser)
# But we have to hack settings from our config in there instead of commandline options
nmt.FLAGS, unparsed = nmt_parser.parse_known_args(['--'+k+'='+str(v) for k,v in hparams.items()])
# And now we can run TF with modified arguments
tf.app.run(main=nmt.main, argv=[os.path.join(os.getcwd(), 'nmt', 'nmt', 'nmt.py')] + unparsed)
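For reference, that list comprehension just turns my hparams dict into CLI-style flags before handing them to nmt's parser. A tiny sketch with made-up values (not my actual config):
hparams_example = {'attention': 'scaled_luong', 'num_train_steps': 5000}
print(['--' + k + '=' + str(v) for k, v in hparams_example.items()])
# ['--attention=scaled_luong', '--num_train_steps=5000']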
So I ran this and got the following error:
ModuleNotFoundError: No module named 'tensorflow.python.layers'
This is my pip list:
tb-nightly 2.3.0a20200711
tensorboard 1.14.0
tensorboard-plugin-wit 1.7.0
tensorflow 1.14.0
tensorflow-estimator 1.14.0
termcolor 1.1.0
tf-estimator-nightly 2.4.0.dev2020071101
tqdm 4.47.0
Related
I am trying to follow this article to use AutoModelForCasualLM from transformers to generate text with BLOOM, but I keep getting an error saying that Python cannot import AutoModelForCasualLM from transformers. I have tried multiple computers and multiple versions of transformers, but I always get the following error (traceback from the latest version of transformers):
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[28], line 1
----> 1 from transformers import AutoTokenizer, AutoModelForCasualLM, BloomConfig
2 from transformers.models.lboom.modeling_bloom import BloomBlock, build_alibi_tensor
ImportError: cannot import name 'AutoModelForCasualLM' from 'transformers' (/mnt/MLDr/venv/lib/python3.10/site-packages/transformers/__init__.py)
Code snippet where the error occurs (first ~10 lines):
import os
import torch
import torch.nn as nn
from collections import OrderedDict
def get_state_dict(shard_num, prefix=None):
    d = torch.load(os.path.join(model_path, f"pytorch_model_{shard_num:05d}-of-00072.bin"))
    return d if prefix is None else OrderedDict((k.replace(prefix, ''), v) for k, v in d.items())

from transformers import AutoTokenizer, AutoModelForCasualLM, BloomConfig
from transformers.models.lboom.modeling_bloom import BloomBlock, build_alibi_tensor

model_path = "./bloom"
config = BloomConfig.from_pretrained(model_path)
device = 'cpu'
transformers-cli env results:
transformers version: 4.25.1
Platform: Linux-5.15.0-58-generic-x86_64-with-glibc2.35
Python version: 3.10.6
Huggingface_hub version: 0.11.1
PyTorch version (GPU?): 1.13.1+cu117 (False)
Tensorflow version (GPU?): 2.11.0 (False)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?: <fill in>
Using distributed or parallel set-up in script?: <fill in>
I'm trying to import categorical_dqn.
When I try the following:
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
I get:
ImportError: cannot import name 'binary_weighted_focal_crossentropy' from 'keras.backend' (C:\Users\tgmjack\anaconda3\lib\site-packages\keras\backend.py)
The advice I found around the internet ("Error importing binary_weighted_focal_crossentropy from keras backend: Cannot import name") is to try importing this stuff first:
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.metrics import binary_focal_crossentropy
However, I end up with the exact same error, caused by the second line of this suggestion:
ImportError: cannot import name 'binary_weighted_focal_crossentropy' from 'keras.backend' (C:\Users\tgmjack\anaconda3\lib\site-packages\keras\backend.py)
####### bonus info ########
I'm running all this on Anaconda.
tensorflow version = 2.9.2
tf agents version = 0.5.0
keras version = 2.9.0
I'm trying to follow this tutorial: https://github.com/tensorflow/agents/blob/master/docs/tutorials/9_c51_tutorial.ipynb
I had a similar problem with tf_agents a few months ago. Doing this fixed it for me:
pip install tf-agents[reverb]
I have the following packages with their respective versions:
keras 2.9.0
tensorflow 2.9.2
tf-agents 0.13.0
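After the reinstall, a quick check like this (just a sketch, not part of the original fix) confirms that the import from the tutorial resolves:
import tensorflow as tf
from tf_agents.agents.categorical_dqn import categorical_dqn_agent

print(tf.__version__)  # e.g. 2.9.2 with the package set above
print(categorical_dqn_agent.CategoricalDqnAgent)  # resolves without the keras.backend ImportError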
I'm getting the following error while importing Keras:
ImportError: cannot import name 'dtensor' from 'tensorflow.compat.v2.experimental' (C:\Users\User\AppData\Roaming\Python\Python38\site-packages\tensorflow\_api\v2\compat\v2\experimental\__init__.py)
Tensorflow v. 2.6, Keras v. 2.6
Does anyone have an idea how to solve this error?
DTensor is part of the TensorFlow 2.9.0 release. To import dtensor you can upgrade tensorflow 2.6 to tensorflow 2.9 as follows:
pip install --upgrade tensorflow==2.9
Now, you can import dtensor either from tensorflow.experimental or from tensorflow.keras as follows:
#Using tensorflow.experimental
from tensorflow.experimental import dtensor
#Using tensorflow.keras
from tensorflow.keras import dtensor
For more information, please refer to this guide. Thank you.
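As a quick sanity check (just a sketch, assuming the upgrade to 2.9 went through), both the Keras import and the dtensor import should now resolve:
import tensorflow as tf
from tensorflow import keras
from tensorflow.experimental import dtensor

print(tf.__version__)     # should report 2.9.x after the upgrade
print(keras.__version__)  # Keras now imports without the dtensor ImportError
print(dtensor)            # the module object, confirming the import resolves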
google cloud speech ImportError: cannot import name 'enums'
I got this error when using the google-cloud-speech API for my project. I'm NOT using pipenv for a virtual environment. I installed the google-cloud-speech API with
pip3 install google-cloud-speech
and
pip3 update google-cloud-speech
environment
win10
python3.6.8
google-api-core 1.14.2
google-auth 1.6.3
google-cloud 0.34.0
google-cloud-core 1.0.3
google-cloud-speech 1.2.0
google-cloud-storage 1.18.0
google-resumable-media 0.3.3
googleapis-common-protos 1.6.0
grpc-google-cloud-speech-v1beta1 1.0.1
enum34 1.1.6
enums 0.0.2
The error contents:
from google.cloud.speech import enums
ImportError: cannot import name 'enums'
I have already tried the following pip commands:
pip3 install enums
pip3 install enum34
pip3 install google-cloud-speech
pip3 upgrade google-cloud-speech
Here is the code. When run in the Google Cloud Shell on GCP, the following code works fine and I don't get the error.
#!/usr/bin/env python
# coding: utf-8
import argparse
import io
import sys
import codecs
import datetime
import locale
from google.cloud import storage
from google.cloud import speech_v1 as speech
from google.cloud.speech_v1 import enums
from google.cloud.speech_v1 import types
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

def transcribe_gcs(gcs_uri):
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types
    client = speech.SpeechClient()
    audio = types.RecognitionAudio(uri=gcs_uri)
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.FLAC,  # wav setting
        sample_rate_hertz=44100,  # hertz must match the raw files
        language_code='ja-JP')  # for Japanese language
    operation = client.long_running_recognize(config, audio)
    print('Waiting for operation to complete...')
    operationResult = operation.result()
    d = datetime.datetime.today()
    today = d.strftime("%Y%m%d-%H%M%S")
    fout = codecs.open('output{}.txt'.format(today), 'a', 'utf-8')
    for result in operationResult.results:
        for alternative in result.alternatives:
            fout.write(u'{}\n'.format(alternative.transcript))
            print('alternative.transcript===', format(alternative.transcript))
    fout.close()
    print('alternative.transcript===', format(alternative.transcript))
    print('operationResult===', operationResult)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument(
        'path', help='GCS path for audio file to be recognized')
    args = parser.parse_args()
    transcribe_gcs(args.path)
Thanks in advance.
By using pip install google-cloud-speech, you are getting the latest version, currently V2.
In V2, enums and types have been removed and are no longer needed.
https://github.com/googleapis/python-speech/blob/master/UPGRADING.md#enums-and-types
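If you stay on the new version, the V2-style call looks roughly like this (a sketch only; it assumes google-cloud-speech >= 2.0, valid credentials, and a made-up GCS URI):
from google.cloud import speech

client = speech.SpeechClient()
# enums/types are gone: the classes hang directly off the speech module
audio = speech.RecognitionAudio(uri="gs://my-bucket/audio.flac")  # hypothetical URI
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
    sample_rate_hertz=44100,
    language_code='ja-JP',
)
operation = client.long_running_recognize(config=config, audio=audio)
for result in operation.result().results:
    print(result.alternatives[0].transcript)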
Should be: from google.cloud.speech_v1 import enums
Try using:
from google.cloud.language_v1 import *
Then you can access all of the following directly:
LanguageServiceAsyncClient
AnalyzeEntitiesRequest
AnalyzeEntitiesResponse
AnalyzeEntitySentimentRequest
AnalyzeEntitySentimentResponse
AnalyzeSentimentRequest
AnalyzeSentimentResponse
AnalyzeSyntaxRequest
AnalyzeSyntaxResponse
AnnotateTextRequest
AnnotateTextResponse
ClassificationCategory
ClassifyTextRequest
ClassifyTextResponse
DependencyEdge
Document
EncodingType
Entity
EntityMention
LanguageServiceClient
PartOfSpeech
Sentence
Sentiment
TextSpan
Token
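For example, a minimal sketch (assuming google-cloud-language >= 2.x and application-default credentials; the sample text is made up):
from google.cloud.language_v1 import Document, LanguageServiceClient

client = LanguageServiceClient()
document = Document(
    content="The new release works great.",  # hypothetical sample text
    type_=Document.Type.PLAIN_TEXT,
)
response = client.analyze_sentiment(request={"document": document})
print(response.document_sentiment.score)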
I've installed pytorch and fastai via conda:
conda list
...
fastai 1.0.28 py_1 fastai
pytorch 1.0.0 py3.6_1 pytorch
torchtext 0.3.1 <pip>
torchvision 0.2.1 py_2 pytorch
I'm using one of the fastai models.
The code to load the model is this (the very last line is the one that fails):
import numpy as np
import torch
from fastai import untar_data, URLs
import pickle
from fastai.text import get_language_model
from torchtext import data
# puzzling the pieces together
# get weights and itos
model_path = untar_data(URLs.WT103, data=False)
fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
wgts_fname, itos_fname = fnames
itos = pickle.load(open(itos_fname, 'rb'))
wgts = torch.load(wgts_fname, map_location=lambda storage, loc: storage)
It produces the error:
dyld: Symbol not found: _PySlice_Unpack
Referenced from: /anaconda3/envs/t1/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace
After browsing SO, I found this related question:
dyld: Symbol not found: error how to resolve this issue
Apparently an error about dyld symbols is related to missing dependencies/broken paths to binaries.
In this case it looks like a pytorch internal problem to me.
How can something like this happen when using a package manager like conda?
My operating system is macOS 10.14.2.