AttributeError: 'Vocab' object has no attribute 'stoi' - python

While trying to run a training script, after resolving a few other error messages I've come across this one. Does anyone know what is happening here?
Batch size > 1 not implemented! Falling back to batch_size = 1 ...
Building multi-modal model...
Loading model parameters.
Traceback (most recent call last):
File "translate_mm.py", line 166, in <module>
main()
File "translate_mm.py", line 98, in main
use_filter_pred=False)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/IO.py", line 198, in build_dataset
use_filter_pred=use_filter_pred)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 75, in __init__
out_examples = list(out_examples)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 69, in <genexpr>
out_examples = (self._construct_example_fromlist(
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 68, in <genexpr>
example_values = ([ex[k] for k in keys] for ex in examples_iter)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 265, in _dynamic_dict
src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 265, in <listcomp>
src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1178, in __getattr__
type(self).__name__, name))
AttributeError: 'Vocab' object has no attribute 'stoi'
which refers to
def _dynamic_dict(self, examples_iter):
    for example in examples_iter:
        src = example["src"]
        src_vocab = torchtext.vocab.Vocab(Counter(src))
        self.src_vocabs.append(src_vocab)
        # Mapping source tokens to indices in the dynamic dict.
        src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
        example["src_map"] = src_map

        if "tgt" in example:
            tgt = example["tgt"]
            mask = torch.LongTensor(
                [0] + [src_vocab.stoi[w] for w in tgt] + [0])
            example["alignment"] = mask
        yield example
Note: the original model was made with a much older version of torchtext; I am guessing the error is related to that, but I am simply too inexperienced to know for sure.
Does anyone have an idea? Googling this provided no significant results.
regards,
U.

You must use get_stoi()[w]. This is for the newer versions of torchtext, after the legacy API was removed. You can also use get_itos(), which returns a list of tokens.
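To make that concrete, here is a minimal sketch of the fix, assuming torchtext 0.12 or later (where the legacy API was removed); the sample tokens are made up, and the vocab is built with the new torchtext.vocab.vocab() factory rather than the removed Vocab constructor:

from collections import Counter
import torch
import torchtext

src = ["a", "walk", "in", "the", "park"]
# The legacy torchtext.vocab.Vocab(Counter(...)) constructor is gone;
# the new API builds a Vocab through the vocab() factory function.
src_vocab = torchtext.vocab.vocab(Counter(src))

# stoi is no longer an attribute; get_stoi() returns the token -> index dict.
stoi = src_vocab.get_stoi()
src_map = torch.LongTensor([stoi[w] for w in src])

# get_itos() returns the reverse mapping as a list (index -> token).
itos = src_vocab.get_itos()

In the new API the Vocab object should also support direct indexing (src_vocab[w]), but fetching the dict once with get_stoi() avoids re-fetching it for every token in the list comprehension.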

Related

Facing issues while converting a pre-trained PyTorch model to CoreML

I'm trying to convert a pre-trained model from PyTorch to CoreML and have created a script to do so. I'm able to load and convert the model to TorchScript using both methods (i.e., tracing and scripting).
However, calling the coremltools.convert() method on the traced or scripted model throws an error.
The scripts for both methods are included below, along with the errors thrown.
System Information
MacOS = 12.4
Python = 3.9
protobuf = 3.19.0
coremltools = 6.0b1
torch = 1.10.2
torchvision = 0.11.3
Note - I have tried multiple versions of the libraries mentioned above, but that did not help.
Method 1 -> Tracing
Code -
import coremltools
import numpy as np
import torch
import torchvision

def do_trace(in_model, in_input):
    model_trace = torch.jit.trace(in_model, in_input)
    model_trace.eval()
    return model_trace

def dict_to_tuple(out_dict):
    if "masks" in out_dict.keys():
        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]

class PredictionModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)

    def forward(self, in_input):
        output = self.model(in_input)
        return dict_to_tuple(output[0])

inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, 300, 300)))
model = PredictionModel().eval()
with torch.no_grad():
    output = model(inp)
trace_model = do_trace(model, inp)
ml_model = coremltools.convert(trace_model, inputs=[coremltools.TensorType(shape=(1, 3, 300, 300))])
print(ml_model)
Error -
Converting PyTorch Frontend ==> MIL Ops: 3%|▎ | 74/2627 [00:00<00:05, 436.01 ops/s]
Traceback (most recent call last):
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 91, in _perform_torch_convert
prog = converter.convert()
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 263, in convert
convert_nodes(self.context, self.graph)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 89, in convert_nodes
add_op(context, node)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 3973, in reciprocal
context.add(mb.inverse(x=inputs[0], name=node.name))
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 63, in add_op
return cls._add_op(op_cls, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/builder.py", line 191, in _add_op
new_op.type_value_inference()
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 244, in type_value_inference
output_vals = self._auto_val(output_types)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 354, in _auto_val
builtin_val.val = v
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/types/type_tensor.py", line 93, in val
raise ValueError(
ValueError: tensor should have value of type ndarray, got <class 'numpy.float32'> instead
Method 2 -> Scripting
Code -
import coremltools
import torch
import torchvision
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
script_model = torch.jit.script(model)
ml_model = coremltools.convert(script_model, inputs=[coremltools.TensorType(shape=(1, 3, 300, 300))])
print(ml_model)
Error -
WARNING:root:Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/techlead/PycharmProjects/conversion_demo/main.py", line 8, in
ml_model = coremltools.convert(script_model, inputs=[coremltools.TensorType(shape=(1, 3, 300, 300))])
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 426, in convert
mlmodel = mil_convert(
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 182, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 209, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 272, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 104, in call
return load(*args, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 51, in load
converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 158, in init
raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 478, in _expand_and_optimize_ir
graph, params_dict = TorchConverter._jit_pass_lower_graph(graph, torchscript)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 423, in _jit_pass_lower_graph
_lower_graph_block(graph)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 402, in _lower_graph_block
module = getattr(node_to_module_map[_input], attr_name)
KeyError: images.7 defined in (%images.7 : torch.torchvision.models.detection.image_list.ImageList, %targets.31 : Dict(str, Tensor)[]? = prim::TupleUnpack(%405)
)
If you try to run any of the above snippets, you'll see that the model gets successfully converted to TorchScript (trace and script), but the last step, converting the TorchScript model to CoreML, fails. Please have a look at this issue and let me know how I can move forward with this. Also, if I'm doing something wrong (for example, passing the inputs incorrectly), let me know as well. This is my first time doing this, so I'm kind of a noob. Any help is appreciated. Thank you!

Getting ValueError while training facenet

I'm trying this code https://github.com/arsfutura/face-recognition , but while running sh tasks/train.sh images/ I'm getting a ValueError:
images/rah/ra.jpg
/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/utils/detect_face.py:146: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at ../torch/csrc/utils/python_arg_parser.cpp:882.)
bb = mask.nonzero().float().flip(1)
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pi/face-recognition/training/train.py", line 99, in
main()
File "/home/pi/face-recognition/training/train.py", line 84, in main
embeddings, labels, class_to_idx = load_data(args, features_extractor)
File "/home/pi/face-recognition/training/train.py", line 61, in load_data
embeddings, labels = dataset_to_embeddings(dataset, features_extractor)
File "/home/pi/face-recognition/training/train.py", line 41, in dataset_to_embeddings
_, embedding = features_extractor(transform(Image.open(img_path).convert('RGB')))
File "/home/pi/face-recognition/face_recognition/face_features_extractor.py", line 26, in call
return self.extract_features(img)
File "/home/pi/face-recognition/face_recognition/face_features_extractor.py", line 15, in extract_features
bbs, _ = self.aligner.detect(img)
File "/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/mtcnn.py", line 308, in detect
self.device
File "/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/utils/detect_face.py", line 66, in detect_face
tmp[(dy[k] - 1):edy[k], (dx[k] - 1):edx[k], :] = img[(y[k] - 1):ey[k], (x[k] - 1):ex[k], :]
ValueError: could not broadcast input array from shape (0,1364,3) into shape (0,0,3)
I even tried hardcoding tmp = np.zeros((0,1364, 3)) at line 65 in detect_face.py just to test, but no luck.
Why don't you use Facenet within deepface? You just pass the exact image paths as a pair and it builds a face recognition pipeline; the verify function handles face detection and alignment in the background.
#!pip install deepface
from deepface import DeepFace
obj = DeepFace.verify("img1.jpg", "img2.jpg", model_name = 'Facenet')
print(obj["verified"])
Or you can find an identity in a database similarly. Here, you are expected to store facial images with a .jpg or .png extension in a folder and pass that folder as the database path.
df = DeepFace.find("img1.jpg", db_path="C:/my_db", model_name = 'Facenet')
print(df.head())
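A hedged side note, since the original traceback comes from a failed face detection: recent deepface releases also expose an enforce_detection flag on verify (worth checking against your installed version), which keeps it from raising when no face can be located in an image:

from deepface import DeepFace

# enforce_detection=False makes deepface skip, rather than raise on,
# images where the detector cannot locate a face.
obj = DeepFace.verify("img1.jpg", "img2.jpg",
                      model_name="Facenet",
                      enforce_detection=False)
print(obj["verified"])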

Integer, multi-objective optimization with Platypus (Python)

I am exploring the Platypus library for multi-objective optimization in Python. It appears to me that Platypus should support variables (optimization parameters) as integers out of the box; however, this simple problem (two objectives, three variables, no constraints, and Integer variables with SMPSO):
from platypus import *

def my_function(x):
    """Some objective function."""
    return [-x[0] ** 2 - x[2] ** 2, x[1] - x[0]]

def AsInteger():
    problem = Problem(3, 2)  # define 3 decision variables and 2 objectives (and no constraints)
    problem.directions[:] = Problem.MAXIMIZE
    int1 = Integer(-50, 50)
    int2 = Integer(-50, 50)
    int3 = Integer(-50, 50)
    problem.types[:] = [int1, int2, int3]
    problem.function = my_function

    algorithm = SMPSO(problem)
    algorithm.run(10000)
This results in:
Traceback (most recent call last):
File "D:\MyProjects\Drilling\test_platypus.py", line 62, in
AsInteger()
File "D:\MyProjects\Drilling\test_platypus.py", line 19, in AsInteger
algorithm.run(10000)
File "build\bdist.win-amd64\egg\platypus\core.py", line 405, in run
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 820, in step
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 838, in iterate
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1008, in _update_velocities
TypeError: unsupported operand type(s) for -: 'list' and 'list'
Similarly, if I try to use another optimization technique in Platypus (CMAES instead of SMPSO):
Traceback (most recent call last):
File "D:\MyProjects\Drilling\test_platypus.py", line 62, in
AsInteger()
File "D:\MyProjects\Drilling\test_platypus.py", line 19, in AsInteger
algorithm.run(10000)
File "build\bdist.win-amd64\egg\platypus\core.py", line 405, in run
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1074, in step
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1134, in initialize
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1298, in iterate
File "build\bdist.win-amd64\egg\platypus\core.py", line 378, in evaluate_all
File "build\bdist.win-amd64\egg\platypus\evaluator.py", line 88, in evaluate_all
File "build\bdist.win-amd64\egg\platypus\evaluator.py", line 55, in run_job
File "build\bdist.win-amd64\egg\platypus\core.py", line 345, in run
File "build\bdist.win-amd64\egg\platypus\core.py", line 518, in evaluate
File "build\bdist.win-amd64\egg\platypus\core.py", line 160, in call
File "build\bdist.win-amd64\egg\platypus\types.py", line 147, in decode
File "build\bdist.win-amd64\egg\platypus\tools.py", line 521, in gray2bin
TypeError: 'float' object has no attribute '__getitem__'
I get other types of error messages with other algorithms (OMOPSO, GDE3), while NSGAIII, NSGAII, SPEA2, etc. appear to be working.
Has anyone ever encountered such issues? Maybe I am specifying the problem in the wrong way?
Thank you in advance for any suggestion.
Andrea.
Try changing the way you add the problem type:
problem.types[:] = [Integer(-50, 50), Integer(-50, 50), Integer(-50, 50)]
It could work this way.
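Since the question notes that NSGA-II and the other genetic algorithms already handle Integer types, here is a minimal sketch of that workaround, keeping the problem definition from the question unchanged:

from platypus import NSGAII, Problem, Integer

def my_function(x):
    """Some objective function."""
    return [-x[0] ** 2 - x[2] ** 2, x[1] - x[0]]

problem = Problem(3, 2)  # 3 decision variables, 2 objectives
problem.directions[:] = Problem.MAXIMIZE
problem.types[:] = [Integer(-50, 50), Integer(-50, 50), Integer(-50, 50)]
problem.function = my_function

algorithm = NSGAII(problem)
algorithm.run(10000)

# Integer variables are stored in an encoded (gray-coded) form,
# so each one is decoded through its type before printing.
for solution in algorithm.result:
    values = [t.decode(v) for t, v in zip(problem.types, solution.variables)]
    print(values, solution.objectives)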

Python - NaiveBayesClassifier - TypeError: 'float' object is not iterable

I am trying to train some data for a classification tool I am building. I have done some simple examples and it works fine.
I am now trying to use some data from work (which is what the tool will be used on), and I am getting a TypeError: 'float' object is not iterable error; the traceback is here:
Traceback (most recent call last):
File "C:/Users/nicholas/Desktop/machineTraining/classLearning.py", line 13, in <module>
cl = NaiveBayesClassifier(train)
File "C:\Users\nicholas\AppData\Local\Programs\Python\Python36-32\lib\site-packages\textblob\classifiers.py", line 205, in __init__
super(NLTKClassifier, self).__init__(train_set, feature_extractor, format, **kwargs)
File "C:\Users\nicholas\AppData\Local\Programs\Python\Python36-32\lib\site-packages\textblob\classifiers.py", line 139, in __init__
self._word_set = _get_words_from_dataset(self.train_set) #Keep a hidden set of unique words.
File "C:\Users\nicholas\AppData\Local\Programs\Python\Python36-32\lib\site-packages\textblob\classifiers.py", line 63, in _get_words_from_dataset
return set(all_words)
This is my code:
df = pd.read_csv("C:/Users/nicholas\Desktop/trainData.csv", encoding='latin-1')
df['train'] = df[['Summary', 'Primary Classification']].apply(tuple, axis=1)
aTrain = df['train'].values.tolist()
train = aTrain
cl = NaiveBayesClassifier(train)
Any ideas on what is going wrong?

How to interface Pyomo with GLPK?

opt = SolverFactory("glpk")
opt.options["mipgap"] = 0.05
opt.options["FeasibilityTol"] = 1e-05
solver_manager = SolverManagerFactory("serial")
# results = solver_manager.solve(instance, opt=opt, tee=True, timelimit=None, mipgap=0.1)
results = solver_manager.solve(model, opt=opt, tee=True, timelimit=None)
# sends results to stdout
# results.write()

def pyomo_save_results(options=None, instance=None, results=None):
    OUTPUT = open(r'Results_generic_hub.txt', 'w')
    print(results, file=OUTPUT)
    OUTPUT.close()
It generates the following error. GLPK is installed, and glpsol --help works from any directory. Is this a problem with the GLPK module or with the model itself? Environment: Conda, Mac OS Yosemite.
File "<ipython-input-7-ba156f9322b2>", line 7, in <module>
results = solver_manager.solve(model, opt=opt, tee=True,timelimit=None)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/async_solver.py", line 34, in solve
return self.execute(*args, **kwds)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/manager.py", line 107, in execute
ah = self.queue(*args, **kwds)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/manager.py", line 122, in queue
return self._perform_queue(ah, *args, **kwds)
File "/anaconda/lib/python3.6/site-
packages/pyomo/opt/parallel/local.py", line 59, in _perform_queue
results = opt.solve(*args, **kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 582, in solve
self._presolve(*args, **kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/solver/shellcmd.py", line 196, in _presolve
OptSolver._presolve(self, *args, **kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 661, in _presolve
**kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/solvers.py", line 729, in _convert_problem
**kwds)
File "/anaconda/lib/python3.6/site-packages/pyomo/opt/base/convert.py", line 110, in convert_problem
problem_files, symbol_map = converter.apply(*tmp, **tmpkw)
File "/anaconda/lib/python3.6/site-packages/pyomo/solvers/plugins/converter/model.py", line 86, in apply
io_options=io_options)
File "/anaconda/lib/python3.6/site-packages/pyomo/core/base/block.py", line 1646, in write
io_options)
File "/anaconda/lib/python3.6/site-packages/pyomo/repn/plugins/cpxlp.py", line 163, in __call__
include_all_variable_bounds=include_all_variable_bounds)
File "/anaconda/lib/python3.6/site-packages/pyomo/repn/plugins/cpxlp.py", line 575, in _print_model_LP
" cannot write legal LP file" % str(model.name))
ValueError: ERROR: No objectives defined for input model 'unknown'; cannot write legal LP file
The error you are seeing:
"ERROR: No objectives defined for input model 'unknown'; cannot write legal LP file"
indicates that Pyomo cannot find an active Objective component on your model (either you never added one to the model, or the Objective component(s) were all deactivated). Either way, valid LP files (which is how Pyomo interfaces with GLPK) require an objective. Fixing your model by adding an Objective should resolve this error.
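For illustration, a minimal sketch of adding an Objective to a Pyomo model; the model and variable names here are hypothetical, not taken from the question:

from pyomo.environ import ConcreteModel, Var, Objective, NonNegativeReals, minimize

model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
# Without an active Objective like this one, Pyomo cannot write a
# legal LP file for GLPK and raises the error above.
model.cost = Objective(expr=2 * model.x + 3, sense=minimize)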
Try this code at the end of the script:
instance = model.create()
instance.pprint()
opt = SolverFactory("glpk")
results = opt.solve(instance)
print(results)
