I'm trying the code from https://github.com/arsfutura/face-recognition , but while running sh tasks/train.sh images/ I'm getting a ValueError:
images/rah/ra.jpg
/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/utils/detect_face.py:146: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at ../torch/csrc/utils/python_arg_parser.cpp:882.)
bb = mask.nonzero().float().flip(1)
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pi/face-recognition/training/train.py", line 99, in
main()
File "/home/pi/face-recognition/training/train.py", line 84, in main
embeddings, labels, class_to_idx = load_data(args, features_extractor)
File "/home/pi/face-recognition/training/train.py", line 61, in load_data
embeddings, labels = dataset_to_embeddings(dataset, features_extractor)
File "/home/pi/face-recognition/training/train.py", line 41, in dataset_to_embeddings
_, embedding = features_extractor(transform(Image.open(img_path).convert('RGB')))
File "/home/pi/face-recognition/face_recognition/face_features_extractor.py", line 26, in call
return self.extract_features(img)
File "/home/pi/face-recognition/face_recognition/face_features_extractor.py", line 15, in extract_features
bbs, _ = self.aligner.detect(img)
File "/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/mtcnn.py", line 308, in detect
self.device
File "/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/utils/detect_face.py", line 66, in detect_face
tmp[(dy[k] - 1):edy[k], (dx[k] - 1):edx[k], :] = img[(y[k] - 1):ey[k], (x[k] - 1):ex[k], :]
ValueError: could not broadcast input array from shape (0,1364,3) into shape (0,0,3)
I even tried hard-coding tmp = np.zeros((0, 1364, 3)) at line 65 in detect_face.py just to test, but no luck.
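For reference, here is a minimal sketch that isolates the detection step on the offending file (assuming only facenet_pytorch and Pillow, and that images/rah/ra.jpg is the image being processed when it crashes):

from PIL import Image
from facenet_pytorch import MTCNN

# Run MTCNN detection on the single image the training run was processing;
# boxes is None if no face is detected at all.
mtcnn = MTCNN(keep_all=True)
img = Image.open('images/rah/ra.jpg').convert('RGB')
boxes, probs = mtcnn.detect(img)
print(boxes, probs)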
Why don't you use FaceNet within deepface? You just pass the exact image paths as a pair and it builds a face recognition pipeline; the verify function handles face detection and alignment in the background.
#!pip install deepface
from deepface import DeepFace
obj = DeepFace.verify("img1.jpg", "img2.jpg", model_name='Facenet')
print(obj["verified"])
Or you can find an identity in a database similarly. Here, you are expected to store facial images with a .jpg or .png extension in a folder and pass that folder as the database path.
df = DeepFace.find("img1.jpg", db_path="C:/my_db", model_name='Facenet')
print(df.head())
I'm trying to convert a pre-trained model from PyTorch to CoreML, and I have created a script to do so. I'm able to load the model and convert it to TorchScript using both methods (i.e. tracing and scripting).
However, when calling the coremltools.convert() method on the traced or scripted model, it throws an error.
I have included the scripts for both methods along with the errors thrown.
System Information
MacOS = 12.4
Python = 3.9
protobuf = 3.19.0
coremltools = 6.0b1
torch = 1.10.2
torchvision = 0.11.3
Note - I have tried multiple versions of the libraries mentioned above, but that did not help in any way.
Method 1 -> Tracing
Code -
import coremltools
import numpy as np
import torch
import torchvision

def do_trace(in_model, in_input):
    model_trace = torch.jit.trace(in_model, in_input)
    model_trace.eval()
    return model_trace

def dict_to_tuple(out_dict):
    if "masks" in out_dict.keys():
        return out_dict["boxes"], out_dict["scores"], out_dict["labels"], out_dict["masks"]
    return out_dict["boxes"], out_dict["scores"], out_dict["labels"]

class PredictionModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)

    def forward(self, in_input):
        output = self.model(in_input)
        return dict_to_tuple(output[0])

inp = torch.Tensor(np.random.uniform(0.0, 250.0, size=(1, 3, 300, 300)))
model = PredictionModel().eval()

with torch.no_grad():
    output = model(inp)

trace_model = do_trace(model, inp)
ml_model = coremltools.convert(trace_model, inputs=[coremltools.TensorType(shape=(1, 3, 300, 300))])
print(ml_model)
Error -
Converting PyTorch Frontend ==> MIL Ops: 3%|▎ | 74/2627 [00:00<00:05, 436.01 ops/s]
Traceback (most recent call last):
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 91, in _perform_torch_convert
prog = converter.convert()
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 263, in convert
convert_nodes(self.context, self.graph)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 89, in convert_nodes
add_op(context, node)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 3973, in reciprocal
context.add(mb.inverse(x=inputs[0], name=node.name))
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 63, in add_op
return cls._add_op(op_cls, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/builder.py", line 191, in _add_op
new_op.type_value_inference()
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 244, in type_value_inference
output_vals = self._auto_val(output_types)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 354, in _auto_val
builtin_val.val = v
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/mil/types/type_tensor.py", line 93, in val
raise ValueError(
ValueError: tensor should have value of type ndarray, got <class 'numpy.float32'> instead
Method 2 -> Scripting
Code -
import coremltools
import torch
import torchvision
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
script_model = torch.jit.script(model)
ml_model = coremltools.convert(script_model, inputs=[coremltools.TensorType(shape=(1, 3, 300, 300))])
print(ml_model)
Error -
WARNING:root:Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1491, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/techlead/PycharmProjects/conversion_demo/main.py", line 8, in
ml_model = coremltools.convert(script_model, inputs=[coremltools.TensorType(shape=(1, 3, 300, 300))])
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 426, in convert
mlmodel = mil_convert(
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 182, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 209, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 272, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 104, in call
return load(*args, **kwargs)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 51, in load
converter = TorchConverter(torchscript, inputs, outputs, cut_at_symbols)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 158, in init
raw_graph, params_dict = self._expand_and_optimize_ir(self.torchscript)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 478, in _expand_and_optimize_ir
graph, params_dict = TorchConverter._jit_pass_lower_graph(graph, torchscript)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 423, in _jit_pass_lower_graph
_lower_graph_block(graph)
File "/Users/techlead/PycharmProjects/conversion_demo/venv/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 402, in _lower_graph_block
module = getattr(node_to_module_map[_input], attr_name)
KeyError: images.7 defined in (%images.7 : torch.torchvision.models.detection.image_list.ImageList, %targets.31 : Dict(str, Tensor)[]? = prim::TupleUnpack(%405)
)
If you try to run either of the above snippets you'll see that the model gets successfully converted to TorchScript (trace and script), but the last step, converting the TorchScript model to CoreML, fails. Please have a look at this issue and let me know how I can move forward with it. Also, if I'm doing something wrong (e.g. passing the inputs incorrectly), let me know as well. This is my first time doing this, so I'm kind of a noob. Any help is appreciated. Thank you!
I am implementing HOG (Histogram of Oriented Gradients) with the code below.
import matplotlib
from skimage.io import imread, imshow
from skimage.feature import hog
from skimage import exposure
from skimage import io

img = imread('cr7.jpeg')
io.imshow(img)

MC = True     # for color images
# MC = False  # for grayscale images

hogfv, hog_image = hog(img, orientations=9,
                       pixels_per_cell=(32, 32),
                       cells_per_block=(4, 4),
                       visualize=True,
                       channel_axis=MC)

hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 5))
imshow(hog_image_rescaled)
I don't know why I am getting this dimension error.
Traceback (most recent call last):
File "main.py", line 22, in <module>
channel_axis=MC)
File "/Volumes/DATA/Djangoproject/HOG/env/lib/python3.7/site-packages/skimage/_shared/utils.py", line 427, in fixed_func
out = func(*new_args, **kwargs)
File "/Volumes/DATA/Djangoproject/HOG/env/lib/python3.7/site-packages/skimage/_shared/utils.py", line 348, in fixed_func
return func(*args, **kwargs)
File "/Volumes/DATA/Djangoproject/HOG/env/lib/python3.7/site-packages/skimage/feature/_hog.py", line 286, in hog
dtype=float_dtype
ValueError: negative dimensions are not allowed
Can anyone help me find a solution to this error?
The error log says there is a problem in "line 22"
Traceback (most recent call last):
File "main.py", line 22, in <module>
channel_axis=MC)
...
ValueError: negative dimensions are not allowed
channel_axis: it's the "channel axis"! So I guess it expects an integer rather than a bool value.
It is confirmed in the source code:
channel_axis : int or None, optional
If None, the image is assumed to be a grayscale (single channel) image.
Otherwise, this parameter indicates which axis of the array corresponds
to channels.
I think you were trying to use multichannel, which is deprecated:
multichannel : boolean, optional
If True, the last image dimension is considered as a color channel,
otherwise as spatial. This argument is deprecated: specify channel_axis instead.
Adding the following made it work in my case:
channel_axis=-1
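For completeness, a minimal corrected call (assuming a standard RGB image, where the colour channels sit on the last axis):

hogfv, hog_image = hog(img, orientations=9,
                       pixels_per_cell=(32, 32),
                       cells_per_block=(4, 4),
                       visualize=True,
                       channel_axis=-1)  # channels on the last axis of an RGB image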
I'm trying to run a training script. After resolving a few error messages I've come across this one. Does anyone know what is happening here?
Batch size > 1 not implemented! Falling back to batch_size = 1 ...
Building multi-modal model...
Loading model parameters.
Traceback (most recent call last):
File "translate_mm.py", line 166, in <module>
main()
File "translate_mm.py", line 98, in main
use_filter_pred=False)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/IO.py", line 198, in build_dataset
use_filter_pred=use_filter_pred)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 75, in __init__
out_examples = list(out_examples)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 69, in <genexpr>
out_examples = (self._construct_example_fromlist(
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 68, in <genexpr>
example_values = ([ex[k] for k in keys] for ex in examples_iter)
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 265, in _dynamic_dict
src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
File "/content/drive/My Drive/Thesis/thesis_code/onmt/io/TextDataset.py", line 265, in <listcomp>
src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1178, in __getattr__
type(self).__name__, name))
AttributeError: 'Vocab' object has no attribute 'stoi'
which refers to
def _dynamic_dict(self, examples_iter):
    for example in examples_iter:
        src = example["src"]
        src_vocab = torchtext.vocab.Vocab(Counter(src))
        self.src_vocabs.append(src_vocab)
        # Mapping source tokens to indices in the dynamic dict.
        src_map = torch.LongTensor([src_vocab.stoi[w] for w in src])
        example["src_map"] = src_map

        if "tgt" in example:
            tgt = example["tgt"]
            mask = torch.LongTensor(
                [0] + [src_vocab.stoi[w] for w in tgt] + [0])
            example["alignment"] = mask
        yield example
Note: the original model was made with a much older version of torchtext. I am guessing the error is related to that, but I am simply too inexperienced to know for sure.
Does anyone have an idea? Googling this provided no significant results.
regards,
U.
You must use get_stoi()[w]. This applies to the newer versions of torchtext, after the legacy code was removed. You can also use get_itos(), which returns a list of tokens.
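For illustration, a minimal sketch of how the two offending lines in _dynamic_dict could look with the post-legacy API (assuming src_vocab is a successfully built new-style torchtext Vocab):

stoi = src_vocab.get_stoi()  # token -> index mapping as a plain dict
src_map = torch.LongTensor([stoi[w] for w in src])
mask = torch.LongTensor([0] + [stoi[w] for w in tgt] + [0])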
I have a machine learning model in PyTorch saved as a .pt file, and I'm trying to convert it to a CoreML model. Here is my code:
import coremltools as ct
import torch
import torchvision
from torchvision import transforms
from PIL import Image
# Image processing
input_image = Image.open("example.png")
input_image = input_image.convert('RGB')
preprocess = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor(), transforms.Normalize(mean=[0.5, 0.2, 0.1], std=[0.5, 0.3, 0.7])])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)
# Model loading and tracing
model = torch.load("model.pt")
trace = torch.jit.trace(model, input_batch)
# Convert model to CoreML
mlmodel = ct.convert(trace, inputs=[ct.ImageType(name="input_1", shape=input_batch.shape, bias=[1, 0.2/0.3, 0.1/0.7], scale=1./(255*0.67))])
It's the last line where I get an error.
EDIT: the full error traceback is below:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 175, in convert
mlmodel = mil_convert(
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 128, in mil_convert
proto = mil_convert_to_proto(model, convert_from, convert_to,
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 171, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 85, in __call__
return load(*args, **kwargs)
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 81, in load
raise e
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 73, in load
prog = converter.convert()
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 227, in convert
convert_nodes(self.context, self.graph)
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 54, in convert_nodes
raise RuntimeError(
RuntimeError: PyTorch convert function for op 'type_as' not implemented.
I'm not sure what this means. How do I solve it?
Thanks!
I am working on portfolio optimisation with PyPortfolioOpt.
I have the prices of my underlying assets starting from 2015-01-01 up to 2021-05-19.
The dataframe shape is [1666 rows x 20 columns].
I ran the following code:
from pypfopt import EfficientFrontier, expected_returns, risk_models

mu = expected_returns.mean_historical_return(df)
cov = risk_models.sample_cov(df)
print('Mean:\n' + str(mu))
print('Covariance:\n' + str(cov))

ef = EfficientFrontier(mu, cov)
weights = ef.max_sharpe()
cleaned_w = ef.clean_weights()
print(cleaned_w)
ef.portfolio_performance(verbose=True)
But it gives an error stating "Workspace allocation error", pointing to the line weights = ef.max_sharpe():
Traceback (most recent call last):
File "F:\Python projects\KS\Investment\Efficient frontier.py", line 34, in <module>
weights = ef.max_sharpe()
File "F:\Python projects\KS\lib\site-packages\pypfopt\efficient_frontier\efficient_frontier.py", line 278, in max_sharpe
self._solve_cvxpy_opt_problem()
File "F:\Python projects\KS\lib\site-packages\pypfopt\base_optimizer.py", line 239, in _solve_cvxpy_opt_problem
self._opt.solve(verbose=self._verbose, **self._solver_options)
File "F:\Python projects\KS\lib\site-packages\cvxpy\problems\problem.py", line 459, in solve
return solve_func(self, *args, **kwargs)
File "F:\Python projects\KS\lib\site-packages\cvxpy\problems\problem.py", line 947, in _solve
solution = solving_chain.solve_via_data(
File "F:\Python projects\KS\lib\site-packages\cvxpy\reductions\solvers\solving_chain.py", line 343, in solve_via_data
return self.solver.solve_via_data(data, warm_start, verbose,
File "F:\Python projects\KS\lib\site-packages\cvxpy\reductions\solvers\qp_solvers\osqp_qpif.py", line 103, in solve_via_data
solver.setup(P, q, A, lA, uA, verbose=verbose, **solver_opts)
File "F:\Python projects\KS\lib\site-packages\osqp\interface.py", line 37, in setup
self._model.setup(*unpacked_data, **settings)
ValueError: Workspace allocation error!
I tried changing the memory settings in PyCharm, but to no avail. Is memory the same as workspace allocation? Sorry for these fundamental questions...
Cheers mate