I am implementing HOG (Histogram of Oriented Gradients) with the code below.
from skimage.io import imread, imshow
from skimage.feature import hog
from skimage import exposure
from skimage import io
import matplotlib
img = imread('cr7.jpeg')
io.imshow(img)
MC = True     # for color images
# MC = False  # for grayscale images
hogfv, hog_image = hog(img, orientations=9,
                       pixels_per_cell=(32, 32),
                       cells_per_block=(4, 4),
                       visualize=True,
                       channel_axis=MC)
hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 5))
imshow(hog_image_rescaled)
I don't know why I am getting a dimension error:
Traceback (most recent call last):
File "main.py", line 22, in <module>
channel_axis=MC)
File "/Volumes/DATA/Djangoproject/HOG/env/lib/python3.7/site-packages/skimage/_shared/utils.py", line 427, in fixed_func
out = func(*new_args, **kwargs)
File "/Volumes/DATA/Djangoproject/HOG/env/lib/python3.7/site-packages/skimage/_shared/utils.py", line 348, in fixed_func
return func(*args, **kwargs)
File "/Volumes/DATA/Djangoproject/HOG/env/lib/python3.7/site-packages/skimage/feature/_hog.py", line 286, in hog
dtype=float_dtype
ValueError: negative dimensions are not allowed
Can anyone help me find a solution to this error?
The error log says there is a problem at "line 22":
Traceback (most recent call last):
File "main.py", line 22, in <module>
channel_axis=MC)
...
ValueError: negative dimensions are not allowed
channel_axis is, literally, the "channel axis", so I guess it expects an integer rather than a boolean value.
It is confirmed in the source code:
channel_axis : int or None, optional
If None, the image is assumed to be a grayscale (single channel) image.
Otherwise, this parameter indicates which axis of the array corresponds
to channels.
I think you were trying to use multichannel, which is deprecated:
multichannel : boolean, optional
If True, the last image dimension is considered as a color channel,
otherwise as spatial. This argument is deprecated: specify channel_axis instead.
Adding the following made it work in my case:
channel_axis=-1
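For reference, a minimal sketch of the corrected call, assuming img is a colour image loaded as an (H, W, 3) array (use channel_axis=None for a grayscale image):
hogfv, hog_image = hog(img, orientations=9,
                       pixels_per_cell=(32, 32),
                       cells_per_block=(4, 4),
                       visualize=True,
                       channel_axis=-1)  # the last axis holds the RGB channels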
Related
I'm using the Python image_match library. I need to use the search_image method of this library, but when I use this method I get the error below:
Traceback (most recent call last):
File "/var/www/html/Panel/test2.py", line 16, in <module>
ses.search_image('https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg/687px-Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg')
File "/usr/local/lib/python3.10/site-packages/image_match/signature_database_base.py", line 268, in search_image
transformed_record = make_record(img, self.gis, self.k, self.N)
File "/usr/local/lib/python3.10/site-packages/image_match/signature_database_base.py", line 356, in make_record
signature = gis.generate_signature(path)
File "/usr/local/lib/python3.10/site-packages/image_match/goldberg.py", line 161, in generate_signature
im_array = self.preprocess_image(path_or_image, handle_mpo=self.handle_mpo, bytestream=bytestream)
File "/usr/local/lib/python3.10/site-packages/image_match/goldberg.py", line 257, in preprocess_image
return rgb2gray(image_or_path)
File "/usr/local/lib/python3.10/site-packages/skimage/_shared/utils.py", line 394, in fixed_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/skimage/color/colorconv.py", line 875, in rgb2gray
rgb = _prepare_colorarray(rgb)
File "/usr/local/lib/python3.10/site-packages/skimage/color/colorconv.py", line 140, in _prepare_colorarray
raise ValueError(msg)
ValueError: the input array must have size 3 along `channel_axis`, got (1024, 687)
Can you please help me?
I'm trying to reshape an image, and after reshaping it I'm facing problems with the saving step. Here's the code I'm trying to run:
import nibabel as nib
import numpy as np
from nibabel.testing import data_path
import os
example_filename = os.path.join("D:/Volumes convertidos LIDC",
'teste001converted.nii.gz')
img = nib.load('teste001converted.nii.gz')
print (img.shape)
newimg = img.get_fdata().reshape(332,360*360)
print (newimg.shape)
final_img = nib.Nifti1Image(newimg, img.affine)
nib.save(final_img, os.path.join("D:/Volumes convertidos LIDC",
'test2d.nii.gz'))
And I'm getting an error:
Traceback (most recent call last):
File "d:\Volumes convertidos LIDC\reshape.py", line 17, in <module>
final_img = nib.Nifti1Image(newimg, img.affine)
File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 1756, in __init__
super(Nifti1Pair, self).__init__(dataobj,
File "C:\Python39\lib\site-packages\nibabel\analyze.py", line 918, in __init__
super(AnalyzeImage, self).__init__(
File "C:\Python39\lib\site-packages\nibabel\spatialimages.py", line 469, in __init__
self.update_header()
File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 2032, in update_header
super(Nifti1Image, self).update_header()
File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 1795, in update_header
super(Nifti1Pair, self).update_header()
File "C:\Python39\lib\site-packages\nibabel\spatialimages.py", line 496, in update_header
hdr.set_data_shape(shape)
File "C:\Python39\lib\site-packages\nibabel\nifti1.py", line 880, in set_data_shape
super(Nifti1Header, self).set_data_shape(shape)
File "C:\Python39\lib\site-packages\nibabel\analyze.py", line 633, in set_data_shape
raise HeaderDataError(f'shape {shape} does not fit in dim datatype')
nibabel.spatialimages.HeaderDataError: shape (332, 129600) does not fit in dim datatype
Is there any way to solve it?
You are trying to save a NumPy array, whereas nib.save expects a SpatialImage object.
You should convert the NumPy array to a SpatialImage first:
final_img = nib.Nifti1Image(newimg, img.affine)
After which you can save the image:
nib.save(final_img, os.path.join("D:/Volumes convertidos LIDC", 'test4d.nii.gz'))
See the documentation and this answer for more explanation.
Edit: This will not work if newimg is a 2D image.
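For what it's worth, here is a minimal sketch of a save that does fit the header, assuming the volume is roughly (332, 360, 360) as in the question. The NIfTI-1 header stores each dimension in a 16-bit field, so the 129600-long axis produced by the reshape overflows it; reshape in memory if you need to, but keep the data three-dimensional when writing:
import os
import nibabel as nib
img = nib.load('teste001converted.nii.gz')
data = img.get_fdata()                  # e.g. shape (332, 360, 360)
flat = data.reshape(data.shape[0], -1)  # 2D view for in-memory processing only
restored = flat.reshape(data.shape)     # back to 3D before writing; every axis fits the header
final_img = nib.Nifti1Image(restored, img.affine)
nib.save(final_img, os.path.join("D:/Volumes convertidos LIDC", 'test3d.nii.gz'))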
I have a machine learning model in PyTorch saved as a .pt file, and I'm trying to convert it to a CoreML model. Here is my code:
import coremltools as ct
import torch
import torchvision
from torchvision import transforms
from PIL import Image
# Image processing
input_image = Image.open("example.png")
input_image = input_image.convert('RGB')
preprocess = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor(), transforms.Normalize(mean=[0.5, 0.2, 0.1], std=[0.5, 0.3, 0.7])])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)
# Model loading and tracing
model = torch.load("model.pt")
trace = torch.jit.trace(model, input_batch)
# Convert model to CoreML
mlmodel = ct.convert(trace, inputs=[ct.ImageType(name="input_1", shape=input_batch.shape, bias=[1, 0.2/0.3, 0.1/0.7], scale = 1./(255*0.67))])
EDIT: full error traceback below. It's the last line (the ct.convert call) where I get the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 175, in convert
mlmodel = mil_convert(
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 128, in mil_convert
proto = mil_convert_to_proto(model, convert_from, convert_to,
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 171, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 85, in __call__
return load(*args, **kwargs)
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 81, in load
raise e
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 73, in load
prog = converter.convert()
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 227, in convert
convert_nodes(self.context, self.graph)
File "/Users/aditya/miniconda3/lib/python3.8/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 54, in convert_nodes
raise RuntimeError(
RuntimeError: PyTorch convert function for op 'type_as' not implemented.
I'm not sure what this means. How do I solve it?
Thanks!
I'm trying this code https://github.com/arsfutura/face-recognition , but while running sh tasks/train.sh images/ I'm getting a ValueError:
images/rah/ra.jpg
/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/utils/detect_face.py:146: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at ../torch/csrc/utils/python_arg_parser.cpp:882.)
bb = mask.nonzero().float().flip(1)
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pi/face-recognition/training/train.py", line 99, in <module>
main()
File "/home/pi/face-recognition/training/train.py", line 84, in main
embeddings, labels, class_to_idx = load_data(args, features_extractor)
File "/home/pi/face-recognition/training/train.py", line 61, in load_data
embeddings, labels = dataset_to_embeddings(dataset, features_extractor)
File "/home/pi/face-recognition/training/train.py", line 41, in dataset_to_embeddings
_, embedding = features_extractor(transform(Image.open(img_path).convert('RGB')))
File "/home/pi/face-recognition/face_recognition/face_features_extractor.py", line 26, in __call__
return self.extract_features(img)
File "/home/pi/face-recognition/face_recognition/face_features_extractor.py", line 15, in extract_features
bbs, _ = self.aligner.detect(img)
File "/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/mtcnn.py", line 308, in detect
self.device
File "/home/pi/.local/lib/python3.7/site-packages/facenet_pytorch/models/utils/detect_face.py", line 66, in detect_face
tmp[(dy[k] - 1):edy[k], (dx[k] - 1):edx[k], :] = img[(y[k] - 1):ey[k], (x[k] - 1):ex[k], :]
ValueError: could not broadcast input array from shape (0,1364,3) into shape (0,0,3)
I even tried hardcoding tmp = np.zeros((0,1364, 3)) at line 65 in detect_face.py just to test, but no luck.
Why don't you use Facenet within deepface? You just pass the exact image paths as a pair, and it builds a face recognition pipeline; the verify function handles face detection and alignment in the background.
#!pip install deepface
from deepface import DeepFace
obj = DeepFace.verify("img1.jpg", "img2.jpg", model_name = 'Facenet')
print(obj["verified"])
Or you can find an identity in a database in a similar way. Here, you are expected to store facial images with a .jpg or .png extension in a folder and pass it as the database path.
df = DeepFace.find("img1.jpg", db_path="C:/my_db", model_name = 'Facenet')
print(df.head())
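As a rough usage sketch (assuming C:/my_db holds .jpg/.png face images; note that newer deepface releases may return a list of DataFrames, one per detected face):
from deepface import DeepFace
res = DeepFace.find("img1.jpg", db_path="C:/my_db", model_name='Facenet')
df = res[0] if isinstance(res, list) else res  # normalise across deepface versions
if not df.empty:
    print(df.iloc[0]["identity"])  # path of the closest matching image in the database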
When I run ordinary PIL commands in Python, I get an error message like this:
>>> im.save('layer_86_.tiff')
TIFFSetField: layer_86_.tiff: Unknown tag 33922.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\PIL\Image.py", line 1685, in save
save_handler(self, fp, filename)
File "C:\Python34\lib\site-packages\PIL\TiffImagePlugin.py", line 1185, in _save
e = Image._getencoder(im.mode, 'libtiff', a, im.encoderconfig)
File "C:\Python34\lib\site-packages\PIL\Image.py", line 430, in _getencoder
return encoder(mode, *args + extra)
RuntimeError: Error setting from dictionary
I've seen similar questions on GitHub and SO dating back many years, but in my case the problem can still be reproduced. I've even installed libtiff.dll and put it in the System32 and SysWOW64 folders, to no avail. So how can I fix it?
This is another error message that I see when I try to rotate an image:
>>> from PIL import Image
>>> Image.MAX_IMAGE_PIXELS = 100000000000
>>> img = Image.open('layer_71_.tiff')
>>> img.rotate(80,expand=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\PIL\Image.py", line 1603, in rotate
return self.transform((w, h), AFFINE, matrix, resample)
File "C:\Python34\lib\site-packages\PIL\Image.py", line 1862, in transform
im.__transformer((0, 0)+size, self, method, data, resample, fill)
File "C:\Python34\lib\site-packages\PIL\Image.py", line 1910, in __transformer
image.load()
File "C:\Python34\lib\site-packages\PIL\ImageFile.py", line 245, in load
if not self.map and (not LOAD_TRUNCATED_IMAGES or t == 0) and e < 0:
TypeError: unorderable types: tuple() < int()
So it seems like PIL does not work in many cases.