I have generated many synthetic number plate images like the one below.
Now I want to transform all such images so that they look like real-world vehicle number plate photos.
For example:
How do I apply this type of augmentation and save all the augmented images to another folder?
Solution
Check out the albumentations library. Start by answering the question: "What is the difference between the image I have and the image I want?" For instance, the target image:
is more pixelated,
is grainy,
has lower resolution,
may have nails/fastening screws on it,
may have something else written under or over the main number,
may have shadows on it,
may be unevenly bright in places, etc.
Albumentations helps you come up with many types of image augmentations. Break the problem down as suggested above, then work out which augmentations you need from albumentations; a rough sketch targeting the points above follows.
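A minimal sketch under those assumptions (the transform choices and parameter values are my guesses; check the signatures in the albumentations version you have installed):
import albumentations as A

# rough "make it look like a real photo" pipeline -- tune against real plates
plate_aug = A.Compose([
    A.Downscale(scale_min=0.5, scale_max=0.9, p=0.5),   # pixelation / lower resolution
    A.GaussNoise(p=0.5),                                 # grain
    A.MotionBlur(blur_limit=5, p=0.3),                   # slight camera blur
    A.RandomShadow(p=0.3),                               # shadows on the plate
    A.RandomBrightnessContrast(p=0.5),                   # uneven brightness
], p=1.0)

augmented_plate = plate_aug(image=image)['image']  # `image`: HxWx3 uint8 numpy array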
Example of image augmentation using albumentations
The following code block (source) shows how to apply albumentations for image augmentation. If you have both an image and a mask, they will undergo identical transformations.
Another example from Kaggle: Image Augmentation Demo with albumentations
from albumentations import (
    HorizontalFlip, IAAPerspective, ShiftScaleRotate, CLAHE, RandomRotate90,
    Transpose, ShiftScaleRotate, Blur, OpticalDistortion, GridDistortion, HueSaturationValue,
    IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, IAAPiecewiseAffine,
    IAASharpen, IAAEmboss, RandomBrightnessContrast, Flip, OneOf, Compose
)
import numpy as np

def strong_aug(p=0.5):
    return Compose([
        RandomRotate90(),
        Flip(),
        Transpose(),
        OneOf([
            IAAAdditiveGaussianNoise(),
            GaussNoise(),
        ], p=0.2),
        OneOf([
            MotionBlur(p=0.2),
            MedianBlur(blur_limit=3, p=0.1),
            Blur(blur_limit=3, p=0.1),
        ], p=0.2),
        ShiftScaleRotate(shift_limit=0.0625, scale_limit=0.2, rotate_limit=45, p=0.2),
        OneOf([
            OpticalDistortion(p=0.3),
            GridDistortion(p=0.1),
            IAAPiecewiseAffine(p=0.3),
        ], p=0.2),
        OneOf([
            CLAHE(clip_limit=2),
            IAASharpen(),
            IAAEmboss(),
            RandomBrightnessContrast(),
        ], p=0.3),
        HueSaturationValue(p=0.3),
    ], p=p)

image = np.ones((300, 300, 3), dtype=np.uint8)
mask = np.ones((300, 300), dtype=np.uint8)
whatever_data = "my name"
augmentation = strong_aug(p=0.9)
data = {"image": image, "mask": mask, "whatever_data": whatever_data, "additional": "hello"}
augmented = augmentation(**data)
image, mask, whatever_data, additional = augmented["image"], augmented["mask"], augmented["whatever_data"], augmented["additional"]
Strategy
First, tone down the number of augmentations to a bare minimum.
Save a single augmented image.
Save a few images post augmentation.
Now test and update your augmentation pipeline to suit your requirement of mimicking the ground-truth scenario.
Finalize your pipeline and run it on a larger number of images.
Time it: note how long it takes for how many images.
Then finally run it on all the images: this time you will have an estimate of how long the full run is going to take (a timing sketch follows this list).
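A minimal timing sketch for the last two steps, assuming `augmentation` is your pipeline and `images` is an iterable of already-loaded numpy images (both names are placeholders used in the snippets below):
import time

start = time.time()
n = 0
for image in images:            # e.g. the first 100 images
    augmentation(image=image)
    n += 1
elapsed = time.time() - start
print(f'{n} images took {elapsed:.1f} s, i.e. {elapsed / n:.3f} s per image')
# multiply the per-image time by your total image count for a full-run estimate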
NOTE: every time an image passes through the augmentation pipeline, only a single augmented image comes out of it. So if you want, say, 10 different augmented versions of each image, you will need to pass each image through the augmentation pipeline 10 times before moving on to the next image.
# this will not be what you end up using
# but you can begin to understand what
# you need to do with it.
def simple_aug(p=0.5):
    return Compose([
        RandomRotate90(),
        # Flip(),
        # Transpose(),
        OneOf([
            IAAAdditiveGaussianNoise(),
            GaussNoise(),
        ], p=0.2),
    ], p=p)
# for a single image: check first
image = ... # write your code to read in your image here
augmentation = simple_aug(p=0.5)  # start with the toned-down pipeline
augmented = augmentation(image=image)  # see albumentations docs: Compose is called with keyword arguments
# SAVE the image
# If you are using imageio or PIL, saving an image
# is rather straight forward, and I will let you
# figure that out.
# save the content of the variable: augmented['image']
For multiple images
Assuming each image passes 10 times through the augmentation pipeline, your code could look as follows:
import os
# I assume you have a way of loading your
# images from the filesystem, and they come
# out of `images` (an iterator)
NUM_AUG_REPEAT = 10
AUG_SAVE_DIR = 'data/augmented'
# create the directory if it is not already present
if not os.path.isdir(AUG_SAVE_DIR):
os.makedirs(AUG_SAVE_DIR)
# This will create augmentation ids for the same image
# example: '00', '01', '02', ..., '08', '09' for
# - NUM_AUG_REPEAT = 10
aug_id = lambda x: str(x).zfill(len(str(NUM_AUG_REPEAT)))
for image in images:
    for i in range(NUM_AUG_REPEAT):
        data = {'image': image}
        augmented = augmentation(**data)
        # I assume you have a function: save_image(image_path, image)
        # You need to write this function with
        # whatever logic necessary. (Hint: use imageio or PIL.Image)
        image_filename = f'image_name_{aug_id(i)}.png'
        save_image(os.path.join(AUG_SAVE_DIR, image_filename), augmented['image'])
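For completeness, a minimal sketch of the assumed save_image helper using imageio (the name and signature are just the placeholder used above; PIL.Image works equally well):
import imageio

def save_image(image_path, image):
    # expects a uint8 numpy array (HxW or HxWx3)
    imageio.imwrite(image_path, image)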
Related
I want to run real-time object detection using YOLOv5 on a camera and then generate vector embeddings for cropped images of detected objects.
I currently generate image embeddings using this function below for locally saved images:
from PIL import Image

def generate_img_embedding(img_file_path):
    images = [
        Image.open(img_file_path)
    ]
    # Encoding a single image takes ~20 ms
    embeddings = embedding_model.encode(images)
    return embeddings
I also start the YOLOv5 object detection with image cropping as follows:
def start_camera(productid):
    print("Attempting to start camera")
    # productid = "11011"
    try:
        command = "python ./yolov5/detect.py --source 0 --save-crop --name " + productid + " --project ./cropped_images"
        os.system(command)
        print("Camera running")
    except Exception as e:
        print("error starting camera!", e)
How can I modify the YOLOv5 model to pass the cropped images into my embedding function in real time?
Just take a look at detect.py supplied with yolov5, the file you are running. The implementation is pretty short (~150 SLOC); I would recommend re-implementing it or modifying it for your use case.
Key points, omitting a lot of (important, but standard and easily understandable) data transforms and parameter parsing, are as follows:
device = select_device(device)
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data)
# Code selecting FP16/FP32 omitted here
model.warmup(imgsz=(1 if pt else bs, 3, *imgsz), half=half)

for path, im, im0s, vid_cap, s in dataset:
    im = torch.from_numpy(im).to(device)
    # Image transforms omitted
    pred = model(im, augment=augment, visualize=visualize)  # stage 1
    pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)  # stage 2
    for i, det in enumerate(pred):
        if len(det):
            # Rescale boxes from img_size to im0 size
            det[:, :4] = scale_coords(im.shape[2:], det[:, :4], im0.shape).round()
            # --> This is where you would access detections in real time! <--
Most of the code's logic is handling the I/O (in particular, dataset loading is handled by either LoadStreams or LoadImages from yolov5's utils), the rest is just rescaling input images, loading a torch model, and running detection and NMS. No rocket science here.
The least-effort path for you would be to copy the entire thing and implement your embeddings under
for *xyxy, conf, cls in reversed(det):
Instead of saving to file, you would get (x, y, w, h) and crop the image using e.g. Pillow's Image.crop() or slice the numpy array directly. Whichever works for you depends on the implementation of your embedding_model.encode.
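A minimal sketch of that inner loop, assuming im0 is the original frame as a numpy array (as in detect.py) and embedding_model is the encoder from your snippet; the cropping details are assumptions:
from PIL import Image

for *xyxy, conf, cls in reversed(det):
    x1, y1, x2, y2 = (int(v) for v in xyxy)
    crop = im0[y1:y2, x1:x2]  # slice the detection out of the frame
    embedding = embedding_model.encode([Image.fromarray(crop)])
    # ... use/store the embedding in real time ...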
TL;DR
How do I convert YoloV5 model results into results.pandas(), sort it, then convert it back into results so I can access the useful methods like results.render() or results.crop()?
Context:
I've recently learned how to load and do inference with a YoloV5 model:
# Load model
model = torch.hub.load('./yolov5', 'custom', path='/content/drive/MyDrive/models/best.pt', source='local') # local repo
# Import Image
im1 = 'https://ultralytics.com/images/zidane.jpg'
im2 = 'https://ultralytics.com/images/bus.jpg'
# Do Inference
results = model([im1, im2])
I also learned that this results object returned from inference has really useful methods for getting the result in different formats:
imgs = results.render() # gives image results with bounding boxes
crops = results.crop(save=True) # cropped detections dictionary
df = results.pandas().xyxy[0] # Pandas DataFrame of 1 image
n_df = results.pandas().xyxyn[0] # Pandas DataFrame of 1 image with normalized coordinates
My use-case here was to sort it, then get the top 20 in terms of confidence.
top_20 = results.pandas().xyxy[0].sort_values('confidence', ascending=False).head(20)  # top 20 detections sorted by confidence
Now I'm not sure how to turn it back to just results, so I can also access the same utility methods like .render() and .crop()
I think I could also create my own render and crop functions with OpenCV using my sorted dataframes as args, but I was just wondering if there was a more intuitive way to just reuse those utility methods.
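A minimal sketch of the DIY cropping route mentioned above, using the sorted DataFrame and Pillow (the image path is an assumed local copy of im2; the column names come from results.pandas()):
from PIL import Image

img = Image.open('bus.jpg')  # assumed local copy of the inference image
crops = []
for _, row in top_20.iterrows():
    box = tuple(int(row[c]) for c in ('xmin', 'ymin', 'xmax', 'ymax'))
    crops.append(img.crop(box))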
I tried to make an algorithm using Teachable Machine that receives a picture and decides which of two categories it falls under (e.g. dogs or humans). After I exported the generated code, I couldn't make sense of how to turn the results, which are returned as an array, into something anyone can understand. So far it only shows a list of two numbers (e.g. [[0.00058185 0.99941814]], the first number being dogs and the second humans). I want it to show which of the two numbers means dog and which means human, along with the percentage of both, or to show only the most probable one.
Here's the code:
import tensorflow.keras
from PIL import Image, ImageOps
import numpy as np
from decimal import Decimal
# Disable scientific notation for clarity
np.set_printoptions(suppress=True)
# Load the model
model = tensorflow.keras.models.load_model('keras_model.h5')
# Create the array of the right shape to feed into the keras model
# The 'length' or number of images you can put into the array is
# determined by the first position in the shape tuple, in this case 1.
data = np.ndarray(shape=(1, 224, 224, 3), dtype=np.float32)
# Replace this with the path to your image
image = Image.open('test_photo.jpg')
#resize the image to a 224x224 with the same strategy as in TM2:
#resizing the image to be at least 224x224 and then cropping from the center
size = (224, 224)
image = ImageOps.fit(image, size, Image.ANTIALIAS)
#turn the image into a numpy array
image_array = np.asarray(image)
# display the resized image
image.show()
# Normalize the image
normalized_image_array = (image_array.astype(np.float32) / 127.0) - 1
# Load the image into the array
data[0] = normalized_image_array
# run the inference
prediction = model.predict(data)
print(prediction)
input('Press ENTER to exit')
Using argmax and max does what you want:
"Prediction is {} with {}% probability".format(["dog", "human"][np.argmax(prediction)], round(np.max(prediction)*100,2))
'Prediction is human with 99.94% probability'
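If you also want the percentage of both classes (the other case mentioned in the question), a minimal sketch along the same lines:
class_names = ["dog", "human"]
for name, prob in zip(class_names, prediction[0]):
    print("{}: {}%".format(name, round(prob * 100, 2)))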
I am working with grayscale images of size 75 by 75 and want to perform some augmentation techniques using ImageDataGenerator.
But I am wondering whether the output can be reproduced consistently across runs. I am not talking about epochs, but about re-running the whole code and getting exactly the same augmented images.
I am attaching a sample grayscale image:
import matplotlib.pyplot as plt
import numpy as np
from scipy import misc, ndimage
from keras.preprocessing.image import ImageDataGenerator
gen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, # shear_range=0.15,
channel_shift_range=10., horizontal_flip=True, vertical_flip = True,
rescale = 0.2, fill_mode = 'wrap')
image_path = '/trial_img.png' # grayscale image
# Obtain image
# data_format = [#num_images,height,width,#num_of_channels]
# where, #num_images = 1 and #num_of_channels = 1, height = width = 75
image = np.expand_dims(ndimage.imread(image_path),0) # add num_images dimension
image = np.expand_dims(image, axis=3) # add num_of_channels dimension
plt.imshow(image.reshape(75,75), cmap = 'gray')
# Trial #1
# Generate batches of augmented images from this image
aug_iter = gen.flow(image)
# Get 10 samples of augmented images
aug_images1 = [next(aug_iter)[0].reshape(75,75).astype(np.uint8) for i in range(10)]
# Trial #2
aug_iter = gen.flow(image)
aug_images2 = [next(aug_iter)[0].reshape(75,75).astype(np.uint8) for i in range(10)]
# check if equal
truth = []
for val in range(10):
    truth.append((aug_images1[val] == aug_images2[val]).all()) # check images
np.asarray(truth).all() # check if all images are same
How can I repeat the augmented outputs consistently in the above code?
I know this code is written very badly, any suggestions on code optimization are also greatly appreciated.
Thanks,
Gopi
You can pass a seed to the flow method:
aug_iter = gen.flow(image, seed = 0)
By setting this parameter to a specific integer, you will always get the same sequence of random shuffling/transformations.
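A minimal sketch of the reproducibility check, reusing the trial code from the question with an assumed seed of 0:
aug_iter1 = gen.flow(image, seed=0)
aug_images1 = [next(aug_iter1)[0].reshape(75, 75).astype(np.uint8) for i in range(10)]

aug_iter2 = gen.flow(image, seed=0)
aug_images2 = [next(aug_iter2)[0].reshape(75, 75).astype(np.uint8) for i in range(10)]

# both trials now yield identical augmented images
print(all((a == b).all() for a, b in zip(aug_images1, aug_images2)))  # True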
You could run the generator and save the images, then simply load the images:
# Trial #1
# Generate batches of augmented images from this image
aug_iter = gen.flow(image)
# Get 10 samples of augmented images
aug_images1 = [next(aug_iter)[0].reshape(75,75).astype(np.uint8) for i in range(10)]
If memory is not a problem, you can save this with numpy:
aug_images1 = np.array(aug_images1)
np.save(filename, aug_images1)
Then load it:
aug_images1 = np.load(filename)
If you prefer, you can save each image as a proper image file (less memory occupied) using an image library such as Pillow:
from PIL import Image
for im, filename in zip(aug_images1, list_of_names):
    im = Image.fromarray(im)  # make sure you have a uint8 array with values from 0 to 255
    im.save(filename)
Later, load the files:
aug_images1 = [np.array(Image.open(filename)) for filename in list_of_names]
aug_images1 = np.array(aug_images1)
Using ImageDataGenerator for loading files
In case you don't want to load all the saved images into memory at once, you can create a new ImageDataGenerator without any kind of augmentation: just a pure image loader.
Then use gen.flow_from_directory() to get images from a directory.
Read more in the documentation: https://keras.io/preprocessing/image/
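A minimal sketch of such a loader, assuming the augmented images were saved under data/augmented/some_class/ (flow_from_directory expects one subfolder per class; the path, sizes and batch size are assumptions):
from keras.preprocessing.image import ImageDataGenerator

loader = ImageDataGenerator(rescale=1./255)  # no augmentation, only rescaling
batches = loader.flow_from_directory('data/augmented',
                                     target_size=(75, 75),
                                     color_mode='grayscale',
                                     class_mode=None,  # just images, no labels
                                     batch_size=32)
x = next(batches)  # one batch of loaded images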
I've been using datasets from sklearn, and I want to show an image from 'MNIST original' using OpenCV's imshow.
Here is part of my code
dataset = datasets.fetch_mldata('MNIST original')
features = np.array(dataset.data, 'int16')
labels = np.array(dataset.target, 'int')
list_hog_fd = []
deskewed_images = []
for img in features:
cv2.imshow("digit", img)
deskewed_images.append(deskew(img))
"digit" window appears but it is definitely not an digit image. How can I access real image from dataset?
Shape
MNIST image datasets are generally distributed and used as 1D vectors of 784 values.
However, in order to show one as an image, you need to convert it to a 2D matrix of 28x28 values.
Simply using img = img.reshape(28,28) might work in your case.
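A minimal sketch of the loop with that reshape applied (the waitKey call and the uint8 cast are additions to make imshow display properly; treat them as assumptions, and note that deskew is the helper from your own code):
for img in features:
    img_2d = img.reshape(28, 28).astype(np.uint8)  # 784-vector -> 28x28 image
    cv2.imshow("digit", img_2d)
    cv2.waitKey(1)  # give the window time to draw
    deskewed_images.append(deskew(img_2d))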