Avoid loading an image dataset in the PyCharm IDE multiple times (load only once) - python

I am working on an image classification problem using Keras/TensorFlow. Since I am using an IDE like PyCharm (I also use Jupyter Notebook), is there any way I can load the dataset from the directory only once, and then, when I re-run the whole .py file, just reuse the already loaded images?
labels = ['rugby', 'soccer']
img_size = 224

def get_data(data_dir):
    data = []
    for label in labels:
        path = os.path.join(data_dir, label)
        class_num = labels.index(label)
        for img in os.listdir(path):
            try:
                img_arr = cv2.imread(os.path.join(path, img))[..., ::-1]  # convert BGR to RGB format
                resized_arr = cv2.resize(img_arr, (img_size, img_size))  # Resize images to the preferred size
                data.append([resized_arr, class_num])
            except Exception as e:
                print(e)
    return np.array(data)
Now we can easily fetch our train and validation data.
train = get_data('../input/traintestsports/Main/train')
val = get_data('../input/traintestsports/Main/test')
Every time get_data is called, it takes additional time to load the entire dataset.

You can read in each image using cv2.imread(), put all of the images into a single array, and use np.save() to save the data to a binary file in .npy format:
import cv2
import numpy as np
imgs = ['image1.png', 'image2.png', 'image3.png', 'image4.png']
# Map each str to cv2.imread, convert map object to list, and convert list to array
arr = np.array(list(map(cv2.imread, imgs)))
np.save('data.npy', arr)
When you want to access the data, you can use the np.load() method:
import numpy as np
arr = np.load('data.npy')
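Applied to the question's setup, a minimal caching sketch might look like the following; the .npy file names are arbitrary, and allow_pickle=True is needed when loading because get_data returns an object array of image/label pairs:
import os
import numpy as np

# Hypothetical cache file names -- adjust to your project layout
train_cache = 'train_data.npy'
val_cache = 'val_data.npy'

if os.path.exists(train_cache) and os.path.exists(val_cache):
    # On re-runs, load the preprocessed arrays directly
    train = np.load(train_cache, allow_pickle=True)
    val = np.load(val_cache, allow_pickle=True)
else:
    # First run: read and resize the images, then cache the result
    train = get_data('../input/traintestsports/Main/train')
    val = get_data('../input/traintestsports/Main/test')
    np.save(train_cache, train)
    np.save(val_cache, val)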
You can install cv2 (OpenCV) from the command prompt with:
pip install opencv-python
and numpy with
pip install numpy
If you have a more complex data type, you can use the pickle.dump() method to save your data serialized to a file:
import pickle
data = {"data": ['test', 1, 2, 3]} # Replace this with your dataset
with open("data.pickle", "wb") as f:
pickle.dump(data, f)
When you want to access the data, you can use the pickle.load() method:
import pickle
with open("data.pickle", "rb") as f:
data = pickle.load(f)
print(data)
Output:
{'data': ['test', 1, 2, 3]}
The pickle module is built into Python.
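For the image dataset from the question, a minimal sketch of the same caching idea with pickle might look like this (the cache file name is arbitrary, and get_data is the function defined above):
import os
import pickle

cache_file = "dataset.pickle"  # arbitrary cache file name

if os.path.exists(cache_file):
    # Re-runs load the cached arrays instead of re-reading the images
    with open(cache_file, "rb") as f:
        train, val = pickle.load(f)
else:
    train = get_data('../input/traintestsports/Main/train')
    val = get_data('../input/traintestsports/Main/test')
    with open(cache_file, "wb") as f:
        pickle.dump((train, val), f)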

Related

Saving an image as a protobuf in a TFRecord file and then reading it back and showing the image on screen

I want to do the following: encode an image using the JPEG format in TensorFlow, put this in a BytesList feature in protobuf, serialize it, save it, and then read it back again. After reading it, I have to parse it using a feature_description for the image, and then decode the image from the JPEG format. This is what I tried:
from sklearn.datasets import load_sample_images
from tensorflow.train import BytesList, FloatList, Int64List
from tensorflow.train import Feature, Features, Example
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# Loading the image and printing it
img = load_sample_images()["images"][0]
plt.imshow(img)
plt.axis("off")
plt.title("Original Image")
plt.show()
# Encode the image to JPEG
data = tf.io.encode_jpeg(img)
# Convert it to a protobuf Example
example_with_image = Example(
    features=Features(
        feature={
            "image": Feature(bytes_list=BytesList(value=[data.numpy()]))
        }
    )
)
# Serialize the protobuf Example to a string
serialized_example = example_with_image.SerializeToString()
# Imagine we saved 'serialized_example' to disk and read it back into memory
# We now want to print the image
# Provide 'feature_description' so that the parse function knows the default
# value
feature_description = {
    "image": tf.io.VarLenFeature(tf.string)
}
# Parse the serialized string
example_with_image = tf.io.parse_single_example(serialized_example, feature_description)
This all works great. Then I try to decode the image back using Tensorflow's decode_jpeg() function:
decoded_img = tf.io.decode_jpeg(example_with_image)
And this doesn't work. I get the following ValueError:
ValueError: Attempt to convert a value ({'image': <tensorflow.python.framework.sparse_tensor.SparseTensor object
at 0x000002B4C90AB9D0>}) with an unsupported type (<class 'dict'>) to a Tensor.
It doesn't work with the more general tf.io.decode_image() Tensorflow function either.
Honestly, I have no idea what's going on. Shouldn't I get the image back? What's wrong?
After parse_single_example, example_with_image is a dictionary with 'image' as the key and a sparse tensor as the value.
The example_with_image looks like this:
{'image': <tensorflow.python.framework.sparse_tensor.SparseTensor at 0x25b29440cc8>}
The decode_jpeg function expects a byte value but you are providing a dictionary.
The correct way to extract the value would be:
Code:
image = tf.io.decode_jpeg(example_with_image['image'].values.numpy()[0])
plt.imshow(image)
Output: (the decoded image is displayed)
You can also parse your image as FixedLenFeature instead of VarLenFeature. In this case, you get a dense tensor instead of a sparse tensor.
Code:
feature_description = {
    "image": tf.io.FixedLenFeature([], tf.string)
}
# Parse the serialized string
example_with_image = tf.io.parse_single_example(serialized_example, feature_description)
image = tf.io.decode_jpeg(example_with_image['image'].numpy())
plt.imshow(image)
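For completeness, a minimal sketch of actually writing the serialized Example to disk and reading it back might use a TFRecord file; the file name here is arbitrary:
import tensorflow as tf

# Write the serialized Example to a TFRecord file
with tf.io.TFRecordWriter("image.tfrecord") as writer:
    writer.write(serialized_example)

# Read it back, parse it with the dense FixedLenFeature description, and decode
feature_description = {"image": tf.io.FixedLenFeature([], tf.string)}
dataset = tf.data.TFRecordDataset(["image.tfrecord"])
for record in dataset:
    parsed = tf.io.parse_single_example(record, feature_description)
    image = tf.io.decode_jpeg(parsed["image"])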

How can I remove EXIF data from a dataset?

I am trying to remove EXIF data from images in a dataset (which I will use in transfer learning). However, it does not seem to be working. Below is my code:
import os
from PIL import Image
import piexif
import imghdr
from tqdm import tqdm
import warnings

Folder = 'drive/My Drive/PetImages'
labels = ['Dog', 'Cat']

for label in labels:
    imageFolder = os.path.join(Folder, label)
    listImages = os.listdir(imageFolder)
    for img in tqdm(listImages):
        imgPath = os.path.join(imageFolder, img)
        try:
            img = Image.open(imgPath)
            data = list(img.getdata())
            image_without_exif = Image.new(img.mode, img.size)
            image_without_exif.putdata(data)
            image_without_exif.save(img)
            print("done")
        except:
            print("except")
I tried saving the image using PIL (as per a previously asked question: Python: Remove Exif info from images) but the output is purely composed of "except"s.
I tried again using the piexif module, as below:
# Same imports as above
Folder = 'drive/My Drive/PetImages'
labels = ['Dog', 'Cat']

for label in labels:
    imageFolder = os.path.join(Folder, label)
    listImages = os.listdir(imageFolder)
    for img in tqdm(listImages):
        imgPath = os.path.join(imageFolder, img)
        try:
            ImageType = img.format
            # warnings.filterwarnings("error")
            if ImageType in ["JPEG", "TIF", "WAV"]:
                exif_data = img._getexif()
                print(exif_data)
                piexif.remove(img)
                print("done")
        except:
            print("except")
In the code above, I check the image type first to make sure the _getexif() method actually exists, then I remove the data after saving it in the exif_data variable. The output consisted of "except"s and the occasional EXIF data (in the form of a dictionary), or "None" if it doesn't exist, but never the word "done". Why doesn't it reach that part?
For anyone stumbling upon this through Google, there is a simple solution using PIL:
from PIL import Image
im = Image.open('some-image.jpg')
# this clears all exif data
im.getexif().clear()
im.save('some-image-without-exif.jpg')
I thought that getexif() only allows read access as the name might imply, but it turns out that this is not the case.
Edit: In my case, it even worked to just load and save the file, without im.getexif().clear(). I don't know how reliable that is, though.
That command definitely removes exif-data from the image-object, though. This can be simply tested in a Python shell:
>>> from PIL import Image
>>> im = Image.open('some-image.jpg')
>>> print(im.getexif())
{296: 2, 282: 72.0, 283: 72.0 ..... }
>>> im.getexif().clear()
>>> print(im.getexif())
{}
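Applied to the folder layout from the question, a minimal sketch of the same approach might look like this; the paths are taken from the question and the error handling is simplified:
import os
from PIL import Image
from tqdm import tqdm

Folder = 'drive/My Drive/PetImages'  # folder layout from the question
labels = ['Dog', 'Cat']

for label in labels:
    imageFolder = os.path.join(Folder, label)
    for name in tqdm(os.listdir(imageFolder)):
        imgPath = os.path.join(imageFolder, name)
        try:
            im = Image.open(imgPath)
            im.getexif().clear()  # drop the EXIF data, as shown above
            im.save(imgPath)      # overwrite the original file
        except OSError as e:      # skip files PIL cannot open or save
            print(f"skipped {name}: {e}")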

Byte representation of an image differs depending on method used to read it

I was trying to perform some data augmentation for object detection models in TensorFlow, so I was checking the compatibility of different image representations.
First I just read an image file using PIL (Pillow, to be precise):
full_path = 'path/to/my/image.jpg'
image = PIL.Image.open(full_path)
image_np = np.array(image)
encoded_jpg_io1 = io.BytesIO(image_np)
Then I used the tensorflow version (used to create tfrecords as well):
with tf.gfile.GFile(full_path, 'rb') as fid:
    encoded_jpg = fid.read()
encoded_jpg_io2 = io.BytesIO(encoded_jpg)
And then I checked the equality of the above operations:
if encoded_jpg_io1 == encoded_jpg_io2:
    print('Equal')
I was expecting those two to be equal. So why is this not the case here?
If I use the bytes I get the same result:
v1 = encoded_jpg_io1.getvalue()
v2 = encoded_jpg_io2.getvalue()

if encoded_jpg_io1.getvalue() == encoded_jpg_io2.getvalue():
    print('Equal')

if v1.__eq__(v2):
    print('Equal')
I need to manipulate my images with numpy and then create some tfrecords so the equality is required.
Some interesting facts:
1. PIL cannot read the image in np.array format at all:
image1 = PIL.Image.open(encoded_jpg_io1)
OSError: cannot identify image file
While using GFile works fine:
image2 = PIL.Image.open(encoded_jpg_io2)
2. A PIL image cannot be directly converted to BytesIO:
encoded_jpg_io1 = io.BytesIO(image)
TypeError: a bytes-like object is required, not 'JpegImageFile'
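As a rough sketch of what is going on: np.array(image) holds decoded pixel data rather than the JPEG file's bytes, and io.BytesIO objects do not define value-based equality, so even identical contents compare unequal under ==. Comparing the raw file bytes instead should show that they match (this sketch assumes the TF 1.x tf.gfile API used in the question):
import io
import tensorflow as tf

full_path = 'path/to/my/image.jpg'

# Plain Python file read: the on-disk JPEG bytes
with open(full_path, 'rb') as f:
    raw_bytes = f.read()

# TensorFlow file read: also the on-disk JPEG bytes
with tf.gfile.GFile(full_path, 'rb') as fid:
    encoded_jpg = fid.read()

print(raw_bytes == encoded_jpg)  # True: both are the encoded JPEG stream
print(io.BytesIO(raw_bytes) == io.BytesIO(raw_bytes))  # False: BytesIO compares by identity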

loading an image from cifar-10 dataset

I am using the CIFAR-10 dataset for training my classifier. I have downloaded the dataset and tried to display an image from it. I have used the following code:
from six.moves import cPickle as pickle
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
f = open('/home/jayanth/udacity/cifar-10-batches-py/data_batch_1', 'rb')
tupled_data= pickle.load(f, encoding='bytes')
f.close()
img = tupled_data[b'data']
single_img = np.array(img[5])
single_img_reshaped = single_img.reshape(32,32,3)
plt.imshow(single_img_reshaped)
The description of the data is as follows:
Each array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image.
Is my implementation correct?
The above code gave me the following image (not reproduced here):
I used
single_img_reshaped = np.transpose(np.reshape(single_img,(3, 32,32)), (1,2,0))
to get the correct format in my program.
Since NumPy uses C-like (row-major) indexing order by default, you can force Fortran-like (column-major) order instead:
import numpy as np
import matplotlib.pyplot as plt
# I assume you have loaded your data into x_train (see some tutorial)
data = x_train[0, :] # get a row data
data = np.reshape(data, (32,32,3), order='F' ) # Fortran-like indexing order
plt.imshow(data)
single_img_reshaped = single_img.reshape(3,32,32).transpose([1, 2, 0])
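Putting the pieces together, a minimal end-to-end sketch might look like this; the batch file path is taken from the question:
import numpy as np
import matplotlib.pyplot as plt
from six.moves import cPickle as pickle

# Load one CIFAR-10 batch (path from the question)
with open('/home/jayanth/udacity/cifar-10-batches-py/data_batch_1', 'rb') as f:
    batch = pickle.load(f, encoding='bytes')

single_img = np.array(batch[b'data'][5])
# The 3072 values are stored as 1024 red, 1024 green, then 1024 blue,
# so reshape to (3, 32, 32) first and move the channel axis last
single_img_reshaped = single_img.reshape(3, 32, 32).transpose(1, 2, 0)
plt.imshow(single_img_reshaped)
plt.show()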

How to properly load a set of images in python

I am trying to open a set of images in Python, but I am a bit puzzled about how I should do that. I know how to do it with one image, but I don't have a clue how to handle several hundred images.
I have a file folder with a few hundred .jpg images. I want to load them in a python program to do machine learning on them. How can I do this properly?
I don't have any code yet since I am already struggling with this.
But my idea in pseudocode was:
dataset = load(images)
do some manipulations on it
How I have done it before:
from sklearn.svm import LinearSVC
from numpy import genfromtxt,savetxt
load = lambda x: genfromtxt(open(x,"r"),delimiter = ",",dtype = "f8")[1:]
dataset = load("train.csv")
train = [x[1:] for x in dataset]
target = [x[0] for x in dataset]
test = load("test.csv")
linear = LinearSVC()
linear.fit(train,target)
savetxt("digit2.csv",linear.predict(test),delimiter = ",", fmt = "%d")
That worked fine because of the format: all the data was in one file.
If you want to process each image individually (assuming you're using PIL or Pillow) then do so sequentially:
import os
from glob import glob

try:
    # PIL
    import Image
except ImportError:
    # Pillow
    from PIL import Image

def process_image(img_path):
    print("Processing image: %s" % img_path)
    # Open the image
    img = Image.open(img_path)
    # Do your processing here
    print(img.info)
    # Not strictly necessary, but let's be explicit:
    # Close the image
    del img

images_dir = "/home/user/images"

if __name__ == "__main__":
    # List all JPEG files in your directory
    images_list = glob(os.path.join(images_dir, "*.jpg"))
    for img_filename in images_list:
        img_path = os.path.join(images_dir, img_filename)
        process_image(img_path)
Read the documentation on the Python glob module, and process each of the images in turn in a loop.
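For the original goal of loading a few hundred .jpg files into a single array for machine learning, a minimal sketch might look like this; the folder path and target size are placeholders:
import os
from glob import glob

import numpy as np
from PIL import Image

images_dir = "/home/user/images"  # hypothetical folder of .jpg files
target_size = (224, 224)          # resize so all images stack into one array

image_paths = sorted(glob(os.path.join(images_dir, "*.jpg")))
images = []
for path in image_paths:
    with Image.open(path) as img:
        img = img.convert("RGB").resize(target_size)
        images.append(np.asarray(img))

dataset = np.stack(images)        # shape: (num_images, 224, 224, 3)
print(dataset.shape)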
