I have a folder named "seg_train" that contains 6 labeled subfolders: buildings, forest, glacier, mountain, sea, and street. I am trying to read all the images in these folders using OpenCV, and I have written a function for it, but I don't know what I am doing wrong. There are approximately 14000 files in these 6 folders altogether, but the function I wrote reads only about 2300 images from one folder. Could you please help?
Here's the Python code. The result I get is:
Shape of Images: (2382, 150, 150, 3)
Shape of Labels: (2382,)
I was expecting (14000, 150, 150, 3).
for image_file in os.listdir(r'C:/Users/dhvan/Desktop/intel-image-classification/seg_train/seg_train/' + labels):
    image = cv2.imread(r'C:/Users/dhvan/Desktop/intel-image-classification/seg_train/seg_train/' + labels + r'/' + image_file)
    image = cv2.resize(image, (150, 150))
    Images.append(image)
    Labels.append(label)
return shuffle(Images, Labels, random_state=812490023)
I suggest the code below. Also, if you try to read in 14000 images of size (150, 150, 3) at once, I think you may run into a resource exhaustion error, because this uses a very large amount of memory. If you are building a CNN classifier, I recommend reading the images in batches using Keras ImageDataGenerator.flow_from_directory (a short sketch follows the code below); the documentation is here:
https://keras.io/preprocessing/image/
import os
import cv2

data_dir = r'C:/Users/dhvan/Desktop/intel-image-classification/seg_train/seg_train'
Images, Labels = [], []
for label in os.listdir(data_dir):                 # one subfolder per class
    class_dir = os.path.join(data_dir, label)
    for file_name in os.listdir(class_dir):
        image = cv2.imread(os.path.join(class_dir, file_name))
        if image is None:                          # skip files OpenCV cannot read
            continue
        Images.append(cv2.resize(image, (150, 150)))
        Labels.append(label)
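If you go the generator route instead of loading everything into memory, here is a minimal sketch of flow_from_directory usage (the class names are inferred from the subfolder names; the batch_size of 32 is an assumption to tune to your memory budget):

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1. / 255)
train_gen = datagen.flow_from_directory(
    r'C:/Users/dhvan/Desktop/intel-image-classification/seg_train/seg_train',
    target_size=(150, 150),    # images are resized on the fly
    batch_size=32,             # assumption: adjust to your memory budget
    class_mode='categorical')  # one class per subfolder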
I am having a hard time getting good results for a full-integer quantized TFLite model using post-training quantization. The model does not recognize anything correctly. I used the notebook tutorial provided by Google and changed it. Here is my version, where I try to perform full-integer quantization using images from the COCO validation dataset. It should run completely on its own.
Probably something is wrong with _representative_dataset_gen(), which looks like this:
import os
from fnmatch import fnmatch

import cv2
import numpy as np

def _representative_dataset_gen():
    print("200 rep dataset function called!")
    root = 'val2017/'
    pattern = "*.jpg"
    imagePaths = []
    for path, subdirs, files in os.walk(root):
        for name in files:
            if fnmatch(name, pattern):
                imagePaths.append(root + name)
    for index, p in enumerate(imagePaths[:200]):
        if index % 10 == 0:
            print(index)
        image = cv2.imread(p)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = cv2.resize(image, (640, 640))
        image = image.astype("float")
        image = np.expand_dims(image, axis=1)
        image = image.reshape(1, 640, 640, 3)
        yield [image.astype("float32")]
        # yield image
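For reference, a representative dataset generator like this is typically handed to the converter along these lines (a sketch only; the saved-model path is a placeholder):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = _representative_dataset_gen
# force full-integer quantization, including the input/output tensors
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()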
I also compared it to a full-integer version that gets only a single image as the representative dataset. Interestingly, it performs very similarly, so I am quite confident that my attempt is wrong.
Don't hesitate to ask questions. I would appreciate any help.
I am working on an image segmentation problem. There are 3 types of images in my dataset (Drishti_GS). One is the raw fundus image, and the other two are soft map images, namely the optic cup segmentation and the optic disc segmentation. I am trying to build a data generator that I can train my model on. I am attaching a screenshot of the image names I got after using the following code.
import os
for root, dirs, files in os.walk("."):
    for filename in files:
        if filename.endswith(".png"):
            print(filename)
I need to load these images. I hope that someone can help me with some concrete code or some useful materials.
You can load the images like this:
import os
from tensorflow import keras

for image_name in os.listdir("data_path"):
    image = keras.preprocessing.image.load_img('data_path/' + image_name, color_mode='rgb', target_size=(128, 128))
    names = image_name.split('_')
    if names[-1] == "cupsegSoftmap.png":
        label = ...
    elif names[-1] == "ODsegSoftmap.png":
        label = ...
You can manually set what each label value should be, for example as an array.
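If the images are needed as numpy arrays rather than PIL objects, img_to_array handles the conversion (a small sketch; 'data_path' is a placeholder as above):

from tensorflow import keras

img = keras.preprocessing.image.load_img('data_path/' + image_name, color_mode='rgb', target_size=(128, 128))
arr = keras.preprocessing.image.img_to_array(img) / 255.0   # shape (128, 128, 3), scaled to [0, 1]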
I am merging two different datasets containing images into one dataset. One of the datasets contains 600 images in the training set. The other dataset contains only 90-100 images. I want to increase the size of the latter dataset by using the imgaug library. The images are stored in folders under the name of their class. So the path for a "cake" image in the training set would be ..//images//Cake//cake_0001. I'm trying to use this code to augment the images in this dataset:
import os
import imageio
import imgaug as ia
import imgaug.augmenters as iaa

path = 'C:\\Users\\User\\Documents\\Dataset\\freiburg_groceries_dataset\\images'
ia.seed(6)

seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Crop(percent=(0, 0.1)),
    iaa.Affine(rotate=(-25, 25))
], random_order=True)

for folder in os.listdir(path):
    try:
        for i in os.listdir(folder):
            img = imageio.imread(i)
            img_aug = seq(images=img)
            iaa.imshow(img_aug)
            print(img_aug)
    except:
        pass
Right now there is no output, even if I add print(img) or imshow(img) or anything else. How do I make sure I actually get more images for this dataset? Also, what is the best point in the pipeline to augment the images? Where do the augmented images get stored, and how do I see how many new images were generated?
The question was not entirely clear, so this addresses issue 2: the error when saving files and not being able to visualize with imshow().
First: in the inner loop code block
img = imageio.imread(i)
img_aug = seq(images=img)
iaa.imshow(img_aug)
print(img_aug)
The first error is that i is not the file path. To solve this, replace imageio.imread(i) with imageio.imread(path + '/' + folder + '/' + i).
The second error is that iaa does not have an imshow() function.
To fix this, replace iaa.imshow(img_aug) with iaa.imgaug.imshow(img_aug) (or simply ia.imshow(img_aug), since imgaug is already imported as ia). This fixes the visualization error and lets the loop run to completion.
Second: if you have any issue saving the images, use PIL, e.g.:
from PIL import Image
im = Image.fromarray(img_aug)
im.save('img_aug.png')
It's because folder is not the full path to the directory you are looking for.
You should change for i in os.listdir(folder): to for i in os.listdir(path + '\\' + folder):. Then it looks inside the path\folder directory for the files.
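Putting the fixes together, here is a sketch of a loop that reads each image, augments it, and writes the result next to the original so the new files can be counted (the '_aug' filename suffix is just an assumption for illustration):

import os
import imageio
import imgaug as ia
import imgaug.augmenters as iaa

path = r'C:\Users\User\Documents\Dataset\freiburg_groceries_dataset\images'
ia.seed(6)
seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Crop(percent=(0, 0.1)),
    iaa.Affine(rotate=(-25, 25)),
], random_order=True)

count = 0
for folder in os.listdir(path):
    class_dir = os.path.join(path, folder)
    for name in os.listdir(class_dir):
        img = imageio.imread(os.path.join(class_dir, name))
        img_aug = seq(image=img)                   # augment a single image
        stem, ext = os.path.splitext(name)
        imageio.imwrite(os.path.join(class_dir, stem + '_aug' + ext), img_aug)
        count += 1
print(count, 'augmented images written')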
I am new to CNNs, so I am trying to learn to code one in Python by following tutorials online, and I came across this tutorial: https://medium.com/nybles/create-your-first-image-recognition-classifier-using-cnn-keras-and-tensorflow-backend-6eaab98d14dd
I followed the code, but I get this small error that I can't seem to find the solution to:
FileNotFoundError: [Errno 2] No such file or directory: 'random.jpg'
This is the code the error points to:
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('random.jpg', target_size=(64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
result = classifier.predict(test_image)
training_set.class_indices
if result[0][0] >= 0.5:
    prediction = 'dog'
else:
    prediction = 'cat'
print(prediction)
I'm going to include the whole code just in case people want to see: https://drive.google.com/open?id=1ew22sJOvl5Ea9VTM_PXqVKNZJm1OuXTG
Any help is appreciated. :)
You need to give the full path to the image, or put the code file and the image in the same folder.
Based on what I read in the blog post, the author just used a random dog image (downloaded from the web), named it "random.jpg", and used it as a test image. You can look for any dog/cat image on the web, download it, and rename it to "random.jpg".
The point is that you know whether the image is a dog or a cat, and then you test whether your model predicts it correctly.
You need to put 'random.jpg' into your working directory. That is, put any file (a dog, a cat, or anything else) with that name inside your folder :)
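To check whether the file is actually where the script is looking, something like this can help (the absolute path below is a placeholder):

import os
from keras.preprocessing import image

print(os.getcwd())                    # the directory a relative path like 'random.jpg' resolves against
print(os.path.exists('random.jpg'))   # False means the file is not in the working directory

# alternatively, pass an absolute path (placeholder) straight to load_img
test_image = image.load_img(r'C:\Users\you\Pictures\random.jpg', target_size=(64, 64))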
I am working on training a model on my own images read from my folders. I would be thankful if you could help me with this.
I successfully read all my images from the folders and create my own one-hot encoded labels. However, every time I run my code, it takes a long time to read all the images from the folders. Therefore, I want to create a dataset from these images and save it, like MNIST, so it loads faster and I don't have to read all the images again. Could you please help me with this?
The code is:
import os
from os import listdir
import tensorflow as tf

path = "D:/cleandata/train_data/"
loadedImages = []
labels = []
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for i in range(len(os.listdir(path))):
    imagesList = listdir(path + os.listdir(path)[i])
    for image in imagesList:
        image_raw_data_jpg = tf.gfile.FastGFile(path + os.listdir(path)[i] + '/' + image, 'rb').read()
        raw_image = tf.image.decode_png(image_raw_data_jpg, 3)
        gray_resize = tf.image.resize_images(raw_image, [28, 28])
        image_data = sess.run(tf.image.rgb_to_grayscale(gray_resize))
        loadedImages.append(image_data)
Here is a tutorial on how to use a TFRecords file. It shows how to create the file (containing images and labels) and read from it.
http://www.machinelearninguru.com/deep_learning/tensorflow/basics/tfrecord/tfrecord.html
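Along the lines of that tutorial, here is a minimal sketch of writing the images and labels to a TFRecords file (TF 1.x style to match the code above; the file name is a placeholder, and it assumes labels holds one integer class id per entry in loadedImages):

import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

# write one tf.train.Example per image
with tf.python_io.TFRecordWriter('train.tfrecords') as writer:   # placeholder file name
    for image_data, label in zip(loadedImages, labels):
        example = tf.train.Example(features=tf.train.Features(feature={
            'image': _bytes_feature(np.asarray(image_data, dtype=np.uint8).tobytes()),
            'label': _int64_feature(int(label)),
        }))
        writer.write(example.SerializeToString())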
Or you could just use zipfile and include the label in the image file name, thus keeping them together (that is what I did).