Using the program at this link, https://leon.bottou.org/projects/infimnist, I generated some data.
As far as I can tell it is in some sort of binary format:
b"\x00\x00\x08\x01\x00\x00'\x10\x07\x02\x01\x00\x04\x01\x04\t\x05 ...
I need to extract labels and pictures from two datasets like this, generated with:
https://leon.bottou.org/projects/infimnist
with open("test10k-labels", "rb") as binary_file:
    data = binary_file.read()
    print(data)
>>> b"\x00\x00\x08\x01\x00\x00'\x10\x07\x02\x01\x00\x04\x01\x04\t\x05 ...
b"\x00\x00\x08\x01 ...".decode('ascii')
>>> "\x00\x00\x08\x01 ..."
I also tried the binascii package, but it did not work.
Thankful for any help!
Creating the Data
To create the dataset I am speaking of, download the package from the following link: https://leon.bottou.org/projects/infimnist.
$ cd dir_of_folder
$ make
Then I took the path of the infimnist executable that the build produces and ran:
$ app_path lab 10000 69999 > mnist60k-labels-idx1-ubyte
This should place the file I used in the folder.
The command after app_path can be replaced by any of the other commands listed on the project page.
Final update
It works!
Using some numpy functions the images can be returned to their normal orientation.
from array import array
import struct
import numpy as np

# for the labels
with open(path, "rb") as binary_file:
    magic, size = struct.unpack(">II", binary_file.read(8))  # skip the 8-byte header
    y_train = np.array(array("B", binary_file.read()))

# for the images
with open("images path", "rb") as binary_file:
    images = []
    emnistRotate = True
    magic, size, rows, cols = struct.unpack(">IIII", binary_file.read(16))
    if magic != 2051:
        raise ValueError('Magic number mismatch, expected 2051, got {}'.format(magic))
    for i in range(size):
        images.append([0] * rows * cols)
    image_data = array("B", binary_file.read())
    for i in range(size):
        images[i][:] = image_data[i * rows * cols:(i + 1) * rows * cols]
        # for some reason EMNIST is mirrored and rotated
        if emnistRotate:
            x = image_data[i * rows * cols:(i + 1) * rows * cols]
            subs = []
            for r in range(rows):
                subs.append(x[(rows - r) * cols - cols:(rows - r) * cols])
            l = list(zip(*reversed(subs)))
            fixed = [item for sublist in l for item in sublist]
            images[i][:] = fixed

x = []
for image in images:
    x.append(np.rot90(np.flip(np.array(image).reshape((28, 28)), 1), 1))
x_train = np.array(x)
Crazy solution for such a simple thing :)
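For reference, assuming the plain row-major IDX layout described on the MNIST page (and no EMNIST-style rotation), the same files can presumably be read much more compactly with numpy alone. This is an untested sketch reusing the file names from above:
import struct
import numpy as np

# Parse the IDX headers directly and let numpy handle the pixel buffer in one go.
with open("mnist60k-labels-idx1-ubyte", "rb") as f:
    magic, size = struct.unpack(">II", f.read(8))
    assert magic == 2049
    y_train = np.frombuffer(f.read(), dtype=np.uint8)

with open("mnist8m-patterns-idx3-ubyte", "rb") as f:
    magic, size, rows, cols = struct.unpack(">IIII", f.read(16))
    assert magic == 2051
    x_train = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, rows, cols)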
OK, so looking at the python-mnist source, it seems the correct way to unpack the binary format is as follows:
from array import array
import struct

with open("test10k-labels", "rb") as binary_file:
    magic, size = struct.unpack(">II", binary_file.read(8))
    if magic != 2049:
        raise ValueError("Magic number mismatch, expected 2049, got {}".format(magic))
    labels = array("B", binary_file.read())
    print(labels)
update
So I haven't tested this extensively, but the following code should work. It was taken and modified from the aforementioned python-mnist source.
from array import array
import struct

with open("mnist8m-patterns-idx3-ubyte", "rb") as binary_file:
    images = []
    emnistRotate = True
    magic, size, rows, cols = struct.unpack(">IIII", binary_file.read(16))
    if magic != 2051:
        raise ValueError('Magic number mismatch, expected 2051, got {}'.format(magic))
    for i in range(size):
        images.append([0] * rows * cols)
    image_data = array("B", binary_file.read())
    for i in range(size):
        images[i][:] = image_data[i * rows * cols:(i + 1) * rows * cols]
        # for some reason EMNIST is mirrored and rotated
        if emnistRotate:
            x = image_data[i * rows * cols:(i + 1) * rows * cols]
            subs = []
            for r in range(rows):
                subs.append(x[(rows - r) * cols - cols:(rows - r) * cols])
            l = list(zip(*reversed(subs)))
            fixed = [item for sublist in l for item in sublist]
            images[i][:] = fixed
print(images)
previous answer:
You can use the python-mnist library:
from mnist import MNIST
mndata = MNIST('./data')
images, labels = mndata.load_training()
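The test set can presumably be loaded the same way; python-mnist also exposes load_testing(). This assumes the IDX files sit in ./data under the standard MNIST file names (train-images-idx3-ubyte, train-labels-idx1-ubyte, t10k-images-idx3-ubyte, t10k-labels-idx1-ubyte):
from mnist import MNIST

mndata = MNIST('./data')
train_images, train_labels = mndata.load_training()
test_images, test_labels = mndata.load_testing()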
Related
Hello, I am using Python to read the digit data provided by MNIST into a data structure I can use to train a neural network. I am testing that the data was read properly by creating an image with PIL. The image that is being created is horribly wrong, and I am not sure if it is because I am using PIL incorrectly or because my data structures and methods are not right.
The format of the two data files is described here:
http://yann.lecun.com/exdb/mnist/
Here are the applicable functions:
read_image_data reads the pixel data, organizing it into a list of 2D numpy arrays
def read_image_data():
    fd = open("train-images.idx3-ubyte", "rb")
    images_bin_string = fd.read()
    num_images = struct.unpack(">i", images_bin_string[4:8])[0]
    image_data_bank = []
    uint32_num_bytes = 4
    current_index = 8
    num_rows = struct.unpack(">I", \
        images_bin_string[current_index: current_index + uint32_num_bytes])[0]
    num_cols = struct.unpack(">I", \
        images_bin_string[current_index + uint32_num_bytes: \
                          current_index + uint32_num_bytes * 2])[0]
    current_index += 8
    i = 0
    while i < num_images:
        image_data = np.zeros([num_rows, num_cols])
        for j in range(num_rows - 1):
            for k in range(num_cols - 1):
                image_data[j][k] = images_bin_string[current_index + j * k]
        current_index += num_rows * num_cols
        i += 1
        image_data_bank.append(image_data)
    return image_data_bank
read_label_data reads the corresponding labels into a list
def read_label_data():
    fd = open("train-labels.idx1-ubyte", "rb")
    images_bin_string = fd.read()
    num_images = struct.unpack(">i", images_bin_string[4:8])[0]
    image_data_bank = []
    current_index = 8
    i = 0
    while i < num_images:
        image_data_bank.append(images_bin_string[current_index])
        current_index += 1
        i += 1
    return image_data_bank
collect_data zips the structures together
def collect_data():
    print("Reading image data...")
    image_data = read_image_data()
    print("Reading label data...")
    label_data = read_label_data()
    print("Zipping data sets...")
    all_data = np.array(list(zip(image_data, label_data)))
    return all_data
Lastly, run_test uses PIL to display the pixels from the first 28x28 np structure created by read_image_data
def run_test(data):
    example = data[0]
    pixel_data = example[0]
    number = example[1]
    print(number)
    im = Image.fromarray(pixel_data)
    im.show()
When I run the script:
Collecting data... Reading image data... Reading label data... Zipping
data sets... 5
I must be messing something up with the PIL library, but I do not know what.
That is a really weird looking 5. I am guessing that I went wrong somewhere in my organization of the data. The directions did say "Pixels are organized row-wise.", but I think I covered that by having my outer loop be the row loop and the inner one the column loop.
UPDATE
I reversed the order of the row and column indices in the np.arrays in read_image_data and it made no difference.
image_data[k][j] = images_bin_string[current_index + j * k]
UPDATE
Ran quick test with matplotlib
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
imgplot = plt.imshow(pixel_data)
plt.show()
Here is what I got from matplotlib
That means it is definitely a problem with my code and not the library. The question is whether it is the way I am passing the pixels to the imaging libraries or how I structured the data. If anyone can find the mistake, I would greatly appreciate it.
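A note on the likely mistake: the offset j * k does not walk the row-major pixel buffer, and the range(num_rows - 1) / range(num_cols - 1) loops also drop the last row and column. A hedged rewrite of read_image_data (not tested here, and assuming Python 3, where indexing a bytes object yields an int) would be:
import struct
import numpy as np

def read_image_data():
    # Pixels are stored row-wise, so the offset of pixel (j, k) inside an
    # image is j * num_cols + k, and both loops must cover the full range.
    with open("train-images.idx3-ubyte", "rb") as fd:
        images_bin_string = fd.read()
    num_images = struct.unpack(">i", images_bin_string[4:8])[0]
    num_rows = struct.unpack(">I", images_bin_string[8:12])[0]
    num_cols = struct.unpack(">I", images_bin_string[12:16])[0]
    image_data_bank = []
    current_index = 16  # 4 header fields of 4 bytes each
    for _ in range(num_images):
        image_data = np.zeros([num_rows, num_cols])
        for j in range(num_rows):
            for k in range(num_cols):
                image_data[j][k] = images_bin_string[current_index + j * num_cols + k]
        current_index += num_rows * num_cols
        image_data_bank.append(image_data)
    return image_data_bank
It may also help to cast to uint8 before handing the array to PIL, e.g. Image.fromarray(pixel_data.astype(np.uint8)), since a float array is not displayed as 8-bit grayscale.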
I'm trying to normalize my dataset, which is 1.7 GB. I have 14 GB of RAM and I hit my limit very quickly.
This happens when computing the mean/std of the training data. The training data takes up the majority of the memory when loaded into RAM (13.8 GB), so the mean gets calculated, but when it reaches the next line and calculates the std, it crashes.
Here is the script:
import caffe
import leveldb
import numpy as np
from caffe.proto import caffe_pb2
import cv2
import sys
import time
direct = 'examples/svhn/'
db_train = leveldb.LevelDB(direct+'svhn_train_leveldb')
db_test = leveldb.LevelDB(direct+'svhn_test_leveldb')
datum = caffe_pb2.Datum()
#using the whole dataset for training which is 604,388
size_train = 604388 #normal training set is 73257
size_test = 26032
data_train = np.zeros((size_train, 3, 32, 32))
label_train = np.zeros(size_train, dtype=int)
print 'Reading training data...'
i = -1
for key, value in db_train.RangeIter():
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_train:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_train[i] = data
    label_train[i] = label
print 'Computing statistics...'
print 'calculating mean...'
mean = np.mean(data_train, axis=(0,2,3))
print 'calculating std...'
std = np.std(data_train, axis=(0,2,3))
#np.savetxt('mean_svhn.txt', mean)
#np.savetxt('std_svhn.txt', std)
print 'Normalizing training'
for i in range(3):
    print i
    data_train[:, i, :, :] = data_train[:, i, :, :] - mean[i]
    data_train[:, i, :, :] = data_train[:, i, :, :] / std[i]
print 'Outputting training data'
leveldb_file = direct + 'svhn_train_leveldb_normalized'
batch_size = size_train
# create the leveldb file
db = leveldb.LevelDB(leveldb_file)
batch = leveldb.WriteBatch()
datum = caffe_pb2.Datum()
for i in range(size_train):
    if i % 1000 == 0:
        print i
    # save in datum
    datum = caffe.io.array_to_datum(data_train[i], label_train[i])
    keystr = '{:0>5d}'.format(i)
    batch.Put(keystr, datum.SerializeToString())
    # write batch
    if (i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()
        print (i + 1)
# write last batch
if (i + 1) % batch_size != 0:
    db.Write(batch, sync=True)
    print 'last batch'
    print (i + 1)
#explicitly freeing memory to avoid hitting the limit!
#del data_train
#del label_train
print 'Reading test data...'
data_test = np.zeros((size_test, 3, 32, 32))
label_test = np.zeros(size_test, dtype=int)
i = -1
for key, value in db_test.RangeIter():
    i = i + 1
    if i % 1000 == 0:
        print i
    if i == size_test:
        break
    datum.ParseFromString(value)
    label = datum.label
    data = caffe.io.datum_to_array(datum)
    data_test[i] = data
    label_test[i] = label
print 'Normalizing test'
for i in range(3):
    print i
    data_test[:, i, :, :] = data_test[:, i, :, :] - mean[i]
    data_test[:, i, :, :] = data_test[:, i, :, :] / std[i]
#Zero Padding
#print 'Padding...'
#npad = ((0,0), (0,0), (4,4), (4,4))
#data_train = np.pad(data_train, pad_width=npad, mode='constant', constant_values=0)
#data_test = np.pad(data_test, pad_width=npad, mode='constant', constant_values=0)
print 'Outputting test data'
leveldb_file = direct + 'svhn_test_leveldb_normalized'
batch_size = size_test
# create the leveldb file
db = leveldb.LevelDB(leveldb_file)
batch = leveldb.WriteBatch()
datum = caffe_pb2.Datum()
for i in range(size_test):
    # save in datum
    datum = caffe.io.array_to_datum(data_test[i], label_test[i])
    keystr = '{:0>5d}'.format(i)
    batch.Put(keystr, datum.SerializeToString())
    # write batch
    if (i + 1) % batch_size == 0:
        db.Write(batch, sync=True)
        batch = leveldb.WriteBatch()
        print (i + 1)
# write last batch
if (i + 1) % batch_size != 0:
    db.Write(batch, sync=True)
    print 'last batch'
    print (i + 1)
How can I make it consume less memory so that I can get to run the script?
Why not compute the statistics on a subset of the original data? For example, here we compute the mean and std for just 100 points:
sample_size = 100
data_train = np.random.rand(1000, 20, 10, 10)
# Take subset of training data
idxs = np.random.choice(data_train.shape[0], sample_size)
data_train_subset = data_train[idxs]
# Compute stats
mean = np.mean(data_train_subset, axis=(0,2,3))
std = np.std(data_train_subset, axis=(0,2,3))
If your data is 1.7 GB, it is highly unlikely that you need all of it to get an accurate estimate of the mean and std.
In addition, could you get away with fewer bits in your datatype? I'm not sure what datatype caffe.io.datum_to_array returns, but you could do:
data = caffe.io.datum_to_array(datum).astype(np.float32)
to ensure the data is float32 format. (If the data is currently float64, then this will save you half the space).
The culprit that caused so many issues and constant crashes due to insufficient memory was the batch size being set to the size of the whole training set:
print 'Outputting test data'
leveldb_file = direct + 'svhn_test_leveldb_normalized'
batch_size = size_test
This apparently was the cause: nothing would get committed and saved to disk until the whole dataset had been read and loaded into one huge transaction. This is also why using np.float32, as suggested by @BillCheatham, didn't work properly on its own.
The memory-map solution wouldn't work for me for some reason, so I used the solution I mentioned above.
PS: Later on, I completely changed to float32, fixed the batch_size, and ran the whole thing together. That's how I can say my former solution (divide and add the fractions together) works and gives the exact same number up to 2 decimals.
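For concreteness, a hedged sketch of the "divide and add the fractions together" idea: accumulate per-channel sums and squared sums while streaming over the leveldb, then derive mean and std at the end, so the full float64 array never has to live in RAM. It reuses the db_train and datum objects from the script above and is not tested here:
import numpy as np

count = 0            # number of pixels seen per channel
s = np.zeros(3)      # running sum per channel
ss = np.zeros(3)     # running sum of squares per channel
for key, value in db_train.RangeIter():
    datum.ParseFromString(value)
    data = caffe.io.datum_to_array(datum).astype(np.float32)  # shape (3, 32, 32)
    count += data.shape[1] * data.shape[2]
    s += data.sum(axis=(1, 2))
    ss += (data ** 2).sum(axis=(1, 2))
mean = s / count
std = np.sqrt(ss / count - mean ** 2)  # population std, same as np.std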
I want to write a 512x424 (1-channel) image database with multiple labels in HDF5 format, in Python (Anaconda), for image regression.
However, I get an error when allocating the array in memory.
import h5py
import numpy as np
cols = 512
rows = 424
channel = 1
total_image = sum(1 for line in open(data_label_list))
total_size = total_image*rows*cols
f_train = open(data_label_list)
lines = f_train.readlines()
data = np.empty((total_image, channel, cols, rows))
multi_label = np.empty((total_image, 1))
==========================================
data = np.empty((total_image, channel, cols, rows))
"MemoryError"
Can I solve this error for a large-scale database?
Added source code:
cnt = 0
for line in lines:
    line = line.rstrip('\n')
    list_line = line.split(' ')
    im = Image.open(list_line[0])
    row, col = im.size
    pixels = im.load()
    for i in range(row):
        for j in range(col):
            data[cnt, 0, i, j] = pixels[i, j]
    multi_label[cnt, 0] = list_line[1]
    multi_label[cnt, 1] = list_line[2]
    cnt += 1
Thanks,
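Since the MemoryError comes from allocating the full (total_image, 1, 512, 424) float64 array up front, one way around it (a hedged sketch, not tested here) is to create the HDF5 datasets first and write one image at a time, so only a single image is in memory; h5py supports writing into a dataset slice by slice. The file names and the two-column label layout below are assumptions following the code above:
import h5py
import numpy as np
from PIL import Image

cols, rows, channel = 512, 424, 1
data_label_list = "train_list.txt"   # placeholder for the list file used above

with open(data_label_list) as f_train:
    lines = f_train.readlines()
total_image = len(lines)

with h5py.File("train_data.h5", "w") as h5:   # placeholder output file name
    # uint8 is enough for 8-bit pixels and is 8x smaller than float64;
    # chunking per image keeps each write small.
    data = h5.create_dataset("data", (total_image, channel, cols, rows),
                             dtype="uint8", chunks=(1, channel, cols, rows))
    label = h5.create_dataset("label", (total_image, 2), dtype="float32")
    for cnt, line in enumerate(lines):
        path, lab1, lab2 = line.rstrip("\n").split(" ")[:3]
        im = np.asarray(Image.open(path))     # only one image in memory at a time
        data[cnt, 0, :, :] = im.T             # assumes single-channel images; transpose to the (cols, rows) layout above
        label[cnt, :] = (float(lab1), float(lab2))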
This is a program for face recognition using PCA logic. Everything went fine except for an index error that came up at the end of the program.
When I run the code I get an IndexError at the fourth-to-last line of my program.
distances.append((dist, y[i]))
IndexError: list index out of range
Can anyone help with this? I am a newbie to Python, so I am not an expert at solving this.
Here is my code:
from sklearn.decomposition import RandomizedPCA
import numpy as np
import glob
import cv2
import math
import os.path
import string
#function to get ID from filename
def ID_from_filename(filename):
    part = string.split(filename, '/')
    return part[1].replace("s", "")
#function to convert image to right format
def prepare_image(filename):
    img_color = cv2.imread(filename)
    img_gray = cv2.cvtColor(img_color, cv2.cv.CV_RGB2GRAY)
    img_gray = cv2.equalizeHist(img_gray)
    return img_gray.flat
IMG_RES = 92 * 112 # img resolution
NUM_EIGENFACES = 10 # images per train person
NUM_TRAINIMAGES = 110 # total images in training set
#loading training set from folder train_faces
folders = glob.glob('train_faces/*')
# Create an array with flattened images X
# and an array with ID of the people on each image y
X = np.zeros([NUM_TRAINIMAGES, IMG_RES], dtype='int8')
y = []
# Populate training array with flattened images from subfolders of
# train_faces and names
c = 0
for x, folder in enumerate(folders):
    train_faces = glob.glob(folder + '/*')
    for i, face in enumerate(train_faces):
        X[c,:] = prepare_image(face)
        y.append(ID_from_filename(face))
        c = c + 1
# perform principal component analysis on the images
pca = RandomizedPCA(n_components=NUM_EIGENFACES, whiten=True).fit(X)
X_pca = pca.transform(X)
# load test faces (usually one), located in folder test_faces
test_faces = glob.glob('test_faces/*')
# Create an array with flattened images X
X = np.zeros([len(test_faces), IMG_RES], dtype='int8')
# Populate test array with flattened images from folder test_faces
for i, face in enumerate(test_faces):
    X[i,:] = prepare_image(face)
# run through test images (usually one)
for j, ref_pca in enumerate(pca.transform(X)):
    distances = []
    # Calculate euclidean distance from test image to each of the known
    # images and save distances
    for i, test_pca in enumerate(X_pca):
        dist = math.sqrt(sum([diff**2 for diff in (ref_pca - test_pca)]))
        distances.append((dist, y[i]))
    found_ID = min(distances)[1]
    print "Identified (result: " + str(found_ID) + " - dist - " + \
        str(min(distances)[0]) + ")"
Your i in the loop below goes up to len(X_pca) - 1:
for i, test_pca in enumerate(X_pca):
    dist = math.sqrt(sum([diff**2 for diff in (ref_pca - test_pca)]))
    distances.append((dist, y[i]))
However, your y is not necessarily built to have that length:
for x, folder in enumerate(folders):
    train_faces = glob.glob(folder + '/*')
    for i, face in enumerate(train_faces):
        X[c,:] = prepare_image(face)
        y.append(ID_from_filename(face))
So you are using an index i which is greater than the bounds of your list y.
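One hedged way to avoid the mismatch (a sketch, using the prepare_image and ID_from_filename helpers defined in the question) is to size the training matrix from the files actually found on disk instead of the hard-coded NUM_TRAINIMAGES, so X, X_pca and y always have matching lengths:
import glob
import numpy as np

# Build the list of training files first, then size X from it.
train_files = [face for folder in glob.glob('train_faces/*')
               for face in glob.glob(folder + '/*')]
X = np.zeros([len(train_files), IMG_RES], dtype='int8')
y = []
for c, face in enumerate(train_files):
    X[c, :] = prepare_image(face)
    y.append(ID_from_filename(face))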
I'm trying to multiply two big matrices under a memory limit using hdf5 (pytables),
but the function numpy.dot gives me an error:
ValueError: array is too big
Do I need to do the matrix multiplication myself, maybe blockwise, or is there another Python function similar to numpy.dot?
import numpy as np
import time
import tables
import cProfile
import numexpr as ne
n_row=10000
n_col=100
n_batch=10
rows = n_row
cols = n_col
batches = n_batch
atom = tables.UInt8Atom() #?
filters = tables.Filters(complevel=9, complib='blosc') # tune parameters
fileName_a = 'C:\carray_a.h5'
shape_a = (rows*batches, cols) # predefined size
h5f_a = tables.open_file(fileName_a, 'w')
ca_a = h5f_a.create_carray(h5f_a.root, 'carray', atom, shape_a, filters=filters)
for i in range(batches):
    data = np.random.rand(rows, cols)
    ca_a[i*rows:(i+1)*rows] = data[:]
#h5f_0.close()
rows = n_col
cols = n_row
batches = n_batch
fileName_b = 'C:\carray_b.h5'
shape_b = (rows, cols*batches) # predefined size
h5f_b = tables.open_file(fileName_b, 'w')
ca_b = h5f_b.create_carray(h5f_b.root, 'carray', atom, shape_b, filters=filters)
#need to batch by cols
sz= rows/batches
for i in range(batches):
    data = np.random.rand(sz, cols*batches)
    ca_b[i*sz:(i+1)*sz] = data[:]
#h5f_1.close()
rows = n_batch*n_row
cols = n_batch*n_row
fileName_c = 'C:\carray_c.h5'
shape_c = (rows, cols) # predefined size
h5f_c = tables.open_file(fileName_c, 'w')
ca_c = h5f_c.create_carray(h5f_c.root, 'carray', atom, shape_c, filters=filters)
a= h5f_a.root.carray#[:]
b= h5f_b.root.carray#[:]
c= h5f_c.root.carray
t0= time.time()
c= np.dot(a,b) #error if aray is big
print (time.time()-t0)
Update: so here is the code. Interestingly, using hdf5 it works even faster.
import numpy as np
import tables
import time
sz= 100 #chunk size
n_row=10000 #m
n_col=1000 #n
#for arbitrary size
A=np.random.rand(n_row,n_col)
B=np.random.rand(n_col,n_row)
# A=np.random.randint(5, size=(n_row,n_col))
# B=np.random.randint(5, size=(n_col,n_row))
#using numpy array
#C= np.zeros((n_row,n_row))
#using hdf5
fileName_C = 'CArray_C.h5'
atom = tables.Float32Atom()
shape = (A.shape[0], B.shape[1])
Nchunk = 128 # ?
chunkshape = (Nchunk, Nchunk)
chunk_multiple = 1
block_size = chunk_multiple * Nchunk
h5f_C = tables.open_file(fileName_C, 'w')
C = h5f_C.create_carray(h5f_C.root, 'CArray', atom, shape, chunkshape=chunkshape)
sz= block_size
t0= time.time()
for i in range(0, A.shape[0], sz):
    for j in range(0, B.shape[1], sz):
        for k in range(0, A.shape[1], sz):
            C[i:i+sz, j:j+sz] += np.dot(A[i:i+sz, k:k+sz], B[k:k+sz, j:j+sz])
print (time.time()-t0)
t0= time.time()
res= np.dot(A,B)
print (time.time()-t0)
print (C== res)
h5f_C.close()
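One small aside on the final check: C holds float32 values and is compared elementwise to the float64 np.dot result, so == will mostly print False even when the blockwise product is correct. Comparing with a tolerance is presumably more meaningful, for example:
# Materialize the CArray and compare with a tolerance instead of exact equality.
print(np.allclose(C[:], res, atol=1e-4))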
I don't know of an np.dot that works without loading the arrays into memory. I think blocking would work pretty well. Create an output array (called "c" below) as a pytables CArray and fill it in blocks. You should choose the chunkshape when you create it to match your blocking scheme. Something like:
atom = tables.Float32Atom() # you have UInt8Atom() above. do you mean that?
shape = (a.shape[0], b.shape[1])
# you can vary block_size and chunkshape independently, but I would
# aim to have block_size an integer multiple of chunkshape
# your mileage may vary and depends on the array size and how you'll
# access it in the future.
Nchunk = 128 # ?
chunkshape = (Nchunk, Nchunk)
chunk_multiple = 1
block_size = chunk_multiple * Nchunk
c = h5f.create_carray(h5.root, 'c', atom, shape, chunkshape=chunkshape)
for i_start in range(0, a.shape[0], block_size):
    for j_start in range(0, b.shape[1], block_size):
        for k_start in range(0, a.shape[1], block_size):
            c[i_start:i_start + block_size, j_start:j_start + block_size] += \
                np.dot(a[i_start:i_start + block_size, k_start:k_start + block_size],
                       b[k_start:k_start + block_size, j_start:j_start + block_size])