Load an image.png in a few milliseconds - Python

I need to apply a function to images in less than 1 second. The problem is that, for a 1000x1000 image, just loading it as a matrix in the program already takes about 1 second.
The function I use to load it is as follows:
import png

def load(fname):
    with open(fname, mode='rb') as f:
        reader = png.Reader(file=f)
        w, h, png_img, _ = reader.asRGB8()
        img = []
        for line in png_img:
            l = []
            for i in range(0, len(line), 3):
                l += [(line[i], line[i+1], line[i+2])]
            img += [l]
        return img
How can I modify it so that opening the image takes no more than a few milliseconds?
IMPORTANT NOTE: I cannot import modules other than this one (this is a university exercise, so there are rules -.-), so I have to write the function myself.

You can use PIL to do this for you; it's highly optimized and fast:
from PIL import Image

def load(path):
    return Image.open(path)
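If you also need the nested list of RGB tuples that your original load() returns, here is a minimal sketch (assuming Pillow is available and the file can be converted to RGB):

from PIL import Image

def load_as_matrix(path):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    pixels = list(img.getdata())  # flat list of (R, G, B) tuples
    # regroup the flat list into rows to match the original return format
    return [pixels[i * w:(i + 1) * w] for i in range(h)]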

Building the list tuple by tuple with repeated concatenation is slow - read about Shlemiel the painter's algorithm. You can replace the inner loop with zip and slicing:
for line in png_img:
    img.append(list(zip(line[0::3], line[1::3], line[2::3])))

I'm not sure it is remotely possible to run a Python script that opens a file, etc. in just a few milliseconds. On my computer, even the simplest program takes several tens of milliseconds.
Without knowing more about the specifics of your problem and the reasons for your constraint, it is hard to answer. You should consider what you are trying to do, in the context of the way your program really works, and then formulate a strategy to achieve your goal.
The total context here is, you're asking the computer to:
run python, load your code and interpret it
load any modules you want to use
find your image file and read it from disk
give those bytes some meaning as an image abstraction - parse and decode them
do some kind of transform or "work" on the image
export your result in some way
You need to figure out which of those steps really needs to be lightning fast. After that, maybe someone can make a suggestion - for example, by timing the steps separately, as in the sketch below.
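For instance, a minimal sketch that times only the load step in isolation (the file name and the use of Pillow are assumptions for illustration):

import time
from PIL import Image

start = time.perf_counter()
img = Image.open("image.png")  # placeholder file name
img.load()  # force the pixel data to actually be decoded
elapsed_ms = (time.perf_counter() - start) * 1000
print("loading took %.1f ms" % elapsed_ms)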


faster way to load images?

I'm a Python novice but have a decent amount of experience in other languages. I'm using this loop to load a directory of images for some machine learning, which is why I convert them to numpy arrays. It's very slow, so I must be doing something wrong!
My current code:
import glob
import os

import numpy as np
import PIL.Image

def load_images(src):
    files = []  # accept multiple extensions
    for ext in ('*.gif', '*.png', '*.PNG', '*.jpg', '*.jpeg', '*.JPG', '*.JPEG'):
        files.extend(glob.glob(os.path.join(src, ext)))
    images = []
    for each in files:
        print(each)
        img = PIL.Image.open(each)
        img_array = np.asarray(img)
        images.append(img_array)
    return images

# need to convert from list to numpy array
train_images = np.asarray(load_images(READY_IMAGES))
from multiprocessing import Process

# this is the function to be parallelised
def image_load_here(image_path):
    pass

if __name__ == '__main__':
    # Start the multiprocesses and provide your dataset.
    p = Process(target=image_load_here, args=(['img1', 'img2', 'img3', 'img4'],))
    p.start()
    p.join()
Original Code link : fastest way to load images in python for processing
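As a slightly more concrete sketch of the same multiprocessing idea, here is a variant that uses a Pool to load several images in parallel (the paths and the use of Pillow/numpy are assumptions for illustration, not part of the linked answer):

from multiprocessing import Pool

import numpy as np
import PIL.Image

def load_one(path):
    # worker: load a single image and return it as a numpy array
    return np.asarray(PIL.Image.open(path))

if __name__ == '__main__':
    paths = ['img1.png', 'img2.png', 'img3.png', 'img4.png']  # placeholder paths
    with Pool() as pool:
        images = pool.map(load_one, paths)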
Further reading:
Fastest image reader? Four ways to open a Satellite image in Python
Efficient image loading
Also, if you're doing machine learning with Keras/TensorFlow, you can use a generator to really speed up your loading process; it loads data on the fly, thereby conserving your RAM for other uses.
Here is an excellent article on the topic, and you can visit the official documentation too.
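As a rough illustration of that idea, a minimal generator sketch that yields batches of images lazily (the directory layout, file pattern, and batch_size are assumptions, not from the original post):

import glob
import os

import numpy as np
import PIL.Image

def image_batch_generator(src, batch_size=32):
    # Yield batches of images as numpy arrays, loading them only when needed.
    files = sorted(glob.glob(os.path.join(src, '*.png')))
    batch = []
    for path in files:
        batch.append(np.asarray(PIL.Image.open(path)))
        if len(batch) == batch_size:
            yield np.asarray(batch)  # assumes all images share the same shape
            batch = []
    if batch:  # flush any remaining images
        yield np.asarray(batch)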

my program reduces music speed by 50% but only in one channel

I am using the wave library in Python to attempt to reduce the speed of audio by 50%. I have been successful, but only in the right channel; in the left channel it is a whole bunch of static.
import wave, os, math

r = wave.open(r"C:\Users\A\My Documents\LiClipse Workspace\Audio compression\Audio compression\aha.wav", "r")
w = wave.open(r"C:\Users\A\My Documents\LiClipse Workspace\Audio compression\Audio compression\ahaout.wav", "w")
frames = r.readframes(r.getnframes())
newframes = bytearray()
w.setparams(r.getparams())
for i in range(0, len(frames) - 1):
    newframes.append(frames[i])
    newframes.append(frames[i])
w.writeframesraw(newframes)
Why is this? Since I am just copying and pasting raw data, surely I can't generate static?
edit: I've been looking for ages and I finally found a useful resource for the wave format: http://soundfile.sapp.org/doc/WaveFormat/
If I want to preserve stereo sound, it looks like I need to copy a full 4-byte frame twice. This is because there are two channels, so each frame takes up 4 bytes instead of 2.
import wave

r = wave.open(r"C:\Users\A\My Documents\LiClipse Workspace\Audio compression\Audio compression\aha.wav", "r")
w = wave.open(r"C:\Users\A\My Documents\LiClipse Workspace\Audio compression\Audio compression\ahaout.wav", "w")
frames = r.readframes(r.getnframes())
newframes = bytearray()
w.setparams(r.getparams())
w.setframerate(r.getframerate())
print(r.getsampwidth())
for i in range(0, len(frames) - 4, 4):
    newframes.append(frames[i])
    newframes.append(frames[i + 1])
    newframes.append(frames[i + 2])
    newframes.append(frames[i + 3])
    newframes.append(frames[i])
    newframes.append(frames[i + 1])
    newframes.append(frames[i + 2])
    newframes.append(frames[i + 3])
w.writeframesraw(newframes)
Edit 2:
Okay, I have no idea what drove me to do this, but I am already enjoying the freedom it gives me. I chose to read the wav file into memory, edit the copy directly, and write it to an output file. I am incredibly happy with the results. I can import a wav, repeat the audio once, and write it to an output file in only 0.2 seconds. Halving the speed now takes only 9 seconds instead of the 30+ seconds with my old code using the wave library :) Here's the code; it's still somewhat unoptimized, I guess, but it's better than what it was.
import struct
import time as t

t.clock()
r = open(r"C:/Users/apier/Documents/LiClipse Workspace/audio editing software/main/aha.wav", "rb")
w = open(r"C:/Users/apier/Documents/LiClipse Workspace/audio editing software/main/output.wav", "wb")
rbuff = bytearray(r.read())

def replacebytes(array, bites, stop):
    length = len(bites)
    start = stop - length
    for i in range(start, stop):
        array[i] = bites[i - start]

def write(audio):
    w.write(audio)

def repeat(audio, repeats):
    if repeats == 1:
        return audio
    if repeats == 0:
        return audio[:44]
    replacebytes(audio, struct.pack('<I', struct.unpack('<I', audio[40:44])[0] * repeats), 44)
    return audio + (audio[44:len(audio) - 58] * (repeats - 1))

def slowhalf(audio):
    buff = bytearray()
    replacebytes(audio, struct.pack('<I', struct.unpack('<I', audio[40:44])[0] * 2), 44)
    for i in range(44, len(audio) - 62, 4):
        buff.append(audio[i])
        buff.append(audio[i + 1])
        buff.append(audio[i + 2])
        buff.append(audio[i + 3])
        buff.append(audio[i])
        buff.append(audio[i + 1])
        buff.append(audio[i + 2])
        buff.append(audio[i + 3])
    return audio[:44] + buff

rbuff = slowhalf(rbuff)
write(rbuff)
print(t.clock())
I am surprised at how small the code is.
Each of the elements returned by readframes is a single byte, even though the type is int. An audio sample is typically 2 bytes. By doubling up each byte instead of each whole sample, you get noise.
I have no idea why one channel would work; with the code shown in the question it should be all noise.
This is a partial fix. It still intermixes the left and right channel, but it will give you an idea of what will work.
for i in range(0, len(frames) - 1, 2):
    newframes.append(frames[i])
    newframes.append(frames[i + 1])
    newframes.append(frames[i])
    newframes.append(frames[i + 1])
Edit: here's the code that should work in stereo. It copies 4 bytes at a time, 2 for the left channel and 2 for the right, then does it again to double them up. This will keep the channel data from interleaving.
for i in range(0, len(frames), 4):
    for _ in range(2):
        for j in range(4):
            newframes.append(frames[i + j])
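As a side note, the same frame doubling can be written with slicing, which avoids the per-byte inner loops (a sketch assuming 16-bit stereo, i.e. 4-byte frames):

newframes = bytearray()
for i in range(0, len(frames), 4):
    frame = frames[i:i + 4]      # one stereo frame: 2 bytes left + 2 bytes right
    newframes += frame + frame   # write each frame twice to halve the playback speed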

Is there any way to read one image row/column into an array in Python?

I've just translated some CT reconstruction software from IDL into Python, and this is my first experience ever with Python. The code works fine except that it's much, much slower. This is due, in part, to the fact that IDL allows me to save memory and time by reading in just one row of an image at a time, using the following:
image = read_tiff(filename, sub_rect = [0, slice, x, 1])
I need to read one row each from 1800 different projection images, but as far as I can tell I can only create an image array by reading in the entire image and then converting it to an array. Is there any trick to just read in one row from the start, since I don't need the other 2047 rows?
It looks like tifffile.py by Christoph Gohlke (http://www.lfd.uci.edu/~gohlke/code/tifffile.py.html) can do the job.
from tifffile import TiffFile

with TiffFile('test.tiff') as tif:
    for page in tif:
        image = page.asarray(memmap=True)
        print(image[0, :, :])
If I interpret the code correctly, this will extract the first row of every page in the file without loading the whole file into memory (through numpy.memmap).

Python - read great number of pictures in loop

I have a large number of pictures to use for calculations in Python.
These pictures are named MINtruc1000, MINtruc1250, ...
and each is paired with a picture MAXtruc1000, MAXtruc1250, ...
My aim is to load each pair of pictures (MINtruc1000, MAXtruc1000) at each step of a loop, and I need to automate this because of the large amount of data.
img0=skimage.data.imread('./test/MINtruc1000.tiff')
If I understand the question correctly, I imagine you could just do something like:
import skimage.data

def read_images(start, end):
    for i in range(start, end):
        img = skimage.data.imread("./test/MINtruc%s.tiff" % i)
        ...
If the images aren't in a consistently incrementing order, use glob:
import glob
import skimage.data

for im in glob.glob("./test/MINtruc*.tiff"):
    img = skimage.data.imread(im)
    ...
If the problem is that this may use too much memory, look into creating a generator instead; a sketch of that, adapted to the MIN/MAX pairs in the question, follows below.
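A minimal sketch of a generator that yields each (MIN, MAX) pair lazily. The directory layout and file pattern follow the question; skimage.io.imread is used here in place of the question's skimage.data.imread, and the pairing-by-name rule is an assumption:

import glob
import os

import skimage.io

def iter_pairs(folder="./test"):
    # Pair each MINtruc file with its MAXtruc counterpart by name.
    for min_path in sorted(glob.glob(os.path.join(folder, "MINtruc*.tiff"))):
        max_path = min_path.replace("MINtruc", "MAXtruc")
        if not os.path.exists(max_path):
            continue  # skip MIN images with no matching MAX image
        yield skimage.io.imread(min_path), skimage.io.imread(max_path)

for img_min, img_max in iter_pairs():
    ...  # do the calculation on each pair here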

Iterative writing with PyPNG

I was wondering if there is a way to iteratively write PNG files using PyPNG (as in, row by row or chunk by chunk) without providing png.Writer() with the entire data. I've looked through the documentation, but all the writer methods seem to require all rows of the PNG at once, which uses too much memory.
Thanks in advance!
If you supply an iterator, then PyPNG will use it. Full example:
#!/usr/bin/env python
import random
import png

def iter_rows():
    H = 4096
    for i in range(H):
        yield [random.randrange(i + 1) for _ in range(4096)]

img = png.from_array(iter_rows(), mode="L;12",
                     info=dict(size=(4096, 4096)))
img.save("big.png")
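For completeness, png.Writer itself will also consume an iterator of rows passed to its write() method, so rows can be streamed straight to an open file. A minimal sketch with assumed dimensions and an 8-bit greyscale mode:

import random
import png

def iter_rows(width, height):
    # Generate one row at a time instead of holding the whole image in memory.
    for _ in range(height):
        yield [random.randrange(256) for _ in range(width)]

with open("big_writer.png", "wb") as f:
    writer = png.Writer(width=4096, height=4096, greyscale=True, bitdepth=8)
    writer.write(f, iter_rows(4096, 4096))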
