Reading a text file with OpenCV in Python

I have tens of thousands of text files to analyze, where each text file represents a snapshot in time of the physical state of a system. The micro-state of each "pixel" is represented by floats from 0 to 1. Is it possible for OpenCV to directly read a text file without first having to convert the text file to an image format? I do not want to create tens of thousands of image files every time I carry out this analysis.
Context/goal: I am analyzing a thermal simulation of a nano-magnetic system, and will eventually need to use OpenCV to calculate the contour areas of clusters formed above a certain threshold value.
I've included my code attempt below, using a test text file. The system is a square of side length 40, so I am analyzing a column of 40^2 = 1600 data points, which I call mag (for magnetization, as this is from a scientific research project). I multiply each "pixel" by 255 to mimic grayscale. As soon as the program reaches the cv2.threshold line, I get an error:
~/anaconda/conda-bld/work/opencv-2.4.8/modules/imgproc/src/thresh.cpp:783: error: (-210) in function threshold
which I suspect arises from my mimicking grayscale instead of reading an actual grayscale image file.
import numpy as np
import cv2
import matplotlib.pyplot as plt

SideDim = 40
dud, mag = np.loadtxt('Aex_testfile.txt', unpack=True, usecols=(4, 5), skiprows=2)
mag = np.reshape(mag, (SideDim, SideDim))
for row in range(SideDim):
    for col in range(SideDim):
        mag[row][col] = round(255 * mag[row][col])
mag = mag.astype(np.int)
ret, thresh = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY)
plt.imshow(thresh, 'gray')

Regarding the question in your post title:
In Python, cv2 does not convert text into an image format. Instead, "images" are just numpy arrays, so you are correct to use np.loadtxt to import your data (though I'm partial to np.genfromtxt(), as it's slightly more robust).
Regarding the error you're getting:
Error code -210 is defined as:
#define CV_StsUnsupportedFormat -210 /* the data format/type is not supported by the function*/
cv2.threshold() expects an 8-bit integer array. Instead of casting mag to np.int, cast it to np.uint8. This should fix your error.
Other things to note:
With numpy arrays, you don't need to use those ugly nested loops to multiply each value by 255. Instead, just do mag * 255.
Instead of multiplying by 255 (which doesn't quite make sense unless you're positive your maximum value is 1...), you should really just normalize your array. Something like (mag / mag.max()) * 255 would be a better solution.
You don't need open CV for this part of the program. Instead, you can just do it in numpy:
thresh = 255 * (mag > threshval)
This will produce an array (thresh) in which every value greater than threshval is set to 255 and every other value to 0.
In general, I think it would behoove you to learn numpy before jumping into opencv. I think you'd be surprised at how much you can do in numpy.
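Putting those pieces together, here is a minimal sketch of the fixed pipeline (keeping your file name, columns, and zero threshold; the normalization assumes mag.max() is the value you want mapped to 255):

import numpy as np
import cv2

SideDim = 40
dud, mag = np.loadtxt('Aex_testfile.txt', unpack=True, usecols=(4, 5), skiprows=2)
mag = np.reshape(mag, (SideDim, SideDim))

# Normalize to 0-255 and cast to the 8-bit unsigned type cv2.threshold expects
mag = (255 * mag / mag.max()).astype(np.uint8)

# OpenCV thresholding, now with a supported dtype
ret, thresh = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY)

# Equivalent pure-numpy thresholding
threshval = 0
thresh_np = (255 * (mag > threshval)).astype(np.uint8)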

Related

Images opened in Pillow and OpenCV are not equivalent

I downloaded a test image from Wikipedia (the tree seen below) to compare Pillow and OpenCV (using cv2) in Python. Perceptually the two images appear the same, but their respective md5 hashes don't match, and if I subtract the two images the result is not even close to solid black (shown below the original). The original image is a JPEG. If I convert it to a PNG first, the hashes match.
The last image shows the frequency distribution of the pixel value differences.
As Catree pointed out, my subtraction was causing integer overflow. I updated the code to convert to dtype=int before the subtraction (to show the negative values) and then take the absolute value before plotting the difference. Now the difference image is perceptually solid black.
This is the code I used:
from PIL import Image
import cv2
import sys
import md5
import numpy as np

def hashIm(im):
    imP = np.array(Image.open(im))
    # Convert to BGR and drop alpha channel if it exists
    imP = imP[..., 2::-1]
    # Make the array contiguous again
    imP = np.array(imP)
    im = cv2.imread(im)

    diff = im.astype(int) - imP.astype(int)

    cv2.imshow('cv2', im)
    cv2.imshow('PIL', imP)
    cv2.imshow('diff', np.abs(diff).astype(np.uint8))
    cv2.imshow('diff_overflow', diff.astype(np.uint8))

    with open('dist.csv', 'w') as outfile:
        # diff was already cast to int above, so negative values survive
        for i in range(-256, 256):
            outfile.write('{},{}\n'.format(i, np.count_nonzero(diff == i)))

    cv2.waitKey(0)
    cv2.destroyAllWindows()

    return md5.md5(im).hexdigest() + ' ' + md5.md5(imP).hexdigest()

if __name__ == '__main__':
    print sys.argv[1] + '\t' + hashIm(sys.argv[1])
Frequency distribution updated to show negative values.
This is what I was seeing before I implemented the changes recommended by Catree.
The original image is a JPEG.
JPEG decoding can produce different results depending on the libjpeg version, compiler optimization, platform, etc.
Check which version of libjpeg Pillow and OpenCV are using.
See this answer for more information: JPEG images have different pixel values across multiple devices.
BTW, (im - imP) produces uint8 overflow (there is no way to have that many large pixel differences without it showing up in your frequency chart). Try casting to int before doing your frequency computation.
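To illustrate the wrap-around, a minimal sketch:

import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([20], dtype=np.uint8)

print(a - b)                          # uint8 wraps around: [246]
print(a.astype(int) - b.astype(int))  # cast first to keep the sign: [-10]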

RGB Values Being Returned by PIL don't match RGB color

I'm attempting to write a reasonably simple program that will be able to read the size of an image and return all the RGB values. I'm using PIL on Python 2.7, and my code goes like this:
import os, sys
from PIL import Image
img = Image.open('C:/image.png')
pixels = img.load()
print(pixels[0, 1])
I actually got this code from this site, as a way to read a GIF file. I'm trying to get the code to print an RGB tuple (in this case (55, 55, 55)), but all it gives me is a short sequence of unrelated numbers, usually containing 34.
I have tried many other code examples, from here and elsewhere, but none seem to work. Is there something wrong with the .png format? Do I need further code for the RGB part? I'm happy for any help.
My guess is that your image file is using premultiplied alpha values. The small values you see are pretty close to 55 * 34 / 255 ≈ 7 (where 34 is the alpha channel value).
PIL uses the mode "RGBa" (with a little a) to indicate premultiplied alpha. You may be able to tell PIL to convert it to normal "RGBA", where the pixels will have roughly the values you expect:
img = Image.open('C:/image.png').convert("RGBA")
Note that if your image isn't supposed to be partly transparent at all, you may have larger issues going on. We can't help you with that without knowing more about your image.
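As a quick check, here is the same pixel access from the question after the conversion (a sketch, assuming the file really does carry premultiplied alpha; the path is the asker's):

from PIL import Image

img = Image.open('C:/image.png').convert("RGBA")
pixels = img.load()
print(pixels[0, 1])   # should now print something close to (55, 55, 55, 34)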

Viewing dicom image with Bokeh

I'm trying to set the graph background to a DICOM image. I followed this example, but the image data given by dicom.pixel_array isn't RGBA, and I'm not sure how to convert it. I'm also not sure what exactly Bokeh is expecting. I've tried finding specifics in the documentation, but no such luck.
from bokeh.plotting import figure, show, output_file
import dicom
import numpy as np
path = "/pathToDicomImage.dcm"
data = dicom.read_file(path)
img = data.pixel_array
p = figure(x_range=(0,10), y_range=(0,10))
# must give a vector of images
p.image_rgba(image=[img], x=0, y=0, dw=10, dh=10)
output_file("image_rgba.html", title="image_rgba.py example")
show(p)
This code doesn't give me any errors, but it doesn't display anything. Maybe the pixel array doesn't have alpha data, so alpha defaults to 0? I'm not sure. Also, I can't quite figure out how to test it.
SOLVED
As was pointed out, I just needed to map the pixel data to RGBA space. For this instance, that means duplicating the data into each colour channel and setting alpha all the way up.
def dicom_image_to_RGBA(image_data):
    rows = len(image_data)
    cols = rows
    img = np.empty((rows, cols), dtype=np.uint32)
    view = img.view(dtype=np.uint8).reshape((rows, cols, 4))
    for i in range(rows):
        for j in range(cols):
            view[i][j][0] = image_data[i][j]
            view[i][j][1] = image_data[i][j]
            view[i][j][2] = image_data[i][j]
            view[i][j][3] = 255
    return img
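As a side note, the nested loops can be replaced by numpy broadcasting; a sketch of an equivalent vectorized version (the function name is mine):

import numpy as np

def dicom_image_to_RGBA_fast(image_data):
    rows, cols = image_data.shape
    img = np.empty((rows, cols), dtype=np.uint32)
    view = img.view(dtype=np.uint8).reshape((rows, cols, 4))
    view[..., 0] = image_data   # R
    view[..., 1] = image_data   # G
    view[..., 2] = image_data   # B
    view[..., 3] = 255          # fully opaque
    return img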
Not being an expert in Python, I have had a glance at pydicom's capabilities for handling pixel data. I figured out that pixel_array is the value of the pixel-data attribute of the DICOM dataset as-is, and pydicom does not offer any functionality to convert it into a standard format that can be handled uniformly. This means you will have to convert it to RGB in most cases, which is a quite complicated and error-prone task.
Things to consider in this:
The encoding (Big/Little Endian, various compression methods like JPEG, JPEG-LS, RLE, ZIP) - DICOM attribute (0002,0010) TransferSyntaxUID
The type of pixel data (grayscale, RGB, ...) - DICOM attributes (0028,0004) PhotometricInterpretation, (0028,0103) PixelRepresentation
In case of color images: are the values encoded colour by plane (RRRR..., GGGG..., BBBB...) or colour by pixel as you would expect (RGB RGB ...)?
The bit depth and which bits are used for actual pixel data values - (0028,0100) BitsAllocated, (0028,0101) BitsStored, (0028,0102) HighBit
Are the pixel data values really the values to be displayed, or are they indices into a colour/grayscale lookup table - (0028,3000) ModalityLUTSequence, (0028,3002) LUTDescriptor, (0028,3003) LUTExplanation, (0028,3004) ModalityLUTType, (0028,3006) LUTData
Scary, isn't it? For some modern image classes like Enhanced MR, there is even more than that.
However, if you constrain yourself to a particular type of image (e.g. Computed Radiography), limitations to the points above apply that make your life a bit easier.
If you would post a DICOM dump of the image header, I could give you some hints on how to display that particular image.
HTH
kritzel
What you need to do is map the pixel data returned from pixel_array to RGB space. Usually that is done using a look up table (LUT). Take a look at the functions GetImage and GetLUTValue in the dicomparser module in the dicompyler-core library.
In GetLUTValue it maps the data to an 8-bit greyscale image. If you want to use a different LUT, you would need to map the color space accordingly.
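For illustration, the usual window/level mapping to 8-bit greyscale looks roughly like this (a sketch; the centre and width would normally come from the DICOM attributes (0028,1050) WindowCenter and (0028,1051) WindowWidth, and the function name is mine):

import numpy as np

def window_to_uint8(pixels, center, width):
    lo = center - width / 2.0
    hi = center + width / 2.0
    # Clip to the window, scale to 0..1, then stretch to 0..255
    scaled = np.clip((pixels - lo) / (hi - lo), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)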

MemoryError trying to convert Numpy 2D arrays into a 3D array

I am having some trouble converting a number of Numpy 2D arrays (in this case, 153 of them) into a single 3D array. These 2D arrays represent gray images (i.e. 2048x2048x1), and I want to treat them as an image sequence rather than a set of 2D images. I need this to obtain the signal formed by each pixel value over time (which should be convenient with Numpy once this problem is solved).
My code is (pretty much) the following:
zdim = len(imglist)  # 'imglist' is a Python list of the paths of the images to process
windowspan = 512
xmin = ymin = 2
xmax = ymax = xmin + windowspan
sequence = []
for i in range(zdim):
    hdulist = fits.open(imglist[i], 'readonly')  # open the FITS image file
    hdr = hdulist[0].header['DATE-OBS']          # fetch the image date/time
    img = fc.readfitsimg(imglist[i])             # returns a np ndarray (2D)
    patch = img[ymin:ymax, xmin:xmax]            # take a small patch of the original image
    print("patchSize : " + str(patch.size * 4))
    sequence.append(patch)                       # add to the list
    print("it : " + str(i))
sequence = np.array(sequence)  # transform to numpy array
The interpreter returns a MemoryError after about 85 iterations...
Would anyone have any hints about what's happening? (See some details below.)
Some other details:
- I'm using WinPython 32-bit (portable), because I cannot install a 'proper' Python distribution (I switched between Python 2.7.9.4 and 3.4.3.3 for testing purposes)
- I'm forced to use 32-bit Windows 7, on a PC with 4 GB of RAM, so about 3.5 GB usable. I've also tried executing my script on another computer (Win7 64-bit, 16 GB of RAM)
Thanks for any help you could provide me with.
The MemoryError happens when your computer runs out of RAM. In this case, you seem to run out while stacking all the images into a cube, at around 85x512x512 values. If this were the only problem with the code, I would advise using memmap to save the results directly to the hard drive instead of to RAM. The memmap option is also available when you open a FITS file: fits.open(..., memmap=True). In that case you only open the image on disk and read the parts you need, instead of loading the whole image into RAM.
But the real problem here, I suspect, is that you have been opening FITS files without closing them at the end of the loop (hdulist.close() in your case).
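A minimal sketch of the loop with the handle closed on every iteration, combined with the memmap suggestion (this assumes the 2D image lives in the primary HDU; the asker actually reads it through fc.readfitsimg, which isn't shown):

from astropy.io import fits
import numpy as np

sequence = []
for path in imglist:
    with fits.open(path, memmap=True) as hdulist:   # closed automatically on exit
        hdr = hdulist[0].header['DATE-OBS']
        # copy() so the patch survives after the memmap is closed
        patch = hdulist[0].data[ymin:ymax, xmin:xmax].copy()
    sequence.append(patch)
sequence = np.array(sequence)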

Best dtype for creating large arrays with numpy

I am looking to store pixel values from satellite imagery into an array. I've been using
np.empty((image_width, image_length))
and it worked for smaller subsets of an image, but when using it on the entire image (3858 x 3743) the code terminates very quickly and all I get is an array of zeros.
I load the image values into the array using a loop and opening the image with gdal
img = gdal.Open(os.path.join(fn + "\{0}".format(fname))).ReadAsArray()
but when I include print img_array I end up with just zeros.
I have tried almost every single dtype that I could find in the numpy documentation but keep getting the same result.
Is numpy unable to load this many values or is there a way to optimize the array?
I am working with 8-bit tiff images that contain NDVI (decimal) values.
Thanks
Not certain what type of images you are trying to read, but in the case of RADARSAT-2 images you can do the following:
dataset = gdal.Open("RADARSAT_2_CALIB:SIGMA0:" + inpath + "product.xml")
S_HH = dataset.GetRasterBand(1).ReadAsArray()
S_VV = dataset.GetRasterBand(2).ReadAsArray()
# gets the intensity (Intensity = re**2+imag**2), and amplitude = sqrt(Intensity)
self.image_HH_I = numpy.real(S_HH)**2+numpy.imag(S_HH)**2
self.image_VV_I = numpy.real(S_VV)**2+numpy.imag(S_VV)**2
But that is specific to that type of image: in this case each image contains several bands, so I need to read in each band separately with GetRasterBand(i) and then do ReadAsArray(). If there is a specific GDAL driver for the type of images you want to read in, life gets very easy.
If you give some more info on the type of images you want to read in, I can maybe help more specifically.
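For a plain single-band 8-bit GeoTIFF, the generic pattern would look something like this (a sketch; the file name is hypothetical):

from osgeo import gdal

dataset = gdal.Open("ndvi_image.tif")
band = dataset.GetRasterBand(1)
img_array = band.ReadAsArray()            # numpy array in the band's native dtype
print(img_array.dtype, img_array.shape)
print(img_array.min(), img_array.max())   # all zeros here would point to a read problem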
Edit: did you try something like this? (Not sure whether this will work on TIFF, or how many bits the header is, hence the something:)
A = open(filename, "rb")   # binary mode so the bytes come through unmodified
B = numpy.fromfile(A, dtype='uint8')[something:].reshape(3858, 3743)
C = B * 1.0
A.close()
Edit: The problem was solved by using 64-bit Python instead of 32-bit; the 32-bit Python version was hitting memory errors at 2 GB.
