I use SimpleITK to read a DICOM file, but I do not know how to show it in a QLabel.
import SimpleITK

reader = SimpleITK.ImageFileReader()
reader.SetFileName("M:\\CT-RT DICOM\\ct\\CT111253009007.dcm")
image1 = reader.Execute()
How can I show image1 in QLabel?
Maybe something like this? It generates a QImage which you can then put into a QLabel via a QPixmap.
A few gotchas: the DICOM pixel data is (I assume) 16-bit and needs to be mapped into the 8-bit RGB image, and the image may also need scaling. But this should be enough to get you started:
import SimpleITK
from PySide import QtGui

# Get the pixel data as a numpy array; 16-bit values would need
# rescaling to the 0-255 range before display
data = SimpleITK.GetArrayFromImage(image1)

width, height = image1.GetSize()
qimg = QtGui.QImage(width, height, QtGui.QImage.Format_RGB32)
for x in xrange(width):
    for y in xrange(height):
        v = int(data[y, x])  # numpy arrays are indexed [row, column]
        qimg.setPixel(x, y, QtGui.qRgb(v, v, v))  # setPixel takes an RGB int, not a QColor

pix = QtGui.QPixmap.fromImage(qimg)
label = QtGui.QLabel()
label.setPixmap(pix)
label.show()
I'm trying to transform an image, mapping each coordinate of the output to a coordinate on the input image.
I've written an example using ImageDraw, but with size being 2000 it is too slow for my purpose.
from PIL import Image, ImageDraw

size = [size]
img = Image.new('RGB', (size, size), color=(255, 200, 100))
draw = ImageDraw.Draw(img)
pix = [pixel data]
for X in range(0, size):
    for Y in range(0, size):
        draw.point((X, Y), fill=pix[functionX(X), functionY(Y)])
I'm sure it could be done faster with PIL's own functions than with my code.
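For reference, the nested draw.point loop can be replaced by a single numpy fancy-indexing step. A sketch using stand-in data, since `[size]` and `[pixel data]` are placeholders here; `functionX`/`functionY` below are hypothetical mappings, and they must accept whole numpy arrays:

```python
import numpy as np
from PIL import Image

size = 2000

# hypothetical stand-ins for your functionX / functionY; plain
# arithmetic expressions like these work on whole numpy arrays
def functionX(X):
    return (size - 1) - X   # e.g. mirror horizontally

def functionY(Y):
    return Y

# stand-in pixel data, indexed pix[x, y] like a PIL pixel-access object
pix = np.random.randint(0, 256, (size, size, 3), dtype=np.uint8)

src_x = functionX(np.arange(size))   # source x for every output column
src_y = functionY(np.arange(size))   # source y for every output row

# out[Y, X] = pix[functionX(X), functionY(Y)], built in one vectorized step
out = pix[src_x[None, :], src_y[:, None]]
img = Image.fromarray(out, 'RGB')    # array rows correspond to the image's Y axis
```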
I'm trying to convert a 1-layer (grey-scale) image to a 3-layer RGB image. Below is the code I'm using. This runs without error but doesn't create the correct result.
from PIL import Image  # used for loading images
import numpy as np

def convertLToRgb(img):
    height = img.size[1]
    width = img.size[0]
    size = img.size
    mode = 'RGB'
    data = np.zeros((height, width, 3))
    for i in range(height):
        for j in range(width):
            pixel = img.getpixel((j, i))
            data[i][j][0] = pixel
            data[i][j][1] = pixel
            data[i][j][2] = pixel
    img = Image.frombuffer(mode, size, data)
    return img
What am I doing wrong here? I'm not expecting a color picture, but I am expecting a black and white picture resembling the input. Below are the input and output images:
Depending on the bit depth of your image, change:
data = np.zeros((height, width, 3))
to:
data = np.zeros((height, width, 3), dtype=np.uint8)
For an 8-bit image, you need to force your Numpy array dtype to an unsigned 8-bit integer, otherwise it defaults to float64. For 16-bit, use np.uint16, etc.
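A quick way to see the fix working, using Image.fromarray in place of frombuffer and a toy uint8 ramp (the names here are illustrative only):

```python
import numpy as np
from PIL import Image

gray = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16  # toy 4x4 grayscale ramp
rgb = np.zeros((4, 4, 3), dtype=np.uint8)  # uint8, not the float64 default
rgb[..., 0] = gray
rgb[..., 1] = gray
rgb[..., 2] = gray
img = Image.fromarray(rgb, 'RGB')  # each pixel is the gray value repeated 3 times
```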
What is your task: a black-and-white image or an RGB color image? If you want a black-and-white image, you can convert the grayscale image directly into a binary one. As for your code, there are two things to take care of. First, make sure the pixel locations are right; writing to the wrong locations will make the image come out all black, as in your post. Second, you can convert RGB to grayscale directly, but you cannot convert a grayscale image back to RGB directly, because the original colors cannot be recovered accurately.
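If a binary (black-and-white) image really is the goal, PIL can threshold a grayscale image directly; a minimal sketch, with 128 as an arbitrary cutoff and a stand-in image:

```python
from PIL import Image

gray = Image.new('L', (4, 4), 200)  # stand-in grayscale image
gray.putpixel((1, 0), 50)

# map every pixel to pure black or white; result is still mode 'L'
binary = gray.point(lambda p: 255 if p >= 128 else 0)
```

A final `.convert('1')` would pack the result into a true 1-bit image.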
You can do it with PIL.Image and PIL.ImageOps as shown below. Because of the way it's written, the source image isn't required to be one layer; it will be converted to one if necessary before being used:
from PIL import Image
from PIL.ImageOps import grayscale

def convertLToRgb(src):
    src.load()
    band = src if Image.getmodebands(src.mode) == 1 else grayscale(src)
    return Image.merge('RGB', (band, band, band))

src = 'whale_tail.png'
bw_img = Image.open(src)
rgb_img = convertLToRgb(bw_img)
rgb_img.show()
I have a very simple program in Python with OpenCV and GDAL. In this program I read a GeoTIFF image with the following line:
image = cv2.imread(sys.argv[1], cv2.IMREAD_LOAD_GDAL | cv2.IMREAD_COLOR)
The problem is that for a specific image imread returns None. I am using images from: https://www.sensefly.com/drones/example-datasets.html
The image in Assessing crops with RGB imagery (eBee SQ) > Map (orthomosaic) works well. Its size is 19428 x 19784 with 4 bands.
The image in Urban mapping (eBee Plus/senseFly S.O.D.A.) > Map (orthomosaic) doesn't work. Its size is 26747 x 25388, also with 4 bands.
Any help to figure out what the problem is?
Edit: I tried the solution suggested by @en_lorithai and it works. The problem is that I then need to do some image processing with OpenCV, and the image loaded by GDAL has several issues:
GDAL loads images as RGB instead of BGR (the default in OpenCV)
The image shape expected by OpenCV is (height, width, channels), whereas GDAL returns an image with shape (channels, height, width)
The image returned by GDAL is flipped in the Y-axis and rotated clockwise by 90 degrees.
The image loaded by OpenCV looks like this (resized to 700x700):
The image loaded by GDAL (after changing the shape, of course) looks like this (resized to 700x700):
Finally, if I try to convert this image from BGR to RGB with
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
I get this (resized to 700x700):
I can convert from GDAL format to OpenCV format with the following code
image = ds.ReadAsArray()  # load image with GDAL: shape (channels, height, width)
tmp = image.copy()
image[0] = tmp[2, :, :]  # swap red channel and blue channel
image[2] = tmp[0, :, :]
image = np.swapaxes(image, 2, 0)  # move channels to the last axis
image = cv2.flip(image, 0)  # flip in Y-axis
image = cv2.transpose(image)  # rotate by 90 degrees (clockwise)
image = cv2.flip(image, 1)
The problem is that I think this is a very slow process, and I want to know if there is an automatic conversion process.
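For what it's worth, the per-channel copy can be collapsed into one transpose plus a reversed slice. A sketch on a fake 3-band array (a real 4-band raster would need its alpha band dropped first); note this only covers the axis and channel reordering, not the flip and rotation:

```python
import numpy as np

# stand-in for ds.ReadAsArray(): GDAL returns shape (bands, rows, cols)
image = np.random.randint(0, 256, (3, 4, 5), dtype=np.uint8)

# move bands to the last axis -> (rows, cols, bands), then reverse RGB -> BGR
bgr = np.transpose(image, (1, 2, 0))[:, :, ::-1]
# both steps above are views (no copying); cv2 usually wants contiguous data
bgr = np.ascontiguousarray(bgr)
```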
You can try to open the image with GDAL instead:
from osgeo import gdal
g_image = gdal.Open('161104_hq_transparent_mosaic_group1.tif')
a_image = g_image.ReadAsArray()
I can't test this myself as I don't have enough available memory to open that image.
Edit: equivalent operation on another image
from osgeo import gdal
import numpy as np
import cv2
import matplotlib.pyplot as plt

g_image = gdal.Open('Water-scenes-014.jpg')  # 3-channel RGB image
a_image = g_image.ReadAsArray()
s_image = np.dstack((a_image[0], a_image[1], a_image[2]))
plt.imshow(s_image)  # show image in matplotlib (no need for color swap)
s_image = cv2.cvtColor(s_image, cv2.COLOR_RGB2BGR)  # color swap for cv2
cv2.imshow('name', s_image)
Another method is getting individual bands from GDAL:
g_image = gdal.Open('image_name.PNG')
band1 = g_image.GetRasterBand(1).ReadAsArray()
You can then do a numpy dstack of each of the bands.
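For example, with stand-in 2-D arrays in place of the ReadAsArray() results:

```python
import numpy as np

# stand-ins for g_image.GetRasterBand(i).ReadAsArray(); each band is a 2-D array
band1 = np.full((4, 5), 10, dtype=np.uint8)
band2 = np.full((4, 5), 20, dtype=np.uint8)
band3 = np.full((4, 5), 30, dtype=np.uint8)

# stack the bands along a new last axis -> shape (rows, cols, 3)
rgb = np.dstack((band1, band2, band3))
```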
I have opened a grayscale image using the Python Imaging Library, copied every pixel value into another image variable of the same size, and saved it. Now when I open the new image with an image viewer it looks reddish. I have used the Image.new() method both with and without the "white" and "black" arguments and got the same reddish output.
My code:
from PIL import Image
import math

def run():
    im = Image.open("hrabowski.jpg")
    pix = im.load()
    print im.size
    # print pix[0, 1]
    im2 = Image.new("RGB", (2400, 2400))
    for i in range(im.size[0]):
        for j in range(im.size[0]):
            im2.putpixel((i, j), pix[i, j])
    im2.save("hrabowski-2400-2400.jpg")
Original image (scaled down to 500 x 500):
Python output of my code (scaled down to 500 x 500):
Could anyone please tell me what I am doing wrong?
Your problem is that you are creating an RGB image, which has three channels. Therefore one pixel value consists of three values, not just one (in your case, use the gray value of the original image for each of the channels).
I have modified the code accordingly.
A side remark: I am almost sure that there is a better way to do what you want to achieve, there is usually no need to loop through single pixels, but I am not sure what you are after.
from PIL import Image
import math

def run():
    im = Image.open("ZHiG0.jpg")
    pix = im.load()
    print im.size
    # print pix[0, 1]
    im2 = Image.new("RGB", (2400, 2400))
    for i in range(im.size[0]):
        for j in range(im.size[1]):  # note: size[1] for the y range
            im2.putpixel((i, j), (pix[i, j], pix[i, j], pix[i, j]))
    im2.save("ZHiG0-2400-2400.jpg")

run()
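Regarding the side remark above: the loop can indeed be avoided. A sketch of a loop-free version using Image.merge, with a stand-in image (it assumes the source opens in mode 'L'):

```python
from PIL import Image

im = Image.new('L', (8, 8), 100)        # stand-in for the grayscale source image
rgb = Image.merge('RGB', (im, im, im))  # one call instead of the per-pixel loop
```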
I want to resize some images and here is my code.
import os
from PIL import Image

size = 300, 300

for f in os.listdir('.'):
    if f.endswith('.png'):
        i = Image.open(f)
        fn, fext = os.path.splitext(f)
        i.thumbnail(size, Image.ANTIALIAS)
        i.save('output/{}{}'.format(fn, fext))
The code is working fine and it resizes all my images to a width of 300px, but the height is not resized to 300px.
Can anyone tell me why?
Image.thumbnail() is designed to keep the aspect ratio of the original image. If you want the output image to be exactly 300x300 px, use Image.resize() instead.
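A minimal sketch of the difference, on a stand-in image: resize() forces the exact size, while thumbnail() only shrinks the image to fit within the given box.

```python
from PIL import Image

im = Image.new('RGB', (1024, 768))  # stand-in for one of your .png files

exact = im.resize((300, 300))       # exactly 300x300, aspect ratio ignored

fitted = im.copy()
fitted.thumbnail((300, 300))        # at most 300x300, aspect ratio kept
```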