I've got an indexed-colour (up to 6 colours) 8-bit bitmap file.
I need to write some text on it and save it without changing the format or the colour table.
Unfortunately, according to the docs:
http://doc.qt.io/qt-5/qimage.html
Note: Drawing into a QImage with QImage::Format_Indexed8 is not supported.
My attempt was to open the file as a QImage, convert it to a format I can write on with QPainter (using only colours that are already in the colour table) and then convert it back to an 8-bit bitmap.
Something like this:
image = image.convertToFormat(QImage.Format_RGB16)
painter = QPainter()
painter.begin(image)
[....]
painter.end()
image = image.convertToFormat(QImage.Format_Indexed8, Qt.ThresholdDither)
But no matter what I do, I always end up with a corrupted image of some kind (it has 4-bit colour depth, it's damaged by a sort of noise, or its colours are changed).
I've tried using different flags during conversion (which usually messes up the colour depth).
I also tried copying the colorTable from the original image and setting it after the conversion (before saving), but this produces a negative/black&white image.
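For completeness, here is a minimal sketch of the whole round trip I'm attempting (PyQt5; the file names and the drawn text are just placeholders):
from PyQt5.QtGui import QImage, QPainter, QColor
from PyQt5.QtCore import Qt

source = QImage("indexed_input.bmp")                  # Format_Indexed8, 6-entry colour table
table = source.colorTable()                           # keep the original palette

canvas = source.convertToFormat(QImage.Format_RGB16)  # a format QPainter can draw on
painter = QPainter(canvas)
painter.setPen(QColor.fromRgb(table[1]))              # use a colour already in the palette
painter.drawText(10, 20, "some text")
painter.end()

result = canvas.convertToFormat(QImage.Format_Indexed8, Qt.ThresholdDither)
result.setColorTable(table)                           # copy the original colour table back
result.save("indexed_output.bmp")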
I would be very grateful for any help with this issue.
I have code that takes an image and converts it into a bitmap. I was wondering if there is a way to save the bitmap to a separate file to be used later. I would also like to be able to open that file as plain text, not as an actual image, so that I can read the bitmap.
code:
from PIL import Image

image_file = Image.open("edge.png")
image_file = image_file.convert('1')      # convert to 1-bit (black and white) mode
print(image_file.mode)
print(type(image_file.tobitmap()))
tobit_image = image_file.tobitmap()       # convert image to an XBM bitmap
print(tobit_image)
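tobitmap() returns the image as XBM source code (plain ASCII), so one way to keep it around (a sketch; edge.xbm is a made-up file name) is simply to write those bytes to a file, which can then be opened in a text editor or loaded again with PIL:
from PIL import Image

image_file = Image.open("edge.png").convert('1')
xbm_data = image_file.tobitmap()           # bytes containing plain-ASCII XBM source

with open("edge.xbm", "wb") as f:          # the result is readable in any text editor
    f.write(xbm_data)

reopened = Image.open("edge.xbm")          # PIL can read XBM files back in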
I think you are looking for "Chain Codes", or "Freeman Chain Codes". Basically, you store a direction, encoded as a digit for one of the 8 points of the compass, at each location to tell you how to get to the next point, i.e. which direction your turtle must move.
Try looking here and also Googling.
OpenCV can generate them too with findContours()
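As an illustration only (OpenCV 4.x assumed; the direction-to-digit mapping below is my own choice, not a standard from this answer), deriving a chain code from findContours() output might look like this:
import cv2

img = cv2.imread("edge.png", cv2.IMREAD_GRAYSCALE)           # hypothetical input file
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# CHAIN_APPROX_NONE keeps every boundary pixel, so consecutive points are 8-neighbours
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = contours[0].reshape(-1, 2)                          # (x, y) points of one contour

# Map the step between consecutive points to one of 8 compass digits
directions = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
              (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}
chain = [directions[(int(q[0] - p[0]), int(q[1] - p[1]))]
         for p, q in zip(contour, contour[1:])]
print(chain)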
Or, you may be looking for potrace, which is a tool that converts images into vector paths.
I'm building a Paint-like app. Since I want the freedom to reposition and modify the shape properties later, I am using Tkinter to draw shapes on a Canvas instead of PIL Draw or anything else. From other answers, I found how to save a canvas as PNG by first creating a postscript file and then converting it to PNG using PIL.
Now the problem is that the EPS file has transparent spaces, but the PNG file fills those voids with a white background color. I'm not sure where I am going wrong.
Below is the function I used.
from PIL import Image

def saveImg(event):
    global canvas
    canvas.postscript(file="my_drawing.eps", colormode='color')
    imgNew = Image.open("my_drawing.eps")
    imgNew.convert("RGBA")
    imgNew.thumbnail((2000, 2000), Image.ANTIALIAS)
    imgNew.save('testImg.png', quality=90)
Looks like transparency is not supported. From the docs:
The EPS driver can read EPS images in L, LAB, RGB and CMYK mode, but Ghostscript may convert the images to RGB mode rather than leaving them in the original color space.
When you load in RGB (instead of RGBA) the alpha channel information is discarded and converting it to RGBA later will not recover it.
Your best shot is porting it to a more recent toolkit like Cairo or Qt, or converting the file using Ghostscript directly, as suggested by PM2Ring.
For the GS approach, in order to set the width and height of the output file, you must use the -rN switch, where N is the resolution in PPI (pixels per inch). You must do the math to get the target resolution from the EPS bounding box and the desired output size.
Or you can render at a fixed resolution first, let's say 100 PPI, see the width you get, and do the math to work out the correct resolution. For example, if rendering with -r100 gives you a file 500 pixels wide but you want it to be 1024:
desired_resolution = initial_resolution * desired_width // initial_width
In order to get a file 1024 pixels wide:
>>> 100 * 1024 // 500
204
So you must render the EPS again using -r204.
Edit 1:
I got the solution from this question.
We can set a custom width and height using -gNNNNxMMMM,
but the dpi value crops only a small area. I tried the usual 72 dpi and got a decent output (I'm not sure if it's perfect or not). Now I need to find out how to execute this command every time I run the program and pass it the custom image size. :\
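Something along these lines is what I have in mind (a rough sketch only; the helper name, the paths and the pngalpha device choice are my assumptions):
import subprocess

def eps_to_png(eps_path, png_path, width, height, dpi=72):
    # -gWIDTHxHEIGHT fixes the output size in device pixels, -rN sets the
    # resolution used to interpret the EPS, and pngalpha is a PNG output
    # device that keeps transparency.
    subprocess.run([
        "gs", "-dSAFER", "-dBATCH", "-dNOPAUSE",
        "-sDEVICE=pngalpha",
        "-g{}x{}".format(width, height),
        "-r{}".format(dpi),
        "-sOutputFile={}".format(png_path),
        eps_path,
    ], check=True)

eps_to_png("my_drawing.eps", "testImg.png", 1024, 768)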
I'm trying to set the graph background to a DICOM image. I followed this example, but the image data given by dicom.pixel_array isn't RGBA. I'm not sure how to convert it, either. I'm also not sure what exactly bokeh is expecting. I've tried finding specifics in the documentation, but no such luck.
from bokeh.plotting import figure, show, output_file
import dicom
import numpy as np
path = "/pathToDicomImage.dcm"
data = dicom.read_file(path)
img = data.pixel_array
p = figure(x_range=(0,10), y_range=(0,10))
# must give a vector of images
p.image_rgba(image=[img], x=0, y=0, dw=10, dh=10)
output_file("image_rgba.html", title="image_rgba.py example")
show(p)
This code doesn't give me any errors, but it doesn't display anything. Maybe the pixel array doesn't have alpha data, so alpha defaults to 0? I'm not sure. Also, I can't quite figure out how to test it.
SOLVED
As was pointed out, I just needed to map the pixel data to RGBA space. For this instance, that means duplicating the data into each channel and setting alpha all the way up.
def dicom_image_to_RGBA(image_data):
    rows = len(image_data)
    cols = rows
    img = np.empty((rows, cols), dtype=np.uint32)
    view = img.view(dtype=np.uint8).reshape((rows, cols, 4))
    for i in range(0, rows):
        for j in range(0, cols):
            view[i][j][0] = image_data[i][j]
            view[i][j][1] = image_data[i][j]
            view[i][j][2] = image_data[i][j]
            view[i][j][3] = 255
    return img
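An equivalent but much faster version (my own sketch, not part of the original solution) lets numpy broadcast the grey values into the channels instead of looping pixel by pixel:
import numpy as np

def dicom_image_to_RGBA_fast(image_data):
    gray = np.asarray(image_data, dtype=np.uint8)
    rows, cols = gray.shape
    img = np.empty((rows, cols), dtype=np.uint32)
    view = img.view(dtype=np.uint8).reshape((rows, cols, 4))
    view[..., 0] = gray        # red
    view[..., 1] = gray        # green
    view[..., 2] = gray        # blue
    view[..., 3] = 255         # fully opaque
    return img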
Not being an expert in Python, I have had a glance at pydicom's capabilities for handling pixel data. I figured out that pixel_array is the value of the pixel data attribute of the DICOM dataset as-is, and pydicom does not offer any functionality to convert it into some standard format that can be handled uniformly. This means you will have to convert it to RGB in most cases, which is a quite complicated and error-prone task.
Things to consider in this:
The encoding (Big/Little Endian, various compression methods like JPEG, JPEG-LS, RLE, ZIP) - DICOM attribute (0002,0010) TransferSyntaxUID
The type of pixeldata (Grayscale, RGB, ...) - DICOM attribute (0028,0004) PhotometricInterpretation, (0028,0103) PixelRepresentation
In the case of colour images: are the values encoded colour by plane (RRRRR..., GGGGG..., BBBBB...) or colour by pixel, as you would expect (RGB RGB ...)?
The bit depth and which bits are used for the actual pixel data values - (0028,0100) BitsAllocated, (0028,0101) BitsStored, (0028,0102) HighBit.
Are the pixel data values really the values to be displayed, or are they indices into a colour/grayscale lookup table? - (0028,3000) ModalityLUTSequence, (0028,3002) LUTDescriptor, (0028,3003) LUTExplanation, (0028,3004) ModalityLUTType, (0028,3006) LUTData.
Scary, isn't it? For some modern image classes like Enhanced MR, there is even more than that.
However, if you constrain yourself to a particular type of image (e.g. Computed Radiography), limitations on the above apply that make your life a bit easier.
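For illustration only (using the same dicom module and placeholder path as in the question), reading a few of these attributes tells you which of the cases above you are dealing with:
import dicom

data = dicom.read_file("/pathToDicomImage.dcm")
print(data.file_meta.TransferSyntaxUID)                   # (0002,0010) encoding / compression
print(data.PhotometricInterpretation)                     # (0028,0004) greyscale vs. colour
print(data.PixelRepresentation)                           # (0028,0103) signed or unsigned
print(data.BitsAllocated, data.BitsStored, data.HighBit)  # (0028,0100), (0028,0101), (0028,0102)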
If you would post a DICOM dump of the image header I could give you some hints how to display that particular image.
HTH
kritzel
What you need to do is map the pixel data returned from pixel_array to RGB space. Usually that is done using a look up table (LUT). Take a look at the functions GetImage and GetLUTValue in the dicomparser module in the dicompyler-core library.
In GetLUTValue it maps the data to an 8-bit greyscale image. If you want to use a different LUT, you would need to map the color space accordingly.
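As a rough stand-in for a proper LUT (a sketch of my own, not code from dicompyler-core), a plain linear rescale of the raw values to 8 bit could look like this:
import numpy as np

def to_8bit_greyscale(pixel_array):
    arr = pixel_array.astype(np.float64)
    arr -= arr.min()                      # shift so the darkest value becomes 0
    if arr.max() > 0:
        arr *= 255.0 / arr.max()          # stretch the full range to 0-255
    return arr.astype(np.uint8)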
In my pygame game, to import a JPEG image, I use convert():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert
Then, to play with the image transparency (how much we can see through the image), I use set_alpha():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.set_alpha
However, to import my PNG image, which has a transparent background, I use convert_alpha():
http://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert_alpha
But with this way of importing, I can't play with the overall transparency using set_alpha(). Any other idea for adjusting the transparency (how much we see through the image)?
When you read the documentation for set_alpha, you find this:
If the Surface format contains per pixel alphas, then this alpha value will be ignored.
In your case, with a PNG image, you have per-pixel alphas. So, you must manage the alpha "per pixel". For example, you can do something like this (not the best code, but easy to understand; it only works with PNGs whose pixels are either fully transparent or fully opaque):
def change_alpha(img, alpha=255):
    width, height = img.get_size()
    for x in range(0, width):
        for y in range(0, height):
            r, g, b, old_alpha = img.get_at((x, y))
            if old_alpha > 0:
                img.set_at((x, y), (r, g, b, alpha))
Be careful, it's "slow", because you touch every pixel whose alpha is not 0 (i.e. not already transparent in your PNG).
If your PNG has multiple levels of transparency, you should manage the transparency with a better formula, like this:
r, g, b, old_alpha = img.get_at((x, y))
img.set_at((x, y), (r, g, b, (alpha * old_alpha) // 255))   # integer division keeps the alpha an int
And in this case, never modify the original image; work on a copy so you never lose your original alpha values.
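For example (a tiny sketch; the file name is a placeholder):
original = pygame.image.load("test.png").convert_alpha()
faded = original.copy()          # keep the untouched original around
change_alpha(faded, 128)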
I hope it will help
===================== EDIT ===================
Added some optimisation, because it was asked for in a comment.
With some caching methodology:
import pygame

class image_with_alpha(object):
    def __init__(self, name=None):
        self.img = None
        self.alpha = {}
        if name:
            self.load_image(name)

    def load_image(self, name):
        self.img = pygame.image.load(name)
        self.alpha[255] = self.img
        #self.pre_compute_alpha()

    def pre_compute_alpha(self):
        for alpha in range(0, 10):
            surf = self.img.copy()        # work on a copy so the original keeps its alpha
            change_alpha(surf, alpha)     # change_alpha modifies the surface in place
            self.alpha[alpha] = surf

    def get_img(self, a=255):
        try:
            return self.alpha[a]
        except KeyError:
            surf = self.img.copy()
            change_alpha(surf, a)
            self.alpha[a] = surf
            return self.alpha[a]
And use it like this :
Load image:
image=image_with_alpha("test.png")
Blit with 60 for alpha:
screen.blit(image.get_img(60),(0,0))
And now, it's fast I hope
The fastest solution is probably to use numpy array manipulation; it should be fast enough to avoid the need for caching. What's really slow about calculating the alpha value pixel-wise is iterating in Python, while numpy does it all in C.
Start out by referencing the image's alpha channel into a numpy array. This will create a lock on the image surface; let's remember that for later. Then take the minimum (pixel-wise) of your original alpha and an array full of ones (that will leave you with an array of only ones and zeros), multiply that (pixel-wise) by your desired alpha and copy the result back to the image's alpha channel (still represented by that array reference). Before you can blit the image to the screen, the array reference to the alpha array must be cleared, releasing the lock on the image surface.
import numpy
import pygame

def change_alpha(img, alpha=255):
    chan = pygame.surfarray.pixels_alpha(img)      # locks the surface while referenced
    chan2 = numpy.minimum(chan, numpy.ones(chan.shape, dtype=chan.dtype)) * alpha
    numpy.copyto(chan, chan2)
    del chan                                       # release the lock before blitting
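Used like the earlier version (the file name is a placeholder):
image = pygame.image.load("test.png").convert_alpha()
change_alpha(image, 60)          # fully transparent pixels stay fully transparent
screen.blit(image, (0, 0))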
Ok, I have the answer for you. Surfaces loaded with pygame's convert_alpha() use per-pixel alpha, so based on the docs, set_alpha() will have no effect on them. However, you can get around this limitation by doing the following. If you load your image using convert() and then set alpha, you can make your image transparent. Then all you have to do is use set_colorkey(background color) to eliminate the background. Be careful with colorkey, because any color in the image that matches the colorkey will become transparent. The colorkey does not care about per-pixel alpha, so you can change the alpha of an image and use a colorkey at the same time.
Here is the code...
# Loading Image
image = pygame.image.load("test.png").convert()
# Setting Alpha (128 is just an example value)
image.set_alpha(128)
# Set colorkey To Eliminate Background Color (white in this example)
image.set_colorkey((255, 255, 255))
I threw this test picture together for testing the code. The image does have transparency around the edges of it.
This is what it looks like blitted onto a green background without the added alpha. The white part was transparent until loaded with ".convert()"
This is the finished look of the image with the whole code applied. The alpha has been stripped and reset, and the colorkey has been set to white because it was the background
I hope this is what you are looking for and that my answer helped.
NOTE* You may want to make a copy of the image before you change its alpha like this, so you don't risk the image having a "spillover" effect from previous uses.
Use set_colorkey(color) to set a transparent color. For example, if you have an image of an apple, and everything but the apple is the color black, you'd use apple.set_colorkey(black), and everything but the apple would be transparent. Also, if you're having trouble using a JPG image, I'd suggest changing it to a PNG and then doing .convert().
I have been hitting my head against the wall for a while with this, so maybe someone out there can help.
I'm using PIL to open a PNG with transparent background and some random black scribbles, and trying to put it on top of another PNG (with no transparency), then save it to a third file.
It comes out all black at the end, which is irritating, because I didn't tell it to be black.
I've tested this with multiple proposed fixes from other posts. The image opens in RGBA format, and it's still messed up.
Also, this program is supposed to deal with all sorts of file formats, which is why I'm using PIL. Ironic that the first format I tried is all screwy.
Any help would be appreciated. Here's the code:
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff)) # xoff and yoff are 0 in my tests
img.save(outfile)
I think what you want to use is paste's mask argument.
See the docs (scroll down to paste).
from PIL import Image
img = Image.open(basefile)
layer = Image.open(layerfile) # this file is the transparent one
print layer.mode # RGBA
img.paste(layer, (xoff, yoff), mask=layer)
# the transparency layer will be used as the mask
img.save(outfile)