I would like to set the pixels of the header of a DICOM image (which contains sensitive information) to pixel value 0 (black background).
I can do that with the following code:
import pydicom
from pydicom import dcmread
import matplotlib.pyplot as plt
fn = "A0000.dcm"
ds = dcmread(fn)
# Three channels
ds.pixel_array[0:68, 0:1280, 0] = 0
ds.pixel_array[0:68, 0:1280, 1] = 0
ds.pixel_array[0:68, 0:1280, 2] = 0
# Plot image
plt.imshow(ds.pixel_array, cmap="gray")
# Save
ds.save_as("dicom_processed")
When I run imshow, the header is removed; however, when I save the DICOM file, the header is not removed.
EDIT: The header is part of the image itself (the text is burned into the pixels), not a metadata tag.
I would like something like this, in an easy way (removing all the background):
https://microsoft.github.io/presidio/image-redactor/
https://medium.com/data-science-at-microsoft/redacting-sensitive-text-from-dicom-medical-images-in-python-ab35a34a10c0
This does not work:
ds.remove_private_tags()
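For what it's worth, pydicom's ds.pixel_array is decoded from ds.PixelData, so edits to the returned NumPy array change only that array (which is why imshow shows the blacked-out header) and not the PixelData element that save_as writes out. A minimal sketch of persisting the change, assuming the pixel data is uncompressed and using the same file name and region as above:
import matplotlib.pyplot as plt
from pydicom import dcmread

ds = dcmread("A0000.dcm")
arr = ds.pixel_array              # decoded copy of the pixel data
arr[0:68, 0:1280, :] = 0          # black out the burned-in header region in all channels
ds.PixelData = arr.tobytes()      # write the modified pixels back into the dataset
ds.save_as("dicom_processed.dcm")

plt.imshow(arr, cmap="gray")
plt.show()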
I'm working on a program that reads a CSV file to get the names of colors and compares their RGB values with the RGB values of an image loaded from a URL. I think the program doesn't get the image from the URL, since I tried imshow() to check whether the image is passed into the program or not, and I get this error:
(-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
This is the code:
import numpy as np #needed to work with matrix of an image
import pandas as pd #needed to work with color.csv
import cv2 #needed to work with image
import matplotlib.pyplot as pl #needed to work with plotting
import urllib.request #needed to work with image url
#step 1. Read csv file with name, RGB and HEX values.
#step 2. Set color detection function. Get value of pixels in a NumPy array
#step 3. Compare RGB value of a pixel with dataframe.
#step 4. Save the name and RBG value inside a file.
#image from url
def url_to_image(url): #doesn't get file, need to work upon this
    resp = urllib.request.urlopen(url)
    image = np.asarray(bytearray(resp.read()), dtype='uint8')
    image = cv2.imdecode(image, cv2.IMREAD_COLOR)
    return image
#dataframe with 864 colors
index = ['color', 'color_name', 'hex','R','G','B']
csv = pd.read_csv('colors.csv', names = index, header = None)
def getColor(R, G, B):
    minimum = 10000
    for i in range(len(csv)):
        distance = abs(R - int(csv.loc[i, 'R'])) + abs(G - int(csv.loc[i, 'G'])) + abs(B - int(csv.loc[i, 'B']))
        if distance <= minimum:
            minimum = distance
            color_name = csv.loc[i, 'color_name']
    return color_name
img = url_to_image("https://upload.wikimedia.org/wikipedia/commons/2/24/Solid_purple.svg")
cv2.imshow("image", img)
cv2.waitKey(0)
It doesn't work because you are trying to load an SVG image (which is vector based) into a matrix the way you would a JPEG or PNG image (which are raster based). cv2.imdecode cannot decode an SVG, so it returns None, which is why imshow complains about a zero-size image.
Try loading a different image, like this one:
https://miro.medium.com/max/800/1*bNfxs62uJzISTfuPlOzOWQ.png (EDIT: sorry, wrong link; use the one below)
https://htmlcolorcodes.com/assets/images/colors/purple-color-solid-background-1920x1080.png
This will work because it is a PNG.
As far as I know, OpenCV has no good support for SVG-based images.
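As a quick sanity check (a minimal sketch reusing the url_to_image function from the question, with the PNG link above), you can guard against imdecode returning None before calling imshow:
img = url_to_image("https://htmlcolorcodes.com/assets/images/colors/purple-color-solid-background-1920x1080.png")
if img is None:
    # cv2.imdecode returns None for formats it cannot decode, e.g. SVG
    raise ValueError("cv2.imdecode could not decode the downloaded bytes")
cv2.imshow("image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()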
I have a DICOM file from which I read images. The images I read, however, have incorrect colors. Ideally, the image should look like the first attached example; however, the following code gives me something different. If I only take the red component, the result is still not correct and cannot be adjusted to the ideal result with any colormap I tried.
import tkinter as tk
from tkinter import filedialog
import pydicom as dicom
import matplotlib.pyplot as plt

root = tk.Tk()
root.withdraw()
path = filedialog.askopenfilename()
ds = dicom.dcmread(path, force=True)  # reads a file data set
video = ds.pixel_array  # reads a sequence of RGB images
plt.imsave(some_path, video[0], format='png')  # gives the incorrect image
What have I done wrong?
This really looks like YCbCr data; is the Photometric Interpretation something like YBR_FULL? If so, then as mentioned in the documentation you need to apply a colour space conversion, which in pydicom is:
from pydicom import dcmread
from pydicom.pixel_data_handlers import convert_color_space
ds = dcmread(...)
rgb = convert_color_space(ds.pixel_array, "YBR_FULL", "RGB")
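Applied to the code from the question (a minimal sketch, assuming the dataset's Photometric Interpretation really is YBR_FULL and reusing the path and some_path variables from the question):
from pydicom import dcmread
from pydicom.pixel_data_handlers import convert_color_space
import matplotlib.pyplot as plt

ds = dcmread(path, force=True)
rgb = convert_color_space(ds.pixel_array, "YBR_FULL", "RGB")
plt.imsave(some_path, rgb[0], format='png')  # first frame, now in RGB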
I am trying to write an algorithm in Python that reads a georeferenced raster (with a known coordinate system), transforms all its negative values to zero, and then saves a new image with the georeference of the initial raster.
import skimage.io
import pandas as pd
import numpy as np
pathhr = 'C:\\Users\\dataset\\S30W051.tif'
HR = skimage.io.imread(pathhr)
df1 = pd.DataFrame(HR)
df1[df1 < 0] = 0  # set all negative values to zero
#save function
savedata = df1.to_numpy()
skimage.io.imsave('C:\\Users\\dataset\\S30W051_TEST.tif', savedata)
But when I save my raster at the end of this script, I get a non-georeferenced TIFF raster.
How do I keep the same coordinate system as the initial raster (without transforming the output raster into local coordinates)?
I ask for help in solving this problem. Thanks.
You could use rasterio for opening and saving your tiff files, and copy the metadata of the initial raster to the new raster.
import rasterio as rio

# Load the original image and keep its metadata (CRS, transform, etc.)
with rio.open(pathhr, 'r') as r:
    HR = r.read()
    meta = r.meta

# Do any transformation you like (on the NumPy array)
HR[HR < 0] = 0

# Save the changed raster with the original metadata
with rio.open('C:\\Users\\dataset\\S30W051_TEST.tif', 'w', **meta) as dst:
    dst.write(HR)
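The copied meta dictionary is what carries the georeferencing: it includes the crs and transform entries (along with driver, dtype, width, height and count), so writing the new file with **meta keeps the output raster in the same coordinate system as the input.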
I created a model in Blender. From it I took 2D slices through the y-plane of the model, leading to the following:
600 PNG files, each corresponding to a y-location, i.e. y=0, y=0.1, etc.
Each PNG file has a resolution of 500 x 600.
I am now trying to merge the 600 PNGs into an .h5 file using Python before loading the .h5 into some other software. I find that each individual PNG file is read fine and looks great. However, when I look at the final 3D image there is some stretching, and I'm not sure how it is being introduced.
The images are resized (from 600x600 to 500x600, but I have checked and this is not the cause of the stretching). I would like to know why I am introducing such stretching in the other planes (not the y-plane).
Here is my code. Please note that there is some work in progress here, which is why I append the dataset to a list (this is to be used by later code):
from PIL import Image
import sys
import os
import fnmatch
import h5py
import numpy as np
import cv2
from datetime import datetime
dir_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(dir_path + '//..//..')
Xlen=500
Ylen=600
Zlen=600
directory=dir_path+"/LowPolyA21/"
for filename in os.listdir(directory):
    if fnmatch.fnmatch(filename, '*.png'):
        image = Image.open(directory + filename)
        new_image = image.resize((Zlen, Xlen))
        new_image.save(directory + filename)

dataset = np.zeros((Xlen, Zlen, Ylen), float)
# traverse all the pictures under the specified address
cnt_num = 0
img_list = sorted(os.listdir(directory))
os.chdir(directory)
for img in img_list:
    if img.endswith(".png"):
        gray_img = cv2.imread(img, 0)
        dataset[:, :, cnt_num] = gray_img
        cnt_num += 1
dataset[dataset == 0] = -1
dataset=dataset.swapaxes(1,2)
datasetlist=[]
datasetlist.append(dataset)
dz_dy_dz = (float(0.001),float(0.001),float(0.001))
i = 0  # only one dataset in the list at this point (work in progress)
for j in range(Xlen):
    for k in range(Ylen):
        for l in range(Zlen):
            if datasetlist[i][j, k, l] > 1:
                datasetlist[i][j, k, l] = 1
now = datetime.now()
timestamp = now.strftime("%d%m%Y_%H%M%S%f")
out_h5_path='voxelA_'+timestamp+'_flipped'
out_h5_path2='voxelA_'+timestamp+'_flipped.h5'
with h5py.File(out_h5_path2, 'w') as f:
    f.attrs['dx_dy_dz'] = dz_dy_dz
    f['data'] = datasetlist[i]  # write the volume under the file's "data" key
Example of image without stretching (in y-plane)
Example of image with stretching (in x-plane)
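One quick check that may help narrow this down (a minimal sketch using the variable names from the code above, offered as a diagnostic rather than a fix): print the final array shape next to the voxel spacing to confirm that the physical extent along each axis is what you expect; any mismatch between the array's axis order and the order assumed by the spacing tuple would show up here.
vol = datasetlist[0]
print("volume shape:", vol.shape)           # expected (Xlen, Ylen, Zlen) = (500, 600, 600) after swapaxes
print("voxel spacing dx_dy_dz:", dz_dy_dz)  # (0.001, 0.001, 0.001)
# physical extent along each axis = number of voxels * spacing per voxel
print("extent:", tuple(n * d for n, d in zip(vol.shape, dz_dy_dz)))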
Here's a (theoretically) simple task I have at hand:
1. Load a transparent animated GIF from disk (or a buffer)
2. Convert all individual frames into NumPy arrays, each frame WITH ALPHA CHANNEL
3. Save the NumPy arrays back into a transparent animated GIF
Output file size is irrelevant; all I really need is to have two identical GIFs: the original input image and the one saved in step 3.
What does matter to me, though, is de/encoding speed, so pure Python solutions (without C bindings to the underlying imaging library) are not considered.
Attached (at the very bottom), you will find an example GIF I am using for testing.
I tried pretty much every single approach that comes to mind. Either the resulting GIF (step 3) is terribly butchered, rendered in grayscale only, or (at best) loses transparency and is saved on either a white or black background.
Here's what I tried:
Read with Pillow:
from PIL import Image, ImageSequence
im = Image.open("animation.gif")
npArray = []
for frame in ImageSequence.Iterator(im):
    npArray.append(np.array(frame))
return npArray
Read with imageio:
import imageio
npArr = []
im = imageio.get_reader("animation.gif")
for frame in im:
npArr.append(np.array(frame))
return npArr
Read with MoviePy:
from moviepy.editor import *
npArr = []
clip = VideoFileClip("animation.gif")
for frame in clip.iter_frames():
npArr.append(np.array(frame))
return npArr
Read with PyVips:
vi = pyvips.Image.new_from_file("animation.gif", n=-1)
pageHeight = vi.get("page-height")
frameCount = int(vi.height / pageHeight)
npArr = []
for i in range(0, frameCount):
vi = vi.crop(0, i * pageHeight + 0, vi.width, pageHeight).write_to_memory()
frame = np.ndarray(
buffer = vi,
dtype = np.uint8,
shape = [pageHeight, vi.width, 3]
)
npArr.append(frame)
return npArr
Save with Pillow:
images = []
for frame in frames:
im = Image.fromarray(frame)
images.append(im)
images[0].save(
"output.gif",
format = "GIF",
save_all = True,
loop = 0,
append_images = images,
duration = 40,
disposal = 3
)
I believe you're encountering an issue because you're not saving the palette associated with each frame. When you convert each frame to an array, the resulting array doesn't contain any of the palette data which specifies what colours are included in the frame. So, when you construct a new image from each frame, the palette is not present, and Pillow doesn't know what colour palette it should use for the frame.
Also, when saving the GIF, you need to specify the colour to use for transparency, which we can just extract from the original image.
Here's some code which (hopefully) produces the result you want:
from PIL import Image, ImageSequence
import numpy as np

im = Image.open("ex.gif")
frames = []
# Each frame can have its own palette in a GIF, so we need to store
# them individually
fpalettes = []
transparency = im.info['transparency']

for frame in ImageSequence.Iterator(im):
    frames.append(np.array(frame))
    fpalettes.append(frame.getpalette())

# ... Do something with the frames

images = []
for i, frame in enumerate(frames):
    im = Image.fromarray(frame)
    im.putpalette(fpalettes[i])
    images.append(im)

images[0].save(
    "output.gif",
    format="GIF",
    save_all=True,
    loop=0,
    append_images=images,
    duration=40,
    disposal=2,
    transparency=transparency
)
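For context: im.info['transparency'] is the palette index that the GIF marks as fully transparent, so passing it through to save() keeps that index transparent in the output, and disposal=2 ("restore to background") clears each frame before the next one is drawn, which prevents successive frames from piling up on top of each other.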