Reading a CT scan DICOM file - Python

I am trying to read a CT scan DICOM file using the pydicom Python library, but I can't get rid of the error below, even after installing gdcm and pylibjpeg:
RuntimeError: The following handlers are available to decode the pixel data however they are missing required dependencies: GDCM (req. ), pylibjpeg (req. )
Here is my code
!pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg
!pip install python-gdcm
import gdcm
import pylibjpeg
import numpy as np
import pydicom
from pydicom.pixel_data_handlers.util import apply_voi_lut
import matplotlib.pyplot as plt
%matplotlib inline
def read_xray(path, voi_lut=True, fix_monochrome=True):
    dicom = pydicom.read_file(path)
    # VOI LUT (if available by DICOM device) is used to transform raw DICOM data to "human-friendly" view
    if voi_lut:
        data = apply_voi_lut(dicom.pixel_array, dicom)
    else:
        data = dicom.pixel_array
    # depending on this value, X-ray may look inverted - fix that:
    if fix_monochrome and dicom.PhotometricInterpretation == "MONOCHROME1":
        data = np.amax(data) - data
    data = data - np.min(data)
    data = data / np.max(data)
    data = (data * 255).astype(np.uint8)
    return data
img = read_xray('/content/ActiveTB/2018/09/17/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157438.869027/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157440.863887/1.2.392.200036.9116.2.6.1.44063.1797735841.1537154539.142332.dcm')
plt.figure(figsize = (12,12))
plt.imshow(img)
Here is a link to the image on which I am trying to run this code:
https://drive.google.com/file/d/1-xuryA5VlglaumU2HHV7-p6Yhgd6AaCC/view?usp=sharing

Try running the following:
!pip install pylibjpeg
!pip install python-gdcm
(The PyPI package that provides the gdcm module is python-gdcm.)

In a similar problem to yours, the issue was at the Pixel Data level: you need to install one or more optional libraries so that pydicom can handle the various compressions.
First, you should do:
$ pip uninstall pycocotools
$ pip install pycocotools --no-binary :all: --no-build-isolation
From here, you should do as follows:
$ pip install pylibjpeg pylibjpeg-libjpeg pydicom
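To see which codec a given file actually needs, you can inspect its transfer syntax before decoding. A quick diagnostic sketch, assuming pydicom 2.x (the path is a placeholder):
# Print the transfer syntax so you know which decoder
# (GDCM, pylibjpeg, ...) the pixel data requires.
from pydicom import dcmread

ds = dcmread('your_file.dcm')  # placeholder path
print(ds.file_meta.TransferSyntaxUID)       # e.g. 1.2.840.10008.1.2.4.70
print(ds.file_meta.TransferSyntaxUID.name)  # human-readable name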
Then, your code should look like this:
from pydicom import dcmread
import pylibjpeg
import gdcm
import numpy as np
import pydicom
from pydicom.pixel_data_handlers.util import apply_voi_lut
import matplotlib.pyplot as plt
%matplotlib inline
def read_xray(path, voi_lut=True, fix_monochrome=True):
    dicom = dcmread(path)
    # VOI LUT (if available by DICOM device) is used to transform raw DICOM data to "human-friendly" view
    if voi_lut:
        data = apply_voi_lut(dicom.pixel_array, dicom)
    else:
        data = dicom.pixel_array
    # depending on this value, X-ray may look inverted - fix that:
    if fix_monochrome and dicom.PhotometricInterpretation == "MONOCHROME1":
        data = np.amax(data) - data
    data = data - np.min(data)
    data = data / np.max(data)
    data = (data * 255).astype(np.uint8)
    return data
img = read_xray('/content/ActiveTB/2018/09/17/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157438.869027/1.2.392.200036.9116.2.6.1.44063.1797735841.1537157440.863887/1.2.392.200036.9116.2.6.1.44063.1797735841.1537154539.142332.dcm')
plt.figure(figsize = (12,12))
plt.imshow(img)
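One more note: in Colab/Jupyter, freshly installed decoders are often not picked up until the runtime is restarted. To confirm that pydicom can actually see them, you can query the optional handlers. A sketch, assuming pydicom 2.x, where these handler modules expose is_available():
# Sanity check: confirm the optional pixel-data handlers are usable.
from pydicom.pixel_data_handlers import gdcm_handler, pylibjpeg_handler

print("GDCM available:     ", gdcm_handler.is_available())
print("pylibjpeg available:", pylibjpeg_handler.is_available())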

Related

Getting an error while exporting a decision tree as a PNG file

I am trying to learn machine learning with Python from W3Schools. I am trying to get mydecisiontree.png using PyDotPlus.
I am using pip and PyCharm Professional 2020.3.
The code is as follows:
import numpy as np
import matplotlib.pyplot as plt
import pandas
from sklearn import tree
import pydotplus
from sklearn.tree import DecisionTreeClassifier
import matplotlib.image as pltimg
df = pandas.read_csv("shows.csv")
d = {'UK' : 0,'USA' : 1, 'N' : 2}
df['Nationality'] = df['Nationality'].map(d)
d = {'YES' : 1, "NO" : 0}
df['Go'] = df['Go'].map(d)
features = ['Age', 'Experience', 'Rank', 'Nationality']
X = df[features]
y = df['Go']
dtree = DecisionTreeClassifier()
dtree = dtree.fit(X, y)
data = tree.export_graphviz(dtree, out_file=None, feature_names=features)
graph = pydotplus.graph_from_dot_data(data)
graph.write_png('mydecisiontree.png')
img=pltimg.imread('mydecisiontree.png')
img_plot = plt.imshow(img)
plt.show()
Although PyCharm shows no error, when I run the code it can't create the PNG file and fails on the line:
graph.write_png('mydecisiontree.png')
It shows the following error:
File "direc.....\venv\lib\site-packages\pydotplus\graphviz.py", line 1960, in create
'GraphViz\'s executables not found')
pydotplus.graphviz.InvocationException: GraphViz's executables not found
I can't see the problem. How can I solve this?
PyCharm doesn't show an error because your code doesn't contain any. The problem is in your environment. Have you installed GraphViz in the environment that you are using to run it? Note that pip install graphviz installs only the Python bindings; the GraphViz executables themselves must also be installed and be on your PATH.
Also see the answers here:
GraphViz not working when imported inside PydotPlus (GraphViz's executables not found)
Graphviz's executables are not found (Python 3.4)
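If GraphViz is installed but pydotplus still cannot locate the executables (common on Windows), one workaround is to append the GraphViz bin directory to PATH from Python before writing the PNG. A sketch; the install path below is an assumption, adjust it to your machine:
import os

# Hypothetical default Windows install location; adjust as needed.
os.environ["PATH"] += os.pathsep + r"C:\Program Files\Graphviz\bin"

# graph.write_png('mydecisiontree.png') should now find dot.exe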
I had that problem too. I installed GraphViz from here, and that solved it.

Getting the following error while using scikit-image to read images: "AttributeError: 'PngImageFile' object has no attribute '_PngImageFile__frame'"

I am using scikit-image to load a random image from a folder. OpenCV is used for operations later on.
The code is as follows (only relevant parts included):
import os      # used below for os.walk / os.path.join
import random  # used below for random.choice
import imageio
import cv2 as cv
import fileinput
from collections import Counter
from data.apple_dataset import AppleDataset
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
from torchvision.transforms import functional as F
import utility.utils as utils
import utility.transforms as T
from PIL import Image
import skimage.io
from skimage.viewer import ImageViewer
from matplotlib import pyplot as plt
%matplotlib inline
APPLE_IMAGE_PATH = r"__mypath__\samples\apples\images"
# Load a random image from the images folder
FILE_NAMES = next(os.walk(APPLE_IMAGE_PATH))[2]
random_apple_in_folder = os.path.join(APPLE_IMAGE_PATH, random.choice(FILE_NAMES))
apple_image = skimage.io.imread(random_apple_in_folder)
apple_image_cv = cv.imread(random_apple_in_folder)
apple_image_cv = cv.cvtColor(apple_image_cv, cv.COLOR_BGR2RGB)
The error is as follows:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-9575eed18f18> in <module>
11 FILE_NAMES = next(os.walk(APPLE_IMAGE_PATH))[2]
12 random_apple_in_folder = os.path.join(APPLE_IMAGE_PATH, random.choice(FILE_NAMES))
---> 13 apple_image = skimage.io.imread(random_apple_in_folder)
14 apple_image_cv = cv.imread(random_apple_in_folder)
AttributeError: 'PngImageFile' object has no attribute '_PngImageFile__frame'
How do I proceed from here? What should I change?
This is a bug in Pillow 7.1.0. You can upgrade Pillow with pip install -U pillow. See this bug report for more information:
https://github.com/scikit-image/scikit-image/issues/4548
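To confirm you are on the affected release before upgrading, a minimal version check:
# The AttributeError above is specific to Pillow 7.1.0.
import PIL
print(PIL.__version__)  # if this prints 7.1.0, run: pip install -U pillow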

Python - dot file to png file not found error

I am trying to convert dot file into a png or jpeg file where I can view the Random Forest Tree. I am following this tutorial: https://towardsdatascience.com/how-to-visualize-a-decision-tree-from-a-random-forest-in-python-using-scikit-learn-38ad2d75f21c.
I am getting the error FileNotFoundError: [WinError 2] The system cannot find the file specified.
I can see that tree.dot is there and I am able to open it. Why is it not reading it? Thanks.
from sklearn.datasets import load_iris
iris = load_iris()
# Model (can also use single decision tree)
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=10)
# Train
model.fit(iris.data, iris.target)
# Extract single tree
estimator = model.estimators_[5]
from sklearn.tree import export_graphviz
# Export as dot file
export_graphviz(estimator, out_file='tree.dot',
                feature_names=iris.feature_names,
                class_names=iris.target_names,
                rounded=True, proportion=False,
                precision=2, filled=True)
# <<error occurs here>>
# Convert to png using system command (requires Graphviz)
from subprocess import call
call(['dot', '-Tpng', 'tree.dot', '-o', 'tree.png', '-Gdpi=600'])
# Display in jupyter notebook
from IPython.display import Image
Image(filename = 'tree.png')
I ran this through a Docker Ubuntu image, adding RUN apt-get install graphviz -y to the Dockerfile. It started to work. Then I used dot -Tpng tree.dot -o tree.png.
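Alternatively, if you would rather stay in Python than shell out to dot, the graphviz package can render the same file. A sketch, assuming the GraphViz binaries are installed and on PATH:
# Render tree.dot to tree.png without a subprocess call.
import graphviz

with open('tree.dot') as f:
    src = graphviz.Source(f.read())
src.render('tree', format='png', cleanup=True)  # writes tree.png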

How to run OpenAI Gym .render() over a server

I am running a Python 2.7 script on a p2.xlarge AWS server through Jupyter (Ubuntu 14.04). I would like to be able to render my simulations.
Minimal working example
import gym
env = gym.make('CartPole-v0')
env.reset()
env.render()
env.render() produces (among other things) the following errors:
...
HINT: make sure you have OpenGL install. On Ubuntu, you can run
'apt-get install python-opengl'. If you're running on a server,
you may need a virtual frame buffer; something like this should work:
'xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>'")
...
NoSuchDisplayException: Cannot connect to "None"
I would like to somehow be able to see the simulations. It would be ideal if I could get it inline, but any display method would be nice.
Edit: This is only an issue with some environments, like classic control.
Update I
Inspired by this, I tried the following instead of the xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py> (which I couldn't get to work):
xvfb-run -a jupyter notebook
Running the original script I now get instead
GLXInfoException: pyglet requires an X server with GLX
Update II
Issue #154 seems relevant. I tried disabling the pop-up and directly creating the RGB colors:
import gym
env = gym.make('CartPole-v0')
env.reset()
img = env.render(mode='rgb_array', close=True)
print(type(img)) # <--- <type 'NoneType'>
img = env.render(mode='rgb_array', close=False) # <--- ERROR
print(type(img))
I get ImportError: cannot import name gl_info.
Update III
With inspiration from @Torxed I tried creating a video file and then rendering it (a fully satisfying solution).
Using the code from 'Recording and uploading results':
import gym
env = gym.make('CartPole-v0')
env.monitor.start('/tmp/cartpole-experiment-1', force=True)
observation = env.reset()
for t in range(100):
    # env.render()
    print(observation)
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t+1))
        break
env.monitor.close()
I tried following your suggestions, but got ImportError: cannot import name gl_info when running env.monitor.start(...).
From my understanding, the problem is that OpenAI uses pyglet, and pyglet 'needs' a screen in order to compute the RGB colors of the image that is to be rendered. It is therefore necessary to trick Python into thinking that there is a monitor connected.
Update IV
FYI, there are solutions online using Bumblebee that seem to work. This should work if you have control over the server, but since AWS runs in a VM I don't think you can use this.
Update V
If you have this problem and don't know what to do (like me): the state of most environments is simple enough that you can create your own rendering mechanism. Not very satisfying, but... you know.
Got a simple solution working:
If on a linux server, open jupyter with
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
In Jupyter
import matplotlib.pyplot as plt
%matplotlib inline
from IPython import display
After each step
def show_state(env, step=0, info=""):
    plt.figure(3)
    plt.clf()
    plt.imshow(env.render(mode='rgb_array'))
    plt.title("%s | Step: %d %s" % (env._spec.id, step, info))
    plt.axis('off')
    display.clear_output(wait=True)
    display.display(plt.gcf())
Note: if your environment is not unwrapped, pass env.env to show_state.
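For reference, a minimal usage sketch of the show_state helper above in a random-action loop (CartPole is just a placeholder environment):
import gym

env = gym.make('CartPole-v0')  # placeholder environment
obs = env.reset()
for step in range(50):
    action = env.action_space.sample()  # random policy, just for the demo
    obs, reward, done, info = env.step(action)
    show_state(env, step)
    if done:
        break
env.close()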
This GitHub issue gave an answer that worked great for me. It's nice because it doesn't require any additional dependencies (I assume you already have matplotlib) or configuration of the server.
Just run, e.g.:
import gym
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('Breakout-v0') # insert your favorite environment
render = lambda : plt.imshow(env.render(mode='rgb_array'))
env.reset()
render()
Using mode='rgb_array' gives you back a numpy.ndarray with the RGB values for each position, and matplotlib's imshow (or other methods) displays these nicely.
Note that if you're rendering multiple times in the same cell, this solution will plot a separate image each time. This is probably not what you want. I'll try to update this if I figure out a good workaround for that.
Update to render multiple times in one cell
Based on this StackOverflow answer, here's a working snippet (note that there may be more efficient ways to do this with an interactive plot; this way seems a little laggy on my machine):
import gym
from IPython import display
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('Breakout-v0')
env.reset()
for _ in range(100):
    plt.imshow(env.render(mode='rgb_array'))
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
Update to increase efficiency
On my machine, this was about 3x faster. The difference is that instead of calling imshow each time we render, we just change the RGB data on the original plot.
import gym
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('Breakout-v0')
env.reset()
img = plt.imshow(env.render(mode='rgb_array')) # only call this once
for _ in range(100):
    img.set_data(env.render(mode='rgb_array'))  # just update the data
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
I think we should just capture renders as video using OpenAI Gym's wrappers.Monitor, and then display it within the notebook.
Example:
Dependencies
!apt install python-opengl
!apt install ffmpeg
!apt install xvfb
!pip3 install pyvirtualdisplay
# Virtual display
from pyvirtualdisplay import Display
virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()
Capture as video
import gym
from gym import wrappers
env = gym.make("SpaceInvaders-v0")
env = wrappers.Monitor(env, "/tmp/SpaceInvaders-v0")
for episode in range(2):
    observation = env.reset()
    step = 0
    total_reward = 0
    while True:
        step += 1
        env.render()
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            print("Episode: {0},\tSteps: {1},\tscore: {2}"
                  .format(episode, step, total_reward))
            break
env.close()
Display within Notebook
import os
import io
import base64
from IPython.display import display, HTML
def ipython_show_video(path):
    """Show a video at `path` within IPython Notebook."""
    if not os.path.isfile(path):
        raise NameError("Cannot access: {}".format(path))
    video = io.open(path, 'r+b').read()
    encoded = base64.b64encode(video)
    display(HTML(
        data="""
        <video alt="test" controls>
            <source src="data:video/mp4;base64,{0}" type="video/mp4" />
        </video>
        """.format(encoded.decode('ascii'))
    ))
ipython_show_video("/tmp/SpaceInvaders-v0/openaigym.video.4.10822.video000000.mp4")
I hope it helps. ;)
I managed to run and render openai/gym (even with mujoco) remotely on a headless server.
# Install and configure X window with virtual screen
sudo apt-get install xserver-xorg libglu1-mesa-dev freeglut3-dev mesa-common-dev libxmu-dev libxi-dev
# Configure the nvidia-x
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
# Run the virtual screen in the background (:0)
sudo /usr/bin/X :0 &
# We only need to setup the virtual screen once
# Run the program with vitural screen
DISPLAY=:0 <program>
# If you don't want to type `DISPLAY=:0` every time
export DISPLAY=:0
Usage:
DISPLAY=:0 ipython2
Example:
import gym
env = gym.make('Ant-v1')
arr = env.render(mode='rgb_array')
print(arr.shape)
# plot or save wherever you want
# plt.imshow(arr) or scipy.misc.imsave('sample.png', arr)
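As a concrete example of that last comment (scipy.misc.imsave has since been removed from SciPy, so this sketch substitutes imageio, which is not from the original answer):
# Save the rgb_array frame returned by env.render() to disk.
import imageio
imageio.imwrite('sample.png', arr)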
There's also this solution using pyvirtualdisplay (an Xvfb wrapper). One thing I like about this solution is you can launch it from inside your script, instead of having to wrap it at launch:
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1400, 900))
display.start()
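Putting that together, a minimal end-to-end sketch (assuming Xvfb and pyvirtualdisplay are installed; CartPole is just a placeholder environment):
from pyvirtualdisplay import Display
import gym

# Start the virtual display before any rendering happens.
display = Display(visible=0, size=(1400, 900))
display.start()

env = gym.make('CartPole-v0')
env.reset()
frame = env.render(mode='rgb_array')  # now renders headlessly
print(frame.shape)

env.close()
display.stop()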
I ran into this myself.
Using xvfb as X-server somehow clashes with the Nvidia drivers.
But finally this post pointed me in the right direction.
Xvfb works without any problems if you install the Nvidia driver with the -no-opengl-files option and CUDA with --no-opengl-libs option.
If you know this, it should work. It took me quite some time to figure out, though, and it seems I'm not the only one running into problems with Xvfb and the Nvidia drivers, so I wrote down all the necessary steps to set everything up on an AWS EC2 instance with Ubuntu 16.04 LTS here.
Referencing my other answer here: Display OpenAI gym in Jupyter notebook only
I made a quick working example here which you could fork: https://kyso.io/eoin/openai-gym-jupyter with two examples of rendering in Jupyter - one as an mp4, and another as a realtime gif.
The .mp4 example is quite simple.
import gym
from gym import wrappers
env = gym.make('SpaceInvaders-v0')
env = wrappers.Monitor(env, "./gym-results", force=True)
env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done: break
env.close()
Then, in a new Jupyter cell, or after downloading the file from the server to somewhere you can view the video:
import io
import base64
from IPython.display import HTML
video = io.open('./gym-results/openaigym.video.%s.video000000.mp4' % env.file_infix, 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''
<video width="360" height="auto" alt="test" controls><source src="data:video/mp4;base64,{0}" type="video/mp4" /></video>'''
.format(encoded.decode('ascii')))
If you're on a server with public access, you could run python -m http.server in the gym-results folder and just watch the videos there.
I encountered the same problem and stumbled upon the answers here. Mixing them helped me to solve the problem.
Here's a step by step solution:
Install the following:
apt-get install -y python-opengl xvfb
Start your jupyter notebook via the following command:
xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
Inside the notebook:
import gym
import matplotlib.pyplot as plt
%matplotlib inline
env = gym.make('MountainCar-v0') # insert your favorite environment
env.reset()
plt.imshow(env.render(mode='rgb_array'))
Now you can put the same thing in a loop to render it multiple times.
from IPython import display
for _ in range(100):
    plt.imshow(env.render(mode='rgb_array'))
    display.display(plt.gcf())
    display.clear_output(wait=True)
    action = env.action_space.sample()
    env.step(action)
Hope this works for anyone else still facing an issue. Thanks to Andrews and Nathan for their answers.
I avoided the issues with matplotlib by simply using PIL, the Python Imaging Library:
import gym, PIL
env = gym.make('SpaceInvaders-v0')
array = env.reset()
PIL.Image.fromarray(env.render(mode='rgb_array'))
I found that I didn't need to set up the X virtual frame buffer (Xvfb).
I was looking for a solution that works in Colaboratory and ended up with this:
from IPython import display
import numpy as np
import time
import gym
env = gym.make('SpaceInvaders-v0')
env.reset()
import PIL.Image
import io
def showarray(a, fmt='png'):
    a = np.uint8(a)
    f = io.BytesIO()
    PIL.Image.fromarray(a).save(f, fmt)
    return f.getvalue()
imagehandle = display.display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
while True:
    time.sleep(0.01)
    env.step(env.action_space.sample())  # take a random action
    display.update_display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
EDIT 1:
You could use xvfbwrapper for the Cartpole environment.
from IPython import display
from xvfbwrapper import Xvfb
import numpy as np
import time
import pyglet
import gym
import PIL.Image
import io
vdisplay = Xvfb(width=1280, height=740)
vdisplay.start()
env = gym.make('CartPole-v0')
env.reset()
def showarray(a, fmt='png'):
    a = np.uint8(a)
    f = io.BytesIO()
    PIL.Image.fromarray(a).save(f, fmt)
    return f.getvalue()
imagehandle = display.display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
for _ in range(1000):
    time.sleep(0.01)
    observation, reward, done, info = env.step(env.action_space.sample())  # take a random action
    display.update_display(display.Image(data=showarray(env.render(mode='rgb_array')), width=450), display_id='gymscr')
vdisplay.stop()
If you're working with standard Jupyter, there's a better solution though. You can use the CommManager to send messages with updated Data URLs to your HTML output.
IPython Inline Screen Example
In Colab the CommManager is not available. The more restrictive output module has a method called eval_js() which seems to be kind of slow.
I had the same problem, and I_like_foxes' solution of reinstalling the Nvidia drivers with no OpenGL fixed things. Here are the commands I used for Ubuntu 16.04 and a GTX 1080 Ti:
https://gist.github.com/8enmann/931ec2a9dc45fde871d2139a7d1f2d78
This might be a complete workaround, but I used a docker image with a desktop environment, and it works great. The docker image is at https://hub.docker.com/r/dorowu/ubuntu-desktop-lxde-vnc/
The command to run is
docker run -p 6080:80 dorowu/ubuntu-desktop-lxde-vnc
Then browse http://127.0.0.1:6080/ to access the Ubuntu desktop.
Below is a GIF showing the Mario Bros gym environment running and being rendered. As you can see, it is fairly responsive and smooth.
In my IPython environment, Andrew Schreiber's solution can't plot images smoothly. The following is my solution:
If on a linux server, open jupyter with
$ xvfb-run -s "-screen 0 1400x900x24" jupyter notebook
In Jupyter
import time  # needed for time.sleep in the loop below
import matplotlib.pyplot as plt
%matplotlib inline
%matplotlib notebook
from IPython import display
Display iteration:
done = False
obs = env.reset()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.ion()
fig.show()
fig.canvas.draw()
while not done:
    # action = pi.act(True, obs)[0]  # pi means a policy which produces an action, if you have one
    # obs, reward, done, info = env.step(action)  # do the action, if you have one
    env_rnd = env.render(mode='rgb_array')
    ax.clear()
    ax.imshow(env_rnd)
    fig.canvas.draw()
    time.sleep(0.01)
I created this mini-package which allows you to render your environment in a browser by adding just one line to your code.
Put your code in a function and replace your normal env.render() with yield env.render(mode='rgb_array'). Decorate the function with the render_browser decorator.
import gym
from render_browser import render_browser
@render_browser
def test_policy(policy):
    # Your function/code here.
    env = gym.make('Breakout-v0')
    obs = env.reset()
    while True:
        yield env.render(mode='rgb_array')
        # ... run policy ...
        obs, rew, _, _ = env.step(action)
test_policy(policy)
When you visit your_ip:5000 in your browser, test_policy() will be called and you'll be able to see the rendered environment in your browser window.

Error while converting RGB to LAB: Python

import skimage
from skimage import io, color
import numpy as np
import scipy.ndimage as ndi
rgb = io.imread('img.jpg')
lab = color.rgb2lab(skimage.img_as_float(rgb))
l_chan1 = lab[:,:,0]
l_chan1 /= np.max(np.abs(l_chan1))
l_chan_med = ndi.median_filter(l_chan1, size=5)
skimage.io.imshow(l_chan_med)
I am trying to do some image processing. While changing the color scheme, I get an error for the rgb2lab function: "module object has no attribute 'rgb2lab'". I have imported all the required libraries. Any suggestions will be appreciated.
Try this:
import skimage
from skimage import io
from skimage.color import rgb2lab
import numpy as np
import scipy.ndimage as ndi
rgb = io.imread('img.jpg')
lab = rgb2lab(skimage.img_as_float(rgb))
l_chan1 = lab[:,:,0]
l_chan1 /= np.max(np.abs(l_chan1))
l_chan_med = ndi.median_filter(l_chan1, size=5)
skimage.io.imshow(l_chan_med)
I don't know what's wrong, but your code runs fine on my machine. Python 2.7.6, OS X Yosemite. I would try reinstalling scikit-image.
