The Problem:
I'm trying to simulate a live video by cycling through a series of still images I have saved in a directory, but when I add the animation and update functions my plot is displayed empty.
Background on why I'm doing this:
I believe it's important for me to do it this way rather than change the approach completely (say, turning the images into a video first and then displaying that), because what I really want to test is the image analysis I will be adding and then overlaying on each frame. The final application will receive frames one by one from a camera and will need to do some processing, display the image plus annotations, output the data as .csv, etc. I'm simulating this for now because I don't have any of the hardware to generate the images and won't have it for several months, during which time I need to get the image processing set up; I do, however, have access to some sets of stills that are approximately what will be produced. In case it's relevant, my simulation images are 1680x1220, 1.88 MB TIFFs, though I could convert and compress them if needed; in the final form the resolution will be a bit higher and the image format could probably be adjusted if necessary.
What I have tried:
I followed an example to list all files in a folder, and an example
to update a plot. However, the plot displays blank when I run the
code.
I added a line to print the current file name, and I can see this
cycling as expected.
I also made sure the images will display in the plot if I just create a plot and add one image, and they do. But when combined with the animation function, the plot is blank and I'm not sure what I've done wrong or failed to include.
I also tried adding a plt.pause() in the update, but again this
didn't work.
I increased the interval up to 2000 to give it more time, but that didn't work. I believe 2000 is extreme; I'm expecting it to work at more like 20-30 fps. The fact that it fails even at 0.5 fps tells me the code is wrong or incomplete, rather than it just being a question of needing time to read the image file.
I appreciate no one else has my images, but they are nothing special. I'm using 60 images but I guess it could be tested with any 2 random images and setting range(60) to range(2) instead?
The example I copied originally demonstrated the animation function by making a random array, and if I do that it will show a plot that updates with random squares as expected.
Replacing:
A = np.random.randn(10,10)
im.set_array(A)
...with my image instead...
im = cv2.imread(files[i],0)
...and the plot remains empty/blank. I get a window called "Figure 1" (as when using the random array), but unlike with the array there is nothing in this window.
Full code:
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import os
import cv2

def update(i):
    im = cv2.imread(files[i], 0)
    print(files[i])
    #plt.pause(0.1)
    return im

path = 'C:\\Test Images\\'

files = []
# r=root, d=directories, f=files
for r, d, f in os.walk(path):
    for file in f:
        if '.TIFF' in file:
            files.append(os.path.join(r, file))

ani = FuncAnimation(plt.gcf(), update, frames=range(60), interval=50, blit=False)
plt.show()
I'm a Python and programming novice, so I have relied on adjusting examples others have posted online, but I only have a simplistic understanding of how they work and end up with a lot of trial and error on the syntax. I just can't figure out anything that makes this one work, though.
Cheers for any help!
The main reason nothing is showing up is that you never add the images to the plot. I've provided some code below to do what you want; be sure to look up anything you are curious about or don't understand!
import glob
import os

from matplotlib import animation
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

IMG_DIRPATH = 'C:\\Test Images\\'  # the folder with your images (be careful about
                                   # putting spaces in directory names!)
IMG_EXT = '.TIFF'  # the file extension of your images

# Create a figure, and set to the desired size.
fig = plt.figure(figsize=[5, 5])

# Create axes for the current figure so that images can be sized appropriately.
# Passing in [0, 0, 1, 1] makes the axes fill the whole figure.
# frame_on=False means we won't have a bounding box, and setting xticks=[] and
# yticks=[] means that we won't have pesky tick marks along our image.
ax_props = {'frame_on': False, 'xticks': [], 'yticks': []}
ax = plt.axes([0, 0, 1, 1], **ax_props)

# Get all image filenames.
img_filepaths = glob.glob(os.path.join(IMG_DIRPATH, '*' + IMG_EXT))

def update_image(img_filepath):
    # Remove all existing images on the axes, and restore our settings.
    ax.clear()
    ax.update(ax_props)

    # Read the current image.
    img = mpimg.imread(img_filepath)

    # Add the current image to the plot axes.
    ax.imshow(img)

anim = animation.FuncAnimation(fig, update_image, frames=img_filepaths, interval=250)
plt.show()
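If you prefer something closer to your original set_array idea, here is a minimal sketch of that variant (an assumption-laden sketch, not the only way to do it: it assumes all frames are the same size and, for grayscale data, have roughly the same value range, because set_data does not rescale the color limits). It creates the image artist once and just swaps its data on each frame:

import glob
import os

import matplotlib.image as mpimg
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

IMG_DIRPATH = 'C:\\Test Images\\'
IMG_EXT = '.TIFF'
img_filepaths = glob.glob(os.path.join(IMG_DIRPATH, '*' + IMG_EXT))

fig, ax = plt.subplots()
ax.set_axis_off()

# Create the AxesImage once from the first frame; later frames reuse it.
im = ax.imshow(mpimg.imread(img_filepaths[0]), cmap='gray')

def update(img_filepath):
    # Swap in the new frame's pixel data without recreating the artist.
    im.set_data(mpimg.imread(img_filepath))
    return [im]

anim = FuncAnimation(fig, update, frames=img_filepaths, interval=50, blit=False)
plt.show()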
Related
I am trying to make a palettized version of my height image data (using Python/Matplotlib) and for some reason...it is giving me quite weird horizontal lines which I know are not actually present in the dataset.
Both images (mine and the "better" one).
Is this something weird with how Matplotlib normalizes the data? I just don't quite understand how this could happen, so I am at a loss for where to start. I have provided my code below (sorry if there is a typo; I changed it slightly so it makes sense outside of my full code).
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

def getheightprofile(fileloc, color_palette='jet'):
    # read data from file
    data = pd.read_csv(fileloc, skiprows=0)

    # generate colormap (I'm using the jet colormap rn)
    colormap = plt.get_cmap(color_palette)

    # normalize the height data to the range [0, 1]
    norm = (data - np.min(data)) / (np.max(data) - np.min(data))

    # convert the height data to RGB values using the palette
    palettized_data = (colormap(norm) * 255).astype(np.uint8)

    # save the file as a png (to check quality)
    saveloc = r'C:\Users\...\palletized_height_profile.png'
    plt.imsave(saveloc, palettized_data)

    # return the nice numbers for later analysis
    return palettized_data

# file location of the raw data
fileloc = r'C:\Users\...\raw_height_profile.csv'

# generate height profile map
palettized_image = getheightprofile(fileloc)
But instead of returning the nice image that I think I should get, it returns a super weird image with lines across it. (Note: I know these images aren't quite the same palettization, but I think you can see the issue.)
Does anyone understand how, why, etc.? I have also attached a link to the dataset, because maybe that is helpful...but I am quite sure there is nothing wrong with the data.
I want to automate one process, and I need to place some kind of pointer on my image. I found a great solution that works exactly as I would like, but its disadvantage is that it destroys my picture quality. I want to keep the same size as the original picture.
Below I share my code and the error I receive. I would be grateful for your help :)
from matplotlib import image
from matplotlib import pyplot as plt
from PIL import Image
# to read the image stored in the working directory
# data = image.imread(file_name)
data = Image.open('File_name')
x, y = data.size
# to draw a point on co-ordinate (200,300)
plt.figure(figsize=(x, y))
plt.plot(650, 310, marker='*', color="red")
# plt.axis('off')
plt.imshow(data)
File = "File_name"
plt.savefig(File)
plt.show()
ValueError: Image size of 105480x55224 pixels is too large. It must be less than 2^16 in each direction.
I have a large tiff file (around 2GB) containing a map. I have been able to successfully read the data and even display it using the following python code:
import rasterio
from rasterio.plot import show
with rasterio.open("image.tif") as img:
    show(img)
    data = img.read()
This works just fine. However, I need to be able to display specific parts of this map without having to load the entire file into memory (it takes up too much RAM and is not doable on many other PCs). I tried using the Window class of rasterio to do that, but when I tried to display the map the outcome was different from how the full map is displayed (as if it had caused data loss):
import rasterio
from rasterio.plot import show
from rasterio.windows import Window
with rasterio.open("image.tif") as img:
    data = img.read(window=Window(0, 0, 100000, 100000))
    show(data)
So my question is, how can I display a part of the map without having to load into memory the entire file, while also making it look as if it had been cropped from the full map image?
thanks in advance :)
The reason that it displays nicely in the first case, but not in the second, is that in the first case you pass an instance of rasterio.DatasetReader to show (show(img)), but in the second case you pass in a numpy array (show(data)). The DatasetReader contains additional information, in particular an affine transformation and color interpretation, which show uses.
The additional things show does in the first case (for RGB data) can be recreated for the windowed case like so:
import rasterio
from rasterio.enums import ColorInterp
from rasterio.plot import show
from rasterio.windows import Window
with rasterio.open("image.tif") as img:
    window = Window(0, 0, 100000, 100000)

    # Lookup table for the color space in the source file
    source_colorinterp = dict(zip(img.colorinterp, img.indexes))

    # Read the image in the proper order so the numpy array will have the colors in the
    # order expected by matplotlib (RGB)
    rgb_indexes = [
        source_colorinterp[ci]
        for ci in (ColorInterp.red, ColorInterp.green, ColorInterp.blue)
    ]
    data = img.read(rgb_indexes, window=window)

    # Also pass in the affine transform corresponding to the window in order to
    # display the correct coordinates and possibly orientation
    show(data, transform=img.window_transform(window))
(I figured out what show does by looking at the source code here)
In case of data with a single channel, the underlying matplotlib library used for plotting scales the color range based on the min and max value of the data. To get exactly the same colors as before, you'll need to know the min and max of the whole image, or some values that come reasonably close.
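One way to estimate those values without reading the whole file into memory is to let rasterio decimate the raster while reading. This is only a sketch, assuming a heavily downsampled overview is representative enough of your data; the factor 32 is an arbitrary choice:

import rasterio
from rasterio.enums import Resampling

with rasterio.open("image.tif") as img:
    # Read a heavily downsampled version of all bands; only this small
    # array ends up in memory, not the full raster.
    overview = img.read(
        out_shape=(img.count, img.height // 32, img.width // 32),
        resampling=Resampling.average,
        masked=True,
    )
    value_min = float(overview.min())
    value_max = float(overview.max())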
Then you can explicitly tell matplotlib's imshow how to scale:
with rasterio.open("image.tif") as img:
    window = Window(0, 0, 100000, 100000)
    data = img.read(window=window, masked=True)

    # adjust these
    value_min = 0
    value_max = 255

    show(data, transform=img.window_transform(window), vmin=value_min, vmax=value_max)
Additional kwargs (like vmin and vmax here) will be passed on to matplotlib.axes.Axes.imshow, as documented here.
From the matplotlib documentation:
vmin, vmax: float, optional
When using scalar data and no explicit norm, vmin and vmax define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is deprecated to use vmin/vmax when norm is given. When using RGB(A) data, parameters vmin/vmax are ignored.
That way you could also change the colormap it uses etc.
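For instance, a different colormap can be passed through in the same way. This one-line sketch would sit inside the same with-block as above, reusing data, img, window, value_min and value_max; "terrain" is just an arbitrary colormap choice:

    show(data, transform=img.window_transform(window), cmap="terrain", vmin=value_min, vmax=value_max)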
I am using wordcloud with some .txt files. How do I change this example if I want to 1) increase the resolution and 2) remove the empty border?
#!/usr/bin/env python2
"""
Minimal Example
===============
Generating a square wordcloud from the US constitution using default arguments.
"""
from os import path
import matplotlib.pyplot as plt
from wordcloud import WordCloud
d = path.dirname(__file__)
# Read the whole text.
text = open(path.join(d, 'constitution.txt')).read()
wordcloud = WordCloud().generate(text)
# Open a plot of the generated image.
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
You can't increase the resolution of the image in plt.show() since that is determined by your screen, but you can increase the size. This allows it to scale, zoom, etc. without blurring. To do this pass dimensions to WordCloud, e.g.
wordcloud = WordCloud(width=800, height=400).generate(text)
However, this just determines the size of the image created by WordCloud. When you display this using matplotlib it is scaled to the size of the plot canvas, which is (by default) around 800x600 and you again lose quality. To fix this you need to specify the size of the figure before you call imshow, e.g.
plt.figure( figsize=(20,10) )
plt.imshow(wordcloud)
By doing this I can successfully create a 2000x1000 high resolution word cloud.
For your second question (removing the border) first we could set the border to black, so it is less apparent, e.g.
plt.figure( figsize=(20,10), facecolor='k' )
You can also shrink the size of the border by using tight_layout, e.g.
plt.tight_layout(pad=0)
The final code:
# Read the whole text.
text = open(path.join(d, 'constitution.txt')).read()
wordcloud = WordCloud(width=1600, height=800).generate(text)
# Open a plot of the generated image.
plt.figure( figsize=(20,10), facecolor='k')
plt.imshow(wordcloud)
plt.axis("off")
plt.tight_layout(pad=0)
plt.show()
By replacing the last two lines with the following you can get the final output shown below:
plt.savefig('wordcloud.png', facecolor='k', bbox_inches='tight')
If you are trying to use an image as a mask, make sure to use a big image to get better image quality. I spent hours figuring this out.
Here's an example of a code snippet I used:
mask = np.array(Image.open('path_to_your_image'))
image_colors = ImageColorGenerator(mask)
wordcloud = WordCloud(width=1600, height=800,
                      background_color="rgba(255, 255, 255, 0)",
                      mask=mask,
                      color_func=image_colors).generate_from_frequencies(x)
# Display the generated image:
plt.figure( figsize=(20,10) )
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
It is very simple: plt.tight_layout(pad=0) does the job, reducing the space in the background and removing the excess padding.
You can use the to_svg method and get as high a resolution as you want.
with open("Output.svg", "w") as text_file:
text_file.write(wc.to_svg())
Try an example by appending these two lines to this file, and the result is gorgeous.
(Other answers have addressed the border problem; also, the example does not have a border.)
In case you run into the issue of a slower application while improving the resolution, e.g. in a web application, the WordCloud documentation advises using the scale parameter along with the canvas width and height params to get a resolution and response time that work for your use case.
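A rough sketch of that idea, reusing the text variable from the earlier example (the width, height and scale values are just placeholders to tune for your own app): the layout is computed on a small, fast canvas and the final bitmap is drawn 4x larger.

from wordcloud import WordCloud

# Lay out the words on a small 400x200 canvas (fast), then draw the final
# image scaled up 4x, i.e. roughly 1600x800 pixels.
wordcloud = WordCloud(width=400, height=200, scale=4).generate(text)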
Blurry wordclouds - I've been wrestling with this. For my use, I found that too large a differential between the most frequent word occurrences and those with few occurrences left the lower-count words unreadable. When I scaled the more frequent counts to reduce the differential, all the lower-frequency words were much more readable.
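A minimal sketch of that kind of rescaling, assuming a hypothetical dict of raw word counts called counts (taking the square root is just one way to compress the spread; the answer above doesn't say which transform was used):

import math

from wordcloud import WordCloud

# Shrink the gap between very frequent and rare words before generating,
# so the lower-count words stay readable.
scaled_counts = {word: math.sqrt(count) for word, count in counts.items()}
wordcloud = WordCloud(width=1600, height=800).generate_from_frequencies(scaled_counts)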
I have a program using PyQt that utilizes matplotlib to do some plot rendering. For saving images, I would like to make use of the legend to render a custom image (additionally, the built-in draggable feature makes this very appealing). I'm reading up on the legend, but I can't seem to figure out how to make a legend that calls my own Qt paintEvent() method, in which I could render custom images.
In case this approach is terrible, here's my goal: I want to put an image (rendered inside the program by Qt) either inside the plot window, or find a way to append this image on top of the exported figure.
Here's a screenshot of what the output looks like now:
I'd like to take the DAIP... sequence at the top and have that exported with the figure.
Hopefully someone has tackled a similar problem before.
I solved it by using the OffsetImage and AnnotationBbox features of matplotlib after saving the image to a temporary PNG file. For some reason, keeping it in memory without writing a temporary file didn't work well.
Briefly:
# (imports assumed for this snippet; it is Python 2 / PyQt4-era code)
import os
import tempfile
from StringIO import StringIO

import matplotlib as mpl
import matplotlib.offsetbox  # makes mpl.offsetbox available below
import matplotlib._png       # private PNG reader used below (removed in newer matplotlib)
from PyQt4.QtCore import QBuffer, QByteArray, QIODevice

# draw stuff onto a QPixmap named pix, then serialize it to PNG bytes in a Qt buffer
byteArray = QByteArray()
buffer = QBuffer(byteArray)
buffer.open(QIODevice.WriteOnly)
pix.save(buffer, 'PNG')

# copy the PNG bytes into a temporary file that matplotlib can read back
stringIO = StringIO(byteArray)
stringIO.seek(0)
tfile = tempfile.NamedTemporaryFile(suffix=".png", mode="wb", delete=False)
tfile.write(stringIO.buf)
tfile.close()

# wrap the image in an OffsetImage/AnnotationBbox and add it to the axes as a draggable artist
imagebox = mpl.offsetbox.OffsetImage(mpl._png.read_png(tfile.name), zoom=zlvl)
ab = mpl.offsetbox.AnnotationBbox(imagebox, [w/2, 0], frameon=False)
ab.set_figure(self.canvas.figure)
ab.draggable()
self.subplot.axes.add_artist(ab)
os.remove(tfile.name)
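If the temporary file ever becomes a nuisance, here is a hedged sketch of the same idea kept entirely in memory, assuming PyQt5 and a reasonably recent matplotlib (so it avoids the private mpl._png module). The pixmap contents and the placement coordinates are just placeholders:

import io

import matplotlib.image as mpimg
import matplotlib.offsetbox as offsetbox
import matplotlib.pyplot as plt
from PyQt5.QtCore import QBuffer, QByteArray, QIODevice
from PyQt5.QtGui import QColor, QPixmap
from PyQt5.QtWidgets import QApplication

app = QApplication([])  # a QApplication must exist before creating QPixmaps

# Draw stuff onto a QPixmap (a solid rectangle stands in for the real rendering).
pix = QPixmap(200, 50)
pix.fill(QColor("lightsteelblue"))

# Serialize the pixmap to PNG bytes in a Qt buffer instead of a temp file.
byte_array = QByteArray()
buffer = QBuffer(byte_array)
buffer.open(QIODevice.WriteOnly)
pix.save(buffer, "PNG")
buffer.close()

# Decode the PNG bytes straight into an array that matplotlib can display.
img = mpimg.imread(io.BytesIO(byte_array.data()), format="png")

fig, ax = plt.subplots()
imagebox = offsetbox.OffsetImage(img, zoom=1.0)
ab = offsetbox.AnnotationBbox(imagebox, (0.5, 0.9), xycoords="axes fraction", frameon=False)
ab.draggable()
ax.add_artist(ab)
plt.show()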