I want to automate a process in which I place some kind of pointer on my image. I found a solution that works exactly as I would like, but its disadvantage is that it destroys my picture quality: I want the output to keep the same size as the original picture.
Below I share my code and the error I receive. I would be grateful for your help :)
from matplotlib import image
from matplotlib import pyplot as plt
from PIL import Image
# to read the image stored in the working directory
# data = image.imread(file_name)
data = Image.open('File_name')
x, y = data.size
# to draw a point on co-ordinate (200,300)
plt.figure(figsize=(x, y))
plt.plot(650, 310, marker='*', color="red")
# plt.axis('off')
plt.imshow(data)
File = "File_name"
plt.savefig(File)
plt.show()
ValueError: Image size of 105480x55224 pixels is too large. It must be less than 2^16 in each direction.
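For reference (this is not part of the original post), a minimal sketch of one way to keep the saved file at the image's own pixel size: set the figure size to the pixel dimensions divided by the dpi, remove the margins, and save at that same dpi. The output filename is a stand-in.
from matplotlib import pyplot as plt
from PIL import Image
data = Image.open('File_name')        # placeholder filename from the question
x, y = data.size                      # pixel dimensions of the original image
dpi = 100
# Make the figure exactly x-by-y pixels and let the image fill it completely.
fig, ax = plt.subplots(figsize=(x / dpi, y / dpi), dpi=dpi)
fig.subplots_adjust(0, 0, 1, 1)
ax.axis('off')
ax.imshow(data)
ax.plot(650, 310, marker='*', color='red')
fig.savefig('File_name_marked.png', dpi=dpi)  # hypothetical output name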
I am trying to make a palettized version of my height image data (using Python/Matplotlib), and for some reason it is giving me quite weird horizontal lines which I know are not actually present in the dataset.
Both images (mine and the "better" one).
Is this something weird with how Matplotlib normalizes the data? I just don't quite understand how this could happen, so I am at a loss for where to start. I have provided my code below (sorry if there is a typo; I changed it slightly so it makes sense out of context).
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

def getheightprofile(fileloc, color_palette='jet'):
    # read data from file
    data = pd.read_csv(fileloc, skiprows=0)
    # generate colormap (I'm using the jet colormap rn)
    colormap = plt.get_cmap(color_palette)
    # normalize the height data to the range [0, 1]
    norm = (data - np.min(data)) / (np.max(data) - np.min(data))
    # convert the height data to RGB values using the palette
    palettized_data = (colormap(norm) * 255).astype(np.uint8)
    # save the file as a png (to check quality)
    saveloc = r'C:\Users\...\palletized_height_profile.png'
    plt.imsave(saveloc, palettized_data)
    # return the nice numbers for later analysis
    return palettized_data

# file location of the raw data
fileloc = r'C:\Users\...\raw_height_profile.csv'
# generate height profile map
palettized_image = getheightprofile(fileloc)
But instead of returning the nice image that I think I should get, it returns a super weird image with lines across it. Note: I know these images aren't quite the same palettization, but I think you can understand the issue.
Does anyone understand how or why this happens? I have also attached a link to the dataset, because maybe that is helpful, but I am quite sure there is nothing wrong with the data.
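Not from the original post, but one hedged thing worth ruling out: depending on the pandas version, np.min(data) on a DataFrame can return per-column minimums rather than a single global minimum, so each column would be normalized independently. A minimal sketch that normalizes the whole array at once (the file name is a stand-in):
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# hypothetical path standing in for the truncated one in the question
fileloc = 'raw_height_profile.csv'
data = pd.read_csv(fileloc, skiprows=0).to_numpy()       # work on a plain array
norm = (data - data.min()) / (data.max() - data.min())   # one global min/max
palettized_data = (plt.get_cmap('jet')(norm) * 255).astype(np.uint8)
plt.imsave('palettized_height_profile.png', palettized_data)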
The Problem:
I'm trying to simulate a live video by cycling through a series of still images I have saved in a directory, but when I add the animation and update functions my plot is displayed empty.
Background on why I'm doing this:
I believe it's important for me to do it this way rather than change the approach completely, say by turning the images into a video first and then displaying that, because what I really want to test is the image analysis I will be adding and then overlaying on each frame. The final application will be receiving frames one by one from a camera and will need to do some processing, display the image + annotations, output the data as .csv, etc. I'm simulating this for now because I do not have any of the hardware to generate the images and will not have it for several months, during which time I need to get the image processing set up, but I do have access to some sets of stills that are approximately what will be produced. In case it's relevant, my simulation images are 1680x1220, 1.88 MB TIFFs, though I could convert and compress them if needed, and in the final form the resolution will be a bit higher and the image format could probably be adjusted if needed.
What I have tried:
I followed an example to list all files in a folder, and an example to update a plot. However, the plot displays blank when I run the code.
I added a line to print the current file name, and I can see this cycling as expected.
I also made sure the images will display in the plot if I just create a plot and add one image, and they do. But when combined with the animation function the plot is blank, and I'm not sure what I've done wrong or failed to include.
I also tried adding a plt.pause() in the update, but again this didn't work.
I increased the interval up to 2000 to give it more time, but that didn't work. I believe 2000 is extreme; I'm expecting it should work at more like 20-30 fps. Going to 0.5 fps tells me the code is wrong or incomplete, rather than it just being a question of needing time to read the image file.
I appreciate no one else has my images, but they are nothing special. I'm using 60 images but I guess it could be tested with any 2 random images and setting range(60) to range(2) instead?
The example I copied originally demonstrated the animation function by making a random array, and if I do that it will show a plot that updates with random squares as expected.
Replacing:
A = np.random.randn(10,10)
im.set_array(A)
...with my image instead...
im = cv2.imread(files[i],0)
...and the plot remains empty/blank. I get a window shown called "Figure1" (like when using the random array), but unlike with the array there is nothing in this window.
Full code:
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import os
import cv2

def update(i):
    im = cv2.imread(files[i], 0)
    print(files[i])
    # plt.pause(0.1)
    return im

path = 'C:\\Test Images\\'
files = []
# r=root, d=directories, f = files
for r, d, f in os.walk(path):
    for file in f:
        if '.TIFF' in file:
            files.append(os.path.join(r, file))

ani = FuncAnimation(plt.gcf(), update, frames=range(60), interval=50, blit=False)
plt.show()
I'm a Python and programming novice, so I have relied on adjusting examples others have given online, but I have only a simplistic understanding of how they work and end up with a lot of trial and error on the syntax. I just can't figure out anything to make this one work, though.
Cheers for any help!
The main reason nothing is showing up is that you never add the images to the plot. I've provided some code below to do what you want; be sure to look up anything you are curious about or don't understand!
import glob
import os
from matplotlib import animation
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

IMG_DIRPATH = 'C:\\Test Images\\'  # the folder with your images (be careful about
                                   # putting spaces in directory names!)
IMG_EXT = '.TIFF'  # the file extension of your images

# Create a figure, and set it to the desired size.
fig = plt.figure(figsize=[5, 5])

# Create axes for the current figure so that images can be sized appropriately.
# Passing in [0, 0, 1, 1] makes the axes fill the whole figure.
# frame_on=False means we won't have a bounding box, and setting xticks=[] and
# yticks=[] means that we won't have pesky tick marks along our image.
ax_props = {'frame_on': False, 'xticks': [], 'yticks': []}
ax = plt.axes([0, 0, 1, 1], **ax_props)

# Get all image filenames.
img_filepaths = glob.glob(os.path.join(IMG_DIRPATH, '*' + IMG_EXT))

def update_image(img_filepath):
    # Remove all existing images on the axes, and restore our settings.
    ax.clear()
    ax.update(ax_props)
    # Read the current image.
    img = mpimg.imread(img_filepath)
    # Add the current image to the plot axes.
    ax.imshow(img)

anim = animation.FuncAnimation(fig, update_image, frames=img_filepaths, interval=250)
plt.show()
I am trying to display several pictures in my Jupyter notebook. However, the pixels look really rough, like below.
The original picture is sharp. How can I fix this?
This is part of a process for classifying whether a picture shows a dog or a cat. I have many pictures of dogs and cats in a folder in the same directory and just took them from there. I simply tried to show the picture in the Jupyter notebook using matplotlib.
Thank you in advance.
To force the resolution of the matplotlib inline images:
import matplotlib.pyplot as plt
dpi = 300  # recommended: between 150 and 300 for a quality image preview
plt.rcParams['figure.dpi'] = dpi
I think it uses a very low setting around 80 dpi by default.
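(Not in the original answer: you can inspect the current default before overriding it.)
import matplotlib.pyplot as plt
print(plt.rcParams['figure.dpi'])  # shows the dpi currently in effect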
The image quality seems to be degraded in the example picture simply because you are trying to show a 64-pixel-wide image on roughly 400 pixels on screen. Each original pixel thus spans several pixels on screen.
It seems you do not necessarily want to use matplotlib at all if the aim is to simply show the image in its original size on screen.
%matplotlib inline
import numpy as np
from IPython import display
from PIL import Image
a = np.random.rand(64,64,3)
b = np.random.rand(64,64,3)
c = (np.concatenate((a,b), axis=1)*255).astype(np.uint8)
display.display(Image.fromarray(c))
To achieve a similar result with matplotlib, you need to crop the margin around the axes and make sure the figure size is exactly the size of the array to show.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
a = np.random.rand(64,64,3)
b = np.random.rand(64,64,3)
c = np.concatenate((a,b), axis=1)
fig, ax = plt.subplots(figsize=(c.shape[1]/100.,c.shape[0]/100.), dpi=100)
fig.subplots_adjust(0,0,1,1)
ax.axis("off")
_ = ax.imshow(c)
This seems like it's going to be something simple that will fix my code, but I think I've looked at the code too much at this point and need some fresh eyes on it. I'm simply trying to bring in a Grib2 file that I've downloaded from NCEP for the HRRR model. According to their information, the grid type is Lambert Conformal, with corner latitudes of (21.13812, 21.14055, 47.84219, 47.83862) and corner longitudes of (-122.7195, -72.28972, -60.91719, -134.0955) for the model's domain.
Before even trying to zoom into my area of interest, I just wanted to display an image in the appropriate CRS. However, when I try to do this for the domain of the model, the borders and coastlines fall within that extent, but the actual image produced from the Grib2 file is just zoomed in. I've tried to use extent=[my domain extent], but it always seems to crash the notebook I'm testing it in. Here is my code and the associated image that I get from it.
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy
from mpl_toolkits.basemap import Basemap
from osgeo import gdal
gdal.SetConfigOption('GRIB_NORMALIZE_UNITS', 'NO')
plt.figure()
filename='C:\\Users\\Public\\Documents\\GRIB\\hrrr.t18z.wrfsfcf00.grib2'
grib = gdal.Open(filename, gdal.GA_ReadOnly)
z00 = grib.GetRasterBand(47)
meta00 = z00.GetMetadata()
band_description = z00.GetDescription()
bz00 = z00.ReadAsArray()
latitude_south = 21.13812 #38.5
latitude_north = 47.84219 #50
longitude_west = -134.0955 #-91
longitude_east = -60.91719 #-69
fig = plt.figure(figsize=(20, 20))
title= meta00['GRIB_COMMENT']+' at '+meta00['GRIB_SHORT_NAME']
fig.set_facecolor('white')
ax = plt.axes(projection=ccrs.LambertConformal())
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.coastlines(resolution='110m')
ax.imshow(bz00,origin='upper',transform=ccrs.LambertConformal())
plt.title(title)
plt.show()
Returns Just Grib File
If I change:
ax = plt.axes(projection=ccrs.LambertConformal())
to
ax = plt.axes(projection=ccrs.LambertConformal(central_longitude=-95.5,
                                               central_latitude=38.5, cutoff=21.13))
I get my borders but my actual data is not aligned and it creates what I'm dubbing a Batman plot.
Batman Plot
A similar issue occurs even when I do zoom into the domain and still have my borders present. The underlying data from the Grib file doesn't change to correspond to what I'm trying to get.
So, as I've already said, this is probably an easy fix that I'm just missing, but if not, it would be nice to know what step or process I'm screwing up so that I can learn from it and not do it in the future!
Update 1:
I've added and changed some code and am back to getting only the image to show without the borders and coastlines showing up.
test_extent = [longitude_west,longitude_east,latitude_south,latitude_north]
ax.imshow(bz00,origin='upper',extent=test_extent)
This gives me the following image.
Looks exactly like image 1.
The other thing that I'm noticing, which may be the root cause of all of this, is that when I print out the values of plt.gca().get_ylim() and plt.gca().get_xlim(), I get hugely different values depending on what is being displayed.
It seems that my problem arises from the fact that the Grib file, regardless of whether or not it can be displayed properly in other programs, just doesn't play nicely with Matplotlib and Cartopy out of the box, or at the very least not with the Grib files that I was using. For the sake of perhaps helping others in the future, these are from the NCEP HRRR model, which you can get here or here.
Everything seems to work nicely if you convert the file from Grib2 format to NetCDF format, and I was able to get what I wanted with the borders, coastlines, etc. on the map. I've attached the code and the output below to show how it worked. Also, I hand-picked a single dataset that I wanted to display to test against my previous code, so in case you want to look at the rest of the datasets available in the file, you'll need to use ncdump or something similar to view their information.
import numpy as np
from netCDF4 import Dataset
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy
import cartopy.feature as cfeature
from osgeo import gdal

gdal.SetConfigOption('GRIB_NORMALIZE_UNITS', 'NO')

nc_f = 'C:\\Users\\Public\\Documents\\GRIB\\test.nc'  # Your filename
nc_fid = Dataset(nc_f, 'r')  # Dataset is the class behavior to open the file
                             # and create an instance of the ncCDF4 class

# Extract data from NetCDF file
lats = nc_fid.variables['gridlat_0'][:]
lons = nc_fid.variables['gridlon_0'][:]
temp = nc_fid.variables['TMP_P0_L1_GLC0'][:]

fig = plt.figure(figsize=(20, 20))
states_provinces = cfeature.NaturalEarthFeature(category='cultural',
                                                name='admin_1_states_provinces_lines',
                                                scale='50m', facecolor='none')
proj = ccrs.LambertConformal()
ax = plt.axes(projection=proj)
plt.pcolormesh(lons, lats, temp, transform=ccrs.PlateCarree(),
               cmap='RdYlBu_r', zorder=1)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':', zorder=2)
ax.add_feature(states_provinces, edgecolor='black')
ax.coastlines()
plt.show()
Final Preview of Map
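As a small aside (not part of the original answer), the "ncdump or something similar" step can also be done from Python with the same netCDF4 module, to see which variable names the converted file actually contains:
from netCDF4 import Dataset
nc_fid = Dataset('C:\\Users\\Public\\Documents\\GRIB\\test.nc', 'r')
# List every variable with its dimensions, shape, and units (where present).
for name, var in nc_fid.variables.items():
    print(name, var.dimensions, var.shape, getattr(var, 'units', 'n/a'))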
I need to load an image file with matplotlib and see the coordinates of points within it, as if it were a simple x,y scatter plot.
I can assume that the x axis extent is [0, 1], and the y axis follows the same scaling. I can load the above image file with
from PIL import Image
im = Image.open("del.png")
im.show()
but this uses ImageMagick (I'm on a Linux system) to display the image, and no coordinates are shown in the bottom-left part of the plot window as they would be for a simple data plot:
Use pyplot for that:
from matplotlib import pyplot as plt
plt.imshow(plt.imread('del.png'))
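If the axes should run from 0 to 1 rather than pixel indices (an assumption based on the question), imshow's extent argument rescales them, and plt.show() opens the interactive window whose status bar shows the cursor coordinates:
from matplotlib import pyplot as plt
img = plt.imread('del.png')
# extent is (left, right, bottom, top) in data coordinates
plt.imshow(img, extent=[0, 1, 0, 1])
plt.show()  # the cursor position is shown in the window's status bar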