I am aiming to export an animation in GIF format. I can achieve this with an mp4, but I get an error when converting to GIF. I'm not sure whether it's the script that's wrong or some backend settings.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import animation
df1 = pd.DataFrame({
    'Time': [1, 1, 1, 2, 2, 2, 3, 3, 3],
    'GroupA_X': [3, 4, 5, 12, 15, 16, 21, 36, 47],
    'GroupA_Y': [2, 4, 5, 12, 15, 15, 22, 36, 45],
    'GroupB_X': [2, 5, 3, 12, 14, 12, 22, 33, 41],
    'GroupB_Y': [2, 4, 3, 13, 13, 14, 24, 32, 45],
})
fig, ax = plt.subplots()
ax.grid(False)
ax.set_xlim(0,50)
ax.set_ylim(0,50)
def groups():
    Group_A = df1[['Time', 'GroupA_X', 'GroupA_Y']]
    GA_X = np.array(Group_A.groupby(['Time'])['GroupA_X'].apply(list))
    GA_Y = np.array(Group_A.groupby(['Time'])['GroupA_Y'].apply(list))
    GA = ax.scatter(GA_X[0], GA_Y[0], c=['blue'], marker='o', s=10, edgecolor='black')
    return GA, GA_X, GA_Y

def animate(i):
    GA, GA_X, GA_Y = groups()
    GA.set_offsets(np.c_[GA_X[0 + i], GA_Y[0 + i]])
ani = animation.FuncAnimation(fig, animate, np.arange(0,3), interval = 1000, blit = False)
# If exporting as an mp4 it works fine.
#Writer = animation.writers['ffmpeg']
#writer = Writer(fps = 10, bitrate = 8000)
#ani.save('ani_test.mp4', writer = writer)
#But if I try to export as a gif it returns an error:
ani.save('gif_test.gif', writer = 'imagemagick')
Error:
MovieWriter imagemagick unavailable. Trying to use pillow instead.
self._frames[0].save(
IndexError: list index out of range
Note: I have also tried the following, which returns the same IndexError:
my_writer=animation.PillowWriter(fps = 10)
ani.save(filename='gif_test.gif', writer=my_writer)
I have tried adjusting numerous settings suggested in other questions about animated GIFs. My current animation settings are as follows. I am using a Mac.
###ANIMATION settings
#animation.html : none ## How to display the animation as HTML in
## the IPython notebook. 'html5' uses
## HTML5 video tag; 'jshtml' creates a
## Javascript animation
#animation.writer : imagemagick ## MovieWriter 'backend' to use
#animation.codec : mpeg4 ## Codec to use for writing movie
#animation.bitrate: -1 ## Controls size/quality tradeoff for movie.
## -1 implies let utility auto-determine
#animation.frame_format: png ## Controls frame format used by temp files
#animation.html_args: ## Additional arguments to pass to html writer
animation.ffmpeg_path: C:\Program Files\ImageMagick-6.9.1-Q16\ffmpeg.exe ## Path to ffmpeg binary. Without full path
## $PATH is searched
#animation.ffmpeg_args: ## Additional arguments to pass to ffmpeg
#animation.avconv_path: avconv ## Path to avconv binary. Without full path
## $PATH is searched
#animation.avconv_args: ## Additional arguments to pass to avconv
animation.convert_path: C:\Program Files\ImageMagick-6.9.2-Q16-HDRI ## Path to ImageMagick's convert binary.
## On Windows use the full path since convert
## is also the name of a system tool.
#animation.convert_args: ## Additional arguments to pass to convert
#animation.embed_limit : 20.0
The paths you have configured,
animation.ffmpeg_path: C:\Program Files\ImageMagick-6.9.1-Q16\ffmpeg.exe
and
animation.convert_path: C:\Program Files\ImageMagick-6.9.2-Q16-HDRI
are for Windows, but since you are on a Mac you need the paths for macOS. You should be able to get them using which from the terminal. On my Ubuntu install, which gives the following:
>$ which convert
/usr/bin/convert
>$ which ffmpeg
/usr/bin/ffmpeg
It should be similar on macOS. Those are the paths that need to be supplied to the rcParams animation.convert_path and animation.ffmpeg_path, i.e.
animation.ffmpeg_path: /usr/bin/ffmpeg
animation.convert_path: /usr/bin/convert
Do note that while having the wrong paths in the matplotlib configuration would produce the error in question, fixing them may not resolve it; there might be something else wrong as well.
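If you prefer not to edit the matplotlibrc file, the same settings can also be assigned at runtime through matplotlib.rcParams. A minimal sketch, assuming the binaries live in /usr/local/bin (replace with whatever which reports on your machine):
import matplotlib
# Placeholder paths: substitute the output of `which ffmpeg` / `which convert`
matplotlib.rcParams['animation.ffmpeg_path'] = '/usr/local/bin/ffmpeg'
matplotlib.rcParams['animation.convert_path'] = '/usr/local/bin/convert'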
I found the solution in a post answering a similar question. It seems the PillowWriter class is what worked on my computer; I couldn't get past the error arising from the ImageMagick class. You may have a better idea of what to set the bitrate and codec to; these were guesses or copied from the question I mentioned before.
ani = animation.FuncAnimation(fig, new_animate, frames=np.arange(0, 3))
plt.show()
my_writer=animation.PillowWriter(fps=20, codec='libx264', bitrate=2)
ani.save(filename='gif_test.gif', writer=my_writer)
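For completeness, here is a sketch of how the whole pipeline from the question could look when saved through PillowWriter. It is not the exact code from either post: the per-frame coordinates are hoisted out of animate so the scatter artist is created only once, instead of re-plotting on every frame.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

df1 = pd.DataFrame({
    'Time': [1, 1, 1, 2, 2, 2, 3, 3, 3],
    'GroupA_X': [3, 4, 5, 12, 15, 16, 21, 36, 47],
    'GroupA_Y': [2, 4, 5, 12, 15, 15, 22, 36, 45],
})

fig, ax = plt.subplots()
ax.set_xlim(0, 50)
ax.set_ylim(0, 50)

# One row of coordinates per Time value
GA_X = np.array(df1.groupby('Time')['GroupA_X'].apply(list).tolist())
GA_Y = np.array(df1.groupby('Time')['GroupA_Y'].apply(list).tolist())
GA = ax.scatter(GA_X[0], GA_Y[0], c='blue', marker='o', s=10, edgecolor='black')

def animate(i):
    GA.set_offsets(np.c_[GA_X[i], GA_Y[i]])
    return GA,

ani = animation.FuncAnimation(fig, animate, frames=np.arange(0, 3), interval=1000, blit=False)
ani.save('gif_test.gif', writer=animation.PillowWriter(fps=10))
Note that the answer above calls plt.show() before ani.save; if the figure window is closed before saving, the writer can end up with no frames to write, so saving before showing may be the safer order.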
I'm trying to show a tree visualisation using plot_tree, but it shows a chunk of text instead:
from sklearn.tree import plot_tree
plot_tree(t)
(where t is an instance of DecisionTreeClassifier)
This is the output:
[Text(464.99999999999994, 831.6, 'X[3] <= 0.8\nentropy = 1.581\nsamples = 120\nvalue = [39, 37, 44]'),
Text(393.46153846153845, 646.8, 'entropy = 0.0\nsamples = 39\nvalue = [39, 0, 0]'),
and so on and so forth. How do I make it show the visual tree instead?
I'm using Jupyter 6.4.1 and I already imported matplotlib earlier in the code. Thanks!
In my case, it works with a simple "show". plot_tree returns a list of matplotlib annotations, which is the text you are seeing echoed in the notebook; calling plt.show() renders the figure itself:
plot_tree(t)
plt.show()
You can plot your tree and specify its figure size with plt.figure:
width = 10
height = 7
plt.figure(figsize=(width, height))
tree_plot_max_depth = 6
plot_tree(t, max_depth=tree_plot_max_depth)
## the key to the problem of not showing tree is the command below
plt.show()
And, as mentioned in the documentation linked below, you can specify more parameters for your tree to get a more informative image.
https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html
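For example, a sketch of a more informative plot; feature_names and class_names are assumed variables here (e.g. iris.feature_names / iris.target_names for an iris-trained tree), so substitute whatever your tree t was trained on:
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

plt.figure(figsize=(12, 8))
plot_tree(t,                            # the fitted DecisionTreeClassifier
          max_depth=3,                  # only draw the top of the tree
          filled=True,                  # colour nodes by majority class
          feature_names=feature_names,  # assumed: names of your input columns
          class_names=class_names)      # assumed: names of your target classes
plt.show()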
If I run the first code snippet below in my PyCharm console as a complete block a single time, it runs completely and successfully prints the PCA output twice.
from sklearn.decomposition import PCA
import numpy as np
import matplotlib.pyplot as plt
x = [1, 2 , 3, 4, 5, 6, 7, 8, 9]
y = [3, 4, 5, 6, 7, 8, 9, 10, 11]
xy = np.array([x, y]).T
xy_pca = PCA(n_components=1).fit_transform(xy)
print(xy_pca)
ax = plt.figure().add_subplot(111)
xy_pca_2 = PCA(n_components=1).fit_transform(xy)
print(xy_pca_2)
However, if I run that complete block of code again, I get NaNs as the output of the first print statement, but a correct output from the second print statement.
Also, if I start over and use the Python console to run that block of code line by line, I get a correct output from the first print statement, but NaNs from the second print statement.
This leads me to believe that matplotlib somehow interferes with state needed to successfully run sklearn's PCA.fit_transform, or that something odd is happening in the console state.
But this is not the end of the story. With this second block of code below, whether it is run as a complete block or line by line from the Python console, it will never fail. The only difference is that the x and y arrays are one item shorter each. This should not affect either the matplotlib or sklearn functionality, but somehow it is making a difference.
from sklearn.decomposition import PCA
import numpy as np
import matplotlib.pyplot as plt
x = [1, 2 , 3, 4, 5, 6, 7, 8]
y = [3, 4, 5, 6, 7, 8, 9, 10]
xy = np.array([x, y]).T
xy_pca = PCA(n_components=1).fit_transform(xy)
print(xy_pca)
ax = plt.figure().add_subplot(111)
plt.show()
xy_pca_2 = PCA(n_components=1).fit_transform(xy)
print(xy_pca_2)
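As a diagnostic sketch (not part of the original snippets), checking both the input array and the PCA output for NaNs right after each step would show whether the data itself is being corrupted or only the fit_transform result:
import numpy as np

print("input finite: ", np.isfinite(xy).all())        # is the source array intact?
print("output finite:", np.isfinite(xy_pca_2).all())  # did fit_transform produce NaNs?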
System config:
Python 3.7.8
PyCharm 2020.2 (Community Edition)
Build #PC-202.6397.98, built on July 28, 2020
Runtime version 11.0.7+10-b944.20 amd64
VM OpenJDK 64-Bit Server VM by JetBrains s.r.o.
Windows 10 10.0
GC ParNew, ConcurrentMarkSweep
Memory 2014M
Cores 4
matplotlib 3.3.1
sklearn 0.23.2
I'm currently working on an image processing script in Python (Spyder IDE, Python 3.5, Anaconda 4.0.0). When I first open the IDE, I only have to press 'run script' once for the script to execute. But after that, I have to press 'run script' twice, sometimes even three times, for it to execute. Searching around the internet, it seems that the issue has to do with using matplotlib and pyplot. It's mainly an issue because I will spend 5 minutes per test just getting my script to execute. My code is included below. I decided to ask about the issue here to see if anyone might have a suggestion or idea to get my script to execute on the first press.
EDIT: Whenever I restart my kernel (start a new console), I'm able to get the script to run on the first press.
import numpy as np
import matplotlib.pyplot as plt
from skimage.color import rgb2gray
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.segmentation import active_contour
from skimage import io
from skimage import exposure
import scipy
scipy_version = list(map(int, scipy.__version__.split('.')))
new_scipy = scipy_version[0] > 0 or \
    (scipy_version[0] == 0 and scipy_version[1] >= 14)
'''
img = data.astronaut()
img = rgb2gray(img)
'''
openLocation = "file location here"
img = io.imread(openLocation)
#img = rgb2gray(img)
s = np.linspace(0, 2*np.pi, 600)
x = 400 + 300*np.cos(s)
y = 550 + 280*np.sin(s)
init = np.array([x, y]).T
if not new_scipy:
    print('You are using an old version of scipy. '
          'Active contours is implemented for scipy versions '
          '0.14.0 and above.')
if new_scipy:
    snake = active_contour(img, init, alpha=0.01, beta=0.01, w_line=5, w_edge=0, gamma=0.01, bc='periodic')
    fig = plt.figure(figsize=(7, 7))
    ax = fig.add_subplot(111)
    plt.gray()
    ax.imshow(img)
    ax.plot(init[:, 0], init[:, 1], '--r', lw=3)
    ax.plot(snake[:, 0], snake[:, 1], '-b', lw=3)
    ax.set_xticks([]), ax.set_yticks([])
    ax.axis([0, img.shape[1], img.shape[0], 0])
I think your issue is related to this: it's a bug in Spyder. The partial solution so far is to use a different Matplotlib backend. You can change it in:
Preferences > Console > External modules > Matplotlib
from the default (Qt4Agg) to TkAgg (the only other available on Windows).
One more thing you can try is updating Spyder and then running your script again.
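If you would rather force the backend from the script itself instead of through Spyder's preferences, a minimal sketch (the backend has to be selected before pyplot is imported):
import matplotlib
matplotlib.use('TkAgg')          # must run before the first pyplot import
import matplotlib.pyplot as plt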
Is it possible to save images made with VisPy? Maybe using vispy.io.imsave or vispy.write_png?
Also, it is possible to plot matplotlib figures in VisPy using vispy.mpl_plot, but is it possible to use a VisPy image in matplotlib?
In any case, I would need to generate an image object with VisPy but I did not find any example of that.
Here is a minimal example. Use canvas.render to create an image, then export it with io.write_png:
import vispy.plot as vp
import vispy.io as io
# Create a canvas showing plot data
canvas = vp.plot([1, 6, 2, 4, 3, 8, 5, 7, 6, 3])
# Use render to generate an image object
img=canvas.render()
# Use write_png to export your wonderful plot as png !
io.write_png("wonderful.png",img)
Here is an updated version of jvtrudel's answer (working with vispy 0.5.0-dev):
The official demo https://github.com/vispy/vispy/blob/master/examples/basics/plotting/export.py does something very similar, and a stripped-down version adjusted to export a PNG could look like this:
import vispy.plot as vp
import vispy.io as io
fig = vp.Fig(show=False)
fig[0, 0].plot([1, 6, 2, 4, 3, 8, 5, 7, 6, 3])
image = fig.render()
io.write_png("wonderful.png",image)
Clarification: I somehow left out the key aspect: not using os.system or subprocess, just the Python API.
I'm trying to convert a section of a NOAA GTX offset grid for vertical datum transformations and not totally following how to do this in GDAL with Python. I'd like to take a grid (in this case a Bathymetry Attributed Grid, but it could be a GeoTIFF) and use it as the template for the output I'd like to produce. If I can do this right, I have a feeling that it will greatly help people make use of this type of data.
Here is what I have that is definitely not working. When I run gdalinfo on the resulting destination dataset (dst_ds), it does not match the source grid BAG.
from osgeo import gdal, gdalconst, osr

bag = gdal.Open(bag_filename)
gtx = gdal.Open(gtx_filename)
bag_srs = osr.SpatialReference()
bag_srs.ImportFromWkt(bag.GetProjection())
vrt = gdal.AutoCreateWarpedVRT(gtx, None, bag_srs.ExportToWkt(), gdal.GRA_Bilinear, 0.125)
dst_ds = gdal.GetDriverByName('GTiff').Create(out_filename, bag.RasterXSize, bag.RasterYSize,
                                              1, gdalconst.GDT_Float32)
dst_ds.SetProjection(bag_srs.ExportToWkt())
dst_ds.SetGeoTransform(vrt.GetGeoTransform())

def warp_progress(pct, message, user_data):
    return 1

gdal.ReprojectImage(gtx, dst_ds, None, None, gdal.GRA_NearestNeighbour, 0, 0.125, warp_progress, None)
Example files (but any two grids where they overlap, but are in different projections would do):
http://surveys.ngdc.noaa.gov/mgg/NOS/coast/F00001-F02000/F00574/BAG/
F00574_MB_2m_MLLW_2of3.bag
http://vdatum.noaa.gov/download/data/VDatum_National.zip
MENHMAgome01_8301/mllw.gtx
The command line equivalent to what I'm trying to do:
gdalwarp -tr 2 -2 -te 369179 4773093 372861 4775259 -of VRT -t_srs EPSG:2960 \
MENHMAgome01_8301/mllw.gtx mllw-2960-crop-resample.vrt
gdal_translate mllw-2960-crop-resample.{vrt,tif}
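For reference, on GDAL 2.1 and later the same gdalwarp call can be reproduced purely through the Python API with gdal.Warp (a sketch, assuming your GDAL build exposes it, and writing a GeoTIFF directly instead of going through a VRT):
from osgeo import gdal

# Mirrors: gdalwarp -tr 2 -2 -te 369179 4773093 372861 4775259 -t_srs EPSG:2960 ...
ds = gdal.Warp('mllw-2960-crop-resample.tif',
               'MENHMAgome01_8301/mllw.gtx',
               dstSRS='EPSG:2960',
               xRes=2, yRes=2,
               outputBounds=(369179, 4773093, 372861, 4775259),  # minX, minY, maxX, maxY
               resampleAlg='bilinear',
               format='GTiff')
ds = None  # close and flush to disk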
Thanks to Jamie for the answer.
#!/usr/bin/env python
from osgeo import gdal, gdalconst
# Source
src_filename = 'MENHMAgome01_8301/mllw.gtx'
src = gdal.Open(src_filename, gdalconst.GA_ReadOnly)
src_proj = src.GetProjection()
# We want a section of source that matches this:
match_filename = 'F00574_MB_2m_MLLW_2of3.bag'
match_ds = gdal.Open(match_filename, gdalconst.GA_ReadOnly)
match_proj = match_ds.GetProjection()
match_geotrans = match_ds.GetGeoTransform()
wide = match_ds.RasterXSize
high = match_ds.RasterYSize
# Output / destination
dst_filename = 'F00574_MB_2m_MLLW_2of3_mllw_offset.tif'
dst = gdal.GetDriverByName('GTiff').Create(dst_filename, wide, high, 1, gdalconst.GDT_Float32)
dst.SetGeoTransform( match_geotrans )
dst.SetProjection( match_proj)
# Do the work
gdal.ReprojectImage(src, dst, src_proj, match_proj, gdalconst.GRA_Bilinear)
del dst # Flush
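To confirm the output really lines up with the BAG, a quick check (not part of the original answer) is to reopen the GeoTIFF and compare its georeferencing and size against the template:
check = gdal.Open(dst_filename, gdalconst.GA_ReadOnly)
print(check.GetGeoTransform() == match_geotrans)             # same origin and pixel size?
print(check.RasterXSize == wide, check.RasterYSize == high)  # same dimensions?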
If I understand the question correctly, you could accomplish your goal by running gdalwarp and gdal_translate as subprocesses. Just assemble your options then do the following for example:
import subprocess
param = ['gdalwarp',option1,option2...]
cmd = ' '.join(param)
process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout = ''.join(process.stdout.readlines())
stderr = ''.join(process.stderr.readlines())
if len(stderr) > 0:
    raise IOError(stderr)
It may not be the most elegant solution, but it will get the job done. Once it is run, just load your data into numpy using gdal and carry on your way.
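Once the warp has produced a file, reading it back into NumPy is straightforward (a sketch, assuming the output path used earlier):
from osgeo import gdal

ds = gdal.Open('mllw-2960-crop-resample.tif')
arr = ds.GetRasterBand(1).ReadAsArray()   # 2D NumPy array of the warped grid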