Getting video properties with Python without calling external software - python

[Update:] Yes, it is possible, now some 20 months later. See Update3 below! [/update]
Is that really impossible? All I could find were variants of calling FFmpeg (or other software). My current solution is shown below, but what I really would like to get for portability is a Python-only solution that doesn't require users to install additional software.
After all, I can easily play videos using PyQt's Phonon, yet I can't get simple things like the dimensions or duration of the video?
My solution uses ffmpy (http://ffmpy.readthedocs.io/en/latest/ffmpy.html ) which is a wrapper for FFmpeg and FFprobe (http://trac.ffmpeg.org/wiki/FFprobeTips). Smoother than other offerings, yet it still requires an additional FFmpeg installation.
import ffmpy, subprocess, json
ffprobe = ffmpy.FFprobe(global_options="-loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0", inputs={"myvideo.mp4": None})
print("ffprobe.cmd:", ffprobe.cmd) # printout the resulting ffprobe shell command
stdout, stderr = ffprobe.run(stderr=subprocess.PIPE, stdout=subprocess.PIPE)
# std* is byte sequence, but json in Python 3.5.2 requires str
ff0string = str(stdout,'utf-8')
ffinfo = json.loads(ff0string)
print(json.dumps(ffinfo, indent=4)) # pretty print
print("Video Dimensions: {}x{}".format(ffinfo["streams"][0]["width"], ffinfo["streams"][0]["height"]))
print("Streams Duration:", ffinfo["streams"][0]["duration"])
print("Format Duration: ", ffinfo["format"]["duration"])
Results in output:
ffprobe.cmd: ffprobe -loglevel quiet -sexagesimal -of json -show_entries stream=width,height,duration -show_entries format=duration -select_streams v:0 -i myvideo.mp4
{
    "streams": [
        {
            "duration": "0:00:32.033333",
            "width": 1920,
            "height": 1080
        }
    ],
    "programs": [],
    "format": {
        "duration": "0:00:32.064000"
    }
}
Video Dimensions: 1920x1080
Streams Duration: 0:00:32.033333
Format Duration: 0:00:32.064000
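Since -sexagesimal formats durations as H:MM:SS.micro, a small helper can convert them back to plain seconds (a sketch; alternatively, drop the -sexagesimal flag and ffprobe emits seconds directly):

```python
def sexagesimal_to_seconds(timestamp):
    """Convert an ffprobe -sexagesimal duration like '0:00:32.033333' to seconds."""
    hours, minutes, seconds = timestamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

print(sexagesimal_to_seconds("0:00:32.033333"))  # → 32.033333
```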
UPDATE after several days of experimentation: The hachoir solution proposed by Nick below does work, but will give you a lot of headaches, as hachoir's responses are too unpredictable. Not my choice.
With opencv, coding couldn't be easier:
import cv2
vid = cv2.VideoCapture( picfilename)
height = vid.get(cv2.CAP_PROP_FRAME_HEIGHT) # always 0 in Linux python3
width = vid.get(cv2.CAP_PROP_FRAME_WIDTH) # always 0 in Linux python3
print ("opencv: height:{} width:{}".format( height, width))
The problem is that it works well on Python2 but not on Py3. Quote: "IMPORTANT NOTE: MacOS and Linux packages do not support video related functionality (not compiled with FFmpeg)" (https://pypi.python.org/pypi/opencv-python).
On top of this, it seems that opencv needs the binary packages of FFmpeg to be present at runtime (https://docs.opencv.org/3.3.1/d0/da7/videoio_overview.html).
Well, if I need an installation of FFmpeg anyway, I can stick to my original ffmpy example shown above :-/
Thanks for the help.
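For what it's worth, the ffmpy layer itself is optional: the same ffprobe call can be issued directly with subprocess (a sketch assuming an ffprobe binary on the PATH; the argument list mirrors the ffprobe.cmd printed above):

```python
import json
import subprocess

def build_ffprobe_cmd(videofile):
    # mirrors the ffprobe.cmd printed by the ffmpy example above
    return ["ffprobe", "-loglevel", "quiet", "-sexagesimal", "-of", "json",
            "-show_entries", "stream=width,height,duration",
            "-show_entries", "format=duration",
            "-select_streams", "v:0", "-i", videofile]

def probe(videofile):
    # returns the parsed JSON that ffprobe writes to stdout
    out = subprocess.run(build_ffprobe_cmd(videofile),
                         capture_output=True, check=True).stdout
    return json.loads(out)
```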
UPDATE2: master_q (see below) proposed MediaInfo. While this failed to work on my Linux system (see my comments), the alternative of using pymediainfo, a Python wrapper for MediaInfo, did work. It is simple to use, but it takes 4 times longer than my initial ffprobe approach to obtain duration, width, and height, and it still needs external software, i.e. MediaInfo:
from pymediainfo import MediaInfo
media_info = MediaInfo.parse("myvideofile")
for track in media_info.tracks:
    if track.track_type == 'Video':
        print("duration (millisec):", track.duration)
        print("width, height:", track.width, track.height)
UPDATE3: OpenCV is finally available for Python 3, and is claimed to run on Linux, Windows, and Mac! It makes this really easy, and I verified that external software - in particular ffmpeg - is NOT needed!
First install OpenCV via Pip:
pip install opencv-python
Run in Python:
import cv2
cv2video = cv2.VideoCapture(videofilename)
height = cv2video.get(cv2.CAP_PROP_FRAME_HEIGHT)
width = cv2video.get(cv2.CAP_PROP_FRAME_WIDTH)
print("Video Dimension: height:{} width:{}".format(height, width))
framecount = cv2video.get(cv2.CAP_PROP_FRAME_COUNT)
frames_per_sec = cv2video.get(cv2.CAP_PROP_FPS)
print("Video duration (sec):", framecount / frames_per_sec)
# equally easy to get this info from images
cv2image = cv2.imread(imagefilename, flags=cv2.IMREAD_COLOR)
height, width, channel = cv2image.shape
print("Image Dimension: height:{} width:{}".format(height, width))
I also needed the first frame of a video as an image, and used ffmpeg for this to save the image in the file system. This also is easier with OpenCV:
hasFrames, cv2image = cv2video.read() # reads 1st frame
cv2.imwrite("myfilename.png", cv2image) # extension defines image type
But even better: as I need the image only in memory for use in the PyQt5 toolkit, I can read the cv2 image directly into a Qt image:
bytesPerLine = 3 * width
# my_qt_image = QImage(cv2image, width, height, bytesPerLine, QImage.Format_RGB888) # may give false colors!
my_qt_image = QImage(cv2image.data, width, height, bytesPerLine, QImage.Format_RGB888).rgbSwapped() # correct colors on my systems
As OpenCV is a huge program, I was concerned about timing. As it turned out, OpenCV was never slower than the alternatives: it takes some 100 ms to read a slide, and all the rest combined never takes more than 10 ms.
I tested this successfully on Ubuntu Mate 16.04, 18.04, and 19.04, and on two different installations of Windows 10 Pro. (I did not have a Mac available.) I am really delighted with OpenCV!
You can see it in action in my SlideSorter program, which lets you sort images and videos, preserve the sort order, and present them as a slideshow. Available here: https://sourceforge.net/projects/slidesorter/

OK, after investigating this myself because I needed it too, it looks like it can be done with hachoir. Here's a code snippet that can give you all the metadata hachoir can read:
import os
import re
from hachoir.parser import createParser
from hachoir.metadata import extractMetadata

def get_video_metadata(path):
    """
    Given a path, returns a dictionary of the video's metadata, as parsed by hachoir.
    Keys vary by exact filetype, but for an MP4 file on my machine,
    I get the following keys (inside of "Common" subdict):
    "Duration", "Image width", "Image height", "Creation date",
    "Last modification", "MIME type", "Endianness"

    Dict is nested - common keys are inside of a subdict "Common",
    which will always exist, but some keys *may* be inside of
    video/audio specific stream subdicts, named "Video Stream #1"
    or "Audio Stream #1", etc. Not all formats result in this
    separation.

    :param path: str path to video file
    :return: dict of video metadata
    """
    if not os.path.exists(path):
        raise ValueError("Provided path to video ({}) does not exist".format(path))

    parser = createParser(path)
    if not parser:
        raise RuntimeError("Unable to get metadata from video file")

    with parser:
        metadata = extractMetadata(parser)
        if not metadata:
            raise RuntimeError("Unable to get metadata from video file")

    metadata_dict = {}
    line_matcher = re.compile(r"-\s(?P<key>.+):\s(?P<value>.+)")
    group_key = None  # stores which group we're currently in, for nesting subkeys
    for line in metadata.exportPlaintext():  # what hachoir offers for dumping readable information
        parts = line_matcher.match(line)
        if not parts:  # not all lines have metadata - at least one is a header
            if line == "Metadata:":  # for the generic header, use "Common" so there's always a Common key
                group_key = "Common"
            else:
                group_key = line[:-1]  # strip the trailing colon of the group header
            metadata_dict[group_key] = {}  # initialize the group
            continue
        if group_key:  # if we're inside of a group, nest this key inside it
            metadata_dict[group_key][parts.group("key")] = parts.group("value")
        else:  # otherwise, put it in the root of the dict
            metadata_dict[parts.group("key")] = parts.group("value")
    return metadata_dict
This seems to return good results for me right now and requires no extra installs. The keys seem to vary a decent amount by video and type of video, so you'll need to do some checking and not just assume any particular key is there. This code is written for Python 3 and is using hachoir3 and adapted from hachoir3 documentation - I haven't investigated if it works for hachoir for Python 2.
In case it's useful, I also have the following for turning the text-based duration values into seconds:
def length(duration_value):
    # get the individual time components
    time_split = re.match(
        r"(?P<hours>\d+\shrs)?\s*(?P<minutes>\d+\smin)?\s*(?P<seconds>\d+\ssec)?\s*(?P<ms>\d+\sms)",
        duration_value)
    fields_and_multipliers = {  # multipliers to convert each value to seconds
        "hours": 3600,
        "minutes": 60,
        "seconds": 1,
        "ms": 0.001  # milliseconds are a thousandth of a second, not a whole second
    }
    total_time = 0
    # iterate through each portion of time, convert it to seconds, and add to the total
    for group in fields_and_multipliers:
        if time_split.group(group) is not None:  # not all groups are present for all videos (eg "hrs" may be missing)
            total_time += float(time_split.group(group).split(" ")[0]) * fields_and_multipliers[group]
    return total_time

MediaInfo is another choice. It is cross-platform when used together with MediaInfoDLL.py and the MediaInfo.DLL library.
Download MediaInfo.DLL from their site (the CLI package contains the DLL), or both files including the Python script, from https://github.com/MediaArea/MediaInfoLib/releases
Working in Python 3.6:
You create a dict of the parameters you want. The keys have to be exact, but the values will be filled in later; they are only there to make clear what each value might look like.
from MediaInfoDLL import *

# could be in __init__ of some class
self.video = {'Format': 'AVC', 'Width': '1920', 'Height': '1080', 'ScanType': 'Progressive',
              'ScanOrder': 'None', 'FrameRate': '29.970',
              'FrameRate_Num': '', 'FrameRate_Den': '', 'FrameRate_Mode': '',
              'FrameRate_Minimum': '', 'FrameRate_Maximum': '',
              'DisplayAspectRatio/String': '16:9', 'ColorSpace': 'YUV',
              'ChromaSubsampling': '4:2:0', 'BitDepth': '8',
              'Duration': '', 'Duration/String3': ''}
self.audio = {'Format': 'AAC', 'BitRate': '320000', 'BitRate_Mode': 'CBR',
              'Channel(s)': '2', 'SamplingRate': '48000', 'BitDepth': '16'}

# a method within a class:
def mediainfo(self, file):
    MI = MediaInfo()
    MI.Open(file)
    for key in self.video:
        value = MI.Get(Stream.Video, 0, key)
        self.video[key] = value
    for key in self.audio:
        # 0 means track 0
        value = MI.Get(Stream.Audio, 0, key)
        self.audio[key] = value
    MI.Close()
...

# calling it from another method:
self.mediainfo(self.file)

# you'll get a dict with the correct values; if none, the value is ''
# for example, to get the frame rate out of that dictionary:
fps = self.video['FrameRate']


How to get video metadata from bytes using imageio.v3?

I am creating a Python class to process videos received from an HTTP POST. The videos can have a wide range of lengths, from 10 seconds up to 10 hours. I am looking for a way to get video metadata such as fps, height, and width without having to store the whole video in memory.
The class is initialized like:
class VideoToolkit:
    def __init__(self, video, video_name, media_type, full_video=True, frame_start=None, frame_stop=None):
        self._frames = iio.imiter(video, format_hint=''.join(['.', media_type.split('/')[1]]))  # generator
        self._meta = iio.immeta(video, exclude_applied=False)
The line of self._meta doesn't work giving an error:
OSError: Could not find a backend to open `<bytes>` with iomode `r`.
Is there a similar way to get metadata using imageio.v3 and not storing the whole video in memory?
Just as an example, it is possible to get the metadata directly opening a video from a file:
import imageio.v3 as iio
metadata = iio.immeta('./project.mp4',exclude_applied=False)
print(metadata)
Output:
{'plugin': 'ffmpeg', 'nframes': inf, 'ffmpeg_version': '4.2.2-static https://johnvansickle.com/ffmpeg/ built with gcc 8 (Debian 8.3.0-6)', 'codec': 'mpeg4', 'pix_fmt': 'yuv420p', 'fps': 14.25, 'source_size': (500, 258), 'size': (500, 258), 'rotate': 0, 'duration': 1.69}
But opening the same file as bytes, this didn't work:
import imageio.v3 as iio
with open('./project.mp4', 'rb') as vfile:
    vbytes = vfile.read()
metadata = iio.immeta(vbytes, exclude_applied=False)
print(metadata)
Output:
OSError: Could not find a backend to open `<bytes>` with iomode `r`.
PS: One way could be doing next(self._frames) to get the first frame and then get its shape, but the video fps would be still missing.
You are correct that you'd use iio.immeta for this. The reason this fails is that you are using the imageio-ffmpeg backend, which decides whether it can read something based on the ImageResource's extension. Bytes have no extension, so the plugin thinks it can't read the ImageResource. Here are different ways you can fix this:
import imageio.v3 as iio
# setup
frames = iio.imread("imageio:cockatoo.mp4")
video_bytes = iio.imwrite("<bytes>", frames, extension=".mp4")
# set the `extension` kwarg (check the docs I linked)
meta = iio.immeta(video_bytes, extension=".mp4")
# use the new-ish pyav plugin
# (`pip install av` and ImageIO will pick it up automatically)
meta = iio.immeta(video_bytes)
Note 1: Using pyav is actually preferable, because it extracts metadata without decoding pixels. This is faster than imageio-ffmpeg, which internally calls ffmpeg in a subprocess, decodes some pixels, and then discards that data (an expensive no-op). This is especially true when reading from HTTP resources.
Note 2: In v2.21.2, the pyav plugin doesn't report FPS, only duration where available. There is now a PR (853) that adds this (and other things), but it will likely not get merged for the next few weeks, because I am busy with my PhD defense. (Now merged.)
Note 3: Many people interested in FPS want to know this info to calculate the total number of frames in the video. In this case, it can be much easier to call iio.improps and inspect the resulting .shape, e.g., iio.improps("imageio:cockatoo.mp4", plugin="pyav").shape # (280, 720, 1280, 3)
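For the fps-to-frame-count arithmetic mentioned in Note 3, using the metadata values from the immeta output shown above (this is only an estimate; variable-frame-rate files can deviate, which is why improps' exact .shape is preferable):

```python
meta = {"fps": 14.25, "duration": 1.69}  # values from the immeta output above
total_frames = round(meta["fps"] * meta["duration"])
print(total_frames)  # → 24
```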

How to utilise ffmpeg to extract key frames from a video stream and only print the labels present within these frames?

So a bit of context: I'm using the TensorFlow object detection API for a project, and I've modified the visualization_utils file to print any present class labels to the terminal and then write them to a .txt file. From a bit of research I've come across FFmpeg. I'm wondering if there is a function in FFmpeg that only prints and writes the class labels from keyframes within the video, i.e. when there is a change in the video. At the moment it prints all the class labels for every frame even if there is no change, so I have duplicate labels even when no new object appears in the video. Following on from this, would I have to apply this keyframe filtering to the input video beforehand?
Thanks in advance!
I'm using opencv2 to capture my video input.
Please see below for code:
visualization_utils.py - inside the draw_bounding_box_on_image_array function:
# Write video output to file for evaluation.
f = open("ObjDecOutput.txt", "a")
print(display_str_list[0])
f.write(display_str_list[0])
Thought I'd follow up on this: I ended up using ffmpeg's mpdecimate and setpts filters to remove duplicate and similar frames.
ffmpeg -i example.mp4 -vf mpdecimate=frac=1,setpts=N/FRAME_RATE/TB example_decimated.mp4
This, however, didn't solve the problem of duplicates within the file I was writing the labels to. To solve that, I appended each row of the file to a list, then looped through it removing each run of duplicated elements, keeping only the first occurrence, and appending it to a new list.
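That "keep only the first of each run of duplicates" step can be done with itertools.groupby, which groups consecutive equal elements (a sketch with made-up labels):

```python
from itertools import groupby

labels = ["car", "car", "car", "person", "person", "car"]
# groupby collapses each run of consecutive equal elements into one group
deduped = [key for key, _run in groupby(labels)]
print(deduped)  # → ['car', 'person', 'car']
```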
Finally, I found the solution here after a year. However, there was a small bug in the code converted from this script.
The fix is the added condition: and frame["key_frame"]
import json
import subprocess

def get_frames_metadata(file):
    command = '"{ffexec}" -show_frames -print_format json "{filename}"'.format(ffexec='ffprobe', filename=file)
    response_json = subprocess.check_output(command, shell=True, stderr=None)
    frames = json.loads(response_json)["frames"]
    frames_metadata, frames_type, frames_type_bool = [], [], []
    for frame in frames:
        if frame["media_type"] == "video":
            video_frame = json.dumps(dict(frame), indent=4)
            frames_metadata.append(video_frame)
            frames_type.append(frame["pict_type"])
            if frame["pict_type"] == "I" and frame["key_frame"]:
                frames_type_bool.append(True)
            else:
                frames_type_bool.append(False)
    # print(frames_type)
    return frames_metadata, frames_type, frames_type_bool
The frame types are stored in frames_type, but don't trust it; the true keyframes are in frames_type_bool.
I tested a clip for which I had two consecutive I-frames at the beginning, but avidemux was showing only one. So I checked the original code and found that some frames may have pict_type = I but key_frame = False. I thus fixed the code.
After having frames_type_bool, you can extract the True indices and use opencv or imageio to extract only the keyframes.
This is how to use this function and imageio to show the keyframes:
import matplotlib.pyplot as plt
import imageio

filename = 'Clip.mp4'
# extract frame types
_, _, isKeyFrame = get_frames_metadata(filename)
# keep keyframe indices
keyframes_index = [i for i, b in enumerate(isKeyFrame) if b]
# open file
vid = imageio.get_reader(filename, 'ffmpeg')
for i in keyframes_index:
    image = vid.get_data(i)
    fig = plt.figure()
    fig.suptitle('image #{}'.format(i), fontsize=20)
    plt.imshow(image)
    plt.show()

Python EXIF can't find HEIC file date taken, but it's visible in other tools

This is similar to this question, except that the solution there doesn't work for me.
Viewing a HEIC file in Windows Explorer, I can see several dates. The one that matches what I know is the date I took the photo is headed 'Date' and 'Date taken'. The other dates aren't what I want.
Image in Windows Explorer
I've tried two methods to get EXIF data from this file in Python:
from PIL import Image
_EXIF_DATE_TAG = 36867
img = Image.open(fileName)
info = img._getexif()
c.debug('info is', info)
# If info != None, search for _EXIF_DATE_TAG
This works for lots of other images, but for my HEIC files info is None.
I found the question linked above, and tried the answer there (exifread):
import exifread
with open(filename, 'rb') as image:
    exif = exifread.process_file(image)
and exif here is None. So I wondered if the dates are encoded in the file in some other way, not EXIF, but these two tools seem to show otherwise:
http://exif.regex.info/exif.cgi shows:
EXIF Site
and exiftool shows:
exiftool
So I'm thoroughly confused! Am I seeing EXIF data in Windows Explorer and these tools? And if so, why is neither Python tool seeing it?
Thanks for any help!
Windows 10, Python 2.7.16. The photos were taken on an iPhone XS, if that's relevant.
Update: Converting the HEIC file to a jpg, both methods work fine.
On macOS you can use the native mdls (meta-data list, credit to Ask Dave Taylor) through a shell to get the data from HEIC. Note that calling a shell like this is not good programming, so use with care.
import datetime
import subprocess

class DateNotFoundException(Exception):
    pass

def get_photo_date_taken(filepath):
    """Gets the date taken for a photo through a shell."""
    cmd = "mdls '%s'" % filepath
    output = subprocess.check_output(cmd, shell=True)
    lines = output.decode("ascii").split("\n")
    for l in lines:
        if "kMDItemContentCreationDate" in l:
            datetime_str = l.split("= ")[1]
            return datetime.datetime.strptime(datetime_str, "%Y-%m-%d %H:%M:%S +0000")
    raise DateNotFoundException("No EXIF date taken found for file %s" % filepath)
It's a HEIC file issue: the format apparently isn't supported by those libraries, likely because of licensing difficulties.
When doing it with mdls, it is better (performance-wise) to pass it a whole batch of filenames, separated by spaces, at once.
I tested this with 1000 files: it works fine and gives a 20x performance gain.
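A sketch of that batched approach. The helper names are made up, and the parsed line format is an assumption based on typical mdls output; -name restricts mdls to the one attribute, and results are assumed to come back in argument order:

```python
import datetime
import subprocess  # used by run_mdls below (macOS only)

def build_mdls_cmd(filepaths):
    # one mdls invocation for the whole batch instead of one per file
    return ["mdls", "-name", "kMDItemContentCreationDate"] + list(filepaths)

def parse_creation_date(line):
    # parses a line like: kMDItemContentCreationDate = 2018-09-01 15:04:05 +0000
    datetime_str = line.split("= ")[1]
    return datetime.datetime.strptime(datetime_str, "%Y-%m-%d %H:%M:%S +0000")

def run_mdls(filepaths):
    out = subprocess.check_output(build_mdls_cmd(filepaths)).decode("utf-8")
    return [parse_creation_date(l) for l in out.split("\n")
            if "kMDItemContentCreationDate =" in l]
```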

Setting meta data tags with piexif

I am trying to set specific metadata for jpegs with the piexif module. I get the respective dicts out of piexif.load():
data = piexif.load(filename)
This returns {'GPS': {}, 'Exif': {}, 'Interop': {}, 'thumbnail': None, '1st': {}, '0th': {}}. (Maybe the answer is very obvious, but I am a little confused by these dicts.)
However, I would like to know where and what to write in order to set the focal length, camera maker, and model.
The reason: I want to use the Regard3D reconstruction GUI from http://www.regard3d.org/index.php/documentation/details/picture-set.
Therefore, I need to add the metadata to the jpegs and the camera data to the camera DB. This is needed for the triangulation step.
Thank you very much in advance
The tags are named differently in the standard than in the tutorial. They are supposed to be Make, Model, and FocalLength in the EXIF format.
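For reference, those three tags do exist in the EXIF standard: Make (tag 271) and Model (tag 272) live in the 0th IFD, while FocalLength (tag 37386, stored as a rational) lives in the Exif IFD. A plain-dict sketch of the nested structure piexif works with (the camera names are made up):

```python
# EXIF tag IDs from the EXIF specification
MAKE, MODEL, FOCAL_LENGTH = 271, 272, 37386

exif_dict = {
    "0th": {MAKE: b"SomeMaker", MODEL: b"SomeModel"},  # hypothetical values
    "Exif": {FOCAL_LENGTH: (35, 1)},  # rational (numerator, denominator): 35 mm
}
print(exif_dict["Exif"][FOCAL_LENGTH])  # → (35, 1)
```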
I believe there is a limit to what can be done with piexif; I was unable to find the camera make/model.
However, you should be able to access the focal length like this
(of course there are multiple ways to achieve this; this is just what I came up with):
import piexif
from PIL import Image

img = Image.open('input filename or path')

# Defining dictionary
exif_dict = piexif.load(img.info["exif"])
if piexif.ExifIFD.FocalLength in exif_dict["Exif"]:
    print("Current Focal Length is", exif_dict["Exif"][piexif.ExifIFD.FocalLength])
else:
    print("No Focal Length is set")

# Getting user input
fl = input("Enter a Focal Length: ")

# Applying the user variable; EXIF focal length is a rational (numerator, denominator)
exif_dict["Exif"][piexif.ExifIFD.FocalLength] = (int(fl), 1)

# Converting to bytes
exif_bytes = piexif.dump(exif_dict)

# Saving image
img.save("output filename or path", exif=exif_bytes)
There do look to be lens make and lens model tags, though:
piexif.ExifIFD.LensMake
piexif.ExifIFD.LensModel
Also, it might be worth noting that this value may not be changeable on some images, and sometimes updating the value does not seem to apply to the same field that is displayed in the Windows 10 UI. I'm unsure if this is a bug or a compatibility issue with the image I was testing.
You also may want to check the piexif docs as there might be more information on it as well.

Making a GIF from images in any order with Python

I'm trying to make a gif out of a sequence of png-format pictures with Python on Ubuntu 12.04. The pictures are in one folder, named lip_shapes1.png through lip_shapes11.png. I also have a list of the image names in the order in which I want them to appear in the gif. The list looks like this:
list = [lip_shapes1.png, lip_shapes4.png, lip_shapes11.png, lip_shapes3.png]
My problem is that I found this code:
import os
os.system('convert -loop 0 lip_shapes*.png anime.gif')
but it only makes the gif in the sorted order of the png filenames, while I want it to use any order I choose. Is that possible?
If anybody can help me, I'd really appreciate it.
Thanks in advance
PS: I also want to make a movie out of it. I tried this code (shapes is my list of image names):
s = Popen(['ffmpeg', '-f', 'image2', '-r', '24', '-i'] + shapes + ['-vcodec', 'mpeg4', '-y', 'movie.mp4'])
s.communicate()
but it gives me this in the terminal and doesn't work:
The ffmpeg program is only provided for script compatibility and will be removed
in a future release. It has been deprecated in the Libav project to allow for
incompatible command line syntax improvements in its replacement called avconv
(see Changelog for details). Please use avconv instead.
Input #0, image2, from 'shz8.jpeg':
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0.0: Video: mjpeg, yuvj420p, 266x212 [PAR 1:1 DAR 133:106], 24 tbr, 24 tbn, 24 tbc
shz8.jpeg is the first name on the list.
thanks
If you use subprocess.call, you can pass the filenames as a list of strings. This will avoid shell quotation issues that might arise if the filenames, for example, contained quotes or spaces.
import subprocess
shapes = ['lip_shapes1.png', 'lip_shapes4.png', 'lip_shapes11.png', 'lip_shapes3.png']
cmd = ['convert', '-loop', '0'] + shapes + ['anime.gif']
retcode = subprocess.call(cmd)
if retcode != 0:
    raise ValueError('Error {} executing command: {}'.format(retcode, cmd))
So you've got a list of images you want to convert into gif as a python list. You can sort it or arrange in any order you want. e.g
img_list = ['lip_shapes1.png', 'lip_shapes4.png' , 'lip_shapes11.png', 'lip_shapes3.png']
img_list.sort()
Please note that list should not be used as variable name, because it's a name of list type.
Then you can use this list in calling os.system(convert ...) e.g.
os.system('convert -loop 0 %s anime.gif' % ' '.join(img_list))
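If any filenames contain spaces or shell metacharacters, quote them before joining them into the shell command. A sketch using shlex (shlex.join needs Python 3.8+; shlex.quote on each name works on older versions):

```python
import shlex

img_list = ['lip shapes 1.png', 'lip_shapes4.png']
# shlex.join quotes only the names that need it
cmd = 'convert -loop 0 {} anime.gif'.format(shlex.join(img_list))
print(cmd)  # → convert -loop 0 'lip shapes 1.png' lip_shapes4.png anime.gif
```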
There are a few things to be sure to handle here. If you want to read a series of pngs from a folder, I recommend using a for loop that checks each file's extension (i.e. .png, .jpg, etc.). I wrote a blog post on how to easily do this (read about it here):
import os

image_file_names = []
for file_name in os.listdir(png_dir):
    if file_name.endswith('.png'):
        image_file_names.append(file_name)
sorted_files = sorted(image_file_names, key=lambda y: int(y.split('_')[1]))
This will put all the '.png' files into one list of file names. From there, you can loop through the files to customize the gif using the following:
import imageio

images = []
frame_length = 0.5  # seconds between frames
end_pause = 4       # seconds to stay on last frame

# loop through files, join them to the image array, and write to a GIF called 'test.gif'
for ii in range(0, len(sorted_files)):
    file_path = os.path.join(png_dir, sorted_files[ii])
    if ii == len(sorted_files) - 1:
        for jj in range(0, int(end_pause / frame_length)):
            images.append(imageio.imread(file_path))
    else:
        images.append(imageio.imread(file_path))
# the duration is the time spent on each image (1/duration is the frame rate)
imageio.mimsave('test.gif', images, 'GIF', duration=frame_length)
Here's an example produced by the method above:
https://engineersportal.com/blog/2018/7/27/how-to-make-a-gif-using-python-an-application-with-the-united-states-wind-turbine-database
