geemap: Average ImageCollection bands over time period and extract values to points - python

I'm trying to extract band values from a Daymet image collection (V4) at specific sampling locations, averaged over the course of a year (i.e., the mean of the daily values). I'm using the Google Earth Engine Python API via geemap to do so.
Currently, I'm able to extract all of the daily values for each band over the time period using the following code:
import os
import ee
import geemap

ee.Initialize()
daymet = ee.ImageCollection('NASA/ORNL/DAYMET_V4') \
    .filter(ee.Filter.date('2000-01-01', '2000-12-31'))
DaymetImage = daymet.toBands()  # convert to ee.Image in order to use extract_values_to_points

# export data
out_dir = os.path.expanduser('~/Downloads')
out_csv = os.path.join(out_dir, 'daymet2000.csv')
geemap.extract_values_to_points(fc_coords, DaymetImage, out_csv)
# Note: fc_coords contains the sampling points (lat/long) and other variables of interest
While this has all of the information I need, the resulting .csv file is bulky and would require a lot of work to clean up. I would like to directly extract the average of the daily values for each band in the ImageCollection. The code I have so far is:
ee.Initialize()
daymet = ee.ImageCollection('NASA/ORNL/DAYMET_V4') \
    .filter(ee.Filter.date('2000-01-01', '2000-12-31')) \
    .mean()
And this is where I get stuck. The .mean() call turns the ee.ImageCollection into an ee.Image; however, when I try to run extract_values_to_points on this Image, I get the following error message: EEException: Image.reduceRegions: The default WGS84 projection is invalid for aggregations. Specify a scale or crs & crs_transform.
As it is now an ee.Image, the toBands() method (which I used earlier on the ImageCollection) no longer applies. Is there an easy way to extract the average band values from this ImageCollection?
Thanks.
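One way around the error, sketched below under assumptions rather than as a definitive answer: the message is simply asking for an explicit scale when aggregating, and ee.Image.sampleRegions accepts one directly. The 1000 m scale is an assumption based on Daymet's roughly 1 km grid, and geemap.ee_export_vector is used here as one way to write the sampled points to CSV.
import os
import ee
import geemap

ee.Initialize()

# Annual mean of all Daymet V4 bands for 2000 (a single ee.Image).
daymet_mean = ee.ImageCollection('NASA/ORNL/DAYMET_V4') \
    .filter(ee.Filter.date('2000-01-01', '2000-12-31')) \
    .mean()

# sampleRegions takes an explicit scale, which is what the reduceRegions
# error asks for. 1000 m is assumed because Daymet is gridded at ~1 km.
samples = daymet_mean.sampleRegions(
    collection=fc_coords,  # the same point FeatureCollection as above
    scale=1000,
    geometries=True)

# Write one row per point, with one column per (averaged) band.
out_csv = os.path.join(os.path.expanduser('~/Downloads'), 'daymet2000_mean.csv')
geemap.ee_export_vector(samples, filename=out_csv)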

Related

How to average a 3-D Array Into 2-D Array Python

I would like to take a temperature variable from a netcdf file in python and average over all of the satellite's scans.
The temperature variable is given as:
tdisk = file.variables['tdisk'][:,:,:] # Disk Temp(nscans,nlons,nlats)
The shape of the tdisk array is 68,52,46. The satellite makes 68 scans per day. The longitude and latitude variables are given as:
lats = file.variables['latitude'][:,:] # Latitude(nlons,nlats)
lons = file.variables['longitude'][:,:] # Longitude(nlons,nlats)
These have sizes of 52,46. I would like to average each nscan of temperature together to get a daily mean, so the temperature array becomes 52,46. I've seen ways to stack the arrays and concatenate them, but I would like a mean value. Eventually I am looking to make a contour plot with (x=longitude, y=latitude, and z=temp).
Is this possible to do? Thanks for any help.
If you are using Xarray, you can do this using DataArray.mean:
import xarray as xr
# open netcdf file
ds = xr.open_dataset('file.nc')
# take the mean of the tdisk variable
da = ds['tdisk'].mean(dim='nscans')
# make a contour plot
da.plot.contour(x='longitude', y='latitude')
Based on the question, you seem to want a temporal mean rather than a daily mean, since you only have one day of data. So the following will probably work:
ds.mean("time")

Creating a new dataset from a large dataset with xarray

I have a large dataset with hourly data for an entire year. What I want to do is create a new dataset with specific variables at specific distances from a point source and use all the data to create a box-and-whiskers plot.
The dataset has time, lon and lat and a concentration variable with multiple dimensions:
Concentration[hours,lat, lon]
I want to create a dataset that loops through all of the times for different lat and lon and produces a concentration output for all times at each of these locations, which I can then use to create a box-and-whisker plot and show the decrease of atmospheric concentration away from a point source. I know the specific grid cells I am interested in but need help setting up the script.
EDIT:
I cropped the Global dataset and this is what the output currently looks like:
time: (8761), latitude: (30), longitude: (30)
I tried using a for loop, but it would not allow me to loop over lat/lon...
for i in range(8761):
    print(Conc[i, :, :])
This lets me loop over all times and see the concentration at every grid cell, but instead of printing I want to create a new dataset, and only loop through certain grid cells.
I want a list that provides the 8761 concentration values for each grid cell that I specify, keeping all the data in one dataset so I can make a box plot from it.
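A minimal sketch of one way to do this with xarray and pandas, assuming the cropped file is called cropped_conc.nc, the concentration variable is named Conc, and the grid cells of interest are given as integer (latitude, longitude) index pairs; all of these names and values are placeholders taken from the question's description:
import matplotlib.pyplot as plt
import pandas as pd
import xarray as xr

ds = xr.open_dataset('cropped_conc.nc')   # hypothetical file name
points = [(5, 10), (5, 12), (7, 20)]      # hypothetical grid-cell indices

# Pull out the full 8761-step time series for each chosen grid cell and
# collect them as columns of a single DataFrame (one column per cell).
df = pd.DataFrame(
    {f'cell_{i}_{j}': ds['Conc'].isel(latitude=i, longitude=j).values
     for i, j in points},
    index=ds['time'].values)

# Box-and-whisker plot of the concentration at each selected cell.
df.boxplot()
plt.ylabel('Concentration')
plt.show()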

How to quickly retrieve radiances from satellite netcdf files?

I would like to retrieve GOES-16 ABI radiance data for predetermined locations (about 10,000 points per individual image) for an entire year. Each day has ~100 individual images. I have all required ABI data (in netCDF format) on disk already. The points I'd like to extract are given in terms of the row and column of the netCDF array, so in principle, retrieving the correct radiances is an array indexing operation.
However, all my attempts at doing this have been painfully slow (on the order of 10+ minutes for a single day). I've been trying to use xarray, as follows.
import xarray as xr
import pandas as pd

df = pd.read_csv("selected_pixels/20190101.csv")
ds = xr.open_mfdataset('noaa-goes16/ABI-L2-MCMIPF/2019/001/*/*.nc', parallel=True,
                       combine='nested', concat_dim='t')
t = xr.DataArray(df.time_id.values, dims="s")
x = xr.DataArray(df.col.values, dims="s")
y = xr.DataArray(df.row.values, dims="s")
a_df = ds[[f"CMI_C{str(i).rjust(2,'0')}" for i in range(1, 17)]].isel(t=t, x=x, y=y).to_dataframe()
I'm fortunate enough to have multiple processors at my disposal: I would highly appreciate any suggestions to speed up this operation.
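One direction that may speed this up considerably, sketched under assumptions rather than tested: skip the lazy multi-file dataset and apply NumPy fancy indexing to each file individually, which avoids most of the open_mfdataset/dask graph overhead. This assumes time_id in the CSV is the position of the corresponding file in the sorted file list and that row/col are valid array indices; the per-file loop is also straightforward to parallelise with multiprocessing across your processors.
import glob
import pandas as pd
import xarray as xr

df = pd.read_csv("selected_pixels/20190101.csv")
files = sorted(glob.glob("noaa-goes16/ABI-L2-MCMIPF/2019/001/*/*.nc"))
bands = [f"CMI_C{i:02d}" for i in range(1, 17)]

frames = []
for t, group in df.groupby("time_id"):
    # Open one image at a time and pull only the requested pixels
    # with fancy indexing on the in-memory arrays.
    with xr.open_dataset(files[t]) as ds:
        rows = group.row.values
        cols = group.col.values
        data = {b: ds[b].values[rows, cols] for b in bands}
    frames.append(pd.DataFrame(data, index=group.index))

result = pd.concat(frames).sort_index()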

Merging xarray datasets with same extent but different spatial resolution

I have one dataset of satellite-based solar-induced fluorescence (SIF) and one of modeled precipitation. I want to compare precipitation to SIF on a per-pixel basis in my study area. My two datasets cover the same area but at slightly different spatial resolutions. I can successfully plot these values across time and compare them against each other when I take the mean for the whole area, but I'm struggling to create a scatter plot of this on a per-pixel basis.
Honestly, I'm not sure this is the best way to compare these two values when looking for the impact of precipitation on SIF, so I'm open to ideas for different approaches. As for merging the data, I'm currently using xr.combine_by_coords, but it is giving me an error which I describe below. I could also do this by converting the NetCDF files into GeoTIFFs and then using rasterio to warp them, but that seems like an inefficient way to do this comparison. Here is what I have thus far:
import netCDF4
import numpy as np
import dask
import xarray as xr
rainy_bbox = np.array([
    [-69.29519955115512, -13.861261028444734],
    [-69.29519955115512, -12.384786628185896],
    [-71.19583431678012, -12.384786628185896],
    [-71.19583431678012, -13.861261028444734]])
max_lon_lat = np.max(rainy_bbox, axis=0)
min_lon_lat = np.min(rainy_bbox, axis=0)
# this dataset is available here: ftp://fluo.gps.caltech.edu/data/tropomi/gridded/
sif = xr.open_dataset('../data/TROPO_SIF_03-2018.nc')
# the dataset is global so subset to my study area in the Amazon
rainy_sif_xds = sif.sel(lon=slice(min_lon_lat[0], max_lon_lat[0]), lat=slice(min_lon_lat[1], max_lon_lat[1]))
# this data can all be downloaded from NASA Goddard here either manually or with wget but you'll need an account on https://disc.gsfc.nasa.gov/: https://pastebin.com/viZckVdn
imerg_xds = xr.open_mfdataset('../data/3B-DAY.MS.MRG.3IMERG.201803*.nc4')
# spatial subset
rainy_imerg_xds = imerg_xds.sel(lon=slice(min_lon_lat[0], max_lon_lat[0]), lat=slice(min_lon_lat[1], max_lon_lat[1]))
# I'm not sure the best way to combine these datasets but am trying this
combo_xds = xr.combine_by_coords([rainy_imerg_xds, rainy_xds])
Currently I'm getting a seemingly unhelpful RecursionError: maximum recursion depth exceeded in comparison on that final line. When I add the argument join='left' then the data from the rainy_imerg_xds dataset is in combo_xds and when I do join='right' the rainy_xds data is present, and if I do join='inner' no data is present. I assumed there was some internal interpolation with this function but it appears not.
This documentation from xarray outlines quite simply the solution to this problem. xarray allows you to interpolate in multiple dimensions and specify another Dataset's x and y dimensions as the output dimensions. So in this case it is done with
# interpolation based on http://xarray.pydata.org/en/stable/interpolation.html
# interpolation can't be done across the chunked dimension so we have to load it all into memory
rainy_sif_xds.load()
#interpolate into the higher resolution grid from IMERG
interp_rainy_sif_xds = rainy_sif_xds.interp(lat=rainy_imerg_xds["lat"], lon=rainy_imerg_xds["lon"])
# visualize the output
rainy_sif_xds.dcSIF.mean(dim='time').hvplot.quadmesh('lon', 'lat', cmap='jet', geo=True, rasterize=True, dynamic=False, width=450).relabel('Initial') +\
interp_rainy_sif_xds.dcSIF.mean(dim='time').hvplot.quadmesh('lon', 'lat', cmap='jet', geo=True, rasterize=True, dynamic=False, width=450).relabel('Interpolated')
# Now that the coordinates match, convert the IMERG time index from a CFTimeIndex
# to a DatetimeIndex so it can be merged with the SIF data, which uses datetime.
rainy_imerg_xds['time'] = rainy_imerg_xds.indexes['time'].to_datetimeindex()
# now the merge can easily be done with
merged_xds = xr.combine_by_coords([rainy_imerg_xds, interp_rainy_sif_xds], coords=['lat', 'lon', 'time'], join="inner")
# now visualize the two datasets together; multiply SIF by 30 because the values are so low
merged_xds.HQprecipitation.rolling(time=7, center=True).sum().mean(dim=('lat', 'lon')).hvplot().relabel('Precip') * \
(merged_xds.dcSIF.mean(dim=('lat', 'lon'))*30).hvplot().relabel('SIF')

Heat map visualizing touch input on smartphone (weighted 2d binning, histogram)

I have a dataset where each sample consists of x- and y-position, timestamp and a pressure value of touch input on a smartphone. I have uploaded the dataset here (OneDrive): data.csv
It can be read by:
import pandas as pd
df = pd.read_csv('data.csv')
Now, I would like to create a heat map visualizing the pressure distribution in the x-y space.
I envision a heat map which looks like the left or right image:
For a heat map of the spatial positions, an approach similar to the one given here could be used. For a heat map of the pressure values, the problem is that there are three dimensions, namely the x- and y-position and the pressure.
I'm happy about every input regarding the creation of the heat map.
There are several ways data can be binned. One is simply by the number of events. Functions like numpy.histogram2d or matplotlib's hist2d let you assign a weight to each data point to control how much each event contributes.
But there is a more general histogram function that might be useful in your case: scipy.stats.binned_statistic_2d.
Using the keyword argument statistic, you can pick how the value of each bin is calculated from the values that lie within it:
mean
std
median
count
sum
min
max
or a user defined function
I guess in your case mean or median might be a good solution.
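A minimal sketch of how that could look, assuming the CSV columns are named x, y and pressure (the actual column names may differ):
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import binned_statistic_2d

df = pd.read_csv('data.csv')

# Bin the touch positions on a 50x50 grid and compute the mean pressure
# of the samples that fall into each bin.
stat, x_edges, y_edges, _ = binned_statistic_2d(
    df['x'], df['y'], df['pressure'], statistic='mean', bins=50)

# binned_statistic_2d returns the statistic with x along axis 0,
# so transpose before plotting.
plt.pcolormesh(x_edges, y_edges, stat.T, cmap='inferno')
plt.colorbar(label='mean pressure')
plt.xlabel('x position')
plt.ylabel('y position')
plt.show()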
