I want to generate a global weather satellite image using GOES-17, EUMETSAT (Meteosat SEVIRI), and GK-2A.
I want it in Plate Carrée coordinates (i.e., convert the GOES-17 NetCDF file to Plate Carrée).
First, using Satpy, I made a Plate Carrée image:
from satpy import Scene
from glob import glob
from pyresample import create_area_def
area_def = create_area_def("my_area_def", "+proj=eqc +datum=WGS84", resolution=2000)
goes17 = glob('./samplefile/*')
goes17_scene = Scene(reader="abi_l1b", filenames=goes17)
goes17_scene.load(['C13'])
new_scn = goes17_scene.resample(area_def)
# save to geotiffs
new_scn.save_datasets()
Using this method, I want to make images for the other satellites and merge them into one image file. But is there a simpler or easier way to generate a global weather image? My final goal is to generate a NumPy array of the global satellite image.
-- My entire code --
from satpy import Scene, MultiScene
from glob import glob
from pyresample import create_area_def
area_def = create_area_def("my_area_def", "+proj=eqc +datum=WGS84", resolution=2000)
goes17 = glob('E:/Global/GOES_17/OR_ABI-L1b-RadF-M6C13_G17_s20212130000319_e20212130009396_c20212130009445.nc')
goes17_scene = Scene(reader="abi_l1b", filenames=goes17)
goes17_scene.load(['C13'])
gk2a = glob('E:/Global/GK-2A/gk2a_ami_le1b_ir105_fd020ge_202108010000.nc')
gk2a_scene = Scene(reader="ami_l1b", filenames=gk2a)
gk2a_scene.load(['IR105'])
eumetsat = glob('E:/Global/EUMETSAT/MSG4-SEVI-MSG15-0100-NA-20210801000010.306000000Z-20210801001259-4774254.nat')
eumetsat_scene = Scene(reader='seviri_l1b_native', filenames=eumetsat)
eumetsat_scene.load(['IR_108'])
from satpy import MultiScene, DataQuery
mscn = MultiScene([goes17_scene, gk2a_scene, eumetsat_scene])
groups = {DataQuery(name='IR_group', wavelength=(10.35, 10.35, 10.8)): ['C13', 'IR105', 'IR_108']}
mscn.group(groups)
from pyresample.geometry import AreaDefinition
resampled = mscn.resample(area_def, reduce_data=False)
resampled.load(['IR_group'])
blended = resampled.blend()
blended.show(['IR_group'])
There are some ways to do this inside Satpy, but typically people have specific ways they want the data joined together, and that is a question you'll have to answer before you choose the code you want. First, though, you need to make a Scene for each separate satellite image you want in the final image and resample them all to the same grid. A DynamicAreaDefinition (as you're using now) is not good for this overall process, because each resampled Scene would end up on a different final area (based on the satellite data being resampled, which "froze" the DynamicAreaDefinition).
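For example, here is a minimal sketch of a fully specified (static) global Plate Carrée area that every Scene can then be resampled to; the extent values are the standard global bounds in metres for +proj=eqc and are my assumption, not from the original post:

from pyresample import create_area_def

# Fully specifying the extent makes create_area_def return a static
# AreaDefinition rather than a DynamicAreaDefinition, so every Scene
# lands on exactly the same global grid.
area_def = create_area_def(
    "global_eqc",
    "+proj=eqc +datum=WGS84",
    area_extent=(-20037508.34, -10018754.17, 20037508.34, 10018754.17),
    resolution=2000,
)
goes17_global = goes17_scene.resample(area_def)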
Your options for merging:
1. Satpy has a BackgroundCompositor where you can put one image on top of another. There is some documentation for creating a custom composite where you could make a composite like this. A series of these composites could be chained together to get the overall global composite you are looking for. You can put all the datasets in the same Scene to make things easier:
scn = Scene()
scn["C13"] = resampled_goes17_scene["C13"]
# ... and so on for the other sensors ...
scn.load(["custom_composite"])
2. Use the Satpy MultiScene: give it all of your resampled Scenes and run the "blend" method to join the images together. https://satpy.readthedocs.io/en/stable/multiscene.html#blending-scenes-in-multiscene
3. Use the xarray and dask .where functions along with a custom mask array to say where each image should appear in the overall image (see the sketch after this list). Some people do this sort of thing with the solar zenith angle to get a nice blend between the images rather than just overlaying one on top of the other.
4. Create individual geotiffs for each resampled sensor and use GDAL's gdal_merge.py utility to join them into one geotiff.
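As a hedged sketch of option 3 (the DataArray names goes17_da, gk2a_da, and msg_da are placeholders for your three resampled results, not names from the original code), this also shows how to get the final NumPy array:

import numpy as np

# the three 2D DataArrays are assumed to be resampled to the same global grid
combined = goes17_da.where(np.isfinite(goes17_da), gk2a_da)  # prefer GOES-17 pixels
combined = combined.where(np.isfinite(combined), msg_da)     # fill the remaining gaps with SEVIRI
global_array = combined.values  # the final global image as a NumPy array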
Related
I have a Landsat image and an image collection (3 images: static in time, but each partially overlapping the Landsat image) with one band, and I want to add this one band to the Landsat image.
In a traditional GIS/Python dataframe I would do an inner join based on geometry, but I can't figure out how this might be carried out in GEE.
Neither the image nor the collection share any bands for a simple join. From what I gather, a spatial join is similar to a within-buffer, so not what I need here. I've also tried Filter.contains() for the join, but this hasn't worked. I tried addBands() despite expecting it not to work, and it results in TypeError: 'ImageCollection' object is not callable:
#landsat image and image collection
geometry = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_5").filter(ee.Filter.eq('HYBAS_ID', 7050329490))
landsat = ee.ImageCollection('LANDSAT/LT04/C01/T1_TOA').filterBounds(geometry)
landsat = landsat.first()
imagecollection = ee.ImageCollection("projects/sat-io/open-datasets/GRWL/water_mask_v01_01").filterBounds(geometry)
#example of failure at addBands
combined = landsat.addBands(imagecollection('b1')) #b1 is the only band in the ic
Any help would be appreciated.
Edit: I can use a for loop to add each image individually, but even with .unmask() these cannot then be combined into a single band, because the lack of overlap in the image collection results in null values.
Not 100% sure this is what you're after, but you can simply mosaic() the 3 images into one image, and then combine the two datasets into a new ImageCollection.
UPDATE: Use addBands() instead:
// landsat image and image collection
var geometry = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_5").filter(ee.Filter.eq('HYBAS_ID', 7050329490))
var landsat = ee.ImageCollection('LANDSAT/LT04/C01/T1_TOA').filterBounds(geometry)
landsat = landsat.first()
var imagecollection = ee.ImageCollection("projects/sat-io/open-datasets/GRWL/water_mask_v01_01")
    .filterBounds(geometry)
    .mosaic()
    .rename('WaterMask')
print(imagecollection)
// combine both datasets
var combined = landsat.addBands(imagecollection)
print(combined)
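For completeness, a hedged sketch of the same approach in the Python earthengine-api (this assumes ee.Initialize() can be run with your credentials; the ee.Image cast around first() is my addition):

import ee
ee.Initialize()

geometry = ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_5").filter(
    ee.Filter.eq('HYBAS_ID', 7050329490))
landsat = ee.Image(ee.ImageCollection('LANDSAT/LT04/C01/T1_TOA').filterBounds(geometry).first())
water_mask = (ee.ImageCollection("projects/sat-io/open-datasets/GRWL/water_mask_v01_01")
              .filterBounds(geometry)
              .mosaic()
              .rename('WaterMask'))
# combine both datasets
combined = landsat.addBands(water_mask)
print(combined.bandNames().getInfo())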
I'm having some trouble trying to get a monthly average from Sentinel-3 images in... everything, really. Python, MATLAB; we are two people getting stuck on this problem.
The main reason is that these images' information is not in a single NetCDF file, neatly put together with coordinates and products. Instead, it is spread over separate files inside a one-day folder, as different .nc files each holding different information about one single satellite image. SNAP uses an XML manifest file to work with all of these separate .nc files, as I understand it.
Now, I thought it would be a good idea to try to merge and create/edit the .nc files so as to create a new daily .nc file which includes the chlorophyll, the coordinates and, might as well add it, time. Later on, I would merge these daily files so as to be able to compute a monthly mean with xarray. At least that was my idea, but I can't do the first part. It might have an obvious solution; however, here's what I tried, using the xarray module:
import os
import numpy as np
import xarray as xr
import netCDF4
from netCDF4 import Dataset
nc_folder = df_try.iloc[0] #folder where the image files are
#open dataset in xarray
nc_chl = xr.open_dataset(str(nc_folder['path']) + '/' + 'chl_nn.nc') #path to chlorophyll file
nc_chl
n_coord =xr.open_dataset(str(nc_folder['path'])+ '/'+ 'geo_coordinates.nc') #path to coordinates file
n_time = xr.open_dataset(str(nc_folder['path'])+ '/' + 'time_coordinates.nc') #path to time file
ds_grid = [[nc_chl], [n_coord], [n_time]]
combined = xr.combine_nested(ds_grid, concat_dim=[None, None])
combined #dataset with all but not recognizing coordinates
ds = combined.rename({'latitude': 'lat', 'longitude': 'lon', 'time_stamp' : 'time'}).set_coords(['lon', 'lat', 'time']) #dataset recognizing coordinates as coordinates
ds
which gives a dataset with:
Dimensions: columns: 4865, rows: 4091
3 coordinates (lat, lon and time) and the chl variable.
Now, it doesn't save to NetCDF4 (I tried, but there was an error), so I was also wondering if anyone knows of another way to compute an average. I have images from three years (beginning in 2017 and ending in 2019) that I would need to average in different ways (monthly, seasonally, ...). My main current problem is that the chlorophyll values are separate from the geographical coordinates, so directly using only the chlorophyll files would not work and would just make a mess.
Any suggestions?
Two options here:
Using xarray
In xarray you can add them as coordinates. It is a bit tricky as the coordinates in the geo_coordinates.nc file are multidimensional as well.
A possible solution is the following:
import netCDF4
import xarray as xr
import matplotlib.pyplot as plt
# paths
root = r'C:<your_path>\S3B_OL_2_WFR____20201015.SEN3\chl_nn.nc' #set path to chl file
coor = r'C:<your_path>\S3B_OL_2_WFR____20201015.SEN3\geo_coordinates.nc' #set path to the coordinates file
# loading xarray datasets
ds = xr.open_dataset(root)
olci_geo_coords = xr.open_dataset(coor)
# extracting coordinates
lat = olci_geo_coords.latitude.data
lon = olci_geo_coords.longitude.data
# assign coordinates to the chl dataset (needs to refer to both the dimensions of our dataset)
ds = ds.assign_coords({"lon":(["rows","columns"], lon), "lat":(["rows","columns"], lat)})
# clip the image (add your own coordinates)
area_of_interest = ds.where((10 < ds.lon) & (ds.lon < 12) & (58 < ds.lat) & (ds.lat < 59), drop=True)
# simple plot with coordinates as axis
plt.figure(figsize=(15,15))
area_of_interest["CHL_NN"].plot(x="lon",y="lat")
Even simpler is to add them as variables in a new dataset:
# path to the folder
root = r'C:<your_path>\S3B_OL_2_WFR____20201015.SEN3\*.nc' #glob pattern matching all the .nc files in the product folder
# create a dataset by combining nc files (coordinates will become variables)
ds = xr.open_mfdataset(root, combine='by_coords')
But in this case when you plot the image or clip it you cannot use the coordinates directly.
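For the averaging step itself, here is a hedged sketch; daily_datasets is a placeholder for a list of the per-day datasets built as above, and this assumes the scenes share the same pixel grid, which is only approximately true for swath data:

import xarray as xr

# stack the per-day datasets along the 'time' dimension,
# then average by calendar month
daily = xr.concat(daily_datasets, dim='time')
monthly_mean = daily.groupby('time.month').mean(dim='time')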
Using snappy
In python the snappy package is available and based on SNAP toolbox (which is implemented on JAVA). Check: https://senbox.atlassian.net/wiki/spaces/SNAP/pages/19300362/How+to+use+the+SNAP+API+from+Python
Once installed (unfortunately, snappy supports only Python 2.7, 3.3 or 3.4), you can use the available SNAP functions directly from Python to aggregate your satellite images and create weekly/monthly averages. You then do not need to merge the lon/lat NetCDF files, as you will work on the xfdumanifest.xml and SNAP will take care of that.
This is an example. It performs aggregation as well (a mean calculated over two chl .nc files):
from snappy import ProductIO, WKTReader
from snappy import jpy
from snappy import GPF
from snappy import HashMap
# setting the aggregator method
aggregator_average_config = jpy.get_type('org.esa.snap.binning.aggregators.AggregatorAverage$Config')
agg_avg_chl = aggregator_average_config('CHL_NN')
# creating the hashmap to store the parameters
parameters = HashMap()
# creating the aggregator array
aggregators = jpy.array('org.esa.snap.binning.aggregators.AggregatorAverage$Config', 1)
# adding my aggregators to the list
aggregators[0] = agg_avg_chl
# set parameters
# output directory
dir_out = 'level-3_py_dynamic.dim'
parameters.put('outputFile', dir_out)
# number of rows (directly linked with resolution)
parameters.put('numRows', 66792) # to have about 300 meters spatial resolution
# aggregators list
parameters.put('aggregators', aggregators)
# Region to clip the aggregation on
wkt="POLYGON ((8.923302175377243 59.55648108694149, 13.488748662344074 59.11388968719029,12.480488185001589 56.690625338725155, 8.212366327767503 57.12425256476263,8.923302175377243 59.55648108694149))"
geom = WKTReader().read(wkt)
parameters.put('region', geom)
# Source product path
path_15 = r"C:<your_path>\S3B_OL_2_WFR____20201015.SEN3\xfdumanifest.xml"
path_16 = r"C:\<your_path>\S3B_OL_2_WFR____20201016.SEN3\xfdumanifest.xml"
path = path_15 + "," + path_16
parameters.put('sourceProductPaths', path)
#result = GPF.createProduct('Binning', parameters, (source_p1, source_p2))
# create results
result = GPF.createProduct('Binning', parameters) #to be used with product paths specified in the parameters hashmap
print("results stored in: {0}".format(dir_out) )
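As a small follow-up not in the original answer, the binned product can be read back with ProductIO to check the result (the exact band names depend on the binning setup, so treat this as a sketch):

# read the Level-3 result back and list its band names
product = ProductIO.readProduct(dir_out)
print(list(product.getBandNames()))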
I am quite new and interested in the topic and would be happy to hear your/other solutions!
I know this topic has been asked about before, but as I'm new to Python I couldn't fully understand how to do it, and I would like to get an explanation.
I have an ndarray cube (a stack of images from the same location, with the same size and shape, which differ in the wavelength at which they were taken).
I want to convert this image into a pandas dataframe in order to be able to iterate through specific rows.
I'm really confused because of the big number of columns I have: there are 1024 columns in each image, and that confuses me when I need to index those images.
My end goal is to get the images in the structure of a dataframe, so maybe it means having a kind of image collection in which I can iterate over the rows of each image.
This is the code I have written until now:
import spectral.io.envi as envi
import matplotlib.pyplot as plt
import os
from spectral import *
import numpy as np
#Create the image path
#the path
img_path = r'N:\this\is\a\path\capture'
cali_path=r'N:\location\Image_Python'
#the specific file
img_file = 'emptyname_2019-08-13_11-05-46.hdr'
img_dark= 'DARKREF_emptyname_2019-08-13_11-05-46.hdr'
cali_hdr= 'Radiometric_1x1.hdr'
cali_img = 'Radiometric_1x1.cal'
img= envi.open(os.path.join(img_path,img_file)).load()
img_dark= envi.open(os.path.join(img_path,img_dark)).load()
img_cali= envi.open(os.path.join(cali_path,cali_hdr), image = os.path.join(cali_path,cali_img)).load()
cali_shape=img_cali.shape
dark_shape=img_dark.shape
img_shape=img.shape
print('shape image:',img_shape,'shape dark:',dark_shape,'calibration shape:',cali_shape)
wavelength=[float(i) for i in img.metadata['wavelength']]
#get the exposure time
tint=float(img.metadata['tint'])
print(tint)
#goal: need to subtract the dark reference from the DN image.
#step 1: for each column in the dark reference, calculate the mean, then subtract this mean row from the DN image.
#averaging along axis=0 calculates the mean over each whole column, so we get a single row.
dark_1024=img_dark.mean(axis=0)
from numpy import asarray
import pandas as pd
img_np=asarray(img)
dark_np=asarray(img_dark)
cali_np=asarray(img_cali)
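A minimal sketch of one way to finish this (my suggestion, not from the original post): subtract the dark reference and flatten the (rows, columns, bands) cube into a long-format dataframe with one row per pixel and one column per wavelength, so specific pixel rows can be selected and iterated over.

# subtract the per-column dark mean from every row of the image
img_corrected = img_np - dark_1024

# flatten to a dataframe: one row per pixel, one column per wavelength
n_rows, n_cols, n_bands = img_corrected.shape
df = pd.DataFrame(img_corrected.reshape(n_rows * n_cols, n_bands), columns=wavelength)
df['row'] = np.repeat(np.arange(n_rows), n_cols)
df['col'] = np.tile(np.arange(n_cols), n_rows)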
I have a series of unreferenced aerial images that I would like to georeference using python. The images are identical spatially (they are actually frames extracted from a video), and I obtained ground control points for them by manually georeferencing one frame in ArcMap. I would like to apply the ground control points I obtained to all the subsequent images, and as a result obtain a geo-tiff or a jpeg file with a corresponding world file (.jgw) for each processed image. I know this is possible to do using arcpy, but I do not have access to arcpy, and would really like to use a free open source module if possible.
My coordinate system is NZGD2000 (EPSG 2193), and here is the table of control points I wish to apply to my images (image x, image y, map x, map y):
176.412984, -310.977264, 1681255.524654, 6120217.357425
160.386905, -141.487145, 1681158.424227, 6120406.821253
433.204947, -310.547238, 1681556.948690, 6120335.658359
Here is an example image: https://imgur.com/a/9ThHtOz
I've read a lot of information on GDAL and rasterio, but I don't have any experience with them, and am failing to adapt bits of code I found to my particular situation.
Rasterio attempt:
import cv2
from rasterio.warp import reproject
from rasterio.control import GroundControlPoint
from fiona.crs import from_epsg
img = cv2.imread("Example_image.jpg")
# Creating ground control points (not sure if I got the order of variables right):
points = [(GroundControlPoint(176.412984, -310.977264, 1681255.524654, 6120217.357425)),
(GroundControlPoint(160.386905, -141.487145, 1681158.424227, 6120406.821253)),
(GroundControlPoint(433.204947, -310.547238, 1681556.948690, 6120335.658359))]
# The function requires a parameter "destination", but I'm not sure what to put there.
# I'm guessing this may not be the right function to use
reproject(img, destination, src_transform=None, gcps=points, src_crs=from_epsg(2193),
src_nodata=None, dst_transform=None, dst_crs=from_epsg(2193), dst_nodata=None,
src_alpha=0, dst_alpha=0, init_dest_nodata=True, warp_mem_limit=0)
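One hedged pointer for the rasterio route (my suggestion, not from the original post, and it assumes the GCPs really are in rasterio's (row, col, x, y) order): rasterio.transform.from_gcps can fit an affine transform from the control points, which can then be written into a new raster.

import rasterio
from rasterio.crs import CRS
from rasterio.transform import from_gcps

# fit an affine transform through the control points defined above
transform = from_gcps(points)

# write the image out with the fitted transform and CRS attached
with rasterio.open(
    "Example_image_georef.tif", "w", driver="GTiff",
    height=img.shape[0], width=img.shape[1], count=img.shape[2],
    dtype=img.dtype, crs=CRS.from_epsg(2193), transform=transform,
) as dst:
    for band in range(img.shape[2]):
        dst.write(img[:, :, band], band + 1)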
GDAL attempt:
from osgeo import gdal, osr
inputImage = "Example_image.jpg"
outputImage = "image_gdal.jpg"
dataset = gdal.Open(inputImage)
I = dataset.ReadAsArray(0,0,dataset.RasterXSize,dataset.RasterYSize)
outdataset = gdal.GetDriverByName('GTiff')
output_SRS = osr.SpatialReference()
output_SRS.ImportFromEPSG(2193)
outdataset = outdataset.Create(outputImage,dataset.RasterXSize,dataset.RasterYSize,I.shape[0])
for nb_band in range(I.shape[0]):
outdataset.GetRasterBand(nb_band+1).WriteArray(I[nb_band,:,:])
# Creating ground control points (not sure if I got the order of variables right):
gcp_list = []
gcp_list.append(gdal.GCP(176.412984, -310.977264, 1681255.524654, 6120217.357425))
gcp_list.append(gdal.GCP(160.386905, -141.487145, 1681158.424227, 6120406.821253))
gcp_list.append(gdal.GCP(433.204947, -310.547238, 1681556.948690, 6120335.658359))
outdataset.SetProjection(output_SRS.ExportToWkt())
wkt = outdataset.GetProjection()
outdataset.SetGCPs(gcp_list,wkt)
outdataset = None
I don't quite know how to make the above code work, and I would really appreciate any help with this.
I ended up reading a book "Geoprocessing with Python" and finally found a solution that worked for me. Here is the code I adapted to my problem:
import shutil
from osgeo import gdal, osr
orig_fn = 'image.tif'
output_fn = 'output.tif'
# Create a copy of the original file and save it as the output filename:
shutil.copy(orig_fn, output_fn)
# Open the output file for writing:
ds = gdal.Open(output_fn, gdal.GA_Update)
# Set spatial reference:
sr = osr.SpatialReference()
sr.ImportFromEPSG(2193) #2193 refers to the NZTM2000, but can use any desired projection
# Enter the GCPs
# Format: [map x-coordinate(longitude)], [map y-coordinate (latitude)], [elevation],
# [image column index(x)], [image row index (y)]
gcps = [gdal.GCP(1681255.524654, 6120217.357425, 0, 176.412984, 310.977264),
gdal.GCP(1681158.424227, 6120406.821253, 0, 160.386905, 141.487145),
gdal.GCP(1681556.948690, 6120335.658359, 0, 433.204947, 310.547238)]
# Apply the GCPs to the open output file:
ds.SetGCPs(gcps, sr.ExportToWkt())
# Close the output file in order to be able to work with it in other programs:
ds = None
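Since the frames are spatially identical, the same steps can be looped over every extracted frame; a hedged sketch (the 'frames' folder and filename pattern are placeholders):

import glob
import shutil
from osgeo import gdal, osr

sr = osr.SpatialReference()
sr.ImportFromEPSG(2193)
gcps = [gdal.GCP(1681255.524654, 6120217.357425, 0, 176.412984, 310.977264),
        gdal.GCP(1681158.424227, 6120406.821253, 0, 160.386905, 141.487145),
        gdal.GCP(1681556.948690, 6120335.658359, 0, 433.204947, 310.547238)]

for orig_fn in glob.glob('frames/*.tif'):
    output_fn = orig_fn.replace('.tif', '_georef.tif')
    shutil.copy(orig_fn, output_fn)
    ds = gdal.Open(output_fn, gdal.GA_Update)
    ds.SetGCPs(gcps, sr.ExportToWkt())
    ds = None  # close the file so the GCPs are flushed to disk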
For your GDAL method, just using gdal.Warp with the outdataset should work, e.g.
outdataset.SetProjection(output_SRS.ExportToWkt())
wkt = outdataset.GetProjection()
outdataset.SetGCPs(gcp_list, wkt)
gdal.Warp("output_name.tif", outdataset, dstSRS='EPSG:2193', format='GTiff')
This will create a new file, output_name.tif.
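To also get the JPEG + world file mentioned in the question, a hedged sketch using gdal.Translate; the JPEG driver's WORLDFILE creation option writes a .wld sidecar, which is equivalent to the .jgw format (renaming it works):

from osgeo import gdal

# convert the warped GeoTIFF to a JPEG with an accompanying world file
gdal.Translate("output_name.jpg", "output_name.tif",
               format="JPEG", creationOptions=["WORLDFILE=YES"])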
As an addition to @Kat's answer: to avoid quality loss of the original image file and to set the nodata value to 0, the following can be used.
#Load the original file
src_ds = gdal.Open(orig_fn)
#Create tmp dataset saved in memory
driver = gdal.GetDriverByName('MEM')
tmp_ds = driver.CreateCopy('', src_ds, strict=0)
#
# ... setting GCP....
#
# Setting nodata for all bands
for i in range(1, tmp_ds.RasterCount + 1):
    tmp_ds.GetRasterBand(i).SetNoDataValue(0)
# Saving as file
driver = gdal.GetDriverByName('GTiff')
ds = driver.CreateCopy(output_fn, tmp_ds, strict=0)
I am trying to splice a fits array based on the latitudes provided from the Header. However, I cannot seem to do so with my knowledge of Python and the documentation of astropy. The code I have is something like this:
from astropy.io import fits
import numpy as np
Wise1 = fits.open('Image1.fits')
im1 = Wise1[0].data
im1 = np.where(im1 > latitude1, 0, im1)
newhdu = fits.PrimaryHDU(im1)
newhdulist = fits.HDUList([newhdu])
newhdulist.writeto('1b1_Bg_Removed_2.fits')
Here latitude1 would be a value in degrees, read from the header. So there are two things I need to accomplish:
How do I read the header to get the Galactic latitudes?
How do I splice the array in such a way that it only contains values for the range of latitudes, with everything else being 0?
I think by "splice" you mean "cut out" or "crop", based on the example you've shown.
astropy.nddata has a routine for world-coordinate-system-based (i.e., lat/lon or ra/dec) cutouts
However, in the simple case you're dealing with, you just need the coordinates of each pixel. Do this by making a WCS:
import numpy as np
from astropy import wcs

w = wcs.WCS(Wise1[0].header)
# np.indices returns (row, column) index arrays, while wcs_pix2world expects
# (x, y) = (column, row), so unpack them accordingly
yy, xx = np.indices(im1.shape)
lon, lat = w.wcs_pix2world(xx, yy, 0)
# keep the array shape, zeroing out everything at or below the cutoff latitude
newim = np.where(lat > my_lowest_latitude, im1, 0)
But if you want to preserve the header information, you're much better off using the cutout tool, since you then do not have to manually manage this.
from astropy.nddata import Cutout2D
from astropy import coordinates
from astropy import units as u
# example coordinate - you'll have to figure one out that's in your map
center = coordinates.SkyCoord(mylon*u.deg, mylat*u.deg, frame='fk5')
# then make an array cutout
co = Cutout2D(im1, center, size=[0.1,0.2]*u.arcmin, wcs=w)
# create a new FITS HDU
hdu = fits.PrimaryHDU(data=co.data, header=co.wcs.to_header())
# write to disk
hdu.writeto('cropped_file.fits')
An example use case is in the astropy documentation.