CSV to raster in Python

I have a 3-column CSV (Lon, Lat, Ref) with 63,000 rows, and I would like to convert the "Ref" values to a raster. The points (x, y) are plotting fine; I want to plot the "Ref" column as well, add contours, and color-fill it. Thanks
Data:
Lon,Lat, Ref
-115.0377,51.9147,0
-115.0679,51.9237,0
-115.0528,51.9237,0
-115.0377,51.9237,0
-115.1134,51.9416,0
-115.0982,51.9416,0
-115.0831,51.9416,0
-115.1437,51.9596,6
-115.1285,51.9596,6
-115.1588,51.9686,6
-115.1437,51.9686,10.5
-115.1285,51.9686,10.5
-115.1134,51.9686,8
-115.1891,51.9776,7.5
-115.174,51.9776,7.5
-115.1588,51.9776,7.5
-115.1437,51.9776,8
-115.1285,51.9776,8
-115.1134,51.9776,8
-115.1891,51.9866,7
-115.174,51.9866,7
-115.1588,51.9866,7
-115.1437,51.9866,0
-115.1285,51.9866,0
-115.1134,51.9866,0
-115.1891,51.9956,7
-113.1143,52.2385,3.5
-113.0992,52.2475,3.5
-113.084,52.2475,3.5
-113.0689,52.2475,5.5
-113.0537,52.2475,5.5
Code:
import pandas as pd
import geopandas
from shapely.geometry import Point
import fiona
import matplotlib.pyplot as plt
df=pd.read_csv('name.csv')
df1=df.interpolate()
geometry=[Point(xyz) for xyz in zip(df1.ix[:,0], df1.ix[:,1], df1.ix[:,2])]
df3=geopandas.GeoDataFrame(df1, geometry=geometry)
df3.plot()
plt.savefig('raster.tiff')
Wanted result: (image not included)

If you want to plot points from GeoPandas based on the "Ref" column, you don't need it as a z coordinate.
import pandas as pd
import geopandas
from shapely.geometry import Point
import matplotlib.pyplot as plt
df = pd.read_csv('name.csv')
geometry = [Point(xy) for xy in zip(df.iloc[:, 0], df.iloc[:, 1])]
gdf = geopandas.GeoDataFrame(df, geometry=geometry)
gdf.plot(column=' Ref')  # note the leading space, matching the "Lat, Ref" header in the CSV
plt.savefig('raster.tiff')
You don't even need interpolate().
However, if you want to convert your vector point dataset to a raster GeoTIFF, plot() is not the right way to do it. I would go for gdal.Grid(), as explained here: [Python - gdal.Grid() correct use][1]
EDIT
Using gdal.Grid() like this, I am able to generate a TIFF based on the sample data you provided.
import os
from osgeo import gdal

dir_with_csvs = r"/home/panda"
os.chdir(dir_with_csvs)

def find_csv_filenames(path_to_dir, suffix=".csv"):
    filenames = os.listdir(path_to_dir)
    return [filename for filename in filenames if filename.endswith(suffix)]

csvfiles = find_csv_filenames(dir_with_csvs)
# write an OGR VRT wrapper for each CSV so gdal.Grid can read it as a point layer
for fn in csvfiles:
    vrt_fn = fn.replace(".csv", ".vrt")
    lyr_name = fn.replace('.csv', '')
    out_tif = fn.replace('.csv', '.tiff')
    with open(vrt_fn, 'w') as fn_vrt:
        fn_vrt.write('<OGRVRTDataSource>\n')
        fn_vrt.write('\t<OGRVRTLayer name="%s">\n' % lyr_name)
        fn_vrt.write('\t\t<SrcDataSource>%s</SrcDataSource>\n' % fn)
        fn_vrt.write('\t\t<GeometryType>wkbPoint</GeometryType>\n')
        fn_vrt.write('\t\t<GeometryField encoding="PointFromColumns" x="Lon" y="Lat" z="Ref"/>\n')
        fn_vrt.write('\t</OGRVRTLayer>\n')
        fn_vrt.write('</OGRVRTDataSource>\n')

output = gdal.Grid('outcome.tif', 'name.vrt')
# below using your settings - I don't have a sample large enough to properly test it,
# but it generates a file as well
output2 = gdal.Grid('outcome2.tif', 'name.vrt', algorithm='invdist:power=2.0:smoothing=1.0')
Do you have any particular reason to use gdal via shell?
[1]: https://gis.stackexchange.com/questions/254330/python-gdal-grid-correct-use

@ctvtkar, I am attaching code here using gdal. When I run it, the file.vrt gets created, but not the .tif file. The error I get is: gdal_grid: not found. GDAL is installed.
Code:
import subprocess
import os

dir_with_csvs = r"/home/panda"
os.chdir(dir_with_csvs)

def find_csv_filenames(path_to_dir, suffix=".csv"):
    filenames = os.listdir(path_to_dir)
    return [filename for filename in filenames if filename.endswith(suffix)]

csvfiles = find_csv_filenames(dir_with_csvs)
for fn in csvfiles:
    vrt_fn = fn.replace(".csv", ".vrt")
    lyr_name = fn.replace('.csv', '')
    out_tif = fn.replace('.csv', '.tiff')
    with open(vrt_fn, 'w') as fn_vrt:
        fn_vrt.write('<OGRVRTDataSource>\n')
        fn_vrt.write('\t<OGRVRTLayer name="%s">\n' % lyr_name)
        fn_vrt.write('\t\t<SrcDataSource>%s</SrcDataSource>\n' % fn)
        fn_vrt.write('\t\t<GeometryType>wkbPoint</GeometryType>\n')
        fn_vrt.write('\t\t<GeometryField encoding="PointFromColumns" x="Lon" y="Lat" z="Ref"/>\n')
        fn_vrt.write('\t</OGRVRTLayer>\n')
        fn_vrt.write('</OGRVRTDataSource>\n')
    gdal_cmd = 'gdal_grid -a invdist:power=2.0:smoothing=1.0 -zfield "Ref" -of GTiff -ot Float64 -l %s %s %s' % (lyr_name, vrt_fn, out_tif)
    subprocess.call(gdal_cmd, shell=True)

Related

Convert .geojson to .wkt | extract 'coordinates'

Goal: Ultimately, to convert .geojson to .wkt. Here, I want to extract all coordinates, each as a list.
In my.geojson there are n entries of the form: {"type":"Polygon","coordinates":...
Update: I've successfully extracted the first coordinates entry. However, this file has two of them.
Every .geojson has at least one coordinates entry, but may have more.
How can I dynamically extract the key-values of all coordinates entries?
Code:
from pathlib import Path
import os
import geojson
import json
from shapely.geometry import shape

ROOT = Path('path/')
all_files = os.listdir(ROOT)
geojson_files = list(filter(lambda f: f.endswith('.geojson'), all_files))

for gjf in geojson_files:
    with open(f'{str(ROOT)}/{gjf}') as f:
        gj = geojson.load(f)
    o = dict(coordinates=gj['features'][0]['geometry']['coordinates'], type="Polygon")
    geom = shape(o)
    wkt = geom.wkt
Desired Output:
One .wkt file for all coordinates in the geojson.
To convert a series of geometries in GeoJSON files to WKT, the shape() function can convert each GeoJSON geometry to a shapely object, which can then be formatted as WKT and/or projected to a different coordinate reference system.
If you want to access the coordinates of the polygon once it's in a shapely object, use x, y = geo.exterior.xy.
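For example, a minimal sketch with a hypothetical square polygon:
from shapely.geometry import shape

# hypothetical GeoJSON-like polygon
poly = {"type": "Polygon", "coordinates": [[[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]]}
geo = shape(poly)
x, y = geo.exterior.xy   # exterior ring coordinates as two coordinate sequences
print(list(x), list(y))  # [0.0, 1.0, 1.0, 0.0, 0.0] [0.0, 0.0, 1.0, 1.0, 0.0]
print(geo.wkt)           # POLYGON ((0 0, 1 0, 1 1, 0 1, 0 0))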
If you just want to convert a series of GeoJSON files into one .wkt file per GeoJSON file, then try this:
from pathlib import Path
import json
from shapely.geometry import shape

ROOT = Path('path')
for f in ROOT.glob('*.geojson'):
    with open(f) as fin, open(f.with_suffix(".wkt"), "w") as fout:
        features = json.load(fin)["features"]
        for feature in features:
            geo = shape(feature["geometry"])
            # format geometry coordinates as WKT
            wkt = geo.wkt
            print(wkt)
            fout.write(wkt + "\n")
This output uses your example my.geojson file.
Output:
POLYGON ((19372 2373, 19322 2423, ...
POLYGON ((28108 25855, 27755 26057, ...
If you need to convert the coordinates to EPSG:4326 (WGS-84) (e.g. 23.314208, 37.768469), you can use pyproj.
Full code to convert a collection of GeoJSON files to a new GeoJSON file in WGS-84:
from pathlib import Path
import json
import geojson
from shapely.geometry import shape, Point
from shapely.ops import transform
from pyproj import Transformer

ROOT = Path('wkt/')
out_features = []

# assume we're converting from a local azimuthal equidistant projection to WGS-84,
# with the center point at lon=23, lat=37
c = Point(23.676757000000002, 37.9914205)
local_azimuthal_projection = f"+proj=aeqd +R=6371000 +units=m +lat_0={c.y} +lon_0={c.x}"
aeqd_to_wgs84 = Transformer.from_proj(local_azimuthal_projection,
                                      '+proj=longlat +datum=WGS84 +no_defs')

for f in ROOT.glob('*.geojson'):
    with open(f) as fin:
        in_features = json.load(fin)["features"]
    for feature in in_features:
        geo = shape(feature["geometry"])
        poly_wgs84 = transform(aeqd_to_wgs84.transform, geo)
        out_features.append(geojson.Feature(geometry=poly_wgs84))

# Output new GeoJSON file
with open("out.geojson", "w") as fp:
    fc = geojson.FeatureCollection(out_features)
    fp.write(geojson.dumps(fc))
Assuming the conversion is from the local azimuthal projection above (centered at lon=23, lat=37) to EPSG:4326, the output GeoJSON file will look like this:
{"features": [{"type": "Polygon", "geometry": {"coordinates": [[[23.897879, 38.012554], ...

Cut NetCDF files by shapefile

I have a large dataset of global .nc files and I am trying to clip them to a smaller area. I have this area stored as a .shp file.
I have tried using gdal from QGIS, but it requires converting each variable: I must select each variable and the same shapefile for every file, one by one, and with 400 files, going through each variable does not seem like the best idea. It also returns separate .tiff files rather than the .nc files I am aiming for.
I had this little script, but it's not doing what I need:
import glob
import subprocess
import os

ImageList = sorted(glob.glob('*.nc'))
print('number of images to process: ', len(ImageList))
Shapefile = 'NHAF-250m.shp'

# Create output directory
OutDir = './Clipped_Rasters/'
if not os.path.exists(OutDir):
    os.makedirs(OutDir)

for Image in ImageList:
    print('Processing ' + Image)
    OutImage = OutDir + Image.replace('.nc', '_BurnedArea_Clipped.tif')  # Defines Output Image
    # Clip image
    subprocess.call('gdalwarp -q -cutline /Users/path/to/file/NHAF-250-vector/ -tr 0.25 0.25 -of GTiff NETCDF:'+Image+":burned_area "+OutImage, shell=True)
    print('Done.' + '\n')

print('All images processed.')
Thank you in advance
I recommend using xarray to handle NetCDF data and geopandas + rasterio to handle your shapefile.
import geopandas
import xarray
import rasterio.features
import glob

shapefile = 'NHAF-250m.shp'
sf = geopandas.read_file(shapefile)

# `ndvi` is assumed to be a reference dataset that is already open (e.g. one of your
# .nc files loaded with xarray/odc), used only to define the target grid shape and transform
shape_mask = rasterio.features.geometry_mask(sf.geometry,
                                             out_shape=(len(ndvi.y), len(ndvi.x)),
                                             transform=ndvi.geobox.transform,
                                             invert=True)
shape_mask = xarray.DataArray(shape_mask, dims=("y", "x"))

file_list = sorted(glob.glob('*.nc'))
for file in file_list:
    nc_file = xarray.open_dataset(file)
    # Then apply the mask
    masked_netcdf_file = nc_file.where(shape_mask == True, drop=True)
    # store again as netcdf, or do whatever you want with the masked array
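    # a minimal sketch (assumption, not from the original answer): write each masked
    # dataset back out as NetCDF, deriving the output name from the input file name
    masked_netcdf_file.to_netcdf(file.replace('.nc', '_clipped.nc'))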

Resample image rasterio/gdal, Python

How can I resample a single band GeoTIFF using Bilinear interpolation?
import os
import rasterio
from rasterio.enums import Resampling
from rasterio.plot import show, show_hist
import numpy as np

if __name__ == "__main__":
    input_Dir = 'sample.tif'
    #src = rasterio.open(input_Dir)
    #show(src,cmap="magma")
    upscale_factor = 2
    with rasterio.open(input_Dir) as dataset:
        # resample data to target shape
        data = dataset.read(
            out_shape=(
                dataset.count,
                int(dataset.height * upscale_factor),
                int(dataset.width * upscale_factor)
            ),
            resampling=Resampling.bilinear
        )
        # scale image transform
        transform = dataset.transform * dataset.transform.scale(
            (dataset.width / data.shape[-1]),
            (dataset.height / data.shape[-2])
        )
        show(dataset, cmap="magma", transform=transform)
I have tried the code above, and my output is shown in the first image (not included here). The output I am trying to achieve is shown in the second image.
One option would be to use the GDAL Python bindings. You can then perform the resampling in memory (or save the image if you want). Assuming the old raster resolution was 0.25 x 0.25 and you're resampling to 0.10 x 0.10:
from osgeo import gdal

input_Dir = 'sample.tif'
ds = gdal.Translate('', input_Dir, xRes=0.1, yRes=0.1, resampleAlg="bilinear", format='vrt')
If you want to save the image, put an output filepath instead of the empty string for the first argument and change the format to 'GTiff'.
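For example, a minimal sketch of writing the resampled raster straight to disk (the output filename is just a placeholder):
from osgeo import gdal

# resample to 0.10 x 0.10 and write the result to a GeoTIFF
ds = gdal.Translate('resampled.tif', 'sample.tif',
                    xRes=0.1, yRes=0.1,
                    resampleAlg="bilinear", format='GTiff')
ds = None  # dereference the dataset so GDAL flushes and closes the file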

How to import data from absolute path in a csv using pandas?

I have a CSV containing n records and it is filled with absolute paths to the images. I'd like to import those images into a numpy matrix.
import pandas as pd
from PIL import Image
import numpy as np

def load_image(infilename):
    img = Image.open(infilename)
    img.load()
    data = np.asarray(img, dtype="int32")
    return data

df = pd.read_csv(r'Path where the CSV file is stored\File name.csv')
for i in range(len(df)):
    print(load_image(df.iloc[i, 0]))
You can store the returned values in a list if you want, or use them directly.
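For instance, a minimal sketch of collecting the loaded images into a single numpy array (assuming all images share the same dimensions; the CSV path is a placeholder):
import pandas as pd
import numpy as np
from PIL import Image

df = pd.read_csv(r'Path where the CSV file is stored\File name.csv')
images = []
for path in df.iloc[:, 0]:
    # open each image from its absolute path and convert it to an integer array
    images.append(np.asarray(Image.open(path), dtype="int32"))
# stack into one array of shape (n, height, width) or (n, height, width, channels)
image_matrix = np.stack(images)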
You can use the pandas read_csv function.
import pandas as pd
df = pd.read_csv (r'Path where the CSV file is stored\File name.csv')
print (df)
Source: https://datatofish.com/import-csv-file-python-using-pandas/

How to convert pixel to wavelength in spectra from FITS files in Python?

I've been using matplotlib.pyplot to plot a spectrum from FITS files in Python and getting intensity versus pixel, but what I actually need is to convert the pixels to wavelength. I've seen similar questions that got me on the right path (e.g. similar question, RGB example), but I still feel lost in the process.
I have FITS files with wavelengths between roughly 3500 and 6000 Å, in float32 format, with dimensions (53165,).
So as I understand it, I need to calibrate the pixel positions to wavelength. I have my rest wavelength header (RESW) and my "step" wavelength header (STW), and I would need to get:
x = RESW + (number of pixels * STW)
and then plot it. Here is what I have in my code so far.
import os, glob
from glob import glob
from pylab import *
from astropy.io import ascii
import scipy.constants as constants
import matplotlib.pylab as plt
from astropy.io import fits

#add directory you have your files in
dir = ''

#OPEN ALL FILES AND STORE THEM INTO A LIST
files = glob(dir + '*.fits')

for fi in files:
    print(fi)
    name = fi[:-len('.fits')]  #Remove '.fits' from the file name
    with fits.open(dir + fi) as hdu:
        hdu.info()
        data = hdu[0].data
        hdr = hdu[0].header  #added to try2
        step = hdr['CDELT1']  #added to try2
        restw = hdr['CRVAL1']  #added to try2
        #step = fits.getheader('STW')  #added to try
        #restw = fits.getheader('RESW')  #added to try
        spectra = restw + (data * step)  #added to try
        plt.clf()
        plt.plot(spectra)
        plt.savefig(name + '.pdf')
I've tried using fits.getheader(''), but I don't know where or how to use it; this way it's not working right.
Could someone please help? Thanks in advance!
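For reference, a minimal sketch of the calibration step described above, assuming a simple linear dispersion defined by the CRVAL1/CDELT1 headers (ignoring any CRPIX1 offset; the file name is a placeholder):
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

with fits.open('spectrum.fits') as hdu:
    flux = hdu[0].data
    hdr = hdu[0].header
    step = hdr['CDELT1']   # wavelength step per pixel
    restw = hdr['CRVAL1']  # wavelength of the reference pixel
# build the wavelength axis from the pixel indices, not from the flux values
wavelength = restw + np.arange(flux.size) * step
plt.plot(wavelength, flux)
plt.savefig('spectrum.pdf')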
