I have the following dataframe, where lat and lon are latitudes and longitudes in the geographic coordinate system. I am trying to convert these coordinates into a native (x, y) projection.
I have tried pyproj for single points, but how do I proceed for a whole dataframe with thousands of rows?
                  time        lat        lon
0  2011-01-31 02:41:00  18.504273 -66.009332
1  2011-01-31 02:42:00  18.504673 -66.006225
I am trying to get something like this:
                  time        lat        lon     x_Projn     y_Projn
0  2011-01-31 02:41:00  18.504273 -66.009332  resp_x_val  resp_y_val
1  2011-01-31 02:42:00  18.504673 -66.006225  resp_x_val  resp_y_val
and so on...
Following is the code I tried for converting lat/lon to the x, y system:
from pyproj import Proj, transform
inProj = Proj(init='epsg:4326')
outProj = Proj(init='epsg:3857')
x1,y1 = -105.150271116, 39.7278572773
x2,y2 = transform(inProj,outProj,x1,y1)
print (x2,y2)
Output:
-11705274.637407782 4826473.692203013
Thanks for any kind of help.
Unfortunately, pyproj's transform() is easiest to use point by point. I guess something like this row-wise apply should work:
import pandas as pd
from pyproj import Proj, transform

inProj = Proj(init='epsg:4326')
outProj = Proj(init='epsg:3857')

def to_xy(row):
    # with init-style projections, transform expects (lon, lat) order,
    # as in the single-point example above
    return pd.Series(transform(inProj, outProj, row["lon"], row["lat"]))

xy_df = df.apply(to_xy, axis=1)  # new coord dataframe with two columns
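For thousands of rows it may be much faster to transform whole columns at once. A minimal vectorized sketch, assuming pyproj 2+ (where the init= style above is deprecated in favour of Transformer) and a dataframe df with lat and lon columns:
from pyproj import Transformer

# always_xy=True fixes the axis order to (lon, lat) -> (x, y)
transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

# transform() accepts array-likes, so the whole column converts in one call
df["x_Projn"], df["y_Projn"] = transformer.transform(df["lon"].values, df["lat"].values)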
You can iterate through the rows in a pandas data frame, transform the longitude and latitude values for each row, make two lists with the first and second coordinate values, and then turn the lists into new columns in your original data frame. Maybe not the prettiest, but this got the job done for me.
from pyproj import Proj, transform

inProj = Proj(init='epsg:4326')   # create the projections once, outside the loop
outProj = Proj(init='epsg:3857')

M1s = []  # empty list for 1st coordinate value
M2s = []  # empty list for 2nd coordinate value

for index, row in df.iterrows():  # iterate over rows in the dataframe
    long = row["Longitude (decimal degrees)"]  # get the longitude for one row
    lat = row["Latitude (decimal degrees)"]    # get the latitude for one row
    M1, M2 = transform(inProj, outProj, long, lat)  # one transform call per row
    M1s.append(M1)  # append 1st coordinate to list
    M2s.append(M2)  # append 2nd coordinate to list

df['M1'] = M1s  # new dataframe column with 1st coordinate
df['M2'] = M2s  # new dataframe column with 2nd coordinate
I want to calculate the distance from each point of dataframe geosearch_crs to the polygons in the gelb_crs dataframe, returning only the minimum distance.
I have tried this code:
for i in range(len(geosearch_crs)):
    point = geosearch_crs['geometry'].iloc[i]
    for j in range(len(gelb_crs)):
        poly = gelb_crs['geometry'].iloc[j]
        print(point.distance(poly).min())
it returns this error:
AttributeError: 'float' object has no attribute 'min'
I somehow don't get how to return what I want; the point.distance(poly).min() call should work, though.
This is part of the data frames (around 180,000 entries):
geosearch_crs:

count  geometry
12     POINT (6.92334 50.91695)
524    POINT (6.91970 50.93167)
5      POINT (6.96946 50.91469)
gelb_crs (35 entries):

name       geometry
Polygon 1  POLYGON Z ((6.95712 50.92851 0.00000, 6.95772 ...
Polygon 2  POLYGON Z ((6.91896 50.92094 0.00000, 6.92211 ...
I'm not sure about the 'distance' method, but maybe you could try adding the distances to a list:
distances = list()
for point in geosearch_crs.geometry:   # iterate over the geometries, not the column names
    for poly in gelb_crs.geometry:
        distances.append(point.distance(poly))
print(min(distances))
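If you want the minimum distance per point rather than one global minimum, a shorter variant (a sketch, assuming both frames are already in the same CRS) lets geopandas compute the distance from one point to every polygon at once:
# distance() against the whole polygon GeoSeries returns one value per polygon;
# .min() keeps only the nearest one
geosearch_crs["min_distance"] = geosearch_crs.geometry.apply(
    lambda point: gelb_crs.distance(point).min()
)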
Your sample polygon data is unusable as it's truncated with ellipses, so I have used two other polygons to demonstrate a MWE.
You need to ensure that the CRS in both data frames is compatible. Your sample data is clearly in two different CRS: the points look like EPSG:4326, while the polygons are either a UTM CRS or EPSG:3857, judging from the range of values.
geopandas sjoin_nearest() is a simple way to find the nearest polygon and get the distance. I have used a UTM CRS so that distance is in meters rather than degrees.
import geopandas as gpd
import pandas as pd
import shapely.wkt
import io

df = pd.read_csv(
    io.StringIO(
        """count,geometry
12,POINT (6.92334 50.91695)
524,POINT (6.91970 50.93167)
5,POINT (6.96946 50.91469)"""
    )
)
geosearch_crs = gpd.GeoDataFrame(
    df, geometry=df["geometry"].apply(shapely.wkt.loads), crs="epsg:4326"
)

# generated as a sample because the one in the question is unusable
df = pd.read_csv(
    io.StringIO(
        '''name,geometry
Polygon 1,"POLYGON ((6.9176561 50.8949742, 6.9171649 50.8951417, 6.9156967 50.8957149, 6.9111788 50.897751, 6.9100077 50.8989409, 6.9101989 50.8991319, 6.9120049 50.9009167, 6.9190374 50.9078591, 6.9258157 50.9143227, 6.9258714 50.9143691, 6.9259546 50.9144355, 6.9273598 50.915413, 6.9325715 50.9136438, 6.9331018 50.9134553, 6.9331452 50.9134397, 6.9255391 50.9018725, 6.922309 50.8988869, 6.9176561 50.8949742))"
Polygon 2,"POLYGON ((6.9044955 50.9340428, 6.8894236 50.9344297, 6.8829359 50.9375553, 6.8862995 50.9409307, 6.889446 50.9423764, 6.9038401 50.9436598, 6.909518 50.9383374, 6.908634 50.9369064, 6.9046363 50.9340648, 6.9045721 50.9340431, 6.9044955 50.9340428))"'''
    )
)
gelb_crs = gpd.GeoDataFrame(
    df, geometry=df["geometry"].apply(shapely.wkt.loads), crs="epsg:4326"
)

geosearch_crs.to_crs(geosearch_crs.estimate_utm_crs()).sjoin_nearest(
    gelb_crs.to_crs(geosearch_crs.estimate_utm_crs()), distance_col="distance"
)
   count                                      geometry  index_right       name  distance
0     12  POINT (354028.1446652143 5642643.287732874)             0  Polygon 1   324.158
2      5  POINT (357262.7994182631 5642301.777981625)             0  Polygon 1   2557.33
1    524  POINT (353818.4585403281 5644287.172541857)             1  Polygon 2   971.712
I am struggling to calculate the distance between multiple sets of latitude and longitude coordinates. In short, I have found numerous tutorials that use either math or geopy. These tutorials work great when I just want to find the distance between ONE set of coordinates (or two unique locations). However, my objective is to scan a data set that has 400k combinations of origin and destination coordinates. One example of the code I have used is listed below, but it seems I am getting errors when my arrays are > 1 record. Any helpful tips would be much appreciated. Thank you.
# starting dataframe is df
lat1 = df.lat1.as_matrix()
long1 = df.long1.as_matrix()
lat2 = df.lat2.as_matrix()
long2 = df.long2.as_matrix()
from geopy.distance import vincenty
point1 = (lat1, long1)
point2 = (lat2, long2)
print(vincenty(point1, point2).miles)
Edit: here's a simple notebook example
A general approach, assuming that you have a DataFrame column containing points, and that you want to calculate distances between all of them. (If you have separate columns, first combine them into (lat, lon) tuples, for instance.) Name the new column coords.
import pandas as pd
import numpy as np
from geopy.distance import vincenty
# assumes your DataFrame is named df, and its lon and lat columns are named lon and lat. Adjust as needed.
df['coords'] = list(zip(df.lat, df.lon))  # list() is needed on Python 3, where zip is lazy
# first, let's create a square DataFrame (think of it as a matrix if you like)
square = pd.DataFrame(
    np.zeros(len(df) ** 2).reshape(len(df), len(df)),
    index=df.index, columns=df.index)
This function looks up our 'end' coordinates from the df DataFrame using the input column name, then applies the geopy vincenty() function to each row of df['coords'], passing the looked-up end point as the second argument. This works because the function is applied column-wise from right to left.
def get_distance(col):
    end = df.loc[col.name, 'coords']
    return df['coords'].apply(vincenty, args=(end,), ellipsoid='WGS-84')
Now we're ready to calculate all the distances.
We're transposing the DataFrame (.T) because the loc[] method we'll be using to retrieve distances refers to index label, row label. However, our inner apply function (see above) populates a column with the retrieved values.
distances = square.apply(get_distance, axis=1).T
Your geopy values are (IIRC) returned in kilometres, so you may need to convert these to whatever unit you want to use using .meters, .miles etc.
Something like the following should work:
def units(input_instance):
    return input_instance.meters

distances_meters = distances.applymap(units)
You can now index into your distance matrix using e.g. loc[row_index, column_index].
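For example (using the distances_meters frame from above, with integer row labels):
# distance in metres between the points labelled 1 and 2
print(distances_meters.loc[1, 2])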
You should be able to adapt the above fairly easily. You might have to adjust the apply call in the get_distance function to ensure you're passing the correct values to vincenty. The pandas apply docs might be useful, in particular with regard to passing positional arguments using args (you'll need a recent pandas version for this to work).
This code hasn't been profiled, and there are probably much faster ways to do it, but it should be fairly quick for 400k distance calculations.
Oh and also
I can't remember whether geopy expects coordinates as (lon, lat) or (lat, lon). I bet it's the latter (sigh).
Update
Here's a working script as of May 2021.
import geopy.distance
import numpy as np
import pandas as pd

# geopy DOES use (lat, lon) order
df['latlon'] = list(zip(df['lat'], df['lon']))

square = pd.DataFrame(
    np.zeros((df.shape[0], df.shape[0])),
    index=df.index, columns=df.index
)

# replacing distance.vincenty with distance.distance
def get_distance(col):
    end = df.loc[col.name, 'latlon']
    return df['latlon'].apply(geopy.distance.distance,
                              args=(end,),
                              ellipsoid='WGS-84')

distances = square.apply(get_distance, axis=1).T
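The cells of distances are geopy Distance objects, so a final conversion gives a plain float matrix. A quick usage sketch, assuming a hypothetical three-row df fed through the script above:
# hypothetical toy input for the script above
df = pd.DataFrame({'lat': [39.7278, 18.5042, -26.2443],
                   'lon': [-105.1502, -66.0093, -48.6409]})
# ... run the script above, then:
distances_km = distances.applymap(lambda d: d.km)  # Distance -> float kilometres
print(distances_km.loc[0, 1])  # distance between the first two points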
I recently had to do a similar job; I ended up writing a solution I consider very easy to understand and tweak to your needs, though possibly not the best/fastest:
Solution
It is very similar to what urschrei posted: assuming you want the distance between every two consecutive coordinates from a pandas DataFrame, we can write a function that processes each pair of points as the start and finish of a path, computes the distance, and then constructs a new DataFrame as the return value:
import pandas as pd
from geopy import Point, distance

def get_distances(coords: pd.DataFrame,
                  col_lat='lat',
                  col_lon='lon',
                  point_obj=Point) -> pd.DataFrame:
    traces = len(coords) - 1
    distances = [None] * traces
    for i in range(traces):
        start = point_obj((coords.iloc[i][col_lat], coords.iloc[i][col_lon]))
        finish = point_obj((coords.iloc[i+1][col_lat], coords.iloc[i+1][col_lon]))
        distances[i] = {
            'start': start,
            'finish': finish,
            'path distance': distance.geodesic(start, finish),
        }
    return pd.DataFrame(distances)
Usage example
coords = pd.DataFrame({
'lat': [-26.244333, -26.238000, -26.233880, -26.260000, -26.263730],
'lon': [-48.640946, -48.644670, -48.648480, -48.669770, -48.660700],
})
print('-> coords DataFrame:\n', coords)
print('-'*79, end='\n\n')
distances = get_distances(coords)
distances['total distance'] = distances['path distance'].cumsum()
print('-> distances DataFrame:\n', distances)
print('-'*79, end='\n\n')
# Or if you want to use tuple for start/finish coordinates:
print('-> distances DataFrame using tuples:\n', get_distances(coords, point_obj=tuple))
print('-'*79, end='\n\n')
Output example
-> coords DataFrame:
lat lon
0 -26.244333 -48.640946
1 -26.238000 -48.644670
2 -26.233880 -48.648480
3 -26.260000 -48.669770
4 -26.263730 -48.660700
-------------------------------------------------------------------------------
-> distances DataFrame:
start finish \
0 26 14m 39.5988s S, 48 38m 27.4056s W 26 14m 16.8s S, 48 38m 40.812s W
1 26 14m 16.8s S, 48 38m 40.812s W 26 14m 1.968s S, 48 38m 54.528s W
2 26 14m 1.968s S, 48 38m 54.528s W 26 15m 36s S, 48 40m 11.172s W
3 26 15m 36s S, 48 40m 11.172s W 26 15m 49.428s S, 48 39m 38.52s W
path distance total distance
0 0.7941932910049856 km 0.7941932910049856 km
1 0.5943709651000332 km 1.3885642561050187 km
2 3.5914909016938505 km 4.980055157798869 km
3 0.9958396130609087 km 5.975894770859778 km
-------------------------------------------------------------------------------
-> distances DataFrame using tuples:
start finish path distance
0 (-26.244333, -48.640946) (-26.238, -48.64467) 0.7941932910049856 km
1 (-26.238, -48.64467) (-26.23388, -48.64848) 0.5943709651000332 km
2 (-26.23388, -48.64848) (-26.26, -48.66977) 3.5914909016938505 km
3 (-26.26, -48.66977) (-26.26373, -48.6607) 0.9958396130609087 km
-------------------------------------------------------------------------------
As of 19th May
For anyone working with multiple geolocation data: you can adapt the above code, modifying it a bit to read a CSV file from your data drive; the code will then write the output distances to geopy_output.csv.
import pandas as pd
from geopy import Point, distance

def get_distances(coords: pd.DataFrame,
                  col_lat='lat',
                  col_lon='lon',
                  point_obj=Point) -> pd.DataFrame:
    traces = len(coords) - 1
    distances = [None] * traces
    for i in range(traces):
        start = point_obj((coords.iloc[i][col_lat], coords.iloc[i][col_lon]))
        finish = point_obj((coords.iloc[i+1][col_lat], coords.iloc[i+1][col_lon]))
        distances[i] = {
            'start': start,
            'finish': finish,
            'path distance': distance.geodesic(start, finish),
        }
    output = pd.DataFrame(distances)
    output.to_csv('geopy_output.csv')
    return output
I used the same code and generated distance data for over 50,000 coordinates.
I have a GeoPandas dataframe named barrios, and I use barrios.geometry.centroid and assign it to center.
So center is a geopandas.geoseries.GeoSeries with an index and values like POINT (-58.42266 -34.57393).
I need to get these coordinates and save them as a list:
[(point_1_lat, point_1_lon), (point_2_lat, point_2_lon), ...]
I tried:
[center.values.y , center.values.x]
But it returns a list of 2 arrays - [array(lat), array(lng)].
How can I get the desired result?
You can use zip to loop through multiple variables. This should extract the coordinates into a list:
coord_list = [(x,y) for x,y in zip(gdf['geometry'].x , gdf['geometry'].y)]
Alternatively, you can create a GeoDataFrame with the x and y coordinates.
First, extract the x and y coordinates and put them in new columns.
import geopandas as gpd

url = r"link\to\file"
gdf = gpd.read_file(url)

# extract x and y from each point geometry
gdf['x'] = gdf.geometry.apply(lambda p: p.x)
gdf['y'] = gdf.geometry.apply(lambda p: p.y)
This returns a GeoDataFrame with x and y coordinate columns. Now extract the coordinates into a list:
coordinate_list = [(x,y) for x,y in zip(gdf.x , gdf.y)]
This returns the list of coordinate tuples:
[(105.27, -5.391),
(107.615, -6.945264),
(107.629, -6.941126700000001),
(107.391, -6.9168726),
(107.6569, -6.9087003),
(107.638, -6.9999),
(107.67, -6.553),
(107.656, -6.8),
...
You'll have a list, and a GeoDataFrame with x and y columns.
I am pretty new to Python and I need some help.
I need to find the grid cells in a precipitation file (.nc) that match the locations of water flow stations (an Excel file), and then extract time series for these grid cells.
I have an Excel file with 117 stations in Norway that contains columns with the station name and its area, latitude, and longitude.
I also have a nc file with precipitation series for these stations.
I managed to run a Python script (Jupyter notebook) for one station at a time, but I want to run it for all stations.
How do I do this? I know I need to make a for loop somehow.
This is my code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import xarray as xr
import cartopy.crs as ccrs
import cartopy as cy
metapath = "Minestasjoner.xlsx"
rrdatapath = "cropped_monsum_rr_ens_mean_0.25deg_reg_v20.0e.nc"
meta = pd.read_excel(metapath)
rrdata = xr.open_dataset(rrdatapath)
i=0
station = meta.iloc[i]["Regime"]*100000 + meta.iloc[i]["Main_nr"]
lon = meta.iloc[i]["Longitude"] #get longitude
lat = meta.iloc[i]["Latitude"] #get latitude
rr_at_obsloc = rrdata["rr"].sel(latitude=lat, longitude=lon, method='nearest')
df = rr_at_obsloc.to_dataframe()
print("Station %s with lon=%.2f and lat=%.2f have closest rr gridcell at lon=%.2f and lat=%.2f"%(station,lon,lat,df.longitude[0],df.latitude[0]))
df
I think the easiest way for you to do this is to make a Python dictionary mapping each station name to its precipitation time series, and then to convert that dictionary to a pandas.DataFrame.
Here's how you do that in a simple loop:
"""
Everything you had previously...
"""
# Initialize empty dictionary to hold station names and time-series
station_name_and_data = {}
# Loop over all stations
for i in range(117):
# Get name of station 'i'
station = meta.iloc[i]["Regime"]*100000 + meta.iloc[i]["Main_nr"]
# Get lat/lon of station 'i'
lon = meta.iloc[i]["Longitude"]
lat = meta.iloc[i]["Latitude"]
# Extract precip time-series for this lat-lon
rr_at_obsloc = rrdata["rr"].sel(latitude=lat, longitude=lon, method='nearest')
# Put this station name and it's relevant time-series into a dictionary
station_name_and_data[station]=rr_at_obsloc
# Finally, convert this dictionary to a pandas dataframe
df = pd.DataFrame(data=station_name_and_data)
print(df)
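As a possible alternative to the explicit loop, xarray supports vectorized ("pointwise") indexing, so all 117 stations can be selected in a single call. A sketch, assuming meta and rrdata are loaded as in the question:
import xarray as xr

# index arrays that share a new 'station' dimension
stations = meta["Regime"] * 100000 + meta["Main_nr"]
lats = xr.DataArray(meta["Latitude"].values, dims="station",
                    coords={"station": stations})
lons = xr.DataArray(meta["Longitude"].values, dims="station",
                    coords={"station": stations})

# one nearest-neighbour lookup for every station at once
rr_all = rrdata["rr"].sel(latitude=lats, longitude=lons, method="nearest")
df = rr_all.to_pandas()  # time x station DataFrame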
I'm working with datasets where latitudes and longitudes are sometimes mislabeled, so I need to flip the longitudes and the latitudes. The best solution I could come up with is to extract the x and y coordinates using df.geometry.x and df.geometry.y, create a new geometry column, and reconstruct the GeoDataFrame using the new geometry column. Or in code form:
import geopandas as gpd
from shapely.geometry import Point

gdf['coordinates'] = list(zip(gdf.geometry.y, gdf.geometry.x))
gdf['coordinates'] = gdf['coordinates'].apply(Point)
gdf = gpd.GeoDataFrame(gdf, geometry='coordinates', crs=4326)
This is pretty ugly, requires creating a new column, and isn't efficient for large datasets. Is there an easier way to flip the longitude and latitude coordinates of a GeoSeries/GeoDataFrame?
You can create the geometry column directly:
df['geometry'] = df.apply(lambda row: Point(row['y'], row['x']), axis=1)
df = gpd.GeoDataFrame(df, crs=4326)
It works for both Point and Polygon:
import shapely.ops

gpd.GeoSeries(gdf['coordinates']).map(
    lambda polygon: shapely.ops.transform(lambda x, y: (y, x), polygon))
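A quick demonstration sketch (hypothetical geometries, just to show the swap):
from shapely.geometry import Point, Polygon
import shapely.ops

swap = lambda geom: shapely.ops.transform(lambda x, y: (y, x), geom)
print(swap(Point(1, 2)))                        # POINT (2 1)
print(swap(Polygon([(0, 0), (1, 0), (1, 2)])))  # POLYGON ((0 0, 0 1, 2 1, 0 0))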