Satellite elevation angles negative - python

Until recently, the code I used to calculate the elevation and azimuth of a satellite from a particular site appeared to work. Then I used a file from a different site, in the southern rather than the northern hemisphere. The elevation angles calculated were sometimes negative, i.e. the satellite was below the horizon. This should not happen, so either the satellite-position parameters in the file are wrong (less likely), or my code below is incorrect (more likely).
rx_position = (self.X_rx, self.Y_rx, self.Z_rx)
diff_position = satellite_carteisan_coords - rx_position
diff_position_individual = diff_position[0]
rho = np.sqrt(diff_position_individual[0]**2 + diff_position_individual[1]**2 + diff_position_individual[2]**2)
dphi, dlambda, h = self.cartesian_to_geoid(rx_position)
slat = np.sin(np.radians(dphi))
slon = np.sin(np.radians(dlambda))
clat = np.cos(np.radians(dphi))
clon = np.cos(np.radians(dlambda))
F = np.array([[-slon, -slat*clon, clat*clon],
              [ clon, -slat*slon, clat*slon],
              [    0,       clat,      slat]])
np.split(diff_position_individual, 3)  # note: the result of this call is discarded
local_vector = np.dot(F.T, diff_position_individual)
E = local_vector[0]
N = local_vector[1]
U = local_vector[2]
hor_dis = np.sqrt(E**2 + N**2)
if hor_dis < 1e-20:
    Azi = 0
    Ele = 90
else:
    Azi = np.rad2deg(math.atan2(E, N))
    Ele = np.rad2deg(math.atan2(U, hor_dis))
if Azi < 0:
    Azi = Azi + 360
return Azi, Ele
How can I ensure that the code above does not end up calculating elevation angles that are negative?
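For reference, here is a self-contained sketch of the same ECEF-to-ENU azimuth/elevation computation (the function name and signature are illustrative, not taken from the code above). It assumes cartesian_to_geoid returns geodetic latitude and longitude in degrees; if it actually returns radians, or geocentric latitude, the rotation matrix is built wrongly, which is a common cause of spurious negative elevations. Also note that a genuinely negative elevation simply means the satellite is below the local horizon, which is expected for part of any satellite pass.
import numpy as np

def ecef_to_azel(sat_ecef, rx_ecef, lat_deg, lon_deg):
    # Sketch: azimuth/elevation of a satellite seen from a receiver, both in ECEF.
    # lat_deg/lon_deg must be the receiver's geodetic coordinates in degrees.
    d = np.asarray(sat_ecef, dtype=float) - np.asarray(rx_ecef, dtype=float)
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    # rows are the local East, North and Up unit vectors (the same matrix as F.T above)
    rot = np.array([[-np.sin(lon),              np.cos(lon),             0.0        ],
                    [-np.sin(lat)*np.cos(lon), -np.sin(lat)*np.sin(lon), np.cos(lat)],
                    [ np.cos(lat)*np.cos(lon),  np.cos(lat)*np.sin(lon), np.sin(lat)]])
    e, n, u = np.dot(rot, d)
    az = np.degrees(np.arctan2(e, n)) % 360.0
    el = np.degrees(np.arctan2(u, np.hypot(e, n)))
    return az, el
If this cross-check agrees with your function, the negative elevations are real, and the problem lies upstream in the satellite coordinates or in cartesian_to_geoid.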

Related

Counting data points in each 'pixel'

I am having trouble writing a function that scans a (for example) 10×10 pixel area and counts the data points above a certain average.
My issue is scanning a 10×10 area across the x axis while keeping the y axis constant, and then, once the x-axis 'scan' is complete, moving up 10 pixels on the y axis and repeating the scan across the x axis.
import numpy as np

# creating a standard to compare images against,
# from a corner dark region with a lack of sources
baseline = np.mean(
    M91_master_image[0].data[1800:2000, 1800:2000]
    + 4 * (M91_master_image[0].data[1800:2000, 1800:2000].std())
)
mst_image_data = M91_master_image[0].data
o_y = 400
o_y2 = 2000
o_x = 400
o_x2 = 2000
s = 10  # pixel increments
x_array = []
y_array = []
while o_x < o_x2:
    # print(o_x)
    counts = mst_image_data[o_x:o_x + s, o_y:o_y + s]
    if counts.mean() > baseline:
        x_pos_c = (o_x + (o_x + s)) / 2
        # print(x_pos_c)
        x_array = np.append(x_array, x_pos_c)
        # print(o_x)
        y_pos_c = (o_y + (o_y + s)) / 2
        y_array = np.append(y_array, y_pos_c)
    o_y += s
    o_x += s
print(x_array, y_array)
From the image (not included here), you can see that what I get is a diagonal:
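For comparison, the raster pattern described in the question would normally be written with two nested loops, so that y only advances after a full sweep in x. A minimal sketch reusing mst_image_data, baseline and s from above (note also that NumPy indexes arrays as [row, column], i.e. [y, x], which is worth double-checking in the slicing):
x_array, y_array = [], []
for o_y in range(400, 2000, s):        # outer loop: move up the y axis
    for o_x in range(400, 2000, s):    # inner loop: sweep across the x axis at fixed y
        block = mst_image_data[o_y:o_y + s, o_x:o_x + s]   # [row, col] = [y, x]
        if block.mean() > baseline:
            x_array.append(o_x + s / 2)   # centre of the block
            y_array.append(o_y + s / 2)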

Weighted correlation function in python

I tried to implement a weighted correlation function based on formula number 2 of the article at the following link:
http://staff.ustc.edu.cn/~lshao/papers/paper07.pdf
Suppose we have three vectors s, r and w, each of n elements.
The vector w is obtained from the following formulas:
w = |r|/(1+D)
D = |s - k*r|
k = (r_Transpose * s)/(r_Transpose*r)
I would like to implement the formula for the weighted correlation function described in the article. Is my implementation correct?
I start from a matrix of dimensions [224, 640], i.e. 640 vectors of 224 elements each. I would like to calculate the weighted correlation coefficient between each of those 640 vectors and one other vector, r. Each of the 640 vectors plays the role of s.
ref = reference
ref_mean = np.mean(ref)  # compute the mean of the reference spectrum
sens = 190
frame_correlation = np.zeros((1,640))
img_correlation = np.zeros((nf,npixels))
for i in range(nf):
    frame_test = dati_new[:,:,i]  # selection of one matrix from a cell of matrices
    for j in range(npixels):
        spettro_test = frame_test[:,j]  # this is my vector s
        spettro_test = np.reshape(spettro_test,(224,1))
        spettro_test_mean = np.mean(spettro_test)
        k = np.dot(np.transpose(ref),spettro_test)/np.dot(np.transpose(ref),ref)
        k = k[0][0]
        D = np.abs(spettro_test - k*ref)
        W = np.abs(ref)/(1+D)
        # NUMERATOR OF FORMULA IN THE ARTICLE
        numeratore = np.sum(W*(spettro_test - spettro_test_mean)*(ref - ref_mean))
        # DENOMINATOR
        den1_ex = np.sqrt(np.sum(W*np.power(spettro_test - spettro_test_mean,2)))
        den2_ex = np.sqrt(np.sum(W*np.power(ref - ref_mean,2)))
        denominatore = den1_ex * den2_ex
        rho = numeratore/denominatore
        if rho < 0:
            rho = 0
        if rho > 1:  # for safety reasons
            rho = 1
        if rho >= 0.99:
            rho = (sens*rho)/100
        frame_correlation[:,j] = rho
    img_correlation[i,:] = frame_correlation
This is the code I wrote to implement the weighted correlation function between two arrays selected from a matrix:
import numpy as np
import matplotlib.pyplot as plt

ref = reference
ref_mean = np.mean(ref)
sens = 190
nf = n  # number of matrices
frame_correlation = np.zeros((1,640))
img_correlation = np.zeros((nf,npixels))
for i in range(nf):
    frame_test = dati_new[:,:,i]  # dati_new is a 3D structure made of nf matrices
    for j in range(npixels):
        spettro_test = frame_test[:,j]
        spettro_test = np.reshape(spettro_test,(224,1))
        spettro_test_mean = np.mean(spettro_test)
        # compute the weight for the selected spectrum
        k = np.dot(np.transpose(ref),spettro_test)/np.dot(np.transpose(ref),ref)
        k = k[0][0]
        D = np.abs(spettro_test - k*ref)
        W = np.abs(ref)/(1+D)
        # numerator of the correlation coefficient
        numeratore = np.sum(W*(spettro_test - spettro_test_mean)*(ref - ref_mean))
        # denominator of the correlation coefficient
        den1_ex = np.sqrt(np.sum(W*np.power(spettro_test - spettro_test_mean,2)))
        den2_ex = np.sqrt(np.sum(W*np.power(ref - ref_mean,2)))
        denominatore = den1_ex * den2_ex
        rho = numeratore/denominatore
        if rho < 0:
            rho = 0
        if rho > 1:  # just in case
            rho = 1
        if rho >= 0.998:
            rho = (sens*rho)/100
        frame_correlation[:,j] = rho
    img_correlation[i,:] = frame_correlation

img_correlation = np.array(img_correlation)
fig, ax = plt.subplots()
ax.imshow(img_correlation, cmap="gray", origin="lower")
plt.title('correlation coefficient image')
plt.xlabel("Pixels")
plt.ylabel("Number of frames")
plt.show()
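As a cross-check of the per-pixel arithmetic, the body of the inner loop can be factored into a standalone function. The following is only a sketch that mirrors the code above; in particular it keeps the unweighted means, so whether formula 2 of the paper calls for weighted means should be verified against the article:
import numpy as np

def weighted_corr(s, r):
    # Weighted correlation of spectrum s against reference r,
    # with weights w = |r| / (1 + |s - k*r|) and k = (r.s)/(r.r).
    s = np.ravel(s).astype(float)
    r = np.ravel(r).astype(float)
    k = np.dot(r, s) / np.dot(r, r)
    w = np.abs(r) / (1.0 + np.abs(s - k * r))
    ds = s - s.mean()
    dr = r - r.mean()
    num = np.sum(w * ds * dr)
    den = np.sqrt(np.sum(w * ds**2)) * np.sqrt(np.sum(w * dr**2))
    return num / den
A function like this can be tested in isolation (for example, weighted_corr(r, r) should return 1.0 up to rounding) before being applied across the 640 columns.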

Using Runge-Kutta-4 method to simulate an orbit in Python (Physics)

I'm trying to implement an RK4 method to solve for the orbit of a rocket around the Earth. Eventually this code will be used for more complex solar system simulations, but I'm just trying to get it working in this simple system first.
My code is below - I'm hoping someone can tell me what is wrong with it.
My trouble-shooting efforts have been long and unfruitful, but I'll summarise what I've found:
I believe the acceleration function is fine and correct, as it gives believable values and agrees with my calculator/brain
It appears as though the problem lies somewhere in the calculation of the next "r" value - when you run this code, an x-y graph will appear, showing that the rocket initially falls in towards the Earth, then bounces away again, then back. I printed all the relevant values at this point, and found that "v" and "a" were negative in both components, despite the rocket clearly moving in the positive y direction. This makes me think that the calculation of the new "r" is in disagreement with the physics.
The rocket is falling to Earth much faster than it should, which is also suspicious (technically it shouldn't fall into Earth at all, since the initial velocity is set to the required orbital velocity)
Either way, I would greatly appreciate it if anyone could find the error, as I have been unable to up to this point.
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt

mE = 5.9742e24      # earth mass
mM = 7.35e22        # moon mass
dM = 379728240.5    # distance from moon to barycentre
dE = 4671759.5      # distance from earth to barycentre
s = 6.4686973e7     # hypothesised distance from moon to Lagrange-2 point
sr = 6.5420e7       # alternate L2 distance

def Simulate(iterations):
    x = dM      # initialise rocket position
    y = 0
    a = 10      # set the time step
    xdot = 0.   # initialise rocket velocity
    ydot = -((6.6726e-11)*mE/x)**0.5
    rocket_history_x, rocket_history_y = [[] for _ in range(2)]
    history_mx, history_my = [[] for _ in range(2)]
    history_ex, history_ey = [[] for _ in range(2)]
    sep_history, step_history = [[] for _ in range(2)]  # create lists to store data in
    history_vx, history_vy = [[] for _ in range(2)]
    history_ax, history_ay = [[] for _ in range(2)]
    n = 1500
    m = 10000   # n, m, p are for storing every nth, mth and pth value to the lists
    p = 60000
    r = np.array((x, y))         # create rocket position vector
    v = np.array((xdot, ydot))   # create rocket velocity vector
    for i in range(iterations):
        xe, ye = 0, 0            # position of earth
        re = np.array((xe, ye))  # create earth position vector
        phi = np.arctan2((r[1]-ye), (r[0]-xe))  # calculate phi, the angle between the rocket and the earth, as measured from the earth
        r_hat_e = np.array((np.cos(phi), np.sin(phi)))  # create vector along which earth's acceleration acts
        def acc(r):  # define the acceleration vector function
            return ((-6.6726e-11)*(mE)/abs(np.dot((r-re), (r-re))))*r_hat_e
        k1v = acc(r)  # use RK4 method
        k1r = v
        k2v = acc(r + k1r*(a/2))  # acc(r + (a/2)*v)
        k2r = v * (a/2) * k1v     # v*acc(r)
        k3v = acc(r + k2r*(a/2))  # acc(r + (a/2)*v*acc(r))
        k3r = v * k2v * (a/2)     # v*(a/2)*acc(r + (a/2)*v)
        k4v = acc(r + k3r*a)      # acc(r + (a**2/2)*v*acc(r + (a/2)*v))
        k4r = v * k3v * a         # v*a*acc(r + (a/2)*v*acc(r))
        v = v + (a/6) * (k1v + 2*k2v + 2*k3v + k4v)  # update v
        r = r + (a/6) * (k1r + 2*k2r + 2*k3r + k4r)  # update r
        sep = np.sqrt(np.dot((r-re), (r-re)))  # separation of rocket and earth, useful for visualisation/trouble-shooting
        if i % n == 0:  # check for the step
            rocket_history_x.append(r[0])
            rocket_history_y.append(r[1])
            history_ex.append(xe)
            history_ey.append(ye)
            sep_history.append(sep)  # putting data into lists for plotting and troubleshooting
            step_history.append(i)
            history_ax.append(acc(r)[0])
            history_ay.append(acc(r)[1])
            history_vx.append(v[0])
            history_vy.append(v[1])
        #if i % m == 0:  # check for the step
        #    print(r)
        #    print(acc(r))
        #if i % p == 0:  # check for the step
        #    print((a/6)*(k1v + 2*k2v + 2*k3v + k4v))
        #    print((a/6)*(k1r + 2*k2r + 2*k3r + k4r))
        #    print(k1v, k2v, k3v, k4v)
        #    print(k1r, k2r, k3r, k4r)
    return rocket_history_x, rocket_history_y, history_ex, history_ey, history_mx, history_my, sep_history, step_history, history_ax, history_ay, history_vx, history_vy
x , y, xe, ye, mx, my, sep, step, ax, ay, vx, vy = Simulate(130000)
#print x,y,vx,vy,ax,ay,step
print ("Plotting graph...")
plt.figure()
plt.subplot(311)
plt.plot(x, y, linestyle='--', color = 'green')
#plt.plot(mx, my, linestyle='-', color = 'blue')
plt.plot(xe, ye, linestyle='-', color = 'red')
#plt.plot(xm, ym)
plt.xlabel("Orbit X")
plt.ylabel("Orbit Y")
'''
plt.plot(step, vy)
plt.ylabel("vy")
'''
plt.subplot(312)
plt.plot(step, sep)
plt.xlabel("steps")
plt.ylabel("separation")
plt.subplot(313)
plt.plot(step, ay)
plt.ylabel("ay")
plt.show()
print("Simulation Complete")
Your most grave error is in the computation of the slopes for r (k2r, k3r, k4r, which are velocity values): you used multiplication where there should be addition.
k1v = acc(r) #use RK4 method
k1r = v
k2v = acc(r + (a/2) * k1r)
k2r = v + (a/2) * k1v
k3v = acc(r + (a/2) * k2r)
k3r = v + (a/2) * k2v
k4v = acc(r + a * k3r)
k4r = v + a * k3v
A second error is that the acceleration function uses r_hat_e, a value computed from the state at the start of the step, even when it is evaluated at the modified states. This can reduce the order of the method down to 1. That might not change this plot visibly, but it will give larger errors over longer integration periods. Use
def acc(r):  # define the acceleration vector function
    return ((-6.6726e-11)*(mE)/abs(np.dot((r-re), (r-re)))**1.5)*(r-re)
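The pairing of the slopes is easier to get right if the whole state is integrated at once. Below is a generic sketch (not the poster's code) in which y is the stacked state (x, y, vx, vy) and the derivative function returns (vx, vy, ax, ay); with this structure the position and velocity slopes can never get out of step, and the acceleration is always evaluated at the perturbed position:
import numpy as np

def rk4_step(f, y, h):
    # one classical RK4 step for y' = f(y): each slope is evaluated at the
    # state advanced by the previous slope, and the four are averaged 1:2:2:1
    k1 = f(y)
    k2 = f(y + (h / 2) * k1)
    k3 = f(y + (h / 2) * k2)
    k4 = f(y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def deriv(y):
    # y = (x, y, vx, vy); the acceleration is recomputed from the current r
    r, v = y[:2], y[2:]
    a = (-6.6726e-11) * 5.9742e24 / np.dot(r, r)**1.5 * r
    return np.concatenate((v, a))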

How to calculate the midpoint of several geolocations in python

Is there a library or a way to calculate the center point of several geolocation points?
This is my list of geolocations, based in New York, and I want to find the approximate geographic midpoint:
L = [
    (-74.2813611, 40.8752222),
    (-73.4134167, 40.7287778),
    (-74.3145014, 40.9475244),
    (-74.2445833, 40.6174444),
    (-74.4148889, 40.7993333),
    (-73.7789256, 40.6397511)
]
After the comments I received, and the comment from HERE:
With coordinates that close to each other, you can treat the Earth as being locally flat and simply find the centroid as though they were planar coordinates. Then you would simply take the average of the latitudes and the average of the longitudes to find the latitude and longitude of the centroid.
lons = []
lats = []
for l in L:
    lons.append(l[0])   # note: L stores (longitude, latitude) pairs
    lats.append(l[1])
print(sum(lons) / len(lons), sum(lats) / len(lats))
# -74.07461283333332, 40.76800886666667
Based on: https://gist.github.com/tlhunter/0ea604b77775b3e7d7d25ea0f70a23eb
Assuming you have a pandas DataFrame with latitude and longitude columns, the following code will return a dictionary with the mean coordinates.
import math

x = 0.0
y = 0.0
z = 0.0

for i, coord in coords_df.iterrows():
    latitude = math.radians(coord.latitude)
    longitude = math.radians(coord.longitude)
    x += math.cos(latitude) * math.cos(longitude)
    y += math.cos(latitude) * math.sin(longitude)
    z += math.sin(latitude)

total = len(coords_df)
x = x / total
y = y / total
z = z / total

central_longitude = math.atan2(y, x)
central_square_root = math.sqrt(x * x + y * y)
central_latitude = math.atan2(z, central_square_root)

mean_location = {
    'latitude': math.degrees(central_latitude),
    'longitude': math.degrees(central_longitude)
}
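For instance, the list L from the question could be loaded into such a DataFrame like this (a sketch; note that L stores (longitude, latitude) pairs):
import pandas as pd

coords_df = pd.DataFrame(L, columns=["longitude", "latitude"])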
Considering that you are using the signed-degrees format (more), simple averaging of latitudes and longitudes creates problems even for small regions near the antimeridian (i.e. +/-180 degrees longitude), due to the discontinuity of the longitude value at that line (a sudden jump between -180 and 180).
Consider two locations whose longitudes are -179 and 179, their mean would be 0, which is wrong.
This link can be useful: first convert lat/lon into an n-vector, then find the average. A first stab at converting the code into Python is below:
import numpy as np
import numpy.linalg as lin

E = np.array([[0, 0, 1],
              [0, 1, 0],
              [-1, 0, 0]])

def lat_long2n_E(latitude, longitude):
    res = [np.sin(np.deg2rad(latitude)),
           np.sin(np.deg2rad(longitude)) * np.cos(np.deg2rad(latitude)),
           -np.cos(np.deg2rad(longitude)) * np.cos(np.deg2rad(latitude))]
    return np.dot(E.T, np.array(res))

def n_E2lat_long(n_E):
    n_E = np.dot(E, n_E)
    longitude = np.arctan2(n_E[1], -n_E[2])
    equatorial_component = np.sqrt(n_E[1]**2 + n_E[2]**2)
    latitude = np.arctan2(n_E[0], equatorial_component)
    return np.rad2deg(latitude), np.rad2deg(longitude)

def average(coords):
    res = []
    for lat, lon in coords:
        res.append(lat_long2n_E(lat, lon))
    res = np.array(res)
    m = np.mean(res, axis=0)
    m = m / lin.norm(m)
    return n_E2lat_long(m)

n = lat_long2n_E(30, 20)
print (n)
print (n_E2lat_long(np.array(n)))

# find the middle of France and Libya
coords = [[30, 20], [47, 3]]
m = average(coords)
print (m)
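Applied to the question's list L, this gives essentially the same midpoint as the flat average, as expected for points this close together. Note that average() expects (lat, lon) pairs while L stores (lon, lat), so the tuples must be swapped:
coords = [(lat, lon) for lon, lat in L]
print (average(coords))   # approximately (40.77, -74.07)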
I would like to improve on BBSysDyn's answer.
The average calculation can be biased if you are calculating the center of a polygon with extra vertices on one side. Therefore, the average function can be replaced with the centroid calculation explained here:
def get_centroid(points):
    x = points[:,0]
    y = points[:,1]
    # solving for the polygon's signed area
    A = 0
    for i, value in enumerate(x):
        if i + 1 == len(x):
            A += (x[i]*y[0] - x[0]*y[i])
        else:
            A += (x[i]*y[i+1] - x[i+1]*y[i])
    A = A/2
    # solving for the x coordinate of the centroid
    Cx = 0
    for i, value in enumerate(x):
        if i + 1 == len(x):
            Cx += (x[i]+x[0]) * ((x[i]*y[0]) - (x[0]*y[i]))
        else:
            Cx += (x[i]+x[i+1]) * ((x[i]*y[i+1]) - (x[i+1]*y[i]))
    Cx = Cx/(6*A)
    # solving for the y coordinate of the centroid
    Cy = 0
    for i, value in enumerate(y):
        if i + 1 == len(x):
            Cy += (y[i]+y[0]) * ((x[i]*y[0]) - (x[0]*y[i]))
        else:
            Cy += (y[i]+y[i+1]) * ((x[i]*y[i+1]) - (x[i+1]*y[i]))
    Cy = Cy/(6*A)
    return Cx, Cy
Note: if it is a polygon (or more than 2 points), the points must be listed in the order in which the polygon or shape would be drawn.
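A hypothetical usage with the question's list L (get_centroid expects an (n, 2) NumPy array; the result is only meaningful if the points trace the outline in order, which may not hold for an arbitrary list):
import numpy as np

points = np.array(L)   # shape (6, 2), columns: longitude, latitude
print(get_centroid(points))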

Detecting geographic clusters

I have an R data.frame containing longitude and latitude pairs spanning the entire USA map. When X number of entries are all within a small geographic region of, say, a few degrees of longitude and a few degrees of latitude, I want to be able to detect this and have my program return the coordinates of the geographic bounding box. Is there a Python or R CRAN package that already does this? If not, how would I go about ascertaining this information?
I was able to combine Joran's answer with Dan H's comment. This is an example of the output (image not included here).
The Python code emits calls to the R functions map() and rect(). This USA example map was created with:
map('state', plot = TRUE, fill = FALSE, col = palette())
and then you can apply the rect()s accordingly from within the R GUI interpreter (see below).
import math
from collections import defaultdict

to_rad = math.pi / 180.0        # convert lat or lng to radians
fname = "site.tsv"              # file format: LAT\tLONG
threshhold_dist = 50            # adjust to your needs
threshhold_locations = 15       # minimum number of locations needed in a cluster

def dist(lat1, lng1, lat2, lng2):
    global to_rad
    earth_radius_km = 6371
    dLat = (lat2 - lat1) * to_rad
    dLon = (lng2 - lng1) * to_rad
    lat1_rad = lat1 * to_rad
    lat2_rad = lat2 * to_rad
    a = math.sin(dLat/2) * math.sin(dLat/2) + math.sin(dLon/2) * math.sin(dLon/2) * math.cos(lat1_rad) * math.cos(lat2_rad)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    dist = earth_radius_km * c
    return dist

def bounding_box(src, neighbors):
    neighbors.append(src)
    # nw = NorthWest, se = SouthEast
    nw_lat = -360
    nw_lng = 360
    se_lat = 360
    se_lng = -360
    for (y, x) in neighbors:
        if y > nw_lat: nw_lat = y
        if x > se_lng: se_lng = x
        if y < se_lat: se_lat = y
        if x < nw_lng: nw_lng = x
    # add some padding
    pad = 0.5
    nw_lat += pad
    nw_lng -= pad
    se_lat -= pad
    se_lng += pad
    # suitable for R's map() function
    return (se_lat, nw_lat, nw_lng, se_lng)

def sitesDist(site1, site2):
    # just a helper to shorten the list comprehension below
    return dist(site1[0], site1[1], site2[0], site2[1])

def load_site_data():
    global fname
    sites = defaultdict(tuple)
    data = open(fname, encoding="latin-1")
    data.readline()  # skip header
    for line in data:
        line = line[:-1]
        slots = line.split("\t")
        lat = float(slots[0])
        lng = float(slots[1])
        lat_rad = lat * math.pi / 180.0
        lng_rad = lng * math.pi / 180.0
        sites[(lat, lng)] = (lat, lng)  # (lat_rad, lng_rad)
    return sites

def main():
    sites_dict = {}
    sites = load_site_data()
    for site in sites:
        # for each site, put it in a dictionary with its value being an array of neighbors
        sites_dict[site] = [x for x in sites if x != site and sitesDist(site, x) < threshhold_dist]
    results = {}
    for site in sites:
        j = len(sites_dict[site])
        if j >= threshhold_locations:
            coord = bounding_box(site, sites_dict[site])
            results[coord] = coord
    for bbox in results:
        yx = "ylim=c(%s,%s), xlim=c(%s,%s)" % (results[bbox])  # (se_lat, nw_lat, nw_lng, se_lng)
        print('map("county", plot=T, fill=T, col=palette(), %s)' % yx)
        rect = 'rect(%s,%s, %s,%s, col=c("red"))' % (results[bbox][2], results[bbox][0], results[bbox][3], results[bbox][1])
        print(rect)
        print("")

main()
Here is an example TSV file (site.tsv)
LAT LONG
36.3312 -94.1334
36.6828 -121.791
37.2307 -121.96
37.3857 -122.026
37.3857 -122.026
37.3857 -122.026
37.3895 -97.644
37.3992 -122.139
37.3992 -122.139
37.402 -122.078
37.402 -122.078
37.402 -122.078
37.402 -122.078
37.402 -122.078
37.48 -122.144
37.48 -122.144
37.55 126.967
With my data set, here is the output of my Python script, shown on the USA map (image not included here). I changed the colors for clarity.
rect(-74.989,39.7667, -73.0419,41.5209, col=c("red"))
rect(-123.005,36.8144, -121.392,38.3672, col=c("green"))
rect(-78.2422,38.2474, -76.3,39.9282, col=c("blue"))
Addition on 2013-05-01 for Yacob
These 2 lines give you the overall goal...
map("county", plot=T )
rect(-122.644,36.7307, -121.46,37.98, col=c("red"))
If you want to narrow in on a portion of a map, you can use ylim and xlim
map("county", plot=T, ylim=c(36.7307,37.98), xlim=c(-122.644,-121.46))
# or, for more coloring (but choose only one of these two map() commands):
map("county", plot=T, fill=T, col=palette(), ylim=c(36.7307,37.98), xlim=c(-122.644,-121.46))
rect(-122.644,36.7307, -121.46,37.98, col=c("red"))
You will want to use the 'world' map...
map("world", plot=T )
It has been a long time since I have used the Python code I posted below, so I will try my best to help you.
threshhold_dist is the size of the bounding box, i.e. the geographical area.
threshhold_locations is the minimum number of lat/lng points needed within the bounding box for it to be considered a cluster.
Here is a complete example. The TSV file is located on pastebin.com. I have also included an image generated from R that contains the output of all of the rect() commands.
# pyclusters.py
# May-02-2013
# -John Taylor
# latlng.tsv is located at http://pastebin.com/cyvEdx3V
# use the "RAW Paste Data" to preserve the tab characters

import math
from collections import defaultdict

# See also: http://www.geomidpoint.com/example.html
# See also: http://www.movable-type.co.uk/scripts/latlong.html

to_rad = math.pi / 180.0        # convert lat or lng to radians
fname = "latlng.tsv"            # file format: LAT\tLONG
threshhold_dist = 20            # adjust to your needs
threshhold_locations = 20       # minimum number of locations needed in a cluster
earth_radius_km = 6371

def coord2cart(lat, lng):
    x = math.cos(lat) * math.cos(lng)
    y = math.cos(lat) * math.sin(lng)
    z = math.sin(lat)
    return (x, y, z)

def cart2coord(x, y, z):
    lng = math.atan2(y, x)
    hyp = math.sqrt(x*x + y*y)
    lat = math.atan2(z, hyp)
    return (lat, lng)

def dist(lat1, lng1, lat2, lng2):
    global to_rad, earth_radius_km
    dLat = (lat2 - lat1) * to_rad
    dLon = (lng2 - lng1) * to_rad
    lat1_rad = lat1 * to_rad
    lat2_rad = lat2 * to_rad
    a = math.sin(dLat/2) * math.sin(dLat/2) + math.sin(dLon/2) * math.sin(dLon/2) * math.cos(lat1_rad) * math.cos(lat2_rad)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    dist = earth_radius_km * c
    return dist

def bounding_box(src, neighbors):
    neighbors.append(src)
    # nw = NorthWest, se = SouthEast
    nw_lat = -360
    nw_lng = 360
    se_lat = 360
    se_lng = -360
    for (y, x) in neighbors:
        if y > nw_lat: nw_lat = y
        if x > se_lng: se_lng = x
        if y < se_lat: se_lat = y
        if x < nw_lng: nw_lng = x
    # add some padding
    pad = 0.5
    nw_lat += pad
    nw_lng -= pad
    se_lat -= pad
    se_lng += pad
    #print("answer:")
    #print("nw lat,lng : %s %s" % (nw_lat, nw_lng))
    #print("se lat,lng : %s %s" % (se_lat, se_lng))
    # suitable for R's map() function
    return (se_lat, nw_lat, nw_lng, se_lng)

def sitesDist(site1, site2):
    # just a helper to shorten the list comprehension below
    return dist(site1[0], site1[1], site2[0], site2[1])

def load_site_data():
    global fname
    sites = defaultdict(tuple)
    data = open(fname, encoding="latin-1")
    data.readline()  # skip header
    for line in data:
        line = line[:-1]
        slots = line.split("\t")
        lat = float(slots[0])
        lng = float(slots[1])
        lat_rad = lat * math.pi / 180.0
        lng_rad = lng * math.pi / 180.0
        sites[(lat, lng)] = (lat, lng)  # (lat_rad, lng_rad)
    return sites

def main():
    color_list = ("red", "blue", "green", "yellow", "orange", "brown", "pink", "purple")
    color_idx = 0
    sites_dict = {}
    sites = load_site_data()
    for site in sites:
        # for each site, put it in a dictionary with its value being an array of neighbors
        sites_dict[site] = [x for x in sites if x != site and sitesDist(site, x) < threshhold_dist]
    print("")
    print('map("state", plot=T)')  # or use: county instead of state
    print("")
    results = {}
    for site in sites:
        j = len(sites_dict[site])
        if j >= threshhold_locations:
            coord = bounding_box(site, sites_dict[site])
            results[coord] = coord
    for bbox in results:
        yx = "ylim=c(%s,%s), xlim=c(%s,%s)" % (results[bbox])  # (se_lat, nw_lat, nw_lng, se_lng)
        # important!
        # if you want an individual map for each cluster, uncomment this line
        #print('map("county", plot=T, fill=T, col=palette(), %s)' % yx)
        if len(color_list) == color_idx:
            color_idx = 0
        rect = 'rect(%s,%s, %s,%s, col=c("%s"))' % (results[bbox][2], results[bbox][0], results[bbox][3], results[bbox][1], color_list[color_idx])
        color_idx += 1
        print(rect)
        print("")

main()
I'm doing this on a regular basis by first creating a distance matrix and then running clustering on it. Here is my code.
library(geosphere)
library(cluster)
clusteramounts <- 10
distance.matrix <- (distm(points.to.group[,c("lon","lat")]))
clustersx <- as.hclust(agnes(distance.matrix, diss = T))
points.to.group$group <- cutree(clustersx, k=clusteramounts)
I'm not sure if it completely solves your problem. You might want to test with different k, and also perhaps do a second run of clustering of some of the first clusters in case they are too big, like if you have one point in Minnesota and a thousand in California.
When you have the points.to.group$group, you can get the bounding boxes by finding max and min lat lon per group.
If you want X to be 20, and you have 18 points in New York and 22 in Dallas, you must decide whether you want one small and one really big box (20 points each), whether it is better to have the Dallas box include 22 points, or whether you want to split the 22 points in Dallas into two groups. Clustering based on distance can be good in some of these cases. But it of course depends on why you want to group the points.
/Chris
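For readers without R, here is a rough Python analogue of the snippet above (a sketch using SciPy's hierarchical clustering; the haversine dist() from the earlier answer is assumed to be in scope, and the coordinates are made up):
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

coords = np.array([(36.33, -94.13), (37.39, -122.03),
                   (37.40, -122.08), (40.88, -74.28)])          # (lat, lon) pairs
dm = pdist(coords, lambda u, v: dist(u[0], u[1], v[0], v[1]))   # pairwise distances in km
tree = linkage(dm, method="average")                            # roughly agnes(...)
groups = fcluster(tree, t=2, criterion="maxclust")              # roughly cutree(k=2)
print(groups)   # cluster label for each point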
A few ideas:
Ad-hoc & approximate: The "2-D histogram". Create arbitrary "rectangular" bins, of the degree width of your choice, assign each bin an ID. Placing a point in a bin means "associate the point with the ID of the bin". Upon each add to a bin, ask the bin how many points it has. Downside: doesn't correctly "see" a cluster of points that stradle a bin boundary; and: bins of "constant longitudinal width" actually are (spatially) smaller as you move north.
Use the "Shapely" library for Python. Follow it's stock example for "buffering points", and do a cascaded union of the buffers. Look for globs over a certain area, or that "contain" a certain number of original points. Note that Shapely is not intrinsically "geo-savy", so you'll have to add corrections if you need them.
Use a true DB with spatial processing. MySQL, Oracle, Postgres (with PostGIS), MSSQL all (I think) have "Geometry" and "Geography" datatypes, and you can do spatial queries on them (from your Python scripts).
Each of these has different costs in dollars and time (in the learning curve)... and different degrees of geospatial accuracy. You have to pick what suits your budget and/or requirements.
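For the second idea, here is a minimal Shapely sketch of buffering points and unioning the buffers; the coordinates and the 0.25-degree buffer radius are made up, and Shapely treats everything as plain planar coordinates:
from shapely.geometry import Point
from shapely.ops import unary_union

pts = [(-122.10, 37.40), (-122.08, 37.41), (-121.90, 37.30)]   # hypothetical (lon, lat) points
blobs = unary_union([Point(p).buffer(0.25) for p in pts])      # merge overlapping buffers
for poly in getattr(blobs, "geoms", [blobs]):                  # MultiPolygon or single Polygon
    n_inside = sum(poly.contains(Point(p)) for p in pts)
    if n_inside >= 2:                                          # cluster-size threshold (arbitrary)
        print(n_inside, poly.bounds)                           # (minx, miny, maxx, maxy) bounding box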
If you use Shapely, you could extend my cluster_points function to return the bounding box of the cluster via the .bounds property of the Shapely geometry, for example like this:
clusterlist.append((cluster, (poly.buffer(-b)).bounds))
Maybe something like:
from math import sqrt

def dist(lat1, lon1, lat2, lon2):
    # just return the planar x,y distance
    return sqrt((lat1 - lat2)**2 + (lon1 - lon2)**2)

def sitesDist(site1, site2):
    # just a helper to shorten the list comprehension below
    return dist(site1.lat, site1.lon, site2.lat, site2.lon)

sites_dict = {}
threshhold_dist = 5  # example dist

for site in sites:
    # for each site, put it in a dictionary with its value being an array of neighbors
    sites_dict[site] = [x for x in sites if x != site and sitesDist(site, x) < threshhold_dist]

print("\n".join(str(site) for site in sites_dict))
