Efficiently compute distances between thousands of coordinate pairs - python

I have a catalog I opened in python, which has about 70,000 rows of data (ra, dec coordinates and object name) for various objects. I also have another list of about 15,000 objects of interest, which also appear in the previously mentioned catalog. For each of these 15,000 objects, I would like to see if any other objects in the large 70,000 list have ra, dec coordinates within 10 arcseconds of the object. If this is found to be true, I'd just like to flag the object and move on to the next one. However, this process takes a long time, since the distances are computed between the current object of interest (out of 15,000) 70,000 different times. This would take days! How could I accomplish the same task more efficiently? Below is my current code, where all_objects is a list of all the 15,000 object names of interest and catalog is the previously mentioned table data for 70,000 objects.
from astropy.coordinates import SkyCoord
from astropy import units as u
for obj_name in all_objects:
    obj_ind = list(catalog['NAME']).index(obj_name)
    c1 = SkyCoord(ra=catalog['RA'][obj_ind]*u.deg, dec=catalog['DEC'][obj_ind]*u.deg, frame='fk5')
    for i in range(len(catalog['NAME'])):
        if i != obj_ind:
            # Compute distance between object and other source
            c2 = SkyCoord(ra=catalog['RA'][i]*u.deg, dec=catalog['DEC'][i]*u.deg, frame='fk5')
            sep = c1.separation(c2)
            contamination_flag = False
            if sep.arcsecond <= 10:
                contamination_flag = True
                print('CONTAMINATION FOUND')
                break

1 Create your own separation function
This step is really easy once you look at the implementation and ask yourself: "how can I make this faster?"
def separation(self, other):
    from . import Angle
    from .angle_utilities import angular_separation  # I've put that in the code below so it is clearer
    if not self.is_equivalent_frame(other):
        try:
            other = other.transform_to(self, merge_attributes=False)
        except TypeError:
            raise TypeError('Can only get separation to another SkyCoord '
                            'or a coordinate frame with data')
    lon1 = self.spherical.lon
    lat1 = self.spherical.lat
    lon2 = other.spherical.lon
    lat2 = other.spherical.lat
    sdlon = np.sin(lon2 - lon1)
    cdlon = np.cos(lon2 - lon1)
    slat1 = np.sin(lat1)
    slat2 = np.sin(lat2)
    clat1 = np.cos(lat1)
    clat2 = np.cos(lat2)
    num1 = clat2 * sdlon
    num2 = clat1 * slat2 - slat1 * clat2 * cdlon
    denominator = slat1 * slat2 + clat1 * clat2 * cdlon
    return Angle(np.arctan2(np.hypot(num1, num2), denominator), unit=u.degree)
It computes a lot of sines and cosines, creates an Angle instance, converts to degrees, and then you convert to arcseconds on top of that.
If you need performance, you might not want to use Angle, nor do the frame checks and conversions at the beginning, nor do the import inside the function, nor do so much variable assignment.
The separation function feels a bit heavy to me; it should just take numbers and return a number.
2 Use a quad tree (requires a complete rewrite of your code)
That said, let's look at the complexity of your algorithm: it checks every object of interest against every other catalogue entry, so the complexity is O(n**2) (Big O notation). Can we do better...
YES. You could use a quad-tree (or a k-d tree). Building the tree costs roughly O(N log N) and each lookup is then roughly O(log N). What that basically means, if you're not familiar with Big O, is that the 15,000 lookups cost on the order of 15,000 * log(70,000) operations instead of the 15,000 * 70,000 ≈ 10^9 pairwise separations your double loop performs... quite an improvement, right? SciPy ships a fast spatial tree (scipy.spatial.cKDTree, a k-d tree) that does exactly this kind of lookup (I've always used my own).
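Here is a minimal sketch of that idea with scipy.spatial.cKDTree, assuming (as in your code) that catalog['RA'] and catalog['DEC'] are in degrees and that all_objects/catalog are the objects from the question: convert each (ra, dec) to a 3D unit vector so that Euclidean (chord) distance maps to angular separation, build the tree once, and query each object of interest against it.

import numpy as np
from scipy.spatial import cKDTree

ra = np.radians(np.asarray(catalog['RA'], dtype=float))
dec = np.radians(np.asarray(catalog['DEC'], dtype=float))

# Unit vectors on the sphere: the chord length between two vectors maps to their angular separation.
xyz = np.column_stack((np.cos(dec) * np.cos(ra),
                       np.cos(dec) * np.sin(ra),
                       np.sin(dec)))
tree = cKDTree(xyz)

# Chord length corresponding to a 10 arcsecond separation.
chord = 2.0 * np.sin(np.radians(10.0 / 3600.0) / 2.0)

name_to_ind = {name: i for i, name in enumerate(catalog['NAME'])}  # avoids a linear .index() per object
for obj_name in all_objects:
    obj_ind = name_to_ind[obj_name]
    neighbours = tree.query_ball_point(xyz[obj_ind], chord)
    # The query always returns the object itself, so more than one hit means contamination.
    contamination_flag = len(neighbours) > 1

Building the tree is the only O(N log N) step; each query afterwards only touches the handful of catalogue points near the target.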

Related

Finding the global minimum of a noisy function via simulated annealing in python

I'm trying to find the global minimum of the function from the hundred digit, hundred dollar challenge, question #4, as an exercise in simulated annealing.
As the basis of my understanding and approach to writing the code, I refer to the Global Optimization Algorithms (version 3) book, which is available for free online.
Consequently, I've initially come up with the following code:
The noisy func:
import math

def noisy_func(x, y):
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
The function used to mutate the values:
def mutate(X_Value, Y_Value):
    mutationResult_X = X_Value + randomNumForInput()
    mutationResult_Y = Y_Value + randomNumForInput()
    while mutationResult_X > 4 or mutationResult_X < -4:
        mutationResult_X = X_Value + randomNumForInput()
    while mutationResult_Y > 4 or mutationResult_Y < -4:
        mutationResult_Y = Y_Value + randomNumForInput()
    mutationResults = [mutationResult_X, mutationResult_Y]
    return mutationResults
randomNumForInput simply returns a random number between -4 and 4 (the interval limits for the search); it is equivalent to random.uniform(-4, 4).
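For completeness, a sketch of that helper as described (its body is not shown in the question, so treat this as an assumed implementation):

import random

def randomNumForInput():
    # Uniform draw over the search interval [-4, 4], as described above.
    return random.uniform(-4, 4)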
This is the central function of the program.
def simulated_annealing(f):
    """Performs simulated annealing to find a solution"""
    #Start by initializing the current state with the initial state
    #acquired by a random generation of a number and then using it
    #in the noisy func, also set solution(best_state) as current_state
    #for a start
    pCurSelect = [randomNumForInput(), randomNumForInput()]
    current_state = f(pCurSelect[0], pCurSelect[1])
    best_state = current_state
    #Begin time monitoring, this will represent the
    #number of steps over time
    TimeStamp = 1
    #Init current temp via the func, using such values as to get the initial temp
    initial_temp = 100
    final_temp = .1
    alpha = 0.001
    num_of_steps = 1000000
    #calculates by how much the temperature should be tweaked
    #each iteration
    #suppose the number of steps is linear, we'll send in 100
    temp_Delta = calcTempDelta(initial_temp, final_temp, num_of_steps)
    #set current_temp via initial temp
    current_temp = getTemperature(initial_temp, temp_Delta)
    #max_iterations = 100
    #initial_temp = get_Temperature_Poly(TimeStamp)
    #current_temp > final_temp
    while current_temp > final_temp:
        #get a mutated value from the current value
        #hence being a 'neighbour' value
        #with it, acquire the neighbouring state
        #to the current state
        neighbour_values = mutate(pCurSelect[0], pCurSelect[1])
        neighbour_state = f(neighbour_values[0], neighbour_values[1])
        #calculate the difference between the newly mutated
        #neighbour state and the current state
        delta_E_Of_States = neighbour_state - current_state
        # Check if neighbour_state is the best state so far
        # if the new solution is better (lower), accept it
        if delta_E_Of_States <= 0:
            pCurSelect = neighbour_values
            current_state = neighbour_state
            if current_state < best_state:
                best_state = current_state
        # if the new solution is not better, accept it with a probability of e^(-cost/temp)
        else:
            if random.uniform(0, 1) < math.exp(-(delta_E_Of_States) / current_temp):
                pCurSelect = neighbour_values
                current_state = neighbour_state
        # Here, we'd decrement the temperature or increase the timestamp, normally
        """current_temp -= alpha"""
        #print("Run number: " + str(TimeStamp) + " current_state = " + str(current_state))
        #increment TimeStamp
        TimeStamp = TimeStamp + 1
        # calc temp for next iteration
        current_temp = getTemperature(current_temp, temp_Delta)
    #print("Iteration Count: " + str(TimeStamp))
    return best_state
alpha is not used in this implementation; instead, the temperature is moderated linearly using the following functions:
def calcTempDelta(T_Initial, T_Final, N):
    return (T_Initial - T_Final) / N

def getTemperature(T_old, T_new):
    return T_old - T_new
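For reference, plugging in the constants used above (initial_temp=100, final_temp=0.1, num_of_steps=1000000) shows what this linear schedule does:

temp_Delta = calcTempDelta(100, 0.1, 1000000)  # (100 - 0.1) / 1e6 ≈ 9.99e-5 per step
steps = int((100 - 0.1) / temp_Delta)          # ≈ 1,000,000 passes through the while loop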
This is how I implemented the solution described on page 245 of the book. However, this implementation does not return the global minimum of the noisy function, but rather one of its nearby local minima.
The reasons I implemented the solution this way are twofold:
It was provided to me as a working example of linear temperature moderation, and thus a working template.
Although I have tried to understand the other forms of temperature moderation laid out on pages 248-249 of the book, it is not entirely clear to me how the variable "Ts" is calculated, and even after looking through some of the sources the book cites, it remains esoteric to me. So I figured I'd rather get this "simple" solution working correctly first, before attempting other approaches to temperature quenching (logarithmic, exponential, etc.).
Since then I have tried numerous ways to reach the global minimum of the noisy function, through different iterations of the code, which would be too much to post here all at once. Among the rewrites I've tried:
Decreasing the randomly rolled number each iteration, so that the search covers a smaller scope every time; this produced more consistent but still incorrect results.
Mutating by different increments, say between -1 and 1, etc. Same effect.
Rewriting mutate to examine the points neighbouring the current point via some step size: adding/subtracting that step from the current point's x/y values, checking the difference between each newly generated point and the current point (the delta of E's, basically), and returning whichever candidate produced the smallest difference, thus being its closest-proximity neighbour.
Reducing the interval limits over which the search occurs.
In the attempts involving a step size, reduced limits, or checking neighbours by quadrant, I have used movements made up of some constant alpha times the time_stamp.
These and other attempts have not worked, either producing even less accurate (albeit sometimes more consistent) results or, in one case, not working at all.
Therefore I must be missing something, whether it's to do with the temperature moderation or the precise formula by which I'm supposed to make the next step (mutate) in the algorithm.
I know it's a lot to take in and look at, but I'd appreciate any constructive criticism/help/advice you can provide.
If code from the other solution attempts would help, I'll post it on request.
It is important that you keep track of what you are doing.
I have put a few important tips into the example below, which uses frigidum.
The alpha cooling generally works well; it makes sure you don't speed through the interesting sweet spot, where roughly 10% of the proposals are accepted.
Make sure your proposals are not too coarse. In the example I only change x or y, but never both; the idea is that the annealing will take what's best, or take a tour, and let the scheme decide.
I use the package frigidum for the algorithm, but it's pretty much the same as your code. Also notice I have 2 proposals, a large change and a small change; such combinations usually work well.
Finally, I noticed it's hopping a lot. A small variation would be to pick the best-so-far before you go into the last 5% of your cooling.
I install frigidum with
!pip install frigidum
and made a small change so the objective function takes a numpy array:
import math

def noisy_func(X):
    x, y = X
    return (math.exp(math.sin(50*x)) +
            math.sin(60*math.exp(y)) +
            math.sin(70*math.sin(x)) +
            math.sin(math.sin(80*y)) -
            math.sin(10*(x + y)) +
            0.25*(math.pow(x, 2) +
                  math.pow(y, 2)))
import frigidum
import numpy as np
import random

def random_start():
    return np.random.random(2) * 4

def random_small_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.02 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.02 * (random.random() - .5), 0]), -4, 4)

def random_big_step(x):
    if np.random.random() < .5:
        return np.clip(x + np.array([0, 0.5 * (random.random() - .5)]), -4, 4)
    else:
        return np.clip(x + np.array([0.5 * (random.random() - .5), 0]), -4, 4)

local_opt = frigidum.sa(random_start=random_start,
                        neighbours=[random_small_step, random_big_step],
                        objective_function=noisy_func,
                        T_start=10**2,
                        T_stop=0.00001,
                        repeats=10**4,
                        copy_state=frigidum.annealing.copy)
The output of the above was
---
Neighbour Statistics:
(proportion of proposals which got accepted *and* changed the objective function)
random_small_step : 0.451045
random_big_step : 0.268002
---
(Local) Minimum Objective Value Found:
-3.30669277
With the above code I sometimes get below -3, but I also noticed that sometimes it finds something around -2 and then gets stuck in the last phase.
So a small tweak would be to re-anneal the last phase of the annealing, starting from the best-found-so-far; a sketch of that follows.
Hope that helps, let me know if you have any questions.
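A minimal sketch of that re-annealing tweak, keeping the frigidum.sa call exactly as above and simply tracking the best point by wrapping the objective (the T_start of the second pass is my assumption; tune it as needed):

import numpy as np

best = {"x": None, "val": float("inf")}

def tracked_func(X):
    # Same objective, but remember the best point seen so far.
    val = noisy_func(X)
    if val < best["val"]:
        best["val"], best["x"] = val, np.array(X, copy=True)
    return val

# First pass: identical to the run above, just with the tracking wrapper.
frigidum.sa(random_start=random_start,
            neighbours=[random_small_step, random_big_step],
            objective_function=tracked_func,
            T_start=10**2,
            T_stop=0.00001,
            repeats=10**4,
            copy_state=frigidum.annealing.copy)

# Second pass: re-anneal from the best-found-so-far with only small moves.
frigidum.sa(random_start=lambda: np.array(best["x"], copy=True),
            neighbours=[random_small_step],
            objective_function=tracked_func,
            T_start=1,            # assumption: start cool, since we only want to polish
            T_stop=0.00001,
            repeats=10**4,
            copy_state=frigidum.annealing.copy)

print(best["val"], best["x"])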

Working out which points lat/lon coordinates are closest to

I currently have a list of coordinates
[(52.14847612092221, 0.33689512047881015),
(52.14847612092221, 0.33689512047881015),
(52.95756796776235, 0.38027099942700493),
(51.78723479900971, -1.4214854900618064)
...]
I would like to split this list into 3 separate lists/dataframes corresponding to which city they are closest to (in this case the coordinates are all in the UK and the 3 cities are Manchester, Cardiff and London).
So as the end result I would like the current single list of coordinates to be split up, ideally into separate lists, although a dataframe with 3 columns would also be fine, e.g.:
leeds cardiff london
(51.78723479900971, (51.78723479900971, (51.78723479900971,
-1.4214854900618064) -1.4214854900618064) -1.4214854900618064)
(those are obviously not correct coordinates!)
Hope that makes sense. It doesn't have to be overly accurate (no need to take the curvature of the earth into consideration or anything like that!)
I'm really not sure where to start with this - I'm very new to python and would appreciate any help!
Thanks in advance
This will get you started:
from geopy.geocoders import Nominatim

geolocator = Nominatim()
places = ['london', 'cardiff', 'leeds']
coordinates = {}
for i in places:
    coordinates[i] = (geolocator.geocode(i).latitude, geolocator.geocode(i).longitude)

>>> print coordinates
{'cardiff': (51.4816546, -3.1791933), 'leeds': (53.7974185, -1.543794), 'london': (51.5073219, -0.1276473)}
You can now hook this up to a pandas dataframe and calculate the distance metric between your coordinates and the city coordinates above.
OK, so now we want to compute distances against what is a very small array (the city coordinates).
Here's some code:
import numpy as np
single_point = [3, 4] # A coordinate
points = np.arange(20).reshape((10,2)) # Lots of other coordinates
dist = (points - single_point)**2
dist = np.sum(dist, axis=1)
dist = np.sqrt(dist)
From here there is any number of things you can do. You can sort it using numpy, or you can place it in a pandas dataframe and sort it there (though that's really just a wrapper for the numpy function I believe). Whichever you're more comfortable with.
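To tie the two pieces above together, here is a minimal sketch (assuming your list of coordinates is stored as points, and using the places/coordinates dict built with geopy above): compute the distance from every point to every city in one go and take the argmin per row.

import numpy as np
import pandas as pd

pts = np.array(points)                               # shape (N, 2): (lat, lon) pairs
cities = np.array([coordinates[p] for p in places])  # shape (3, 2), same order as `places`

# (N, 3) matrix of squared distances from each point to each city.
d2 = ((pts[:, None, :] - cities[None, :, :]) ** 2).sum(axis=2)
nearest = np.array(places)[d2.argmin(axis=1)]

df = pd.DataFrame({"lat": pts[:, 0], "lon": pts[:, 1], "nearest_city": nearest})
per_city = {city: df[df["nearest_city"] == city] for city in places}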
This is a pretty brute force approach, and not too adaptable. However, that can be the easiest to understand and might be plenty efficient for the problem at hand. It also uses only pure python, which may help you to understand some of python's conventions.
points = [(52.14847612092221, 0.33689512047881015), (52.14847612092221, 0.33689512047881015), (52.95756796776235, 0.38027099942700493), (51.78723479900971, -1.4214854900618064), ...]

cardiff = (51.4816546, -3.1791933)
leeds = (53.7974185, -1.543794)
london = (51.5073219, -0.1276473)

def distance(pt, city):
    return ((pt[0] - city[0])**2 + (pt[1] - city[1])**2)**0.5

cardiff_pts = []
leeds_pts = []
london_pts = []
undefined_pts = []  # for points equidistant between two/three cities

for pt in points:
    d_cardiff = distance(pt, cardiff)
    d_leeds = distance(pt, leeds)
    d_london = distance(pt, london)
    if (d_cardiff < d_leeds) and (d_cardiff < d_london):
        cardiff_pts.append(pt)
    elif (d_leeds < d_cardiff) and (d_leeds < d_london):
        leeds_pts.append(pt)
    elif (d_london < d_cardiff) and (d_london < d_leeds):
        london_pts.append(pt)
    else:
        undefined_pts.append(pt)
Note that this solution assumes the values are in a Cartesian reference frame, which latitude/longitude pairs are not.
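The question says that's fine here, but if you do later want great-circle distances, here is a minimal haversine sketch that could stand in for distance() above:

from math import radians, sin, cos, asin, sqrt

def haversine_km(pt, city):
    # Great-circle distance in kilometres between two (lat, lon) pairs given in degrees.
    lat1, lon1, lat2, lon2 = map(radians, (pt[0], pt[1], city[0], city[1]))
    a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km is roughly the mean Earth radius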

Using astropy.fits and numpy to apply coincidence corrections to SWIFT fits image

This question may be a little specialist, but hopefully someone might be able to help. I normally use IDL, but for developing a pipeline I'm looking to use python to improve running times.
My fits file handling setup is as follows:
import numpy as numpy
from astropy.io import fits

#Directory: /Users/UCL_Astronomy/Documents/UCL/PHASG199/M33_UVOT_sum/UVOTIMSUM/M33_sum_epoch1_um2_norm.img
with fits.open('...') as ima_norm_um2:
    #Open UVOTIMSUM file once and close it after extracting the relevant values:
    ima_norm_um2_hdr = ima_norm_um2[0].header
    ima_norm_um2_data = ima_norm_um2[0].data
    #Individual dimensions for number of x pixels and number of y pixels:
    nxpix_um2_ext1 = ima_norm_um2_hdr['NAXIS1']
    nypix_um2_ext1 = ima_norm_um2_hdr['NAXIS2']

#Compute the size of the images (you can also do this manually rather than calling these keywords from the header):
#Call the header and data from the UVOTIMSUM file with the relevant keyword extensions:
corrfact_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
coincorr_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))

#Check that the dimensions are all the same:
print(corrfact_um2_ext1.shape)
print(coincorr_um2_ext1.shape)
print(ima_norm_um2_data.shape)

# Make a new image file to save the correction factors:
hdu_corrfact = fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_corrfact]).writeto('.../M33_sum_epoch1_um2_corrfact.img')

# Make a new image file to save the corrected image to:
hdu_coincorr = fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_coincorr]).writeto('.../M33_sum_epoch1_um2_coincorr.img')
I'm looking to then apply the following corrections:
# Define the variables from Poole et al. (2008) "Photometric calibration of the Swift ultraviolet/optical telescope":
alpha = 0.9842000
ft = 0.0110329
a1 = 0.0658568
a2 = -0.0907142
a3 = 0.0285951
a4 = 0.0308063

for i in range(nxpix_um2_ext1 - 1): #do begin
    for j in range(nypix_um2_ext1 - 1): #do begin
        if (numpy.less_equal(i, 4) | numpy.greater_equal(i, nxpix_um2_ext1-4) | numpy.less_equal(j, 4) | numpy.greater_equal(j, nxpix_um2_ext1-4)): #then begin
            #UVM2
            corrfact_um2_ext1[i,j] == 0
            coincorr_um2_ext1[i,j] == 0
        else:
            xpixmin = i-4
            xpixmax = i+4
            ypixmin = j-4
            ypixmax = j+4
            #UVM2
            ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax])
            xvec_UVM2 = ft*ima_UVM2sum
            fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2*xvec_UVM2) + (a3*xvec_UVM2*xvec_UVM2*xvec_UVM2) + (a4*xvec_UVM2*xvec_UVM2*xvec_UVM2*xvec_UVM2)
            Ctheory_UVM2 = - alog(1-(alpha*ima_UVM2sum*ft))/(alpha*ft)
            corrfact_um2_ext1[i,j] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum)
            coincorr_um2_ext1[i,j] = corrfact_um2_ext1[i,j]*ima_sk_um2[i,j]
The snippet above is where it goes wrong, as I have a mixture of IDL and Python syntax, and I'm not sure how to convert certain parts of the IDL to Python. For example, I'm not quite sure how to handle ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax]).
I'm also missing the part where it updates the correction factor and coincidence correction image files. If anyone has the patience to go over it with a fine-tooth comb and suggest the necessary changes, that would be excellent.
The original normalised image can be downloaded here: Replace ... in the above code with this file.
One very important thing about numpy is that it applies every mathematical or comparison function element-wise, so you probably don't need to loop through the arrays at all.
So maybe start with the step where you convolve your image with a sum filter. For 2D images this can be done with astropy.convolution.convolve or scipy.ndimage.filters.uniform_filter.
I'm not sure exactly what you want, but I think you want a 9x9 sum filter. uniform_filter computes the local mean, so multiply by the window size (9*9 = 81) to get the local sum that total() produced in IDL:
from scipy.ndimage.filters import uniform_filter

ima_UVM2sum = uniform_filter(ima_norm_um2_data, size=9) * 81  # local mean * window area = local sum
Since you want to discard any pixels that are at the borders (4 pixels), you can simply slice them away:
ima_UVM2sum_valid = ima_UVM2sum[4:-4, 4:-4]
This ignores the first and last 4 rows and the first and last 4 columns (the latter achieved by making the stop value negative).
Now you want to calculate the corrections:
import numpy as np

xvec_UVM2 = ft*ima_UVM2sum_valid
fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2**2) + (a3*xvec_UVM2**3) + (a4*xvec_UVM2**4)
Ctheory_UVM2 = -np.log(1 - (alpha*ima_UVM2sum_valid*ft)) / (alpha*ft)  # IDL's alog is the natural log, i.e. np.log
These are all arrays, so you still do not need to loop.
But then you want to fill your two images. Be careful, because the corrections are smaller (we ignored the first and last 4 rows/columns), so you have to write into the same region of the correction images, and slice the sky image to match:
corrfact_um2_ext1[4:-4, 4:-4] = Ctheory_UVM2 * (fxvec_UVM2 / ima_UVM2sum_valid)
coincorr_um2_ext1[4:-4, 4:-4] = corrfact_um2_ext1[4:-4, 4:-4] * ima_sk_um2[4:-4, 4:-4]
Still no loop, just numpy's mathematical functions. This means it is much faster (MUCH faster!) and does the same thing.
Maybe I have forgotten some slicing somewhere; that would show up as a "not broadcastable" error, so please report back if it does.
Just a note about your loop: Python's first axis is the second FITS axis and vice versa, so if you do need to loop over the axes, bear that in mind to avoid IndexErrors or unexpected results.
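On the missing write-out step: once the two arrays are filled, you can reuse the PrimaryHDU/HDUList pattern from the question. A sketch; the ... paths are the same placeholders as above, and overwrite=True assumes a reasonably recent astropy:

hdu_corrfact = fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_corrfact]).writeto('.../M33_sum_epoch1_um2_corrfact.img', overwrite=True)

hdu_coincorr = fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_coincorr]).writeto('.../M33_sum_epoch1_um2_coincorr.img', overwrite=True)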

How can I convert speeds in my code?

I have a task to create a program that detects number plates that are both foreign and speeding. For this, a section of road, elapsed times, etc. had to be created. The problem with the code isn't how unrealistic the speeds are, but that I recently found out from my teacher that the speeds are supposed to be in miles per hour, not metres per second.
import re

# DATA
distance = 0.06  # Distance between the Camera A and B; 0.06 = 600 metres
speed_limit = 20  # (meters per second)
number_plates = ["DV61 GGB",   #UK
                 "D31 EG 2A",  #F
                 "5314 10A02", #F
                 "24TEG 5063", #F
                 "TR09 TRE",   #UK
                 "524 WAL 75", #F
                 "TR44 VCZ",   #UK
                 "FR52 SWD",   #UK
                 "100 GBS 12", #F
                 "HG55 BPO"    #UK
                 ]
enter = [7.12, 7.17, 7.22, 7.12, 7.23, 7.41, 7.18, 7.25, 7.11, 7.38]
leave = [7.56, 7.39, 7.49, 7.56, 7.45, 7.57, 7.22, 7.31, 7.59, 7.47]

# Find the non-UK plates
pattern = "(?![A-Z]{2}\d{2}\s+[A-Z]{3}$)"
foreign_numbers = list(filter(lambda x: re.match(pattern, x), number_plates))

# Calculations for speed
elapsed = [(l - e)/100 for l, e in zip(leave, enter)]
speed = [distance/t for t in elapsed]

# Dictionary for foreign speeders + 2 conditions
foreign_speeders = {plate: speed
                    for plate, speed in zip(number_plates, speed)
                    if (plate in foreign_numbers) and (speed > speed_limit)}

print("10 cars have passed Camera A, then Camera B\nSpeed limit is 20 meters per second.\n")

# Write foreign speeders to file
for plate, speed in foreign_speeders.items():
    speeders_data = open("speeders.txt", "w")  # Opens file with name of "speeders.txt"
    speeders_data.write(
        "{0:>13s} is foreign and is speeding at{1:5.1f} mps, and has an excess speed of {2:3.1f} mps.".format(plate, speed, speed-speed_limit))
    speeders_data = open("speeders.txt", "r")
    print(speeders_data.read())
    speeders_data.close()
I wonder, would it be simpler to re-write all speed variables, for example speed_limit and the items of elapsed & formula of speed in their converted forms, or to convert the speed in the middle of the code?
Whichever solution seems more suitable, how can I do it?
As a start, you definitely need a function which can convert units, e.g. mps_to_mph as #mirosval suggested.
However, I suggest you should make it more obvious what the unit is at some point.
The simplest solution is to have it in the variable name: speed_mps = 78.7, speed_mph = mps_to_mph(speed_mps); otherwise you will not be able to understand your code when you read it again (well, you might never read this code again, but at least your teacher will... and every piece of code you write should be easy to understand without additional explanations).
In a more complex application, with many such calculations, you might want to have a class which can remember units and knows how to convert values, so that you can do something like:
speed = Speed(78.8, 'm/s')
if speed > Speed(60, 'mph'):
    # something
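A minimal sketch of what such a Speed class could look like (entirely hypothetical, not an existing library API; the only factor assumed is 1 mph = 0.44704 m/s). Storing everything internally in one base unit keeps the comparisons trivial:

class Speed:
    _TO_MPS = {'m/s': 1.0, 'mph': 0.44704}  # factors to convert each unit to metres per second

    def __init__(self, value, unit):
        self.mps = value * self._TO_MPS[unit]  # store internally in m/s

    def to(self, unit):
        return self.mps / self._TO_MPS[unit]

    def __gt__(self, other):
        return self.mps > other.mps


speed = Speed(78.8, 'm/s')
if speed > Speed(60, 'mph'):
    print("speeding:", round(speed.to('mph'), 1), "mph")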
You can write a function such as:
def mps_to_mph(mps):
    return 2.23694 * mps
and then use it in your for loop:
speed = map(mps_to_mph, speed)
speed_limit = mps_to_mph(speed_limit)
You can leave the calculations and thresholding in meters per second and only convert for the display.
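For example, a small sketch of that approach (assuming speed_limit and the values in foreign_speeders stay in metres per second, as in the question):

MPS_TO_MPH = 2.23694  # 1 m/s in miles per hour

for plate, spd in foreign_speeders.items():
    print("{0:>13s} is foreign and speeding at {1:5.1f} mph "
          "(limit {2:.1f} mph).".format(plate, spd * MPS_TO_MPH, speed_limit * MPS_TO_MPH))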

Given a set of locations and a single location, find the closest location from the set to the single

Given a set of locations and a single location, find the location from the set which is closest to the single location. It is not about finding a path through nodes; it's about distance in a bird's-eye view.
The locations are a property of a 'node' (it's for a Finite Element software extension). Problem is, this takes too friggin' long, and I'm looking for something quicker. One user has to call this function up to 500 times (with a different single location each time) on a set of 1 million locations (the set stays the same).
I'd rather not limit the set before doing this calculation; I don't have to query a database or anything. I feel this simple arithmetic should be done in a few ms anyway; I don't get why it takes so long.
# excerpt of how LocationByNodeId looks. 40k keys is a small model; it can contain up to a million keys.
node_location_by_nodeId = {43815: (3.2835714285714266, -1.8875000000000068, 0.23571428571420952),
                           43816: (3.227857142857142, -1.8875000000000068, 0.23571428571421035)}
location_in_space = (1, 3, 7)

def node_closest_to_location_in_space(location_in_space):
    global node_location_by_nodeId
    distances = {}
    for NodeId in node_location_by_nodeId:
        NodeLocation = node_location_by_nodeId[NodeId]
        distances[NodeId] = ((NodeLocation[0] - location_in_space[0])**2 +
                             (NodeLocation[1] - location_in_space[1])**2 +
                             (NodeLocation[2] - location_in_space[2])**2)
    return min(distances, key=distances.get)  # I don't really get this statement, I got it from here. Maybe this one is slow?

node_closest_to_location_in_space(location_in_space)
Edit: the solution taken from the answers below reduced the runtime to 35% of the original in a big data set (400 calls over a set of 1.2 million).
closest_node = None
closest_distance = 1e100  # An arbitrary, HUGE, value
x, y, z = location_in_space[:3]
for NodeId, NodeLocation in LocationByNodeId.iteritems():
    distance = (NodeLocation[0] - x)**2 + (NodeLocation[1] - y)**2 + (NodeLocation[2] - z)**2
    if distance < closest_distance:
        closest_distance = distance
        closest_node = NodeId
return closest_node
You cannot run a simple linear search on an unsorted dict and expect it to be fast (at least not very fast).
There are many algorithms that help you tackle this problem in a much more optimized way.
An R-tree, as suggested, is a perfect data structure to store your locations.
You can also look for solutions on the Wikipedia page Nearest Neighbor Search.
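As a concrete illustration, a hedged sketch with SciPy's k-d tree (assuming SciPy is available in your environment): build the tree from the node locations once, then each of the ~500 queries is a cheap nearest-neighbour lookup.

import numpy as np
from scipy.spatial import cKDTree

node_ids = list(node_location_by_nodeId)
coords = np.array([node_location_by_nodeId[n] for n in node_ids])
tree = cKDTree(coords)  # build once; reuse for every query

def node_closest_to_location_in_space(location_in_space):
    _, idx = tree.query(location_in_space)  # index of the nearest node
    return node_ids[idx]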
Indexing into your location argument takes time, and location doesn't change for all your million nodes, so lift these invariants out of the for loop:
for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
    distance = ((NodeLocation[0] - location_in_space[0])**2 +
                (NodeLocation[1] - location_in_space[1])**2 +
                (NodeLocation[2] - location_in_space[2])**2)
    if distance <= closest_distance:
        closest_distance = distance
        closest_node = NodeId
becomes:
x, y, z = location_in_space
for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
    distance = ((NodeLocation[0] - x)**2 +
                (NodeLocation[1] - y)**2 +
                (NodeLocation[2] - z)**2)
    if distance <= closest_distance:
        closest_distance = distance
        closest_node = NodeId
Now these become simple (and faster) local value references.
You can also try replacing your distance calculation with calls to math.hypot, which is implemented in fast C code:
from math import hypot

x, y, z = location_in_space
for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
    distance = hypot(hypot(NodeLocation[0] - x, NodeLocation[1] - y), NodeLocation[2] - z)
    if distance <= closest_distance:
        closest_distance = distance
        closest_node = NodeId
(hypot is written to only do 2D distance calculation, so to do 3D you have to call hypot(hypot(xdist,ydist),zdist).)
You're creating, and destroying, a dictionary (distances) with a million items each time you run this function, but that's not even necessary. Try this:
def node_closest_to_location_in_space(location_in_space):
    global node_location_by_nodeId
    closest_node = None
    closest_distance = 1e100  # An arbitrary, HUGE, value
    for NodeId, NodeLocation in node_location_by_nodeId.iteritems():
        distance = ((NodeLocation[0] - location_in_space[0])**2 +
                    (NodeLocation[1] - location_in_space[1])**2 +
                    (NodeLocation[2] - location_in_space[2])**2)
        if distance <= closest_distance:
            closest_distance = distance
            closest_node = NodeId
    return (closest_node, closest_distance)
I believe the overhead involved in creating and tearing down that distances dict every time you call the function was what was killing your performance. If so, this version should be faster.
