Improving a numpy implementation of a simple spring network - python

I wanted a very simple spring system written in numpy. The system is defined as a simple network of knots, connected by links. I'm not interested in evaluating the system over time; instead I want to go from an initial state, change a variable (usually move a knot to a new position) and solve the system until it reaches a stable state (the last applied force is below a given threshold). The knots have no mass, there's no gravity, and the forces are all derived from each link's current length vs. its initial length. The only "special" variable is that each knot can be set as "anchored" (it doesn't move).
So I wrote this simple solver below, and included a simple example. Jump to the very end for my question.
import numpy as np

np.set_printoptions(precision=4, suppress=True, linewidth=150, threshold=10)
def solver(kPos, kAnchor, link0, link1, w0, cycles=1000, precision=0.001, dampening=0.1, debug=False):
    """
    kPos      : vector array - knot positions
    kAnchor   : float array - knot's anchor state, 0 = moves freely, 1 = anchored (not moving)
    link0     : int array - knot index at one end of each link
    link1     : int array - knot index at the other end of each link
    w0        : float array - initial link lengths
    cycles    : int - eval stops when n cycles reached
    precision : float - eval stops when highest applied force is below this value
    dampening : float - keeps the system stable during each iteration
    """

    kPos = np.asarray(kPos)
    pos = np.array(kPos)  # copy of kPos
    kAnchor = 1 - np.clip(np.asarray(kAnchor).astype(float), 0, 1)[:, None]
    link0 = np.asarray(link0).astype(int)
    link1 = np.asarray(link1).astype(int)
    w0 = np.asarray(w0).astype(float)
    F = np.zeros(pos.shape)

    i = 0
    for i in range(cycles):

        # Init force applied per knot
        F = np.zeros(pos.shape)

        # Calculate forces
        AB = pos[link1] - pos[link0]  # get link vectors between knots
        w1 = np.sqrt(np.einsum('ij,ij->i', AB, AB))  # get link lengths (einsum replaces the removed inner1d)
        AB /= w1[:, None]  # normalize link vectors
        f = (w1 - w0)  # calculate force magnitudes
        f = f[:, None] * AB  # force vectors

        # Apply force vectors on each knot
        np.add.at(F, link0, f)
        np.subtract.at(F, link1, f)

        # Update point positions
        pos += F * dampening * kAnchor

        # If the maximum force applied is below our precision criteria, exit
        if np.amax(F) < precision:
            break

    # Debug info
    if debug:
        print('Iterations: %s' % i)
        print('Max Force: %s' % np.amax(F))

    return pos
Here's some test data to show how it works. In this case I'm using a grid, but in reality this can be any type of network, like a string with many knots, or a mess of polygons...:
import cProfile
# Create a 5x5 3D knot grid
z = np.linspace(-0.5, 0.5, 5)
x = np.linspace(-0.5, 0.5, 5)[::-1]
x,z = np.meshgrid(x,z)
kPos = np.array([np.array(thing) for thing in zip(x.flatten(), z.flatten())])
kPos = np.insert(kPos, 1, 0, axis=1)
'''
array([[-0.5 , 0. , 0.5 ],
[-0.25, 0. , 0.5 ],
[ 0. , 0. , 0.5 ],
...,
[ 0. , 0. , -0.5 ],
[ 0.25, 0. , -0.5 ],
[ 0.5 , 0. , -0.5 ]])
'''
# Define the links connecting each knots
link0 = [0,1,2,3,5,6,7,8,10,11,12,13,15,16,17,18,20,21,22,23,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
link1 = [1,2,3,4,6,7,8,9,11,12,13,14,16,17,18,19,21,22,23,24,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]
AB = kPos[link0] - kPos[link1]
w0 = np.sqrt(np.einsum('ij,ij->i', AB, AB))  # this is a square grid, each link's initial length will be 0.25
# Set the anchor states
kAnchor = np.zeros(len(kPos)) # All knots will be free floating
kAnchor[12] = 1 # Middle knot will be anchored
This is what the grid looks like:
If we run my code using this data, nothing will happen since the links aren't pushing or stretching:
print(np.allclose(kPos, solver(kPos, kAnchor, link0, link1, w0, debug=True)))
# Returns True
# Iterations: 0
# Max Force: 0.0
Now let's move that middle anchored knot up a bit and solve the system:
# Move the center knot up a little
kPos[12] = np.array([0,0.3,0])
# eval the system
new = solver(kPos, kAnchor, link0, link1, w0, debug=True) # positions will have moved
#Iterations: 102
#Max Force: 0.000976603249133
# Rerun with cProfile to see how fast it runs
cProfile.run('solver(kPos, kAnchor, link0, link1, w0)')
# 520 function calls in 0.008 seconds
And here's what the grid looks like after being pulled by that single anchored knot:
Question:
My actual use cases are a little more complex than this example and solve a little too slowly for my taste: 100-200 knots with a network of anywhere between 200-300 links take a few seconds to solve.
How can I make my solver function run faster? I'd consider Cython, but I have zero experience with C. Any help would be greatly appreciated.

Your method, at a cursory glance, appears to be an explicit under-relaxation type of method. Calculate the residual force at each knot, apply a factor of that force as a displacement, repeat until convergence. It's the repeating until convergence that takes the time. The more points you have, the longer each iteration takes, but you also need more iterations for the constraints at one end of the mesh to propagate to the other.
Have you considered an implicit method? Write the equation for the residual force at each non-constrained node, assemble them into a large matrix, and solve in one step. Information now propagates across the entire problem in a single step. As an additional benefit, the matrix you construct should be sparse, which scipy has a module for.
Wikipedia: explicit and implicit methods
EDIT: Below is an example of an implicit method matching (roughly) your problem. This solution is linear, so it doesn't take into account the effect of the calculated displacement on the force. You would need to iterate (or use non-linear techniques) to calculate this. Hope it helps.
#!/usr/bin/python3

import matplotlib.pyplot as pp
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg

#------------------------------------------------------------------------------#

# Generate a grid of knots
nX = 10
nY = 10
x = np.linspace(-0.5, 0.5, nX)
y = np.linspace(-0.5, 0.5, nY)
x, y = np.meshgrid(x, y)
knots = list(zip(x.flatten(), y.flatten()))

# Create links between the knots
links = []

# Horizontal links
for i in range(0, nY):
    for j in range(0, nX - 1):
        links.append((i*nX + j, i*nX + j + 1))

# Vertical links
for i in range(0, nY - 1):
    for j in range(0, nX):
        links.append((i*nX + j, (i + 1)*nX + j))

# Create constraints. This dict takes a knot index as a key and returns the
# fixed z-displacement associated with that knot.
constraints = {
    0          : 0.0,
    nX - 1     : 0.0,
    nX*(nY - 1): 0.0,
    nX*nY - 1  : 1.0,
    2*nX + 4   : 1.0,
    }

#------------------------------------------------------------------------------#

# Matrix i-coordinate, j-coordinate and value
Ai = []
Aj = []
Ax = []

# Right hand side array
B = np.zeros(len(knots))

# Loop over the links
for link in links:

    # Link geometry
    displacement = np.array([ knots[link[1]][i] - knots[link[0]][i] for i in range(2) ])
    distance = np.sqrt(displacement.dot(displacement))

    # For each node
    for i in range(2):

        # If it is not a constraint, add the force associated with the link to
        # the equation of the knot
        if link[i] not in constraints:
            Ai.append(link[i])
            Aj.append(link[i])
            Ax.append(-1/distance)

            Ai.append(link[i])
            Aj.append(link[not i])
            Ax.append(+1/distance)

        # If it is a constraint add a diagonal and a value
        else:
            Ai.append(link[i])
            Aj.append(link[i])
            Ax.append(+1.0)
            B[link[i]] += constraints[link[i]]

# Create the matrix and solve
A = sp.sparse.coo_matrix((Ax, (Ai, Aj))).tocsr()
X = sp.sparse.linalg.lsqr(A, B)[0]

#------------------------------------------------------------------------------#

# Plot the links
fg = pp.figure()
ax = fg.add_subplot(111, projection='3d')
for link in links:
    x = [ knots[i][0] for i in link ]
    y = [ knots[i][1] for i in link ]
    z = [ X[i] for i in link ]
    ax.plot(x, y, z)
pp.show()

Related

Fit a time series in python with a mean value as boundary condition

I have the following boundary conditions for a time series in python.
The notation I use here is t_x, where x describes the time in milliseconds (this is not my code, I just thought this notation is good to explain my issue).
t_0 = 0
t_440 = -1.6
t_830 = 0
mean_value = -0.6
I want to create a list that contains 84 values (so the spacing is 10ms between values).
The list should describe a "curve" that starts at zero, has its minimum value of -1.6 at 440ms (index 44 in the list), ends with 0 at 830ms (index 83 in the list), and the overall mean value of the list should be -0.6.
I absolutely could not come up with an idea how to "fit" the boundaries to create such a list.
I would really appreciate help.
It is a quick and dirty approach, but it works:
X = list(range(0, 830 + 1, 10))
Y = [0.0 for x in X]
Y[44] = -1.6
b = 12.3486
for x in range(44):
    Y[x] = -1.6*(b*x + x**2)/(b*44 + 44**2)
for x in range(83, 44, -1):
    Y[x] = -1.6*(b*(83-x) + (83-x)**2)/(b*38 + 38**2)
print(f'{sum(Y)/len(Y)=:8.6f}, {Y[0]=}, {Y[44]=}, {Y[83]=}')
from matplotlib import pyplot as plt
plt.plot(X, Y)
plt.show()
The code gives the following output:
sum(Y)/len(Y)=-0.600000, Y[0]=-0.0, Y[44]=-1.6, Y[83]=-0.0
and shows the following diagram:
The first step in coming up with the above approach was to create a linear sloping 'curve' from the minimum to the zeroes. It turned out that the linear approach gives a mean Y value that is too large in magnitude here, which means that the 'curve' must have a sharp peak at its minimum and needs to be approached with a polynomial. To keep things simple I decided to use a quadratic polynomial and approach the minimum from the left and right sides separately, as the curve isn't symmetric. The b-value was found by trial and error; its precision can be increased manually or by writing a small function that finds it iteratively.
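For example, a small bisection along those lines (my sketch, not part of the original answer; find_b and its bracketing interval are assumptions) could find b automatically, since the mean grows in magnitude as b grows:
def find_b(target_mean=-0.6, lo=0.0, hi=100.0, tol=1e-9):
    # Mean of the curve as a function of b; mirrors the loops above.
    def mean_for(b):
        Y = [0.0] * 84
        Y[44] = -1.6
        for x in range(44):
            Y[x] = -1.6*(b*x + x**2)/(b*44 + 44**2)
        for x in range(83, 44, -1):
            Y[x] = -1.6*(b*(83 - x) + (83 - x)**2)/(b*38 + 38**2)
        return sum(Y)/len(Y)
    # The mean gets more negative as b grows (the curve approaches the
    # linear shape), so bisect on b until the target mean is reached.
    while hi - lo > tol:
        mid = (lo + hi)/2
        if mean_for(mid) < target_mean:
            hi = mid
        else:
            lo = mid
    return (lo + hi)/2
print(find_b())  # should land near the hand-tuned 12.3486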
Update providing a generic solution as requested in a comment
The code below provides a
meanYboundaryXY(lbc = [(0,0), (440,-1.6), (830,0), -0.6], shape='saw')
function returning the X and Y lists of the time-series data calculated from the passed boundary values:
def meanYboundaryXY(lbc = [(0,0), (440,-1.6), (830,0), -0.6], shape='saw'):
    lbcXY = lbc[0:3] ; meanY_boundary = lbc[3]
    minX = min(x for x,y in lbcXY)
    maxX = max(x for x,y in lbcXY)
    minY = lbc[1][1]
    step = 10
    X = list(range(minX, maxX + 1, step))
    lenX = len(X)
    Y = [None for x in X]
    sumY = 0
    for x, y in lbcXY:
        Y[x//step] = y
        sumY += y
    target_sumY = meanY_boundary*lenX
    if shape == 'rect':
        subY = (target_sumY - sumY)/(lenX - 3)
        for i, y in enumerate(Y):
            if y is None:
                Y[i] = subY
    elif shape == 'saw':
        peakNextY = 2*(target_sumY - sumY)/(lenX - 1)
        iYleft = lbc[1][0]//step - 1
        iYrght = iYleft + 2
        iYstart = lbc[0][0] // step
        iYend = lbc[2][0] // step
        for i in range(iYstart, iYleft + 1, 1):
            Y[i] = peakNextY * i / iYleft
        for i in range(iYend, iYrght - 1, -1):
            Y[i] = peakNextY * (iYend - i)/(iYend - iYrght)
    else:
        raise ValueError(str(f'meanYboundaryXY() EXIT, {shape=} not in ["saw","rect"]'))
    return (X, Y)

X, Y = meanYboundaryXY()
print(f'{sum(Y)/len(Y)=:8.6f}, {Y[0]=}, {Y[44]=}, {Y[83]=}')
from matplotlib import pyplot as plt
plt.plot(X, Y)
plt.show()
The code outputs:
sum(Y)/len(Y)=-0.600000, Y[0]=0, Y[44]=-1.6, Y[83]=0
and creates the following two diagrams for shape='rect' and shape='saw':
As an old geek, I tried to solve the question with a simple algorithm.
First, calculate points as two symmetric lines from 0 to 44 and 44 to 88 (orange on the graph).
Then calculate the sum excluding the middle point, and its ratio to the sum of points when the mean is -0.6 (again excluding the middle point).
Apply that ratio to the previous points, except the middle point (blue curve on the graph).
This obtains the curve which Claudio called "saw".
For my own part, I think Claudio's quadratic interpolation is a better curve, but it needs trial-and-error loops.
import matplotlib

# define goals
nbPoints = 89
msPerPoint = 10
midPoint = nbPoints//2
valueMidPoint = -1.6
meanGoal = -0.6

def createSerieLinear():
    # two lines, 0 up to 44, 44 down to 88 (89 values centered on 44)
    serie = [0 for i in range(0, nbPoints)]
    interval = valueMidPoint/midPoint
    for i in range(0, midPoint + 1):
        serie[i] = i*interval
        serie[nbPoints - 1 - i] = i*interval
    return serie

# keep an original to plot
orange = createSerieLinear()
# work on a base
base = createSerieLinear()
# total except midPoint
totalBase = (sum(base) - valueMidPoint)
# total goal except 44
totalGoal = meanGoal*nbPoints - valueMidPoint
# apply ratio to reduce
reduceRatio = totalGoal/totalBase
for i in range(0, midPoint):
    base[i] *= reduceRatio
    base[nbPoints - 1 - i] *= reduceRatio
# verify
meanBase = sum(base)/nbPoints
print("new mean:", meanBase)
# draw
from matplotlib import pyplot as plt
X = [i*msPerPoint for i in range(0, nbPoints)]
plt.plot(X, base)
plt.plot(X, orange)
plt.show()
new mean: -0.5999999999999998
Hope you enjoy simple things :)

Vispy multiple graphs

I'm fairly new to Python programming and I'm struggling with the Vispy library.
Basically, I have a Raspberry Pi connected to 2 Arduino accelerometer sensors. The Raspberry is sending the X, Y and Z values from both of the sensors through UDP to my computer. My computer then has to display 9 graphs: 6 for the evolution of x, y and z for both sensors and 3 for the differences between them (X1-X2, Y1-Y2 and Z1-Z2). Finally, it must all be in real time.
I wanted to use the Vispy library for that last point. After reading the documentation, I came up with the following code :
#!/usr/bin/env python3
import numpy as np
from vispy import app
from vispy import gloo
import socket
from itertools import count

# init x, y arrays
x1_vals = []
time_vals = []

# UDP connection from Raspberry pi
UDP_IP = ""
UDP_PORT = 5005
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_IP, UDP_PORT))

# Initialize the index and set it to 1
index = count()
next(index)

# Initialize the Canvas c
c = app.Canvas(keys='interactive')

vertex = """
attribute vec2 a_position;
void main (void)
{
    gl_Position = vec4(a_position, 0.0, 1.0);
}
"""

fragment = """
void main()
{
    gl_FragColor = vec4(0.0, 0.0, 15.0, 10.0);
}
"""

program = gloo.Program(vertex, fragment)

@c.connect
def on_resize(event):
    gloo.set_viewport(0, 0, *event.size)

@c.connect
def on_draw(event):
    gloo.clear((1,1,1,1))
    program.draw('line_strip')

def on_timer(event):
    # next index
    cpt = next(index)
    # Get data from UDP
    recv, addr = sock.recvfrom(1024)
    data = recv.decode('UTF-8').split(';')
    # We want to display only 100 samples so the graph stays readable,
    # so we delete the first value of the x array if there are more than 100 values
    if (cpt > 100):
        del x1_vals[0]
        time_vals = np.linspace(-1.0, +1.0, 100)
    else:
        time_vals = np.linspace(-1.0, +1.0, cpt)
    # The values must be bound between -1.0 and 1.0
    tmp = float(data[0])*0.5
    if (tmp >= 1):
        tmp = float(0.99)
    elif (tmp <= -1):
        tmp = float(-0.99)
    x1_vals.append(tmp)
    # Then we concatenate the arrays of x and y
    program['a_position'] = np.c_[time_vals, x1_vals].astype(np.float32)
    c.update()

c.timer = app.Timer('auto', connect=on_timer, start=True)
c.show()
app.run()
So as the comments describe, it first initializes the UDP connection and the canvas, then for each value received it updates the canvas with the newly added value. If the number of values exceeds 100, the first value of the array is deleted to keep a constant number of samples.
It works well when I want to display only the X1 accelerometer's evolution. So now I picked the code from the Vispy documentation which demonstrates how to show multiple graphs, but the code is a bit too complex for my level.
Basically, in my code I receive all the sensors values in the data array. I pick the first value [0] (X1), but the complete data looks like this : [x1, y1, z1, dx, dy, dz, x2, y2, z2] where dx = x1 - x2, dy = y1 - y2 and dz = z1 - z2. (the difference has to be directly calculated on the raspberry).
So I tried to modify the code from the documentation as following :
# Number of cols and rows in the table.
nrows = 3
ncols = 3
# Number of signals.
m = nrows*ncols
# Number of samples per signal.
n = 100
Because I want 9 graphs and only 100 samples per graph.
I ignored the index and the color, and deleted the amplitude as it is not required in my case. Basically, I almost kept the original code for the whole setup part, then I replaced the def on_timer with mine.
Now I'm trying to feed the a_position array from GLSL with my own data. But I'm not sure how to prepare the data to make it work properly with this code. I'm struggling to understand what these lines do:
# GLSL C code
VERT_SHADER = """
// Compute the x coordinate from the time index.
float x = -1 + 2*a_index.z / (u_n-1);
vec2 position = vec2(x - (1 - 1 / u_scale.x), a_position);
// Find the affine transformation for the subplots.
vec2 a = vec2(1./ncols, 1./nrows)*.9;
vec2 b = vec2(-1 + 2*(a_index.x+.5) / ncols,
-1 + 2*(a_index.y+.5) / nrows);
// Apply the static subplot transformation + scaling.
gl_Position = vec4(a*u_scale*position+b, 0.0, 1.0);
"""
# Python code
def __init__(self):
    self.program['a_position'] = y.reshape(-1, 1)

def on_timer(self, event):
    k = 10
    y[:, :-k] = y[:, k:]
    y[:, -k:] = amplitudes * np.random.randn(m, k)
    self.program['a_position'].set_data(y.ravel().astype(np.float32))
I deleted the surrounding code that I think I understand.
Note that even though I'm just starting out with Python, I'm aware that they are using a class definition for the Canvas while I'm using the bare object in my code. I understand the use of self and the like.
How can I adapt the code from the realtime_signals documentation to my case ?
Disclaimer: Overall that realtime signals example is, in my opinion, a bit of a hack. It "cheats" to produce as many plots as it does, but in the end the result is fast.
What that bit of shader code is doing is taking the series of line vertices and figuring out which "sub-plot" they should go in. All vertices of all the lines go into the shader as one array. The shader code is trying to say "this vertex is 23rd in the array, which means it must belong to sub-plot 5, and it is the 3rd point in that plot because we know we have 5 points per plot" (as an example). The shader does this mostly through the information in a_index. For example, this bit:
// Compute the x coordinate from the time index.
float x = -1 + 2*a_index.z / (u_n-1);
vec2 position = vec2(x - (1 - 1 / u_scale.x), a_position);
Is adjusting the x coordinate (a_position) based on which sub-plot the point falls in.
The next chunk:
// Find the affine transformation for the subplots.
vec2 a = vec2(1./ncols, 1./nrows)*.9;
vec2 b = vec2(-1 + 2*(a_index.x+.5) / ncols,
-1 + 2*(a_index.y+.5) / nrows);
// Apply the static subplot transformation + scaling.
gl_Position = vec4(a*u_scale*position+b, 0.0, 1.0);
Is trying to determine how big each subplot should be. So the first chunk was "what subplot does this point fall in" and this one is "where in that subplot does the point sit". This code is coming up with a linear affine transformation (y = m*x + b) to scale the line to the appropriate size so that all the subplots are the same size and don't overlap.
I'm not sure I can go into more detail without re-walking the whole script and trying to understand exactly what each value in a_index is.
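If it helps, here is roughly how the a_index array is built in that example (reconstructed from memory, so treat the exact expressions as an assumption): every vertex carries a (column, row, time-index) triple so the shader knows its sub-plot and its position within it.
import numpy as np
nrows, ncols, n = 3, 3, 100   # 9 sub-plots, 100 samples each
m = nrows * ncols
# One (col, row, time) triple per vertex; the shader reads a_index.x/.y
# to pick the sub-plot and a_index.z to place the point along x.
a_index = np.c_[np.repeat(np.repeat(np.arange(ncols), nrows), n),  # column of each vertex
                np.repeat(np.tile(np.arange(nrows), ncols), n),    # row of each vertex
                np.tile(np.arange(n), m)].astype(np.float32)
print(a_index.shape)  # (900, 3)
With that layout, your on_timer could keep a y buffer of shape (m, n), shift it left by one column, write the 9 parsed floats from data into the last column, and upload y.ravel().astype(np.float32) just as the example does.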
Edit: Another suggestion, in the long run you may want to move the UDP recv code to a separate thread (QThread if using a Qt backend) that emits a signal with the new data when it is available. This way the GUI/main thread stays responsive and isn't hung up waiting for data to come in.
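A minimal sketch of that idea with a plain Python thread and a queue (no Qt; the names data_queue and udp_listener are mine):
import socket
import threading
import queue

data_queue = queue.Queue()

def udp_listener():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 5005))
    while True:
        recv, addr = sock.recvfrom(1024)
        # parse the semicolon-separated floats and hand them to the GUI thread
        data_queue.put([float(v) for v in recv.decode('UTF-8').split(';')])

threading.Thread(target=udp_listener, daemon=True).start()

# ...and in on_timer, drain the queue instead of blocking on recvfrom:
# while not data_queue.empty():
#     values = data_queue.get_nowait()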

How to real-time filter with scipy and lfilter?

Disclaimer: I am probably not as good at DSP as I should be and therefore have more issues than I should have getting this code to work.
I need to filter incoming signals as they happen. I tried to make this code to work, but I have not been able to so far.
Referencing scipy.signal.lfilter doc
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt
from lib import fnlib
samples = 100
x = np.linspace(0, 7, samples)
y = [] # Unfiltered output
y_filt1 = [] # Real-time filtered
nyq = 0.5 * samples
f1_norm = 0.1 / nyq
f2_norm = 2 / nyq
b, a = scipy.signal.butter(2, [f1_norm, f2_norm], 'band', analog=False)
zi = scipy.signal.lfilter_zi(b,a)
zi = zi*(np.sin(0) + 0.1*np.sin(15*0))
This sets zi as zi*y[0] initially, which in this case is 0. I got it from the example code in the lfilter documentation, but I am not sure if this is correct at all.
Then it comes to the point where I am not sure what to do with the few initial samples.
The coefficient arrays a and b both have length 5 here (len(a) = 5).
As lfilter takes input values from now to n-4, do I pad it with zeroes, or do I need to wait until 5 samples have gone by and take them as a single block, then continuously sample each next step in the same way?
for i in range(0, len(a)-1):  # Append 0 as initial values, wrong?
    y.append(0)
step = 0

for i in range(0, samples):  # x:
    tmp = np.sin(x[i]) + 0.1*np.sin(15*x[i])
    y.append(tmp)
    # What to do with the initial filterings until len(y) == len(a)?
    if (step > len(a)):
        y_filt, zi = scipy.signal.lfilter(b, a, y[-len(a):], axis=-1, zi=zi)
        y_filt1.append(y_filt[4])

print(len(y))
y = y[4:]
print(len(y))
y_filt2 = scipy.signal.lfilter(b, a, y)  # Offline filtered

plt.plot(x, y, x, y_filt1, x, y_filt2)
plt.show()
I think I had the same problem, and found a solution on https://github.com/scipy/scipy/issues/5116:
import numpy as np
from scipy import signal  # (scipy no longer re-exports zeros/random, so numpy is used here)

def filter_sbs():
    data = np.random.random(2000)
    b = signal.firwin(150, 0.004)
    z = signal.lfilter_zi(b, 1) * data[0]
    result = np.zeros(data.size)
    for i, x in enumerate(data):
        result[i], z = signal.lfilter(b, 1, [x], zi=z)
    return result

if __name__ == '__main__':
    result = filter_sbs()
The idea is to pass the filter state z in each subsequent call to lfilter. For the first few samples the filter may give strange results, but later (depending on the filter length) it starts to behave correctly.
The problem is not how you are buffering the input. The problem is that in the 'offline' version, the state of the filter is initialized using lfilter_zi, which computes the internal state of an LTI system such that the output is already in steady state when new samples arrive at the input. In the 'real-time' version, you skip this, so the filter's initial state is 0. You can either initialize both versions using lfilter_zi or initialize both to 0. Then it doesn't matter how many samples you filter at a time.
Note, if you initialize to 0, the filter will 'ring' for a certain amount of time before reaching a steady state. In the case of FIR filters, there is an analytic solution for determining this time. For many IIR filters, there is not.
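As a quick illustration of the FIR case (my example, not part of the original answer): a length-N FIR filter started from zero state settles after N-1 samples on a constant input.
import numpy as np
from scipy import signal
b = signal.firwin(150, 0.004)           # length-150 FIR filter
y = signal.lfilter(b, 1, np.ones(500))  # constant input, zero initial state
# the output sits at its steady-state value sum(b) from sample len(b)-1 onwards
print(np.allclose(y[len(b)-1:], b.sum()))  # True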
The following is correct. For simplicity's sake I initialize to 0 and feed the input one sample at a time. However, any non-zero block size will produce equivalent output.
import numpy as np
from scipy import signal

def filter_sbs(data, b):
    z = np.zeros(b.size - 1)
    result = np.zeros(data.size)
    for i, x in enumerate(data):
        result[i], z = signal.lfilter(b, 1, [x], zi=z)
    return result

def filter(data, b):
    result = signal.lfilter(b, 1, data)
    return result

if __name__ == '__main__':
    data = np.random.random(20000)
    b = signal.firwin(150, 0.004)
    result1 = filter_sbs(data, b)
    result2 = filter(data, b)
    print(result1 - result2)
Output:
[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 ... -5.55111512e-17
0.00000000e+00 1.66533454e-16]

Calculating medoid of a cluster (Python)

So I'm running a KNN in order to create clusters. From each cluster, I would like to obtain the medoid of the cluster.
I'm employing a fractional distance metric in order to calculate distances:
δ(x, y) = ( Σ_{i=1}^{d} |x^i - y^i|^f )^(1/f)
where d is the number of dimensions, the first data point's coordinates are x^i, the second data point's coordinates are y^i, and f is an arbitrary number between 0 and 1.
I would then calculate the medoid as:
medoid(S) = argmin_{x ∈ S} Σ_{y ∈ S} δ(x, y)
where S is the set of data points, and δ is the absolute value of the distance metric used above.
I've looked online to no avail trying to find implementations of the medoid (even with other distance metrics), but most things were specifically k-means or k-medoids, which (I think) is relatively different from what I want.
Essentially this boils down to me being unable to translate the math into effective programming. Any help or pointers in the right direction would be much appreciated! Here's a short list of what I have so far:
I have figured out how to calculate the fractional distance metric (the first equation), so I think I'm good there.
I know numpy has an argmin() function (documented here).
Extra points for increased efficiency without loss of accuracy. I'm trying not to brute-force this by calculating every single pairwise fractional distance, because the number of point pairs grows quadratically...
compute pairwise distance matrix
compute column or row sum
argmin to find medoid index
i.e. numpy.argmin(distMatrix.sum(axis=0)) or similar.
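A minimal sketch of that recipe with the fractional metric, using plain numpy broadcasting (X and f here are assumptions, not the asker's data):
import numpy as np

def medoid_index(X, f=0.3):
    # pairwise fractional distances: D[i,j] = (sum_k |X[i,k] - X[j,k]|^f)^(1/f)
    D = np.abs(X[:, None, :] - X[None, :, :]) ** f
    D = D.sum(axis=-1) ** (1.0 / f)
    # the medoid minimizes the summed distance to all other points
    return np.argmin(D.sum(axis=0))

X = np.random.rand(100, 4)
print(medoid_index(X))
Note this builds an (n, n, d) intermediate array, so chunk the first axis for large datasets.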
So I've accepted the answer here, but I thought I'd provide my implementation if anyone else was trying to do something similar:
(1) This is the distance function:
def fractional(p_coord_array, q_coord_array):
    # f is an arbitrary value, but must be greater than zero and
    # less than one. In this case, I used 3/10. I took advantage
    # of the difference of cubes in this case, so that I wouldn't
    # encounter an overflow error.
    a = np.sum(np.array(p_coord_array, dtype=np.float64))
    b = np.sum(np.array(q_coord_array, dtype=np.float64))
    a2 = np.sum(np.power(p_coord_array, 2))
    ab = np.sum(p_coord_array) * np.sum(q_coord_array)
    b2 = np.sum(np.power(q_coord_array, 2))
    diffab = a - b
    suma2abb2 = a2 + ab + b2
    temp_dist = abs(diffab * suma2abb2)
    temp_dist = np.power(temp_dist, 1./10)
    dist = np.power(temp_dist, 10./3)
    return dist
(2) The medoid function (if the length of the dataset was less than 6000 [if greater than that, I ran into overflow errors... I'm still working on that bit to be perfectly honest...]):
def medoid(dataset):
    point = []
    w = len(dataset)
    if(len(dataset) < 6000):
        h = len(dataset)
        dist_matrix = [[0 for x in range(w)] for y in range(h)]
        list_combinations = [(counter_1, counter_2, data_1, data_2) for counter_1, data_1 in enumerate(dataset) for counter_2, data_2 in enumerate(dataset) if counter_1 < counter_2]
        for counter_3, tuple in enumerate(list_combinations):
            temp_dist = fractional(tuple[2], tuple[3])
            dist_matrix[tuple[0]][tuple[1]] = abs(temp_dist)
            dist_matrix[tuple[1]][tuple[0]] = abs(temp_dist)
        # the medoid is the point with the smallest summed distance to all others
        point = dataset[np.argmin(np.sum(dist_matrix, axis=0))]
    return point
Any questions, feel free to comment!
If you don't mind using brute force this might help:
import numpy as np

def calc_medoid(X, Y, f=2):
    n = len(X)
    m = len(Y)
    dist_mat = np.zeros((m, n))
    # compute distance matrix
    for j in range(n):
        center = X[j, :]
        for i in range(m):
            if i != j:
                dist_mat[i, j] = np.linalg.norm(Y[i, :] - center, ord=f)
    medoid_id = np.argmin(dist_mat.sum(axis=0))  # sum over y
    return medoid_id, X[medoid_id, :]
Here is an example of computing a medoid for a single cluster with Euclidean distance.
import numpy as np, pandas as pd, matplotlib.pyplot as plt

a, b, c, d = np.array([0, 1]), np.array([1, 3]), np.array([4, 2]), np.array([3, 1.5])
vCentroid = np.mean([a, b, c, d], axis=0)

def GetMedoid(vX):
    vMean = np.mean(vX, axis=0)                              # compute centroid
    return vX[np.argmin([sum((x - vMean)**2) for x in vX])]  # pick the point closest to the centroid

vMedoid = GetMedoid([a, b, c, d])
print(f'centroid = {vCentroid}')
print(f'medoid   = {vMedoid}')

df = pd.DataFrame([a, b, c, d], columns=['x', 'y'])
ax = df.plot.scatter('x', 'y', grid=True, title='Centroid in 2D plane', s=100);
plt.plot(vCentroid[0], vCentroid[1], 'ro', ms=10);  # plot centroid as red circle
plt.plot(vMedoid[0], vMedoid[1], 'rx', ms=20);      # plot medoid as red cross
You can also use the following package to compute medoid for one or more clusters
!pip -q install scikit-learn-extra > log
from sklearn_extra.cluster import KMedoids
GetMedoid = lambda vX: KMedoids(n_clusters=1).fit(vX).cluster_centers_
GetMedoid([a, b, c, d])[0]
I would say that you just need to compute the median.
np.median(np.asarray(points), axis=0)
Your median is the point with the biggest centrality.
Note: if you are using distances other than Euclidean, this doesn't hold.

Python PCA - projection into lower dimensional space

I am trying to implement PCA, which worked well regarding intermediate results such as eigenvalues and eigenvectors. Yet when I try to project the data (3-dimensional) into a 2D principal-component space, the result is wrong.
I spent a lot of time comparing my code to other implementations such as:
http://sebastianraschka.com/Articles/2014_pca_step_by_step.html
Yet after a long time there is no progress and I cannot find the mistake. I assume the problem is a simple coding mistake, given the correct intermediate results.
Thanks in advance to anyone who actually read this question, and thanks even more to those who give helpful comments/answers.
My code is as follows:
import numpy as np

class PCA():
    def __init__(self, X):
        # center the data
        X = X - X.mean(axis=0)
        # calculate covariance matrix based on X where data points are represented in rows
        C = np.cov(X, rowvar=False)
        # get eigenvectors and eigenvalues
        d, u = np.linalg.eigh(C)
        # sort both eigenvectors and eigenvalues descending regarding the eigenvalue
        # the output of np.linalg.eigh is sorted ascending, therefore both are turned
        # around to reach a descending order
        self.U = np.asarray(u).T[::-1]
        self.D = d[::-1]

    # problem starts here
    def project(self, X, m):
        # use the top m eigenvectors with the highest eigenvalues for the transformation matrix
        Z = np.dot(X, np.asmatrix(self.U[:m]).T)
        return Z
The result of my code is:
([[ 0.03463706, -2.65447128],
[-1.52656731, 0.20025725],
[-3.82672364, 0.88865609],
[ 2.22969475, 0.05126909],
[-1.56296316, -2.22932369],
[ 1.59059825, 0.63988429],
[ 0.62786254, -0.61449831],
[ 0.59657118, 0.51004927]])
The correct result (e.g. as computed by sklearn's PCA):
([[ 0.26424835, -2.25344912],
[-1.29695602, 0.60127941],
[-3.59711235, 1.28967825],
[ 2.45930604, 0.45229125],
[-1.33335186, -1.82830153],
[ 1.82020954, 1.04090645],
[ 0.85747383, -0.21347615],
[ 0.82618248, 0.91107143]])
The input is defined as follows:
X = np.array([
[-2.133268233289599,0.903819474847349,2.217823388231679,-0.444779660856219,-0.661480010318842,-0.163814281248453,-0.608167714051449, 0.949391996219125],
[-1.273486742804804,-1.270450725314960,-2.873297536940942, 1.819616794091556,-2.617784834189455, 1.706200163080549,0.196983250752276,0.501491995499840],
[-0.935406638147949,0.298594472836292,1.520579082270122,-1.390457671168661,-1.180253547776717,-0.194988736923602,-0.645052874385757,-1.400566775105519]]).T
You need to center your data by subtracting the mean before you project it onto the new basis:
mu = X.mean(0)
C = np.cov(X - mu, rowvar=False)
d, u = np.linalg.eigh(C)
U = u.T[::-1]
Z = np.dot(X - mu, U[:2].T)
print(Z)
# [[ 0.26424835 -2.25344912]
# [-1.29695602 0.60127941]
# [-3.59711235 1.28967825]
# [ 2.45930604 0.45229125]
# [-1.33335186 -1.82830153]
# [ 1.82020954 1.04090645]
# [ 0.85747383 -0.21347615]
# [ 0.82618248 0.91107143]]
