I have a set of x, y and z points and am trying to fit a plane to this three-dimensional data so that z=f(x,y) can be calculated for any x and y.
I am hoping to get an equation for the plane and plot the graph in a Jupyter notebook for visualization.
This is the (working) code I've been using to plot my data:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import pandas as pd
x = np.arange(-12, 1)
y = np.arange(-40,-25)
Z = np.array([[402., 398., 395., 391., 387., 383., 379., 375., 371., 367., 363., 358., 354.],
[421., 417., 413., 409., 406., 402., 398., 393., 389., 385., 381.,
376., 372.],
[440., 436., 432., 429., 425., 421., 416., 412., 408., 404., 399.,
395., 391.],
[460., 456., 452., 448., 444., 440., 436., 432., 427., 423., 419.,
414., 410.],
[480., 477., 473., 469., 465., 460., 456., 452., 447., 443., 438.,
434., 429.],
[501., 498., 494., 490., 485., 481., 477., 472., 468., 463., 459.,
454., 449.],
[523., 519., 515., 511., 507., 502., 498., 494., 489., 484., 480.,
475., 470.],
[545., 541., 537., 533., 529., 524., 520., 515., 511., 506., 501.,
496., 492.],
[568., 564., 560., 556., 551., 547., 542., 538., 533., 528., 523.,
518., 513.],
[592., 588., 583., 579., 575., 570., 565., 561., 556., 551., 546.,
541., 536.],
[616., 612., 607., 603., 598., 594., 589., 584., 579., 575., 569.,
564., 559.],
[640., 636., 632., 627., 623., 618., 613., 609., 604., 599., 593.,
588., 583.],
[666., 662., 657., 653., 648., 643., 638., 633., 628., 623., 618.,
613., 607.],
[692., 688., 683., 679., 674., 669., 664., 659., 654., 649., 643.,
638., 632.],
[ nan, 714., 710., 705., 700., 695., 690., 685., 680., 675., 669.,
664., 658.]])
X, Y = np.meshgrid(x, y)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
print(X.shape, Y.shape, Z.shape)
ax.plot_surface(X, Y, Z)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
I have tried implementing these solutions:
https://gist.github.com/amroamroamro/1db8d69b4b65e8bc66a6
http://inversionlabs.com/2016/03/21/best-fit-surfaces-for-3-dimensional-data.html
However, since my x and y arrays don't have the same length, I get this error message:
ValueError: all the input array dimensions except for the concatenation axis must match exactly
The data you shared plots fine for me: your X, Y and Z all have the same shape. There is one nan value in your Z array; you can drop that point when estimating the equation of the plane.
You want to fit your data to a plane in 3D, so this is a linear regression problem. You can use multivariate regression from the scikit-learn package to estimate the coefficients of the equation of the plane.
The equation of the plane is given by:
Z = a1 * X + a2 * Y + c
You can flatten your data as follows and use scikit-learn's linear_model to fit a plane to it:
# your data is stored as X, Y, Z
print(X.shape, Y.shape, Z.shape)
x1, y1, z1 = X.flatten(), Y.flatten(), Z.flatten()

# drop the single nan point before fitting
mask = ~np.isnan(z1)
x1, y1, z1 = x1[mask], y1[mask], z1[mask]

X_data = np.column_stack((x1, y1))  # shape (n, 2)
Y_data = z1

from sklearn import linear_model
reg = linear_model.LinearRegression().fit(X_data, Y_data)
print("coefficients of the plane equation (a1, a2):", reg.coef_)
print("value of the intercept, c:", reg.intercept_)
The above code fits a plane, i.e. a model that is linear in X and Y, to the data.
To fit a second-degree surface instead, read on.
The second-degree surface equation has the following form:
Z = a1*X + a2*Y + a3*X*Y + a4*X*X + a5*Y*Y + c
To fit this surface, still using linear regression, modify the code above as follows:
# your data is stored as X, Y, Z
print(X.shape, Y.shape, Z.shape)
x1, y1, z1 = X.flatten(), Y.flatten(), Z.flatten()

# drop the single nan point before fitting
mask = ~np.isnan(z1)
x1, y1, z1 = x1[mask], y1[mask], z1[mask]

x1y1, x1x1, y1y1 = x1*y1, x1*x1, y1*y1
X_data = np.array([x1, y1, x1y1, x1x1, y1y1]).T  # X_data shape: (n, 5)
Y_data = z1

from sklearn import linear_model
reg = linear_model.LinearRegression().fit(X_data, Y_data)
print("coefficients of the surface equation (a1, a2, a3, a4, a5):", reg.coef_)
print("value of the intercept, c:", reg.intercept_)
I want to plot 2 trendlines for one scatterplot using Matplotlib in Python but I don't know how. The graph should be similar to this target plot (from here, fig.2).
I managed to plot 1 trendline on a scatterplot here but can't figure out how to plot another trend.
Below is what I have tried so far. It worked fine for other parameters I plotted, but not in this case, which makes me think it is not quite correct.
X = vO2.reshape(-1, 1)
Y = ve.reshape(-1, 1)
linear_regressor = LinearRegression()
linear_regressor.fit(X, Y)
y_pred = linear_regressor.predict(X)
x_pred = linear_regressor.predict(Y)
plt.scatter(X, Y)
plt.plot(X, y_pred, '-*',label="O2")
plt.plot(x_pred, Y, '-*',label="vent")
plt.xlabel("VO2 (L/min)")
plt.ylabel("VE (L/min)")
plt.show()
and also
z1 = np.polyfit(vO2, ve, 1)
p1 = np.poly1d(z1)
z2 = np.polyfit(ve, vO2, 1)
p2 = np.poly1d(z2)
plt.scatter(vO2, ve, label='original')
plt.plot(vO2, p1(vO2), label='trendline')
plt.plot(ve, p2(ve), label='trendline')
plt.show()
which also didn't look similar to the target plot.
I don't know how to continue. Thanks in advance!
example dataset:
vo2 = [1.673925 1.9015125 1.981775 2.112875 2.1112625 2.086375 2.13475
2.1777 2.176975 2.1857125 2.258925 2.2718375 2.3381 2.3330875
2.353725 2.4879625 2.448275 2.4829875 2.5084375 2.511275 2.5511
2.5678375 2.5844625 2.6101875 2.6457375 2.6602125 2.6939875 2.7210625
2.720475 2.767025 2.751375 2.7771875 2.776025 2.7319875 2.564
2.3977625 2.4459125 2.42965 2.401275 2.387175 2.3544375]
ve = [ 3.93125 7.1975 9.04375 14.06125 14.11875 13.24375
14.6625 15.3625 15.2 15.035 17.7625 17.955
19.2675 19.875 21.1575 22.9825 23.75625 23.30875
25.9925 25.6775 27.33875 27.7775 27.9625 29.35
31.86125 32.2425 33.7575 34.69125 36.20125 38.6325
39.4425 42.085 45.17 47.18 42.295 37.5125
38.84375 37.4775 34.20375 33.18 32.67708333]
OK, so you need to find the point where the slope of the line changes. I tried the 2nd derivative, but it was noisy and I couldn't find the right spot.
Another way is to try all possible split points, calculate left and right regression lines for each, and pick the pair with the best fit (r2 coefficient). Give this code a try. It is not complete: I do not know how to force the regression lines to go through the point in the middle, and it might be better to work with interpolated data if there are not enough data points.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
vo2 = [1.673925,1.9015125,1.981775,2.112875,2.1112625,2.086375,2.13475,2.1777,2.176975,2.1857125,2.258925,2.2718375,2.3381,2.3330875,2.353725,2.4879625,2.448275,2.4829875,2.5084375,2.511275,2.5511,2.5678375,2.5844625,2.6101875,2.6457375,2.6602125,2.6939875,2.7210625,2.720475,2.767025,2.751375,2.7771875,2.776025,2.7319875,2.564,2.3977625,2.4459125,2.42965,2.401275,2.387175,2.3544375]
ve = [ 3.93125,7.1975,9.04375,14.06125,14.11875,13.24375,14.6625,15.3625,15.2,15.035,17.7625,17.955,19.2675,19.875,21.1575,22.9825,23.75625,23.30875,25.9925,25.6775,27.33875,27.7775,27.9625,29.35,31.86125,32.2425,33.7575,34.69125,36.20125,38.6325,39.4425,42.085,45.17,47.18,42.295,37.5125,38.84375,37.4775,34.20375,33.18,32.67708333]
x = np.array(vo2)
y = np.array(ve)
sort_idx = x.argsort()
x = x[sort_idx]
y = y[sort_idx]
assert len(x) == len(y)
def fit(x, y):
    # least-squares line fit and its r2 score for one segment
    p = np.polyfit(x, y, 1)
    f = np.poly1d(p)
    r2 = r2_score(y, f(x))
    return p, f, r2

skip = 5  # minimal length of split data
r2 = [0] * len(x)
funcs = {}
for i in range(len(x)):
    if i < skip or i > len(x) - skip:
        continue
    _, f_left, r2_left = fit(x[:i], y[:i])
    _, f_right, r2_right = fit(x[i:], y[i:])
    r2[i] = r2_left * r2_right
    funcs[i] = (f_left, f_right)
split_ix = np.argmax(r2) # index of split
f_left,f_right = funcs[split_ix]
print(f"split point index: {split_ix}, x: {x[split_ix]}, y: {y[split_ix]}")
xd = np.linspace(min(x), max(x), 100)
plt.plot(x, y, "o")
plt.plot(xd, f_left(xd))
plt.plot(xd, f_right(xd))
plt.plot(x[split_ix], y[split_ix], "x")
plt.show()
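One possible way to address the point left open above (forcing both lines to pass through the split point) is to fit each side with the split point as a fixed pivot, i.e. regress y - y_split on x - x_split without an intercept. This is only a rough sketch under that assumption, reusing x, y, split_ix and xd from the code above:
def fit_through_point(x_seg, y_seg, x0, y0):
    # least-squares slope for a line constrained to pass through (x0, y0)
    a = np.sum((x_seg - x0) * (y_seg - y0)) / np.sum((x_seg - x0) ** 2)
    return np.poly1d([a, y0 - a * x0])
x0, y0 = x[split_ix], y[split_ix]
g_left = fit_through_point(x[:split_ix + 1], y[:split_ix + 1], x0, y0)
g_right = fit_through_point(x[split_ix:], y[split_ix:], x0, y0)
plt.plot(x, y, "o")
plt.plot(xd, g_left(xd))
plt.plot(xd, g_right(xd))
plt.show()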
I'm new to this forum.
I have a small problem understanding how to calculate the slope and intercept from values stored in a csv file.
This is my working code (minquadbasso.py is the program's name):
import numpy as np
import matplotlib.pyplot as plt # To visualize
import pandas as pd # To read data
from sklearn.linear_model import LinearRegression
data = pd.read_csv('TelefonoverticaleAsseY.csv') # load data set
X = data.iloc[:, 0].values.reshape(-1, 1) # values converts it into a numpy array
Y = data.iloc[:, 1].values.reshape(-1, 1) # -1 means that calculate the dimension of rows, but have 1 column
linear_regressor = LinearRegression() # create object for the class
linear_regressor.fit(X, Y) # perform linear regression
Y_pred = linear_regressor.predict(X) # make predictions
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='black')
plt.show()
If I use:
from scipy.stats import linregress
linregress(X, Y)
the interpreter gives me this error:
Traceback (most recent call last):
File "minquadbasso.py", line 11, in <module>
linregress(X, Y)
File "/usr/local/lib/python3.7/dist-packages/scipy/stats/_stats_mstats_common.py", line 116, in linregress
ssxm, ssxym, ssyxm, ssym = np.cov(x, y, bias=1).flat
ValueError: too many values to unpack (expected 4)
Can you help me understand what I'm doing wrong and suggest what to change in order to calculate the slope and intercept successfully?
My go-to for linear regression is np.polyfit. If you have an array (or list) of x data, and an array or list of y data just use
coeff = np.polyfit(x,y, deg = 1)
coeff is now an array of least-squares coefficients for your data, with the highest power of x first. So for a first-degree fit y = ax + b, a = coeff[0] and b = coeff[1]. deg is the degree of the polynomial you want to fit to your data. To evaluate your regression (predict) you can use np.polyval:
y_prediction = np.polyval(coeff, x)
If you want the covariance matrix for the fit
coeff, cov = np.polyfit(x,y, deg = 1, cov = True)
you can find more on it here.
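As a side note, the linregress error above most likely comes from passing the reshaped (n, 1) column vectors; scipy.stats.linregress expects 1-D arrays. A minimal runnable sketch of the polyfit approach, reusing the csv loading from the question:
import numpy as np
import pandas as pd
data = pd.read_csv('TelefonoverticaleAsseY.csv')
x = data.iloc[:, 0].values  # keep these 1-D, no reshape needed
y = data.iloc[:, 1].values
coeff = np.polyfit(x, y, deg=1)
slope, intercept = coeff[0], coeff[1]
print("slope:", slope, "intercept:", intercept)
y_prediction = np.polyval(coeff, x)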
I want to use a customized kernel function in the Epsilon-Support Vector Regression module of sklearn.svm. I found this code as an example of a customized kernel for SVC in the scikit-learn documentation:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
Y = iris.target
def my_kernel(X, Y):
    """
    We create a custom kernel:

                 (2  0)
    k(X, Y) = X  (    ) Y.T
                 (0  1)
    """
    M = np.array([[2, 0], [0, 1.0]])
    return np.dot(np.dot(X, M), Y.T)
h = .02 # step size in the mesh
# we create an instance of SVM and fit out data.
clf = svm.SVC(kernel=my_kernel)
clf.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolors='k')
plt.title('3-Class classification using Support Vector Machine with custom'
' kernel')
plt.axis('tight')
plt.show()
I want to define some function like:
def my_new_kernel(X):
    a, b, c = (random.randint(0, 100) for _ in range(3))
    # imagine f1, f2, f3 are functions like sin(x), cos(x), ...
    ans = a*f1(X) + b*f2(X) + c*f3(X)
    return ans
My understanding of the kernel method was that it's a function that takes the matrix of features (X) as input and returns a matrix of shape (n, 1). Then the svm appends the returned matrix to the feature columns and uses that to classify the labels Y.
In the code above the kernel is used in the svm.fit function, and I can't figure out what the X and Y inputs of the kernel are and what their shapes should be. If X and Y (the inputs of the my_kernel method) are the features and labels of the dataset, then how does the kernel work for test data, where we have no labels?
Actually I want to use svm for a dataset of shape (10000, 6) (5 columns = features, 1 column = label); if I want to use the my_new_kernel method, what would the inputs and output be, and what are their shapes?
Your exact issue is quite unclear; here are some remarks which may be helpful nevertheless.
I can't figure out what are X and Y inputs of kernel and their shapes. if X and Y (inputs of my_kernel method) are the features and label of dataset,
Indeed they are; from the documentation of fit:
Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel=”precomputed”, the
expected shape of X is (n_samples, n_samples).
y : array-like, shape(n_samples,)
Target values (class labels in classification, real numbers in regression)
exactly like they are for the default available kernels.
so then how does the kernel work for test data where we have no labels?
A close look at the code you have provided will reveal that the labels Y are indeed used only during training (fit); they are of course not used during prediction (clf.predict() in the code above - don't get confused with yy, which has nothing to do with Y).
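As a practical illustration (my own sketch, not part of the original answer): when sklearn calls a custom kernel, it passes two sample matrices, never the labels, and expects back the Gram matrix of shape (n_samples_1, n_samples_2). A rough sketch for SVR with the feature shape from the question (sample count reduced here for speed; the linear kernel is just a placeholder for your own similarity):
import numpy as np
from sklearn.svm import SVR
rng = np.random.RandomState(0)
features = rng.rand(1000, 5)   # 5 feature columns, as in the question
labels = rng.rand(1000)        # 1 target column
def my_new_kernel(A, B):
    # A: (n_a, 5), B: (n_b, 5); must return the Gram matrix of shape (n_a, n_b)
    return A @ B.T
reg = SVR(kernel=my_new_kernel)
reg.fit(features, labels)          # during fit the kernel sees (train, train)
pred = reg.predict(features[:10])  # during predict it sees (test, train)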
I'm attempting to plot my dataset, x and y (generated from a csv file via numpy.genfromtxt('/Users/.../somedata.csv', delimiter=',', unpack=True)), as a simple density plot. To ensure this is self-contained I will define them here:
x = [ 0.2933215 0.2336305 0.2898058 0.2563835 0.1539951 0.1790058
0.1957057 0.5048573 0.3302402 0.2896122 0.4154893 0.4948401
0.4688092 0.4404935 0.2901995 0.3793949 0.6343423 0.6786809
0.5126349 0.4326627 0.2318232 0.538646 0.1351541 0.2044524
0.3063099 0.2760263 0.1577156 0.2980986 0.2507897 0.1445099
0.2279241 0.4229934 0.1657194 0.321832 0.2290785 0.2676585
0.2478505 0.3810182 0.2535708 0.157562 0.1618909 0.2194217
0.1888698 0.2614876 0.1894155 0.4802076 0.1059326 0.3837571
0.3609228 0.2827142 0.2705508 0.6498625 0.2392224 0.1541462
0.4540277 0.1624592 0.160438 0.109423 0.146836 0.4896905
0.2052707 0.2668798 0.2506224 0.5041728 0.201774 0.14907
0.21835 0.1609169 0.1609169 0.205676 0.4500787 0.2504743
0.1906289 0.3447547 0.1223678 0.112275 0.2269951 0.1616036
0.1532181 0.1940938 0.1457424 0.1094261 0.1636615 0.1622345
0.705272 0.3158471 0.1416916 0.1290324 0.3139713 0.2422002
0.1593835 0.08493619 0.08358301 0.09691083 0.2580497 0.1805554 ]
y = [ 1.395807 1.31553 1.333902 1.253527 1.292779 1.10401 1.42933
1.525589 1.274508 1.16183 1.403394 1.588711 1.346775 1.606438
1.296017 1.767366 1.460237 1.401834 1.172348 1.341594 1.3845
1.479691 1.484053 1.468544 1.405156 1.653604 1.648146 1.417261
1.311939 1.200763 1.647532 1.610222 1.355913 1.538724 1.319192
1.265142 1.494068 1.268721 1.411822 1.580606 1.622305 1.40986
1.529142 1.33644 1.37585 1.589704 1.563133 1.753167 1.382264
1.771445 1.425574 1.374936 1.147079 1.626975 1.351203 1.356176
1.534271 1.405485 1.266821 1.647927 1.28254 1.529214 1.586097
1.357731 1.530607 1.307063 1.432288 1.525117 1.525117 1.510123
1.653006 1.37388 1.247077 1.752948 1.396821 1.578571 1.546904
1.483029 1.441626 1.750374 1.498266 1.571477 1.659957 1.640285
1.599326 1.743292 1.225557 1.664379 1.787492 1.364079 1.53362
1.294213 1.831521 1.19443 1.726312 1.84324 ]
Now, I have made many attempts to plot my contours using variations on:
delta = 0.025
OII_OIII_sAGN_sorted = numpy.arange(numpy.min(OII_OIII_sAGN), numpy.max(OII_OIII_sAGN), delta)
Dn4000_sAGN_sorted = numpy.arange(numpy.min(Dn4000_sAGN), numpy.max(Dn4000_sAGN), delta)
OII_OIII_sAGN_X, Dn4000_sAGN_Y = np.meshgrid(OII_OIII_sAGN_sorted, Dn4000_sAGN_sorted)
Z1 = matplotlib.mlab.bivariate_normal(OII_OIII_sAGN_X, Dn4000_sAGN_Y, 1.0, 1.0, 0.0, 0.0)
Z2 = matplotlib.mlab.bivariate_normal(OII_OIII_sAGN_X, Dn4000_sAGN_Y, 0.5, 1.5, 1, 1)
# difference of Gaussians
Z = 0.2 * (Z2 - Z1)
pyplot_middle.contour(OII_OIII_sAGN_X, Dn4000_sAGN_Y, Z, 12, colors='k')
This doesn't seem to give the desired output. I have also tried:
H, xedges, yedges = np.histogram2d(OII_OIII_sAGN,Dn4000_sAGN)
extent = [xedges[0],xedges[-1],yedges[0],yedges[-1]]
ax.contour(H, extent=extent)
Not quite working as I wanted either. Essentially, I am looking for something similar to this:
If anyone could help me with this I would be very grateful, either by suggesting a totally new method or modifying my existing code. Please also attach images of your output if you have some useful techniques or ideas.
seaborn does density plots right out of the box:
import seaborn as sns
import matplotlib.pyplot as plt
sns.kdeplot(x, y)
plt.show()
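Note: depending on your seaborn version, the positional two-argument form may be deprecated; in that case pass the data as keyword arguments, e.g. sns.kdeplot(x=x, y=y).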
It seems that histogram2d takes some fiddling to plot the contour in the right place. I took the transpose of the histogram matrix and also used the bin centers (the midpoints of consecutive xedges and yedges) instead of just removing one edge from the end.
from matplotlib import pyplot as plt
import numpy as np
fig = plt.figure()
h, xedges, yedges = np.histogram2d(x, y, bins=9)
xbins = xedges[:-1] + (xedges[1] - xedges[0]) / 2
ybins = yedges[:-1] + (yedges[1] - yedges[0]) / 2
h = h.T
CS = plt.contour(xbins, ybins, h)
plt.scatter(x, y)
plt.show()
I have a very simple 1D classification problem: a list of values [0, 0.5, 2] and their associated classes [0, 1, 2]. I would like to get the classification boundaries between those classes.
Adapting the iris example (for visualization purposes), getting rid of the non-linear models:
X = np.array([[x, 1] for x in [0, 0.5, 2]])
Y = np.array([1, 0, 2])
C = 1.0 # SVM regularization parameter
svc = svm.SVC(kernel='linear', C=C).fit(X, Y)
lin_svc = svm.LinearSVC(C=C).fit(X, Y)
Gives the following result:
LinearSVC is returning junk (why?), but the SVC with a linear kernel is working okay. So I would like to get the boundary values, which you can guess graphically: ~0.25 and ~1.25.
That's where I'm lost: svc.coef_ returns
array([[ 0.5 , 0. ],
[-1.33333333, 0. ],
[-1. , 0. ]])
while svc.intercept_ returns array([-0.125 , 1.66666667, 1. ]).
This is not explicit.
I must be missing something silly. How do I obtain those values? They seem obvious to compute; it would be ridiculous to iterate over the x-axis to find the boundary...
I had the same question and eventually found the solution in the sklearn documentation.
Given the weights W = svc.coef_[0] and the intercept I = svc.intercept_, the decision boundary is the line
y = a*x - b
with
a = -W[0]/W[1]
b = I[0]/W[1]
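Applied to the 1-D example from the question, where the second feature is the constant 1, each pairwise boundary collapses to a single x value. A small sketch, assuming svc is the fitted classifier from the question (the three rows of coef_ correspond to the one-vs-one class pairs):
W = svc.coef_        # shape (3, 2)
I = svc.intercept_   # shape (3,)
# boundary: W[k,0]*x + W[k,1]*1 + I[k] = 0  =>  x = -(W[k,1] + I[k]) / W[k,0]
boundaries = -(W[:, 1] + I) / W[:, 0]
print(boundaries)    # approximately [0.25, 1.25, 1.0]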
Exact boundary calculated from coef_ and intercept_
I think this is a great question and haven't been able to find a general answer to it anywhere in the documentation. This site really needs Latex, but anyway, I'll try to do my best without...
In general, a hyperplane is defined by its unit normal and an offset from the origin. So we hope to find some decision function of the form: x dot n + d > 0 (where the > may of course be replaced with >=).
In the case of the SVM Margins Example, we can manipulate the equation they start with to clarify its conceptual significance. First, let's establish the notational convenience of writing coef to represent coef_[0] and intercept to represent intercept_[0], since these arrays only have 1 value. Then some simple substitution yields the equation:
y + coef[0]*x/coef[1] + intercept/coef[1] = 0
Multiplying through by coef[1], we obtain
coef[1]*y + coef[0]*x + intercept = 0
And so we see that the coefficients and intercept function roughly as their names would imply. Applying one quick generalization of notation should make the answer clear - we will replace x and y with a single vector x.
coef[0]*x[0] + coef[1]*x[1] + intercept = 0
In general, the coef_ and intercept_ members of the svm classifier will have dimension matching the data set it was trained on, so we can extrapolate this equation to data of arbitrary dimension. And to avoid leading anyone astray, here is the final generalized decision boundary using the original variable names from the svm:
coef_[0][0]*x[0] + coef_[0][1]*x[1] + coef_[0][2]*x[2] + ... + coef_[0][n-1]*x[n-1] + intercept_[0] = 0
where the dimension of the data is n.
Or more tersely:
sum(coef_[0][i]*x[i]) + intercept_[0] = 0
where i sums over the range of the dimension of the input data.
Get decision line from SVM, demo 1
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
# we create 40 separable points
X, y = make_blobs(n_samples=40, centers=2, random_state=6)
# fit the model, don't regularize for illustration purposes
clf = svm.SVC(kernel='linear', C=1000)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired)
# plot the decision function
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# plot decision boundary and margins
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
ax.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=100,
linewidth=1, facecolors='none')
plt.show()
Prints:
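As a complement (my own sketch, not part of the original demo), the same separating line can be drawn directly from coef_ and intercept_ of the linear-kernel clf above, using the formula derived earlier:
w = clf.coef_[0]
b = clf.intercept_[0]
# coef_[0][0]*x0 + coef_[0][1]*x1 + intercept_[0] = 0  =>  x1 = -(w[0]*x0 + b) / w[1]
x0 = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
plt.plot(x0, -(w[0] * x0 + b) / w[1], 'r--')
plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired)
plt.show()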
Approximate the separating n-1 dimensional hyperplane of an SVM, Demo 2
import numpy as np
from sklearn import svm
from sklearn.svm import SVC
import matplotlib.pyplot as plt
np.random.seed(0)
mean1, cov1, n1 = [1, 5], [[1,1],[1,2]], 200  # 200 samples of class 1
x1 = np.random.multivariate_normal(mean1, cov1, n1)
y1 = np.ones(n1, dtype=int)
mean2, cov2, n2 = [2.5, 2.5], [[1,0],[0,1]], 300  # 300 samples of class 0
x2 = np.random.multivariate_normal(mean2, cov2, n2)
y2 = np.zeros(n2, dtype=int)
X = np.concatenate((x1, x2), axis=0)  # concatenate the class-1 and class-0 samples
y = np.concatenate((y1, y2))
clf = svm.SVC()
#fit the hyperplane between the clouds of data, should be fast as hell
clf.fit(X, y)
# repr of the fitted classifier, as echoed in an interactive session:
# SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
#     decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
#     max_iter=-1, probability=False, random_state=None, shrinking=True,
#     tol=0.001, verbose=False)
production_point = [1., 2.5]
answer = clf.predict([production_point])
print("Answer: " + str(answer))
plt.plot(x1[:,0], x1[:,1], 'ob', x2[:,0], x2[:,1], 'or', markersize = 5)
colormap = ['r', 'b']
color = colormap[answer[0]]
plt.plot(production_point[0], production_point[1], 'o' + str(color), markersize=20)
#I want to draw the decision lines
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.show()
Prints:
These hyperplanes are all straight as an arrow; they're just straight in higher dimensions and can't be comprehended by mere mortals confined to 3-dimensional space. These hyperplanes are cast into higher dimensions with the creative kernel functions, then flattened back into the visible dimension for your viewing pleasure. Here is a video trying to impart some intuition of what is going on in demo 2: https://www.youtube.com/watch?v=3liCbRZPrZA