I have conducted PCA on iris data as an exercise. Here is my code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA # as sklearnPCA
import pandas as pd
#=================
df = pd.read_csv('iris.csv')
# Split off the first 4 columns (feature values)
# and the last column (species)
X = df.iloc[:, 0:4].values
y = df.iloc[:, 4].values
X_std = StandardScaler().fit_transform(X)  # standardize the data
# Fit the model with X_std and apply the dimensionality reduction on X_std.
pca = PCA(n_components=2)  # keep 2 principal components
Y_pca = pca.fit_transform(X_std)
# How do I plot my results? I am stuck here!
Please advise on how to plot my original iris data and the derived principal components using a scatter plot.
Here is the way I think you can visualize it: put PC1 on the x-axis and PC2 on the y-axis and color each point by its species. Here is the code:
#first we need to map colors on labels
dfcolor = pd.DataFrame([['setosa','red'],['versicolor','blue'],['virginica','yellow']],columns=['Species','Color'])
mergeddf = pd.merge(df,dfcolor)
#Then we do the graph
plt.scatter(Y_pca[:,0],Y_pca[:,1],color=mergeddf['Color'])
plt.show()
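A slightly simpler variant, assuming the species column in iris.csv is literally named 'Species', is to map species names to colors with a dict instead of merging:
# Map each species directly to a color (the column name 'Species' is an assumption about the csv)
color_map = {'setosa': 'red', 'versicolor': 'blue', 'virginica': 'yellow'}
plt.scatter(Y_pca[:, 0], Y_pca[:, 1], color=df['Species'].map(color_map))
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()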
There are many ways to visualize a data set, and I want to collect the most useful ones here, using the iris data set as the example.
I would use either pandas' plotting tools or seaborn's.
import seaborn as sns
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
import pandas as pd
# Parallel Coordinates
# Load the data set
iris = sns.load_dataset("iris")
parallel_coordinates(iris, 'species', color=('#556270', '#4ECDC4', '#C7F464'))
plt.show()
and the result is as follows:
from pandas.plotting import andrews_curves
# Andrew Curves
andrews_curves(iris, 'species')
plt.show()
and its plot is shown below:
from seaborn import pairplot
# Pair Plot
pairplot(iris, hue='species')
plt.show()
which produces the pair-plot figure shown below.
Another plot, which I think is under-used but among the most informative, is the following one:
from plotly.express import scatter_3d
# Plotting in 3D by plotly.express that would show the plot with capability of zooming,
# changing the orientation, and rotating
scatter_3d(iris, x='sepal_length', y='sepal_width', z='petal_length', size="petal_width",
           color="species",
           color_discrete_map={"setosa": "blue", "versicolor": "violet", "virginica": "pink"}).show()
This one renders in your browser (it needs HTML5), and you can zoom, rotate, and change the orientation as you like. Remember that it is a scatter plot and the size of each ball encodes petal_width, so all four features appear in one single plot.
Naive Bayes is a classification algorithm for binary (two-class) and multiclass classification
problems. It is called naive because the probability calculations for each class are simplified
to make them tractable. Rather than attempting to calculate the joint probabilities of the
attribute values, the attributes are assumed to be conditionally independent given the class
value. This is a very strong assumption that rarely holds in real data, i.e. that the attributes
do not interact. Nevertheless, the approach performs surprisingly well even on data where this
assumption does not hold.
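Concretely, under that assumption the score for a class is just the class prior times the product of per-feature likelihoods. A minimal sketch of that computation for Gaussian features (illustrative only; the prior, means, and standard deviations below are made-up numbers):
import numpy as np

def gaussian_pdf(x, mean, std):
    # Per-feature Gaussian likelihood
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def naive_bayes_score(x, prior, means, stds):
    # P(class) * prod_i P(x_i | class): each feature enters independently
    return prior * np.prod(gaussian_pdf(x, means, stds))

# Hypothetical per-class parameters (estimated from training data in practice)
score = naive_bayes_score(np.array([5.1, 3.5]), prior=1/3,
                          means=np.array([5.0, 3.4]), stds=np.array([0.35, 0.38]))
The class with the highest score wins; GaussianNB below does essentially this (in log space).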
Here is a good example of developing a model to predict the labels of this data set. You can use it as a template for other models, because it covers the basic workflow.
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Load the data set
iris = sns.load_dataset("iris")
iris = iris.rename(index=str, columns={'sepal_length': '1_sepal_length', 'sepal_width': '2_sepal_width',
'petal_length': '3_petal_length', 'petal_width': '4_petal_width'})
# Setup X and y data (first two columns for the 2D plot, all four for the model)
X_data_plot = iris.iloc[:, 0:2]
y_labels_plot = iris.iloc[:, 4].replace({'setosa': 0, 'versicolor': 1, 'virginica': 2}).copy()
x_train, x_test, y_train, y_test = train_test_split(iris.iloc[:, 0:4], y_labels_plot, test_size=0.25,
                                                    random_state=42)  # this split is for the model
# Fit model
model_sk_plot = GaussianNB(priors=None)
nb_model = GaussianNB(priors=None)
model_sk_plot.fit(X_data_plot, y_labels_plot)
nb_model.fit(x_train, y_train)
# Our 2-dimensional classifier will be over variables X and Y
N_plot = 100
X_plot = np.linspace(4, 8, N_plot)
Y_plot = np.linspace(1.5, 5, N_plot)
X_plot, Y_plot = np.meshgrid(X_plot, Y_plot)
plot = sns.FacetGrid(iris, hue="species", height=5, palette='husl').map(plt.scatter, "1_sepal_length",
                                                                        "2_sepal_width").add_legend()
my_ax = plot.ax
# Computing the predicted class function for each value on the grid
zz = np.array([model_sk_plot.predict([[xx, yy]])[0] for xx, yy in zip(np.ravel(X_plot), np.ravel(Y_plot))])
# Reshaping the predicted class into the meshgrid shape
Z = zz.reshape(X_plot.shape)
# Plot the filled and boundary contours
my_ax.contourf(X_plot, Y_plot, Z, 2, alpha=.1, colors=('blue', 'green', 'red'))
my_ax.contour(X_plot, Y_plot, Z, 2, alpha=1, colors=('blue', 'green', 'red'))
# Add axis and title
my_ax.set_xlabel('Sepal length')
my_ax.set_ylabel('Sepal width')
my_ax.set_title('Gaussian Naive Bayes decision boundaries')
plt.show()
Feel free to add whatever you think is necessary to this, for example decision boundaries in 3D, which is something I have not done before.
I am trying to use TSNE to visualize data based on a Category to show me if the data is separable.
I have been trying to do this for the past two days but I am not getting a scatter plot showing the different categories plotted to enable me to see the relationship.
Instead, it is plotting all the data in a straight line, which cannot be correct because there are 5 distinct categories in the column I am trying to use as the label and legend.
What do I do to correct this?
import pandas as pd
from matplotlib.cm import get_cmap
from matplotlib.colors import rgb2hex
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from matplotlib import pyplot as plt
import numpy as np
# #region Loading Data
filename = 'Dataset/test.csv'
df = pd.read_csv(filename)
label = df.pop('Activity')
label_counts = label.value_counts()
# # Scale Data
scale = StandardScaler()
tsne_data= scale.fit_transform(df)
fig, axa = plt.subplots(2, 1, figsize=(15,10))
group = label.unique()
for i, labels in label.iteritems():
    # mask = (label == group)
    axa[0].scatter(x=tsne_data, y=tsne_data, label=group)
plt.legend()
plt.show()
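For reference, the usual pattern is to run TSNE on the scaled data and then scatter each category separately so the legend works; a minimal sketch under the same assumptions as the code above (a DataFrame df whose 'Activity' column was popped into label):
tsne = TSNE(n_components=2, random_state=42)
embedded = tsne.fit_transform(tsne_data)  # tsne_data is the scaled feature matrix from above

fig, ax = plt.subplots(figsize=(10, 8))
for activity in label.unique():
    mask = (label == activity).values
    ax.scatter(embedded[mask, 0], embedded[mask, 1], label=activity, s=10)
ax.legend()
plt.show()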
I am using Python for k-means clustering on the MNIST database (http://yann.lecun.com/exdb/mnist/). I am able to cluster the data successfully but unable to label the clusters, meaning I cannot see which cluster number holds which digit. For example, cluster 5 could hold digit 7.
I need to label the clusters correctly after the k-means clustering has been done, and also add a legend to the plot.
import numpy as np
import matplotlib.pyplot as plt
# scikit-learn
from sklearn.cluster import KMeans
# pandas to read the csv file
import pandas
Links:
[MNIST Dataset] http://yann.lecun.com/exdb/mnist/
df = pandas.read_csv('test_encoded_with_label.csv',header=None,
delim_whitespace=True)
#df = pandas.read_excel('test_encoded_with_label.xls')
#print column names
print(df.columns)
df1 = df.iloc[:,0:2] #0 and 1, the last index is not used for iloc
labels = df.iloc[:,2]
labels = labels.values
dataset = df1.values
#train indices - depends how many samples
trainidx = np.arange(0,9999)
testidx = np.arange(0,9999)
train_data = dataset[trainidx,:]
test_data = dataset[testidx,:]
train_labels = labels[trainidx] #just 1D, no :
tpredct_labels = labels[testidx]
kmeans = KMeans(n_clusters=10, random_state=0).fit(train_data)
kmeans.labels_
#print(kmeans.labels_.shape)
plt.scatter(train_data[:,0],train_data[:,1], c=kmeans.labels_)
predct_labels = kmeans.predict(train_data)
print(predct_labels)
print('actual label', tpredct_labels)
centers = kmeans.cluster_centers_
print(centers)
plt.show()
To create markers for the clusters of labelled points, you can use the annotate method.
Here is a sample run on the sklearn digits dataset where I mark the centroids of the resulting clustering. Note that I label the clusters 0-9 purely for illustration:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
np.random.seed(42)
digits = load_digits()
data = scale(digits.data)
n_samples, n_features = data.shape
n_digits = len(np.unique(digits.target))
labels = digits.target
h = .02
reduced_data = PCA(n_components=2).fit_transform(data)
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)
centroids = kmeans.cluster_centers_
plt_data = plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=kmeans.labels_, cmap=plt.cm.get_cmap('Spectral', 10))
plt.colorbar()
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x')
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
          'Centroids are marked with a cross')
plt.xlabel('component 1')
plt.ylabel('component 2')
labels = ['{0}'.format(i) for i in range(10)]
for i in range(10):
    xy = (centroids[i, 0], centroids[i, 1])
    plt.annotate(labels[i], xy, horizontalalignment='right', verticalalignment='top')
plt.show()
This is the result you get:
To add the legend, one option is to build it from the scatter's legend_elements (matplotlib >= 3.1):
scatter = plt.scatter(train_data[:, 0], train_data[:, 1], c=kmeans.labels_)
plt.legend(*scatter.legend_elements(), title='cluster')
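As for the original question of which digit each cluster holds, a common approach is majority voting against the known labels; a minimal sketch, assuming kmeans, train_data, and train_labels from the question's code:
import numpy as np

cluster_to_digit = {}
for cluster_id in range(10):
    members = train_labels[kmeans.labels_ == cluster_id]
    # The digit occurring most often in the cluster becomes its label
    cluster_to_digit[cluster_id] = int(np.bincount(members.astype(int)).argmax())
print(cluster_to_digit)  # e.g. cluster 5 might turn out to hold mostly digit 7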
# -*- coding: utf-8 -*-
"""
Created on Sun Oct 28 17:35:48 2018
#author: User
"""
import matplotlib
matplotlib.use('GTKAgg')
import matplotlib.pyplot as plt
import matplotlib.transforms
import numpy as np
from sklearn.linear_model import LinearRegression
import pandas
# Load CSV and columns
df = pandas.read_csv(r'C:\Users\User\Desktop\dataset.csv')
print (df.head())
df = df[['TTL_value','packet_size']]
#X = df['Input_port']
#Y = df['Output_port']
X=np.array(df.packet_size)
Y=np.array(df.TTL_value)
#Split the data into training/testing sets
X_train = X[:-100]
X_test = X[-100:]
# Split the targets into training/testing sets
Y_train = Y[:-100]
Y_test = Y[-100:]
# Fit the regression model and plot the outputs
regr=LinearRegression(fit_intercept=True)
regr.fit(X_test[:,np.newaxis],Y_test)
X_testfit=np.linspace(0,100000)
Y_testfit=np.linspace(255,255)
#Y_testfit=regr.predict(X_testfit[:,np.newaxis])
print ("Normal Pakctet size range is 7 to 65542")
plt.scatter(X_test, Y_test, color='black')
plt.plot(X_testfit, Y_testfit, color='red',linewidth=3)
plt.title('Test Data')
plt.xlabel('packet_size')
plt.ylabel('TTL_value')
plt.xticks((0,20000,40000,60000,80000,100000))
plt.yticks((0,50,100,150,200,250,300))
plt.show()
print ("The TTL value more than 255 is a malicious traffic")
I want to display the red line with the values of the y-axis.The red line is at 255. I tried many times, but really couldn't do it.
If I understood you correctly, you want to annotate the red horizontal line at y=255 with the corresponding y-value (255). In that case, here is a sample working solution for you. You just need to use plt.text with the desired x- and y-coordinates and the string you want to put as the text. Here I am using the y-value as the string. Y_testfit[0]*1.005 slightly shifts the text above the horizontal line to avoid overlap with it.
You can adapt this solution for your problem.
import numpy as np
import matplotlib.pyplot as plt

X_testfit = np.linspace(0, 100000)
Y_testfit = np.linspace(255, 255)
plt.plot(X_testfit, Y_testfit, '-r', lw=3)
plt.text(20000, Y_testfit[0]*1.005, 'y=%d' % Y_testfit[0], fontsize=20)
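An equivalent alternative, if all you need is the horizontal reference line plus its value, is axhline:
plt.axhline(y=255, color='red', linewidth=3)
plt.text(20000, 255 * 1.005, 'y=255', fontsize=20)
plt.show()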
I am using the following code to perform PCA on the iris dataset:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# get iris data to a dataframe:
from sklearn import datasets
iris = datasets.load_iris()
varnames = ['SL', 'SW', 'PL', 'PW']
irisdf = pd.DataFrame(data=iris.data, columns=varnames)
irisdf['Species'] = [iris.target_names[a] for a in iris.target]
# perform pca:
from sklearn.decomposition import PCA
model = PCA(n_components=2)
scores = model.fit_transform(irisdf.iloc[:,0:4])
loadings = model.components_
# plot results:
scoredf = pd.DataFrame(data=scores, columns=['PC1','PC2'])
scoredf['Grp'] = irisdf.Species
sns.lmplot(fit_reg=False, x='PC1', y='PC2', hue='Grp', data=scoredf)  # plot the points
loadings = loadings.T
for e, pt in enumerate(loadings):
plt.plot([0,pt[0]], [0,pt[1]], '--b')
plt.text(x=pt[0], y=pt[1], s=varnames[e], color='b')
plt.show()
I am getting the following plot:
However, when I compare it with plots from other sites (e.g. http://marcoplebani.com/pca/ ), my plot does not look correct. The following differences seem to be present:
Petal length and petal width lines should have similar lengths.
Sepal length line should be closer to petal length and petal width lines rather than closer to sepal width line.
All 4 lines should be on the same side of x-axis.
Why is my plot not correct? Where is the error and how can it be corrected?
It depends on whether you scale the variance or not. The "other site" uses scale=TRUE. If you want to do this with sklearn, add StandardScaler before fitting the model and fit the model with scaled data, like this:
from sklearn.preprocessing import StandardScaler
X = StandardScaler().fit_transform(irisdf.iloc[:,0:4])
scores = model.fit_transform(X)
Edit: Difference between StandardScaler and normalize
Here is an answer which pointed out a key difference (row vs column). Even if you use normalize here, you might want to consider X = normalize(X.T).T. The following code shows some differences after transformation:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler, normalize
iris = datasets.load_iris()
varnames = ['SL', 'SW', 'PL', 'PW']
fig, ax = plt.subplots(2, 2, figsize=(16, 12))
irisdf = pd.DataFrame(data=iris.data, columns=varnames)
irisdf.plot(kind='kde', title='Raw data', ax=ax[0][0])
irisdf_std = pd.DataFrame(data=StandardScaler().fit_transform(irisdf), columns=varnames)
irisdf_std.plot(kind='kde', title='StandardScaler', ax=ax[0][1])
irisdf_norm = pd.DataFrame(data=normalize(irisdf), columns=varnames)
irisdf_norm.plot(kind='kde', title='normalize (rows)', ax=ax[1][0])
irisdf_norm = pd.DataFrame(data=normalize(irisdf.T).T, columns=varnames)
irisdf_norm.plot(kind='kde', title='normalize (columns)', ax=ax[1][1])
plt.show()
I'm not sure how deep I can go into the algorithm/math. The point of StandardScaler is to get a uniform/consistent mean and variance across features. The assumption is that variables with larger measurement scales should not automatically dominate the PCA. In other words, StandardScaler makes the features contribute equally to the PCA. As you can see, normalize does not give a consistent mean or variance.
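A quick numeric check of that claim, using the same irisdf as above: after StandardScaler every column has mean ~0 and standard deviation ~1, while after normalize the column means and standard deviations still differ:
# Compare per-column mean and std after each transformation
print(irisdf_std.describe().loc[['mean', 'std']].round(3))
print(pd.DataFrame(normalize(irisdf), columns=varnames).describe().loc[['mean', 'std']].round(3))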