Yesterday we covered the basic concepts of the support vector machine, stressed the importance of working through the mathematical derivation, and recommended an excellent article on the subject. Today we implement an SVM with the SVC classifier from Scikit-Learn. On day 16 we will implement an SVM with the kernel trick.
Import the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
Import the dataset
The dataset is again Social_Network_Ads. Download link:
https://pan.baidu.com/s/1cPBt2DAF2NraOMhbk5-_pQ
Extraction code: vl2g
```
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values   # Age and EstimatedSalary
y = dataset.iloc[:, 4].values        # Purchased label
```
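If you want to confirm what the positional indices [2, 3] and 4 actually select, a quick inspection helps. A minimal sketch, assuming the usual layout of the Social_Network_Ads file (User ID, Gender, Age, EstimatedSalary, Purchased):

```
# Peek at the raw data to confirm the column layout before slicing by position.
print(dataset.head())
print(dataset.columns.tolist())   # typically ['User ID', 'Gender', 'Age', 'EstimatedSalary', 'Purchased']
print(X.shape, y.shape)           # X holds Age and EstimatedSalary, y holds the Purchased label
```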
Split the dataset into a training set and a test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
```
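A quick sanity check on the split; with test_size=0.25 and the usual 400-row version of this dataset (an assumption about the file, not something the split requires), 300 rows go to training and 100 to testing:

```
# Verify the 75/25 split and the class balance in each part.
print(X_train.shape, X_test.shape)                 # e.g. (300, 2) and (100, 2) for a 400-row file
print(np.bincount(y_train), np.bincount(y_test))   # counts of the 0/1 Purchased labels per split
```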
Feature scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)   # reuse the scaler fitted on the training set; do not refit on the test data
```
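Scaling matters for SVM because the margin is defined in terms of distances, and Age and EstimatedSalary live on very different scales. An optional check on the scaler's effect (nothing in the pipeline depends on it):

```
# The training features should have roughly zero mean and unit variance after fit_transform;
# the test features are only transformed, so their statistics will be close to, but not exactly, 0 and 1.
print(X_train.mean(axis=0), X_train.std(axis=0))
print(X_test.mean(axis=0), X_test.std(axis=0))
```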
Fit the SVM to the training set
```
from sklearn.svm import SVC
classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)
```
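Once fitted, the classifier exposes the support vectors that define the maximum-margin hyperplane, which is a nice way to connect the code back to yesterday's theory. A minimal sketch using standard SVC attributes (coef_ and intercept_ are only available because the kernel is linear):

```
# The support vectors live in the scaled feature space used for training.
print(classifier.n_support_)                     # number of support vectors per class
print(classifier.support_vectors_.shape)         # the support vectors themselves
print(classifier.coef_, classifier.intercept_)   # w and b of the linear decision function w·x + b
```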
Predict the test set results
```
y_pred = classifier.predict(X_test)
```
Create the confusion matrix
```
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
```
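The raw confusion matrix can be hard to read at a glance; the summary metrics derived from it are available in sklearn.metrics. A small optional addition:

```
from sklearn.metrics import accuracy_score, classification_report

# cm[0, 0] and cm[1, 1] are the correct predictions; accuracy is their share of the test set.
print(cm)
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```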

Visualise the training set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
Visualise the test set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
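As a preview of day 16, the kernel trick only requires changing the kernel argument of SVC. A minimal sketch (the gamma setting here is sklearn's default heuristic, not a tuned value):

```
from sklearn.metrics import accuracy_score

# Same pipeline, but with a non-linear decision boundary via the RBF kernel.
rbf_classifier = SVC(kernel='rbf', gamma='scale', random_state=0)
rbf_classifier.fit(X_train, y_train)
print(accuracy_score(y_test, rbf_classifier.predict(X_test)))
```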