【Python-ML】Boosting Ensembles with the SKlearn Library
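Before the full scikit-learn script, the core of AdaBoost's sample re-weighting can be illustrated in a few lines of plain NumPy. This is a minimal sketch; the labels, predictions, and weights are made-up illustrative values, not taken from the wine data below:

```python
import numpy as np

def adaboost_round(w, y_true, y_pred):
    """One AdaBoost re-weighting step for binary labels in {-1, +1}."""
    miss = (y_true != y_pred)
    eps = np.sum(w[miss]) / np.sum(w)           # weighted error of this learner
    alpha = 0.5 * np.log((1.0 - eps) / eps)     # this learner's vote weight
    w = w * np.exp(alpha * np.where(miss, 1.0, -1.0))  # up-weight the mistakes
    return w / w.sum(), alpha                   # renormalize to a distribution

w0 = np.full(4, 0.25)                # start with uniform sample weights
y  = np.array([ 1, 1, -1, -1])
yp = np.array([ 1, 1, -1,  1])       # the learner got the last sample wrong
w1, alpha = adaboost_round(w0, y, yp)
print(w1, alpha)                     # the misclassified sample now carries half the weight
```

The next weak learner is trained against `w1`, so it concentrates on exactly the samples the previous one missed.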
# -*- coding: utf-8 -*-
'''
Created on 2018-01-19  @author: Jason.F
@summary: Boosting. In its original form, boosting draws random subsets without replacement;
AdaBoost instead trains each weak learner serially on the whole training set, re-weighting
the training samples at every iteration so that each learner concentrates on the mistakes
of its predecessor, combining them into a stronger classifier.
'''
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split #sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

#Load and preprocess the data
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data',header=None)
df_wine.columns=['Class label','Alcohol','Malic acid','Ash','Alcalinity of ash','Magnesium','Total phenols','Flavanoids','Nonflavanoid phenols','Proanthocyanins','Color intensity','Hue','OD280/OD315 of diluted wines','Proline']
print ('class labels:',np.unique(df_wine['Class label']))
df_wine=df_wine[df_wine['Class label']!=1] #keep only classes 2 and 3
y=df_wine['Class label'].values
X=df_wine[['Alcohol','Hue']].values #use two features: Alcohol and Hue
le=LabelEncoder()
y=le.fit_transform(y)
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.40,random_state=1)
#Train an AdaBoostClassifier ensemble
tree=DecisionTreeClassifier(criterion='entropy',max_depth=None) #base learner: an unpruned decision tree
ada=AdaBoostClassifier(base_estimator=tree,n_estimators=500,learning_rate=0.1,random_state=0) #scikit-learn >=1.2 renames base_estimator to estimator
#Score both models to compare the AdaBoost ensemble with a single unpruned decision tree
#Evaluate the single tree
tree=tree.fit(X_train,y_train)
y_train_pred=tree.predict(X_train)
y_test_pred=tree.predict(X_test)
tree_train=accuracy_score(y_train, y_train_pred)
tree_test=accuracy_score(y_test, y_test_pred)
print ('Decision tree train/test accuracies %.3f/%.3f'%(tree_train,tree_test)) #unpruned tree overfits the training set
#Evaluate the ensemble
ada=ada.fit(X_train,y_train)
y_train_pred=ada.predict(X_train)
y_test_pred=ada.predict(X_test)
ada_train=accuracy_score(y_train, y_train_pred)
ada_test=accuracy_score(y_test, y_test_pred)
print ('Adaboost train/test accuracies %.3f/%.3f'%(ada_train,ada_test))
#Visualize the decision regions
x_min = X_train[:,0].min()-1
x_max = X_train[:,0].max()+1
y_min = X_train[:,1].min()-1
y_max = X_train[:,1].max()+1
xx,yy =np.meshgrid(np.arange(x_min,x_max,0.1),np.arange(y_min,y_max,0.1))
f,axarr= plt.subplots(nrows=1,ncols=2,sharex='col',sharey='row',figsize=(8,3))
for idx,clf,tt in zip([0,1],[tree,ada],['Decision Tree','Adaboost']):
    clf.fit(X_train,y_train)
    Z=clf.predict(np.c_[xx.ravel(),yy.ravel()])
    Z=Z.reshape(xx.shape)
    axarr[idx].contourf(xx,yy,Z,alpha=0.3)
    axarr[idx].scatter(X_train[y_train==0,0],X_train[y_train==0,1],c='blue',marker='^')
    axarr[idx].scatter(X_train[y_train==1,0],X_train[y_train==1,1],c='red',marker='o')
    axarr[idx].set_title(tt)
axarr[0].set_ylabel('Alcohol',fontsize=12)
plt.text(10.2,-1.2,s='Hue',ha='center',va='center',fontsize=12)
plt.show()
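Beyond a single final accuracy, scikit-learn can report accuracy after every boosting round via `AdaBoostClassifier.staged_score`, which makes over- or underfitting visible as the ensemble grows. A minimal sketch on synthetic data (the dataset and parameters here are illustrative, not the wine data used above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic two-feature dataset
X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=1)

# Default base estimator is a depth-1 tree (a decision stump)
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
# staged_score yields the test accuracy after each boosting round
test_curve = list(ada.staged_score(X_te, y_te))
print(len(test_curve), test_curve[-1])
```

Plotting `test_curve` against the round index shows where adding more estimators stops helping.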
結(jié)果:
('class labels:', array([1, 2, 3], dtype=int64))
Decision tree train/test accuracies 1.000/0.833
Adaboost train/test accuracies 1.000/0.833
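The serial re-weighting procedure summarized in the header docstring can also be written out from scratch. The sketch below boosts one-level decision stumps (the classic weak learner) on a tiny hand-made 1-D dataset; all names and numbers are illustrative:

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively pick the decision stump minimizing weighted error."""
    best = None
    for j in range(X.shape[1]):                    # candidate feature
        for thr in np.unique(X[:, j]):             # candidate threshold
            for sign in (1, -1):                   # candidate orientation
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost_fit(X, y, rounds):
    """AdaBoost: re-weight samples each round toward the current mistakes."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        err, j, thr, sign = fit_stump(X, y, w)
        err = max(err, 1e-10)                      # guard against log of zero
        alpha = 0.5 * np.log((1.0 - err) / err)    # this stump's vote weight
        pred = np.where(X[:, j] <= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)             # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    votes = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        votes += alpha * np.where(X[:, j] <= thr, sign, -sign)
    return np.sign(votes)

# A 1-D "interval" concept that no single stump can fit, but three rounds can
X = np.arange(6.0).reshape(-1, 1)
y = np.array([-1, -1, 1, 1, -1, -1])
ens = adaboost_fit(X, y, rounds=3)
print(adaboost_predict(ens, X))
```

Each round's stump fixes part of what the previous rounds got wrong, so the weighted vote recovers a shape no individual stump can represent.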