
Several Machine Learning Methods (kNN, Logistic Regression, SVM, Decision Trees, Random Forests, Extremely Randomized Trees, Ensemble Learning, AdaBoost, GBDT)


I. Basics of Discriminative and Generative Models

Example: to determine whether a melon is good or bad, the discriminative approach learns a model from historical data and then extracts the melon's features to predict the probability that it is a good melon and the probability that it is a bad one.

Example: the generative approach first learns a good-melon model from the features of good melons, then learns a bad-melon model from the features of bad melons. To predict, it extracts the features of the melon in question, scores them under the generated good-melon model and under the generated bad-melon model, and predicts whichever class yields the higher probability.

Example:

Suppose the task is to identify which language a speech sample is in. Someone walks up and says a sentence, and you need to recognize whether it is Chinese, English, French, or something else. There are two ways to do this:

1. Learn every language. You spend a great deal of effort learning Chinese, English, French, and so on; by "learn" I mean you know which sounds correspond to which language. Then when someone speaks to you, you can tell what language it is.

2. Do not learn each language in full; learn only the differences between the languages, and then discriminate (classify). That is, you learn only how the pronunciations of Chinese, English, and so on differ, and knowing that difference is enough.
The first approach is the generative method; the second is the discriminative method.

A generative model is a full probability model over all variables, whereas a discriminative model only models the conditional probability of the target variable given the observed variables. A generative model can therefore be used to simulate (i.e., generate) the distribution of any variable in the model, while a discriminative model can only sample the target variable conditioned on the observations. Because a discriminative model does not model the distribution of the observed variables, it cannot express more complex relationships between the observed and target variables. Generative models are therefore better suited to unsupervised tasks such as clustering.

Conditional probability is the probability that event A occurs given that event B has occurred. It is written P(A|B), read "the probability of A given that B has occurred".
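In symbols (the defining identity, restated here because the original showed it only as an image): P(A|B) = P(A ∩ B) / P(B), provided P(B) > 0.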

Bayes' formula (restored from the definitions below; the original showed it only as an image):

P(X|Y) = P(Y|X) · P(X) / P(Y)

P(X) is the probability that event X occurs, also called the prior probability;

P(Y|X) is the probability that event Y occurs given that X has occurred, also called the likelihood;

P(X|Y) is the probability that event X occurred given that Y has occurred, also called the posterior probability.

Maximum likelihood estimation (MLE) is a method for estimating the parameters of a probability model.
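A one-line illustration (not from the original post): flip a coin n times and observe k heads. The likelihood of a heads-probability θ is L(θ) = θ^k · (1 − θ)^(n−k); setting the derivative of log L(θ) to zero gives the maximum likelihood estimate θ̂ = k/n.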


The conditional probability here: the probability that a melon is good, given that its color is green.

The prior probability: the probability of the "cause" as suggested by common sense, experience, or statistics; here, the probability that a melon's color is green.

The posterior probability: after observing the "effect", the inferred probability of the "cause"; that is, given that the melon is known to be good, the probability that its color is green. Relating the posterior to the prior is exactly what Bayesian decision theory solves.
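A worked version of the melon example with purely illustrative numbers (they are assumptions, not from the original): suppose P(good) = 0.4, P(green | good) = 0.75, and P(green) = 0.5. Bayes' formula then gives P(good | green) = P(green | good) · P(good) / P(green) = 0.75 × 0.4 / 0.5 = 0.6.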

Under the conditional-independence assumption, the posterior over multiple attributes can be written (reconstructing the lost formula image) as

P(y | x) = P(y) / P(x) · ∏_{i=1..d} P(x_i | y)

where d is the number of attributes and x_i is the value of x on its i-th attribute.
Since P(x) is the same for every class, the Bayes decision rule of maximizing the posterior gives the naive Bayes expression:

h(x) = argmax_y P(y) · ∏_{i=1..d} P(x_i | y)

A naive Bayes implementation:

#coding:utf-8
# P(y|x) = [P(x|y)*P(y)]/P(x)
import numpy as np
import pandas as pd

class Naive_Bayes:
    def __init__(self):
        pass

    # naive Bayes training: estimate class priors and class-conditional frequencies
    def nb_fit(self, X, y):
        classes = y[y.columns[0]].unique()
        class_count = y[y.columns[0]].value_counts()
        # class prior P(y)
        class_prior = class_count / len(y)
        print('==class_prior:', class_prior)
        # class-conditional probabilities, i.e. P(x_i = value | y = class)
        prior = dict()
        for col in X.columns:
            for j in classes:
                p_x_y = X[(y == j).values][col].value_counts()
                for i in p_x_y.index:
                    prior[(col, i, j)] = p_x_y[i] / class_count[j]
        print('==prior:', prior)
        # keep the estimates on the instance so predict() does not rely on globals
        self.classes, self.class_prior, self.prior = classes, class_prior, prior
        return classes, class_prior, prior

    # predict a new instance: argmax over P(y) * prod_i P(x_i|y)
    def predict(self, X_test):
        res = []
        for c in self.classes:
            p_y = self.class_prior[c]
            p_x_y = 1
            for i in X_test.items():
                p_x_y *= self.prior[tuple(list(i) + [c])]
            res.append(p_y * p_x_y)
        return self.classes[np.argmax(res)]

if __name__ == "__main__":
    x1 = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
    x2 = ['S', 'M', 'M', 'S', 'S', 'S', 'M', 'M', 'L', 'L', 'L', 'M', 'M', 'L', 'L']
    y = [-1, -1, 1, 1, -1, -1, -1, 1, 1, 1, 1, 1, 1, 1, -1]
    df = pd.DataFrame({'x1': x1, 'x2': x2, 'y': y})
    print('==df:\n', df)
    X = df[['x1', 'x2']]
    y = df[['y']]
    X_test = {'x1': 2, 'x2': 'S'}
    nb = Naive_Bayes()
    classes, class_prior, prior = nb.nb_fit(X, y)
    print('Predicted class for the test instance:', nb.predict(X_test))


Naive Bayes classifier code:

The naive Bayes classifier adopts the "attribute conditional independence assumption": given the class, all attributes are assumed to be mutually independent. In other words, each attribute is assumed to influence the classification result independently.

GaussianNB (Gaussian naive Bayes) is used. Its class-conditional probability density (reconstructed here from the code below, which computes exactly this) is

p(x_i | y) = 1 / (√(2π) · σ_y) · exp( −(x_i − μ_y)² / (2σ_y²) )

import math

class NaiveBayes:
    def __init__(self):
        self.model = None

    # mean
    @staticmethod
    def mean(X):
        """Compute the mean.
        Param:  X : list or np.ndarray
        Return: avg : float
        """
        return sum(X) / float(len(X))

    # standard deviation
    def stdev(self, X):
        """Compute the standard deviation.
        Param:  X : list or np.ndarray
        Return: res : float
        """
        avg = self.mean(X)
        return math.sqrt(sum([pow(x - avg, 2) for x in X]) / float(len(X)))

    # probability density function
    def gaussian_probability(self, x, mean, stdev):
        """Density of x under the Gaussian with the given mean and stdev.
        Parameters:
            x     : input value
            mean  : mean
            stdev : standard deviation
        Return: res : float, density of x
        """
        exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
        return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

    # process X_train
    def summarize(self, train_data):
        """Per-feature mean and standard deviation.
        Param:  train_data : list
        Return: [(mean, stdev), ...]
        """
        return [(self.mean(i), self.stdev(i)) for i in zip(*train_data)]

    # compute means and standard deviations separately for each class
    def fit(self, X, y):
        labels = list(set(y))
        data = {label: [] for label in labels}
        for f, label in zip(X, y):
            data[label].append(f)
        # mean and stdev of every feature within every class
        self.model = {label: self.summarize(value) for label, value in data.items()}
        return 'gaussianNB train done!'

    # compute probabilities
    def calculate_probabilities(self, input_data):
        """Probability of the input under each class's Gaussians.
        Parameter: input_data : the input sample
        Return:    probabilities : {label : p}
        """
        # self.model: {0.0: [(5.0, 0.37), (3.42, 0.40)], 1.0: [(5.8, 0.449), (2.7, 0.27)]}
        # input_data: [1.1, 2.2]
        probabilities = {}
        for label, value in self.model.items():
            probabilities[label] = 1
            for i in range(len(value)):
                mean, stdev = value[i]
                probabilities[label] *= self.gaussian_probability(input_data[i], mean, stdev)
        return probabilities

    # predicted class: the label with the largest probability
    def predict(self, X_test):
        # e.g. {0.0: 2.9680340789325763e-27, 1.0: 3.5749783019849535e-26} -> 1.0
        label = sorted(self.calculate_probabilities(X_test).items(), key=lambda x: x[-1])[-1][0]
        return label

    # accuracy on a test set
    def score(self, X_test, y_test):
        right = 0
        for X, y in zip(X_test, y_test):
            if self.predict(X) == y:
                right += 1
        return right / float(len(X_test))

def test_bayes_model():
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2)
    print(len(X_train))
    print(len(y_train))
    model = NaiveBayes()
    model.fit(X_train, y_train)
    print(model.predict([4.4, 3.2, 1.3, 0.2]))

if __name__ == '__main__':
    test_bayes_model()

A Bayesian network example based on pgmpy:

pgmpy is a Python package for probabilistic graphical models. It implements common models such as Bayesian networks and Markov chain Monte Carlo methods, along with inference algorithms.

The example is the classic "student" network: what quality of recommendation letter does a student receive? The original post showed the directed graph and its probability tables as an image; the structure is recoverable from the code below: exam difficulty D and intelligence I determine the grade G, G determines the letter L, and I determines the SAT score S.

The code:

#coding:utf-8
# git clone https://github.com/pgmpy/pgmpy
# cd pgmpy
# python setup.py install
from pgmpy.factors.discrete import TabularCPD
from pgmpy.models import BayesianModel

student_model = BayesianModel([('D', 'G'),
                               ('I', 'G'),
                               ('G', 'L'),
                               ('I', 'S')])
# grade node
grade_cpd = TabularCPD(
    variable='G',          # node name
    variable_card=3,       # number of values the node can take
    values=[[0.3, 0.05, 0.9, 0.5],   # the node's probability table
            [0.4, 0.25, 0.08, 0.3],
            [0.3, 0.7, 0.02, 0.2]],
    evidence=['I', 'D'],   # the node's parents
    evidence_card=[2, 2]   # number of values of each parent
)
# exam-difficulty node
difficulty_cpd = TabularCPD(
    variable='D',
    variable_card=2,
    values=[[0.6, 0.4]]
)
# intelligence node
intel_cpd = TabularCPD(
    variable='I',
    variable_card=2,
    values=[[0.7, 0.3]]
)
# recommendation-letter node
letter_cpd = TabularCPD(
    variable='L',
    variable_card=2,
    values=[[0.1, 0.4, 0.99],
            [0.9, 0.6, 0.01]],
    evidence=['G'],
    evidence_card=[3]
)
# SAT-score node
sat_cpd = TabularCPD(
    variable='S',
    variable_card=2,
    values=[[0.95, 0.2],
            [0.05, 0.8]],
    evidence=['I'],
    evidence_card=[2]
)

student_model.add_cpds(grade_cpd, difficulty_cpd, intel_cpd, letter_cpd, sat_cpd)
print(student_model.get_cpds())

print('Active trail nodes from D:', student_model.active_trail_nodes('D'))
print('Active trail nodes from I:', student_model.active_trail_nodes('I'))

print(student_model.local_independencies('G'))
# print(student_model.get_independencies())
# print(student_model.to_markov_model())

# Bayesian inference
from pgmpy.inference import VariableElimination
student_infer = VariableElimination(student_model)
prob_G = student_infer.query(variables=['G'])
print('Marginal distribution over grades, prob_G:', prob_G)

prob_G = student_infer.query(variables=['G'], evidence={'I': 1, 'D': 0})
print('Grade distribution for a smart student on an easy exam, prob_G:', prob_G)

# prob_G = student_infer.query(variables=['G'], evidence={'I': 0, 'D': 1})
# print(prob_G)

# # generate data
# import numpy as np
# import pandas as pd
#
# raw_data = np.random.randint(low=0, high=2, size=(1000, 5))
# data = pd.DataFrame(raw_data, columns=['D', 'I', 'G', 'L', 'S'])
# data.head()
#
# # define the model
# from pgmpy.models import BayesianModel
# from pgmpy.estimators import MaximumLikelihoodEstimator, BayesianEstimator
#
# model = BayesianModel([('D', 'G'), ('I', 'G'), ('I', 'S'), ('G', 'L')])
#
# # train the model by maximum likelihood estimation
# model.fit(data, estimator=MaximumLikelihoodEstimator)
# for cpd in model.get_cpds():
#     # print each conditional probability distribution
#     print("CPD of {variable}:".format(variable=cpd.variable))
#     print(cpd)

II. Machine Learning

A detailed post on kNN: https://blog.csdn.net/fanzonghao/article/details/86411102

A detailed post on decision trees: https://blog.csdn.net/fanzonghao/article/details/85246720

1. SVM: finding the optimal margin

Optimal solutions under equality constraints (the derivation figure from the original did not survive extraction).

Optimal solutions under inequality constraints: via the KKT conditions (figure likewise omitted).

The final classifier (reconstructed from the prediction code below):

f(x) = sign( Σ_i α_i y_i K(x_i, x) + b )

In other words, the larger the penalty C on the slack variables, the higher the variance and the lower the bias of the resulting model, i.e., a stronger tendency to overfit;

the smaller C is, the lower the variance and the higher the bias, i.e., a stronger tendency to underfit.


An SVM example implementing the SMO algorithm:

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

def create_data():
    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df['label'] = iris.target
    df.columns = ['sepal length', 'sepal width', 'petal length', 'petal width', 'label']
    data = np.array(df.iloc[:100, [0, 1, -1]])
    for i in range(len(data)):
        if data[i, -1] == 0:
            data[i, -1] = -1
    return data[:, :2], data[:, -1]

X, y = create_data()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
print('==X_train.shape:', X_train.shape)
print('==y_train.shape:', y_train.shape)
plt.scatter(X[:50, 0], X[:50, 1], label='0', color='R')
plt.scatter(X[50:, 0], X[50:, 1], label='1', color='G')
plt.legend()
# plt.show()

# w = alpha * y * x
class SVM:
    def __init__(self, max_iter=100, kernel='linear'):
        self.max_iter = max_iter
        self._kernel = kernel

    def init_args(self, features, labels):
        self.m, self.n = features.shape  # m samples, n feature dimensions
        self.X = features
        self.Y = labels
        self.b = 0.0
        # keep every E_i in a list
        self.alpha = np.ones(self.m)
        self.E = [self._E(i) for i in range(self.m)]
        # slack penalty
        self.C = 1.0

    def _KKT(self, i):
        y_g = self._g(i) * self.Y[i]
        if self.alpha[i] == 0:
            return y_g >= 1
        elif 0 < self.alpha[i] < self.C:
            return y_g == 1
        else:
            return y_g <= 1

    # g(x): prediction for input X[i]
    def _g(self, i):
        r = self.b
        for j in range(self.m):
            r += self.alpha[j] * self.Y[j] * self.kernel(self.X[i], self.X[j])
        return r

    # E(x): difference between the prediction g(x) and the label y
    def _E(self, i):
        return self._g(i) - self.Y[i]

    # kernel function
    def kernel(self, x1, x2):
        if self._kernel == 'linear':
            return sum([x1[k] * x2[k] for k in range(self.n)])
        elif self._kernel == 'poly':
            return (sum([x1[k] * x2[k] for k in range(self.n)]) + 1) ** 2
        return 0

    def _init_alpha(self):
        # outer loop: first sweep the samples with 0 < alpha < C and check KKT
        index_list = [i for i in range(self.m) if 0 < self.alpha[i] < self.C]
        # otherwise sweep the whole training set
        non_satisfy_list = [i for i in range(self.m) if i not in index_list]
        index_list.extend(non_satisfy_list)
        for i in index_list:
            if self._KKT(i):
                continue
            E1 = self.E[i]
            # if E1 is positive, pick the smallest E as E2; if negative, the largest
            if E1 >= 0:
                j = min(range(self.m), key=lambda x: self.E[x])
            else:
                j = max(range(self.m), key=lambda x: self.E[x])
            return i, j

    def _compare(self, _alpha, L, H):
        if _alpha > H:
            return H
        elif _alpha < L:
            return L
        else:
            return _alpha

    def fit(self, features, labels):
        self.init_args(features, labels)
        for t in range(self.max_iter):
            # train
            i1, i2 = self._init_alpha()
            # clipping bounds
            if self.Y[i1] == self.Y[i2]:
                L = max(0, self.alpha[i1] + self.alpha[i2] - self.C)
                H = min(self.C, self.alpha[i1] + self.alpha[i2])
            else:
                L = max(0, self.alpha[i2] - self.alpha[i1])
                H = min(self.C, self.C + self.alpha[i2] - self.alpha[i1])
            E1 = self.E[i1]
            E2 = self.E[i2]
            # eta = K11 + K22 - 2*K12
            eta = self.kernel(self.X[i1], self.X[i1]) + self.kernel(self.X[i2], self.X[i2]) \
                  - 2 * self.kernel(self.X[i1], self.X[i2])
            if eta <= 0:
                # print('eta <= 0')
                continue
            # modified here: per the book (pp. 130-131) this should be E1 - E2
            alpha2_new_unc = self.alpha[i2] + self.Y[i2] * (E1 - E2) / eta
            alpha2_new = self._compare(alpha2_new_unc, L, H)
            alpha1_new = self.alpha[i1] + self.Y[i1] * self.Y[i2] * (self.alpha[i2] - alpha2_new)
            b1_new = -E1 - self.Y[i1] * self.kernel(self.X[i1], self.X[i1]) * (alpha1_new - self.alpha[i1]) \
                     - self.Y[i2] * self.kernel(self.X[i2], self.X[i1]) * (alpha2_new - self.alpha[i2]) + self.b
            b2_new = -E2 - self.Y[i1] * self.kernel(self.X[i1], self.X[i2]) * (alpha1_new - self.alpha[i1]) \
                     - self.Y[i2] * self.kernel(self.X[i2], self.X[i2]) * (alpha2_new - self.alpha[i2]) + self.b
            if 0 < alpha1_new < self.C:
                b_new = b1_new
            elif 0 < alpha2_new < self.C:
                b_new = b2_new
            else:
                # take the midpoint
                b_new = (b1_new + b2_new) / 2
            # update the parameters
            self.alpha[i1] = alpha1_new
            self.alpha[i2] = alpha2_new
            self.b = b_new
            self.E[i1] = self._E(i1)
            self.E[i2] = self._E(i2)
        return 'train done!'

    def predict(self, data):
        r = self.b
        for i in range(self.m):
            r += self.alpha[i] * self.Y[i] * self.kernel(data, self.X[i])
        return 1 if r > 0 else -1

    def score(self, X_test, y_test):
        right_count = 0
        for i in range(len(X_test)):
            result = self.predict(X_test[i])
            if result == y_test[i]:
                right_count += 1
        return right_count / len(X_test)

    # def _weight(self):
    #     # linear model
    #     yx = self.Y.reshape(-1, 1) * self.X
    #     self.w = np.dot(yx.T, self.alpha)
    #     return self.w

svm = SVM(max_iter=200)
svm.fit(X_train, y_train)
score = svm.score(X_test, y_test)
print('===score:', score)

An SVM example for classifying a fruit dataset, using scikit-learn:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import matplotlib.patches as mpatches
from matplotlib.colors import ListedColormap

def plot_class_regions_for_classifier(clf, X, y, X_test=None, y_test=None, title=None,
                                      target_names=None, plot_decision_regions=True):
    """Visualize a classifier's decision regions.
    Only works for data with two features.
    """
    num_classes = np.amax(y) + 1
    color_list_light = ['#FFFFAA', '#EFEFEF', '#AAFFAA', '#AAAAFF']
    color_list_bold = ['#EEEE00', '#000000', '#00CC00', '#0000CC']
    cmap_light = ListedColormap(color_list_light[0:num_classes])
    cmap_bold = ListedColormap(color_list_bold[0:num_classes])

    h = 0.03
    k = 0.5
    x_plot_adjust = 0.1
    y_plot_adjust = 0.1
    plot_symbol_size = 50

    x_min = X[:, 0].min()
    x_max = X[:, 0].max()
    y_min = X[:, 1].min()
    y_max = X[:, 1].max()
    x2, y2 = np.meshgrid(np.arange(x_min - k, x_max + k, h),
                         np.arange(y_min - k, y_max + k, h))

    P = clf.predict(np.c_[x2.ravel(), y2.ravel()])
    P = P.reshape(x2.shape)
    plt.figure()
    if plot_decision_regions:
        plt.contourf(x2, y2, P, cmap=cmap_light, alpha=0.8)

    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold, s=plot_symbol_size, edgecolor='black')
    plt.xlim(x_min - x_plot_adjust, x_max + x_plot_adjust)
    plt.ylim(y_min - y_plot_adjust, y_max + y_plot_adjust)

    if X_test is not None:
        plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cmap_bold, s=plot_symbol_size,
                    marker='^', edgecolor='black')
        train_score = clf.score(X, y)
        test_score = clf.score(X_test, y_test)
        title = title + "\nTrain score = {:.2f}, Test score = {:.2f}".format(train_score, test_score)

    if target_names is not None:
        legend_handles = []
        for i in range(0, len(target_names)):
            patch = mpatches.Patch(color=color_list_bold[i], label=target_names[i])
            legend_handles.append(patch)
        plt.legend(loc=0, handles=legend_handles)

    if title is not None:
        plt.title(title)
    plt.show()

# load the dataset
fruits_df = pd.read_table('fruit_data_with_colors.txt')

X = fruits_df[['width', 'height']]
y = fruits_df['fruit_label'].copy()

# relabel everything that is not an apple as 0
y[y != 1] = 0
# split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4, random_state=0)
print(y_test.shape)
# different values of C
c_values = [0.0001, 1, 100]

for c_value in c_values:
    # build the model
    svm_model = SVC(C=c_value, kernel='rbf')
    # train the model
    svm_model.fit(X_train, y_train)
    # evaluate the model
    y_pred = svm_model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    print('C={}, accuracy: {:.3f}'.format(c_value, acc))
    # visualize
    plot_class_regions_for_classifier(svm_model, X_test.values, y_test.values, title='C={}'.format(c_value))

The RBF kernel here amounts to a two-dimensional Gaussian similarity (the decision-region plots from the original are omitted).

Replacing the kernel with 'linear':
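A minimal sketch of that change, reusing the C sweep and plotting helper from the snippet above (same assumed fruit dataset and variables):

for c_value in c_values:
    # identical to the loop above, except for the kernel
    svm_model = SVC(C=c_value, kernel='linear')
    svm_model.fit(X_train, y_train)
    acc = accuracy_score(y_test, svm_model.predict(X_test))
    print('C={}, accuracy: {:.3f}'.format(c_value, acc))
    plot_class_regions_for_classifier(svm_model, X_test.values, y_test.values,
                                      title='linear, C={}'.format(c_value))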

2. Ensemble Learning

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

def load_data():
    # load the dataset
    fruits_df = pd.read_table('fruit_data_with_colors.txt')
    print('Number of samples:', len(fruits_df))
    # build a dict from target label to fruit name
    fruit_name_dict = dict(zip(fruits_df['fruit_label'], fruits_df['fruit_name']))
    # split the dataset
    X = fruits_df[['mass', 'width', 'height', 'color_score']]
    y = fruits_df['fruit_label']
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/4, random_state=0)
    print('Total samples: {}, training samples: {}, test samples: {}'.format(
        len(X), len(X_train), len(X_test)))
    return X_train, X_test, y_train, y_test

# feature normalization
def minmax_scaler(X_train, X_test):
    scaler = MinMaxScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    # the scaler has now learned the min and max, so the test set is only transformed
    X_test_scaled = scaler.transform(X_test)
    for i in range(4):
        print('Before normalization, feature {}: max {:.3f}, min {:.3f}'.format(
            i + 1, X_train.iloc[:, i].max(), X_train.iloc[:, i].min()))
        print('After normalization, feature {}: max {:.3f}, min {:.3f}'.format(
            i + 1, X_train_scaled[:, i].max(), X_train_scaled[:, i].min()))
    return X_train_scaled, X_test_scaled

def stack(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from mlxtend.classifier import StackingClassifier

    clf1 = KNeighborsClassifier(n_neighbors=1)
    clf2 = SVC(kernel='linear')
    clf3 = DecisionTreeClassifier()
    lr = LogisticRegression(C=100)
    sclf = StackingClassifier(classifiers=[clf1, clf2, clf3], meta_classifier=lr)

    clf1.fit(X_train_scaled, y_train)
    clf2.fit(X_train_scaled, y_train)
    clf3.fit(X_train_scaled, y_train)
    sclf.fit(X_train_scaled, y_train)

    print('kNN test accuracy: {:.3f}'.format(clf1.score(X_test_scaled, y_test)))
    print('SVM test accuracy: {:.3f}'.format(clf2.score(X_test_scaled, y_test)))
    print('DT test accuracy: {:.3f}'.format(clf3.score(X_test_scaled, y_test)))
    print('Stacking test accuracy: {:.3f}'.format(sclf.score(X_test_scaled, y_test)))

if __name__ == '__main__':
    X_train, X_test, y_train, y_test = load_data()
    X_train_scaled, X_test_scaled = minmax_scaler(X_train, X_test)
    # run the stacking ensemble (this call was missing in the original snippet)
    stack(X_train_scaled, y_train, X_test_scaled, y_test)

2.1 Boosting

  • Boosting starts from some base learner and learns repeatedly, producing a series of base learners that are then combined into one strong learner.
  • Boosting follows a serial strategy: the base learners depend on each other, and each new learner is generated from the previous ones.
  • Representative algorithms/models:
    • the AdaBoost boosting method
    • boosting trees
    • gradient-boosted decision trees (GBDT)

2.1.1 AdaBoost
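The AdaBoost material in the original post was carried by figures that did not survive extraction. In brief: AdaBoost trains base learners sequentially, re-weighting the training samples so that each new learner focuses on the examples its predecessors misclassified, and combines the learners with weights α_t = ½ ln((1 − ε_t)/ε_t), where ε_t is the t-th learner's weighted error rate. Below is a minimal sketch in the same style as the gbdt function that follows; the function name and grid values are assumptions, not from the original:

def adaboost(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import GridSearchCV

    # weak learner: a depth-1 decision tree (a "stump"); grid values are illustrative
    parameters = {'n_estimators': [50, 100, 200], 'learning_rate': [0.01, 0.1, 1]}
    clf = GridSearchCV(AdaBoostClassifier(DecisionTreeClassifier(max_depth=1)),
                       parameters, cv=3, scoring='accuracy')
    clf.fit(X_train_scaled, y_train)
    print('Best parameters:', clf.best_params_)
    print('Test accuracy: {:.3f}'.format(clf.score(X_test_scaled, y_test)))
    # call as: adaboost(X_train_scaled, y_train, X_test_scaled, y_test)
    # after load_data() and minmax_scaler() from the stacking example above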

2.1.2 GBDT

def gbdt(X_train_scaled, y_train, X_test_scaled, y_test):
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    parameters = {'learning_rate': [0.001, 0.01, 0.1, 1, 10, 100]}
    clf = GridSearchCV(GradientBoostingClassifier(), parameters, cv=3, scoring='accuracy')
    clf.fit(X_train_scaled, y_train)

    print('Best parameters:', clf.best_params_)
    print('Best cross-validation score:', clf.best_score_)
    print('Test accuracy: {:.3f}'.format(clf.score(X_test_scaled, y_test)))


2.2 Bagging

  • Bagging 基于并行策略:基學習器之間不存在依賴關系,可同時生成。
  • 代表算法/模型
    • 隨機森林
    • 神經網絡的?Dropout?策略
import warnings
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier

warnings.filterwarnings('ignore')

X, y = make_circles(n_samples=300, noise=0.15, factor=0.5, random_state=233)
plt.scatter(X[y == 0, 0], X[y == 0, 1])
plt.scatter(X[y == 1, 0], X[y == 1, 1])
# plt.show()

X_train, X_test, y_train, y_test = train_test_split(X, y)
print('X_train.shape=', X_train.shape)
print('X_test.shape=', X_test.shape)
print(y_test)

print('===========knn==============')
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_train)
print('knn accuracy={}'.format(knn_clf.score(X_test, y_test)))
print('\n')

print('===========logistic regression==============')
log_clf = LogisticRegression()
log_clf.fit(X_train, y_train)
print('logistic regression accuracy={}'.format(log_clf.score(X_test, y_test)))
print('\n')

print('===========SVM==============')
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
print('SVM accuracy={}'.format(svm_clf.score(X_test, y_test)))
print('\n')

print('===========Decison tree==============')
dt_clf = DecisionTreeClassifier()
dt_clf.fit(X_train, y_train)
print('Decison tree accuracy={}'.format(dt_clf.score(X_test, y_test)))
print('\n')

print('===========ensemble classfier==============')
voting_clf = VotingClassifier(estimators=[
    ('knn', KNeighborsClassifier()),
    ('logistic', LogisticRegression()),
    ('SVM', SVC()),
    ('decision tree', DecisionTreeClassifier())],
    voting='hard')  # hard voting: strict majority rule
voting_clf.fit(X_train, y_train)
print('voting classfier accuracy={}'.format(voting_clf.score(X_test, y_test)))
print('\n')

print('===========random forest==============')
rf_clf = RandomForestClassifier(n_estimators=500,  # 500 trees
                                max_depth=6,       # depth of each tree
                                bootstrap=True,    # sampling with replacement
                                oob_score=True)    # validate on samples never drawn
rf_clf.fit(X, y)  # since oob_score is True, fit directly on the whole training set
print('rf accuracy={}'.format(rf_clf.oob_score_))
print('\n')

print('===========extreme random tree==============')
ex_clf = ExtraTreesClassifier(n_estimators=500,
                              max_depth=6,
                              bootstrap=True,
                              oob_score=True)
ex_clf.fit(X, y)
print('extreme random tree accuracy={}'.format(ex_clf.oob_score_))
print('\n')

print('===========Adaboost classifier==============')
ada_clf = AdaBoostClassifier(DecisionTreeClassifier(), n_estimators=500, learning_rate=0.3)
ada_clf.fit(X_train, y_train)
print('Adaboost accuracy={}'.format(ada_clf.score(X_test, y_test)))
print('\n')


One clever aspect of the random forest algorithm is its use of randomness, which makes the model more robust. If the forest contains N trees, then N bootstrap training sets are drawn at random, the N trees are trained on them separately, and the forest's prediction is obtained by aggregating (voting over) the individual trees' predictions.

Because the main building block of a random forest is the decision tree, many of its hyperparameters are shared with decision trees. Beyond those, two are worth noting. One is bootstrap, taking True or False, which controls whether the training subsets are drawn with replacement. The other is oob_score: with sampling with replacement, roughly 37% of the samples (about 1/e; see the check below) are never drawn while building the forest, so when oob_score is True there is no need to split the data into training and test sets; the model's accuracy is validated directly on these unused "out-of-bag" samples.
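That out-of-bag fraction can be checked directly (a standard result, not in the original post): each bootstrap draw misses a given sample with probability 1 − 1/n, so a bootstrap sample of size n omits it with probability (1 − 1/n)^n, which tends to e^(−1) ≈ 0.368 as n grows.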

From the results above, the Extremely Randomized Trees algorithm reaches the highest accuracy. It not only samples the training examples at random when building each data subset, but also samples the features at random (i.e., trees are built with a subset of the features rather than all of them), and it additionally draws split thresholds at random instead of searching for the optimal cut-point. In other words, for the feature matrix X, a random forest is random only over the rows, while Extremely Randomized Trees are random over both rows and columns.

The relationship of Boosting/Bagging to bias/variance

  • In short, Boosting improves weak classifiers by reducing bias, while Bagging reduces variance.
  • Boosting:
    • The basic idea of Boosting is to keep shrinking the model's training error (by fitting residuals or by up-weighting misclassified samples), strengthening the model's ability to learn and thereby reducing bias;
    • but Boosting does not noticeably reduce variance, because its base learners are strongly correlated during training and lack independence.
  • Bagging:
    • Averaging the predictions of n independent, uncorrelated models reduces the variance to 1/n of a single model's (see the note after this list);
    • assuming the base classifiers err independently, the probability that a majority of them err simultaneously falls as the number of base classifiers grows.
  • Relationship among generalization error, bias, variance, overfitting, underfitting, and model complexity (capacity): the original diagram did not survive extraction.
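A short justification of the 1/n variance claim above (standard algebra, added for completeness): if the base models' predictions f_1, ..., f_n are uncorrelated with common variance σ², then Var((1/n) Σ f_i) = σ²/n. If instead each pair of models has correlation ρ, the variance of the average becomes ρσ² + (1 − ρ)σ²/n, which is why Bagging gains the most from decorrelated base learners and why Boosting's strongly correlated learners do not enjoy the same variance reduction.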


Reference:

https://gitee.com/zonghaofan/team-learning/blob/master/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E7%AE%97%E6%B3%95%E5%9F%BA%E7%A1%80/Task2%20bayes_plus.ipynb

