
Statistical Learning Methods, Chapter 8 Homework: AdaBoost for Classification and Boosting Trees for Regression (Code Implementation)

Published: 2025/3/8 · 豆豆

AdaBoost for Classification

```python
import math

import numpy as np


class Adaboost_tree:
    def __init__(self, X, Y, feature_type='discrete'):
        self.X = np.array(X)
        self.Y = np.array(Y)
        self.N = len(X)
        self.feature_num = len(X[0])
        self.w = np.array([1 / self.N] * self.N)  # initial uniform sample weights
        self.g_x = []  # weak classifiers: (feature index, split value, sign, alpha)
        self.feature_type = feature_type  # feature type: 'discrete' or 'continuous'
        self.get_feature_dict()

    def compute_error(self, y):
        # weighted error rate of a candidate stump
        y = np.array(y)
        return np.sum(self.w[y != self.Y])

    def compute_am(self, em):
        # classifier coefficient: alpha_m = 1/2 * ln((1 - e_m) / e_m)
        return 1 / 2 * math.log((1 - em) / em)

    def get_feature_dict(self):
        # candidate split values observed for each feature
        self.f_dict = {}
        for i in range(self.feature_num):
            self.f_dict[i] = list(set([x[i] for x in self.X]))

    def fit(self, max_iter=20):
        for _ in range(max_iter):
            index_list = []
            error_list1 = []
            error_list2 = []
            pred_y_list1 = []
            pred_y_list2 = []
            if self.feature_type == 'discrete':
                for i in range(self.feature_num):
                    for j in self.f_dict[i]:
                        y1 = [1 if m[i] == j else -1 for m in self.X]
                        y2 = [-1 if m[i] == j else 1 for m in self.X]
                        error1 = self.compute_error(y1)
                        error2 = self.compute_error(y2)
                        index_list.append((i, j))
                        error_list1.append(error1)
                        error_list2.append(error2)
                        pred_y_list1.append(y1)
                        pred_y_list2.append(y2)
            if self.feature_type == 'continuous':
                for i in range(self.feature_num):
                    for j in self.f_dict[i]:
                        y1 = [1 if m[i] <= j else -1 for m in self.X]
                        y2 = [-1 if m[i] <= j else 1 for m in self.X]
                        error1 = self.compute_error(y1)
                        error2 = self.compute_error(y2)
                        index_list.append((i, j))
                        error_list1.append(error1)
                        error_list2.append(error2)
                        pred_y_list1.append(y1)
                        pred_y_list2.append(y2)
            # pick the stump (and its orientation) with the smallest weighted error
            if min(error_list1) <= min(error_list2):
                min_index = error_list1.index(min(error_list1))
                split_f_index, split_value = index_list[min_index]
                pred_y = pred_y_list1[min_index]
                positive = 1
            else:
                min_index = error_list2.index(min(error_list2))
                split_f_index, split_value = index_list[min_index]
                pred_y = pred_y_list2[min_index]
                positive = -1
            em = self.compute_error(pred_y)
            if em == 0:
                print('em is zero, break')
                break
            am = self.compute_am(em)
            self.g_x.append((split_f_index, split_value, positive, am))
            # reweight the samples and renormalize
            w_list = self.w * np.exp(-am * self.Y * np.array(pred_y))
            self.w = w_list / np.sum(w_list)

    def predict_single(self, x):
        result = 0
        for split_f_index, split_value, positive, am in self.g_x:
            if self.feature_type == 'discrete':
                if x[split_f_index] == split_value:
                    result += positive * am
                else:
                    result += -positive * am
            elif self.feature_type == 'continuous':
                if x[split_f_index] <= split_value:
                    result += positive * am
                else:
                    result += -positive * am
        return np.sign(result)

    def predict(self, X):
        result = [self.predict_single(x) for x in X]
        return result


def main():
    X = np.array([[0, 1, 3], [0, 3, 1], [1, 2, 2], [1, 1, 3], [1, 2, 3],
                  [0, 1, 2], [1, 1, 2], [1, 1, 1], [1, 3, 1], [0, 2, 1]])
    Y = np.array([-1, -1, -1, -1, -1, -1, 1, 1, -1, -1])
    Adaboost_tree_ = Adaboost_tree(X, Y, feature_type='continuous')
    Adaboost_tree_.fit(20)
    print(Adaboost_tree_.predict(X))


if __name__ == '__main__':
    main()

# Output:
# [-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, 1.0, 1.0, -1.0, -1.0]
```
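The weight-update arithmetic in `fit` (weighted error e_m, coefficient alpha_m, and the renormalized weights) can be checked by hand against Example 8.1 from the book: ten points x = 0..9, where the first stump "1 if x < 2.5 else -1" misclassifies three points. This is a standalone sketch, separate from the class above:

```python
import numpy as np

# One AdaBoost round on Example 8.1: the stump at x < 2.5 gets three
# of the ten uniformly weighted points wrong.
y_true = np.array([1, 1, 1, -1, -1, -1, 1, 1, 1, -1])
x = np.arange(10)
y_pred = np.where(x < 2.5, 1, -1)

w = np.full(10, 0.1)                       # initial uniform weights
e1 = np.sum(w[y_pred != y_true])           # weighted error rate
a1 = 0.5 * np.log((1 - e1) / e1)           # classifier coefficient alpha_1
w_new = w * np.exp(-a1 * y_true * y_pred)  # up-weight the mistakes
w_new /= w_new.sum()                       # renormalize

print(round(e1, 4))        # 0.3
print(round(a1, 4))        # 0.4236
print(round(w_new[6], 4))  # 0.1667 (a misclassified point)
```

The three misclassified points end up with weight 1/6 each and the seven correct ones with about 0.0714, matching the book's table for the first round.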

Boosting Trees for Regression

For the code of a single squared-error regression tree, see my earlier post:
Statistical Learning Methods, Chapter 5 Homework: ID3/C4.5 Classification Decision Trees and Squared-Error Binary Regression Tree (Code Implementation)
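The original `reg_tree.Cart_reg` lives in that earlier post. For readers who only want to run the boosting code below, a minimal stand-in with the same interface (`build_tree`, `predict`, and a `min_leave_data` stopping rule) might look like this. It is a hypothetical sketch, not the author's implementation:

```python
import numpy as np


class Cart_reg:
    """Hypothetical minimal squared-error regression tree.

    Same interface as the tree from the earlier post: greedily splits on
    the (feature, value) pair that minimizes the total squared error, and
    turns a node into a leaf (the mean of its targets) once it holds
    fewer than `min_leave_data` samples.
    """

    def __init__(self, X, Y, min_leave_data=3):
        self.X = np.array(X, dtype=float)
        self.Y = np.array(Y, dtype=float)
        self.min_leave_data = min_leave_data
        self.tree = None

    def _build(self, X, Y):
        if len(Y) < self.min_leave_data:
            return float(np.mean(Y))  # leaf value
        best = None
        for i in range(X.shape[1]):
            for v in np.unique(X[:, i]):
                left = X[:, i] <= v
                if left.all() or (~left).all():
                    continue  # degenerate split
                err = (np.sum((Y[left] - Y[left].mean()) ** 2)
                       + np.sum((Y[~left] - Y[~left].mean()) ** 2))
                if best is None or err < best[0]:
                    best = (err, i, v, left)
        if best is None:
            return float(np.mean(Y))  # all rows identical
        _, i, v, left = best
        return (i, v, self._build(X[left], Y[left]), self._build(X[~left], Y[~left]))

    def build_tree(self):
        self.tree = self._build(self.X, self.Y)

    def _pred_one(self, node, x):
        while isinstance(node, tuple):  # descend until a leaf value
            i, v, l, r = node
            node = l if x[i] <= v else r
        return node

    def predict(self, X):
        return [self._pred_one(self.tree, np.array(x, dtype=float)) for x in X]
```

Saved as `shixian2/reg_tree.py`, this stand-in would satisfy the `from shixian2 import reg_tree` import in the listing below, though its splits need not match the original tree's exactly.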

```python
import numpy as np

from shixian2 import reg_tree


class adboost_reg_tree:
    def __init__(self):
        self.tree_list = []

    def fit(self, X, Y, max_iter=5, min_leave_data=3):
        self.X = np.array(X)
        self.Y = np.array(Y)
        for i in range(max_iter):
            # fit a new regression tree to the current residuals
            reg_t = reg_tree.Cart_reg(self.X, self.Y, min_leave_data)
            reg_t.build_tree()
            pred_y = np.array(reg_t.predict(self.X))
            print(pred_y)
            self.tree_list.append(reg_t)
            self.Y = self.Y - pred_y  # residuals become the next target
            if (self.Y == 0).all():
                print('total_fit')
                break

    def predict(self, X):
        # the boosted prediction is the sum over all fitted trees
        result = np.zeros(len(X))
        for i in self.tree_list:
            y = i.predict(X)
            result += np.array(y)
        return result


def main():
    X = [[1, 5, 7, 4, 8, 1, 2],
         [2, 3, 5, 5, 2, 7, 8],
         [1, 2, 3, 4, 5, 6, 7],
         [1, 2, 1, 2, 2, 3, 9],
         [2, 8, 9, 7, 0, 1, 4],
         [4, 8, 3, 4, 5, 6, 7],
         [4, 1, 3, 1, 5, 8, 0]]
    Y = [2, 6, 2, 5, 8, 3, 2]
    adboost_reg_tree_ = adboost_reg_tree()
    adboost_reg_tree_.fit(X, Y, max_iter=5, min_leave_data=4)
    print(adboost_reg_tree_.predict(X))


if __name__ == '__main__':
    main()

# Output (per-iteration residual-tree predictions, then the final sum):
# [2.25       6.33333333 2.25       6.33333333 6.33333333 2.25       2.25      ]
# [-0.27083333 -0.27083333 -0.27083333 -1.33333333  1.20833333  1.20833333 -0.27083333]
# [ 0.015625   -0.0625      0.015625    0.015625    0.45833333 -0.45833333  0.015625  ]
# [ 0.00390625  0.00390625  0.00390625 -0.015625    0.          0.          0.00390625]
# [ 0.00130208 -0.00195312  0.00043403  0.00043403 -0.00195312  0.00043403  0.00130208]
# [2.         6.00195312 1.99913194 5.00043403 7.99804688 3.00043403 2.        ]
```
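The same residual-fitting loop appears in Example 8.2 of the book (x = 1..10 with a known target series), where the squared loss is reported as 0.17 after six trees. The sketch below reruns that example with a hand-rolled least-squares stump standing in for the full tree:

```python
import numpy as np

# Boosting-tree residual loop on Example 8.2: each round fits the best
# least-squares stump to the current residuals, then subtracts it.
x = np.arange(1, 11, dtype=float)
y = np.array([5.56, 5.70, 5.91, 6.40, 6.80, 7.05, 8.90, 8.70, 9.00, 9.05])


def best_stump(x, r):
    # search split points 1.5, 2.5, ..., 9.5 for the least-squares split
    best = None
    for s in np.arange(1.5, 10.0, 1.0):
        left, right = r[x <= s], r[x > s]
        err = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if best is None or err < best[0]:
            best = (err, s, left.mean(), right.mean())
    return best[1:]


residual = y.copy()
stumps = []
for _ in range(6):
    s, c_left, c_right = best_stump(x, residual)
    stumps.append((s, c_left, c_right))
    residual = residual - np.where(x <= s, c_left, c_right)  # what is left to fit

pred = sum(np.where(x <= s, cl, cr) for s, cl, cr in stumps)
err = float(np.sum((y - pred) ** 2))
print(round(err, 2))
```

The first stump splits at x = 6.5 with loss 1.93, as in the book's table; later rounds use unrounded leaf means, so the final loss agrees with the book's 0.17 only up to rounding.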
