
Predicting NBA Game Results with the Decision Tree Algorithm


Introduction to the Decision Tree Algorithm

A decision tree is a tree structure (it may be binary or non-binary).
Each internal (non-leaf) node represents a test on a feature attribute, each branch represents one outcome of that test over some range of values, and each leaf node stores a class label.

Classifying with a decision tree starts at the root node: test the corresponding feature attribute of the item to be classified, follow the branch that matches its value, and repeat until a leaf node is reached; the class stored at that leaf is the decision result.

In short, the core of a decision tree model consists of the following (a minimal toy example follows this list):

  • Nodes and directed edges
  • Two kinds of nodes: internal nodes and leaf nodes
  • An internal node represents a feature; a leaf node represents a class
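
As an illustration only (not from the original article), the following sketch fits scikit-learn's DecisionTreeClassifier on a tiny made-up dataset and prints the learned tree, so the root-to-leaf decision process described above can be seen concretely; the toy features and labels are invented for this example.

from sklearn.tree import DecisionTreeClassifier, export_text

# Two invented binary features: [home team won its last game, visitor team won its last game]
X_toy = [[1, 0], [1, 1], [0, 1], [0, 0], [1, 0], [0, 1]]
y_toy = [1, 1, 0, 1, 1, 0]  # 1 = home win, 0 = home loss (made-up labels)

toy_clf = DecisionTreeClassifier(random_state=14)
toy_clf.fit(X_toy, y_toy)

# each internal node is printed as a test on a feature, each leaf as a class
print(export_text(toy_clf, feature_names=["HomeLastWin", "VisitorLastWin"]))
print(toy_clf.predict([[1, 0]]))  # walk the tree from the root down to a leaf for a new game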

Loading the Dataset

import numpy as np
import pandas as pd

file = "NBA2014.csv"
data = pd.read_csv(file)
data.iloc[:5]

Data Preprocessing

# Don't read the first row, as it is blank. Parse the date column as a date
data = pd.read_csv(file, parse_dates=[0])
data.columns = ["Date", "Start", "Visitor Team", "VisitorPts", "Home Team", "HomePts",
                "Score Type", "OT?", "Attend", "Notes"]
data.iloc[:5]

data["Home Win"] = data["VisitorPts"] < data["HomePts"]
y_true = data["Home Win"].values
data.iloc[:5]
print("Home Team Win Percentage: {0:.1f}%".format(np.mean(y_true) * 100))

data["HomeLastWin"] = False
data["VisitorLastWin"] = False
data.iloc[:5]

# create a dict to store the team last result
from collections import defaultdict
won_last = defaultdict(int)
for index, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    row["HomeLastWin"] = won_last[home_team]
    row["VisitorLastWin"] = won_last[visitor_team]
    data.iloc[index] = row
    # set the current win
    won_last[home_team] = row["Home Win"]
    won_last[visitor_team] = not row["Home Win"]
data.iloc[20:25]
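
The row-by-row write-back (data.iloc[index] = row) above is easy to follow but slow on a full season of games. As an optional alternative, here is a sketch (not part of the original article, and assuming the same column names as above) that collects the two columns in plain Python lists and assigns them once at the end:

from collections import defaultdict

won_last = defaultdict(int)
home_last, visitor_last = [], []
for _, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    # record what each team did in its previous game (0/False before its first game)
    home_last.append(bool(won_last[home_team]))
    visitor_last.append(bool(won_last[visitor_team]))
    # update the stored result with the current game
    won_last[home_team] = row["Home Win"]
    won_last[visitor_team] = not row["Home Win"]
data["HomeLastWin"] = home_last
data["VisitorLastWin"] = visitor_last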

Building the Model

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older scikit-learn versions

clf = DecisionTreeClassifier(random_state=14)

# create the dataset
X_win = data[["HomeLastWin", "VisitorLastWin"]].values
scores = cross_val_score(clf, X_win, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
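
As a sanity check (added here as a sketch, not part of the original article), it helps to compare this accuracy against a trivial baseline that always predicts the most frequent class, i.e. a home win; the decision tree needs to beat roughly the home-win percentage printed earlier to add any value. DummyClassifier is scikit-learn's built-in baseline estimator.

from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

# always predict the most frequent class ("home team wins")
baseline = DummyClassifier(strategy="most_frequent")
baseline_scores = cross_val_score(baseline, X_win, y_true, scoring="accuracy")
print("Baseline accuracy: {0:.1f}%".format(np.mean(baseline_scores) * 100))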

Adding a New Feature: Season Standings

import chardet

file = "NBA2013_expanded-standings.csv"
with open(file, 'rb') as f:
    print(f)
    result = chardet.detect(f.read())  # or readline if the file is large
standings = pd.read_csv(file, skiprows=[0], encoding=result['encoding'])

# create a new feature: HomeTeamRankHigher
data["HomeTeamRankHigher"] = 0
for index, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    # the standings file still uses the team's old name
    if home_team == "New Orleans Pelicans":
        home_team = "New Orleans Hornets"
    elif visitor_team == "New Orleans Pelicans":
        visitor_team = "New Orleans Hornets"
    home_rank = standings[standings["Team"] == home_team]["Rk"].values[0]
    visitor_rank = standings[standings["Team"] == visitor_team]["Rk"].values[0]
    row["HomeTeamRankHigher"] = int(home_rank > visitor_rank)
    data.iloc[index] = row
data.iloc[:5]

# create the train set
X_homehigher = data[["HomeLastWin", "VisitorLastWin", "HomeTeamRankHigher"]].values
clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_homehigher, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

# who won the last match between these two teams
last_match_winner = defaultdict(int)
data["HomeTeamWonLast"] = 0

for index, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    # sort the team names so the pair is the same key regardless of who plays at home
    teams = tuple(sorted([home_team, visitor_team]))
    # who won the last game between these two teams?
    row["HomeTeamWonLast"] = 1 if last_match_winner[teams] == row["Home Team"] else 0
    data.iloc[index] = row
    winner = row["Home Team"] if row["Home Win"] else row["Visitor Team"]
    last_match_winner[teams] = winner
data.iloc[:5]

# create the dataset
X_lastwinner = data[["HomeTeamRankHigher", "HomeTeamWonLast"]].values
clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_lastwinner, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

# convert the string team names into integers
from sklearn.preprocessing import LabelEncoder
encoding = LabelEncoder()
encoding.fit(data["Home Team"].values)

home_teams = encoding.transform(data["Home Team"].values)
visitor_teams = encoding.transform(data["Visitor Team"].values)
X_teams = np.vstack([home_teams, visitor_teams]).T

# encode these integers as binary (one-hot) features
from sklearn.preprocessing import OneHotEncoder
onehot = OneHotEncoder()
X_teams_expanded = onehot.fit_transform(X_teams).todense()

clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_teams_expanded, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
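
As a side note (a sketch, not from the original article), pandas.get_dummies can produce an equivalent one-hot team encoding in a single call, without LabelEncoder and OneHotEncoder; scores may differ slightly because the column order is not identical.

# one-hot encode both team-name columns directly with pandas
team_dummies = pd.get_dummies(data[["Home Team", "Visitor Team"]])
X_teams_alt = team_dummies.values

clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_teams_alt, y_true, scoring="accuracy")
print("Accuracy (get_dummies encoding): {0:.1f}%".format(np.mean(scores) * 100))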

A New Model: Random Forest

# use a random forest
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(random_state=14)
scores = cross_val_score(clf, X_teams_expanded, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

X_all = np.hstack([X_lastwinner, X_teams_expanded])
print("X_all shape: {0}".format(X_all.shape))
clf = RandomForestClassifier(random_state=14)
scores = cross_val_score(clf, X_all, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older scikit-learn versions
# RandomForestClassifier defaults, for reference:
# n_estimators=10, criterion='gini', max_depth=None,
# min_samples_split=2, min_samples_leaf=1, max_features='auto',
# max_leaf_nodes=None, bootstrap=True, oob_score=False,
# n_jobs=1, random_state=None, verbose=0
parameter_space = {
    "max_features": [2, 10, 'auto'],
    "n_estimators": [100, ],
    "criterion": ["gini", "entropy"],
    "min_samples_leaf": [2, 4, 6],
}
clf = RandomForestClassifier(random_state=14)
grid = GridSearchCV(clf, parameter_space)
grid.fit(X_all, y_true)
print("Accuracy: {0:.1f}%".format(grid.best_score_ * 100))
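
To see what the search actually picked, the fitted GridSearchCV object can be inspected; the snippet below (a follow-up sketch, not in the original article) prints the winning parameter combination and the importance the tuned forest assigns to the two hand-built features, which sit in the first two columns of X_all.

print(grid.best_params_)            # the parameter combination GridSearchCV selected
best_forest = grid.best_estimator_  # refit on all of X_all by default

importances = best_forest.feature_importances_
# columns 0 and 1 of X_all are HomeTeamRankHigher and HomeTeamWonLast;
# the remaining columns are the one-hot encoded team indicators
print("HomeTeamRankHigher importance: {0:.3f}".format(importances[0]))
print("HomeTeamWonLast importance:    {0:.3f}".format(importances[1]))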

Summary

This walkthrough built a decision tree to predict whether the home team wins, progressively engineered features (last-game results, season standings, head-to-head history, and one-hot encoded team names), and finished with a random forest whose parameters were tuned by grid search.