
ML in Action 决策树

Published: 2024/4/15

This article, collected by 生活随笔, introduces ML in Action 决策树 (decision trees from *Machine Learning in Action*) and is shared here for reference.

Project Address:

https://github.com/TheOneAC/ML.git

dataset in ML/ML_ation/tree

Decision trees

  • Advantages: low computational cost, insensitive to missing intermediate values, can handle irrelevant feature data
  • Disadvantage: prone to overfitting (over-classification)
  • Applicable data types: numeric and nominal

Decision-tree pseudocode: createBranch

```
Check whether every item in the dataset belongs to the same class:
    If so, return the class tag
    Else:
        Find the best feature for splitting the dataset
        Split the dataset
        Create a branch node
        For each subset: recursively call createBranch
        Return the branch node
```

The recursion stops when all attributes have been used up, or when every item in a subset belongs to the same class.

Shannon entropy

The Shannon entropy of a dataset is H = -Σ p(i) · log2 p(i), where p(i) is the proportion of items with class label i.

```python
from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]          # class label is the last column
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)   # H = -sum(p * log2 p)
    return shannonEnt
```
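As a quick sanity check, here is a self-contained sketch that computes the same quantity with `collections.Counter` on the book's toy fish dataset (two 'yes' labels, three 'no' labels); the function name `entropy` is my own:

```python
from collections import Counter
from math import log

def entropy(labels):
    # H = -sum(p_i * log2(p_i)) over the class distribution
    n = len(labels)
    return -sum((c / n) * log(c / n, 2) for c in Counter(labels).values())

# Toy dataset labels from the book: 2 'yes', 3 'no'
labels = ['yes', 'yes', 'no', 'no', 'no']
print(round(entropy(labels), 4))  # 0.971
```

A uniform 50/50 split would give entropy 1.0, and a pure subset gives 0.0, which is why lower post-split entropy means a better split.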

Dataset splitting and choosing the best split (minimal entropy, i.e. maximal information gain)

```python
def splitDataSet(dataSet, axis, value):
    """Return rows whose feature `axis` equals `value`, with that feature removed."""
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis + 1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet

def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1       # last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain         # fixed: the original mistakenly assigned baseInfoGain
            bestFeature = i
    return bestFeature
```
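To make the splitting behavior concrete, a minimal self-contained example (re-declaring `splitDataSet` so it runs on its own) on the book's toy dataset:

```python
def splitDataSet(dataSet, axis, value):
    # Keep rows where feature `axis` == value, dropping that feature column
    ret = []
    for featVec in dataSet:
        if featVec[axis] == value:
            ret.append(featVec[:axis] + featVec[axis + 1:])
    return ret

myDat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'],
         [0, 1, 'no'], [0, 1, 'no']]
print(splitDataSet(myDat, 0, 1))  # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(splitDataSet(myDat, 0, 0))  # [[1, 'no'], [1, 'no']]
```

Note the split column itself is removed, which is why the recursion in `createTree` eventually runs out of features.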

When all features are exhausted and the class label is still ambiguous, decide the leaf's class by majority vote:

```python
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(),   # iteritems() in the original Python 2 code
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
```
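The same majority vote can be written more compactly with the standard library's `collections.Counter`; this variant (the name `majority_class` is my own) behaves like `majorityCnt` above:

```python
from collections import Counter

def majority_class(class_list):
    # most_common(1) returns [(label, count)] for the most frequent label
    return Counter(class_list).most_common(1)[0][0]

print(majority_class(['yes', 'no', 'no', 'yes', 'no']))  # no
```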

Building the tree

```python
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]                 # all items share one class: stop
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)       # features exhausted: majority vote
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatureLabel = labels[bestFeat]
    myTree = {bestFeatureLabel: {}}
    del labels[bestFeat]                    # note: mutates the caller's labels list
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]               # copy so sibling branches don't share state
        myTree[bestFeatureLabel][value] = createTree(
            splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
```
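To see the whole pipeline run end to end, here is a self-contained sketch on the book's toy fish dataset. It condenses the functions above (and, unlike the version above, avoids mutating the caller's `labels` list) and reproduces the tree the book reports:

```python
from math import log
import operator

def calcShannonEnt(dataSet):
    counts = {}
    for featVec in dataSet:
        counts[featVec[-1]] = counts.get(featVec[-1], 0) + 1
    n = len(dataSet)
    return -sum((c / n) * log(c / n, 2) for c in counts.values())

def splitDataSet(dataSet, axis, value):
    return [row[:axis] + row[axis + 1:] for row in dataSet if row[axis] == value]

def chooseBestFeatureToSplit(dataSet):
    baseEntropy = calcShannonEnt(dataSet)
    bestGain, bestFeature = 0.0, -1
    for i in range(len(dataSet[0]) - 1):
        newEntropy = 0.0
        for value in set(row[i] for row in dataSet):
            sub = splitDataSet(dataSet, i, value)
            newEntropy += len(sub) / len(dataSet) * calcShannonEnt(sub)
        if baseEntropy - newEntropy > bestGain:
            bestGain, bestFeature = baseEntropy - newEntropy, i
    return bestFeature

def majorityCnt(classList):
    counts = {}
    for vote in classList:
        counts[vote] = counts.get(vote, 0) + 1
    return sorted(counts.items(), key=operator.itemgetter(1), reverse=True)[0][0]

def createTree(dataSet, labels):
    classList = [row[-1] for row in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    best = chooseBestFeatureToSplit(dataSet)
    bestLabel = labels[best]
    tree = {bestLabel: {}}
    subLabels = labels[:best] + labels[best + 1:]   # copy instead of del
    for value in set(row[best] for row in dataSet):
        tree[bestLabel][value] = createTree(
            splitDataSet(dataSet, best, value), subLabels[:])
    return tree

myDat = [[1, 1, 'yes'], [1, 1, 'yes'], [1, 0, 'no'],
         [0, 1, 'no'], [0, 1, 'no']]
labels = ['no surfacing', 'flippers']
print(createTree(myDat, labels))
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
```

The root is 'no surfacing' because its information gain (about 0.42) beats 'flippers' (about 0.17) on this dataset.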

Testing the classifier

```python
def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]    # bare keys()[0] only works in Python 2
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)  # translate label name to feature index
    for key in secondDict:
        if testVec[featIndex] == key:
            if isinstance(secondDict[key], dict):
                classLabel = classify(secondDict[key], featLabels, testVec)
            else:
                classLabel = secondDict[key]
    return classLabel
```

```
>>> import trees
>>> myDat, labels = trees.createDataSet()
>>> labels
['no surfacing', 'flippers']
>>> myTree = treePlotter.retrieveTree(0)
>>> myTree
{'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
>>> trees.classify(myTree, labels, [1, 0])
'no'
>>> trees.classify(myTree, labels, [1, 1])
'yes'
```

Persisting and reloading the tree

```python
def storeTree(inputTree, filename):
    import pickle
    fw = open(filename, 'wb')   # binary mode, required for pickle in Python 3
    pickle.dump(inputTree, fw)
    fw.close()

def grabTree(filename):
    import pickle
    fr = open(filename, 'rb')
    return pickle.load(fr)
```
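A quick round-trip check of the pickle persistence; this self-contained sketch writes to a temporary directory instead of a fixed filename (my choice, not the book's):

```python
import os
import pickle
import tempfile

def storeTree(inputTree, filename):
    with open(filename, 'wb') as fw:    # binary mode, as Python 3 pickle requires
        pickle.dump(inputTree, fw)

def grabTree(filename):
    with open(filename, 'rb') as fr:
        return pickle.load(fr)

tree = {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}
path = os.path.join(tempfile.mkdtemp(), 'classifierStorage.txt')
storeTree(tree, path)
print(grabTree(path) == tree)  # True
```

Pickling the finished tree means the (relatively expensive) tree construction does not have to be repeated for every classification run.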

Test

```python
#!/usr/bin/python
import trees

myDat, labels = trees.createDataSet()
myTree = trees.createTree(myDat, labels)
trees.storeTree(myTree, 'classifierStorage.txt')
print(trees.grabTree('classifierStorage.txt'))
```

Plotting the tree structure

```python
#!/usr/bin/python
import matplotlib.pyplot as plt

decisionNode = dict(boxstyle="sawtooth", fc="0.8")
leafNode = dict(boxstyle="round4", fc="0.8")
arrow_args = dict(arrowstyle="<-")

def plotNode(nodeTxt, centerPt, parentPt, nodeType):
    createPlot.ax1.annotate(nodeTxt, xy=parentPt, xycoords="axes fraction",
                            xytext=centerPt, textcoords="axes fraction",
                            va="center", ha="center", bbox=nodeType,
                            arrowprops=arrow_args)
```

Drawing nodes

```python
def createPlot():
    fig = plt.figure(1, facecolor="white")
    fig.clf()
    createPlot.ax1 = plt.subplot(111, frameon=False)
    plotNode("a decision node", (0.5, 0.1), (0.1, 0.5), decisionNode)
    plotNode("a leaf node", (0.8, 0.1), (0.3, 0.8), leafNode)
    plt.show()
```

Run it from the Python interpreter like this:

```
>>> import treePlotter
>>> treePlotter.createPlot()
```

  • The result is a figure showing one decision node and one leaf node (image omitted in this copy).
```python
def getNumLeafs(myTree):
    numLeafs = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict:
        if isinstance(secondDict[key], dict):
            numLeafs += getNumLeafs(secondDict[key])   # original typo: getNumleafs
        else:
            numLeafs += 1
    return numLeafs

def getTreeDepth(myTree):
    maxDepth = 0
    firstStr = list(myTree.keys())[0]
    secondDict = myTree[firstStr]
    for key in secondDict:
        if isinstance(secondDict[key], dict):
            thisDepth = 1 + getTreeDepth(secondDict[key])
        else:
            thisDepth = 1
        if thisDepth > maxDepth:
            maxDepth = thisDepth
    return maxDepth

def retrieveTree(i):
    listOfTrees = [
        {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}},
        {'no surfacing': {0: 'no', 1: {'flippers':
            {0: {'head': {0: 'no', 1: 'yes'}}, 1: 'no'}}}}
    ]
    return listOfTrees[i]

def plotMidText(cntrPt, parentPt, txtString):
    xMid = (parentPt[0] - cntrPt[0]) / 2.0 + cntrPt[0]
    yMid = (parentPt[1] - cntrPt[1]) / 2.0 + cntrPt[1]
    createPlot.ax1.text(xMid, yMid, txtString)

def plotTree(myTree, parentPt, nodeTxt):
    numLeafs = getNumLeafs(myTree)
    depth = getTreeDepth(myTree)
    firstStr = list(myTree.keys())[0]
    cntrPt = (plotTree.xOff + (1.0 + float(numLeafs)) / 2.0 / plotTree.totalW,
              plotTree.yOff)
    plotMidText(cntrPt, parentPt, nodeTxt)
    plotNode(firstStr, cntrPt, parentPt, decisionNode)
    secondDict = myTree[firstStr]
    plotTree.yOff = plotTree.yOff - 1.0 / plotTree.totalD
    for key in secondDict:
        if isinstance(secondDict[key], dict):
            plotTree(secondDict[key], cntrPt, str(key))
        else:
            plotTree.xOff = plotTree.xOff + 1.0 / plotTree.totalW
            plotNode(secondDict[key], (plotTree.xOff, plotTree.yOff),
                     cntrPt, leafNode)
            plotMidText((plotTree.xOff, plotTree.yOff), cntrPt, str(key))
    plotTree.yOff = plotTree.yOff + 1.0 / plotTree.totalD

def createPlot(inTree):
    fig = plt.figure(1, facecolor='white')
    fig.clf()
    axprops = dict(xticks=[], yticks=[])
    createPlot.ax1 = plt.subplot(111, frameon=False, **axprops)
    plotTree.totalW = float(getNumLeafs(inTree))
    plotTree.totalD = float(getTreeDepth(inTree))
    plotTree.xOff = -0.5 / plotTree.totalW
    plotTree.yOff = 1.0
    plotTree(inTree, (0.5, 1.0), '')
    plt.show()
```

Extended test: lenses dataset (lens.py)

Project Address: `https://github.com/TheOneAC/ML.git`

dataset: `lens.txt in ML/ML_ation/tree`

```python
#!/usr/bin/python
import trees
import treePlotter

fr = open("lenses.txt")
lenses = [inst.strip().split('\t') for inst in fr.readlines()]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
lensesTree = trees.createTree(lenses, lensesLabels)
print(lensesTree)
treePlotter.createPlot(lensesTree)
```

Reposted from: https://www.cnblogs.com/zeroArn/p/6691287.html
