Junior-Year Course Project: A Classification, Clustering, and Prediction System
This is our junior-year machine learning course project. Below is a walkthrough of the system we built. First, the project requirements:
1. Become familiar with the complete machine learning workflow: problem modeling, data acquisition, feature engineering, model training, model tuning, and deployment; or, viewed as three major stages: data preparation and preprocessing, model selection and training, and model validation and parameter tuning.
2. Draw a mind map that organizes the algorithms covered in the course into supervised, unsupervised, semi-supervised, and reinforcement learning.
3. Choose your own learning tasks and, following the machine learning workflow, design a classification system, a prediction system, and a clustering system. Each system must be trained with several different algorithms, validated and tuned with multiple methods (a cross-validation/grid-search sketch is shown right after this list), evaluated with several suitable metrics, and analyzed with visualizations.
(1) Classification:
k-nearest neighbors, naive Bayes, decision tree, BP neural network, AdaBoost, GBDT, random forest, logistic regression, etc.
(2) Prediction: Bayesian network, Markov model, linear regression, XGBoost, ridge regression, polynomial regression, decision tree regression, deep neural network prediction
(3) Clustering: K-means, hierarchical clustering (BIRCH), density-based clustering (DBSCAN), Gaussian mixture models (GMM), OPTICS, grid-based clustering (STING, CLIQUE), Mean Shift
Algorithms marked in blue on the assignment sheet are required, those in red are optional (choose at least one; choosing more earns bonus points), and those in black may be skipped (using them also earns bonus points).
4. Additional requirements
(1) The algorithms may be implemented by calling the relevant Python library functions directly, but the library source code must be analyzed to clarify each algorithm's structure and the role of its parts. Alternatively, implement the algorithms yourself and compare them experimentally against the library versions.
(2) Datasets must cover small, medium, and large scales, and should include structured, semi-structured, and unstructured data.
(3) Within each family of algorithms, compare the individual algorithms across different datasets and different metrics.
(4) The code must carry reasonably detailed comments, with an explanation and functional notes for every module.
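For the validation and tuning step in requirement 3, scikit-learn's built-in tools are a natural fit. The following is only a minimal sketch, assuming a k-NN classifier on the iris data; the parameter grid and fold count are illustrative, not the exact setup used in the project:

```python
# A minimal sketch of model validation and parameter tuning with scikit-learn.
# The dataset, classifier, and parameter grid are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)

# 5-fold cross-validation of a fixed model
scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), x_train, y_train, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# grid search over k and the Minkowski power parameter p
grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={"n_neighbors": list(range(1, 16)), "p": [1, 2]},
                    cv=5, scoring="accuracy")
grid.fit(x_train, y_train)
print("best params:", grid.best_params_, "test accuracy:", grid.score(x_test, y_test))
```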
Next, a look at the interface of our "RGB system" (so called because the UI is, frankly, ugly: it is laid out in three flat primary colors).
- The main window has four Buttons: the first three each open one of the subsystems, and the last one exits the program.
Code:
main.py
```python
import os
import tkinter as tk


# each subsystem lives in its own script and is launched as a separate process
def run_classfiy():
    os.system(r'python UI_classfiy.py')


def run_cluster():
    os.system(r'python UI_Cluster.py')


def run_forecast():
    os.system(r'python UI_forecast.py')


window = tk.Tk()
window.title("machine learning")
window.geometry("300x400")  # window size
tk.Label(window, text="请选择要进行的操作", font=("微软雅黑", 12)).pack()
tk.Button(window, text="分类", font=("微软雅黑", 12), width=15, height=2, command=run_classfiy).pack()
tk.Button(window, text="聚类", font=("微软雅黑", 12), width=15, height=2, command=run_cluster).pack()
tk.Button(window, text="预测", font=("微软雅黑", 12), width=15, height=2, command=run_forecast).pack()
tk.Button(window, text="退出", font=("微软雅黑", 12), width=15, height=2, command=window.quit).pack()
window.mainloop()  # enter the Tk event loop
```
classfiy.py
```python
# Classification algorithms: k-NN, naive Bayes, decision tree, AdaBoost, GBDT,
# random forest, logistic regression
import matplotlib.pyplot as plt
import numpy as np
from sklearn import tree
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier


class Classfiy(object):
    def __init__(self, x_train, y_train, x_test, y_test):
        self.x_train = x_train
        self.y_train = y_train
        self.x_test = x_test
        self.y_test = y_test
        self.KNN_pred, self.beyes_pred, self.DT_pred, self.AdaBoost_pred, \
            self.RF_pred, self.LR_pred, self.GBDT_pred = 0, 0, 0, 0, 0, 0, 0

    def KNN(self, k=5, p=2):
        knn = KNeighborsClassifier(n_neighbors=k, p=p, metric='minkowski')
        knn.fit(self.x_train, self.y_train)
        self.KNN_pred = knn.predict(self.x_test)
        self.save_pic("KNN", self.KNN_pred)

    def beyes(self):
        beyes = GaussianNB()
        beyes.fit(self.x_train, self.y_train)
        self.beyes_pred = beyes.predict(self.x_test)
        self.save_pic("beyes", self.beyes_pred)

    def DT(self):
        dt = tree.DecisionTreeClassifier(criterion="entropy")
        dt.fit(self.x_train, self.y_train)
        self.DT_pred = dt.predict(self.x_test)
        self.save_pic("DT", self.DT_pred)

    def AdaBoost(self, n_estimators=100):
        AB = AdaBoostClassifier(n_estimators=n_estimators)
        AB.fit(self.x_train, self.y_train)
        self.AdaBoost_pred = AB.predict(self.x_test)
        self.save_pic("AdaBoost", self.AdaBoost_pred)

    def RF(self):
        RF = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2)
        RF.fit(self.x_train, self.y_train)
        self.RF_pred = RF.predict(self.x_test)
        self.save_pic("RF", self.RF_pred)

    def LR(self):
        LR = LogisticRegression(solver='liblinear')
        LR.fit(self.x_train, self.y_train)
        self.LR_pred = LR.predict(self.x_test)
        self.save_pic("LR", self.LR_pred)

    def GBDT(self):
        # GBDT here uses the regressor and rounds its continuous output down to integer class labels
        GBDT = GradientBoostingRegressor()
        GBDT.fit(self.x_train, self.y_train)
        self.GBDT_pred = GBDT.predict(self.x_test)
        self.GBDT_pred = np.asarray(self.GBDT_pred, dtype=int)
        self.save_pic("GBDT", self.GBDT_pred)

    def Evaluation_indicators(self, stri, y_pred):
        # one row per algorithm: [name, accuracy, precision, recall, f1]
        return [stri,
                round(accuracy_score(self.y_test, y_pred), 3),
                round(precision_score(self.y_test, y_pred, average="macro"), 3),
                round(recall_score(self.y_test, y_pred, average="micro"), 3),
                round(f1_score(self.y_test, y_pred, average="weighted"), 3)]

    def save_pic(self, stri, y_pred):
        plt.clf()  # start from a clean figure so successive plots do not overlap
        plt.title(stri)
        plt.scatter(self.x_test[:, 0], self.x_test[:, 1], c=y_pred)
        plt.savefig("image/" + stri + ".png", dpi=55)
```
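Outside the GUI, `Classfiy` can be exercised on its own; the sketch below mirrors what `ButtonClassfiy.run()` does later, and assumes an existing `image/` directory for the saved scatter plots:

```python
# Minimal stand-alone driver for Classfiy (assumes an "image/" directory exists).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from classfiy import Classfiy

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.30, random_state=42)

clf = Classfiy(x_train, y_train, x_test, y_test)
clf.KNN()
clf.RF()
# each row: [name, accuracy, precision, recall, f1]
print(clf.Evaluation_indicators("KNN", clf.KNN_pred))
print(clf.Evaluation_indicators("RF", clf.RF_pred))
```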
Clusterer.py
```python
# Clustering algorithms: K-means, BIRCH, DBSCAN, GMM, OPTICS, Mean Shift
import numpy as np
from matplotlib import pyplot as plt
from sklearn import metrics
from sklearn.cluster import KMeans, Birch, DBSCAN, OPTICS, MeanShift
from sklearn.metrics import accuracy_score, homogeneity_completeness_v_measure
from sklearn.mixture import GaussianMixture


def purity_score(y_true, y_pred):
    """Cluster purity: assign each cluster its majority class, then compute accuracy."""
    y_voted_labels = np.zeros(y_true.shape)
    labels = np.unique(y_true)
    ordered_labels = np.arange(labels.shape[0])
    for k in range(labels.shape[0]):
        y_true[y_true == labels[k]] = ordered_labels[k]
    labels = np.unique(y_true)
    bins = np.concatenate((labels, [np.max(labels) + 1]), axis=0)
    for cluster in np.unique(y_pred):
        hist, _ = np.histogram(y_true[y_pred == cluster], bins=bins)
        winner = np.argmax(hist)
        y_voted_labels[y_pred == cluster] = winner
    return accuracy_score(y_true, y_voted_labels)


class Cluser:
    def __init__(self, k, data, y_true):
        self.K = k
        self.data = data
        self.y_true = y_true
        self.kmeams_pred, self.birch_pred, self.dbscan_pred, self.gmm_pred, \
            self.optics_pred, self.MS_pred = 0, 0, 0, 0, 0, 0

    def K_means(self):
        kmeans = KMeans(n_clusters=self.K)
        self.kmeams_pred = kmeans.fit_predict(self.data)
        self.save_pic("K_means", self.kmeams_pred)

    def BIRCH(self):
        model = Birch(n_clusters=self.K)
        self.birch_pred = model.fit_predict(self.data)
        self.save_pic("BIRCH", self.birch_pred)

    def DBSCAN(self):
        model = DBSCAN()
        self.dbscan_pred = model.fit_predict(self.data)
        self.save_pic("DBSCAN", self.dbscan_pred)

    def GMM(self):
        # pass the requested cluster count explicitly (GaussianMixture defaults to a single component)
        model = GaussianMixture(n_components=self.K, n_init=3)
        self.gmm_pred = model.fit_predict(self.data)
        self.save_pic("GMM", self.gmm_pred)

    def OPTICS(self):
        model = OPTICS()
        self.optics_pred = model.fit_predict(self.data)
        self.save_pic("OPTICS", self.optics_pred)

    def Mean_Shift(self):
        model = MeanShift()
        self.MS_pred = model.fit_predict(self.data)
        self.save_pic("Mean_Shift", self.MS_pred)

    def Evaluation_indicators(self, stri, y_pred):
        # one row per algorithm: [name, purity, ARI, f1, mutual information,
        #                         homogeneity, completeness, v-measure]
        h_c_v = homogeneity_completeness_v_measure(self.y_true, y_pred)
        return [stri,
                round(purity_score(self.y_true, y_pred), 3),
                round(metrics.adjusted_rand_score(self.y_true, y_pred), 3),
                round(metrics.f1_score(self.y_true, y_pred, average='micro'), 3),
                round(metrics.mutual_info_score(self.y_true, y_pred), 3),
                round(h_c_v[0], 3),
                round(h_c_v[1], 3),
                round(h_c_v[2], 3)]

    def save_pic(self, stri, y_pred):
        plt.clf()  # start from a clean figure so successive plots do not overlap
        plt.title(stri)
        plt.scatter(self.data[:, 0], self.data[:, 1], c=y_pred)
        plt.savefig("image/" + stri + ".png", dpi=55)
```
forecast.py
```python
# Prediction algorithms: Bayesian regression, Markov model (HMM), linear regression,
# XGBoost, ridge regression, polynomial regression, decision tree regression
import numpy as np
import xgboost
from hmmlearn.hmm import GaussianHMM
from matplotlib import pyplot as plt
from sklearn import linear_model, metrics
import sklearn.pipeline as pl
import sklearn.preprocessing as sp
import sklearn.linear_model as lm
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.tree import DecisionTreeRegressor


def mape(y_true, y_pred):
    """Mean absolute percentage error (%)."""
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100


def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error (%)."""
    return 2.0 * np.mean(np.abs(y_pred - y_true) / (np.abs(y_pred) + np.abs(y_true))) * 100


class Forecast(object):
    def __init__(self, x_train, y_train, x_test, y_test):
        self.x_train = x_train
        self.y_train = y_train
        self.x_test = x_test
        self.y_test = y_test
        self.xgb_pred, self.LR_pred, self.DT_pred, self.polynomial_pred, \
            self.RidgeCv_pred, self.byes_pred, self.markov_pred = 0, 0, 0, 0, 0, 0, 0

    # XGBoost
    def XGBoost(self):
        # continuous targets, so the regressor is used here instead of the original XGBClassifier
        bst = xgboost.XGBRegressor()
        bst.fit(self.x_train, self.y_train)
        self.xgb_pred = bst.predict(self.x_test)
        self.save_pic("XGBoost", self.xgb_pred)

    # linear regression
    def LR(self):
        model = LinearRegression()
        model.fit(self.x_train, self.y_train)
        self.LR_pred = model.predict(self.x_test)
        self.save_pic("LR", self.LR_pred)

    # decision tree regression
    def DT(self):
        model = DecisionTreeRegressor(max_depth=5)
        model.fit(self.x_train, self.y_train)
        self.DT_pred = model.predict(self.x_test)
        self.save_pic("DT", self.DT_pred)

    # polynomial regression
    def polynomial(self):
        model = pl.make_pipeline(
            sp.PolynomialFeatures(10),  # polynomial feature expansion
            lm.LinearRegression())      # linear regressor on the expanded features
        model.fit(self.x_train, self.y_train)
        self.polynomial_pred = model.predict(self.x_test)
        self.save_pic("polynomial", self.polynomial_pred)

    # ridge regression
    def RidgeCv(self):
        model = linear_model.RidgeCV()
        model.fit(self.x_train, self.y_train)
        self.RidgeCv_pred = model.predict(self.x_test)
        self.save_pic("RidgeCv", self.RidgeCv_pred)

    # Bayesian regression
    def byes(self):
        mnb = BayesianRidge()  # default configuration
        mnb.fit(self.x_train, self.y_train)
        self.byes_pred = mnb.predict(self.x_test)
        self.save_pic("byes", self.byes_pred)

    # Markov model
    def Markov(self):
        # note: GaussianHMM.predict returns the most likely hidden-state sequence,
        # not the regression target, so its error metrics are not directly comparable
        model = GaussianHMM(n_components=3, covariance_type='diag', n_iter=1000)
        model.fit(self.x_train)
        self.markov_pred = model.predict(self.x_test)
        self.save_pic("Markov", self.markov_pred)

    def Evaluation_indicators(self, stri, y_pred):
        # one row per algorithm: [name, MSE, RMSE, MAE, MAPE, SMAPE]
        return [stri,
                round(metrics.mean_squared_error(self.y_test, y_pred), 3),
                round(np.sqrt(metrics.mean_squared_error(self.y_test, y_pred)), 3),
                round(metrics.mean_absolute_error(self.y_test, y_pred), 3),
                round(mape(self.y_test, y_pred), 3),
                round(smape(self.y_test, y_pred), 3)]

    def save_pic(self, stri, y_pred):
        plt.clf()  # start from a clean figure so successive plots do not overlap
        plt.title(stri)
        plt.plot(np.arange(len(y_pred)), self.y_test, 'go-', label='test value')
        plt.plot(np.arange(len(y_pred)), y_pred, 'ro-', label='predict value')
        plt.legend()
        plt.savefig("image/" + stri + ".png", dpi=55)
```
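`Forecast` can likewise be driven without the GUI. A minimal sketch, using `load_diabetes` purely as a convenient stand-in regression dataset (not the one `Button_forecast` uses below):

```python
# Stand-alone driver for Forecast; load_diabetes is only an illustrative dataset,
# and an "image/" directory is assumed for the saved plots.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from forecast import Forecast

x, y = load_diabetes(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.30, random_state=42)

fc = Forecast(x_train, y_train, x_test, y_test)
fc.LR()
fc.RidgeCv()
# each row: [name, MSE, RMSE, MAE, MAPE, SMAPE]
print(fc.Evaluation_indicators("LR", fc.LR_pred))
print(fc.Evaluation_indicators("RidgeCv", fc.RidgeCv_pred))
```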
Button_classfiy.py
```python
import pandas as pd
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.model_selection import train_test_split

from classfiy import *


class ButtonClassfiy(object):
    def __init__(self, ifSelect, dataOption, e_value):
        self.ifSelect = ifSelect      # which algorithms the user ticked
        self.dataOption = dataOption  # which built-in dataset, or "diy"
        self.e_value = e_value        # path typed by the user for a custom CSV
        self.data = 0
        self.target = 0
        self.x_train = 0
        self.y_train = 0
        self.x_test = 0
        self.y_test = 0
        self.result = []

    def get_data(self):
        if self.dataOption == "iris":
            self.data = load_iris()["data"]
            self.target = load_iris()["target"]
        elif self.dataOption == "wine_data":
            self.data = load_wine()["data"]
            self.target = load_wine()["target"]
        elif self.dataOption == "breast_cancer":
            self.data = load_breast_cancer()["data"]
            self.target = load_breast_cancer()["target"]
        else:
            # custom CSV: assume the last column holds the label
            # (the original code read the file but left self.target unset here)
            df = pd.read_csv(self.e_value)
            self.data = df.iloc[:, :-1].values
            self.target = df.iloc[:, -1].values
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            self.data, self.target, test_size=0.30, random_state=42)

    def run(self):
        clf = Classfiy(self.x_train, self.y_train, self.x_test, self.y_test)
        if self.ifSelect[0]:
            clf.KNN()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("KNN"), clf.KNN_pred))
        if self.ifSelect[1]:
            clf.beyes()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("贝叶斯分类器"), clf.beyes_pred))
        if self.ifSelect[2]:
            clf.DT()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("决策树"), clf.DT_pred))
        if self.ifSelect[3]:
            clf.AdaBoost()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("AdaBoost"), clf.AdaBoost_pred))
        if self.ifSelect[4]:
            clf.GBDT()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("GBDT"), clf.GBDT_pred))
        if self.ifSelect[5]:
            clf.RF()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("随机森林"), clf.RF_pred))
        if self.ifSelect[6]:
            clf.LR()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("逻辑回归"), clf.LR_pred))
        return self.result
```
Button_cluster.py
```python
import pandas as pd  # needed for the custom-CSV option
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.model_selection import train_test_split

from Clusterer import *


class ButtonCluster(object):
    def __init__(self, ifSelect, dataOption, e_value):
        self.ifSelect = ifSelect
        self.dataOption = dataOption
        self.e_value = e_value
        self.data = 0
        self.target = 0
        self.x_train = 0
        self.y_train = 0
        self.x_test = 0
        self.y_test = 0
        self.result = []

    def get_data(self):
        if self.dataOption == "iris":
            self.data = load_iris()["data"]
            self.target = load_iris()["target"]
        elif self.dataOption == "wine_data":
            self.data = load_wine()["data"]
            self.target = load_wine()["target"]
        elif self.dataOption == "breast_cancer":
            self.data = load_breast_cancer()["data"]
            self.target = load_breast_cancer()["target"]
        else:
            # custom CSV: assume the last column holds the label
            # (the original code read the file but left self.target unset here)
            df = pd.read_csv(self.e_value)
            self.data = df.iloc[:, :-1].values
            self.target = df.iloc[:, -1].values
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            self.data, self.target, test_size=0.30, random_state=42)

    def run(self):
        clf = Cluser(3, self.x_train, self.y_train)
        if self.ifSelect[0]:
            clf.K_means()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("K-means"), clf.kmeams_pred))
        if self.ifSelect[1]:
            clf.BIRCH()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("BIRCH"), clf.birch_pred))
        if self.ifSelect[2]:
            clf.DBSCAN()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("DBSCAN"), clf.dbscan_pred))
        if self.ifSelect[3]:
            clf.GMM()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("GMM"), clf.gmm_pred))
        if self.ifSelect[4]:
            clf.OPTICS()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("OPTICS"), clf.optics_pred))
        if self.ifSelect[5]:
            clf.Mean_Shift()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("Mean_Shift"), clf.MS_pred))
        return self.result
```
Button_forecast.py
```python
import pandas as pd  # needed for the custom-CSV option
from sklearn.datasets import load_iris, load_boston
from sklearn.model_selection import train_test_split

from forecast import *


class ButtonForecast(object):
    def __init__(self, ifSelect, dataOption, e_value):
        self.ifSelect = ifSelect
        self.dataOption = dataOption
        self.e_value = e_value
        self.data = 0
        self.target = 0
        self.x_train = 0
        self.y_train = 0
        self.x_test = 0
        self.y_test = 0
        self.result = []

    def get_data(self):
        if self.dataOption == "iris":
            self.data = load_iris()["data"]
            self.target = load_iris()["target"]
        elif self.dataOption == "wine_data":
            self.data = load_boston()["data"]
            self.target = load_boston()["target"]
        elif self.dataOption == "breast_cancer":
            self.data = load_boston()["data"]
            self.target = load_boston()["target"]
        else:
            # custom CSV: assume the last column holds the target
            # (the original code read the file but left self.target unset here)
            df = pd.read_csv(self.e_value)
            self.data = df.iloc[:, :-1].values
            self.target = df.iloc[:, -1].values
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            self.data, self.target, test_size=0.30, random_state=42)

    def run(self):
        clf = Forecast(self.x_train, self.y_train, self.x_test, self.y_test)
        if self.ifSelect[0]:
            clf.byes()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("贝叶斯网络"), clf.byes_pred))
        if self.ifSelect[1]:
            clf.Markov()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("马尔科夫模型"), clf.markov_pred))
        if self.ifSelect[2]:
            clf.LR()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("线性回归"), clf.LR_pred))
        if self.ifSelect[3]:
            clf.XGBoost()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("XGBoost"), clf.xgb_pred))
        if self.ifSelect[4]:
            clf.RidgeCv()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("岭回归"), clf.RidgeCv_pred))
        if self.ifSelect[5]:
            clf.polynomial()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("多项式回归"), clf.polynomial_pred))
        if self.ifSelect[6]:
            clf.DT()
            self.result.append(clf.Evaluation_indicators("{:<6}".format("决策树回归"), clf.DT_pred))
        return self.result
```
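One caveat on the data loading above: `load_boston` was deprecated in scikit-learn 1.0 and removed in 1.2, so on a current installation the "medium"/"large" options would need a substitute regression dataset. A possible drop-in (an assumption about the environment, not part of the original project):

```python
# Replacement for the removed load_boston on scikit-learn >= 1.2.
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()
data, target = housing["data"], housing["target"]
```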
UI_classfiy.py
```python
import tkinter as tk
import tkinter.messagebox
from tkinter import *

from Button_classfiy import ButtonClassfiy

window = tk.Tk()
window.title("machine learning")
window.geometry("800x800+50+50")  # window size and position
data_option = StringVar()
data_option.set("iris")
ifSelect = {}
diy_label = Label()

# window layout; note that .place() returns None, so widgets created with
# "select"/"data" as master actually end up parented to the root window
select = tk.Frame(window, height=450, width=250, bg='green').place(x=10, y=50)
data = tk.Frame(window, height=250, width=250, bg='red').place(x=10, y=530)
text = Text(window, width=62, height=10)
text.place(x=320, y=50)
# prompts
ChooseLabel = tk.Label(window, text="Please select a classification algorithm", font=('微软雅黑', 12)).place(x=10, y=10)
resultLabel = tk.Label(window, text="Results of the selected algorithm classification", font=('微软雅黑', 12)).place(x=330, y=10)
e = tk.Entry(data)

# canvas that shows the saved result images
canvas = tk.Canvas(window, height=260, width=440)
image_flie = tk.PhotoImage(file="image/1.png")
image = canvas.create_image(0, 0, anchor="nw", image=image_flie)
canvas.place(x=320, y=230)
pic_names = ["1.png"]
pic_name = pic_names[0]
pic_index = 0


# switch to the next result image
def swicth_pic(pic_name1):
    global image, image_flie, pic_index, pic_names, pic_name
    image_flie = tk.PhotoImage(file='image/' + pic_name1)
    image = canvas.create_image(0, 0, anchor="nw", image=image_flie)
    pic_index += 1
    if pic_index >= len(pic_names):
        pic_index = 0
    pic_name = pic_names[pic_index]


Button(window, text="next pic", font=('微软雅黑', 8), command=lambda: swicth_pic(pic_name), bg='gray').place(x=650, y=200)


# custom file path
def diy_data():
    global diy_label
    # let the user type the path of their own data file
    diy_label = Label(data, text="请输入数据路径:", font=('微软雅黑', 10), bg="red")
    diy_label.place(x=10, y=700)
    e.place(x=110, y=703)


def delete_diy():
    global diy_label
    diy_label.place_forget()
    e.place_forget()


def reflush():
    global ifSelect, data_option
    # checkboxes for the classification algorithms
    algorithm = {0: 'k-近邻算法', 1: '贝叶斯分类器', 2: '决策树分类', 3: 'AdaBoost', 4: 'GBDT', 5: '随机森林', 6: '逻辑回归'}
    for i in range(len(algorithm)):
        ifSelect[i] = BooleanVar()
        Checkbutton(select, text=algorithm[i], font=('微软雅黑', 12), variable=ifSelect[i], bg='green') \
            .place(x=30, y=80 + i * 55, anchor="nw")
    # data layer: radio buttons for the built-in small / medium / large datasets and a custom path
    tk.Radiobutton(window, text="小数据", variable=data_option, value="iris", bg='red', command=delete_diy) \
        .place(x=30, y=550)
    tk.Radiobutton(window, text="中数据", variable=data_option, value="wine_data", bg='red', command=delete_diy) \
        .place(x=30, y=580)
    tk.Radiobutton(window, text="大数据", variable=data_option, value="breast_cancer", bg="red", command=delete_diy) \
        .place(x=30, y=610)
    tk.Radiobutton(window, text="diy数据", variable=data_option, value="diy", command=diy_data, bg='red') \
        .place(x=30, y=640)


reflush()


# at this point everything the computation needs is available; the work happens in the run button callback
def btn_f():
    global pic_names, pic_name
    for i in range(7):
        ifSelect[i] = ifSelect[i].get()
    f = 1
    for i in range(7):
        if ifSelect[i]:
            f = 0
    if f:
        tkinter.messagebox.showerror("错误", "你没有选择任何算法")
        pic_names = ["1.png"]
        return  # abort this run instead of killing the window
    e_value = e.get()
    pic_names = []
    for i in range(7):
        if ifSelect[i]:
            pic_names.append(["KNN.png", "beyes.png", "DT.png", "AdaBoost.png", "GBDT.png", "RF.png", "LR.png"][i])
    pic_name = pic_names[0]
    # run the selected algorithms and show the metric table
    btnf = ButtonClassfiy(ifSelect, data_option.get(), e_value)
    btnf.get_data()
    result = btnf.run()
    show_ = "算法名称 准确率 精确率 召回率 f1-score\n"
    for i in result:
        show_ = show_ + str(i) + '\n'
    text.delete('1.0', 'end')
    text.insert(INSERT, show_)
    reflush()


# run button
Button(window, text="run it", font=('微软雅黑', 80), command=btn_f, bg='gray').place(x=350, y=530)
window.mainloop()  # enter the Tk event loop
```
UI_Cluster.py
```python
import tkinter as tk
import tkinter.messagebox
from tkinter import *

from Button_cluster import ButtonCluster

window = tk.Tk()
window.title("machine learning")
window.geometry("800x800+50+50")  # window size and position
data_option = StringVar()
data_option.set("iris")
ifSelect = {}
diy_label = Label()

# window layout (same structure as UI_classfiy.py)
select = tk.Frame(window, height=450, width=250, bg='green').place(x=10, y=50)
data = tk.Frame(window, height=250, width=250, bg='red').place(x=10, y=530)
text = Text(window, width=62, height=10)
text.place(x=320, y=50)
# prompts
ChooseLabel = tk.Label(window, text="Please select a clustering algorithm", font=('微软雅黑', 12)).place(x=10, y=10)
resultLabel = tk.Label(window, text="The clustering results are as follows", font=('微软雅黑', 12)).place(x=330, y=10)
e = tk.Entry(data)

# canvas that shows the saved result images
canvas = tk.Canvas(window, height=260, width=440)
image_flie = tk.PhotoImage(file="image/1.png")
image = canvas.create_image(0, 0, anchor="nw", image=image_flie)
canvas.place(x=320, y=230)
pic_names = ["1.png"]
pic_name = pic_names[0]
pic_index = 0


# switch to the next result image
def swicth_pic(pic_name1):
    global image, image_flie, pic_index, pic_names, pic_name
    image_flie = tk.PhotoImage(file='image/' + pic_name1)
    image = canvas.create_image(0, 0, anchor="nw", image=image_flie)
    pic_index += 1
    if pic_index >= len(pic_names):
        pic_index = 0
    pic_name = pic_names[pic_index]


Button(window, text="next pic", font=('微软雅黑', 8), command=lambda: swicth_pic(pic_name), bg='gray').place(x=650, y=200)


def diy_data():
    global diy_label
    # let the user type the path of their own data file
    diy_label = Label(data, text="请输入数据路径:", font=('微软雅黑', 10), bg="red")
    diy_label.place(x=10, y=700)
    e.place(x=110, y=703)


def delete_diy():
    global diy_label
    diy_label.place_forget()
    e.place_forget()


def reflush():
    global ifSelect, data_option
    # checkboxes for the clustering algorithms
    algorithm = {0: 'K-means', 1: 'BIRCH', 2: 'DBSCAN', 3: 'GMM', 4: 'OPTICS', 5: 'Mean Shift'}
    for i in range(len(algorithm)):
        ifSelect[i] = BooleanVar()
        Checkbutton(select, text=algorithm[i], font=('微软雅黑', 12), variable=ifSelect[i], bg='green') \
            .place(x=30, y=80 + i * 55, anchor="nw")
    # data layer: radio buttons for the built-in small / medium / large datasets and a custom path
    tk.Radiobutton(window, text="小数据", variable=data_option, value="iris", bg='red', command=delete_diy) \
        .place(x=30, y=550)
    tk.Radiobutton(window, text="中数据", variable=data_option, value="wine_data", bg='red', command=delete_diy) \
        .place(x=30, y=580)
    tk.Radiobutton(window, text="大数据", variable=data_option, value="breast_cancer", bg="red", command=delete_diy) \
        .place(x=30, y=610)
    tk.Radiobutton(window, text="diy数据", variable=data_option, value="diy", command=diy_data, bg='red') \
        .place(x=30, y=640)


reflush()


# the work happens in the run button callback
def btn_f():
    global pic_names, pic_name
    for i in range(6):
        ifSelect[i] = ifSelect[i].get()
    f = 1
    for i in range(6):
        if ifSelect[i]:
            f = 0
    if f:
        tkinter.messagebox.showerror("错误", "你没有选择任何算法")
        pic_names = ["1.png"]
        return  # abort this run instead of killing the window
    e_value = e.get()
    pic_names = []
    for i in range(6):
        if ifSelect[i]:
            pic_names.append(["K_means.png", "BIRCH.png", "DBSCAN.png", "GMM.png", "OPTICS.png", "Mean_Shift.png"][i])
    pic_name = pic_names[0]
    # run the selected algorithms and show the metric table
    btnf = ButtonCluster(ifSelect, data_option.get(), e_value)
    btnf.get_data()
    result = btnf.run()
    show_ = "算法名称 纯度 调整兰德系数 f1-score 互信息 同质性 完整性 调和平均\n"
    for i in result:
        show_ = show_ + str(i) + '\n'
    text.delete('1.0', 'end')
    text.insert(INSERT, show_)
    reflush()


# run button
Button(window, text="run it", font=('微软雅黑', 80), command=btn_f, bg='gray').place(x=350, y=530)
window.mainloop()  # enter the Tk event loop
```
UI_forecast.py
```python
import tkinter as tk
import tkinter.messagebox
from tkinter import *

from Button_forecast import ButtonForecast

window = tk.Tk()
window.title("machine learning")
window.geometry("800x800+50+50")  # window size and position
data_option = StringVar()
data_option.set("iris")
ifSelect = {}
diy_label = Label()

# window layout (same structure as UI_classfiy.py)
select = tk.Frame(window, height=450, width=250, bg='green').place(x=10, y=50)
data = tk.Frame(window, height=250, width=250, bg='red').place(x=10, y=530)
text = Text(window, width=62, height=10)
text.place(x=320, y=50)
# prompts
ChooseLabel = tk.Label(window, text="Please select a prediction algorithm", font=('微软雅黑', 12)).place(x=10, y=10)
resultLabel = tk.Label(window, text="The predicted results are as follows", font=('微软雅黑', 12)).place(x=330, y=10)
e = tk.Entry(data)

# canvas that shows the saved result images
canvas = tk.Canvas(window, height=260, width=440)
image_flie = tk.PhotoImage(file="image/1.png")
image = canvas.create_image(0, 0, anchor="nw", image=image_flie)
canvas.place(x=320, y=230)
pic_names = ["1.png"]
pic_name = pic_names[0]
pic_index = 0


# switch to the next result image
def swicth_pic(pic_name1):
    global image, image_flie, pic_index, pic_names, pic_name
    image_flie = tk.PhotoImage(file='image/' + pic_name1)
    image = canvas.create_image(0, 0, anchor="nw", image=image_flie)
    pic_index += 1
    if pic_index >= len(pic_names):
        pic_index = 0
    pic_name = pic_names[pic_index]


Button(window, text="next pic", font=('微软雅黑', 8), command=lambda: swicth_pic(pic_name), bg='gray').place(x=650, y=200)


def diy_data():
    global diy_label
    # let the user type the path of their own data file
    diy_label = Label(data, text="请输入数据路径:", font=('微软雅黑', 10), bg="red")
    diy_label.place(x=10, y=700)
    e.place(x=110, y=703)


def delete_diy():
    global diy_label
    diy_label.place_forget()
    e.place_forget()


def reflush():
    global ifSelect, data_option
    # checkboxes for the prediction algorithms
    algorithm = {0: '贝叶斯网络', 1: '马尔科夫模型', 2: '线性回归', 3: 'XGBoost', 4: '岭回归', 5: '多项式回归', 6: '决策树回归'}
    for i in range(len(algorithm)):
        ifSelect[i] = BooleanVar()
        Checkbutton(select, text=algorithm[i], font=('微软雅黑', 12), variable=ifSelect[i], bg='green') \
            .place(x=30, y=80 + i * 55, anchor="nw")
    # data layer: radio buttons for the built-in small / medium / large datasets and a custom path
    tk.Radiobutton(window, text="小数据", variable=data_option, value="iris", bg='red', command=delete_diy) \
        .place(x=30, y=550)
    tk.Radiobutton(window, text="中数据", variable=data_option, value="wine_data", bg='red', command=delete_diy) \
        .place(x=30, y=580)
    tk.Radiobutton(window, text="大数据", variable=data_option, value="breast_cancer", bg="red", command=delete_diy) \
        .place(x=30, y=610)
    tk.Radiobutton(window, text="diy数据", variable=data_option, value="diy", command=diy_data, bg='red') \
        .place(x=30, y=640)


reflush()


# the work happens in the run button callback
def btn_f():
    global pic_names, pic_name
    for i in range(7):
        ifSelect[i] = ifSelect[i].get()
    f = 1
    for i in range(7):
        if ifSelect[i]:
            f = 0
    if f:
        tkinter.messagebox.showerror("错误", "你没有选择任何算法")
        pic_names = ["1.png"]
        return  # abort this run instead of killing the window
    e_value = e.get()
    pic_names = []
    for i in range(7):
        if ifSelect[i]:
            pic_names.append(["byes.png", "Markov.png", "LR.png", "XGBoost.png", "RidgeCv.png", "polynomial.png",
                              "DT.png"][i])
    pic_name = pic_names[0]
    # run the selected algorithms and show the metric table
    btnf = ButtonForecast(ifSelect, data_option.get(), e_value)
    btnf.get_data()
    result = btnf.run()
    show_ = "算法名称 MSE RMSE MAE MAPE SMAPE\n"
    for i in result:
        show_ = show_ + str(i) + '\n'
    text.delete('1.0', 'end')
    text.insert(INSERT, show_)
    reflush()


# run button
Button(window, text="run it", font=('微软雅黑', 80), command=btn_f, bg='gray').place(x=350, y=530)
window.mainloop()  # enter the Tk event loop
```