Machine Learning Kaggle Competition in Practice: Titanic

Data Overview

First, log in to Kaggle and download the Titanic training data (train.csv).

import pandas as pd
import numpy as np

data = pd.read_csv('train.csv')
print(data.shape)
print(data.head())

# use the first 800 rows as the training split and the rest as a hold-out split
train = data[:800]
test = data[800:]
print(train.shape)
print(test.shape)
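
Before imputing anything, it helps to see which columns actually contain missing values. A minimal check (not part of the original walkthrough; it assumes train.csv from the Kaggle Titanic page is in the working directory):

# in the Kaggle training data, Age, Cabin and Embarked contain NaNs
print(data.isnull().sum())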

Feature Selection

selected_features = ['Pclass', 'Sex', 'Age', 'Embarked', 'SibSp', 'Parch', 'Fare']
X_train = train[selected_features].copy()
X_test = test[selected_features].copy()
y_train = train['Survived']

# inspect the distribution of boarding ports before filling missing values
print(X_train['Embarked'].value_counts())
print(X_test['Embarked'].value_counts())

Fill in the missing values: Embarked with the most frequent port ('S'), and Age and Fare with the mean computed on the training split.

X_train['Embarked'] = X_train['Embarked'].fillna('S')
X_test['Embarked'] = X_test['Embarked'].fillna('S')

# Age and Fare are filled with the training-set mean (the test split reuses the training statistics)
X_train['Age'] = X_train['Age'].fillna(X_train['Age'].mean())
X_test['Age'] = X_test['Age'].fillna(X_train['Age'].mean())
X_test['Fare'] = X_test['Fare'].fillna(X_train['Fare'].mean())
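
Hard-coding 'S' works because it is the most frequent port in the training data; a small variant (a sketch, not from the original article) derives the fill value from the training split instead of hard-coding it:

embarked_mode = X_train['Embarked'].mode()[0]   # evaluates to 'S' for this dataset
X_train['Embarked'] = X_train['Embarked'].fillna(embarked_mode)
X_test['Embarked'] = X_test['Embarked'].fillna(embarked_mode)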

Feature Vectorization

from sklearn.feature_extraction import DictVectorizer

dict_vec = DictVectorizer(sparse=False)
X_train = dict_vec.fit_transform(X_train.to_dict(orient='records'))
print(dict_vec.feature_names_)
X_test = dict_vec.transform(X_test.to_dict(orient='records'))
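
DictVectorizer one-hot encodes the string columns (Sex, Embarked) and passes the numeric ones through unchanged. Roughly the same result can be obtained with pandas alone, applied to the X_train/X_test DataFrames from the previous step; this sketch (not part of the original article) needs an explicit alignment because the two splits are encoded separately:

X_train_ohe = pd.get_dummies(X_train, columns=['Sex', 'Embarked'])
X_test_ohe = pd.get_dummies(X_test, columns=['Sex', 'Embarked'])
# make sure both splits end up with the same columns, filling any missing ones with 0
X_train_ohe, X_test_ohe = X_train_ohe.align(X_test_ohe, join='left', axis=1, fill_value=0)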

Import the random forest and XGBoost classifiers

from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rfc = RandomForestClassifier()
xgbc = XGBClassifier()

Cross-Validation

from sklearn.model_selection import cross_val_score, GridSearchCV

print(cross_val_score(rfc, X_train, y_train, cv=4, scoring='accuracy').mean())
print(cross_val_score(xgbc, X_train, y_train, cv=4, scoring='accuracy').mean())

# labels of the hold-out split, used below to report hold-out accuracy
y_test = test['Survived']
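
With an integer cv, cross_val_score stratifies the 4 folds without shuffling, so the splits themselves are deterministic; the random forest is still unseeded and its score can vary slightly between runs. A sketch (not in the original article) that pins everything down:

from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
rfc_seeded = RandomForestClassifier(random_state=42)
print(cross_val_score(rfc_seeded, X_train, y_train, cv=skf, scoring='accuracy').mean())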

Making predictions with RandomForestClassifier

rfc.fit(X_train, y_train)
rfc_y_predict = rfc.predict(X_test)
rfc_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': rfc_y_predict})
rfc_submission.to_csv('rfc_submission.csv', index=False)

# accuracy on the hold-out split (rows 800 onward), not true training accuracy
print('Hold-out Accuracy: %.1f%%' % (np.mean(rfc_y_predict == y_test) * 100))
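
After fitting, the random forest exposes per-feature importances; pairing them with the names recorded by DictVectorizer is a quick sanity check on the encoded features (a sketch, not in the original article):

importances = pd.Series(rfc.feature_importances_, index=dict_vec.feature_names_)
print(importances.sort_values(ascending=False))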

Making predictions with the default-configuration XGBClassifier

xgbc.fit(X_train, y_train)
# defaults echoed in the original run: objective='binary:logistic', max_depth=3,
# learning_rate=0.1, n_estimators=100 (newer xgboost versions use different defaults)
xgbc_y_predict = xgbc.predict(X_test)
xgbc_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': xgbc_y_predict})
xgbc_submission.to_csv('xgbc_submission.csv', index=False)
print('Hold-out Accuracy: %.1f%%' % (np.mean(xgbc_y_predict == y_test) * 100))

Using a parallel grid search to find a better hyperparameter combination

params = {'max_depth': range(2, 7),
          'n_estimators': range(100, 1100, 200),
          'learning_rate': [0.05, 0.1, 0.25, 0.5, 1.0]}
xgbc_best = XGBClassifier()
gs = GridSearchCV(xgbc_best, params, n_jobs=-1, cv=5, verbose=1)
gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)

# GridSearchCV refits the best combination on the full training split, so gs.predict uses the tuned model
xgbc_best_y_predict = gs.predict(X_test)
xgbc_best_submission = pd.DataFrame({'PassengerId': test['PassengerId'], 'Survived': xgbc_best_y_predict})
xgbc_best_submission.to_csv('xgbc_best_submission.csv', index=False)
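
Beyond the best score and parameter set, the full table of grid-search results shows how every combination fared; it can be inspected like this (a sketch, not part of the original walkthrough):

cv_results = pd.DataFrame(gs.cv_results_)
print(cv_results[['params', 'mean_test_score', 'rank_test_score']]
      .sort_values('rank_test_score')
      .head())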
