
OOF in Machine Learning


The answer in [1] is as follows:

OOF simply stands for "out-of-fold" and refers to a step in the learning process, when using k-fold validation, in which the predictions from each set of folds are grouped together into one set of predictions (1,000 in the asker's example). These predictions are now "out of the folds", so the error can be calculated on them to get a good measure of how good your model is.

In terms of learning more about it, there's really not much more to it than that, and it certainly isn't its own learning technique or anything. If you have a small follow-up question, please leave a comment and I will try to update my answer to include it.

EDIT: While ambling around the inter-webs I stumbled upon this [2] relatively similar question from Cross Validated (with a slightly more detailed answer); perhaps it will add some intuition if you are still confused.
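As a concrete illustration (not part of the quoted answer): scikit-learn's cross_val_predict returns exactly these out-of-fold predictions, with the fold bookkeeping handled for you. The Ridge model and synthetic data below are placeholder choices for the sketch.

    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    # Toy data: 1,000 samples, matching the example size in the answers.
    X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

    # Each sample is predicted by the one model (out of k) that did NOT see it
    # during training, so oof_preds holds 1,000 out-of-fold predictions.
    oof_preds = cross_val_predict(Ridge(), X, y, cv=10)
    print(oof_preds.shape)  # (1000,)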

The answer in [2] is as follows:

When training on each fold (90%) of the data, you will then predict on the remaining 10%. With this 10% you will compute an error metric (RMSE, for example). This leaves you with 10 values for RMSE and 10 sets of corresponding predictions. There are two things to do with these results:

  • Inspect the mean and standard deviation of your 10 RMSE values. k-fold takes random partitions of your data, and the error on each fold should not vary too greatly. If it does, your model (and its features, hyper-parameters, etc.) cannot be expected to yield stable predictions on a test set.

  • Aggregate your 10 sets of predictions into 1 set of predictions. For example, if your training set contains 1,000 data points, you will have 10 sets of 100 predictions (10*100 = 1,000). When you stack these into 1 vector, you are left with 1,000 predictions: 1 for every observation in your original training set. These are called out-of-fold predictions. With these, you can compute the RMSE for your whole training set in one go, as rmse = compute_rmse(oof_predictions, y_train). This is likely the cleanest way to evaluate the final predictor.


In one sentence: when doing 10-fold validation, suppose the training set has 1,000 rows.

With 10-fold CV you get 10 models, each trained on 900 of the rows and used to predict the remaining 100. The predictions the 10 models make on their respective held-out 100 rows are, taken together, the OOF predictions (a code sketch follows below).
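Here is a minimal sketch of the full procedure from [2], written out with an explicit KFold loop so both steps (fold-level error stability, plus one RMSE over the stacked OOF vector) are visible. The Ridge model, synthetic data, and RMSE-via-square-root are illustrative choices, not prescribed by the quoted answers.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import KFold

    X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    fold_rmses = []
    oof_preds = np.empty_like(y)

    for train_idx, val_idx in kf.split(X):
        model = Ridge().fit(X[train_idx], y[train_idx])  # train on 900 rows
        preds = model.predict(X[val_idx])                # predict the held-out 100
        oof_preds[val_idx] = preds                       # slot into the OOF vector
        fold_rmses.append(mean_squared_error(y[val_idx], preds) ** 0.5)

    # Step 1: check fold-to-fold stability of the error.
    print(f"per-fold RMSE: mean={np.mean(fold_rmses):.3f}, std={np.std(fold_rmses):.3f}")

    # Step 2: one RMSE over all 1,000 out-of-fold predictions.
    print(f"OOF RMSE: {mean_squared_error(y, oof_preds) ** 0.5:.3f}")

Because every row of X is held out exactly once across the 10 folds, oof_preds ends up fully populated, and the step-2 RMSE is computed on predictions the models never trained on.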

[1] https://stackoverflow.com/questions/52396191/what-is-oof-approach-in-machine-learning

[2] https://stats.stackexchange.com/questions/161491/how-to-evaluate-the-final-model-after-k-fold-cross-validation
