
ML | LiR & DNN & EL: Regression prediction of house prices on the Boston housing dataset with skflow's LiR and DNN and sklearn's RF


Contents

Output

Design approach

Core code



輸出結(jié)果

(The output appeared as screenshots in the original post and is not recoverable from this copy.)

設(shè)計(jì)思路

The design, reconstructed from the core code below:

1. Load the Boston housing data, split it into training and test sets, and standardize the features and target (a hedged sketch of this preparation step follows this list).
2. Train three regressors on the same training split: skflow's TensorFlowLinearRegressor (LiR), skflow's TensorFlowDNNRegressor with hidden layers of 100 and 40 units (DNN), and sklearn's RandomForestRegressor (an ensemble-learning model, the "EL" of the title).
3. Predict on the test set with each model and compare the three sets of predictions.
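The data-preparation step does not appear in the post itself; the sketch below shows one plausible version of it, assuming scikit-learn's load_boston loader, train_test_split, and StandardScaler. The variable names (X_train, X_test, y_train, y_test, ss_X, ss_y) are this sketch's own, chosen to match the core code that follows.

# Hedged data-preparation sketch -- assumed, not from the original post.
import numpy as np
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

boston = load_boston()              # 506 samples, 13 features, median house price target
X, y = boston.data, boston.target

# Hold out 25% of the samples for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=33)

# Standardize features and target to zero mean and unit variance;
# keep the target scaler so predictions can be mapped back to prices.
ss_X = StandardScaler()
ss_y = StandardScaler()
X_train = ss_X.fit_transform(X_train)
X_test = ss_X.transform(X_test)
y_train = ss_y.fit_transform(y_train.reshape(-1, 1)).ravel()
y_test = ss_y.transform(y_test.reshape(-1, 1)).ravel()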

Core code

# LiR: linear regression via skflow's TensorFlowLinearRegressor
tf_lr = skflow.TensorFlowLinearRegressor(steps=10000, learning_rate=0.01, batch_size=50)
tf_lr.fit(X_train, y_train)
tf_lr_y_predict = tf_lr.predict(X_test)

# DNN: a two-hidden-layer (100, 40) network via skflow's TensorFlowDNNRegressor
tf_dnn_regressor = skflow.TensorFlowDNNRegressor(hidden_units=[100, 40],
                                                 steps=10000, learning_rate=0.01, batch_size=50)
tf_dnn_regressor.fit(X_train, y_train)
tf_dnn_regressor_y_predict = tf_dnn_regressor.predict(X_test)

# EL (ensemble learning): sklearn's RandomForestRegressor with default parameters
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
rfr_y_predict = rfr.predict(X_test)

For reference, the source of the three estimators as quoted in the original post (old skflow, and scikit-learn of the 0.19/0.20 era):

class TensorFlowLinearRegressor(TensorFlowEstimator, RegressorMixin):
    """TensorFlow Linear Regression model."""

    def __init__(self, n_classes=0, tf_master="", batch_size=32, steps=200,
                 optimizer="SGD", learning_rate=0.1, tf_random_seed=42,
                 continue_training=False, num_cores=4, verbose=1,
                 early_stopping_rounds=None, max_to_keep=5,
                 keep_checkpoint_every_n_hours=10000):
        super(TensorFlowLinearRegressor, self).__init__(
            model_fn=models.linear_regression, n_classes=n_classes,
            tf_master=tf_master, batch_size=batch_size, steps=steps,
            optimizer=optimizer, learning_rate=learning_rate,
            tf_random_seed=tf_random_seed,
            continue_training=continue_training, num_cores=num_cores,
            verbose=verbose, early_stopping_rounds=early_stopping_rounds,
            max_to_keep=max_to_keep,
            keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)

    @property
    def weights_(self):
        """Returns weights of the linear regression."""
        return self.get_tensor_value('linear_regression/weights:0')

    @property
    def bias_(self):
        """Returns bias of the linear regression."""
        return self.get_tensor_value('linear_regression/bias:0')


class TensorFlowDNNRegressor(TensorFlowEstimator, RegressorMixin):
    """TensorFlow DNN Regressor model.

    Parameters:
        hidden_units: List of hidden units per layer.
        tf_master: TensorFlow master. Empty string is default for local.
        batch_size: Mini batch size.
        steps: Number of steps to run over data.
        optimizer: Optimizer name (or class), for example "SGD", "Adam",
            "Adagrad".
        learning_rate: If this is a constant float value, no decay function
            is used. Instead, a customized decay function can be passed that
            accepts global_step as parameter and returns a Tensor.
            e.g. exponential decay function:
            def exp_decay(global_step):
                return tf.train.exponential_decay(
                    learning_rate=0.1, global_step,
                    decay_steps=2, decay_rate=0.001)
        tf_random_seed: Random seed for TensorFlow initializers.
            Setting this value allows consistency between reruns.
        continue_training: when continue_training is True, once initialized
            the model will be continually trained on every call of fit.
        num_cores: Number of cores to be used. (default: 4)
        verbose: Controls the verbosity, possible values:
            0: the algorithm and debug information is muted.
            1: trainer prints the progress.
            2: log device placement is printed.
        early_stopping_rounds: Activates early stopping if this is not None.
            Loss needs to decrease at least every <early_stopping_rounds>
            round(s) to continue training. (default: None)
        max_to_keep: The maximum number of recent checkpoint files to keep.
            As new files are created, older files are deleted.
            If None or 0, all checkpoint files are kept.
            Defaults to 5 (that is, the 5 most recent checkpoint files
            are kept).
        keep_checkpoint_every_n_hours: Number of hours between each
            checkpoint to be saved. The default value of 10,000 hours
            effectively disables the feature.
    """

    def __init__(self, hidden_units, n_classes=0, tf_master="",
                 batch_size=32, steps=200, optimizer="SGD", learning_rate=0.1,
                 tf_random_seed=42, continue_training=False, num_cores=4,
                 verbose=1, early_stopping_rounds=None, max_to_keep=5,
                 keep_checkpoint_every_n_hours=10000):
        self.hidden_units = hidden_units
        super(TensorFlowDNNRegressor, self).__init__(
            model_fn=self._model_fn, n_classes=n_classes,
            tf_master=tf_master, batch_size=batch_size, steps=steps,
            optimizer=optimizer, learning_rate=learning_rate,
            tf_random_seed=tf_random_seed,
            continue_training=continue_training, num_cores=num_cores,
            verbose=verbose, early_stopping_rounds=early_stopping_rounds,
            max_to_keep=max_to_keep,
            keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)

    def _model_fn(self, X, y):
        return models.get_dnn_model(self.hidden_units,
                                    models.linear_regression)(X, y)

    @property
    def weights_(self):
        """Returns weights of the DNN weight layers."""
        weights = []
        for layer in range(len(self.hidden_units)):
            weights.append(
                self.get_tensor_value('dnn/layer%d/Linear/Matrix:0' % layer))
        weights.append(self.get_tensor_value('linear_regression/weights:0'))
        return weights

    @property
    def bias_(self):
        """Returns bias of the DNN's bias layers."""
        biases = []
        for layer in range(len(self.hidden_units)):
            biases.append(
                self.get_tensor_value('dnn/layer%d/Linear/Bias:0' % layer))
        biases.append(self.get_tensor_value('linear_regression/bias:0'))
        return biases


class RandomForestRegressor(ForestRegressor):
    """A random forest regressor.

    A random forest is a meta estimator that fits a number of classifying
    decision trees on various sub-samples of the dataset and use averaging
    to improve the predictive accuracy and control over-fitting.
    The sub-sample size is always the same as the original
    input sample size but the samples are drawn with replacement if
    `bootstrap=True` (default).

    Read more in the :ref:`User Guide <forest>`.

    Parameters
    ----------
    n_estimators : integer, optional (default=10)
        The number of trees in the forest.

    criterion : string, optional (default="mse")
        The function to measure the quality of a split. Supported criteria
        are "mse" for the mean squared error, which is equal to variance
        reduction as feature selection criterion, and "mae" for the mean
        absolute error.

        .. versionadded:: 0.18
           Mean Absolute Error (MAE) criterion.

    max_features : int, float, string or None, optional (default="auto")
        The number of features to consider when looking for the best split:

        - If int, then consider `max_features` features at each split.
        - If float, then `max_features` is a percentage and
          `int(max_features * n_features)` features are considered at each
          split.
        - If "auto", then `max_features=n_features`.
        - If "sqrt", then `max_features=sqrt(n_features)`.
        - If "log2", then `max_features=log2(n_features)`.
        - If None, then `max_features=n_features`.

        Note: the search for a split does not stop until at least one
        valid partition of the node samples is found, even if it requires to
        effectively inspect more than ``max_features`` features.

    max_depth : integer or None, optional (default=None)
        The maximum depth of the tree. If None, then nodes are expanded until
        all leaves are pure or until all leaves contain less than
        min_samples_split samples.

    min_samples_split : int, float, optional (default=2)
        The minimum number of samples required to split an internal node:

        - If int, then consider `min_samples_split` as the minimum number.
        - If float, then `min_samples_split` is a percentage and
          `ceil(min_samples_split * n_samples)` are the minimum
          number of samples for each split.

        .. versionchanged:: 0.18
           Added float values for percentages.

    min_samples_leaf : int, float, optional (default=1)
        The minimum number of samples required to be at a leaf node:

        - If int, then consider `min_samples_leaf` as the minimum number.
        - If float, then `min_samples_leaf` is a percentage and
          `ceil(min_samples_leaf * n_samples)` are the minimum
          number of samples for each node.

        .. versionchanged:: 0.18
           Added float values for percentages.

    min_weight_fraction_leaf : float, optional (default=0.)
        The minimum weighted fraction of the sum total of weights (of all
        the input samples) required to be at a leaf node. Samples have
        equal weight when sample_weight is not provided.

    max_leaf_nodes : int or None, optional (default=None)
        Grow trees with ``max_leaf_nodes`` in best-first fashion.
        Best nodes are defined as relative reduction in impurity.
        If None then unlimited number of leaf nodes.

    min_impurity_split : float,
        Threshold for early stopping in tree growth. A node will split
        if its impurity is above the threshold, otherwise it is a leaf.

        .. deprecated:: 0.19
           ``min_impurity_split`` has been deprecated in favor of
           ``min_impurity_decrease`` in 0.19 and will be removed in 0.21.
           Use ``min_impurity_decrease`` instead.

    min_impurity_decrease : float, optional (default=0.)
        A node will be split if this split induces a decrease of the impurity
        greater than or equal to this value.

        The weighted impurity decrease equation is the following::

            N_t / N * (impurity - N_t_R / N_t * right_impurity
                                - N_t_L / N_t * left_impurity)

        where ``N`` is the total number of samples, ``N_t`` is the number of
        samples at the current node, ``N_t_L`` is the number of samples in the
        left child, and ``N_t_R`` is the number of samples in the right child.

        ``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
        if ``sample_weight`` is passed.

        .. versionadded:: 0.19

    bootstrap : boolean, optional (default=True)
        Whether bootstrap samples are used when building trees.

    oob_score : bool, optional (default=False)
        Whether to use out-of-bag samples to estimate
        the R^2 on unseen data.

    n_jobs : integer, optional (default=1)
        The number of jobs to run in parallel for both `fit` and `predict`.
        If -1, then the number of jobs is set to the number of cores.

    random_state : int, RandomState instance or None, optional (default=None)
        If int, random_state is the seed used by the random number generator;
        If RandomState instance, random_state is the random number generator;
        If None, the random number generator is the RandomState instance used
        by `np.random`.

    verbose : int, optional (default=0)
        Controls the verbosity of the tree building process.

    warm_start : bool, optional (default=False)
        When set to ``True``, reuse the solution of the previous call to fit
        and add more estimators to the ensemble, otherwise, just fit a whole
        new forest.

    Attributes
    ----------
    estimators_ : list of DecisionTreeRegressor
        The collection of fitted sub-estimators.

    feature_importances_ : array of shape = [n_features]
        The feature importances (the higher, the more important the feature).

    n_features_ : int
        The number of features when ``fit`` is performed.

    n_outputs_ : int
        The number of outputs when ``fit`` is performed.

    oob_score_ : float
        Score of the training dataset obtained using an out-of-bag estimate.

    oob_prediction_ : array of shape = [n_samples]
        Prediction computed with out-of-bag estimate on the training set.

    Examples
    --------
    >>> from sklearn.ensemble import RandomForestRegressor
    >>> from sklearn.datasets import make_regression
    >>>
    >>> X, y = make_regression(n_features=4, n_informative=2,
    ...                        random_state=0, shuffle=False)
    >>> regr = RandomForestRegressor(max_depth=2, random_state=0)
    >>> regr.fit(X, y)
    RandomForestRegressor(bootstrap=True, criterion='mse', max_depth=2,
               max_features='auto', max_leaf_nodes=None,
               min_impurity_decrease=0.0, min_impurity_split=None,
               min_samples_leaf=1, min_samples_split=2,
               min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
               oob_score=False, random_state=0, verbose=0, warm_start=False)
    >>> print(regr.feature_importances_)
    [ 0.17339552  0.81594114  0.          0.01066333]
    >>> print(regr.predict([[0, 0, 0, 0]]))
    [-2.50699856]

    Notes
    -----
    The default values for the parameters controlling the size of the trees
    (e.g. ``max_depth``, ``min_samples_leaf``, etc.) lead to fully grown and
    unpruned trees which can potentially be very large on some data sets. To
    reduce memory consumption, the complexity and size of the trees should be
    controlled by setting those parameter values.

    The features are always randomly permuted at each split. Therefore,
    the best found split may vary, even with the same training data,
    ``max_features=n_features`` and ``bootstrap=False``, if the improvement
    of the criterion is identical for several splits enumerated during the
    search of the best split. To obtain a deterministic behaviour during
    fitting, ``random_state`` has to be fixed.

    References
    ----------
    .. [1] L. Breiman, "Random Forests", Machine Learning, 45(1), 5-32, 2001.

    See also
    --------
    DecisionTreeRegressor, ExtraTreesRegressor
    """

    def __init__(self, n_estimators=10, criterion="mse", max_depth=None,
                 min_samples_split=2, min_samples_leaf=1,
                 min_weight_fraction_leaf=0., max_features="auto",
                 max_leaf_nodes=None, min_impurity_decrease=0.,
                 min_impurity_split=None, bootstrap=True, oob_score=False,
                 n_jobs=1, random_state=None, verbose=0, warm_start=False):
        super(RandomForestRegressor, self).__init__(
            base_estimator=DecisionTreeRegressor(),
            n_estimators=n_estimators,
            estimator_params=("criterion", "max_depth", "min_samples_split",
                              "min_samples_leaf", "min_weight_fraction_leaf",
                              "max_features", "max_leaf_nodes",
                              "min_impurity_decrease", "min_impurity_split",
                              "random_state"),
            bootstrap=bootstrap,
            oob_score=oob_score,
            n_jobs=n_jobs,
            random_state=random_state,
            verbose=verbose,
            warm_start=warm_start)

        self.criterion = criterion
        self.max_depth = max_depth
        self.min_samples_split = min_samples_split
        self.min_samples_leaf = min_samples_leaf
        self.min_weight_fraction_leaf = min_weight_fraction_leaf
        self.max_features = max_features
        self.max_leaf_nodes = max_leaf_nodes
        self.min_impurity_decrease = min_impurity_decrease
        self.min_impurity_split = min_impurity_split
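The post stops at the predictions; below is a minimal scoring sketch, assuming the prediction variables from the core code above and the hypothetical ss_y scaler from the data-preparation sketch in the Design approach section.

# Hedged evaluation sketch -- assumed, not from the original post.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Undo the target standardization so errors are in the original price units.
y_true = ss_y.inverse_transform(np.asarray(y_test).reshape(-1, 1)).ravel()

for name, y_pred in [("skflow LiR", tf_lr_y_predict),
                     ("skflow DNN", tf_dnn_regressor_y_predict),
                     ("sklearn RF", rfr_y_predict)]:
    y_pred = ss_y.inverse_transform(np.asarray(y_pred).reshape(-1, 1)).ravel()
    print("%s  R2: %.4f  MSE: %.4f  MAE: %.4f" % (
        name,
        r2_score(y_true, y_pred),
        mean_squared_error(y_true, y_pred),
        mean_absolute_error(y_true, y_pred)))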

