
Neural networks with TensorFlow: loss functions


Contents

    • Basic theory
      • Cross-entropy
    • Loss functions in TensorFlow
      • BinaryCrossentropy
      • CategoricalCrossentropy
      • CosineSimilarity
      • Hinge
      • Huber
      • KLDivergence
      • LogCosh
      • MeanAbsoluteError
      • MeanAbsolutePercentageError
      • MeanSquaredError
      • MeanSquaredLogarithmicError
      • Poisson
      • SquaredHinge

Basic theory

Cross-entropy

Reference: https://zhuanlan.zhihu.com/p/35709485
Cross-entropy is most commonly used for classification problems, though it can also be applied to regression problems.
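
As a concrete illustration of the idea, binary cross-entropy for a single example is -(y*log(p) + (1-y)*log(1-p)): it is small when a confident prediction is correct and large when it is wrong. A minimal NumPy sketch (the function name and numbers are mine, for illustration only):

import numpy as np

def binary_cross_entropy(y, p, eps=1e-7):
    # Clip the prediction away from 0 and 1 to avoid log(0).
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(binary_cross_entropy(1.0, 0.9))  # ~0.105: confident and correct -> small loss
print(binary_cross_entropy(1.0, 0.1))  # ~2.303: confident and wrong  -> large loss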

Loss functions in TensorFlow

The snippets below assume import tensorflow as tf has been run.

BinaryCrossentropy

Computes the cross-entropy loss between the true labels and the predicted labels.
Use this cross-entropy loss when there are only two label classes (assumed to be 0 and 1). For each example, there should be a single floating-point prediction value.

tf.keras.losses.BinaryCrossentropy(
    from_logits=False,
    label_smoothing=0,
    reduction=losses_utils.ReductionV2.AUTO,
    name='binary_crossentropy'
)

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
# Using 'auto'/'sum_over_batch_size' reduction type.
bce = tf.keras.losses.BinaryCrossentropy()
bce(y_true, y_pred).numpy()
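
If the model outputs raw scores (logits) rather than probabilities, pass from_logits=True instead of applying a sigmoid yourself. A minimal sketch; the logit values are made up for illustration:

y_true = [[0., 1.], [0., 0.]]
logits = [[0.4, -1.2], [2.0, 0.3]]  # raw, unnormalized model outputs (hypothetical)
bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
bce_logits(y_true, logits).numpy()  # the sigmoid is applied internally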

CategoricalCrossentropy

Computes the cross-entropy loss between the labels and the predictions.
Use this cross-entropy loss function when there are two or more label classes. The labels are expected to be provided in one-hot representation. If you want to provide labels as integers, use the SparseCategoricalCrossentropy loss instead; see the sketch after the example below.

tf.keras.losses.CategoricalCrossentropy(
    from_logits=False,
    label_smoothing=0,
    reduction=losses_utils.ReductionV2.AUTO,
    name='categorical_crossentropy'
)

y_true = [[0, 1, 0], [0, 0, 1]]
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
# Using 'auto'/'sum_over_batch_size' reduction type.
cce = tf.keras.losses.CategoricalCrossentropy()
cce(y_true, y_pred).numpy()
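
For integer labels, the sparse variant mentioned above takes class indices directly. A minimal sketch reusing the predictions from the example:

y_true = [1, 2]  # class indices instead of one-hot rows
y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
scce = tf.keras.losses.SparseCategoricalCrossentropy()
scce(y_true, y_pred).numpy()  # same value as the one-hot example above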

CosineSimilarity

Computes the cosine similarity between the L2-normalized labels and predictions:
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
The result is negated, so -1 indicates vectors pointing in the same direction and 0 indicates orthogonality; more negative values mean greater similarity.

y_true = [[0., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)
# l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
# l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
# l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
# loss = -mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))
#      = -((0. + 0.) + (0.5 + 0.5)) / 2
cosine_loss(y_true, y_pred).numpy()

Hinge

loss = maximum(1 - y_true * y_pred, 0)
y_true values are expected to be -1 or 1; if binary (0 or 1) labels are provided, they are converted to -1 or 1 internally. A hand computation follows the example below.

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
# Using 'auto'/'sum_over_batch_size' reduction type.
h = tf.keras.losses.Hinge()
h(y_true, y_pred).numpy()
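
To see the label conversion at work, here is the same example computed by hand; a NumPy sketch, not part of the TF API:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
y_signed = 2. * y_true - 1.  # 0/1 labels -> -1/1, as Hinge does internally
print(np.maximum(1. - y_signed * y_pred, 0.).mean())  # 1.3, matching h(y_true, y_pred)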

Huber

Combines mean squared error and mean absolute error. For error x = y_true - y_pred:
loss = 0.5 * x^2                     if |x| <= delta
loss = delta * (|x| - 0.5 * delta)   otherwise
It is quadratic for small errors and linear for large ones, which makes it less sensitive to outliers than plain MSE. A hand computation follows the example below.

tf.keras.losses.Huber(
    delta=1.0,
    reduction=losses_utils.ReductionV2.AUTO,
    name='huber_loss'
)

y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
# Using 'auto'/'sum_over_batch_size' reduction type.
h = tf.keras.losses.Huber()
h(y_true, y_pred).numpy()
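
The piecewise definition above, written out with NumPy; a sketch for the same numbers:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
delta = 1.0
x = y_true - y_pred
# Quadratic branch for small errors, linear branch for large ones.
loss = np.where(np.abs(x) <= delta, 0.5 * x**2, delta * (np.abs(x) - 0.5 * delta))
print(loss.mean())  # 0.155: every error here falls in the quadratic branch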

KLDivergence

Relative entropy (Kullback-Leibler divergence):
loss = y_true * log(y_true / y_pred)

y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
# Using 'auto'/'sum_over_batch_size' reduction type.
kl = tf.keras.losses.KLDivergence()
kl(y_true, y_pred).numpy()
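
The same value computed by hand; a NumPy sketch that clips inputs to avoid log(0), mirroring the epsilon clipping Keras applies internally:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
eps = 1e-7
yt = np.clip(y_true, eps, 1.)
yp = np.clip(y_pred, eps, 1.)
print(np.sum(yt * np.log(yt / yp), axis=-1).mean())  # ~0.458, matching kl(...) above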

LogCosh

# loss = log((exp(x) + exp(-x)) / 2), where x = y_pred - y_true
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [0., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
l = tf.keras.losses.LogCosh()
l(y_true, y_pred).numpy()

MeanAbsoluteError

# loss = mean(abs(y_true - y_pred))
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
mae = tf.keras.losses.MeanAbsoluteError()
mae(y_true, y_pred).numpy()

MeanAbsolutePercentageError

# loss = 100 * mean(abs((y_true - y_pred) / y_true))
y_true = [[2., 1.], [2., 3.]]
y_pred = [[1., 1.], [1., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
mape = tf.keras.losses.MeanAbsolutePercentageError()
mape(y_true, y_pred).numpy()

MeanSquaredError

# loss = mean(square(y_true - y_pred))
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
mse = tf.keras.losses.MeanSquaredError()
mse(y_true, y_pred).numpy()
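
As a sanity check, the same value computed directly with NumPy; a one-line sketch reusing y_true and y_pred from above:

import numpy as np
print(np.mean((np.array(y_true) - np.array(y_pred)) ** 2))  # 0.5, matching mse(...)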

MeanSquaredLogarithmicError

# loss = mean(square(log(y_true + 1.) - log(y_pred + 1.)))
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [1., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
msle = tf.keras.losses.MeanSquaredLogarithmicError()
msle(y_true, y_pred).numpy()

Poisson

# loss = mean(y_pred - y_true * log(y_pred))
y_true = [[0., 1.], [0., 0.]]
y_pred = [[1., 1.], [0., 0.]]
# Using 'auto'/'sum_over_batch_size' reduction type.
p = tf.keras.losses.Poisson()
p(y_true, y_pred).numpy()

SquaredHinge

# loss = mean(square(maximum(1 - y_true * y_pred, 0))); 0/1 labels are converted to -1/1
y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
# Using 'auto'/'sum_over_batch_size' reduction type.
h = tf.keras.losses.SquaredHinge()
h(y_true, y_pred).numpy()
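
Any of these losses can be passed to model.compile, either as an instance or by its string name. A minimal sketch; the tiny model and its input shape are made up purely for illustration:

import tensorflow as tf

# Hypothetical two-class model, just to show where the loss plugs in.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(),  # or simply loss='binary_crossentropy'
    metrics=['accuracy'],
)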
