

Loss Roundup: ASL Loss (Asymmetric Loss), a Detailed Code Walkthrough


1. The BCE Formula

For background, you may want to skim this blog post first:
https://blog.csdn.net/qq_14845119/article/details/114121003

This is the classic BCELoss formula, applied element-wise as in multi-label classification:
$$L = -y L_{+} - (1-y) L_{-}$$

where $L_{+/-}$ are the logs of the predicted probabilities for the positive and negative cases, i.e.:

$$
\begin{aligned}
L_{+} &= \log(\hat{y}) \\
L_{-} &= \log(1 - \hat{y}) \\
\hat{y} &= \mathrm{sigmoid}(logit)
\end{aligned}
$$

In practice, because the label $y$ is a 0/1 matrix, it effectively acts as a mask: it selects the positive-example entries from $L_{+}$ and the negative-example entries from $L_{-}$.

Suppose:

$$
y = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}
$$

$$
\hat{y} = \begin{bmatrix} 0.5 & 0.1 \\ 0.3 & 0.2 \end{bmatrix}
\qquad
L_{+} = \begin{bmatrix} -0.6931 & -2.3026 \\ -1.2040 & -1.6094 \end{bmatrix}
\qquad
L_{-} = \begin{bmatrix} -0.6931 & -0.1054 \\ -0.3567 & -0.2231 \end{bmatrix}
$$

So the bottom-left entry of $L$ is the negative of the corresponding entry of $L_{+}$, while the top-left, top-right, and bottom-right entries are the negatives of the corresponding entries of $L_{-}$:

$$
L = \begin{bmatrix} 0.6931 & 0.1054 \\ 1.2040 & 0.2231 \end{bmatrix}
$$

Verifying in code:

x = torch.tensor([0.5, 0.1, 0.3, 0.2]).reshape(2, 2).float()
y = torch.tensor([0, 0, 1, 0]).reshape(2, 2).float()
torch.nn.functional.binary_cross_entropy(x, y, reduction='none')
tensor([[0.6931, 0.1054],
        [1.2040, 0.2231]])

(Don't underestimate this mask trick; it will come in handy when we write the ASL code shortly.)
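The same result can also be reproduced by hand from the formulas above, with $y$ acting as the mask (a small illustrative snippet of my own, not part of the original post):

import torch
y_hat = torch.tensor([[0.5, 0.1], [0.3, 0.2]])
y = torch.tensor([[0., 0.], [1., 0.]])
L_pos = torch.log(y_hat)          # log(y_hat)
L_neg = torch.log(1 - y_hat)      # log(1 - y_hat)
print(-(y * L_pos + (1 - y) * L_neg))
# tensor([[0.6931, 0.1054],
#         [1.2040, 0.2231]])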

2. The Focal Loss Formula

The basic formula is still the same:
$$L = -y L_{+} - (1-y) L_{-}$$

with $L_{+}$ and $L_{-}$ defined as follows:
$$
\begin{aligned}
L_{+} &= (1-p)^{\gamma} \cdot \log(p) \\
L_{-} &= p^{\gamma} \cdot \log(1-p) \\
p &= \mathrm{sigmoid}(logit)
\end{aligned}
$$
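As a minimal sketch (my own illustration, assuming a single shared gamma as in the formula above and an eps clamp for numerical safety; not a production focal-loss implementation):

import torch

def focal_loss(logits, targets, gamma=2.0, eps=1e-8):
    p = torch.sigmoid(logits)
    # L+ = (1 - p)^gamma * log(p),  L- = p^gamma * log(1 - p)
    loss_pos = (1 - p) ** gamma * torch.log(p.clamp(min=eps))
    loss_neg = p ** gamma * torch.log((1 - p).clamp(min=eps))
    # L = -y * L+ - (1 - y) * L-
    return -(targets * loss_pos + (1 - targets) * loss_neg)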

3. The ASL Formula

ASL loss is an improved version of focal loss:

$$
\begin{aligned}
L_{+} &= (1-p)^{\gamma_{+}} \cdot \log(p) \\
L_{-} &= p_m^{\gamma_{-}} \cdot \log(1-p_m) \\
p &= \mathrm{sigmoid}(logit) \\
p_m &= \max(p-m, 0)
\end{aligned}
$$
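To make the formulas concrete before reading the optimized implementation, here is a naive sketch of my own that follows them literally (the function name, default hyper-parameters, and the eps clamp are my additions, not the official code):

import torch

def asl_naive(logits, targets, gamma_pos=1, gamma_neg=4, m=0.05, eps=1e-8):
    p = torch.sigmoid(logits)
    p_m = (p - m).clamp(min=0)                                    # p_m = max(p - m, 0)
    loss_pos = (1 - p) ** gamma_pos * torch.log(p.clamp(min=eps))
    loss_neg = p_m ** gamma_neg * torch.log((1 - p_m).clamp(min=eps))
    # L = -y * L+ - (1 - y) * L-
    return (-(targets * loss_pos + (1 - targets) * loss_neg)).sum()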

Since $p_m$ appears only in $L_{-}$, while $p$ normally shows up in $L_{+}$ and $(1-p)$ normally shows up in $L_{-}$, we massage $p_m$ with a few sign flips.

First, a small lemma that obviously holds for any functions (or variables) x and y: negating the larger of the two gives the smaller of their negations:

$$-\max(x, y) = \min(-x, -y)$$

Therefore:
$$
\begin{aligned}
p_m &= \max(p-m, 0) = -\min(m-p, 0) \\
-p_m &= \min(m-p, 0) \\
1-p_m &= \min(m-p, 0) + 1 \\
1-p_m &= \min(m-p+1, 1) \\
1-p_m &= \min(m+1-p, 1) \\
1-p_m &= \mathrm{clip}(m+1-p,\ \mathrm{max}=1)
\end{aligned}
$$

We will need this last line shortly.
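A quick numeric check of this last identity (purely illustrative):

import torch
p = torch.tensor([0.01, 0.04, 0.5, 0.9])
m = 0.05
p_m = (p - m).clamp(min=0)                                  # max(p - m, 0)
print(torch.allclose(1 - p_m, (m + 1 - p).clamp(max=1)))    # True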

4. The ASL Code

Now let's look at the ASL loss code. The PyTorch version comes from:
https://github.com/Alibaba-MIIL/ASL/blob/main/src/loss_functions/losses.py

  • self.gamma_neg is $\gamma_{-}$
  • self.gamma_pos is $\gamma_{+}$
  • self.eps is used inside the log function to prevent overflow (log of 0)
import torch
import torch.nn as nn

class AsymmetricLossOptimized(nn.Module):
    ''' Notice - optimized version, minimizes memory allocation and gpu uploading,
    favors inplace operations'''

    def __init__(self, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-8, disable_torch_grad_focal_loss=False):
        super(AsymmetricLossOptimized, self).__init__()

        self.gamma_neg = gamma_neg
        self.gamma_pos = gamma_pos
        self.clip = clip
        self.disable_torch_grad_focal_loss = disable_torch_grad_focal_loss
        self.eps = eps

        # prevent memory allocation and gpu uploading every iteration, and encourages inplace operations
        self.targets = self.anti_targets = self.xs_pos = self.xs_neg = self.asymmetric_w = self.loss = None

    def forward(self, x, y):
        """
        Parameters
        ----------
        x: input logits
        y: targets (multi-label binarized vector)
        """

        self.targets = y
        self.anti_targets = 1 - y

        # compute the probabilities of positives and negatives separately
        self.xs_pos = torch.sigmoid(x)
        self.xs_neg = 1.0 - self.xs_pos

        # asymmetric clipping
        if self.clip is not None and self.clip > 0:
            self.xs_neg.add_(self.clip).clamp_(max=1)  # add the clip margin to self.xs_neg

        # basic cross-entropy calculation first
        self.loss = self.targets * torch.log(self.xs_pos.clamp(min=self.eps))
        self.loss.add_(self.anti_targets * torch.log(self.xs_neg.clamp(min=self.eps)))

        # Asymmetric Focusing
        if self.gamma_neg > 0 or self.gamma_pos > 0:
            if self.disable_torch_grad_focal_loss:
                torch.set_grad_enabled(False)
            # the following 4 lines effectively handle positives and negatives in parallel
            self.xs_pos = self.xs_pos * self.targets
            self.xs_neg = self.xs_neg * self.anti_targets
            self.asymmetric_w = torch.pow(1 - self.xs_pos - self.xs_neg,
                                          self.gamma_pos * self.targets + self.gamma_neg * self.anti_targets)
            if self.disable_torch_grad_focal_loss:
                torch.set_grad_enabled(True)
            self.loss *= self.asymmetric_w

        return -self.loss.sum()
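A minimal usage sketch (the shapes, the 0.8 threshold, and the variable names are made up for illustration; this is not from the repo):

criterion = AsymmetricLossOptimized(gamma_neg=4, gamma_pos=1, clip=0.05)
logits = torch.randn(8, 20, requires_grad=True)   # batch of 8 samples, 20 labels
targets = (torch.rand(8, 20) > 0.8).float()       # multi-label binarized targets
loss = criterion(logits, targets)
loss.backward()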

Now let's go through a few pieces of the code one at a time:

# asymmetric clipping
if self.clip is not None and self.clip > 0:
    self.xs_neg.add_(self.clip).clamp_(max=1)  # add the clip margin to self.xs_neg

These lines compute:
$$1-p_m = \mathrm{clip}(m+1-p,\ \mathrm{max}=1)$$

# basic cross-entropy calculation first
self.loss = self.targets * torch.log(self.xs_pos.clamp(min=self.eps))
self.loss.add_(self.anti_targets * torch.log(self.xs_neg.clamp(min=self.eps)))

These two lines compute the masked log terms of the loss, i.e. $y \cdot \log(p) + (1-y) \cdot \log(1-p_m)$.

Note that self.targets and self.anti_targets both act as masks here, so the self.loss matrix has the same shape as self.targets. If that is not obvious, revisit the computation in the BCE section above.
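If you want to convince yourself of this, here is a small sanity check of my own (not from the repo): with gamma_neg = gamma_pos = 0 and clip = 0, the focusing block is skipped and the class should reduce to a summed BCE-with-logits, up to the eps clamp:

x = torch.randn(2, 3)
y = torch.randint(0, 2, (2, 3)).float()
bce_like = AsymmetricLossOptimized(gamma_neg=0, gamma_pos=0, clip=0)
print(bce_like(x, y))
print(torch.nn.functional.binary_cross_entropy_with_logits(x, y, reduction='sum'))
# the two printed values should agree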

The power factors in front, $(1-p)^{\gamma_{+}}$ and $p_m^{\gamma_{-}}$, act as per-element weights. In the code they become self.asymmetric_w.

self.asymmetric_w is computed as follows, and this part is rather clever!

self.xs_pos = self.xs_pos * self.targets
self.xs_neg = self.xs_neg * self.anti_targets
self.asymmetric_w = torch.pow(1 - self.xs_pos - self.xs_neg,
                              self.gamma_pos * self.targets + self.gamma_neg * self.anti_targets)

A quick aside on torch.pow: given two tensors of the same shape, it raises each element of the first to the power of the corresponding element of the second. For example:

>>> x = torch.tensor([1, 2, 3, 4])
>>> y = torch.tensor([2, 2, 3, 1])
>>> torch.pow(x, y)
tensor([ 1,  4, 27,  4])

To compute self.asymmetric_w, we only need the x argument of pow to hold $(1-p)$ or $p_m$ at each position, and the y argument to hold $\gamma_{+}$ or $\gamma_{-}$ at the corresponding position. Starting with the easier one, the y argument is computed as:

self.gamma_pos * self.targets + self.gamma_neg * self.anti_targets

which again relies on self.targets acting as a mask. The x argument is computed as:

1 - self.xs_pos - self.xs_neg

When computing $L_{+}$ (positive positions), self.xs_neg == 0 there, so the x argument at those positions is 1 - self.xs_pos, i.e. $(1-p)$.
When computing $L_{-}$ (negative positions), self.xs_pos == 0 there, so the x argument at those positions is 1 - self.xs_neg, i.e. $(1-(1-p_m)) = p_m$.

A single torch.pow thus computes self.asymmetric_w for both cases at once. NICE!
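To check this numerically (my own toy verification, not part of the repo code): on positive positions the weight should equal $(1-p)^{\gamma_{+}}$, and on negative positions it should equal $p_m^{\gamma_{-}}$:

import torch
p = torch.tensor([[0.9, 0.2], [0.3, 0.7]])
y = torch.tensor([[1., 0.], [0., 1.]])
gamma_pos, gamma_neg, m = 1, 4, 0.05
p_m = (p - m).clamp(min=0)                         # p_m = max(p - m, 0)
xs_pos = p * y                                     # zero out negative positions
xs_neg = (1 - p + m).clamp(max=1) * (1 - y)        # 1 - p_m, zeroed on positive positions
w = torch.pow(1 - xs_pos - xs_neg, gamma_pos * y + gamma_neg * (1 - y))
expected = torch.where(y.bool(), (1 - p) ** gamma_pos, p_m ** gamma_neg)
print(torch.allclose(w, expected))                 # True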

After that, the loss and the weights are simply multiplied elementwise:

self.loss *= self.asymmetric_w

5. ASL Code: Paddle Implementation

import numpy as np
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class AsymmetricLossOptimizedWithLogit(nn.Layer):
    ''' Notice - optimized version, minimizes memory allocation and gpu uploading,
    favors inplace operations'''

    def __init__(self, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-5, disable_paddle_grad_focal_loss=False):
        super(AsymmetricLossOptimizedWithLogit, self).__init__()

        self.gamma_neg = gamma_neg
        self.gamma_pos = gamma_pos
        self.clip = clip
        self.disable_paddle_grad_focal_loss = disable_paddle_grad_focal_loss
        self.eps = eps

        self.targets = self.anti_targets = self.xs_pos = self.xs_neg = self.asymmetric_w = self.loss = None

    def forward(self, x, y, weights=None):
        """
        Parameters
        ----------
        x: input logits
        y: targets (multi-label binarized vector)
        """

        self.targets = y
        self.anti_targets = 1 - y

        # Calculating Probabilities
        self.xs_pos = F.sigmoid(x)
        self.xs_neg = 1.0 - self.xs_pos

        # Asymmetric Clipping
        if self.clip is not None and self.clip > 0:
            # self.xs_neg.add_(self.clip).clip_(max=1)
            self.xs_neg = (self.xs_neg + self.clip).clip_(max=1)

        # Basic CE calculation
        self.loss = self.targets * paddle.log(self.xs_pos.clip(min=self.eps))
        self.loss.add_(self.anti_targets * paddle.log(self.xs_neg.clip(min=self.eps)))

        # Asymmetric Focusing
        if self.gamma_neg > 0 or self.gamma_pos > 0:
            if self.disable_paddle_grad_focal_loss:
                paddle.set_grad_enabled(False)
            self.xs_pos = self.xs_pos * self.targets
            self.xs_neg = self.xs_neg * self.anti_targets
            self.asymmetric_w = paddle.pow(1 - self.xs_pos - self.xs_neg,
                                           (self.gamma_pos * self.targets +
                                            self.gamma_neg * self.anti_targets).astype("float32"))
            if self.disable_paddle_grad_focal_loss:
                paddle.set_grad_enabled(True)
            self.loss *= self.asymmetric_w

        if weights is not None:
            self.loss *= weights

        _loss = -self.loss.sum()
        return _loss


if __name__ == "__main__":
    np.random.seed(11070109)
    x = np.random.randn(3, 3)
    x = paddle.to_tensor(x).cast("float32")
    y = (x > 0.5).cast("float32")
    loss = AsymmetricLossOptimizedWithLogit()
    out = loss(x, y)

Summary

That is all of "Loss Roundup: ASL Loss (Asymmetric Loss), a Detailed Code Walkthrough"; hopefully it helps you solve the problem you ran into.
