【NLP】A quick walkthrough of the PyTorch code for the transformer in NLP

  • Learning the classic transformer

  • The article is reposted from the WeChat official account 机器学习炼丹术 (Machine Learning Alchemy)

  • Author: Chen Yixin (reposted with permission)

  • Contact: WeChat cyx645016617

  • Questions and discussion are welcome; let's improve together

  • Code walkthrough

    • transformer

    • Embedding

    • Encoder_MultipleLayers

    • Encoder

  • Full code


Code walkthrough

transformer

class transformer(nn.Sequential):
    def __init__(self, encoding, **config):
        super(transformer, self).__init__()
        if encoding == 'drug':
            self.emb = Embeddings(config['input_dim_drug'],
                                  config['transformer_emb_size_drug'],
                                  50,
                                  config['transformer_dropout_rate'])
            self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_drug'],
                                                  config['transformer_emb_size_drug'],
                                                  config['transformer_intermediate_size_drug'],
                                                  config['transformer_num_attention_heads_drug'],
                                                  config['transformer_attention_probs_dropout'],
                                                  config['transformer_hidden_dropout_rate'])
        elif encoding == 'protein':
            self.emb = Embeddings(config['input_dim_protein'],
                                  config['transformer_emb_size_target'],
                                  545,
                                  config['transformer_dropout_rate'])
            self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_target'],
                                                  config['transformer_emb_size_target'],
                                                  config['transformer_intermediate_size_target'],
                                                  config['transformer_num_attention_heads_target'],
                                                  config['transformer_attention_probs_dropout'],
                                                  config['transformer_hidden_dropout_rate'])

    ### parameter v (tuple of length 2) is from utils.drug2emb_encoder
    def forward(self, v):
        e = v[0].long().to(device)
        e_mask = v[1].long().to(device)
        print(e.shape, e_mask.shape)
        # expand the 0/1 mask to [batch, 1, 1, seq_len]; padding positions become -10000.0
        # so that softmax over the attention scores effectively ignores them
        ex_e_mask = e_mask.unsqueeze(1).unsqueeze(2)
        ex_e_mask = (1.0 - ex_e_mask) * -10000.0

        emb = self.emb(e)
        encoded_layers = self.encoder(emb.float(), ex_e_mask.float())
        return encoded_layers[:, 0]
  • It has only two components: an Embeddings layer and an Encoder_MultipleLayers module.

  • The input v to forward is a tuple with two elements: the first is the data (the token ids), the second is the mask marking which positions hold valid data.
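To make the mask handling in forward concrete, here is a minimal sketch (batch size 1 and a sequence length of 5 are made-up values) of how the 0/1 padding mask becomes the additive mask that is later added to the raw attention scores:

import torch

# hypothetical mask for one sequence of length 5: the first 3 positions are real tokens
e_mask = torch.tensor([[1, 1, 1, 0, 0]])        # shape [1, 5]

ex_e_mask = e_mask.unsqueeze(1).unsqueeze(2)    # shape [1, 1, 1, 5]
ex_e_mask = (1.0 - ex_e_mask) * -10000.0        # 0.0 at valid positions, -10000.0 at padding

print(ex_e_mask)
# tensor([[[[    -0.,     -0.,     -0., -10000., -10000.]]]])
# Once this is added to the attention scores, softmax assigns ~0 probability to padded positions.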

Embedding

class Embeddings(nn.Module):
    """Construct the embeddings from protein/target, position embeddings."""
    def __init__(self, vocab_size, hidden_size, max_position_size, dropout_rate):
        super(Embeddings, self).__init__()
        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
        self.position_embeddings = nn.Embedding(max_position_size, hidden_size)
        self.LayerNorm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(dropout_rate)

    def forward(self, input_ids):
        seq_length = input_ids.size(1)
        position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
        position_ids = position_ids.unsqueeze(0).expand_as(input_ids)

        words_embeddings = self.word_embeddings(input_ids)
        position_embeddings = self.position_embeddings(position_ids)

        embeddings = words_embeddings + position_embeddings
        embeddings = self.LayerNorm(embeddings)
        embeddings = self.dropout(embeddings)
        return embeddings
  • It contains two nn.Embedding layers (word embeddings and position embeddings), plus a LayerNorm and a Dropout layer.
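Before looking at nn.Embedding itself, here is a small usage sketch of the Embeddings module above; the vocabulary size, hidden size and sequence length below are illustrative values only:

import torch

# assuming the Embeddings class defined above is in scope
emb = Embeddings(vocab_size=5000, hidden_size=128, max_position_size=50, dropout_rate=0.1)

input_ids = torch.randint(0, 5000, (2, 50))   # a batch of 2 sequences of 50 token ids
out = emb(input_ids)

print(out.shape)   # torch.Size([2, 50, 128]): word + position embedding, then LayerNorm + Dropout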

torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None,
                   norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None)

nn.Embedding is a simple lookup table that stores the embedding vectors of a fixed-size dictionary: given an index, the embedding layer returns the embedding vector for that index, and the embedding vectors capture the semantic relationships between the symbols the indices stand for.

The input is a list of indices, and the output is the list of corresponding embedding vectors.

  • num_embeddings (python:int) – the size of the dictionary; for example, if there are 5000 distinct tokens in total, pass 5000, and the valid indices are 0-4999.

  • embedding_dim (python:int) – the dimensionality of each embedding vector, i.e. how many dimensions are used to represent one symbol.

  • padding_idx (python:int, optional) – the padding id. For example, if the input length is 100 but individual sentences are shorter, the remainder has to be filled with a fixed id; this argument specifies that id so the network does not compute its relevance to the other symbols. (Its embedding is initialized to zeros.)

  • max_norm (python:float, optional) – the maximum norm; if an embedding vector's norm exceeds this bound, it is renormalized.

  • norm_type (python:float, optional) – which norm to use when comparing against max_norm; defaults to the 2-norm.

  • scale_grad_by_freq (boolean, optional) – scale gradients by the inverse of the frequency of the words in the mini-batch. Defaults to False.

  • sparse (bool, optional) – if True, the gradient w.r.t. the weight matrix is a sparse tensor.

Here is an example. Note that if an index exceeds the dictionary capacity you configured, an error is raised:
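A minimal sketch of both behaviors (the lookup and the out-of-range error), using a hypothetical dictionary of 5000 entries:

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=5000, embedding_dim=128, padding_idx=0)

ids = torch.tensor([[3, 17, 4999, 0, 0]])     # the last two positions are padding (id 0)
vectors = emb(ids)
print(vectors.shape)                 # torch.Size([1, 5, 128])
print(vectors[0, -1].abs().sum())    # 0 -- the padding_idx row is initialized to zeros

bad_ids = torch.tensor([[5000]])     # 5000 is outside the valid range 0..4999
# emb(bad_ids)                       # raises an index-out-of-range error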

  • nn.Embedding has learnable parameters: a num_embeddings × embedding_dim weight matrix.

Encoder_MultipleLayers

class Encoder_MultipleLayers(nn.Module):
    def __init__(self, n_layer, hidden_size, intermediate_size,
                 num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
        super(Encoder_MultipleLayers, self).__init__()
        layer = Encoder(hidden_size, intermediate_size, num_attention_heads,
                        attention_probs_dropout_prob, hidden_dropout_prob)
        self.layer = nn.ModuleList([copy.deepcopy(layer) for _ in range(n_layer)])

    def forward(self, hidden_states, attention_mask, output_all_encoded_layers=True):
        all_encoder_layers = []
        for layer_module in self.layer:
            hidden_states = layer_module(hidden_states, attention_mask)
        return hidden_states
  • The embedding in the transformer turns the raw data into the corresponding vectors; this Encoder_MultipleLayers module is the key part that extracts features.

  • The structure is very simple: it is just a stack of n_layer identical Encoder blocks (deep-copied into an nn.ModuleList).
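A quick sketch of running the stacked encoder on dummy inputs; the layer count and sizes below are illustrative, and the Encoder block it stacks is defined in the next subsection (and in the full code at the end):

import torch

# assuming Encoder_MultipleLayers and its sub-modules from the full code below are in scope
encoder = Encoder_MultipleLayers(n_layer=8,
                                 hidden_size=128,
                                 intermediate_size=512,
                                 num_attention_heads=8,
                                 attention_probs_dropout_prob=0.1,
                                 hidden_dropout_prob=0.1)

hidden = torch.randn(2, 50, 128)    # [batch, seq_len, hidden_size], e.g. the Embeddings output
mask = torch.zeros(2, 1, 1, 50)     # additive mask: 0.0 everywhere = no padding

out = encoder(hidden, mask)
print(out.shape)    # torch.Size([2, 50, 128]): same shape in, same shape out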

Encoder

class Encoder(nn.Module):
    def __init__(self, hidden_size, intermediate_size, num_attention_heads,
                 attention_probs_dropout_prob, hidden_dropout_prob):
        super(Encoder, self).__init__()
        self.attention = Attention(hidden_size, num_attention_heads,
                                   attention_probs_dropout_prob, hidden_dropout_prob)
        self.intermediate = Intermediate(hidden_size, intermediate_size)
        self.output = Output(intermediate_size, hidden_size, hidden_dropout_prob)

    def forward(self, hidden_states, attention_mask):
        attention_output = self.attention(hidden_states, attention_mask)
        intermediate_output = self.intermediate(attention_output)
        layer_output = self.output(intermediate_output, attention_output)
        return layer_output
  • It contains an Attention part, plus the Intermediate and Output modules.
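A small sketch of the shapes flowing through one Encoder block; the hidden size 128 and sequence length 50 follow the comments in the code, while the intermediate size 512 is an illustrative choice:

import torch

# assuming the Encoder class above and its sub-modules (Attention, Intermediate, Output)
# from the full code at the end are in scope
block = Encoder(hidden_size=128, intermediate_size=512, num_attention_heads=8,
                attention_probs_dropout_prob=0.1, hidden_dropout_prob=0.1)

x = torch.randn(2, 50, 128)        # [batch, seq_len, hidden]
mask = torch.zeros(2, 1, 1, 50)    # additive attention mask, all positions valid

a = block.attention(x, mask)       # [2, 50, 128]  multi-head self-attention + residual + LayerNorm
i = block.intermediate(a)          # [2, 50, 512]  Linear + ReLU (feed-forward expansion)
o = block.output(i, a)             # [2, 50, 128]  Linear back to hidden size + residual + LayerNorm
print(a.shape, i.shape, o.shape)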

class Attention(nn.Module):
    def __init__(self, hidden_size, num_attention_heads,
                 attention_probs_dropout_prob, hidden_dropout_prob):
        super(Attention, self).__init__()
        self.self = SelfAttention(hidden_size, num_attention_heads, attention_probs_dropout_prob)
        self.output = SelfOutput(hidden_size, hidden_dropout_prob)

    def forward(self, input_tensor, attention_mask):
        self_output = self.self(input_tensor, attention_mask)
        attention_output = self.output(self_output, input_tensor)
        return attention_output


class SelfAttention(nn.Module):
    def __init__(self, hidden_size, num_attention_heads, attention_probs_dropout_prob):
        super(SelfAttention, self).__init__()
        if hidden_size % num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention "
                "heads (%d)" % (hidden_size, num_attention_heads))
        self.num_attention_heads = num_attention_heads
        self.attention_head_size = int(hidden_size / num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size

        self.query = nn.Linear(hidden_size, self.all_head_size)
        self.key = nn.Linear(hidden_size, self.all_head_size)
        self.value = nn.Linear(hidden_size, self.all_head_size)

        self.dropout = nn.Dropout(attention_probs_dropout_prob)

    def transpose_for_scores(self, x):
        # num_attention_heads = 8, attention_head_size = 128 / 8 = 16
        new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
        x = x.view(*new_x_shape)
        return x.permute(0, 2, 1, 3)

    def forward(self, hidden_states, attention_mask):
        # hidden_states.shape = [batch, 50, 128]
        mixed_query_layer = self.query(hidden_states)
        mixed_key_layer = self.key(hidden_states)
        mixed_value_layer = self.value(hidden_states)

        query_layer = self.transpose_for_scores(mixed_query_layer)
        key_layer = self.transpose_for_scores(mixed_key_layer)
        value_layer = self.transpose_for_scores(mixed_value_layer)
        # query_layer.shape = [batch, 8, 50, 16]

        # Take the dot product between "query" and "key" to get the raw attention scores.
        attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
        # attention_scores.shape = [batch, 8, 50, 50]
        attention_scores = attention_scores / math.sqrt(self.attention_head_size)
        attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities.
        attention_probs = nn.Softmax(dim=-1)(attention_scores)

        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper.
        attention_probs = self.dropout(attention_probs)

        context_layer = torch.matmul(attention_probs, value_layer)
        context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
        new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
        context_layer = context_layer.view(*new_context_layer_shape)
        return context_layer

This flow is similar to what a typical ViT does. The transformer travelled from NLP to CV, but looking back at the NLP transformer from the CV/ViT side is also instructive. The point to watch here is the multi-head idea: the hidden size is 128, and with the number of heads set to 8, each 128-dimensional token is treated as 8 chunks of 16 dimensions, and self-attention is computed within each chunk separately, a bit like channels in a convolution. Comparing multi-head attention to convolution makes sense, and comparing it to grouped convolution also works (a shape sketch follows the list below):

  • Compared with ordinary convolution: if the size of each head is fixed at 16, the heads act like channels, so increasing the number of heads is like increasing the number of channels of the convolution kernel;

  • Compared with grouped convolution: if the hidden size is fixed at 128, the number of heads is the number of groups, so increasing the number of heads is like using more convolution groups, which lowers the computational cost.

  • The rest of the code (SelfOutput, Intermediate, Output) is just FC + LayerNorm + Dropout, so it is not discussed further.
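As promised above, here is a small shape sketch of what transpose_for_scores does and why each head gets its own attention map; batch size 2 is arbitrary, while sequence length 50, hidden size 128 and 8 heads follow the comments in the code:

import torch

batch, seq_len, hidden, heads = 2, 50, 128, 8
head_size = hidden // heads                      # 16

x = torch.randn(batch, seq_len, hidden)          # [2, 50, 128]
x = x.view(batch, seq_len, heads, head_size)     # [2, 50, 8, 16]  split 128 into 8 chunks of 16
x = x.permute(0, 2, 1, 3)                        # [2, 8, 50, 16]  one 16-dim sequence per head

scores = torch.matmul(x, x.transpose(-1, -2))    # [2, 8, 50, 50]  a separate 50x50 attention map per head
print(x.shape, scores.shape)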

Full code

import torch
import torch.nn as nn
import torch.nn.functional as F
import copy, math

# the original code assumes a globally defined `device`
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


class Embeddings(nn.Module):
    """Construct the embeddings from protein/target, position embeddings."""
    def __init__(self, vocab_size, hidden_size, max_position_size, dropout_rate):
        super(Embeddings, self).__init__()
        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
        self.position_embeddings = nn.Embedding(max_position_size, hidden_size)
        self.LayerNorm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(dropout_rate)

    def forward(self, input_ids):
        seq_length = input_ids.size(1)
        position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
        position_ids = position_ids.unsqueeze(0).expand_as(input_ids)

        words_embeddings = self.word_embeddings(input_ids)
        position_embeddings = self.position_embeddings(position_ids)

        embeddings = words_embeddings + position_embeddings
        embeddings = self.LayerNorm(embeddings)
        embeddings = self.dropout(embeddings)
        return embeddings


class Encoder_MultipleLayers(nn.Module):
    def __init__(self, n_layer, hidden_size, intermediate_size,
                 num_attention_heads, attention_probs_dropout_prob, hidden_dropout_prob):
        super(Encoder_MultipleLayers, self).__init__()
        layer = Encoder(hidden_size, intermediate_size, num_attention_heads,
                        attention_probs_dropout_prob, hidden_dropout_prob)
        self.layer = nn.ModuleList([copy.deepcopy(layer) for _ in range(n_layer)])

    def forward(self, hidden_states, attention_mask, output_all_encoded_layers=True):
        all_encoder_layers = []
        for layer_module in self.layer:
            hidden_states = layer_module(hidden_states, attention_mask)
            # if output_all_encoded_layers:
            #     all_encoder_layers.append(hidden_states)
        # if not output_all_encoded_layers:
        #     all_encoder_layers.append(hidden_states)
        return hidden_states


class Encoder(nn.Module):
    def __init__(self, hidden_size, intermediate_size, num_attention_heads,
                 attention_probs_dropout_prob, hidden_dropout_prob):
        super(Encoder, self).__init__()
        self.attention = Attention(hidden_size, num_attention_heads,
                                   attention_probs_dropout_prob, hidden_dropout_prob)
        self.intermediate = Intermediate(hidden_size, intermediate_size)
        self.output = Output(intermediate_size, hidden_size, hidden_dropout_prob)

    def forward(self, hidden_states, attention_mask):
        attention_output = self.attention(hidden_states, attention_mask)
        intermediate_output = self.intermediate(attention_output)
        layer_output = self.output(intermediate_output, attention_output)
        return layer_output


class Attention(nn.Module):
    def __init__(self, hidden_size, num_attention_heads,
                 attention_probs_dropout_prob, hidden_dropout_prob):
        super(Attention, self).__init__()
        self.self = SelfAttention(hidden_size, num_attention_heads, attention_probs_dropout_prob)
        self.output = SelfOutput(hidden_size, hidden_dropout_prob)

    def forward(self, input_tensor, attention_mask):
        self_output = self.self(input_tensor, attention_mask)
        attention_output = self.output(self_output, input_tensor)
        return attention_output


class SelfAttention(nn.Module):
    def __init__(self, hidden_size, num_attention_heads, attention_probs_dropout_prob):
        super(SelfAttention, self).__init__()
        if hidden_size % num_attention_heads != 0:
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention "
                "heads (%d)" % (hidden_size, num_attention_heads))
        self.num_attention_heads = num_attention_heads
        self.attention_head_size = int(hidden_size / num_attention_heads)
        self.all_head_size = self.num_attention_heads * self.attention_head_size

        self.query = nn.Linear(hidden_size, self.all_head_size)
        self.key = nn.Linear(hidden_size, self.all_head_size)
        self.value = nn.Linear(hidden_size, self.all_head_size)

        self.dropout = nn.Dropout(attention_probs_dropout_prob)

    def transpose_for_scores(self, x):
        new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size)
        x = x.view(*new_x_shape)
        return x.permute(0, 2, 1, 3)

    def forward(self, hidden_states, attention_mask):
        mixed_query_layer = self.query(hidden_states)
        mixed_key_layer = self.key(hidden_states)
        mixed_value_layer = self.value(hidden_states)

        query_layer = self.transpose_for_scores(mixed_query_layer)
        key_layer = self.transpose_for_scores(mixed_key_layer)
        value_layer = self.transpose_for_scores(mixed_value_layer)

        # Take the dot product between "query" and "key" to get the raw attention scores.
        attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
        attention_scores = attention_scores / math.sqrt(self.attention_head_size)
        attention_scores = attention_scores + attention_mask

        # Normalize the attention scores to probabilities.
        attention_probs = nn.Softmax(dim=-1)(attention_scores)

        # This is actually dropping out entire tokens to attend to, which might
        # seem a bit unusual, but is taken from the original Transformer paper.
        attention_probs = self.dropout(attention_probs)

        context_layer = torch.matmul(attention_probs, value_layer)
        context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
        new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
        context_layer = context_layer.view(*new_context_layer_shape)
        return context_layer


class SelfOutput(nn.Module):
    def __init__(self, hidden_size, hidden_dropout_prob):
        super(SelfOutput, self).__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.LayerNorm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states


class Intermediate(nn.Module):
    def __init__(self, hidden_size, intermediate_size):
        super(Intermediate, self).__init__()
        self.dense = nn.Linear(hidden_size, intermediate_size)

    def forward(self, hidden_states):
        hidden_states = self.dense(hidden_states)
        hidden_states = F.relu(hidden_states)
        return hidden_states


class Output(nn.Module):
    def __init__(self, intermediate_size, hidden_size, hidden_dropout_prob):
        super(Output, self).__init__()
        self.dense = nn.Linear(intermediate_size, hidden_size)
        self.LayerNorm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(hidden_dropout_prob)

    def forward(self, hidden_states, input_tensor):
        hidden_states = self.dense(hidden_states)
        hidden_states = self.dropout(hidden_states)
        hidden_states = self.LayerNorm(hidden_states + input_tensor)
        return hidden_states


class transformer(nn.Sequential):
    def __init__(self, encoding, **config):
        super(transformer, self).__init__()
        if encoding == 'drug':
            self.emb = Embeddings(config['input_dim_drug'],
                                  config['transformer_emb_size_drug'],
                                  50,
                                  config['transformer_dropout_rate'])
            self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_drug'],
                                                  config['transformer_emb_size_drug'],
                                                  config['transformer_intermediate_size_drug'],
                                                  config['transformer_num_attention_heads_drug'],
                                                  config['transformer_attention_probs_dropout'],
                                                  config['transformer_hidden_dropout_rate'])
        elif encoding == 'protein':
            self.emb = Embeddings(config['input_dim_protein'],
                                  config['transformer_emb_size_target'],
                                  545,
                                  config['transformer_dropout_rate'])
            self.encoder = Encoder_MultipleLayers(config['transformer_n_layer_target'],
                                                  config['transformer_emb_size_target'],
                                                  config['transformer_intermediate_size_target'],
                                                  config['transformer_num_attention_heads_target'],
                                                  config['transformer_attention_probs_dropout'],
                                                  config['transformer_hidden_dropout_rate'])

    ### parameter v (tuple of length 2) is from utils.drug2emb_encoder
    def forward(self, v):
        e = v[0].long().to(device)
        e_mask = v[1].long().to(device)
        print(e.shape, e_mask.shape)
        ex_e_mask = e_mask.unsqueeze(1).unsqueeze(2)
        ex_e_mask = (1.0 - ex_e_mask) * -10000.0

        emb = self.emb(e)
        encoded_layers = self.encoder(emb.float(), ex_e_mask.float())
        return encoded_layers[:, 0]
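Finally, a hedged usage sketch: the config values below are illustrative placeholders (only the keys are taken from transformer.__init__ above), and v imitates the (ids, mask) pair that utils.drug2emb_encoder is said to produce:

import torch

# illustrative config values only; the keys are the ones read in transformer.__init__ above
config = {
    'input_dim_drug': 5000,
    'transformer_emb_size_drug': 128,
    'transformer_n_layer_drug': 8,
    'transformer_intermediate_size_drug': 512,
    'transformer_num_attention_heads_drug': 8,
    'transformer_attention_probs_dropout': 0.1,
    'transformer_hidden_dropout_rate': 0.1,
    'transformer_dropout_rate': 0.1,
}

model = transformer('drug', **config).to(device)

# v imitates the (ids, mask) pair from utils.drug2emb_encoder: 50 token ids plus a 0/1 validity mask
ids  = torch.randint(0, config['input_dim_drug'], (2, 50))
mask = torch.ones(2, 50)
v = (ids, mask)

out = model(v)
print(out.shape)   # torch.Size([2, 128]): the representation of the first token position

Since forward returns encoded_layers[:, 0], the module outputs one hidden_size-dimensional vector per input sequence, which downstream layers can consume as the sequence representation.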
