
Deep Learning Notes: A skip-gram Implementation in PyTorch


Contents

skip-gram in PyTorch: a plain implementation
Network structure
Training loop: using nn.NLLLoss()
Batch preparation: building (center, context) pairs for the unsupervised task
Sampling optimization: subsampling to reduce the probability of keeping frequent words
skip-gram, improved: negative sampling
The usual optimizations target computational efficiency: negative sampling and hierarchical softmax
Negative sampling: implementation
How negative sampling works
Negative sampling: how noise words are drawn
Negative sampling: the forward pass
Negative sampling: the training loop
skip-gram in PyTorch: a plain implementation

Network structure


import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np

class SkipGram(nn.Module):
    def __init__(self, n_vocab, n_embed):
        super().__init__()

        self.embed = nn.Embedding(n_vocab, n_embed)   # word id -> embedding vector
        self.output = nn.Linear(n_embed, n_vocab)     # embedding -> scores over the vocabulary
        self.log_softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):
        x = self.embed(x)
        scores = self.output(x)
        log_ps = self.log_softmax(scores)

        return log_ps
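
To make the shapes concrete, here is a tiny smoke test (my own addition, not part of the original post; the vocabulary size and batch are arbitrary):

n_vocab, n_embed = 1000, 300
net = SkipGram(n_vocab, n_embed)
words = torch.randint(0, n_vocab, (8,))   # a batch of 8 center-word ids
log_ps = net(words)
print(log_ps.shape)   # torch.Size([8, 1000]): log-probabilities over the vocabulary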

Training loop: using nn.NLLLoss()

# check if GPU is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'

embedding_dim = 300  # you can change this if you want

model = SkipGram(len(vocab_to_int), embedding_dim).to(device)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

print_every = 500
steps = 0
epochs = 5

# train for some number of epochs
for e in range(epochs):

    # get input and target batches
    for inputs, targets in get_batches(train_words, 512):
        steps += 1
        inputs, targets = torch.LongTensor(inputs), torch.LongTensor(targets)
        inputs, targets = inputs.to(device), targets.to(device)

        log_ps = model(inputs)
        loss = criterion(log_ps, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
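
Note that print_every is defined but never used in the loop above. A minimal logging sketch (my addition, not from the original post) could be placed at the end of the inner loop:

        # hypothetical addition: report the running loss every print_every steps
        if steps % print_every == 0:
            print("Epoch: {}/{}".format(e + 1, epochs),
                  "Step: {}".format(steps),
                  "Loss: {:.4f}".format(loss.item()))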

Batch preparation: building (center, context) pairs for the unsupervised task

def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''

    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = words[start:idx] + words[idx+1:stop+1]

    return list(target_words)

def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''

    n_batches = len(words)//batch_size

    # only full batches
    words = words[:n_batches*batch_size]

    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx+batch_size]
        for ii in range(len(batch)):
            batch_x = batch[ii]
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))
        yield x, y
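
As a quick sanity check (a toy example of mine, not from the original post), running the generator on a short list of token ids shows that each center word is repeated once per context word, so the inputs and targets always have equal length:

int_text = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]   # toy token ids
x, y = next(get_batches(int_text, batch_size=10, window_size=2))
print(len(x) == len(y))       # True: one (center, context) pair per element
print(list(zip(x, y))[:5])    # e.g. [(0, 1), (0, 2), (1, 0), (1, 2), ...]
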
Sampling optimization: subsampling to reduce the probability of keeping frequent words

Words that show up often, such as "the", "of", and "for", don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process was called subsampling by Mikolov. For each word $w_i$ in the training set, we discard it with probability

$$P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}$$

where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. For example, with $t = 10^{-5}$, a word that accounts for 1% of the corpus ($f(w_i) = 0.01$) is dropped with probability $1 - \sqrt{10^{-5}/0.01} \approx 0.97$.

from collections import Counter
import random
import numpy as np

threshold = 1e-5
word_counts = Counter(int_words)
#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear

total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
# discard some frequent words, according to the subsampling equation
# create a new list of words for training
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
skip-gram, improved: negative sampling

The usual optimizations target computational efficiency: negative sampling and hierarchical softmax. The rest of this post implements negative sampling.


Negative sampling: implementation

How negative sampling works:

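The figure that originally sat here did not survive. To keep the idea on the page, here is the standard skip-gram negative sampling loss for a center word $w_c$ with true context word $w_o$ and $K$ noise words drawn from a noise distribution $P_n(w)$ (my restatement of the published word2vec formulation, not the original figure):

$$-\log \sigma\left(u_{w_o}^{\top} v_{w_c}\right) \;-\; \sum_{k=1}^{K} \log \sigma\left(-u_{w_k}^{\top} v_{w_c}\right), \qquad w_k \sim P_n(w)$$

where $v$ comes from the input embedding table, $u$ from the output embedding table, and $\sigma$ is the sigmoid. Minimizing it pulls the true (center, context) pair together while pushing the $K$ sampled noise words away. The NegativeSamplingLoss module below computes this quantity and averages it over the batch.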

class NegativeSamplingLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input_vectors, output_vectors, noise_vectors):

        batch_size, embed_size = input_vectors.shape

        # Input vectors should be a batch of column vectors
        input_vectors = input_vectors.view(batch_size, embed_size, 1)

        # Output vectors should be a batch of row vectors
        output_vectors = output_vectors.view(batch_size, 1, embed_size)

        # bmm = batch matrix multiplication
        # correct log-sigmoid loss
        out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()
        out_loss = out_loss.squeeze()

        # incorrect log-sigmoid loss
        noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()
        noise_loss = noise_loss.squeeze().sum(1)  # sum the losses over the sample of noise vectors

        # negate and sum correct and noisy log-sigmoid losses
        # return average batch loss
        return -(out_loss + noise_loss).mean()
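
A quick shape check with random tensors (my own sanity test, not from the original post) confirms what the loss expects: input and output vectors of shape (batch, embed) and noise vectors of shape (batch, n_samples, embed), reducing to a single scalar:

loss_fn = NegativeSamplingLoss()
B, K, D = 4, 5, 300                       # batch size, noise samples per example, embedding dim
dummy_in = torch.randn(B, D)
dummy_out = torch.randn(B, D)
dummy_noise = torch.randn(B, K, D)
print(loss_fn(dummy_in, dummy_out, dummy_noise))   # a single scalar tensor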

Negative sampling: how noise words are drawn

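The image that was here is gone; the distribution it described is the one the code below builds, namely the unigram distribution raised to the 3/4 power (my restatement of the standard word2vec choice):

$$P_n(w) = \frac{U(w)^{3/4}}{\sum_{w'} U(w')^{3/4}}$$

where $U(w)$ is the unigram (word-frequency) distribution over the vocabulary. The 3/4 power flattens it slightly, so rare words are drawn as negatives somewhat more often than their raw frequency alone would allow.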

# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))


Negative sampling: the forward pass

class SkipGramNeg(nn.Module):
    def __init__(self, n_vocab, n_embed, noise_dist=None):
        super().__init__()

        self.n_vocab = n_vocab
        self.n_embed = n_embed
        self.noise_dist = noise_dist

        # define embedding layers for input and output words
        self.in_embed = nn.Embedding(n_vocab, n_embed)
        self.out_embed = nn.Embedding(n_vocab, n_embed)

        # Initialize embedding tables with a uniform distribution; this helps with convergence
        self.in_embed.weight.data.uniform_(-1, 1)
        self.out_embed.weight.data.uniform_(-1, 1)

    def forward_input(self, input_words):
        input_vectors = self.in_embed(input_words)
        return input_vectors

    def forward_output(self, output_words):
        output_vectors = self.out_embed(output_words)
        return output_vectors

    def forward_noise(self, batch_size, n_samples):
        """ Generate noise vectors with shape (batch_size, n_samples, n_embed) """
        if self.noise_dist is None:
            # Sample words uniformly
            noise_dist = torch.ones(self.n_vocab)
        else:
            noise_dist = self.noise_dist

        # Sample words from our noise distribution
        noise_words = torch.multinomial(noise_dist,
                                        batch_size * n_samples,
                                        replacement=True)

        # use self (not a global model) so the module is self-contained
        device = "cuda" if self.out_embed.weight.is_cuda else "cpu"
        noise_words = noise_words.to(device)

        noise_vectors = self.out_embed(noise_words).view(batch_size, n_samples, self.n_embed)

        return noise_vectors
Negative sampling: the training loop

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))

# instantiating the model
embedding_dim = 300
model = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)

# using the loss that we defined
criterion = NegativeSamplingLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)

print_every = 1500
steps = 0
epochs = 5

# train for some number of epochs
for e in range(epochs):

    # get our input, target batches
    for input_words, target_words in get_batches(train_words, 512):
        steps += 1
        inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)
        inputs, targets = inputs.to(device), targets.to(device)

        # input, output, and noise vectors
        input_vectors = model.forward_input(inputs)
        output_vectors = model.forward_output(targets)
        noise_vectors = model.forward_noise(inputs.shape[0], 5)

        # negative sampling loss
        loss = criterion(input_vectors, output_vectors, noise_vectors)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
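
Once training finishes, the learned word vectors live in model.in_embed. A short sketch of how one might pull them out and look at nearest neighbours by cosine similarity (my own addition; int_to_vocab is assumed to be the inverse of the vocab_to_int mapping from preprocessing):

# grab the trained input embeddings as an (n_vocab, n_embed) tensor
embeddings = model.in_embed.weight.detach().cpu()

def nearest(word, topk=5):
    ''' Return the topk most similar vocabulary words by cosine similarity. '''
    vec = embeddings[vocab_to_int[word]].unsqueeze(0)
    sims = torch.nn.functional.cosine_similarity(vec, embeddings)
    best = sims.topk(topk + 1).indices.tolist()[1:]   # drop the query word itself
    return [int_to_vocab[i] for i in best]

print(nearest('king'))   # 'king' is just a hypothetical query word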

