




[Knowledge Discovery] A Python implementation of the latent factor model (LFM), Part 2

Published: 2025/4/16

http://blog.csdn.net/fjssharpsword/article/details/78015956

This post optimizes the code from that article, mainly to speed up negative-sample generation. The revised code is as follows:
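The speedup comes from pre-sorting all items by popularity once, then walking that order to collect negatives for each user, rather than sampling repeatedly at random. A minimal standalone sketch of the idea (the item IDs and rating counts here are invented for illustration):

```python
import pandas as pd

def sample_negatives(item_popularity, rated_items, ratio):
    """Walk items from most to least popular, skipping items the user has
    already rated, until ratio * len(rated_items) negatives are collected."""
    negatives = []
    count = ratio * len(rated_items)
    for item in item_popularity.index:  # index is pre-sorted by popularity, descending
        if count == 0:
            break
        if item in rated_items:
            continue
        negatives.append(item)
        count -= 1
    return negatives

# item -> rating count, sorted once up front
popularity = pd.Series({'a': 50, 'b': 30, 'c': 20, 'd': 5}).sort_values(ascending=False)
print(sample_negatives(popularity, {'a'}, ratio=2))  # → ['b', 'c']
```

Because the popularity series is built and sorted a single time in `initModel`, each per-user pass is a simple early-terminating scan.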

```python
# -*- coding: utf-8 -*-
'''
Created on 2017-10-16
@author: Administrator
'''
import numpy as np
import pandas as pd
from math import exp
import time
import math

class LFM:
    def __init__(self, lclass, iters, alpha, lamda, topk, ratio, traindata):
        self.lclass = lclass        # number of latent classes; affects performance
        self.iters = iters          # iteration count; best value for convergence is unknown
        self.alpha = alpha          # gradient-descent step size
        self.lamda = lamda          # regularization parameter
        self.topk = topk            # recommend the top-k items
        self.ratio = ratio          # negative/positive sample ratio; biggest impact on performance
        self.traindata = traindata

    # ---- initialization ----
    def getUserPositiveItem(self, userid):
        # positive samples: the items this user has rated
        traindata = self.traindata
        series = traindata[traindata['userid'] == userid]['itemid']
        return list(series.values)

    def getUserNegativeItem(self, userid):
        # negative samples: the most popular items this user has NOT rated
        traindata = self.traindata
        itemLen = self.itemLen      # item popularity, sorted descending
        ratio = self.ratio
        userItemlist = set(traindata[traindata['userid'] == userid]['itemid'])
        negativeItemList = []
        count = ratio * len(userItemlist)   # number of negatives to generate
        for key in itemLen.index:           # walk items from most to least popular
            if count == 0:
                break
            if key in userItemlist:
                continue
            negativeItemList.append(key)
            count -= 1
        return negativeItemList

    def initUserItem(self, userid):
        itemDict = {}
        for item in self.getUserPositiveItem(userid):
            itemDict[item] = 1
        for item in self.getUserNegativeItem(userid):
            itemDict[item] = 0
        return itemDict

    def initModel(self):
        traindata = self.traindata
        lclass = self.lclass
        userID = list(set(traindata['userid'].values))
        self.userID = userID
        itemID = list(set(traindata['itemid'].values))
        self.itemID = itemID
        # popularity of each item (rating count), sorted descending
        itemCount = [len(traindata[traindata['itemid'] == item]['userid']) for item in itemID]
        self.itemLen = pd.Series(itemCount, index=itemID).sort_values(ascending=False)
        # initialize the p and q matrices with random values in [0, 1)
        arrayp = np.random.rand(len(userID), lclass)
        arrayq = np.random.rand(lclass, len(itemID))
        p = pd.DataFrame(arrayp, columns=range(lclass), index=userID)
        q = pd.DataFrame(arrayq, columns=itemID, index=range(lclass))
        # build the positive/negative samples for every user
        userItem = []
        for userid in userID:
            userItem.append({userid: self.initUserItem(userid)})
        return p, q, userItem
    # ---- end of initialization ----

    def sigmod(self, x):
        # squash the interest score into [0, 1]
        return 1.0 / (1 + exp(-x))

    def lfmPredict(self, p, q, userID, itemID):
        # predict the target user's interest in the target item from p and q
        # (.ix was removed from pandas; use .loc and a plain dot product)
        r = np.dot(p.loc[userID].values, q[itemID].values)
        return self.sigmod(r)

    def latenFactorModel(self):
        lclass = self.lclass
        iters = self.iters      # iteration count
        alpha = self.alpha      # gradient-descent step size
        lamda = self.lamda      # regularization parameter
        p, q, userItem = self.initModel()
        for step in range(iters):
            for user in userItem:
                for userID, samples in user.items():
                    for itemID, rui in samples.items():
                        eui = rui - self.lfmPredict(p, q, userID, itemID)
                        for f in range(lclass):
                            p.loc[userID, f] += alpha * (eui * q.loc[f, itemID] - lamda * p.loc[userID, f])
                            q.loc[f, itemID] += alpha * (eui * p.loc[userID, f] - lamda * q.loc[f, itemID])
            alpha *= 0.9  # decay the learning rate
        return p, q

    def recommend(self, userid, p, q):
        itemID = self.itemID
        topk = self.topk
        predictList = [self.lfmPredict(p, q, userid, itemid) for itemid in itemID]
        series = pd.Series(predictList, index=itemID)
        return series.sort_values(ascending=False)[:topk]

    def recallAndPrecision(self, p, q):
        # recall and precision
        traindata = self.traindata
        hit = 0
        recall = 0
        precision = 0
        for userid in self.userID:
            # use a set: `item in series` would test the index labels, not the values
            trueItem = set(traindata[traindata['userid'] == userid]['itemid'])
            preItem = list(self.recommend(userid, p, q).index)
            for item in preItem:
                if item in trueItem:
                    hit += 1
            recall += len(trueItem)
            precision += len(preItem)
        return hit / (recall * 1.0), hit / (precision * 1.0)

    def coverage(self, p, q):
        # coverage
        traindata = self.traindata
        recommend_items = set()
        all_items = set()
        for userid in self.userID:
            for item in traindata[traindata['userid'] == userid]['itemid']:
                all_items.add(item)
            for item in self.recommend(userid, p, q).index:
                recommend_items.add(item)
        return len(recommend_items) / (len(all_items) * 1.0)

    def popularity(self, p, q):
        # average popularity of the recommended items
        itemLen = self.itemLen
        ret = 0
        n = 0
        for userid in self.userID:
            for item in self.recommend(userid, p, q).index:
                ret += math.log(1 + itemLen[item])
                n += 1
        return ret / (n * 1.0)

if __name__ == "__main__":
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    # load the data
    df_sample = pd.read_csv("D:\\tmp\\ratings.csv", names=['userid', 'itemid', 'ratings'], header=0)
    traindata = df_sample[['userid', 'itemid']]
    for ratio in [1, 2, 3, 5, 10, 20]:
        for lclass in [5, 10, 20, 30, 50]:
            lfm = LFM(lclass, 2, 0.02, 0.01, 10, ratio, traindata)
            p, q = lfm.latenFactorModel()
            # evaluate the model
            print("%5s%20s%20s%20s%20s%20s" % ('ratio', 'lclass', 'recall', 'precision', 'coverage', 'popularity'))
            recall, precision = lfm.recallAndPrecision(p, q)
            coverage = lfm.coverage(p, q)
            popularity = lfm.popularity(p, q)
            print("%5d%20d%19.3f%%%19.3f%%%19.3f%%%20.3f" % (ratio, lclass, recall * 100, precision * 100, coverage * 100, popularity))
    end = time.perf_counter()
    print('finish all in %s' % str(end - start))
```
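For reference, the prediction at the heart of `lfmPredict` is just the sigmoid of the dot product of the user's and the item's latent-factor vectors. A standalone sketch with invented factor values:

```python
import numpy as np

def predict_interest(p_u, q_i):
    # interest = sigmoid of the dot product of the two latent vectors
    return 1.0 / (1.0 + np.exp(-np.dot(p_u, q_i)))

p_u = np.array([0.2, 0.5, 0.1])  # user latent factors (invented values)
q_i = np.array([0.4, 0.3, 0.9])  # item latent factors (invented values)
r = predict_interest(p_u, q_i)   # dot product = 0.32, sigmoid(0.32) ≈ 0.579
print(round(r, 3))  # → 0.579
```

The sigmoid keeps the score in [0, 1], which matches the 1/0 labels the positive/negative samples carry during training.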
Three points to note:
1) Performance is affected most by the negative/positive sample ratio and by the number of latent classes; the best values for both must be found by tuning.
2) For the gradient-descent stopping condition, i.e. the number of iterations: with the step size fixed at 0.02, the best iteration count n must also be found by training.
3) For training on incremental data: save the p and q matrices, and for an incremental sample set continue training from the saved p and q rather than retraining from scratch every time, which wastes compute. This still needs to be verified in practice.
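The warm-start idea in point 3 can be sketched with pandas' pickle round-trip: persist p and q after a training run, then load them as the starting point of the next run instead of random initialization. The toy matrices and file names below are purely illustrative, and, as noted above, the approach itself is unverified:

```python
import os
import tempfile
import numpy as np
import pandas as pd

# Toy p (user x factor) and q (factor x item) matrices standing in for a trained model.
p = pd.DataFrame(np.random.rand(3, 2), index=['u1', 'u2', 'u3'])
q = pd.DataFrame(np.random.rand(2, 4), columns=['i1', 'i2', 'i3', 'i4'])

with tempfile.TemporaryDirectory() as d:
    # end of run N: persist the learned factors
    p.to_pickle(os.path.join(d, 'p.pkl'))
    q.to_pickle(os.path.join(d, 'q.pkl'))
    # start of run N+1: reload and resume gradient descent on the new samples
    # (pass the loaded p, q into the training loop in place of the random init)
    p_loaded = pd.read_pickle(os.path.join(d, 'p.pkl'))
    q_loaded = pd.read_pickle(os.path.join(d, 'q.pkl'))

print(p_loaded.equals(p) and q_loaded.equals(q))  # → True
```

Wiring this in would mean letting `initModel` accept pre-existing p and q; new users or items appearing in the incremental data would still need rows/columns added and initialized.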
