Machine Learning Algorithms in Python: kmeans Text Clustering

發(fā)布時(shí)間:2024/1/17 python 24 豆豆
生活随笔 收集整理的這篇文章主要介紹了 机器学习算法Python实现:kmeans文本聚类 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.
# -*- coding: utf-8 -*-
# Written for a Jupyter notebook. author: huzhifei, created: 2018/8/14
# The script clusters Chinese text with kmeans, using jieba for segmentation
# and scikit-learn for tf-idf features and the clustering itself.

# Imports
import numpy as np
import pandas as pd
import re
import os
import codecs
from sklearn import feature_extraction
import jieba

# Segment the title text with jieba
f1 = open("title.txt", "r", encoding='utf-8', errors='ignore')
f2 = open("title_fenci.txt", 'w', encoding='utf-8', errors='ignore')
for line in f1:
    seg_list = jieba.cut(line, cut_all=False)
    f2.write((" ".join(seg_list)).replace("\t\t\t", "\t"))
f1.close()
f2.close()

# Segment the summary text (called "content" here)
f1 = open("content.txt", "r", encoding='utf-8', errors='ignore')
f2 = open("content_fenci.txt", 'w', encoding='utf-8', errors='ignore')
for line in f1:
    seg_list = jieba.cut(line, cut_all=False)
    f2.write((" ".join(seg_list)).replace("\t\t\t", "\t"))
f1.close()
f2.close()

# Load the segmented title and content files
titles = open('title_fenci.txt', encoding='utf-8', errors='ignore').read().split('\n')
print(str(len(titles)) + ' titles')
contents = open('content_fenci.txt', encoding='utf-8', errors='ignore').read().split('\n')
contents = contents[:len(titles)]
print(str(len(contents)) + ' contents')

# Chinese stop words
def get_custom_stopwords(stop_words_file):
    with open(stop_words_file, encoding='utf-8') as f:
        stopwords = f.read()
    stopwords_list = stopwords.split('\n')
    custom_stopwords_list = [i for i in stopwords_list]
    return custom_stopwords_list

# Load the stop-word list (HIT stop-word table)
stop_words_file = "stopwordsHIT.txt"
stopwords = get_custom_stopwords(stop_words_file)

# Build the tf-idf matrix
from sklearn.feature_extraction.text import TfidfVectorizer

max_df = 0.8
min_df = 2
tfidf_vectorizer = TfidfVectorizer(max_df=max_df, min_df=min_df,
                                   max_features=200000,
                                   stop_words=stopwords,  # the Chinese stop-word list loaded above
                                   use_idf=True,
                                   token_pattern=u'(?u)\\b[^\\d\\W]\\w+\\b',
                                   ngram_range=(1, 2))

%time tfidf_matrix = tfidf_vectorizer.fit_transform(contents)
print(tfidf_matrix.shape)

# Get the feature terms
terms = tfidf_vectorizer.get_feature_names()

# kmeans clustering
from sklearn.cluster import KMeans

num_clusters = 6
km = KMeans(n_clusters=num_clusters)
%time km.fit(tfidf_matrix)
clusters = km.labels_.tolist()

# Save the kmeans model to a pkl file and load it back
from sklearn.externals import joblib  # on newer scikit-learn: import joblib

joblib.dump(km, 'y_cluster.pkl')
km = joblib.load('y_cluster.pkl')
clusters = km.labels_.tolist()
print(len(clusters))

# Put the results into a pandas DataFrame
films = {'title': titles, 'synopsis': contents, 'cluster': clusters}
frame = pd.DataFrame(films, index=[films['cluster']],
                     columns=['cluster', 'title', 'synopsis'])

# Count how many documents fall into each cluster
frame['cluster'].value_counts()

# Print the top terms of each cluster, ordered by distance to the cluster centroid
print("Top terms per cluster:")
print()
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
for i in range(num_clusters):
    print("Cluster %d words:" % i, end='')
    for ind in order_centroids[i, :50]:
        print(' %s' % terms[ind], end=',')
    print()
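As a follow-up, here is a minimal usage sketch of how the saved model could be applied to a new, unseen document: segment it the same way with jieba, vectorize it with the already-fitted tfidf_vectorizer, and let the kmeans model predict a cluster. It assumes the objects from the script above (tfidf_vectorizer, the y_cluster.pkl file) are still available; the sample sentence is purely illustrative.

# Minimal sketch: assign a new document to one of the learned clusters.
# Assumes the fitted tfidf_vectorizer from the script above is still in memory
# and that the kmeans model was saved as y_cluster.pkl; the sample text is made up.
import jieba
from sklearn.externals import joblib  # on newer scikit-learn: import joblib

km = joblib.load('y_cluster.pkl')

new_doc = "基于python的kmeans文本聚类小例子"  # hypothetical new document
segmented = " ".join(jieba.cut(new_doc, cut_all=False))

# transform (not fit_transform), so the new text is mapped onto the existing vocabulary
new_vec = tfidf_vectorizer.transform([segmented])

print("predicted cluster:", km.predict(new_vec)[0])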
