
Python: Crawling the URLs of Your Own Blog Posts

Published: 2025/3/21 · python · 豆豆

This article introduces how to crawl the URLs of your own CSDN blog posts with Python; it is shared here for reference.

Code

# -*- coding: utf8 -*-
# Python 2 script (uses urllib2).
import urllib2
import re
import time
import random

# A pool of User-Agent strings to rotate between requests.
USER_AGENTS = [
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11',
    'Opera/9.25 (Windows NT 5.1; U; en)',
    'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)',
    'Mozilla/5.0 (compatible; Konqueror/3.5; Linux) KHTML/3.5.5 (like Gecko) (Kubuntu)',
    'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.12) Gecko/20070731 Ubuntu/dapper-security Firefox/1.5.0.12',
    'Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 GNUTLS/1.2.9',
    'Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.7 (KHTML, like Gecko) Ubuntu/11.04 Chromium/16.0.912.77 Chrome/16.0.912.77 Safari/535.7',
    'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0',
]


class CSDN_Spider:
    def __init__(self, url):
        self.myUrl = url
        self.datas = []
        print u"Spider started..."

    def build_request(self, url, referer):
        """Build a request with a random User-Agent (helper factored out of the
        duplicated header code in the original)."""
        req = urllib2.Request(url)
        req.add_header('User-Agent', random.choice(USER_AGENTS))
        req.add_header('Host', 'blog.csdn.net')
        req.add_header('Accept', '*/*')
        req.add_header('Referer', referer)
        return req

    def csdn(self):
        url = self.myUrl + "?viewmode=list"
        req = self.build_request(url, 'http://blog.csdn.net/djd1234567?viewmode=contents')
        mypage = urllib2.urlopen(req).read().decode("utf8")
        Pagenum = self.page_counter(mypage)
        self.find_data(self.myUrl, Pagenum)

    def page_counter(self, mypage):
        # Matches the "last page" link, e.g. <a href="/yangshangwei/article/list/11">尾頁</a>
        myMatch = re.search(u'/article/list/(\d+?)">尾頁</a>', mypage, re.S)
        if myMatch:
            Pagenum = int(myMatch.group(1))
            print u"Spider report: the index has %d pages" % Pagenum
        else:
            Pagenum = 0
            print u"Spider report: could not determine the page count"
        return Pagenum

    def find_data(self, myurl, Pagenum):
        username = myurl.split("/")[-1]
        f = open(username + '.txt', 'w+')
        for i in range(1, Pagenum + 1):
            print u"Spider report: loading page %d......" % i
            url = myurl + "/article/list/" + str(i)
            req = self.build_request(url, url)
            mypage = urllib2.urlopen(req).read().decode("utf8")
            myItems = re.findall(u'"><a href="/' + username + '/article/details/(\d+?)" title="', mypage, re.S)
            for item in myItems:
                # Build the URL from myurl instead of a hard-coded blog
                # address (a bug in the original).
                self.datas.append(myurl + "/article/details/" + item + "\n")
            time.sleep(1)  # be polite to the server
        f.writelines(self.datas)
        f.close()
        print u"Spider report: txt file generated; see the current directory"


url = "http://blog.csdn.net/yangshangwei"
mySpider = CSDN_Spider(url)
mySpider.csdn()
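The script above targets Python 2 (urllib2). As a rough sketch of the same idea in Python 3, the core pieces are a fetch helper that rotates User-Agent strings via urllib.request and a regex that pulls article IDs out of the listing-page HTML. The User-Agent strings and the CSDN URL layout here are assumptions carried over from the original; CSDN's actual markup may have changed since.

```python
import random
import re
import urllib.request

# Hypothetical modern User-Agent pool; any valid strings would do.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
]

def fetch(url):
    """Fetch a page with a randomly chosen User-Agent (Python 3 urllib)."""
    req = urllib.request.Request(url, headers={"User-Agent": random.choice(USER_AGENTS)})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

def extract_article_ids(html, username):
    """Pull article IDs out of listing-page HTML, mirroring the original regex."""
    return re.findall(r'/%s/article/details/(\d+)' % re.escape(username), html)

# Example against a synthetic snippet (no network needed):
html = '<a href="/yangshangwei/article/details/12345" title="t">x</a>'
print(extract_article_ids(html, "yangshangwei"))  # ['12345']
```

With these two helpers, the crawl loop reduces to fetching each `/article/list/<i>` page and collecting `extract_article_ids(...)` results into a file, as in the original.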

Run

Summary

That is the whole of "Python: Crawling the URLs of Your Own Blog Posts"; hopefully it helps you solve the problem you ran into.
