Python web scraper: downloading resume templates

Published: 2024/1/1

Overview: scrape the resume templates from 个人简历网 (http://www.gerenjianli.com/moban/index.html) and save them to a local folder.
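The listing pages follow a simple URL pattern: page 1 is index.html, and every later page is index_N.html. A tiny helper (hypothetical; the original script inlines this logic) makes the pattern explicit:

def page_url(n):
    # page 1 has no numeric suffix; pages 2, 3, ... are index_2.html, index_3.html, ...
    if n == 1:
        return 'http://www.gerenjianli.com/moban/index.html'
    return 'http://www.gerenjianli.com/moban/index_%d.html' % n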
Code:

import os

import requests
from lxml import etree

if __name__ == '__main__':
    headers = {
        # put your own browser's User-Agent string here
        'User-Agent': 'your browser UA string'
    }

    # create the output folder if it does not exist yet
    if not os.path.exists('./resumeLibs'):
        os.mkdir('./resumeLibs')

    # crawl the templates on listing pages 1-3
    for pagenum in range(1, 4):
        if pagenum == 1:
            url = 'http://www.gerenjianli.com/moban/index.html'
        else:
            url = 'http://www.gerenjianli.com/moban/index_' + str(pagenum) + '.html'

        response = requests.get(url=url, headers=headers)
        page_text = response.text

        # each <li> in the listing holds one template's link and name
        tree = etree.HTML(page_text)
        li_list = tree.xpath('//div[@class="list_boby"]/ul[@class="prlist"]/li')

        for li in li_list:
            a = li.xpath('./div/a/@href')[0]
            name = li.xpath('./div/a/img/@alt')[0]
            # the page is GBK-encoded but requests decodes it as ISO-8859-1,
            # so re-encode/decode to recover the Chinese template name
            name = name.encode('iso-8859-1').decode('gbk')

            # open the detail page and pull out the actual download link
            download_text = requests.get(url=a, headers=headers).text
            detail_tree = etree.HTML(download_text)
            download_href = detail_tree.xpath('//div[@class="donwurl2"]/a/@href')[0]

            # fetch the .docx file as bytes and write it to disk
            doc_data = requests.get(url=download_href, headers=headers).content
            doc_path = 'resumeLibs/' + name + '.docx'
            with open(doc_path, 'wb') as fp:
                fp.write(doc_data)
            print(name, 'downloaded!')
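As a possible hardening pass (not part of the original post), the per-template download could add a request timeout, a polite delay between requests, and filename sanitization, since alt text may contain characters that are illegal in file names. A minimal sketch, assuming the same page structure and the same resumeLibs folder:

import re
import time

import requests

def save_template(download_href, name, headers):
    # replace characters that are illegal in Windows/Unix file names
    safe_name = re.sub(r'[\\/:*?"<>|]', '_', name)
    # timeout keeps a stalled server from hanging the whole crawl
    doc_data = requests.get(url=download_href, headers=headers, timeout=10).content
    with open('resumeLibs/' + safe_name + '.docx', 'wb') as fp:
        fp.write(doc_data)
    time.sleep(1)  # be polite to the server between downloads

Calling save_template(download_href, name, headers) in place of the last few lines of the loop also keeps the crawl logic separate from the file I/O.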

總結(jié)

以上是生活随笔為你收集整理的python爬虫 爬取简历模板的全部?jī)?nèi)容,希望文章能夠幫你解決所遇到的問(wèn)題。

如果覺得生活随笔網(wǎng)站內(nèi)容還不錯(cuò),歡迎將生活随笔推薦給好友。