
scrapy two

發(fā)布時(shí)間:2024/7/5 编程问答 33 豆豆
生活随笔 收集整理的這篇文章主要介紹了 scrapy two 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

 

1. Scrapy log levels

  - When you run a spider with scrapy crawl spiderFileName, the output printed in the terminal is Scrapy's log information.

  - Log level types:

        ERROR: general errors

        WARNING: warnings

        INFO: general information

        DEBUG: debugging information


  - 設(shè)置日志信息指定輸出:

    在settings.py配置文件中,加入

????????????????????LOG_LEVEL = ‘指定日志信息種類’即可。

????????????????????LOG_FILE = 'log.txt'則表示將日志信息寫入到指定文件中進(jìn)行存儲(chǔ)。
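
A minimal settings.py sketch of the two options above (the file name log.txt is only an example path):

# settings.py
# Only show messages at ERROR level or above; DEBUG, INFO and WARNING output is suppressed.
LOG_LEVEL = 'ERROR'
# Write the log to a file instead of the terminal.
LOG_FILE = 'log.txt'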

二.請(qǐng)求傳參

  - In some cases the data we want to scrape is not all on the same page. For example, when crawling a movie site, the movie name and rating are on the first-level (list) page, while the remaining details are on a second-level detail page. In that situation we need to pass data along with the request.

  - Case study: crawl the movie site www.id97.com, extracting the movie name, genre and rating from the first-level page, and the release date, director and running time from the second-level detail page.

  Spider file:

# -*- coding: utf-8 -*-
import scrapy
from moviePro.items import MovieproItem

class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['www.id97.com']
    start_urls = ['http://www.id97.com/']

    def parse(self, response):
        div_list = response.xpath('//div[@class="col-xs-1-5 movie-item"]')
        for div in div_list:
            item = MovieproItem()
            item['name'] = div.xpath('.//h1/a/text()').extract_first()
            item['score'] = div.xpath('.//h1/em/text()').extract_first()
            # xpath('string(.)') extracts the text of the current node and all of its descendants
            item['kind'] = div.xpath('.//div[@class="otherinfo"]').xpath('string(.)').extract_first()
            item['detail_url'] = div.xpath('./div/a/@href').extract_first()
            # Request the second-level detail page and hand the item to the callback via the meta parameter
            yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, meta={'item': item})

    def parse_detail(self, response):
        # Retrieve the item from response.meta
        item = response.meta['item']
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        item['time'] = response.xpath('//div[@class="row"]//table/tr[7]/td[2]/text()').extract_first()
        item['long'] = response.xpath('//div[@class="row"]//table/tr[8]/td[2]/text()').extract_first()
        # Submit the item to the pipeline
        yield item
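
The meta dictionary works, but Scrapy 1.7 and later also offer cb_kwargs for passing objects to a callback. A minimal sketch of the same hand-off using cb_kwargs (only the two callbacks are shown; the rest of the spider is unchanged):

    def parse(self, response):
        # ... build the item exactly as above ...
        # cb_kwargs delivers the item to parse_detail as a keyword argument
        yield scrapy.Request(url=item['detail_url'], callback=self.parse_detail, cb_kwargs={'item': item})

    def parse_detail(self, response, item):
        # The item arrives directly as an argument instead of via response.meta
        item['actor'] = response.xpath('//div[@class="row"]//table/tr[1]/a/text()').extract_first()
        yield item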

  Items file:

# -*- coding: utf-8 -*-
# Define here the models for your scraped items
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class MovieproItem(scrapy.Item):
    # define the fields for your item here like:
    name = scrapy.Field()
    score = scrapy.Field()
    time = scrapy.Field()
    long = scrapy.Field()
    actor = scrapy.Field()
    kind = scrapy.Field()
    detail_url = scrapy.Field()

  Pipeline file:

# -*- coding: utf-8 -*-
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json

class MovieproPipeline(object):
    def __init__(self):
        self.fp = open('data.txt', 'w')

    def process_item(self, item, spider):
        dic = dict(item)
        print(dic)
        json.dump(dic, self.fp, ensure_ascii=False)
        return item

    def close_spider(self, spider):
        self.fp.close()
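
Note that this pipeline only runs if it is enabled in the project settings. A minimal sketch, assuming the default project layout where the class above lives in moviePro/pipelines.py:

# settings.py
ITEM_PIPELINES = {
    'moviePro.pipelines.MovieproPipeline': 300,  # lower numbers run earlier
}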

3. How to improve Scrapy's crawling efficiency

- Increase concurrency: Scrapy allows 16 concurrent requests by default, and this can be raised. In the settings file set CONCURRENT_REQUESTS = 100 to allow 100 concurrent requests.
- Lower the log level: running Scrapy produces a large amount of log output; to reduce CPU usage, restrict it to INFO or ERROR. In the settings file: LOG_LEVEL = 'INFO'
- Disable cookies: if cookies are not actually needed, disabling them during the crawl reduces CPU usage and improves efficiency. In the settings file: COOKIES_ENABLED = False
- Disable retries: re-requesting failed HTTP requests (retries) slows the crawl down, so retries can be turned off. In the settings file: RETRY_ENABLED = False
- Reduce the download timeout: when crawling very slow links, a shorter timeout lets stuck requests be abandoned quickly, improving efficiency. In the settings file: DOWNLOAD_TIMEOUT = 10 sets the timeout to 10 s.
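
For reference, a minimal settings.py sketch collecting the five options above (the values are the ones mentioned in this section and can be tuned per project):

# settings.py -- efficiency-related options
CONCURRENT_REQUESTS = 100   # raise concurrency from the default of 16
LOG_LEVEL = 'INFO'          # reduce log output to cut CPU usage
COOKIES_ENABLED = False     # skip cookie handling when it is not needed
RETRY_ENABLED = False       # do not retry failed requests
DOWNLOAD_TIMEOUT = 10       # give up on slow responses after 10 seconds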

測(cè)試案例:爬取校花網(wǎng)校花圖片 www.521609.com

# -*- coding: utf-8 -*-
# Spider file
import scrapy
from xiaohua.items import XiaohuaItem

class XiahuaSpider(scrapy.Spider):
    name = 'xiaohua'
    allowed_domains = ['www.521609.com']
    start_urls = ['http://www.521609.com/daxuemeinv/']

    pageNum = 1
    url = 'http://www.521609.com/daxuemeinv/list8%d.html'

    def parse(self, response):
        li_list = response.xpath('//div[@class="index_img list_center"]/ul/li')
        for li in li_list:
            school = li.xpath('./a/img/@alt').extract_first()
            img_url = li.xpath('./a/img/@src').extract_first()
            item = XiaohuaItem()
            item['school'] = school
            item['img_url'] = 'http://www.521609.com' + img_url
            yield item

        if self.pageNum < 10:
            self.pageNum += 1
            url = format(self.url % self.pageNum)
            # print(url)
            yield scrapy.Request(url=url, callback=self.parse)


# -*- coding: utf-8 -*-
# Items file
# Define here the models for your scraped items
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class XiaohuaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    school = scrapy.Field()
    img_url = scrapy.Field()


# -*- coding: utf-8 -*-
# Pipeline file
# Define your item pipelines here
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import json
import os
import urllib.request

class XiaohuaPipeline(object):
    def __init__(self):
        self.fp = None

    def open_spider(self, spider):
        print('Spider started')
        self.fp = open('./xiaohua.txt', 'w')

    def download_img(self, item):
        url = item['img_url']
        fileName = item['school'] + '.jpg'
        if not os.path.exists('./xiaohualib'):
            os.mkdir('./xiaohualib')
        filepath = os.path.join('./xiaohualib', fileName)
        urllib.request.urlretrieve(url, filepath)
        print(fileName + ' downloaded successfully')

    def process_item(self, item, spider):
        obj = dict(item)
        json_str = json.dumps(obj, ensure_ascii=False)
        self.fp.write(json_str + '\n')
        # Download the image
        self.download_img(item)
        return item

    def close_spider(self, spider):
        print('Spider finished')
        self.fp.close()

Settings file:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 100
COOKIES_ENABLED = False
LOG_LEVEL = 'ERROR'
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 3

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
DOWNLOAD_DELAY = 3

轉(zhuǎn)載于:https://www.cnblogs.com/marry215464/p/10477182.html
