
Scraping dynamic data with scrapy-splash: Example 8

發(fā)布時(shí)間:2025/6/15 编程问答 23 豆豆
生活随笔 收集整理的這篇文章主要介紹了 scrapy-splash抓取动态数据例子八 小編覺得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

1. Introduction

    This example uses scrapy-splash to scrape news articles from the Jiemian (界面) news site for a given set of search keywords.

    Keywords: 个性化 (personalization); 融合 (convergence); 电视 (television)

    The fields scraped for each article are listed below (an item-class sketch follows the list):

      1. Article title

      2. Article URL

      3. Article date

      4. Article source
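    The spider in section 4 stores these fields in a SplashTestItem imported from splash_test.items. The item class itself is not shown in the original post; a minimal sketch, assuming the field names the spider assigns (title, url, date, keyword, source), could look like this:

# items.py -- not shown in the original post; field names inferred from the spider in section 4
import scrapy

class SplashTestItem(scrapy.Item):
    title = scrapy.Field()    # article title
    url = scrapy.Field()      # article link
    date = scrapy.Field()     # publication date
    keyword = scrapy.Field()  # search keyword that matched the article
    source = scrapy.Field()   # article source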


2. Website Information

    

    [Screenshots of the Jiemian search-results page and its HTML appeared here in the original post but were lost in extraction; the relevant markup is described in section 3 below.]

3. Data Scraping

    Based on the page structure above, the data is extracted as follows (a quick verification sketch follows this list):

    1. Select the list of news entries

      Code: sels = site.xpath('//div[contains(@class,"news-view")]')

    2. Extract the title

      Code: title = sel.xpath('.//div[@class="news-header"]/h3/a/@title')[0].extract()

    3. Extract the link

      Code: it['url'] = sel.xpath('.//div[@class="news-header"]/h3/a/@href')[0].extract()

    4. Extract the date

      Code: dates = sel.xpath('.//div[@class="news-footer"]/p/span[2]/text()')

    5. Extract the source

      Code: sources = sel.xpath('.//div[@class="news-footer"]/p/span[1]/a/text()')
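    These XPaths can be checked outside the spider before wiring them into the parse callback. A minimal sketch, using a hand-written HTML fragment that mimics the structure described above (the real Jiemian markup may differ):

# Quick XPath check against a hand-written fragment that mimics the
# page structure described above (the actual Jiemian markup may differ).
from scrapy.selector import Selector

html = '''
<div class="news-view">
  <div class="news-header"><h3><a href="/article/1.html" title="Example title">Example title</a></h3></div>
  <div class="news-footer"><p><span><a href="/lists/4.html">Example source</a></span><span>2017/06/15</span></p></div>
</div>
'''

site = Selector(text=html)
for sel in site.xpath('//div[contains(@class,"news-view")]'):
    print(sel.xpath('.//div[@class="news-header"]/h3/a/@title')[0].extract())       # title
    print(sel.xpath('.//div[@class="news-header"]/h3/a/@href')[0].extract())        # link
    print(sel.xpath('.//div[@class="news-footer"]/p/span[2]/text()')[0].extract())  # date
    print(sel.xpath('.//div[@class="news-footer"]/p/span[1]/a/text()')[0].extract())  # source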


4. Complete Code

# -*- coding: utf-8 -*-
import os
import re
import sys
import time

import IniFile
from scrapy.selector import Selector
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest

from splash_test.items import SplashTestItem

reload(sys)
sys.setdefaultencoding('utf-8')

# sys.stdout = open('output.txt', 'w')


class jiemianSpider(Spider):
    name = 'jiemian'

    # Read the search keywords and the search URL template from setting.conf
    configfile = os.path.join(os.getcwd(), 'splash_test', 'spiders', 'setting.conf')
    cf = IniFile.ConfigFile(configfile)
    information_keywords = cf.GetValue("section", "information_keywords")
    information_wordlist = information_keywords.split(';')
    websearchurl = cf.GetValue("jiemian", "websearchurl")

    # Build one start URL per keyword
    start_urls = []
    for word in information_wordlist:
        print websearchurl + word
        start_urls.append(websearchurl + word)

    def start_requests(self):
        # Requests must be wrapped as SplashRequest so that Splash renders the page
        for url in self.start_urls:
            index = url.rfind('=')
            yield SplashRequest(url, self.parse, args={'wait': '2'},
                                meta={'keyword': url[index + 1:]})

    def compare_to_days(self, leftdate, rightdate):
        '''
        Compare two date strings and return how many days leftdate is ahead of rightdate.
        :param leftdate: format 2017-04-15
        :param rightdate: format 2017-04-15
        :return: number of days
        '''
        l_time = time.mktime(time.strptime(leftdate, '%Y-%m-%d'))
        r_time = time.mktime(time.strptime(rightdate, '%Y-%m-%d'))
        result = int(l_time - r_time) / 86400
        return result

    def date_isValid(self, strDateText):
        # Only articles published today are kept
        currentDate = time.strftime('%Y-%m-%d')
        datePattern = re.compile(r'\d{4}-\d{1,2}-\d{1,2}')
        dt = strDateText.replace('/', '-')
        strDate = re.findall(datePattern, dt)
        if len(strDate) == 1:
            if self.compare_to_days(currentDate, strDate[0]) == 0:
                return True, currentDate
        return False, ''

    def parse(self, response):
        site = Selector(response)
        sels = site.xpath('//div[contains(@class,"news-view")]')
        keyword = response.meta['keyword']
        item_list = []
        for sel in sels:
            dates = sel.xpath('.//div[@class="news-footer"]/p/span[2]/text()')
            flag, date = self.date_isValid(dates[0].extract())
            title = sel.xpath('.//div[@class="news-header"]/h3/a/@title')[0].extract()
            # Keep only today's articles whose title contains the search keyword
            if flag and title.find(keyword) > -1:
                it = SplashTestItem()
                it['title'] = title
                it['url'] = sel.xpath('.//div[@class="news-header"]/h3/a/@href')[0].extract()
                it['date'] = date
                it['keyword'] = keyword
                sources = sel.xpath('.//div[@class="news-footer"]/p/span[1]/a/text()')
                if len(sources) > 0:
                    it['source'] = sources[0].extract()
                item_list.append(it)
        return item_list
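    For SplashRequest to work, the project also needs the standard scrapy-splash configuration, which the original post does not show. A minimal sketch of the settings.py additions, assuming Splash is running locally on port 8050 (this mirrors the setup documented by scrapy-splash):

# settings.py -- scrapy-splash wiring (not shown in the original post)
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'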

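    The spider also reads its keywords and search URL from splash_test\spiders\setting.conf through the author's IniFile helper, neither of which is shown in the post. A hypothetical conf file with the section and key names the spider expects, using a placeholder search URL (the real one is not given), might look like:

; setting.conf -- hypothetical example; the real search URL and the IniFile helper are not shown in the post
[section]
information_keywords=个性化;融合;电视

[jiemian]
websearchurl=http://www.jiemian.com/search.html?keywords=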
