41. Scraping New-Home Data from the Official Lianjia Site with Scrapy
@Author:Runsen
Table of Contents
- Preface
- Analyzing the Page
- Creating the Project
- Adding Request Headers
- Defining the Item
- Debugging the List Page
- Debugging the Detail Page
- Saving to JSON
Preface
A few days ago I picked up a freelance scraping job from a college student. The assignment: use Scrapy to crawl new-home data from the official Lianjia website (3-5 pages is plenty; any more and you risk an IP ban). URL: https://bj.fang.lianjia.com/loupan/. The development name, price, floor area, and so on (extensible) must be saved to a single JSON file.
For 50 yuan, enough talk — straight to work. And I'm not even a computer-science student, just a lowly third-year chemical engineering undergrad.
Analyzing the Page
So today I'll show you how to scrape Lianjia with Scrapy. The target page: http://bj.fang.lianjia.com/loupan/.
Click into one of the listings to see what a detail page looks like: https://bj.fang.lianjia.com/loupan/p_zjtfbkrhf/?fb_expo_id=303816048586158080
Creating the Project
Creating the project and the spider file is trivial — the same old routine — so I'll skip the details. The new project looks like this.
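(The original post showed a screenshot here. A rough reconstruction of the commands and layout, inferred from the module paths `lianjia.middlewares`/`lianjia.pipelines` and the spider name `spider` used later in this post:)

```
scrapy startproject lianjia
cd lianjia
scrapy genspider spider bj.fang.lianjia.com

lianjia/
├── scrapy.cfg
└── lianjia/
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        └── spider.py
```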
Adding Request Headers
Step one — everybody knows you add request headers first.
Add MY_USER_AGENT to settings.py:
MY_USER_AGENT = ["Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)","Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)","Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)","Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)","Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)","Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)","Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)","Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6","Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1","Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0","Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11","Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20","Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER","Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)","Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)","Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 
Safari/537.1","Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5","Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre","Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11","Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10","Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36", ]然后去Middleware.py去搞一個RandomUserAgentMiddleware,這代碼找之前的scrapy爬蟲項目直接復制,簡單。
```python
import random


class RandomUserAgentMiddleware(object):
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # Load MY_USER_AGENT from settings.py
        s = cls(user_agents=crawler.settings.get('MY_USER_AGENT'))
        return s

    def process_request(self, request, spider):
        # Attach a randomly chosen User-Agent to every outgoing request
        agent = random.choice(self.user_agents)
        request.headers['User-Agent'] = agent
        return None
```

Then enable it in settings.py. Remember the 900 priority — same old routine.
```python
DOWNLOADER_MIDDLEWARES = {
    # 'lianjia.middlewares.LianjiaDownloaderMiddleware': 543,
    'lianjia.middlewares.RandomUserAgentMiddleware': 900,
}
```

OK, request headers sorted. All copy-pasted code — three minutes.
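If you want to confirm the rotation actually works — this quick check is my own addition, not part of the original assignment — log the header that each request really went out with:

```python
# Hypothetical sanity check: drop this into any spider callback to see
# the User-Agent the middleware attached to the outgoing request.
def parse_page(self, response):
    ua = response.request.headers.get('User-Agent')  # bytes or None
    self.logger.info('UA used: %s', ua.decode('utf-8') if ua else 'none')
```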
Defining the Item
The item is where the scraped data lives: development name, type, location, price, and floor area. Two minutes of typing.
```python
import scrapy

'''
Goal: scrape new-home data from the official Lianjia site
      (3-5 pages is enough; more risks an IP ban)
URL: https://bj.fang.lianjia.com/loupan/
Requirement: save development name, price, floor area, etc. (extensible)
             to a single JSON file
Deliverable: the whole project as an archive
'''


class LianjiaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    """Development name, type, location, price, floor area"""
    name = scrapy.Field()
    type = scrapy.Field()
    location = scrapy.Field()
    price = scrapy.Field()
    number = scrapy.Field()
```

Debugging the List Page
From the list page we need to pull out the detail-page URLs and the floor areas. Run scrapy shell https://bj.fang.lianjia.com/loupan to debug interactively.
The links turn out to be <a> tags inside a div with class="resblock-name" — from there it's child's play.
The hrefs carry no domain, so the response.follow shortcut does the job: it joins the relative URL onto the domain automatically.
The floor area is just as easy: it sits in a span inside a div with class="resblock-area" — same drill as above.
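The shell session went roughly like this — the two XPath expressions are exactly the ones that end up in the spider below; the comments describe the markup, not literal output:

```python
# inside: scrapy shell https://bj.fang.lianjia.com/loupan

# Detail-page links: <div class="resblock-name"><a href="/loupan/p_.../">
response.xpath("//div[@class='resblock-name']/a/@href").extract()
# -> relative paths; response.follow will prepend the domain

# Floor areas: <div class="resblock-area"><span>...</span></div>
response.xpath("//div[@class='resblock-area']/span/text()").extract()
# -> one area string per listing, same order as the links above
```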
Debugging the Detail Page
Next up, the detail page: scrapy shell 'https://bj.fang.lianjia.com/loupan/p_zjtfbkrhf/?fb_expo_id=303816048586158080' (quote the URL so your shell doesn't mangle the query string).
First target: dig out the development name, debugging step by step.
Type, location, price and the rest I won't walk through one by one — it's just patient scrapy shell debugging. This part took a good half hour.
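For the record, here is what that half hour of shell poking produced — the exact expressions reused in the final spider:

```python
# inside: scrapy shell 'https://bj.fang.lianjia.com/loupan/p_zjtfbkrhf/?fb_expo_id=303816048586158080'

# development name
response.xpath("//div[@class='title-wrap']//h2/text()").extract_first()
# type
response.xpath("//div[@class='tags-wrap']/span[@class='tag-item house-type-tag']/text()").extract_first()
# location
response.xpath("//ul[@class='info-list']//span[@class='content']/text()").extract_first()
# price
response.xpath("//div[@class='price']/span[@class='price-number']/text()").extract_first()
```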
Finally, just read the code.
```python
import scrapy
from ..items import LianjiaItem


class SpiderSpider(scrapy.Spider):
    name = 'spider'
    allowed_domains = ['bj.fang.lianjia.com']
    start_urls = ['http://bj.fang.lianjia.com/loupan/']

    def parse(self, response):
        # Walk the paginated list pages (21 here, though the brief asked for 3-5)
        for i in range(1, 22):
            page_url = 'https://bj.fang.lianjia.com/loupan/pg{}'.format(i)
            yield response.follow(page_url, callback=self.parse_page)

    def parse_page(self, response):
        urls = response.xpath("//div[@class='resblock-name']/a/@href").extract()
        nums = response.xpath("//div[@class='resblock-area']/span/text()").extract()
        print(urls)
        print(nums)
        for num, url in zip(nums, urls):
            # response.follow joins the relative href automatically;
            # the area string rides along to the detail callback via meta
            yield response.follow(url=url, meta={'num': num}, callback=self.parse_detail)

    def parse_detail(self, response):
        item = LianjiaItem()
        item['name'] = response.xpath("//div[@class='title-wrap']//h2/text()").extract_first()
        self.logger.info('Scraping {}...'.format(item['name']))
        item['type'] = response.xpath("//div[@class='tags-wrap']/span[@class='tag-item house-type-tag']/text()").extract_first()
        item['location'] = response.xpath("//ul[@class='info-list']//span[@class='content']/text()").extract_first()
        item['price'] = response.xpath("//div[@class='price']/span[@class='price-number']/text()").extract_first()
        item['number'] = response.meta['num']
        yield item
```

Saving to JSON
The deliverable is JSON, so the -o flag is the lazy way out — no Pipeline code needed at all.
I still switched the Pipeline on in settings.py (with -o it isn't strictly necessary, but it doesn't hurt):
```python
ITEM_PIPELINES = {
    'lianjia.pipelines.LianjiaPipeline': 300,
}
```

To run it, create a main.py that executes scrapy crawl spider -o spider.json:
```python
'''
@Author: Runsen
@WeChat official account: 润森笔记
@Blog: https://blog.csdn.net/weixin_44510615
@Date: 2020/4/13
'''
import sys
import os

from scrapy.cmdline import execute

sys.path.append(os.path.dirname(os.path.abspath(__file__)))
# '-o' and the output filename must be separate argv entries
execute(['scrapy', 'crawl', 'spider', '-o', 'spider.json'])
```

The first run produced escaped, unreadable Chinese in the output. A look back at my earlier Scrapy post (https://maoli.blog.csdn.net/article/details/89012106) reminded me to add FEED_EXPORT_ENCODING = 'utf-8' to settings, and that fixed it.
```python
FEED_EXPORT_ENCODING = 'utf-8'
```

Run again — OK.
A bit over an hour's work, 50 yuan in the pocket. Finally, apologies to the client: I'm publishing the code — if your teacher spots it, you're in trouble.