Python3 Scrapy Tutorial, Part 4: Storing Scraped Data in MongoDB
1. Create a new Scrapy project (from a terminal, scrapy startproject maoyan creates the skeleton used in the following steps):
2. Open the project in PyCharm.
3. Add the following to settings.py:

# Spoof a browser User-Agent to get past basic anti-scraping checks
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
# Avoid garbled characters when exporting the scraped data
FEED_EXPORT_ENCODING = 'gbk'
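Note that FEED_EXPORT_ENCODING only affects feed exports (e.g. files produced with -o result.csv); 'gbk' is the legacy encoding of Chinese-locale Windows, so exported files open cleanly there, while 'utf-8' is the usual choice elsewhere. As a quick sanity check that GBK can represent the Chinese text this tutorial scrapes:

```python
# Round-trip a sample of the scraped text through GBK. This would raise
# UnicodeEncodeError for characters GBK cannot represent.
text = '猫眼电影 Top100'  # "Maoyan movies Top 100"
encoded = text.encode('gbk')
print(encoded.decode('gbk') == text)  # True
```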
4. Define the item fields in items.py:

from scrapy import Item, Field

class MaoyanItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    movie = Field()    # movie title
    actor = Field()    # starring actors
    release = Field()  # release date
    score = Field()    # Maoyan rating
5. Open maoyanTop100.py in the spiders folder and add the following code:

import time
import scrapy
from maoyan.items import MaoyanItem

class Maoyantop100Spider(scrapy.Spider):
    name = 'maoyanTop100'
    # allowed_domains = ['maoyan.com/board/4']
    allowed_domains = ['maoyan.com']  # must be a bare domain, otherwise the next-page requests are filtered out
    start_urls = ['http://maoyan.com/board/4/']

    def parse(self, response):
        context = response.css('dd')  # inspection shows every movie entry sits inside a <dd> tag
        for info in context:
            item = MaoyanItem()
            item['movie'] = info.css('p.name a::text').extract_first().strip()
            item['actor'] = info.css('.star::text').extract_first().strip()
            item['release'] = info.css('.releasetime::text').extract_first().strip()
            score = info.css('i.integer::text').extract_first().strip()
            score += info.css('i.fraction::text').extract_first().strip()  # the rating is split across two tags
            item['score'] = score
            yield item
        time.sleep(1)  # pause one second between pages as a crude anti-ban measure
        next_page = response.css('li a::attr("href")').extract()[-1]  # link to the next page
        url = response.urljoin(next_page)
        yield scrapy.Request(url=url, callback=self.parse)  # parse the next page
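The response.urljoin call above resolves the relative href of the "next page" link against the current page URL; it follows the semantics of the standard library's urljoin. A minimal sketch (the '?offset=10' value is a hypothetical example of what such a pagination href might look like):

```python
from urllib.parse import urljoin

# Resolve a relative next-page href against the current board page,
# the same way response.urljoin does inside the spider.
page_url = 'http://maoyan.com/board/4/'
next_href = '?offset=10'  # hypothetical relative link extracted from the page
print(urljoin(page_url, next_href))  # http://maoyan.com/board/4/?offset=10
```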
6. Run the following command in the Terminal pane: scrapy crawl maoyanTop100
7. The output shows the crawl succeeded:
8. Add the following code to pipelines.py:

import pymongo

class MongoPipeline(object):
    def __init__(self, mongo_url, mongo_db):
        self.mongo_url = mongo_url
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # pull the connection settings defined in settings.py
        return cls(
            mongo_url=crawler.settings.get('MONGO_URL'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_url)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        name = item.__class__.__name__  # the collection is named after the item class
        self.db[name].insert_one(dict(item))  # insert() is deprecated in pymongo; use insert_one()
        return item

    def close_spider(self, spider):
        self.client.close()

class MaoyanPipeline(object):
    def process_item(self, item, spider):
        return item
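In process_item, the MongoDB collection name is taken from the item's class name, and the item is serialised with dict() before insertion. A minimal sketch of that naming logic, using a plain dict subclass as a stand-in for the Scrapy Item (so it runs without Scrapy or MongoDB):

```python
# Stand-in for scrapy.Item, which also behaves like a mapping; this only
# illustrates how the pipeline derives the collection name and the document.
class MaoyanItem(dict):
    pass

item = MaoyanItem(movie='霸王别姬', score='9.6')
collection_name = item.__class__.__name__  # used as the MongoDB collection name
print(collection_name)      # MaoyanItem
print(dict(item)['score'])  # 9.6
```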
9. Enable the pipeline by adding the following to settings.py:

ITEM_PIPELINES = {
    'maoyan.pipelines.MongoPipeline': 300,
}
MONGO_URL = 'localhost'
MONGO_DB = 'maoyan'
10. Run the crawl again from the Terminal pane: scrapy crawl maoyanTop100
11. Open the Robo 3T client: the maoyan database now appears in MongoDB, confirming that the scraped data was saved successfully.