
Python 3 Scrapy Framework Tutorial, Part 4: Storing Scraped Data in MongoDB


1. Create a new Scrapy project:
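The project and spider names below (maoyan, maoyanTop100) are inferred from the code in the later steps; a typical way to create them from the command line would be:

    scrapy startproject maoyan
    cd maoyan
    scrapy genspider maoyanTop100 maoyan.com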

2. Open the project in PyCharm.

3. Add the following code to settings.py:

    # Spoof a browser User-Agent to get past basic anti-scraping checks
    USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

    # Avoid garbled characters when exporting the scraped data
    FEED_EXPORT_ENCODING = 'gbk'

4. Define the fields in items.py:

    from scrapy import Item, Field

    class MaoyanItem(Item):
        # define the fields for your item here like:
        # name = scrapy.Field()
        movie = Field()    # movie title
        actor = Field()    # starring actors
        release = Field()  # release date
        score = Field()    # Maoyan rating
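A MaoyanItem behaves like a dictionary, which is why the pipeline in step 8 can simply call dict(item). A quick illustration (the field values here are placeholders):

    from maoyan.items import MaoyanItem

    item = MaoyanItem(movie='some title')  # fields can be set at construction time...
    item['score'] = '9.5'                  # ...or assigned like dictionary keys
    print(dict(item))                      # {'movie': 'some title', 'score': '9.5'}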

5. Open maoyanTop100.py under the spiders folder and add the following code:

    import time

    import scrapy

    from maoyan.items import MaoyanItem

    class Maoyantop100Spider(scrapy.Spider):
        name = 'maoyanTop100'
        # allowed_domains = ['maoyan.com/board/4']
        allowed_domains = ['maoyan.com']  # must be the bare domain, or requests for the next page get filtered out
        start_urls = ['http://maoyan.com/board/4/']

        def parse(self, response):
            context = response.css('dd')  # inspection shows every movie entry sits inside a <dd> tag
            for info in context:
                item = MaoyanItem()
                item['movie'] = info.css('p.name a::text').extract_first().strip()
                item['actor'] = info.css('.star::text').extract_first().strip()
                item['release'] = info.css('.releasetime::text').extract_first().strip()
                score = info.css('i.integer::text').extract_first().strip()
                score += info.css('i.fraction::text').extract_first().strip()
                item['score'] = score
                yield item

            time.sleep(1)  # pause for a second to go easy on the anti-scraping checks
            next_page = response.css('li a::attr("href")').extract()[-1]  # the last link in the pager is "next page"
            url = response.urljoin(next_page)
            yield scrapy.Request(url=url, callback=self.parse)  # parse the next page
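If one of the selectors ever comes back empty, the quickest way to debug it is Scrapy's interactive shell, for example:

    scrapy shell 'http://maoyan.com/board/4/'
    >>> response.css('dd')                              # one selector per movie entry
    >>> response.css('p.name a::text').extract_first()  # first movie title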

6. Run the spider from the Terminal pane:

    scrapy crawl maoyanTop100

7. The log output shows the items being scraped successfully.
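Note that the FEED_EXPORT_ENCODING setting from step 3 only takes effect when you export a feed. If you also want the data in a local file (the filename here is just an example), you could run:

    scrapy crawl maoyanTop100 -o maoyan.csv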

8. Add the following code to pipelines.py:

    import pymongo

    class MongoPipeline(object):

        def __init__(self, mongo_url, mongo_db):
            self.mongo_url = mongo_url
            self.mongo_db = mongo_db

        @classmethod
        def from_crawler(cls, crawler):
            # read the connection settings defined in settings.py
            return cls(
                mongo_url=crawler.settings.get('MONGO_URL'),
                mongo_db=crawler.settings.get('MONGO_DB')
            )

        def open_spider(self, spider):
            self.client = pymongo.MongoClient(self.mongo_url)
            self.db = self.client[self.mongo_db]

        def process_item(self, item, spider):
            name = item.__class__.__name__  # collection name, i.e. 'MaoyanItem'
            self.db[name].insert_one(dict(item))
            return item

        def close_spider(self, spider):
            self.client.close()

    class MaoyanPipeline(object):

        def process_item(self, item, spider):
            return item

9. Add the following to settings.py:

    ITEM_PIPELINES = {
        'maoyan.pipelines.MongoPipeline': 300,
    }

    MONGO_URL = 'localhost'
    MONGO_DB = 'maoyan'
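The 300 is the pipeline's priority: pipelines run in ascending order of this number, which by convention lies between 0 and 1000. MONGO_URL = 'localhost' means pymongo connects to MongoDB's default port 27017 on the local machine.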

10. Run the crawl command again from the Terminal pane:

    scrapy crawl maoyanTop100

11. Open the Robo 3T client and you can see the maoyan database in MongoDB, which confirms that the scraped data was stored successfully.
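If you don't have Robo 3T installed, a minimal check with pymongo works just as well (assuming MongoDB is running locally with the settings above):

    import pymongo

    client = pymongo.MongoClient('localhost')
    db = client['maoyan']
    collection = db['MaoyanItem']  # the pipeline names the collection after the item class

    print(collection.count_documents({}))  # should be 100 for the TOP100 board
    print(collection.find_one())           # inspect one stored movie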
