Crawling and Saving Information from a Website: Specified Query Content
This post describes how to crawl a website for a user-specified query and save the results; it is shared here for reference.
Requirement: for a given website, dynamically crawl the results matching a user-specified query and save them to a text file.
Solution: the requirement is met with Python's BeautifulSoup together with Selenium, using selenium.webdriver.common.keys.Keys to submit the search.
The code is as follows:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import random
import sys
import time

# The pagination below is recursive, so raise the recursion limit for deep result sets.
sys.setrecursionlimit(1000000)


def getQuestionsTotalLinks(driver):
    # Parse the current result page and extract every row of the result table.
    bs = BeautifulSoup(driver.page_source, 'lxml')
    AllInfo = bs.findAll('tr', {'class': 'bgcol'})
    for info in AllInfo:
        if info.find('a', {'class': 'xjxd_nr'}) is None:
            print("No useful info")
        else:
            # The detail-page parameters are embedded in the row's onclick handler.
            paras = info.find('a', {'class': 'xjxd_nr'}).get('onclick').replace('detail(', '').replace("'", '')[0:-2]
            listparas = paras.split(',')
            innerlink = ('http://www.shenl.com.cn/public/mhwz/todetail?id=' + listparas[0]
                         + '&isSearchPassWord=' + listparas[1] + '&tag=' + listparas[2])
            # Flatten the row text into tab-separated fields and append the detail link.
            innerDetail = info.get_text().replace('\t', '').replace('\n', '|').split('|')
            while '' in innerDetail:
                innerDetail.remove('')
            f.write('\t'.join(innerDetail) + "\t" + innerlink + "\n")
    try:
        # Click the "下一頁" (next page) link if it exists.
        driver.find_element(By.XPATH, "//a[contains(text(),'下一頁')]").click()
    except NoSuchElementException:
        time.sleep(1)
        print("No more pages found")
        return
    time.sleep(random.randint(5, 20))  # random delay between pages to avoid hammering the site
    getQuestionsTotalLinks(driver)     # recurse into the next page


if __name__ == '__main__':
    # Crawl all result pages for the specified query and write them to a dated text file.
    # Note: the file handle `f` is a module-level global used by getQuestionsTotalLinks.
    IsoTimeFormat = '%Y_%m_%d'
    f = open('G:\\temp\\total\\HefeiQuestion_Incr_' + time.strftime(IsoTimeFormat) + '.txt',
             'w', encoding='utf-8')
    driver = webdriver.Chrome(r"C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe")
    driver.get("http://www.shenl.com.cn/public/mhwz/xjxdList")
    # Type the query keyword into the search box and press Enter on the search button.
    titleEle = driver.find_element(By.XPATH, "//input[@name='zt']")
    titleEle.send_keys("7號")
    searchEle = driver.find_element(By.XPATH, "//a[@class='search_bt']")
    searchEle.send_keys(Keys.ENTER)
    getQuestionsTotalLinks(driver)
    driver.close()
    f.close()
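One design note on the pagination: getQuestionsTotalLinks calls itself after every click on 下一頁, which is why the recursion limit is raised at the top of the script. A plain loop achieves the same traversal without touching the recursion limit. The sketch below is a minimal rewrite of that loop under the same assumptions as the script above (same imports, driver, and output file); crawl_all_pages is a name introduced here only for illustration.

def crawl_all_pages(driver, f):
    # Iterative version of the recursive pagination above: parse, write, click "下一頁", repeat.
    while True:
        bs = BeautifulSoup(driver.page_source, 'lxml')
        for info in bs.findAll('tr', {'class': 'bgcol'}):
            link = info.find('a', {'class': 'xjxd_nr'})
            if link is None:
                continue
            paras = link.get('onclick').replace('detail(', '').replace("'", '')[0:-2].split(',')
            innerlink = ('http://www.shenl.com.cn/public/mhwz/todetail?id=' + paras[0]
                         + '&isSearchPassWord=' + paras[1] + '&tag=' + paras[2])
            fields = [s for s in info.get_text().replace('\t', '').split('\n') if s]
            f.write('\t'.join(fields) + '\t' + innerlink + '\n')
        try:
            driver.find_element(By.XPATH, "//a[contains(text(),'下一頁')]").click()
        except NoSuchElementException:
            print("No more pages found")
            break
        time.sleep(random.randint(5, 20))  # same polite delay between pages

Calling crawl_all_pages(driver, f) in place of getQuestionsTotalLinks(driver) would then make the sys.setrecursionlimit call unnecessary.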