
[python]---From Java to Python (03)---Web Crawlers

發(fā)布時(shí)間:2024/7/23 python 26 豆豆
生活随笔 收集整理的這篇文章主要介紹了 [python]---从java到python(03)---爬虫 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

1. Fetching a simple page

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import urllib.request

# Open the page and read the whole response body as bytes
file = urllib.request.urlopen("https://www.jd.com")
data = file.read()
# dataline = file.readline()  # reads a single line instead
print(data)

# Save the page to disk
fhandle = open("E:/python/1_1.html", "wb")
fhandle.write(data)
fhandle.close()

# urlretrieve downloads a URL straight to a local file
# filename = urllib.request.urlretrieve("http://edu.51cto.com", filename="E:/python/2.html")
# filename2 = urllib.request.urlretrieve("http://www.jd.com", filename="E:/python/3.html")

print(file.getcode())  # HTTP status code
print(file.geturl())   # final URL after any redirects
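One note on the snippet above: it leaves the response and file handles open. A minimal, more defensive variant (my own sketch, not part of the original post) uses with blocks so both are closed automatically, and adds a timeout plus basic error handling:

import urllib.error
import urllib.request

try:
    # timeout keeps the call from hanging forever on an unresponsive server
    with urllib.request.urlopen("https://www.jd.com", timeout=10) as resp:
        data = resp.read()
        print(resp.getcode(), resp.geturl())
    with open("E:/python/1_1.html", "wb") as fhandle:
        fhandle.write(data)
except urllib.error.URLError as e:
    print("request failed:", e.reason)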

2. Simulating a browser

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import urllib.request

url = "https://blog.csdn.net/java_zhangshuai/article/details/81749208"
# Pretend to be Chrome so the site does not reject the default Python client
headers = ("User-Agent",
           "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36")
opener = urllib.request.build_opener()
opener.addheaders = [headers]
data = opener.open(url).read()
print(data)
fhandle = open("E:/python/2_1.html", "wb")
fhandle.write(data)
fhandle.close()
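build_opener/addheaders works, but a Request object also accepts a headers dict directly, which reads more cleanly when only one request needs the header. A sketch of the same fetch done that way:

import urllib.request

url = "https://blog.csdn.net/java_zhangshuai/article/details/81749208"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 "
                         "(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"}
req = urllib.request.Request(url, headers=headers)
with urllib.request.urlopen(req) as resp:
    data = resp.read()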

3. HTTP requests (GET and POST)

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import urllib.request

# GET: characters that are not URL-safe (Chinese text, spaces, ...) must be percent-encoded
keywd = "hello"
keywd = urllib.request.quote(keywd)
url = "http://www.baidu.com/s?wd=" + keywd
req = urllib.request.Request(url)
data = urllib.request.urlopen(req).read()

fhandle = open("E:/python/3_1.html", "wb")
fhandle.write(data)
fhandle.close()

# POST: the form fields are urlencoded and passed as the request body
import urllib.parse

url = "http://www.iqianyue.com/mypost"
data = {"name": "zhangsan", "pass": "zhangsanpass"}
postdata = urllib.parse.urlencode(data).encode("utf-8")

for x in range(1, 3):  # two attempts
    try:
        req = urllib.request.Request(url, postdata)
        req.add_header("User-Agent",
                       "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36")
        data = urllib.request.urlopen(req).read()
        fhandle = open("E:/python/3_2.html", "wb")
        fhandle.write(data)
        fhandle.close()
        print(len(data))
    except Exception as e:
        print("Exception occurred ---> " + str(e))
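Two side notes. First, http://www.iqianyue.com/mypost is a demo endpoint that may no longer be online; http://httpbin.org/post, which echoes back whatever it receives, is a common stand-in for POST experiments. Second, hand-quoting each keyword works for a single parameter, but urllib.parse.urlencode builds a whole GET query string and percent-encodes every value in one step; a small sketch:

import urllib.parse

params = {"wd": "你好 python"}  # non-ASCII characters and spaces are encoded automatically
url = "http://www.baidu.com/s?" + urllib.parse.urlencode(params)
print(url)  # http://www.baidu.com/s?wd=%E4%BD%A0%E5%A5%BD+python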

4. Crawling the images under an e-commerce product list

#!/usr/bin/env python
# -*- coding:utf-8 -*-

import urllib.request
import re


def craw(url, page):
    html1 = urllib.request.urlopen(url).read()
    html1 = str(html1)  # crude bytes-to-str conversion; the markup becomes one long line
    # pat1 cuts out the product-list block of the page
    pat1 = '<div id="plist".+? <div class="page clearfix">'
    result1 = re.compile(pat1).findall(html1)
    result1 = result1[0]
    # pat2 extracts the image URLs inside that block
    pat2 = r'<img width="220" height="220" data-img="1" src="//(.+?\.jpg)">'
    imagelist = re.compile(pat2).findall(result1)
    x = 1
    for imageurl in imagelist:
        print(imageurl)
        imagename = "E:/python/爬虫/" + str(page) + str(x) + ".jpg"
        imageurl = "https://" + imageurl
        try:
            # Save the image at imageurl under the path imagename
            urllib.request.urlretrieve(imageurl, filename=imagename)
        except Exception:
            pass  # skip images that fail to download
        x += 1


for i in range(1, 10):
    url = "https://list.jd.com/list.html?cat=9192,12632,12633&page=" + str(i)
    craw(url, i)
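Two caveats on this crawler. The regex patterns are tied to JD's markup as it looked when the post was written, so they illustrate the technique rather than a scraper that still runs today. Also, urlretrieve sends Python's default User-Agent, which image servers often reject; below is a sketch of a replacement downloader that sets its own headers (the helper name download_image is mine, not from the original):

import urllib.request


def download_image(imageurl, imagename):
    # urlretrieve cannot set headers, so build a Request and write the bytes manually
    req = urllib.request.Request(imageurl, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp, open(imagename, "wb") as f:
        f.write(resp.read())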

Summary

These four snippets cover the urllib basics: fetching a page and saving it to disk, spoofing a browser User-Agent, sending percent-encoded GET requests and urlencoded POST requests, and pulling image URLs out of a product listing with regular expressions.