【Java Crawler】My First Crawler -- Simply Fetching a Page's Source Code
Code
You can run it as-is.
package cn.hanquan.file;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class UrlCrawBoke {
	public static void main(String[] args) throws IOException {
		InputStream is = doGet("https://blog.csdn.net/sinat_42483341/article/details/89931215");
		String pageStr = inputStreamToString(is, "UTF-8");
		is.close();
		System.out.println(pageStr);
	}

	// Open an HTTP GET connection and return the response body stream.
	public static InputStream doGet(String urlstr) throws IOException {
		URL url = new URL(urlstr);
		HttpURLConnection conn = (HttpURLConnection) url.openConnection();
		// Send a browser-like User-Agent: some sites reject Java's default one.
		conn.setRequestProperty("User-Agent",
				"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36");
		return conn.getInputStream();
	}

	// Read the whole stream into memory first, then decode once at the end:
	// decoding each 1024-byte chunk separately could split a multi-byte
	// UTF-8 character across two reads and corrupt the output.
	public static String inputStreamToString(InputStream is, String charset) throws IOException {
		byte[] bytes = new byte[1024];
		int byteLength;
		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		while ((byteLength = is.read(bytes)) != -1) {
			bos.write(bytes, 0, byteLength);
		}
		return bos.toString(charset);
	}
}

Summary
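One pitfall with stream-to-string helpers like the one above: decoding each fixed-size chunk on its own can split a multi-byte UTF-8 character across two reads and corrupt the text, so it is safer to buffer all the bytes and decode once. A minimal offline demonstration of the difference (the class and method names here are my own, not from the original post):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkDecodeDemo {

	// Naive: decode every chunk separately -- a character whose bytes
	// straddle a chunk boundary turns into replacement characters.
	static String decodeNaive(InputStream is, int chunkSize, String charset) throws IOException {
		byte[] buf = new byte[chunkSize];
		int n;
		StringBuilder sb = new StringBuilder();
		while ((n = is.read(buf)) != -1) {
			sb.append(new String(buf, 0, n, charset)); // may decode half a character
		}
		return sb.toString();
	}

	// Safe: collect all bytes first, decode once at the end.
	static String decodeSafe(InputStream is, int chunkSize, String charset) throws IOException {
		byte[] buf = new byte[chunkSize];
		int n;
		ByteArrayOutputStream bos = new ByteArrayOutputStream();
		while ((n = is.read(buf)) != -1) {
			bos.write(buf, 0, n);
		}
		return bos.toString(charset);
	}

	public static void main(String[] args) throws IOException {
		String text = "你好"; // each character is 3 bytes in UTF-8, 6 bytes total
		byte[] bytes = text.getBytes("UTF-8");
		// A 4-byte chunk splits the second character across two reads.
		String naive = decodeNaive(new ByteArrayInputStream(bytes), 4, "UTF-8");
		String safe = decodeSafe(new ByteArrayInputStream(bytes), 4, "UTF-8");
		System.out.println("naive equals original: " + naive.equals(text)); // false
		System.out.println("safe  equals original: " + safe.equals(text));  // true
	}
}
```

With a real page the chunk size is 1024, but the failure mode is the same whenever a character's bytes land on both sides of a read boundary.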
That is the complete code for 【Java Crawler】My First Crawler -- Simply Fetching a Page's Source Code. Hopefully it helps you solve the problem you ran into.
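One limitation worth noting: the crawler hardcodes UTF-8, but a server may declare a different encoding in its Content-Type response header (available via conn.getContentType() on HttpURLConnection). A small sketch of picking the declared charset with a fallback; the class name, helper name, and fallback choice are my own, not part of the original post:

```java
public class CharsetSniff {

	// Hypothetical helper: extract the charset from a Content-Type header
	// value such as "text/html; charset=GBK", falling back when absent.
	static String charsetFrom(String contentType, String fallback) {
		if (contentType != null) {
			for (String part : contentType.split(";")) {
				String p = part.trim();
				if (p.toLowerCase().startsWith("charset=")) {
					String cs = p.substring("charset=".length()).trim().replace("\"", "");
					if (!cs.isEmpty()) {
						return cs;
					}
				}
			}
		}
		return fallback;
	}

	public static void main(String[] args) {
		System.out.println(charsetFrom("text/html; charset=GBK", "UTF-8")); // GBK
		System.out.println(charsetFrom("text/html", "UTF-8"));              // UTF-8
	}
}
```

The result could then be passed to inputStreamToString in place of the hardcoded "UTF-8".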