Java Crawler: Extracting div Content (Simple Web Page Parsing in Java)
Goal: fetch the title, post time, and source of every China news item on Baidu News.
1. Fetch the page

public static String getContent(String str) throws ClientProtocolException, IOException {
    CloseableHttpClient closeableHttpClient = HttpClients.createDefault(); // create the client
    HttpGet httpGet = new HttpGet(str);
    CloseableHttpResponse closeableHttpResponse = closeableHttpClient.execute(httpGet); // execute the request
    HttpEntity httpEntity = closeableHttpResponse.getEntity(); // get the response entity
    String content = EntityUtils.toString(httpEntity, "utf-8");
    closeableHttpResponse.close();
    closeableHttpClient.close();
    return content;
}
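For context, EntityUtils.toString(httpEntity, "utf-8") above essentially drains the response body stream into a String with the given charset. A minimal stdlib-only sketch of that step (class and method names here are my own, for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ReadAll {
    // Drain an InputStream into a String, decoding as UTF-8:
    // the same job EntityUtils.toString(entity, "utf-8") performs.
    public static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toString(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("你好, Baidu".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(in)); // prints "你好, Baidu"
    }
}
```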
2. Filter the links that match a required prefix

public static ArrayList<String> getUrl(String str, String strr) {
    Document doc = Jsoup.parse(str);
    Elements elements = doc.select("a[href]"); // all <a> tags with an href
    ArrayList<String> strs = new ArrayList<>();
    for (Element e : elements) {
        String urls = e.attr("abs:href"); // resolve to an absolute URL
        if (urls.startsWith(strr)) {
            strs.add(urls);
        }
    }
    return strs;
}
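The filtering in getUrl reduces to a prefix check over the extracted hrefs. A stand-alone sketch of just that logic, without the Jsoup parsing (the sample URLs are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class LinkFilter {
    // Keep only URLs that start with the given prefix, mirroring
    // the startsWith() filter inside getUrl above.
    public static List<String> filterByPrefix(List<String> urls, String prefix) {
        List<String> matched = new ArrayList<>();
        for (String u : urls) {
            if (u.startsWith(prefix)) {
                matched.add(u);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<String> urls = List.of(
                "https://kandian.youth.cn/a/1.html",
                "http://news.baidu.com/other.html");
        // Only the kandian.youth.cn link survives the filter.
        System.out.println(filterByPrefix(urls, "https://kandian.youth.cn/"));
    }
}
```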
3. Test the parsing
public class BaiduDemo {
    public static void main(String[] args) {
        String str = "http://news.baidu.com";
        try {
            String content = GetUtil.getContent(str);
            ArrayList<String> list = GetUtil.getUrl(content, "https://kandian.youth.cn/");
            for (String s : list) {
                System.out.println(s);
                String strr = GetUtil.getContent(s);
                Document doc = Jsoup.parse(strr);
                Elements ele1 = doc.select("div[class=J-title_detail title_detail] h1");
                Elements ele = doc.select("div[class=J-title_detail title_detail]");
                if (!ele.isEmpty()) { // select() never returns null; check for an empty result instead
                    System.out.println("Title: " + ele1.text());
                    Elements eles = ele.select("div[class=fl] i");
                    System.out.println("Post time: " + eles.text());
                    Elements eless = ele.select("div[class=fl] a");
                    System.out.println("Post source: " + eless.text());
                }
            }
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
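The three select() calls above pull the headline from the <h1>, the post time from "div.fl i", and the source from "div.fl a". As a rough illustration of what those selectors target, here is a regex over a hypothetical snippet of that markup (for real pages, stick with Jsoup; regexes are too brittle for HTML):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TitleSketch {
    // Hypothetical snippet mimicking the div[class=J-title_detail title_detail] block.
    public static final String HTML =
            "<div class=\"J-title_detail title_detail\"><h1>Sample headline</h1>"
          + "<div class=\"fl\"><i>2018-02-07 09:00</i><a href=\"#\">Sample source</a></div></div>";

    // Return the first capture group of the regex, or "" if no match.
    public static String firstGroup(String regex, String html) {
        Matcher m = Pattern.compile(regex, Pattern.DOTALL).matcher(html);
        return m.find() ? m.group(1) : "";
    }

    public static void main(String[] args) {
        System.out.println("Title: " + firstGroup("<h1>(.*?)</h1>", HTML));
        System.out.println("Post time: " + firstGroup("<i>(.*?)</i>", HTML));
        System.out.println("Post source: " + firstGroup("<a[^>]*>(.*?)</a>", HTML));
    }
}
```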
Another way to extract the content
public class NewsDemo { // the original snippet omitted the class declaration
    public static void main(String[] args) {
        try {
            String str = GetUtil.getContent("http://sports.163.com/18/0207/09/DA1HPMLI00058781.html");
            Document doc = Jsoup.parse(str);
            Element element = doc.getElementById("epContentLeft");
            Elements elements = element.getElementsByTag("h1");
            System.out.println("Title: " + elements.text());
            Elements elementss = doc.getElementsByClass("post_time_source");
            System.out.println("Post time: " + elementss.text().substring(0, 19));
            element = doc.getElementById("endText");
            System.out.println("Body:");
            System.out.println(element.text());
            elementss = doc.getElementsByClass("ep-source cDGray");
            System.out.println(elementss.text());
            // grab the comment count
            elementss = doc.getElementsByClass("tie-cnt");
            System.out.println("Comments: " + elementss.text());
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
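The substring(0, 19) above assumes the element text begins with a fixed-width "yyyy-MM-dd HH:mm:ss" stamp (exactly 19 characters). A sketch of trimming and then parsing it properly with java.time (the sample string is made up):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimeParseSketch {
    public static void main(String[] args) {
        // Hypothetical text as post_time_source might render it:
        // a 19-character timestamp followed by the source label.
        String raw = "2018-02-07 09:00:00 来源: 网易体育";
        String stamp = raw.substring(0, 19); // "2018-02-07 09:00:00"
        LocalDateTime time = LocalDateTime.parse(
                stamp, DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
        System.out.println(time); // prints 2018-02-07T09:00
    }
}
```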