A while back I used the WebMagic crawler in a company project to scrape data from a website, including downloading and saving images.

Looking back now, I realize I don't really remember how WebMagic works anymore, so I rewrote a simple example to refresh my memory. I'll probably redo that earlier task with WebMagic later. (Might as well review while I have the time.)

The core of using WebMagic is implementing the PageProcessor interface, in particular its process method.

You also need some command of regular expressions, CSS selectors, and XPath. (As someone not great at regex, this was a little awkward for me.)
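For example, the detail pages in the crawler below follow a fixed URL shape (a 7-digit post id under /p/), so a regex can tell detail pages apart from list pages. A minimal standalone sketch in plain Java; the class and method names here are just for illustration, not part of WebMagic:

```java
import java.util.regex.Pattern;

public class UrlFilter {
    // Detail pages look like http://www.cnblogs.com/dick159/p/1234567.html
    private static final Pattern DETAIL_PAGE =
            Pattern.compile("^http://www\\.cnblogs\\.com/dick159/p/\\d{7}\\.html$");

    public static boolean isDetailPage(String url) {
        return DETAIL_PAGE.matcher(url).matches();
    }

    public static void main(String[] args) {
        System.out.println(isDetailPage("http://www.cnblogs.com/dick159/p/1234567.html"));   // true
        System.out.println(isDetailPage("http://www.cnblogs.com/dick159/default.html?page=1")); // false
    }
}
```

Note the doubled backslash: in Java source, the regex `\d{7}` has to be written as `\\d{7}`.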

On to the code:

GitHub repo: https://github.com/fightingFisher/webmagicTest.git

 

package com.xu.webmagic.main;

import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.processor.PageProcessor;
import us.codecraft.webmagic.selector.Selectable;

import com.xu.webmagic.TestPipeline;

public class TestProcessor implements PageProcessor {

    private Site site = Site.me().setRetryTimes(10).setSleepTime(1000);

    // List page: list_url = "http://www.cnblogs.com/dick159/default.html?page=";

    // private Set<String> done_url = new HashSet<String>();

    // Detail page url = "http://www.cnblogs.com/dick159/p/^\\d{7}$.html";

    @Override
    public void process(Page page) {
        // XPath/CSS expressions for the detail links, pagination link, title and date
        String detail_urls_Xpath = "//*[@class='postTitle']/a[@class='postTitle2']/@href";
        String next_page_xpath = "//*[@id='nav_next_page']/a/@href";
        String next_page_css = "#homepage_top_pager > div:nth-child(1) > a:nth-child(7)";
        String title_xpath = "//h1[@class='postTitle']/a/text()";
        String date_xpath = "//span[@id='post-date']/text()";

        // Extract the title; list pages have no post title, so skip them in the pipeline
        page.putField("title", page.getHtml().xpath(title_xpath).toString());
        if (page.getResultItems().get("title") == null) {
            page.setSkip(true);
        }
        page.putField("date", page.getHtml().xpath(date_xpath).toString());

        // Queue every detail-page link found on the current page
        if (page.getHtml().xpath(detail_urls_Xpath).match()) {
            Selectable detailUrls = page.getHtml().xpath(detail_urls_Xpath);
            page.addTargetRequests(detailUrls.all());
        }

        // Follow the "next page" link, trying XPath first and the CSS selector as a fallback
        if (page.getHtml().xpath(next_page_xpath).match()) {
            Selectable nextPageUrl = page.getHtml().xpath(next_page_xpath);
            page.addTargetRequests(nextPageUrl.all());
        } else if (page.getHtml().css(next_page_css).match()) {
            Selectable nextPageUrl = page.getHtml().css(next_page_css).links();
            page.addTargetRequests(nextPageUrl.all());
        }
    }

    @Override
    public Site getSite() {
        return this.site;
    }

    public static void main(String[] args) {
        TestProcessor processor = new TestProcessor();
        Spider.create(processor)
                .addUrl("http://www.cnblogs.com/dick159/default.html?page=1")
                .addPipeline(new TestPipeline()).thread(5).run();
    }
}
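WebMagic evaluates the XPath strings above with its own Xsoup-based selector engine, but the same idea can be tried out on a well-formed fragment with the JDK's built-in javax.xml.xpath, no crawler needed. The HTML snippet below is invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XpathDemo {
    public static String extractTitle(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xpath = XPathFactory.newInstance().newXPath();
        // Same shape as the title_xpath used in TestProcessor
        return xpath.evaluate("//h1[@class='postTitle']/a/text()", doc);
    }

    public static void main(String[] args) throws Exception {
        String xml = "<html><body><h1 class='postTitle'><a>Hello WebMagic</a></h1></body></html>";
        System.out.println(extractTitle(xml)); // prints "Hello WebMagic"
    }
}
```

The only caveat is that javax.xml.xpath needs well-formed XML, which real web pages rarely are; that is exactly why WebMagic ships its own HTML-tolerant selectors.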

 

 

package com.xu.webmagic;

import java.util.Map;

import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

public class TestPipeline implements Pipeline {
    @Override
    public void process(ResultItems resultitems, Task task) {
        System.out.println("get page: " + resultitems.getRequest().getUrl());
        // The scraped data is stored as a Map<K, V>.
        // Process the scraped data here.
        for (Map.Entry<String, Object> entry : resultitems.getAll().entrySet()) {
            System.out.println(entry.getKey() + "---" + entry.getValue());
        }
    }
}
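Since the results arrive as a Map, a pipeline can just as easily persist them instead of printing. A minimal standalone sketch of that write loop, using a plain LinkedHashMap in place of ResultItems (the field values and file name here are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class ResultWriter {
    // Renders each key/value pair on its own line, mirroring the println loop above
    public static String format(Map<String, Object> results) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> entry : results.entrySet()) {
            sb.append(entry.getKey()).append("---").append(entry.getValue()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Map<String, Object> results = new LinkedHashMap<>();
        results.put("title", "Hello WebMagic");
        results.put("date", "2017-01-01");

        Path out = Files.createTempFile("crawl", ".txt");
        Files.write(out, format(results).getBytes());
        System.out.println(Files.readAllLines(out));
    }
}
```

In a real pipeline this formatting would go inside process(ResultItems, Task); WebMagic also ships ready-made pipelines such as FilePipeline if printing to a file is all you need.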

 

Here's a screenshot of the output:

[Screenshot: a simple WebMagic crawler run]