1. Install beautifulsoup4: cmd -> pip install beautifulsoup4
Python provides a built-in module, urllib, for handling network connections; BeautifulSoup is used to parse the HTML.

Web Scraping: Fetching Baidu News Links

Verify that the installation succeeded:
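A quick way to confirm the install from a script rather than a screenshot is to import the package and print its version (a minimal sketch; bs4 is the import name that the beautifulsoup4 package provides):

```python
# Importing bs4 succeeds only if beautifulsoup4 is installed
import bs4

print(bs4.__version__)
```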


2. PyCharm configuration


3. The code is as follows:

import urllib.request
from bs4 import BeautifulSoup


class Scraper:
    def __init__(self, site):
        self.site = site

    def scrape(self):
        # Download the page and parse it with the built-in html.parser
        r = urllib.request.urlopen(self.site)
        html = r.read()
        parser = "html.parser"
        sp = BeautifulSoup(html, parser)
        # Print every link whose URL contains "html"
        for tag in sp.find_all("a"):
            url = tag.get("href")
            if url is None:
                continue
            if "html" in url:
                print("\n" + url)


news = "http://news.baidu.com/"
Scraper(news).scrape()
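Before running it against the live site, the parsing step can be tried on a small inline snippet (the HTML below is made up for illustration): find_all("a") returns the anchor tags in document order, and get("href") returns None when the attribute is missing, which is why the code above checks for None:

```python
from bs4 import BeautifulSoup

# A made-up snippet: two anchors with hrefs, one without
html = '<a href="/a.html">A</a> <a>no href</a> <a href="/b">B</a>'
soup = BeautifulSoup(html, "html.parser")
urls = [tag.get("href") for tag in soup.find_all("a")]
print(urls)  # ['/a.html', None, '/b']
```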


4. Running the script prints the links scraped from Baidu News:


5. How can the scraped links be saved to a file?

import urllib.request
from bs4 import BeautifulSoup


class Scraper:
    def __init__(self, site):
        self.site = site

    def scrape(self):
        response = urllib.request.urlopen(self.site)
        html = response.read()
        soup = BeautifulSoup(html, 'html.parser')
        # Open the output file once and write each matching link to it
        with open("output.txt", "w") as f:
            for tag in soup.find_all('a'):
                url = tag.get('href')
                if url and 'html' in url:
                    print("\n" + url)
                    f.write(url + "\n")


Scraper('http://news.baidu.com/').scrape()
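The write/read round trip can be sketched in isolation, with a couple of sample links standing in for the scraped ones (output.txt matches the filename used above):

```python
# Sample links standing in for scraped ones
sample = ["http://news.baidu.com/a.html", "http://news.baidu.com/b.html"]

# Write one link per line, as the scraper above does
with open("output.txt", "w") as f:
    for url in sample:
        f.write(url + "\n")

# Read the file back to confirm every link was saved
with open("output.txt") as f:
    links = [line.strip() for line in f]

print(links == sample)  # True
```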