I. Development environment

  1. Runtime: Python 3.7 on Windows 10
  2. Third-party library: requests (install it yourself: open cmd and run pip install requests; the details are not covered here)

II. Checking that the installation succeeded

      Type python at the command line and press Enter to open the Python interpreter.

       Then enter the following line and press Enter:

  import requests   If no error is reported, the library is generally installed correctly.
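Beyond a bare import, you can also print the installed version as a quick sanity check (a minimal sketch; requests exposes its version as a module attribute):

```python
import requests

# If the import succeeds, requests is installed; print its version to confirm.
print(requests.__version__)
```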

III. A brief introduction to the requests library:

(The original content here was a screenshot giving an overview of the requests library; it is omitted.)
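As a small offline sketch of the core API, a request can be built and prepared without actually being sent (the URL and query parameter below are made up for illustration):

```python
import requests

# Build a GET request without sending it, to show the core API surface.
req = requests.Request("GET", "https://cn.bing.com/", params={"q": "python"})
prepared = req.prepare()

print(prepared.method)  # GET
print(prepared.url)     # https://cn.bing.com/?q=python
```

In normal use, requests.get(url) builds, sends, and returns the response in one call; prepare() is only used here to avoid a network round trip.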

 

IV. Response attributes

(The original content here was a screenshot listing the main Response attributes; it is omitted.)
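The main attributes can be demonstrated offline with a hand-built Response object (the status code, content bytes, and encoding below are made-up values, not from a real request):

```python
import requests

# Construct a Response by hand to illustrate its main attributes.
resp = requests.models.Response()
resp.status_code = 200
resp._content = "你好".encode("utf-8")   # raw bytes, as if received from the server
resp.encoding = "utf-8"

print(resp.status_code)    # 200
print(resp.text)           # 你好  (content decoded using resp.encoding)
print(len(resp.content))   # 6    (bytes: each Chinese character is 3 bytes in utf-8)
print(len(resp.text))      # 2    (characters)
```

This also shows why len(r.text) and len(r.content) differ for non-ASCII pages: text counts decoded characters, content counts raw bytes.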

V. Use the requests library's get() function to access the Bing homepage 20 times, print the returned status code and text, and compute the lengths of the page content returned by the text attribute and the content attribute

The code is as follows:

import requests

def getHTMLText(url):
    try:
        for i in range(20):                  # access the page 20 times
            r = requests.get(url, timeout=30)
            r.raise_for_status()             # raise an exception if the status is not 200
            r.encoding = 'utf-8'             # decode as utf-8 regardless of the original encoding
        # return the status code, the text and content, and the lengths of text and content
        return r.status_code, r.text, r.content, len(r.text), len(r.content)
    except requests.RequestException:
        return ""
url = "https://cn.bing.com/?toHttps=1&redig=731C98468AFA474D85AECB7DB98B95D9"
print(getHTMLText(url))

The output is very long, so the original post showed only a partial screenshot, which is omitted here.

VI. Finally, here is my crawl of the 2019 ranking of China's best universities (only the top ten schools are shown), saved to a CSV file

The code is as follows:

import requests
import csv
import os
from bs4 import BeautifulSoup

allUniv = []
def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException:
        return ""
def fillUnivList(soup):
    # Each ranking row is a <tr>; header rows contain no <td> and are skipped.
    data = soup.find_all('tr')
    for tr in data:
        ltd = tr.find_all('td')
        if len(ltd) == 0:
            continue
        singleUniv = []
        for td in ltd:
            singleUniv.append(td.string)
        allUniv.append(singleUniv)

def printUnivList(num):
    # Column headers (kept in Chinese to match the scraped data):
    # rank, school name, province, total score, enrollment size
    print("{:^4}{:^10}{:^5}{:^8}{:^10}".format("排名", "学校名称", "省市", "总分", "培养规模"))
    for i in range(num):
        u = allUniv[i]
        print("{:^4}{:^10}{:^5}{:^8}{:^10}".format(u[0], u[1], u[2], u[3], u[6]))

'''def write_csv_file(path, head, data):
    # An alternative CSV writer, kept commented out for reference.
    try:
        with open(path, 'w', newline='') as csv_file:
            writer = csv.writer(csv_file, dialect='excel')

            if head is not None:
                writer.writerow(head)

            for row in data:
                writer.writerow(row)

            print("Write a CSV file to path %s Successful." % path)
    except Exception as e:
        print("Write a CSV file to path: %s, Case: %s" % (path, e))'''
def writercsv(save_road, num, title):
    if os.path.isfile(save_road):
        # The file already exists: append rows without rewriting the header
        with open(save_road, 'a', newline='') as f:
            csv_write = csv.writer(f, dialect='excel')
            for i in range(num):
                csv_write.writerow(allUniv[i])
    else:
        # New file: write the header row first
        with open(save_road, 'w', newline='') as f:
            csv_write = csv.writer(f, dialect='excel')
            csv_write.writerow(title)
            for i in range(num):
                csv_write.writerow(allUniv[i])
 
title = ["排名", "学校名称", "省市", "总分", "生源质量", "培养结果", "科研规模", "科研质量", "顶尖成果", "顶尖人才", "科技服务", "产学研究合作", "成果转化"]
save_road = "F:\\python\\csvData.csv"

def main():
    url = 'http://www.zuihaodaxue.cn/zuihaodaxuepaiming2019.html'
    html = getHTMLText(url)
    soup = BeautifulSoup(html, "html.parser")
    fillUnivList(soup)
    printUnivList(10)
    writercsv(save_road, 10, title)

main()
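The row-extraction logic in fillUnivList can be exercised offline on a tiny HTML table (the rows below are made-up sample data, not the real ranking):

```python
from bs4 import BeautifulSoup

# A miniature stand-in for the ranking page's table.
html = """
<table>
  <tr><th>排名</th><th>学校名称</th><th>省市</th><th>总分</th></tr>
  <tr><td>1</td><td>清华大学</td><td>北京</td><td>94.6</td></tr>
  <tr><td>2</td><td>北京大学</td><td>北京</td><td>76.5</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.find_all("tr"):
    tds = tr.find_all("td")
    if len(tds) == 0:          # the header row (<th> only) is skipped, as in fillUnivList
        continue
    rows.append([td.string for td in tds])

print(rows)
```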

 

 The printed result was shown in the original post as a screenshot, which is omitted here.

 Opening the file: the original post showed a screenshot of the CSV in Excel, which is omitted here.
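If the Chinese column headers look garbled when the CSV is opened in Excel, one common workaround is to write the file with a UTF-8 BOM. This is a sketch under that assumption; the temp-file path and sample row are made up:

```python
import csv
import os
import tempfile

title = ["排名", "学校名称", "省市", "总分"]
rows = [["1", "清华大学", "北京", "94.6"]]

path = os.path.join(tempfile.gettempdir(), "csvData_demo.csv")
# 'utf-8-sig' writes a BOM, which lets Excel detect UTF-8 and render Chinese correctly.
with open(path, "w", newline="", encoding="utf-8-sig") as f:
    writer = csv.writer(f, dialect="excel")
    writer.writerow(title)
    writer.writerows(rows)

# Read it back to confirm the round trip.
with open(path, newline="", encoding="utf-8-sig") as f:
    print(next(csv.reader(f)))
```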

 

Well, that's all for today's sharing~~~~~~