How to save Python web scraper data to CSV
Exporting from the command line (if you are using Scrapy, its feed exporter can write CSV directly):
scrapy crawl ju -o ju.csv
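The -o flag tells Scrapy to serialize every item the spider yields, inferring the CSV format from the file extension. Below is a minimal sketch of a spider this command could drive; the spider name "ju" comes from the command above, while the start URL, selector, and field names are placeholders, not from the original post.

import scrapy

class JuSpider(scrapy.Spider):
    name = "ju"                             # must match "scrapy crawl ju"
    start_urls = ["https://example.com/"]   # placeholder URL

    def parse(self, response):
        # Each yielded dict becomes one CSV row; the keys become the header.
        for row in response.css("div.item"):        # hypothetical selector
            yield {
                "title": row.css("a::text").get(),
                "link": row.css("a::attr(href)").get(),
            }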
Method 1: write each row by hand with f.write():
with open("F:/book_top250.csv","w") as f:
f.write("{},{},{},{},{}\n".format(book_name ,rating, rating_num,comment, book_link))
Method 2: use the csv module:
with open("F:/book_top250.csv","w",newline="") as f: ##如果不添加newline="",爬取信息会隔行显示
w = csv.writer(f)
w.writerow([book_name ,rating, rating_num,comment, book_link])
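A practical difference between the two methods: the hand-rolled f.write() in Method 1 silently breaks the column layout if any field itself contains a comma, while csv.writer quotes such fields automatically. A minimal sketch (the header names and the sample row are illustrative, not from the original post):

import csv

with open("F:/book_top250.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["book_name", "rating", "rating_num", "comment", "book_link"])  # header row
    # A comma inside a field is quoted automatically, so the columns stay aligned:
    w.writerow(["A Title, With a Comma", "9.0", "1234", "sample comment", "https://example.com"])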
Full code for Method 1:
import requests
from lxml import etree
import time
urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
with open("F:/book_top250.csv","w") as f:
for url in urls:
r = requests.get(url)
selector = etree.HTML(r.text)
books = selector.xpath('//*[@.format(book_name ,rating, rating_num,comment, book_link))
time.sleep(1)
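One caveat worth flagging (my note, not part of the original post): open() with no encoding argument uses the platform default, which on Chinese-locale Windows is typically GBK. If the file will be read elsewhere, pass encoding="utf-8" (or "utf-8-sig" so that Excel detects the encoding) to open().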
Full code for Method 2:
import requests
from lxml import etree
import time
import csv
urls = ['https://book.douban.com/top250?start={}'.format(i * 25) for i in range(10)]
with open("F:/book_top250.csv","w",newline='') as f:
for url in urls:
r = requests.get(url)
selector = etree.HTML(r.text)
books = selector.xpath('//*[@
book_link = book.xpath('./div[1]/a/@href')[0]
w = csv.writer(f)
w.writerow([book_name ,rating, rating_num,comment, book_link])
time.sleep(1)
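Two small refinements worth mentioning (suggestions of mine, not in the original post): w = csv.writer(f) is re-created for every book above, but creating the writer once before the loops does the same job; and csv.DictWriter gives you named columns plus a header row. A self-contained sketch with an illustrative sample row:

import csv

fields = ["book_name", "rating", "rating_num", "comment", "book_link"]
with open("F:/book_top250.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=fields)   # created once, outside any loop
    w.writeheader()                            # writes the column names
    w.writerow({"book_name": "Example Book", "rating": "9.0",
                "rating_num": "1234", "comment": "an illustrative row",
                "book_link": "https://example.com"})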