[Posted]: 2023-04-07 19:39:01
[Problem description]:
So I managed to make a crawler that walks through all the links on the site; when it reaches a product link it extracts the product information. However, when it gets to certain pages it throws a UnicodeError:
import urllib
import urlparse
from itertools import ifilterfalse
from urllib2 import URLError, HTTPError
from bs4 import BeautifulSoup
urls = ["http://www.kiabi.es/"]
visited = []
def get_html_text(url):
    try:
        # Note: this reads the module-level current_url, not the url parameter
        return urllib.urlopen(current_url).read()
    except (URLError, HTTPError, urllib.ContentTooShortError):
        print "Error getting " + current_url

def find_internal_links_in_html_text(html_text, base_url):
    # Collect every <a href> on the page that points back to the same domain
    soup = BeautifulSoup(html_text, "html.parser")
    links = []
    for tag in soup.findAll('a', href=True):
        url = urlparse.urljoin(base_url, tag['href'])
        domain = urlparse.urlparse(base_url).hostname
        if domain in url:
            links.append(url)
    return links

def is_url_already_visited(url):
    return url in visited

while urls:
    current_url = urls.pop()
    word = '#C'
    if word in current_url:
        pass  # [do sth]
    # print "Parsing", current_url
    html_text = get_html_text(current_url)
    visited.append(current_url)
    found_urls = find_internal_links_in_html_text(html_text, current_url)
    new_urls = ifilterfalse(is_url_already_visited, found_urls)
    urls.extend(new_urls)
Error:
Traceback (most recent call last):
File "<ipython-input-1-67c2b4cf7175>", line 1, in <module>
runfile('S:/Consultas_python/Kiabi.py', wdir='S:/Consultas_python')
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 685, in runfile
execfile(filename, namespace)
File "C:\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 71, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)
File "S:/Consultas_python/Kiabi.py", line 91, in <module>
html_text = get_html_text(current_url)
File "S:/Consultas_python/Kiabi.py", line 30, in get_html_text
return urllib.urlopen(current_url).read()
File "C:\Anaconda2\lib\urllib.py", line 87, in urlopen
return opener.open(url)
File "C:\Anaconda2\lib\urllib.py", line 185, in open
fullurl = unwrap(toBytes(fullurl))
File "C:\Anaconda2\lib\urllib.py", line 1070, in toBytes
" contains non-ASCII characters")
UnicodeError: URL u'http://www.kiabi.es/Barbapap\xe1_s1' contains non-ASCII characters
or
UnicodeError: URL u'http://www.kiabi.es/Petit-B\xe9guin_s2' contains non-ASCII characters
How can I fix this?
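For reference, the usual workaround in Python 2.7 is to percent-encode the non-ASCII parts of the URL before handing it to urllib.urlopen, since urllib only accepts ASCII URLs. The snippet below is only a rough sketch of that idea (the helper to_ascii_url is illustrative and not part of the original crawler); it assumes the URLs are unicode objects, as they appear in the traceback:

import urllib
import urlparse

def to_ascii_url(url):
    """Return an ASCII-only version of a (possibly unicode) URL."""
    parts = urlparse.urlsplit(url)
    # Percent-encode the path and query after converting them to UTF-8 bytes;
    # encode the hostname with IDNA so every component is plain ASCII.
    return urlparse.urlunsplit((
        parts.scheme.encode('utf-8'),
        parts.netloc.encode('idna'),
        urllib.quote(parts.path.encode('utf-8'), safe='/%'),
        urllib.quote(parts.query.encode('utf-8'), safe='=&%'),
        parts.fragment.encode('utf-8'),
    ))

print to_ascii_url(u'http://www.kiabi.es/Barbapap\xe1_s1')
# -> http://www.kiabi.es/Barbapap%C3%A1_s1

With something like this, the crawler could call urllib.urlopen(to_ascii_url(current_url)) instead; the only side effect is that the visited list would then store the percent-encoded form of each URL.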
Tags: python, unicode, beautifulsoup, web-crawler, non-ascii-characters