Scrapy pitfalls: a roundup


Handling exceptions raised in callbacks

If the Request itself succeeds but an exception is raised while parsing the response text, like this:

def parse_details(self, response):
    ...
    item['metres'] = round(float(
        response.xpath('/html/body/section[1]/div/div[3]/ul/li[1]/span[1]/text()')
        .extract_first().rstrip('万公里')) * 10000000)
    ...
    yield item


response.xpath('/html/body/section[1]/div/div[3]/ul/li[1]/span[1]/text()').extract_first().rstrip(
AttributeError: 'NoneType' object has no attribute 'rstrip'

If the cause is a code bug or a page redesign, re-adapting the parser is enough. But if throttling rules have forwarded the request to a rate-limit page, the exception needs to be caught and recovered from. The road to a solution went as follows:
1. process_exception in DOWNLOADER_MIDDLEWARES
The intention was to switch proxies after a failed request, but it never took effect: process_exception handles Request-level exceptions such as timeouts, refused connections, or missing responses, while the error above is raised when parsing a successful response. A misunderstanding, and a dead end.
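For illustration, a minimal sketch of what process_exception does cover: download-level failures only. get_new_proxy is a hypothetical stand-in for however proxies are fetched:

    from twisted.internet.error import ConnectionRefusedError, TimeoutError


    class ProxyOnErrorDownloaderMiddleware:
        def process_exception(self, request, exception, spider):
            # Fires only for download-level errors (timeout, refused connection, ...),
            # never for exceptions raised inside a parse callback.
            if isinstance(exception, (TimeoutError, ConnectionRefusedError)):
                request.meta['proxy'] = get_new_proxy()  # hypothetical helper
                return request  # reschedule the request through the downloader
            return None  # let default handling continue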

2. Catch the exception yourself, switch proxies and retry

    try:
        ...
        item['metres'] = round(float(
            response.xpath('/html/body/section[1]/div/div[3]/ul/li[1]/span[1]/text()')
            .extract_first().rstrip('万公里')) * 10000000)
        ...
    except Exception as reason:
        retry_times = response.meta.get('retry_times', 0)
        if retry_times < 3:
            yield scrapy.Request(url=xxx,
                                 meta={'url': xxx, 'is_new_proxy': True, 'retry_times': retry_times + 1},
                                 callback=self.parse, dont_filter=True)

The following keys need to be set in meta:

  • url: after throttling, the request may have been redirected, so response.request.url can point to the redirect target instead of the original address
  • is_new_proxy: declares that a fresh proxy is needed; used as an input for fetching a proxy in process_request of DOWNLOADER_MIDDLEWARES
  • retry_times: prevents endless retries
    Note: dont_filter=True must be set, otherwise the duplicate URL gets filtered out

3. Use process_spider_exception in SPIDER_MIDDLEWARES
process_spider_exception(self, response, exception, spider) catches exceptions thrown from callbacks. Exception-handling strategies can be added there, e.g. email alerts or SMS notifications, and it combines well with catching exceptions yourself; a sketch follows.
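A minimal sketch of such a spider middleware, with send_alert standing in for whatever notification channel is used (hypothetical helper):

    class AlertSpiderMiddleware:
        def process_spider_exception(self, response, exception, spider):
            # Called when a spider callback (or a later spider middleware) raises.
            send_alert('%s failed on %s: %r' % (spider.name, response.url, exception))  # hypothetical
            # Returning None keeps propagating the exception; returning an
            # iterable of Requests/items would swallow it instead.
            return None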

scrapy.Request not taking effect

  • dont_filter=True was not set on the scrapy.Request, so duplicate URLs get filtered out automatically (pay special attention to this when returning a request for retry from an exception handler or from SPIDER_MIDDLEWARES)
  • the url is not in allowed_domains
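Both points in one compact sketch; example.com and the spider name are placeholders:

    import scrapy


    class CarSpider(scrapy.Spider):
        name = 'cars'
        # Requests to hosts outside this list are dropped by OffsiteMiddleware.
        allowed_domains = ['example.com']
        start_urls = ['https://www.example.com/']

        def parse(self, response):
            # Re-issuing an already-seen URL (e.g. for a retry): without
            # dont_filter=True the scheduler silently drops it as a duplicate.
            retry_times = response.meta.get('retry_times', 0)
            if retry_times < 3:
                yield scrapy.Request(url=response.url, callback=self.parse,
                                     meta={'retry_times': retry_times + 1},
                                     dont_filter=True)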

Coping with anti-scraping measures

  • Tune Scrapy settings, in two directions: limit concurrency and simulate pauses (a custom_settings sketch follows the middleware code below)
  • Rotate proxy IPs and User-Agents, set up in DOWNLOADER_MIDDLEWARES as follows:
    import random
    import time

    from scrapy import signals


    class AntiBanDownloaderMiddleware:
        # SPIDERS_USE_PROXY (names of spiders that need a proxy), init_proxy,
        # proxy_http_list, refresh_and_get_one_proxy and ProxyError are
        # project-specific helpers, elided here.

        def __init__(self, delay, user_agent_list):
            self.delay = delay
            self.user_agent_list = user_agent_list

        @classmethod
        def from_crawler(cls, crawler):
            # This method is used by Scrapy to create your spiders.
            # RANDOM_DELAY and USER_AGENT_LIST come from the spider's custom_settings.
            delay = crawler.spider.settings.get("RANDOM_DELAY", 0)
            user_agent_list = crawler.spider.settings.get("USER_AGENT_LIST", [])
            if not isinstance(delay, int):
                raise ValueError("RANDOM_DELAY needs an int")
            # Spiders that use proxies must initialise the proxy pool first.
            if crawler.spider.name in cls.SPIDERS_USE_PROXY:
                init_proxy("init_proxy")
            s = cls(delay, user_agent_list)
            crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
            return s

        def spider_opened(self, spider):
            spider.logger.info('Spider opened: %s' % spider.name)

        def process_request(self, request, spider):
            # Called for each request that goes through the downloader.
            # Random pause to mimic a human visitor.
            if self.delay > 0:
                delay = random.randint(0, self.delay)
                time.sleep(delay)
            # Rotate the User-Agent.
            if len(self.user_agent_list) > 0:
                request.headers['User-Agent'] = random.choice(self.user_agent_list)
            if spider.name not in self.SPIDERS_USE_PROXY:
                return None

            # Attach a proxy.
            try:
                request.meta['change_proxy_times'] = request.meta.get('change_proxy_times', 0)
                # Build the proxy information.
                build_one_proxy(request, spider.name)
            except ProxyError:
                pass

            return None


    def build_one_proxy(request, app):
        # Whether to fetch a brand-new proxy instead of picking one from the pool.
        is_new_proxy = request.meta.get('is_new_proxy', False)
        # How many times the proxy has been switched for this request so far.
        change_proxy_times = request.meta.get('change_proxy_times', 999)
        # Two chances to re-pick from the pool, then one chance to fetch a new proxy.
        if is_new_proxy or change_proxy_times == 3:
            # Fetch a new proxy and add it to the pool.
            new_proxy = refresh_and_get_one_proxy(app)
            proxy_http_list.append(new_proxy)
            proxy_http = new_proxy
        elif change_proxy_times <= 2:
            proxy_http = random.choice(proxy_http_list)
        else:
            return None
        request.meta['proxy'] = proxy_http
        request.meta['change_proxy_times'] += 1
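For reference, a sketch of the spider-side configuration this middleware expects. CONCURRENT_REQUESTS, DOWNLOAD_DELAY and RANDOMIZE_DOWNLOAD_DELAY are built-in Scrapy settings covering the limit-concurrency / simulate-pauses direction; RANDOM_DELAY and USER_AGENT_LIST are the custom keys read by from_crawler above. The values are illustrative:

    import scrapy


    class CarSpider(scrapy.Spider):
        name = 'cars'
        custom_settings = {
            'CONCURRENT_REQUESTS': 2,          # built-in: cap concurrency
            'DOWNLOAD_DELAY': 1,               # built-in: pause between requests
            'RANDOMIZE_DOWNLOAD_DELAY': True,  # built-in: jitter the pause
            'RANDOM_DELAY': 3,                 # custom: max extra sleep, read in from_crawler
            'USER_AGENT_LIST': [
                'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...',
            ],
        }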

Custom Cookie not taking effect

    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Cookie': 'antipas=' + str(xxx),
        'Host': 'www.guazi.com',
        'Referer': 'https://www.xxx.com/',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.75 Safari/537.36',
    }

    yield scrapy.Request(url=xxx, callback=self.parse, headers=headers)

The Cookie actually sent with the request is not the value we set: by default Scrapy's cookie middleware manages cookies itself and overrides the Cookie header. The request then comes back as a 203 with content we don't want, and everything downstream breaks. Solution: set COOKIES_ENABLED to False.
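Per spider this can go into custom_settings, e.g. (spider name is a placeholder):

    import scrapy


    class CarSpider(scrapy.Spider):
        name = 'cars'
        custom_settings = {
            # Disable Scrapy's CookiesMiddleware so the Cookie header
            # set manually in headers is sent through untouched.
            'COOKIES_ENABLED': False,
        }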

Stuck in a 'Gave up retrying' infinite loop

...
2021-04-07 09:37:04 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying  (failed 4 times): Connection was refused by other side: 111: Connection refused
...

The error message shows that the same link was retried more than 3 times and then given up. RETRY_TIMES=3 is configured, which matches that maximum retry count, so why the infinite retrying?

The reason: after 3 retries the connection is still refused, the failure gets caught by process_spider_exception in SPIDER_MIDDLEWARES, and the handler there switches the proxy and returns a new request. That new request again exhausts the maximum retry count, gets caught again, and so on: an infinite loop (a guard against it is sketched after the note below)...

Note:

  • If the retry mechanism is enabled, a failing request is first retried automatically; only after those retries fail is it caught by process_spider_exception in SPIDER_MIDDLEWARES
  • RETRY_HTTP_CODES can be changed to control which response codes get retried
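One way to break the loop, assuming (as in the callback-retry example earlier) that the retry counter travels in meta; the class name and the cap are illustrative:

    import scrapy


    class ProxyRetrySpiderMiddleware:
        MAX_EXCEPTION_RETRIES = 3

        def process_spider_exception(self, response, exception, spider):
            retry_times = response.meta.get('retry_times', 0)
            if retry_times >= self.MAX_EXCEPTION_RETRIES:
                # Give up for real: log and swallow the exception instead of
                # re-issuing yet another request.
                spider.logger.error('giving up %s after %d retries: %r',
                                    response.url, retry_times, exception)
                return []
            # Re-issue with a fresh proxy and a bumped counter.
            url = response.meta.get('url', response.request.url)
            return [scrapy.Request(url=url,
                                   meta={'url': url, 'is_new_proxy': True,
                                         'retry_times': retry_times + 1},
                                   callback=spider.parse, dont_filter=True)]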