- Find the class name `MeituanPipeline` in pipelines.py
- Activate the `MeituanPipeline` item pipeline in settings.py:

```
ITEM_PIPELINES = {
    'meituan.pipelines.MeituanPipeline': 300,
}
```

- cake.py

```
# -*- coding: utf-8 -*-
import scrapy
from ..items import MeituanItem  # import the Item class; scraped data is passed through it


class CakeSpider(scrapy.Spider):
    name = 'cake'
    allowed_domains = ['meituan.com']
    start_urls = ['http://i.meituan.com/s/changsha-蛋糕/']

    def parse(self, response):
        title_list = response.xpath('//*[@id="deals"]/dl/dd/dl/dd[1]/a/span[1]/text()').extract()
        money_list = response.xpath('//*[@id="deals"]/dl/dd[1]/dl/dd[2]/dl/dd[1]/a/div/div[2]/div[2]/span[1]/text()').extract()
        for i, j in zip(title_list, money_list):
            mt = MeituanItem()  # instantiate a fresh item for each deal
            mt['title'] = i  # hand the data to the item; mt['title'] corresponds to title = scrapy.Field() in items.py
            mt['money'] = j
            yield mt
```

- items.py

```
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class MeituanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    money = scrapy.Field()
```

- Add a print statement in pipelines.py to test:

```
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class MeituanPipeline(object):
    def process_item(self, item, spider):
        print(spider.name)
        return item
```

Run the spider:

```
> scrapy crawl cake
```
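The core of `parse()` is the `zip()` pairing of the title and price lists extracted by the two XPath queries. The sketch below shows that logic in plain Python (no Scrapy needed) with hypothetical sample data standing in for the `.extract()` results; note that `zip()` truncates at the shorter list, so a title without a matching price is silently dropped.

```python
def pair_deals(title_list, money_list):
    """Yield one dict per (title, price) pair, mirroring how the
    spider yields one MeituanItem per deal."""
    for title, money in zip(title_list, money_list):
        yield {'title': title, 'money': money}


# Hypothetical sample data in place of the XPath .extract() results.
titles = ['Cake A', 'Cake B', 'Cake C']
prices = ['88', '128']  # one price missing

items = list(pair_deals(titles, prices))
print(items)
# 'Cake C' is dropped because zip() stops at the shorter list
```

If the two XPath queries can return lists of different lengths on a real page, the deals will be mis-paired or dropped; a more robust approach is to select each deal node once and extract title and price relative to it.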