We can use the `logging` module to control the log format and the output destination.

(1) To save the log to a local file, add the following setting to `settings.py`:

```python
LOG_FILE = "books.log"  # logs will be written to the local file books.log
```

(2) To define your own log output format, call `logging.basicConfig`. Its most commonly used parameters are:

- `filename`: name of the log file; if `filename` is given, `LOG_FILE` in `settings.py` does not need to be set
- `filemode`: open mode of the log file, `'w'` or `'a'` (same meaning as the mode argument of `open()`)
- `format`: layout of each log record; useful placeholders include:
  - `%(levelno)s`: numeric value of the log level
  - `%(levelname)s`: name of the log level
  - `%(pathname)s`: path of the running program (essentially `sys.argv[0]`)
  - `%(filename)s`: file name of the running program
  - `%(funcName)s`: function that issued the log call
  - `%(lineno)d`: line number of the log call
  - `%(asctime)s`: time of the log record
  - `%(thread)d`: thread ID
  - `%(threadName)s`: thread name
  - `%(process)d`: process ID
  - `%(message)s`: the log message itself
- `datefmt`: time format, same syntax as `time.strftime()`
- `level`: minimum log level, `logging.WARNING` by default
- `stream`: output stream for the log, e.g. `sys.stderr`, `sys.stdout`, or a file object; defaults to `sys.stderr`. When both `stream` and `filename` are given, `stream` is ignored

The functions for emitting log messages:

```python
logging.debug('This is debug message')
logging.info('This is info message')
logging.warning('This is warning message')
```

A complete spider that configures logging and writes a warning when crawling starts:

```python
"""
Example: books.py
@Date 2021/4/7
"""
import logging

import scrapy

logging.basicConfig(level=logging.WARNING,
                    format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s',
                    datefmt='%a, %d %b %Y %H:%M:%S',
                    filename='zhipin.log',
                    filemode='w')
my_logging = logging.getLogger(__name__)


class BooksSpider(scrapy.Spider):
    name = 'books'                        # spider name
    allowed_domains = ['book.jd.com']     # crawl scope
    start_urls = ['http://book.jd.com/']  # site to crawl

    def parse(self, response):
        my_logging.warning("-------Start crawling!--------")
```
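If you prefer to keep the whole logging configuration in `settings.py` instead of calling `logging.basicConfig` inside the spider, Scrapy also exposes its own log settings (`LOG_FILE`, `LOG_LEVEL`, `LOG_FORMAT`, `LOG_DATEFORMAT`). A minimal sketch, reusing the `books.log` file name and the format string from the example above:

```python
# settings.py -- a minimal sketch using Scrapy's built-in log settings
# (file name and format reused from the example above)
LOG_FILE = "books.log"   # write logs to this file instead of the console
LOG_LEVEL = "WARNING"    # drop DEBUG/INFO records
LOG_FORMAT = "%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s"
LOG_DATEFORMAT = "%a, %d %b %Y %H:%M:%S"
```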
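Scrapy also attaches a `logger` to every spider (a standard `logging.Logger` named after the spider), so the module-level `my_logging` object above is optional. A minimal sketch of the same spider using it, with the spider name and start URL taken from the example above:

```python
import scrapy


class BooksSpider(scrapy.Spider):
    name = 'books'
    allowed_domains = ['book.jd.com']
    start_urls = ['http://book.jd.com/']

    def parse(self, response):
        # self.logger goes through the normal logging machinery,
        # so LOG_FILE / LOG_LEVEL from settings.py still apply
        self.logger.warning("-------Start crawling!--------")
```

Run the spider with `scrapy crawl books`; with `LOG_FILE` set, the warning should end up in the log file rather than on the console.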