<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ??一站式輕松地調用各大LLM模型接口,支持GPT4、智譜、豆包、星火、月之暗面及文生圖、文生視頻 廣告
[TOC]

## 1. scrapyd

> 1. scrapyd is a free, open-source tool provided by the Scrapy project: a web-based service for managing the Scrapy projects you create.
> 2. scrapyd-client is a free, open-source tool for packaging and publishing your Scrapy projects to scrapyd. Publishing through scrapyd alone is somewhat cumbersome; this tool simplifies the deployment steps.

Official documentation: http://scrapyd.readthedocs.io/en/latest/overview.html

### 1.1 Install (Ubuntu)

* Prerequisite: Scrapy is already installed: https://doc.scrapy.org/en/latest/topics/ubuntu.html

~~~
# Install the build dependencies
sudo apt-get install -y libffi-dev libssl-dev libxml2-dev libxslt1-dev zlib1g-dev
sudo apt-get build-dep python-lxml

# Install scrapyd from source
git clone https://github.com/scrapy/scrapyd
cd scrapyd/
python3 setup.py install
~~~

Or:

~~~
pip3 install scrapyd
~~~

Common installation errors and fixes:

> 1. Error: `Invalid environment marker: python_version < '3'`

~~~
sudo pip3 install --upgrade setuptools
~~~

> 2. Error: `Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?`

~~~
sudo apt-get install -y libxml2-dev libxslt1-dev zlib1g-dev
~~~

> 3. Error: `error: Could not find required distribution pyasn1`

~~~
pip3 install pyasn1
~~~

> 4. Error: `error: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1`

~~~
sudo apt-get build-dep python-lxml
~~~

> 5. Error: `c/_cffi_backend.c:15:17: fatal error: ffi.h: No such file or directory  #include <ffi.h>`

~~~
sudo apt-get install libffi-dev
~~~

> 6. Error: `error: Setup script exited with error in cryptography setup command: Invalid environment marker: platform_python_implementation != 'PyPy'`

~~~
sudo pip install --upgrade setuptools
~~~

### 1.2 Configure scrapyd

> Scrapyd searches for configuration files in the following locations, and parses them in order with the latest one taking more priority:

~~~
/etc/scrapyd/scrapyd.conf (Unix)
c:\scrapyd\scrapyd.conf (Windows)
/etc/scrapyd/conf.d/* (in alphabetical order, Unix)
scrapyd.conf
~/.scrapyd.conf (users home directory)
~~~

By default scrapyd binds to 127.0.0.1. Change this to the server's IP address so that clients can send deployment requests to it:

~~~
# Create the directory
mkdir /etc/scrapyd
# Create the file
vim /etc/scrapyd/scrapyd.conf
# Add the configuration
[scrapyd]
eggs_dir          = eggs
logs_dir          = logs
items_dir         =
jobs_to_keep      = 5
dbs_dir           = dbs
max_proc          = 0
max_proc_per_cpu  = 4
finished_to_keep  = 100
poll_interval     = 5.0
bind_address      = 192.168.56.130
http_port         = 6800
debug             = off
runner            = scrapyd.runner
application       = scrapyd.app.application
launcher          = scrapyd.launcher.Launcher
webroot           = scrapyd.website.Root

[services]
schedule.json     = scrapyd.webservice.Schedule
cancel.json       = scrapyd.webservice.Cancel
addversion.json   = scrapyd.webservice.AddVersion
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json  = scrapyd.webservice.ListSpiders
delproject.json   = scrapyd.webservice.DeleteProject
delversion.json   = scrapyd.webservice.DeleteVersion
listjobs.json     = scrapyd.webservice.ListJobs
daemonstatus.json = scrapyd.webservice.DaemonStatus
~~~

### 1.3 Run scrapyd

~~~
nohup scrapyd > scrapyd.log 2>&1 &
~~~
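Once the daemon is up, it is worth verifying that it is reachable from another machine before attempting a deploy. A minimal check, assuming the bind address and port configured above (192.168.56.130:6800), calls the daemonstatus.json endpoint registered in the [services] section:

~~~
curl http://192.168.56.130:6800/daemonstatus.json
# A healthy daemon responds with something like:
# {"node_name": "zabbix01", "status": "ok", "pending": 0, "running": 0, "finished": 0}
~~~

If this times out, check that bind_address really is the server's IP and that port 6800 is open in the firewall.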
## 2. scrapyd-client

GitHub: https://github.com/scrapy/scrapyd-client

### 2.1 Install

~~~
pip3 install scrapyd-client
~~~

### 2.2 Deploy a spider project with scrapyd-deploy

#### 2.2.1 Configure the spider project

> Edit scrapy.cfg in the spider project and set the server (the one running scrapyd) that the project should be published to:

~~~
[deploy]
url = http://192.168.56.130:6800/
project = proxyscrapy
username = proxyscrapy
password = tuna
~~~

#### 2.2.2 Deploy

**1. Run the packaging command**

~~~
scrapyd-deploy
~~~

> On Windows this fails with:

~~~
E:\PythonWorkSpace\proxyscrapy>scrapyd-deploy
'scrapyd-deploy' is not recognized as an internal or external command, operable program or batch file
~~~

> * This happens because scrapyd-deploy is installed as a plain script with no Windows executable wrapper, so make the following changes:

1. In the Python installation directory, locate the Scripts directory and create a new file named scrapyd-deploy.bat
![](https://box.kancloud.cn/7ff68824b1f51d9b522ebad4489f2892_1214x501.png)
2. Add the following content:

~~~
@echo off
"D:\Python\Python36\python.exe" "D:\Python\Python36\Scripts\scrapyd-deploy" %1 %2 %3 %4 %5 %6 %7 %8 %9
~~~

> * Run the packaging command again; on success it returns:

~~~
Packing version 1519871059
Deploying to project "proxyscrapy" in http://192.168.56.130:6800/addversion.json
Server response (200):
{"project": "proxyscrapy", "status": "ok", "node_name": "zabbix01", "version": "1519871059", "spiders": 4}
~~~

**2. Run the spider**

On Windows you need to install curl first: http://www.hmoore.net/tuna_dai_/day01/535005

~~~
curl http://192.168.56.130:6800/schedule.json -d project=proxyscrapy -d spider=yaoq
~~~

scrapyd offers many more endpoints, including listing all projects, listing all spiders, and canceling running spiders; see the official API: http://scrapyd.readthedocs.io/en/latest/api.html

On success the command returns:

~~~
{"status": "ok", "node_name": "zabbix01", "jobid": "3db9af3e1d0011e88b5c080027a60f41"}
~~~

**3. Check spider status**

Open http://192.168.56.130:6800 and click "Jobs" to see the spiders:

![](https://box.kancloud.cn/d6ae60474a7ec7773d4b0112beec767e_863x565.png)

From there you can inspect each spider's status and logs:

![](https://box.kancloud.cn/429af5ca9c8be6bb88d0fd3695d59fd4_1189x341.png)

After changing code, just run scrapyd-deploy again to repackage and redeploy. Very convenient!

## 3. Deploy to multiple scrapyd servers

### 3.1 Configure the project's scrapy.cfg

> 1. Specify multiple targets (scrapyd servers) using the format [deploy:identifier]:

~~~
[deploy:zabbix01]
url = http://192.168.56.130:6800/
project = proxyscrapy
username = proxyscrapy
password = tuna

[deploy:es01]
url = http://192.168.56.130:6800/
project = proxyscrapy
username = proxyscrapy
password = tuna
~~~

### 3.2 Package the project to a scrapyd target

#### 3.2.1 Deploy to a single target

Run scrapyd-deploy [target identifier], for example:

~~~
E:\PythonWorkSpace\proxyscrapy>scrapyd-deploy zabbix01
Packing version 1519951093
Deploying to project "proxyscrapy" in http://192.168.56.130:6800/addversion.json
Server response (200):
{"status": "ok", "version": "1519951093", "node_name": "zabbix01", "spiders": 4, "project": "proxyscrapy"}

E:\PythonWorkSpace\proxyscrapy>scrapyd-deploy es01
Packing version 1519951106
Deploying to project "proxyscrapy" in http://192.168.56.130:6800/addversion.json
Server response (200):
{"status": "ok", "version": "1519951106", "node_name": "zabbix01", "spiders": 4, "project": "proxyscrapy"}
~~~

#### 3.2.2 Deploy to all targets at once

With `scrapyd-deploy -a` the project is packaged and deployed once per configured target; since both targets point at the same URL in this example, the same deployment appears twice:

~~~
E:\PythonWorkSpace\scrapyredis>scrapyd-deploy -a
Packing version 1519952580
Deploying to project "scrapyredis" in http://192.168.56.130:6800/addversion.json
Server response (200):
{"status": "ok", "version": "1519952580", "node_name": "zabbix01", "spiders": 1, "project": "scrapyredis"}
Packing version 1519952580
Deploying to project "scrapyredis" in http://192.168.56.130:6800/addversion.json
Server response (200):
{"status": "ok", "version": "1519952580", "node_name": "zabbix01", "spiders": 1, "project": "scrapyredis"}
~~~

> 1. List the available targets:

~~~
E:\PythonWorkSpace\proxyscrapy>scrapyd-deploy -l
zabbix01             http://192.168.56.130:6800/
es01                 http://192.168.56.130:6800/
~~~

> 2. List the projects deployed on a given target:

~~~
E:\PythonWorkSpace\proxyscrapy>scrapyd-deploy -L zabbix01
scrapyredis
proxyscrapy
~~~

> 3. Start the spiders on the servers (see the sketch below).
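To start the spiders, the same schedule.json call from section 2.2.2 can be issued against each target's URL. A minimal sketch, assuming both targets still point at 192.168.56.130 as configured above, paired with the listjobs.json endpoint from the [services] config to watch job state:

~~~
# Start the spider on each scrapyd server (repeat once per target URL)
curl http://192.168.56.130:6800/schedule.json -d project=proxyscrapy -d spider=yaoq

# Check job state on a target; the response lists pending, running, and finished jobs
curl "http://192.168.56.130:6800/listjobs.json?project=proxyscrapy"
~~~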
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看