[TOC]

# Python Web Scraping: How to Scrape Client-Side Rendered (CSR) Pages

## Common mechanisms behind client-side rendered (Client Side Render) pages

- AJAX
- JavaScript

### How to analyze an AJAX request endpoint

AJAX stands for Asynchronous JavaScript And XML. An AJAX application may transfer data as XML, but sending it as plain text or JSON is just as common.

AJAX allows a page to be updated asynchronously by exchanging data with the web server in the background, which means parts of a page can be refreshed without reloading the whole page.

So how does AJAX update page data without refreshing the page? It relies on the `XMLHttpRequest` object: XHR (XMLHttpRequest) exchanges data with the server in the background, and all modern browsers support it.

Open the browser's developer tools (press F12) => visit `https://www.jianshu.com` (scroll to the bottom of the page and click 閱讀更多, "Read more") => "Network" => "Filter" => select "XHR".

The entries listed under `XHR` are the AJAX request URLs (their headers contain `"x-requested-with": "XMLHttpRequest"` and `"x-pjax": "true"`).

Find the `trending_notes` request that fires when you click "Read more", right-click it and choose `Copy` => `Copy as fetch`. You should see something like the following:

```js
fetch("https://www.jianshu.com/trending_notes", {
  "headers": {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36",
    "accept": "text/html, */*; q=0.01",
    "accept-language": "zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5",
    "cache-control": "no-cache",
    "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
    "pragma": "no-cache",
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "x-csrf-token": "jQ8cjTTjfPWR0dEoYEjtBSzr6v7XxsDg3x21en7Sl2eIuLg2WYmRhl+HR/iKNaeLxkvE7hdZcXILWYXMXyoKZQ==",
    "x-pjax": "true",
    "x-requested-with": "XMLHttpRequest"
  },
  "referrer": "https://www.jianshu.com/",
  "referrerPolicy": "no-referrer-when-downgrade",
  "body": "page=4&seen_snote_ids%5B%5D=73208581&seen_snote_ids%5B%5D=70592266&seen_snote_ids%5B%5D=56402115&seen_snote_ids%5B%5D=70049570&seen_snote_ids%5B%5D=71683600&seen_snote_ids%5B%5D=54427150&seen_snote_ids%5B%5D=69777587&seen_snote_ids%5B%5D=72391260&seen_snote_ids%5B%5D=73111362&seen_snote_ids%5B%5D=70579958&seen_snote_ids%5B%5D=72330113&seen_snote_ids%5B%5D=72741304&seen_snote_ids%5B%5D=73501668&seen_snote_ids%5B%5D=72890232&seen_snote_ids%5B%5D=71820900&seen_snote_ids%5B%5D=70868542&seen_snote_ids%5B%5D=72294439&seen_snote_ids%5B%5D=72060912&seen_snote_ids%5B%5D=72060165&seen_snote_ids%5B%5D=70942923&seen_snote_ids%5B%5D=71081191",
  "method": "POST",
  "mode": "cors",
  "credentials": "include"
});
```

Let's analyze this request:

- The `method` is `GET` while `page < 3` and `POST` after that, which is to say both methods are used depending on the page.
- The request `body` carries `page=4` plus a series of `seen_snote_ids[]=xxxx` entries, so the parameters are the page number and the ids of the articles that have already been shown.
- The headers can simply be kept as captured.
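Before writing the scraper, it is worth checking how that body can be reproduced from Python. The sketch below (the three ids are a shortened sample from the capture above) shows that a dict whose value is a list URL-encodes into exactly the repeated `seen_snote_ids[]` pairs DevTools recorded; `requests` applies the equivalent encoding when the same dict is passed as `data=` or `params=`:

```Python
from urllib.parse import urlencode

# Page number plus one seen_snote_ids[] entry per article already shown
# (ids shortened to a three-item sample from the captured body).
payload = {
    'page': 4,
    'seen_snote_ids[]': ['73208581', '70592266', '56402115'],
}

# doseq=True expands the list into repeated key=value pairs, matching the
# "page=4&seen_snote_ids%5B%5D=..." body captured in DevTools.
print(urlencode(payload, doseq=True))
# page=4&seen_snote_ids%5B%5D=73208581&seen_snote_ids%5B%5D=70592266&seen_snote_ids%5B%5D=56402115
```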
Now let's implement the scraping code:

```Python
import requests as req
from lxml import etree
import time

url_host = 'https://www.jianshu.com'
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36",
    "accept": "text/html, */*; q=0.01",
    "accept-language": "zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5",
    "cache-control": "no-cache",
    "content-type": "application/x-www-form-urlencoded; charset=UTF-8",
    "pragma": "no-cache",
    "sec-fetch-dest": "empty",
    "sec-fetch-mode": "cors",
    "sec-fetch-site": "same-origin",
    "x-csrf-token": "jQ8cjTTjfPWR0dEoYEjtBSzr6v7XxsDg3x21en7Sl2eIuLg2WYmRhl+HR/iKNaeLxkvE7hdZcXILWYXMXyoKZQ==",
    "x-pjax": "true",
    "x-requested-with": "XMLHttpRequest"
}

def jianshu_trending(page, payload):
    # Scrape one page of jianshu's "discover" article list.
    max_page = 3
    if page > max_page:
        url = url_host + '/trending_notes'  # AJAX endpoint used after the first few pages
    else:
        url = url_host                      # earlier pages are rendered into the homepage itself
    print(payload)
    seen_list = []
    if page > max_page:
        resp = req.post(url, data=payload, headers=headers)
    else:
        resp = req.get(url, params=payload, headers=headers)
    doc = etree.HTML(resp.text)
    # Only the article entries carry a data-note-id attribute.
    li_list = doc.xpath('//li[@data-note-id]')
    print('*' * 40)
    for item in li_list:
        # Use a relative path here: '//li/@data-note-id' would always return
        # the first article's id instead of the current item's.
        note_id = item.xpath('@data-note-id')[0]
        seen_list.append(note_id)
        url = url_host + item.xpath('div[@class="content"]/a[@class="title"]/@href')[0]
        title = item.xpath('div[@class="content"]/a[@class="title"]/text()')[0]
        brief = str(item.xpath('div[@class="content"]/p[@class="abstract"]/text()')[0])
        user = item.xpath('div[@class="content"]/div[@class="meta"]/a[@class="nickname"]/text()')[0]
        user_url = url_host + item.xpath('div[@class="content"]/div[@class="meta"]/a[@class="nickname"]/@href')[0]
        span = item.xpath('div[@class="content"]/div[@class="meta"]/span/text()')
        like = span[0]
        if len(span) == 2:
            like = span[1]
        print('Title: ' + title + ' | Author: ' + user + ' Profile: ' + user_url)
        # print('Link: ' + url)
        # print('Likes: ' + like)
        # print('Abstract: ' + brief.strip())
    print('*' * 40)
    return seen_list

if __name__ == '__main__':
    seen_list = []
    for i in range(0, 15):
        if len(seen_list) > 0:
            payload = {'page': i, 'seen_snote_ids[]': seen_list}
        else:
            payload = {'page': i}
        seen_list += jianshu_trending(i, payload)
        time.sleep(3)
```
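One caveat about the headers: the `x-csrf-token` copied from DevTools belongs to that browser session and will stop working at some point. Below is a hedged sketch of how it might be refreshed programmatically, assuming jianshu follows the common Rails convention of embedding the current token in a `<meta name="csrf-token">` tag; `fetch_csrf_token` is a hypothetical helper, not part of the original script. If the tag is not present, fall back to copying the token from DevTools as shown above.

```Python
import requests as req
from lxml import etree

url_host = 'https://www.jianshu.com'

def fetch_csrf_token(session):
    # Hypothetical helper: load the homepage and read the token from the
    # <meta name="csrf-token"> tag. This assumes the usual Rails convention;
    # it has not been verified against jianshu's current markup.
    resp = session.get(url_host)
    doc = etree.HTML(resp.text)
    token = doc.xpath('//meta[@name="csrf-token"]/@content')
    return token[0] if token else None

session = req.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)'})
token = fetch_csrf_token(session)
if token:
    session.headers['x-csrf-token'] = token  # reuse this session for the POST requests above
```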