📜  Scrapy - Extracting Items

📅  Last modified: 2020-10-31 14:38:53             🧑  Author: Mango


Description

For extracting data from web pages, Scrapy uses a technique called selectors, which are based on XPath and CSS expressions. The following are some examples of XPath expressions:

  • /html/head/title - This will select the <title> element inside the <head> element of an HTML document.

  • /html/head/title/text() - This will select the text within the same <title> element.

  • //td - This will select all the <td> elements.

  • //div[@class="slice"] - This will select all the div elements that contain the attribute class="slice".

Selectors have four basic methods, as shown in the following table:

1. extract()
   It returns a unicode string along with the selected data.

2. re()
   It returns a list of unicode strings, extracted by applying the regular expression given as an argument.

3. xpath()
   It returns a list of selectors, which represent the nodes selected by the xpath expression given as an argument.

4. css()
   It returns a list of selectors, which represent the nodes selected by the CSS expression given as an argument.

Using Selectors in the Shell

To demonstrate selectors with the built-in Scrapy shell, you need to have IPython (https://ipython.org/) installed on your system. The important thing here is that the URL should be enclosed in quotes while running Scrapy; otherwise, URLs containing the '&' character won't work. You can start the shell by using the following command in the project's top-level directory:

scrapy shell "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"

The shell will look like the following:

[ ... Scrapy log here ... ]
2014-01-23 17:11:42-0400 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[s] Available Scrapy objects:
[s]   crawler    <scrapy.crawler.Crawler object at ...>
[s]   item       {}
[s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   settings   <scrapy.settings.Settings object at ...>
[s]   spider     <Spider at ...>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

In [1]:

When the shell loads, you can access the body or headers by using response.body and response.headers respectively. Similarly, you can run queries on the response using response.selector.xpath() or response.selector.css(). For instance:

In [1]: response.xpath('//title')
Out[1]: [<Selector xpath='//title' data=u'<title>My Book - Scrapy'>]

In [2]: response.xpath('//title').extract()
Out[2]: [u'<title>My Book - Scrapy: Index: Chapters</title>']

In [3]: response.xpath('//title/text()')
Out[3]: [<Selector xpath='//title/text()' data=u'My Book - Scrapy: Index:'>]

In [4]: response.xpath('//title/text()').extract()
Out[4]: [u'My Book - Scrapy: Index: Chapters']

In [5]: response.xpath('//title/text()').re('(\w+):')
Out[5]: [u'Scrapy', u'Index', u'Chapters']
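
For comparison, the same title text can also be selected with the css() method instead of xpath(). The following is a minimal sketch of the equivalent shell input, assuming the same page is still loaded (the In/Out numbering simply continues the session above):

In [6]: response.css('title::text').extract()
Out[6]: [u'My Book - Scrapy: Index: Chapters']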

Extracting Data

To extract data from an ordinary HTML site, we must inspect the source code of the site to obtain the XPaths. After inspecting, you can see that the data is inside a <ul> tag, so select the elements within the <li> tags.

The following lines of code show the extraction of different types of data.

For selecting the data within the <li> tags:

response.xpath('//ul/li')

For selecting the descriptions:

response.xpath('//ul/li/text()').extract()

For selecting the site titles:

response.xpath('//ul/li/a/text()').extract()

For selecting the site links:

response.xpath('//ul/li/a/@href').extract()
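
These expressions can also be tried outside the shell against a standalone Selector object. The following is a minimal, self-contained sketch; the HTML fragment and URLs in it are made up purely to mirror the <ul>/<li> structure described above:

from scrapy.selector import Selector

# Hypothetical HTML fragment that mirrors the <ul>/<li> structure described above.
html = """
<ul>
   <li>
      <a href="http://example.com/book1">Book One</a>
      - a short description of the first book
   </li>
   <li>
      <a href="http://example.com/book2">Book Two</a>
      - a short description of the second book
   </li>
</ul>
"""

selector = Selector(text=html)

# The same queries used above, run here against the fragment instead of a response.
for sel in selector.xpath('//ul/li'):
   title = sel.xpath('a/text()').extract()   # site title
   link = sel.xpath('a/@href').extract()     # site link
   desc = sel.xpath('text()').extract()      # description
   print(title, link, desc)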

The following code demonstrates the use of the above extractors:

import scrapy

class MyprojectSpider(scrapy.Spider):
   name = "project"
   allowed_domains = ["dmoz.org"]
   
   start_urls = [
      "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
      "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
   ]
   def parse(self, response):
      # Extract the title, link and description from each listing.
      for sel in response.xpath('//ul/li'):
         title = sel.xpath('a/text()').extract()
         link = sel.xpath('a/@href').extract()
         desc = sel.xpath('text()').extract()
         print(title, link, desc)
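
To see these extractors in action, save the spider inside your project and run it with the crawl command, using the spider's name attribute from the code above:

scrapy crawl project

Scrapy will fetch both start URLs and print the extracted titles, links, and descriptions to the console.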