  1. =========
  2. lxml.html
  3. =========
  4. :Author:
  5. Ian Bicking
  6. Since version 2.0, lxml comes with a dedicated Python package for
  7. dealing with HTML: ``lxml.html``. It is based on lxml's HTML parser,
  8. but provides a special Element API for HTML elements, as well as a
  9. number of utilities for common HTML processing tasks.
  10. .. contents::
  11. ..
  12. 1 Parsing HTML
  13. 1.1 Parsing HTML fragments
  14. 1.2 Really broken pages
  15. 2 HTML Element Methods
  16. 3 Running HTML doctests
  17. 4 Creating HTML with the E-factory
  18. 4.1 Viewing your HTML
  19. 5 Working with links
  20. 5.1 Functions
  21. 6 Forms
  22. 6.1 Form Filling Example
  23. 6.2 Form Submission
  24. 7 Cleaning up HTML
  25. 7.1 autolink
  26. 7.2 wordwrap
  27. 8 HTML Diff
  28. 9 Examples
  29. 9.1 Microformat Example
  30. The main API is based on the `lxml.etree`_ API, and thus, on the ElementTree_
  31. API.
  32. .. _`lxml.etree`: tutorial.html
  33. .. _ElementTree: http://effbot.org/zone/element-index.htm
  34. Parsing HTML
  35. ============
  36. Parsing HTML fragments
  37. ----------------------
  38. There are several functions available to parse HTML:
  39. ``parse(filename_url_or_file)``:
  40. Parses the named file or url, or if the object has a ``.read()``
  41. method, parses from that.
  42. If you give a URL, or if the object has a ``.geturl()`` method (as
  43. file-like objects from ``urllib.urlopen()`` have), then that URL
  44. is used as the base URL. You can also provide an explicit
  45. ``base_url`` keyword argument.
  46. ``document_fromstring(string)``:
  47. Parses a document from the given string. This always creates a
  48. correct HTML document, which means the parent node is ``<html>``,
  49. and there is a body and possibly a head.
  50. ``fragment_fromstring(string, create_parent=False)``:
  51. Returns an HTML fragment from a string. The fragment must contain
  52. just a single element, unless ``create_parent`` is given;
  53. e.g., ``fragment_fromstring(string, create_parent='div')`` will
  54. wrap the element in a ``<div>``.
  55. ``fragments_fromstring(string)``:
  56. Returns a list of the elements found in the fragment.
  57. ``fromstring(string)``:
  58. Returns ``document_fromstring`` or ``fragment_fromstring``, based
  59. on whether the string looks like a full document, or just a
  60. fragment.
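
For instance, a minimal sketch of how these functions differ (the markup here is invented for illustration):

.. sourcecode:: python

    from lxml.html import (document_fromstring, fragment_fromstring,
                           fragments_fromstring, fromstring)

    # A full document: the result is always rooted at <html>.
    doc = document_fromstring('<p>Hello</p>')
    print(doc.tag)                            # 'html'

    # A single fragment element; create_parent='div' would allow several.
    p = fragment_fromstring('<p>Hello</p>')
    print(p.tag)                              # 'p'

    # Several sibling fragments come back as a list of elements.
    print(fragments_fromstring('<p>one</p><p>two</p>'))

    # fromstring() chooses document or fragment parsing based on the input.
    print(fromstring('<p>Hello</p>').tag)     # the fragment element here
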
  61. Really broken pages
  62. -------------------
  63. The normal HTML parser is capable of handling broken HTML, but for
  64. pages that are far enough from HTML to call them 'tag soup', it may
  65. still fail to parse the page in a useful way. A way to deal with this
  66. is ElementSoup_, which deploys the well-known BeautifulSoup_ parser to
  67. build an lxml HTML tree.
  68. .. _BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/
  69. .. _ElementSoup: elementsoup.html
  70. However, note that the most common problem with web pages is the lack
  71. of (or the existence of incorrect) encoding declarations. It is
  72. therefore often sufficient to only use the encoding detection of
  73. BeautifulSoup, called UnicodeDammit, and to leave the rest to lxml's
  74. own HTML parser, which is several times faster.
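
If BeautifulSoup is installed, a minimal sketch of that fallback might look like this (``lxml.html.soupparser`` is the module behind ElementSoup; the broken markup is invented):

.. sourcecode:: python

    from lxml.html import soupparser

    # BeautifulSoup parses the tag soup; the result is an ordinary lxml tree.
    tag_soup = '<p>Badly <b>nested <p>markup</i>'
    root = soupparser.fromstring(tag_soup)
    print(root.tag)
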
  75. HTML Element Methods
  76. ====================
  77. HTML elements have all the methods that come with ElementTree, but
  78. also include some extra methods:
  79. ``.drop_tree()``:
  80. Drops the element and all its children. Unlike
  81. ``el.getparent().remove(el)`` this does *not* remove the tail
  82. text; with ``drop_tree`` the tail text is merged with the previous
  83. element.
  84. ``.drop_tag()``:
  85. Drops the tag, but keeps its children and text.
  86. ``.find_class(class_name)``:
  87. Returns a list of all the elements with the given CSS class name.
  88. Note that class names are space separated in HTML, so
  89. ``doc.find_class('highlight')`` will find an element like
  90. ``<div class="sidebar highlight">``. Class names *are* case
  91. sensitive.
  92. ``.find_rel_links(rel)``:
  93. Returns a list of all the ``<a rel="{rel}">`` elements. E.g.,
  94. ``doc.find_rel_links('tag')`` returns all the links `marked as
  95. tags <http://microformats.org/wiki/rel-tag>`_.
  96. ``.get_element_by_id(id, default=None)``:
  97. Return the element with the given ``id``, or the ``default`` if
  98. none is found. If there are multiple elements with the same id
  99. (which there shouldn't be, but there often are), this returns only
  100. the first.
  101. ``.text_content()``:
  102. Returns the text content of the element, including the text
  103. content of its children, with no markup.
  104. ``.cssselect(expr)``:
  105. Select elements from this element and its children, using a CSS
  106. selector expression. (Note that ``.xpath(expr)`` is also
  107. available as on all lxml elements.)
  108. ``.label``:
  109. Returns the corresponding ``<label>`` element for this element, if
  110. any exists (None if there is none). Label elements have a
  111. ``label.for_element`` attribute that points back to the element.
  112. ``.base_url``:
  113. The base URL for this element, if one was saved from the parsing.
  114. This attribute is not settable. It is None when no base URL was
  115. saved.
  116. ``.classes``:
  117. Returns a set-like object that allows accessing and modifying the
  118. names in the 'class' attribute of the element. (New in lxml 3.5).
  119. ``.set(key, value=None)``:
  120. Sets an HTML attribute. If no value is given, or if the value is
  121. ``None``, it creates a boolean attribute like ``<form novalidate></form>``
  122. or ``<div custom-attribute></div>``. In XML, attributes must
  123. have at least the empty string as their value like ``<form
  124. novalidate=""></form>``, but HTML boolean attributes can also be
  125. just present or absent from an element without having a value.
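
As a quick illustration, here is a small sketch exercising a few of these methods; the ids, classes and attributes are invented:

.. sourcecode:: python

    import lxml.html

    doc = lxml.html.fromstring(
        '<div id="main" class="sidebar highlight">'
        '<a rel="tag" href="/topics/python">python</a> news</div>')

    print(doc.find_class('highlight'))        # elements carrying that class
    print(doc.find_rel_links('tag'))          # the <a rel="tag"> element
    print(doc.get_element_by_id('main').tag)  # 'div'
    print(doc.text_content())                 # 'python news'
    print(doc.cssselect('a[rel]'))            # needs the cssselect package

    doc.classes.add('active')                 # set-like view of 'class' (lxml 3.5+)
    doc.set('hidden')                         # boolean attribute: <div hidden ...>
    print(lxml.html.tostring(doc))
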
  126. Running HTML doctests
  127. =====================
  128. One of the interesting modules in the ``lxml.html`` package deals with
  129. doctests. It can be hard to compare two HTML pages for equality, as
  130. whitespace differences aren't meaningful and the structural formatting
  131. can differ. This is even more a problem in doctests, where output is
  132. tested for equality and small differences in whitespace or the order
  133. of attributes can let a test fail. And given the verbosity of
  134. tag-based languages, it may take more than a quick look to find the
  135. actual differences in the doctest output.
  136. Luckily, lxml provides the ``lxml.doctestcompare`` module that
  137. supports relaxed comparison of XML and HTML pages and provides a
  138. readable diff in the output when a test fails. The HTML comparison is
  139. most easily used by importing the ``usedoctest`` module in a doctest:
  140. .. sourcecode:: pycon
  141. >>> import lxml.html.usedoctest
  142. Now, if you have an HTML document and want to compare it to an expected result
  143. document in a doctest, you can do the following:
  144. .. sourcecode:: pycon
  145. >>> import lxml.html
  146. >>> html = lxml.html.fromstring('''\
  147. ... <html><body onload="" color="white">
  148. ... <p>Hi !</p>
  149. ... </body></html>
  150. ... ''')
  151. >>> print lxml.html.tostring(html)
  152. <html><body onload="" color="white"><p>Hi !</p></body></html>
  153. >>> print lxml.html.tostring(html)
  154. <html> <body color="white" onload=""> <p>Hi !</p> </body> </html>
  155. >>> print lxml.html.tostring(html)
  156. <html>
  157. <body color="white" onload="">
  158. <p>Hi !</p>
  159. </body>
  160. </html>
  161. In documentation, you would likely prefer the pretty printed HTML output, as
  162. it is the most readable. However, the three documents are equivalent from the
  163. point of view of an HTML tool, so the doctest will silently accept any of the
  164. above. This allows you to concentrate on readability in your doctests, even
  165. if the real output is a straight ugly HTML one-liner.
  166. Note that there is also an ``lxml.usedoctest`` module which you can
  167. import for XML comparisons. The HTML parser notably ignores
  168. namespaces and some other XMLisms.
  169. Creating HTML with the E-factory
  170. ================================
  171. .. _`E-factory`: http://online.effbot.org/2006_11_01_archive.htm#et-builder
  172. lxml.html comes with a predefined HTML vocabulary for the `E-factory`_,
  173. originally written by Fredrik Lundh. This allows you to quickly generate HTML
  174. pages and fragments:
  175. .. sourcecode:: pycon
  176. >>> from lxml.html import builder as E
  177. >>> from lxml.html import usedoctest
  178. >>> html = E.HTML(
  179. ... E.HEAD(
  180. ... E.LINK(rel="stylesheet", href="great.css", type="text/css"),
  181. ... E.TITLE("Best Page Ever")
  182. ... ),
  183. ... E.BODY(
  184. ... E.H1(E.CLASS("heading"), "Top News"),
  185. ... E.P("World News only on this page", style="font-size: 200%"),
  186. ... "Ah, and here's some more text, by the way.",
  187. ... lxml.html.fromstring("<p>... and this is a parsed fragment ...</p>")
  188. ... )
  189. ... )
  190. >>> print lxml.html.tostring(html)
  191. <html>
  192. <head>
  193. <link href="great.css" rel="stylesheet" type="text/css">
  194. <title>Best Page Ever</title>
  195. </head>
  196. <body>
  197. <h1 class="heading">Top News</h1>
  198. <p style="font-size: 200%">World News only on this page</p>
  199. Ah, and here's some more text, by the way.
  200. <p>... and this is a parsed fragment ...</p>
  201. </body>
  202. </html>
  203. Note that you should use ``lxml.html.tostring`` and **not**
  204. ``lxml.etree.tostring``. ``lxml.etree.tostring(doc)`` will return the XML
  205. representation of the document, which is not valid HTML. In
  206. particular, things like ``<script src="..."></script>`` will be
  207. serialized as ``<script src="..." />``, which completely confuses
  208. browsers.
  209. Viewing your HTML
  210. -----------------
  211. A handy method for viewing your HTML:
  212. ``lxml.html.open_in_browser(lxml_doc)`` will write the document to
  213. disk and open it in a browser (with the `webbrowser module
  214. <http://python.org/doc/current/lib/module-webbrowser.html>`_).
  215. Working with links
  216. ==================
  217. There are several methods on elements that allow you to see and modify
  218. the links in a document.
  219. ``.iterlinks()``:
  220. This yields ``(element, attribute, link, pos)`` for every link in
  221. the document. ``attribute`` may be None if the link is in the
  222. text (as will be the case with a ``<style>`` tag with
  223. ``@import``).
  224. This finds any link in an ``action``, ``archive``, ``background``,
  225. ``cite``, ``classid``, ``codebase``, ``data``, ``href``,
  226. ``longdesc``, ``profile``, ``src``, ``usemap``, ``dynsrc``, or
  227. ``lowsrc`` attribute. It also searches ``style`` attributes for
  228. ``url(link)``, and ``<style>`` tags for ``@import`` and ``url()``.
  229. This function does *not* pay attention to ``<base href>``.
  230. ``.resolve_base_href()``:
  231. This function will modify the document in-place to take account of
  232. ``<base href>`` if the document contains that tag. In the process
  233. it will also remove that tag from the document.
  234. ``.make_links_absolute(base_href, resolve_base_href=True)``:
  235. This makes all links in the document absolute, assuming that
  236. ``base_href`` is the URL of the document. So if you pass
  237. ``base_href="http://localhost/foo/bar.html"`` and there is a link
  238. to ``baz.html`` that will be rewritten as
  239. ``http://localhost/foo/baz.html``.
  240. If ``resolve_base_href`` is true, then any ``<base href>`` tag
  241. will be taken into account (just calling
  242. ``self.resolve_base_href()``).
  243. ``.rewrite_links(link_repl_func, resolve_base_href=True, base_href=None)``:
  244. This rewrites all the links in the document using your given link
  245. replacement function. If you give a ``base_href`` value, all
  246. links will be passed in after they are joined with this URL.
  247. For each link ``link_repl_func(link)`` is called. That function
  248. then returns the new link, or None to remove the attribute or tag
  249. that contains the link. Note that all links will be passed in,
  250. including links like ``"#anchor"`` (which is purely internal), and
  251. things like ``"mailto:bob@example.com"`` (or ``javascript:...``).
  252. If you want access to the context of the link, you should use
  253. ``.iterlinks()`` instead.
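
A brief sketch of these methods in action; the URLs and file names are placeholders:

.. sourcecode:: python

    import lxml.html

    doc = lxml.html.fromstring(
        '<html><body><a href="baz.html">baz</a>'
        '<img src="logo.png"></body></html>',
        base_url='http://localhost/foo/bar.html')

    # Every link in the document, with the element and attribute it came from.
    for element, attribute, link, pos in doc.iterlinks():
        print(element.tag, attribute, link)

    # Resolve relative links against the document URL.
    doc.make_links_absolute('http://localhost/foo/bar.html')

    # Arbitrary rewriting; returning None from the function drops the link.
    doc.rewrite_links(lambda link: link.replace('http://localhost/',
                                                'https://mirror.invalid/'))
    print(lxml.html.tostring(doc))
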
  254. Functions
  255. ---------
  256. In addition to these methods, there are corresponding functions:
  257. * ``iterlinks(html)``
  258. * ``make_links_absolute(html, base_href, ...)``
  259. * ``rewrite_links(html, link_repl_func, ...)``
  260. * ``resolve_base_href(html)``
  261. These functions will parse ``html`` if it is a string, then return the new
  262. HTML as a string. If you pass in a document, the document will be copied
  263. (except for ``iterlinks()``), the method performed, and the new document
  264. returned.
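
For example, operating directly on a string (a small sketch; the URL is a placeholder):

.. sourcecode:: python

    from lxml.html import make_links_absolute, rewrite_links

    snippet = '<a href="baz.html">baz</a>'

    # String in, string out: the fragment is parsed, modified and re-serialized.
    print(make_links_absolute(snippet, 'http://localhost/foo/bar.html'))
    print(rewrite_links(snippet, lambda link: link.replace('baz', 'qux')))
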
  265. Forms
  266. =====
  267. Any ``<form>`` elements in a document are available through
  268. the list ``doc.forms`` (e.g., ``doc.forms[0]``). Form, input, select,
  269. and textarea elements each have special methods.
  270. Input elements (including ``<select>`` and ``<textarea>``) have these
  271. attributes:
  272. ``.name``:
  273. The name of the element.
  274. ``.value``:
  275. The value of an input, the content of a textarea, the selected
  276. option(s) of a select. This attribute can be set.
  277. In the case of a select that takes multiple options (``<select
  278. multiple>``) this will be a set of the selected options; you can
  279. add or remove items from it to select and unselect the options.
  280. Select attributes:
  281. ``.value_options``:
  282. For select elements, this is all the *possible* values (the values
  283. of all the options).
  284. ``.multiple``:
  285. For select elements, true if this is a ``<select multiple>``
  286. element.
  287. Input attributes:
  288. ``.type``:
  289. The type attribute in ``<input>`` elements.
  290. ``.checkable``:
  291. True if this can be checked (i.e., true for type=radio and
  292. type=checkbox).
  293. ``.checked``:
  294. If this element is checkable, the checked state. Raises
  295. AttributeError on non-checkable inputs.
  296. The form itself has these attributes:
  297. ``.inputs``:
  298. A dictionary-like object that can be used to access input elements
  299. by name. When there are multiple input elements with the same
  300. name, this returns list-like structures that can also be used to
  301. access the options and their values as a group.
  302. ``.fields``:
  303. A dictionary-like object used to access *values* by their name.
  304. Whereas ``form.inputs`` returns elements, this only returns values.
  305. Setting values in this dictionary will affect the form inputs.
  306. Basically ``form.fields[x]`` is equivalent to
  307. ``form.inputs[x].value`` and ``form.fields[x] = y`` is equivalent
  308. to ``form.inputs[x].value = y``. (Note that sometimes
  309. ``form.inputs[x]`` returns a compound object, but these objects
  310. also have ``.value`` attributes.)
  311. If you set this attribute, it is equivalent to
  312. ``form.fields.clear(); form.fields.update(new_value)``
  313. ``.form_values()``:
  314. Returns a list of ``[(name, value), ...]``, suitable to be passed
  315. to ``urllib.urlencode()`` for form submission.
  316. ``.action``:
  317. The ``action`` attribute. This is resolved to an absolute URL if
  318. possible.
  319. ``.method``:
  320. The ``method`` attribute, which defaults to ``GET``.
  321. Form Filling Example
  322. --------------------
  323. Note that you can change any of these attributes (values, method,
  324. action, etc) and then serialize the form to see the updated values.
  325. You can, for instance, do:
  326. .. sourcecode:: pycon
  327. >>> from lxml.html import fromstring, tostring
  328. >>> form_page = fromstring('''<html><body><form>
  329. ... Your name: <input type="text" name="name"> <br>
  330. ... Your phone: <input type="text" name="phone"> <br>
  331. ... Your favorite pets: <br>
  332. ... Dogs: <input type="checkbox" name="interest" value="dogs"> <br>
  333. ... Cats: <input type="checkbox" name="interest" value="cats"> <br>
  334. ... Llamas: <input type="checkbox" name="interest" value="llamas"> <br>
  335. ... <input type="submit"></form></body></html>''')
  336. >>> form = form_page.forms[0]
  337. >>> form.fields = dict(
  338. ... name='John Smith',
  339. ... phone='555-555-3949',
  340. ... interest=set(['cats', 'llamas']))
  341. >>> print tostring(form)
  342. <html>
  343. <body>
  344. <form>
  345. Your name:
  346. <input name="name" type="text" value="John Smith">
  347. <br>Your phone:
  348. <input name="phone" type="text" value="555-555-3949">
  349. <br>Your favorite pets:
  350. <br>Dogs:
  351. <input name="interest" type="checkbox" value="dogs">
  352. <br>Cats:
  353. <input checked name="interest" type="checkbox" value="cats">
  354. <br>Llamas:
  355. <input checked name="interest" type="checkbox" value="llamas">
  356. <br>
  357. <input type="submit">
  358. </form>
  359. </body>
  360. </html>
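
Continuing with the ``form`` object from the example above, a short sketch of the ``.inputs``, ``.checked`` and ``.form_values()`` APIs (the values shown in comments are approximate):

.. sourcecode:: python

    print(form.inputs['name'].value)         # e.g. 'John Smith'
    print(form.fields['phone'])              # the same data, accessed as a value

    group = form.inputs['interest']          # checkboxes sharing one name
    print(group.value)                       # set-like group of checked values
    print(group[0].checked)                  # state of the first checkbox

    print(form.form_values())                # [(name, value), ...] pairs
    print(form.method)                       # 'GET' unless the form says otherwise
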
  361. Form Submission
  362. ---------------
  363. You can submit a form with ``lxml.html.submit_form(form_element)``.
  364. This will return a file-like object (the result of
  365. ``urllib.urlopen()``).
  366. If you have extra input values you want to pass you can use the
  367. keyword argument ``extra_values``, like ``extra_values={'submit':
  368. 'Yes!'}``. This is the only way to get submit values into the form,
  369. as there is no state of "submitted" for these elements.
  370. You can pass in an alternate opener with the ``open_http`` keyword
  371. argument, which is a function with the signature ``open_http(method,
  372. url, values)``.
  373. Example:
  374. .. sourcecode:: pycon
  375. >>> from lxml.html import parse, submit_form
  376. >>> page = parse('http://tinyurl.com').getroot()
  377. >>> page.forms[0].fields['url'] = 'http://lxml.de/'
  378. >>> result = parse(submit_form(page.forms[0])).getroot()
  379. >>> [a.attrib['href'] for a in result.xpath("//a[@target='_blank']")]
  380. ['http://tinyurl.com/2xae8s', 'http://preview.tinyurl.com/2xae8s']
  381. Cleaning up HTML
  382. ================
  383. The module ``lxml.html.clean`` provides a ``Cleaner`` class for cleaning up
  384. HTML pages. It supports removing embedded or script content, special tags,
  385. CSS style annotations and much more.
  386. Note: the HTML Cleaner in ``lxml.html.clean`` is **not** considered
  387. appropriate **for security sensitive environments**.
  388. See e.g. `bleach <https://pypi.org/project/bleach/>`_ for an alternative.
  389. Say, you have an overburdened web page from a hideous source which contains
  390. lots of content that upsets browsers and tries to run unnecessary code on the
  391. client side:
  392. .. sourcecode:: pycon
  393. >>> html = '''\
  394. ... <html>
  395. ... <head>
  396. ... <script type="text/javascript" src="evil-site"></script>
  397. ... <link rel="alternate" type="text/rss" src="evil-rss">
  398. ... <style>
  399. ... body {background-image: url(javascript:do_evil)};
  400. ... div {color: expression(evil)};
  401. ... </style>
  402. ... </head>
  403. ... <body onload="evil_function()">
  404. ... <!-- I am interpreted for EVIL! -->
  405. ... <a href="javascript:evil_function()">a link</a>
  406. ... <a href="#" onclick="evil_function()">another link</a>
  407. ... <p onclick="evil_function()">a paragraph</p>
  408. ... <div style="display: none">secret EVIL!</div>
  409. ... <object> of EVIL! </object>
  410. ... <iframe src="evil-site"></iframe>
  411. ... <form action="evil-site">
  412. ... Password: <input type="password" name="password">
  413. ... </form>
  414. ... <blink>annoying EVIL!</blink>
  415. ... <a href="evil-site">spam spam SPAM!</a>
  416. ... <image src="evil!">
  417. ... </body>
  418. ... </html>'''
  419. To remove all the superfluous content from this unparsed document, use the
  420. ``clean_html`` function:
  421. .. sourcecode:: pycon
  422. >>> from lxml.html.clean import clean_html
  423. >>> print clean_html(html)
  424. <div><style>/* deleted */</style><body>
  425. <a href="">a link</a>
  426. <a href="#">another link</a>
  427. <p>a paragraph</p>
  428. <div>secret EVIL!</div>
  429. of EVIL!
  430. Password:
  431. annoying EVIL!<a href="evil-site">spam spam SPAM!</a>
  432. <img src="evil!"></body></div>
  433. The ``Cleaner`` class supports several keyword arguments to control exactly
  434. which content is removed:
  435. .. sourcecode:: pycon
  436. >>> from lxml.html.clean import Cleaner
  437. >>> cleaner = Cleaner(page_structure=False, links=False)
  438. >>> print cleaner.clean_html(html)
  439. <html>
  440. <head>
  441. <link rel="alternate" src="evil-rss" type="text/rss">
  442. <style>/* deleted */</style>
  443. </head>
  444. <body>
  445. <a href="">a link</a>
  446. <a href="#">another link</a>
  447. <p>a paragraph</p>
  448. <div>secret EVIL!</div>
  449. of EVIL!
  450. Password:
  451. annoying EVIL!
  452. <a href="evil-site">spam spam SPAM!</a>
  453. <img src="evil!">
  454. </body>
  455. </html>
  456. >>> cleaner = Cleaner(style=True, links=True, add_nofollow=True,
  457. ... page_structure=False, safe_attrs_only=False)
  458. >>> print cleaner.clean_html(html)
  459. <html>
  460. <head>
  461. </head>
  462. <body>
  463. <a href="">a link</a>
  464. <a href="#">another link</a>
  465. <p>a paragraph</p>
  466. <div>secret EVIL!</div>
  467. of EVIL!
  468. Password:
  469. annoying EVIL!
  470. <a href="evil-site" rel="nofollow">spam spam SPAM!</a>
  471. <img src="evil!">
  472. </body>
  473. </html>
  474. You can also whitelist some otherwise dangerous content with
  475. ``Cleaner(host_whitelist=['www.youtube.com'])``, which would allow
  476. embedded media from YouTube, while still filtering out embedded media
  477. from other sites.
  478. See the docstring of ``Cleaner`` for the details of what can be
  479. cleaned.
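
For instance, a hedged sketch of such a whitelist-based configuration (the embed URLs are invented):

.. sourcecode:: python

    from lxml.html.clean import Cleaner

    # Allow embedded media from YouTube; embeds from other hosts are removed.
    cleaner = Cleaner(host_whitelist=['www.youtube.com'])
    html = ('<div><iframe src="https://www.youtube.com/embed/abc"></iframe>'
            '<iframe src="http://other-site.invalid/embed"></iframe></div>')
    print(cleaner.clean_html(html))
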
  480. autolink
  481. --------
  482. In addition to cleaning up malicious HTML, ``lxml.html.clean``
  483. contains functions to do other things to your HTML. This includes
  484. autolinking::
  485. autolink(doc, ...)
  486. autolink_html(html, ...)
  487. This finds anything that looks like a link (e.g.,
  488. ``http://example.com``) in the *text* of an HTML document, and
  489. turns it into an anchor. It avoids making bad links.
  490. Links are not added inside the elements ``<textarea>``, ``<pre>``, ``<code>``,
  491. or anything in the head of the document. You can pass in a list of
  492. elements to avoid in ``avoid_elements=['textarea', ...]``.
  493. Links to some hosts can be avoided. By default links to
  494. ``localhost*``, ``example.*`` and ``127.0.0.1`` are not
  495. autolinked. Pass in ``avoid_hosts=[list_of_regexes]`` to control
  496. this.
  497. Elements with the ``nolink`` CSS class are not autolinked. Pass
  498. in ``avoid_classes=['code', ...]`` to control this.
  499. The ``autolink_html()`` version of the function parses the HTML
  500. string first, and returns a string.
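
A small sketch of the string version; the paragraph text is invented:

.. sourcecode:: python

    from lxml.html.clean import autolink_html

    html = '<p>The documentation lives at http://lxml.de/lxmlhtml.html for now.</p>'
    # The bare URL in the text becomes an <a href="..."> anchor; behaviour can
    # be tuned with avoid_elements=, avoid_hosts= and avoid_classes=.
    print(autolink_html(html))
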
  501. wordwrap
  502. --------
  503. You can also wrap long words in your html::
  504. word_break(doc, max_width=40, ...)
  505. word_break_html(html, ...)
  506. This finds any long words in the text of the document and inserts
  507. ``&#8203;`` in the document (which is the Unicode zero-width space).
  508. This avoids the elements ``<pre>``, ``<textarea>``, and ``<code>``.
  509. You can control this with ``avoid_elements=['textarea', ...]``.
  510. It also avoids elements with the CSS class ``nobreak``. You can
  511. control this with ``avoid_classes=['code', ...]``.
  512. Lastly you can control the character that is inserted with
  513. ``break_character=u'\u200b'``. However, you cannot insert markup,
  514. only text.
  515. ``word_break_html(html)`` parses the HTML document and returns a
  516. string.
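
For example, a minimal sketch wrapping an over-long word:

.. sourcecode:: python

    from lxml.html.clean import word_break_html

    html = '<p>%s</p>' % ('a' * 120)
    # Inserts zero-width spaces (&#8203;) into words longer than max_width.
    print(word_break_html(html, max_width=40))
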
  517. HTML Diff
  518. =========
  519. The module ``lxml.html.diff`` offers some ways to visualize
  520. differences in HTML documents. These differences are *content*
  521. oriented. That is, changes in markup are largely ignored; only
  522. changes in the content itself are highlighted.
  523. There are two ways to view differences: ``htmldiff`` and
  524. ``html_annotate``. One shows differences with ``<ins>`` and
  525. ``<del>``, while the other annotates a set of changes similar to ``svn
  526. blame``. Both these functions operate on text, and work best with
  527. content fragments (only what goes in ``<body>``), not complete
  528. documents.
  529. Example of ``htmldiff``:
  530. .. sourcecode:: pycon
  531. >>> from lxml.html.diff import htmldiff, html_annotate
  532. >>> doc1 = '''<p>Here is some text.</p>'''
  533. >>> doc2 = '''<p>Here is <b>a lot</b> of <i>text</i>.</p>'''
  534. >>> doc3 = '''<p>Here is <b>a little</b> <i>text</i>.</p>'''
  535. >>> print htmldiff(doc1, doc2)
  536. <p>Here is <ins><b>a lot</b> of <i>text</i>.</ins> <del>some text.</del> </p>
  537. >>> print html_annotate([(doc1, 'author1'), (doc2, 'author2'),
  538. ... (doc3, 'author3')])
  539. <p><span title="author1">Here is</span>
  540. <b><span title="author2">a</span>
  541. <span title="author3">little</span></b>
  542. <i><span title="author2">text</span></i>
  543. <span title="author2">.</span></p>
  544. As you can see, it is imperfect as such things tend to be. On larger
  545. tracts of text with larger edits it will generally do better.
  546. The ``html_annotate`` function can also take an optional second
  547. argument, ``markup``. This is a function like ``markup(text,
  548. version)`` that returns the given text marked up with the given
  549. version. The default version, the output of which you see in the
  550. example, looks like:
  551. .. sourcecode:: python
  552. def default_markup(text, version):
  553.     return '<span title="%s">%s</span>' % (
  554.         cgi.escape(unicode(version), 1), text)
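
You can pass your own callable instead; a hedged sketch (the CSS-class wrapper is invented):

.. sourcecode:: python

    from lxml.html.diff import html_annotate

    def css_markup(text, version):
        # Tag each run of text with its version as a CSS class instead of a title.
        return '<span class="by-%s">%s</span>' % (version, text)

    print(html_annotate([('<p>Hi.</p>', 'v1'), ('<p>Hi there.</p>', 'v2')],
                        markup=css_markup))
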
  555. Examples
  556. ========
  557. Microformat Example
  558. -------------------
  559. This example parses the `hCard <http://microformats.org/wiki/hcard>`_
  560. microformat.
  561. First we get the page:
  562. .. sourcecode:: pycon
  563. >>> import urllib
  564. >>> from lxml.html import fromstring
  565. >>> url = 'http://microformats.org/'
  566. >>> content = urllib.urlopen(url).read()
  567. >>> doc = fromstring(content)
  568. >>> doc.make_links_absolute(url)
  569. Then we create some objects to put the information in:
  570. .. sourcecode:: pycon
  571. >>> class Card(object):
  572. ... def __init__(self, **kw):
  573. ... for name, value in kw.items():
  574. ... setattr(self, name, value)
  575. >>> class Phone(object):
  576. ... def __init__(self, phone, types=()):
  577. ... self.phone, self.types = phone, types
  578. And some generally handy functions for microformats:
  579. .. sourcecode:: pycon
  580. >>> def get_text(el, class_name):
  581. ... els = el.find_class(class_name)
  582. ... if els:
  583. ... return els[0].text_content()
  584. ... else:
  585. ... return ''
  586. >>> def get_value(el):
  587. ... return get_text(el, 'value') or el.text_content()
  588. >>> def get_all_texts(el, class_name):
  589. ... return [e.text_content() for e in el.find_class(class_name)]
  590. >>> def parse_addresses(el):
  591. ... # Ideally this would parse street, etc.
  592. ... return el.find_class('adr')
  593. Then the parsing:
  594. .. sourcecode:: pycon
  595. >>> for el in doc.find_class('hcard'):
  596. ... card = Card()
  597. ... card.el = el
  598. ... card.fn = get_text(el, 'fn')
  599. ... card.tels = []
  600. ... for tel_el in el.find_class('tel'):
  601. ... card.tels.append(Phone(get_value(tel_el),
  602. ... get_all_texts(tel_el, 'type')))
  603. ... card.addresses = parse_addresses(el)