Frequently asked questions
==========================

What is character encoding?
---------------------------

When you think of “text”, you probably think of “characters and symbols
I see on my computer screen”. But computers don’t deal in characters and
symbols; they deal in bits and bytes. Every piece of text you’ve ever
seen on a computer screen is actually stored in a particular *character
encoding*. There are many different character encodings, some optimized
for particular languages like Russian or Chinese or English, and others
that can be used for multiple languages. Very roughly speaking, the
character encoding provides a mapping between the stuff you see on your
screen and the stuff your computer actually stores in memory and on
disk.

In reality, it’s more complicated than that. Many characters are common
to multiple encodings, but each encoding may use a different sequence of
bytes to actually store those characters in memory or on disk. So you
can think of the character encoding as a kind of decryption key for the
text. Whenever someone gives you a sequence of bytes and claims it’s
“text”, you need to know what character encoding they used so you can
decode the bytes into characters and display them (or process them, or
whatever).
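
For example, here is a quick Python sketch of how the same character maps
to different byte sequences under different encodings, and how decoding
with the wrong “key” quietly produces the wrong characters (the character
and encodings are chosen purely for illustration):

.. code-block:: python

    text = "é"

    # The same character, stored as different byte sequences:
    print(text.encode("utf-8"))    # b'\xc3\xa9'
    print(text.encode("latin-1"))  # b'\xe9'

    # Decoding with the wrong "key" silently gives the wrong characters:
    print(text.encode("utf-8").decode("latin-1"))  # 'Ã©'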

What is character encoding auto-detection?
------------------------------------------

It means taking a sequence of bytes in an unknown character encoding,
and attempting to determine the encoding so you can read the text. It’s
like cracking a code when you don’t have the decryption key.
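
As a minimal sketch of what that looks like with this library (assuming it
is installed as the ``chardet`` package; the actual guess and confidence
will vary with the input):

.. code-block:: python

    import chardet

    mystery_bytes = "Ceci n’est pas de l’ASCII.".encode("windows-1252")
    result = chardet.detect(mystery_bytes)
    # A dict with the best guess and how confident the detector is, e.g.
    # {'encoding': ..., 'confidence': ..., 'language': ...}
    print(result["encoding"], result["confidence"])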

Isn’t that impossible?
----------------------

In general, yes. However, some encodings are optimized for specific
languages, and languages are not random. Some character sequences pop up
all the time, while other sequences make no sense. A person fluent in
English who opens a newspaper and finds “txzqJv 2!dasd0a QqdKjvz” will
instantly recognize that that isn’t English (even though it is composed
entirely of English letters). By studying lots of “typical” text, a
computer algorithm can simulate this kind of fluency and make an
educated guess about a text’s language.

In other words, encoding detection is really language detection,
combined with knowledge of which languages tend to use which character
encodings.
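
Here is a toy sketch of that idea, *not* this library’s actual algorithm:
score text against letter-pair statistics gathered from a sample of known
English, so that gibberish scores lower than real English:

.. code-block:: python

    from collections import Counter

    def bigrams(text):
        """All adjacent letter pairs in a string."""
        return [text[i:i + 2] for i in range(len(text) - 1)]

    def build_model(sample):
        """Relative frequency of each letter pair in 'typical' text."""
        counts = Counter(bigrams(sample.lower()))
        total = sum(counts.values())
        return {pair: n / total for pair, n in counts.items()}

    def englishness(text, model):
        """Average model frequency of the text's letter pairs; gibberish
        like 'txzqJv' is full of pairs the model has never seen."""
        pairs = bigrams(text.lower())
        if not pairs:
            return 0.0
        return sum(model.get(pair, 0.0) for pair in pairs) / len(pairs)

    model = build_model("the quick brown fox jumps over the lazy dog " * 100)
    print(englishness("the lazy dog", model) >
          englishness("txzqJv QqdKjvz", model))  # True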

Who wrote this detection algorithm?
-----------------------------------

This library is a port of `the auto-detection code in
Mozilla <http://lxr.mozilla.org/seamonkey/source/extensions/universalchardet/src/base/>`__.
I have attempted to maintain as much of the original structure as
possible (mostly for selfish reasons, to make it easier to maintain the
port as the original code evolves). I have also retained the original
authors’ comments, which are quite extensive and informative.

You may also be interested in the research paper which led to the
Mozilla implementation, `A composite approach to language/encoding
detection <http://www-archive.mozilla.org/projects/intl/UniversalCharsetDetection.html>`__.

Yippie! Screw the standards, I’ll just auto-detect everything!
--------------------------------------------------------------

Don’t do that. Virtually every format and protocol contains a method for
specifying character encoding.

- HTTP can define a ``charset`` parameter in the ``Content-type``
  header (see the sketch after this list).
- HTML documents can define a ``<meta http-equiv="content-type">``
  element in the ``<head>`` of a web page.
- XML documents can define an ``encoding`` attribute in the XML prolog.
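
For example, the declared charset can usually be pulled straight out of a
``Content-type`` header value with the standard library (a sketch; the
header value below is made up):

.. code-block:: python

    from email.message import Message

    def charset_from_content_type(header_value):
        """Extract the charset parameter from a Content-Type header value,
        e.g. 'text/html; charset=ISO-8859-1' -> 'iso-8859-1' (None if absent)."""
        msg = Message()
        msg["Content-Type"] = header_value
        return msg.get_content_charset()

    print(charset_from_content_type("text/html; charset=ISO-8859-1"))  # iso-8859-1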

If text comes with explicit character encoding information, you should
use it. If the text has no explicit information, but the relevant
standard defines a default encoding, you should use that. (This is
harder than it sounds, because standards can overlap. If you fetch an
XML document over HTTP, you need to support both standards *and* figure
out which one wins if they give you conflicting information.)

Despite the complexity, it’s worthwhile to follow standards and `respect
explicit character encoding
information <http://www.w3.org/2001/tag/doc/mime-respect>`__. It will
almost certainly be faster and more accurate than trying to auto-detect
the encoding. It will also make the world a better place, since your
program will interoperate with other programs that follow the same
standards.

Why bother with auto-detection if it’s slow, inaccurate, and non-standard?
--------------------------------------------------------------------------

Sometimes you receive text with verifiably inaccurate encoding
information. Or text without any encoding information, and the specified
default encoding doesn’t work. There are also some poorly designed
standards that have no way to specify encoding at all.

If following the relevant standards gets you nowhere, *and* you decide
that processing the text is more important than maintaining
interoperability, then you can try to auto-detect the character encoding
as a last resort. An example is my `Universal Feed
Parser <https://pythonhosted.org/feedparser/>`__, which calls this auto-detection
library `only after exhausting all other
options <https://pythonhosted.org/feedparser/character-encoding.html>`__.
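
A rough sketch of that order of preference; the function and its defaults
are hypothetical, not part of this library:

.. code-block:: python

    import chardet

    def decode_text(data, declared_encoding=None, default_encoding="utf-8"):
        """Try the explicitly declared encoding first, then the standard's
        default, and fall back to auto-detection only as a last resort."""
        for encoding in (declared_encoding, default_encoding):
            if not encoding:
                continue
            try:
                return data.decode(encoding)
            except (LookupError, UnicodeDecodeError):
                pass
        guess = chardet.detect(data)["encoding"]
        return data.decode(guess or "ascii", errors="replace")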