UTF-16

UTF-16 (16-bit Unicode Transformation Format) is a character encoding for Unicode capable of encoding all 1,112,064 valid code points in the Unicode code space from 0 to 0x10FFFF. The encoding is variable-length: code points are encoded with one or two 16-bit code units.

The older UCS-2 (2-byte Universal Character Set) is a similar character encoding that was superseded by UTF-16 in version 2.0 of the Unicode standard in July 1996.[1] UCS-2 produces a fixed-length format by simply using the code point as the 16-bit code unit. UTF-16 expands the code space significantly by using surrogate pairs to encode code points above 0xFFFF, and it produces the same result as UCS-2 for every code point in the range 0 to 0xFFFF that has been, or ever will be, assigned a character.

UTF-16 is officially defined in Annex Q of the international standard ISO/IEC 10646.[2] It is also described in "The Unicode Standard" version 2.0 and higher, as well as in the IETF's RFC 2781.

Description

The Unicode code space is divided into seventeen planes of 2^16 (65,536) code points each, though some code points have not yet been assigned character values, some are reserved for private use, and some are permanently reserved as non-characters. The code points in each plane have the hexadecimal values xx0000 to xxFFFF, where xx is a hex value from 00 to 10, signifying the plane to which the values belong.
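
Since each plane spans 2^16 code points, the plane of a code point is simply its value shifted right by sixteen bits. A minimal illustration in Python:

# Plane number = code point >> 16; plane 0 is the BMP,
# planes 1 through 16 are the supplementary planes.
for cp in (0x0041, 0x6C34, 0x1D11E, 0x10FFFD):
    print(hex(cp), "lies in plane", cp >> 16)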

Code points U+0000 to U+D7FF and U+E000 to U+FFFF

The first plane (code points U+0000 to U+FFFF) contains the most frequently used characters and is called the Basic Multilingual Plane or BMP. Both UTF-16 and UCS-2 encode code points in this range as single 16-bit code units that are numerically equal to the corresponding code points. The code points in the BMP are the only code points that can be represented in UCS-2. Within this plane, code points U+D800 to U+DFFF (see below) are reserved for lead and trail surrogates.
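
This identity between code point and code unit can be observed directly; for example, in Python:

# A BMP character encodes as a single 16-bit code unit whose value
# equals the code point (here shown in big-endian byte order).
print("水".encode("utf-16-be").hex())  # '6c34'
print(hex(ord("水")))                  # '0x6c34', the same value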

Code points U+010000 to U+10FFFF

Code points from the other planes (called Supplementary Planes) are encoded in UTF-16 by pairs of 16-bit code units called surrogate pairs, by the following scheme:

UTF-16 decoder
Lead \ Trail    DC00      DC01      …      DFFF
D800            010000    010001    …      0103FF
D801            010400    010401    …      0107FF
  ⋮               ⋮         ⋮                ⋮
DBFF            10FC00    10FC01    …      10FFFF
  • 0x010000 is subtracted from the code point, leaving a 20-bit number in the range 0..0x0FFFFF.
  • The top ten bits (a number in the range 0..0x03FF) are added to 0xD800 to give the first code unit or lead surrogate, which will be in the range 0xD800..0xDBFF. (Previous versions of the Unicode Standard referred to these as high surrogates.)
  • The low ten bits (also in the range 0..0x03FF) are added to 0xDC00 to give the second code unit or trail surrogate, which will be in the range 0xDC00..0xDFFF. (Previous versions of the Unicode Standard referred to these as low surrogates.)
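
A minimal sketch of this scheme in Python (the function name is illustrative, not a standard API):

def encode_surrogate_pair(cp):
    # Encode a code point above U+FFFF as a UTF-16 surrogate pair.
    assert 0x10000 <= cp <= 0x10FFFF
    v = cp - 0x10000                # 20-bit value in 0..0xFFFFF
    lead = 0xD800 + (v >> 10)       # top ten bits -> 0xD800..0xDBFF
    trail = 0xDC00 + (v & 0x3FF)    # low ten bits -> 0xDC00..0xDFFF
    return lead, trail

# U+1D11E (MUSICAL SYMBOL G CLEF) -> ['0xd834', '0xdd1e']
print([hex(u) for u in encode_surrogate_pair(0x1D11E)])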

Since the ranges for the lead surrogates, trail surrogates, and valid BMP characters are disjoint, searches are simplified: it is not possible for part of one character to match a different part of another character. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units. UTF-8 shares these advantages, but many earlier multi-byte encoding schemes did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string. UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte.
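
Because the three ranges are disjoint, a decoder can classify any 16-bit code unit in isolation, which is what makes searching and resynchronization straightforward. A minimal decoding sketch in Python (names are illustrative):

def decode_utf16_units(units):
    # Turn a sequence of 16-bit code units into code points.
    i, out = 0, []
    while i < len(units):
        u = units[i]
        if 0xD800 <= u <= 0xDBFF:        # lead surrogate: a trail must follow
            trail = units[i + 1]         # raises IndexError on truncated input
            if not 0xDC00 <= trail <= 0xDFFF:
                raise ValueError("unpaired lead surrogate")
            out.append(0x10000 + ((u - 0xD800) << 10) + (trail - 0xDC00))
            i += 2
        elif 0xDC00 <= u <= 0xDFFF:
            raise ValueError("unpaired trail surrogate")
        else:                            # a BMP code point: one unit
            out.append(u)
            i += 1
    return out

print([hex(c) for c in decode_utf16_units([0x0041, 0xD834, 0xDD1E])])
# ['0x41', '0x1d11e']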

Because the most commonly used characters are all in the Basic Multilingual Plane, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software (e.g. CVE-2008-2938, CVE-2012-2135).[3]

Code points U+D800 to U+DFFF

The Unicode standard permanently reserves these code point values for UTF-16 encoding of the lead and trail surrogates, and they will never be assigned a character, so there should be no reason to encode them. The official Unicode standard says that no UTF forms, including UTF-16, can encode these code points.

However UCS-2, UTF-8, and UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so even though the standard states that such arrangements should be treated as encoding errors. It is possible to encode them unambiguously in UTF-16 by using a code unit equal to the code point, as long as no sequence of two code units can be interpreted as a legal surrogate pair (that is, as long as a lead surrogate is never followed by a trail surrogate). The majority of UTF-16 encoder and decoder implementations translate between encodings as though this were the case.[citation needed]
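
Python exposes this looser convention through its optional surrogatepass error handler, which emits a code unit equal to the lone surrogate's code point, while the default strict handler rejects such data as the standard requires:

lone = "\ud800"                                   # an unpaired lead surrogate
print(lone.encode("utf-16-le", "surrogatepass"))  # b'\x00\xd8'
# lone.encode("utf-16-le") would raise UnicodeEncodeError under the
# default strict handler, matching the standard's view of such data.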

Byte order encoding schemes

UTF-16 and UCS-2 produce a sequence of 16-bit code units. Each unit thus takes two 8-bit bytes, and the order of the bytes may depend on the endianness (byte order) of the computer architecture.

To assist in recognizing the byte order of code units, UTF-16 allows a Byte Order Mark (BOM), a code unit with the value U+FEFF, to precede the first actual coded value.[4] (U+FEFF is the invisible zero-width non-breaking space/ZWNBSP character.)[5] If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the non-character value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values. If the BOM is missing, RFC 2781 says that big-endian encoding should be assumed. (In practice, because Windows uses little-endian order by default, many applications also assume little-endian encoding by default.) If there is no BOM, one method of recognizing a UTF-16 encoding is searching for the space character (U+0020), which is very common in texts in most languages.
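
A byte-order detector along these lines can be sketched as follows (a simplified illustration; the function name is hypothetical, and the fallback is the space-character heuristic just described):

def guess_utf16_byte_order(data):
    # Guess the byte order of UTF-16 bytes; returns a codec name.
    if data[:2] == b"\xff\xfe":
        return "utf-16-le"          # BOM read as FF FE: little-endian
    if data[:2] == b"\xfe\xff":
        return "utf-16-be"          # BOM read as FE FF: big-endian
    # No BOM: U+0020 (space) is 20 00 in little-endian, 00 20 in big-endian.
    if b"\x20\x00" in data:
        return "utf-16-le"
    if b"\x00\x20" in data:
        return "utf-16-be"
    return "utf-16-be"              # otherwise assume big-endian (RFC 2781)

print(guess_utf16_byte_order(b"\xff\xfeh\x00i\x00"))  # utf-16-le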

The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character. Many applications ignore the BOM code at the start of any Unicode encoding. Web browsers often use a BOM as a hint in determining the character encoding.[6]
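
Python's codec names illustrate the distinction: the unmarked "utf-16" codec prepends a BOM in the machine's native byte order, while the explicitly labelled codecs emit none:

print("z".encode("utf-16"))     # b'\xff\xfez\x00' on a little-endian machine
print("z".encode("utf-16-le"))  # b'z\x00': no BOM
print("z".encode("utf-16-be"))  # b'\x00z': no BOM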

For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings. (The names are case insensitive.) The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.

UCS-2 encoding is defined to be big-endian only. In practice most software defaults to little-endian,[citation needed] and handles a leading BOM to define the byte order just as in UTF-16. Although the similar designations UCS-2BE and UCS-2LE imitate the UTF-16 labels, they do not represent official encoding schemes.

Use in major operating systems and environments

UTF-16 is used for text in the OS API in Microsoft Windows 2000/XP/2003/Vista/7/8/CE.[7] Older Windows NT systems (prior to Windows 2000) only support UCS-2.[8] In Windows XP, no code point above U+FFFF is included in any font delivered with Windows for European languages.[9][10] Files and network data tend to be a mix of UTF-16, UTF-8, and legacy byte encodings.

IBM iSeries systems designate code page CCSID 13488 for UCS-2 character encoding, CCSID 1200 for UTF-16 encoding, and CCSID 1208 for UTF-8 encoding.[11]

UTF-16 is used by the Qualcomm BREW operating systems; the .NET environments; and the Qt cross-platform graphical widget toolkit.

Symbian OS, used in Nokia S60 handsets and Sony Ericsson UIQ handsets, uses UCS-2.

The Joliet file system, used in CD-ROM media, encodes file names using UCS-2BE (up to sixty-four Unicode characters per file name).

The Python language environment has officially used only UCS-2 internally since version 2.0, but the UTF-8 decoder to "Unicode" produces correct UTF-16. Since Python 2.2, "wide" builds of Unicode are supported, which use UTF-32 instead;[12] these are primarily used on Linux. Python 3.3 no longer uses UTF-16 at all; instead, strings are stored in one of ASCII/Latin-1, UCS-2, or UTF-32, depending on which code points the string contains, with a UTF-8 version also included so that repeated conversions to UTF-8 are fast.[13]

Java originally used UCS-2, and added UTF-16 supplementary character support in J2SE 5.0.

In many languages, quoted strings need a new syntax for quoting non-BMP characters, as the "\uXXXX" syntax explicitly limits itself to 4 hex digits. The most common convention (used by C#, D, and several other languages) is an upper-case 'U' with 8 hex digits, such as "\U0001D11E".[14] In Java 7 regular expressions, ICU, and Perl, the syntax "\x{1D11E}" must be used. In many other cases (such as Java outside of regular expressions),[15] the only way to get non-BMP characters is to enter the surrogate halves individually, for example: "\uD834\uDD1E" for U+1D11E.
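
Python, for example, accepts the 8-digit form directly; the surrogate-halves spelling belongs to UTF-16-based languages such as Java, and in Python it would instead produce two unpaired surrogates:

s = "\U0001D11E"                       # MUSICAL SYMBOL G CLEF
print(s.encode("utf-16-be").hex(" "))  # d8 34 dd 1e: the surrogate pair
# In Java, the literal "\uD834\uDD1E" denotes this same character.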

These implementations all return the number of 16-bit code units rather than the number of Unicode code points when the equivalent of strlen() is used on their strings, and indexing into a string returns the indexed code unit, not the indexed code point.[16][17][18] This leads some people to claim that UTF-16 is not supported. However, the term "character" is defined and used in multiple ways within the Unicode terminology,[19] so an unambiguous count is not possible and there is no reason for strlen to attempt to return any such value. Most of the confusion is due to obsolete ASCII-era documentation using the term "character" when a fixed-size "byte" or "octet" was intended.[citation needed]
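
The two counts can be compared directly; in Python, whose len() counts code points, the UTF-16 code-unit count must be derived from an encoded form:

s = "z\U0001D11E"                       # one BMP and one supplementary character
print(len(s))                           # 2 code points
print(len(s.encode("utf-16-le")) // 2)  # 3 UTF-16 code units
# A UTF-16-based language such as Java reports 3 for the equivalent call.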

Examples

code point   glyph*   character                                         UTF-16 code units (hex)   UTF-16BE bytes (hex)   UTF-16LE bytes (hex)
U+007A       z        LATIN SMALL LETTER Z                              007A                      00, 7A                 7A, 00
U+6C34       水       CJK UNIFIED IDEOGRAPH-6C34 (water)                6C34                      6C, 34                 34, 6C
U+FEFF                BYTE ORDER MARK                                   FEFF                      FE, FF                 FF, FE
U+10000      𐀀        LINEAR B SYLLABLE B008 A (first non-BMP point)    D800, DC00                D8, 00, DC, 00         00, D8, 00, DC
U+1D11E      𝄞        MUSICAL SYMBOL G CLEF                             D834, DD1E                D8, 34, DD, 1E         34, D8, 1E, DD
U+10FFFD     􏿽        PRIVATE USE CHARACTER-10FFFD (last code point)    DBFF, DFFD                DB, FF, DF, FD         FF, DB, FD, DF

* Appropriate font and software are required to see the correct glyphs.

Example UTF-16 encoding procedure

The character at code point U+64321 (hexadecimal) is to be encoded in UTF-16. Since it is above U+FFFF, it must be encoded with a surrogate pair, as follows:

v  = 0x64321
v′ = v - 0x10000
   = 0x54321
   = 0101 0100 0011 0010 0001
vh = v′ >> 10
   = 01 0101 0000 // higher 10 bits of v′
vl = v′ & 0x3FF
   = 11 0010 0001 // lower  10 bits of v′
w1 = 0xD800 + vh
   = 1101 1000 0000 0000
   +        01 0101 0000
   = 1101 1001 0101 0000
   = 0xD950 // first code unit of UTF-16 encoding
w2 = 0xDC00 + vl
   = 1101 1100 0000 0000
   +        11 0010 0001
   = 1101 1111 0010 0001
   = 0xDF21 // second code unit of UTF-16 encoding
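
The result can be checked against a library encoder; in Python, for example:

print(chr(0x64321).encode("utf-16-be").hex(" "))  # d9 50 df 21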

References

  1. ^ "Questions about encoding forms". Retrieved 12 November 2010. 
  2. ^ ISO/IEC 10646-1:2000(E), pp. 890-892; ISO/IEC 10646:2003(E), pp. 1364-1366; ISO/IEC 10646:2012(E) Final Committee Draft (FCD), p. 2208; The FCD contains a reference to clauses 9 and 10, pp. 15-17.
  3. ^ "CVE-2008-2938" (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-2938); "CVE-2012-2135" (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-2135).
  4. ^ UTF-8 encoding produces byte values strictly less than 0xFE, so either byte in the BOM sequence also identifies the encoding as UTF-16.
  5. ^ Use of U+FEFF as the character ZWNBSP instead of as a BOM has been deprecated in favor of U+2060 (WORD JOINER); see Byte Order Mark (BOM) FAQ at unicode.org. But if an application interprets an initial BOM as a character, the ZWNBSP character is invisible, so the impact is minimal.
  6. ^ goto.glocalnet.net Encoding test pages (load these files and check the encoding that the browser selects)
  7. ^ "Unicode (Windows)". Retrieved 8 March 2011. "These functions use UTF-16 (wide character) encoding (...) used for native Unicode encoding on Windows operating systems."
  8. ^ "Description of storing UTF-8 data in SQL Server". microsoft.com. December 7, 2005. Retrieved 2008-02-01. 
  9. ^ "Unicode". microsoft.com. Retrieved 2009-07-20. 
  10. ^ "Surrogates and Supplementary Characters". microsoft.com. Retrieved 2009-07-20. 
  11. ^ "Character conversion". IBM. Retrieved 2012-05-22. 
  12. ^ PEP 261
  13. ^ PEP 393
  14. ^ ECMA-334, section 9.4.1
  15. ^ Java Language Specification, Third Edition, section 3.3
  16. ^ Austin, Calvin (May 2004). "J2SE 5.0 in a Nutshell". Retrieved 2008-06-17. "Supplementary Character Support".
  17. ^ "Javadoc for java.lang.String.charAt(int)". 
  18. ^ "C Sharp Language Specification". microsoft.com. Retrieved 2009-07-08. 
  19. ^ http://unicode.org/glossary/#C
