UTF-16 (16-bit Unicode Transformation Format) is a character encoding for Unicode capable of encoding 1,112,064 numbers (called ''code points'') in the Unicode code space from 0 to 0x10FFFF. It produces a variable-length result of either one or two 16-bit ''code units'' per code point.
The older UCS-2 (2-byte Universal Character Set) is a similar character encoding that was superseded by UTF-16 in version 2.0 of the Unicode standard in July 1996. It is a fixed-length format, simply using the code point as the 16-bit code unit, and it produces exactly the same result as UTF-16 for the 96.9% of code points in the range 0..0xFFFF that lie outside the surrogate range, including all characters that had been assigned a value at that time.
UTF-16 is officially defined in Annex C of the international standard ISO/IEC 10646. It is also described in "The Unicode Standard" version 2.0 and higher, as well as in the IETF's RFC 2781.
Description
The Unicode code space is divided into 17 ''planes'' of 2<sup>16</sup> (65,536) code points each, though some code points have not yet been assigned character values, some are reserved for private use, and some are permanently reserved as non-characters. The code points in each plane have the hexadecimal values xx0000 to xxFFFF, where xx is a hex value from 00 to 10, signifying which plane the values belong to.
Code points U+0000 to U+D7FF and U+E000 to U+FFFF
The first plane (code points U+0000 to U+FFFF) contains the most frequently used characters and is called the Basic Multilingual Plane or ''BMP''. Both UTF-16 and UCS-2 encode valid code points in this range as single 16-bit code units that are numerically equal to the corresponding code points. The code points in the BMP are the ''only'' code points that can be represented in UCS-2.
Code points U+10000 to U+10FFFF
Code points from the other planes (called Supplementary Planes) are encoded in UTF-16 by pairs of 16-bit code units called ''surrogate pairs'', by the following scheme:
* 0x10000 is subtracted from the code point, leaving a 20-bit number in the range 0..0xFFFFF.
* The top ten bits (a number in the range 0..0x3FF) are added to 0xD800 to give the first code unit or ''high surrogate'', which will be in the range 0xD800..0xDBFF.
* The low ten bits (also in the range 0..0x3FF) are added to 0xDC00 to give the second code unit or ''low surrogate'', which will be in the range 0xDC00..0xDFFF.

{| class="wikitable"
|+ UTF-16 decoder
! hi \ lo !! DC00 !! DC01 !! … !! DFFF
|-
! D800
| 10000 || 10001 || … || 103FF
|-
! D801
| 10400 || 10401 || … || 107FF
|-
! ⋮
| ⋮ || ⋮ || ⋱ || ⋮
|-
! DBFF
| 10FC00 || 10FC01 || … || 10FFFF
|}
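In code, the scheme amounts to a pair of shift-and-add conversions. The following is a minimal sketch in Python (an illustration only; the helper names encode_surrogate_pair and decode_surrogate_pair are hypothetical, not part of any standard library):

 def encode_surrogate_pair(code_point):
     # Applies only to supplementary-plane code points (U+10000..U+10FFFF).
     assert 0x10000 <= code_point <= 0x10FFFF
     v = code_point - 0x10000           # 20-bit value in 0..0xFFFFF
     high = 0xD800 + (v >> 10)          # top 10 bits  -> 0xD800..0xDBFF
     low = 0xDC00 + (v & 0x3FF)         # low 10 bits  -> 0xDC00..0xDFFF
     return high, low

 def decode_surrogate_pair(high, low):
     # Inverse operation: recombine the two 10-bit halves.
     assert 0xD800 <= high <= 0xDBFF and 0xDC00 <= low <= 0xDFFF
     return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)

 print([hex(u) for u in encode_surrogate_pair(0x1D11E)])  # ['0xd834', '0xdd1e']
 print(hex(decode_surrogate_pair(0xD834, 0xDD1E)))        # '0x1d11e'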
Since the ranges for the high surrogates, low surrogates, and valid BMP characters are disjoint, searches are simplified: it is not possible for part of one character to match a different part of another character. It also means that UTF-16 is ''self-synchronizing'': the start of the next character following a given code unit can be found by examining only that one code unit. UTF-8 shares these advantages, but many earlier encoding schemes did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string.
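For example, a decoder that lands on an arbitrary code unit can locate the start of a character by inspecting only that unit; a minimal Python sketch (the helper name character_start is hypothetical) follows:

 def character_start(code_units, i):
     # If code_units[i] is a low surrogate, the character began one unit earlier;
     # any other value (BMP unit or high surrogate) already starts a character.
     if 0xDC00 <= code_units[i] <= 0xDFFF:
         return i - 1
     return i

 units = [0x0041, 0xD834, 0xDD1E, 0x6C34]  # "A", U+1D11E (surrogate pair), U+6C34
 print(character_start(units, 2))          # 1: index 2 is the low surrogate of U+1D11E
 print(character_start(units, 3))          # 3: U+6C34 is a single BMP code unit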
Because the most commonly used characters are all in the Basic Multilingual Plane, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software.
Code points U+D800..U+DFFF
UTF-16 offers no legal way to encode the code points reserved for surrogates as individual values: they can appear legally only in pairs, with a high surrogate immediately followed by a low surrogate, to represent a code point in one of the Supplementary Planes. The Unicode standard permanently reserves the values U+D800..U+DFFF for UTF-16 encoding; these code points will never be assigned a character, so there should be no reason to encode them. The 2,048 code points set aside in the BMP for surrogate pair encoding amount to about 3% of the plane's 65,536 code points.
It is possible to encode these values in UCS-2 (by using the code point as the single code unit). It is also possible to encode them, illegally but unambiguously, in UTF-16 (as long as they do not form a high+low pair) by using a code unit equal to the code point. Most UTF-16 encoder and decoder implementations translate between other encodings as though this were allowed, although the standard states that such arrangements should be treated as encoding errors. Other possibilities include simply dropping the lone surrogate or replacing it with U+FFFD (the standard replacement character), as some web browsers do.
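As one concrete illustration (using Python's built-in codecs, which take the strict-by-default approach; exact error messages and the availability of the 'surrogatepass' handler for UTF-16 vary by version), a lone surrogate in the byte stream is rejected unless the caller opts into replacement or pass-through handling:

 data = b'\x00\xd8\x41\x00'   # UTF-16LE: lone high surrogate D800 followed by 'A'

 try:
     data.decode('utf-16-le')                            # strict: rejected as an error
 except UnicodeDecodeError as e:
     print('error:', e.reason)

 print(ascii(data.decode('utf-16-le', 'replace')))       # U+FFFD substituted for the bad unit
 print(ascii(data.decode('utf-16-le', 'surrogatepass'))) # lone surrogate passed through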
Byte order encoding schemes
UTF-16 and UCS-2 produce a sequence of 16-bit code units. Each unit thus takes two 8-bit bytes, and the order of the bytes may depend on the endianness (byte order) of the computer architecture.
To assist in recognizing the byte order of code units, UTF-16 allows a Byte Order Mark (BOM), a code unit with the value U+FEFF, to precede the first actual coded value. (U+FEFF is the invisible Zero-width non-breaking space (ZWNBSP) character.) If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the non-character value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values. If the BOM is missing, the standard says that big-endian encoding should be assumed. (In practice, due to Windows using little-endian order by default, many applications also assume little-endian encoding by default.)
The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically ''not'' supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character. (Many applications erroneously generate a BOM simply to mark the following output as Unicode text, even in encodings like UTF-8 where the byte order makes no difference; so in practice, most software ignores these "accidental" BOMs.)
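As a rough illustration with Python's codecs module (the generic 'utf-16' codec writes the platform's native byte order, so only the BOM constants, BOM-aware decoding, and the explicit-endian codecs are shown here):

 import codecs

 print(codecs.BOM_UTF16_BE)                                # b'\xfe\xff'
 print(codecs.BOM_UTF16_LE)                                # b'\xff\xfe'

 # A BOM-prefixed stream can be decoded without knowing the byte order in advance;
 # the generic 'utf-16' decoder consumes the BOM and byte-swaps as needed.
 print((codecs.BOM_UTF16_BE + b'\x00z').decode('utf-16'))  # 'z'
 print((codecs.BOM_UTF16_LE + b'z\x00').decode('utf-16'))  # 'z'

 # The explicit-endian codecs never add or strip a BOM.
 print('z'.encode('utf-16-be'))                            # b'\x00z'
 print('z'.encode('utf-16-le'))                            # b'z\x00'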
For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings. (The names are case insensitive.) The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.
UCS-2 encoding is defined to be big-endian only. In practice most software defaults to little-endian, and handles a leading BOM to define the byte order just as in UTF-16. Although the similar designations UCS-2BE and UCS-2LE imitate the UTF-16 labels, they do not represent official encoding schemes.
Use in major operating systems and environments
UTF-16 is used for the native internal representation of text in Microsoft Windows 2000/XP/2003/Vista/CE. Because Windows NT started with UCS-2 (before Windows 2000), conversion of internal interfaces to 16-bit code units is an ongoing project. This is more difficult than it may at first appear because invalid code sequences from 8-bit encodings cannot be represented. For instance, the Registry is stored in an 8-bit encoding, but the tools that manipulate it fail to work properly when the bytes are not in valid sequences, making such errors impossible to fix. To work around the inability to represent invalid names, Windows often presents remote filesystems that use UTF-8 for file names as though the names were in CP-1252; the resulting mojibake often leads users to blame the remote file system, rather than Windows, for poor internationalization support.
Older Windows NT systems (prior to Windows 2000) only support UCS-2. In Windows XP, no code point above U+FFFF is included in any font delivered with Windows for European languages.
UTF-16 is used by the Qualcomm BREW operating systems; the .NET environments; Mac OS X's Cocoa and Core Foundation frameworks; and the Qt cross-platform graphical widget toolkit.
Symbian OS, used in Nokia S60 handsets and Sony Ericsson UIQ handsets, uses UCS-2.
The Joliet file system, used in CD-ROM media, encodes file names using UCS-2BE (up to 64 Unicode characters per file).
Since version 2.1, the Python language environment officially uses only UCS-2 internally, but the UTF-8 decoder to "Unicode" produces correct UTF-16. Python can also be compiled to use UCS-4 (UTF-32), but this is commonly done only on Unix systems.
Java originally used UCS-2 and added UTF-16 supplementary character support in J2SE 5.0. However, non-BMP characters must be written in string literals as their separate surrogate halves, for example "\uD834\uDD1E" for U+1D11E.
In many languages, quoted strings need a new syntax for quoting non-BMP characters, as the "\uXXXX" syntax explicitly limits itself to 4 hex digits. The most common convention (used by C#, D and several other languages) is an upper-case 'U' with 8 hex digits, such as "\U0001D11E".
All of these implementations return the number of 16-bit code units rather than the number of Unicode "characters" when the equivalent of strlen() is used on their strings, and indexing into a string returns the indexed code unit, not the indexed "character". The term "character" is defined and used in multiple ways within the Unicode terminology, so an unambiguous count of them is not possible. Most of the confusion stems from obsolete ASCII-era documentation using the term "character" when a fixed-size "byte" was intended.
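For instance, in Java "\uD834\uDD1E".length() returns 2. A rough way to see the same distinction in Python, whose len() counts code points in current versions, is to count code units via an explicit encode (illustration only):

 s = '\U0001D11E'                               # one code point, U+1D11E
 code_points = len(s)                           # 1: Python indexes by code point
 code_units = len(s.encode('utf-16-le')) // 2   # 2: a surrogate pair in UTF-16
 print(code_points, code_units)                 # 1 2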
Examples
{| class="wikitable"
! code point !! glyph* !! character !! UTF-16 code units (hex) !! UTF-16BE bytes (hex) !! UTF-16LE bytes (hex)
|-
| U+007A || z || (Latin small letter z) || 007A || 00, 7A || 7A, 00
|-
| U+6C34 || 水 || (water) || 6C34 || 6C, 34 || 34, 6C
|-
| U+10000 || 𐀀 || (first non-BMP code point) || D800, DC00 || D8, 00, DC, 00 || 00, D8, 00, DC
|-
| U+1D11E || 𝄞 || (musical G clef) || D834, DD1E || D8, 34, DD, 1E || 34, D8, 1E, DD
|-
| U+10FFFD || || (last private-use code point) || DBFF, DFFD || DB, FF, DF, FD || FF, DB, FD, DF
|}
''* Appropriate font and software are required to see the correct glyphs.''
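The byte sequences above can be reproduced with any UTF-16 encoder; as a quick check of two of the rows using Python's explicit-endian codecs (illustration only):

 for cp in (0x6C34, 0x1D11E):
     ch = chr(cp)
     print('U+%04X' % cp,
           ch.encode('utf-16-be').hex(),   # big-endian byte order
           ch.encode('utf-16-le').hex())   # little-endian byte order
 # U+6C34  6c34      346c
 # U+1D11E d834dd1e  34d81edd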
Example UTF-16 encoding procedure
The character at code point U+64321 (hexadecimal) is to be encoded in UTF-16. Since it is above U+FFFF, it must be encoded with a surrogate pair, as follows:
v = 0x64321
v′ = v - 0x10000
= 0x54321
= 0101 0100 0011 0010 0001
vh = v′ >> 10
= 01 0101 0000 // higher 10 bits of v′
vl = v′ & 0x3FF
= 11 0010 0001 // lower 10 bits of v′
w1 = 0xD800 + vh
= 1101 1000 0000 0000
+ 01 0101 0000
= 1101 1001 0101 0000
= 0xD950 // first code unit of UTF-16 encoding
w2 = 0xDC00 + vl
= 1101 1100 0000 0000
+ 11 0010 0001
= 1101 1111 0010 0001
= 0xDF21 // second code unit of UTF-16 encoding
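As a sanity check (not part of the procedure itself), any standard UTF-16 encoder should reproduce the same two code units; for example, in Python:

 encoded = chr(0x64321).encode('utf-16-be')
 print(encoded.hex())    # 'd950df21': code units 0xD950, 0xDF21, as derived above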
See also
Comparison of Unicode encodings
Unicode plane
UTF-8
References
External links
A very short algorithm for determining the surrogate pair for any codepoint
Unicode Technical Note #12: UTF-16 for Processing
Unicode FAQ: What is the difference between UCS-2 and UTF-16?
Unicode Character Name Index
RFC 2781: UTF-16, an encoding of ISO 10646
java.lang.String documentation, discussing surrogate handling