This is an old revision of this page, as edited by Kendall-K1 (talk | contribs) at 11:34, 5 June 2015 (remove unsourced OR; see talk page). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
UTF-9 and UTF-18 (9- and 18-bit Unicode Transformation Format, respectively) were two April Fools' Day RFC joke specifications for encoding Unicode on systems where the nonet (nine-bit group) is a better fit for the native word size than the octet, such as the 36-bit PDP-10 and the UNIVAC 1100/2200 series. Both encodings were specified in RFC 4042, written by Mark Crispin (inventor of IMAP) and released on April 1, 2005. The encodings suffer from a number of flaws, and their author has confirmed that they were intended as a joke.
However, unlike some of the "specifications" given in other April 1 RFCs, they are actually technically possible to implement, and have in fact been implemented in PDP-10 assembly language. They are, however, not endorsed by the Unicode Consortium.
Technical details
Like the 8-bit code commonly called variable-length quantity, UTF-9 places an octet in the low 8 bits of each nonet and uses the high bit to indicate continuation. This means that ASCII and Latin-1 characters take one nonet each, the rest of the BMP characters take two nonets each, and non-BMP code points take three. Code points that require multiple nonets are stored starting with the most significant non-zero octet.
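The scheme above can be sketched as follows. This is a minimal illustration of the nonet layout described in RFC 4042, not a reference implementation; the function names are ours, and each nonet is represented as a Python integer in the range 0 to 0x1FF:

```python
def utf9_encode(code_point):
    """Encode a single Unicode code point as a list of nonet values (0..0x1FF)."""
    # Split the code point into octets, most significant non-zero octet first.
    octets = []
    while True:
        octets.append(code_point & 0xFF)
        code_point >>= 8
        if code_point == 0:
            break
    octets.reverse()
    # Set the high (ninth) bit on every nonet except the last to mark continuation.
    return [(0x100 | o) if i < len(octets) - 1 else o
            for i, o in enumerate(octets)]

def utf9_decode(nonets):
    """Decode a sequence of nonets back into a list of code points."""
    code_points, current = [], 0
    for n in nonets:
        current = (current << 8) | (n & 0xFF)
        if not (n & 0x100):  # high bit clear: last nonet of this character
            code_points.append(current)
            current = 0
    return code_points
```

For example, U+0041 ("A") encodes to the single nonet 0x041, U+20AC ("€") to the pair 0x120 0x0AC, and the non-BMP code point U+10400 to the triple 0x101 0x104 0x000.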
UTF-18 is a fixed-length encoding using an 18-bit integer per code point. This allows representation of four planes, which are mapped to the four planes then in use by Unicode (planes 0–2 and 14). This means that the two private-use planes (15 and 16) and the then-unused planes (3–13) are not supported. The UTF-18 specification does not say why surrogates were not allowed for these code points, though when discussing UTF-16 earlier in the RFC, it says "This transformation format requires complex surrogates to represent code points outside the BMP". Having complained about their complexity, using surrogates in the new format would have looked somewhat hypocritical. It is unlikely that planes 3–13 will be assigned by Unicode any time in the foreseeable future. Thus UTF-18, like UCS-2 and UCS-4, guarantees a fixed width for all code points (although not for all glyphs).
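A sketch of this plane mapping is shown below. Planes 0–2 fit directly in 18 bits; the offset used here to fold plane 14 into the fourth 18-bit plane (subtracting 0xB0000, so U+E0000–U+EFFFF map to 0x30000–0x3FFFF) is our reading of the scheme, so treat the exact constant as an assumption rather than a quotation of the RFC:

```python
def utf18_encode(code_point):
    """Map a supported code point to an 18-bit value (0..0x3FFFF)."""
    plane = code_point >> 16
    if plane <= 2:
        # Planes 0-2 (BMP, SMP, SIP) are encoded as themselves.
        return code_point
    if plane == 14:
        # Assumed mapping: plane 14 occupies the fourth 18-bit plane.
        return code_point - 0xB0000
    # Planes 3-13 and 15-16 have no UTF-18 representation.
    raise ValueError(f"U+{code_point:06X} is not representable in UTF-18")
```

For instance, U+0041 and U+20000 encode as themselves, while a plane-14 tag character such as U+E0001 lands in the 0x30000 range; a private-use code point like U+F0000 raises an error.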
Notes
- "Mark Crispin's Web Page". Retrieved 2006-09-17. Points out April Fool's Day for two of his RFCs.
External links
- RFC 4042: UTF-9 and UTF-18 Efficient Transformation Formats of Unicode