UTF-9 and UTF-18

UTF-9 and UTF-18 (9- and 18-bit Unicode Transformation Format, respectively) were two April Fools' Day RFC joke specifications for encoding Unicode on systems where the nonet (nine-bit group) is a better fit for the native word size than the octet, such as the 36-bit PDP-10. Both encodings were specified in RFC 4042, written by Mark Crispin (inventor of IMAP) and released on April 1, 2005. The encodings suffer from a number of flaws, and their author has confirmed that they were intended as a joke.

However, unlike some of the "specifications" given in other April Fools' Day RFCs, they are actually technically possible to implement, and have in fact been implemented in PDP-10 assembly language. They are not endorsed by the Unicode Consortium.

Technical details

Like the 8-bit code commonly called a variable-length quantity, UTF-9 puts an octet in the low 8 bits of each nonet and uses the high bit to indicate continuation. This means that ASCII and Latin-1 characters take one nonet each, the remaining BMP code points take two nonets each, and non-BMP code points take three. Code points that require multiple nonets are stored starting with the most significant non-zero octet.
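For illustration, here is a minimal sketch of the scheme in Python (not code from the RFC; the function names are invented for this example):

    def utf9_encode(codepoint):
        # Split the code point into octets, most significant first, and
        # set the continuation bit (0x100) on every nonet except the last.
        if not 0 <= codepoint <= 0x10FFFF:
            raise ValueError("not a Unicode code point")
        octets = []
        while True:
            octets.append(codepoint & 0xFF)
            codepoint >>= 8
            if codepoint == 0:
                break
        octets.reverse()
        return [o | 0x100 for o in octets[:-1]] + [octets[-1]]

    def utf9_decode(nonets):
        # Accumulate octets until a nonet with a clear high bit ends a sequence.
        codepoints, value = [], 0
        for n in nonets:
            value = (value << 8) | (n & 0xFF)
            if not n & 0x100:
                codepoints.append(value)
                value = 0
        return codepoints

Under this sketch, utf9_encode(0x41) yields [0x041] (one nonet), utf9_encode(0x20AC) yields [0x120, 0x0AC] (two nonets), and utf9_encode(0x10000) yields [0x101, 0x100, 0x000] (three nonets).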

UTF-18 is a fixed-length encoding using an 18-bit integer per code point. This allows representation of four planes, which are mapped to the four planes currently used by Unicode (planes 0–2 and 14). This means that the two private use planes (15 and 16) and the currently unused planes (3–13) are not supported. The UTF-18 specification does not say why surrogates were not allowed for these code points, though when discussing UTF-16 earlier, the RFC says "This transformation format requires complex surrogates to represent code points outside the BMP". After complaining about their complexity, it would have looked a bit hypocritical to use surrogates in the new format. It is unlikely that planes 3–13 will be assigned by Unicode any time in the foreseeable future. Thus, UTF-18, like UCS-2 and UCS-4, guarantees a fixed width for all code points (although not for all glyphs).
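A minimal sketch of the plane mapping, assuming plane 14 occupies the fourth and last 18-bit plane slot as the mapping described above implies (illustrative Python, not code from the RFC):

    def utf18_encode(codepoint):
        # Planes 0-2 already fit in 18 bits and pass through unchanged;
        # plane 14 is remapped into the remaining 18-bit plane slot (3).
        plane = codepoint >> 16
        if plane in (0, 1, 2):
            return codepoint
        if plane == 14:
            return 0x30000 | (codepoint & 0xFFFF)
        raise ValueError("U+%06X is not representable in UTF-18" % codepoint)

Decoding simply reverses the mapping; every representable code point occupies exactly one 18-bit unit, which is what gives UTF-18 its fixed width.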

Problems

UTF-9 and UTF-18 are not likely to be put to practical use on modern computer systems, whose memory structure and communication protocols are based on octets rather than nonets. As such, these systems will generally use UTF-8, UTF-16 or UTF-32 instead to store and transmit Unicode text. However, UTF-9 and UTF-18 may be of interest to retrocomputing enthusiasts, who may use these schemes to represent Unicode text on PDP-10 and similar systems.

Furthermore, both UTF-9 and UTF-18 have specific problems of their own:

  • UTF-9 requires special care when searching, as the encoding of a shorter sequence can occur at the end of a longer sequence. Because the high bit of a nonet indicates only continuation, not the start of a sequence, a match must be verified by scanning backwards to find the actual start of the sequence. (This problem does not occur with UTF-8, where the start of a sequence can be safely determined from any position without scanning before it.) A sketch of this backward scan follows the list.
  • UTF-18 cannot represent all Unicode code points. Unlike UCS-2, it can represent every plane that currently has non-private-use code point assignments (planes 0, 1, 2, and 14), but not planes 3 through 13, which are currently unused, nor planes 15 and 16, which are reserved for private use. This makes it a poor choice for a system that may need to support new scripts (or rare CJK ideographs added after the SIP fills up) in the future: plane 3 will very likely be used for further CJK extensions, and other planes may be used for ideographic scripts or pictographic sets not yet encoded. UTF-18 also provides no surrogate mechanism like UTF-16's: it prohibits the use of the range U+D800–U+DBFF for encoding anything, whether the supported supplementary planes 1, 2, and 14, the standard planes 3 through 13, or the supplementary private use planes 15 and 16.
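To make the first item concrete, here is a sketch of the backward scan (illustrative Python with an invented function name, continuing the earlier examples):

    def utf9_sequence_start(nonets, pos):
        # Walk left while the preceding nonet has its continuation bit set;
        # the sequence begins at the first nonet whose predecessor lacks it.
        while pos > 0 and nonets[pos - 1] & 0x100:
            pos -= 1
        return pos

For example, the single-nonet encoding of U+00AC is [0x0AC], which also occurs as the final nonet of the two-nonet encoding of U+20AC, [0x120, 0x0AC]; a naive search that matches at that position must scan one nonet backwards to discover it has found the tail of a longer sequence.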

External links

  • RFC 4042: UTF-9 and UTF-18 Efficient Transformation Formats of Unicode

Notes

  1. "Mark Crispin's Web Page". Retrieved 2006-09-17. Points out April Fool's Day for two of his RFCs.