Revision as of 06:35, 16 August 2006
UTF-9 and UTF-18 (9- and 18-bit Unicode Transformation Format, respectively) were two joke specifications for encoding Unicode on systems where the nonet (nine-bit group) is a better fit for the native word size than the octet, such as the PDP-10. Both encodings were specified in RFC 4042, written by Mark Crispin (inventor of IMAP) and released on April 1, 2005. The encodings suffer from a number of flaws, and their author has confirmed that they were intended as a joke.
However, unlike some of the "specifications" given in other April 1st RFCs, they are technically possible to implement, and have in fact been implemented in PDP-10 assembly language. They are not endorsed by the Unicode Consortium.
Technical details
UTF-9 places an octet in the low 8 bits of each nonet and uses the high bit to indicate continuation. This means that ASCII and Latin-1 characters take one nonet each, the remaining BMP characters take two nonets each, and non-BMP code points take three. Code points that require multiple nonets are stored starting with the most significant non-zero octet.
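The scheme described above can be sketched as follows. This is a minimal illustration, not RFC 4042's reference implementation; the function name is ours, and nonets are modelled as Python integers in the range 0–511, with bit 0x100 playing the role of the continuation bit.

```python
def utf9_encode(code_point):
    """Encode one Unicode code point as a list of 9-bit nonets.

    Each nonet carries an octet in its low 8 bits; the high (9th) bit
    is set on every nonet except the last to mark continuation.
    """
    # Split the code point into octets, least significant first.
    octets = []
    while True:
        octets.append(code_point & 0xFF)
        code_point >>= 8
        if code_point == 0:
            break
    # Emit starting with the most significant non-zero octet.
    octets.reverse()
    # Set the continuation bit (0x100) on all but the final nonet.
    return [o | 0x100 for o in octets[:-1]] + [octets[-1]]
```

For example, `utf9_encode(0x41)` ("A") yields one nonet, `[0x041]`, while `utf9_encode(0x20AC)` (the euro sign) yields two, `[0x120, 0x0AC]`, matching the sizes given above.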
UTF-18 is the simpler of the two encodings, using a single 18-bit integer per code point. This allows representation of four planes, which are mapped to the four planes currently used by Unicode (planes 0-2 and 14). This means that the two private use planes (15 and 16) and the currently unused planes (3-13) are not supported. The UTF-18 specification does not say why surrogates were not allowed for these code points, though when discussing UTF-16 earlier the RFC remarks that "This transformation format requires complex surrogates to represent code points outside the BMP"; having complained about their complexity, it would have looked hypocritical to use surrogates in the new format. It is unlikely that planes 3-13 will be assigned by Unicode in the foreseeable future. Thus UTF-18, like UCS-2 and UCS-4, guarantees a fixed width for all characters (although not for all glyphs).
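The plane mapping just described can be sketched as below. This is an illustration based on the article's description rather than a transcription of RFC 4042; in particular, the assumption that plane 14 occupies the fourth 18-bit plane (offset 0xB0000) is ours, inferred from "4 planes ... mapped to the 4 planes currently used by Unicode".

```python
def utf18_encode(code_point):
    """Map a code point to a single 18-bit integer, raising for the
    code points UTF-18 cannot represent (planes 3-13, 15 and 16)."""
    if 0x00000 <= code_point <= 0x2FFFF:
        # Planes 0-2 pass through unchanged and already fit in 18 bits.
        return code_point
    if 0xE0000 <= code_point <= 0xEFFFF:
        # Assumed mapping: plane 14 shifted down into the fourth
        # 18-bit plane (0x30000-0x3FFFF).
        return code_point - 0xB0000
    raise ValueError("code point not representable in UTF-18")
```

Note that every result fits in 18 bits (below 0x40000), which is what makes the fixed-width property possible.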
Problems
Both specifications suffer from the problem that standard octet-based communication protocols are simply not built around nonets, and so it would not be possible to exchange text in these formats without further encoding or specially designed protocols. This alone would probably be sufficient reason to consider their use impractical in most cases. However, this would be less of a problem with pure bit-stream communication protocols.
Furthermore, both UTF-9 and UTF-18 have specific problems of their own. UTF-9 requires special care when searching, since a shorter sequence can occur at the end of a longer one; it is therefore necessary to search backwards to find the true start of a sequence. UTF-18 cannot represent all Unicode code points (although it can represent every plane that currently has non-private use code point assignments), making it a poor choice for a system that may develop in the future. On the other hand, UTF-18 is certainly a better choice than UCS-2.
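The UTF-9 search hazard can be made concrete with a small hypothetical helper (the function name and nonet-as-integer representation are ours, continuation marked by bit 0x100 as above): a match on the final nonet of a multi-nonet character looks identical to a match on a one-nonet character, so the scan must walk backwards over continuation nonets to find where the character really begins.

```python
def utf9_sequence_start(nonets, i):
    """Given an index i into a stream of 9-bit nonets, scan backwards
    past any preceding continuation nonets (high bit 0x100 set) to
    find the index where that character's sequence starts."""
    while i > 0 and nonets[i - 1] & 0x100:
        i -= 1
    return i
```

For instance, in the stream `[0x120, 0x0AC, 0x041]` (the euro sign followed by "A"), a naive search for the single nonet 0x0AC matches at index 1, but backing up over the continuation nonet shows the character actually starts at index 0.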
External links
- RFC 4042: UTF-9 and UTF-18 Efficient Transformation Formats of Unicode