
UTF-9 and UTF-18


UTF-9 (9-bit Unicode Transformation Format) and UTF-18 (18-bit Unicode Transformation Format) are two specifications for encoding Unicode on systems where the nonet (nine-bit group) is a better fit for the native word size than the octet, such as the PDP-10. Both encodings were specified in RFC 4042, which was released on April 1st 2005. The encodings suffer from a number of flaws, and it is reasonable to assume that they were intended as a joke. However, unlike some of the "specifications" given in other April 1st RFCs, they are actually technically possible to implement. They are not endorsed by the Unicode Consortium.

Technical details

UTF-9 places an octet in the low 8 bits of each nonet and uses the high bit to indicate continuation. This means that ASCII and Latin-1 characters take one nonet each, the remaining BMP characters take two nonets each, and non-BMP code points take three. Code points that require multiple octets are stored with the most significant octet first (this is apparent from the examples in the specification, though it does not appear to be stated explicitly anywhere).
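
The following is a minimal Python sketch of this scheme, assuming (consistent with the examples in the RFC) that the continuation bit is set on every nonet of a code point except the last; nonets are modelled as integers in the range 0-511:

    def utf9_encode(code_point):
        # Split the code point value into octets, most significant first.
        octets = []
        while True:
            octets.insert(0, code_point & 0xFF)
            code_point >>= 8
            if code_point == 0:
                break
        # Set the continuation bit (bit 8) on all nonets except the last.
        return [o | 0x100 for o in octets[:-1]] + [octets[-1]]

    def utf9_decode(nonets):
        code_points, value = [], 0
        for n in nonets:
            value = (value << 8) | (n & 0xFF)
            if not (n & 0x100):  # continuation bit clear: code point complete
                code_points.append(value)
                value = 0
        return code_points

For example, utf9_encode(0x41) gives [0x041] (one nonet), utf9_encode(0x2602) gives [0x126, 0x002] (two nonets), and utf9_encode(0x10400) gives [0x101, 0x104, 0x000] (three nonets).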

UTF-18 is the simpler of the two encodings, using a single 18-bit integer per code point. This allows representation of four planes, which are mapped to the four planes currently used by Unicode (planes 0-2 and 14). The two private use planes (15 and 16) and the currently unused planes (3-13) are not supported. The UTF-18 specification doesn't say why surrogates were not allowed for these code points, though when discussing UTF-16 earlier in the RFC the authors said "This transformation format requires complex surrogates to represent codepoints outside the BMP". After complaining about their complexity, it would have looked a bit hypocritical of them to use surrogates in their new standard.
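
A Python sketch of the resulting mapping, assuming the RFC's approach of shifting plane 14 down into the otherwise unused fourth plane slot (0x30000-0x3FFFF):

    def utf18_encode(code_point):
        plane = code_point >> 16
        if plane <= 2:
            return code_point  # planes 0-2 fit in 18 bits directly
        if plane == 14:
            # plane 14 is assumed shifted into the fourth plane slot
            return code_point - 0xB0000  # 0xE0000-0xEFFFF -> 0x30000-0x3FFFF
        raise ValueError("code point not representable in UTF-18")

    def utf18_decode(value):
        if value >> 16 == 3:
            return value + 0xB0000  # shift the fourth slot back up to plane 14
        return value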

Problems

Both specifications suffer from the problem that standard communication protocols are simply not built around nonets, so it would not be possible to exchange text in these formats without further encoding or specially designed protocols (a hypothetical packing scheme is sketched below). This alone would probably be sufficient reason to consider their use impractical in most cases.
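
For instance, moving UTF-9 text over an octet-based transport would require an extra packing layer of the implementer's own devising; a hypothetical sketch (not part of RFC 4042) that packs nonets into octets as a zero-padded big-endian bit string:

    def pack_nonets(nonets):
        # Concatenate the 9-bit values into one big-endian bit string.
        bits = 0
        for n in nonets:
            bits = (bits << 9) | n
        nbits = 9 * len(nonets)
        pad = -nbits % 8  # zero-pad to a whole number of octets
        bits <<= pad
        nbits += pad
        # Split the bit string into octets, most significant first.
        return [(bits >> (nbits - 8 * (i + 1))) & 0xFF for i in range(nbits // 8)]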

Furthermore, both UTF-9 and UTF-18 have specific problems of their own. UTF-9 requires special care when searching, as the encoding of a shorter code point can be found at the end of the encoding of a longer one (see the example below), and UTF-18 cannot represent all Unicode code points (though it can represent all the planes that currently have non-private-use code point assignments), making it a bad choice for a standard that may develop in the future.
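
To illustrate the search problem with the nonet values from the sketch above: U+0041 ('A') encodes as the single nonet 0x041, while U+2641 encodes as 0x126 followed by 0x041, so a naive search for 'A' finds a false match inside U+2641:

    # Nonets as integers in the range 0-511.
    haystack = [0x126, 0x041]  # U+2641 in UTF-9
    needle = [0x041]           # U+0041 ('A') in UTF-9

    def naive_find(haystack, needle):
        # Naive subsequence search that ignores continuation bits.
        for i in range(len(haystack) - len(needle) + 1):
            if haystack[i:i + len(needle)] == needle:
                return i
        return -1

    print(naive_find(haystack, needle))  # 1 -- a false match inside U+2641

A correct search must also check that the nonet preceding a candidate match has its continuation bit clear (or that the match starts at the beginning of the text).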

External links

  • RFC 4042: UTF-9 and UTF-18 Efficient Transformation Formats of Unicode