Ask Sawal

Discussion Forum

Where is utf-16 used?

4 Answer(s) Available
Answer # 1 #

UTF-16 is generally used as a direct mapping to multi-byte character sets, i.e. only the originally assigned characters in the range 0x0000–0xFFFF.

Stephany Swank
Wardrobe Crew
Answer # 2 #

Well, there are two caveats to my comment.

Erik states: "UTF-16 covers the entire BMP with single units - So unless you have a need for the rarer characters outside the BMP, UTF-16 is effectively 2 bytes per character."

Caveat 1)

If you can be certain that your application will NEVER need any character outside of the BMP, and that any library code you write for use with it will NEVER be used with any application that will ever need a character outside the BMP, then you could use UTF-16, and write code that makes the implicit assumption that every character will be exactly two bytes in length.

That seems exceedingly dangerous (actually, stupid).

If your code assumes that all UTF-16 characters are two bytes in length, and your program interacts with an application or library where there is a single character outside of the BMP, then your code will break. Code that examines or manipulates UTF-16 must be written to handle the case of a UTF-16 character requiring more than 2 bytes; therefore, I am "dismissing" this caveat.

UTF-16 is not simpler to code for than UTF-8 (code for both must handle variable-length characters).
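To make the variable-length point concrete, here is a minimal Java sketch (Java strings are sequences of UTF-16 code units); the sample string and names are purely illustrative. Iterating by code point, rather than by char, is what keeps surrogate pairs intact:

```java
public class CodePointIteration {
    public static void main(String[] args) {
        // "a" + U+1F600 (an emoji outside the BMP) + "b"
        String s = "a" + new String(Character.toChars(0x1F600)) + "b";

        System.out.println(s.length());                      // 4 UTF-16 code units
        System.out.println(s.codePointCount(0, s.length())); // 3 actual characters

        // Advance by Character.charCount(cp) so a surrogate pair is consumed as one character
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            System.out.printf("U+%04X%n", cp);
            i += Character.charCount(cp);
        }
    }
}
```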

Caveat 2)

UTF-16 MIGHT be more computationally efficient, under some circumstances, if suitably written.

Like this: Suppose that certain long strings are seldom modified, but often examined (or better, never modified once built - i.e., a string builder creating unmodifiable strings). A flag could be set for each string, indicating whether the string contains only "fixed length" characters (i.e., contains no characters that are not exactly two bytes in length). Strings for which the flag is true could be examined with optimized code that assumes fixed length (2 byte) characters.
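A minimal sketch of that idea in Java, assuming an immutable wrapper type (the class name FlaggedString and its methods are hypothetical, not from any library): the flag is computed once when the string is built, and the fast fixed-width path is taken only when every character is a single 16-bit unit.

```java
// Illustrative sketch of the "fixed-length flag" optimization described above.
final class FlaggedString {
    private final String value;
    private final boolean allBmp; // true: every character is exactly one 16-bit code unit

    FlaggedString(String value) {
        this.value = value;
        // Computed once at build time; any supplementary character clears the flag
        this.allBmp = value.codePoints().allMatch(cp -> cp <= 0xFFFF);
    }

    /** Returns the n-th character (as a code point). */
    int codePointAtIndex(int n) {
        if (allBmp) {
            return value.charAt(n); // O(1): fixed-width, exactly 2 bytes per character
        }
        // General path: must walk the string because characters are variable-length
        return value.codePointAt(value.offsetByCodePoints(0, n));
    }
}
```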

How about space-efficiency?

Ragesh Lokur
RECREATION FACILITY ATTENDANT
Answer # 3 #

UTF-16 (16-bit Unicode Transformation Format) is a character encoding capable of encoding all 1,112,064 valid code points of Unicode (in fact this number of code points is dictated by the design of UTF-16). The encoding is variable-length, as code points are encoded with one or two 16-bit code units. UTF-16 arose from an earlier obsolete fixed-width 16-bit encoding, now known as UCS-2 (for 2-byte Universal Character Set), once it became clear that more than 2¹⁶ (65,536) code points were needed.[1]

UTF-16 is used by systems such as the Microsoft Windows API, the Java programming language and JavaScript/ECMAScript. It is also sometimes used for plain text and word-processing data files on Microsoft Windows. It is used by SMS (the SMS standard specifies UCS-2, but almost all users actually implement UTF-16 so that emojis work).

UTF-16 is the only web-encoding that is incompatible with ASCII[2][nb 1] and never gained popularity on the web, where it is declared by under 0.002% of web pages[4] (and many of these are actually UTF-8 because of "contradictory character encoding specifications" and/or "incorrect character encoding defined").[5][6] UTF-8, by comparison, accounts for 98% of all web pages.[7] The Web Hypertext Application Technology Working Group (WHATWG) considers UTF-8 "the mandatory encoding for all [text]" and that for security reasons browser applications should not use UTF-16.[8]

In the late 1980s, work began on developing a uniform encoding for a "Universal Character Set" (UCS) that would replace earlier language-specific encodings with one coordinated system. The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music. The original idea was to replace the typical 256-character encodings, which required 1 byte per character, with an encoding using 65,536 (2¹⁶) values, which would require 2 bytes (16 bits) per character.

Two groups worked on this in parallel, ISO/IEC JTC 1/SC 2 and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment. The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible. The early 2-byte encoding was originally called "Unicode", but is now called "UCS-2".[9]

When it became increasingly clear that 2¹⁶ characters would not suffice,[1] IEEE introduced a larger 31-bit space and an encoding (UCS-4) that would require 4 bytes per character. This was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of memory and disk space, and because some manufacturers were already heavily invested in 2-byte-per-character technology. The UTF-16 encoding scheme was developed as a compromise and introduced with version 2.0 of the Unicode standard in July 1996.[10] It is fully specified in RFC 2781, published in 2000 by the IETF.[11][12]

In the UTF-16 encoding, code points less than 2¹⁶ are encoded with a single 16-bit code unit equal to the numerical value of the code point, as in the older UCS-2. The newer code points greater than or equal to 2¹⁶ are encoded by a compound value using two 16-bit code units. These two 16-bit code units are chosen from the UTF-16 surrogate range 0xD800–0xDFFF which had not previously been assigned to characters. Values in this range are not used as characters, and UTF-16 provides no legal way to code them as individual code points. A UTF-16 stream, therefore, consists of single 16-bit code points outside the surrogate range for code points in the Basic Multilingual Plane (BMP), and pairs of 16-bit values within the surrogate range for code points above the BMP.

UTF-16 is specified in the latest versions of both the international standard ISO/IEC 10646 and the Unicode Standard. "UCS-2 should now be considered obsolete. It no longer refers to an encoding form in either 10646 or the Unicode Standard."[13] UTF-16 will never be extended to support a larger number of code points or to support the code points that were replaced by surrogates, as this would violate the Unicode Stability Policy with respect to general category or surrogate code points.[14] (Any scheme that remains a self-synchronizing code would require allocating at least one BMP code point to start a sequence. Changing the purpose of a code point is disallowed.)

Each Unicode code point is encoded either as one or two 16-bit code units. How these 16-bit codes are stored as bytes then depends on the endianness of the text file or communication protocol.

A "character" may need from as few as two bytes to fourteen[15] or even more bytes to be recorded. For instance an emoji flag character takes 8 bytes, since it is "constructed from a pair of Unicode scalar values"[16] (and those values are outside the BMP and require 4 bytes each).

Both UTF-16 and UCS-2 encode code points in the range U+0000–U+D7FF and U+E000–U+FFFF as single 16-bit code units that are numerically equal to the corresponding code points. These code points in the Basic Multilingual Plane (BMP) are the only code points that can be represented in UCS-2. As of Unicode 9.0, some modern non-Latin Asian, Middle-Eastern, and African scripts fall outside this range, as do most emoji characters.

Code points from the other planes (called Supplementary Planes) are encoded as two 16-bit code units called a surrogate pair, by the following scheme: 0x10000 is subtracted from the code point, leaving a 20-bit number U' in the range 0x00000–0xFFFFF; the high ten bits (in the range 0x000–0x3FF) are added to 0xD800 to give the first 16-bit code unit, the high surrogate (W1), in the range 0xD800–0xDBFF; the low ten bits (also in the range 0x000–0x3FF) are added to 0xDC00 to give the second 16-bit code unit, the low surrogate (W2), in the range 0xDC00–0xDFFF.

Illustrated visually, the distribution of U' between W1 and W2 looks like:[17]

U' = yyyyyyyyyyxxxxxxxxxx   (code point − 0x10000)
W1 = 110110yyyyyyyyyy       (high surrogate, 0xD800 + yyyyyyyyyy)
W2 = 110111xxxxxxxxxx       (low surrogate, 0xDC00 + xxxxxxxxxx)
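For concreteness, a small Java sketch of this scheme (the class and method names are illustrative; the JDK's Character.toChars performs the same conversion):

```java
public final class SurrogatePairs {
    /** Encodes a supplementary code point (U+10000..U+10FFFF) as a surrogate pair. */
    static char[] encode(int codePoint) {
        int u = codePoint - 0x10000;               // 20-bit value U'
        char high = (char) (0xD800 + (u >>> 10));  // top 10 bits  -> 0xD800..0xDBFF
        char low  = (char) (0xDC00 + (u & 0x3FF)); // low 10 bits  -> 0xDC00..0xDFFF
        return new char[] { high, low };
    }

    /** Decodes a surrogate pair back into a code point. */
    static int decode(char high, char low) {
        return 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00);
    }

    public static void main(String[] args) {
        char[] pair = encode(0x1F600);
        System.out.printf("0x%04X 0x%04X%n", (int) pair[0], (int) pair[1]); // 0xD83D 0xDE00
        System.out.printf("U+%X%n", decode(pair[0], pair[1]));              // U+1F600
    }
}
```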

The high surrogate and low surrogate are also known as "leading" and "trailing" surrogates, respectively, analogous to the leading and trailing bytes of UTF-8.[18]

Since the ranges for the high surrogates (0xD800–0xDBFF), low surrogates (0xDC00–0xDFFF), and valid BMP characters (0x0000–0xD7FF, 0xE000–0xFFFF) are disjoint, it is not possible for a surrogate to match a BMP character, or for two adjacent code units to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units (i.e. the type of code unit can be determined by the ranges of values in which it falls). UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string. UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte.
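Because the three ranges are disjoint, the type of a 16-bit code unit can be determined from its value alone. A minimal Java sketch using the JDK's built-in range checks (the sample string is illustrative):

```java
public class ClassifyUnits {
    // Classify a single UTF-16 code unit purely by its numeric range.
    static String classify(char unit) {
        if (Character.isHighSurrogate(unit)) return "high (leading) surrogate"; // 0xD800..0xDBFF
        if (Character.isLowSurrogate(unit))  return "low (trailing) surrogate"; // 0xDC00..0xDFFF
        return "BMP character";                                    // 0x0000..0xD7FF, 0xE000..0xFFFF
    }

    public static void main(String[] args) {
        String s = "A" + new String(Character.toChars(0x10437)); // 'A' followed by U+10437
        for (int i = 0; i < s.length(); i++) {
            System.out.printf("0x%04X : %s%n", (int) s.charAt(i), classify(s.charAt(i)));
        }
    }
}
```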

Because the most commonly used characters are all in the BMP, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software (e.g. CVE-2008-2938, CVE-2012-2135).

The Supplementary Planes contain emojis, historic scripts, less used symbols, less used Chinese ideographs, etc. Since the encoding of Supplementary Planes contains 20 significant bits (10 of 16 bits in each of the high and low surrogates), 2²⁰ code points can be encoded, divided into 16 planes of 2¹⁶ code points each. Including the separately-handled Basic Multilingual Plane, there are a total of 17 planes.

The Unicode standard reserves these code point values for the high and low surrogates, and they will never be assigned a character, so there should be no reason to encode them. The official Unicode standard says that no UTF forms, including UTF-16, can encode these code points. However, Windows allows unpaired surrogates in filenames[19] and other places, which generally means they have to be supported by software in spite of their exclusion from the Unicode standard.

UCS-2, UTF-8, and UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so, even though the standard states that such arrangements should be treated as encoding errors.

It is possible to unambiguously encode an unpaired surrogate (a high surrogate code point not followed by a low one, or a low one not preceded by a high one) in the format of UTF-16 by using a code unit equal to the code point. The result is not valid UTF-16, but the majority of UTF-16 encoder and decoder implementations do this when translating between encodings.

To encode U+10437 (𐐷) to UTF-16: subtract 0x10000 from the code point, leaving 0x0437. For the high surrogate, take the high ten bits (shift right by 10, i.e. divide by 0x400) and add 0xD800, giving 0x0001 + 0xD800 = 0xD801. For the low surrogate, take the low ten bits (the remainder of dividing by 0x400) and add 0xDC00, giving 0x0037 + 0xDC00 = 0xDC37.

To decode U+10437 (𐐷) from UTF-16: subtract 0xD800 from the high surrogate (0xD801 − 0xD800 = 0x0001) and multiply by 0x400, giving 0x0400; subtract 0xDC00 from the low surrogate (0xDC37 − 0xDC00 = 0x0037); add the two results (0x0437) and then add 0x10000 to obtain the code point, 0x10437.
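As a quick cross-check, the same conversion can be reproduced with the JDK's built-in methods (a minimal sketch):

```java
public class U10437Check {
    public static void main(String[] args) {
        // The JDK applies exactly the encode/decode steps described above.
        char[] units = Character.toChars(0x10437);
        System.out.printf("encoded: 0x%04X 0x%04X%n", (int) units[0], (int) units[1]); // 0xD801 0xDC37

        String s = new String(units);
        System.out.printf("decoded: U+%X%n", s.codePointAt(0));                        // U+10437
    }
}
```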

The following table summarizes this conversion, as well as others. The colors indicate how bits from the code point are distributed among the UTF-16 bytes. Additional bits added by the UTF-16 encoding process are shown in black.

UTF-16 and UCS-2 produce a sequence of 16-bit code units. Since most communication and storage protocols are defined for bytes, and each unit thus takes two 8-bit bytes, the order of the bytes may depend on the endianness (byte order) of the computer architecture.

To assist in recognizing the byte order of code units, UTF-16 allows a byte order mark (BOM), a code point with the value U+FEFF, to precede the first actual coded value.[nb 2] (U+FEFF is the invisible zero-width non-breaking space/ZWNBSP character.)[nb 3] If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the noncharacter value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values.

If the BOM is missing, RFC 2781 recommends[nb 4] that big-endian (BE) encoding be assumed. In practice, due to Windows using little-endian (LE) order by default, many applications assume little-endian encoding. It is also reliable to detect endianness by looking for null bytes, on the assumption that characters less than U+0100 are very common. If more even bytes (starting at 0) are null, then it is big-endian.
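A minimal Java sketch of this detection logic, assuming the raw bytes are already in memory (the method name and the simple tie-breaking are illustrative, not from any standard library):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class Utf16Endianness {
    /** Guesses the byte order of raw UTF-16 data: BOM first, then the null-byte heuristic. */
    static Charset guessUtf16ByteOrder(byte[] data) {
        if (data.length >= 2) {
            if ((data[0] & 0xFF) == 0xFE && (data[1] & 0xFF) == 0xFF) return StandardCharsets.UTF_16BE;
            if ((data[0] & 0xFF) == 0xFF && (data[1] & 0xFF) == 0xFE) return StandardCharsets.UTF_16LE;
        }
        // No BOM: count null bytes at even vs. odd offsets (characters below U+0100 are common)
        int evenNulls = 0, oddNulls = 0;
        for (int i = 0; i + 1 < data.length; i += 2) {
            if (data[i] == 0)     evenNulls++;
            if (data[i + 1] == 0) oddNulls++;
        }
        // More nulls in even positions means the high byte comes first, i.e. big-endian
        return evenNulls >= oddNulls ? StandardCharsets.UTF_16BE : StandardCharsets.UTF_16LE;
    }

    public static void main(String[] args) {
        byte[] le = "hello".getBytes(StandardCharsets.UTF_16LE);
        System.out.println(guessUtf16ByteOrder(le)); // UTF-16LE
    }
}
```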

The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a ZWNBSP character. Most applications ignore a BOM in all cases despite this rule.

For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings (the names are case insensitive). The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.

Similar designations, UCS-2BE and UCS-2LE, are used to show versions of UCS-2.

UTF-16 is often claimed to be more space-efficient than UTF-8 for East Asian languages, since it uses two bytes for characters that take 3 bytes in UTF-8. Since real text contains many spaces, numbers, punctuation, markup (e.g. for web pages), and control characters, which take only one byte in UTF-8, this is only true for artificially constructed dense blocks of text. A more serious claim can be made for Devanagari and Bengali, which use multi-letter words where all the letters take 3 bytes in UTF-8 and only 2 in UTF-16.

In addition the Chinese Unicode encoding standard GB 18030 always produces files the same size or smaller than UTF-16 for all languages, not just for Chinese (it does this by sacrificing self-synchronization).
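The size differences are easy to measure directly; here is a small Java sketch comparing the encoded sizes of one sample string (the sample text is arbitrary, and GB18030 support is assumed to be present in the JRE, as it usually is):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class SizeComparison {
    public static void main(String[] args) {
        // Mixed content: a little markup, ASCII words, and four CJK characters
        String sample = "<p>Unicode \u6587\u5B57\u7F16\u7801</p>";

        System.out.println("UTF-8:   " + sample.getBytes(StandardCharsets.UTF_8).length    + " bytes");
        System.out.println("UTF-16:  " + sample.getBytes(StandardCharsets.UTF_16LE).length + " bytes"); // no BOM
        System.out.println("GB18030: " + sample.getBytes(Charset.forName("GB18030")).length + " bytes");
    }
}
```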

UTF-16 is used for text in the OS API of all currently supported versions of Microsoft Windows (including at least all versions since Windows CE/2000/XP/2003/Vista/7[20]), up to and including Windows 10. In Windows XP, no code point above U+FFFF is included in any font delivered with Windows for European languages.[21][22] Older Windows NT systems (prior to Windows 2000) only support UCS-2.[23] Files and network data tend to be a mix of UTF-16, UTF-8, and legacy byte encodings.

While there has been some UTF-8 support even in Windows XP,[24] it was improved (in particular the ability to name a file using UTF-8) in Windows 10 insider build 17035 and the May 2019 update. As of May 2019, Microsoft recommends that software use UTF-8, on Windows and Xbox, instead of other 8-bit encodings.[25] It is unclear if they are recommending usage of UTF-8 over UTF-16, though they do state "UTF-16 [..] is a unique burden that Windows places on code that targets multiple platforms."[26]

The IBM i operating system designates CCSID (code page) 13488 for UCS-2 encoding and CCSID 1200 for UTF-16 encoding, though the system treats them both as UTF-16.[27]

UTF-16 is used by the Qualcomm BREW operating systems; the .NET environments; and the Qt cross-platform graphical widget toolkit.

Symbian OS used in Nokia S60 handsets and Sony Ericsson UIQ handsets uses UCS-2. iPhone handsets use UTF-16 for Short Message Service instead of UCS-2 described in the 3GPP TS 23.038 (GSM) and IS-637 (CDMA) standards.[28]

The Joliet file system, used in CD-ROM media, encodes file names using UCS-2BE (up to sixty-four Unicode characters per file name).

Python version 2.0 officially only used UCS-2 internally, but the UTF-8 decoder to "Unicode" produced correct UTF-16. There was also the ability to compile Python so that it used UTF-32 internally; this was sometimes done on Unix. Python 3.3 switched internal storage to use one of ISO-8859-1, UCS-2, or UTF-32 depending on the largest code point in the string.[29] Python 3.12 drops some functionality (for CPython extensions) to make it easier to migrate to UTF-8 for all strings.[30]

Java originally used UCS-2, and added UTF-16 supplementary character support in J2SE 5.0. Recently they have encouraged dumping support for any 8-bit encoding other than UTF-8[31] but internally UTF-16 is still used.

JavaScript may use UCS-2 or UTF-16.[32] As of ES2015, string methods and regular expression flags have been added to the language that permit handling strings from an encoding-agnostic perspective.

Swift, Apple's preferred application language, used UTF-16 to store strings until version 5 which switched to UTF-8.[33]

Quite a few languages make the encoding part of the string object, and thus store and support a large set of encodings including UTF-16. Most consider UTF-16 and UCS-2 to be different encodings. Examples are the PHP language[34] and MySQL.[35]

A method to determine what encoding a system is using internally is to ask for the "length" of a string containing a single non-BMP character. If the length is 2, then UTF-16 is being used. 4 indicates UTF-8. 3 or 6 may indicate CESU-8. 1 may indicate UTF-32, but more likely indicates that the language decodes the string to code points before measuring the "length".
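In Java, whose strings are stored and exposed as UTF-16 code units, the probe described above looks like this (a minimal sketch):

```java
import java.nio.charset.StandardCharsets;

public class LengthProbe {
    public static void main(String[] args) {
        String s = new String(Character.toChars(0x10437)); // one non-BMP character, U+10437

        System.out.println(s.length());                                // 2 -> UTF-16 code units
        System.out.println(s.codePointCount(0, s.length()));           // 1 -> actual characters
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4 -> bytes in UTF-8
    }
}
```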

In many languages, quoted strings need a new syntax for quoting non-BMP characters, as the C-style "\uXXXX" syntax explicitly limits itself to 4 hex digits. The following examples illustrate the syntax for the non-BMP character U+1D11E 𝄞 MUSICAL SYMBOL G CLEF:
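The original multi-language examples are not reproduced here; as one illustration, Java's \uXXXX escape is limited to four hex digits, so U+1D11E has to be written as its surrogate pair or built from the code point at run time:

```java
public class GClef {
    public static void main(String[] args) {
        // No single \u escape can express U+1D11E; write the UTF-16 surrogate pair instead...
        String clef1 = "\uD834\uDD1E";
        // ...or construct it from the code point at run time.
        String clef2 = new String(Character.toChars(0x1D11E));

        System.out.println(clef1.equals(clef2)); // true
        System.out.println(clef1);               // 𝄞 (if the console font supports it)
    }
}
```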

Reiley Winer
Vedette
Answer # 4 #

UTF-16 is a standard method of encoding data. But what does encoding actually mean?

Every day we transmit a tremendous amount of data from one place to another over some communication channel, and that channel only understands binary data, packed into bits (and usually grouped into packets). That is what encoding is: the conversion of readable text, according to some standard (UTF-8, UTF-16), into an equivalent sequence of bits.

Again, what is UTF-16? Before diving into the UTF-16 standard, let's understand ASCII and UTF-8 first. Why ASCII and UTF-8? Because UTF-8 is backward compatible with ASCII, and UTF-8 and UTF-16 encode the same Unicode character set (although, at the byte level, neither is a superset of the other).

Let’s talk about ASCII first.

ASCII is one of the earliest encoding schemes, and it is limited to 128 characters (extended variants provide 256). ASCII can only encode the most common English letters, digits, punctuation, etc. It uses 7 bits to represent a character; with 7 bits we can have at most 2⁷, i.e. 128, distinct combinations, which means we can represent at most 128 characters.

You can consult an ASCII table to look up the ASCII values of characters, numbers, and punctuation.

Now, let's dive into the UTF standards. But first, note the problem with ASCII: being limited to 128 characters, it can encode only a small set of symbols, while today we deal with huge amounts of data that may be of any type and in any language.

Across the world there are thousands of natural languages, used by people in different regions and countries. The question we should ask ourselves is how data can be transferred and encoded so that the same data is received by another person, whatever language it is written in.

This is where UTF Standards Came into the picture.

UTF (Unicode Transformation Format) is a family of standards for representing a great variety of characters from any language. To overcome ASCII's limit of 128 characters, the UTF encodings were developed to encode the characters of every language.

UTF-8 is a variable-size encoding. It works by manipulating numbers (code points) at the binary level.

Here, "variable-size encoding" means that different code points are encoded with different numbers of bytes (in UTF-8, from 1 to 4 bytes, depending on the code point's value).

In UTF-8, the high-order bits of each byte tell how many bytes are used to encode a value. Let me explain this more deeply.

How does UTF-8 encode data? Let's take the hexadecimal number 0x1FACBD as an example.

First we convert the hexadecimal number to binary, 4 bits per hex digit: 0x1FACBD = 0001 1111 1010 1100 1011 1101, which has 21 significant bits (1 1111 1010 1100 1011 1101).

We know that our hex number is greater than 0xFFFF, which means we have to use 4-byte encoding. In 4-byte encoding, the high-order marker bits are 11110 for the 1st byte and 10 for the 2nd, 3rd, and 4th bytes, leaving 3 + 6 + 6 + 6 = 21 bits for the value.

Splitting the 21 value bits into groups of 3, 6, 6, and 6 (111 | 111010 | 110010 | 111101) and prepending the marker bits to each block gives 11110111 10111010 10110010 10111101; reading these four bytes back as hexadecimal gives the encoded data.

The final encoded value will be: F7BAB2BD

This is how the UTF-8 standard works.
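The walkthrough above can be checked with a few lines of Java. Note that 0x1FACBD is larger than the highest Unicode code point (U+10FFFF), so a real UTF-8 encoder would reject it; this sketch only reproduces the 4-byte bit pattern used in the exercise (the method name is illustrative):

```java
public class Utf8FourByte {
    // Applies the 4-byte UTF-8 pattern 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx to a 21-bit value.
    static byte[] fourByteUtf8(int value) {
        return new byte[] {
            (byte) (0xF0 | (value >>> 18)),          // 11110 + top 3 bits
            (byte) (0x80 | ((value >>> 12) & 0x3F)), // 10 + next 6 bits
            (byte) (0x80 | ((value >>> 6) & 0x3F)),  // 10 + next 6 bits
            (byte) (0x80 | (value & 0x3F))           // 10 + low 6 bits
        };
    }

    public static void main(String[] args) {
        for (byte b : fourByteUtf8(0x1FACBD)) {
            System.out.printf("%02X ", b); // F7 BA B2 BD
        }
        System.out.println();
    }
}
```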

As noted above, UTF-16 covers the same Unicode character set as UTF-8.

UTF-8 and UTF-16 are just two of the established standards for Unicode encoding. They differ in the size of their code units and in how many bytes they use to encode each character. Both are variable-width encodings, as described above, and can use up to four bytes to encode a character; the difference is that UTF-8 works in 1-byte (8-bit) code units while UTF-16 works in 2-byte (16-bit) code units.

Let's look at this example:
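The original snippet is not reproduced here; the following is a minimal Java sketch of what such an example typically looks like, with the Spanish string written using Unicode escapes (the variable names are illustrative):

```java
public class Utf16Escapes {
    public static void main(String[] args) {
        // "¿Cómo estás?" written with \u escapes for the non-ASCII letters
        String saludo = "\u00BFC\u00F3mo est\u00E1s?";

        // Each char of a Java String is one UTF-16 code unit (2 bytes)
        for (int i = 0; i < saludo.length(); i++) {
            System.out.printf("%c -> 0x%04X%n", saludo.charAt(i), (int) saludo.charAt(i));
        }
        // Prints, among others: ¿ -> 0x00BF, C -> 0x0043, ó -> 0x00F3, á -> 0x00E1
    }
}
```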

In the above code, these Unicode escapes begin with the characters \u and are followed by exactly four hexadecimal digits.

In the above example, '¿Cómo estás?' is a Spanish String. Since UTF-16 uses 2-byte (16-bit) code units to represent these characters, running the code prints the 16-bit code unit of each character, such as 0x00BF for '¿', 0x0043 for 'C', and 0x00F3 for 'ó'.

Geekiyanage mwdn Owais
STEEL PAN FORM PLACING SUPERVISOR