This library provides functions for manipulating Unicode strings and for manipulating C strings according to the Unicode standard.
It consists of the following parts:
- elementary string functions
- conversion from/to legacy encodings
- formatted output to strings
- character classification and properties
- string width when using nonproportional fonts
- grapheme cluster breaks
- line breaking algorithm
- normalization (composition and decomposition)
- regular expressions (not yet implemented)
libunistring is for you if your application involves non-trivial text processing, such as upper/lower case conversions, line breaking, operations on words, or more advanced analysis of text. Text provided by the user can, in general, contain characters of all kinds of scripts. The text processing functions provided by this library handle all scripts and all languages.
libunistring is for you if your application already uses the ISO C / POSIX <wctype.h> functions and the text it operates on is provided by the user and can be in any language.
libunistring is also for you if your application uses Unicode strings as internal in-memory representation.
Unicode is a standardized repertoire of characters that contains characters from all scripts of the world, from Latin letters to Chinese ideographs and Babylonian cuneiform glyphs. It also specifies how these characters are to be rendered on a screen or on paper, and how common text processing (word selection, line breaking, uppercasing of page titles etc.) is supposed to behave on Unicode text.
Unicode also specifies three ways of storing sequences of Unicode characters in a computer whose basic unit of data is an 8-bit byte:
- UTF-8: Every character is represented as 1 to 4 bytes.
- UTF-16: Every character is represented as 1 or 2 units of 16 bits.
- UTF-32 (also known as UCS-4): Every character is represented as 1 unit of 32 bits.
For encoding Unicode text in a file, UTF-8 is usually used. For encoding Unicode strings in memory for a program, any of the three encoding forms can reasonably be used.
Unicode is widely used on the web. Prior to the use of Unicode, web pages were in many different encodings (ISO-8859-1 for English, French, Spanish, ISO-8859-2 for Polish, ISO-8859-7 for Greek, KOI8-R for Russian, GB2312 or BIG5 for Chinese, ISO-2022-JP-2 or EUC-JP or Shift_JIS for Japanese, and many others). It was next to impossible to create a document that contained both Chinese and Polish text. Due to the many encodings for Japanese, even the processing of pure Japanese text was error prone.
Internationalization is the process of changing the source code of a program so that it can meet the expectations of users in any culture, provided that culture-specific data (translations, images, etc.) are supplied.
Use of Unicode is not strictly required for internationalization, but it makes internationalization much easier, because operations that need to look at specific characters (like hyphenation, spell checking, or the automatic conversion of double-quotes to opening and closing double-quote characters) don't need to consider multiple possible encodings of the text.
Use of Unicode also enables multilingualization: the ability to have text in multiple languages present in the same document, or even in the same line of text.
But use of Unicode is not everything. Internationalization usually involves several features; among the most important is locale support.
A locale is a set of cultural conventions. According to POSIX, for a program, at any moment, there is one locale designated as the “current locale”. (Actually, POSIX also supports one locale per thread, but this feature is not yet universally implemented and not widely used.) The locale is partitioned into several aspects, called the “categories” of the locale. The main categories are LC_CTYPE (character classification and encoding), LC_COLLATE (string sorting), LC_MESSAGES (the language of messages), LC_NUMERIC, LC_MONETARY, and LC_TIME.
In particular, the LC_CTYPE category of the current locale determines the character encoding. This is the encoding of ‘char *’ strings. We also call it the “locale encoding”. GNU libunistring has a function, locale_charset, that returns a standardized (platform independent) name for this encoding.
All locale encodings used on glibc systems are essentially ASCII compatible: most graphic ASCII characters have the same single-byte representation in such an encoding as in ASCII.
Among the possible locale encodings are UTF-8 and GB18030. Both can represent any Unicode character as a sequence of bytes. UTF-8 is used in most of the world, whereas GB18030 is used in the People's Republic of China, because it is backward compatible with the GB2312 encoding that was used there earlier.
The legacy locale encodings, ISO-8859-15 (which supplanted ISO-8859-1 in most of Europe), ISO-8859-2, KOI8-R, EUC-JP, etc., are still in use in many places, though.
UTF-16 and UTF-32 are not used as locale encodings, because they are not ASCII compatible.
There are three ways of representing strings in the memory of a running program.
Classical C strings, with their C library support standardized by ISO C and POSIX, can be used in internationalized programs with some precautions. The problem with this API is that many of the C library string functions don't work correctly on strings in locale encodings, leading to bugs that only users in some cultures of the world will experience.
The first problem with the C library API is the support of multibyte locales. Depending on the locale encoding, a character is in general represented by one or more bytes (up to 4 bytes in practice, but write MB_LEN_MAX instead of the number 4 in code).
When every character is represented by only 1 byte, we speak of a “unibyte locale”, otherwise of a “multibyte locale”. It is important to realize that the majority of Unix installations nowadays use UTF-8 or GB18030 as their locale encoding; therefore, the majority of users are using multibyte locales.
The important fact to remember is:
A ‘char’ is a byte, not a character.
As a consequence:
- The <ctype.h> API is useless in this context; it does not work in multibyte locales.
- The strlen function does not return the number of characters in a string. Nor does it return the number of screen columns occupied by a string after it is output. It merely returns the number of bytes occupied by the string.
- Truncating a string, e.g. with strncpy, can have the effect of cutting it in the middle of a multibyte character. Such a string will, when output, have a garbled character at its end, often represented by a hollow box.
- The functions strchr and strrchr do not work with multibyte strings if the locale encoding is GB18030 and the character to be searched for is a digit.
- The function strstr does not work with multibyte strings if the locale encoding is different from UTF-8.
- The functions strspn and strcspn cannot work correctly in multibyte locales: they assume the second argument is a list of single-byte characters. Even in this simple case, they do not work with multibyte strings if the locale encoding is GB18030 and one of the characters to be searched for is a digit.
- The functions strsep and strtok_r do not work with multibyte strings unless all of the delimiter characters are ASCII characters < 0x30.
- The strcasecmp, strncasecmp, and strcasestr functions do not work with multibyte strings.
Workarounds can be found in GNU gnulib (http://www.gnu.org/software/gnulib/):

- The functions mbslen and mbswidth can be used instead of strlen when the number of characters or the number of screen columns of a string is requested.
- The functions mbschr and mbsrchr are like strchr and strrchr, but work in multibyte locales.
- The function mbsstr is like strstr, but works in multibyte locales.
- The functions mbsspn and mbscspn are like strspn and strcspn, but work in multibyte locales.
- The functions mbssep and mbstok_r are like strsep and strtok_r, but work in multibyte locales.
- The functions mbscasecmp and mbscasestr are like strcasecmp and strcasestr, but work in multibyte locales. Still, the function ulc_casecmp is preferable to these functions; see below.
The second problem with the C library API is that it has built-in assumptions that are not valid in some languages. For example, it assumes that upper- and lowercasing map each character to exactly one character; in German, however, the lowercase “ß” uppercases to the two-character sequence “SS”. The correct way to deal with this problem is to use case conversion functions that operate on entire strings. This is implemented in this library, through the functions declared in <unicase.h>; see Case mappings.
The ISO C and POSIX standard creators made an attempt to fix the first problem mentioned in the previous section. They introduced a type wchar_t and a set of functions, declared in <wchar.h> and <wctype.h>, that were meant to supplant the ones in <string.h> and <ctype.h>.
Unfortunately, this API and its implementation have numerous problems:
- On some platforms (e.g. AIX and Windows), wchar_t is a 16-bit type. This means that it can never accommodate an entire Unicode character. Either the wchar_t * strings are limited to characters in UCS-2 (the “Basic Multilingual Plane” of Unicode), or, if wchar_t * strings are encoded in UTF-16, a wchar_t represents only half of a character in the worst case, making the <wctype.h> functions pointless.
- The wchar_t encoding is locale dependent and undocumented. This means that if you want to know any property of a wchar_t character other than the properties defined by <wctype.h> (such as whether it's a dash, currency symbol, paragraph separator, or similar), you have to convert it to the char * encoding first, by use of the function wctomb.
- When you read a stream of wide characters through functions such as fgetwc and fgetws, and the input stream/file is not in the expected encoding, you have no way to determine the invalid byte sequence and take corrective action. If you use these functions, your program becomes “garbage in - more garbage out” or “garbage in - abort”.
As a consequence, it is better to use multibyte strings, as explained in the previous section. Such multibyte strings can bypass the limitations of the wchar_t type, if you use functions defined in gnulib and libunistring for text processing. They can also faithfully transport malformed characters that were present in the input, without requiring the program to produce garbage or abort.
libunistring supports Unicode strings in three representations: UTF-8 strings (arrays of uint8_t, handled by the functions with prefix u8_), UTF-16 strings (arrays of uint16_t, prefix u16_), and UTF-32 strings (arrays of uint32_t, prefix u32_). As with C strings, there are two variants: strings terminated by a NUL character, and strings with an explicitly given length.
This document was generated by Daiki Ueno on December 2, 2016 using texi2html 1.78a.