Mojibake

Image caption: The UTF-8-encoded Japanese Wikipedia article for mojibake, as displayed in the Windows-1252 ("ISO-8859-1") encoding.

Mojibake (文字化け, Japanese pronunciation: [modʑibake]) is the occurrence of incorrect, unreadable characters displayed when computer software fails to render text correctly according to its associated character encoding.

Etymology

Mojibake is a loanword from Japanese; the Japanese word 文字化け ([modʑibake]) is composed of 文字 (moji), meaning "letter, character", and 化け (bake), from the verb 化ける (bakeru), meaning "to appear in disguise, to take the form of, to change for the worse". Literally, it means "character mutation".

Causes

Mojibake is often caused when a character encoding is not correctly tagged in a document, or when a document is moved to a system with a different default encoding. It typically appears when writing systems or character encodings are mistagged or are "foreign" to the user's computer system: if a computer does not have the software required to process a foreign language's characters, it attempts to process them in its default encoding, usually resulting in gibberish. Messages transferred between different encodings of the same language can also suffer from mojibake. Japanese language users, with several different encodings historically in use, encounter this problem relatively often. For example, the intended word "文字化け", encoded in UTF-8, is incorrectly displayed as something like "æ–‡å­—åŒ–ã‘" in software configured to expect text in the Windows-1252 or ISO-8859-1 encoding, usually labelled Western.
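
The mismatch can be reproduced directly; the following Python sketch simply decodes UTF-8 bytes with the wrong codec (exactly how undefined bytes are shown varies from program to program):

    # Encode the intended text as UTF-8, then decode the raw bytes as
    # Windows-1252, as a misconfigured viewer would.
    raw = "文字化け".encode("utf-8")
    print(raw.decode("cp1252", errors="replace"))  # roughly "æ–‡å­—åŒ–ã‘"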

A web browser may not be able to distinguish a page coded in EUC-JP from one coded in Shift-JIS if the coding scheme is not assigned explicitly, either in the HTTP headers sent along with the document or in the HTML document's meta tags, which substitute for missing HTTP headers when the server cannot be configured to send them. Heuristics can be applied to guess the character set, but they are not always successful.
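
The ambiguity is easy to demonstrate: the same bytes can be read under either scheme but yield different text, which is why an explicit declaration (or a good guess) is needed. A minimal Python sketch:

    # Without a charset declaration, a browser must guess between encodings
    # that interpret the same bytes differently.
    data = "文字化け".encode("euc_jp")
    print(data.decode("euc_jp"))                       # the intended text
    print(data.decode("shift_jis", errors="replace"))  # a garbled misreading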

In the mid 1990s, as this problem became common, several websites featured mojibake not as a problem to be tackled but simply for amusement. Words and even sentences were "deciphered" with meanings made up to deliver funny messages.[citation needed]

Mojibake can also occur between what appear to be the same encodings. For example, Windows and Mac OS X have both used the name ISO-8859-1 for a character encoding, yet each system includes characters in its encoding that are not part of ISO-8859-1, and these extensions are not compatible across systems. Users can be unaware of this incompatibility and unintentionally cause mojibake by using the extra characters.
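
As an illustration, the Windows (cp1252) and classic Mac (mac_roman) codecs can stand in for such "Western" encodings in Python; the curly quotes below are an assumed example of the extra characters involved:

    # Curly quotes occupy byte values where cp1252, strict ISO-8859-1 and
    # mac_roman all disagree.
    raw = "“quoted text”".encode("cp1252")
    print(raw.decode("latin-1"))     # invisible C1 control codes where the quotes were
    print(raw.decode("mac_roman"))   # unrelated accented letters instead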

Resolutions

Applications using UTF-8 as a default encoding may achieve a greater degree of interoperability due to its widespread use and backwards compatibility with US-ASCII.
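
That backwards compatibility is easy to check: plain ASCII text has byte-for-byte identical encodings in US-ASCII and UTF-8, as in this small Python sketch:

    # Legacy ASCII data stays readable when UTF-8 is assumed, because the
    # byte sequences are identical for the ASCII range.
    text = "plain ASCII text"
    assert text.encode("ascii") == text.encode("utf-8")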

The difficulty of resolving an instance of mojibake varies depending on the application within which it occurs and on its causes. Two of the most common applications in which mojibake may occur are web browsers and word processors. Modern browsers and word processors often support a wide array of character encodings. Browsers often allow the user to change the rendering engine's encoding setting on the fly, while word processors allow the user to select the appropriate encoding when opening a file. It may take some trial and error for users to find the correct encoding.
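
The trial and error can also be scripted. A rough Python sketch (the list of candidate encodings is only an assumption, and a human still has to judge which result looks right):

    # Show how the same bytes decode under several plausible encodings.
    def show_candidates(raw, encodings=("utf-8", "cp1252", "shift_jis", "euc_jp", "koi8_r")):
        for name in encodings:
            try:
                print(name, "->", raw.decode(name))
            except UnicodeDecodeError:
                print(name, "-> (bytes are not valid in this encoding)")

    show_candidates("文字化け".encode("utf-8"))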

The problem gets more complicated when it occurs in an application that normally does not support a wide range of character encodings, such as a non-Unicode computer game. In this case, the user must change the operating system's encoding settings to match that of the game. However, changing the system-wide encoding settings can also cause mojibake in pre-existing applications. In Windows XP or later, a user also has the option to use Microsoft AppLocale, an application that allows per-application locale settings to be changed. Even so, changing the operating system's encoding settings is not possible on earlier operating systems such as Windows 98; to resolve this issue on such systems, a user would have to use third-party font rendering applications.

Problems in specific languages

Mojibake in English texts generally occurs in punctuation, such as em dashes (—), en dashes (–), and curly quotes (“, ”), but rarely in character text, since most encodings agree with ASCII on the encoding of the English alphabet. For example, the pound sign "£" will appear as "Â£" if it was encoded by the sender as UTF-8 but interpreted by the recipient as CP1252 or ISO 8859-1. If iterated, this can lead to "Ã‚Â£", "Ãƒâ€šÃ‚Â£", and so on.
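
This iterated corruption (often called double encoding) can be reproduced in a few lines of Python:

    # Each round re-encodes the already-garbled text as UTF-8 and then
    # misreads the bytes as cp1252 again, growing the garbage.
    s = "£"
    for _ in range(3):
        s = s.encode("utf-8").decode("cp1252")
        print(s)  # Â£, then Ã‚Â£, then Ãƒâ€šÃ‚Â£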

In Japanese, the phenomenon is, as mentioned, called mojibake (文字化け). It is often encountered by non-Japanese users when attempting to run software written for the Japanese market.

In Chinese, this phenomenon is called luanma (simplified Chinese: 乱码; traditional Chinese: 亂碼; pinyin: luànmǎ; literally "messy/disorderly code").

In Korean, this phenomenon is called Oigye-eo 외계어, meaning 'alien language'.

Users of Central and Eastern European languages can also be affected. Because most computers were not connected to any network during the mid- to late 1980s, different character encodings arose for every language with diacritical characters.

Image caption: Handwritten kryakozyabry corrected by a postal employee.

In Russian slang, mojibake is humorously called kryakozyabry (крякозя́бры, "crackozyabres"). During the 1990s, several different encodings for the Cyrillic alphabet (Unix KOI8-R, Windows CP-1251, DOS 866, the ISO 8859-5 standard, and several others) competed. Poorly configured servers and lack of compatibility made garbled text a common and frustrating experience. Many e-mail servers stripped the eighth bit from characters, as permitted by earlier standards, which renders UTF-8 unreadable, as well as all of the encodings above. For this reason, many Cyrillic users resorted to Volapuk encoding. An even more frustrating problem emerged in the early 2000s, when the popular e-mail client Microsoft Outlook started to replace correctly entered Cyrillic characters with question marks when replying to or forwarding messages created in competing encodings.
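
Both failure modes are easy to reproduce in Python (the greeting used as sample text is an arbitrary choice); the second relies on KOI8 being designed so that dropping the eighth bit leaves a rough Latin transliteration:

    # Cyrillic text read with the wrong code page, and with the eighth bit
    # stripped as an old 7-bit mail gateway would do.
    raw = "Привет".encode("cp1251")
    print(raw.decode("koi8_r"))                                   # e.g. "оПХБЕР"
    stripped = bytes(b & 0x7F for b in "привет".encode("koi8_r"))
    print(stripped.decode("ascii"))                               # e.g. "PRIWET"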

In Bulgarian, mojibake is often called maymunitsa (маймуница), meaning "monkey's alphabet". In Serbian, it is called ђубре (đubre), meaning "trash". In German, Zeichensalat ("character salad") and Krähenfüße ("crow's feet") are common terms for this phenomenon.

In Poland, every company selling early DOS computers created its own encoding and simply reprogrammed the EPROMs of the video cards (typically CGA, EGA or Hercules) with the corresponding character shapes. Additionally, users of then-popular home computers (such as the Atari ST) invented their own encodings, incompatible with the international standard (ISO 8859-2), vendor standards (IBM CP852, Windows CP1250) and the locally agreed-upon PC/MS-DOS standard (Mazovia). The situation began to improve when, after pressure from academic and user groups, ISO 8859-2 became the accepted "Internet standard", with limited support in the dominant vendors' software (today largely replaced by Unicode). Because of the numerous problems caused by the variety of encodings, even today some users refer to Polish diacritical characters as krzaki ("bushes").

Commodore-brand 8-bit computers used the PETSCII encoding, notable primarily for inverting upper and lower case compared with standard ASCII. PETSCII printers worked fine on other computers of the era, but they flipped the case of all letters.

Among the languages of Scandinavia, mojibake is not uncommon, but it is less a problem than an annoyance. Finnish and Swedish use the letters of the English alphabet plus three more characters: å, ä and ö, and typically these three are the only ones that become corrupted. The situation is similar for Norwegian and Danish, except that the three relevant letters are æ, ø and å. In Swedish, Norwegian and Danish, these vowels are rarely repeated, and it is usually obvious when one character has been corrupted, as in the second letter of "kÃ¤rlek" (kärlek, "love"). This way, even though the reader has to guess between å, ä and ö, almost all texts remain perfectly readable. Finnish, however, does have repeated vowels in words like "HÃ¤Ã¤yÃ¶" (hääyö, "wedding night"), which can sometimes render text very hard to read.

Another type of mojibake occurs when text is erroneously parsed in a multi-byte encoding, such as one of the East Asian encodings. With this kind of mojibake, more than one (typically two) characters are corrupted at once, e.g. "k舐lek" (kärlek) in Swedish, where "är" is parsed as "舐". Compared with the mojibake above, this is harder for humans to read, since letters unrelated to the problematic å, ä and ö are now also missing, and it is especially problematic for short words starting with å, ä or ö, such as "än" (which becomes, for example, "舅"). Also, since two letters are combined, the mojibake seems more random (over 50 variants compared with the normal three, not counting the rarer capitals). In some rare cases, an entire text string, such as "Bush hid the facts", may be misinterpreted.
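
Both Scandinavian cases can be reproduced in Python using the article's sample word:

    # Single-character corruption: UTF-8 bytes misread as cp1252 hit only the ä.
    print("kärlek".encode("utf-8").decode("cp1252"))  # kÃ¤rlek
    # Multi-byte corruption: cp1252 bytes misread as Shift-JIS merge "är" into one kanji.
    print("kärlek".encode("cp1252").decode("shift_jis", errors="replace"))  # e.g. k舐lek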

In most of the former Yugoslavia, the basic Latin alphabet is extended with the letters š, đ, č, ć, ž and their capital counterparts Š, Đ, Č, Ć, Ž. All of these letters are defined in Latin-2 and Windows-1250, while only some (š, Š, ž, Ž, Đ) exist in the usual OS-default Western encoding (Windows-1252). Although even the letters that do exist in Windows-1252 are not immune to errors, the ones that do not are far more error-prone. Thus "šđčćž ŠĐČĆŽ" all too often (even nowadays) gets interpreted as "šðèæž ŠÐÈÆŽ", making users wonder where ð, è, æ, È and Æ are used. When confined to basic ASCII (in most usernames, for example), common replacements are: š→s, đ→dj, č→c, ć→c, ž→z (capital forms analogously, with Đ→Dj or Đ→DJ depending on word case). All of these replacements introduce ambiguities, so reconstructing the original from such a form is usually done manually if required.
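
In practical terms the error is usually Central European bytes (Windows-1250 or Latin-2) being decoded as Western (Windows-1252), which a short Python sketch reproduces:

    # The article's own example: cp1250 bytes misread as cp1252.
    raw = "šđčćž ŠĐČĆŽ".encode("cp1250")
    print(raw.decode("cp1252"))   # šðèæž ŠÐÈÆŽ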

Hungarian is another affected language, using the 26 basic English letters plus the accented forms á, é, í, ó, ú, ö, ü (all present in the Latin-1 character set) and the two characters ő and ű, which are not in Latin-1. These two characters can be correctly encoded in Latin-2, Windows-1250 and Unicode. Before Unicode became common in e-mail clients, e-mails containing Hungarian text often had the letters ő and ű corrupted, sometimes to the point of unrecognizability. It is common to respond to an e-mail rendered unreadable by character mangling (referred to as "betűszemét", meaning "garbage lettering") with the phrase "Árvíztűrő tükörfúrógép", a nonsense phrase (literally "flood-resistant mirror-drilling machine") containing all the accented characters used in Hungarian.
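
The same test phrase gives a compact demonstration; misreading Latin-2 bytes as Latin-1 is assumed here as the typical failure:

    # ő and ű are missing from Latin-1, so they come back as õ and û.
    raw = "Árvíztűrő tükörfúrógép".encode("iso-8859-2")
    print(raw.decode("iso-8859-1"))   # Árvíztûrõ tükörfúrógép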

Esperanto was once commonly encoded in Latin-3 to represent its accented characters: ĉ, ĝ, ĥ, ĵ, ŝ and ŭ. In some browsers these characters would be incorrectly displayed as the Latin-1 characters occupying the same byte values: æ, ø, ¶, ¼, þ and ý.
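
A short Python sketch of that Latin-3-as-Latin-1 misreading:

    # The six Esperanto letters occupy byte values that Latin-1 assigns to
    # unrelated characters.
    raw = "ĉĝĥĵŝŭ".encode("iso-8859-3")
    print(raw.decode("iso-8859-1"))   # æø¶¼þý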

