As I’ve shown in previous posts, cataloging multilingual library materials is not so simple. For the next few posts, I plan to discuss my issues with Romanization (i.e., converting foreign scripts into Latin characters). As with MARC-8, there is a long (and very good) historical precedent for Romanization. And, as with MARC-8, technology and interconnectedness have advanced to the point where Romanization has become a problem. I propose that we stop Romanizing and instead enter data in its vernacular form, leaving it to algorithms or other forms of automation to Romanize should that be necessary. A few of the problems I have with Romanization are as follows; each will be discussed in later posts.
- Current standards allow for the inclusion of only certain vernacular scripts (JACKPHY, Cyrillic, and Greek; more on these later). These restrictions are holdovers from MARC-8 and should be discarded immediately. They also prevent users of every other script from searching the catalog in their own writing system.
- For most non-Latin scripts there exists a wide variety of Romanization schemes. Why force library users to learn ALA-LC (the standard in US cataloging)?
- Quite a few languages have official Latin-based alphabets, either because they switched from a non-Latin alphabet to a Latin-based one (e.g. Azerbaijani, Uzbek) or because a governmental body promotes a specific Romanization (as in South Korea).
- It is not unheard of for a title (or other tidbit of catalogable material) to exist in multiple scripts. This makes Romanization messy.
- Romanization goes against the spirit of RDA. If RDA instructs us to enter data as it exists on the piece, then why do we Romanize?
For further details on the “official” Romanization schemes, see the ALA-LC Romanization Tables.