Readability test

Readability tests, readability formulas, or readability metrics are formulas for evaluating the readability of text, usually by counting syllables, words, and sentences. Readability tests are often used as an alternative to conducting an actual statistical survey of human readers of the subject text (a readability survey). Word-processing applications often include built-in readability tests that can be run on a document as it is being edited.

Applying a readability test gives a rough indication of a work's readability, with accuracy increasing when the readability of a large number of works is averaged. The tests generate a score based on characteristics such as the statistical average word length (used as a proxy for semantic difficulty) and average sentence length (used as a proxy for syntactic complexity) of the work.
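
For illustration, the raw statistics that such formulas consume can be computed directly from a text. The following Python sketch is a minimal example; the whitespace-and-punctuation tokenization and the sample passage are illustrative assumptions, not part of any published formula.

    import re

    def text_statistics(text):
        """Return (average word length in letters, average sentence length
        in words) -- the raw inputs that most readability formulas use."""
        # Naive sentence split on '.', '!' and '?' (an approximation).
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        # Naive word tokenization: runs of letters and apostrophes.
        words = re.findall(r"[A-Za-z']+", text)
        avg_word_length = sum(len(w) for w in words) / len(words)
        avg_sentence_length = len(words) / len(sentences)
        return avg_word_length, avg_sentence_length

    sample = ("Readability tests estimate how hard a text is to read. "
              "They usually count syllables, words, and sentences.")
    print(text_statistics(sample))  # roughly (5.1, 8.5) for this sample

A published formula would then combine statistics like these, typically as a weighted sum mapped onto a grade-level or difficulty scale.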

Some readability formulas refer to a list of words graded for difficulty. These formulas attempt to overcome the fact that some words, like "television", are well known to younger children despite having many syllables. In practice, however, the simplicity of word- and sentence-length measures makes them more popular in readability formulas.[citation needed] Scores are compared with scales based on judged linguistic difficulty or reading grade level. Many readability formulas measure word length in syllables rather than letters, but only SMOG has a computerized readability program incorporating an accurate syllable counter.
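
Because many formulas measure word length in syllables, automated implementations need some form of syllable counter. The Python sketch below shows a common vowel-group heuristic, included purely for illustration; it is an approximation, not the counter used by SMOG or any other published program, and accurate English syllabification generally requires a pronunciation dictionary.

    import re

    def estimate_syllables(word):
        """Rough syllable estimate: count groups of consecutive vowels and
        subtract a likely silent final 'e'.  A heuristic only; accurate
        tools rely on pronunciation dictionaries."""
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1  # e.g. "sentence" -> 2, not 3
        return max(count, 1)

    for w in ["television", "cat", "readability", "sentence"]:
        print(w, estimate_syllables(w))
    # television 4, cat 1, readability 5, sentence 2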

Since readability formulas do not take the meanings of words into account, they are not considered definitive measures of readability.
