Hm, except I think the scope broadening was a _great_ idea. A standard that only covered the human-written characters that had encodings created for them at that time would have been much less useful than what we've got.
Thanks for your explanation of the history, that it wasn't exactly short-sightedness. Still, I wouldn't "blame" scope-creep -- maybe it's just another unusual example of the standards-makers involved managing to make the right decision at almost every point, even when it involved 'competition' between standards bodies.
The UCS-2 leftover stuff is one of the biggest problems in practical Unicode at the moment, alas.
Oh, certainly. In an ideal world we would have had the ISO 10646 scope from the start, combined with maybe UTF-8. I do occasionally come across people "explaining" UTF-16 by saying the Unicode consortium couldn't count, which I feel is unfair even if it's a lie-to-children.
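To make the UCS-2 leftover concrete: code points above U+FFFF don't fit in one 16-bit unit, so UTF-16 encodes them as a surrogate pair, and APIs designed in the UCS-2 era count code units rather than characters. A small Python sketch of the mismatch:

```python
# One code point outside the Basic Multilingual Plane (an emoji).
s = "\U0001F600"

# Python 3 strings count code points: this is a single character.
print(len(s))  # 1

# In UTF-16 the same character needs two 16-bit code units
# (a surrogate pair), which is what UCS-2-era APIs report as "length 2".
utf16 = s.encode("utf-16-be")
print(len(utf16) // 2)  # 2
```

This is the same reason a JavaScript `"😀".length` is 2: the language's string model dates from when UCS-2 and "16 bits per character" were assumed to be enough.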