If you wish to join us, information on events is at http://voices4iran.org/.
I lost at least three hours today finding out about this, and I found out about it by accident, because I had Calendrical Tabulations at hand and happened to look at the Chinese calendar column. There are several conflicting pieces of information here and there on the internet, which confused me to the point that I thought the actual algorithm was not publicly available.
Just wanted to share a bit of my own experience with being overweight, losing a lot of it, and then gaining some of it back:
I highly recommend The Hacker’s Diet, available online for free. It is written by John Walker, of AutoCAD fame.
The very short book helped me lose about 15 kilos easily (and with no exercising) a few years ago. I have started to diet again these days, with a goal of losing about 30 pounds (almost the same amount, but I now live in the US).
Even if you hate diets and diet books, still read it. I would recommend reading it even if you are not overweight!
Footnote: The author of the book has made all the code he used in the book (with several updates) available as public domain code online. He also runs a server with the tools installed for public use, if you are the lazy type, like me. It's all here.
Font files don’t have that information directly. How would a font designer know that his font supports Aruban Papiamento just fine, which uses a different orthography than the Papiamento written in the Netherlands Antilles, for example? What about African or Native American languages? Or Mongolian? Or Kurdish? He just designs and tests glyphs for characters and languages he is interested in. If the resulting font happens to support Filipino too, good for him and his users; if it doesn’t, he may not care. At best, a list of the languages the font designer believes the font supports may be found somewhere in the documentation.
In the present freedesktop stack, the language support detection task is done by fontconfig. When an application, like Firefox, wants to display text in some language, a text layout engine, like Pango, will ask fontconfig for a font that supports displaying text in that language (possibly with some other properties, like the font being bold and sans serif). fontconfig then uses its various font suggestion rules and orthography files to give back to the engine the best font it can find. If fontconfig doesn’t know anything about the language, or has wrong information, it may give you something totally off, like a Latin or Devanagari font for a language written in the Arabic script.
What font designers may not know (or care about), fontconfig needs to know. The usual way of knowing, especially for not-very-famous fonts or languages, is through orthography files. These files contain a list of Unicode characters that play a letter-like role in the language. For example, for French, it is a list of basic Latin letters plus all the ligatures (like œ) and accented letters (like ï). fontconfig runs the list through each font installed on your machine and sees if it has glyphs for all the characters listed. If it does, the font is assumed to support the language.
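The coverage check described above can be sketched in a few lines. This is not fontconfig’s actual code, just an illustration of the idea: treat a language’s orthography as a set of required characters, and consider a font to support the language if its character map covers all of them. The font names and the cut-down French orthography here are made up for the example.

```python
# Hypothetical mini orthography for French: basic Latin letters plus a
# few of the accented letters and ligatures mentioned above.
FRENCH_ORTH = set("abcdefghijklmnopqrstuvwxyz") | set("éèêëàâîïôùûçœ")

def supports_language(font_cmap, orthography):
    """True if the font has a glyph for every character the
    language's orthography requires (subset test)."""
    return orthography <= font_cmap

def best_font(fonts, orthography):
    """Return the first installed font that fully covers the
    orthography; very loosely mimics fontconfig's filtering."""
    for name, cmap in fonts:
        if supports_language(cmap, orthography):
            return name
    return None  # fontconfig would fall back to its other rules here

# Two made-up fonts: one ASCII-only, one with the accented letters too.
ascii_chars = set(chr(c) for c in range(0x20, 0x7F))
fonts = [
    ("AsciiOnly", ascii_chars),
    ("LatinPlus", ascii_chars | set("éèêëàâîïôùûçœ")),
]

print(best_font(fonts, FRENCH_ORTH))  # -> LatinPlus
```

The real check in fontconfig works on precomputed charsets rather than Python sets, but the logic is the same: a single missing character is enough to disqualify a font for a language, which is why errors in the orthography files matter so much.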
Getting back to my own story, I thought of checking orthography files to see which languages my packaged fonts support. But when I looked into a few, I found several bugs and unsupported languages. Behdad encouraged me to fix them early, for a chance to get them into fontconfig 2.7.
During the past few weeks, I’ve been trying to hunt things down and fix them during my free time. I achieved my first target of matching glibc locales (those without ‘@’). I’m now on my second target of matching languages with two-letter codes; remaining are: Akan, Avestan, Cree, Ewe, Herero, Sichuan Yi, Javanese, Kanuri, Kongo, Kuanyama, Luba-Katanga, Nauru, Navajo, North Ndebele, Ndonga, Ojibwa, Pali, Quechua, Rundi, Sango, Shona, Sundanese, Tahitian, and Zhuang. After that, there are thousands of languages with three-letter codes, which would need an army the size of SIL International.
Everything I did is in my git tree here. If you want to help, file bugs with your findings at http://bugs.freedesktop.org/. You can also check out the existing orthography bugs to avoid duplication.
I was just reading an article (in Persian) about the registration of the 100,000th domain in “.ir”. There’s been an event, with a long list of speakers that includes quite a few Iranian politicians involved in linguistic or Information Technology issues.
The best quote ever is from the highest ranking government official in charge of IT issues: “Engineer Rezaee, the Secretary of the Supreme Council of Information Technology, [...] expressed his gratitude toward the people responsible in the institute [in charge of .ir] for their vigilance in selecting the domain name .ir for Iran, and added that if the choice had not happened in time, other countries like Ireland or Iraq may have chosen it for themselves”. That’s all that is quoted from him, which suggests the rest of his speech was probably worse...
The poor guy probably doesn’t know about standards, and I’m quite sure no one corrected him by pointing to ISO 3166, first published in 1974, years before the founding of the institute in 1989. Even those codes were based on the codes introduced in the 1949 Geneva Convention on Road Traffic. When “IR” was first internationally introduced for Iran, Siavash Shahshahani, the gentleman in charge of .ir’s growth, was seven years old!
Update: According to this Wikipedia page, “IR” has been in use for Iranian cars since 1936 (interesting date, since until early 1935, Iran was internationally called “Persia”). But the article does not cite its sources, so I can’t really confirm it. Still, even if it came into use in 1936, it was definitely not standardized internationally until 1949.
What’s really annoying is that to someone who knows a bit about Middle Eastern culture and language, a lot of things are very phony. These are some random things from 24 that I noticed. (Note: I am not a native speaker of Arabic. I just learned some in school.)
Of course, 24 is famous for sometimes showing torture as effective, depicting huge conspiracies, showing government officials on very foolish errands and breaking laws left and right, and, very interestingly, a Democratic Chief of Staff becoming a Republican Chief of Staff in the next administration. (All in all, I really think the world of 24 is a parallel universe. Fun to watch, but not much connection to the real world.)
The disjoint Arabic phenomenon is not unique to 24, of course. Even better-produced shows like Lost do it. In Season 4, Episode 9, a TV news program is shown, supposedly a Tunisian broadcast covering something happening in Iraq. The Arabic text is totally disjoint, and unacceptable to anybody who knows anything about the language or script.
I suppose the producers pay people to translate the text into Arabic. Can’t they also make sure the software they use to render the text displays it correctly? If it doesn’t, why bother? Just show some squiggles!
Tintin did it much better, with much lower budget, I guess.
But I think the world renewed itself earlier this year.
But today, I witnessed a new US president, clearly wise, clearly intelligent, and clearly a thinker. I was longing for the day to hear such a thing as “we reject as false the choice between our safety and our ideals” from a US president. Or pearls of wisdom like “know that your people will judge you on what you can build, not what you destroy” or “we can no longer afford indifference to the suffering outside our borders, nor can we consume the world's resources without regard to effect”.
I am so happy to be in this country at such a time as this. And I am surprised at myself for having considered him my ideal US candidate for president since I found out about him back in 2004. I didn’t think he would run, I didn’t think he would win, but I followed all his moves. All this time, I cried, laughed, drank, read, informed, and debated. Back home in Iran, in transit, and here in California. I could not vote for him, and would not be able to vote for him in 2012 either, but as a fellow citizen of the world, he has my support.
Congratulations, World! Or should I say, Happy New World!
I saw interesting stuff and boring stuff, but the best thing that happened was meeting "spot". He spent a couple of hours with me over drinks, providing free wisdom (and selling me ideas?). He’s so amazing!
The reporter did not really contact me after the interviews, so I thought the article was cancelled. Apparently it was not.
He called it “Word War III”. It gives interesting insight into the lifetime of a Wikipedia article. It also has some quotes from me that I find a bit funny now. It’s like the missing piece of this weblog. Read it!