Older blog entries for caolan (starting at number 189)

libexttextcat 3.2.0

Released libexttextcat 3.2.0 (Extended Text Categorization, used to guess the language that input text is written in). It can be found in this download dir. There are no code changes from 3.1.1, but it adds a large collection of extra language signatures, bringing libexttextcat's language support nearly up to par with LibreOffice's, modulo the languages LibreOffice supports that have no convenient UDHR translation to use as the basis for generating a language fingerprint.

Syndicated 2011-11-13 22:41:59 from Caolan McNamara

CTL/CTL format character previews

As Lior Kaplan demonstrated at LibreOffice 2011 Paris, our format character preview really sucks for CTL and CJK users. If no CTL/CJK text is selected then no CTL sample text is shown, and the CJK sample text is the fontname itself. Many font names are just Latin text, so they give no indication of what the font looks like in the actual script/language being written.

e.g. with no text selected, the old dialog for CTL will only preview some Western text, making no attempt to show any sample CTL text, or even the CTL fontname. For CJK it will additionally show the fontname of the CJK font in the preview, which isn't helpful if the CJK fontname contains no CJK glyphs.

Simply adding the CTL fontname wouldn't help much, seeing as the fontname is David CLM. So a first stab at "doing the right thing", currently reusing the preview text from the font dropdown, gives me…

Code for all this is mostly in svtools/source/misc/sampletext.cxx, where there is now a hugely over-engineered set of heuristics to guess the script a font is best tuned for, plus various functions to generate suitable text for the cases where all we have is the font, the font+language, or just the language, and for whether we want a short identifier classifying what script a font might render well versus a longer sequence of sample text for a font preview.

Probably best to drop rendering the fontname in the Western case of the text preview and use some sample text there too, at least for the mixed Western+CTL+CJK case, as it's confusing to have a font name rendered alongside sample text in another font.

Syndicated 2011-10-21 10:59:53 from Caolan McNamara

PhagsPa and Tai Le, sample text?

Looking through my fonts that are clearly tuned for a single specific script, there remain two scripts that niggle me, as I don't have suitable sample text for them: PhagsPa and Tai Le. I'm looking for a short snippet of sample text in those scripts suitable to stick into the font-dropdown preview. Ideally something fairly equivalent to "Alphabet", "Script", "PhagsPa/Tai Le" or "Tibetan/Tai Lü".

Syndicated 2011-10-19 22:29:50 from Caolan McNamara

libexttextcat: text guessing feature

LibreOffice inherited a text language guesser, based on textcat from wise-guys.nl and extended by Jocelyn Merand to handle UTF-8 text. This is the thing that makes the suggestions as to what language your text might really be in when you right-click on some misspelled text and choose Set Language.

We’ve now spun this off as a standalone libexttextcat, fixed up some conversion problems from the original selection of 8-bit encodings, and generated new language fingerprints in other cases, which should give better results for various languages and allow us to enable checking for some languages that had been disabled until now.
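The core fingerprint idea behind textcat can be illustrated in a few lines of shell. This is a toy sketch, not libexttextcat's actual code (createfp ranks character n-grams of several lengths); it just shows the ranking principle on bigrams:

```shell
# Toy fingerprint: rank the most frequent character bigrams of a text.
# libexttextcat's createfp does the real thing over n-grams of mixed
# lengths; this only demonstrates the counting-and-ranking principle.
printf 'ab ab abc' | awk '{
  for (i = 1; i < length($0); i++)
    count[substr($0, i, 2)]++
} END {
  for (g in count) print count[g], g
}' | sort -rn | head -3
```

The top-ranked n-grams form a language's fingerprint; classification then compares an input text's own ranking against each stored fingerprint and picks the nearest one.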

The current list of languages it attempts to detect can be seen here

Here’s a plausible process for adding your favourite language to it, given a git clone of git://anongit.freedesktop.org/libreoffice/libexttextcat, bootstrapping from the insanely-translated UDHR, with Abkhaz as the example.

cd libexttextcat/langclass/ShortTexts/
wget http://unicode.org/udhr/d/udhr_abk.txt
#skip english header, name result using BCP-47
tail -n+7 udhr_abk.txt > ab.txt
cd ../LM
../../src/createfp < ../ShortTexts/ab.txt > ab.lm
echo ab.lm ab--utf8 >> ../fpdb.conf

Then update the check target in src/Makefile.am and confirm with make check that ShortTexts/ab.txt is detected as ab.

I’ll remove the need for a configuration file in a later version, and convert the result to a BCP-47 tag. For the moment it remains a drop-in replacement for the original solution, which necessitates retaining the slightly odd language-tag syntax.

Syndicated 2011-09-28 14:10:11 from Caolan McNamara

git, really nifty after all

Maybe there’s something to the cult-of-git after all :-) . vcl/unx/source/fontmanager/fontcache.cxx had some code which painstakingly constructed a string, only to do nothing with it. Clearly at some time in the past it was used, so when did its use go away? This is a file which has been moved from place to place over the years; hmm, potentially tricky to scratch the itch of knowing when it happened? Not at all…

git log --follow --oneline -S'suspiciously missing variable' /path/to/file.cxx

and 2 seconds later I have a list of 5 commits, with the answer at the top of the list. Back in 2005, a rework of the font cache optimized out the stat on a file while leaving behind the constructed path to that file. No undetected nightmare merge bug then, just a missed micro-optimization opportunity.
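The pickaxe is easy to try on a throwaway repo (the file and string names below are hypothetical, just to show the mechanics): -S lists exactly the commits where the number of occurrences of the string changed, i.e. where it appeared and where it vanished.

```shell
# Make a scratch repo, add a string in one commit, remove it in another,
# then ask -S which commits touched that string.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
echo 'buildFontPath();' > fontcache.cxx
git add fontcache.cxx
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add path'
echo '/* path gone */' > fontcache.cxx
git -c user.name=demo -c user.email=demo@example.com commit -qam 'drop path'
git log --oneline -S'buildFontPath' | wc -l   # prints 2: the add and the removal
```

Adding --follow, as in the command above, keeps the search working across the renames this file accumulated over the years.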

Syndicated 2011-09-19 23:57:25 from Caolan McNamara

all text rendered with cairo

So, as of today all LibreOffice (3.5 onwards) text rendering under X goes through cairo. This was already the case in practice for horizontal text for quite a while; the additional change is that it's now true for vertical text as well.



Yes, I know it’s still rather sub-optimal. The current implementation is basically intended to be bug-for-bug compatible for now, though I couldn’t resist improving the positioning of 0x30FC.

Test-case at http://cgit.freedesktop.org/libreoffice/core/tree/qadevOOo/testdocs/vertical-testcase.odt

Syndicated 2011-08-19 11:56:02 from Caolan McNamara

sgv, StarDraw 2.0 examples with text ?

I wonder if anyone has any sgv documents left around: not svg, but sgv, the StarDraw 2.0 format. I'm looking for .sgv documents that contain text, ideally text outside the ASCII range. A few umlauts would probably suffice.

Syndicated 2011-08-08 11:12:50 from Caolan McNamara

unused code, libreoffice style

The return of callcatcher-derived lists of unused code in LibreOffice. I tweaked callcatcher to understand the additional gcc command-line options used by the new gbuild modules so it can be dropped in as a gcc replacement in that environment.

There’s now a findunusedcode target in the top-level Makefile and a cached list of easy-to-remove methods in the tree as unusedcode.easy. These are non-virtual C++ methods which are not called directly, nor have their address taken, by any code in a stock debug-level Linux build.

What distinguishes unusedcode.easy from not-easy is simply that the easy list is restricted to C++ name-mangled class-level symbols, and so omits any non-mangled C-style symbols which might be dlsym'ed from some not-easy-to-find entry point.

At a count of 5176 easy unused methods there's enough there to be getting on with for the moment, and we can revisit the C-style symbols with a whitelist of known dlsym names once those are done.
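The mangled/non-mangled split is mechanical: Itanium-ABI C++ symbols start with _Z, so a filter along the following lines (an illustration with made-up symbol names, not the project's actual script) separates the easy list from the C-style leftovers.

```shell
# C++ mangled names begin with _Z under the Itanium ABI; plain C symbols
# don't, and get held back in case something dlsym()s them at runtime.
printf '_ZN3Foo3barEv\nsome_c_entry_point\n_ZN3Baz4quuxEi\n' | grep '^_Z'
```

c++filt turns the survivors back into readable signatures, e.g. `echo _ZN3Foo3barEv | c++filt` prints `Foo::bar()`.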

Syndicated 2011-07-11 12:34:48 from Caolan McNamara

regression testing libreoffice filters

For regression testing LibreOffice filters I’ve now arranged things so that each import filter’s cppunit test comprises three data dirs: a pass dir, a fail dir and an indeterminate dir. Files in pass must parse without error; those in fail are expected to fail, but to fail gracefully by returning an error or throwing an exception, i.e. a crash is a fail on a “fail” test, while “can’t parse” is the expected pass state.

The pass/fail dirs are typically pre-filled in the tree with a small sample of tricky documents which get tested at every build time to ensure they remain working.

indeterminate dirs, on the other hand, are expected to be empty in the tree, and the cppunit tests don’t care whether their contents can be parsed or not, only that they don’t crash. This is really convenient for searching for crashes in a large document collection (horde), given that it’s an order of magnitude faster than using the full application to load and lay out the results.

So I/we can just take a large horde of e.g. .doc files, throw them into sw/qa/core/data/ww8/indeterminate, run make -sr in sw, and sit back to see if anything in there is a crasher at the parser level. For extra goodness, export VALGRIND=memcheck to run the whole lot under valgrind.
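The three-outcome contract can be mimicked with a stand-in parser (a toy sketch; `parse` here is a hypothetical stand-in for an import filter, not LibreOffice code): pass files must parse, fail files must fail with an ordinary nonzero exit rather than a crash, and indeterminate files may do either as long as they don't crash.

```shell
# Stand-in "filter": succeeds iff the file looks like a real document.
parse() { grep -q 'MAGIC' "$1"; }

echo 'MAGIC payload' > pass.doc     # must parse cleanly
echo 'random junk'   > fail.doc     # must fail gracefully

parse pass.doc && echo 'pass: ok'
parse fail.doc || echo 'fail: ok (graceful error, no crash)'
```

A crash (a signal rather than an error return) is the only thing that fails an "indeterminate" test, which is what makes the dir usable as a crash-hunting hopper for arbitrary document hordes.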

FWIW, today anyway

  1. All 3721 attachments of (alleged) mime-type application/msword in openoffice.org’s bugzilla pass without crash when placed into ww8/indeterminate. To be re-run under valgrind later
  2. And all (ok, only 128) attachments of (alleged) mime-type application/msword in freedesktop.org’s bugzilla pass under VALGRIND=memcheck when placed into indeterminate.

I’ve got doc, rtf, qpro, wmf, emf, hwp, lwp and sxw organized and pre-seeded with a sample handful of files so far. There are plenty more filters than that, of course, but .doc is my current focus as the richest vein of available had-bugs-reported documents.

Syndicated 2011-07-11 11:58:27 from Caolan McNamara


I’ve been somewhat remiss, our new baby Naoise being held by his big sister.

Syndicated 2011-06-06 20:56:24 from Caolan McNamara
