Older blog entries for const (starting at number 11)

Finally found the time to create the TightVNC project page at advogato.org. Is there a way to remove obsolete projects here (VNC Tight Encoder)?

Just announced VNC Tight Encoder 1.1. Now I'm watching an amazing number of hits to the project homepage. I did not expect such interest. It seems like everybody out there needs binaries, which are absent for now. :-)

For the last two weeks I was too busy to write in the diary at advogato.org. Now the official part of the VNC compression project is finished. The source is available for download from the project homepage. The new encoder shows compression ratios 5..30% higher than zlib compression, and it's faster than the zlib encoder. The latter fact surprised me a lot, as I have not thought about speed optimisations yet and there are many places in the code where such optimisations are possible.

Just finished a small separate part of the VNC compression project: automatic SSH tunneling for the unix vncviewer. Everybody is invited to try/test the code: the patch and brief instructions will be available at the project homepage shortly.
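For reference, the manual equivalent of such a tunnel looks roughly like this (a sketch with placeholder host and user names; by convention VNC display :1 listens on TCP port 5901):

```shell
# Forward local port 5901 to the VNC server on the remote machine
# (user@remotehost is a placeholder). -f backgrounds ssh after
# authentication; -N means "no remote command, just forward".
ssh -f -N -L 5901:localhost:5901 user@remotehost

# Then point the viewer at the local end of the tunnel:
vncviewer localhost:1
```

The automatic tunneling patch essentially sets up the first step for you before the viewer connects.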

Night, 4:10. The day was lost. Did not work today for no reason. That's bad. The main plan for tomorrow is drinking some beer with a friend. I haven't seen him for months.


Worked hard on the VNC compression project. The first result is that a subset of the new "tight" encoding has been implemented in the vncviewer and in a standalone VNC proxy. Currently the compression is equivalent to pure zlib encoding as implemented in TridiaVNC (by the way, I was mistaken when I said that zlib compression is implemented inefficiently in TridiaVNC).

Now it's time to actually improve compression. I see two primary directions to do that:

  1. There should be a way to split screen updates into smaller rectangles representing different types of screen data (full-color areas, bi-level drawings, solid areas, etc.). Each subrectangle should be compressed in a separate zlib stream, and different filters should be applied to each type of data.
  2. Efficient filters (predictors) should be designed to handle the data types which usually compose typical screen contents.
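To make the idea concrete, here is a minimal sketch in Python (not the actual encoder; the data-type names and the bi-level filter are hypothetical illustrations): each data type gets its own persistent zlib stream, and a filter transforms the pixels before compression.

```python
import zlib

# One persistent zlib stream per data type, so each dictionary adapts
# to its own kind of content (hypothetical type names, not the real encoder).
streams = {
    "full_color": zlib.compressobj(),
    "bi_level": zlib.compressobj(),
}

def filter_bi_level(pixels, fg, bg):
    """Palette filter: reduce a two-color rectangle to 0/1 bytes,
    which compress far better than raw pixel values."""
    return bytes(1 if p == fg else 0 for p in pixels)

def encode_subrect(kind, pixels, fg=None, bg=None):
    data = filter_bi_level(pixels, fg, bg) if kind == "bi_level" else bytes(pixels)
    z = streams[kind]
    # Z_SYNC_FLUSH emits all pending output but keeps the stream's
    # dictionary alive for the next update of the same data type.
    return z.compress(data) + z.flush(zlib.Z_SYNC_FLUSH)

# A 4x2 two-color subrectangle goes through the bi-level stream:
chunk = encode_subrect("bi_level", [7, 7, 9, 9, 7, 9, 7, 7], fg=9, bg=7)
```

The key point is that the filter output is highly regular, so the per-type zlib dictionary keeps seeing similar byte sequences.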

And there are other problems with VNC that interfere with good compression: the server sends many unnecessary screen updates (even when the screen contents are untouched), potentially large updates are frequently split into many small pieces, etc.


My girlfriend dislikes the way I work. I sleep through a huge part of the day and work the whole night. And I'm tired of smoking: it's time to quit.

Worked on the VNC compression project. Finished the draft version of the new encoding. I have to say that the proposed compression scheme is somewhat too flexible: anything can fit into such an encoding (including uncompressed data, hextile-encoded data, or even uncompressed data expanded with some garbage :-). It's something like a meta-encoding. But it's still far from the final version. And I'm still not able to compare performance, since it's not implemented yet...

While designing this encoding, I've tried to make the VNC server do most of the complex work, so the client modifications will be relatively simple, while the server part can be fairly complex. I'm afraid the time available for this project will not be enough to do everything ideally, so I'll have to cut my huge ambitions and develop something that is easier to implement... And I'll continue the work after finishing the request at Cosource.com...

To Whizziwig: I have one good book about FoxPro, and I don't need this book (BTW, I hate FoxPro too). But, unfortunately, the book is in Russian. :-)

Yesterday I went to sleep at 8 in the morning and woke up at 7pm today. It's terrible -- I'm a night animal.

I've discovered good news in my mailbox today: mgetty-1.1.21-to-28072000 has been released. I've built new RPM packages for KSI Linux, and I'm happy to see that this is the first mgetty version I do not have to patch to make it build and work as desired. Only the distribution-specific config.patch is applied. All the corrections I have contributed to mgetty/vgetty are in place, so the patch in my article at Advogato is no longer relevant.

But there are other problems: I'm tired of fighting with the mgetty+pppd combination. It seems like I have a kernel/pppd/chat/setserial/mgetty conflict. Pppd worked, but I had to run ifup (or ppp-on) twice with a pause between attempts. Now I've changed the chat script a little: the modem dials on the first attempt, but I'm still not satisfied, because I have failed to find the precise reason for the problem. Everything worked fine on another machine with the same configuration, and other programs such as minicom (and chat itself, executed without pppd) also share the line with mgetty without any problems. Well, I have no time to narrow down the problem, and anyway I plan to reinstall the system (BlackCat Linux instead of KSI Linux). Let's see...

Another thing I'm tired of: my provider sucks. And any other provider in our town is even worse. Why am I living in Siberia?

I wanted to know how zlib compression is integrated into TridiaVNC. I've got several files from their source via CVS, and now I know. Exactly as expected, their implementation is not very efficient: the compression context (dictionary) is not saved between screen updates. Because of this, it should be easy to outperform their compression ratios.
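A quick experiment (a sketch with made-up update data, not TridiaVNC code) shows why the dictionary matters: a persistent stream compresses a repeated update almost for free, while a fresh context pays full price every time.

```python
import zlib

# Two identical "screen updates" (made-up data standing in for
# framebuffer contents that barely change between updates).
update = bytes(range(256)) * 4
updates = [update, update]

# A fresh zlib context per update, as in TridiaVNC:
fresh = sum(len(zlib.compress(u)) for u in updates)

# One persistent stream: the second update is found in the dictionary
# built while compressing the first, so it costs almost nothing.
z = zlib.compressobj()
persistent = sum(len(z.compress(u) + z.flush(zlib.Z_SYNC_FLUSH)) for u in updates)

assert persistent < fresh  # the shared dictionary pays off
```

Z_SYNC_FLUSH forces all pending output onto the wire at the end of each update while keeping the 32KB sliding window intact for the next one.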
