Older blog entries for IlyaM (starting at number 11)

MongoDB client library: C vs C++

I've been playing a bit with MongoDB recently. In particular, I've looked into the source code of the client libraries, as I was interested in how hard it is to change the client API to support an async mode of operation. One thing I noticed is that the C version of the client library is shorter and much easier to read than the C++ version. I cannot shake off the feeling that sometimes C++ feels like a step backwards compared to C.

Syndicated 2010-02-10 14:41:00 from Ilya Martynov's blog

Running Puppet on big scale

This is a rehash of my comment in slashdot discussion and my comment on Alexey Kovrygin's blog post.

We run Puppet on hundreds of servers in two datacenters, and it was a pain to get it working right. Many issues show up here and there: memory leaks in both the client (puppetd) and the server (puppetmaster), periodic lock-ups and even file corruption. Besides that, it is quite slow. These problems are slowly being fixed with each new release, but right now using Puppet for big installations is a source of constant problems. Unfortunately you do not notice these problems until you have many servers to manage; on smaller installations it seems to work without problems, or at least they happen rarely enough not to be noticeable. In our case the number of servers we manage grew slowly, so we fell into the trap: we now rely on Puppet too much and it is too late to change. In the end we managed to work around most of the Puppet issues we hit, so combined with monitoring to catch problems it works well enough for us. On the other hand, if I were starting from scratch I would evaluate something different for the project. Perhaps I would use Cfengine. It is not as flexible and nice as Puppet, but it is probably more stable simply because it is much older. I have talked to people who used Cfengine on a much bigger scale (thousands of servers) and they did not recall stability problems with it. In the long run Puppet will probably be fine too, as it is being developed actively, but right now I'd consider it to be in "beta" state. Or maybe even in "alpha".

For anyone interested in getting Puppet to work under a real workload, this is what we do:


  • We run Puppet under Apache+Mongrel. By default it runs under WEBrick, which breaks easily under any moderate load, so we use Apache+Mongrel instead. Another benefit of using Apache is that you can run multiple backends. This helps if you have a multi-core server for the puppetmaster, as by itself it can use only one core. Alternatively you can use Nginx+Mongrel or any other web server with proxying capabilities in front of Mongrel.

  • Because Puppet is slow we load balance it across two boxes in each datacenter.

  • We restart backends from time to time because they leak memory. We have a cron job to do this every 15 minutes (yes, it is that bad).

  • The puppetmaster has a cache which we have seen get corrupted sometimes. Our "fix" is to delete it before each restart. This might be fixed in a later version - I've seen some closed bug reports which looked relevant - but we still do this cache cleanup just in case.

  • We do not run the Puppet client as a daemon; we run it as a cron job. When run as a daemon, the Puppet client leaks memory and gets stuck from time to time. In our cron job we add a random sleep before starting the client, to make sure requests do not all hit the server at the same time and overload it (see the cron sketch after this list).

  • We never serve big files over Puppet using its fileserver. Puppet does a number of stupid things with big files, like reading them into memory before serving them to the Puppet client. If you need to distribute big files, use other means (HTTP, FTP, NFS, etc.).
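
As an illustration, here is a minimal sketch of the two cron entries described above, in /etc/cron.d syntax. The paths, flags and exact timings are placeholders rather than our real configuration:

# restart the leaky puppetmaster backends every 15 minutes
# (in our setup the restart script also wipes the puppetmaster cache first)
*/15 * * * * root /etc/init.d/puppetmaster restart

# run the Puppet client twice an hour, sleeping up to 10 minutes first
# so that hundreds of clients do not all hit the server at the same moment
5,35 * * * * root perl -e 'sleep int rand 600'; puppetd --onetime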

Syndicated 2009-04-08 12:50:00 from Ilya Martynov's blog

STOMP messaging for non-Java programmers on top of Apache ActiveMQ

Recently I was researching the available options for messaging between Perl programs. In the past I had quite a lot of experience with Spread and I don't want to repeat it: I hated Spread, as it was buggy and unstable. So I looked into other alternatives: XMPP, STOMP and AMQP. AMQP had no Perl client, so it was out. STOMP and XMPP are closely tied in my view, but STOMP looked simpler, so I decided to go with it. There is a very good Perl client library for STOMP: Net::Stomp.

Then there is the choice of server. This is quite an important choice, and here is why: STOMP is theoretically a language-agnostic protocol, but in reality you are very likely to depend on the semantics of a specific STOMP server implementation. For example, as I mention below, the STOMP protocol doesn't really define any rules of message delivery.

There are several servers which support STOMP, but Apache ActiveMQ looked to me like one of the most robust implementations. While Apache ActiveMQ supports a wide range of interfaces, its design is centered around JMS, and it helps to understand the basic concepts of JMS even if you use only STOMP. This was a problem for me, as I don't really program in Java and all the JMS concepts were alien to me. Moreover, most documentation on STOMP and ActiveMQ takes for granted that you know the JMS basics.

So I'm recording all my findings on STOMP/ActiveMQ from the viewpoint of a non-Java programmer. I hope it might be helpful for other non-Java programmers. Word of warning: everything below might be specific to the Apache ActiveMQ implementation of a STOMP server. I didn't bother to check other STOMP servers.

Basic model

As I mentioned earlier, the STOMP protocol by itself doesn't specify rules of message delivery. It is up to the STOMP server to define them. This is where the JMS API model becomes important, as the STOMP implementation is basically just a mapping of the JMS model onto a non-Java-specific protocol. Below is a short summary of the parts of the API model which are relevant to STOMP clients (this is mostly based on my reading of the JMS tutorial, the STOMP protocol description and the description of the JMS extensions to STOMP).

There are two distinct ways to organize messaging:

  1. Use queues. If a message gets into a queue, only one of the subscribers gets it. If there are no subscribers, the server stores the message until someone shows up.

  2. Use topics. For each message sent to the topic, all active (i.e. connected) subscribers get a copy of it. Non-active subscribers can get a copy as well if they registered their subscription as durable in advance. If there are no subscribers, the message gets lost.

How do you use queues and topics in a STOMP client? It is all controlled by the destination you specify when subscribing to messages or sending messages. Destinations like /queue/* act as queues. Destinations like /topic/* act as topics.
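
For example, here is a minimal sketch of publishing to a topic with Net::Stomp (the topic name is made up; compare it with the queue example further below - the only real difference is the destination):

use Net::Stomp;

# every subscriber currently connected to /topic/events gets its own copy
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->send( { destination => '/topic/events', body => 'something happened' } );
$stomp->disconnect;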

There is also a concept of temporary queues and topics in JMS. The idea is that they are visible only to the connection which creates them, so a client can have private queues and topics. I'm not sure if this is exposed to STOMP clients at all. It might be - I haven't researched this, as I don't need it in my application.

Control over reliability of messaging

The JMS API gives you some control over the reliability of messaging, and at least some of it is exposed at the STOMP layer.

Message acknowledgement: a STOMP client, on subscription, tells whether it acknowledges messages automatically or not. Automatic means that a message is considered delivered even if the subscriber doesn't actually read it. I guess there are cases when this makes sense, but I'd argue that the default behavior should be the opposite, as for most applications it doesn't.

Message persistence: if the STOMP server dies, it either loses undelivered messages or rereads them from some permanent storage. Message persistence controls this.

Message priority: in theory the JMS provider tries to deliver higher-priority messages before lower-priority ones. In practice I have no idea - I didn't research how ActiveMQ implements this, as it is not important for my application. Anyway, this bit is exposed in the STOMP protocol as well.

Message expiration: this defines for how long the server keeps undelivered messages.

Transactions: not sure about this one. Both JMS and STOMP support the concept of transactions, but I'm not sure what the exact overlap is. I might look into this later, but for my application transactions are probably not important.
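
As far as I can tell, these knobs are exposed as optional headers on the STOMP SEND frame; the header names below are my reading of the ActiveMQ JMS-extension docs, so treat this as a sketch rather than a definitive reference:

use Net::Stomp;

my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->send( {
    destination => '/queue/foo',
    body        => 'important message',
    persistent  => 'true',               # survive a broker restart
    priority    => 9,                    # 0..9, higher should be delivered first
    expires     => (time() + 60) * 1000, # expiration time as epoch milliseconds
} );
$stomp->disconnect;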

Configuring ActiveMQ as a STOMP server

The latest version (5.2) seems to support STOMP out of the box without any additional configuration. As a quick test you can run the following program. It is just a copy&paste from the Net::Stomp perldoc - I'm adding it here in case they change the perldoc later:

# send a message to the queue 'foo'
use Net::Stomp;
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->send(
    { destination => '/queue/foo', body => 'test message' } );
$stomp->disconnect;

# subscribe to messages from the queue 'foo'
use Net::Stomp;
my $stomp = Net::Stomp->new( { hostname => 'localhost', port => '61613' } );
$stomp->connect( { login => 'hello', passcode => 'there' } );
$stomp->subscribe(
    {   destination             => '/queue/foo',
        'ack'                   => 'client',
        'activemq.prefetchSize' => 1
    }
);
while (1) {
  my $frame = $stomp->receive_frame;
  warn $frame->body; # do something here
  $stomp->ack( { frame => $frame } );
}
$stomp->disconnect;

The default installation doesn't seem to do any authorization, so any login/passcode works.

Syndicated 2009-03-25 15:07:00 from Ilya Martynov's blog

Erlang debugging tips

I've just started playing with Erlang, so I have a lot to discover, but so far I've found several things which help me debug my programs:


  1. I tried to write my programs using OTP principles, but the problem for me was that by default this often causes Erlang to hide most of the problems. The faulty process just gets silently restarted by its supervisor or, even worse, the whole application exits with an unclear "shutdown temporary" message. The solution is simple: start the sasl application and it will log all crashes. For development, starting the Erlang shell as erl -boot start_sasl does the trick.
  2. If you compile your modules with the debug_info switch, you can use a quite nifty visual debugger to step through your program. Quick howto: open the debugger window with the Erlang console command im(), then add modules for inspection via the menu Module/Interpret. Then you can either add breakpoints manually or configure the debugger to auto-attach on certain conditions (say, on the first call). Instead of clicking through menus you can also use Erlang console commands to control the debugger. See i:help().
  3. With the command appmon:start() you can launch a visual application monitor which shows all active applications. One particularly useful feature is the ability to click on an application, which shows the tree of processes it consists of. You can then enable tracing of individual processes. When tracing is enabled, it seems to show the messages sent or received by the traced process.

Syndicated 2008-11-17 11:44:00 from Ilya Martynov's blog

STL strings vs C strings for parsing

I'm working on a project where I need to build a custom high-performance HTTP server. One piece of this server is a parser for URLs in incoming requests. It is very simple, and at first glance it shouldn't be that slow compared with other parts of the server. Yet it was taking quite a lot of CPU according to the profiler. The parser uses STL and basically does several string::find() calls to find the parts of a URL. So I thought maybe string::find() is too slow and decided to benchmark it against strchr(). This is my benchmark code:


#include <string.h>
#include <string>
#include <time.h>
#include <iostream>

using std::string;
using std::cout;

int main() {
    const char* str1 = " a ";
    const string& str2 = str1;

    const unsigned long iterations = 500000000l;

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            const char* pos = strchr(str1, 'a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            string::size_type pos = str2.find('a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }
}

Turns out strchr is much faster as long as the benchmark code is compiled with optimizations on:

ilya@denmark:~$ g++ -O3 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 15.5 sec
Iterations: 500000000 it
Time per iteration: 3.1e-05 msec
Rate: 3.22581e+07 it/sec

ilya@denmark:~$ g++ -O2 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 15.76 sec
Iterations: 500000000 it
Time per iteration: 3.152e-05 msec
Rate: 3.17259e+07 it/sec

ilya@denmark:~$ g++ -O1 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 19.23 sec
Iterations: 500000000 it
Time per iteration: 3.846e-05 msec
Rate: 2.6001e+07 it/sec

ilya@denmark:~$ g++ -O0 test.cc && ./a.out
Total time: 18.64 sec
Iterations: 500000000 it
Time per iteration: 3.728e-05 msec
Rate: 2.6824e+07 it/sec
Total time: 16.89 sec
Iterations: 500000000 it
Time per iteration: 3.378e-05 msec
Rate: 2.96033e+07 it/sec

I checked the same code with callgrind, and from the call graph it looks like the strchr() call was inlined while string::find() wasn't. That could be the reason for the difference in performance. Or maybe the compiler is even smarter and optimized the whole strchr() loop away. I'm not sure the benchmark is completely fair. Anyway, one thing is certain: I should try rewriting my URL parser using strchr() and see if the real code is faster.

Syndicated 2007-12-06 13:06:00 from Ilya Martynov's blog

Beyond XSS and SQL injections

What do HTML, XML and CSV files, SQL and LDAP queries, filenames and shell commands have in common? All of them are text which is often generated by programs. And one commonly observed flaw in such programs is that encoding rules are not followed. These days many developers are aware of SQL injection and XSS problems, as many books, online tutorials, blogs, coding standards, etc. speak about them. Yet I'm not sure there is enough education for developers to use the correct methods to protect their code from these problems. And besides that, there is a lack of awareness that it is not just SQL and HTML. Developers should think more broadly: if you programmatically generate any kind of text, you must think about proper encoding of all data used in the generated text.

Talking about correct methods to secure code from text-encoding-related problems, one of my pet peeves is when people try to strip input data when they really should be thinking about protecting output. Nitesh Dhanjani covers this really well in his blog post "Repeat After Me: Lack of Output Encoding Causes XSS Vulnerabilities". Quote:
The most common mistake committed by developers (and many security experts, I might add) is to treat XSS as an input validation problem. Therefore, I frequently come across situations where developers fix XSS problems by attempting to filter out meta-characters (<, >, /, “, ‘, etc). At times, if an exhaustive list of meta-characters is used, it does solve the problem, but it makes the application less friendly to the end user – a large set of characters are deemed forbidden. The correct approach to solving XSS problems is to ensure that every user supplied parameter is HTML Output Encoded
A good example of the wrong approach is PHP's invention called magic quotes. I have mixed feelings about this thing. On one hand it was probably a good thing, because so much web-based software is developed by dilettantes, so overall we are living in a slightly better world, as magic quotes do somewhat limit the damage from bad code. On the other hand it teaches bad habits while not fixing all the problems in bad code. It also makes everybody else suffer. The good news is that they are getting rid of this abomination in PHP6.

Now let's take a look at some examples of how not to generate text, which I have seen in real life. I'll skip HTML and SQL, as these are well covered elsewhere, and look at the other things I mentioned at the beginning of this article.

XML files: bad code which generates XML often shares the same problems as bad code which generates HTML - after all, the two are closely related. But as XML is a more generic tool, it is used in many domains other than web development, where developers are not "blessed" with knowledge of XSS-like problems. Moreover, I've noticed that even web developers for some reason often consider XML to be something very different from HTML and suddenly forget they have to escape data. I'm especially amused that many people are not aware that you cannot put arbitrary binary data in XML. You have to either encode it into text (base64 encoding is quite popular for this) or put it outside of the XML document.
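
A minimal sketch of the right habit in Perl: build the document through XML::LibXML (the element and attribute names here are invented) and let the library do the escaping instead of interpolating raw strings into markup:

use strict;
use warnings;
use XML::LibXML;

my $doc  = XML::LibXML::Document->new('1.0', 'UTF-8');
my $root = $doc->createElement('items');
$doc->setDocumentElement($root);

my $item = $doc->createElement('item');
$item->setAttribute(name => 'a < b & "c"');  # special characters get escaped for us
$item->appendTextNode('5 > 4');
$root->appendChild($item);

print $doc->toString(1);                     # correctly encoded XML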

CSV files: this format is still quite popular for exchanging tabular data between programs. Guess what? I've seen so many naive CSV producers and parsers that ignore reserved characters and break later when these programs get real data. No, to write a CSV file you cannot just do
print join ",", @columns
What if one of the columns contains, say, "," (a comma)?
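
A minimal sketch of the safer way, using the Text::CSV module from CPAN so that quoting and escaping are handled for you:

use strict;
use warnings;
use Text::CSV;

my @columns = ('widget', 'says "hi", loudly', 42);

# Text::CSV quotes fields that contain commas, quotes or newlines
my $csv = Text::CSV->new( { binary => 1 } )
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag();
$csv->combine(@columns) or die "Failed to encode row";
print $csv->string(), "\n";   # widget,"says ""hi"", loudly",42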

LDAP queries: being a text-based query language, LDAP is a target of very similar problems as SQL. But while many developers are aware of the SQL injection problem, not many are aware that you have exactly the same problem with LDAP queries. It also doesn't help that while nearly all SQL libraries provide tools to escape data in SQL queries, that doesn't always seem to be the case for LDAP libraries. For example, PHP's LDAP extension has no API to escape data at all.
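
On the Perl side the situation is better. A minimal sketch using Net::LDAP and Net::LDAP::Util from CPAN (the host, search base and attribute are made up):

use strict;
use warnings;
use Net::LDAP;
use Net::LDAP::Util qw(escape_filter_value);

my $username = 'joe)(objectClass=*';           # hostile input
my $safe     = escape_filter_value($username); # backslash-escapes filter metacharacters

my $ldap = Net::LDAP->new('ldap.example.com') or die $@;
$ldap->bind;                                    # anonymous bind
my $result = $ldap->search(
    base   => 'ou=people,dc=example,dc=com',
    filter => "(uid=$safe)",
);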

Using a shell to execute commands: if you are running a command using system() in C, Perl, PHP or any other language, and you are constructing the command from your data, you should again treat this as a problem of proper encoding. The example below is from Mozilla's source code:
sprintf(cmd, "cp %s %s", orig_filename, dest_filename);
system(cmd);
Guess what happens if either of these filenames is not escaped for characters which are special to the shell?

While I'm at it, I'd mention that it is probably a good idea to avoid APIs which use the shell to execute commands at all, simply because shell quoting is too hard to get right.
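
For instance, here is a minimal C sketch (not Mozilla's actual code) of doing the same copy without the shell: the filenames are passed as separate argv entries, so shell metacharacters in them are never interpreted:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* run "cp -- orig dest" directly, without /bin/sh in between */
static int copy_file(const char* orig_filename, const char* dest_filename) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execlp("cp", "cp", "--", orig_filename, dest_filename, (char*)NULL);
        _exit(127);  /* only reached if exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(int argc, char** argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <from> <to>\n", argv[0]);
        return 2;
    }
    return copy_file(argv[1], argv[2]) == 0 ? 0 : 1;
}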

What would help a lot is if tools supported developers better when writing correct code that deals with text-based APIs. Sometimes it is just a lack of documentation on encoding rules. For example, a month ago I was learning the Facebook APIs. One of the provided APIs executes so-called FQL queries. This is an SQL-like query language, and naturally I'd expect FQL injections to be covered in the documentation. They aren't; it is not even documented how to escape string data in FQL queries! I played with different queries in the FQL console, and it seems like the standard SQL-like method (i.e. using "\" (backslash)) does work as an escape character in strings, but why did I have to find this out on my own? It is also a shame when libraries built around text APIs do not provide means to properly encode data for the text formats they use. I mentioned one such example above: PHP's LDAP extension provides no functions to escape data for LDAP queries. How hard is it to add this? If you are creating text-based APIs or libraries around such APIs, it is your duty to help the developers who will be using them. So do document the encoding rules and do provide tools to automatically encode data!

Syndicated 2007-09-21 22:55:00 from Ilya Martynov's blog

Perl as replacement for shell scripting (Part I)

By shell scripting I mean bash, as it is what most (all?) Linux distributions use. Bash can be used as a quite capable programming language. Bash allows the programmer to build rather complex scripts by using other programs as building blocks. The system comes with a number of such building blocks - find, grep, sed, awk and many others - and unsurprisingly there is a lot you can do with them. But it is often a challenge to write robust shell scripts which work, or at least fail gracefully, for any kind of input. The main reason is that historically shell scripts could use only one data type - the string*. Those building blocks, the external programs you use in shell scripts, have a very restricted interface: program arguments which are strings, a stream of strings as input, a stream of strings as output, and an exit code.

Even a simple concept like a list has to be emulated. For example, a list of file names is often passed as a string containing the file names separated by whitespace. But what if one of those file names contains whitespace? You get a problem. To fix it you need to escape whitespace characters in the filename. And it is rather easy to miss places where you have to do the escaping. A slightly contrived example:

rm `ls`
This would delete all files in the current directory ... unless they have whitespace characters in their names. There are many similar cases where an unwary programmer can make a mistake in his (or her) shell script. Passing data from one process to another often requires a lot of care, and the simplest code is often wrong. Another problem is that you are very limited in how you can handle errors in shell scripts - you only have the process's exit code to tell you if it finished successfully. And usually it is just a boolean value telling you whether there was any error or not. Quote from the linked document:
However, many scripts use an exit 1 as a general bailout upon error. Since exit code 1 signifies so many possible errors, this probably would not be helpful in debugging.
If, say, mkdir fails, your script cannot easily tell whether it is because another directory with the same name already exists or because you just don't have permissions for the operation.
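
To illustrate, here is a minimal Perl sketch of the same "delete everything in the current directory" task: the file names never go through word splitting, and every failure can be reported individually with the exact error in $!:

use strict;
use warnings;

opendir(my $dh, '.') or die "Cannot open current directory: $!";
for my $name (readdir $dh) {
    next if $name eq '.' || $name eq '..';
    next unless -f $name;    # skip subdirectories and other non-files
    unlink $name or warn "Cannot delete '$name': $!\n";
}
closedir $dh;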

So, any solutions to this problem? As for myself, the moment I see my shell script getting longer than three lines of code, I rewrite the whole thing in Perl. In Perl you don't need to use external programs nearly as often as you do in bash, so you are not limited by their restrictive interfaces (remember, only strings and exit codes for input and output); native Perl APIs can be much more expressive when they need to be.

There is a price, though. Perl code is not always as compact as the equivalent shell code for some scripting tasks. This is because shell scripting is very well optimized for handling interaction between processes, and Perl is not as much. It is worth mentioning that many things which are taken for granted in shell scripting often require Perl modules, including non-standard CPAN modules. This is not a problem as such, except that not all Perl programmers know where to look for things if they are not covered by perlfunc. This is mainly a concern for newbie Perl programmers, but it is still a real problem. Also, using CPAN modules is not always an option.

Of course, in your Perl program you can fall back to using the same external programs you would use in a shell script, but then you lose the advantages of Perl over shell scripting. So ... don't do this if possible. An interesting example of this principle: Perl before version 5.6.0 would fall back to the shell to implement the glob operation. That caused various problems for Perl developers: for example, I saw Perl programs using glob fail when run on a tightly secured web hosting server, because the binary Perl was calling had simply been removed from the server for security reasons. In later versions of Perl the implementation of glob was changed: it is now implemented purely in Perl and doesn't use external programs.

To be continued in Part II: mapping between common shell operations and corresponding Perl modules.


[*] New versions of bash support arrays. I'd argue that the usefulness of arrays in bash is limited, as the programs you call from shell scripts cannot use them to pass output data. You are still limited to string streams and exit codes. Not to mention that this is not very portable across different systems.

Syndicated 2007-09-06 14:33:00 from Ilya Martynov's blog

23 Aug 2007 (updated 24 Aug 2007 at 10:21 UTC) »

libxml++ vs xerces C++


When I was reading "API: Design Matters" I recalled an example of a good API vs a bad API. Actually my example is more about good API documentation vs bad API documentation, but I suspect there is a correlation between the two. It is definitely hard to write good documentation if your API sucks.

So my story is that I had a task to read XML data in a C++ application. The XML data was small and the performance of this part of the application was not critical, so it looked like the simplest way to read the data was to load the DOM tree for the XML document and just use the DOM API, plus maybe a couple of simple XPath queries. It was the first time I needed to do this in C++; I had no previous experience with any XML C++ libraries. So I did a Google search (or maybe it was apt-cache search - I don't remember) and the first thing I found was Xerces-C++. Quote from the project's website:
Xerces-C++ makes it easy to give your application the ability to read and write XML data.
Sounds good, just what I need. So I dug into the documentation and found it to be completely unhelpful, as it is just Doxygen-autogenerated undocumentation. Fine, I can read code; let's check the sample code then. I opened the sample code and found that the shortest example of how to parse XML into a DOM tree and how to access data in the tree (DOMCount) consists of two files which are more than 600 lines long in total. Huh? I don't want to read 15 pages of code just to learn how to do two simple actions: parse XML into a DOM and get data out of the DOM. The other examples are even worse: several files and several classes just to read and print a freaking XML document (DOMPrint). You've got to be kidding me. It cannot be that hard.

I don't really want to waste hours learning an API I'm unlikely to ever use again. After all, I don't write much C++ code and I definitely don't write much C++ code that needs XML. So, time to search further. The next hit was libxml++. It is a C++ wrapper over the popular C XML library libxml. This time there is actually some documentation that tries to explain how to use the library. And this documentation contains an example which, while being just about 150 lines, manages to demonstrate most of the library's DOM API.

End result: I finished my code to read my XML data in the next 30 minutes using libxml++. It is simple, short and it works.
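
For flavor, here is a minimal sketch of the kind of code this ends up being with libxml++; the file name, element names and attribute are invented, not my actual data:

#include <libxml++/libxml++.h>
#include <iostream>

int main() {
    try {
        xmlpp::DomParser parser;
        parser.parse_file("config.xml");  // load the whole document into a DOM tree
        xmlpp::Node* root = parser.get_document()->get_root_node();

        // one XPath query plus a walk over the matching nodes
        for (xmlpp::Node* node : root->find("//item")) {
            if (xmlpp::Element* el = dynamic_cast<xmlpp::Element*>(node))
                if (const xmlpp::Attribute* attr = el->get_attribute("name"))
                    std::cout << attr->get_value() << std::endl;
        }
    } catch (const std::exception& e) {
        std::cerr << "XML error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}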

So what's wrong with Xerces-C++? There is no introduction-level documentation at all. The examples look too complex for the problems they are supposed to solve. And the reason for this is that the API is just bad: it requires writing unnecessarily complex client code.

Update: boris corrected me about the lack of introduction-level documentation in a comment to this blog post. It turned out I had missed it. As a weak excuse I'll blame the bad navigation on the project's site :)

Syndicated 2007-08-23 20:54:00 (Updated 2007-08-24 10:00:03) from Ilya Martynov

23 Aug 2007 (updated 23 Aug 2007 at 21:10 UTC) »

4 silly mistakes in use of MySQL indexes


1. Not learning how to use EXPLAIN SELECT

I'm really surprised by how many developers use MySQL all the time and yet do not know or understand how to use EXPLAIN SELECT. Several times I've seen developers propose serious architectural changes to their code to minimize, partition or cache data in their database, when the actual solution was to spend 30 minutes thinking over the result of EXPLAIN SELECT and adding or changing a couple of indexes.
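
A hypothetical example (the table and query are made up): just prepend EXPLAIN to the query you are tuning and look mainly at the key column (which index, if any, was chosen) and the rows column (roughly how many rows MySQL expects to examine):
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND created_at > '2007-01-01';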

2. Wasting space with redundant indexes

If you have a multicolumn index, you don't need a separate index which is a leftmost prefix of it. It is easier to explain with an example:
CREATE TABLE table1 (
    col1 INT,
    col2 INT,
    PRIMARY KEY (col1, col2),
    KEY (col1)
);
The index on col1 is redundant, as any search on col1 can use the primary key. It just wastes disk space and might make some queries which change this table a bit slower.

There is one "but", though! See below.

3. Incorrect order of columns in index

The order of columns in a multicolumn index is important. From the MySQL documentation:
MySQL cannot use an index if the columns do not form a leftmost prefix of the index.
Example:
CREATE TABLE table2 (
    id INT PRIMARY KEY,
    col1 INT,
    col2 INT,
    col3 INT,
    KEY (col1, col2)
);
MySQL won't use any index for a query like
SELECT * FROM table2 WHERE col2=123
EXPLAIN SELECT shows this instantly. If you want this query to run faster, either change the order of the columns in the index or add another index.

4. Not using multicolumn indexes when you need to

MySQL can use only one index per table at a time, so if you query by several columns of a table you may need to add a multicolumn index. Example:
CREATE TABLE table3 (
    id INT PRIMARY KEY,
    col1 INT,
    col2 INT,
    col3 INT,
    KEY (col1)
);
Query like
SELECT * FROM table3 WHERE col1=123 AND col2=456
would use the index on col1 to reduce the number of rows to check, but MySQL can do much better if you add a multicolumn index covering both col1 and col2. The effect of adding such an index is very easy to see with EXPLAIN SELECT.
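
A sketch of the fix (the index name is arbitrary):
ALTER TABLE table3 ADD KEY idx_col1_col2 (col1, col2);
EXPLAIN SELECT * FROM table3 WHERE col1=123 AND col2=456;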

Syndicated 2007-08-16 12:22:00 (Updated 2007-08-23 21:02:00) from Ilya Martynov

volatile and threading


Until recently I didn't have much experience writing multi-threaded programs in C++, so when I tried to, I found I was really confused about how multi-threaded programs mix with volatile variables. I did a little research, and the quick summary is: this topic is confusing. It looks like if you put locks around the global variables shared between threads, you don't need to care about the volatile flag - definitely under POSIX threads and most likely with other threading libraries as well. If you don't use locks and rely on atomic operations instead, it seems that you do have to declare shared global variables volatile, but as far as portability is concerned it is a grey area.

Longer story is below:

Suppose we have a piece of code which waits for a certain external condition to happen. The code could look like
bool gEvent = false;

void waitLoop() {
    while (!gEvent) {
        sleep(1);
    }
    ...
}
Let's assume that this is a single threaded program and the external condition we are waiting for is a Unix signal. The signal handler is very simple - it simply sets gEvent to true:
void wakeUp() {
    gEvent = true;
}
The problem with the code above is that the compiler may optimize out the check of the condition inside waitLoop(), incorrectly assuming from local analysis of the code that gEvent never changes. The fix is to declare gEvent with the volatile modifier, which basically tells the compiler that the variable can change at any time and that it is unsafe to perform optimizations based on analysis of the local code alone:
volatile bool gEvent = false;
Let's take another example. The code is the same, but this time it is a multi-threaded program where one thread waits for another. So waitLoop() runs inside one thread and wakeUp() is eventually called from another. Is the code still correct? Probably yes, if we keep the volatile flag and if the operations which read or write the gEvent variable can be considered atomic. The latter assumption seems to be correct for most (all?) platforms.

But what if we cannot treat the operations which read or write the gEvent variable as atomic? For example, it might be an instance of a more complex type, say a class which carries other information besides whether the event has happened or not:
struct EventInfo {
    EventInfo(bool happened = false, const string& source = "")
        : fHappened(happened), fSource(source)
    {}
    bool fHappened;
    string fSource;
};

volatile EventInfo gEventInfo;

void waitLoop() {
    while (!gEventInfo.fHappened) {
        sleep(1);
    }
    const string& eventSource = gEventInfo.fSource;
    ...
}

void wakeUp() {
    gEventInfo = EventInfo(true, "wakeUp");
}
This code is still OK for a single-threaded program where wakeUp() is a signal handler, but it is unsafe for a multi-threaded program where wakeUp() runs in a separate thread, as operations on gEventInfo cannot be treated as atomic anymore.

So how do we fix it? We surround the places where the code reads or writes gEventInfo with locks, to make sure only one thread accesses gEventInfo at a time. I'll use the boost thread library in the example.
boost::mutex gMutex;

void waitLoop() {
    string eventSource;

    for (bool eventHappened = false; !eventHappened; ) {
        {
            boost::mutex::scoped_lock lock(gMutex);
            eventHappened = gEventInfo.fHappened;
            eventSource = gEventInfo.fSource;
        }
        sleep(1);
    }
    ...
}

void wakeUp() {
    boost::mutex::scoped_lock lock(gMutex);

    gEventInfo = EventInfo(true, "wakeUp");
}
Comparing this code with the earlier examples, it looks like we still need to declare the gEventInfo variable as volatile, but it turns out we don't really need to. Quote from Threads Cannot Be Implemented as a Library [PDF]:
In practice, C and C++ implementations that support
Pthreads generally proceed as follows:
  1. Functions such as pthread_mutex_lock() that are guaranteed by the standard to “synchronize memory” include hardware instructions (“memory barriers”) that prevent hardware reordering of memory operations around the call.
  2. To prevent the compiler from moving memory operations around calls to functions such as pthread_mutex_lock(), they are essentially treated as calls to opaque functions, about which the compiler has no information. The compiler effectively assumes that pthread_mutex_lock() may read or write any global variable. Thus a memory reference cannot simply be moved across the call. This approach also ensures that transitive calls, e.g. a call to a function f() which then calls pthread_mutex_lock(), are handled in the same way more or less appropriately, i.e. memory operations are not moved across the call to f() either, whether or not the entire user program is being analyzed at once.
So at least if you are using POSIX threads (boost::threads under Linux uses them), your code is probably safe without volatile, as long as you use locks around the global variables shared between threads. A good question is whether this example code is portable to other platforms; after all, boost::threads supports threading libraries other than POSIX, which may have different rules for mutexes and locks. I haven't researched this yet, as for now I don't really care about other platforms.

Some interesting links on this topic:
  • A Memory model for C++: FAQ - briefly mentions the reasons why the volatile keyword is insufficient to ensure synchronization between threads, and has links to papers for further reading.
  • http://www.artima.com/cppsource/threads_meeting.html - Not much to read there but I love this quote: "Not all the dragons were so easily defeated, unfortunately. Among the issues guaranteed to waste at least 20 minutes of group time with little or nothing to show ... What does volatile mean?" (this in context of multi-threaded programs). If C++ experts cannot agree on this ...
  • Another person gets confused over use of volatile and threads. Interesting discussion on comp.programming.threads.

Syndicated 2007-07-31 14:58:00 (Updated 2007-08-12 00:44:39) from Ilya Martynov
