**Porter/Duff Compositing and Blend Modes**

In the Porter/Duff compositing algebra, images are equipped with an alpha channel that determines on a per-pixel basis whether the image is there or not. When the alpha channel is 1, the image is fully there; when it is 0, the image isn’t there at all; and when it is in between, the image is partially there. In other words, the alpha channel describes the *shape* of the image; it does not describe opacity. The way to think of images with an alpha channel is as irregularly shaped pieces of cardboard, not as colored glass. Consider these two images:

When we combine them, each pixel of the result can be divided into four regions:

One region where only the source is present, one where only the destination is present, one where both are present, and one where neither is present.

By deciding on what happens in each of the four regions, various effects can be generated. For example, if the destination-only region is treated as blank, the source-only region is filled with the source color, and the ‘both’ region is filled with the destination color like this:

The effect is as if the destination image is trimmed to match the source image, and then held up in front of it:

The Porter/Duff operator that does this is called “Dest Atop”.

There are twelve of these operators, each one characterized by its behavior in the three regions: source, destination and both. The ‘neither’ region is always blank. The source and destination regions can either be blank or filled with the source or destination colors respectively.

The formula for the operators is a linear combination of the contents of the four regions, where the weights are the areas of each region:

\(A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both} \cdot [b]\)

Where \([s]\) is either 0 or the color of the source pixel, \([d]\) either 0 or the color of the destination pixel, and \([b]\) is either 0, the color of the source pixel, or the color of the destination pixel. With the alpha channel being interpreted as coverage, the areas are given by these formulas:

\(A_\text{src} = \alpha_\text{s} \cdot (1 - \alpha_\text{d})\)

\(A_\text{dest} = \alpha_\text{d} \cdot (1 - \alpha_\text{s})\)

\(A_\text{both} = \alpha_\text{s} \cdot \alpha_\text{d}\)

The alpha channel of the result is computed in a similar way:

\(A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]\)

where \([\text{as}]\) and \([\text{ad}]\) are either 0 or 1 depending on whether the source and destination regions are present, and where \([\text{ab}]\) is 0 when the ‘both’ region is blank, and 1 otherwise.

Here is a table of all the Porter/Duff operators:

| | \([\text{s}]\) | \([\text{d}]\) | \([\text{b}]\) |
|----------|--------|--------|--------|
| Src | \(s\) | \(0\) | \(s\) |
| Atop | \(0\) | \(d\) | \(s\) |
| Over | \(s\) | \(d\) | \(s\) |
| In | \(0\) | \(0\) | \(s\) |
| Out | \(s\) | \(0\) | \(0\) |
| Dest | \(0\) | \(d\) | \(d\) |
| DestAtop | \(s\) | \(0\) | \(d\) |
| DestOver | \(s\) | \(d\) | \(d\) |
| DestIn | \(0\) | \(0\) | \(d\) |
| DestOut | \(0\) | \(d\) | \(0\) |
| Clear | \(0\) | \(0\) | \(0\) |
| Xor | \(s\) | \(d\) | \(0\) |

And here is how they look:

Despite being referred to as alpha blending, and despite alpha often being used to model opacity, in concept Porter/Duff is not a way to blend the source and destination shapes. It is a way to overlay, combine and trim them as if they were pieces of cardboard. The only places where source and destination pixels are actually *blended* are where the antialiased edges meet.

**Blending**

Photoshop and the Gimp have a concept of layers which are images stacked on top of each other. In Porter/Duff, stacking images on top of each other is done with the “Over” operator, which is also what Photoshop/Gimp use by default to composite layers:

Conceptually, two pieces of cardboard are held up with one in front of the other. Neither shape is trimmed, and in places where both are present, only the top layer is visible.

A layer in these programs also has an associated *Blend Mode* which can be used to modify what happens in places where both are visible. For example, the ‘Color Dodge’ blend mode computes a mix of source and destination according to this formula:

\(\begin{equation*} B(s,d)= \begin{cases} 0 & \text{if \(d = 0\),} \\ 1 & \text{if \(d \ge (1 - s)\),} \\ d / (1 - s) & \text{otherwise} \end{cases} \end{equation*}\)

The result is this:

Unlike with the regular Over operator, in this case there is a substantial chunk of the output where the result is actually a mix of the source and destination.

Layers in Photoshop and Gimp are not tailored to each other (except for layer masks, which we will ignore here), so the compositing of the layer stack is done with the source-only and destination-only region set to source and destination respectively. However, there is nothing in principle stopping us from setting the source-only and destination-only regions to blank, but keeping the blend mode in the ‘both’ region, so that tailoring could be supported alongside blending. For example, we could set the ‘source’ region to blank, the ‘destination’ region to the destination color, and the ‘both’ region to ColorDodge:

Here are the four combinations that involve a ColorDodge blend mode:

In this model the original twelve Porter/Duff operators can be viewed as the results of three simple blend modes:

| Blend mode | \(B(s, d)\) |
|--------|--------|
| Source | \(s\) |
| Dest | \(d\) |
| Zero | \(0\) |

In this generalization of Porter/Duff the blend mode is chosen from a large set of formulas, and each formula gives rise to four new compositing operators characterized by whether the source and destination are blank or contain the corresponding pixel color.

Here is a table of the operators that are generated by various blend modes:

The general formula is still an area weighted average:

\(A_\text{src} \cdot [s] + A_\text{dest} \cdot [d] + A_\text{both}\cdot B(s, d)\)

where \([s]\) and \([d]\) are either 0 or the source and destination colors respectively, but where \(B(s, d)\) is no longer restricted to one of \(0\), \(s\), and \(d\), and can instead be chosen from a large set of formulas.

The output of the alpha channel is the same as before:

\(A_\text{src} \cdot [\text{as}] + A_\text{dest} \cdot [\text{ad}] + A_\text{both} \cdot [\text{ab}]\)

except that [ab] is now determined by the blend mode. For the Zero blend mode there is no coverage in the both region, so [ab] is 0; for most others, there is full coverage, so [ab] is 1.

Syndicated 2013-03-17 18:50:24 (Updated 2013-03-25 13:06:40) from Søren Sandmann Pedersen

**Big-O Misconceptions**

In computer science and sometimes mathematics, big-O notation is used to talk about how quickly a function grows while disregarding multiplicative and additive constants. When classifying algorithms, big-O notation is useful because it lets us abstract away the differences between real computers as just multiplicative and additive constants.

Big-O is not a difficult concept at all, but it seems to be common even for people who should know better to misunderstand some aspects of it. The following is a list of misconceptions that I have seen in the wild.

But first a definition: We write $$f(n) = O(g(n))$$ when \(f(n) \le M g(n)\) for sufficiently large \(n\), for some positive constant \(M\).

**Misconception 1:** “The Equals Sign Means Equality”

The equals sign in $$f(n) = O(g(n))$$ is a widespread travesty. If you take it at face value, you can deduce that since \(5 n\) and \(3 n\) are both equal to \(O(n)\), then \(3 n\) must be equal to \(5 n\) and so \(3 = 5\).

The expression \(f(n) = O(g(n))\) doesn’t type check. The left-hand side is a function; the right-hand side is a … what, exactly? There is no help to be found in the definition. It just says “we write” without concerning itself with the fact that what “we write” is total nonsense.

The way to interpret the right-hand side is as a *set* of functions: $$O(f) = \{ g \mid g(n) \le M f(n) \text{ for some \(M > 0\) for large \(n\)}\}.$$ With this definition, the world makes sense again: If \(f(n) = 3 n\) and \(g(n) = 5 n\), then \(f \in O(n)\) and \(g \in O(n)\), but there is no equality involved so we can’t make bogus deductions like \(3=5\). We can however make the correct observation that \(O(n) \subseteq O(n \log n)\subseteq O(n^2) \subseteq O(n^3)\), something that would be difficult to express with the equals sign.

**Misconception 2:** “Informally, Big-O Means ‘Approximately Equal’”

If an algorithm takes \(5 n^2\) seconds to complete, that algorithm is \(O(n^2)\) because for the constant \(M=7\) and sufficiently large \(n\), \(5 n^2 \le 7 n^2\). But an algorithm that runs in constant time, say 3 seconds, is also \(O(n^2)\) because for sufficiently large \(n\), \(3 \le n^2\).

So informally, big-O means *approximately less than or equal*, not *approximately equal*.

If someone says “Topological Sort, like other sorting algorithms, is \(O(n \log n)\)”, then that is *technically* correct, but severely misleading, because Topological Sort is also \(O(n)\), which is a subset of \(O(n \log n)\). Chances are whoever said it meant something false.

If someone says “In the worst case, any comparison based sorting algorithm must make \(O(n \log n)\) comparisons” that is *not* a correct statement. Translated into English it becomes:

“In the worst case, any comparison based sorting algorithm must make fewer than or equal to \(M n \log (n)\) comparisons”

which is not true: You can easily come up with a comparison based sorting algorithm that makes more comparisons in the worst case.

To be precise about these things we have other types of notation at our disposal. Informally:

| Notation | Informal meaning |
|--------|--------|
| \(O()\) | Less than or equal, disregarding constants |
| \(\Omega()\) | Greater than or equal, disregarding constants |
| \(o()\) | Strictly less than, disregarding constants |
| \(\Theta()\) | Equal to, disregarding constants |

and some more. The correct statement about lower bounds is this: “In the worst case, any comparison based sorting algorithm must make \(\Omega(n \log n)\) comparisons.” In English that becomes:

“In the worst case, any comparison based sorting algorithm must make at least \(M n \log (n)\) comparisons”

which is true. And a correct, non-misleading statement about Topological Sort is that it is \(\Theta(n)\), because it has a lower bound of \(\Omega(n)\) and an upper bound of \(O(n)\).

**Misconception 3:** “Big-O is a Statement About Time”

Big-O is used for making statements about functions. The functions can measure time or space or cache misses or rabbits on an island or anything or nothing. Big-O notation doesn’t care.

In fact, when used for algorithms, big-O is almost never about time. It is about primitive operations.

When someone says that the time complexity of MergeSort is \(O(n \log n)\), they usually mean that the number of comparisons that MergeSort makes is \(O(n \log n)\). That in itself doesn’t tell us what the *time* complexity of any particular MergeSort might be because that would depend how much time it takes to make a comparison. In other words, the \(O(n \log n)\) refers to *comparisons* as the primitive operation.

The important point here is that when big-O is applied to algorithms, there is always an underlying model of computation. The claim that the *time* complexity of MergeSort is \(O(n \log n)\) is implicitly referencing a model of computation where a comparison takes constant time and everything else is free.

Which is fine as far as it goes. It lets us compare MergeSort to other comparison based sorts, such as QuickSort or ShellSort or BubbleSort, and in many real situations, comparing two sort keys really does take constant time.

However, it doesn’t allow us to compare MergeSort to RadixSort because RadixSort is not comparison based. It simply doesn’t ever make a comparison between two keys, so its time complexity in the comparison model is 0. The statement that RadixSort is \(O(n)\) implicitly references a model in which the keys can be lexicographically picked apart in constant time. Which is also fine, because in many real situations, you actually can do that.

To compare RadixSort to MergeSort, we must first define a shared model of computation. If we are sorting strings that are \(k\) bytes long, we might take “read a byte” as a primitive operation that takes constant time with everything else being free.

In this model, MergeSort makes \(O(n \log n)\) string comparisons each of which makes \(O(k)\) byte comparisons, so the time complexity is \(O(k n \log n)\). One common implementation of RadixSort will make \(k\) passes over the \(n\) strings with each pass reading one byte, and so has time complexity \(O(n k) \).

**Misconception 4:** Big-O Is About Worst Case

Big-O is often used to make statements about functions that measure the worst case behavior of an algorithm, but big-O notation doesn’t imply anything of the sort.

If someone is talking about the randomized QuickSort and says that it is \(O(n \log n)\), they presumably mean that its *expected running time* is \(O(n \log n)\). If they say that QuickSort is \(O(n^2)\) they are probably talking about its worst case complexity. Both statements can be considered true depending on what type of running time the functions involved are measuring.

Syndicated 2012-10-15 09:16:39 (Updated 2012-10-19 08:11:32) from Søren Sandmann Pedersen

**Sysprof 1.2.0**

A new stable release of Sysprof is now available. Download version 1.2.0.

Syndicated 2012-09-08 21:32:30 (Updated 2012-09-15 00:00:28) from Søren Sandmann Pedersen

**Over is not Translucency**

The Porter/Duff Over operator, also known as the “Normal” blend mode in Photoshop, computes the amount of light that is reflected when a pixel partially covers another:

The fraction of bg that is covered is denoted alpha. This operator is the correct one to use when the foreground image is an opaque mask that partially covers the background:

A photon that hits this image will be reflected back to your eyes by either the foreground or the background, but not both. For each foreground pixel, the alpha value tells us the probability of each:

This is the definition of the Porter/Duff Over operator for non-premultiplied pixels.

But if alpha is interpreted as *translucency,* then the Over operator is not the correct one to use. The Over operator will act as if each pixel is partially covering the background:

Which is not how translucency works. A translucent material reflects some light and lets other light through. The light that is let through is reflected by the background and *interacts with the foreground again*.

Let’s look at this in more detail. Please follow along in the diagram to the right. First with probability *a*, the photon is reflected back towards the viewer:

With probability (1 – *a*), it passes through the foreground, hits the background, and is reflected back out. The photon now hits the *backside* of the foreground pixel. With probability (1 – *a*), the foreground pixel lets the photon back out to the viewer. The result so far:

But we are not done yet, because with probability *a* the foreground pixel reflects the photon once again back towards the background pixel. There it will be reflected, hit the backside of the foreground pixel again, which lets it through to our eyes with probability (1 – *a*). We get another term where the final (1 – *a*) is replaced with *a* * fg * bg * (1 – *a*):

And so on. In each round, we gain another term which is identical to the previous one, except that it has an additional *a* * fg * bg factor:

Or more compactly:

Because we are dealing with pixels, *a*, fg, and bg are all less than 1, so the sum is a geometric series:

Putting them together, we get:

I have sidestepped the issue of premultiplication by assuming that background alpha is 1. The calculations with premultipled colors are similar, and for the color components, the result is simply:

The issue of destination alpha is more complicated. With the Over operator, both foreground and background are opaque masks, so the light that survives both has the same color as the input light. With translucency, the transmitted light has a different color, which means the resulting alpha value must in principle be different for each color component. But that’s not possible for ARGB pixels. A similar argument to the above shows that the resulting alpha value would be:

where *b* is the background alpha. The problem is the dependency on fg and bg. If we simply assume for the purposes of the alpha computation that fg and bg are equal to a and b, we get this:

which is equal to

I.e., exactly the same computation as the one for the color channels. So we can define the *Translucency Operator* as this:

for all four channels.

Here is an example of what the operator looks like. The image below is what you will get if you use the Over operator to implement a selection rectangle. Mouse over to see what it would look like if you used the Translucency operator.

Both were computed in linear RGB. Typical implementations will often compute the Over operator in sRGB, so that’s what you see if you actually select some icons in Nautilus. If you want to compare all three, open these in tabs:

And for good measure, even though it makes zero sense to do this,

Syndicated 2011-09-26 17:28:20 (Updated 2011-10-09 20:12:55) from Søren Sandmann Pedersen

**Gamma Correction vs Premultiplied Pixels**

Pixels with 8 bits per channel are normally sRGB encoded because that allocates more bits to darker colors where human vision is the most sensitive. (Actually, it’s really more of a historical accident, but sRGB nevertheless remains useful for this reason). The relationship between sRGB and linear RGB is that you get an sRGB pixel by raising each component of a linear pixel to the power of 1/2.2.

A lot of graphics software does alpha blending directly on these sRGB pixels using alpha values that are linearly coded (ie., an alpha value of 0 means no coverage, 0.5 means half coverage, and 1 means full coverage). Because alpha blending is best done with premultiplied pixels, such systems store pixels in this format:

[ alpha, alpha * red_s, alpha * green_s, alpha * blue_s ]

where alpha is linearly coded, and (`red_s`, `green_s`, `blue_s`) are sRGB coded. As long as you are happy with blending in sRGB, this works well. Also, if you simply discard the alpha channel of such pixels and display them directly on a monitor, it will look as if the pixels were alpha blended (in the sRGB space) on top of a black background, which is the desired result.

But what if you want to blend in linear RGB? If you use the format above, some expensive conversions will be required. To convert to premultiplied linear, you have to first divide by alpha, then raise each color to 2.2, then multiply by alpha. To convert back, you must divide by alpha, raise to 1/2.2, then multiply by alpha.

The conversions can be avoided if you store the pixels linearly, ie., keeping the premultiplication, but coding red, green, and blue linearly instead of as sRGB. This makes blending fast, but the downside is that you need deeper pixels. With only 8 bits per channel, the linear coding loses too much precision in darker tones. Another problem is that to display these pixels, you will either have to convert them to sRGB, or if the video card can scan them out directly, you have to make sure that the gamma ramp is set to compensate for the fact that the monitor expects sRGB pixels.

Can we get the best of both worlds? Yes. The format to use is this:

[ alpha, alpha_s * red_s, alpha_s * green_s, alpha_s * blue_s ]

That is, the alpha channel is stored linearly, and the color channels are stored in sRGB, premultiplied with the alpha value raised to 1/2.2. I.e., the red component is now

(red * alpha)^(1/2.2),

where before it was

alpha * red^(1/2.2).

It is sufficient to use 8 bits per channel with this format because of the sRGB encoding. Discarding the alpha channel and displaying the pixels on a monitor will produce pixels that are alpha blended (in linear space) against black, as desired.

You can convert to linear RGB simply by raising the R, G, and B components to 2.2, and back by raising to 1/2.2. Or, if you feel like cheating, use an exponent of 2 so that the conversions become a multiplication and a square root respectively.

This is also the pixel format to use with texture samplers that implement the sRGB OpenGL extensions (textures and framebuffers). These extensions say precisely that the R, G, and B components are raised to 2.2 before texture filtering, and raised to 1/2.2 after the final raster operation.

Syndicated 2011-08-10 14:25:55 (Updated 2011-09-04 01:42:00) from Søren Sandmann Pedersen

**Sysprof 1.1.8**

A new version 1.1.8 of Sysprof is out.

This is a release candidate for 1.2.0 and contains mainly bug fixes.

Syndicated 2011-07-15 17:48:39 (Updated 2011-09-04 01:42:00) from Søren Sandmann Pedersen

**Fast Multiplication of Normalized 16 bit Numbers with SSE2**

If you are compositing pixels with 16 bits per component, you often need this computation:

```c
uint16_t a, b, r;

r = (a * b + 0x7fff) / 65535;
```

There is a well-known way to do this quickly without a division:

```c
uint32_t t;

t = a * b + 0x8000;
r = (t + (t >> 16)) >> 16;
```

Since we are compositing pixels we want to do this with SSE2 instructions, but because the code above uses 32 bit arithmetic, we can only do four operations at a time, even though SSE registers have room for eight 16 bit values. Here is a direct translation into SSE2:

```
a = punpcklwd (a, 0);
b = punpcklwd (b, 0);
a = pmulld (a, b);
a = paddd (a, 0x8000);
b = psrld (a, 16);
a = paddd (a, b);
a = psrld (a, 16);
a = packusdw (a, 0);
```

But there is another way that better matches SSE2:

```c
uint16_t lo, hi, t, r;

hi = (a * b) >> 16;
lo = (a * b) & 0xffff;
t = lo >> 15;
hi += t;
t = hi ^ 0x7fff;
if ((int16_t)lo > (int16_t)t)
    lo = 0xffff;
else
    lo = 0x0000;
r = hi - lo;
```

This version is better because it avoids the unpacking to 32 bits. Here is the translation into SSE2:

```
t = pmulhuw (a, b);
a = pmullw (a, b);
b = psrlw (a, 15);
t = paddw (t, b);
b = pxor (t, 0x7fff);
a = pcmpgtw (a, b);
a = psubw (t, a);
```

This is not only shorter, it also makes use of the full width of the SSE registers, computing eight results at a time.

Unfortunately SSE2 doesn’t have 8 bit variants of pmulhuw, pmullw, and psrlw, so we can’t use this trick for the more common case where pixels have 8 bits per component.

Exercise: Why does the second version work?

Syndicated 2011-07-03 12:11:20 (Updated 2011-09-04 01:42:00) from Søren Sandmann Pedersen
