Anyway, there's a fascinating article in there on low-lock memory techniques -- that is, how to write multithreaded apps without resorting to heavyweight locks. Without getting into exhaustive detail, the article focuses on exactly why low-lock techniques are difficult, and examines why you have to deeply understand the memory model you're working in to be able to use these techniques safely. It contrasts several memory models relevant to .NET programming (the ECMA model, the x86 model and the .NET 2.0 model) and what you can and can't do in each, and then examines several of the major techniques one can legitimately use to reduce locking.
(Précis of the main point: on a modern multiprocessor, you have to assume that your threads are running on separate cores, each of which has its own memory cache. Each of these cores is permitted, within certain parameters, to rearrange the reads and writes of that memory. This means that many apparently-safe code paths can wind up failing unpredictably, due to processors doing things in unexpected order. So safe low-lock programming requires understanding the legal rearrangements in detail, and analyzing the problem quite carefully in that light.)
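To make the reordering hazard concrete, here's a minimal sketch in Java (whose memory model raises the same issues as the .NET models the article discusses; the class and field names are my own illustration, not from the article). The classic trap is publishing data through a plain boolean flag: without a `volatile` (or lock), the processor or compiler may reorder the writes, so a reader can see the flag set but still read stale data.

```java
// Illustrative sketch of safe publication via a volatile flag.
// Names (Publisher, data, ready) are hypothetical, for illustration only.
class Publisher {
    private int data = 0;
    // volatile forbids the dangerous reordering: the write to 'data'
    // cannot be moved past the volatile write to 'ready', and a reader
    // that sees ready == true is guaranteed to see data == 42.
    private volatile boolean ready = false;

    void write() {
        data = 42;     // (1) plain write
        ready = true;  // (2) volatile write acts as a release barrier
    }

    Integer read() {
        if (ready) {        // volatile read acts as an acquire barrier
            return data;    // sees 42, never a stale 0
        }
        return null;        // not published yet
    }
}
```

If `ready` were a plain (non-volatile) field, both writes and both reads could legally be reordered, and `read()` could return 0 even after observing `ready == true` -- exactly the kind of "apparently-safe" path that fails unpredictably.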
Very neat article, and worthwhile reading for all hardcore programming geeks. While the article is understandably focused on the .NET world, a good 80% of the content should be applicable to any multiprocessor environment -- it's mainly concerned with explaining the gotchas you have to watch for, and how to think about the problem...