Steam engine time for memristors?
jducoeur
Serious techies, and especially long-thinking programmers, should take a read through this article in Ars Technica. It's well worth the five minutes, but the tl;dr is that HP appears to seriously believe that memristor-based machines are coming in the next five years or so, and they are likely to *seriously* upset everyone's assumptions about how computers tick.

The key points are:
  • Memristors are a reasonably fast form of memory, so they can be used as RAM

  • Memristors are nonvolatile, so they can be used as flash

  • Memristors are incredibly dense -- more so than current disk storage
Now, put those together and think about it. If you have a form of RAM that is denser than what we think of as "storage", why do we want "storage"? The upshot of the article is that, if this pans out, it totally upsets our distinction of "memory" vs. "storage", which is deeply baked into almost every computer, even into our software architectures.

Mind, the thing that I *haven't* yet seen is how much this costs. Being able to store a terabyte on a chip isn't very useful if it costs a penny per bit. But the computer industry has a talent for driving prices down, so it's not a bad bet that, even if this stuff is prohibitively expensive now, it is likely to become less so over time.

This story has been building for a while, but the article makes a good point that implications for the software industry are dramatic. With this sort of tech, every computer becomes potentially instant-on. We have to think about reliability in very different ways if the computer is potentially never really turned "off": every system has to become much more rigorous about cleaning up after itself. The way we structure data changes dramatically when essentially everything is in "live" RAM, with the result that many processes potentially get much, much faster. On the tricky side, if your server truly never turns off, then evolving a cloud-based package becomes a truly entertaining problem unto itself: it would become even more important to develop protocols for managing an always-on system.

Of course, since they are leading the charge here, HP has a shot at becoming a true leader in the tech arena again, so they are talking it up hard. They are not mere disinterested researchers here: there's a good deal of sales pitch. Still, if they can make the tech live up to its potential, they're probably correct that it can upend an awful lot of assumptions.

There's a lot that could go wrong here, but I wantWantWANT this tech to pan out: the potential for making software that is more beautiful, elegant *and* fast is awfully neat. We'll see what happens.

Breaking some of the von Neumann computer constructs doesn't bug a guy programming in Scala. :-)

In fact, it's downright lovely from a Querki POV. The way I've built things, Querki is mainly an in-memory "database"; the need to offload much of the bulk onto disk is mostly an annoyance, and the system is designed to interact with the disk DB as little as possible.

So basically, this fits beautifully with my intended evolutionary path. Right now, Querki's backing store is MySQL, but that's mostly because it is convenient for getting things going. In the medium term, I plan to replace most of that with essentially journaling event queues, possibly implicit ones. (Akka has some recent constructs that mostly encapsulate the notion of an Actor's state as the sum of a queue of events.) And being able to keep that queue in "memory", so that it could be replayed really quickly when something goes wrong, is just about the best architecture I can think of -- fast *and* robust.
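
(To make that concrete, here's a minimal sketch of the idea, using Akka Persistence's classic PersistentActor; the event and state types are made up for illustration, not Querki's real ones.)

    import akka.persistence.{PersistentActor, SnapshotOffer}

    // Illustrative stand-ins -- not Querki's actual types.
    case class PropertyChanged(name: String, value: String)
    case class SpaceState(props: Map[String, String] = Map.empty) {
      def updated(ev: PropertyChanged): SpaceState = copy(props = props + (ev.name -> ev.value))
    }

    class SpaceActor(spaceId: String) extends PersistentActor {
      override def persistenceId: String = s"space-$spaceId"

      private var state = SpaceState()

      // Live path: journal each event, then fold it into in-memory state.
      override def receiveCommand: Receive = {
        case ev: PropertyChanged =>
          persist(ev) { journaled => state = state.updated(journaled) }
        case "snapshot" =>
          saveSnapshot(state) // periodic snapshots keep recovery fast
      }

      // Recovery path: the Actor's state really is the sum of its journaled events.
      override def receiveRecover: Receive = {
        case ev: PropertyChanged             => state = state.updated(ev)
        case SnapshotOffer(_, s: SpaceState) => state = s
      }
    }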

Not to mention that, if they get to the point where memory has a density and cost comparable to that of storage today, the motivation to limit the size of a Querki Space pretty much goes away. That limit only exists because of the cost of RAM as it is. So this would potentially remove the need for a hard engineering project later: rewriting Querki's guts for enterprise-level scalability.

Hence my desire for this tech. If it were available at reasonable prices today, I would be all over it, and probably building Querki around it; as it is, I need to keep a close eye on it, because it might actually affect my long-term plans...

If this pans out, it will be lovely -- but I've seen this particular transformation preached at least twice before (with different techs), so I'm not holding my breath. Kinda like the "more efficient than PV!" solar panel techs that keep being 5-10 years out.

(But progress does happen. We have solar panels on our house! :)

Yep. I'm mainly taking hope from the fact that HP keeps beating this drum, and month by month their story is getting more concrete. Over the past two years, it's gone from "Memristors are the future! Really! Trust us!" to "Fine -- we're going to actually *build* a freaking next-gen optical/memristor computer, and Show You All." I have a certain faith that you have to be willing to recognize that you're indulging in Mad Science to really overturn the apple cart...

Bah. LJ hasn't been accepting my comments today. Wonder if this time's the charm.

First: the "what's really a memristor" issue isn't as pedantic as it sounds. It directly affects how widely HP's existing patents restrict other possible players.

Second: enterprise scaling also involves redundancy and physically distributed systems, which memristor-based systems don't really help with. In fact, even for perfectly normal scaling techniques like multi-core processors, it kind of takes us in the opposite direction, given how poorly programming techniques have adapted to non-homogeneous memory architectures already.

Third: memory fragmentation is a real thing for long-lived processes. Despite all the progress in allocator techniques, those of us in embedded systems still avoid dynamic allocation in many cases, even when we're running a real OS like Linux.

Yep -- that's a good example of where truly persistent memory forces a serious rethink of common software techniques. Memory fragmentation is a program-level case where we sort of casually count on programs "ending" on a regular basis, as a way of cleaning up the messes we've left behind.

It's not a minor issue, but I doubt it's an insoluble one. If we do wind up with long-lived processes like this, we're going to need to learn what that means in terms of memory space, and how to keep fragmentation under reasonable control at the architectural level.

As for enterprise scaling, I think of that as largely orthogonal to the memristor thing. I don't actually agree that programming techniques have adapted poorly (at least at the app level, where I play) -- it's just that too few programmers have adjusted to the new ways of doing things. That'll shake out, but it takes time...

This reminds me of my Mac G3 laptop: it had enough space and was configured to be able to run the OS from a RAMdisk, and had hardware to auto-backup the RAMdisk on shutdown or crash.

Which meant that when expecting to be tight on battery life, I'd boot off the RAMdisk, unmount the hard drive, and my storage was a specially allocated slice of RAM.

Not quite the same thing, as the sections of RAM were not interchangeable at run time, but similar. (This was also back in the OS 8 era, so it's been a while.)

---


A first-generation system shouldn't be that hard: allocate persistent storage using whatever filesystem scheme, and then have each program allocate RAM on execution. This would be inefficient, since the RAM allocations would rarely be fully used, but it means little needs to change in the rest of the OS or filesystem.

Long term, there are going to need to be significant changes to filesystems. The current redundant tables would make memory allocation far from atomic. You would also need to set up permissions and fences to prevent buffer overflows from overwriting non-volatile memory segments, and to prevent unintentional and hostile in-memory code changes.

All of these are, however, problems with existing, known solutions.
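
(For what it's worth, you can already approximate that first-generation scheme on the JVM by mapping a persistent region through the ordinary filesystem interface. A rough Scala sketch, with the path and size as arbitrary placeholders:)

    import java.nio.channels.FileChannel
    import java.nio.file.{Paths, StandardOpenOption}

    // First-generation approach: keep the familiar filesystem interface, but let a
    // program map a persistent region straight into its address space.
    object PersistentRegion {
      def open(path: String, sizeBytes: Long) = {
        val channel = FileChannel.open(
          Paths.get(path),
          StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)
        // The mapped buffer behaves like ordinary memory, but its contents survive
        // process exit -- a crude stand-in for memristor-backed RAM.
        channel.map(FileChannel.MapMode.READ_WRITE, 0, sizeBytes)
      }
    }

    // Usage: writes land in the mapped region and are visible on the next run.
    // val region = PersistentRegion.open("/tmp/space.mem", 1 << 20)
    // region.put(0, 42.toByte)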

---

I'm not sure memory fragmentation really is a big concern. With programs using managed memory (JVM, Mono/CLR), memory can already be moved around on the fly. That leaves unmanaged code, but with isolated memory addressing there is already a single layer of redirection between any memory access and its physical address. Add in another layer, and defragmentation becomes simple.
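
(A toy sketch of that extra layer of indirection, in Scala since that's the language under discussion: callers hold stable handles, a table maps handles to offsets in one backing region, and a compaction pass slides live blocks together without invalidating anything. All names are made up.)

    import scala.collection.mutable

    class CompactingHeap(capacity: Int) {
      private val backing = new Array[Byte](capacity)
      private var top = 0 // next free offset
      private val table = mutable.Map[Int, (Int, Int)]() // handle -> (offset, length)
      private var nextHandle = 0

      def allocate(data: Array[Byte]): Int = {
        require(top + data.length <= capacity, "out of space")
        System.arraycopy(data, 0, backing, top, data.length)
        val h = nextHandle; nextHandle += 1
        table(h) = (top, data.length)
        top += data.length
        h
      }

      def free(handle: Int): Unit = table.remove(handle) // leaves a hole behind

      def read(handle: Int): Array[Byte] = {
        val (off, len) = table(handle)
        backing.slice(off, off + len)
      }

      // Slide every live block down toward offset 0, updating the table as we go.
      // Handles held by callers stay valid because they only ever index the table.
      def compact(): Unit = {
        var newTop = 0
        for ((h, (off, len)) <- table.toSeq.sortBy(_._2._1)) {
          System.arraycopy(backing, off, backing, newTop, len)
          table(h) = (newTop, len)
          newTop += len
        }
        top = newTop
      }
    }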

The thing is, with a system like this, I probably wouldn't even want to think in "file system-ish" terms -- I suspect that, while I would have somewhat storage-like parts of memory, I'd want to be thinking in terms of data structures instead of file systems. Instead of files, I would want backup memory.

For example, Querki is mainly built out of Actors, little bundles of state. The most natural way to manage the "storage" for an Actor is with an event journal: that is, the Actor's state is the sum of all of the events it has received. If the Actor crashes, you replay the journal to rebuild it. (Usually with periodic snapshotting and compression to keep the journal from growing unreasonably, but conceptually it's just an event log.)

So the way I'd *like* to manage this would be that each Actor has a matching History Actor, which is deliberately simplistic to make it more reliable: it just maintains an ordered list of events. (And probably an ordered list of snapshots.) Doing that in storage requires thinking about filesystems or databases, but in the model they're talking about, it seems like it would simply be a data structure in persistent "memory".
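
(A bare-bones sketch of what such a History Actor might look like: just a plain Akka Actor whose event list would, in this world, happen to live in persistent memory. The message names are invented for illustration.)

    import akka.actor.{Actor, Props}

    object HistoryActor {
      case class RecordEvent(event: Any)
      case class RecordSnapshot(snapshot: Any)
      case object Replay
      case class History(snapshot: Option[Any], events: Vector[Any])

      def props: Props = Props(new HistoryActor)
    }

    class HistoryActor extends Actor {
      import HistoryActor._

      // In a memristor world, these structures would simply live in nonvolatile RAM.
      private var events = Vector.empty[Any]
      private var latestSnapshot: Option[Any] = None

      def receive: Receive = {
        case RecordEvent(ev) => events :+= ev
        case RecordSnapshot(s) =>
          latestSnapshot = Some(s)
          events = Vector.empty // events before the snapshot are compacted away
        case Replay => sender() ! History(latestSnapshot, events)
      }
    }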

Of course, reality isn't likely to be quite that simple: if nothing else, backup and restore would still be important issues to address. But it illustrates how you might wind up thinking about these sorts of things *very* differently. (I'm actually not at all worried about atomicity: the Actor paradigm is very well-suited to this sort of thing.)

Again, the main problem I see is economics. We'll see how it plays out, but I would be pleasantly surprised if the price of memristors comes within even a couple of orders of magnitude of magnetic storage any time soon. Until it does, it'll be challenging to build modern data-intensive servers out of it...

Maybe, but I'm not certain that you will want to reduce everything to simple journaling.

Self-modifying code -- or, even more so, code whose runtime image is modified by third-party programs -- is necessarily going to need to store the original image in persistent storage, as the runtime modifications can vary from instance to instance. Journaling might help reduce the number of distinct instances of the code in memory, but it won't eliminate the need for the storage/active division.

I think we may be talking past each other. I consider self-modifying code to be *wildly* evil, and I don't go anywhere near it.

What I'm talking about here is classic Akka Persistence. In the Actor paradigm that Akka exemplifies, the world is intensely object-oriented: all state is broken into actors, each of which completely owns and encapsulates that bit of state. Actors communicate only via messages, and pretty much the only way the world state ever changes is because of these messages: there is absolutely none of the usual backdoor access to memory that ordinary OO uses routinely. Messages are pretty much the be-all and end-all of this architecture.

In this model, journaling those messages is by *far* the most sensible approach to persistence -- indeed, it is the well-established best practice, to the point where it is getting baked into the higher levels of Akka. None of that has to do with altering the code -- it's all about how you represent the data...
