Archive for the 'Data Structures' Category

Breadth-First Traversal with Alternating Directions at Each Level

Yesterday I discussed the interview coding problem I blew: Code, in Java, a breadth-first traversal of a binary tree, alternating directions at each level.

[Figure: TestTree]

The key observation is obvious: At each level you process nodes in the reverse order of the previous level.  (Duh! That’s the problem statement.)  To turn it around: at each level you must preserve, for later processing, the child nodes in the reverse order of the nodes you’re processing at this level.  A data structure that lets you pull off nodes in the reverse order in which you saved them is a stack, one of the simplest data structures of all. Continue reading ‘Breadth-First Traversal with Alternating Directions at Each Level’
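As a minimal sketch of the two-stack idea (my own illustration, with a made-up Node class; the post’s actual code may differ):

import java.util.ArrayDeque;
import java.util.Deque;

class Node {
    int value;
    Node left, right;
    Node(int value, Node left, Node right) { this.value = value; this.left = left; this.right = right; }
}

class ZigZagTraversal {
    // Pop nodes off the current level's stack and push their children onto
    // the next level's stack, flipping the push order each level so the
    // next level comes off the stack in the opposite direction.
    static void traverse(Node root) {
        if (root == null) return;
        Deque<Node> current = new ArrayDeque<>();  // stack for this level
        Deque<Node> next = new ArrayDeque<>();     // stack for the next level
        boolean leftToRight = true;
        current.push(root);
        while (!current.isEmpty()) {
            Node n = current.pop();
            System.out.print(n.value + " ");
            if (leftToRight) {
                if (n.left != null) next.push(n.left);
                if (n.right != null) next.push(n.right);
            } else {
                if (n.right != null) next.push(n.right);
                if (n.left != null) next.push(n.left);
            }
            if (current.isEmpty()) {               // level finished: swap stacks
                Deque<Node> tmp = current; current = next; next = tmp;
                leftToRight = !leftToRight;
            }
        }
    }
}

Each level drains one stack while filling the other; swapping the stacks and flipping the push order is all it takes to reverse direction.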

Comment on: A Memory-Efficient Queue for Primitive Types (in Java)

Have you ever designed and coded something and got it working and felt really satisfied, and then went to bed and woke up the next morning with the sparkling clear thought in your mind that all of that work was unneeded, that there was a much better and much simpler way to do the exact same thing?   Did that happen after you had blogged about your cool piece of work?  Well, it just happened to me.

In my previous post I carefully justified a queue that directly contained primitive types, in order to have a queue that could hold hundreds of millions of elements.  That’s all right, and the discussion of memory consumption and when to optimize it is still good, but …

It is now as clear as can be to me that the right way to handle this queue is to write my tuples-of-primitive-types to a file, not keep them in memory at all, and to run the file as a queue.  The code is very simple – here it is: externalqueue.zip.  Basically, I open a RandomAccessFile and keep track of head and tail file pointers.  The queue class is parameterized by a small interface that the caller passes in to the constructor, through which the caller takes responsibility for serializing and deserializing his element objects via a DataInput or DataOutput.  (This isn’t official Java serialization, where objects are tracked.  This queue is meant largely for “struct-like” things.)
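To give the flavor of the scheme, here is a minimal sketch with hypothetical names (the real code is in externalqueue.zip and differs in details):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.RandomAccessFile;

// The caller-supplied interface: how to write and read one element.
interface ElementSerializer<E> {
    void write(DataOutput out, E element) throws IOException;
    E read(DataInput in) throws IOException;
}

class ExternalQueue<E> {
    private final RandomAccessFile file;  // implements both DataInput and DataOutput
    private final ElementSerializer<E> serializer;
    private long head = 0;  // file offset of the next element to dequeue
    private long tail = 0;  // file offset where the next element is appended

    ExternalQueue(String fileName, ElementSerializer<E> serializer) throws IOException {
        this.file = new RandomAccessFile(fileName, "rw");
        this.serializer = serializer;
    }

    void add(E element) throws IOException {
        file.seek(tail);
        serializer.write(file, element);
        tail = file.getFilePointer();
    }

    E remove() throws IOException {
        if (head == tail) throw new IllegalStateException("queue is empty");
        file.seek(head);
        E element = serializer.read(file);
        head = file.getFilePointer();
        if (head == tail) {      // queue drained: truncate the file and reset
            file.setLength(0);
            head = tail = 0;
        }
        return element;
    }
}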

This simple class is enough for my needs, but it has a limitation that might need to be fixed, depending on the usage.  The external file grows to contain all the elements that have ever been written to the queue; it is only truncated when the queue becomes empty.  For my application (the retrograde analysis of the 15-puzzle) the queue never becomes empty until the algorithm terminates, but this is fine, since all the enqueued elements will still fit on disk (it should take no more than 5–7 GB).  In other applications it might be desirable to enhance this queue to start using alternate files once a file reaches a certain size threshold.  Then you can delete a file once the head pointer (the next element to be dequeued) advances past its end and you rotate to the next one.

A Memory-Efficient Queue for Primitive Types (in Java)

Data structures in Java can take more memory than equivalent structures in C++ or C#, for various reasons, including general per-object overhead, and the dichotomy between primitive types and objects. For many applications that doesn’t really matter, but for some the excess memory usage in Java is critical and can mean the difference between success and failure.

I’m studying combinatorial search techniques now, using (for some reason) Java. At this point I’m using retrograde analysis to compute pattern databases for the 15-puzzle. (Retrograde analysis = searching backward from the goal state.) The easiest algorithm for this uses a breadth-first search of all positions from the goal.

Breadth-first search is done by enqueuing states-next-to-search onto a queue and processing them one by one off the queue. For a lot of problems the queue gets prohibitively large and can’t be used, which is why IDA* and other algorithms that go depth-first have been developed.

For building pattern databases for the 15-puzzle the queue can get quite long, but should still be tractable if care is taken. For Java, in particular, you can’t just blindly use the standard collection classes.

The two main Java classes that implement the interface Queue<E> are ArrayDeque, which is based on a Java array, and LinkedList, which is a typical linked list with external links.

Suppose, for the sake of argument, that we’re working with a 32-bit OS, and with queues of 100 million elements.

Well, if you’re talking 100 million distinct objects, you’re already in trouble. Each object takes 16 bytes, minimum, and your 100M objects will need at least 1.6 GB of memory, more heap than you can get (with the Sun Hotspot VM). But maybe you’re talking primitive types, like long. (A 15-puzzle state will fit into a long – a 64-bit word: 16 tile positions of 4 bits each.) 100M longs will fit in 800 MB of memory, if placed in an array, so your queue could work.
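For concreteness, here is one way that packing might look (my illustration, not code from the post); tile i’s board position occupies bits 4i through 4i+3 of the long:

class PuzzleState {
    // tilePositions[tile] is that tile's board position, 0..15.
    static long pack(int[] tilePositions) {
        long state = 0L;
        for (int tile = 0; tile < 16; tile++) {
            state |= ((long) (tilePositions[tile] & 0xF)) << (4 * tile);
        }
        return state;
    }

    // Extract one tile's position back out of the packed word.
    static int positionOf(long state, int tile) {
        return (int) ((state >>> (4 * tile)) & 0xF);
    }
}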

Except for a few things. First, both ArrayDeque and LinkedList use as their representation type the type that they’re instantiated with. Generics in Java can’t be instantiated with primitive types, so you need to use, e.g., Long instead of long. This means boxing all of the longs you want to put in the queue, which means you’re back to separate objects of 16 bytes instead of array slots of 8 bytes. (And of course, things are worse, proportionally, if you want a queue of int or byte. Why would you want a queue of 100M bytes, given that there are only 256 unique values for a byte? Answer: if you really want a queue of tuples of long and byte, and you’re going to run it as two parallel queues of primitive values rather than one queue of a reference type. Which is the case for the breadth-first search in the 15-puzzle.)

In addition to your boxed elements, the LinkedList allocates a 16-byte link object for every element in the queue. So each element in the queue actually takes 32 bytes and, furthermore, is a separate object to manage.

That point is also important: in an experiment I ran with a 1 GB heap, the test program started thrashing in the garbage collector and made no further progress after only 33,740,000 Longs were allocated and put into an ArrayDeque. (That’s only 540 MB; there should have been plenty of space left.)

Anyway, to make a long story short, I implemented a class, CompactQueue, that works with either primitive types or reference types and, if used with a primitive type, stores the enqueued elements directly into an array without boxing them. The operations add() and remove() are constant time. (By the way, this isn’t true of ArrayDeque, where add() is only amortized constant time because of the need to occasionally reallocate the array holding the elements as the queue grows.)

Using this class, with the heap size set to 1000 MB, I was able to put 126M longs, or 1012 MB, into the queue. And there’s no GC performance problem caused by having 126M separate objects to manage (252M for the LinkedList!).

The class uses two techniques: First, it is parameterized by an array wrapper class that provides a factory for arrays of primitive (or reference) type, plus array-like get() and set() operations. And second, it uses many smaller arrays (“blocks”) to store the elements of the queue, instead of one large array (as in ArrayDeque) or an object per element (as in LinkedList). This means that it can smoothly expand to take all necessary (available?) memory without hitting a roadblock when the array size doubles (as in ArrayDeque).
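Here is a minimal sketch of those two techniques, with hypothetical names (the real CompactQueue is in the code linked below and will differ; in particular, a production version would expose type-specific methods so that even the transient boxing at this generic get()/set() boundary disappears, while in the sketch it is only the storage that is unboxed):

import java.util.ArrayDeque;

// Technique 1: an array wrapper that hides a primitive array behind a
// factory and array-like get/set operations.
interface ArrayWrapper<E> {
    Object newArray(int size);
    E get(Object array, int index);
    void set(Object array, int index, E value);
}

class LongArrayWrapper implements ArrayWrapper<Long> {
    public Object newArray(int size) { return new long[size]; }
    public Long get(Object array, int index) { return ((long[]) array)[index]; }
    public void set(Object array, int index, Long value) { ((long[]) array)[index] = value; }
}

// Technique 2: many fixed-size blocks instead of one big array, so the
// queue grows a block at a time and never reallocates or copies.
class CompactQueue<E> {
    private static final int BLOCK_SIZE = 1 << 16;   // 64K elements per block
    private final ArrayWrapper<E> wrapper;
    private final ArrayDeque<Object> blocks = new ArrayDeque<>();
    private int headIndex = 0;           // next slot to read in the first block
    private int tailIndex = BLOCK_SIZE;  // next slot to write in the last block
    private long size = 0;

    CompactQueue(ArrayWrapper<E> wrapper) { this.wrapper = wrapper; }

    void add(E element) {                // constant time: no reallocation, ever
        if (tailIndex == BLOCK_SIZE) {   // last block is full: add a new one
            blocks.addLast(wrapper.newArray(BLOCK_SIZE));
            tailIndex = 0;
        }
        wrapper.set(blocks.peekLast(), tailIndex++, element);
        size++;
    }

    E remove() {
        if (size == 0) throw new IllegalStateException("queue is empty");
        E element = wrapper.get(blocks.peekFirst(), headIndex++);
        size--;
        if (headIndex == BLOCK_SIZE) {   // first block drained: free it
            blocks.removeFirst();
            headIndex = 0;
        }
        return element;
    }
}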

Code is provided here. (Note that only the array wrapper classes for byte and long are provided, array wrapper classes for the other primitive types are left as an exercise for the reader.)

Persistent Data Structures – now (possibly) practical

The typical data structures most programmers know and use require imperative programming: they fundamentally depend on replacing the values of fields with assignment statements, especially pointer fields.  A particular data structure represents the state of something at one particular moment in time, and that moment only.  If you want to know what the state was in the past, you need to have made a copy of the entire data structure back then, and to have kept it around until you need it.  (Alternatively, you could keep a log of changes made to the data structure, which you play in reverse until you reach the previous state – and then play forwards again to get back to where you are now.  Both techniques are typically used to implement undo/redo, for example.)

Or you could use a persistent data structure. A persistent data structure allows you to access previous versions at any time without having to do any copying.  All you need to do at the time is save a pointer to the data structure.  If you have a persistent data structure, your undo/redo implementation is simply a stack of pointers, onto which you push a pointer after every change to the data structure.
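As a tiny illustration of the concept (my own sketch, not from the book or the post): a persistent stack in Java, where every operation returns a new version, old versions remain valid, and all versions share structure.

// Immutable, persistent stack: push and pop return new versions and never
// mutate, so old versions stay valid and share their tails.
final class PStack<E> {
    private static final PStack<?> EMPTY = new PStack<>(null, null);
    private final E head;
    private final PStack<E> tail;

    private PStack(E head, PStack<E> tail) { this.head = head; this.tail = tail; }

    @SuppressWarnings("unchecked")
    static <E> PStack<E> empty() { return (PStack<E>) EMPTY; }

    boolean isEmpty() { return this == EMPTY; }
    E top() { return head; }
    PStack<E> push(E e) { return new PStack<>(e, this); }  // O(1), shares this as the tail
    PStack<E> pop() { return tail; }                       // the old version is untouched
}

With this, undo really is just holding onto old pointers:

PStack<String> v1 = PStack.<String>empty().push("a").push("b");
PStack<String> v2 = v1.push("c");  // v2 sees a, b, c; v1 still sees a, b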

This can be quite useful—but it is typically very hard to implement a persistent data structure in an imperative language, especially if you have to worry about memory management.   If you’re using a functional programming language—especially a language with lazy semantics, like Haskell—then all your data structures are automatically persistent, and your only problem is efficiency (and of course, in a functional language, the language system takes care of memory management).  But as a hardcore C++ programmer in my professional life, I was for practical purposes locked out of the world of persistent data structures.

Now, however, with C# and C++/CLI in use (and garbage collection coming to C++ any time now …), I can at last contemplate the use of persistent data structures in my designs.  And that’s great, because it gave me an excuse to take one of my favorite computer science books off the shelf and give it another read.

The book is Purely Functional Data Structures, by Chris Okasaki.  I find it a very well written and easy-to-understand introduction to the design and analysis of persistent data structures—or, equivalently, of any data structure you’d want to use in a functional language.

The book has two key themes: first, it describes the use and implementation of several persistent data structures, such as different kinds of heaps, queues, and random-access lists; and second, it shows how to create your own efficient persistent data structures.

Continue reading ‘Persistent Data Structures – now (possibly) practical’