Issue 8, 30 Dec 2001


  In This Issue:
 
Division of Labor: Kits, Libraries, Servers, and Teams by Daniel Reinhold 
I remember several years back (about spring '96) when I first discovered the joys of the internet and was amazed at the sheer number of goodies to be plucked. This was well before BeOS Intel, so I spent a lot of time downloading Windows shareware programs. After a few months, however, the excitement died down considerably when I discovered an unfortunate truth: most of those shareware programs were complete crap. If they didn't crash within a few minutes, their interface was so awkward that I just couldn't stand to use them.

There are many styles of bad interface, but one of the more common (and in many ways, humorous) of the lot is what I call the "thin geek layer". This is where the interface consists of a myriad of buttons (often tiny) and/or complex nested menus containing long lists of items whose labels seem to pretty much match the underlying function calls. It makes me grin to see such programs, because I can almost tell, by looking at the front end, what the back end must be like. I have a pretty clear feeling that when I click on a button called 'RasterPunkFoobitz' that it directly calls a rasterPunkFoobitz() function. The programmers who write these kinds of programs are obviously so enamored (or distracted) with implementing all the whiz-bang functionality that they don't give much thought or effort into the idea of presenting an interface designed with the user in mind.

This is the sort of thing that makes user interface designers hyperventilate and turn blue. The desired goal for any good interface is to present a world view that corresponds with what the user cares about and the set of tasks he wants to carry out. The back end should implement this by whatever means necessary to fulfill the illusion.

Creating a good user interface isn't just the task of application programming. It's also the concern of library design -- particularly when it's part of an application framework. In this case, the users are developers and the interface is the API (Application Programming Interface). An API must obey all the rules of good user interface just as surely as any user program if it is going to be effective and usable.

Implementing the BeOS API

The BeOS API, I am happy to say, is a very good API -- that is, a very good design. Because of this, the front end presented to developers -- the various kits with their associated headers, class definitions, and documentation -- does not necessarily correspond one to one with the back end implementation -- libraries, add-ons, servers.

While this is certainly a good thing from a design standpoint, it does create a certain confusion with regards to how we, the OpenBeOS project, implement the BeOS API. Because, as implementors, we must be just as concerned with the back end as the front. Even more so, since the front end (the API specs) is already established and it is the back end that must be recreated from scratch.

The confusion arises when people look at the OpenBeOS programming teams that we have set up and try to understand how their particular "pet" concern fits in. Oh sure, for the most part, the division of labor into the various teams is quite obvious and makes sense: the Printing team works on printing, the Network team implements networking, and so on. There is no problem with this. But where do drivers fit in? What about missing kits? For example, why is there no team for the Support Kit? Where's the team for OpenGL? Why is there only one team to handle both the Application Kit and the Interface Kit?

One answer might be that we don't know what we're doing. Since there's probably an element of truth in that statement, we'll disregard that nasty little observation for now and move on to more useful explanations (he says slyly with a large grin on his face).

Teams

The large diversity of tasks and skills required to implement an OS naturally requires that you break the whole thing down into a collection of parts. You put different people with specific skills into their own area of expertise. When several people work together to implement one particular piece of functionality, that is called a team. The teams co-operate, of course, but it's quite amazing (and a testimonial to the good design of the BeOS API) the extent to which these various teams can work independently as their own little islands.

These teams are about the business of creating the back end. The front end, i.e. the BeOS API, acts as a project spec and, as a design element, is basically done. We may make a few very, very minor alterations on the road to R1, but, for all practical purposes, this can be considered finished.

Now, what exactly are the parts to be written? Well, at the risk of slightly oversimplifying the situation, you could say that the BeOS consists of three major parts: the kernel, the drivers, and the application framework. The kernel is, well, the kernel. The drivers are kernel add-ons such as video card and other hardware drivers and all file systems. The application framework defines userland. It's where all the apps live with their associated windows, loopers, messages, etc.

Let's focus on the application framework. For the BeOS, this has a C++ interface. The vast majority of functionality is placed into C++ classes. Lots of classes.

Classes

Pretend for a moment that we have the complete set of classes before us in one long list (no, I haven't actually compiled such a list, but even if I had, I wouldn't put the whole thing here, for goodness' sake):

BApplication
BWindow
BView
BMessage
BLooper
BHandler
BInvoker
BRoster
BClipboard
BPoint
BRect
BBox
BButton
BControl
BAlert
BFont
BPicture
BPolygon
BMenu
BBitmap
BShape
BScreen
BTextView
BStatusBar
BString
BList
BFile
BDirectory
BEntry
BNode
BQuery
BPath
BMimeType
BVolume
BSymLink
BFlattenable
BArchivable
BNetBuffer
BNetAddress
...
(many others omitted)
...
You get the idea, there's a lot of 'em.

Now consider the task of re-writing all these classes, which is precisely what the OpenBeOS project is doing. We start at the top and go down the list, one by one, and implement each class, right? Bzzzt. Wrong. Off with your head. We may be a little dumb, but we are not stupid.

Naturally, most of these classes can be clustered together into related groups that help implement a particular type of functionality. BPoint, BRect, BFont, and a score of other related graphics classes would naturally be written by folks who have expertise in that area. Likewise, BFile, BDirectory, BEntry, etc. call for a different set of knowledge and skills. This is not to say that some people couldn't work on all of these (some probably have the talent to do so). But even then, you want focus. You want for each person to be able to concentrate effectively on each set of classes they must implement.

Ok, I'm sure that no one disputes what I've said about this. Yes, you should divide up the application framework into the various different pieces of functionality, find programmers who have the corresponding skills and interest, and then assign them to that piece. Get several of them working together and you have a programming team. No one argues with this.

But what exactly are the pieces? What are these natural divisions I have spoken of that define the different areas of functionality? The BeOS kits, you might well answer. Well...no, not really. That's where the confusion comes in.

You see, the programming teams are all about creating the back end. The kits, however, are a description of the front end.

Kits

It's tempting to say that the kits are a marketing fiction. That's probably going a bit too far. But the point is, those kits are really just a description in the BeBook to help programmers understand how to use the BeOS API. They are Be's attempt to organize and explain all those classes: how they work, how to use them, how they inter-relate, etc. Their particular organization of the kits wasn't written in the stars -- they could well have organized it somewhat differently.

Since they describe the front end, those kits do not necessarily represent the only or even the best way of dividing up the work of implementing the classes. The BeOS API is no "thin geek layer" -- what you (the developer) see in the kits is what you need to see in order to get your tasks accomplished. But as implementors of the API, we have to organize the work in a way that best suits the needs, desires, and skills of those who are actually in the trenches, filling out the back end.

Having said all this, in reality, those kits often *do correspond* to a natural area of functionality, even as far as implementation is concerned. For example, Be found it convenient to divide up all the classes and headers for the Media related stuff into one unit which they call the Media Kit. Likewise, we find it convenient to put all the tasks involved in re-creating the Media Kit into one team.

So you see, the kits as described by Be in the BeBook don't neatly fit into the task of implementing the BeOS. They don't *perfectly* match our requirements; on the other hand, they do match to a certain extent. It's a near miss. Because of this, we often use the term "kit" to describe a piece of implementation that we're working on, which only confuses the matter. To muddy the waters even further, we have used the term "kit" to label each of the high-level folders in the CVS repository. Thus, when looking for the project source code, you find them organized into folders called "kernel_kit", "app_kit", "game_kit", etc.

We're not trying to intentionally confuse the issue. We're just struggling for the appropriate terminology to describe what we're doing. The term "kit" is there from the BeBook, it's simple, well-known, and reasonably catchy. You tend to use what you have.

Libraries

While a developer needs to have the API documented so he can understand how to use it, and have the headers available so that he can compile his code, the actual functionality is provided by the libraries. Since the libraries are the back end, there is no required correspondence to the kits. So is there a correspondence? Again, yes and no. Be sometimes matches a library one-to-one with a kit -- e.g. the Network Kit is implemented in the libnet.so library. But in other cases, there is no set correspondence.

Consider again the imaginary list of all BeOS classes referred to above. Had Be wanted to, they could have compiled all of them into one big, mondo library -- say, libeverything.so. That certainly would have been a very simple setup and it would have worked. But it does have disadvantages: essentially every part of the OS would have to be loaded in memory at all times, even if certain parts were hardly ever used by particular users. Since almost all parts of the OS are used at some point, this isn't as bad as it seems. Still, it offends the sensibilities of many programmers who prefer a more fine-grained approach. Another disadvantage, from Be's perspective, would be that every time someone changed a class, regardless of how peripheral its duties, the whole library would have to be recompiled. Not very good from a project management standpoint.

As a matter of course, Be took something of a middle ground. Many of the kits are implemented as separate libraries, but there are a couple of big, all-encompassing libraries at the core. There is libroot.so, the "root" library, which exports all the kernel functions that are available in userland. Then there is the mother of all libs, libbe.so, the "Be" library, which implements all the core functionality of the application framework. libbe.so contains all the guts needed for the Application Kit, the Interface Kit, the Storage Kit, the Support Kit, and a few others. It's a biggy.

Servers

One of the distinctive features of the BeOS is its client/server architecture. This means that most of the OS functionality is delivered at run time by servers -- background processes that are started at boot up and are always running.

The most common example is the app_server which works in tandem with your application to provide all the graphics and messaging for your application's windows and views. Note that the app_server manages both the graphics operations (described in the Interface Kit) and the windowing/messaging operations (described in the Application Kit). This is why we have only one team dedicated to both kits: the reality is the back end functions of the app_server completely cut across both kits (Application and Interface) and trying to handle this as two teams would just be duplicating effort.

Nor does every kit have a server. There is no game server or storage server, etc. The Media Kit actually has two: the media_server for normal media applications, and the media_addon_server for media add-ons.

Tying it all together

OK, let's summarize a bit. We have three basic entities to deal with in order to organize the work of recreating the BeOS: the kits, the libraries, and the servers. The kits are the interface spec, designed by Be and adopted by OpenBeOS to serve as the project guidelines. The libraries are the actual compiled code that supply the OS functionality. The servers are userland processes that dish out the functionality to client programs at run time.

Finally, we have the teams -- groups of programmers united to implement each particular piece of the OS. They must organize the other three entities together in a way that makes sense for the type of back end work they are engaged in.

To try to visualize the relationship between these, here is a table that lists each kit from the BeBook and then maps the corresponding library, server, and designated team for it:

Kit             Library             Server          Team
===================================================================
Application     libbe.so            app_server      App/Interface
Device          libdevice.so        (None)          (None)
Game            libgame.so          (None)          Game
Interface       libbe.so            app_server      App/Interface
Kernel          libroot.so          (None)          Kernel
Mail            libmail.so          mail_daemon     Networking
Media           libmedia.so         media_server    Media
Midi            libmidi.so          midi_server     MIDI
Network         libnet.so           net_server      Networking
OpenGL          libGL.so            (None)          (Game?)
Storage         libbe.so            (None)          Storage
Support         libbe.so            (None)          (None)
Translation     libtranslation.so   (None)          Translation
Input Server    libbe.so            input_server    Input Server
Screen Saver    libscreensaver.so   (None)          ScreenSaver
Deskbar         libtracker.so       DeskBar         (None)
Tracker         libtracker.so       Tracker         (None)

There are some awkward gaps in the table above, but if you reorganize the table with respect to teams, or libraries, or servers, you get the same results (just different gaps). These various elements all have a notion of partitioning the components of the OS, but each with distinctive viewpoints. Change the viewpoint, and you get a slightly different organizational scheme.

Filling in the gaps

Where is the Printing team in the table above? Actually, there is no Printing Kit in the BeOS API -- only a single class called BPrintJob, which is in the Interface Kit. Deskbar and Tracker don't have an assigned team either, but these are already taken care of in the OpenTracker project, so we needn't be concerned about them.

The Support Kit does not have a team, but does it really need one? That kit isn't really a kit like all the others that are organized according to common purpose and function. Instead, the Support Kit is just a hodge-podge of classes that didn't find a place within any other kit. It's just a convenient name for all the orphaned classes. Some, such as BArchivable and BFlattenable, are pertinent to the sending of BMessages and, thus, are being implemented by the App/Interface team. Others are generic classes, such as the BString class, that could be implemented by any team member.

Where is OpenGL? That hasn't been decided yet. No specific team has been set up to implement this, although there has been discussion that it could be handled by the Game Kit team. Or perhaps the App/Interface team will do it instead. Or maybe a separate team will be factored out and the Game Kit and App/Interface teams can then use and build on the base implementation. More than likely, this will not be dealt with in R1.

And what about the parts outside of the application framework? This is probably the least well documented aspect of the project. Basically, most functionality not part of userland will be implemented by the kernel team. There can be exceptions, particularly for add-ons -- e.g. BFS (the Be File System) has its own programming team.

Drivers, on the other hand, will likely be taken up by whoever is best equipped to do the work. The basic drivers for the mouse, keyboard, hard disk and CDs (at least the initial versions) will be written by members of the kernel team. An initial, baseline graphics driver will come from the App/Interface team. And the members of the Printing team are writing the printer drivers. Really, anyone who has the skills and motivation can jump in on this work. We could set up a "Drivers" team, but there is probably little point. The main purpose of a programming team is to join together the collective brains and efforts of the members towards a specific task. But each type of driver tends to require very detailed knowledge about a specific area. So members of a Drivers team might well have only their collective membership in common.

In short, it's a bit of a mess. The inter-relationships between the various elements are something of a cat's cradle. The teams must recreate all the kits in the API, but they are actually implemented in the libraries and servers. Each kit does not necessarily correspond to a specific library, each library does not necessarily have a corresponding server, each team does not necessarily implement either a specific library or kit. Clear as mud, no?

Ultimately, how we divide up all the various implementation tasks comes down to the skills, interests, and participation of the project members. For example, right out of the gate, OpenBeOS will have far better printing support than the BeOS ever did. This is not because we sat down at the beginning and decided, 'dammit, we're going to excel at printing in OpenBeOS'. It's just because we happened to have attracted the attention and help of several programmers with long experience with printing and printer drivers.

If you see an unfilled gap in what I've presented in this article, and you have the programming chops to make it real, then join the project and make it happen. I promise you that you'll be welcomed and no one will get in your way.

 
Policy-Based Class Design by Robert Medeiros 
A lookup table is a handy thing, especially when writing games, where performance is often increased at the expense of computational accuracy. A lookup table lets us compute a table of approximate values for some function, which can then be looked up very quickly during program execution, instead of having to compute the value of the function at run-time, which might be very expensive.

A classic use for lookup tables is to approximate the trigonometric functions. This is often deemed unnecessary on modern processors, which implement some of the trigonometric functions in hardware, but even so, execution times can often be improved by looking up a precomputed value. This might be particularly useful inside an inner loop in a geometry engine, especially if you're computing something naturally noisy anyway, like particle effects.

Now, to head in an entirely different direction, this article is actually about how we're going to implement a lookup table using a technique known as "policy-based class design". We'll go ahead and work through the design of the lookup table, and then discuss the merits of this technique in more detail afterwards.

Policy-based class design shouldn't seem entirely unfamiliar to you if you've used the STL to any degree. The basic idea is to create classes by composing them out of "policies", which are classes that handle only a single aspect of the structure and/or functionality of the class. The mechanism by which classes are composed is a combination of the C++ template mechanism and multiple inheritance.

How do these seemingly unrelated language features manage to combine into something useful? The idea is to create a templatized "skeleton" of the class to be created. The template parameters are, amongst other things, policies. The policy classes are inherited by the skeleton, which ends up being the composition of all the functionality and structure belonging to the policy classes. I know this sounds like gibberish. It'll become clearer shortly, I hope.


A simple example: WidgetContainer

As a quick precursor, we're going to create a WidgetContainer class. The WidgetContainer has to have some means of storing Widgets, which will be determined by the StoragePolicy template parameter. Furthermore, a WidgetContainer might be able to modify the Widgets - how this is accomplished (or not accomplished) will be determined by the ModificationPolicy.

template
<
    typename WidgetType,
    template <class> class StoragePolicy,
    template <class> class ModificationPolicy
>
class WidgetContainer : public StoragePolicy<WidgetType>, 
                        public ModificationPolicy<WidgetType>
{
};
The first template parameter specializes the WidgetContainer for a particular type of Widget. This sort of genericity is the most common application of templates. The StoragePolicy is itself a templatized class which takes a single template parameter - the WidgetType. Whatever StoragePolicy we choose to use has to know what it is that is being stored, and so is specialized in the same way. Likewise for the ModificationPolicy. This cascading usage of template parameters to specialize classes and their component policies is part of the reason that policy-based class design is as powerful as it is.

An example StoragePolicy and ModificationPolicy might be:

// A Storage Policy
template
<
    typename WidgetType
>
struct FixedArrayStorage
{
private:
    // We'll store at most 1K Widgets.
    WidgetType fWidgetArray[1024];
    int32 fWidgetCount;
protected:
    // These methods constitute the interface by which the rest of
    // the widget container, including other policies, can interact with
    // this policy. All Storage policies must implement this contract.
    int32 Count()
    {
        return( fWidgetCount );
    }
    inline WidgetType& WidgetAt( int32 index )
    {
        ASSERT( ( index >= 0 ) && ( index < fWidgetCount ) );
        return( fWidgetArray[index] );
    }
};
// A Modification Policy
template
<
    typename WidgetType
>
struct ModifyByScalar
{
public:
    void AddScalarToWidgets( float scalar )
    {
        // Add the given scalar to each of the Widgets.
        for( int32 index = 0; index < Count(); index++ )
        {
            WidgetType& widget = WidgetAt( index );
            widget.foo += scalar;
        }
    }
    void SubtractScalarFromWidgets( float scalar )
    {
        // Subtract the given scalar from each of the Widgets.
        for( int32 index = 0; index < Count(); index++ )
        {
            WidgetType& widget = WidgetAt( index );
            widget.foo -= scalar;
        }
    }
};
In order to actually use our WidgetContainer, we have to specialize it using a particular set of types and policies, e.g.
    typedef 
    WidgetContainer<Widget,FixedArrayStorage,ModifyByScalar> 
    MyWidgetContainer;
We can then proceed to declare an instance of our WidgetContainer, and use it as per the interfaces included in the component policies:
    MyWidgetContainer myWidgetContainer;
	
    // Pretend we have a way to add widgets. Now pretend we've added a whole
    // lot of them to the widget container. Finally, we'll use the methods
    // defined in the modification policy to change 'em a bit.
    myWidgetContainer.AddScalarToWidgets( 3.1416 );
Here's a neat bit -- if we want to change the WidgetContainer so that it no longer allows widgets to be modified, we can simply create a new policy that reflects this desire:
template
<
    typename WidgetType
>
struct NoModification
{
    // There's a big fat nothing in here, because we're not allowing
    // the class user to change widgets.
};
And now we change our type definition to the following:
    typedef 
    WidgetContainer<Widget,FixedArrayStorage,NoModification>
    MyWidgetContainer;
Any attempt to use the previous policy's methods will now be caught the next time the code is compiled. And with a collection of policies defined, the user can mix and match functionality to create precisely the class they need, implementing new policies where necessary. If any of this strikes you as a good way to use Design Patterns, you're on the money. More on this later (in another article, most likely).

Now that we've seen a simple example of this technique, we'll move on to something a little bit larger -- our lookup table.


Defining the LookupTable

Let's think for a moment about how we should decompose a lookup table to make it as generic as possible. We'll want to give the table a size, and we'll have to know what kind of stuff is going to be stored in the table to start with. If we happen to want to pick values in the table by some means other than array indices, we should specify what type to use as an index value. That seems to round out the size and type information, so onwards to the policies.

Our first policy should concern how to store the table data, e.g. on the stack, on the heap, etc. and so we'll call it the StoragePolicy. We'll need to initialize the table with values on construction, which will be accomplished by the GeneratorPolicy (STL users should find this familiar). Lastly, we'll need a way to convert values of the index type into table positions where the corresponding values are stored -- this will be handled by the LookupPolicy.

If this all seems a bit unwieldy, or even daunting, hopefully some code will make things clearer.

template
<
    // This is the fixed (or initial) size of the lookup table.
    int32 TableSize,
    
    // This is the type by which table values are looked up.
    typename IndexType,
    
    // This is the type of entity being stored for lookup.
    typename StorageType,
    
    // This is the policy that determines how lookup values are stored.
    template <int32,class> class StoragePolicy,
    
    // This is the policy responsible for generating values.
    template <int32,class> class GeneratorPolicy,
    
    // This is the policy responsible for determining array indices.
    template <int32,class,class> class LookupPolicy
>
class LookupTable : public StoragePolicy<TableSize,StorageType>,
                    public GeneratorPolicy<TableSize,StorageType>, 
                    public LookupPolicy<TableSize,IndexType,StorageType>
{
public: 
    // The table is "generated" during construction.
    LookupTable();
    // Values are retrieved from the table using [] operator.
    StorageType operator[]( const IndexType& index ) const;
};


LookupTable: Construction

Let's have a look at what happens when a lookup table is constructed.

template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType,
    template <int32,class> class StoragePolicy,
    template <int32,class> class GeneratorPolicy,
    template <int32,class,class> class LookupPolicy
>
LookupTable<TableSize,IndexType,StorageType,StoragePolicy,
            GeneratorPolicy,LookupPolicy>::LookupTable()
{
    for( int32 index = 0; index < TableSize; ++index )
    {
        const StorageType value = Generate( index );
        StoreValue( value, index );
    }
}
We're given as a template parameter the size of the lookup table. We want to initialize each table element; this is accomplished by the Generator policy, which helpfully provides, as part of its "contract", a Generate() method -- which any Generator policy must do. Given an index (not necessarily to be considered an array index, because the Storage policy might not be storing table values in an array, even though this is most likely) the Generate() method returns the value that must be stored in the table at that abstract location. That value is stored in the table by the Storage policy, via the method that it provides, called StoreValue().
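As a concrete illustration of that contract, here is a trivial generator sketch of my own (not from the article): it just stores each abstract index as its own value. The int32 typedef stands in for the BeOS type:

```cpp
#include <stdint.h>
typedef int32_t int32;  // stand-in for the BeOS int32 type

// A hypothetical Generator Policy: fulfills the Generate() contract by
// returning the index itself, converted to the storage type.
template
<
    int32 TableSize,
    typename StorageType
>
struct IdentityGenerator
{
protected:
    inline StorageType Generate( int32 index ) const
    {
        return( (StorageType) index );
    }
};
```

The article's SineGenerator follows the same shape, with Generate() computing a trigonometric value for the index instead of echoing it back.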

By only relying on the interfaces provided by the policies, and not any special knowledge we have concerning how the policies are implemented, even "inside" the class where policy data might be directly accessible, we manage to decouple the policies. This in turn allows a combinatorial explosion in the number of classes that can be generated via our skeleton template.


LookupTable: Indexing

Now let's look at how values are looked up in the table using the [] operator:

template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType,
    template <int32,class> class StoragePolicy,
    template <int32,class> class GeneratorPolicy,
    template <int32,class,class> class LookupPolicy
>
inline 
StorageType 
LookupTable<TableSize,IndexType,StorageType,StoragePolicy,
            GeneratorPolicy,LookupPolicy>
            ::operator[]( const IndexType& lookupIndex ) const
{
    // First we convert the lookup index, which may be of an
    // arbitrary type, into an abstract item index. The Lookup()
    // method belongs to the lookup policy.
    const int32 arrayIndex = Lookup( lookupIndex );
	
    // Then we find the element at that index, and return it.
    // The ValueAt() method belongs to the storage policy.
    const StorageType value = ValueAt( arrayIndex );
	
    return( value );
}
This is pretty straightforward. We're given some index value, and we want to find and return the value stored in the lookup table corresponding to that index. Since the index might be of an arbitrary type, e.g. a floating-point radian value for a trig lookup table, we first have to determine what the abstract item index is for that index.

This is what the LookupPolicy does -- it takes a value of your index type, which might be anything, and turns it into an integer in the range [0,TableSize), which the StoragePolicy knows how to deal with. Then we just pass the index returned from Lookup() to the ValueAt() method of the StoragePolicy, which returns the value at that index; we never have to know how or where the data is stored. This is enforced by our use of the inheritance mechanism to keep the data in the StoragePolicy private, even though we've inherited from it - this makes for a cleaner separation of the policies from the skeleton class and from each other.
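The article doesn't show a LookupPolicy implementation at this point, so here is a sketch of my own (the name and details are assumptions, not Be or article code). For a trig table, it wraps an angle in radians into [0, 2*pi) and scales that onto [0, TableSize):

```cpp
#include <math.h>
#include <stdint.h>
typedef int32_t int32;  // stand-in for the BeOS int32 type

// A hypothetical Lookup Policy for trig tables: converts a radian value
// into an abstract item index in the range [0, TableSize).
template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType
>
struct RadianLookup
{
protected:
    inline int32 Lookup( const IndexType& lookupIndex ) const
    {
        const double kTwoPi = 6.283185307179586;
        double wrapped = fmod( (double) lookupIndex, kTwoPi );
        if( wrapped < 0.0 )
            wrapped += kTwoPi;  // fmod keeps the dividend's sign, so fix negatives
        return( (int32)( wrapped / kTwoPi * TableSize ) );
    }
};
```

Note how the policy only needs the size and the index type; it knows nothing about how or where the values are stored, which is exactly the decoupling described above.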


Storage Policies

So let's have a look at our policies. First we'll consider the DefaultStorage implementation of the storage policy:

template
<
    int32 TableSize,
    typename StorageType
>
struct DefaultStorage
{
protected:
	
    inline StorageType ValueAt( int32 index ) const
    {
        ASSERT( ( index >= 0 ) && ( index < TableSize ) );
		
        const StorageType value = lookupTable[ index ];
        return( value );
    }
	
    inline void StoreValue( const StorageType& value, int32 index )
    {
        ASSERT( ( index >= 0 ) && ( index < TableSize ) );
		
        lookupTable[ index ] = value;
    }
	
private:
    // This is an array of TableSize elements of type StorageType.
    StorageType lookupTable[ TableSize ];
};
This is where the lookup table actually stores the data it contains. When instantiating this policy we have to tell it how big a table to create, and what type of values to store. This is another classic use of templates -- fixing the type stored in an array, and the size of the array, at compile time. The two inline methods in the policy are there to expose the array to the other policies, because they won't be able to access it directly.

Our other storage policy is the DynamicStorage implementation, which is different only in that it allocates the table on the free store:

template
<
    int32 TableSize,
    typename StorageType
>
struct DynamicStorage
{
public:
    bool InitCheck() const
    {
        // We're o.k. if the table was successfully allocated.
        const bool result = ( pLookupTable != NULL );
        return( result );
    }
protected:
    DynamicStorage() : pLookupTable( NULL )
    {
        // Allocate the lookup table on the free store.
        pLookupTable = new StorageType[ TableSize ];
        ASSERT( pLookupTable != NULL );
    }
    ~DynamicStorage()
    {
        ASSERT( pLookupTable != NULL );
        delete [] pLookupTable;
        pLookupTable = NULL;
    }
	
    inline StorageType ValueAt( int32 index ) const
    {
        ASSERT( ( index >= 0 ) && ( index < TableSize ) );
		
        const StorageType value = pLookupTable[ index ];
        return( value );
    }
    inline void StoreValue( const StorageType& value, int32 index )
    {
        ASSERT( ( index >= 0 ) && ( index < TableSize ) );
		
        pLookupTable[ index ] = value;
    }
private:
    StorageType* pLookupTable;
};
We need to be able to check, after the lookup table has been constructed, that the memory for the table was successfully allocated; the InitCheck() method does just that. The constructor and destructor do as you'd expect, allocating and disposing of the table memory. The ValueAt() and StoreValue() methods constitute the interface between the policy and the class which inherits it, so they must be implemented.
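One caveat worth flagging: on a standard-conforming compiler, a plain new[] that fails throws std::bad_alloc rather than returning NULL, so the NULL check above relies on the non-throwing behaviour of the era's compilers. A hedged sketch of the same two-phase idiom using new( std::nothrow ), with illustrative names not taken from the article:

```cpp
#include <assert.h>
#include <new>       // for std::nothrow

// Cut-down stand-in for DynamicStorage.
struct SmallDynamicStorage
{
    SmallDynamicStorage()
        // new( std::nothrow ) really does return NULL on failure,
        // which is what InitCheck() wants to test for.
        : pLookupTable( new( std::nothrow ) float[ 1024 ] )
    {
    }

    ~SmallDynamicStorage()
    {
        delete [] pLookupTable;
        pLookupTable = 0;
    }

    bool InitCheck() const
    {
        return ( pLookupTable != 0 );
    }

private:
    float* pLookupTable;
};
```

The construct-then-InitCheck() pattern matches the convention used by many Be API classes, such as BFile.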


Generator Policy

Now let's look at the GeneratorPolicy. For the moment we'll stick with generating tables of cosine values and sine values, because this will be potentially somewhat useful, and will let us demonstrate a point about the composition of policies. The SineGenerator looks as follows:

template
<
    int32 TableSize,
    typename StorageType
>
struct SineGenerator
{
// Only called from the constructor to initialize the
// table, so we don't need to make it accessible.
protected:
    inline StorageType Generate( int32 index ) const
    {
        // Each element of the table stores the value of
        // the sine function at an angle in the range [0,2*PI).
        // There are TableSize entries, and so the increment in
        // radians between each table value is (2*PI)/TableSize.
        const StorageType angleDelta = ( 2*PI ) / TableSize;
        const StorageType radians = index * angleDelta;
        const StorageType sineApprox = sin( radians );
		
        return( sineApprox );
    }
};
If you recall, the constructor for our lookup table passed an abstract item index each time it called the Generate() method. That index is what we use here to determine which angle to compute the sine of. The CosineGenerator is exactly the same except that it calls cos() instead. In either case, we sample the function at regularly spaced intervals over the range [0,2*PI). The number of samples is given by the size of the table: a larger table gives more accurate results, but takes more space (natch) and is thus less likely to fit in the cache, which matters for performance.
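The size/accuracy trade-off is easy to quantify: with a truncating lookup, the worst-case error is roughly the maximum slope of the function (1, for sine) times the spacing between samples. Here's a quick way to measure it -- a hypothetical helper for experimentation, not part of the article's code:

```cpp
#include <assert.h>
#include <math.h>

const double PI = 3.14159265358979323846;

// Measure the worst-case error of a truncating sine lookup with
// the given number of samples over [0,2*PI).
static double WorstCaseError( int tableSize )
{
    const double angleDelta = ( 2 * PI ) / tableSize;
    double worst = 0.0;

    for ( int i = 0; i < tableSize; i++ )
    {
        // Compare the sample stored for this bucket against the true
        // value at the far edge of the bucket -- the point of maximum
        // error for a truncating lookup.
        const double sample = sin( i * angleDelta );
        const double error = fabs( sin( ( i + 1 ) * angleDelta ) - sample );
        if ( error > worst )
            worst = error;
    }
    return worst;
}
```

For a 1024-entry table the sample spacing is about 0.0061 radians, so the worst-case error stays comfortably below one percent; quadrupling the table size cuts the error by roughly a factor of four.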


Lookup Policy

Lastly, we want to look at the LookupPolicy. Since we're currently building trig tables, we could go old skool and use integer lookups, assuming we're willing to represent the angles in our applications by integer values in the range [0,TableSize). I usually prefer to use radians, though, so we'll also create some policies that take a radian value and return the approximate sine or cosine of the angle.

Note that, because both sine and cosine values can be looked up directly (via table index) or via an angle, any of these lookup policies can be combined with either the SineGenerator or the CosineGenerator policy. Assuming we have three lookup policies, the number of possible classes we can generate is (2 generators) x (3 lookups) = 6 distinct classes. If we consider the two storage policies as well, we end up with 12 possible classes, each of which stores some number (TableSize) of values of type StorageType and allows them to be looked up by IndexType. So here's the DirectLookup policy:

template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType
>
struct DirectLookup
{
// We call the lookup function when using the [] operator to
// determine the index in the table of the value to retrieve.
protected:
    inline int32 Lookup( const IndexType& index ) const
    {
        ASSERT( ( index >= 0 ) && ( index < TableSize ) );
        return( index );
    }
};
The LookupPolicy takes a value of type IndexType and turns it into an internal index where the corresponding value is stored. Here we just return the index unchanged, since the user is telling us explicitly which table element they want to look up. Note that the IndexType template parameter has to be an int32 here, because that's the type we're using to denote positions in the table internally, irrespective of how the values are actually stored in the StoragePolicy (remember that the internal index is passed to the ValueAt() method of the StoragePolicy). A slightly more interesting policy is:
template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType
>
struct RadianLookup
{
// We call the lookup function when using the [] operator to
// determine the index in the table of the value to retrieve.
protected:
    inline int32 Lookup( const IndexType& radians ) const
    {
        ASSERT( ( radians >= 0.0 ) && ( radians < 2*PI ) );
        // Return the index of the table value closest
        // to sine( radians ). Note that the value of angleDelta is
        // fixed at compile-time, so should be folded when optimizations
        // are turned on.
        const IndexType angleDelta = ( ( 2*PI ) / TableSize );
        const int32 index = static_cast<int32>( radians / angleDelta );
		
        return( index );
    }
};
Here we're given a value in radians, which we require to be in the range [0,2*PI). Another policy will lift this restriction at the cost of a few more cycles. The code is pretty straightforward -- just divide the input radians by the distance between sample points, and cast the result to strip the fractional part. Another policy that you might want to use, if you didn't want to clamp all the angles in your application to the range [0,2*PI), is:
template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType
>
struct ReduceRadianLookup
{
// We call the lookup function when using the [] operator to
// determine the index in the table of the value to retrieve.
protected:
    inline int32 Lookup( const IndexType& radians ) const
    {
        // The parameter is const, so reduce a local copy instead.
        IndexType angle = radians;

        while( angle < 0.0 )
        {
            // Add multiples of 2*PI until the value is in range.
            angle += ( 2*PI );
        }

        while( angle >= ( 2*PI ) )
        {
            // Subtract multiples of 2*PI until the value is in range.
            angle -= ( 2*PI );
        }

        const IndexType angleDelta = ( ( 2*PI ) / TableSize );
        const int32 index = static_cast<int32>( angle / angleDelta );
		
        return( index );
    }
};
This is the same as before, except that we first reduce the input angle to the correct range before determining the table position.
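Note that the loops take time proportional to the magnitude of the input angle, which could sting for very large inputs. As an alternative, the C library's fmod() reduces the angle in a single step. A hedged sketch, not part of the article's policy set:

```cpp
#include <assert.h>
#include <math.h>

const double PI = 3.14159265358979323846;

// Reduce an arbitrary angle to [0,2*PI) in one step. fmod() keeps
// the sign of its first argument, so a negative result needs one
// extra addition of 2*PI.
static double ReduceAngle( double radians )
{
    double reduced = fmod( radians, 2 * PI );
    if ( reduced < 0.0 )
        reduced += 2 * PI;
    return reduced;
}
```

Whether this beats the loops depends on the typical inputs: for angles already near the valid range the loops iterate at most once or twice, while fmod() costs a fixed (and on some hardware, not cheap) library call.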


Using the LookupTable

If we wanted to use one of our lookup tables, we need only typedef away the template stuff to get something that looks Be-like:

typedef LookupTable
<
    1024,
    float,
    float,
    DefaultStorage,
    SineGenerator,
    RadianLookup
>
XSineLUT;
typedef LookupTable
<
    1024,
    float,
    float,
    DefaultStorage,
    CosineGenerator,
    RadianLookup
>
XCosineLUT;
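To see the typedefs in action, here's a condensed, self-contained approximation of the whole assembly -- a stripped-down skeleton plus the three policies, with the ASSERTs omitted for brevity. The real skeleton class appeared earlier in the article; treat this as a sketch of it, not a verbatim copy:

```cpp
#include <assert.h>
#include <math.h>

typedef int int32;
const double PI = 3.14159265358979323846;

template< int32 TableSize, typename StorageType >
struct DefaultStorage
{
protected:
    StorageType ValueAt( int32 index ) const { return lookupTable[ index ]; }
    void StoreValue( const StorageType& value, int32 index ) { lookupTable[ index ] = value; }
private:
    StorageType lookupTable[ TableSize ];
};

template< int32 TableSize, typename StorageType >
struct SineGenerator
{
protected:
    StorageType Generate( int32 index ) const
    {
        const StorageType angleDelta = ( 2 * PI ) / TableSize;
        return sin( index * angleDelta );
    }
};

template< int32 TableSize, typename IndexType, typename StorageType >
struct RadianLookup
{
protected:
    int32 Lookup( const IndexType& radians ) const
    {
        const IndexType angleDelta = ( 2 * PI ) / TableSize;
        return static_cast<int32>( radians / angleDelta );
    }
};

template
<
    int32 TableSize,
    typename IndexType,
    typename StorageType,
    template< int32, typename > class StoragePolicy,
    template< int32, typename > class GeneratorPolicy,
    template< int32, typename, typename > class LookupPolicy
>
class LookupTable :
    private StoragePolicy< TableSize, StorageType >,
    private GeneratorPolicy< TableSize, StorageType >,
    private LookupPolicy< TableSize, IndexType, StorageType >
{
public:
    LookupTable()
    {
        // Fill the table by asking the generator for each value.
        for ( int32 i = 0; i < TableSize; i++ )
            this->StoreValue( this->Generate( i ), i );
    }

    StorageType operator[]( const IndexType& index ) const
    {
        // Translate the index, then fetch the stored value.
        return this->ValueAt( this->Lookup( index ) );
    }
};

typedef LookupTable< 1024, float, float,
                     DefaultStorage, SineGenerator, RadianLookup > XSineLUT;
```

Client code then reads exactly as you'd hope: declare an XSineLUT and index it with an angle in radians, with no template machinery in sight.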
Consider shipping a class library in which the classes have all been designed using policies. If desired, one could mask this additional layer of flexibility and complexity by simply typedef'ing a bunch of "standard" classes, using default policies throughout, and describing those in the documentation for the end user.

However, even if source code isn't available, the library user still has the ability to customize the behaviour of the classes by creating new policies, and by typedef'ing their own custom versions of the library classes with the new policy classes as template parameters. One could readily imagine sticking policy descriptions and interface requirements in an "advanced" section of the documentation, for those requiring more customized behaviour from the library classes -- those not needing this additional power need never concern themselves with policies at all.

So that's that. I wrote a small test program to get an idea as to how the LookupTable performed in comparison with sin(), cos() and a simple hand-crafted lookup table. The times (in microseconds) to perform 100,000 lookups on a PII-233 are:

* Hand-crafted array: 39939
* LookupTable: 49131
* sin(): 72635
* cos(): 70776
Of course, your mileage may vary, but it looks like we've achieved a reasonable middle ground between hand-coding and just using the math library routines.

I'd be very interested to see the results of the test program on different users' machines. So here's an assignment for you dedicated readers out there: compile the program, run it in a Terminal window, and copy the output. Email the results to Robert Medeiros. Just put "LookupTable test results" or something similar in the Subject line. Looking forward to hearing from you!

The way I look at it, code generation and optimization using templates can only get better, and so for a relatively modest performance hit we gain a lot of flexibility and the promise of better performance in the future, as template technology matures. And of course, the technique just described is broadly applicable and provides a way to create classes compositionally, rather than via the usual C++ inheritance route. I hope you find it useful. If so, you might want to check out the book "Modern C++ Design" by Andrei Alexandrescu, which is where I picked up this technique.


Source Code:
PolicyBasedLookupTable.zip

 
Requiem for Be by Michael Phipps 
I knew ever since August that the game was up for Be, Inc. Selling off all of your intellectual property is a dead giveaway. For a while I was angry. Then saddened. Then we came up with the idea of OpenBeOS, and life became much more frantic, but better.

Still, though, when I happened to look at http://www.be.com the other day and I saw the "we are closing up shop" page, as well as the shutting down of BeDevTalk, it really hammered home the reality of the situation. Since we are in a season of reflection and appreciation of others, I want to just use this editorial to remember some people and times past. If I have forgotten anyone, please forgive me. Or, better yet, write to me and let me know.

My first glimpse of BeOS came in college (no surprise). After the death of the Amiga, I sketched out an operating system design and took classes to learn how to implement it. In one of those classes, I met Scott. I showed him my sketches (object hierarchies) and he told me to look at the BeBook. I was shocked - it was 95% the same as what I had sketched out! When I saw the BeBox (Scott had one), I was immediately taken with it. I started saving money to buy one. Scott went on vacation and I borrowed his for a couple of weeks. It was pure heaven! Right around the time that I had the money to buy the BeBox, the Mac port came out. I decided to put the money toward a Power Computing Mac Clone. Right around the time that I had the money for the Power machine that I was looking at, Steve Jobs shut down the clone companies. So I sat tight.

I went to BeDC in Boston (August 1997). I met many Be "names" that you would be familiar with: Dominic Giampaolo, Brian Swetland, George Hoffman, Pierre Raynaud-Richard, Chris Herborth, Dave Haynie, etc. An investor that morning told me that the big secret was the x86 port. After the conference, I got my first BeOS machine. I started immediately on a project, one that was/is the culmination of many months of planning. Something that would change the nature of the Be user experience forever. It also was a little on the large side. I worked on it for almost 4 years. That brings us to this past August. I had three choices - write off my 4 years, port my work, or re-build BeOS. The first was not an option. The second would be very difficult, but reasonable. So I started looking for an OS that I could live with. I tried Linux. The install process did not go smoothly. After playing with it for a few hours, I abandoned that route. I looked at *BSD, but it didn't seem any better. QNX is not my target market, Windows, well, you know. So, here we are.

I fondly remember reading, learning and contributing to BeDevTalk. I remember the desktop, pre-Tracker. I remember waiting for the database to sync up when you rebooted. I remember "das blinken lights". I remember giving demos and watching people say "I didn't know a Mac could go that fast". I remember picking the locks on the cases that Be brought their computers to demos in so that the demo could go on. I remember (and still have) my Be pocket protector. I remember the 3d kit and driving poor Pierre to distraction with complaints about the VRML loader. I remember interviewing at Metrowerks and offering suggestions on how to improve BeIDE. I remember when Java for BeOS was just around the corner (I saw it run at Metrowerks). I remember hearing that BeOS ran on 16-processor Intel machines. I remember Dominic explaining that he tested bfs by unplugging IDE drives as they wrote. I remember the infamous server vs library preference wars, before Marco was a Be employee.

There are tons more memories. Mostly good. Few bad (except the hostel that I stayed at in Boston), up until the Focus Shift.

I have received a great deal from the Be community. I hope that this project gives some back.