Issue 27, 02 Oct 2002

  In This Issue:
BeGeistert by Michael Phipps 
Invitation to BeGeistert 009 - "Step by step" in Duesseldorf on the 19th and 20th of October 2002

Wow, we're approaching our 9th BeGeistert. The BeOS community is still alive and well. People still use BeOS, people still develop for BeOS and most importantly people are still developing the BeOS.

BeGeistert is *the* opportunity for the BeOS community to get together face to face, work, exchange ideas and enjoy a beer. Except, of course, for the coders, who get locked in a room and kept happy with caffeine-based drinks and pizza ;-).

We expect a considerable number of developers from all over Europe (it's a French invasion) to attend, hook up and work. If you're interested in getting involved, and the OpenBeOS team repeatedly emphasises they need all kinds of people, then come along and learn how you can help. You don't have to know how to program: you can learn on the job.

A note to all those planning to attend: we are currently hoping to keep the entrance fee down to Euro 10 for the whole event, but the requirements for the locality are such that we need as many people as possible to stay on Friday and Saturday night. If we don't get sufficient people staying overnight, we will have to increase the attendance fee and our chances of coming back are reduced. BeGeistert is supported by BeFAN, the local user group, but our funds are not unlimited. The accommodations are on the floor above the event, so you can sleep when you want and need to: it is not unusual for people to go to bed at 5:00 in the morning or catch some sleep in the afternoon.

For those travelling a long distance: don't forget to check the website for offers of lifts. People come from all over to BeGeistert and quite a few offer lifts from North, South, East or West. Can you get to Paris, Hamburg, Berlin, Frankfurt or Delft? Chances are someone is coming from there so check the website and get in contact. Apart from that Duesseldorf is well connected by rail, road and air. Those wishing to travel from Central and Eastern Europe are encouraged to contact the orga-team directly.

See you there.

Contact: info@begeistert.org.

Glass Elevator, BeUnited, OBOS, and others by Michael Phipps 

There has been some confusion as to the relationship between OBOS and many other BeOS related groups out there. Since this is (definitely) an FAQ, I wanted to dispel some of the cobwebs and clarify where we are.

Glass Elevator is a "sub-project" or mini-project within OBOS. Its purpose in life is to be the repository for all of the post-R1 ideas. If you have an idea about what OBOS "should" be, this is the best place to take it. GE's charter (or plan) is to provide OBOS, at some nebulous point in time, with a "wish list": a master document that will contain all of the features that people have brought up to GE. Some features do not need any further explanation (example - no limit to the number of threads that an application may spawn). Others require a whole separate document explaining how they should function (example - pie menus). GE will deliver a full list of features. OBOS does not promise to implement each and every one of them. These are suggestions, thoughts and ideas, not demands! Every idea from GE will be seriously considered. But not every idea will be used, if for no other reason than some are mutually exclusive.

BeUnited is an organization that is completely and totally separate from OBOS. We have some members in common (myself included), but we are not them and they are not us. They do not and will not tell us what to build. And we do not and will not dictate to them what to do, other than by using the prescribed voting methods. BU's "job" in life, as has been detailed in previous newsletters, is to be a standards building organization. Like, say, POSIX or IEEE. The POSIX committee doesn't tell Linus what to do. Nor does BU tell us what to do. But we do work together. Their point in life is to bring the OSBOS's (B.E.OS, OBOS and others) together to try to standardize APIs. Like POSIX did/does.

BeDrivers.com is a very dedicated group of people who are down in the trenches building drivers for R5. This is (obviously) very important work and we applaud them for it. They are working hard to make R5 usable for us. All of their drivers are also open source wherever possible. The existence of BeDrivers.com relieves a lot of the stress on OBOS to make drivers. There are certain areas that OBOS needs to be responsible for (PCI, USB, FireWire (maybe), etc). BeDrivers.com is really taking responsibility for everything else. And they are doing a wonderful job. Drivers are tough, thankless work (everyone notices you only when it breaks).

OSNews.com is a wonderful website for all of the latest operating system news. Despite many people's protests, I don't believe that the editorial staff of OSNews.com is in any way biased against OBOS. I don't agree with all of their views and perspectives, but their views are not biased just because they differ from mine. I firmly believe that they *WANT* OBOS to succeed. We have had many private emails about that very topic.

YellowTab.com is a company that is producing Zeta. What is Zeta? I can't tell you. :-) But I can tell you this - YT has a definite interest in what we are doing and where we are going. :-)

Palm - you know who they are. We haven't spoken to Palm, nor have they spoken to us. Yes, yes, it is *possible* that they would give us permission to use the BeOS name. But I would *seriously* doubt it, and here is why - they have nothing to gain. Everything a corporation does costs money. In this case, many executives would have to get together (paid hours) to decide that they don't care about the name. They would then have to get with their lawyers. Their lawyers (paid by the hour) would then have to decide that there is *no* harm at all to Palm, Palmsource, etc if we use the name. What if we put out total garbage and destroy the value of the BeOS name? Palm would have to work to disassociate themselves from that. In short, it would cost Palm a fair amount of money to release the BeOS name. And the *best* case scenario is that they gain the good will of this community. Half of whom are furious at them for not releasing the BeOS source to anyone, despite the well published reasons not to do so. No, the truth is that this is a sleeping dog that is best left alone. We don't disturb them and we hope that they do not disturb us.

I hope that this makes the interaction between the groups more clear. I communicate with all of the above (except Palm) on a fairly regular basis. I am a member of BeUnited (on the board of advisors), Glass Elevator and BeDrivers. I have a good working relationship with OSNews and YellowTab. There is a lot more discussion and communication going on behind the scenes than might be apparent. Not to be secretive, but because a few people in a "quiet place" can get a lot more done than a thousand people in a crowded room.

Sorting things out (sort of) by Daniel Reinhold 
I recently came across one of my old flights of fancy -- a one-line bubble sort. A while back, in the midst of working on some sorting routines, I had the brilliant (?) idea of seeing how far I could compress an implementation of bubble sort in C. The resulting function, called 'lilbub' (little bubble sort), contains a single line, only 65 characters long:
void lilbub (int *a, int *b)
{
    int *p=b,t;for(;b>(p-a?a:(p=b,++a));*p>t?(p[1]=*p,*p=t):0)t=*p--;
}
It kind of looks like a submission for the obfuscated C contest.

Does it work? Heck, yeah! Just pass pointers to the first and last elements of the array to be sorted. That is, for an array a[] of size n, do this: lilbub(a, a+n-1);. It works that way, not so much for efficiency, but because I found I could reduce the code by a few characters. In fact, it's a very inefficient sort routine, but it is small! Anyone out there think they can reduce it even further?
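For the skeptical, here's a tiny harness (my sketch, not from the original article; it repeats the lilbub definition, with braces, so it compiles stand-alone):

```c
/* the one-line bubble sort, as above */
void lilbub (int *a, int *b)
{
    int *p=b,t;for(;b>(p-a?a:(p=b,++a));*p>t?(p[1]=*p,*p=t):0)t=*p--;
}

/* returns 1 if a[0..n-1] is in ascending order, 0 otherwise */
int is_sorted (int *a, int n)
{
    int i;
    for (i = 0; i < n-1; ++i)
        if (a[i] > a[i+1])
            return 0;
    return 1;
}
```

Calling lilbub(a, a+n-1) on an n-element array and then checking is_sorted(a, n) confirms that it really does sort.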

What kind of geek am I?

The kind of folks who are attracted to programming and computer science seem to have a knack for obsessing over certain 'ideal' problems. For example, how many academic papers have been written on efficient methods for determining whether an integer is prime? How many prime sieve functions have been implemented in all the different languages? It seems totally out of proportion to the importance of the problem. But nerds love to play with primes.

And not just primes. How about searching for the perfect hash algorithm? Computing a billion digits of pi? Trying to find a new and better sort algorithm? We just can't help ourselves. I'm prey to all of the temptations listed above -- and others -- but I'm particularly prone to playing around with sorting routines. Hence my stupid sort trick above.

But it's not all in vain. Sorting techniques do have important real world uses. I'm not about to attempt to cover the wide range of sort algorithms available (that would be a four volume book, in itself), but will look at a couple of basic ones that you ought to know.

Selection sort

Just about the simplest sort is selection sort. It scans thru an array, looking for the smallest of all remaining elements, then moves it to the front. Each pass places one item into its correct, final position.

Consider an array with indexes a[0]..a[N]. The first pass searches the entire array for the smallest element. When found, it is swapped with a[0]. It is where it belongs. The next pass searches a[1]..a[N] for the smallest element. When found, it is swapped with a[1]. It is now where it belongs. And so on until every element is properly placed:

void selsort (int *a, int num_elems)
{
    // selection sort
    int lo = 0;
    int hi = num_elems - 1;
    int tmp;

    for (; lo < hi; ++lo)
    {
        int min = lo;
        int i   = lo+1;

        for (; i <= hi; ++i)
            if (a[i] < a[min])
                min = i;
        tmp = a[lo], a[lo] = a[min], a[min] = tmp;
    }
}
This kind of simple sort routine is very effective for small arrays. This is because the more complicated sort algorithms have more overhead. When the number of items to sort is large, the overhead is well worth the effort. But as the size decreases, it becomes a case of diminishing returns -- it's generally faster to spin thru a simple algorithm for a small set of data.
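As a quick sanity check, here is a minimal usage sketch (mine, not from the original article; it repeats the routine above so it stands alone):

```c
/* selection sort, as above */
void selsort (int *a, int num_elems)
{
    int lo = 0;
    int hi = num_elems - 1;
    int tmp;

    for (; lo < hi; ++lo)
    {
        int min = lo;
        int i   = lo+1;

        for (; i <= hi; ++i)        /* find the smallest remaining element */
            if (a[i] < a[min])
                min = i;
        tmp = a[lo], a[lo] = a[min], a[min] = tmp;  /* move it to the front */
    }
}
```

For example, selsort on {9, 2, 7, 1, 5} leaves the array as {1, 2, 5, 7, 9} -- each pass places one more element in its final position.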


Quicksort

The king of all sorting routines is quicksort. It's not the only fast sorting technique available, but it is as efficient as they get -- no other routine can improve upon it in the general case. Sometimes, in specific instances, knowing certain information about the data to be sorted may allow you to use a different algorithm that will be faster. But for the general case (especially with more or less random data), it can't be beat.

Over the years, I've read many explanations of how quicksort works. I've seen it described in various ways ranging from very straightforward to incredibly obfuscated. In reality, quicksort is very easy to understand, altho a bit trickier to implement. The basic algorithm is quite intuitive, but this can be hard to see by looking at various source code implementations where the layers of optimization tricks shroud the basic simplicity of what is going on.

Quicksort works somewhat like the selection sort above in that each pass goes about the business of placing one array item in its final position. However it does not seek to find the minimum (or maximum) value, but instead, grabs an element at random and determines its correct ending position. This element, called the pivot, is used to arrange the remainder of the array: all items smaller than the pivot are moved to the left end of the array, and all the larger items are moved to the right. Consider this small array of numbers:

5 8 23 9 17 12 4 16 2 11 3 20
If we choose 12 as the pivot, then after the first pass of quicksort, we would arrive at the following array:
2 8 3 9 11 5 4 [12] 16 17 23 20
The pivot item, 12, is in its final, correct location. It divides the original array into two sub-arrays that now need sorting. So, quicksort is called again for the left and right sub-arrays. This subdivision continues recursively until the sub-arrays are so small that they cannot be subdivided any further.

One of the best implementations of quicksort to look at, if you are trying to learn about it, is the following definition (which I have slightly modified) from the excellent book, The Practice of Programming.

#include <stdlib.h>   // for rand()

void swap (int *a, int i, int j)
{
    int temp;
    temp = a[i], a[i] = a[j], a[j] = temp;
}

void quicksort (int *a, int n)
{
    int i;
    int pivot;
    int last = 0;

    if (n <= 1)
        return;

    swap (a, 0, rand () % n);
    pivot = a[0];
    for (i = 1; i < n; ++i)
        if (a[i] < pivot)
            swap (a, ++last, i);
    swap (a, 0, last);

    quicksort (a, last);
    quicksort (a+last+1, n-last-1);
}
This is not the most efficient implementation of quicksort, by any means, but it is simple to follow and makes the basic algorithm fairly clear. The rand() call grabs a random element which is then placed at the beginning of the array. The for-loop does the partitioning, which puts all the elements smaller than the pivot on the left side. The final swap puts the pivot where it belongs. The left and right sub-arrays can then be run thru quicksort themselves.
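To try it out, here is a minimal sketch (mine, not from the book; it repeats the routine so it compiles on its own -- call srand() first if you want a different pivot sequence each run):

```c
#include <stdlib.h>   /* rand() */

/* reproduced from above */
void swap (int *a, int i, int j)
{
    int temp;
    temp = a[i], a[i] = a[j], a[j] = temp;
}

void quicksort (int *a, int n)
{
    int i, pivot, last = 0;

    if (n <= 1)
        return;
    swap (a, 0, rand () % n);    /* random pivot moved to element 0 */
    pivot = a[0];
    for (i = 1; i < n; ++i)      /* partition: smaller items to the left */
        if (a[i] < pivot)
            swap (a, ++last, i);
    swap (a, 0, last);           /* pivot into its final position */
    quicksort (a, last);
    quicksort (a+last+1, n-last-1);
}
```

Running quicksort on the example array from earlier ({5, 8, 23, 9, 17, 12, ...}) yields the elements in ascending order, whatever pivots rand() happens to pick.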


By covering just selection sort and quicksort, I've omitted quite a few alternative sorting algorithms. Ah yes, there are probably as many different sorting techniques out there as stars twinkling in the night sky. You can find many online resources about these: NIST Data Structures and Algorithms is a good one.

But my own experience leads me to the following conclusion: you either have a small number of items to sort, or you don't. If the number of items is small, use selection sort (or some other similarly simple algorithm). Otherwise, use quicksort. This is pretty simplistic, but, for most real world scenarios, it's really all you need. Study various other sorting techniques, by all means -- it is interesting and educational. But for day-to-day business, the advice above will nearly always suffice.

Actually, this advice can be made even simpler for C programmers: just use qsort(). There is a library function called qsort() that is part of the Standard C Library (declared in stdlib.h). It has been designed to handle sorting in a very general fashion. This function will probably handle 99% of your sorting needs in normal programs.

Interestingly, despite the name, qsort() is not guaranteed by the C standard to be an implementation of quicksort. A library writer can use any algorithm they choose. However, I have never actually encountered an implementation of qsort() that did not use the quicksort algorithm -- it's mighty hard to beat.

The qsort() function has a generalized calling interface which allows you to sort objects of any type and size. By passing in your own comparison function, and the size (in bytes) of the array elements, you can use qsort() to sort just about any type of data. For example, the following sorts an ordinary integer array:

#include <stdio.h>
#include <stdlib.h>

int int_cmp (const void *p, const void *q)
{
    int a = *(const int *)p;
    int b = *(const int *)q;

    if      (a <  b) return -1;
    else if (a == b) return 0;
    else             return 1;
}

int main ()
{
    int a[] = {5, 8, 23, 9, 17, 12, 4, 16, 2, 11, 3, 20};
    int n = sizeof(a) / sizeof(a[0]);

    qsort (a, n, sizeof(a[0]), int_cmp);
    // arg1: the array to be sorted
    // arg2: the number of elements in the array
    // arg3: the size (in bytes) of each array element
    // arg4: a pointer to a comparison function
    return 0;
}
The comparison function must always look like the one above. That is, pointers to the two elements to be compared are passed as arguments (declared as void * in the parameter list), and an int value is returned. The return value should be negative, zero, or positive (conventionally -1, 0, or 1) depending on whether the first element is less than, equal to, or greater than the second element.
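One caution worth adding here (my note, not part of the original article): a tempting shortcut is to just return a - b from the comparison function, but that subtraction can overflow when the two values have large opposite signs, producing the wrong sign. The explicit three-way test never has that problem:

```c
#include <limits.h>

/* safe three-way comparison -- no overflow possible */
int int_cmp (const void *p, const void *q)
{
    int a = *(const int *)p;
    int b = *(const int *)q;

    if      (a <  b) return -1;
    else if (a == b) return 0;
    else             return 1;
}

/* the shortcut 'return a - b;' computes INT_MAX - (-1) when
   comparing INT_MAX against -1; that overflows, and the garbage
   sign can report INT_MAX as *smaller* than -1 */
```

With the explicit version, even the extreme pair (INT_MAX, -1) compares correctly.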

The general nature of qsort() is even more useful when sorting arrays whose elements are structures. For example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
    char * name;
    int    age;
} person_info;

int name_cmp (const void *p, const void *q)
{
    const person_info *p1 = (const person_info *)p;
    const person_info *p2 = (const person_info *)q;

    return strcmp (p1->name, p2->name);
}

int age_cmp (const void *p, const void *q)
{
    const person_info *p1 = (const person_info *)p;
    const person_info *p2 = (const person_info *)q;
    int a = p1->age;
    int b = p2->age;

    if      (a <  b) return -1;
    else if (a == b) return 0;
    else             return 1;
}

int main ()
{
    person_info a[] =
    {
        {"fred",  22},
        {"adam",  26},
        {"bert",  19},
        {"larry", 38},
        {"zack",  16},
        {"jimmy",  8}
    };
    int n = sizeof(a) / sizeof(a[0]);
    char ansr[100];
    int i;

    printf ("sort by name or age (n/a)? ");
    fgets (ansr, 100, stdin);
    switch (ansr[0])
    {
        case 'n':
            qsort (a, n, sizeof(a[0]), name_cmp);
            break;
        case 'a':
            qsort (a, n, sizeof(a[0]), age_cmp);
            break;
    }
    for (i = 0; i < n; ++i)
        printf ("\n%12s, %2d", a[i].name, a[i].age);
    printf ("\n\n");
    return 0;
}

A lean and mean integer quicksort

Alright, I've just stipulated that 99% of the time, you can use qsort() to handle all of your sorting needs. What about the other 1%? Maybe you need a super-duper efficient sorting routine for a quite special case scenario. Or maybe you are just the maverick type who really wants to implement his own quicksort routine (IOW, like me!).

Well, a couple of years ago (about the same time I was writing the silly 'lilbub' function), I set about to write the fastest quicksort function that I could. Since the qsort() function already handles general data types, I decided to concentrate on sorting integers -- a basic and very common activity, and something that might truly benefit from a special purpose, fast-as-lightning implementation.

I'm pretty satisfied with what I came up with -- altho I won't guarantee that it couldn't be improved upon (which always seems to be the case). I will say that, after testing it fairly thoroughly, it appears to be faster than the library qsort() for the intended domain of sorting integer arrays. It's a little unfair tho -- the library function has to call an external routine for every comparison, while my comparisons are built-in. But that's the whole point to writing a special purpose sorting routine -- to carefully tune it for a specific set of needs.

The source code is heavily commented so that it may help with understanding the basic workings of quicksort and the partitioning loop. Here's my version:

#include <limits.h>   // for CHAR_BIT

void int_qsort (int *lo, int *hi)
{
    // an efficient, non-recursive quicksort for integer arrays
    // the args should be pointers to the first and last elements
    const int minimum_section_length = 12;
    int *p;
    int *q;
    int  pivot;
    int  tmp;
    int  num_elems;
    unsigned int rv = 1;
    int *stack[2 * sizeof(int) * CHAR_BIT];
    int **top = stack;
    int **bot = stack;

    while (lo < hi || top > bot)
    {
        // get valid [lo, hi] for this iteration
        // -------------------------------------
        if (lo >= hi)
            hi = *top--, lo = *top--;  // pop [lo, hi]

        num_elems = hi-lo+1;

        // run selection sort to finish off small sections
        // ------------------------------------------------
        if (num_elems <= minimum_section_length)
        {
            for (; lo < hi; ++lo)
            {
                // find minimum
                q = lo;
                for (p = lo+1; p <= hi; ++p)
                    if (*p < *q)
                        q = p;
                // put minimum into low element
                tmp = *lo, *lo = *q, *q = tmp;
            }
            // [lo, hi] is now sorted, go to next section
            lo = hi+1;
            continue;
        }

        // check for already sorted section
        // --------------------------------
        // find first unordered element
        for (p = lo; p < hi; ++p)
            if (*p > *(p+1))
                break;
        if (p == hi)
        {
            // [lo, hi] already sorted, go to next section
            lo = hi+1;
            continue;
        }

        // partition into two sections
        // ---------------------------
        // select a pivot
        rv = ((rv ^ *lo) << 1) ^ *hi;   // hash a random value
        p = lo + (rv % num_elems);      // and use as pivot index
        pivot = *p;
        // move pivot into low element
        tmp = *lo, *lo = *p, *p = tmp;
        // = = = = = = = = = = = = = = = = = = = = =
        //                 loop entry:
        //   lo    lo+1                        hi
        //  [pivot|{q..........................p}]
        //                   unexamined
        // = = = = = = = = = = = = = = = = = = = = =
        // partition loop
        for (q = lo+1, p = hi; q <= p;)
        {
            // given:  {q.............p}
            while (*q <  pivot)
                if (++q > p) break;   // inc left ==> [q........p}
            while (*p >= pivot)
                if (--p < q) break;   // dec right    [q...p] <==
            if (q < p)
                // swap: yields new {q...p}
                tmp = *q, *q++ = *p, *p-- = tmp;
            // = = = = = = = = = = = = = = = = = = = = =
            //              loop invariant:
            //   lo    lo+1                          hi
            //  [pivot| < pivot  {q........p}  >= pivot]
            //          smaller   unexamined    larger
            // = = = = = = = = = = = = = = = = = = = = =
        }
        // = = = = = = = = = = = = = = = = = = = =
        //               loop exit:
        //   lo    lo+1          p+1        hi
        //  [pivot| < pivot    p q    >= pivot]
        //          smaller    ^       larger
        //                     |
        //     insert pivot here (at p)
        // = = = = = = = = = = = = = = = = = = = =
        // move pivot (at lo) into final position (at p)
        tmp = *lo, *lo = *p, *p = tmp;
        // = = = = = = = = = = = = = = = = = = = =
        //          after placing pivot:
        //  lo      p-1    p    p+1       hi
        // [{ < pivot } |pivot| { >= pivot }]
        //   new left            new right
        // = = = = = = = = = = = = = = = = = = = =

        // stack larger section, iterate on smaller section
        // ------------------------------------------------
        if (p-lo > hi-p)
        {
            *++top = lo, *++top = p-1;   //    push [lo, p-1]
            lo = p+1;                    // iterate [p+1, hi]
        }
        else
        {
            *++top = p+1, *++top = hi;   //    push [p+1, hi]
            hi = p-1;                    // iterate [lo, p-1]
        }
    }
}

The trickiest part to implementing quicksort is to get the partitioning loop correct. It's not that hard to get right, but it's very easy to get subtly wrong. Once you have a proper working partitioning loop, the rest of the routine falls into place naturally.

One particular sneaky item (in my version) is an optimization within the partitioning loop. Just as the two out-of-order elements are found and are being swapped to different sides, the q pointer is incremented and the p pointer decremented. This is a micro-optimization that speeds up the loop just a bit, but is almost hidden because the inc/dec is performed right there in the swap line. A bit too much code compression, perhaps, but I couldn't resist.

So, what's better about it?

Using the simple quicksort function given near the beginning of the article as a point of reference, in what ways did my implementation make improvements? Well, there are several:

* selection sort used for small sections

The naturally recursive nature of quicksort, where each array is broken down into two sub-arrays, is mostly a strength, but also a minor weakness of the algorithm. At a certain point, when the sub-array sections become small enough, it becomes wasteful to further sub-divide them. It's quicker to just run a simple-minded algorithm over the small section and be done with it.

Since it's hard to beat selection sort for sorting tiny arrays, my quicksort uses it for sorting any section whose size is below a certain threshold. What that threshold should be is hard to say in general (and could, indeed, depend on several factors including the performance characteristics of a particular computer). In my case, I simply tested various values and found that for a dozen or fewer items, using selection sort improved the overall time.

* no calls to rand()

There are a number of methods to use for selecting the pivot (use the first element, use the last, use the middle one, pick a random one, etc.), but I stuck with selecting a random item. But, since the pivot selection is part of the critical partitioning code that is run repeatedly in an inner loop, I decided that avoiding a function call to rand() would be beneficial. So, instead, I hash a random value by using the values of the current first and last element of the array section. It seems to work quite well in practice.

* recursion eliminated

A standard optimization trick is to eliminate the recursion. The recursive calls are not too expensive, and yet they are called so often within the inner loop, that getting rid of the calls definitely improves the speed. Eliminating the recursion is done by keeping and maintaining a local stack.

This has an extra benefit too. In a worst case scenario -- having an already sorted array combined with selecting the first array element as the pivot -- the run-time stack would need to be as large as the array itself. This is a guaranteed stack overflow for big arrays! By managing a local stack, you can avoid this entirely by pushing the bounds of the larger section on the stack and iterating on the smaller section.

We can even determine the maximum depth that the local stack can reach. Since the sort routine only compares integer sized values at a time, there are only so many subdivisions possible. Specifically, for a 4-byte integer, which is 32 bits, you can only subdivide up to 32 times. It's like a game of "guess the number" where someone tries to guess the value and you say "higher" or "lower". In the case of 32-bit integers, after you've compared each bit, you're finished. Another way of saying this is that for 32-bit integers, a binary tree could hold all 2^32 possible values without needing a depth of more than 32 levels.

Therefore, by iterating on the smaller sub-array sections, we minimize the depth needed for the stack. The worst case scenario mentioned above leads to a 0-element left sub-array and (N-1)-element right sub-array. The iteration on the 0-element section finishes immediately, and the bounds of the larger section are popped from the stack. And then again, for the next pass, etc. Thus, the stack never goes deeper than one level. Oddly, what was the worst stack allocation scenario for standard function calls becomes the minimal stack requirements for doing the {iterate-smaller, stack-larger} technique (altho it will still be the slowest scenario).

For this improved technique, the maximum stack depth occurs when the array is truly subdivided into basically equal parts each time thru. Still, in this instance (which results in the fastest sort times), the depth can never go more than 32, the maximum number of subdivisions for a 32-bit domain. Thus, the stack need not be larger than 32 elements. Actually, I only use one stack for both lower and upper bounds (pushed/popped in pairs), so the stack is actually dimensioned as 64 ints.

* check for already sorted sections

The last optimization that I couldn't resist is checking to see if the sub-array being worked on is already in sorted order. This completely removes the worst-case scenario for the standard implementation of quicksort, but at a small price. Whenever the routine is sorting fairly random data, the extra checks for sorted order will be mostly wasted cycles. However, the cost, judging by timing tests, seems quite small. So I leave it in, knowing that it gives the function more predictable run times and completely avoids the horrid worst-case scenario.

Source Code:

Becoming an organization by Michael Phipps 

We are starting on a (slightly) new direction here at OpenBeOS. The admin team and I have decided to apply to become a non-profit organization in the state of New York.

Your first question might well be "why?" Haven't things been just fine as they are now? Well, yes and no. There have been some problems and some risks with the way things have been going.

The first (and foremost in my mind) is that I (personally) have legal risk. If someone decided that they wanted to stop OpenBeOS from happening, I could be personally taken to court, at my own expense, to defend myself. And if I lose, the courts could take everything that I own and a portion of my wages until whatever settlement is paid off. The chances of that are pretty slim, but it could happen, according to my legal counsel.

Secondly, many, many people have come to me and asked me if they could donate financially in a way to help get the project done. I have always had to say "not yet", as I certainly do not want to try to handle all of this in a haphazard manner. If people donate, it has to be handled in a way that is very up front, and professionally accounted for. The monies would have to be accounted for in the proper and legal ways. Something that I do not know how to do, especially from overseas. Also, any money that I received could have tax implications that I am not aware of. Having OpenBeOS be an organization takes care of all of that. The organization will have an accountant who understands these things and can advise us on these issues.

Thirdly, any patents (doubtful), trademarks (likely) or other intellectual property could be assigned to the organization. I don't want to start a flame war about this stuff. Everyone has an opinion on how these issues *should* work. Myself included. But, in today's reality, concepts like trademarks are important. And they should be important to us. They are what stopped Microsoft from subverting Java. And they could, potentially, stop someone from subverting OpenBeOS/NewName into something else. If we ever were to have something patentable, and we decided to patent it, it would be under the same sort of license as we have now - MIT.

Fourthly, this would allow the project to spend money. Some things cost money. Posix specs, books, equipment, etc. There are developers out there, right now, who are being slowed or stopped for lack of certain (cheap) equipment. Others have spent a lot of money to work on this project (personally, I am over $300). Finally, if such finances were to become available, I live in a town where there are dozens of good software engineers who are looking for any work that they can get their hands on.

Finally, it lends us some legitimacy. Most of the successful open source projects are organizations. BSD, Apache, Perl are three off of the top of my head. An organization can go and talk to hardware manufacturers, donors, etc. It can sign NDAs and contracts. Being an organization will, I believe, open doors for OpenBeOS that we would never have seen before.

I know that this is a big step for us. And one that we are just starting. If you have questions, concerns, comments or thoughts, you can *always* contact me. We are not going to go all corporate on you. Be did that. Look where it got them. We are not looking to make this a business, IPO and buy Ferraris. We are just taking the next logical step to move things along.