Issue 32, 12 Dec 2002

  In This Issue:
Beatrice: The Installer by Niels S. Reedijk
Pix, your friend and mine by Michael Phipps
Experiments in scheduling by Daniel Reinhold

So you want a pointer on where to start developing a BeOS application? One of the most frequently asked questions on the mailing lists is: 'I want to start developing, but where do I start?' It isn't easy. Naturally you can't do anything without some C++ knowledge; whole books have been written on that subject, so get yourself one!

But even for experienced C++ developers it may be difficult to get a handle on where to start. In the course of this bi-weekly tutorial I will develop an installer and highlight how some things are done with the BeOS API. The difference between this series and a normal tutorial is that this is a real-life example rather than something useless, so I will run into many practical problems of the BeOS API.

Last of all, I'm not an experienced BeOS developer either; I'm actually new to BeOS too, so for me this is a discovery as well. Naturally I am open to all suggestions regarding my work. Have you discovered a bug? Do you have a question about a particular implementation, or do you have something better? Please email me!


BeOS programming is supposed to be simple, but first there are some concepts you need to master. In this document I will refer to internet pages as well as the Be Book, where you can read more about these subjects. At the bottom of this page you can also see where to download the source as it stands.

Requirements: most definitely the toolkit for BeOS compilation. If you don't have BeOS yet, or you are willing to reinstall everything, take a look at the BeOS Developers Edition somewhere (let Google loose on this one); it includes all you need. Secondly, you need C++ experience; this tutorial is not for someone who hasn't learned C++ yet. Please let me know if the level is too high for you; it isn't an expert's tutorial either.

And last you need a lot of courage. Especially in the beginning, some concepts seem difficult. Nevertheless, once you master them, they are easy and useful. So if you have checked all the items in the list, we can go to our first piece of code!

But wait a minute, I haven't told you what Beatrice is yet! Beatrice is an installer that will be able to replace BeOS software components with their OpenBeOS counterparts, and to restore them in case something doesn't work out. I will only show you the most interesting programming techniques in this tutorial; the bare implementation of the backend probably won't be that exciting. Alas, we still have a long way to go, so let's start now!


Here's our first little piece of code. You can find it at http://www.openbeos.org/samples/beatrice-t1-1.zip

     // Standard practice to avoid multiple inclusion
     #ifndef BEATRICE_APPLICATION_H
     #define BEATRICE_APPLICATION_H

     // This header defines BApplication
     #include <Application.h>
     class BWindow;

     // We derive from BApplication to extend its capabilities
     class BeatriceApplication : public BApplication
     {
          public:
               BeatriceApplication();
               virtual ~BeatriceApplication() {};
          private:
               // A pointer to our window
               BWindow *window;
     };
     #endif
Listing 1.1 BeatriceApplication.h in t1-1

This listing shows the complete contents of BeatriceApplication.h. What exactly are we doing here? Simply put, we are defining our own BApplication class. Every application needs at least one BApplication, which connects to the app server. The app server is, simply put, a 'master' application that provides drawing routines for the applications themselves; it also signals events, for example when the mouse enters a window. Without a connection to the app server a program is useless: you won't be able to create windows.

     #include "BeatriceApplication.h"
     // Defines a rectangle
     #include <Rect.h>
     // Defines BWindow - the base class for windows
     #include <Window.h>

     int main()
     {
          // Creates an instance of BApplication
          BeatriceApplication a;
          // Starts the messaging loop. From this point on the program
          // is 'running' and handles mouse events etcetera.
          a.Run();
          return 0;
     }

     // The constructor of BApplication requires an argument
     BeatriceApplication::BeatriceApplication()
          : BApplication("application/x-vnd.Beatrice")
     {
          // Create a new window with the dimension 30 x 30
          window = new BWindow( BRect( 100 , 100 , 130 , 130 ) , "BeatriceWindow",
                                B_TITLED_WINDOW , B_NOT_ZOOMABLE | B_NOT_RESIZABLE );
          // Draw the window on the screen
          window->Show();
     }
Listing 1.2 BeatriceApplication.cpp in t1-1

What happens? We create an instance of BeatriceApplication, which also implies that we create an instance of BApplication. As soon as BeatriceApplication is created, it connects to the app server; from that point on we can define new windows. In BeatriceApplication::BeatriceApplication() we call the constructor of BApplication with one argument. The definition of the constructor is:

BApplication(const char *signature)

A signature is a unique identifier to the application. It gives BeOS a way to identify the application. The signature is a MIME-type string. There are complex rules to follow, but as this is still early on, you should hold on to the rule that your signature should start with 'application/x-vnd.' From there on, you will probably want to add a company name (if applicable) and the application name (see the example in the code snippet above).

As soon as the BApplication is constructed, it has connected to the app server, and we can add a new window. And so we do: we declared a window in BeatriceApplication.h, and here we create it. The constructor of BWindow looks like this:

BWindow(BRect frame,
     const char *title,
     window_type type,
     uint32 flags,
     uint32 workspaces = B_CURRENT_WORKSPACE)

The BRect type defines a rectangle by its left, top, right and bottom coordinates. Here we have created the rectangle directly inside the call, but writing

     BRect rectangle( 100 , 100 , 130 , 130 );
     window = new BWindow( rectangle , ... );

is equally correct and behaves no differently.

The second argument is the window title. Just a simple string in here to make it look nice.

The third argument is the window type. When the time is right I will tell you more about window types; for now we have just chosen B_TITLED_WINDOW, which gives us a simple window.

The fourth argument is the window flags. You can logically OR (|) multiple of these together. I have passed B_NOT_ZOOMABLE, which prevents the user from zooming the window, and also B_NOT_RESIZABLE, which makes coding the window easier, as its dimensions will not change. There are a lot more window flags; look at them if you have some spare time!

After I have set up the window, I can Show() it, which draws it on the screen. Control is then transferred back to main(), which calls BApplication::Run(); this more or less starts the execution of the application. From that point on, the application accepts events, which is good! Compile the program, run it, and see what you have created...


I assume that you have eagerly started the program and seen the marvels of your first window. Satisfied, you close the window and feel happy, but... you look at the Deskbar and still see the application. Something isn't right: we must adapt the code somewhat to make it quit automatically. The code is in http://www.openbeos.org/samples/beatrice-t1-2.zip.

What we need to do first is subclass BWindow. I have called the subclass BeatriceMainWindow, because we will use it for our main window later on. Notice that I have moved the window's arguments into the constructor of BeatriceMainWindow.

When a user closes the window, the window first receives a request asking whether that is allowed. This is nice, because a text editor with unsaved documents can then first ask whether the user wants to save his or her work. We use that opening given by the BeOS API as well. The method that is called is BeatriceMainWindow::QuitRequested(). A snippet from the code:

     bool BeatriceMainWindow::QuitRequested()
     {
          // Ask the application to quit as well
          be_app->PostMessage(B_QUIT_REQUESTED);
          return true;
     }
Listing 1.3 BeatriceMainWindow.cpp

When the window is closed, we post a message to the BApplication object. The BApplication object can always be reached through the global pointer be_app; BApplication defines it itself, so we don't have to do anything and can just use it. Messages are small packets of data that can be received and processed, and they are a central concept of the Be API; I will discuss them in more detail next time. When we post the B_QUIT_REQUESTED message, BApplication will assess (in the same way as our window) whether it is safe to quit. After posting the message, we return true, which means the window may safely close. In another app, where a user might be asked whether it is alright to close the window, he or she could click 'No'; the method would then return false and the window would stay open.


What have we learned:

  • Each application needs one BApplication object
  • The BApplication object communicates with the app server
  • Each application is identified by a MIME string starting with application/x-vnd.
  • Within the BApplication constructor we can create windows
  • BWindow is the class that embodies windows
  • The BWindow constructor needs the dimensions, a title, the window type and the window flags
  • B_NOT_RESIZABLE makes it impossible to resize the window
  • B_NOT_ZOOMABLE makes it impossible to zoom the window
  • You need to subclass if you want to quit the app on window closure
  • BWindow::QuitRequested is called when the user wants to close the window
  • To post messages, use the PostMessage method
  • To quit an application you post the B_QUIT_REQUESTED message to be_app
  • be_app is a global pointer to the BApplication object
  • QuitRequested() returns true if it is okay to quit the window, and false if not

Exercises (these apply to beatrice-t1-2):

  • Make the window bigger, make it size 33x105
  • Create a second window, which doesn't quit the application
  • Make the window resizable

Next time I will discuss messages and BViews! Bear with me. And if you have any questions, feel free to email me.

Pix, your friend and mine by Michael Phipps 
My parents, probably like yours, have photo albums. As families move apart and grow up, there is only one physical album that has to be shared. Maybe those pictures can be copied, at a fairly high price. If you can get them out of the album, that is. Many of ours were permanently attached.

Some members of my family hadn't seen some of these pictures for years. So I decided to make a CD based photo album. I asked all of the members of my family to scan in whatever pictures they have and get them to me to include.

The problem came in that many members of my family scanned whole photo album pages in. This doesn't make a very nice slide show and is not what I had in mind. So I used the tools I had available. I used Reflection to crop and save. Crop and save. After about 5 minutes of that, I said "there has to be a better way". Or, as my Dad always says "If you take the laziest man and give him the hardest job, he will find the easiest way to do it."

Such was born Pix. There was a small false start involved - I started, originally, with Hello World as a sample. When I was adding scroll bars, I looked at the ScrollBar sample app and found out that it did most of what I was looking for. So I ditched my first effort and mangled ScrollBarSample into what I wanted.

OK, so what did I want? Well, I needed to drag and drop pictures, far quicker than an open dialog. I needed to rotate them left and right by 90 degrees. And I needed the ability to select an area and save it out to disk with an automatically generated filename. This allows the user to click, drag, click save, click, drag, click save, etc. Very quick and easy to use.

ScrollBar app, as found in the sample code, accepts both text and image files. I started by stripping the text capability. I have probably missed some of this somewhere, but the vast majority of it is now gone. I added rotation as a menu option and implemented it. 90 degree rotation is fairly trivial. No real trigonometry involved. The algorithm that I used is not the most efficient in the world, I know. But it rotates a 1600x1200 image in less than a second on my machine and that was enough for me.

Next I added selection. I am particularly proud of this code, as I wrote it and then rewrote it using a better paradigm. If you are working on a GUI or considering doing so, I would highly recommend "Constructing the User Interface with Statecharts" by Ian Horrocks. This book opened my eyes to a whole different way of building GUIs that is *far* simpler and easier to code. The short version of a very long story is that you build a method to redraw every trivial widget from internal state variables. This is a fairly non-intuitive concept for most coders; it seems lazy. It turns out that you save yourself a lot of effort when you find yourself saying things like "oh - the user changed the radio button selection. That means that the slider has to be reset and the pull down can only have items 1,3,7 and 15 in it". Instead of trying to code these rules in every widget's update, you have one method that updates everything. With a GUI, there are (almost) never speed concerns with doing this. I have done it under both BeOS and Windows with great success.

Next, I added saving. There was a tricky bit here that took me some puzzling out. Originally, I tracked the coordinates of the bounding box of the selection. I soon realized that this would not save what I necessarily had in mind - that the view was zooming the original bitmap. There is a small scaling function to deal with that.

Finally, I added the tool bar. This was a fairly quick effort that really made the power of BeOS's model obvious to me. By having another window (the toolbar window) send specific messages to the document window, I was able to build a toolbar window in very short order that had all of the capabilities of the menuing system.

I know that this is not a perfect app. There are some small bugs, mostly oriented around the display of selection, that make it a little less than polished. Still, I was very surprised and impressed at how quickly I was able to build this. And it led me to remember my favorite Amiga applications - DPaint and ArtDepartment. Many people have tried (and most have failed) to create a Photoshop clone for BeOS/OBOS. Many of the things that were very easy with DPaint 4 are hard/impossible with Photoshop. It wouldn't be all that hard to add a lot of the functionality from DPaint to this little app. Maybe someone will want to take up that challenge...

Code is available on request...

Experiments in scheduling by Daniel Reinhold 
The OS scheduler is the heart (nearly literally) of any operating system, so it's got to be in good working order. It needs to be fast, efficient, and distribute the computer's resources fairly. But what does that really mean? What makes a good scheduler?

In order to better understand this critical component, I recently wrote a userland prototype that mimics a real scheduler. This is taking advantage of the 'friendly third option' in kernel development. That is, when you want to test (and/or otherwise examine) a piece of kernel level code, you either:

  1. get a serial cable connected to a second machine and prepare to capture and scan thru monster amounts of debug statements
  2. boot and run under a emulator such as Bochs (funny that I say "such as" -- as if there is another one to use)
  3. write a userland stub that simulates the portion of the kernel that you're interested in and hook your code up to that
The last option is by far the most attractive to me. I definitely prefer staying in the warmth and cozy safety of userland. Besides, simulators are kind of fun to write. And the turnaround time while editing/compiling/debugging is orders of magnitude faster.

It does have some drawbacks: you have to write the simulation stub (which may be a fair amount of work, depending on what you need) and then you have to deal with the fact that your simulation may not model the real thing well enough. Therefore, you don't want to draw too many conclusions from a prototype running in a simulated kernel.

What doth a scheduler do?

The scheduler creates the illusion that multiple programs are running simultaneously on your computer. That's probably the most succinct definition that I can give.

You do know that it's an illusion, right? After all, at any point in time, you've got dozens (maybe even hundreds) of threads running, but only one CPU to run them on. Well, maybe two (or four) CPUs if you have one of those computers, but there is still hardly a one-to-one ratio between running threads and CPUs. Something has got to act as traffic cop, sharing the resources of the CPU(s) with all the running threads that need attention. That something is the scheduler.

The basic idea of a scheduler is this: you have a pool of threads waiting to be run. The scheduler selects one, based upon some criterion, and then runs it. Then another one is selected, and so on. Since computers can switch between threads very quickly, it gives the impression of simultaneous execution (just as watching video gives the appearance of seamless motion even tho, in truth, it's just a sequence of static individual frames). Hopefully, the selection method results in all the threads being allowed their "fair share" of the CPU (whatever that means).

Pool of Waiting Threads

A B C D . . . . . . . Z
    (Thread 'D' is selected to run)
Once you try to start filling in the details for this basic idea, however, things get complicated very quickly. Which threads get to go first (i.e. how should they be ordered)? How long should each run last? Do some threads need to run longer or more often than others? When (and how often) should scheduling decisions be made? And I haven't even brought up the issue of prioritizing.

Now, rather than continue this discussion in a theoretical fashion, I'll cover these topics within the framework of my demonstration scheduler. We can use it to play around with various scheduling ideas. And then actually observe the effects that each one has, rather than just talking about it and running thought experiments in the mind.

A tiny virtual machine

To create a scheduler, you need some basic things: a thread structure, a queue to store them in, a thread selection algorithm (at least one), and a machine to run them on. The machine definition itself requires a machine language and an execution environment for interpreting the machine code. Additionally, as we shall see, it needs support for interrupts and basic time reporting. To support all this, I created a simulated machine called 'tvm' (for 'tiny virtual machine').

After some thought and general messing about, I came up with the following fairly natural division of components in the source code:

  • machine - the machine definition
  • kernel - the pseudo kernel (including thread creation/management)
  • scheduler - the scheduler itself and various scheduling policies
  • tvm - top-level component (read command line args and then start the machine)
They are all linked together somewhat. In particular, the machine definition knows about threads, which wouldn't be true of a real machine. But heck, no harm is done. It keeps the code simpler and doesn't affect the basic design of the scheduler in any substantive way (that I am aware of).


Before describing the tvm machine, I need to point out one thing that has a major impact on the design of the whole thing: the scheduler is preemptive. Preemption allows the scheduler to stop a running thread because its allotted burst has finished. This requires the machine to support timer interrupts.

Schedulers can be non-preemptive. In this case, a thread will run until it is finished or it decides voluntarily to relinquish control. Operating systems which are non-preemptive require the cooperation of running programs to be "nice". It's the honor system. But if just one program is allowed to run that turns out to be a selfish, resource hogging pig, the rest of the system can be reduced to a crawl, and the OS can do nothing about it (the user will have to power off the computer).

This is too grotesque for me, so I have designed a preemptive scheduler. How does this work? Each thread is allotted a fixed time slice (or quantum) in which to run. Once that quantum is finished, the thread's burst is over, and another thread is scheduled to run. The quantum is enforced by timer interrupts.

The program contains a global value called TimerQuantum. It can be set on the command line when starting the scheduler. The default value is 3 (measured in machine ticks). This value determines the frequency of the timer interrupt. When this interrupt occurs, the scheduler is invoked and another thread can then be selected to run.

But what if the thread finishes before its allotted quantum has expired? No problem, actually. The thread simply generates a timer interrupt itself as it is dying. This will invoke the scheduler right away and allow another thread to run.

The machine definition

Ok, on to the machine definition. The tvm machine consists of:

  • a simple stack-based machine language
  • an interpreter to execute the machine code
  • a basic interrupt handling mechanism
Interrupt handlers are stored in the Interrupt Vector Table. The interrupts themselves are processed by the Interrupt Event Queue. The event queue acts like a controller: interrupts are generated by pushing an interrupt number onto the queue, and then handled by retrieving the number from the queue. A few software interrupts are included, such as stack {under/over}flows and general errors. But the only "hardware" interrupt (using the term loosely) supported is the timer interrupt.

Timer interrupts are generated by a machine_tick() routine. This routine simply increments a couple of tick counters. These machine ticks form the basis of the timing functionality: an absolute counter keeps the system time, and a relative counter generates the timer interrupts. The march of time is somewhat quirky: a few calls to machine_tick() are sprinkled thru-out the main machine loop. There is no attempt to measure the relative timing cost of each machine operation -- it is just assumed that each basic routine will consume one tick. It's not a very sophisticated system, but I think it does its task well enough for the scheduler simulation.

What about the machine language? Well, I decided to go with something very simple here: just a tiny stack machine that supports about half a dozen operations. For a brief time (very brief), I flirted with something more in line with a real PC, such as a tiny subset of the Intel instruction set. But fortunately, I tossed that idea away very quickly. Stack machines are simple to create and demonstrate enough properties of a real machine to be useful. And you don't have to worry about registers and memory operands: everything is just pushed on and popped off the stack.

Furthermore, to keep things really simple, all that this simulated machine does is output characters and loop (to automate the output of multiple characters). This way, it will be easy to detect the effects of running threads, observe switching between them, etc. To get a feel for the machine code, consider the following small program:

op prog[] = {'$', opEmit, opHalt};
This program will emit a single '$' character and then halt. The 'op' type (which stands for opcode) is just a typedef for int, so the machine code can be easily placed into integer arrays. All programs for this machine will end with the 'halt' instruction to signify that the execution is finished.

An idle thread

Every computer needs an idle thread. That is, the scheduler has to have something at all times to schedule, so at startup time, an idle thread is created and run. It doesn't do anything itself. It just guarantees that the scheduler will have at least one thread in the queue. That's its whole purpose in life. Here's the idle thread that I created for my simulated machine:

op idle[] = {1, opLoop, opInc, opEndLoop, opHalt};
How does it work? The 'loop' instruction pops the stack to get the counter, i.e. the number of times to run the loop. In this case it finds the 1. It decrements the counter and pushes it back, along with its own index. Next, the 'inc' instruction increments the top stack entry by one. Thus, the loop counter, which was down to 0, is back up to 1 again. The 'endloop' instruction just jumps backward to the beginning of the loop (by setting the instruction pointer to the index that 'loop' had previously saved).

In other words, the loop counter keeps toggling endlessly between 0 and 1. The 'halt' instruction is never reached. This is equivalent to the following C code:

i = 1;
loop:
if (i-- > 0) {
    i++;
    goto loop;
}
The thread data structure

With a machine language defined, we now need a thread to encapsulate the code. That is what a thread is, after all -- the smallest self-contained structure that can be executed. Here is the definition of a tvm_thread:

// the thread structure

typedef struct
{
    char *name;         // thread name (allocated dynamically)
    int   id;           // a unique id

    int   state;        // thread state (BIRTH, READY, RUNNING, DEATH)
    int   priority;     // thread priority (IDLE, LOW, NORMAL, HIGH)
    int   prio_adjust;  // priority adjustment

    op   *code;         // base address of the thread's machine code
    int   ip;           // instruction pointer (index into the machine code)

    op    stack[10];    // thread local stack
    int   sp;           // stack pointer

    thread_stat st;     // thread stats
} tvm_thread;
The thread name is, well, the thread name. Actually, it's not truly needed here, but it doesn't hurt anything and makes thread dumps more readable. The id, of course, is what truly identifies the thread: while we humans prefer textual names, the OS simply looks up threads by their id. Like most systems (probably), these thread ids are just generated sequentially (the first thread is 0, the next thread is 1, and so on).

The thread state determines the life cycle of the thread. There are certainly more possible states than the ones I've allowed for in the tvm scheduler, but that's in keeping with the simple design. More on the priority and priority adjustment in a bit.

The 'code' and 'ip' members reference the machine code for execution. The 'code' member points to the base address of the (array of) opcodes, and the 'ip' member is the instruction pointer -- i.e. points to the next opcode to be interpreted.

Each thread contains its own local stack. This is merely expediency. Normally, a thread would just include pointers to the start and end of its assigned frame on the system stack. But I found it easier to avoid the joys (?) of carving up a global data structure, so each thread gets its own 10-opcode deep stack. More than big enough for the simple test threads that I've created. The 'sp' member is the stack pointer. It always points to the top of the thread's stack.

The thread stat structure holds a few statistics that enable the scheduler's performance to be measured. I won't cover this here.

The Thread Queue

Once threads have been created, they need to be placed into the Thread Queue. This way, the scheduler can find them! All threads in the queue will be marked with a state that indicates their readiness to be scheduled. Actually, in the case of tvm, the threads are almost always ready.

For a heavy duty thread queue capable of handling a large number (and range) of threads, you should have a dynamically allocated structure. But for my little simulator, I found it easiest to go with a simple fixed size array of thread pointers. This makes inserting and removing threads trivial: a new thread is allocated and its pointer is stored in the next available slot. When the thread is finished, its resources are freed and a NULL is written back into the slot.

Initially then, the queue is just an array of NULL pointers. Of course, at startup time, the idle thread is created and inserted, so at least that pointer will always be present.

A global index called 'ThreadNext' is used to keep track of the location in the queue. As each thread is scheduled, the index is incremented. When it hits the maximum array slot, it just wraps around to index 0. This makes the array behave as a circular queue.

[Diagram: the Thread Queue after inserting the idle thread, with the ThreadNext index marked]

[Diagram: the Thread Queue after inserting several more threads]

The Scheduler

The scheduler is invoked regularly, via a timer interrupt. At that point (except for the very first schedule call), there will be a currently running thread. This thread must be stopped, and another thread selected to run (altho, in fact, it might turn out to be the same thread again). Once the next thread is selected, a context switch is made -- that is, the machine's execution environment is adjusted so that the next thread's machine code will be evaluated by the machine interpreter.

Therefore, I have decided that the following basic steps are the foundation of any thread scheduler:

  1. 'Unschedule' the current thread
  2. Select the next thread
  3. Switch to the next thread
This is a bit vague, but really, that's all there is to it! Switching to and from particular threads involves changing their state. Here are the thread states defined in tvm:
enum thread_states
{
    BIRTH_STATE,          // just created, not yet in the queue
    READY_STATE,          // sitting in the queue (not running)
    RUNNING_STATE,        // currently executing
    DEATH_STATE           // execution is finished, marked for removal from queue
};
Most of the time, threads sitting in the Thread Queue will be marked as "ready". Only one thread at a time will actually be in the "running" state (remember -- multiple programs running at the same time is only an illusion!). 'Unscheduling' the current thread basically means moving its state back to "ready". Unless the thread has finished, in which case its state will be "death" and the scheduler will remove it from the Thread Queue immediately.

Switching to the next thread is similarly trivial. The machine interpreter keeps a global called CurrentThread. A context switch merely involves resetting this global to the next thread that has been selected.

Therefore, most of the real work, in terms of scheduling decisions, is made in the middle step -- i.e. selecting the next thread. I decided while writing tvm that I didn't want to wire in just one algorithm for doing this selection, but rather allow a range of possibilities. So I created several scheduling policies. These are just various routines for determining how to pick the next thread for scheduling. A global hook function called get_next_thread() is set to point to one of these routines based on which command line option is specified.

Round Robin

If you have a preemptive scheduler, then you can't overlook Round Robin. This is the simplest and probably most fair (or rather, democratic) scheduling policy there is. It does nothing more than go from thread to thread, allotting each their own quantum.

For example, if you have threads A, B, and C running, then it just schedules A, then B, then C, then A, then B, then C, etc. on and on. Each thread gets completely equal access to the CPU. Here's Round Robin implemented in my scheduler:

static tvm_thread *
gnt_round_robin (void)
{
    int i;
    int slot = ThreadNext;

    for (i = 0; i < MAX_THREADS; ++i)
    {
        tvm_thread *t = ThreadQ[slot++];
        if (slot >= MAX_THREADS)
            slot = 0;

        if (t && t->state == READY_STATE)
        {
            ThreadNext = slot;
            return t;
        }
    }

    return NULL;
}
The problem with Round Robin is that it is a bit too democratic. After all, does every thread deserve equal time? Consider that you have two threads in the queue, and one of them is (of course) the idle thread. Does the idle thread really need to have 50% of the running time? Of course not! It's a do-nothing thread and should be relegated to the background. It's a low priority operation. Aha! The notion of thread priorities.


In the world of threads, there is the idea of priorities. Simply put, some threads are more important than others. For example, you really don't want the mouse or keyboard to become sluggish and lag in their response because some background process (like the mime sniffer, for example) starts to run. Thus a tiered hierarchy is defined for threads from low to high (or urgent) priority.

Categorizing threads by their urgency is great in theory, but a bit hard to determine in practice. Just exactly what kind of activity is most important? For a desktop OS, which must regularly interact with a person, user-oriented IO is usually near the top of the list. Or things which are very sensitive to CPU attention, such as running video codecs.

Well, you can't play video in tvm, so I don't dwell on this too much. Here are all the options you get in my simulator:

enum thread_priorities
{
    IDLE_PRIO,        // the lowest of the low (pond scum of the thread world)
    LOW_PRIO,         // lowly, but honorable
    NORMAL_PRIO,      // just your average thread
    HIGH_PRIO         // must be a VIT (Very Important Thread)
};
Strict priority and starvation

Alright then, prioritizing threads is useful. So let's add a scheduling policy that is priority based. The first stab at this is a policy called "Priority Strict". This means that thread priority is the only criterion for selecting the next thread. It scans thru the Thread Queue, first looking for a HIGH priority thread to run. If not found, then it searches again for a NORMAL priority thread. Then for a LOW priority. Finally, an IDLE priority is accepted, if nothing else can be found.
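Assuming the same ThreadQ and ThreadNext globals as the Round Robin listing, Priority Strict can be sketched as one scan per priority level, from HIGH down to IDLE. The helper function and its exact name are my own scaffolding, not necessarily tvm's:

```c
#include <stddef.h>

#define MAX_THREADS 16

enum { READY_STATE, DEAD_STATE };
enum { IDLE_PRIO, LOW_PRIO, NORMAL_PRIO, HIGH_PRIO };

typedef struct tvm_thread {
    int state;
    int prio;
} tvm_thread;

static tvm_thread *ThreadQ[MAX_THREADS];
static int ThreadNext = 0;

/* scan once around the queue for a READY thread of exactly 'prio' */
static tvm_thread *scan_for_prio(int prio)
{
    int i, slot = ThreadNext;

    for (i = 0; i < MAX_THREADS; ++i) {
        tvm_thread *t = ThreadQ[slot++];
        if (slot >= MAX_THREADS)
            slot = 0;
        if (t && t->state == READY_STATE && t->prio == prio) {
            ThreadNext = slot;
            return t;
        }
    }
    return NULL;
}

/* highest priority wins, unconditionally */
static tvm_thread *gnt_prio_strict(void)
{
    int prio;
    for (prio = HIGH_PRIO; prio >= IDLE_PRIO; --prio) {
        tvm_thread *t = scan_for_prio(prio);
        if (t)
            return t;
    }
    return NULL;
}
```

Note that as long as a HIGH priority thread stays READY, the outer loop never reaches the lower levels -- which is exactly the starvation problem described next.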

This policy is harsh. It only has acceptable run time characteristics if higher priority threads always finish quickly -- or at least ahead of the lower category threads. If not, then you get "starvation". This is where the higher priority threads hog up all the CPU time, while lower priority threads languish for long periods without being run.

This simple-minded priority-based policy is included in the tvm scheduler only to demonstrate its bad effects and to show why you wouldn't want to actually use it for a real scheduler. Scheduling by priority is fine, but you need some kind of balancing mechanism to ensure that high priority threads don't steal all the processor time.

Avoiding thread starvation

I have included three variations of techniques for overcoming "thread starvation" caused by priority-based scheduling.

  1. Aging
  2. Leveling
  3. Random Promotion

Aging refers to the idea that threads that have been overlooked (because they are not high enough priority) are "aging" or "decaying". So to compensate, whenever a thread is rejected, it is examined to see how long it has been sitting marked as "ready" in the Thread Queue. If this waiting time exceeds some particular threshold, then its priority is boosted in the hope that this will improve its chances of being selected on the next round.
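In code, the aging check might look like the sketch below. The ready_since field and the threshold value are assumptions for illustration; the article only specifies that the boost is recorded in the thread's prio_adjust field:

```c
#define AGING_THRESHOLD 50   /* ticks a READY thread may wait; assumed value */

typedef struct tvm_thread {
    int  prio;          /* base priority */
    int  prio_adjust;   /* temporary boost (+) or reduction (-) */
    long ready_since;   /* tick when the thread last became READY */
} tvm_thread;

/* called for each READY thread that the selector passes over */
static void age_thread(tvm_thread *t, long now)
{
    if (now - t->ready_since > AGING_THRESHOLD)
        t->prio_adjust++;   /* boost its chances on the next round */
}

/* the priority the selector actually compares */
static int effective_prio(const tvm_thread *t)
{
    return t->prio + t->prio_adjust;
}
```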

If low priority threads can be boosted when neglected, then why not do the inverse and "punish" high priority threads that have been given too much attention? That is, lower the priority of a thread hog. A reasonable idea, but... since a thread with a higher priority must be doing some kind of important work, you wouldn't want to have it reduced for too long, or you throw away the advantages of prioritizing. So it makes sense to combine any priority reduction technique along with aging.

Therefore, the next available policy is called "leveling". This refers to the fact that both lower priority threads can be boosted and higher priority threads can be diminished, thus giving somewhat of a "leveling out" of priorities. But you don't want things to get too level, because, again, the priority categories are useful. So for my implementation of leveling, the priority reduction is only in effect until the next selection. At that point, the priority returns to its original value.

For both aging and leveling, the priority change is held in the thread's 'prio_adjust' field. This can be either positive or negative, based on whether it is a boost or a reduction. In the case of both policies, the change in priority only lasts until the thread is selected again. Being selected is taken as an indicator that the priority adjustment has done its work and can be tossed away. So the 'prio_adjust' field is set back to 0 and the original priority is assumed.
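The leveling half and the reset-on-selection rule can be sketched like this, assuming the same prio_adjust field (the function names are mine, not tvm's):

```c
enum thread_priorities { IDLE_PRIO, LOW_PRIO, NORMAL_PRIO, HIGH_PRIO };

typedef struct tvm_thread {
    int prio;          /* original priority; never changed directly */
    int prio_adjust;   /* + for an aging boost, - for a leveling reduction */
} tvm_thread;

/* leveling: temporarily diminish a thread that keeps getting selected */
static void level_down(tvm_thread *t)
{
    if (t->prio + t->prio_adjust > IDLE_PRIO)   /* stay above the floor */
        t->prio_adjust--;
}

/* on selection, the adjustment has done its work and is discarded */
static void on_selected(tvm_thread *t)
{
    t->prio_adjust = 0;   /* original priority is assumed again */
}
```

Keeping the original priority untouched and folding the adjustment in separately means the reset is a single assignment, with no bookkeeping about what the priority "used to be".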

The final compensation technique, which I call Random Promotion, is quite simple. When it comes time to select the next thread, then once in a while (at some random interval) the next thread available is selected, no matter what its priority. Exactly how to define what the random interval should be is not easy -- you could try to pick an optimal value thru experimentation. I settled on using the value 5 or ThreadCount-1 (whichever is greater). That is, about every fifth time the scheduler is invoked, it will select the next thread (marked as "ready") in the queue, regardless of its priority. This does a pretty good job of letting some lower priority threads run once in a while without stopping the higher priority threads from getting their (higher) share of attention.
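The interval logic can be sketched with a simple counter. A real implementation might roll a die instead; the deterministic counter below only captures the "about every fifth invocation" behavior, and all the names here are assumptions:

```c
static int PromoteCounter = 0;

/* 5 or ThreadCount-1, whichever is greater */
static int promote_interval(int thread_count)
{
    return (thread_count - 1 > 5) ? thread_count - 1 : 5;
}

/* nonzero when this invocation should ignore priorities and simply
   take the next READY thread in the queue */
static int should_promote(int thread_count)
{
    if (++PromoteCounter >= promote_interval(thread_count)) {
        PromoteCounter = 0;
        return 1;
    }
    return 0;
}
```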

The obligatory caveats

Running the tvm scheduler can be fun. A few sample threads are created in the kernel startup, dumped, and then run. A message is displayed each time a thread dies -- this makes it easier to tell which threads are still affecting the output. When all but the idle thread are finished, a summary of the usage stats is displayed. Since the idle thread never ends, you have to kill the program with Cmd-C (Ctrl-C or Alt-C depending on your keyboard setup).

By running different policies and adjusting the timer quantum, you can see variation in the output and the usage stats. It's difficult to use this output, however, to make any conclusions about which policy is best. There is likely no such thing as a best scheduling policy -- it is too dependent on the run-time environment and the specific set of threads being run. Put another way, there are too many knobs controlling the behavior. Simply adjusting the implementation of the policies, even by something as basic as changing a threshold value, will alter the results. Try it and see.

Also, as lengthy as this article has turned out to be, and as many things as appear to have been stuffed into the simulator, it is still very simplistic and limited. I never got around to adding, for example, a BLOCKED_STATE for threads. This is needed for something as trivial as asking a thread to sleep (e.g. adding an 'opSleep' opcode). Not to mention the possibility of threads engaging in some kind of disk or port IO that would require blocking them. Which would, in turn, require the presence of semaphores for serializing access to these global resources.

And what about communication? Threads in a real OS are engaged in communication all the time. There are the classic signals from Unix, and BeOS adds its own flavor of inter-thread communication. This would then entail adding a WAITING_STATE for communication, or perhaps two states: SENDING_STATE, and RECEIVING_STATE. And so on.

True threads exist in a rich world of interaction, like thousands of bees floating from flower to flower in a meadow. The threads in the tvm simulator, however, are more like little islands unto themselves. In a way, that's not bad. It makes understanding the basics of scheduling easier, which is the whole point. And in any event, you've got the code to play with, so you can extend it any way that you can think of. It's fun.

Source Code:

Moore's Law fears not the Reaper by Michael Phipps 
Every so often, there is an article published by a pundit that says that Moore's Law can't possibly be true X months from now. Well, as you and I know, pundits don't have any more idea what the future will look like than the rest of us do. After all, aren't we all using Web Tablets now? Or NetPCs? How about Video Phones?

There are two issues about Moore's Law that need to be considered here. Every time someone says that transistors can't go closer together, the bright people in MicroE figure out some better way. Dope it differently, etch it with ultra-violet, etc. Every time they complain that the bus can't go faster, a way is found. This is the beauty of competition in the marketplace. If Intel can't top 3 GHz, AMD will surpass them and sell more. If AMD wastes transistors, they pay out of the bottom line. No barrier can be allowed to stand for long. Now, I certainly understand that there are physical limits. Quantum tunneling and all of that. These issues, I think, are where we need to understand what Moore's Law is *really* about and why these things matter.

Moore's Law, while stated explicitly about microprocessors, is not a realistic measure of a computer any more. In the 1970s, the microprocessor *WAS* the computer. A couple of support chips, some A/D and D/A, and presto. Look at the Apple ][. It used the CPU to redraw the screen. Not put things in memory, but actually redraw the screen. Every 30th of a second, the 6502 would be interrupted to redraw the screen. User code executed in the "off" time. At some point in the PC world, they figured out that they could add a fairly cheap chip to the video card and not have to borrow CPU time. While the transistors on the chip didn't change, the end user got more bang for their buck. The Amiga took this to a whole new level, with blitters and DMA driven sound and other chips that allowed the 7.16 MHz processor to seem a lot faster.

Today's world is as far ahead of the Amiga, hardware-wise, as the Amiga was ahead of the Apple ][. Today we have video cards with high end graphics processors that do what we used to call ray tracing in real time. What took hours for my 68030 to do is done 30 times a second on a Radeon 9500, on a slow day. With little to no processor interaction. If you don't believe the GPUs on these video cards are fast, think again. I can render the OpenGL teapot at about 40 fps on my dual Athlon 1.6. Best case, pegging both CPUs. With HWOGL, you can get 200 or more fps with little to no CPU load. Again - no more transistors on the CPU, but a lot more power available to me.

The examples here continue. DMA-based hard drive interfaces by far beat out the old ST506 controllers. Sound works the same way. That means more of my CPU time for my processes, which is a good thing. While there were certainly examples of this in the 70s (in mainframes, mostly), it is a consideration that Moore's Law doesn't take into account.

Am I worming around the issue -- conceding that Moore's Law will end, but claiming that it doesn't count? Well, yes and no. I believe that there will be physical limits that will stop us, some day. But there are so many possibilities that can be explored. Multi-core processors. Exotic materials. Optics. I think that we have years and years left on Moore's Law. We will also learn to design chips better. I find it impossible to believe that x86 is the best and final instruction set that humanity can achieve, in spite of the marketplace pointing in that direction.

But I also believe that you have to look at the whole system, to be fair. Back in the days of the 80387 math coprocessor, one could ask whether it was fair to count those transistors as part of the 80386. Do coprocessors count? Does wise use of SMP machines count? Does it uphold Moore's Law to have a dual or quad or more processor system? Well, not technically. Technically, Moore's Law is about what actually sits on one piece of silicon. I would argue that Moore's Law shouldn't necessarily be constrained to one chip, any more than one could argue that measuring the size of one cylinder of an automobile tells you the whole vehicle's power.

Moore's Law will hold up for a long time to come. And when it finally doesn't, PCs will be too fast and too scalable for anyone to miss its passing.