Issue 15, 06 Apr 2002


  In This Issue:
 
A Tale of Two Net Stacks by David Reid 
Michael Phipps keeps referring to all the progress we're making on the networking team, but unless you're subscribed to our mailing list, you may not really be aware of how much we've done. Hopefully this article will explain what we're doing, why we've made the decisions we've made, and why we've taken the paths we've taken. If you're looking for an article with lots of technical information, this won't be it! Maybe I'll write one of those next?


The Beginning

In the beginning... Ah, it seems like such a long time ago! The generally held wisdom is that networking belongs in the kernel. This has proven to be the best place for it on many systems and provides the best performance. It does, however, add significant overhead to the project: developing in kernel land is anything but straightforward even on an open system with the kernel source available, so imagine the prospect of doing it on a closed system against a kernel we didn't know everything about...

When we started looking at actually adding code, we had a debate about the correct path to take. It was decided that going for a userland stack would be the easiest and, if we were careful, it could be moved into the kernel at a suitable point.

Why is a userland stack easier to write? In some ways it's not. The network protocols are the same; the way the data within them has to be manipulated is the same. Where it does gain a huge advantage, especially when you're just starting, is the ability to debug using familiar tools, in a nice "safe" environment that doesn't lead to rebooting when the code crashes. When the stack crashed (and crash it did in those early days), the system didn't crash with it, and it was possible to carry on working. This is something I could only dream about later in the development.

Development went quite quickly and some of my early ideas went out the window as I realised they just couldn't be done. However, the server slowly started taking shape and the data began to flow from and to the network! The various milestones were rewarding to reach. The first time the server read data from the network and the first time it replied to a ping request stand out as being memorable.


The Middle

There was one issue that had been in the back of my mind since the start that I still didn't have a solution for. As we started adding things like sockets and the ability to bind and send/receive data from them, this issue started growing, and the more we looked at it, the messier and more problematic it became.

What was the issue? Put simply: how do we communicate with the network stack? If the stack is running in one team, and we're in another team, how do we talk to it? We can't do it directly, as memory spaces are "ring fenced" to a team. So, do we use shared areas of memory, with semaphores to inform the various parties there's data there? Maybe we use BMessages with a suitable C interface? You see the problem.

What we're talking about is designing and writing an IPC (Inter-Process Communication) mechanism that would be at the heart of our net stack. This filled me with fear, as it was about as far away from the KISS principle as I think we can get. The performance of this layer would be crucial to the whole server, and we'd be writing it from the ground up. No, this wasn't a good idea.


Unexpected Turns

Then along came Philippe Houdoin. Philippe has more background in drivers and kernel modules than most of us in the group, and he came up with a way to have a driver register with the select() hook (not used presently) and then have the operating system actually use it! Bruno and the BFS guys added support very quickly and it became obvious Philippe had been right. This discovery got me thinking, and reopened the debate about how we do things within the networking team.

Philippe went even further and contributed some code that essentially created a socket device that could be used to create/use sockets using the standard system calls. Suddenly things were looking very different.

I had a few days away from the computer, due to work obligations, and started thinking that this might be just what was required to move the code into the kernel, thereby removing the IPC issue altogether. In fact, the more I thought about it, the more certain I became that this was the way to go. When I got home, I started looking at code and experimenting with some ideas. It wasn't long before I had enough working to be able to commit code to our current tree.

It took me a while to get it working, and I was a constant visitor to Kernel Debugger Land (KDL). Eventually things started to come together, and I had code that would react to network packets, running inside the kernel. The next step was simple -- write a socket driver that we could use to communicate with the stack! Oh yes, such a simple exercise! Well, in fact, using the code from Philippe as a base, it turned out to be relatively simple. In about an hour, I had a simple driver that could actually open a socket in the stack!


The Home Stretch

The next step was to make a library that knew how to talk to the socket driver in a sensible fashion. This had been discussed on the list, and the general consensus was to call it libnet.so. The Be version had all sorts of functionality that traditional Unix systems keep in their libc (Be's libroot), so these functions are being moved into our libnet as well. Already we have various inet_ functions available. Writing the library took about 30 minutes, and compiling and linking a small test app against it took about another 20. Suddenly it all worked! Yes, the test app opened a socket in the new net stack and then closed it again. Not much maybe, but a huge step forward in terms of proving our approach.

Over the last couple of days [editor's note: this article was originally submitted 2 weeks ago], I've added support for the functions we had prior to our move to kernel land. Along the way, I've expanded the configuration support and built an ifconfig application that can set the IP address for an interface and show details of the devices configured.

At present libnet.so can do the following...

  • socket()
  • closesocket()
  • bind()
  • sendto()
  • recvfrom()

Support is also written and just needs testing for:

  • listen()
  • connect()


Where next?

Well, the path has been long and has taken more twists than I care to remember. The result? Today we have a set of code that builds as either a kernel network stack or a userland application. The userland application isn't really usable for much beyond basic testing, as tests need to be written into the application itself, but it is still proving invaluable in debugging new code.

I'd like to say that I'm 100% sure what will be done next, but I'm not. There are a lot of things that need to be fixed and even more things that need to be added. Among the highlights of things that need to be done before long are:

  • routing sockets
  • ip fragmentation
  • ip options
  • icmp error function
  • netstat application

Well, hopefully that makes things somewhat clearer and gives you an idea of how we're getting on. Keep watching this space.

 
Hardware OpenGL and where it should take us by Michael Phipps 
I want to talk a little bit (and I do mean a little) about the issues around even doing a HWOGL (Hardware OpenGL) implementation. Then I want to talk a little bit about my vision about where things should go and why.

Prepare yourself for an earthshaking statement: a quality implementation of hardware accelerated 3D graphics is hard. Be labored away at it for over a year, with quite a good staff (some of whom I knew personally). You see, video card companies are always tweaking their drivers to squeeze more speed out. There are tons of issues an implementor has to deal with:

  • taking advantage of everything that every card maker offers
  • making up for, in software, what each card maker *doesn't* offer
  • being fast
  • looking good
  • not slagging the system

I like playing games. HW accelerated OpenGL is important to me. I was crestfallen, the other day, when I saw that Neverwinter Nights will not be coming out for BeOS. I can't fault the publisher, since no one could use it legally, but I still don't like it. I *want* BeOS to be a great gaming platform.

So, you are asking yourself, why is he talking about gaming, what does this have to do with 3D GUIs, and, well, why doesn't he just go and do something about it?

All good questions. Gaming is the most frequent and most well understood usage of 3D graphics. Many of the people in the GlassElevator list, recently, described various input methods for 3D in terms of games that they had played. All well and good. 3D games (and 3D guis) are exceptionally unrealistic without hardware acceleration. As for doing something about it, well, there are several issues.

First off, engineers aren't lego blocks -- you can't just arrange them any way that you want. :-) Secondly, no one has come on board and said "I know *a lot* about 3D and I want to do this". Third, it is out of scope for R1, since R5 didn't have it.

OK. So we know why we don't have HWOGL and aren't likely to any time soon. It is on the radar, just a long way off. The next question is, what should we do with it once we have it? There are many levels of use we could put it to.

One is, simply enough, to replicate a 2D gui with it. Use it for z buffering. Easy to understand, fairly easy to write, low risk. Also pretty boring. A second (if you will) level of implementation that I think would be neat is to replace "fake3D" with real 3D.

For example -- windows over and under each other is a 3D concept that is "faked out" in 2D. With real 3D, one could do perspective, for example. Another instance of fake3D is buttons. We use drop shadowing to simulate "pressing" a button. A very good optimization technique. For 1985. How about real 3D buttons? For instance, when you press one, it "dimples" in the exact spot you press, for as long as you press, giving the impression that you left a little mark and that the button is made of a pliable material.

Another example -- the scroll wheel could zoom in and out, giving you the ability to move closer (even, maybe, through a document to what is underneath, then back out). This is real power for the user: instead of bringing a window to the foreground, you could just zoom in toward it, then back out. Pretty cool! The user would be able to control zoom on documents. Instead of every document having a drop-down with percentages of magnification, zoom in and out is built in. Everything would have to be vectors (i.e. not bitmaps) for this to work without being horribly ugly, but that is not too much of a stretch, either.

People often speak of how cool it would be to extend the metaphor of the desktop (and real life) to 3D and have you literally pick up a document and carry it from one place to another. I don't really follow that line of reasoning. We invented computers and their modern uses because the way that people were doing things was inefficient. Why would we want to continue that inefficient way of working? Computers are a different medium to work in than "real life". 3D can be used to emulate real life. But is that the Right Thing to do?

 
Unit Testing by Jeremy Rand 

Unit testing is the process of showing that a part of a software system works according to the requirements created for that part of the system. Unit testing works best when it has the following characteristics:

  • The software component is tested in isolation with as little interaction with other software components as possible.
  • The software component is tested using automated tools so that unit tests can be run with every build of the software if required.
  • All requirements of the software component are tested as part of the unit tests.

Unit testing is not the only type of testing but is definitely a very important part of any testing strategy. Following unit testing, software should go through "integration testing" to show that the components work as expected when put together.

This article describes the "why's" and "how's" of unit testing for the AppKit team of the OpenBeOS project. Although it is intended for the AppKit team, there is no reason other teams couldn't use this information to develop a similar unit testing strategy.


Why is unit testing important?

A basic concept of software engineering is that the cost of fixing a bug goes up by a factor of 2-10x (depending on the source of the information) the later in the development process it is found. Unit testing is critical to finding implementation bugs within a particular component as quickly as possible.

Unit testing also helps to find requirements problems. If you write the requirements (or use cases) for your component from the BeBook, hopefully the BeBook and your use cases will match the actual Be implementation. A good way to confirm that the BeBook documentation matches Be's implementation is to write your unit tests and run them against the original Be code.

Unit tests will also continue to be maintained and run in the future. As the mailing lists show, many people are looking forward to OpenBeOS post-R1, when new features will be introduced above and beyond BeOS R5. These unit tests will be critical to ensuring that any new feature, or even just a bug fix, doesn't break existing functionality.

Speaking of bug fixes, consider adding unit tests for any bugs you identify that slipped through your original unit test suite. This will ensure that this bug or a similar one is not re-introduced in the future.

Finally, unit testing is not the be-all and end-all of testing. As mentioned above, integration testing must be done to show that software components work together. If all unit tests cover all requirements and have run successfully against all components, then a failure has to be due to a bug in the interaction of two or more known working software components.


When should I write my unit tests?

As the AppKit process document describes, the recommended order for implementing a component is:

  1. Write an interface specification
  2. Write the use case specifications
  3. Write the unit tests
  4. Write an implementation plan
  5. Write the code

Please see the AppKit process document for more details about the entire sequence. The unit tests are to be written once the use cases are written and before any implementation work is done. The use cases must be done first because they determine what the tests will be. You need to write as many tests as are required so that all use cases for that component are tested. The use cases should be detailed enough that you can write your unit tests from them.

The unit tests are to be done before implementation for a very good reason. You should be able to run these unit tests against the Be implementation and confirm that they all pass. If they do not pass, then either there is a bug in the unit test itself or you have found a difference between your use cases and the actual implementation. Even if your use cases match the BeBook, if that is not how the actual Be implementation works, we must match the current implementation and not the BeBook. In that case, go back and change the use case so that it matches Be's implementation, and consider adding a note indicating that it doesn't match the BeBook.

Imagine if you completed the implementation and then wrote and ran the unit tests. If you run the tests against your implementation and Be's implementation, you will notice the test passes for your code but fails on Be's. At this point, you will have to change the implementation, change the unit test and change the use case, which is more work than if you had written the unit tests before the implementation. Worse, if you only ran the unit tests against your implementation and not Be's, you may not notice the problem at all.


What kinds of tests should be in a unit test?

The unit tests you write should cover all the functionality of your software module. That means your unit tests should include:

  • All standard expected functionality of the software component
  • All error conditions handled by the software component
  • Interaction with software components which cannot be decoupled from the target software component
  • Concurrency tests to show that a software component which is expected to be thread safe (most things are under BeOS) is safe and free from deadlocks.


What framework is being used to do unit testing for the AppKit?

The AppKit team has chosen to use CppUnit version 1.5 as the basis of all of our unit tests. This framework provides very useful features and ensures that all unit tests for AppKit code are consistent and can be executed from a single environment.

There are two key components to the framework. First, there is a library called libCppUnit.so which provides all the C++ classes for defining your own testcases. Secondly, there is an executable called "TestRunner" which is capable of executing a set of testcases.

For more information on CppUnit, please refer to this website


What AppKit specific modifications have been made to this framework?

The following are the modifications that have been introduced into the CppUnit v1.5 framework:

  • A makefile has been added for the library and the TestRunner.
  • Some "bugs" in CppUnit v1.5 which led to it not compiling under BeOS v5 have been fixed.
  • The TestRunner has been modified to support BeOS based add-ons. Each test which you can select from the TestRunner is found in the "add-ons" directory at runtime. The original TestRunner required you to modify the TestRunner itself when new tests were added.
  • Changed the output from TestRunner. The output includes the name of the test being run and a run time for the test in microseconds.
  • Changed the arguments of the assert functions in the TestCase class from std::string to const char *'s due to apparent concurrency problems with std::string under BeOS when testing threaded tests.
  • Added locking to the TestResults class so that multiple threads can safely add result information at the same time for a single test.
  • The ThreadedTestCaller class was written to allow us to write tests which contain multiple threads. This is an important class because many BeOS components are thread safe and we need to confirm that the OpenBeOS implementation is also thread safe.

This is the list of the important modifications done to CppUnit v1.5 at the time this document is being written. For the latest information about modifications to CppUnit, check the code which can be found in the OpenBeOS CVS repository.


What framework modifications might be required in the future?

This framework will have to evolve as our needs grow. The main issues I think we need to solve are:

  • The format of the test name is an encoded string representing the class definition of the test class from gcc. It is not a very readable format but given that the test class is often a template class and you would like different names for different instances of the template, this seemed the best compromise. Suggestions welcome.
  • The threaded test support added into CppUnit forces you to specify the entry point for each thread in your test. If you are doing a test with a BLooper or a BWindow, these classes start a thread of their own. This thread will not be started through the standard entry point so doing "assert's" from one of these threads will not work. Perhaps we need TestBLooper and TestBWindow classes which will work with the assert's.

If you find you need some other features, feel free to add them to CppUnit.


How do I build the framework and current tests for the AppKit?

As of writing this document, you can build the framework and all the current AppKit tests by performing the following steps:

  1. Checkout the "app_kit" sources or the entire repository from the OpenBeOS CVS repository. There is information at the OpenBeOS site about how to access the CVS repository.
  2. In a terminal, "cd" into the "app_kit" directory in the CVS files you checked out.
  3. Type "make".

Note that the build system for OpenBeOS is moving to jam, so these steps may become obsolete. When the make has finished, you should find the following files:

  • app_kit/test/CppUnit/TestRunner - this is the executable to use to execute tests.
  • app_kit/test/CppUnit/lib/libCppUnit.so - this is the CppUnit library which your tests must link against.
  • app_kit/test/CppUnit/lib/libopenbeos.so - this is the library which contains the OpenBeOS implementation of some Be classes (usually found in libbe.so; called libopenbeos.so to avoid a name clash at runtime).
  • app_kit/test/add-ons/BAutolockTests - this is the addon which contains the tests which are run against the Be and OpenBeOS implementation of BAutolock.
  • app_kit/test/add-ons/BLockerTests - this is the addon which contains the tests which are run against the Be and OpenBeOS implementation of BLocker.
  • app_kit/test/add-ons/BMessageQueueTests - this is the addon which contains the tests which are run against the Be and OpenBeOS implementation of BMessageQueue.

These are the key files which ensure that the tests can be run.


How do I run tests?

You have a few different options for how you run a test or a series of tests. Before you start, however, you must build the code as described in this section. Once it is built, you can run tests in any of these ways:

  • Run "make test" from the app_kit directory. This will cause all of the tests defined in the app_kit/test/add-ons directory to be run.
  • From the "app_kit/test" directory, execute the command "CppUnit/TestRunner -all". This will cause all of the tests defined in the app_kit/test/add-ons directory to be run, and is the same as what happens in the "make test" example above, except that it will not recompile any code that has changed.
  • From the "app_kit/test" directory, execute the command "CppUnit/TestRunner <TestName>" where <TestName> is one of the addons found in the "app_kit/test/add-ons" directory. Only the tests defined in that add-on will be run.


How do I write tests for my component?

The first step to writing your tests is to develop a plan for how you will test the functionality. For ideas of the kinds of tests you may want to consider, you should reference this section.

Once you know the kinds of tests you want, you need to:

  • For every test you want, define a class which derives from the "TestCase" class in the CppUnit framework.

  • Within each test class you define, create a "void setUp(void)" and "void tearDown(void)" member function if required. If before executing your test, you need to perform some actions, put those actions in the "setUp()" member. If you need to cleanup after your test, put those actions in the "tearDown()" member.

  • Within each test class you define, create a member function which takes "void" and returns "void". Within this member function, write the code to execute the test. Whenever you want to ensure that some condition is true during your test, add a line within the member function that looks like "assert(condition)". For example, if the variable "result" must have the value B_OK at a particular point in your test, you should add a line which reads "assert(result == B_OK)".

  • Create a constructor for all of your test classes that takes a "std::string name" argument and pass that onto the TestCase parent class. Add whatever actions you need to take in the constructor.

  • Create a destructor for all of your test classes and take whatever actions are appropriate.

  • Within each test class you define, create a member with the signature "static Test *suite(void)". For a simple test where only one test needs to be run for this class, the contents of this member should look like:

    return(new TestCaller<ClassName>("", &ClassName::MemberName));
    

    Replace "ClassName" with the name of your test class and "MemberName" with the name of the member function you defined your test in. If you need to define more than one test to run from this class, refer to instructions below on how to use the TestSuite class of CppUnit. If you are creating a threaded test, refer to this section.

  • Create one ".cpp" file for defining the "addonTestFunc()" function. This function must exist in global scope within your test addon. The contents of this ".cpp" file will look something like:

    #include "TestAddon.h"
    Test *addonTestFunc(void)
    {
        TestSuite *testSuite = new TestSuite("<TestSuiteName>");
        testSuite->addTest(<ClassName1>::suite());
        testSuite->addTest(<ClassName2>::suite());
        /* etc */
        return(testSuite);
    }
    

    In the above example, replace <TestSuiteName> with an appropriate name for the group of tests and <ClassName1> and <ClassName2> with the names of the test classes you have defined.

  • Create a build system around a BeIDE project, Makefile or preferably a jam file which builds all the necessary code you have written into an add-on.

  • Put this addon into the app_kit/test/add-ons directory and follow the above instructions for how to run your tests.


Are there example tests to base mine on?

There are example tests which you can find in the following directories:

  • app_kit/test/lib/application/BMessageQueue
  • app_kit/test/lib/support/BAutolock
  • app_kit/test/lib/support/BLocker

There are some things done in these tests which make things a bit more complex, but you may want to do similar things:

  • Most tests use a ThreadedTestCaller class, even in some situations where there isn't actually more than one thread in the test.
  • All tests are defined as a template class. The test class is a template of the class to test (if that makes sense to you). For example, to test both the Be and OpenBeOS BLocker and not end up with a symbol conflict, the OpenBeOS implementation of BLocker is actually in a namespace called "OpenBeOS". So, the tests must be run against the classes "::BLocker" and "OpenBeOS::BLocker". The easiest way to do this was to make the class to be tested a template and define it for both "::BLocker" and "OpenBeOS::BLocker".

Even with the complexity, I think this code provides a pretty good example of how to write your tests.


How do I write a test with multiple threads?

If you have a test which you want to define that requires more than one thread of execution (most likely a concurrency test of your code), you need to use the ThreadedTestCaller class. The steps which differ from the above description of how to write a test case are:

  • In your test class, define a member function for each thread you will be starting. All of these member functions must take "void" and return "void". If all the threads in your test perform the exact same actions, it is OK to just define one member function. Usually in the tests I have written, I have called these member functions "TestThread1()", "TestThread2()", etc.

  • In your "static Test *suite()" function for your test class, you must return a ThreadedTestCaller. Imagine that the test class name is "MyTestClass" and you want two threads which run member functions "TestThread1()" and "TestThread2()". That code would look like:

    Test *MyTestClass::suite(void)
    {
        MyTestClass *theTest = new MyTestClass("");
        ThreadedTestCaller<MyTestClass> *threadedTest =
            new ThreadedTestCaller<MyTestClass>("", theTest);
        threadedTest->addThread(":Thread1", &MyTestClass::TestThread1);
        threadedTest->addThread(":Thread2", &MyTestClass::TestThread2);
        return(threadedTest);
    }
    

    If you need to, you can put a number of ThreadedTestCaller instances into a TestSuite and return them in the suite() member function. Examples of this can be found in the BLocker and BMessageQueue test examples.

Otherwise the steps are the same as for other tests. The code gets much more complex if you define your test classes as templates as the examples do.