Issue 12, 23 Feb 2002

  In This Issue:
Another Valentine's Day Massacre? by Michael Phipps
I am not a lawyer, but I play one on TV...
Comment about comments by Michael Phipps
When to use the Client/Server model by Nathan Whitehorn

What news could possibly overshadow the first (micro) release of OpenBeOS in the Be community? It would have to be something pretty big. A lawsuit between Be and Microsoft would certainly qualify.

Despite the fact that I am not a lawyer, I am a pretty good reader. So I read the 21-page brief from Be's web site (you know the URL by now). There were not a whole lot of legal words that I didn't know. But there was an awful lot of history, and allegations that I was certainly not aware of.

The whole Hitachi story is told there, as are stories about Gateway, Java and HTML. Some very good reading, for those who are so inclined. There is also some very interesting information about the Great Focus Shift. It turns out that Be had some major partners waiting in the wings for IAs before they announced the GFS. Unfortunately, Microsoft had some henchmen do their dirty work, and the GFS was a failure. Some would say that it was destined to be so from the beginning. I personally think that IAs are a great idea - implemented badly, but a great idea. If Be had been able to license more, I think that IAs could have really taken off in much the same way that PDAs have.

The "Prayer for Relief" is interesting, as well. There is no requirement that Microsoft stop doing any of these things, or that they do anything to make the OS market an easier place to succeed in. This, folks, is simply about money - maximizing shareholder profit. Microsoft will certainly do one of two things: either buy them off (settle out of court) or stretch this out as long as they possibly can. Buying them off might well be the smart thing to do. Be's value at IPO was around 60 million dollars. Settling for that amount would be a drop in the bucket for Microsoft, and I think that most of Be's shareholders would be exuberant to receive that kind of a settlement. No admission of guilt would have to be made, and this could be kept very quiet.

One thing that pleases me about this case is that the "browser bundling" issue is not brought up. One could certainly argue that Be bundled Net+. On the other hand, Be did *NOT* make it a fundamental part of the operating system, nor did they make it difficult to replace (or to uninstall, should one so choose). BeOS might well be the shining example of the "right way" to deal with browsers. Yet they did not bring the issue up. I would like to hope that that is because it isn't the issue and never was (except for Netscape). Ignoring the browser bundling issue allows us to ignore the stupidity of the "improving the technology for the consumer" argument. No, the issue here is that Microsoft used its monopoly power to force OEMs into not accepting BeOS - a very simple, very easy to prove argument, and one that was already accepted by the courts.

In any case, the whole history is laid out for us. Be tries a strategy and Microsoft destroys it. Be zigs and Microsoft zags. I would almost say that this brief sounds a little paranoid if Microsoft's anti-competitive practices weren't so well known and despised. I think that denying embrace, extend and extinguish is pretty hard, given the numerous examples. Further, the intimidation practices of Microsoft are well documented as well.

Al Capone would be proud of Bill Gates. The OEM discount structure sounds a lot like a mafia scheme. You buy from us and no one else, or we will send our boys around to rough you up. Look at the bodies stacking up. Netscape, OS/2, Java, Corel, Word Perfect, Ashton-Tate, Lotus, Be. Maybe it isn't such a coincidence that they filed suit right after Valentine's Day.

Comment about comments by Michael Phipps 
First, a disclaimer. Almost no one will agree with me. Almost no one agrees with ANYONE else when it comes to this topic. That is just the nature of people. Some people are verbose, some succinct. Having said that, away we go!

There are many reasons for commenting code. How one comments has to be directly related to why one comments.

The "classic" reason is to clarify the code. When you are doing something clever or non-obvious, you explain what and why.

Example - I was working on a program that deals with dates on Solaris. There was a huge performance issue. Using code profiling, I found that I was spending an inordinate amount of time calling mktime. So I coded a class to "cache" mktime values. In the header for the class, I explained that I created the class because of mktime performance issues, that the caching strategy only keeps the most recently used value because that fit the calling pattern of most of the apps I was working on, and, finally, some ideas for making it cache better if that was ever an issue. Caching a function call, especially one that is POSIX standard, is not a normal circumstance, so I thoroughly documented what I did and why.
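A minimal sketch of that caching idea follows. The class name and structure here are invented for illustration (the original class is not shown in the article); the sketch only assumes the calling pattern described above, where the same broken-down time tends to be converted repeatedly:

```cpp
#include <ctime>

// Hypothetical most-recently-used cache around mktime().
// Assumes repeated calls with the same broken-down time are common.
class MktimeCache {
public:
    MktimeCache() : fValid(false), fCached(0) {}

    time_t Convert(struct tm &when)
    {
        // Compare only the fields mktime() actually reads.
        if (fValid && SameMoment(when, fLast))
            return fCached;
        fLast = when;
        fCached = mktime(&when);   // the expensive call we are avoiding
        fValid = (fCached != (time_t)-1);
        return fCached;
    }

private:
    static bool SameMoment(const struct tm &a, const struct tm &b)
    {
        return a.tm_sec == b.tm_sec && a.tm_min == b.tm_min
            && a.tm_hour == b.tm_hour && a.tm_mday == b.tm_mday
            && a.tm_mon == b.tm_mon && a.tm_year == b.tm_year
            && a.tm_isdst == b.tm_isdst;
    }

    bool fValid;
    struct tm fLast;
    time_t fCached;
};
```

A real version would document exactly this sort of design decision in its header: why only one entry is cached, and what to change if the calling pattern shifts.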

Another example (similar to Duff's device) for copying bytes:

switch (nbytes) {
    case 4 : *dest++ = *source++;   /* fall through */
    case 3 : *dest++ = *source++;   /* fall through */
    case 2 : *dest++ = *source++;   /* fall through */
    case 1 : *dest++ = *source++;
}

Notice the lack of breaks. This allows the code to fall through. Useful only rarely, when performance is critical. I was writing some code that was very performance sensitive, so I commented this function explaining why it worked, and why I used it.

One that we have been discussing recently is self-documenting code (an oxymoron if one ever existed!). The concept here is that by keeping the code next to the documentation, both will be updated in tandem. Since many people choose to document functions anyway, why not take advantage of that, do a slightly better job, and document the API at the same time? Here is an example:

/// Class: BToothbrush
/// Method: RinseOff
/// Parameters: duration : seconds, waterTemp : degrees
/// Returns: cleanliness : float
// This method is called to clean the BToothbrush. It should be called
// after each usage to ensure that we don't end up with cruft.

There are parsers that take this style of documentation and turn it into a generated document that looks kind of, sort of like the BeBook.

Those were some examples of how to comment. Now here are some examples of how not to comment:

for (int i = 0; i < 10; i++) // loop over i from 0 to 9.

If you can't read this C++ code well enough to understand that, then there is no use in reading source at all.

Or this one:

if (c == '(')
    return leftParen; /* left parenthesis */

If you find you have to comment bad code, don't. Please. Rewrite it. Example:

// If we have not come to the end of the file, doSomething.
if (!(1 != (!feof(file))))

Some final thoughts:

If you find yourself explaining variables, rename them. Variable naming is probably far more helpful to understanding code than comments. Oftentimes old Fortran coders have these issues - variable names were required to be short back in the day. This is obvious, but make sure that your code matches the comments. Comment when it helps, not because someone says that you have to.
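As a small illustration of the renaming advice (the names here are invented, not from any real codebase), compare a line whose names force a comment with one whose names carry the meaning:

```cpp
// Before: cryptic names force a comment.
//   double r = p * n;  // rate times number of periods
//
// After: the names explain themselves, so no comment is needed.
double InterestAccrued(double ratePerPeriod, int numPeriods)
{
    return ratePerPeriod * numPeriods;
}
```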

For a particularly good reference (and a good book overall), read The Practice of Programming, by Kernighan and Pike.

When to use the Client/Server model by Nathan Whitehorn 
Much has been said of the client/server model recently, including tutorials on making such an interface, but little has been said about the benefits and pitfalls of such a design. Although it appears (both from previous newsletter articles and from a cursory examination of the system) that the BeOS does the vast majority of its tasks remotely, this is only true of about 10% of the API.

A client/server design is useful in exactly two circumstances. The first is when state must be coordinated between teams. The Be API includes a number of kits that are utterly serverless (the Support, Device, Storage, and Translation Kits); these have a common element: they coordinate nothing between teams. Kits that do, like the Interface Kit, have only a few elements that interact directly with the server (BView and BWindow), and these are all that need to be coordinated or maintained remotely. The second is when the process has an extraordinary init time, which can then be shared among applications. Michael Phipps' example of a dictionary server could require a lengthy initialization time (more than a second or so); if this were done locally, it could vastly increase the load time of every client application.

Many APIs, however, should be implemented locally. Client/server interactions require the use of IPC, which can be quite slow for large data transfers. Further, servers require system resources even when not in use, which a shared library does not. The dictionary server, for example, would use large quantities of RAM continuously, even when no word-processing applications are being run. And, although not visible to the user, RPC interfaces can be a real pain to implement properly.

The final advantage of libraries is that they allow the use of wonderful things like abstract bases. The Translation Kit, for example, uses BPositionIO pointers, allowing the use of an infinite number of input and output methods. Although this is possible in a server by, say, using ports to send read/write messages and requests, it is likely to be slow and cumbersome, because the client has to implement a way to accept read requests, send messages with read data, and receive write requests, not to mention the two trips to the kernel per write and one per read. Data will also have to be copied twice for every operation (to the port and from the port).
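To make the abstract-base point concrete, here is a minimal sketch in the same spirit as BPositionIO (which exposes positioned ReadAt/WriteAt calls). The class names below are invented for illustration; the point is that the "translator" only sees the abstract interface, so any backend - memory, file, network - can be plugged in without IPC:

```cpp
#include <cstring>
#include <cstddef>

// Sketch of the abstract-base idea behind BPositionIO (invented names).
// Any data source that implements ReadAt can be handed to a translator.
class PositionReader {
public:
    virtual ~PositionReader() {}
    // Read up to 'size' bytes starting at 'pos'; return bytes read.
    virtual long ReadAt(size_t pos, void *buffer, size_t size) = 0;
};

// One concrete backend: an in-memory buffer. A file or network
// backend could be swapped in without touching the translator code.
class MemoryReader : public PositionReader {
public:
    MemoryReader(const char *data, size_t size) : fData(data), fSize(size) {}

    virtual long ReadAt(size_t pos, void *buffer, size_t size)
    {
        if (pos >= fSize)
            return 0;
        size_t count = (pos + size > fSize) ? fSize - pos : size;
        std::memcpy(buffer, fData + pos, count);
        return (long)count;
    }

private:
    const char *fData;
    size_t fSize;
};

// A "translator" that knows only the abstract interface.
char FirstByte(PositionReader &source)
{
    char c = 0;
    source.ReadAt(0, &c, 1);
    return c;
}
```

Doing the same thing across a server boundary would mean turning every ReadAt into a round trip through a port, with the copying costs described above.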

People often use hybrids, as well. As I previously remarked of the Interface Kit, only a very few classes are remote. The rest (things like BButton and BTextView) are all implemented locally, using the remote API. Only the bare minimum of interactions should be implemented in the server: placing, say, BButton in the app_server would merely complicate the RPC protocol and decrease extensibility (you could no longer draw on the button face, for example, as is the case with BScrollBar, which is implemented in the app_server).

Servers are an important part of system design, but should be used intelligently. We should not mindlessly put things in servers because servers are cool: this leads to a minimum of extensibility, and often a minimum of speed. Nor should we implement inter-app coordination in a library: this is a recipe for both mental pain and carpal tunnel syndrome. Use your head, and pick wisely.