Issue 4-22, June 2, 1999

Be Engineering Insights: 1, 0, and the Blue Meanies

By Trevor Smith

The testing labs here at Be are running full steam ahead, and the espresso machine is working overtime to keep us wired and on task. Here's a rundown of the release-day nightmares we work to prevent. As developers, you can take this frightening tale and brew your own testing procedures.

That's funny, it works on my machine...

Yes, we've all had that terrible feeling of bringing a stable and beautiful product over to a friend's house (or to a demo) and watching our audience's eyes glaze over when it conks out in "working" code and we fumble to try again. A different video driver, a slower net connection, or less RAM can lead to unexpected and embarrassing crashes.

To deal with this at Be, we've created two labs that regularly ferret out differences in seemingly similar platforms. In our main lab we have racks of OEM systems set up for automated testing. Our testing server can deploy test scripts in wondrous and sometimes evil combinations, which turn up fun facts (like the fact that sleep mode on ThinkPads does terrible things to our sound output, and that shutdown on a quad Xeon can be tricky). When media testing is in full effect, you can walk down the aisles and watch "Clueless," hear the entire White Album, and see Input Recorder controlling paint programs that save to network drives that are compressed and copied a thousand times from IDE to SCSI. Combinations of applications and weeks of duress shouldn't cause a crash. Period.

In our other lab, which I refer to as the Ark, we keep one or two of every card, motherboard, peripheral, processor, and hard drive type that we support. With the speed of cheetahs we can set up almost any system to reproduce developer-submitted bugs. This is also where we perform some of the pre-release sanity checks. For example, every motherboard in the Ark is set up to use the same interrupt for all PCI slots, and then every driver is run through its paces while other drivers generate traffic on the bus. We swap out a card, reboot, run the tests, file any bugs, and move on. The BeOS makes this easy, but I shudder to think what it would be like on other platforms. How many times have I seen the "Found new hardware, insert installation CD" message on the BeOS? Not once.

CrashMe.avi

Media standards are as flexible as flea market prices at closing time. If an application can read the media that it writes, it will often try to pass itself off as standards compliant. Fortunately, the BeOS team thinks that media we create should also be usable on other operating systems, so we spend time trolling the Internet for strangely formatted media and we use industry nonstandard applications to create kooky versions of "virtual (void)." We keep an internal server of strange files, and before every release we run through them all to make certain that they still sound and look as they should. We take media created on the BeOS and play it on other operating systems. Does that sound editor that you just wrote handle zero-length sounds? Can that video sequencer fade from black to an all-white video still and create a portable media file?

A Testing Fairy Tale

Two test engineers were in a crunch. The floppy drive they were currently testing would work all day while they ran a variety of stress tests, but the exact same tests would run for only eight hours at night. After a few days of double-checking the hardware, the testing procedure, and the recording devices, they decided to stay the night and watch what happened. For eight hours they stared at the floppy drive and drank espresso. The long dark night slowly turned into day and the sun shone in the window. The angled sunlight triggered the write-protection mechanism, which caused a write failure. A new casing was designed and the problem was solved. Who knew?

I Heart rand()

A silent sound sample is an example of an edge case, where everything is correct but a bit extreme. Can Tracker handle dragging more than 1024 files to the desktop? Does the Network Kit handle four days of varying-speed FTP transfers? Can you drag and drop a file around the desktop for three hours? We watch the heap, we make sure Pulse doesn't peg out, and we double-check that the BeOS can hang with the weirdos. Does your application handle a thousand screaming monkeys pounding their way towards Hamlet?

while true; do ./your_app --debug --random; done
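
In the same spirit, here's a minimal sketch of the kind of rand()-driven stress loop we have in mind, in plain C++. ProcessSound() and its size limit are hypothetical stand-ins for whatever entry point your own application exposes; the point is simply to throw sizes at it that nobody would type in by hand, zero included.

#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

// Stand-in for your application's real entry point; replace this with a
// call into your own code.
static bool ProcessSound(const float* samples, size_t count)
{
    (void)samples;
    (void)count;
    return true;
}

int main()
{
    srand((unsigned)time(NULL));
    for (int pass = 0; pass < 100000; pass++) {
        size_t count = rand() % 4096;            // zero-length is fair game
        std::vector<float> samples(count);
        for (size_t i = 0; i < count; i++)
            samples[i] = (rand() / (float)RAND_MAX) * 2.0f - 1.0f;
        if (!ProcessSound(count ? &samples[0] : NULL, count))
            printf("pass %d: rejected a %lu-sample input\n",
                pass, (unsigned long)count);
    }
    return 0;
}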


Be Engineering Insights: Design Insight: Five Things to Think About

By Tim Martin

Updating your application for the new release? Trying to add some polish and shine because you want to start charging money for it? Well, in this very busy time before the release, I wanted to share some information on a few simple items that can make your application—and the BeOS in general—the best that it can be.

What follows are the top five interaction and interface issues that plague me and the software-using public in general. Some are common to every platform, others are unique to the BeOS. While you read, think about how you accomplish tasks that involve these issues, then send me your comments.

Modality—"Are you sure?"

Not all questions and interactions deserve a user's undivided attention. One of the most frustrating things about software is when it limits you needlessly. Granted, modal interaction is required in some situations, but too often it's used only out of convenience to the programmer. To take the Mac OS as an example, it has always had modal file panels. This is no doubt an artifact of the way that the system worked long ago, but it has little or no technological basis today. The person who created that panel thought it would be great to lock the user into using it, and eliminate the need to check for changes in the file system or other nasty conditions. For users this is just another example of the computer prompting you for information and then not letting you use the system until it gets an answer.

Need I even mention that this is bad? If modality is used just to call attention to something, I think that color and positioning are preferable to modal blocking. If modality is used because the application can't handle changes to settings while the user keeps interacting with it, then you should explore the idea of "always saved" settings, so that changes take effect immediately. I urge you to explore ways to avoid stacking windows on top of each other within your own apps, allowing users to position things in their own environment.
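
When you really do need to ask a question, the alert itself doesn't have to block. Here's a minimal sketch of the asynchronous form of BAlert::Go(); the message constant and the helper function are hypothetical names for illustration, not a prescription.

#include <Alert.h>
#include <Invoker.h>
#include <Message.h>
#include <Window.h>

const uint32 kMsgDiscardConfirmed = 'dscd';    // hypothetical message constant

void AskAboutUnsavedChanges(BWindow* target)
{
    BAlert* alert = new BAlert("unsaved",
        "Keep your unsaved changes?", "Discard", "Keep");

    // The synchronous form, alert->Go(), would lock the user out until a
    // button is clicked. Passing a BInvoker makes the alert asynchronous:
    // Go() returns immediately, and the message is delivered to 'target'
    // whenever the user gets around to answering.
    alert->Go(new BInvoker(new BMessage(kMsgDiscardConfirmed), target));
}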

Language—"Don't do what 'Johnny Don't' does!"

The Simpsons quote sums this item up nicely. Just because the words and phrasing you use are technically correct, or make sense to you, doesn't mean that everyone else will understand them. Software usability is often sacrificed through the use of improper language: controls and information get presented in terms that users don't speak. The first example that comes to mind is the word "default." Although this word wasn't created in the computer industry, it's found a home there. "Default" may mean something very specific to engineers, but may have various meanings to the general public. We should hesitate to use such words in the interface. Finding clear and concise wording is difficult and may take several attempts before the right solution is found. It pays off, though, if people can understand your program because the wording in the interface is well placed and informative.

Feedback—"Please wait..."

Everyone knows by now that giving feedback to the user during various application operations is important. I realize that on the BeOS we may put application developers in a difficult situation. What I mean is, why would you tell users the system is busy when, at any one point in time, the system is still responsive to their interaction? This is exactly why we don't use the "busy" cursor. Cursor feedback is a great device for quick and easy user notification, but we don't want to use it in the same way as other operating systems. If you are using cursor feedback, it should be to expose different items of interaction as the mouse cursor moves over them. Showing when you're on a particular guide, what type of paintbrush you're using, when you're on the resize corner—these are all situations for employing informative cursors.

In the BeOS an hourglass or a watch cursor would hinder using the cursor to select other items and continue working. So what should we do instead? System feedback should come in the form of in-place notification. Let's take, for instance, a list in your application that needs to be sorted; while the list is being sorted, place a status indicator where the listed items would normally be. This avoids using modal alerts, which, as I said above, we should all use less of. I realize that putting a progress bar or even a text string in the panel might be more difficult than simply changing the cursor or popping up an alert, but it's a more elegant and useful way to solve this problem, and it allows users to keep working and interacting with other parts of the system.
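
Here's a minimal sketch of that idea, assuming a window that owns a BListView; the worker function, the comparison stub, and the two-second snooze() standing in for the slow part are all illustrative, not the only way to do it.

#include <ListView.h>
#include <StringView.h>
#include <OS.h>

// Stand-in for your real comparison function.
static int CompareItems(const void* a, const void* b)
{
    (void)a;
    (void)b;
    return 0;
}

// Runs in its own thread so the rest of the window stays responsive.
static int32 SortWorker(void* data)
{
    BListView* list = (BListView*)data;

    // Put the status right where the items live, instead of changing
    // the cursor or popping up an alert.
    BStringView* status = NULL;
    if (list->LockLooper()) {
        status = new BStringView(list->Bounds(), "status", "Sorting...");
        list->AddChild(status);
        list->UnlockLooper();
    }

    // Stand-in for the slow part: gather or prepare the data without
    // holding the window's lock.
    snooze(2000000);

    if (list->LockLooper()) {
        if (status != NULL) {
            list->RemoveChild(status);
            delete status;
        }
        list->SortItems(CompareItems);    // quick, done under the lock
        list->UnlockLooper();
    }
    return B_OK;
}

// Kick it off from your window, for example when a "Sort" button is hit:
//     resume_thread(spawn_thread(SortWorker, "sorter",
//         B_NORMAL_PRIORITY, fListView));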

Consistency—"That which we call a rose by any other name would smell as sweet."

Repeat after me: Consistency is not the Holy Grail of interface. When two buttons look different, they can interact differently, but if they look similar, then they should behave similarly. I urge developers not to make their applications sterile with conformity, but to remain consistent with their form language as well as their interaction behaviors. There isn't just one solution to this problem, but any time you override the behavior of a control, or load extra functionality onto an existing control, think twice about what you're doing and consider using your own custom parts to meet your particular needs.

Errors—"An unknown application has experienced an unknown error (#0032)."

You've seen messages like this before. When running beta software, I expect to get errors and alerts that tell me things I don't understand. Those messages are meant for the programmers, and I faithfully quote them back to the originators of the software. But—when I'm using commercial software, or even shareware/freeware, I don't expect to need my decoder ring to find out what my computer is trying to tell me. Sometimes I can understand that the application might not have enough information to render a useful message, but I see incomprehensible messages more often than helpful ones. If your app quits because it couldn't find a library, tell me which one. If you crash when you're out of memory, tell me to buy more. If you don't know what happened, tell me that you don't know! And don't tell me it's my fault that something has gone wrong. In fact, I like my software to be apologetic and polite, but most software is not. This is strongly related to the language issue, because errors should be explained in my language.
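
As a minimal sketch of the difference, here's one way to report a missing library by name; the helper function, the library path, and the feature it disables are all hypothetical, and the point is the wording rather than the API.

#include <Alert.h>
#include <String.h>
#include <SupportDefs.h>
#include <string.h>

// Hypothetical reporting helper: name the thing that failed, say what the
// consequence is, and stay polite.
void ReportMissingLibrary(const char* path, status_t err)
{
    BString text;
    text << "Sorry, the library \"" << path << "\" could not be loaded ("
        << strerror(err) << "), so the Echo effect is unavailable. "
        "Everything else will keep working.";

    BAlert* alert = new BAlert("load error", text.String(), "OK");
    alert->Go();    // the alert deletes itself when dismissed
}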

This list could be longer, but I thought I'd leave you with these five issues. I hope you can use them to make your applications better, raising the value of your own software, and making using the BeOS a better experience. Remember, it's a complex task to make things simple.


Developers' Workshop: At the Tone, the Time Will Be...

By Christopher Tate

You've seen a full-featured and glorious example of a Media Kit consumer node, the LoggingConsumer, but what about producer nodes? There are an awful lot of pure-virtual methods declared in the BBufferProducer class; what do they all do? And how do you deal with producer nodes in your applications?

We in DTS are painfully aware of this dearth of BBufferProducer examples. But behold—a new node has arisen that highlights and explains the sundry pitfalls of buffer production in the BeOS Media Kit. It generates pure tones (with parameters for frequency, amplitude, and waveform, of course), reacts properly to difficulties in node delivery, and is generally a model Media Kit citizen.

Before I discuss certain details of its operation, here's the URL, so you can follow along at home.

<ftp://ftp.be.com/pub/samples/media_kit/ToneProducer.zip>

Now then, where were we? Ah, yes: buffer production.

As recommended by everyone remotely involved with the Genki Media Kit, ToneProducer is based on the new BMediaEventLooper class, which enormously simplifies node implementation. There are a few topics still left to the node author, however; here's how ToneProducer deals with them, and how you might deal with them in your own nodes.

Do You Speak My Language?

The first major area that node writers must grapple with is data format negotiation. When two nodes are connected, they go through a multi-step "conversation" to determine what data format to use. ToneProducer's format negotiations are relatively simple: the node only produces 44.1 kHz floating-point raw audio, so the only wild card subject to negotiation is the buffer size.

The format negotiation process uses three BBufferProducer methods, in this order: FormatProposal(), PrepareToConnect(), and Connect(). The producer initiates the whole process in FormatProposal(), proposing its "favorite" format, with wild cards indicating aspects of the media format where variation is acceptable. When PrepareToConnect() is called, the consumer has been given an opportunity to adjust the proposed format to meet *its* needs. At this point, the producer must ensure that the format is still acceptable, and reserve the connection if it is. This is done by choosing the connection's source and destination, and remembering them in whatever cache structure the node chooses to use. Finally, Connect() is called—the end of the dialog. At this point the producer is not permitted to fail; the connection was "guaranteed" by a successful return code from PrepareToConnect().
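
To make that concrete, here's a minimal sketch of what a FormatProposal() along these lines might look like, assuming a mono output. The class and member names (MyToneProducer, fOutput) are illustrative rather than lifted from the sample code, and error handling is trimmed.

#include <BufferProducer.h>
#include <MediaDefs.h>

// MyToneProducer is assumed to derive from BBufferProducer and to keep
// its single output in a media_output member named fOutput.
status_t MyToneProducer::FormatProposal(const media_source& output,
    media_format* format)
{
    if (output != fOutput.source)
        return B_MEDIA_BAD_SOURCE;

    // Start from a fully wildcarded raw-audio format, then pin down
    // everything we insist on. buffer_size stays a wild card so the
    // consumer can choose it.
    format->type = B_MEDIA_RAW_AUDIO;
    format->u.raw_audio = media_raw_audio_format::wildcard;
    format->u.raw_audio.frame_rate = 44100.0;
    format->u.raw_audio.channel_count = 1;
    format->u.raw_audio.format = media_raw_audio_format::B_AUDIO_FLOAT;
    return B_OK;
}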

From an application's viewpoint, by the way, the source and destination IDs that represent the connection are subject to change within the Media Roster's Connect() method. Don't try to save the free media input and media output records that your app found before calling Connect()—they might well be invalid after BMediaRoster::Connect() returns!
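
From the application side, a minimal sketch of that advice might look like this (error checking trimmed, variable names illustrative): fetch the free endpoints, call Connect(), and from then on trust only the records Connect() hands back.

#include <MediaRoster.h>

void ConnectNodes(const media_node& producer, const media_node& consumer)
{
    BMediaRoster* roster = BMediaRoster::Roster();

    media_output freeOutput;
    media_input freeInput;
    int32 count = 0;
    roster->GetFreeOutputsFor(producer, &freeOutput, 1, &count);
    roster->GetFreeInputsFor(consumer, &freeInput, 1, &count);

    // Connect() may rewrite the source and destination IDs, so keep the
    // records it fills in and forget the "free" ones found above.
    media_format format = freeOutput.format;
    media_output realOutput;
    media_input realInput;
    roster->Connect(freeOutput.source, freeInput.destination,
        &format, &realOutput, &realInput);
}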

Where Do I Put My Data?

The second major topic for node writers is buffer management. Implicit in the Media Kit API are such interesting questions as "How many buffers do I need in my buffer group?" and "How exactly do I timestamp buffers, anyway?" Some of these questions are more complex than they first appear.

For example, the number of buffers to allocate in a node's buffer group depends on how long the buffered data will take to reach its eventual destination. After all, if there aren't enough buffers, the producer will be unable to send some data because all the buffers in its group are floating downstream! The usual heuristic for buffer allocation, therefore, is to use enough buffers to completely account for the node's downstream latency, plus one. Because of rounding effects, the formula for this is (downstream latency / buffer duration + 2).
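
In code, that heuristic comes out to something like the following minimal sketch. The member names (fFormat, fDownstreamLatency, fBufferGroup) are illustrative, and the downstream latency is assumed to have been fetched already (for instance via BBufferProducer::FindLatencyFor()).

#include <BufferGroup.h>

void MyToneProducer::AllocateBuffers()
{
    // How long one buffer's worth of audio lasts, in microseconds.
    size_t frameSize = sizeof(float) * fFormat.u.raw_audio.channel_count;
    size_t framesPerBuffer = fFormat.u.raw_audio.buffer_size / frameSize;
    bigtime_t bufferDuration = bigtime_t(1000000.0 * framesPerBuffer
        / fFormat.u.raw_audio.frame_rate);

    // Enough buffers to cover the downstream latency, plus one, plus one
    // more for rounding: latency / duration + 2.
    int32 bufferCount = int32(fDownstreamLatency / bufferDuration) + 2;
    fBufferGroup = new BBufferGroup(fFormat.u.raw_audio.buffer_size,
        bufferCount);
}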

Buffer timestamps are subtle. The first rule is "always work in performance time." Real time (that is, what system_time() or BTimeSource::RealTime() say) is never in sync with other hardware clocks such as audio cards, so trying to reconcile the two is doomed to failure. The second rule is that buffer timestamps should always be recalculated from a remembered start time, based on the amount of media (number of samples or frames) delivered so far. Recalculating from scratch for each buffer avoids cumulative drift that arises from trying to use a precalculated buffer duration.
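
Here's a minimal sketch of that rule, assuming the node keeps a remembered performance-time start (fStartTime) and a running count of frames delivered so far (fFramesSent); again, the member and method names are illustrative.

#include <Buffer.h>

void MyToneProducer::StampBuffer(BBuffer* buffer, size_t framesInBuffer)
{
    // Performance time at which this buffer should be heard: the start
    // time plus however much media we've already delivered, recomputed
    // from scratch every time so no drift can accumulate.
    bigtime_t stamp = fStartTime + bigtime_t(1000000.0 * fFramesSent
        / fFormat.u.raw_audio.frame_rate);

    media_header* header = buffer->Header();
    header->type = B_MEDIA_RAW_AUDIO;
    header->size_used = framesInBuffer * sizeof(float)
        * fFormat.u.raw_audio.channel_count;
    header->start_time = stamp;

    fFramesSent += framesInBuffer;
}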

And this isn't all: other nodes can request that you use *their* buffers, not your own. Your producer should be prepared to handle this case; see the example code for the recommended way to do so. It's pretty straightforward; the only thing to remember is this: make sure you delete your buffer group when you're finished. Failure to do so can leave your buffers orphaned in other parts of the system, which tends to cause Bad Things™ to happen.

Finally, producer nodes have a SetOutputEnabled() method, which acts like a mute button. When output is disabled, the producer sends no buffers downstream. However, just as your CD player keeps spinning while muted, so your producer should continue winding through its data in real time even when output is disabled. Keep this in mind when deciding exactly when to consume the source media, and whether to send buffers downstream....
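
A minimal sketch of that muted-but-still-running behavior follows, reusing the illustrative members from the earlier sketches. ProduceOneBuffer() is a hypothetical helper your HandleEvent() might call, FillWithTone() is a hypothetical tone-generating routine, and fOutputEnabled is assumed to be updated whenever the output is enabled or disabled.

void MyToneProducer::ProduceOneBuffer(size_t framesPerBuffer)
{
    if (!fOutputEnabled) {
        // Muted: keep "playing" so we stay in sync with performance
        // time, but send nothing downstream.
        fFramesSent += framesPerBuffer;
        return;
    }

    // Wait at most one buffer's worth for a free buffer; BufferDuration()
    // comes from BMediaEventLooper.
    BBuffer* buffer = fBufferGroup->RequestBuffer(
        fFormat.u.raw_audio.buffer_size, BufferDuration());
    if (buffer == NULL)
        return;        // every buffer in the group is still downstream

    FillWithTone(buffer, framesPerBuffer);    // hypothetical helper
    StampBuffer(buffer, framesPerBuffer);     // from the sketch above
    if (SendBuffer(buffer, fOutput.destination) != B_OK)
        buffer->Recycle();
}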

Tell Me How To....

The third major topic for node writers is parameterization, the mechanism that lets the user (or other programs) change the node's behavior at run time. There are some vagaries of the BParameterWeb mechanism that bear mentioning, lest everyone make the same mistakes over and over again while learning how to work with the Media Kit.

First, don't allocate a BParameterWeb in your node's constructor. Comments in the Be header files in earlier OS releases notwithstanding, this is *not* the appropriate place to do so. The node's connection to the Media Roster is necessary to the parameter-handling process, and that connection doesn't exist while the node is being constructed. Instead, set up your parameter web in your node's NodeRegistered() method, which is called immediately after the Media Roster is informed of the node's existence.
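
Here's a minimal sketch of that, using a single gain knob as the example; the parameter ID, range, and member names are illustrative rather than copied from ToneProducer.

#include <ParameterWeb.h>

const int32 P_GAIN = 1;        // hypothetical parameter ID

void MyToneProducer::NodeRegistered()
{
    BParameterWeb* web = new BParameterWeb();
    BParameterGroup* group = web->MakeGroup("Tone");

    group->MakeContinuousParameter(P_GAIN, B_MEDIA_RAW_AUDIO,
        "Amplitude", B_GAIN, "", 0.0, 1.0, 0.01);

    // SetParameterWeb() hands ownership of the web to the node, and it
    // deletes whatever web was set before.
    SetParameterWeb(web);

    // Typical housekeeping for a BMediaEventLooper-based node.
    SetPriority(B_REAL_TIME_PRIORITY);
    Run();
}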

Finally, nodes never have to delete their parameter webs. The SetParameterWeb() method deletes the previous web for you, and the BControllable node destructor deletes whatever web is active at the time. Between these two methods, your node never needs to worry about deleting its web.

Applications are another matter. While it's true that the Media Roster's ViewFor() method takes ownership of the web, and deletes it when the view is disposed of, the application is responsible for deleting the web itself in all other cases.

But Wait—There's More!

Err, well, actually there isn't any more. There are still two things that ToneProducer doesn't handle yet: offline mode and SetPlayRate(). Both of these are somewhat intricate, and are being held in abeyance for a future newsletter article. The fact that I sacrificed the Memorial Day holiday weekend to *this* article might have something to do with it as well.

So there you have it: a thorough, profusely documented producer node. Now get out there and write your own already!


A Crack in the Wall: Part II

By Jean-Louis Gassée

Some time ago, I wrote a semi-fictional column regarding the plight of the CEO of a PC clone company ("A Crack in the Wall"). At a quarterly business review for Wall Street analysts, the CEO extolled his vision: Giving buyers more OS choices was A Good Thing. Everything went well—customers loved having Linux and the BeOS installed on their system at the factory, next to the classic Windows. The out-of-the-box experience was great, the options at boot time were easily understood and, since customers could delete the system(s) they didn't want to keep, this was the real thing, freedom of choice—without waste. The PC magazines loved the move, we reaped all the Best Of... awards and generated good will and oodles of free publicity.

Ah, another thing, the CEO continued. The company lost $50 million this quarter because Microsoft fined us for offering other operating systems. Their contract with us gives them the right to increase the price we effectively pay for Windows if we offer other operating systems. Microsoft even invoked an obscure—and confidential—clause in their licensing agreement and grumbled that we had no right to use their boot manager, or any DOS code, to load other operating systems. It's OK for the customer to install a boot manager him/herself, but you, the PC OEM, shouldn't. As a result, they claim we shouldn't offer the out-of-the-box experience I mentioned earlier. Some customer assembly is required.

At this stage, the CEO has lost his audience—and his job.

As I said at the beginning, this is a concoction. But testimony is sometimes tastier than what amateur columnists can dream up. What we have before us is a deposition by Garry Norris, an IBM executive and a government witness in the antitrust suit against Microsoft. In his testimony, Garry Norris describes how Microsoft quintupled the Windows royalties it demanded from IBM, to $220 million. There is some dispute about the exact numbers, but you get the idea.

How the media treated this is noteworthy. One title read "IBM breaks ranks..." This appears to reflect a commonly held belief: PC OEMs didn't want to break a code of silence for fear of some kind of retaliation. In private, PC OEMs "share their thoughts" quite freely. They appear to resent being treated as vassals by Microsoft in its use or abuse of its desktop OS monopoly. In public, they have to take care of business. Who can blame them? Business is competitive enough as it is. Why risk a falling out with Microsoft that will result in a competitive disadvantage? As far as we know, there is no Antitrust Witness Protection Program, so the tension between self-interest and the calculus of common good is understandable.

This leads to another thought: Why IBM? Is this an example of the altruism of an enlightened corporation, or have they decided they no longer have anything to lose in the PC business, as various rumors have intimated in the past few months? There has been speculation—and denials—that IBM wanted out of the PC business, because it has become too commoditized and it's been impossible for them to make a profit. Some have even read something of that nature in their multi-year, multibillion dollar agreement with Dell.

Whatever IBM's reason for breaking the code of silence, their testimony could make this phase of the trial as surprise-filled as the first one.

Creative Commons License
Legal Notice
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.