Issue 4-4, January 27, 1999

Be Engineering Insights: Device Driver Idioms

By Dominic Giampaolo

Every language, whether human or computer, has "idioms"; that is, a common way of expressing a concept or thought. In American English, the phrase "thanks a million" is a convenient way to express great thanks for something. In the C programming language, the idiom

for(ptr=head; ptr; ptr=ptr->next)
  ...

is a common way to iterate through a linked list. Cognizant listeners or readers easily recognize both of these idioms. The nice thing about idioms is that they don't require parsing the individual bits to recognize the meaning of the whole. Idioms are a type of shorthand that efficiently communicates an idea.

In writing device drivers one finds several common "idioms" for achieving a particular result. In this article I'd like to cover some common device driver idioms that BeOS device driver writers should know about. These idioms are a bit more complex than phrases like "thanks a million" but are still simple to recognize. This list is not exhaustive, but it should cover the more common idiomatic expressions in driverspeak (a language littered with hex constants, acronyms, bits, shifts, and bytes).

Starting Up

The first problem most device driver writers have is that they often want only one person to be able to open their device at a time. I've often seen the following code used to accomplish this:

static long open_count = 0;

driver_open(const char *name, uint32 flags, void **cookie)
{

  open_count++;
  if (open_count > 1)
    return EBUSY;

  ...
}

That is the code equivalent of an English speaker saying "what can I do you for." It's just plain wrong. The increment of open_count is not atomic (on a multiprocessor system two concurrent increments can collide and one can be lost), and the count is never decremented when EBUSY is returned, so every failed open permanently inflates the count.

In proper driverspeak, the way to prevent multiple opens of a driver is this:

static long open_count = 0;

driver_open(const char *name, uint32 flags, void **cookie)
{
  if (atomic_add(&open_count, 1) > 0) {
    atomic_add(&open_count, -1);
    return EBUSY;
  }

  ...
}

The use of atomic_add() guarantees that open_count is indeed atomically updated. The return value of atomic_add() is the previous value of open_count, which allows us to check whether we are the first person to increment open_count. If the if test succeeds we are not the first person to open the driver, so we have to decrement open_count to put it back to what it was before and return EBUSY.

This idiom extends to allow a maximum number of open()'s as well. Changing the if test to

#define MAX_OPEN 4

if (atomic_add(&open_count, 1) >= MAX_OPEN)

allows at most MAX_OPEN open() calls to succeed at any one time.
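For concreteness, here is a minimal sketch of the whole pattern. The hook names (my_device_open(), my_device_close()) are placeholders for your own driver's hooks, and the matching decrement belongs in the close (or free) hook; without it the limit is used up by the first MAX_OPEN opens ever made:

#include <Drivers.h>
#include <KernelExport.h>
#include <errno.h>

#define MAX_OPEN 4

static long open_count = 0;

/* open hook: at most MAX_OPEN concurrent opens succeed */
static status_t
my_device_open(const char *name, uint32 flags, void **cookie)
{
  if (atomic_add(&open_count, 1) >= MAX_OPEN) {
    atomic_add(&open_count, -1);    /* put the count back */
    return EBUSY;
  }

  /* ... per-open setup, fill in *cookie ... */
  return B_OK;
}

/* close hook: give the slot back */
static status_t
my_device_close(void *cookie)
{
  atomic_add(&open_count, -1);
  return B_OK;
}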

Waiting for an Event

When a driver performs an I/O operation it usually must wait for that operation to complete. The obvious synchronization method is to use a semaphore. Normally a semaphore is created with a count of 1, which means that a call to acquire_sem() will acquire the semaphore and return immediately. A device driver, however, wants the acquire_sem() to block until an event happens. To accomplish this, we create the semaphore with a count of zero. Then, in the device interrupt handler, we release the semaphore to unblock the thread waiting for the I/O to complete. In code, that looks something like this:

/* in a driver initialization function */
io_done_sem = create_sem(0, "device interrupt sem");

/* in a driver I/O routine */
... set up an I/O operation that will
    complete with an interrupt ....
ret = acquire_sem_etc(io_done_sem, 1, B_CAN_INTERRUPT, 0);
if (ret != B_OK) {
  ... the I/O request did not complete successfully ...
}

/* in the interrupt handler, release the thread
   blocked waiting */
release_sem_etc(io_done_sem, 1, B_DO_NOT_RESCHEDULE);

The key parts here are that the driver I/O routine will initiate an I/O operation and then immediately block on the io_done_sem until the device interrupts and the driver interrupt handler is called. When the interrupt occurs and the kernel calls the driver interrupt handler, it will release the semaphore, unblocking the thread that requested the I/O. At the end of the sequence of events the semaphore is left again with a count of zero and the next I/O request will block as expected.

The idiom in this example is the use of a semaphore to block until an interrupt occurs. This is different from typical application-level use of semaphores, because we create the semaphore with a count of zero so that our first acquisition blocks instead of succeeding immediately.
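Pulling those fragments together, here is a minimal sketch of how the pieces might fit in a driver. The handler and hook names (device_interrupt(), my_device_read()) are placeholders, and the elided register work depends entirely on your hardware:

#include <Drivers.h>
#include <KernelExport.h>

static sem_id io_done_sem = -1;

/* in init_driver(): create the semaphore empty so the first acquire blocks */
status_t
init_driver(void)
{
  io_done_sem = create_sem(0, "device interrupt sem");
  if (io_done_sem < B_OK)
    return io_done_sem;

  return B_OK;
}

/* interrupt handler: acknowledge the device, then wake the waiting thread;
   B_DO_NOT_RESCHEDULE defers the reschedule until after we return */
static int32
device_interrupt(void *data)
{
  /* ... read and clear the device's interrupt status register ... */

  release_sem_etc(io_done_sem, 1, B_DO_NOT_RESCHEDULE);
  return B_INVOKE_SCHEDULER;
}

/* read hook: start the transfer, then block until the interrupt arrives */
static status_t
my_device_read(void *cookie, off_t pos, void *buf, size_t *len)
{
  status_t err;

  /* ... program the device to perform the transfer ... */

  err = acquire_sem_etc(io_done_sem, 1, B_CAN_INTERRUPT, 0);
  if (err != B_OK) {
    *len = 0;
    return err;    /* interrupted, or the semaphore went away */
  }

  /* ... collect the result of the transfer ... */
  return B_OK;
}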

More Initialization

We can combine the idiomatic use of atomic_add() in the first example with the second example to show how to initialize part of a driver once (and only once). Normally, one-time initialization is done in routines like init_hardware() or init_driver(), which the kernel guarantees to be single-threaded. Sometimes, however, that is not possible. If the driver allows multiple open()'s, then we need a mechanism to ensure that initialization happens only once and that any other threads doing an open at the same time will block until initialization is complete.

The resulting idiom is a blend of the previous two idioms. It only depends on the init_driver() routine creating a semaphore with a count of zero. In code, the whole idiom is

static long  init_count = 0;
static sem_id init_sem = -1;

/* in init_driver() */
init_sem = create_sem(0, "init sem");

/* in driver_open() */
if (atomic_add(&init_count, 1) == 0) {

  /* do the initialization */

  delete_sem(init_sem);
} else {
  atomic_add(&init_count, -1);

  /* now wait for the init sem */
  acquire_sem(init_sem);
}

The trick is that delete_sem() releases any threads blocked in acquire_sem() (they return with B_BAD_SEM_ID), and a thread that opens the driver after initialization is complete finds the semaphore already gone, so its acquire_sem() returns immediately. This idiom is somewhat less common, but it is necessary for some drivers. An alternative form (a dialect, if you will) addresses the case in which it's not possible to create the init_sem semaphore first. This case is slightly more complex:

static     long init_count = 0;
static volatile int init_done = 0;

/* in driver_open() */
if (atomic_add(&init_count, 1) == 0) {

  /* do the initialization */

  init_done = 1;

} else {
  atomic_add(&init_count, -1);

  /* now wait for initialization to complete */
  while(init_done == 0)
    snooze(5000);
}

This form will loop while waiting for the variable init_done to be set. The snooze() call will prevent the looping thread from consuming too much CPU time.

Observant readers may ask why no protection is needed around the manipulation of the variable init_done. The answer is that the thread performing the initialization is the only one that ever stores to the variable; the other threads only read it. A store to an aligned, word-sized variable like init_done is atomic, so if only one thread is storing and the other threads are merely reading, there is no race over the variable's value.

Spinlocks

As the dreadnaught of synchronization primitives, the spinlock is a powerful and dangerous tool. And built around this sultan of synchronization primitives is an idiom that carries the strength of four-letter epithets in the English language.

Just like colloquial expressions involving expletives, spinlocks are not appropriate for all situations. The most obvious example of appropriate use of a spinlock is when an interrupt handler and regular driver code must both perform read-modify-write operations on the registers of a device. A spinlock is necessary in this case because in a multiprocessor environment a thread may execute an I/O operation on one CPU, while a different CPU handles an interrupt from the device.

The first question to ask is—why not use a semaphore? The answer is simple: in the BeOS an interrupt handler cannot acquire a semaphore. Acquiring a semaphore can cause the calling code to block. An interrupt handler executes with interrupts disabled and, therefore, cannot block. Not to mention the fact that if an interrupt handler blocked, other devices that share the same interrupt would not be serviced for a very long time.

Now that we're convinced there is a good reason, and that our device requires a spinlock to protect access to its registers, what is the proper idiom in driverspeak?

In the regular ("top half") portion of the driver, the following code works:

cpu_status ps;
spinlock hwlock = 0;

ps = disable_interrupts();
acquire_spinlock(&hwlock);

  ... play with hw registers ...

release_spinlock(&hwlock);
restore_interrupts(ps);

The interrupt handler calls acquire_spinlock() directly, and may omit the calls to disable_interrupts() and restore_interrupts() because it already executes with interrupts disabled.

For code not executed in an interrupt handler, the calls to disable_interrupts() and restore_interrupts() are required for correct use of spinlocks. If interrupts were not disabled before acquiring the spinlock, the system could deadlock if the spinlock were held and an interrupt occurred that needed to lock the same spinlock. Consider it an absolute rule that whenever a driver wants to acquire a spinlock, it must first disable interrupts (and likewise, it must restore interrupts after it releases the spinlock).

The above idiomatic use of spinlocks is often wrapped in two functions that encapsulate the pair of function calls:

static cpu_status
lock_hw(spinlock *lock)
{
  cpu_status ps;

  ps = disable_interrupts();
  acquire_spinlock(lock);
  return ps;
}

static void
unlock_hw(spinlock *lock, cpu_status ps)
{
  release_spinlock(lock);
  restore_interrupts(ps);
}

The spinlock idiom of disable_interrupts/acquire_spinlock is a safe way to guard access to hardware from an interrupt handler and regular driver code.
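To illustrate, here is how the wrappers might be used from both halves of a driver. This is only a sketch: hwlock is the spinlock from the earlier fragment, given file scope so the interrupt handler can see it, and the elided register accesses stand in for whatever read-modify-write work your hardware needs:

static spinlock hwlock = 0;

/* top-half routine doing a read-modify-write on the hardware */
static void
set_control_bits(uint32 bits)
{
  cpu_status ps;

  ps = lock_hw(&hwlock);       /* disables interrupts, then spins */

  /* ... read the control register, OR in bits, write it back ... */

  unlock_hw(&hwlock, ps);      /* releases the lock, restores interrupts */
}

/* in the interrupt handler interrupts are already off,
   so the spinlock is taken directly */
static int32
hw_interrupt(void *data)
{
  acquire_spinlock(&hwlock);

  /* ... read and clear the interrupt status register ... */

  release_spinlock(&hwlock);
  return B_HANDLED_INTERRUPT;
}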

Of course, no discussion of spinlocks can go without the two main postulates of spinlock usage:

  1. Never disable interrupts for lengthy periods of time (i.e., greater than 1 millisecond). Make sure you know what work is going to happen while you have the spinlock held (i.e., the complete code-path from start to finish).

  2. Once a spinlock is acquired, any loops that spin waiting for a hardware bit to change state should also have a safety exit in case the bit never changes state.

Not following those two rules can cause the behavior of the BeOS to deteriorate (the first rule) or lock up (the second rule).
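As an illustration of the second rule, here is a minimal sketch of a bounded wait on a hardware bit. MAX_SPINS and the status_reg_ready() test are hypothetical placeholders; the point is the safety exit:

#define MAX_SPINS 10000

/* poll a hardware ready bit while the spinlock is held, but give up
   eventually instead of hanging the machine if the bit never changes */
static bool
wait_for_ready_bit(void)
{
  int32 i;

  for (i = 0; i < MAX_SPINS; i++) {
    if (status_reg_ready())    /* hypothetical test of the status register */
      return true;

    spin(1);                   /* busy-wait for roughly one microsecond */
  }

  return false;                /* safety exit: the hardware never responded */
}

If the wait fails, the caller can release the spinlock, restore interrupts, and return an error instead of freezing the machine.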

The End: It's Time for the Fat Lady to Sing

This quick tour of idiomatic expressions in driverspeak should be enough to get you talking about and understanding the code in most device drivers. Other more complex (and subtle) idioms exist that involve producers and consumers, but we'll leave those for another article.


Be Engineering Insights: Worn Out Rhymes

By Baron Arnold

I can hear bulldozers out on the playing field. Red white and blue tracked machines spitting diesel exhaust, they lower their blades to the ground.

4.1 is coming down to the wire and where I normally would be totally excited to tell you about all that's great about Be and the BeOS, I'm just typing. I'm burnt out, I have two hours left to write this article and the truth is, I am completely uninspired.

I live this OS. I don't use anything else. All other operating systems are "alternative."

When I'm not breaking BeOS, chasing down bugs and engineers, I make records. I run BeOS on a Power Computing PowerTower 210 (long live PPC). I share 650 square feet of rectangular warehouse space with Ficus Kirkpatrick. I have an old Tascam 688 8-track cassette recorder with DBX noise reduction running at double speed. I master to minidisc and then, under BeOS, I use BamBam to record, sox to swap AIFF to WAV, and 8hz to create mp3's. Finished work gets uploaded to [http://www.catastropherecords.com]. To get in just click on the t-shirt. The password thing doesn't work.

Before the summer of '99 comes I'll be able to trade out that old analog 8-track for BeOS software. There will be an 8-in 10-out A/D converter, phat new digital multitrack/MIDI application choices, and mastering software. Alpha versions of full-featured applications from major names in the audio and video software world will step into testing soon and I am ready to break them. I won't say here what I know, but listen for announcements at NAMM.

Since I'm not really giving you the kind of post punk poetry you have come to expect, I'll pass on a QA tool that we couldn't live without here at Be. InputRecorder is a program originally written by David Chayla and hacked for stability by Steven Black. InputRecorder simply records and plays back keyboard and mouse input. It's a very big hammer. InputRecorder is best used to repeat simple tasks over and over again, say, clicking on the Be menu until the machine dies. That actually used to happen in pre 4.0 days. But not anymore. :) There are a couple of quirks, but hey, it works.

Unzip InputRecorder.zip [ftp://ftp.be.com/pub/samples/input_server/InputRecorder.zip]

and run install.sh from a Terminal window. To use the InputRecorder, enter a file name in the text field. Click Record. Click Stop, and then click Record again. A dialog will ask you if you want to Cancel, Initialize, or Add. Click Initialize. Add does not work. Your mouse should jump to center screen. You are now recording.

Open a Tracker window, pull down a menu, switch to a different workspace, switch back again, and click stop. Click Play and notice that whatever you did will be played back in time, with pixel precision. There are two check boxes you can enable, Lock and Loop. Lock will lock out all mouse and keyboard input during playback. To stop playback press Scroll Lock. Press Scroll Lock again to re-enable the Play button. Loop will, as you probably guessed, loop the playback.

In QA we used this tool extensively to find memory leaks in the Kits, the Tracker, and the app_server during 4.0 testing. Steven Black will be giving you a detailed explanation of the source in March, and possibly a cleaned up version.

Have fun, file good bugs, and spread the word.

BeOS forever.

love/ba


Developers' Workshop: Media Kit Basics: A Time to Every Purpose Under Heaven

By Owen Smith

In reflecting upon my lengthy childhood, one recurring incident springs to mind from my formative years: Late at night, with my parents out and my babysitter long since neutralized, I would dial a particular phone number over and over. After a few rings, it would be answered by a female voice: "At the tone, general telephone time is: 9:22 and thirty seconds. -- BEEP." Some have misconstrued this, years later, as a lonely child reaching out for warmth and sustenance from the mother Bell. Others have incorrectly assumed that it was my budding interest in women that drove me to the rotary dial time and time again. In truth, though, there was only one thing I was interested in—setting my wrist watch to be as close to the REAL time as possible.

Many years later, I am a Be engineer, and despite running approximately fifteen minutes behind the rest of the world, my obsession for time has not lessened.

In contrast to my last article, which dealt with the guts of the Media Kit in some detail, I felt it would be useful to back up a bit, and describe the system of time measurements that are encountered when programming media applications—a brief history of time in the Media Kit, if you will. This will hopefully be useful as a reference when you are dealing with the Media Kit.

When is a Microsecond Not a Microsecond?

The first obstacle to overcome when understanding time in the Media Kit is that time means different things to different components of your system.

  • Your CPU measures one sort of time using its internal timer registers. You pass timeout values to snooze() and read_port_etc() in terms of microseconds as it relates to the CPU's conception of time.

  • Your sound card may measure another sort of time, using its own conception of what time is. It may report this time in terms of "microseconds" as well, but a length of one microsecond to the sound card may vary slightly from your CPU's perception of how long one microsecond is.

  • Finally, your node maintains yet a third measure of time, in terms of your current position in a piece of media. This is also reported in "microseconds," but this conception of time may vary from both the sound card and the CPU.

So, How Do You Tell Time in the Media Kit?

Time in the Media Kit is governed by objects called time sources. The purpose of a time source is to keep track of some component's perception of time. It publishes this time as the time in "microseconds" that its corresponding component perceives. Every time source also knows how to convert between this component's representation of time, called "performance time," and "real time," which is the time that is defined by the CPU.

Your media node keeps track of time by "slaving" to a time source; that is, it picks one particular time source and uses it to keep track of time. The Media Kit creates at least one time source for you to use. It is called the System Time Source, and it represents your CPU's conception of time. In addition, other components of your system that have their own perceptions of time (sound card, video card, external timing device, etc.) may have time sources of their own associated with them.
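For example, from inside a node you can look at both clocks side by side. This is only a sketch; it assumes a node that has already been registered and slaved, so that BMediaNode::TimeSource() hands you the time source in question:

#include <stdio.h>
#include <TimeSource.h>

/* Given the time source a node is slaved to (the value returned by
   BMediaNode::TimeSource()), compare its performance time with real time.
   Both are "microseconds," but they are two different clocks. */
static void
compare_clocks(BTimeSource *timeSource)
{
  bigtime_t performanceNow = timeSource->Now();   // the component's idea of now
  bigtime_t realNow = BTimeSource::RealTime();    // the CPU's idea of now

  printf("performance time: %Ld, real time: %Ld\n", performanceNow, realNow);
}

For the System Time Source the two values coincide; for a sound card's time source they generally will not.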

The time source your node chooses to slave to can affect performance considerably. So, which time source do you choose? Use these simple rules of thumb:

  • If you're a producer, use the time source associated with your output, so that everything looks and sounds its best when the end user sees it. If you are receiving buffers that are marked with times from a time source that's different from the one you're slaved to, you'll need to translate between that time source's performance time and the performance time of the time source that you're slaved to.

  • If all you're doing is recording from a physical input, use the time source associated with that input for best results.

  • If you don't have any other time source available, use the System Time Source.

What Do You Do with this Time?

Here's how your node can make use of the performance time reported by a time source:

  • Events that affect your node are typically specified in performance time. When you tell a media node to start, stop, or seek, you specify the performance time at which the event should happen. (An important exception to this rule is time sources themselves, which handle these events quite differently. I won't delve into the dark world of time sources here, however.)

  • If you are a buffer producer, you mark each buffer with the performance time at which that buffer should be performed. If you are also a physical input, you should instead mark each buffer with the performance time at which that buffer was recorded.

This means that you can't blindly pass buffers from a physical input to your system's output. Because each incoming buffer is stamped with the time it was recorded, and there is always some delay between the input and the output, your buffers will always appear to be late by the time they reach the final output! Some node has to sit between the input and the output and retime the buffers so that they are played correctly.

  • If you are a buffer consumer, you generally won't be too concerned with the timestamps on buffers you receive. If you are a physical output, however, you use the timestamp on each buffer to figure out when the buffer should be played.

  • Your node will often have to make system calls like snooze() and read_port_etc(), which take real time measurements. In these cases, your node will use its time source to convert performance time to real time.
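For instance, a producer that wants to sleep until it is time to prepare a buffer might do something like the following. This is a sketch rather than Media Kit gospel: the helper name and the latency argument are placeholders, but RealTimeFor() and snooze_until() are the calls doing the real work:

#include <OS.h>
#include <TimeSource.h>

/* Sleep until it is time to start working on a buffer that must be
   performed at performanceTime, waking up early by totalLatency. */
static status_t
wait_until_due(BTimeSource *timeSource, bigtime_t performanceTime,
               bigtime_t totalLatency)
{
  /* Translate the node's performance time into the CPU's real time,
     backed up by our latency so the buffer can arrive downstream on time. */
  bigtime_t wakeAt = timeSource->RealTimeFor(performanceTime, totalLatency);

  return snooze_until(wakeAt, B_SYSTEM_TIMEBASE);
}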

Media Time

Earlier in this article I mentioned a third concept of time: a position in a piece of media. We refer to this as media time, and it is the responsibility of each media node to determine the relationship between media time and performance time, depending on what that node actually does. When you send a node a Seek request, you tell it the media time to which the node should seek. In addition, you can use media time as a basis for scheduling events that occur at a specific point in the media -- such as syncing sound output to a frame in an animation.

Typically, media time flows at the same rate as performance time, but with some offset (i.e., media time 0 may correspond to some performance time in the future). On the other hand, buffer producers may implement the SetPlayRate() function to increase or decrease the rate of media time with respect to performance time. For example, if you have a sound playback node that supports SetPlayRate(), and you tell that node to play half as fast, buffers will still be sent out at the same rate as before, and the performance times stamped on the buffers will be unaffected. However, the producer will only consume sound from the sound file half as quickly, stretching the media out to fill each buffer.
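In code, the typical relationship described above boils down to a little arithmetic. This is only a sketch, not an actual Media Kit call; performanceStart, mediaStart, and playRate are hypothetical values a node would track for itself:

#include <SupportDefs.h>

/* Map a performance time to the corresponding media time, assuming media
   time advances at playRate times the speed of performance time and that
   mediaStart lines up with performanceStart. */
static bigtime_t
media_time_for(bigtime_t performanceTime, bigtime_t performanceStart,
               bigtime_t mediaStart, float playRate)
{
  return mediaStart
    + (bigtime_t)((performanceTime - performanceStart) * playRate);
}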

Closing Thoughts

Timing is the single most important issue that you will have to deal with if you work in the Media Kit. Hopefully, this has covered much of the groundwork you need to understand how time works in the Media Kit. Until next time, just remember that there's "no such thing as tomorrow, only one two three four --"


It Feels Just Right.

By Jean-Louis Gassée

I struggled to find a suitable title for today's column. In the end I chose one that describes how the company and the product feel, which is -- just right.

The company I refer to is IK-Multimedia, based in Modena, Italy. Enrico Lori founded it in 1996, to produce high-quality audio software at affordable prices. Its current product is called Groovemaker. It sells for $69.95. DJs use it, as do hobbyists and audio producers looking for background sounds for videos. You can create random tracks for a virtual DJ with Groovemaker, or mix loop libraries and create patterns for a sequence.

The company, with fewer than twenty employees, feels like a typical start-up, where everyone has at least two jobs. Or, as we say at Be, the universal job description is "Do whatever you have to do." At IK-Multimedia, every employee is a musician, which sounds like the old Hewlett-Packard practice of designing products for the engineer on the next lab bench. The company has its own studio and uses Groovemaker to produce audio material to go with the product.

The newest offering from IK-Multimedia is called T-Racks. It will be launched this week at NAMM in Los Angeles. It's an unusual and ambitious undertaking, modeling an analog amplifier/compressor/equalizer starting with the schematics, and "rebuilding" it in software using 32-bit floating point audio. T-Racks will sell for $299.

The BeOS connection to T-Racks began with a meeting in Paris. This was followed by a two-day porting session with assistance from our friend Duncan Wilcox, who made the trip from Florence to Modena to help with the change of platforms. Two days of work resulted in a stable beta, not the finished product. Some code was left on the porting room floor—mostly work required on the previous platform, but now provided by the BeOS itself. Putting in the finishing touches and testing will take a (hopefully small) multiple of that amount of time.

Of course, the credit belongs to the IK-Multimedia team. There is no magic—only clean, well-designed code ports that easily. What feels just right here is the combination of people, technology, company culture, and target market. And, of course, timing—to coincide with NAMM later this week. It'll be fun being there Thursday.

Creative Commons License
Legal Notice
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 3.0 License.