Issue 4-24, June 16, 1999

Be is Open for Business!

By Dave Johnson

Developers, here's the word: It's time to get your BeOS product finished and into a box. Some of our third-party BeOS applications are starting to sell in quantity, and you probably want to get in on that. This is happening even before the upcoming R4.5 retail channel and marketing push.

Starting to Sell

Starting this summer we'll have new customer-oriented BeOS packaging, flyers, posters, and other materials. We'll be advertising and getting the retail outlets in place to deliver BeOS and third-party products to buyers.

BeDepot, As Great As It Is, Is Only As Great As It Is

BeDepot is a great place to sell your BeOS product, especially to users who buy a new computer that ships with BeOS pre-installed. But BeDepot does not sell your product on retail store shelves. You've got to get the product into a box for that.

Distribution Channels in Place

We're doing our part. We've been working hard to get distribution channels in place to handle BeOS retail store orders. Starting this summer some opportunities to sell through major chains may be available to you, but your product must be finished and in a box.

BeOS is Worldwide

Our retail channel sales are international. In Europe Stephan Landier <> has been working hard for you. He needs products with manuals and boxes in English, German and French. In Japan Takeaki Akahoshi <> is looking for products with Japanese boxes and manuals, and which support our Japanese input method.

"Would You Like Fries with That, Sir?"

In case you missed it, the message of this article is that Be has been working to establish retail store sales of R4.5 to end users, with the intention of enabling you to make money with your BeOS products. These resellers and distributors are asking for additional third-party products to recommend to their BeOS customers. "Would you like fries with that, sir?" It's part of the retail equation: resellers want add-ons to sell at time of purchase, and customers want those products to use. Will it be YOUR product? Only if you finish it and get it into a box.

The Green Field Effect

We're beyond the "chicken-and-egg" paradox, where the conventional wisdom had it that BeOS applications were needed to bring in customers, and customers were needed to bring in the applications. We're now arriving at the point where there is serious customer and reseller interest in the BeOS, but there aren't enough boxed products to satisfy customer demand. This is great news for you, the developer. It's the beginning of the expected "green field" effect for the early (smart) BeOS developers, where good products in good boxes alongside the BeOS can enjoy good store sales with little competition. This helps Be and it helps you. It helps us because third-party products help persuade even more customers to purchase BeOS. It helps you because the customers who buy the BeOS want third-party products to use. Assuming, of course, that they are finished and in a box.

That's Boxes - not Jewel Cases

To sell in a retail store your product must be in a box. Visit a few computer stores and look at the packaging used by different software companies, if you need ideas. The box must have a UPC bar code, consisting of your manufacturer ID and a product number. The Uniform Code Council is the organization that assigns manufacturer ID numbers for use in UPC retail bar codes; check its web site <> to arrange to get your manufacturer ID. A company that provides software for printing bar codes, as well as good general bar code information, can be found at <>.

Packaging Tips for BeOS-By-Night Developers

Provide end-user-oriented copy on the back listing BENEFITS. Customers don't care what the product is called, and (you may be surprised to know) their interest is not so much in what the product is. Customers are selfish. They want to know WHAT'S IN IT FOR THEM. They want to know how using this product benefits them, what they can DO with it. Here's an example:

"The BeBop bop program is a utility that will bop your hard drive seven times a day. It is the only bop program that has a special symbop doopap doobopper."

The customer doesn't care. But the same information, put a different way, will work wonders:

"With BeBop your hard drive is bopped clean so often that you'll never worry about a bop problem again. Our unique symbop doopap doobopper ensures swift debopping action."

Build It and They Will Come?

ADVERTISE. Whether you're selling at BeDepot or in stores people aren't likely to purchase your product if they don't know about it. It just ain't so that if you build it they will come. That worked in a movie, but the likelihood of selling to customers who don't know about your product is about as high as the likelihood of a dead pitcher stepping out of a cornfield into your yard. You can wait for that to happen or you can advertise.

BeOS news web sites are visited by thousands of BeOS users and their ad rates are reasonable. You should place ads at these sites to build name recognition and mind share for your products. Eventually these BeOS users will find their way to BeDepot or stores to purchase your products.

Engineers Are Not Regular People (and They Don't Buy Stuff)

I think it's important to understand that you and I and most of the people we know are computer sophisticates. We know more about computers and the industry than the people we're selling to and this has major implications. We probably don't understand what the average end-user, cruising an aisle in a store in Ohio, sees or thinks or wants or needs.

I just finished reading the book "Why We Buy," by Paco Underhill, and I strongly recommend it to anyone selling a product. It's about a consulting firm that sends people out to follow customers around in stores to see what they actually do. It contains example after example of what happens when marketing people come up with "brilliant" ideas without understanding their customers, and describes what happens when the customer is confronted with these examples of marketing foolishness in a store.

Here's a good example from the book: Some casino hotels don't provide lobby seating, hoping to drive people into the casino. Instead, this practice results in gloomy lobbies where incoming guests are confronted with lots of tired, unhappy people (the losers) sitting on the floor along the edges of the lobby waiting for their tour vans and buses. This book will help adjust your brain around how to think about retail channels and end-users and about using marketing research. So will one of my previous articles, "Testing and Tracking."

Be Engineering Insights: Kernel Programming Part 2: Device Drivers

By Victor Tsou

The kernel imposes and enforces rules to keep individual applications from interfering with other applications or bringing down the entire system. For example, a program can create windows and send messages to other programs, but it can't write into the address space of another program. The kernel makes sure everybody plays fair and cleans up the mess when accidents occur.

Sometimes this hand-holding gets in the way. When it does, device drivers will let you circumvent the kernel's protections. Remember, though, that drivers run with the same privileges as the kernel, so they must be careful to avoid disrupting system stability. Bugs are always more serious in kernel space since driver bugs can bring down the entire system. Good design dictates that device drivers should be as short as possible, with as much code as is feasible relegated to user space.


The kernel manages device drivers through devfs, the file system mounted at /dev during the boot process. Communication between user space and drivers occurs through entries published by the driver in the /dev hierarchy. Therefore, the basic primitives for interacting with drivers map to basic file operations: open, read, write, readv, writev, ioctl, and close.

Drivers tell devfs which entries they want to appear in /dev through a mechanism known as "publishing." Devfs publishes drivers as needed. Typically, this means it publishes drivers the first time a program iterates through the directory entries for a subdirectory in /dev. The kernel knows which drivers publish entries in any given portion of the /dev hierarchy through a simple mapping mechanism: binaries appear in /boot/beos/system/add-ons/kernel/drivers/dev in locations that correlate to their published entries in /dev. For example, the atapi driver publishes entries in /dev/disk/ide/atapi, so its binary appears in /boot/beos/system/add-ons/kernel/drivers/dev/disk/ide/atapi.

Actually, this is a lie. Since drivers may publish entries in more than one location in the /dev hierarchy, the entries in /boot/beos/system/add-ons/kernel/drivers/dev are typically symbolic links to the actual binaries which live in /boot/beos/system/add-ons/kernel/drivers/bin. Of course, the same discussion applies to user-installed drivers in /boot/home/config/add-ons/kernel/drivers/...

Exported Symbols

The driver entry points are the scaffolding required for communication with devfs:

status_t init_hardware(void);

This function is called when the system is booted, allowing the driver to detect and reset the hardware. The function should return B_OK if the initialization is successful or an error code if it is not. If the function returns an error, the driver will not be used.

status_t init_driver(void);

Devfs loads and unloads drivers on an as-needed basis. This function is called when the driver is loaded into memory, allowing it to allocate any system resources it needs to function properly.

void uninit_driver(void);

Conversely, this function is called when the driver is unloaded from memory, allowing it to clean up after itself.

const char **publish_devices(void);

Devfs calls this hook to discover the names, relative to /dev, of the driver's supported devices. The driver should return a NULL-terminated array of strings enumerating the list of installed devices supported by the driver. For example, a network device driver might return the following:

static char *devices[] = {
    "net/ether",
    NULL
};

Devfs will then create the pseudo-file /dev/net/ether, through which user level programs can access the driver.

Only one instance of the driver will ever be loaded, so it must be prepared to gracefully field requests for multiple devices. Typically, this is handled by exporting a separate entry for each device present in the system. For example, the serial driver exports /dev/ports/serial1, /dev/ports/serial2, and so on, up to the number of serial ports present in the system.

device_hooks *find_device(const char *name);

When an exported /dev entry is accessed, devfs calls a set of driver hook functions to fulfill the request. find_device() reports the hooks for the entry name in a device_hooks structure. The structure, detailed in be/drivers/Drivers.h, is composed of function pointers, described below in the section "Device Hooks."

int32 api_version;

This variable defines the API version to which the driver was written; it should always be set to B_CUR_DRIVER_API_VERSION (whose value, naturally, changes with the driver API).
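Pulled together, the exported symbols might look like the sketch below. The typedefs and constants at the top are stand-ins so the fragment is self-contained outside BeOS (a real driver gets them from the system headers, and B_CUR_DRIVER_API_VERSION's real value differs), and the entry name "misc/skeleton" is made up for illustration:

```c
#include <stddef.h>
#include <string.h>

/* Stand-ins for BeOS kernel types and constants. */
typedef int status_t;
typedef int int32;
#define B_OK 0
#define B_CUR_DRIVER_API_VERSION 2   /* stand-in value */

typedef struct device_hooks device_hooks;   /* opaque in this sketch */

int32 api_version = B_CUR_DRIVER_API_VERSION;

static const char *devices[] = {
    "misc/skeleton",
    NULL                 /* the array must be NULL-terminated */
};

status_t init_hardware(void)
{
    /* probe and reset hardware; returning an error disables the driver */
    return B_OK;
}

status_t init_driver(void)
{
    /* allocate driver-wide resources here */
    return B_OK;
}

void uninit_driver(void)
{
    /* release whatever init_driver() allocated */
}

const char **publish_devices(void)
{
    return devices;
}

device_hooks *find_device(const char *name)
{
    /* a real driver returns a filled-in device_hooks for `name` */
    (void)name;
    return NULL;
}
```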

Device Hooks

status_t open_hook(const char *name, uint32 flags, void **cookie)

This hook is called when a program opens one of the entries exported by the driver. The name of the entry is passed in name, along with the flags passed to the open() call. cookie is a pointer to a region of memory large enough to hold a single pointer. This can be used to store state information associated with the open instance; typically the driver allocates a chunk of memory as large as it needs and stores a pointer to that memory in this area.

status_t close_hook(void *cookie)

This hook is called when an open instance of the driver is closed. Note that there may still be outstanding transactions on this instance in other threads, so this function should not be used to reclaim instance-wide resources. Blocking drivers must unblock ongoing transactions when the close hook is called.

status_t free_hook(void *cookie)

This hook is called after an open instance of the driver has been closed and all the outstanding transactions have completed. This is the proper place to perform uninitialization activities.
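A common pattern for the open/close/free trio is to allocate an instance record in open_hook() and reclaim it only in free_hook(). The sketch below illustrates that pattern; the typedefs, error constant, and the open_instance record are stand-ins invented so it compiles outside BeOS:

```c
#include <stdlib.h>
#include <string.h>

typedef int status_t;
typedef unsigned int uint32;
#define B_OK 0
#define B_NO_MEMORY -1   /* stand-in error code */

typedef struct {
    char name[64];       /* which published entry was opened */
} open_instance;

status_t open_hook(const char *name, uint32 flags, void **cookie)
{
    open_instance *inst = malloc(sizeof(*inst));
    if (inst == NULL)
        return B_NO_MEMORY;
    strncpy(inst->name, name, sizeof(inst->name) - 1);
    inst->name[sizeof(inst->name) - 1] = '\0';
    (void)flags;
    *cookie = inst;      /* devfs hands this back on every later hook call */
    return B_OK;
}

status_t close_hook(void *cookie)
{
    /* unblock any pending transactions here; do NOT free the cookie yet */
    (void)cookie;
    return B_OK;
}

status_t free_hook(void *cookie)
{
    free(cookie);        /* safe now: all transactions have completed */
    return B_OK;
}

/* quick self-check of the lifecycle (not part of the driver API) */
int demo(void)
{
    void *cookie = NULL;
    if (open_hook("misc/skeleton", 0, &cookie) != B_OK) return 1;
    if (strcmp(((open_instance *)cookie)->name, "misc/skeleton") != 0) return 2;
    if (close_hook(cookie) != B_OK) return 3;
    if (free_hook(cookie) != B_OK) return 4;
    return 0;
}
```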

status_t read_hook(void *cookie, off_t position, void *data, size_t *len)

This hook handles read requests on an open instance of the device. The function reads *len bytes, starting at offset position, into the memory buffer data. Precisely what this means is device specific. The driver should set *len to the number of bytes processed and return either B_OK or an error code, as appropriate.

status_t readv_hook(void *cookie, off_t position,
const struct iovec *vec, size_t count, size_t *numBytes)

This is the scatter-gather equivalent of read. The function is passed an array of count iovec entries describing the address/length pairs into which data read starting at position should be placed. As with read_hook, the function should set *numBytes to the number of bytes processed and return B_OK or an error code, as appropriate.
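The mechanics of walking an iovec array can be modeled outside any driver. This sketch fills each address/length pair in order from a pretend device stream (a byte counter standing in for real hardware) and reports the total through *numBytes. It uses the standard POSIX struct iovec; readv_model is a made-up name, not a BeOS hook:

```c
#include <sys/types.h>
#include <sys/uio.h>
#include <stddef.h>

typedef int status_t;
#define B_OK 0

/* Fill each iovec entry in order, as if streaming bytes from offset
   `position` on a device.  Here byte N of the device is just N & 0xff. */
status_t readv_model(off_t position, const struct iovec *vec,
                     size_t count, size_t *numBytes)
{
    size_t total = 0;
    for (size_t i = 0; i < count; i++) {
        unsigned char *dst = vec[i].iov_base;
        for (size_t j = 0; j < vec[i].iov_len; j++)
            dst[j] = (unsigned char)((position + total + j) & 0xff);
        total += vec[i].iov_len;
    }
    *numBytes = total;   /* report how many bytes were processed */
    return B_OK;
}

/* self-check: two buffers, read starting at "device offset" 10 */
int demo(void)
{
    unsigned char a[4], b[3];
    struct iovec v[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
    size_t n = 0;
    if (readv_model(10, v, 2, &n) != B_OK) return 1;
    if (n != 7) return 2;                   /* 4 + 3 bytes total */
    if (a[0] != 10 || b[0] != 14) return 3; /* b continues where a ended */
    return 0;
}
```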

status_t write_hook(void *cookie, off_t position,
void *data, size_t len)

status_t writev_hook(void *cookie, off_t position,
const struct iovec *vec, size_t count, size_t *numBytes)

Conversely, these hooks handle write requests. Again, the meaning of "write" is device specific.

status_t control_hook(void *cookie, uint32 op, void *data, size_t len)

This hook handles ioctl() requests. The control hook provides a means of instructing the driver to perform actions that don't map directly to read() or write(). It is passed the cookie for the open instance as well as a command code op and parameters data and len supplied by the caller. data and len have no intrinsic relation; they are simply two arguments to ioctl(). The interpretation of the command codes and the parameter information is defined by the driver. Common command codes can be found in be/drivers/Drivers.h.

NOTE: len is only valid when ioctl() is called from user space; the kernel implementation of ioctl always passes 0 in the len field.
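In practice the control hook usually reduces to a switch over the driver-defined command codes. In this sketch the opcodes, the state record, and the error constant are all invented for illustration; only the hook's shape mirrors the description above:

```c
#include <string.h>
#include <stddef.h>

typedef int status_t;
typedef unsigned int uint32;
#define B_OK 0
#define B_BAD_VALUE -1   /* stand-in error code */

/* Hypothetical command codes for an imaginary device. */
enum {
    DIGIT_GET_COUNT = 1000,   /* copy a counter into *data */
    DIGIT_RESET     = 1001    /* reset device state */
};

typedef struct { unsigned long count; } instance_state;

status_t control_hook(void *cookie, uint32 op, void *data, size_t len)
{
    instance_state *state = cookie;
    (void)len;   /* only valid from user space; kernel callers pass 0 */

    switch (op) {
    case DIGIT_GET_COUNT:
        memcpy(data, &state->count, sizeof(state->count));
        return B_OK;
    case DIGIT_RESET:
        state->count = 0;
        return B_OK;
    default:
        return B_BAD_VALUE;   /* unknown command code */
    }
}

/* self-check of the dispatch */
int demo(void)
{
    instance_state s = { 42 };
    unsigned long out = 0;
    if (control_hook(&s, DIGIT_GET_COUNT, &out, sizeof(out)) != B_OK) return 1;
    if (out != 42) return 2;
    if (control_hook(&s, DIGIT_RESET, NULL, 0) != B_OK || s.count != 0) return 3;
    if (control_hook(&s, 9999, NULL, 0) != B_BAD_VALUE) return 4;
    return 0;
}
```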

status_t select_hook(void *cookie, uint8 event, uint32 ref,
                                        selectsync *sync);

status_t deselect_hook(void *cookie, uint8 event,
                                        selectsync *sync);

These hooks are for future use; their corresponding entries in the device_hooks structure should be set to NULL for now.

Thread Awareness

The following rules apply for any given open instance of a driver:

  1. open() will be called first, and no other hooks will be called until it has completed.

  2. close() may be called while there are pending read/readv/write/writev/ioctl commands. Again, blocking drivers must unblock any outstanding transactions. Calls to read/readv/write/writev/ioctl may occur after the close() hook is called. The driver should return failure in response to any such requests.

  3. free() is not called until all the pending transactions for an open instance have completed.

  4. Multiple threads may be accessing the read/readv/write/writev/ioctl/close hooks of the driver simultaneously, even for a single open instance, so you must be careful to lock as needed.

Sample Code

I've put together a sample device driver that you can find at <>. After you build it, you should place the binary in ~/config/add-ons/kernel/drivers/bin. You should also create a link to it in ~/config/add-ons/kernel/drivers/dev/misc, i.e.:

mkdir -p ~/config/add-ons/kernel/drivers/dev/misc
cd ~/config/add-ons/kernel/drivers/dev/misc
ln -s ../../bin/digit .

Be Engineering Insights: The Legend of the Buggy Library

By Adam Haberlach

This is the story of the Be Bugmaster. He inherited his duties from Bugmaster Ronzilla, who left bugs to sell used chariots. Ronzilla inherited the bugs from The Great Ming, who went on to the Empire of the northwest. And so it has been, throughout history.

The Bugmaster is charged with the upkeep of the great library of bugs. This library is used to track problems in BeOS, both current and past, in order that they may be fixed. This makes BeOS strong.

It came to the new Bugmaster as a library that worked—barely. It allowed users and engineers to enter bugs into the library and it allowed engineers to comment upon them, track them, and eventually mark them fixed. And it usually only burned down twice a day. It came with great plans. It was to move from a machine running that "Operating System" known as Mac OS to a machine running that other "Operating System" known as NT. It was to house the library in a new building, one which would not burn. It would rely on a strange language, "Rope with Loop," for communication. It was to use a new dialect of the magic which brought information to the masses, which would do more than ever before.

Alas, that was not to be. There were problems with that new library on that new "Operating System." There were things that simply could not be done. Things which must be done. Simple things that any database should have been able to do, actually. The dialect was found to be faulty. It could not handle phrases of longer than 255 characters, a grave limitation indeed.

Thus, the great hunt began. Many great libraries were rounded up. There was the mighty Oracle, which was judged too expensive, and whose setup time would be longer than the life of the library, and possibly of the Bugmaster himself. There was the sleek Sybase ASE, who spoke a language unlike that of the Bugmaster. There was MiniSQL and MySQL, both of which had many followers, but were strange and untrusted by the Bugmaster. In the end, the keeper of the bugs chose to call upon PostgreSQL, a friend from his past, which he knew was up to the task at hand. But the previous language would not work with the new library, and a new one had to be chosen. That language was PHP, which was similar to the rope language, but faster and with more features.

Now the Bugmaster called upon the great gods of free software. Not free as in speech, but free as in beer: the Bugmaster, from his school days living in a fraternity, has a keen understanding of the value of beer and the money which backs it. The gods of Debian Linux were called down and asked to push aside the "Operating System" which had infested the machine. They had to be called down three times, in fact, for they seemed to have trouble settling into their accommodations. The gods of Debian eventually seemed to be pleased in their new surroundings, and took quickly to their new wards, Apache and PostgreSQL.

A great translation project was begun. The Bugmaster spent weeks perusing the magical incantations which had kept the old library together, all the while rebooting it as necessary. Mountains were climbed, great battles were fought. Large scripts were run with the help of the Master of Webs in order to get several years of bugs from the old library into the new. The Bugmaster spent a lot of time listening to techno music and typing arcane commands into terminal windows, dead to the world, knowing that he was putting his days of hearing cries of "mos-eisley is down again" behind him.

And one day, it was judged that the new library was ready. Its doors opened. During the first day, all the books from the first library were hurriedly brought forth, transcribed, and entered into their new places. At the end of that day, all was good (except for a few features). Backup procedures were created. Those things that had been missing during the grand opening were added. Many visitors cried that they could not find their bugs, but this was not to be helped, for there were things which could not be disclosed, and no simple way to discern different disclosures. And thus far, although the doors have been closed occasionally, and the village has suffered a power outage, the library of the bugs has not burned.

In short: The old system used Filemaker Pro as the back end database, Lasso as a web <-> database interface, and WebSTAR as the web server. On Mac OS, of all things. At the end, it was only crashing 2-3 times a day. The new system uses PostgreSQL as a back end, PHP3 as a database interface, and Apache as a web server, all running on Linux. Two months, and no crashes. None.

A few notes on common feature requests and bug reports: Some data seems to have been lost regarding the "Show to world" status of bugs. Those bugs exist and are nagging engineers as we speak; they just aren't visible to the outside world. Currently, the developer databases all reside on the crufty old FileMaker system. This means that the new system cannot verify identities, and we therefore cannot allow developers to modify or comment on their bugs. We Will Fix This (tm).

Developers' Workshop: Taking the Media Kit Offline

By Stephen Beaulieu

When processing media in real time with the Media Kit, a given node chain will have one node that acts as the time source for the entire chain. The time source's sense of time advances in constant but tiny increments (microseconds), and is used to keep all the nodes in sync with each other.

In offline mode, there are no real time constraints for processing buffers, and as such, nodes do not pay attention to external time sources. In effect, each node acts as its own time source. Time advances for an offline node when all the data required for a processing quantum is available. This usually occurs in fits and jumps. Time can progress faster or slower than real time, depending on the work being done.

Applications and nodes have different behavior when working with media offline.

Offline Applications

Writing media applications to use offline nodes is straightforward, though a little more complicated than writing standard real-time apps. Our advice boils down to two main points:

  1. In a given node chain, all nodes downstream from an offline node must also be in offline mode. As you'll see, media nodes in offline mode depend on the behavior of downstream nodes to drive the proper flow of buffers. Nodes in a real time mode will not properly request and handle buffers from an upstream node in offline mode.

  2. Processing media in an offline chain should be done in distinct quanta of time, where all the components and settings are consistent. An app should connect the nodes and specify their settings for the specified quantum. Use BMediaRoster::RollNode() to instruct the nodes to start and stop. RollNode() calls Seek(), Start() and Stop() on each node in an atomic action. This ensures that a stop event will be enqueued before the node has a chance to move past the stop time (remember, performance time can elapse much faster in offline mode).

Following these two rules will help make sure that the output from offline nodes is what you expect.

Offline Nodes

Nodes that are derived from BMediaEventLooper (and why would you want to do it any other way?) already have a good deal of infrastructure for handling offline mode. BMediaEventLoopers have an internal sense of time in offline mode which can be accessed and set through SetOfflineTime() and OfflineTime(). For most nodes, this simple value is sufficient, but if necessary, OfflineTime() can be overridden to provide more specific behavior.

The BMediaEventLooper has a different control loop in offline mode that bases actions on the offline time. The loop looks like this:

  1. Determine offline time using OfflineTime().

  2. Handle all performance time events before or at offline time.

  3. Determine next real time event (if any).

  4. Wait for messages until the next real time event.

  5. Handle the real time event or a message which comes in.

  6. Start over.

The gist of the loop is to wait for messages and add events to the event queue until the offline time is updated for the node. Then handle all events in the event queue up to and including the new offline time.
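The gist of the loop can be modeled with a toy event queue: whenever the offline clock moves, every queued event at or before the new offline time gets handled. This is only a model of that step, with made-up names and plain C standing in for the Media Kit classes:

```c
#include <stddef.h>

/* Toy model: a sorted queue of event performance times plus an
   offline clock.  handle_due_events() pops and "handles" every
   event at or before the current offline time. */
typedef struct {
    long long events[16];   /* pending performance times, sorted ascending */
    size_t    count;
    long long offline_time;
    size_t    handled;      /* how many events have been dispatched */
} looper;

void set_offline_time(looper *l, long long t) { l->offline_time = t; }

void handle_due_events(looper *l)
{
    size_t due = 0;
    while (due < l->count && l->events[due] <= l->offline_time) {
        l->handled++;       /* a real node would dispatch HandleEvent() */
        due++;
    }
    /* shift the not-yet-due events to the front of the queue */
    for (size_t j = due; j < l->count; j++)
        l->events[j - due] = l->events[j];
    l->count -= due;
}

/* self-check: time advances in jumps, events drain accordingly */
int demo(void)
{
    looper l = { { 100, 200, 300 }, 3, 0, 0 };
    set_offline_time(&l, 150);   /* e.g. data for t=150 became available */
    handle_due_events(&l);       /* only the t=100 event is due */
    if (l.handled != 1 || l.count != 2) return 1;
    set_offline_time(&l, 300);
    handle_due_events(&l);       /* t=200 and t=300 are both due now */
    if (l.handled != 3 || l.count != 0) return 2;
    return 0;
}
```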

Only a node can determine the exact conditions required to advance its internal sense of time, but there are some general guidelines. Offline time advances under two circumstances: when starting and stopping the node, and through buffer flow.

Starts and Stops

When Start() is called and the looper is currently stopped, set offline time to the start time and add the start event to the queue. When handling a B_STOP event in HandleEvent(), look to see if there is another start event in the queue, and advance offline time to that event's time. This will ensure that events start to be handled appropriately.

Buffer Flow

Advancing offline time based on buffer flow varies depending on the type of node: producer, consumer, or both.

Producers

In any real-time mode, producers create and send buffers downstream on a regular basis, according to the performance time. Most nodes handle this by adding a B_HANDLE_BUFFER event to the queue for the next time a buffer needs to be produced; when time advances to that point, the next buffer is produced and sent downstream. To keep nodes responsive, only the first buffer is sent downstream when an offline producer is started. Subsequent buffers are sent when an additional buffer is requested by the downstream node (through the new AdditionalBufferRequested() function).

Producers in offline mode will increment offline time to the next B_HANDLE_BUFFER time when AdditionalBufferRequested() is called.

Consumers

Consumers work in exactly the opposite fashion: they wait for all the data necessary for a specific time quantum to arrive, and then update offline time to reflect the data received. Remember, in offline mode the goal is to not drop data, so consumers have to wait until all buffers have arrived before moving on. In offline mode, there are really no late buffers. So, offline time is updated in BufferReceived() to the minimum time of the last buffer received from each source.

In addition, to keep buffers flowing, whenever a consumer is done with a buffer, the node calls RequestAdditionalBuffer() on the upstream source. This ensures that another buffer is prepared and sent.

Filters (Consumer/Producers)

Filters combine these two sets of requirements, and are a little more complex. For offline time to advance to the time to send a buffer downstream all of the data must arrive for that time quantum and an additional buffer must be requested. A convenient way to manage this is to stash a couple of booleans: ReadyToSend and AdditionalBufferRequested. In BufferReceived() and AdditionalBufferRequested() these flags are set and checked. If additional buffers have been requested it is safe to update offline time in BufferReceived(); if ReadyToSend it is safe to update offline time in AdditionalBufferRequested(). In either case, when a buffer is sent downstream, the booleans must be updated.
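The two-flag handshake can be sketched as plain state-machine code. The names below are invented for illustration; in a real filter the same checks would live in BufferReceived() and AdditionalBufferRequested(), and "sending" would actually push a buffer downstream:

```c
#include <stdbool.h>

/* A buffer may go downstream only when (a) all input for the current
   quantum has arrived and (b) the downstream node has asked for more. */
typedef struct {
    bool ready_to_send;          /* input for the current quantum arrived */
    bool additional_requested;   /* downstream asked for another buffer */
    int  buffers_sent;
} filter_state;

static void try_send(filter_state *f)
{
    if (f->ready_to_send && f->additional_requested) {
        f->buffers_sent++;            /* advance offline time, send buffer */
        f->ready_to_send = false;     /* wait for the next quantum's input */
        f->additional_requested = false;
    }
}

void buffer_received(filter_state *f)
{
    f->ready_to_send = true;          /* input complete for this quantum */
    try_send(f);
}

void additional_buffer_requested(filter_state *f)
{
    f->additional_requested = true;   /* downstream is ready for more */
    try_send(f);
}

/* self-check: the order of the two events must not matter */
int demo(void)
{
    filter_state f = { false, false, 0 };
    buffer_received(&f);                /* input only: nothing sent yet */
    if (f.buffers_sent != 0) return 1;
    additional_buffer_requested(&f);    /* now both conditions hold */
    if (f.buffers_sent != 1) return 2;
    additional_buffer_requested(&f);    /* request arrives before input */
    if (f.buffers_sent != 1) return 3;
    buffer_received(&f);
    if (f.buffers_sent != 2) return 4;
    return 0;
}
```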

Sample code that demonstrates offline processing can be found at <>. This sample code works with the later genki betas and the final genki release of the BeOS.

Singing, Dancing... and Selling

By Jean-Louis Gassée

The Web started as a hypertext extension to what was then a geek underworld known as the Internet. The Internet itself was a community—some even said a commune—of cheeky Unix users, if you'll pardon the pleonasm. They had the foresight to divert (or pervert) defense funding into developing a unified set of powerful protocols and wires interconnecting their computers. A simple Unix command—no pleonasm here—gave you access to files and other computing resources anywhere on the Internet.

Ted Nelson popularized the idea of hypertext in the 1960s and in his book "Literary Machines." HyperCard was one attempt to organize information using hypertext ideas. Finally, Tim Berners-Lee at CERN saw how a simple new protocol, http, could fertilize the Internet by linking Universal Resource Locators, URLs, in a "one-click-away" web of hypertext connections.

And fertilize it did. http proved to be so simple, so powerful, so magical that the new web became the Web, and attracted the largest wave of capital invested in a new sector. I chose the word "fertilize" with intent. The hypertext protocol was promptly enriched with media types sexier than old-fashioned text. And it begat hypermedia that quickly became hyping-media, taking the Web to what could be its real raison d'être: selling. Selling goods, services, "content," music, video, hotel reservations, and the lineage of your ancestors.

I'm not sure how Ted Nelson feels about this incarnation of "Literary Machines," but nevertheless, that's where we are. The Web is hyperselling—everything, to everyone, everywhere, all the time. Personally, I like this—for many reasons. First, it has the potential of re-leveling the playing field, even for artistic activities. Take the music scene, for instance. One could lament the ugliness of MP3 piracy—a real problem. One could also look at the potential for creators and publishers. Yesterday, their access to the market was guarded by the big publishing and distribution houses, names withheld. With the Web, there's hope that they can be heard. More generally, new forms of old commerce—Amazon, E*TRADE—or new media—Yahoo, AOL—or altogether new forms such as eBay emerge. It's interesting and it's fun.

Lastly, our media-focused technology has ways to make IP packets sing, dance, and sell like no one in the business. That's why we feel the BeOS has a choice role to play in the creation and the consumption of rich media on the Web. Understandably, we encounter two schools of thought when dealing with new technology and new opportunities. On one side of the debate, some believe the PC as we know it—or, more precisely, Windows—will continue to do everything, for everyone, everywhere, all the time. It will just keep adding more blades to the Swiss Army knife.

At the far extreme of the debate, some claim Windows is no longer required and will disappear. Our position isn't that radical. We think the office automation roots of Windows make it extremely useful, even irreplaceable for some, in productivity applications—including those now extended to the all-pervasive Web. But these very roots are what limit Windows in media-rich applications, where this mature OS has trouble with singing and dancing IP packets.

In other words, we believe that PCs are bound not to disappear but to coexist with a growing number of Web-centric devices. When a new genre appears, there is a tendency to assume it will kill off earlier life forms: TV was going to kill movies, videoconferencing would replace business travel. Instead, as we now know and forget, we ended up with more choices—the new genres created more opportunities.

Creative Commons License
Legal Notice
This work is licensed under a Creative Commons Attribution-Non commercial-No Derivative Works 3.0 License.