Issue 14, 24 Mar 2002


  In This Issue:
 
OpenBeOS Event: BeGeistert 008
BeGeistert, the semi-annual gathering in Germany of Be programmers and enthusiasts from far and wide, is about to convene its spring session. This one, called "The Road Ahead", will focus on OpenBeOS.

When: 6 - 7 Apr 2002
Location: Düsseldorf, Germany

BeGeistert 008 will essentially be organized as an OpenBeOS developer conference. Several OpenBeOS members will be in attendance, and an internet connection to the OpenBeOS CVS server will be set up during programming sessions.

As with previous BeGeistert gatherings, the atmosphere will be friendly and relaxed. Users and developers can meet and discuss programming and other odd topics well into the evening. So if you can make the trip, stop on by and meet the faces behind the names you've come to know.

 
Why OpenBeOS? by Michael Phipps 
I was recently asking a musician friend about MIDI, since I am leading that team until we find someone with a passion for music, code and OBOS. After just a moment, he became curious and asked me (as tone deaf as they come) why I was so interested. I started telling him about OpenBeOS. He asked me a very insightful question: "Does the world really need another operating system?"

I very glibly responded, as any good zealot would, talking about 64 bit journaled file system goodness and ten second boot times. Stuff straight out of comp.sys.be.advocacy. All very true and honest. Long after that conversation, though, I had a long, introspective conversation with myself, asking Alex's question in my own mind.

I know all of the *technical* reasons that OBOS should exist. How bfs is so nice to work with. Window responsiveness, the Translation Kit, native mime types, unshackling the hardware, et cetera. But, as we all well know, the *world* is not technical. And that (however unintentional) was what this question made me think about. What does OBOS have to offer the non-technical people of the world that Windows, Linux, MacOS or some other system cannot deliver?

Some of this will be obvious, I beg your indulgence. Some of it is prognostication, on my part. No promises, but some clues and concepts of where OBOS *could* go. Extensions of the Be way, if you will.

Most of what OBOS will have to offer is more involved in what you *don't* see than in what you do. Thanks to the Translation Kit, users never have to remember that tool A will load GIF and JPG, but not PNG. They won't (hopefully ever) see "You must reboot your machine to complete this install". They won't sit and wait for ScanDisk (or fsck) to finish. They won't see "import" and "export" on every mail client they download. They won't see viruses, Service Pack 7 or vfstab files. No registry, no problems cutting and pasting across applications and no configuration by text file. They won't see a C: or D: drive. No "Add Hardware" wizard. Let's leave wizards to bad movie adaptations of children's books. Anything that requires that much explanation should be made easier.

Still, there will be things to see. Snappy window redraws, where you don't have to wonder if the computer is hung, or just doing something. Integrated, easy audio/video acquisition, so that when you want to get stills from your digital camera or movies from your video camera or photos scanned in, there is one application to work from. Burning CDs and DVDs by dragging and dropping files. Built in streaming video acquisition, so you can send a video email, if you choose. How about integrated email/messaging? Linux applications running side by side with native apps. Maybe even some Windows apps, too.

Developers should not be left out, either. There are some features that we know Be was working on. In my opinion, the "Be Way" was to empower developers by providing common OS level functionality so that developers would not have to write it over and over. By definition, developers can write (almost) anything that they need. Our networking kit is a perfect example. The community could have written it at any time. It took us being "forced" into it, because Be provided the OS level tools. Be built an XML kit and an SSL kit for R6.

These are good examples of the direction that we should go in. Many people have responded favorably to my "BAudioCD" class idea, back when I was looking at ATAPI. The Device Kit will have a massive facelift, rosterizing (is that a word?) all devices. Images, movies and audio clips should all be captured using a standard requestor. Just like files. The end user should choose "Open Image", and be able to choose from his digital camera, scanner, files, internet, etc.

More whiz-bang widgets are another key feature. One of my particular favorites is Outlook's "bar". Santa's Gift Bag, on steroids, if you will. How about an Internet Kit, incorporating Mail Kit, plus FTP, ping, HTTP, IRC, etc? An expansion of the Translation Kit, too, is in order. It is a *wonderful* tool, as far as it goes. It needs formats for 3D files, vector graphics, desktop publishing, and more.

The way that I see it, the world *wants* another operating system. One not rooted in the 1900's. One designed for 2000 and up. Not relabeled and rebranded. Not a child of the 1970's with a face job. A true child of the new millennium. The womb is starting to swell. This child is growing. There has never been a more exciting time to be part of computing. Or OBOS.

 
Zelda and the various flavors of IPC by Daniel Reinhold 
Lately, I've become quite interested in learning more about IPC. This stands for 'Inter-Process Communication' and refers to the mechanism by which different processes can send/receive information between themselves. The term 'process' is a standard one in Unix -- in the BeOS, we would refer to them as 'threads'. But I'm not going to try to change the name to ITC (inter-thread communication) because IPC is too well established.

The BeOS supports numerous different methods for implementing IPC. Which flavor you want depends on what you would like to do. If you are handling messaging at the app level, then you will certainly want to use BMessages, which are designed for just that purpose. However, if you want to do the messaging at a lower level, then there are several options you could use:

  • areas
  • ports
  • pipes
  • sockets
In this article, I'll cover exactly three of these: pipes, ports, and sockets.


Implementation by layers

When two processes are engaged in a conversation, inevitably one process is requesting some sort of information or resource from the other, so we call the asker a client, and the giver, a server. Client/server communications are the foundation of most dynamic systems.

To demonstrate how you could implement IPC in the BeOS, I've created a client and server app that talk to each other using a simple protocol called "zelda". The server app is called "zserver" and the client app is called "zclient". The protocol itself is implemented with another, lower-level API called iochannel. The iochannel functions take care of connecting the client and server and transporting the messages back and forth.

This is done for each type of IPC mechanism considered here: pipes, ports, and sockets. That is, a zclient and zserver app is created for each of the three mechanisms. By running the server and then the client, you can see the messages passing back and forth between them.

The purpose of the iochannel API is to make the differences between the three IPC mechanisms completely transparent. Thus only one client and one server source file were written -- by compiling them with each of the different iochannel implementations, we create the different client/server apps.


How iochannels work

Iochannels are designed for client/server communication. Each contains a request channel, a response channel, and a message buffer. The request channel is used for transmitting client requests, the response channel is used for transmitting server responses. Sending and receiving occur as if the client and server are connected by a set of wires with the output of one hooked up to the input of the other and vice-versa:

   client side                                 server side
    
    ________                                    _________ 
   |        |                                  |         |
   | output >>------>>---*       *===<<======<< output  |
   |________|             \     //             |_________|
                           \   //
                            \ //
                             X
                           // \
    ________              //   \                _________
   |        |            //     \              |         |
   |  input <<=====<<===*       *---->>------>> input   |
   |________|                                  |_________|
This is quite similar to how you can visualize a telephone connection between two people. Each has an identical piece of equipment on either end. The handset has an input speaker at the top and an output microphone on the bottom. You speak into the microphone while your ear rests next to the speaker, listening for input. Your voice goes out across the phone wire and shows up on the speaker on the other end. Thus, your output becomes input on the other end, and vice-versa.

The iochannel is like a telephone handset with the connecting wires. Two processes communicate by each opening an iochannel on either side and then alternating between sending data and listening for replies. When the communication is finished, the iochannel is closed.

The channels themselves are implemented as either pipes, ports, or sockets. For example, in the pipe version, two named pipes are created: one for the request channel and one for the response channel. The client talks to the server by writing a request message to the client end of the request pipe and then reading its end of the response pipe for server replies. Likewise, the server sits in a loop, reading the server end of the request pipe for any incoming requests -- when one is received, a reply is determined and then written to the server end of the response pipe. Of course, the method is exactly the same for the port and socket versions.


The iochannel API

The iochannel transmissions are connection-oriented. That is, they require the client and server to be actively listening for messages from each other. A send operation will go thru immediately. A read operation will block until a response has been heard. The incoming bytes are stored in a message buffer. Here's the generic data structure:

typedef struct
    {
    int  input_descriptor;
    int  output_descriptor;
    char msgbuf[MSGBUFSIZE];
    }
iochannel;
This is how iochannel is declared in iochannel.h. However, each IPC version defines the iochannel in its own implementation-specific manner. For the pipe version, the descriptors are file descriptors bound to named pipes. For the port version, the descriptors are port ids. And for the socket version, the descriptors are sockets. These input and output descriptors are only used internally and should never be referenced as part of the API. But the message buffer is public and should be examined after each blocking read (performed by io_listen).

The usage is simple. To establish a connection, declare an iochannel object, then use io_open() to initialize it. You can open the iochannel as either a server or client -- this determines how it should be hooked up to the request and response channels. You then send messages with io_send() and listen for replies with io_listen(). Examine the msgbuf buffer for any received data. When finished, call io_close().

Here's the functional interface:

bool io_open (iochannel *ch, char mode)
    :
    this initializes the iochannel
    the mode is either 's' for server or 'c' for client
    for server mode, the IO channels are created and bound
        to the appropriate descriptors
    for client mode, the descriptors are bound to the
        (hopefully) already created channels
void io_send (iochannel *ch, char *data)
    :
    this sends the string of data to the output channel
void io_listen (iochannel *ch)
    :
    this reads from the input channel
    the function blocks until data becomes available
    the bytes read are stored in the message buffer
void io_close  (iochannel *ch)
    :
    closes the IO channels
    after this is called, the connection between
    the client and server is gone
    
That's it. Couldn't be much simpler. But it's enough to carry out a reasonably complete conversation between two processes.


The Zelda protocol

Ok, with the iochannel API in place, we have a method of communication. Now we just need something for the client and server to talk about. Generally, the client will need to use a specific protocol for requesting information from a server. So I've created one called zelda. Don't ask why. It had to be called something, and that just popped into my head.

There are about a bazillion different kinds of information that a server might offer: time, names, list info, images... only your imagination and the ability to process the requests are the limits. Since I wanted the demo apps to be simple, and because I couldn't think of anything more exciting, zelda has been designed as a number server. That is, you request a number, and it sends you one.

Ok, I'll admit, this is about as lame as you can get. You would never, ever need a server for something so simple and lamebrain. But hey, it's just an example. The principles will be the same for a server that actually does something useful. And we don't get bogged down in the details of how to process requests for a more sophisticated service. So hopefully you can overlook the fact that zelda is pretty useless in itself, and appreciate the fact that it demonstrates a basic communications protocol well enough.

Here's the protocol:

  • The client initiates the conversation by sending the string "zelda".
  • The server answers with "ready".
  • The client then sends the string "number".
  • The server responds by sending a numeric string (e.g. "7230158")
  • The client then decides whether it likes the number or not. If it does, it sends "stop", otherwise it sends another "number" request.
  • When the server receives the "stop" request, the conversation is over.

Pretty riveting, eh? The handshaking at the beginning (with the "zelda" and "ready" messages) isn't really needed for such a simple server, but I couldn't resist. For more complex services, handshaking is often necessary. Nor is the ending message "stop" really required -- it just made designing my demo apps easier. The only message that's really needed is "number". But, it's my protocol, so that's how it is!


The server app

With the zelda protocol defined, and the iochannel API in place to implement the communication, we're ready to create the client and server apps. Here are the major portions of the server:

#include <stdio.h>
#include <string.h>
#include <signal.h>
#include "iochannel.h"
// server IO channel:
// declared globally so that the shutdown procedure can access it
iochannel ZServer;
int
main ()
    {
    // zelda server
    // keep running until shutdown (Ctrl-C/Alt-C by user)
	
    iochannel *ch = &ZServer;
	
    signal (SIGINT, server_shutdown);
	
    for (;;)
        {
        printf ("\nzelda server: waiting for client requests...\n");
		
        if (io_open (ch, 's'))
            {
            session (ch);
            io_close (ch);
            }
        else
            // uh-oh, can't create IO channel...
            break;
        }
    }
void
session (iochannel *ch)
    {
    // run a session (conversation) with a single client
	
    char *request;
    char *reply;
	
    // handle client requests
    for (;;)
        {
        io_listen (ch);
        request = ch->msgbuf;
		
        if (strcmp (request, "stop") == 0)
            {
            printf ("closing client session\n");
            break;
            }
		
        reply = server_response (request);
        io_send (ch, reply);
        }
    }
char *
server_response (char *request)
    {
    // zelda server protocol:
    // for the given request string, return a response string
	
    if (strcmp (request, "zelda") == 0)
        return "ready";
	
    if (strcmp (request, "number") == 0)
        {
        static char buf[20];
        sprintf (buf, "%d", random_positive_integer ());
        return buf;
        }
	
    // don't understand client request
    return "?";
    }
void
server_shutdown ()
    {
    // perform shutdown procedures...
	
    printf ("zelda server: commencing shutdown...\n");
    io_close (&ZServer);
	
    printf ("done\n");
    exit (0);
    }


The client app

The client app is even simpler. It only needs to connect, request the service, then disconnect. A client must have some kind of criteria for determining whether it "likes" the number returned from the server. In this instance, the client is happy when it receives a prime number. Again, this is totally lame, but... oh well. Here are the major portions of the client app:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <ctype.h>
#include <math.h>
#include "iochannel.h"
int
main ()
    {
    // zelda sample client
	
    int n;
	
    printf ("\nzelda client: looking for a prime...\n");
	
    if (getPrime (&n))
        printf ("\nhey, %d is a prime number!\n\n", n);
    else
        putchar ('\n');
    }
bool
getPrime (int *prime)
    {
    // use the zelda server to send us some numbers
    // when a prime value is found, return it
	
    bool found = false;
	
    iochannel client, *ch = &client;
	
    if (io_open (ch, 'c'))
        {
        char *response;
        char *request;
		
        // initiate conversation with server
        io_send (ch, "zelda");
		
        // handle server responses
        for (;;)
            {
            io_listen (ch);
            response = ch->msgbuf;
			
            request = client_request (response, isprime);
            if (strcmp (request, "bail") == 0)
                {
                printf ("flaky server... giving up.\n");
                break;
                }
			
            io_send (ch, request);
            if (strcmp (request, "stop") == 0)
                {
                // found a good number
                found = true;
                *prime = atoi (response);
                break;
                }
            }
		
        io_close (ch);
        }
	
    return found;
    }
char *
client_request (char *response, bool (*isgood)(int))
    {
    // zelda client protocol:
    // for the given response string, return a request string
		
    if (strcmp (response, "ready") == 0)
        return "number";
	
    if (isdigit (response[0]))
        {
        int number = atoi (response);
        return (isgood (number) ? "stop" : "number");
        }
	
    // don't understand server response... bail
    return "bail";
    }


Implementing iochannel

The client and server apps are spared the details of managing the data transport. Iochannel takes care of that. But we still haven't covered how iochannel itself is implemented. So here we go:


Pipe version

Pipes are probably the original IPC technique -- they go way back. Whenever you use a command line such as:
    ps | grep "Tracker"
you are utilizing a pipe to send the data from one program to another. In this instance, it's a case of redirecting stdout for the first program and stdin for the second. This is probably the most common use for pipes.

But you can also create named pipes. In the BeOS, they will appear in the file system at /pipe. For example, we could create a named pipe called "zelda" -- in which case it would appear as a file called /pipe/zelda. It's not quite a real file, but it can be written to and read from. For implementing iochannel, two pipes are created:

/pipe/zelda_req
/pipe/zelda_rsp

The first acts as the request channel, the second as the response channel. The iochannel structure is implemented in the pipe version this way:

#define ZELDA_REQUEST_PIPE  "/pipe/zelda_req"
#define ZELDA_RESPONSE_PIPE "/pipe/zelda_rsp"
#define MSGBUFSIZE 32
typedef struct
    {
    int  inpipe;              // input  (read)  file descriptor
    int  outpipe;             // output (write) file descriptor
    char msgbuf[MSGBUFSIZE];  // message buffer
    }
iochannel;
Initializing the server requires creating the named pipes and then binding the file descriptors to the appropriate pipes:
bool
init_server (iochannel *ch)
    {
    // initialize an iochannel server:
    // create the needed IO pipes
    // and bind the corresponding read/write file descriptors
	
    ch->inpipe  = open (ZELDA_REQUEST_PIPE,  O_RDONLY|O_CREAT);
    ch->outpipe = open (ZELDA_RESPONSE_PIPE, O_WRONLY|O_CREAT);
    if ((ch->inpipe < 0) || (ch->outpipe < 0))
        {
        int err = errno;
        printf ("server unable to create message pipes %s\n", strerror (err));
        return false;
        }
	
    return true;
    }
The client initialization is about the same, only the pipes are presumed to already exist, and the inpipe and outpipe descriptors are hooked up in the reverse order.

Reading and writing are trivial:

void
io_send (iochannel *ch, char *data)
    {
    // send data on the output pipe
	
    if (write (ch->outpipe, data, strlen (data)) > 0)
        {
        printf ("sent '%s'\n", data);
        }
    }
void
io_listen (iochannel *ch)
    {
    // wait for data on the input pipe
    int n = read (ch->inpipe, ch->msgbuf, MSGBUFSIZE);
    if (n >= 0)
        ch->msgbuf[n] = 0;
    printf ("recv '%s'\n", ch->msgbuf);
    }
The io_close() function need only close the inpipe and outpipe descriptors. While testing the pipe version, I noticed that closing the pipe often took a bit too long, which messed up the ability to re-run another client session. So I added a one second delay in the close function. Seems pretty kludgy, but it appears to do the trick.


Port version

Ports are wonderful. They are a BeOS native feature and work beautifully for implementing iochannel. Once created, a port is accessible by any thread running in any address space. They are identified by name -- the name must be unique for each port and can be no longer than 32 characters.

With a port, you automatically get a message queue. You set the queue length yourself when creating the port. Thus unlike pipes, which are restricted to one message at a time, ports can pile up messages in the queue. However, I only use a queue size of 1 for my implementation because zelda just doesn't require anything more elaborate.

The iochannel structure looks like this:

#define ZELDA_REQUEST_PORT  "zelda_req"
#define ZELDA_RESPONSE_PORT "zelda_rsp"
#define MSGBUFSIZE 32
typedef struct
    {
    int  inport;              // input port
    int  outport;             // output port
    char msgbuf[MSGBUFSIZE];  // message buffer
    }
iochannel;
Pretty much what you'd expect. The server and client init routines are similarly easy:
bool
init_server (iochannel *ch)
    {
    // initialize an iochannel server:
    // create the request and response ports
	
    ch->inport  = create_port (1, ZELDA_REQUEST_PORT);
    ch->outport = create_port (1, ZELDA_RESPONSE_PORT);
    if ((ch->inport < 0) || (ch->outport < 0))
        {
        int err = errno;
        printf ("server unable to create message ports %s\n", strerror (err));
        return false;
        }
	
    return true;
    }
bool
init_client (iochannel *ch)
    {
    // initialize an iochannel client:
    // bind to the ports already created by the server
	
    ch->inport  = find_port (ZELDA_RESPONSE_PORT);
    ch->outport = find_port (ZELDA_REQUEST_PORT);
    if ((ch->inport < 0) || (ch->outport < 0))
        {
        printf ("unable to connect to zelda server\n");
        return false;
        }
    return true;
    }
As always, it's the server's responsibility to create the IO channels. The client then attempts to bind to them. Here the server creates the IO ports with a message queue length of 1. The client then looks for these ports using find_port().

The read and write routines are equally simple:

void
io_send (iochannel *ch, char *data)
    {
    // send data to the output port
	
    int n = write_port (ch->outport, 'ok', data, strlen (data));
	
    printf ("sent '%s'\n", data);
    }
void
io_listen (iochannel *ch)
    {
    // read incoming bytes from the input port
	
    int32 code;
	
    int n = read_port (ch->inport, &code, ch->msgbuf, MSGBUFSIZE);
    if (n >= 0)
        ch->msgbuf[n] = 0;
    printf ("recv '%s'\n", ch->msgbuf);
    }
With write_port() and read_port(), you send a four-byte integer message code with each message. This is more flexibility than I even need for iochannel, so I just use the value 'ok'. In other words, in my implementation, I'm not distinguishing between the message code and the message data, because for zelda, there is no distinction. But in general, this will not be the case, so port messaging is very powerful in this regard.

For closing up, you need only delete the ports. This frees up their resources and allows creating another set of ports for the next client/server conversation.


Socket version

It might seem out of place talking about sockets when dealing with IPC. Aren't sockets meant for communication across a network, and sending/receiving info with remote machines? Well, certainly they can do that, but actually, the socket interface is a complete communications mechanism that is capable of performing a wide variety of services over several address domains.

In fact, you can even create sockets strictly for sending messages on your local machine -- this is called the Unix domain. Well, wait... no you can't... not in the BeOS. Unix domain sockets are extremely efficient and useful, but they require the socket to be a genuine file descriptor. The BeOS implementation of sockets, however, is strictly geared toward the internet domain -- i.e. using IP addresses -- and a BeOS socket cannot be treated as a file descriptor. Bummer.

But still, the sockets interface is well designed and well understood. If you are familiar with it, then you'd feel quite comfortable with using it to implement iochannel. By doing so, we are using the net_server to handle the low-level message passing for us.

To bind/connect a socket, you need an address interface. This basically means a (port, IP) pair. In our instance, we are only sending messages to and from our local machine, so we specify the loopback address for the IP. The loopback address (0x7f000001) is a special IP address set aside to designate the local machine.

We also need to specify a port. This is a magic number that is unique for each type of service that is available. It is a two-byte value, so you can pick any number up to 65535 as long as it's over 1023 -- the first 1024 port numbers are reserved for well known services (e.g. 80 for http). So what port number to use? Well, I looked up into the air and pulled out the number 8888. It's as good as any other, I guess.

Alright, here's the iochannel representation with sockets:

// define the standard location (port, IP) for zelda communication
// note: these *must* be in network byte order
// (hence the calls to htons and htonl)
#define ZELDA_PORT (htons (8888))             // zelda port number
#define ZELDA_IP   (htonl (INADDR_LOOPBACK))  // IP address
#define MSGBUFSIZE 32
typedef struct
    {
    int  this_socket;         // socket for this channel
    int  msg_socket;          // socket for sending/receiving data
    char msgbuf[MSGBUFSIZE];  // message buffer
    }
iochannel;
The sockets version is implemented in a slightly different manner from pipes and ports. Namely, we no longer use a symmetrical arrangement for the IO descriptors. This is because sockets are already designed with client/server communication in mind, so that we don't have to set it up manually. Instead, we have to keep track of the additional socket created for a client/server session.

Here's how it works: for either client or server, you begin by calling socket() to create the original socket. For a server, you then call bind() to bind it to a particular (port, IP) and then listen() to set it up for creating client sessions. Then a call to accept() will block -- i.e. sleep forever until a client connection wakes it up. When a client has been detected, a new "proxy" socket is created for handling the data transmission and this is what is returned from the accept() function.

A client, on the other hand, need only call connect() for a given (port, IP) and it's ready to go.

To manage this distinction, iochannel uses two sockets. The first one represents the original socket created by the call to socket(). The second one represents the "proxy" socket and is called the message socket. All communication between server and client (after the connection) is thru the message socket.

For a server, this is a meaningful distinction -- there actually are two sockets present. For clients, the message socket is just set to the same value as the original socket -- i.e. you really only have one socket, since that's all that's required. A bit kludgy perhaps, but it allows both client and server to use the same iochannel structure just as they can with the pipe and port versions.

Having explained all this, let's have a look at the server and client initialization:

bool
init_server (iochannel *ch)
    {
    // initialize an iochannel server:
    // create the socket, bind it, listen, and accept a client
    static bool startup = true;
	
    int t,  // this socket
        m,  // message socket
        x;  // address length, filled in by accept()
	
    struct sockaddr_in where = {AF_INET, ZELDA_PORT, ZELDA_IP};
	
    // create the server socket and bind it to the standard location
    // (but only once... at program startup)
    if (startup)
        {
        t = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (t < 0)
            return false;
	
        // bind the socket to the correct interface and turn on listen mode
        if (bind (t, (struct sockaddr *) &where, sizeof where) < 0)
            {
            printf ("unable to bind socket: %s\n", strerror (errno));
            return false;
            }
        if (listen (t, 1) != 0)
            {
            printf ("listen failed: %s\n", strerror (errno));
            closesocket (t);
            return false;
            }
		
        ch->this_socket = t;
        startup = false;
        }
	
    // each time thru, set the message socket
    ch->msg_socket = -1;
    t = ch->this_socket;
    // block until a connection with a client has been achieved.
    // data is sent/recv'd thru the message socket which
    //      is a proxy created by the accept routine
    x = sizeof where;
    m = accept (t, (struct sockaddr *) &where, &x);
    if (m < 0)
        {
        closesocket (t);
        return false;
        }
    else
        {
        ch->this_socket = t;
        ch->msg_socket  = m;
        return true;
        }
    }
bool
init_client (iochannel *ch)
    {
    int t;  // this socket
	
    struct sockaddr_in where = {AF_INET, ZELDA_PORT, ZELDA_IP};
	
    // create the client socket
    t = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (t < 0)
        return false;
	
    // connect to the zelda server at the standard location.
    if (connect (t, (struct sockaddr *) &where, sizeof where) < 0)
        {
        int err = errno;
        printf ("unable to connect to zelda server: %s\n", strerror (err));
        return false;
        }
	
    // data is sent/recv'd thru the client socket
    ch->this_socket = t;
    ch->msg_socket  = t;
    return true;
    }
Wow! That's a bit ugly. Definitely more work than what was required for pipes or ports. But that's how it is... the sockets interface is simply more general and complex than the other two. It's also more powerful out of the box: it is capable of dealing with multiple client sessions at the same time. I didn't implement that -- though it wouldn't have been much more work -- because there would have been a lot more to do in the pipe and port versions to enable multiple sessions, and I wanted to keep the consistency between versions.

Fortunately, once set up, sending and receiving are no big deal:

void
io_send (iochannel *ch, char *data)
    {
    // send data thru the message socket
	
    int i, n;
    int	len = strlen (data);
	
    for (i = 0; i < len; i += n)
        {
        n = send (ch->msg_socket, data+i, len-i, 0);
        if (n <= 0)
            break;
        }
	
    printf ("sent '%s'\n", data);
    }
void
io_listen (iochannel *ch)
    {
    // receive incoming bytes via the message socket
	
    int n = recv (ch->msg_socket, ch->msgbuf, MSGBUFSIZE, 0);
    if (n >= 0)
        ch->msgbuf[n] = 0;
    printf ("recv '%s'\n", ch->msgbuf);
    }


Comparing the flavors

At last, we have working versions of all three methods of performing IPC. How do they compare?

Well, due to the impending deadline for this article, I didn't have time to set up a testing workbench for the various client and server apps. So I'll just offer my opinions based on observing the output.

Ports are clearly the fastest. Messages go flying back and forth between client and server at the speed of gossip. Combine this with the ease of programming them and their flexibility and power and you have a clear winner in my book. I would think that for most situations where you want to have different threads talking to one another, you will definitely want to use ports to implement the communication.

Does this mean the other two methods are useless? Not at all. The pipe version is just as easy to program and is almost as fast as ports. It does have less flexibility, but if you have simple needs and are comfortable with a file system style interface, then you can't go wrong with pipes. In fact, the best situation would be where you are accessing several resources, where some are files, some are devices, and others may just be in-memory structures. Using pipes would be a win here because every access could use a file descriptor interface and you wouldn't need to distinguish which type of underlying device the pipe is talking to.

The socket version was the most complex, took me the longest to write and debug (by a large margin) and yet produced the slowest version. It's possible that the slower speed is an unavoidable consequence of the extra overhead involved in sockets communication. However, I'm really suspicious that the net_server is the logjam. I'd be very curious to recompile the sockets version again after the OpenBeOS net team has implemented a new network stack. I have the feeling that the sockets version would suddenly become much more competitive.


Limitations

The sample client and server apps are fine as far as they go. But they do have several (built-in) limitations.

  • Only one message at a time is processed (no message queue is utilized)
  • The server can service only one client at a time
  • Fragmentation is not dealt with (but really isn't needed since the messages are so small)
  • Flow control is not dealt with (again, not a big deal here since only small data strings are sent)
  • No authentication or security (well, that's just overkill for zelda)
There are probably a few others I've left out. I'll offer the standard author's excuse and tell you that any of these limitations could be fixed, but will be left as an exercise for the reader (wide grin).


Source Code:
zeldaIPC.zip