Late last year, we decided to rewrite an important part of the Haiku app_server. Why was that? Let's start out with what the app_server is supposed to do: at its heart, it manages multiple applications simultaneously, using the display device as a shared resource. Two of the important system objects through which this is organized are windows and views. Through views, applications can draw information onto the screen, while a window is merely a container for views. One big difference between the two is that, to a certain degree, all views within one window are expected to be self-organized, while the windows themselves are organized by the server. The views form a tree-like hierarchy of parent and child views. It is expected that all children of the same parent (sibling views) do not overlap. If they do, the space they share belongs to both of them, which leads to strange results. Windows, on the other hand, are managed by the server. This is an important difference - more on that later. A window, in fact, doesn't care about other windows at all. The server simply tells each window which part of the screen it may draw into - the window's visible part within the stack of windows on screen. This idea is called "clipping".
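To make the clipping idea a bit more concrete, here is a minimal sketch - not actual app_server code, and with made-up names. The real server uses BRegion, which stores spans of rectangles; this toy version represents a region as a set of unit "pixels", but the arithmetic (subtraction) is the same idea: a window's visible region is its own frame minus the frames of all windows stacked above it.

```cpp
#include <initializer_list>
#include <set>
#include <utility>

// Toy stand-in for BRegion: a set of unit "pixels" on a small grid.
using Region = std::set<std::pair<int, int>>;

// Build a rectangular region (inclusive coordinates).
Region MakeRect(int left, int top, int right, int bottom)
{
	Region region;
	for (int y = top; y <= bottom; y++)
		for (int x = left; x <= right; x++)
			region.insert({x, y});
	return region;
}

// Everything in `a` that is not in `b`.
Region Subtract(const Region& a, const Region& b)
{
	Region result;
	for (const auto& p : a)
		if (b.count(p) == 0)
			result.insert(p);
	return result;
}

// The visible ("clipping") region of a window is its frame minus the
// frames of all windows stacked above it. Names are illustrative only.
Region VisibleRegion(const Region& frame,
	const std::initializer_list<Region>& windowsAbove)
{
	Region visible = frame;
	for (const Region& above : windowsAbove)
		visible = Subtract(visible, above);
	return visible;
}
```

With a 10x10 window partially covered by another window, the visible region is exactly the uncovered part - and that is all the covered window gets to draw into.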
There are other approaches to managing this, most prominently "fully buffered windows". In that setup, each window has its own frame buffer somewhere off screen, into which it can let views draw independently of other windows. The final scene is then composed from all the window buffers. But the clipping approach is what the BeOS app_server used, and it is what we currently use in the Haiku app_server as well. It means that drawing is performed directly on the visible frame buffer. The previous app_server design that implemented what I just described didn't really distinguish between views and windows as far as clipping was concerned. Everything was a "Layer" and was organized in one big tree starting from something called the "RootLayer". The problem with this approach was that, while it seemed more generic, it didn't model the requirements very well and meant that a lot of computation was necessary to change the clipping state of the entire layer tree in one operation.
The new design only cares about windows as far as changing the clipping state is concerned (which needs to be done in one atomic operation). The clipping for views is computed only when it is actually needed, for example when a view wants to draw something. The nice thing about this is that changing the global clipping state takes much less time, since the number of windows on screen is far smaller than the total number of views. Additionally, it means that the view clipping calculation can be done in each window's thread (independently of other windows), which will benefit multiple-CPU machines. Compared to the previous implementation, the new one is much better encapsulated in the different classes. The old code had lots of dependencies scattered throughout the classes, which made it hard to maintain and understand.
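The lazy per-view clipping can be illustrated roughly like this. All names here are invented for the sketch (the real classes are WindowLayer and ViewLayer, and they work with regions, not single rectangles): a view's clipping is computed only when it is asked for, by intersecting the view's frame with its ancestors' frames and the window's visible region. Because sibling views must not overlap, no other views need to be consulted.

```cpp
#include <algorithm>

// Illustrative sketch only - not the actual ViewLayer interface.
struct Rect {
	int left, top, right, bottom;
};

Rect Intersect(const Rect& a, const Rect& b)
{
	return { std::max(a.left, b.left), std::max(a.top, b.top),
		std::min(a.right, b.right), std::min(a.bottom, b.bottom) };
}

struct View {
	Rect frame;             // in window coordinates, for simplicity
	View* parent = nullptr;
};

// Computed on demand, in the window's own thread: walk up the parent
// chain, intersecting frames, then clip to the window's visible area.
// (Children obscuring their parent are ignored in this sketch.)
Rect ClippingFor(const View& view, const Rect& windowVisible)
{
	Rect clipping = view.frame;
	for (View* p = view.parent; p != nullptr; p = p->parent)
		clipping = Intersect(clipping, p->frame);
	return Intersect(clipping, windowVisible);
}
```

The point of the design shows up here: nothing global is touched, so each window can run this computation in its own thread without locking the others.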
Updates and Dirty Regions
All visible information that is available to the app_server is already showing on the screen. If the order of windows is changed, for example - which means that information previously covered by another window is to become visible - the server has to ask the client application to draw these parts. The situations in which something has to be drawn are these: windows or views being moved, resized, shown, or hidden; views being scrolled; and views being "invalidated" by client request. To the app_server, all of these look very much the same: it manages a so-called "dirty region" for each window. If a region within a window is marked dirty, it will eventually result in the client being asked to redraw it. In order for the server to know when a region is clean again, the start and the end of the sequence of drawing commands that belong to a single repaint request are marked. What makes all this a little more complicated is the asynchronous nature of the multitasking environment. Right in the middle of one application repainting a region of a window, additional regions of the same window might become dirty. To manage this efficiently, each of the server-side window objects has a pending and a current "update session". It all starts with the pending update session. As soon as even a tiny bit of the window is dirty, the server informs the client window object that a redraw is necessary - it just sends a message. Some time later, the client window will process this message, and at that point it will ask the server which parts exactly need updating. Meanwhile, on the server side, the pending update session's dirty region might have grown further. But as soon as the client has asked what the region is, the pending session becomes the current session. Any additional dirty regions then go into a new pending session; the current session stays fixed.
That is, the current session might still shrink in case some of its region intersects with new dirty regions, but it is not allowed to grow. Shrinking is performed to avoid repainting the same area again in the next update session. When the client starts the current update session, the server also paints the backgrounds of all affected views (and that is the reason the current session has to stay fixed). Drawing commands might arrive at the app_server at any time, but within a current update session, the drawing is always clipped to the region of that session. This is because view backgrounds have not been painted outside this region, so the client would otherwise draw onto a stale background. As was said, any new dirty regions go into a new pending update session. When the current session is over, which the client tells the server, the server goes back to step one and tells the client that regions need repainting (in case there is another pending session). When that message reaches the client, the pending session becomes the current one and everything repeats.
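The pending/current handshake described above can be sketched as a small state machine. Everything here is illustrative - the real server keeps BRegions and talks to the client via messages, not direct calls, and the method names are invented. "Region" is reduced to a set of dirty tile ids to keep the sketch short:

```cpp
#include <set>

using Region = std::set<int>;   // toy region: a set of dirty tile ids

class UpdateSessions {
public:
	// New dirt always goes into the pending session. If it intersects
	// the fixed current session, the current session shrinks so the
	// same area is not painted twice.
	void MarkDirty(const Region& dirty)
	{
		for (int tile : dirty) {
			fPending.insert(tile);
			fCurrent.erase(tile);   // shrink only, never grow
		}
	}

	// The client asks which region needs repainting: pending becomes
	// current and stays fixed; further dirt starts a new pending session.
	Region BeginUpdate()
	{
		fCurrent = fPending;
		fPending.clear();
		return fCurrent;
	}

	// Drawing during an update is clipped to the current session.
	bool IsAllowedToDraw(int tile) const { return fCurrent.count(tile) > 0; }

	// The client says it is done; if dirt accumulated meanwhile, the
	// server notifies the client again and the cycle repeats.
	bool EndUpdate()
	{
		fCurrent.clear();
		return !fPending.empty();   // another update round needed?
	}

private:
	Region fPending;
	Region fCurrent;
};
```

Note how a tile that becomes dirty again while it sits in the current session is moved back into pending: drawing it now would be wasted, since it has to be repainted in the next round anyway.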
One more difficulty lies within scrolling views. Scrolling a view means that previously hidden portions of it become visible, and those portions need to be repainted by the client. The client might not yet know that it hasn't redrawn those parts and request the next scroll already. This can happen, for example, if several mouse events sit in the client's event queue in front of the server's repaint request. Those events will be processed first, and they might mean to keep scrolling the view. The server, on the other hand, has to keep track of which parts of the view have not been drawn yet but are requested to be scrolled anyway. In this case, the dirty region is shifted along with the scrolled region - something that didn't work correctly in the old implementation.
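The dirty-region shifting can be illustrated with the same toy region type as before (a set of unit pixels; the function name and exact scroll semantics are invented for this sketch, not taken from the app_server): when the contents move by (dx, dy), still-dirty pixels move with them, and the strip exposed on the opposite side becomes dirty as well.

```cpp
#include <set>
#include <utility>

using Region = std::set<std::pair<int, int>>;   // toy stand-in for BRegion

Region MakeRect(int left, int top, int right, int bottom)
{
	Region region;
	for (int y = top; y <= bottom; y++)
		for (int x = left; x <= right; x++)
			region.insert({x, y});
	return region;
}

// Scroll a view's contents by (dx, dy) before the client has repainted:
// the not-yet-drawn (dirty) pixels travel with the contents, and pixels
// whose source lay outside the view are newly exposed, hence dirty too.
Region ScrollDirtyRegion(const Region& dirty, const Region& viewBounds,
	int dx, int dy)
{
	Region result;
	// 1. Shift existing dirt along with the contents, clipped to the view.
	for (const auto& p : dirty) {
		std::pair<int, int> moved = { p.first + dx, p.second + dy };
		if (viewBounds.count(moved) > 0)
			result.insert(moved);
	}
	// 2. Pixels whose source was outside the view are newly exposed.
	for (const auto& p : viewBounds) {
		std::pair<int, int> source = { p.first - dx, p.second - dy };
		if (viewBounds.count(source) == 0)
			result.insert(p);
	}
	return result;
}
```

This is exactly the bookkeeping that lets the server accept further scroll requests while a repaint is still outstanding: the dirty area stays attached to the content it belongs to.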
Well, I hope to have given you a little insight into the workings of our app_server. The classes involved in the implementation of what I described are Desktop, ServerWindow, WindowLayer, and ViewLayer. A ServerWindow is the object through which a client BWindow object talks to the app_server. It receives commands for creating and managing views, drawing to the screen, and the like. The WindowLayer is the server-side container for views, which are represented by ViewLayer objects. The client-side BView hierarchy is mirrored on the server by a ViewLayer hierarchy, which contains all the information the server needs to correctly carry out drawing commands. The Desktop is the object through which the app_server manages all its WindowLayers and their clipping.
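Very roughly, the relationships between these classes could be summarized as follows - bare-bones declarations that only show ownership and mirroring, not the actual interfaces, which are of course far richer:

```cpp
#include <memory>
#include <vector>

// Sketch only: the real classes carry clipping state, locking, and
// message handling on top of the structure shown here.

struct ViewLayer {
	// Mirrors one client-side BView.
	ViewLayer* parent = nullptr;
	std::vector<std::unique_ptr<ViewLayer>> children;
};

struct WindowLayer {
	// Server-side window: contains the view tree.
	std::unique_ptr<ViewLayer> topView;
};

struct ServerWindow {
	// Communication endpoint for one client BWindow; receives view
	// management and drawing commands and applies them to its window.
	WindowLayer* window = nullptr;
};

struct Desktop {
	// Manages all windows and computes their clipping in one place.
	std::vector<WindowLayer*> windows;   // front-to-back order
};
```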
The next task is to optimize the implementation for speed. There are still several things we can try, including removing floating point coordinates where they don't make sense, optimizing the locking scheme and the clipping operations, and generally improving drawing speed itself.