Blogs

Haiku in VirtualBox

Blog post by koki on Thu, 2007-01-25 18:59

Today I woke up to the news that Haiku was mentioned in MYCOM Journal, a Japanese IT news site, in a regular column known as OSX Hacking. This time the author was playing with VirtualBox and tried running Haiku on it. Well, he did succeed, but the speed was not up to expectations: GLTeaPot ran at the incredible speed of 1.3 FPS (yes, you read that right!) on a first-generation 1.83 GHz MacBook. Here is another screenshot of the entire Haiku desktop running in VirtualBox.

Trac is back

Blog post by wkornewald on Thu, 2007-01-25 09:23

Hopefully, you noticed that Trac, our bug and task tracker, is back and has a new home: http://dev.haiku-os.org. We switched our hosting provider and now everything works well and Trac is stable (as far as I know ;).

New Website Goes Live

Blog post by koki on Tue, 2007-01-23 15:04

We finally deployed the new website. Waldemar fixed a bug at the last minute that we had discovered in one of Drupal's modules, and we then asked Takidau to change the DNS settings. It was a long road. How long did it take? A bit more than six months? Much longer than I would have expected, I have to admit. But I think it is a good start. As some have already pointed out on the Haiku mailing list, there are still a few areas that need to be tweaked.

First Google Tech Talk

Blog post by bga on Fri, 2007-01-19 19:55

Today I presented the first (of three) Google Tech Talks scheduled over the following weeks. The reception I got from other Google engineers was really good and, more than that, they gave me lots of feedback that I will apply to the following presentations. Seeing the reactions led me to conclude that we are on the right path: even with a highly technical audience, the simplicity we are trying to achieve with Haiku caught the interest of several engineers.

Getting ready to deploy

Blog post by koki on Fri, 2007-01-19 06:42

Today I finished making all the changes that had to be made after dropping Drupal's authorship.module; it was much more work than expected (I should have known), as all the articles belonging to authors who did not have an account had to be edited one by one. Well, it's done now, so all that is left before we can finally do the migration is creating two pages: Contributing Content and Spreading the Word.

app_server Memory Management Revisited

Blog post by axeld on Thu, 2006-03-23 08:30

I recently looked into why BeIDE's interface only had green squares where its icons should have been (bug #313). The function importing the client's bitmap data did not work correctly, and while playing with it, the app_server suddenly crashed, and continued to do so in a reproducible way.

How was this possible? Bitmaps live in a memory pool shared between the app_server and an application. Unfortunately, the app_server packed those bitmaps into larger shared areas, and put the structures managing that space into those same areas as well - just like a userland memory allocator would. However, if a client clobbered memory outside of its space in those areas (and that's what buggy clients do all the time), those structures could easily be broken, causing the app_server to crash the next time it tried to use them. Also, since all applications shared the same area, they could easily clobber each other's bitmaps as well.

But the way client memory was managed had even more disadvantages: the client would clone the area once for each bitmap therein. For an application like Tracker, with potentially tons of icons (which are bitmaps), that wasted huge amounts of address space: if the area was 1 MB large and contained 500 icons, Tracker would have cloned it 500 times, once for each icon, wasting 500 MB of address space. With a folder full of image thumbnails, the maximum limit (2 GB per application) could easily have been reached. Not a very good idea.

Another problem of the previous solution was memory fragmentation and contention: if many applications allocated server memory at the same time, their memory would be spread out over the available areas, and since this was a single shared resource, all applications had to reserve their memory one after the other, for every single allocation. If one of these applications then quit, its memory had to be freed again, leaving holes in the area. Of course, the app_server needed to create quite a few areas - and with memory fragmentation like this, it would waste much more memory and address space, which is a real concern in the app_server.

Anyway, the new solution works quite differently: the app_server now tries to have a single area per application - if that application dies, its area can be freed instantly, without having to worry about other applications. To achieve this, the client reserves a certain address range for the app_server - that makes sure the area can be resized if required - while on the server's side, the area is always exactly as large as needed. Since the app_server doesn't reserve space for the client, it works with fully relocatable memory: if an area cannot be resized in the app_server (because other areas are in its way), it can be relocated to another address where it fits. If that's not possible, a new area is created, and the client is triggered to clone it. Of course, every area is now only cloned once in the client, too.

The structures that manage the allocations and free space in these areas are now separated from the memory itself, and are not reachable by the client - with the desired effect that the app_server can no longer be crashed so easily this way. Contention is reduced to the needs of a single application, which should be much more reasonable.

As an additional bonus, the new solution should be much faster due to the vastly reduced number of area creations and clones. The allocator itself is pretty simple, though, and could probably be improved further; however, it works pretty well so far.

APM Support

Blog post by axeld on Sat, 2006-02-04 13:03

A few days ago, we got a working APM driver in our kernel. APM stands for Advanced Power Management. It's a service that is part of the computer's firmware, commonly called the BIOS in the x86 world. The latest APM standard, version 1.2, is already almost 10 years old. Today's computers still support it, even though the preferred way to get similar services (among others) is now ACPI, the Advanced Configuration and Power Interface. Thanks to Nathan Whitehorn's efforts and Intel's example implementation, we even have the beginnings of ACPI support in Haiku as well.

But let's go back to APM. Theoretically, it can be used to put your system into one of several power states, like suspend or power off. You may also read out battery information from your laptop, such as the estimated remaining power. It also supports throttling the CPU on some laptops, but it only differentiates between full speed and slower speed.

The driver doesn't do much yet, but it should let you shut down your computer. In addition, it follows the standard and periodically polls for APM events. An example APM event would occur when you connect the AC adapter to your laptop.

By default, the driver is currently disabled, but that might change once I have a better picture of which hardware it doesn't run on yet. I have successfully tested it on 4 different systems over here, but I also have one negative report.

If you're interested in testing Haiku's APM support yourself, you can add the line "apm true" to your kernel settings file. When you then enter "shutdown -q" in the Terminal, the system should be turned off. If an error comes back, APM couldn't be enabled for some reason. If nothing happens, your computer's APM implementation is probably not that good. In some rare cases, your computer may refuse to boot with APM enabled - in this case, you can disable APM in the safemode settings of the boot loader. If it really doesn't work, I would be very interested in the serial debug output, in case you can retrieve it.
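Putting those steps together, it looks like this (the settings file path shown is how later Haiku layouts name it, so treat it as an assumption for your particular build):

```
# kernel settings file - on recent layouts:
#   /boot/home/config/settings/kernel/drivers/kernel
# Add this line to enable the APM driver:
apm true

# Then, from Terminal:
#   shutdown -q
# The system should power off via APM.
```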

In other news, we now also have syslog support in the kernel, as well as on-screen debug output during boot. The former can be enabled in the kernel settings file with "syslog_debug_output true", while the latter can be enabled in the safemode settings of the boot loader. "syslog" is a system logging service that currently stores its output file under /var/log/syslog. Note that you must shut down the system gracefully to make sure the log is written to disk.
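For reference, the syslog switch goes in the same kernel settings file as the APM one (the on-screen variant has no settings-file line; it is toggled in the boot loader's safemode menu instead):

```
# kernel settings file - send kernel debug output to the syslog,
# which ends up in /var/log/syslog after a graceful shutdown:
syslog_debug_output true
```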
