Finally

Blog post by axeld on Tue, 2005-10-25 01:02

I just booted Haiku on an SMP machine. Unfortunately, I am not really sure which change exactly triggered this - I had tried so much, and all of a sudden it started to work after I disabled setting up the APIC (the advanced programmable interrupt controller) to use ExtINT delivery mode. That shouldn't tell you anything, I know, but it's still remarkable that this code was originally disabled as well.
It took me quite a number of hours to get it working, so it's a bit frustrating not to know what was actually responsible for the hiccup - but it hasn't made me curious enough to start an investigation into this topic for now...

SMP update

Blog post by axeld on Sat, 2005-10-22 18:34

Even though I usually don't work on the weekend, I had to, since I didn't manage to work 8 hours on Friday.

Unfortunately, I still haven't got SMP to work. I've investigated the issue, and came to the simple conclusion that the APIC interrupts don't reach their target (it just took me some time to get there, and to exclude all other possible faults). I can trigger such an interrupt manually, so the second CPU is set up correctly, but its APIC doesn't seem to be. You don't understand a word of what I just said? Well, let's just say one CPU doesn't manage to talk to the other CPU (through the APIC, the "advanced programmable interrupt controller").

Signal Distractions

Blog post by axeld on Fri, 2005-10-21 01:17

It took a bit longer to get the dual-CPU machine up and running again - it has two 500 MHz PIIIs and the hard drive is a bit older as well, so it took about two hours to update the source repository and get it compiled.

While waiting for the machine to complete its task, I had the time to look into some other known issues of our code, and clean up the signaling code a bit. We are now able to handle signals with interrupts turned on, some minor bugs went away, and there is now support for sigsuspend() and sigpending() - nothing earth-shaking, but definitely a step in the right direction.

There were some other distractions, so I only played around with SMP briefly - I am just sure now that it still doesn't work :-)

SMP

Blog post by axeld on Thu, 2005-10-20 10:05

I'm done implementing sub transactions for now - I haven't tested detaching sub transactions yet, but everything seems to work fine. Time will tell :-)

A complete Tracker build dropped from 13.5 minutes to 5.4 minutes - that's great, but BeOS R5 does the same job on this machine in around 2.5 minutes. So even though this is an improvement, we still have a long road ahead of us. I can only guess where we lose those 3 minutes for now, but I am sure we'll find out well before R1. One of the components responsible is likely the caching system, as it still only looks up single blocks/pages instead of doing bigger reads and read-ahead.
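To illustrate why read-ahead matters: here is a toy cache (all names and numbers are made up for illustration, nothing here is Haiku code) that pulls in a whole window of blocks on a miss - a sequential reader then only touches the "device" once per window instead of once per block.

```c
#include <stdbool.h>

#define CACHE_BLOCKS 64
#define READ_AHEAD    8   /* hypothetical read-ahead window size */

static bool cached[CACHE_BLOCKS];
static int  device_reads;   /* how many times we hit the "disk" */

/* Fetch one block; on a miss, also pull in the following blocks, so a
   sequential reader only pays for every READ_AHEAD-th access. */
static void get_block(int block)
{
	if (cached[block])
		return;

	device_reads++;   /* one physical read covers the whole window */
	for (int i = block; i < block + READ_AHEAD && i < CACHE_BLOCKS; i++)
		cached[i] = true;
}
```

Reading all 64 blocks in order then costs only 64 / 8 = 8 device reads, while a single-block cache would pay for all 64.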

Sub-Transactions

Blog post by axeld on Wed, 2005-10-19 15:38

A small update to the BFS incompatibility: I've now ported the original logging structure to the R5 version of BFS as well, so that tools like bfs_shell can now successfully mount "dirty" volumes, too. I also found another bug in Be's implementation, and had to cut down the log entry array by one to make it work with larger transactions.

Now I am working on implementing sub transactions. If you have tried out Haiku and compiled some stuff, or just redirected some shell output to a file, you are undoubtedly aware that this takes ages on the current system.
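The basic idea can be sketched in a few lines - starting a transaction while another one is open simply nests as a sub transaction, and only the outermost commit actually flushes to the log. All names are hypothetical, and real-world complications (like detaching a sub transaction) are omitted.

```c
#include <stdbool.h>

static int depth;     /* current transaction nesting level */
static int flushes;   /* how often we actually wrote the log */

/* Starting a transaction inside an open one just nests. */
static void start_transaction(void)
{
	depth++;
}

/* Returns true when this commit was the outermost one and hit the log. */
static bool end_transaction(void)
{
	if (--depth > 0)
		return false;   /* sub transaction: the parent flushes later */

	flushes++;          /* outermost commit: write the log once */
	return true;
}
```

Many small operations can thus share one log write instead of each paying for their own, which is exactly what slow compile/redirect workloads need.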

Another BFS surprise

Blog post by axeld on Tue, 2005-10-18 23:25

Turns out the BFS logging code is not that intelligent - it uses block_runs in the log area, but doesn't actually take advantage of them. In other words: it only accepts block_runs with a length of 1 - which effectively kills the whole idea of using them. The format is just as space consuming as the single block number arrays I had before, but doesn't offer the binary search capability we had earlier.

While our code could now use block_runs the way they should be used, I have disabled joining separate block_runs to keep our BFS fully compatible with Be's in this regard. If we someday leave compatibility with the current BFS behind, we can enable it again, of course.
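For reference, joining two block_runs would look roughly like this. The field layout follows the commonly documented BFS on-disk format and is my assumption for illustration, not something shown in this post.

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed BFS on-disk run descriptor: a start block and a length within
   one allocation group. */
typedef struct {
	int32_t  allocation_group;
	uint16_t start;
	uint16_t length;
} block_run;

/* Try to merge 'next' into 'run': only possible when both runs live in
   the same allocation group and are directly adjacent. */
static bool join_runs(block_run *run, const block_run *next)
{
	if (run->allocation_group != next->allocation_group)
		return false;
	if (run->start + run->length != next->start)
		return false;
	if ((uint32_t)run->length + next->length > UINT16_MAX)
		return false;   /* joined length must still fit in 16 bits */

	run->length += next->length;
	return true;
}
```

With joining disabled, every run keeps length 1, which matches Be's behavior described above.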

Analyze This

Blog post by axeld on Tue, 2005-10-18 10:00

This morning, I analyzed the BFS log area structure. It turns out it's very different from what I did for our BFS.
Our current log structure looks like this:

block 1 - n:
    uint64   number of blocks
    off_t[]  array of block numbers
block n+1 - m:
    real block data
While the one from BFS looks like this:

block 1:
    uint32       number of runs
    uint32       max. number of runs
    block_run[]  array of block runs
block 2 - m:
    real block data
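The two layouts above, expressed as C structs - the header fields are taken straight from the listings, while block_run's exact layout is an assumption based on the usual BFS on-disk format, since the post doesn't show it:

```c
#include <stdint.h>

/* Our log header: a plain array of raw block numbers, which can be kept
   sorted and searched with a binary search. */
typedef struct {
	uint64_t block_count;   /* number of blocks */
	int64_t  blocks[];      /* off_t[] array of block numbers */
} our_log_header;

/* Assumed BFS run descriptor (not shown in the post itself). */
typedef struct {
	int32_t  allocation_group;
	uint16_t start;
	uint16_t length;
} block_run;

/* Be's log header: a bounded array of block_runs. */
typedef struct {
	uint32_t  count;       /* number of runs */
	uint32_t  max_count;   /* max. number of runs */
	block_run runs[];      /* array of block runs */
} bfs_log_header;
```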