bfs

Implement BFS over FUSE

Blog post by raghuram87 on Thu, 2009-05-28 09:07

I am a 4th-year B.Tech student at the Indian Institute of Technology Madras, Chennai, India.

I will be working on implementing a FUSE-based filesystem for BFS so that BFS partitions can be mounted natively in Linux and other POSIX operating systems.
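
To give a rough idea of what such a filesystem involves: a FUSE filesystem is essentially a set of callbacks handed to libfuse. Below is a minimal read-only skeleton, assuming the FUSE 2.x high-level API; the BFS-specific parts are only placeholder comments, not real parsing code.

/* A minimal read-only FUSE skeleton (sketch only, assuming the FUSE 2.x
   high-level API); the BFS-specific parts are placeholder comments. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

static int bfs_fuse_getattr(const char *path, struct stat *stbuf)
{
    memset(stbuf, 0, sizeof(*stbuf));
    if (strcmp(path, "/") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;
        stbuf->st_nlink = 2;
        return 0;
    }
    /* ...look the path up in the BFS directory B+trees and fill in stbuf... */
    return -ENOENT;
}

static int bfs_fuse_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                            off_t offset, struct fuse_file_info *fi)
{
    (void)path; (void)offset; (void)fi;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    /* ...walk the directory's B+tree and call filler() for every entry... */
    return 0;
}

static struct fuse_operations bfs_fuse_ops = {
    .getattr = bfs_fuse_getattr,
    .readdir = bfs_fuse_readdir,
};

int main(int argc, char *argv[])
{
    /* Real code would first pull the BFS device/image path out of argv. */
    return fuse_main(argc, argv, &bfs_fuse_ops, NULL);
}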

I enjoy building systems like these, where the final outcome is really interesting to watch and useful. I will be keeping the community updated on my progress in this blog. Happy coding all! Enjoy your summer!!

HCD [bfs]: Status Report #1

Blog post by emitrax on Sun, 2008-06-22 17:33

It's been almost a month already since the very first Haiku Code Drive began!

First of all, thanks to all of those who voted for me; I was very surprised by the poll result.

Now some updates about my project.

As you know, my project aims to test the stability of the BFS file system. In order to do so, the idea is to first implement XSI semaphores and then compile bonnie++, a benchmark suite for file systems. To be honest, the XSI semaphores are not strictly mandatory: it would be faster to simply port bonnie++ to Haiku, as that would require only a few changes (e.g. those concerning locking). However, in the long run Haiku would benefit more if I implement the semaphores mentioned above, as they would also make it more POSIX compliant.
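
For reference, this is roughly what the user-space side of that interface looks like: a small sketch using the standard semget()/semop()/semctl() calls, assuming a system where the caller has to define union semun itself (as Linux does).

/* Small user-space sketch of the XSI semaphore interface. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Linux requires the caller to define this union; some systems already
   declare it in <sys/sem.h>. */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

int main(void)
{
    /* Create a private set containing a single semaphore. */
    int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (id < 0) {
        perror("semget");
        return 1;
    }

    /* Initialize it to 1 so it can act as a simple lock. */
    union semun arg = { .val = 1 };
    if (semctl(id, 0, SETVAL, arg) < 0) {
        perror("semctl");
        return 1;
    }

    /* Acquire ("P") and release ("V") via semop(). */
    struct sembuf acquire = { 0, -1, 0 };
    struct sembuf release = { 0, +1, 0 };
    semop(id, &acquire, 1);
    /* ...critical section... */
    semop(id, &release, 1);

    /* Remove the semaphore set again. */
    semctl(id, 0, IPC_RMID);
    return 0;
}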

The easiest part was the user-space one; now I'm working on the kernel side. I also started a thread about this on the GSoC mailing list, so you can follow it via the link below.
http://www.freelists.org/archives/haiku-gsoc/06-2008/msg00009.html

Although I'm not done with the above, I've already started running some tests without bonnie++ and have hit the first bug. See ticket #2400.

The test is quite simple but very time consuming, especially on my current hardware (by the way, if someone is willing to try the test on real hardware or a faster machine, please contact me).

I first packed the whole Haiku source code into a tarball from Linux, moved it to my USB disk, ran VMware, and tried to unpack the almost 500 MB tarball (1.5 GB unpacked) from Haiku.
Yeah... "Good luck with that! :)"

The result, which at first seemed to me to be a BFS bug, turned out to be a VFS one, although we are still discussing it on the GSoC mailing list. See the link below for more details.
http://www.freelists.org/archives/haiku-gsoc/06-2008/msg00021.html

Although it has been confirmed not to be a BFS bug, as you can read on the mailing list, I'm still trying to fix it, while also finishing the XSI semaphore implementation.

That's all for now.

Why BFS needs chkbfs

Blog post by axeld on Fri, 2007-10-05 09:16

You are probably aware of the existence of chkbfs. This tool checks the file system for errors, and corrects them if possible.
Nothing is perfect, so you might not even be asking yourself why a journaling file system comes with such a tool.

In fact, it wasn't originally included or planned in the first releases of the new BFS file system. It was added because there is a real need for this tool, and you are advised to run it after having experienced some BeOS crashes.

Sub-Transactions

Blog post by axeld on Wed, 2005-10-19 15:38

A small update to the BFS incompatibility: I've now ported the original logging structure to the R5 version of BFS as well, so that tools like bfs_shell can now successfully mount "dirty" volumes, too. I also found another bug in Be's implementation, and needed to cut down the log entry array by one to make it work with larger transactions.

Now I am working on implementing sub-transactions. If you have tried out Haiku and compiled some stuff, or just redirected some shell output to a file, you are undoubtedly aware that this takes ages on the current system.
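
Presumably the cost comes from every small write ending up in its own transaction, each paying for a separate log write; sub-transactions let consecutive small operations share one open transaction and hit the log together. The following is a purely hypothetical sketch of that batching idea; the names and the limit are made up for illustration and are not Haiku's block cache API.

/* Hypothetical sketch of the sub-transaction idea (not Haiku's actual API):
   consecutive small operations join a still-open parent transaction instead
   of each forcing its own log entry. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

enum { MAX_BLOCKS_PER_LOG_ENTRY = 128 };   /* made-up limit for the sketch */

struct transaction {
    int    id;        /* log sequence number */
    size_t blocks;    /* dirty blocks collected so far */
    bool   open;      /* still accepting sub-transactions? */
};

/* Hypothetical: write the collected blocks out as one log entry
   (here it just prints what would happen). */
static void flush_to_log(struct transaction *t)
{
    printf("log entry %d: %zu blocks\n", t->id, t->blocks);
}

/* Start a sub-transaction: reuse the parent while it is open and has room,
   otherwise flush it and begin a new one. */
static void start_sub_transaction(struct transaction *t)
{
    if (t->open && t->blocks >= MAX_BLOCKS_PER_LOG_ENTRY) {
        flush_to_log(t);
        t->id++;
        t->blocks = 0;
    }
    t->open = true;
}

int main(void)
{
    struct transaction log = { 0, 0, false };
    /* Simulate many tiny writes, e.g. shell output appended line by line. */
    for (int i = 0; i < 1000; i++) {
        start_sub_transaction(&log);
        log.blocks += 1;              /* each write dirties one block here */
    }
    flush_to_log(&log);               /* final flush */
    return 0;
}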

Another BFS surprise

Blog post by axeld on Tue, 2005-10-18 23:25

Turns out the BFS logging code is not that intelligent: it uses block_runs in the log area, but it doesn't really make use of them. In other words, it only accepts block_runs with a length of 1, which effectively kills the whole idea of using them. It's just as space consuming as the single block number arrays I had before, but without the binary search capability we had earlier.
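
For readers who don't have the on-disk format in their head: a block_run addresses a range of consecutive blocks within one allocation group. A sketch of the structure, assuming the layout documented for BFS (stdint types used here instead of Be's int32/uint16 typedefs):

/* Sketch of the BFS block_run (layout as documented for BFS). */
#include <stdint.h>

struct block_run {
    int32_t  allocation_group;   /* which allocation group the blocks live in */
    uint16_t start;              /* first block within that group */
    uint16_t length;             /* number of consecutive blocks */
};
/* A single run of length N addresses N consecutive blocks in one entry;
   limiting runs to length 1, as Be's logging code does, gives up exactly
   that compactness. */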

While our code could now use block_runs as they were meant to be used, I have disabled joining separate block_runs to keep our BFS fully compatible with Be's in this regard. If we someday leave compatibility with the current BFS behind, we can enable it again, of course.

Analyze This

Blog post by axeld on Tue, 2005-10-18 10:00

This morning, I went through analyzing the BFS log area structure. Turns out it's very different from what I did for our BFS.
Our current log structure looks like this:


block 1 - n:
    uint64       number of blocks
    off_t[]      array of block numbers
block n+1 - m:
    real block data

While the one from Be's BFS looks like this:


block 1:
    uint32       number of runs
    uint32       max. number of runs
    block_run[]  array of block runs
block 2 - m:
    real block data
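
Rendered as C structures, the two layouts compare roughly like this; the structs are a sketch inferred from the descriptions above, not copied from any actual header, and the trailing arrays are of course variable-length on disk.

/* Sketch of the two log layouts above (inferred from the descriptions,
   not taken from actual headers). */
#include <stdint.h>

struct block_run {               /* as sketched further above */
    int32_t  allocation_group;
    uint16_t start;
    uint16_t length;
};

/* Our current log: blocks 1..n hold the header, the block data follows. */
struct our_log_header {
    uint64_t block_count;        /* number of blocks in this log entry */
    int64_t  blocks[];           /* off_t-style array of block numbers */
};

/* Be's BFS log: block 1 holds the header, the block data follows. */
struct bfs_log_header {
    uint32_t run_count;          /* number of block_runs in use */
    uint32_t max_run_count;      /* capacity of the run array */
    struct block_run runs[];     /* array of block runs */
};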