NFSv4 client: midterm report
Having implemented the mandatory hooks by quarter term, I had a good base for implementing other operations like write, rename, create, etc. Moreover, I made improvements in file system migration and user ID mapping. Apart from that, file locks required the most work, since they are both more complicated than other NFS operations and Haiku's VFS originally did not allow a file system to handle them its own way.
NFS operations like write, rename, remove and create were relatively easy to implement. All that was needed was to pass the appropriate data to the server and handle possible errors. It was fortunate for me that they (except write) identify a file by its name and parent node, and that is exactly the data Haiku provides to the file system module.
Files opened with the O_APPEND flag have been a problem for the NFS client since its first version. That's because the protocol does not support appending data to a file. Each write operation has to specify the exact position at which the data is to be written, and since an NFS file system may be used by many clients, there is a race condition between obtaining the file size and writing at the end of the file. NFSv4 allows multiple operations to be performed in one request (which enables the client to check whether the file size has changed), but they are not guaranteed to be performed atomically.
I also completed file system migration support by adding proper handling of lease migration. This is a situation in which a file opened by the client is moved to another server. Depending on how the migration is internally implemented by the servers, the new one may get the state from the old one, or the client may be required to reclaim its shares and locks as if the server had rebooted. That was not a problem, since I already had recovery from server reboots implemented.
Since version 4, NFS does not use (or rather discourages the use of) numerical user and group IDs. Instead, the owner (or group) name is passed together with a domain name. That means the client needs a way to resolve these names to a UID and GID. For that purpose I used a userland helper application that uses functions like getpwnam() to perform the mapping.
File locks required more work for several reasons. Firstly, as I mentioned before, the VFS did not allow a file system to handle file locks. This is understandable when dealing with a local file system, but does not work for network file systems. That's why I needed to add three more hooks a file system module can implement if it wants to handle file locks its own way: test_lock(), acquire_lock() and release_lock(). Then I could implement NFS file locking. Locks, just like share reservations (i.e. opening files), require at-most-once semantics, while the default RPC request semantics is at-least-once. That requires both the server and the client to use sequence and state IDs; consequently, since the server stores state, the client is responsible for reclaiming it when the server reboots. It's all very similar to how NFS deals with share reservations (which I had already implemented), so it did not take me too much time.
My next goal is to implement various types of client-side caching, which is essential if anyone wants to use NFSv4 over anything slower than Gigabit Ethernet. That includes metadata caching (actually, I implemented caching of struct stat data yesterday), lookup caching, directory caching (closely related to the former) and file data caching. There is also the possibility of caching RPC headers, authentication information and ID mapper data. In addition to that, I aim to support open delegation. That is a situation in which the client takes over responsibility for a file and can switch its write semantics from write-through to write-back.