Contract weekly report #60
Hello world!
Not many commits from me this week, as I'm still working on the libbind update, and I'm also doing some work for other customers. I got netresolv to build after implementing the missing getifaddrs function in Haiku. This is a non-POSIX function, but it is available on Linux and all major BSDs. It enumerates all network addresses for all network interfaces on the system, similar to our BNetworkRoster and BNetworkInterface classes.
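For reference, here is a minimal sketch of how the standard getifaddrs interface is typically used to enumerate configured addresses (the ListAddresses wrapper name is invented for illustration; the getifaddrs/getnameinfo calls themselves are the standard BSD/Linux API):

```cpp
#include <ifaddrs.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>

// Walk the list returned by getifaddrs() and print each IPv4/IPv6 address,
// prefixed with the name of the interface it belongs to.
// Returns the number of addresses printed, or -1 on failure.
int ListAddresses()
{
	struct ifaddrs* list = nullptr;
	if (getifaddrs(&list) != 0)
		return -1;

	int count = 0;
	for (struct ifaddrs* entry = list; entry != nullptr;
			entry = entry->ifa_next) {
		if (entry->ifa_addr == nullptr)
			continue;

		int family = entry->ifa_addr->sa_family;
		if (family != AF_INET && family != AF_INET6)
			continue;  // skip link-level and other non-IP entries

		char host[NI_MAXHOST];
		socklen_t size = family == AF_INET
			? sizeof(struct sockaddr_in) : sizeof(struct sockaddr_in6);
		if (getnameinfo(entry->ifa_addr, size, host, sizeof(host),
				nullptr, 0, NI_NUMERICHOST) == 0) {
			printf("%s: %s\n", entry->ifa_name, host);
			count++;
		}
	}

	freeifaddrs(list);
	return count;
}
```

The caller owns the returned linked list and must release it with freeifaddrs() when done.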
netresolv (the libbind replacement) uses this to properly implement address resolution in an RFC-compliant way. I got the thing to compile and resolve DNS requests, and all basic network tools are working properly again (FTP, telnet, etc.). I have checked that the issue we were getting when connecting to GMail servers is fixed: when there is no IPv6 address configured, the DNS resolver now properly returns IPv4 addresses for the servers to connect to.

However, I'm still working on some issues with apps using the "services kit" (HaikuDepot and WebPositive). These seem to hit some kind of deadlock waiting on replies from the DNS server, or are just very slow (even worse than before). Things could be improved here to reduce interlocking between the different threads doing requests: the netresolv implementation of getaddrinfo and the other resolution functions is based on a pthread lock, which means calling them from several threads is safe, but slow. There is a getaddrinfo_r function which is reentrant and avoids this problem.
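The IPv4/IPv6 behavior described above matches what the standard getaddrinfo interface exposes through its AI_ADDRCONFIG flag, which asks the resolver to return only address families that are actually configured on the machine. A small sketch (the CountAddresses wrapper is an invented name, not part of netresolv):

```cpp
#include <netdb.h>
#include <sys/socket.h>
#include <cstring>

// Resolve a host name with getaddrinfo(). With AI_ADDRCONFIG set, a machine
// that has no IPv6 address configured gets only IPv4 results back, which is
// the behavior expected when connecting to dual-stack servers like GMail's.
// Returns the number of addresses found, or -1 on failure.
int CountAddresses(const char* hostname, int extraFlags = AI_ADDRCONFIG)
{
	struct addrinfo hints;
	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;		// accept both IPv4 and IPv6
	hints.ai_socktype = SOCK_STREAM;
	hints.ai_flags = extraFlags;

	struct addrinfo* results = nullptr;
	if (getaddrinfo(hostname, "80", &hints, &results) != 0)
		return -1;

	int count = 0;
	for (struct addrinfo* ai = results; ai != nullptr; ai = ai->ai_next)
		count++;

	freeaddrinfo(results);
	return count;
}
```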
To further improve the speed of services kit apps, I have also started work on a DNS caching system. This is something most modern web browsers implement (IE and Chrome do it, for example). The idea is that in most cases, several network requests will be going to the same server. Currently, each of these requests first requires a request to the DNS server. The cache will avoid that by keeping the most recent replies from the DNS server and reusing them for several requests.
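The caching idea can be sketched as a small table of recent replies with an expiry time. This is a hypothetical illustration, not the actual Haiku code; the class and member names, and the one-minute TTL, are assumptions:

```cpp
#include <chrono>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a DNS reply cache: recent replies are kept for a
// short time and reused, so repeated requests for the same host skip the
// round-trip to the DNS server.
class DNSCache {
public:
	// Keep entries for one minute, similar to what browsers do.
	static constexpr std::chrono::seconds kTTL{60};

	void Put(const std::string& host, std::vector<std::string> addresses)
	{
		fEntries[host] = { std::move(addresses),
			std::chrono::steady_clock::now() + kTTL };
	}

	// Returns true and fills "addresses" on a fresh cache hit.
	bool Get(const std::string& host, std::vector<std::string>& addresses)
	{
		auto it = fEntries.find(host);
		if (it == fEntries.end())
			return false;
		if (std::chrono::steady_clock::now() >= it->second.expires) {
			fEntries.erase(it);		// expired: behave like a miss
			return false;
		}
		addresses = it->second.addresses;
		return true;
	}

private:
	struct CacheEntry {
		std::vector<std::string> addresses;
		std::chrono::steady_clock::time_point expires;
	};

	std::map<std::string, CacheEntry> fEntries;
};
```

A caller would first try Get(), and only fall back to a real DNS request (followed by Put()) on a miss.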
I have written a simple version of the cache to experiment with. While doing so I made some improvements to the BReference API to add a BConstReference class (which is a read-only reference to an object, useful for the cache). While testing this and the cache, I needed to build the BReference class in debug mode. Unfortunately, there is a bug in the Stack and Tile decorator code which makes the app_server crash when BReference is in debug mode, as soon as a window is opened. I will have to fix this before I can continue debugging my cache code.
I also did some work on the non-coding side: GCI finished last week, but it's already time to get ready for the Google Summer of Code and prepare our Ideas page: https://www.haiku-os.org/community/gsoc/2015/ideas . There is still room for some improvement here and I'll continue working on it.
- PulkoMandy's blog

Comments
Re: Contract weekly report #60
It makes sense to implement this on a global level (i.e. in net_server), if only to be able to flush it manually. Also, it's important not to store those DNS entries too long.
Re: Contract weekly report #60
At most as long as the upstream server's TTL, certainly.
Re: Contract weekly report #60
I am testing this with a cache that keeps BNetworkAddressResolver objects in memory, and shares them between callers needing to access the same domain. The cache uses an LRU policy (but I may switch to a FIFO, as this is simpler to manage and possibly makes more sense) and only keeps the last 256 requested items.
The idea is to make this a very simple and efficient cache. It will be protected by a reader/writer lock so several threads can request entries from it easily (the write lock being used only to insert new entries in the cache; the thread which does an insertion is responsible for "garbage collecting" expired items).
Since the items will be kept for a rather short time, there should be little need to explicitly clear the cache (just wait a few seconds). The 1-minute timeout may be a bit high; I think Chrome uses a timeout of 1 minute because it matches typical use of a website: if you spend less than 1 minute on one page and move to the next one on the same website, there is no need to do any DNS requests at all. I'm not sure moving the caching to net_server is a good idea, as the inter-app messaging needed to get there may not be much faster than the actual DNS requests, and in case of cache misses, this could actually make the whole process much slower. Moreover, threads inside one application are likely to access the same server, while threads in other applications would access other servers.
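The locking scheme described above could be sketched roughly like this, using standard C++ primitives (std::shared_mutex stands in for whatever lock the Haiku code actually uses; all names, and the eviction policy, are invented for illustration):

```cpp
#include <chrono>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Sketch of the scheme from the comment above: readers share the lock, the
// write lock is taken only to insert, and the inserting thread garbage
// collects expired entries while it holds it. The 256-entry cap follows the
// comment; this is not the actual Haiku implementation.
class SharedDNSCache {
public:
	static constexpr size_t kMaxEntries = 256;
	static constexpr std::chrono::seconds kTTL{60};

	bool Lookup(const std::string& host, std::string& address)
	{
		std::shared_lock<std::shared_mutex> lock(fLock);	// many readers
		auto it = fEntries.find(host);
		if (it == fEntries.end()
				|| std::chrono::steady_clock::now() >= it->second.expires)
			return false;
		address = it->second.address;
		return true;
	}

	void Insert(const std::string& host, const std::string& address)
	{
		std::unique_lock<std::shared_mutex> lock(fLock);	// single writer

		// The inserting thread is responsible for garbage collection.
		auto now = std::chrono::steady_clock::now();
		for (auto it = fEntries.begin(); it != fEntries.end();) {
			if (now >= it->second.expires)
				it = fEntries.erase(it);
			else
				++it;
		}

		// Crude size cap: drop an arbitrary entry when full. A real LRU or
		// FIFO policy would pick the oldest entry instead.
		if (fEntries.size() >= kMaxEntries)
			fEntries.erase(fEntries.begin());

		fEntries[host] = { address, now + kTTL };
	}

private:
	struct Entry {
		std::string address;
		std::chrono::steady_clock::time_point expires;
	};

	std::shared_mutex fLock;
	std::unordered_map<std::string, Entry> fEntries;
};
```

With this structure, concurrent lookups never block each other; only an insertion (a cache miss that has just been resolved) serializes access briefly.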
Re: Contract weekly report #60
If it's a short-lived cache, implementing it in libbnetapi.so certainly makes sense, indeed.
There are a number of cases where a global cache is useful, too, but that can also be done in combination with an implementation in libbnetapi one day.
In any case, internal communication via BMessages should be 1000 times faster than a DNS lookup.