Syllable vs Haiku

Forum thread started by The WallAR on Wed, 2009-07-29 17:52

Hi! I'm new to alternative OSes, and I want to test them.
And I just want to know how big the difference is between Haiku and Syllable. I found some news, but it's sooo old (from 2006). As far as I know Haiku develops extremely fast :) which is very good.
The only thing I know (but I'm not sure) is that Syllable was ported to Linux, and I don't know why it EATS my PC and lags (in VirtualBox it uses over 40% CPU time).

But how much better or worse does Syllable work?

Comments

Re: Syllable vs Haiku

Syllable and Haiku are not related; they just have the same file system, that's it. Syllable is based on AtheOS, which is an AmigaOS 3 (aka "Classic") clone. Syllable is a joke, an OS for way outdated hardware. You know, one of those floppy OSes like MenuetOS and KolibriOS. DexOS is more usable. The only time Syllable is better than Haiku is in a VM; heck, even ReactOS is better in a VM. Haiku is better and that is it.

  1. The End

Re: Syllable vs Haiku

You wanna know what is wrong with linux and most UNIX systems?

Too much cruft and cheap hacks to make them work.

One example : http://lwn.net/Articles/436012/

Just read the article and see. This happens all over the place, from networking, where they first introduced the GUI part (nm_applet) and then started writing the non-GUI part, to sound and storage technologies.

Haiku may be smaller and simpler, but it is consistent with itself.

My only problem is that by the time Haiku is ready for general use, no more desktop computers will exist.

Re: Syllable vs Haiku

To end this silliness.

A micro-kernel uses message queues to pass messages around, since some functionality runs in ring 0 and some does not.
A monolithic kernel uses signals and, more recently, sockets.

A hybrid does both.

This means that *both* Haiku and Linux are hybrids.

The fact that Linux is 30 million lines of code is because it supports *far* more H/W and can do more tricks than Haiku can.

It is also because this way fewer context switches happen, so it is faster.
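The difference the post sketches can be illustrated with a toy example (plain userspace C, nothing here is real kernel code; all names are invented for the demo): a monolithic-style service is a direct function call, while a microkernel-style service receives the same request as a message through a queue, which is where the extra copying and context switches come from.

```c
#include <assert.h>

/* Toy model, not real kernel code: the two call paths described
 * above. A monolithic-style service is a direct function call;
 * a microkernel-style service gets the request as a message. */

typedef struct { int op; int arg; } msg_t;

#define QCAP 8
typedef struct { msg_t buf[QCAP]; int head, tail; } queue_t;

static int q_send(queue_t *q, msg_t m) {
    if ((q->tail + 1) % QCAP == q->head) return -1;   /* queue full */
    q->buf[q->tail] = m;
    q->tail = (q->tail + 1) % QCAP;
    return 0;
}

static int q_recv(queue_t *q, msg_t *m) {
    if (q->head == q->tail) return -1;                /* queue empty */
    *m = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    return 0;
}

/* "Monolithic" path: the caller invokes the service directly. */
static int service_double(int x) { return 2 * x; }

/* "Microkernel" path: the same request is marshalled into a message
 * and crosses a queue first. In a real system the receiver lives in
 * another address space, hence the context-switch cost mentioned
 * in the post. */
static int service_via_queue(queue_t *q, int x) {
    msg_t req = { 1, x }, got;
    if (q_send(q, req) != 0 || q_recv(q, &got) != 0) return -1;
    return service_double(got.arg);
}
```

Both paths compute the same answer; the queued one just pays for the extra hops, which is the trade-off hybrid designs try to balance.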

Re: Syllable vs Haiku

Quote:

This means that *both* Haiku and Linux are hybrids.

I did not realize you were the expert. How come everyone else, even NoHaiku, was saying Linux had a monolithic kernel? The fight was whether Haiku was monolithic or hybrid. We all agreed Linux was monolithic.

There are books & websites & Wikipedia that will tell you that Linux's kernel may be more modular but is still monolithic. That is why Andy T & Linus were fighting back in 1992 & again in 2007. Also, if you read any of Linus' posts you'll see he is strongly opposed to microkernels and does not believe in hybrid kernels. So it would be a big surprise if Linux were not monolithic.

http://www.realworldtech.com/forums/index.cfm?action=detail&id=66630&thr...
On May 9, 2006 Linus said "As to the whole "hybrid kernel" thing - it's just marketing. It's "oh, those microkernels had good PR, how can we try to get good PR for our working kernel? Oh, I know, let's use a cool name and try to imply that it has all the PR advantages that that other system has"

It is because of Linus (who likes to bash & attack when he disagrees with others) that people like NoHaiku see hybrid kernels as non-existent and as monolithic. When you have Linus, the founder of Linux, running around saying hybrid is hype (a marketing term), then people follow and believe him even though Wikipedia & others recognize the term for good reason. ie, Linus will still argue hybrid is not real and people will still believe him. Even Andy T mentions hybrid. See near the end of my post for the Symbian quote.

Quote:

The fact that Linux is 30 millions lines of code long, is because it supports *far* more H/W and can do more tricks than Haiku can.

I agree, but what I was trying to point out is that as a monolithic kernel gets bigger & bigger it risks 1) more serious bugs (code size), 2) lost efficiency (code bloat) and 3) reduced stability (code complexity).

Linus now spends most of his time reviewing kernel patches, because any bad code would leave the Linux kernel crashing & unstable.

Below from: Linus calls Linux 'bloated and huge'
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/

"Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked."

"We're getting bloated and huge. Yes, it's a problem," said Torvalds.

Linus said, "I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago...The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse."

Debunking Linus's Latest
http://www.coyotos.org/docs/misc/linus-rebuttal.html
"Linus, as usual, is strong on opinions and short on facts.

"Shared-memory concurrency is extremely hard to manage. Consider that thousands of bugs have been found in the Linux kernel in this area alone."

"When you look at the evidence in the field, Linus's statement ``the whole argument that microkernels are somehow `more secure' or `more stable' is also total crap'' is simply wrong. In fact, every example of stable or secure systems in the field today is microkernel-based. There are no demonstrated examples of highly secure or highly robust unstructured (monolithic) systems in the history of computing."

If you want to know more about microkernels and hear Andy's side then read his post from 2007:
http://www.cs.vu.nl/~ast/reliable-os/
Andy T said,"Symbian is yet another popular microkernel, primarily used in cell phones. It is not a pure microkernel, however, but something of a hybrid, with drivers in the kernel, but the file system, networking, and telephony in user space."

Re: Syllable vs Haiku

If you want some remote admin capability, VNC is available:

Multi-User

Multi-user in terms of more than one person using the same machine at different times: multi-boot is the way to go right now. Bootman makes it very easy to set up a different partition for each user today.

Multi-user in terms of more than one person using the same machine at different times, but all booting into the same partition, with a logon screen that determines their /home directory, was available as an add-on for BeOS. I never tried it on Haiku, but I expect it may not be that hard to set up if needed.

Multi-user in terms of more than one person using the same machine at the same time is very unlikely, as I believe the idea is that the user has the entire machine at their fingertips when using a desktop OS.

Multi-user in terms of rights management for different parts of the OS is lightly supported already (try deleting or moving a file in the system folder), and I expect there will be more of that type of thing added *AFTER* Release 1 is delivered. HaikuFS already lets you set owner/group assignments on files, but I don't think many of the OS/programs enforce the assignments as yet. I could be very wrong here, as I have always used the default settings only.

writing applications

Thanks everybody for all the informative comments.

I think the creators of Haiku made an inspired decision to reimplement BeOS, first, because it is such a good model to start from, and second, because it is not a moving target. I think what the creators did is sort of like what the creators of the GNU project did in building their versions of the unix tools.

So, given my admiration of Haiku, anything i say below about what i would find useful is not meant as any kind of suggestion of the way Haiku should grow, but only what i personally consider important in choosing a platform to write applications on. Clearly having a well-thought out api is extremely important, and Haiku has that automatically. (And it also obviously has a great community.)

One more preliminary remark is that i am not trying to stir up anything with syllable, i just joined this thread because Denise Purple's post had a lot of resonance with me.

Now, i should respond to all of you since you were so kind to respond to me.

Marco (forart.it): thanks for the summary and the links. Regarding ReactOS, i am considering it.

In more detail: i am looking for a platform to write gui applications for, but with requirement #2 being no X11, and requirement #3 being that it has a good api. (Requirement #1 is that it be free---i used to program on NeXTstep which was awesome, but then it disappeared. Eventually some relative of it reappeared in Mac OS X, but in some ways it is quite different, and you certainly can't recompile your old code without practically rewriting every line. I do not want to be burned by that again.)

AFAIK, that means possible platforms for me are Haiku, Syllable, ReactOS, or AROS (Amiga descendant).

The thing that worries me about ReactOS is i think they may have a moving target (windows)---sort of like GNUstep (the moving target there being Cocoa, and GNUstep isn't cast as an OS in any event, but as a layer of some sort). It is very hard to hit a moving target, of course, especially if the target doesn't want to be hit.

But i would be grateful to be corrected on any point about which i am wrong.

cipri: thanks for being very forthright. If you've already written anything up about whatever is wrong with the syllable api i'd be interested in a link to it (it might be too far off-topic to go in a Haiku forum, and i don't want to wear out my welcome, and of course i also don't want you to write anything up just for me). I would of course also like to hear what Kaj and Vanders have to say (but again, not in a Haiku forum). Just as a side remark, i have two coworkers who are very smart, and have very similar politics, but diametrically opposed viewpoints about certain languages and programming practices. (And they're both good people also, friendly with everybody including each other. But man they disagree about language X and the way to program in it.)

Fredrik (tqh) --- thanks for the copy/paste testimonial. I use X11 at work, and every day, after all these years, and knowing all the tricks, i still make at least one copy/paste blunder.

thatguy: thanks for the info about the experimental multi-user support, and also for the video by Leszek Lesner. When Leszek was showing the help for rsync (rsync --help), you could see that one of the lines was rsync --daemon (i.e., rsync in server mode). I'm not sure if Haiku supports it yet (because it could be that when it was ported, http://ports.haiku-files.org/, they just left the help intact).

But if it did support rsync --daemon, it would probably also support having an ssh server (the video showed only the ssh client).

Having an ssh server would be enough for remote access to the file system (and maybe just as good as nfs).

IMVHO this is important when you're developing, so that (e.g.) you can take a quick look at what kind of files your users are producing with your software. It's just very very handy to be able to quickly snoop around on another machine without getting up out of your chair to go interact with it: you don't want to interrupt whatever they're doing, or change the state of the gui, etc.

And for modern machines, which for a few hundred bucks have gigabytes of ram and terabytes of disk and execute thousands of times faster than machines 20 years ago did, any decrease in performance from having somebody remote in should be so small as to be not perceptible by a human. (I mean, telnet was available long before 1990, and even in that era, just remoting in and taking a look around would be a very light load. Today you'd do it with ssh, but the additional computational resources consumed would be miniscule.)

Regarding the double-buffering of windows, that's not exactly the same as a window hanging around because of gdb. The idea is that when an app paints, it paints into a secondary buffer which then goes in the window. So if the app suddenly gets very slow or crashes, you don't ever get a half-painted window. (On X11 systems you can see half-painted windows in linux/firefox, for example, if some javascript or something goes crazy and consumes all the bandwidth so that firefox doesn't have a chance to paint in its window. Sometimes shaking the window back and forth shows this effect. But in a double-buffered system this doesn't happen.) It's not like it's the most important thing in the universe, but given how capable our hardware is, it's a definite nice-to-have.
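The double-buffering idea described above can be sketched in a few lines (a toy model with no real graphics API; window_t and the 4x4 "framebuffer" are invented for illustration). The app only ever paints into the back buffer; the visible front buffer changes solely via a whole-frame copy, so a painter that stalls or crashes mid-frame can never leave a half-drawn window on screen.

```c
#include <assert.h>
#include <string.h>

/* Toy model of double buffering; no real graphics API is used. */

#define W 4
#define H 4

typedef struct {
    char front[H][W + 1];   /* what the user sees */
    char back[H][W + 1];    /* what the app paints into */
} window_t;

/* The app draws here; the screen is untouched while this runs,
 * however slow it is, and even if the app dies halfway through. */
static void paint_back(window_t *w, char c) {
    for (int y = 0; y < H; y++) {
        memset(w->back[y], c, W);
        w->back[y][W] = '\0';
    }
}

/* The only operation that touches the visible buffer, and it
 * always copies a complete frame in one go. */
static void present(window_t *w) {
    memcpy(w->front, w->back, sizeof w->back);
}
```

A stalled painter simply means the last complete frame stays on screen, which is exactly the "no half-painted windows" behaviour described above.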

Thanks for the other info and suggestions.

Earl: Thanks for the info about multi-user, and the possible interpretations of it.

For the reasons i gave thatguy above, i don't think the console user should have to take a hit in performance in order for a secondary user with a different account to take a look at their work on a remote terminal. It wouldn't be reasonable for the secondary user to get a gui----that would be a slippery slope down to X11.

But i think being able to remote in and have simple terminal type access is very useful in a number of situations (e.g., if you have a bunch of machines in a lab, for example, the admin or teacher may want to flip through them ---- even in the context of a single user it may be useful to remote in and terminate a run away process that was locking up the gui).

So i guess we'll just see what the multi-user support turns out to involve.

Everybody: thanks again for all your information, ideas, and suggestions, and i hope you all get a lot of value out of your experience with Haiku.

dan

Re: writing applications

california_dan wrote:

For the reasons i gave thatguy above, i don't think the console user should have to take a hit in performance in order for a secondary user with a different account to take a look at their work on a remote terminal. It wouldn't be reasonable for the secondary user to get a gui----that would be a slippery slope down to X11.

What you're really asking for is remote administration capability, which has nothing "in terms of design" to do with the whole multi-user concept. The problem is that adding it inherently makes any operating system that much less secure. If you run the daemon full time it will consume some level of resources. The amount depends on the amount of data and the protocol it is supporting, obviously.

california_dan wrote:

But i think being able to remote in and have simple terminal type access is very useful in a number of situations (e.g., if you have a bunch of machines in a lab, for example, the admin or teacher may want to flip through them ---- even in the context of a single user it may be useful to remote in and terminate a run away process that was locking up the gui).

Again, what you're asking for is a remote desktop feature of sorts. But if the GUI on Haiku locks up, typically it's because something is doing bad, bad things to the kernel in general. You have to understand that the Haiku GUI isn't a bolt-on afterthought like it is with the Linux model of designing things; it is all integrated well from the ground up. At that point a reboot is likely going to be your only cure. AFAIK there isn't a way to restart the GUI. Also, with the way Haiku handles resources etc., GUI lock-ups even with severely misbehaving applications are extremely uncommon; about the only time I have seen the user GUI hang is because of driver failures. Beyond that, applications just crash and the OS generally hums along about 99% of the time.

Also, Haiku is more of a microkernel/modular kernel design than Linux is. Linux is definitely more of a monolithic kernel (even though it has become modularized to some extent) and many of its applications are as well. I have had audio driver failures that simply cause the media server to just shut down. Nothing bad really happens. Whereas with a Linux OS, audio drivers crash and it can bring the whole system down. The design of Haiku is inherited from BeOS, which had a more crash-proof design than most commercial operating systems of its day. While things can and do crash, they rarely take out the OS, and thanks to the aggressive preemption model used by the kernel, even when things go south in a really bad way it rarely causes a problem where you can't use the application force-shutdown keys (Ctrl-Alt-Del) to bring it under control.

california_dan wrote:

So i guess we'll just see what the multi-user support turns out to involve.

Everybody: thanks again for all your information, ideas, and suggestions, and i hope you all get a lot of value out of your experience with Haiku.

dan

I wouldn't wait and I am not. There are plenty of utilities that meet the needs you describe, without carrying all the complexity and problems of a "concurrent multi-user system", right now.

My advice would be to set up a Haiku install and take a look around. As to your comments about the API: the BeAPI (by default the Haiku API) is very good, and if you need something where Qt runs very well, the www.qt-haiku.ru site has a very good quality Qt port for Haiku that integrates extremely well.

Look over at www.haikuware.com for applications etc. There's a good bit of stuff and most of it is of reasonable quality.

Re: writing applications

thatguy wrote:

Also, Haiku is more of a microkernel/modular kernel design than Linux is. Linux is definitely more of a monolithic kernel (even though it has become modularized to some extent) and many of its applications are as well.

This simply isn't true. Haiku does not have a microkernel, it has a monolithic kernel, the same as Linux. Further Linux is more modularised, largely because the far greater hardware support has made it necessary to narrow things down more. For example, in Haiku there is one huge file for all HDA codecs. If you don't have a Realtek chipset Haiku will load workarounds for Realtek bugs anyway. The Linux kernel automatically detects which HDA codec you have and loads one of about a dozen different driver modules specific to a brand of codec. This means more hardware support with less waste.
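The per-vendor dispatch being described can be sketched like this (a simplified illustration, not the actual ALSA code; the lookup table lists a few real Linux module names keyed by the vendor half of the 32-bit HDA codec ID, the same matching that MODULE_ALIAS("snd-hda-codec-id:10ec*") expresses for Realtek):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified illustration of per-vendor codec dispatch. The kernel
 * matches the vendor part of the codec ID against module aliases;
 * here that is reduced to a plain table lookup. */

static const struct {
    unsigned vendor;        /* upper 16 bits of the HDA codec ID */
    const char *module;
} codec_table[] = {
    { 0x10ec, "snd-hda-codec-realtek" },
    { 0x1106, "snd-hda-codec-via" },
    { 0x111d, "snd-hda-codec-idt" },
};

static const char *module_for(unsigned codec_id) {
    unsigned vendor = codec_id >> 16;
    for (size_t i = 0; i < sizeof codec_table / sizeof codec_table[0]; i++)
        if (codec_table[i].vendor == vendor)
            return codec_table[i].module;
    return "snd-hda-codec-generic";   /* fallback driver */
}
```

A machine with a Realtek codec loads only the Realtek module; VIA or IDT quirk code never occupies its memory, which is the "more hardware support with less waste" point.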

Quote:

I have had audio driver failures that simply cause the media server to just shut down. Nothing bad really happens. Whereas with a Linux OS, audio drivers crash and it can bring the whole system down.

The Haiku audio drivers are conventional monolithic drivers and as a result just as vulnerable to serious consequences if there are bugs in them. Worse, because Haiku's "new driver API" still isn't finished after all these years, many of the drivers don't do proper resource reservation, meaning that a user who mistakenly has two drivers for the same hardware (as happened all the time with Haiku audio) can expect misbehaviour or crashes as both try to access the device simultaneously.

Re: writing applications

NoHaikuForMe wrote:

For example, in Haiku there is one huge file for all HDA codecs. If you don't have a Realtek chipset Haiku will load workarounds for Realtek bugs anyway.

Well, the pages of code for these Realtek workaround will never be actually loaded because the HDA driver code itself will never trigger a (code) page fault to execute it.
Only a span of virtual memory pages is actually wasted, not physical memory.
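The mechanism being appealed to here is ordinary demand paging, which can be demonstrated from userspace (a POSIX sketch; the file path and the demo function are made up): read()/pread() copies every byte into RAM immediately, while mmap() only reserves address space and each page is faulted in from disk on first touch.

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Userspace POSIX sketch of eager loading vs demand paging.
 * The file stands in for driver code sitting on disk. */
static int demo(const char *path) {
    char blob[8192];
    memset(blob, 'R', sizeof blob);   /* stand-in for codec code */

    int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
    if (fd < 0) return 0;
    if (write(fd, blob, sizeof blob) != (ssize_t)sizeof blob) return 0;

    /* Eager: all 8 KiB occupy physical memory right now. */
    char eager[8192];
    if (pread(fd, eager, sizeof eager, 0) != (ssize_t)sizeof eager) return 0;

    /* Lazy: only virtual address space is consumed until a page
     * is touched; the first access below faults the page in. */
    char *lazy = mmap(NULL, sizeof blob, PROT_READ, MAP_PRIVATE, fd, 0);
    if (lazy == MAP_FAILED) return 0;

    int ok = (eager[0] == 'R' && lazy[0] == 'R');

    munmap(lazy, sizeof blob);
    close(fd);
    unlink(path);
    return ok;
}
```

Whether a given kernel's module loader actually takes the lazy path is a separate question from whether the mechanism exists, which is what the replies that follow dig into.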

Quote:

The Linux kernel automatically detects which HDA codec you have and loads one of about a dozen different driver modules specific to a brand of codec.

Actually, it's not the Linux kernel but its generic module probing mechanism that does this. Which also means that, in fact, the Realtek-specific codec module (and any others installed) is also loaded at least during the probing phase.
So in the end it makes little difference, as on both platforms the hardware detection code of each supported codec must be loaded into physical memory and run, and in both cases only the code actually in use is kept loaded afterwards.

Anyway.

For code design considerations, Haiku's HDA driver could be more modularized. But that would only give us fewer wasted VM pages, not physical ones. As the gain doesn't seem to be worth it, considering the small number of active contributors skilled in this area the project has, nobody has done it yet.

Patches are always welcome, though.
Be our guest.

Re: writing applications

So now I actually had time to sit down with the Haiku source code and write a reply addressing the trickier part of your post.

phoudoin wrote:

Well, the pages of code for these Realtek workaround will never be actually loaded because the HDA driver code itself will never trigger a (code) page fault to execute it.
Only a span of virtual memory pages is actually wasted, not physical memory.

It would actually be possible (although fraught with danger) to do this, but Haiku doesn't. Instead you can see

                length = _kern_read(fd, programHeaders[i].p_offset,
                        (void *)(region->start + (programHeaders[i].p_vaddr % B_PAGE_SIZE)),
                        programHeaders[i].p_filesz);

The Haiku module loader routine simply reads the ELF sections into reserved kernel RAM. So all the Realtek code (and everything else) is read from disk into RAM by this call. There is code to map an ELF image from a disk file into virtual addresses such that subsequent page faults bring it into RAM, but as far as I was able to confirm that's only used for userspace programs, not Haiku's kernel add-ons.

Quote:

So in the end it does little difference, as on both platforms the hardware detection code of each codec supported must be loaded in physical memory and run and in both case only the code actually of some use is kept loaded that way.

As we see this isn't true. Not only is there no need to load all this "hardware detection code" from every driver on Linux as I explained above, but Haiku does in fact load the entire driver into RAM.

Quote:

For code design consideration, Haiku HDA driver could be more modularized. But that will give us only lesser VM pages wasted, not physical ones. As the gain doesn't seems to worth it, considering the few amount of active contributors skilled in this area the project has, nobody did it yet.

Patches are always welcome, though.
Be our guest.

Of course, many of the things Haiku lacks are a result of your limited resources. But it does no good for people to insist that Haiku does this, or has that capability, when in fact it does not "for lack of resources".

Re: writing applications

NoHaikuForMe wrote:

There is no "probing phase". [...]
The "secret sauce" is a single line: MODULE_ALIAS("snd-hda-codec-id:10ec*");

Should be a recent design change then, because in my Ubuntu 9.04 (kernel 2.6.28.?), the HDA module codecs are still assembled all together (well, depending on kernel config, that is) within a single module. It's kinda hard to keep in touch with Linux sound system changes, as there are several. And chasing multiple targets is not that fun.

Anyway, that's a nice move indeed, and our own HDA driver would be better if the non-standard codecs were moved into separate kernel modules so only the used ones are kept loaded.

NoHaikuForMe wrote:
phoudoin wrote:

Well, the pages of code for these Realtek workaround will never be actually loaded because the HDA driver code itself will never trigger a (code) page fault to execute it.
Only a span of virtual memory pages is actually wasted, not physical memory.

It would actually be possible (although fraught with danger) to do this, but Haiku doesn't. Instead you can see

                length = _kern_read(fd, programHeaders[i].p_offset,
                        (void *)(region->start + (programHeaders[i].p_vaddr % B_PAGE_SIZE)),
                        programHeaders[i].p_filesz);

The Haiku module loader routine simply reads the ELF sections into reserved kernel RAM.
So all the realtek code (and everything else) is read from disk into RAM by this call. There is code to map an ELF image from a disk file into virtual addresses such that subsequent page faults bring it into RAM but so far as I was able to confirm that's only used for userspace programs, not Haiku's kernel add ons.

Well, the code above is not the culprit for that; it's the B_FULL_LOCK requested for the area that hosts the read data. Anyway, you are right: the whole HDA driver code is loaded into memory. I dunno why I thought we also had lazy loading for kernel add-ons, with some hack to lock the ones installing interrupt handlers. Maybe it was once proposed but never made it into code.

So, I stand corrected: our HDA driver is bloated.

Quote:

Of course, many of the things Haiku lacks are a result of your limited resources. But it does no good for people to insist that Haiku does this, or has that capability, when in fact it does not "for lack of resources".

Yes, bragging about capabilities Haiku does not have does no good.
One thing that would do Haiku good is if the people spending time fighting the Haiku community's pointless arguments would instead contribute the missing parts to Haiku.

After all, Linux is twice as old as Haiku, and had more than just twice its horsepower.
And no debate, flame, counter-argument or defense against whatever our community's propaganda may be will improve Haiku more than, well, code contribution.
Everyone is free to spend his time as he wants. But only time spent on actual contribution can change Haiku into a better Haiku...

One could simply start by filing an enhancement ticket in our Trac system, asking to split the HDA codecs into submodules in order to reduce the code bloat loaded into the kernel by the very commonly found, and so commonly used, HDA driver. That would be a good mark of willingness, too.

Re: writing applications

phoudoin wrote:

Should be a recent design change then, because in my Ubuntu 9.04 (kernel 2.6.28.?), the HDA module codecs are still assembled all together

This change landed in 2.6.29, over two years ago. I suppose this is "recent" in terms of Haiku's decade-plus lifecycle but most general purpose systems have a much faster turnover. When Haiku began (as OpenBeOS) the Intel HDA proposal did not exist.

Quote:

After all, Linux is twice as old as Haiku, and had more than just twice its horsepower.

Indeed, however the decision not to use an existing, mature kernel was never forced upon Haiku's developers. OpenBeOS (as it was then called) made an explicit decision to re-invent the wheel. If they want to revisit that decision now they don't need a bug report to do that. But I suspect you will find that the sunk cost fallacy rules the day.

Re: writing applications

NoHaikuForMe wrote:
phoudoin wrote:

Should be a recent design change then, because in my Ubuntu 9.04 (kernel 2.6.28.?), the HDA module codecs are still assembled all together

This change landed in 2.6.29, over two years ago. I suppose this is "recent" in terms of Haiku's decade-plus lifecycle but most general purpose systems have a much faster turnover. When Haiku began (as OpenBeOS) the Intel HDA proposal did not exist.

And? Today both Linux and Haiku have HDA support.
Sure, since 2.6.29 the Linux HDA driver is more modular than Haiku's.
But that doesn't make the latter non-working, just less well designed.

Quote:

Indeed, however the decision not to use an existing, mature kernel was never forced upon Haiku's developers. OpenBeOS (as it was then called) made an explicit decision to re-invent the wheel. If they want to revisit that decision now they don't need a bug report to do that.

I fail to see the link here. Linux's HDA driver was made more modular over two years ago, and for you that translates into proof that Haiku should have used the Linux kernel instead of its own?! Since when does porting a driver require changing the whole kernel too? When the price of that change is higher than writing from scratch or a thin adapter layer, I fail to see the point, particularly since BeOS and Haiku were never Unix operating systems to begin with.

We've ported a good portion of the *BSD network adapter drivers without switching to a BSD kernel. Why would we need to do that for audio drivers? Even Linux kernel modules are not that dependent on kernel-private APIs...

The "enhancement" report (it's not buggy, just code-bloated) was just about asking for the HDA driver to be more modular than it is today, nothing more, nothing less. It's not like all Haiku drivers have the same issue. It's not like all Haiku audio drivers have it either.

But thanks for staying in the non-contributing camp, all while saying that having fewer contributors doesn't explain most of the issues Haiku is still facing...

Re: writing applications

phoudoin wrote:

I fail to see the link here.

The kernel sub-project is hugely manpower intensive because of its unnecessarily large scope. It makes no sense to engage in such a project and then complain that you're short of people. It's like buying one last drink with your taxi money and then complaining that now you have to walk home.

All OpenBeOS actually needed was someone to tweak say, a BSD kernel to meet their requirements. This much smaller project would have resulted in access to a mature kernel and thus far broader hardware support, much improved portability, and all with less effort.

Re: writing applications

NoHaikuForMe wrote:
phoudoin wrote:

I fail to see the link here.

The kernel sub-project is hugely manpower intensive because of its unnecessarily large scope. It makes no sense to engage in such a project and then complain that you're short of people. It's like buying one last drink with your taxi money and then complaining that now you have to walk home.

All OpenBeOS actually needed was someone to tweak say, a BSD kernel to meet their requirements. This much smaller project would have resulted in access to a mature kernel and thus far broader hardware support, much improved portability, and all with less effort.

What part of "the developers didn't want to use a server/unix kernel" didn't you get?

I for one would never consider using a Linux kernel. I'd rather gouge out my eyes than untangle that spaghetti mess of code.

Re: writing applications

thatguy wrote:

What part of "the developers didn't want to use a server/unix kernel" didn't you get?

I for one would never consider using a Linux kernel. I'd rather gouge out my eyes than untangle that spaghetti mess of code.

To add, the Linux kernel came out in 1991, and the GNU project had been looking to develop its own kernel (Hurd) for the GNU OS, based first on the Mach microkernel and now the L4 microkernel. Just last year, in 2010, did R. Stallman (GNU leader) finally give in to the Linux kernel. The GNU project wanted to move away from the Linux kernel because it is inefficient and huge, which will only get worse over time, in big part because of the monolithic design.

Linus wrote only the kernel & the GNU project created the rest of the OS for Linux.

If the Linux kernel was so great & fast, then why wouldn't GNU just accept it sooner? Makes you wonder. Simply because the GNU project wanted something better, realizing the drawbacks of monolithic design, but was unable to deliver. There was little interest in making a competing kernel & the Linux kernel had become the standard, making it impossible to get developers to work on a GNU kernel.

NoHaiku is right that Linux kernel has been available and would give many drivers (that's why Android OS uses it) but fails to say that it is huge, bloated & inefficient. ie, a real big mess

Haiku aims to be fast, clean and efficient, and would get bogged down with the Linux kernel. The Linux kernel would trade greater hardware support for lesser performance.

Re: writing applications

tonestone57 wrote:

Just last year, in 2010, did R. Stallman (GNU leader) finally give in to the Linux kernel.

Jeez, you are starting to spread lies at the same rate as thatguy, I had respect for you tonestone but that is quickly fading.

FSF put Hurd on the back-burner AGES ago as soon as Linux started gaining traction and focused on providing the gnu tools needed for Linux to become a self-sufficient system, (compilers, core/binutils, libs etc) and the Hurd hasn't seen any serious development since.

tonestone57 wrote:

NoHaiku is right that the Linux kernel has been available and would give many drivers (that's why Android OS uses it) but fails to say that it is huge, bloated, and inefficient. i.e., a real big mess.

Back this claim up: show me some benchmarks that show Linux to be huge, bloated, and inefficient!

You try so hard to paint Linux as inefficient which obviously has something to do with some crazy notion that monolithic kernels are somehow the root of all evil, despite the fact that you can't point to a single factual thing as to why that would be.

I find it easy to ignore the stupidity of thatguy, since he obviously has no clue whatsoever, but you at least seem to have basic computer knowledge. And yet you are just throwing out sweeping statements with nothing to back them up. It's just sad when someone like NoHaikuForMe is pretty much the only one in this thread who presents FACTS.

And why is it that guys like tonestone and thatguy are so hellbent on mudslinging Linux? I find it so damn sad, since as a huge Haiku fan I want Haiku to attract developers, and the NUMBER ONE place from which Haiku has a hope of attracting them is other 'alternative' operating systems, of which Linux is by far the largest. And instead I see morons like thatguy popping up on Linux-oriented boards attacking Linux like some frothing-at-the-mouth madman while waving the Haiku banner. Totally idiotic.

It's no wonder this community starts to seem stale when it's filled with such crazy haters. I'm really losing faith here, and seeing some weirdo like NoHaikuForMe coming across as the 'voice of reason' in this thread just underlines that.

Re: writing applications

Quote:

FSF put Hurd on the back-burner AGES ago, as soon as Linux started gaining traction, and focused on providing the GNU tools needed for Linux to become a self-sufficient system (compilers, core/binutils, libs, etc.), and Hurd hasn't seen any serious development since.

Not right, because people are still working on Hurd, just at a slower pace. You understand that there's a difference between slow development and no development? Also, the microkernel choice has changed a couple of times, setting Hurd back further.

Yet for some reason Debian offers a Hurd release. Also, development on Hurd is still going on, with the release of GNU/Hurd 0.401.
http://www.gnu.org/software/hurd/

"The Hurd is under active development, but does not provide the performance and stability you would expect from a production system. Also, only about every second Debian package has been ported to the GNU/Hurd. There is a lot of work to do before we can make a release."
http://www.debian.org/ports/hurd/

There is even an Arch Hurd ISO in the works.
http://www.archhurd.org/news/19/

Posted Nov 25, 2010
"How many developers are working on the GNU Hurd?
Not many. One handful work on it in their free time, and another two handful do help with Debian GNU/Hurd and Arch Hurd packaging. Also, an additional handful of former developers are still available for answering technical questions, but are not really participating in the current development anymore."

http://www.gnu.org/software/hurd/faq.html

Quote:

Back this claim up: show me some benchmarks that show Linux to be huge, bloated, and inefficient!

I've already done this. I provided a link to a news story where Linus admits this himself. Of course, you may argue that was in 2009 and it's now 2 years later, but the situation is not likely to have improved. Direct quotes from Linus and the reporter prove this:

Quote:

LinuxCon 2009 Linux creator Linus Torvalds says the open source kernel has become "bloated and huge," with no midriff-slimming diet plan in sight.

Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.

"We're getting bloated and huge. Yes, it's a problem," said Torvalds.

"I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago...The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse."

http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/

Linus directly says the kernel is getting bloated and huge himself! So now you don't even believe Linus? I also showed the code size of the kernel growing very quickly, which makes it more prone to code bloat. Do you even read any of my posts or links? I gave that same link above in another post in this very thread. I am sure I provided the exact same quotes in that other post too.

The inefficiency issue was in comparison to previous releases of the Linux kernel. With every release the kernel was getting slower and slower, i.e., becoming inefficient; performance was dropping and took a 12% hit. The benchmark was provided and done by Intel. Maybe you don't believe Intel either? I did say inefficient, right? I'm pretty sure I never compared it to other OS kernels or stated it performs worse or better. There is a difference between the two. You do realize that?

I was trying to point out that because the Linux kernel is huge, bloated, inefficient, and complex, many alternative OSes don't bother using it. Those are the reasons why the GNU project itself was looking to avoid the Linux kernel and move to Hurd in the next 10 years. R. Stallman only recently accepted the Linux kernel in 2010 because of 1) the way too slow development of Hurd, 2) the performance issues to deal with because Hurd was still not finished, and 3) the vast number of drivers for the Linux kernel. Had there been more developers, Mr. Stallman would have been pushing for Hurd instead. Why should other OSes use the Linux kernel when they cannot even convince the GNU project to use it? Can you not see the irony there?

Re: writing applications

tonestone57 wrote:

The inefficiency issue was in comparison to previous releases of the Linux kernel. With every release the kernel was getting slower and slower, i.e., becoming inefficient; performance was dropping and took a 12% hit. The benchmark was provided and done by Intel. Maybe you don't believe Intel either?

Bottomley says the results are from a "database benchmark that we can't name". That's right, they can't even tell you what they tested. Intel posted results from an unnamed OLTP benchmark. We might reasonably choose to imagine that this is TPC-C, probably on Oracle. We know the hardware involved is a Nehalem Xeon monster 72GB RAM system with 192 SSDs.

There's a fair amount of crazy involved here, their "benchmark" system does things no actual production database would do, for example we know it ran in realtime priority, which means that changes to Linux aimed at realtime systems will mess with the results. But the whole point of a realtime priority is that you are willing to trade throughput for scheduling reliability - and the benchmark measures throughput but doesn't care about scheduling. So that doesn't make a whole lot of sense.

But despite these caveats Intel's benchmark posting to LKML was useful to identify and fix some regressions. By 2010 when Intel appear to have discontinued this project‡, the gap had narrowed from 12% to 0.8%. There is essentially no equivalent regression testing for Haiku of course.

Quote:

I was trying to point out that because the Linux kernel is huge, bloated, inefficient, and complex, many alternative OSes don't bother using it.

The results speak for themselves, don't they?

‡ The real purpose of these postings is pretty transparent, Intel engineers needed to learn how to tune a Linux system for TPC benchmarks, they learned a lot about doing that from the feedback they received, and this doubtless helped them achieve some record results for new Intel systems in 2010.

Re: writing applications

Rox wrote:
tonestone57 wrote:

Only last year, in 2010, did R. Stallman (GNU leader) finally give in to the Linux kernel.

Jeez, you are starting to spread lies at the same rate as thatguy. I had respect for you, tonestone, but that is quickly fading.

FSF put Hurd on the back-burner AGES ago, as soon as Linux started gaining traction, and focused on providing the GNU tools needed for Linux to become a self-sufficient system (compilers, core/binutils, libs, etc.), and Hurd hasn't seen any serious development since.

tonestone57 wrote:

NoHaiku is right that the Linux kernel has been available and would give many drivers (that's why Android OS uses it) but fails to say that it is huge, bloated, and inefficient. i.e., a real big mess.

Back this claim up: show me some benchmarks that show Linux to be huge, bloated, and inefficient!

You try so hard to paint Linux as inefficient which obviously has something to do with some crazy notion that monolithic kernels are somehow the root of all evil, despite the fact that you can't point to a single factual thing as to why that would be.

I find it easy to ignore the stupidity of thatguy, since he obviously has no clue whatsoever, but you at least seem to have basic computer knowledge. And yet you are just throwing out sweeping statements with nothing to back them up. It's just sad when someone like NoHaikuForMe is pretty much the only one in this thread who presents FACTS.

And why is it that guys like tonestone and thatguy are so hellbent on mudslinging Linux? I find it so damn sad, since as a huge Haiku fan I want Haiku to attract developers, and the NUMBER ONE place from which Haiku has a hope of attracting them is other 'alternative' operating systems, of which Linux is by far the largest. And instead I see morons like thatguy popping up on Linux-oriented boards attacking Linux like some frothing-at-the-mouth madman while waving the Haiku banner. Totally idiotic.

It's no wonder this community starts to seem stale when it's filled with such crazy haters. I'm really losing faith here, and seeing some weirdo like NoHaikuForMe coming across as the 'voice of reason' in this thread just underlines that.

Well, think whatever you want, I certainly do not care. 80+% of computer users have spoken: the Linux desktop sucks, and no amount of evangelizing will change this fact.

I have never said Linux was inefficient. I said Linux is a big mess, and frankly there is no refuting that statement. That, and I dislike the attitudes of most Linux users.

The only people spreading lies here are the Linux users. They must feel threatened by Haiku. Why else bother with the community here?

Enough

Can we please call this a dead horse?
This thread has outlived its usefulness.

Re: writing applications

tonestone57 wrote:
thatguy wrote:

What part of "the developers didn't want to use a server/unix kernel" didn't you get?

I for one would never consider using a Linux kernel. I'd rather gouge out my eyes than untangle that spaghetti mess of code.

To add, the Linux kernel came out in 1991, and the GNU project had been looking to develop its own kernel (Hurd) for the GNU OS, based first on the Mach microkernel and now on the L4 microkernel. Only last year, in 2010, did R. Stallman (GNU leader) finally give in to the Linux kernel. The GNU project wanted to move away from the Linux kernel because it is inefficient and huge and will only get worse over time, in large part because of its monolithic design.

Linus wrote only the kernel & the GNU project created the rest of the OS for Linux.

If the Linux kernel was so great and fast, then why wouldn't GNU just accept it sooner? Makes you wonder. Simply because the GNU project wanted something better, having realized the drawbacks of the monolithic design, but was unable to deliver. There was little interest in making a competing kernel, and the Linux kernel had become the standard, making it impossible to get developers to work on a GNU kernel.

NoHaiku is right that the Linux kernel has been available and would give many drivers (that's why Android OS uses it) but fails to say that it is huge, bloated, and inefficient. i.e., a real big mess.

Haiku aims to be fast, clean, and efficient, and would get bogged down with the Linux kernel. The Linux kernel would give a trade-off of greater hardware support for lesser performance.

Actually, the Linux kernel has better overall throughput performance than the Haiku, BeOS, GNU Hurd, Minix, and NT kernels.

It does have that, but with that small 3-5% edge over some other kernels comes a huge mess of crap I personally don't want to deal with either. The biggest problem with Linux is that it is essentially anarchy and no one is leading. Without leadership, focus, and vision, you get a big mess.

Haiku will continue to use code that makes sense and is fairly well designed. I don't have a problem with the HDA driver either. Works fine for me.

Re: writing device drivers

Who do I talk to for help in writing drivers for Haiku?

I am trying to port/improve my old drivers from BeOS to Haiku. The simple ones that support simple I/O work fine, but I am having real problems getting the more complex ones working.

I have been looking at the Haiku source code for weeks and just can't seem to see my mistake - HELP!

For example, CRAM is seen by DiskProbe but not by DriveSetup so I can't mount it.

I am trying to post this question to the Haiku-Development mailing group, but I seem to be messing up there too.

Re: writing device drivers

Earl Colby Pottinger wrote:

Who do I talk to for help in writing drivers for Haiku?

I am trying to port/improve my old drivers from BeOS to Haiku. The simple ones that support simple I/O work fine, but I am having real problems getting the more complex ones working.

I have been looking at the Haiku source code for weeks and just can't seem to see my mistake - HELP!

For example, CRAM is seen by DiskProbe but not by DriveSetup so I can't mount it.

I am trying to post this question to the Haiku-Development mailing group, but I seem to be messing up there too.

Did you create an account at freelists.org? You have to create an account and then join the mailing list.

Once you do that your email will be sent.

Re: writing device drivers

I have created an account now. Before, I was trying to subscribe without creating an account first (boy, am I dumb), and yes, they have very quickly helped me there! I already see my driver working better and hope to get it working completely by the weekend.

This is why I like Haiku-OS: the developers are far more friendly and helpful than the Linux developers I have talked to locally. And the Linux forums are over-full of people like 'NoHaikuForMe', which does not make asking questions something you want to do. Mostly I just read the Linux forums that cover subjects I am interested in, but refrain from asking questions.

Re: writing device drivers

My ram drive now works!

Thanks to the Haiku developers for the needed clues.

Now to add compression.

Re: writing device drivers

Earl Colby Pottinger wrote:

I have created an account now. Before, I was trying to subscribe without creating an account first (boy, am I dumb), and yes, they have very quickly helped me there! I already see my driver working better and hope to get it working completely by the weekend.

This is why I like Haiku-OS: the developers are far more friendly and helpful than the Linux developers I have talked to locally. And the Linux forums are over-full of people like 'NoHaikuForMe', which does not make asking questions something you want to do. Mostly I just read the Linux forums that cover subjects I am interested in, but refrain from asking questions.

Mostly I find Linux developers to be hacks, not hackers. Just hacks.

http://dictionary.reference.com/browse/hack

to damage or injure by crude, harsh, or insensitive treatment; mutilate; mangle: The editor hacked the story to bits.

Re: writing applications

I think that running Haiku on the Linux kernel is not a hard task. To do this, one would need to teach the kernel to launch Haiku binaries, write accelerant, media_server, input_server, and print_server add-ons that work with Linux drivers, and run the boot script when the kernel initializes. No rebuild needed.

But I don't think that the Haiku kernel needs to be abandoned. The Haiku kernel is better suited to multi-threading, simpler, and fast.

Re: writing applications

NoHaikuForMe wrote:

The kernel sub-project is hugely manpower intensive because of its unnecessarily large scope.

It's not anymore. Most work done in kernel land for a couple of years now is no longer in what is actually called "the kernel" but in drivers and their userland components.
And when it comes to writing hardware drivers, every alternative operating system has a manpower issue. After 20 years, Linux now has enough market share to get support for the most complex and critical hardware (GPUs, network adapters) from the manufacturers themselves. The standardisation of several hardware *busses* and device *classes* (AHCI, USB, ACPI, even HDA ;-) ) makes this task much easier than it was 20 years ago, when outside Windows your hardware had no support at all, and no technical datasheets from manufacturers. This trend benefits Linux, but even more so the smaller alternative operating systems.

Quote:

It makes no sense to engage in such a project and then complain that you're short of people.

We don't complain. Knowing and acknowledging that some drivers are not the best design they could be is not complaining. Otherwise, I'll bet that pretty much every operating system developer is complaining, then.
You're the one complaining that our HDA driver design is not good enough, which I acknowledge.
You want a better designed driver? Stop complaining about it and start contributing.
I'm not complaining about the lack of contributors; I'm complaining about people who complain that an open (aka contributed by people, for people) project is not good enough for their taste without seeing that it's pointless: it's not bad enough if you don't care enough to actually do something yourself to improve it.

Quote:

It's like buying one last drink with your taxi money and then complaining that now you have to walk home.

Which, ironically, is a better solution than taxiing home earlier while you're still quite drunk. Nothing better than a long walk to think about issues at hand
;-)

Quote:

All OpenBeOS actually needed was someone to tweak say, a BSD kernel to meet their requirements. This much smaller project would have resulted in access to a mature kernel and thus far broader hardware support, much improved portability, and all with less effort.

Nobody will ever know because, well, simply nobody did it. Someone tried this path with the Linux kernel (the BlueEyedOS project, IIRC), but never reached critical mass.

Call it the stupidest decision as much as you want, but one thing remains: only Haiku reached "let's rewrite BeOS in open source" critical mass. That doesn't make it perfect, and it doesn't make it the only possible way to do it, but so far it's the only attempt which did it.

And it's not an under-designed single audio hardware driver that will change that fact.

Re: writing applications

phoudoin wrote:

Actually, it's not the Linux kernel but its generic module probing mechanism that does this. Which also means that, in fact, the Realtek-specific codec module (and any others installed) is also loaded, at least during the probing phase.

I shall try to find time to write a more in-depth response, but ah, no. There is no "probing phase".

If you read the code, sound/pci/hda/patch_realtek.c you will see the realtek codec module doesn't even have a probe method. It consists just of code to be run for these HDA codecs, together with their IDs. There is no need for a probe method because the HDA driver is able to determine the codec ID for itself.

The "secret sauce" is a single line: MODULE_ALIAS("snd-hda-codec-id:10ec*");

The snd-hda-codec-id:10ec* string is baked into the module, and used by the userspace module tools to match it to requests from the kernel. Any codec with an ID beginning 10ec (Realtek's vendor prefix) will cause this module to be loaded. Until it's requested, none of this information needs to be in RAM. And once it has been loaded it can all be flushed when necessary.

This approach is used throughout Linux (and indeed by NT and presumably other modern systems that aspire to run on more than a handful of different people's computers). There has been a project to try to introduce the same approach to Haiku, but it seems stalled. Here are some examples of such magic strings:

bt-proto-6  (Bluetooth HID)
usb:v*p*d*dc*dsc*dp*ic01isc01ip*  (general USB PCM audio)
pcmcia:m01F1c0100f*fn*pfn*pa*pb*pc*pd*  (a PC Card for laptop pro audio)
acpi*:PNP0400:*  (a PC parallel port reported via ACPI)
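The matching described above can be sketched in a few lines. This is an illustrative sketch, not the actual modprobe/depmod code: the alias table below and the Cirrus entry are assumptions made up for the example (the real table is generated by depmod into modules.alias), but the glob-style match of a device's modalias string against each module's baked-in alias patterns works the same way.

```python
# Illustrative sketch of userspace module matching, NOT real modprobe code.
# Alias patterns are shell-style globs, matched against the modalias string
# the kernel emits for a newly discovered device.
from fnmatch import fnmatchcase

# Hypothetical excerpt of a modules.alias-style table (module -> patterns).
MODULE_ALIASES = {
    "snd-hda-codec-realtek": ["snd-hda-codec-id:10ec*"],  # Realtek vendor prefix
    "snd-hda-codec-cirrus":  ["snd-hda-codec-id:1013*"],  # assumed Cirrus prefix
}

def resolve_modules(modalias: str) -> list[str]:
    """Return every module whose alias patterns match this modalias string."""
    return [name
            for name, patterns in MODULE_ALIASES.items()
            if any(fnmatchcase(modalias, pat) for pat in patterns)]

# A codec ID beginning 10ec selects only the Realtek module; nothing else loads.
print(resolve_modules("snd-hda-codec-id:10ec0662"))  # ['snd-hda-codec-realtek']
print(resolve_modules("snd-hda-codec-id:8086ffff"))  # []
```

Until a device with a matching modalias actually turns up, none of the codec-specific code needs to be resident, which is the point being made above.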

Re: writing applications

NoHaikuForMe wrote:

This simply isn't true. Haiku does not have a microkernel, it has a monolithic kernel, the same as Linux. Further Linux is more modularised, largely because the far greater hardware support has made it necessary to narrow things down more. For example, in Haiku there is one huge file for all HDA codecs. If you don't have a Realtek chipset Haiku will load workarounds for Realtek bugs anyway. The Linux kernel automatically detects which HDA codec you have and loads one of about a dozen different driver modules specific to a brand of codec. This means more hardware support with less waste.

Do you ever shut up? Haiku is a hybrid kernel and far more compartmentalized/modular than Linux. If you want to argue with the devs you can, but considering they wrote it and that's what they refer to it as being, I will take their word on it over yours.

Also, the comparison can be made on sheer size: the Linux kernel, "the raw kernel", is massively larger.

BTW, hardware support is great if it actually works; with Linux it rarely works correctly.

NoHaikuForMe wrote:

The Haiku audio drivers are conventional monolithic drivers and as a result just as vulnerable to serious consequences if there are bugs in them. Worse, because Haiku's "new driver API" still isn't finished after all these years, many of the drivers don't do proper resource reservation, meaning that a user who mistakenly has two drivers for the same hardware (as happened all the time with Haiku audio) can expect misbehaviour or crashes as both try to access the device simultaneously.

Really, they are? That's news to me, because I can delete them while the system is running and guess what, nothing happens. The media server shuts down and throws a debug warning. That's about it.

Try that on a Linux system and get back to me.

There is an issue with OpenSound and native drivers. But that's more of a problem with OpenSound, and they can coexist just fine. In Haiku, selecting a driver is as easy as opening the Media preferences, selecting the driver, and restarting the media server. I think your knowledge of the audio capabilities is generally overstated.

But you likely didn't know that.

Re: writing applications

thatguy wrote:

Do you ever shut up? Haiku is a hybrid kernel and far more compartmentalized/modular than Linux.

"hybrid kernel" is a meaningless marketing term. Haiku has a conventional monolithic kernel design.

Quote:

If you want to argue with the devs you can, but considering they wrote it and that's what they refer to it as being, I will take their word on it over yours.

Can you cite where "the devs" en masse agree with this?

Quote:

Also, the comparison can be made on sheer size: the Linux kernel, "the raw kernel", is massively larger.

What comparison? "microkernel" does not mean "it's smaller". The purpose of the µkernel design is to put the very minimum of components into the privileged "Ring 0", Haiku doesn't even attempt to do this. BeOS made one gesture in this direction, running the network protocols in userspace, but Haiku reverses even this difference because of the lousy performance.

Quote:

BTW, hardware support is great if it actually works; with Linux it rarely works correctly.

You'd have to define "rarely" in a very strange way to support this claim.

Quote:

Really, they are? That's news to me, because I can delete them while the system is running and guess what, nothing happens. The media server shuts down and throws a debug warning. That's about it.

Try that on a Linux system and get back to me.

You can delete the driver files and the Haiku kernel (rather amusingly) treats this as a request to unload the driver. So then you have no more driver and sound comes to a stop.

In Linux deleting the driver files just removes the files from disk. The kernel is unaffected. Music (if you happened to be playing some) continues as before. If you want to unload the driver, you can do this explicitly, but you need to stop the music first.

I don't think Haiku's approach here is the Right Thing™, anyone who understands the driver dependencies well enough to know whether removing this or that file will actually have the desired effect, also understands enough to manually unload a driver. For ordinary users the behaviour is just mysterious, sometimes you can "upgrade" a driver by replacing one file, sometimes a complicated dance is needed. In the end most of them will reboot to see if it works.

Quote:

There is an issue with OpenSound and native drivers. But that's more of a problem with OpenSound, and they can coexist just fine. In Haiku, selecting a driver is as easy as opening the Media preferences, selecting the driver, and restarting the media server. I think your knowledge of the audio capabilities is generally overstated.

I explained exactly why the problem occurs already. The process you call "selecting a driver" doesn't actually choose which kernel audio driver is loaded but only which of the kernel drivers will be used by default by Haiku's system mixer. You can see the equivalent preferences panel in many other operating systems, except that it's not necessary to "restart" anything. I can choose my Bluetooth headset, the music continues playing seamlessly and then I can walk away from the PC. Perhaps one day Haiku will be able to do this too.

Re: writing applications

NoHaikuForMe wrote:

"hybrid kernel" is a meaningless marketing term. Haiku has a conventional monolithic kernel design.

Can you cite where "the devs" en masse agree with this?

What comparison? "microkernel" does not mean "it's smaller". The purpose of the µkernel design is to put the very minimum of components into the privileged "Ring 0", Haiku doesn't even attempt to do this. BeOS made one gesture in this direction, running the network protocols in userspace, but Haiku reverses even this difference because of the lousy performance.

You'd have to define "rarely" in a very strange way to support this claim.

You can delete the driver files and the Haiku kernel (rather amusingly) treats this as a request to unload the driver. So then you have no more driver and sound comes to a stop.

In Linux deleting the driver files just removes the files from disk. The kernel is unaffected. Music (if you happened to be playing some) continues as before. If you want to unload the driver, you can do this explicitly, but you need to stop the music first.

I don't think Haiku's approach here is the Right Thing™, anyone who understands the driver dependencies well enough to know whether removing this or that file will actually have the desired effect, also understands enough to manually unload a driver. For ordinary users the behaviour is just mysterious, sometimes you can "upgrade" a driver by replacing one file, sometimes a complicated dance is needed. In the end most of them will reboot to see if it works.

I explained exactly why the problem occurs already. The process you call "selecting a driver" doesn't actually choose which kernel audio driver is loaded but only which of the kernel drivers will be used by default by Haiku's system mixer. You can see the equivalent preferences panel in many other operating systems, except that it's not necessary to "restart" anything. I can choose my Bluetooth headset, the music continues playing seamlessly and then I can walk away from the PC. Perhaps one day Haiku will be able to do this too.

1. A monolithic kernel runs all drivers and services in Ring 0. If you want to argue with Andrew Tanenbaum about this, please feel free to do so. This is not the case with Haiku and you are well aware of this. Linux is a monolithic kernel. Haiku is more monolithic than microkernel but does not approach the level of service integration of Linux. Don't try lending the excuse that it's because Haiku is less featured. It's a design benefit every other OS vendor has used for the past 15 years, and with good reason.

2. Generally, microkernel means the bare minimum to make the machine run and nothing more. Everything runs outside of the kernel rather than inside of it. I am beginning to question your actual knowledge of operating system design.

3. Linux drivers and the kernel ABI for drivers are extremely unstable, leading to frequent bugs and failures. It's not some gigantic conspiracy to paint Linux in a bad light; it does that well enough on its own. You act as if no one here has run Linux-based OSes. We have, and we found them wanting.

4. I don't really care what you think of the approach of the Haiku developers. I just don't care. It is meaningless to us and them. If they thought your advocacy had merit, the designs would have been implemented. They were not.

5. As to your comments about drivers: well, I don't agree with your assertions, and basically, it's not worth arguing with you.

Basically, you should troll on back to Linuxville and go evangelize to people who agree with you.

kernel

thatguy wrote:

1. A monolithic kernel runs all drivers and services in Ring 0.

This is roughly correct...

Quote:

This is not the case with Haiku and you are well aware of this.

... but you're wrong here. It is absolutely the case with Haiku. For example you will find Haiku's drivers are just kernel modules, and not separate processes running in userspace. Drivers for disk controllers, display adaptors, pointing devices, sound, and so on, are all just code inserted at runtime into the kernel.

Quote:

Linux is a monolithic kernel. Haiku is more monolithic than microkernel but does not approach the level of service integration of Linux.

You haven't offered a single example where this is true. As I explained earlier, when compared to BeOS Haiku has pushed even more stuff into the kernel.

Quote:

Don't try lending the excuse that it's because Haiku is less featured. It's a design benefit every other OS vendor has used for the past 15 years, and with good reason.

Although it's true that in many places no judgement can be made because Haiku simply lacks support altogether, I think in every comparable place it's clear that Haiku shoves the same or more into the kernel. On the whole I would say this is because of a combination of inexperience and lack of manpower.

Re: kernel

Wikipedia has good information and comparison on different types of kernels.
http://en.wikipedia.org/wiki/Hybrid_kernel

The monolithic kernel: all (or most) stuff done in kernel mode
The microkernel: *minimal* stuff in kernel mode and most stuff done in user mode
The hybrid kernel: stuff fairly split between kernel & user modes

It also lists which kernels are hybrids, including BeOS and Haiku but not Linux.

Linux, Windows 9x/ME, and most BSDs are still monolithic according to Wikipedia. If this were not true, then I would have expected someone to have changed it by now. It has been listed like this for a very long time on Wikipedia.

PS
#1 Microsoft dropped the 9x/ME monolithic kernel and developed a new hybrid NT kernel from the start; i.e., it is not possible to change from one kernel type to another without major rework.
#2 The NewOS kernel (found in Haiku) was written by a former BeOS developer trying to make a hybrid kernel similar to the one found in BeOS.

Re: kernel

tonestone57 wrote:

Wikipedia has good information and comparison on different types of kernels.
http://en.wikipedia.org/wiki/Hybrid_kernel

There's some fancy footwork in the opening description to avoid making plain what is meant, I particularly enjoyed:

"While there is no performance overhead for message passing and context switching between kernel and user mode, as in monolithic kernels, there are no performance benefits of having services in user space, as in microkernels."

Initially that looks as though you get some advantages from one, and some from the other. But when you stop and read it again for meaning you discover it's saying hybrids have the advantages of a monolithic kernel (because they /are/ monolithic kernels) but not the advantages of a microkernel (because they are /not/ microkernels).

There's a wonderful diagram in which we see that the "UNIX server" is a userspace component in a hybrid kernel. No further mention of this occurs, because of course if you examine one of the supposed examples, Haiku, you will find that the "UNIX server" userspace component doesn't exist, and instead the Unix system call framework is implemented by the monolithic kernel.

But maybe I'm being unfair - is it good information? If it were good information, you'd expect references to back it up. Well the article does have some references. What you get are links to various descriptions of operating systems which never use the word "hybrid". For example to prove Netware has a hybrid kernel, a Wikipedian has linked a reference which describes it as a microkernel. To prove that Plan 9 is a hybrid kernel, a long and fairly detailed paper has been linked which never says any such thing. There's a link to an obsolete Microsoft document which describes their approach as a "macrokernel" but never as hybrid.

As a crowning glory one of the quotes from the earliest version of this article has survived. It's a quote of Linus Torvalds in which he dismisses the entire concept. Earlier versions of this article used to include similar quotes from other people who actually write operating system kernels for a living, but it seems those were "too negative" and had to be removed to make space for more unsupported claims.

Quote:

It also lists which kernels are hybrids which includes BeOS & Haiku but not Linux.

Indeed, but it's Wikipedia, why not be bold and remove them? You might also want to remove Plan 9 and Netware, since as I explained the references contradict the claims made about them in the article.

Quote:

Linux, Windows 9x/ME & most BSDs are still monolithic according to Wikipedia. If this were not true then I would have expected someone to have changed it by now. It has been listed like this for a very long time on Wikipedia.

Indeed, the Linux and BSD developers are quite comfortable saying that their kernels are monolithic kernels, since they never set out to build a microkernel nor to impress "journalists" from OSNews.

Quote:

#1 Microsoft dropped 9x/ME monolithic kernel and developed a new hybrid NT kernel from the start.

Ah, no, you have your history a little wrong. Microsoft began the NT project a little before work was begun on BeOS, long before anyone conceived of a "Windows 95". Initially NT was seen as a future member of the OS/2 family, but Microsoft fell out with IBM and the design was altered to incorporate the newly successful Windows 3.x GUI. What NT has in common with Win95 is the Win32 API, which improves significantly on the Win16 API by offering a modern flat memory model and pre-emptive multitasking among other things.

From the application programmer's point of view it was this API (which debuted in about 1994 with the Win32s subsystem for Windows 3.x on 386 or above) which radically overhauled Windows, and not the much later switch to the NT kernel for Microsoft's consumer Windows brand.

Quote:

ie: not possible to change from one kernel type to another without major rework.

Indeed? So presumably the first thing you will be doing is removing DragonflyBSD from that list of hybrid kernels since, according to you it's "not possible to change from one kernel type to another" yet DragonflyBSD is merely a fork of FreeBSD, and remains similar enough for large bodies of code to move between the two.

Quote:

#2 NewOS kernel (found in Haiku) was written by former BeOS developer trying to make a similar hybrid kernel as found in BeOS

But "a similar hybrid" seems to be your description, not that of Travis the developer. The NewOS page never uses this to describe the kernel, and neither does Travis in the brief search I attempted.

Re: kernel

Quote:
Quote:

ie: not possible to change from one kernel type to another without major rework.

Indeed? So presumably the first thing you will be doing is removing DragonflyBSD from that list of hybrid kernels since, according to you it's "not possible to change from one kernel type to another" yet DragonflyBSD is merely a fork of FreeBSD, and remains similar enough for large bodies of code to move between the two.

I never said it was not possible, just that you would have to rework things in the kernel. Things would work differently. Code would have to change accordingly. The bigger the kernel (in lines of code), the harder this would be to get done.

Kernel types are taught in this course:
http://www.physicsarchives.com/index.php/courses/176

You can also search for other textbooks to confirm what I link to above.

A short, similar overview is found here:
http://en.wikiversity.org/wiki/Operating_Systems/Kernel_Models

From the searches I have done they all say pretty much the same thing. They also list BeOS as hybrid kernel.

There are books out there that say the same stuff as you find in that Wikipedia article. Look for them and read them.

Just because Linus does not agree with something does not mean he is right. Why would Microsoft drop their monolithic OSes (DOS, 9x/ME) and move to a hybrid (NT) kernel? Or classic Mac OS with a monolithic kernel move to a hybrid kernel with OS X?

microkernels give the best security, stability & fewest bugs, but slower performance.
monolithic kernels give the best performance, but more bugs and less security & stability.
hybrid kernels take the best from both and give good security, stability & fewer bugs with very good performance.

Microkernel Approach:
http://genode.org/documentation/general-overview

Today only Linux & BSD still remain monolithic while all other current OSes are either hybrid or microkernel.

Re: kernel

tonestone57 wrote:

I never said it was not possible, just that you would have to rework things in the kernel. Things would work differently. Code would have to change accordingly. The bigger the kernel (in lines of code), the harder this would be to get done.

What "things" ? If this actually meant something you'd be able to say what they had to change to make FreeBSD's kernel into the hybrid DragonflyBSD kernel. I could point at a change and you'd say "Oh yes, that was necessary to make it a hybrid" or "No, that's unrelated". But you can't do that, and nor can anyone else because it's just another monolithic kernel.

Quote:

Just because Linus does not agree with stuff does not mean he is right.

That would be a more convincing line of argument if anyone of similar stature had ever actually refuted Linus on this issue.

Quote:

Why would Microsoft drop their monolithic OSes (DOS, 9x/ME) and move to a hybrid (NT) kernel?

You are assuming your conclusion, which is called begging the question.

Quote:

Or classic Mac OS with monolithic kernel moving to hybrid kernel with OS X?

Again, begging the question.

Quote:

microkernels give the best security, stability & fewest bugs, but slower performance.
monolithic kernels give the best performance, but more bugs and less security & stability.
hybrid kernels take the best from both and give good security, stability & fewer bugs with very good performance.

If this were really true it would be so easy to prove, so why don't you try?

What does the Haiku "hybrid kernel" take from microkernels which compromises performance in order to provide greater security, stability or less bugs compared to a monolithic kernel?

Quote:

Today only Linux & BSD still remain monolothic while all other current OSes are either hybrid or microkernel.

Or equally, try to prove this according to your criteria (that is, not accepting the mere word of kernel developers). Show how Linux refuses to "take the best from both" by doing something which improves performance but compromises security and stability compared to Haiku.

Re: kernel

Quote:

That would be a more convincing line of argument if anyone of similar stature had ever actually refuted Linus on this issue.

Sorry but you are too biased, only listening to Linus and discounting everyone else. Even Linus says Linux is a monolithic kernel (in his post to Andy).

I have given multiple websites and there are many (if not all) sites out there that agree with what I'm saying. Hybrid Kernels are a real thing which you don't want to believe in.

The only current OSes with monolithic kernels are Linux, BSD & Unix-like OSes.
http://en.wikipedia.org/wiki/Comparison_of_kernels

All other OSes today use either hybrid or microkernel.

Quote:
Quote:

Why would Microsoft drop their monolithic OSes (DOS, 9x/ME) and move to a hybrid (NT) kernel?

You are assuming your conclusion, which is called begging the question.

Assuming how? DOS through Windows ME used a monolithic kernel. NT through Windows 7 went with a hybrid kernel. Every site I have looked at confirms this to be true.

Quote:
Quote:

Or classic Mac OS with monolithic kernel moving to hybrid kernel with OS X?

Again, begging the question.

Classic Mac OS was monolithic too. Then around Mac OS 8.6 it switched to a nanokernel (microkernel) and with Mac OS X went to a hybrid kernel. Every source I checked confirms this.

Sorry but you don't believe in hybrid kernels and want to say they don't exist; that there are only monolithic kernels & microkernels, but that isn't true. Hybrid is a mix of the two; it is not either one but something in-between. Monolithic means everything is done in kernel space & a microkernel is where most processes run in user space. A hybrid runs some processes in kernel space and others in user space. The split does not have to be 50-50. It could be just one or two processes run in user space instead of kernel space for a kernel to be considered a hybrid. This concept is likely too hard for you to understand, as I've seen from your posts. I've even given good links to support me but all you've done is rant on and on about how Linus knows best.

Funny how Apple + Microsoft hold 95% of the desktop OS market and they both understood they had to move from monolithic to hybrid kernels. I guess you'd rather believe & follow Linus. The Linux kernel will get more & more complex with time and have more bugs & stability issues as code gets pushed into it. Linux only works because it has lots of developers looking over the code & fixing it. Had Linux had the same # of developers as Haiku, it would have been very hard to maintain the kernel code and it would have many more bugs & stability issues.

Quote:
Quote:

microkernels give the best security, stability & fewest bugs, but slower performance.
monolithic kernels give the best performance, but more bugs and less security & stability.
hybrid kernels take the best from both and give good security, stability & fewer bugs with very good performance.

If this were really true it would be so easy to prove, so why don't you try?

Because the information is already out there from multiple sources. You can search, find it and read it yourself. But then again if it doesn't come from Linus you won't believe anything out there.

Quote:

What does the Haiku "hybrid kernel" take from microkernels which compromises performance in order to provide greater security, stability or less bugs compared to a monolithic kernel?

Any processes done in kernel space give the highest performance. Any processes done in user space give the highest stability, security and fewest bugs because they are handled as separate processes from the kernel.
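The stability half of that claim can be demonstrated in miniature (the "server" and its bug are invented for illustration): a service run as a separate process can crash without taking its parent down, which is the isolation argument for user-space servers. In-kernel code has no such boundary, so a fault there is a fault in the kernel itself.

```python
import subprocess
import sys

# Run a buggy "user-space server" as a separate process, the way a
# microkernel would. Its crash is just a failed child process; the
# parent (playing the "kernel") keeps running.
result = subprocess.run(
    [sys.executable, "-c", "raise RuntimeError('driver bug')"],
    capture_output=True,
)

print("server crashed:", result.returncode != 0)
print("kernel still running:", True)  # we got here, so yes
```

The performance half is the flip side: crossing that process boundary costs context switches and message copies that an in-kernel call avoids.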

Quote:

Show how Linux refuses to "take the best from both" by doing something which improves performance but compromises security and stability compared to Haiku.

Here you are confusing security & stability of the kernel with security & stability of the OS. Two different things. Monolithic kernels are less secure and less stable than microkernels by design. That's just a fact whether you accept it or not. The information is out there if you look for it.

Apple & Microsoft switched to hybrid kernels because they realized the advantages. These are a more secure, more stable kernel, with fewer bugs and easier maintenance. That's why many OSes today go for hybrid (which also provides very good performance) or microkernel (for extreme stability and security). Here are some examples of where the QNX microkernel is better suited than Linux because of extreme reliability, stability & security.
http://onqpl.blogspot.com/2008/03/10-qnx-systems-that-could-save-your.html
http://www.itbusiness.ca/it/client/en/CDN/News.asp?id=40793

Today, any OS not using a hybrid or microkernel is living in the past, when OSes were simpler to maintain. Monolithic kernels are bad for OSes today (from a design perspective) but it works for Linux because of the massive number of developers. ie: tons of developers who are able to make it work. Apple & Microsoft realized the benefits and dropped monolithic to switch to hybrid kernels for their OSes. You probably hate it that Linux, Solaris & BSD are the only monolithic OSes today. Must really show how ancient their kernel design is compared to the other OSes today. Sorry but this is just a fact whether you like it or not.

Re: kernel

tonestone57 wrote:

I have given multiple websites and there are many (if not all) sites out there that agree with what I'm saying. Hybrid Kernels are a real thing which you don't want to believe in.

There are plenty of sites that insist hybrid kernels are "a real thing" but none of them manage to say how you'd know one when you saw it. Maybe you don't agree? Feel free to follow their advice and diagnose how Haiku is a hybrid but Linux is monolithic.

tonestone57 wrote:

Any processes done in kernel space give the highest performance. Any processes done in user space give the highest stability, security and fewest bugs because they are handled as separate processes from the kernel.

Give examples. Maybe you don't understand how to do that. Let me try for you:

Haiku's BeFS driver is in kernel space. The Haiku terminal window runs in user space.

Those examples are no good, because of course Linux filesystem drivers run in the kernel, and a Linux terminal window is a user space application. But that's the sort of contrast you're looking for. To make your point you'd need to show how important stuff that Linux has in the kernel, is instead a separate user process in Haiku. Do you see? Otherwise, by definition what you have is a monolithic kernel.

thatguy said earlier that he believed Haiku's sound card drivers are userspace. He was wrong about that, but if he'd been right that would have been a good start.

Quote:

Here you are confusing security & stability in the kernel for the OS ...

So, do you have any examples? No you don't.

FWIW This is why Linus was dismissive. Instead of actually knowing anything about Haiku's kernel, you have been taught to parrot "It's a hybrid" and pretend that this makes it superior. Why not learn something?

Quote:

Today, any OS not using hybrid or microkernel is living in the past ... Sorry but this is just a fact whether you like it or not

If you're still having trouble seeing how the wool was pulled over your eyes, I will suggest once more doing the opposite: Explain why Linux is not a hybrid kernel. Imagine someone forks Linux, and (rather as happened with DragonflyBSD) fan boys begin claiming that their new kernel "LinuxPlus" is a hybrid. Explain why they're wrong. If you find yourself resorting to "Linus says so" then you've made my point.

Re: kernel

removed this one because double posted

Re: kernel

It's funny how tonestone57 is adamant that Haiku is a hybrid kernel and yet can't seem to point at one single reason why it is a hybrid kernel.

I can't say either way myself, because generally I'm not interested in how something is labeled but rather how something performs, and in this area Haiku is excellent for my needs.

But in the interest of understanding, a hybrid would logically be a kernel which places some parts of what is traditionally inside the kernel (a monolithic kernel) into user-space. So in the case of Haiku, what are these user-space components which usually reside inside a monolithic kernel? If there are such, then I guess Haiku would be considered a hybrid kernel, but so far tonestone57 hasn't been able to point at any such components. That doesn't mean there aren't any, so if anyone knows I'd like to be informed. Given the information presented in this thread I'd have to say that (although it hurts) NoHaikuForMe is right: there's no reason to call Haiku a hybrid kernel.

Not that it really matters to me; I don't care whether Haiku's kernel is labeled hybrid or monolithic, as long as it stays fast and tuned for responsiveness.

Re: kernel

Rox wrote:

It's funny how tonestone57 is adamant that Haiku is a hybrid kernel and yet can't seem to point at one single reason why it is a hybrid kernel.

Ok, I made a post that will make things clearer about why Haiku's kernel is a hybrid kernel. I was ill & tired back then and only got to it now. That post is blocked by the spam filter and is waiting for an admin to approve it.

Quote:

Given the information presented in this thread I'd have to say that (although it hurts) NoHaikuForMe is right, there's no reason to call Haiku a hybrid kernel.

Sorry but no. Just because I never properly explained the reason why does not mean you should believe NoHaiku and think it's monolithic. Wikipedia + other websites + BeOS books call BeOS's kernel hybrid (or microkernel) for a reason, which my blocked post will explain + prove. hybrid = modified microkernel.

Quote:

Not that it really matters to me, I don't care Haiku's kernel is labeled hybrid or monolithic, as long as it stays fast and tuned for responsiveness.

Yes, that's most important but I was responding to misinformation given by NoHaiku. Also, kernel design matters for the future and affects the OS.

Re: kernel

Rox wrote:

So in the case of Haiku, what are these user-space components which usually reside inside a monolithic kernel? If there are such, then I guess Haiku would be considered a hybrid kernel, but so far tonestone57 hasn't been able to point at any such components. That doesn't mean there aren't any, so if anyone knows I'd like to be informed.

In Haiku, video card drivers are separated into two components, a kernel driver component, and a user space accelerant. The following document explains this in detail, including the reasons why this separation exists:

http://www.haiku-os.org/legacy-docs/writing-video-card-drivers/04-accele...
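The split that document describes can be sketched as a dispatch pattern, loosely modeled on the `get_accelerant_hook` entry point the accelerant design uses; everything else below (class names, the register map, the hook behavior) is invented for illustration:

```python
# Toy model of Haiku's two-part video driver: a small kernel-side
# component that owns the hardware, and a user-space "accelerant"
# exposing drawing hooks through a single lookup entry point.

class KernelDriver:
    """Kernel component: maps the card's resources, nothing more."""
    def map_registers(self):
        return {"framebuffer": 0xA0000}  # pretend MMIO mapping

# --- user-space accelerant ---
def fill_rect(x, y, w, h):
    # A drawing hook that runs entirely in user space.
    return f"filled {w}x{h} at ({x},{y})"

_hooks = {"fill_rect": fill_rect}

def get_accelerant_hook(feature):
    # The app_server asks for hooks by name and then calls them
    # directly, without entering the kernel for each operation.
    return _hooks.get(feature)

driver = KernelDriver()
regs = driver.map_registers()
hook = get_accelerant_hook("fill_rect")
print(regs["framebuffer"], hook(0, 0, 640, 480))
```

The design point is that only the thin mapping step needs kernel privileges; the bulk of the driver logic can live (and crash) in user space.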

Re: kernel

haiku.tono wrote:

In Haiku, video card drivers are separated into two components, a kernel driver component, and a user space accelerant. The following document explains this in detail, including the reasons why this separation exists:

That's correct, but I suspect you meant it as an example of how Haiku is different, which it isn't.

You can see this also in any modern Linux system with the DRM modules in the kernel and then higher level user space drivers for 2D (XAA) and/or 3D (MESA DRI / Galium) which interface to those modules.

Re: kernel

NoHaikuForMe wrote:
haiku.tono wrote:

In Haiku, video card drivers are separated into two components, a kernel driver component, and a user space accelerant. The following document explains this in detail, including the reasons why this separation exists:

That's correct, but I suspect you meant it as an example of how Haiku is different, which it isn't.

No, I was just responding to Rox. But in reading your other posts, I see your point.

Re: kernel

"There are plenty of sites that insist hybrid kernels are "a real thing" but none of them manage to say how you'd know."

A hybrid kernel is regarded as a modified microkernel. The use of servers is what makes a kernel a hybrid or a microkernel. See below.

A little insight can be gained by looking at NT's kernel.
http://technet.microsoft.com/en-ca/library/Cc750820.f0af_big(en-us,TechNet.10).gif

"The Windows NT 4.0 architecture merges the best attributes of a layered operating system with those of a client/server or microkernel operating system."
http://technet.microsoft.com/en-ca/library/cc750820.aspx

Compare this to (monolithic) Windows 95 which does not use servers and works direct with Windows 95 core.
"Similar to Windows version 3.1 and Windows for Workgroups version 3.1, Windows 95 includes a core composed of three components — User, Kernel, and graphical device interface (GDI)." (ie, User, GDI, Kernel all melded together into one).
http://technet.microsoft.com/en-us/library/cc751120.aspx

So, the most important difference is the use of servers to communicate with the kernel. A monolithic kernel, in comparison, melds all the code into one and does not separate it.

"A basic set of servers for a general-purpose microkernel includes file system servers, device driver servers, networking servers, display servers, and user interface device servers. This set of servers (drawn from QNX) provides roughly the set of services offered by a monolithic UNIX kernel."
http://en.wikipedia.org/wiki/Microkernel#Servers

Now compare this to BeOS (& Haiku, with a similar kernel layout), which uses Kits & Servers. The kits talk to the applications and to the servers, but in some cases (for simple stuff) talk directly to the microkernel. The servers talk to the kits and the microkernel (a go-between). See page 5, BeOS Programming Overview, below for the general layout.
http://oreilly.com/catalog/beosprog/book/

"As a microkernel-based OS, QNX is based on the idea of running most of the OS in the form of a number of small tasks, known as servers. This differs from the more traditional monolithic kernel, in which the operating system is a single very large program composed of a huge number of "parts" with special abilities. In the case of QNX, the use of a microkernel allows users (developers) to turn off any functionality they do not require without having to change the OS itself; instead, those servers are simply not run."
http://en.wikipedia.org/wiki/Qnx
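The QNX property quoted above, dropping functionality by simply not starting a server, can be sketched like this (the server names and boot function are invented for illustration):

```python
# Sketch of the microkernel idea quoted above: OS functionality
# lives in optional servers, and a system is trimmed down by not
# starting some of them -- the "kernel" itself never changes.
AVAILABLE_SERVERS = {
    "filesystem": lambda: "fs server running",
    "network":    lambda: "net server running",
    "display":    lambda: "display server running",
}

def boot(wanted):
    # Start only the requested servers.
    return {name: AVAILABLE_SERVERS[name]() for name in wanted}

embedded_system = boot(["filesystem"])                      # headless box
desktop_system = boot(["filesystem", "network", "display"])
print(sorted(embedded_system), sorted(desktop_system))
```

In a monolithic kernel the equivalent trimming means reconfiguring and rebuilding the kernel itself, which is the contrast the QNX quote is drawing.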

If you're still not convinced then you should read through Wikipedia, starting with monolithic and microkernel first to get a better understanding, and then read hybrid kernel. Only once you know what is considered monolithic and what is considered a microkernel can you properly understand hybrid kernels.

Re: kernel

tonestone57 wrote:

Compare this to (monolithic) Windows 95 which does not use servers and works direct with Windows 95 core.
"Similar to Windows version 3.1 and Windows for Workgroups version 3.1, Windows 95 includes a core composed of three components — User, Kernel, and graphical device interface (GDI)." (ie, User, GDI, Kernel all melded together into one).
http://technet.microsoft.com/en-us/library/cc751120.aspx

Goodness me no, this is completely wrong. A lot of the stuff you're talking about in Windows 95 does not run in Ring 0. The reference you cited even explains this.

Quote:

"A basic set of servers for a general-purpose microkernel includes file system servers, device driver servers, networking servers, display servers, and user interface device servers. This set of servers (drawn from QNX) provides roughly the set of services offered by a monolithic UNIX kernel."
http://en.wikipedia.org/wiki/Microkernel#Servers

OK...? (We will see in a moment that there's an error in this paragraph, but it's not very important)

Quote:

Now compare this to BeOS (& Haiku with similar kernel layout) which uses Kits & Servers.

Let's do that.

  1. File system servers: No, Haiku's file systems are part of the monolithic kernel
  2. Device driver servers: No, Haiku's device drivers are part of the monolithic kernel
  3. Networking servers: No, Haiku's networking is part of the monolithic kernel
  4. Display servers: Yes! But, alas, the display server is also a non-kernel component of a Linux distro. So is Linux also a "hybrid" by this rule?
  5. User interface device servers: No, Haiku's user interface devices are again part of the monolithic kernel
Quote:

If you're still not convinced then you should read through Wikipedia, starting with monolithic and microkernel first to get a better understanding, and then read hybrid kernel. Only once you know what is considered monolithic and what is considered a microkernel can you properly understand hybrid kernels.

So, now that I've shown you using your own examples why you're wrong, are you going to accept that? Or are you going to keep squawking "It's a hybrid, it's a hybrid" and refuse to actually confront the problem of how it can be "hybrid" while having the same architecture as a well known monolithic kernel?

Re: kernel

For comparison, here are Linux kernel links, each with a diagram (fairly similar layouts), to show that Linux's kernel is monolithic.
http://book.opensourceproject.org.cn/embedded/oreillybuildembed/opensour...

http://tldp.org/LDP/sag/html/kernel-parts.html

http://www.ibm.com/developerworks/linux/library/l-linux-kernel/

Also of interest are the source lines of code figures, which show how complex (big & error-prone) the Linux kernel is getting:
January 2001: 3.37 million
December 2003: 5.92 million
March 2011: 14.29 million