Syllable vs Haiku
Hi! I'm new to alternative OSes, and I want to, you know, test them.
I just want to know how big the difference is between Haiku and Syllable. I found some news, but it's sooo old (from 2006). As far as I know, Haiku develops extremely fast :) which is very good.
The only thing I know (but I'm not sure) is that Syllable was ported to Linux, and I don't know why it EATS my PC and lags (in VirtualBox I see over 40% CPU time).
But how much better or worse does Syllable work?

Comments
Re: kernel
Three Linux kernel links, each with a diagram (fairly similar layouts), to show that Linux's kernel is monolithic, for comparison.
http://book.opensourceproject.org.cn/embedded/oreillybuildembed/opensour...
http://tldp.org/LDP/sag/html/kernel-parts.html
http://www.ibm.com/developerworks/linux/library/l-linux-kernel/
Also of extra interest is the Source Lines Of Code count, which shows how complex (big & error-prone) the Linux kernel is getting.
January 2001: 3.37 million
December 2003: 5.92 million
March 2011: 14.29 million
Re: kernel
@NoHaiku
#1 Haiku does everything with servers which communicate with the kernel. Had you taken the time to read the quotes I provided and looked at the pictures (from the links) you might have understood this.
I have shown you that far from "doing everything with servers" Haiku has a conventional monolithic design. Look at that itemised list. The items are from your own quote, and in every case the same decision has been made as in Linux. Why quote that list if you are simply going to ignore it?
#2 Wikipedia + all the websites on the internet + written/published books have described BeOS' kernel as hybrid (or microkernel)
But you claim hybrid and microkernel are different things. So a reasonable person would wonder, why do these sources disagree about whether BeOS uses a microkernel? Might it be that none of the sources you've looked at really knew what they were talking about on this topic?
Plenty of places (including books, magazine articles, and in the past Wikipedia) erroneously assert that BFS can handle files up to 2^64 bytes. That isn't true, and Be never claimed it was true, but they didn't do very much to stop people spreading this untrue claim once it was made. Why should they? It's impressive, and that drives interest.
and it should be accepted as such. I even proved this with my previous post, which you won't accept. Can you show some credible sources proving otherwise, or is it just your word? I have not seen any links from reliable sources showing you're right.
The misunderstanding you have isn't rare among people who don't really know what they're doing. Michael Phipps felt that OpenBeOS was a microkernel design so long as the implementation of printf() wasn't in the kernel. Of course Haiku (like most other operating systems) does actually provide an implementation of printf() in the kernel... but even if it didn't this wouldn't make Phipps right about OS design.
#3 I provided a quick comparison of Windows 95 to Windows NT and to BeOS/Haiku, using official sites like Microsoft's for info, all with pictures to show the similarities and differences. That being: micro & hybrid kernels use servers to communicate with applications; the code is removed from the kernel & put into servers. Did you not look at the Windows 95 & NT pictures? Where were the servers in Windows 95? There were none!
There aren't any OS personalities in Windows 95, which is the sort of thing Microsoft means here when they talk about "servers", but there aren't any such "servers" in Haiku either. Muddling together everything with the name "server" doesn't help you. The Samba file server and a Minix file server are not the same kind of thing.
#4 I have not seen any links from you. I wonder why that is? Maybe because you have NONE to prove you're right. If Haiku's kernel is monolithic then it should be easy to prove. Please point to some credible developers posting this so I can read about it. I'm sure some developer would have made a post about this somewhere. Or why would they not correct the Wikipedia entry?
What should I link? There's a hilarious muddle in threads like
http://www.freelists.org/post/haiku-development/Haiku-Kernel-Architectur...
Does that seem authoritative to you? Should it?
Notice that no core Haiku developers step up to deny that Haiku is essentially the same design as Linux, all the drivers and other low-level paraphernalia in Ring 0, but not stored on disk as a single file. Why don't they deny it? Because it's true of course.
You might ask Haiku's developers for yourself about why they don't spend more time editing Wikipedia. I suspect they feel there are better things they could do.
#5 You apparently want to believe what you want. We don't care what you believe, but stop spreading your lies to Haiku users.
Who is this "we" you're suddenly speaking on behalf of?
#6 I'm going to provide 3 different links, with pictures, to show the Linux kernel layout for comparison. You'll see the kernel talks to applications directly and everything is handled by the kernel & not by servers. I will do this in the next post to avoid the spam filter.
You can confirm for yourself fairly easily both that Haiku programs talk "directly" to Haiku's NewOS-derived kernel and that many common programs run on Linux use servers such as the X.org display server, the PulseAudio audio server, or something as trivial as the SSH agent. What you think of as a fundamental difference is just some nifty re-branding by Be.
#1 I think I have done a decent job of showing Haiku has a hybrid kernel. Going into specifics (which shouldn't be required) is only possible for a Haiku kernel developer or an advanced C++ developer.
The trouble with this approach is that kernel architectures are a matter of specifics.
Re: kernel
January 2001: 3.37 million
December 2003: 5.92 million
March 2011: 14.29 million
My latest comment is sitting in the spam filter, apparently.
Among those millions of lines of code are all the drivers for the hardware that simply doesn't work in Haiku. Everything from the DVB dongles to entire computers that are powerful and effective with Linux, but are just paperweights with Haiku. Whole platforms that Haiku not only doesn't boot on, but hasn't even begun porting to.
Not to mention the network protocols and filesystems, or features like virtualisation and suspend to RAM that lots of people use every day with Linux but that remain on some distant future "TODO" list for Haiku.
Re: kernel
In Haiku, video card drivers are separated into two components, a kernel driver component, and a user space accelerant. The following document explains this in detail, including the reasons why this separation exists:
That's correct, but I suspect you meant it as an example of how Haiku is different, which it isn't.
No, I was just responding to Rox. But in reading your other posts, I see your point.
Re: kernel
I have shown you that far from "doing everything with servers" Haiku has a conventional monolithic design.
Yet when I boot Haiku I see that 13 of 17 running processes are servers. Boot Haiku & check against the server list below to see for yourself.
http://dev.haiku-os.org/browser/haiku/trunk/src/servers
I guess those servers are just for show then?
But you claim hybrid and microkernel are different things.
I'm very sure I have already said twice that a hybrid is a modified microkernel. Have you not understood that? A hybrid is closer to a microkernel than to a monolithic design.
So a reasonable person would wonder, why do these sources disagree about whether BeOS uses a microkernel? Might it be that none of the sources you've looked at really knew what they were talking about on this topic?
More likely that a) the term hybrid did not exist (or was too new) back then, or b) people like you believe there are only micro & monolithic kernels. Haiku is closer to a microkernel, which is why they would have chosen that term instead. Not right, but closer in design, before people accepted the hybrid term.
There aren't any OS personalities in Windows 95, which is the sort of thing Microsoft means here when they talk about "servers", but there aren't any such "servers" in Haiku either.
What the heck are you going on about? Microsoft has a picture showing exactly what they mean. Do you have trouble interpreting pictures? Look for the gif link I gave. For Windows NT, you'll see the Security, OS/2, WIN32, and POSIX subsystems. Those are the servers that talk to the applications & kernel. Windows 95 does not have these subsystems (servers). Look again; the picture is self-explanatory. Then compare it to the Windows 95 picture in one of my other links.
Who is this "we" you're suddenly speaking on behalf of?
You've been told Haiku's kernel is hybrid (or microkernel) by me, AndrewZ, thatguy & MichaelPeppers and still won't accept it.
Before Haiku-OS website redesign in 2008 the site used to say:
"The Haiku kernel is a multi-threaded, modular hybrid kernel based on Travis Geiselbrecht's NewOS. At the point of this writing, it's still under heavy development to create the stable and reliable foundation you will expect of Haiku."
What should I link? There's a hilarious muddle in threads like
http://www.freelists.org/post/haiku-development/Haiku-Kernel-Architectur...
Does that seem authoritative to you? Should it?
So the best you can do is point to a thread where two non-kernel developers assume (or guess) Haiku's kernel to be monolithic, with no proof or any good reasons why this would be, and you take their word for it because no other developer bothers to reply and correct them. But when I give many quotes & references showing why I'm right, you don't believe me and demand specifics. Doesn't this seem like a double standard? I have asked you to provide links to Linux kernel/expert developers saying & proving Haiku's kernel is monolithic, and you have been unable to do so! No surprise there, because you're wrong and won't admit it!
The trouble with this approach is that kernel architectures are a matter of specifics.
Not right. All I had to prove was that Haiku uses servers, & I know I've managed that. If you bother to boot Haiku, you'll see the servers in ProcessController. Also, many people will agree with me rather than with you, because I'm right!
Re: Syllable vs Haiku
To end this silliness.
A microkernel uses message queues to pass messages around, since some functionality runs in Ring 0 and some does not.
A monolithic kernel uses signals and, more recently, sockets.
A hybrid does both.
This means that *both* Haiku and Linux are hybrids.
The fact that Linux is 30 million lines of code is because it supports *far* more hardware and can do more tricks than Haiku can.
It is also because fewer context switches happen this way, so it is faster.
Re: Syllable vs Haiku
You wanna know what is wrong with Linux and most UNIX systems?
Too much cruft and too many cheap hacks to make them work.
One example: http://lwn.net/Articles/436012/
Just read the article and see. This happens all over the place, from networking, where they first introduced the GUI part (nm_applet) and only then started writing the non-GUI part, to sound and storage technologies.
Haiku may be smaller and simpler, but it is consistent with itself.
My only problem is that by the time Haiku is ready for general use, desktop computers may no longer exist.
Re: kernel
I've always considered 'servers' to be the Haiku equivalent of Windows 'services' and Linux 'daemons', what's the difference exactly?
Anyway, as for the whole hybrid kernel thing, maybe Haiku falls under the category 'hybrid' (it seems to be a very fuzzy concept) but from what I've read here in this thread I'd certainly say Haiku leans more towards a monolithic kernel than a microkernel.
But again, who cares? Why do you (tonestone) let NoHaikuForMe lure you into this fruitless discussion again and again? Does it matter SO much to you that Haiku holds the epithet 'hybrid'? Why? It's the user experience that counts, no end user will use or discard Haiku based upon kernel description. Just as users don't choose Windows, Linux or OSX for those reasons either.
Again, in this thread I've seen little to no indication that Haiku is more 'hybrid' than, say, Linux, with the sole exception of the accelerants mentioned by haiku.tono, but that doesn't stop me from thinking that Haiku is a superior desktop system compared to Linux. This is due to it being tuned towards interactivity, due to it coming with a standard and well-integrated GUI, due to it coming with a standard, well-designed API, due to it having (at least in theory; there's a rewrite afaik) a stable kernel driver API, etc.
At the end of the day they could label the Haiku kernel 'super-monolithic' for all I care, it's still the same great system.
Re: Syllable vs Haiku
This means that *both* Haiku and Linux are hybrids.
I did not realize you were the expert. How come everyone else, even NoHaiku, was saying Linux had a monolithic kernel? The fight was whether Haiku was monolithic or hybrid. We all agreed Linux was monolithic.
There are books & websites & Wikipedia that will tell you that Linux's kernel may be more modular but is still monolithic. That is why Andy T & Linus were fighting back in 1992 & again in 2007. Also, if you read any of Linus' posts you'll see he is strongly opposed to microkernels and does not believe in hybrid kernels. So it would be a big surprise if Linux were not monolithic.
http://www.realworldtech.com/forums/index.cfm?action=detail&id=66630&thr...
On May 9, 2006, Linus said: "As to the whole 'hybrid kernel' thing - it's just marketing. It's 'oh, those microkernels had good PR, how can we try to get good PR for our working kernel? Oh, I know, let's use a cool name and try to imply that it has all the PR advantages that that other system has'."
It is because of Linus (who likes to bash & attack when he disagrees with others) that people like NoHaiku see hybrid kernels as non-existent and as monolithic. When you have Linus, the founder of Linux, running around saying hybrid is hype (a marketing term), people follow and believe him, even though Wikipedia & others recognize the term for good reason. ie, Linus will keep arguing that hybrid is not real, and people will keep believing him. Even Andy T mentions hybrids. See the Symbian quote near the end of my post.
The fact that Linux is 30 million lines of code is because it supports *far* more hardware and can do more tricks than Haiku can.
I agree, but what I was trying to point out is that as a monolithic kernel gets bigger & bigger it risks 1) more serious bugs (code size), 2) lost efficiency (code bloat), and 3) reduced stability (code complexity).
Linus now spends most of his time reviewing kernel patches, because with any bad code the Linux kernel would be crashing & unstable.
Below from: Linus calls Linux 'bloated and huge'
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/
"Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked."
"We're getting bloated and huge. Yes, it's a problem," said Torvalds.
Linus said, "I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago...The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse."
Debunking Linus's Latest
http://www.coyotos.org/docs/misc/linus-rebuttal.html
"Linus, as usual, is strong on opinions and short on facts."
"Shared-memory concurrency is extremely hard to manage. Consider that thousands of bugs have been found in the Linux kernel in this area alone."
"When you look at the evidence in the field, Linus's statement ``the whole argument that microkernels are somehow `more secure' or `more stable' is also total crap'' is simply wrong. In fact, every example of stable or secure systems in the field today is microkernel-based. There are no demonstrated examples of highly secure or highly robust unstructured (monolithic) systems in the history of computing."
If you want to know more about microkernels and hear Andy's side then read his post from 2007:
http://www.cs.vu.nl/~ast/reliable-os/
Andy T said, "Symbian is yet another popular microkernel, primarily used in cell phones. It is not a pure microkernel, however, but something of a hybrid, with drivers in the kernel, but the file system, networking, and telephony in user space."
Re: kernel
"But again, who cares? Why do you (tonestone) let NoHaikuForMe lure you into this fruitless discussion again and again? Does it matter SO much to you that Haiku holds the epithet 'hybrid'? Why?"
I thought I answered you with:
Not that it really matters to me, I don't care Haiku's kernel is labeled hybrid or monolithic, as long as it stays fast and tuned for responsiveness.
"Yes, that's most important but I was responding to misinformation given by NoHaiku."
Giving incorrect information is bad! We now have some people that believe Haiku's kernel is monolithic. Other people believing Linux's kernel is hybrid. What next?
Same with NoHaiku saying people were giving wrong info about BFS. ie, being accurate and giving the right info really matters to us.
But yes, since both of us believe we are right then neither one of us will change our position. I've given many reasons why but everyone has to decide for themselves. I found it very interesting that NoHaiku did not believe in hybrid kernel or have an idea of what it really meant (defined as) yet he felt strongly that Haiku used monolithic kernel.
Re: kernel
Giving incorrect information is bad! We now have some people that believe Haiku's kernel is monolithic. Other people believing Linux's kernel is hybrid. What next?
The nice thing about technical issues is that those who most need to care can see for themselves what's right. They may not know the terminology (we can see threads where Haiku developers use "hybrid" to mean "modular" for example) but none of them believes that the X server on a Linux distribution is part of the kernel, or that running the PPP client daemon in user context is an innovation.
But yes, since both of us believe we are right then neither one of us will change our position.
I'd have been completely happy to learn something and change my position, but the trouble is that you never refuted a single thing I said, preferring instead to find endless pretty box diagrams to link to and trying to equate Haiku's high-level servers like the app server with the servers in a microkernel system that handle low-level stuff like device drivers.
I've given many reasons why but everyone has to decide for themselves. I found it very interesting that NoHaiku did not believe in hybrid kernel or have an idea of what it really meant (defined as) yet he felt strongly that Haiku used monolithic kernel.
Hybrid kernels are a marketing exercise. Take a monolithic kernel, rebrand it, the technical people will say "that's a monolithic kernel" but no need to worry about them, because the marketing works on some people who then believe - and post at great length - that it's inherently better than the alternatives.
Imagine if someone told you their car had a "fusion" flat 6 engine. Upon opening the bonnet you realise it is an ordinary (inline) four cylinder. They keep saying "flat 6" and "fusion" and insisting that this is resulting in a very smooth ride. Obviously if you point out that it has four cylinders they may refuse to listen. They may have a colourful brochure which insists it's a new "fusion" between a four cylinder and a flat 6, and how can you contradict such a glossy piece of marketing? And they're enjoying such a smooth ride. Still, it's not true. Maybe they (and a horde of other people who've been sold these hypothetical cars) will write magazine articles and Wiki entries based on what they read in the brochure. It's a lot easier than doing actual research.
All anyone has to do is open the bonnet, and they'll find all the low level components of Haiku running in Ring 0 in the monolithic kernel just like a typical Linux system. No matter how many people add Haiku to a list of microkernels, or nanokernels, or any other classification, one look under the bonnet confirms it's just a regular monolithic kernel.
Re: kernel
I'd have been completely happy to learn something and change my position, but the trouble is that you never refuted a single thing I said, preferring instead to find endless pretty box diagrams to link to and trying to equate Haiku's high-level servers like the app server with the servers in a microkernel system that handle low-level stuff like device drivers.
The main difference between a monolithic kernel and a hybrid kernel is module hosting. A hybrid kernel only hosts modules but doesn't use them. A monolithic kernel uses modules itself, and modules can be statically linked into it. A monolithic kernel must support every device in the computer or you can't use it. A hybrid kernel doesn't manage devices; devices are managed by userland components or by special modules that are loaded during boot.
In Haiku, devices are controlled mainly by the servers: app_server, input_server and media_server. The only exceptions are the network stack and the file system, which are implemented in the kernel because they are performance critical. But that doesn't make the Haiku kernel monolithic. A Haiku driver consists of 2 parts: a kernel add-on and a server add-on. The kernel add-on is only needed to give the server add-on access to a device. The kernel doesn't use the kernel add-on directly; the kernel only hosts it. Most of the driver logic is implemented in the server add-on. Sometimes a kernel add-on isn't needed at all; look at the serial mouse driver for example. None of the servers depend on hardware at all.
I don't know exactly how things are in Linux because I don't use it.
Re: kernel
The main difference between a monolithic kernel and a hybrid kernel is module hosting. A hybrid kernel only hosts modules but doesn't use them.
What is the difference between "hosting" and "using" modules? If the natural meanings of these words are used, it seems a "hybrid kernel" cannot really exist.
In Haiku, devices are controlled mainly by the servers: app_server, input_server and media_server. The only exceptions are the network stack and the file system, which are implemented in the kernel because they are performance critical. But that doesn't make the Haiku kernel monolithic.
You write here "controlled mainly by" but you don't explain what you mean. Does my telephone "control" the Indian takeaway in this sense? Haiku's servers you mentioned each use system calls to access device drivers, which run in Ring 0 and are part of the monolithic Haiku kernel. This is roughly the same as their counterparts in a Linux system, such as the X server, GPM, PulseAudio or GStreamer.
A Haiku driver consists of 2 parts: a kernel add-on and a server add-on. The kernel add-on is only needed to give the server add-on access to a device. The kernel doesn't use the kernel add-on directly; the kernel only hosts it.
When you say here that the kernel "only hosts it" you are including a lot in that. There is no mechanism to directly access these add-ons (driver modules) from userspace. Instead userspace performs system calls, which are interpreted by the kernel, and in some cases the execution path inside the kernel will execute code from a relevant device driver module. This is, of course, pretty much the same as on a Linux system or indeed most modern operating systems.
Most of the driver logic is implemented in the server add-on. Sometimes a kernel add-on isn't needed at all; look at the serial mouse driver for example. None of the servers depend on hardware at all.
The serial mouse is not a bad example. You have a serial driver, which is kernel code, and you have some trivial code in the input server to deal specifically with serial mice. But without that serial driver, the mouse code is useless. The driver, Ring 0 kernel code, deals with interrupts, I/O registers and other hardware specific issues, while the code in input server is oblivious to this.
In a microkernel, that serial driver would be a userspace program. If an interrupt occurred the driver program would receive a message from the microkernel, the program would interrogate the hardware directly, read any pending data and send this as a further message to any listening programs.
Re: writing applications
For example, in Haiku there is one huge file for all HDA codecs. If you don't have a Realtek chipset Haiku will load workarounds for Realtek bugs anyway.
Well, the pages of code for these Realtek workarounds will never actually be loaded, because the HDA driver code itself will never trigger a (code) page fault to execute them.
Only a span of virtual memory pages is wasted, not physical memory.
The Linux kernel automatically detects which HDA codec you have and loads one of about a dozen different driver modules specific to a brand of codec.
Actually, it's not the Linux kernel but its generic module probing mechanism that does this. Which also means that, in fact, the Realtek-specific codec module (and any others installed) is also loaded, at least during the probing phase.
So in the end it makes little difference, as on both platforms the hardware detection code of each supported codec must be loaded into physical memory and run, and in both cases only the code actually of some use is kept loaded.
Anyway.
As far as code design goes, the Haiku HDA driver could be more modularized. But that would only save virtual memory pages, not physical ones. As the gain doesn't seem to be worth it, considering the few active contributors skilled in this area that the project has, nobody has done it yet.
Patches are always welcome, though.
Be our guest.
Re: kernel
Hybrid kernels are a marketing exercise.
And here is why you will never understand what hybrid is. You started out biased and closed-minded! Linus is the one who strongly holds this position and has brainwashed others into believing the same. Yet many programmers (including Andy T) recognize hybrids as real, and there is even an entry on Wikipedia explaining hybrid kernels. But the Linus camp will never accept or recognize it, and they really believe hybrid is just another name for monolithic.
In fact, I read a post by Linus a few days back saying that AmigaOS is not a microkernel. Linus was saying that AmigaOS is really a monolithic kernel. No wonder people are getting confused and misinformed when you have Linus running around spreading lies like these. I guess Linus does not want other people to know the truth; otherwise he'd get pressured to switch Linux to a hybrid or microkernel too. So Linus goes around making stuff up. ie, making it look like everyone else is using monolithic kernels when in reality only Linux, Solaris & BSD are. So Linus says hybrids do not exist, that microkernels are bad, bad, bad, etc., so people will side with him. But if people really look at both sides, they'll see that it's just Linus' strong opinion & bias, and that Linus has not proven anything.
I have a post blocked by the spam filter that will show Linus saying Linux is bloated & inefficient in 2009! But Linus won't admit that the monolithic kernel has made the problem really bad for Linux. There is also a link to a Ph.D. computer science graduate showing how Linus twists the truth to arrive at his biased conclusions!
Re: writing applications
Actually, it's not the linux kernel but its generic module probing mecanism that does this. Which also mean that, in fact, the realtek specific codec module (and any others installed) is also loaded at least during probing phase.
I shall try to find time to write a more in-depth response, but ah, no. There is no "probing phase".
If you read the code, sound/pci/hda/patch_realtek.c, you will see the Realtek codec module doesn't even have a probe method. It consists of just the code to be run for these HDA codecs, together with their IDs. There is no need for a probe method because the HDA driver is able to determine the codec ID for itself.
The "secret sauce" is a single line: MODULE_ALIAS("snd-hda-codec-id:10ec*");
The snd-hda-codec-id:10ec* string is baked into the module, and used by the userspace module tools to match it to requests from the kernel. Any codec with an ID beginning 10ec (Realtek's vendor prefix) will cause this module to be loaded. Until it's requested, none of this information needs to be in RAM. And once it has been loaded it can all be flushed when necessary.
This approach is used throughout Linux (and indeed by NT and presumably other modern systems that aspire to run on more than a handful of different people's computers). There has been a project to try to introduce the same approach to Haiku, but it seems stalled. Here are some examples of such magic strings:
Re: writing applications
So now I actually had time to sit down with the Haiku source code and write a reply addressing the trickier part of your post.
Well, the pages of code for these Realtek workarounds will never actually be loaded, because the HDA driver code itself will never trigger a (code) page fault to execute them.
Only a span of virtual memory pages is wasted, not physical memory.
It would actually be possible (although fraught with danger) to do this, but Haiku doesn't. Instead, the Haiku module loader routine simply reads the ELF sections into reserved kernel RAM. So all the Realtek code (and everything else) is read from disk into RAM by this call. There is code to map an ELF image from a disk file into virtual addresses such that subsequent page faults bring it into RAM, but so far as I was able to confirm, that's only used for userspace programs, not Haiku's kernel add-ons.
So in the end it makes little difference, as on both platforms the hardware detection code of each supported codec must be loaded into physical memory and run, and in both cases only the code actually of some use is kept loaded.
As we see, this isn't true. Not only is there no need to load all this "hardware detection code" from every driver on Linux, as I explained above, but Haiku does in fact load the entire driver into RAM.
As far as code design goes, the Haiku HDA driver could be more modularized. But that would only save virtual memory pages, not physical ones. As the gain doesn't seem to be worth it, considering the few active contributors skilled in this area that the project has, nobody has done it yet.
Patches are always welcome, though.
Be our guest.
Of course, many of the things Haiku lacks are a result of your limited resources. But it does no good for people to insist that Haiku does this, or has that capability, when in fact it does not "for lack of resources".
Re: writing applications
There is no "probing phase". [...]
The "secret sauce" is a single line: MODULE_ALIAS("snd-hda-codec-id:10ec*");
Should be a recent design change then, because in my Ubuntu 9.04 (kernel 2.6.28.?), the HDA codecs are still assembled all together (well, depending on the kernel config, that is) within a single module. It's kinda hard to keep in touch with Linux sound system changes, as there are several sound systems. And chasing multiple targets is not that fun.
Anyway, that's a nice move indeed, and our own HDA driver would be better if the non-standard codecs were moved into separate kernel modules, so that only the ones in use are kept loaded.
Well, the pages of code for these Realtek workarounds will never actually be loaded, because the HDA driver code itself will never trigger a (code) page fault to execute them.
Only a span of virtual memory pages is wasted, not physical memory.
It would actually be possible (although fraught with danger) to do this, but Haiku doesn't. Instead, the Haiku module loader routine simply reads the ELF sections into reserved kernel RAM.
So all the Realtek code (and everything else) is read from disk into RAM by this call. There is code to map an ELF image from a disk file into virtual addresses such that subsequent page faults bring it into RAM, but so far as I was able to confirm, that's only used for userspace programs, not Haiku's kernel add-ons.
Well, the code above is not the culprit for that; rather, it's the B_FULL_LOCK requested for the area that hosts the read data. Anyway, you are right: the whole HDA driver code is loaded into memory. I don't know why I thought we also had lazy loading for kernel add-ons, with some hack to lock the ones that install interrupt handlers. Maybe it was proposed once but never made it into code.
So, I stand corrected: our HDA driver is bloated.
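To make the eager-load versus demand-paging distinction concrete, here is a userspace sketch in Python using mmap. This is only an analogy: a kernel loader works on ELF sections and locked kernel areas, not whole files, but the memory behaviour contrasted below is the same idea.

```python
import mmap
import os
import tempfile

# A file standing in for a driver image, mostly made of pages
# (e.g. vendor-specific workarounds) that may never be executed.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"\x90" * (1024 * 1024))  # 1 MiB of rarely-touched bytes

# Eager load (what Haiku's kernel add-on loader effectively does):
# the entire image is read into memory up front.
with open(path, "rb") as f:
    image = f.read()  # all 1 MiB occupies physical RAM immediately

# Demand paging (what a mapping-based loader would allow): only a
# virtual address range is reserved; physical pages are faulted in
# on first access.
with open(path, "rb") as f:
    mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first_byte = mapped[0]  # touches just the first page
    mapped.close()

os.remove(path)
print(len(image), first_byte)
```

With B_FULL_LOCK the eager case is even stronger: the pages are not just read in, they are pinned and can never be reclaimed.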
Of course, many of the things Haiku lacks are a result of your limited resources. But it does no good for people to insist that Haiku does this, or has that capability, when in fact it does not "for lack of resources".
Yes, bragging about capabilities Haiku doesn't actually have does no good.
One thing that would do Haiku some good is if the people spending time fighting the Haiku community's pointless arguments would instead contribute the missing parts to Haiku.
After all, Linux is twice as old as Haiku, and has had far more than twice its manpower.
And no debate, flame, counter-argument, or defense against whatever propaganda our community produces will improve Haiku more than, well, code contributions.
Everyone is free to spend their time as they want. But only time spent making actual contributions can turn Haiku into a better Haiku...
One could simply start by filing an enhancement ticket in our Trac system, asking for the HDA codecs to be split into submodules in order to reduce the code bloat loaded into the kernel by the very common, and therefore widely used, HDA driver. That would be a good mark of willingness, too.
Re: writing applications
Should be a recent design change then, because in my Ubuntu 9.04 (kernel 2.6.28.?), the HDA module codecs are still assembled all together
This change landed in 2.6.29, over two years ago. I suppose this is "recent" in terms of Haiku's decade-plus lifecycle but most general purpose systems have a much faster turnover. When Haiku began (as OpenBeOS) the Intel HDA proposal did not exist.
After all, Linux is twice as old as Haiku, and has had far more than twice its manpower.
Indeed, however the decision not to use an existing, mature kernel was never forced upon Haiku's developers. OpenBeOS (as it was then called) made an explicit decision to re-invent the wheel. If they want to revisit that decision now they don't need a bug report to do that. But I suspect you will find that the sunk cost fallacy rules the day.
Re: writing applications
Should be a recent design change then, because in my Ubuntu 9.04 (kernel 2.6.28.?), the HDA module codecs are still assembled all together
This change landed in 2.6.29, over two years ago. I suppose this is "recent" in terms of Haiku's decade-plus lifecycle but most general purpose systems have a much faster turnover. When Haiku began (as OpenBeOS) the Intel HDA proposal did not exist.
And? Today both Linux and Haiku have HDA support.
Sure, since 2.6.29 the Linux HDA driver has been more modular than Haiku's.
But that doesn't make the latter non-working, just less well designed.
Indeed, however the decision not to use an existing, mature kernel was never forced upon Haiku's developers. OpenBeOS (as it was then called) made an explicit decision to re-invent the wheel. If they want to revisit that decision now they don't need a bug report to do that.
I fail to see the link here. Linux's HDA driver was made more modular over two years ago, and for you that translates into proof that Haiku should have used the Linux kernel instead of its own?! Since when does porting a driver require changing the whole kernel too? When the price of such a change is higher than writing from scratch or a thin adapter layer, I fail to see the point, particularly since BeOS and Haiku were never Unix operating systems to begin with.
We've ported a good portion of the *BSD network adapter drivers without switching to a BSD kernel. Why would we need to do that for audio drivers? Even Linux kernel modules are not that dependent on private kernel APIs...
The "enhancement" report (the driver isn't buggy, just code-bloated) was simply about asking for the HDA driver to be more modular than it is today, nothing more, nothing less. It's not as if all Haiku drivers have this issue; not even all Haiku audio drivers do.
But thanks for staying in the non-contributing camp, all while claiming that having fewer contributors doesn't explain most of the issues Haiku is still facing...
Re: kernel
NT has modules, Haiku has kits; neither is a microkernel, but you're just denying the existence of hybrid kernels. Pal, this is not philosophy; there are some actual truths, and one is that, like it or not, hybrid kernels do exist, and "hybrid" is not just a fancy word for monolithic.
Now, seriously, you obviously don't like this OS and are entitled to your opinion, but why are you still posting here? I'm fairly sure you could do more productive things than looking for excuses to bash this project. If you think you can do better, code your own OS; if you don't, well, search for one that suits your needs and help its team. Having lengthy discussions about what it is and what it is not is just a huge waste of everyone's time, including yours.
That said, I'm out of this discussion. Do whatever you want, I just don't care enough to waste time here anymore.
Michael, do not be nervous; NoIntelligenceForMe, err... NoHaikuForMe, is a troll.
Please, everyone, do not feed the troll.
Re: writing applications
I fail to see the link here.
The kernel sub-project is hugely manpower intensive because of its unnecessarily large scope. It makes no sense to engage in such a project and then complain that you're short of people. It's like buying one last drink with your taxi money and then complaining that now you have to walk home.
All OpenBeOS actually needed was someone to tweak say, a BSD kernel to meet their requirements. This much smaller project would have resulted in access to a mature kernel and thus far broader hardware support, much improved portability, and all with less effort.
Re: writing applications
The kernel sub-project is hugely manpower intensive because of its unnecessarily large scope.
Not anymore. For the past couple of years, most work done in kernel land has not been in what is actually called "the kernel" but in drivers and their userland components.
And when it comes to writing hardware drivers, every alternative operating system has a manpower issue. After 20 years, Linux now has enough market share to get support for some of the most complex and critical hardware (GPUs, network adapters) from the manufacturers themselves. The standardisation of several hardware *busses* and device *classes* (AHCI, USB, ACPI, even HDA ;-) ) makes this task much easier than it was 20 years ago, when outside Windows your hardware had no support at all, and no technical datasheets from manufacturers. This trend benefits Linux, but even more so the smaller alternative operating systems.
It makes no sense to engage in such a project and then complain that you're short of people.
We don't complain. Knowing and acknowledging that some drivers are not the best design they could be is not complaining. Otherwise, I'll bet pretty much every operating system developer is complaining, then.
You're the one complaining that our HDA driver design is not good enough, which I acknowledge.
You want a better-designed driver? Stop complaining about it and start contributing.
I'm not complaining about the lack of contributors; I'm just complaining about people complaining that an open (i.e., contributed by people, for people) project is not good enough for their taste, without seeing that this is pointless: it can't be that bad if you don't care enough to actually do something yourself to improve it.
It's like buying one last drink with your taxi money and then complaining that now you have to walk home.
Which, ironically, is a better solution than taxiing home earlier while you're still quite drunk. Nothing beats a long walk for thinking about the issues at hand.
;-)
All OpenBeOS actually needed was someone to tweak say, a BSD kernel to meet their requirements. This much smaller project would have resulted in access to a mature kernel and thus far broader hardware support, much improved portability, and all with less effort.
Nobody will ever know because, well, simply nobody did it. Someone tried this path with the Linux kernel (the BlueEyedOS project, IIRC), but it never reached critical mass.
Call it the stupidest decision as much as you want, but one thing remains: only Haiku reached critical mass in the "let's rewrite BeOS as open source" effort. That doesn't make it perfect, and it doesn't make it the only possible way to do it, but so far it's the only attempt that succeeded.
And a single under-designed audio driver won't change that fact.
Re: writing applications
I think that running Haiku on the Linux kernel is not a hard task. To do this, one would need to teach the kernel to launch Haiku binaries; write accelerant, media_server, input_server, and print_server add-ons that work with Linux drivers; and run the boot script when the kernel initializes. No rebuild needed.
But I don't think the Haiku kernel needs to be abandoned. The Haiku kernel is better suited to multi-threading, simpler, and faster.
Re: writing applications
I fail to see the link here.
The kernel sub-project is hugely manpower intensive because of its unnecessarily large scope. It makes no sense to engage in such a project and then complain that you're short of people. It's like buying one last drink with your taxi money and then complaining that now you have to walk home.
All OpenBeOS actually needed was someone to tweak say, a BSD kernel to meet their requirements. This much smaller project would have resulted in access to a mature kernel and thus far broader hardware support, much improved portability, and all with less effort.
What part of "the developers didn't want to use a server/unix kernel" didn't you get?
I for one would never consider using the Linux kernel. I'd rather gouge out my eyes than untangle that spaghetti mess of code.
Re: writing applications
What part of "the developers didn't want to use a server/unix kernel" didn't you get?
I for one would never consider using the Linux kernel. I'd rather gouge out my eyes than untangle that spaghetti mess of code.
To add, the Linux kernel came out in 1991, and the GNU project had been looking to develop its own kernel (Hurd) for the GNU OS, based first on the Mach microkernel and later on the L4 microkernel. Only last year, in 2010, did R. Stallman (GNU's leader) finally give in to the Linux kernel. The GNU project wanted to move away from the Linux kernel because it is inefficient and huge, which will only get worse over time, in large part because of its monolithic design.
Linus wrote only the kernel; the GNU project created the rest of the OS for Linux.
If the Linux kernel was so great and fast, then why wouldn't GNU just accept it sooner? Makes you wonder. Simply because the GNU project wanted something better, having realized the drawbacks of the monolithic design, but was unable to deliver. There was little interest in making a competing kernel, and the Linux kernel had become the standard, making it impossible to get developers to work on a GNU kernel.
NoHaiku is right that the Linux kernel has been available and would provide many drivers (that's why Android uses it) but fails to say that it is huge, bloated and inefficient; i.e., a real big mess.
Haiku aims to be fast, clean and efficient, and would get bogged down with the Linux kernel. The Linux kernel would trade greater hardware support for lesser performance.
Re: writing applications
What part of "the developers didn't want to use a server/unix kernel" didn't you get?
I for one would never consider using the Linux kernel. I'd rather gouge out my eyes than untangle that spaghetti mess of code.
To add, the Linux kernel came out in 1991, and the GNU project had been looking to develop its own kernel (Hurd) for the GNU OS, based first on the Mach microkernel and later on the L4 microkernel. Only last year, in 2010, did R. Stallman (GNU's leader) finally give in to the Linux kernel. The GNU project wanted to move away from the Linux kernel because it is inefficient and huge, which will only get worse over time, in large part because of its monolithic design.
Linus wrote only the kernel; the GNU project created the rest of the OS for Linux.
If the Linux kernel was so great and fast, then why wouldn't GNU just accept it sooner? Makes you wonder. Simply because the GNU project wanted something better, having realized the drawbacks of the monolithic design, but was unable to deliver. There was little interest in making a competing kernel, and the Linux kernel had become the standard, making it impossible to get developers to work on a GNU kernel.
NoHaiku is right that the Linux kernel has been available and would provide many drivers (that's why Android uses it) but fails to say that it is huge, bloated and inefficient; i.e., a real big mess.
Haiku aims to be fast, clean and efficient, and would get bogged down with the Linux kernel. The Linux kernel would trade greater hardware support for lesser performance.
Actually, the Linux kernel has better overall throughput than Haiku, BeOS, GNU Hurd, Minix, or the NT kernel.
It does have that, but with that small 3-5% edge over some other kernels comes a huge mess of crap I personally don't want to deal with either. The biggest problem with Linux is that it is essentially anarchy; no one is leading. Without leadership, focus and vision, you get a big mess.
Haiku will continue to use code that makes sense and is fairly well designed. I don't have a problem with the HDA driver either; it works fine for me.
Re: writing device drivers
Who do I talk to for help in writing drivers for Haiku?
I am trying to port/improve my old drivers from BeOS to Haiku. The simple ones that support simple I/O work fine, but I am having real problems getting the more complex ones working.
I have been looking at the Haiku source code for weeks and just can't seem to see my mistake - HELP!
For example, CRAM is seen by DiskProbe but not by DriveSetup so I can't mount it.
I am trying to post this question to the Haiku development mailing list, but I seem to be messing up there too.
Re: writing device drivers
Who do I talk to for help in writing drivers for Haiku?
I am trying to port/improve my old drivers from BeOS to Haiku. The simple ones that support simple I/O work fine, but I am having real problems getting the more complex ones working.
I have been looking at the Haiku source code for weeks and just can't seem to see my mistake - HELP!
For example, CRAM is seen by DiskProbe but not by DriveSetup so I can't mount it.
I am trying to post this question to the Haiku development mailing list, but I seem to be messing up there too.
Did you create an account at freelists.org? You have to create an account and then join the mailing list.
Once you do that, your email will be sent.
Re: writing device drivers
I have created an account now. Before, I was trying to subscribe without creating an account first (boy, am I dumb), and yes, they have very quickly helped me there! I already see my driver working better and hope to have it working completely by the weekend.
This is why I like Haiku-OS: the developers are far more friendly and helpful than the Linux developers I have talked to locally. And the Linux forums are over-full of people like 'NoHaikuForMe', which does not make asking questions something you want to do. Mostly I just read the Linux forums that cover subjects I am interested in, but refrain from asking questions.
Re: writing device drivers
I have created an account now. Before, I was trying to subscribe without creating an account first (boy, am I dumb), and yes, they have very quickly helped me there! I already see my driver working better and hope to have it working completely by the weekend.
This is why I like Haiku-OS: the developers are far more friendly and helpful than the Linux developers I have talked to locally. And the Linux forums are over-full of people like 'NoHaikuForMe', which does not make asking questions something you want to do. Mostly I just read the Linux forums that cover subjects I am interested in, but refrain from asking questions.
Mostly I find Linux developers to be hacks, not hackers. Just hacks.
http://dictionary.reference.com/browse/hack
to damage or injure by crude, harsh, or insensitive treatment; mutilate; mangle: The editor hacked the story to bits.
Re: writing applications
Only last year, in 2010, did R. Stallman (GNU's leader) finally give in to the Linux kernel.
Jeez, you are starting to spread lies at the same rate as thatguy. I had respect for you, tonestone, but it is quickly fading.
The FSF put Hurd on the back burner AGES ago, as soon as Linux started gaining traction, and focused on providing the GNU tools needed for Linux to become a self-sufficient system (compilers, core/binutils, libs, etc.); Hurd hasn't seen any serious development since.
NoHaiku is right that the Linux kernel has been available and would provide many drivers (that's why Android uses it) but fails to say that it is huge, bloated and inefficient; i.e., a real big mess.
Back this claim up, show me some benchmarks that show Linux to be huge, bloated and inefficient!
You try so hard to paint Linux as inefficient, which obviously has something to do with some crazy notion that monolithic kernels are somehow the root of all evil, despite the fact that you can't point to a single factual reason why that would be.
I find it easy to ignore the stupidity of thatguy, since he obviously has no clue whatsoever, but you at least seem to have basic computer knowledge. And yet you are just throwing out sweeping statements with nothing to back them up. It's just sad when someone like NoHaikuForMe is pretty much the only one in this thread who presents FACTS.
And why is it that guys like tonestone and thatguy are so hellbent on mudslinging Linux? I find it so damn sad, since as a huge Haiku fan I want Haiku to attract developers, and the NUMBER ONE place from which Haiku has a hope of attracting them is other 'alternative' operating systems, of which Linux is by far the largest. And instead I see morons like thatguy popping up on Linux-oriented boards attacking Linux like some frothing-at-the-mouth madman while waving the Haiku banner. Totally idiotic.
It's no wonder this community is starting to seem stale when it's filled with such crazy haters. I'm really losing faith here, and seeing some weirdo like NoHaikuForMe come across as the 'voice of reason' in this thread just underlines that.
Enough
Can we please call this a dead horse?
This thread has outlived its usefulness.
Re: writing applications
Only last year, in 2010, did R. Stallman (GNU's leader) finally give in to the Linux kernel.
Jeez, you are starting to spread lies at the same rate as thatguy. I had respect for you, tonestone, but it is quickly fading.
The FSF put Hurd on the back burner AGES ago, as soon as Linux started gaining traction, and focused on providing the GNU tools needed for Linux to become a self-sufficient system (compilers, core/binutils, libs, etc.); Hurd hasn't seen any serious development since.
NoHaiku is right that the Linux kernel has been available and would provide many drivers (that's why Android uses it) but fails to say that it is huge, bloated and inefficient; i.e., a real big mess.
Back this claim up, show me some benchmarks that show Linux to be huge, bloated and inefficient!
You try so hard to paint Linux as inefficient, which obviously has something to do with some crazy notion that monolithic kernels are somehow the root of all evil, despite the fact that you can't point to a single factual reason why that would be.
I find it easy to ignore the stupidity of thatguy, since he obviously has no clue whatsoever, but you at least seem to have basic computer knowledge. And yet you are just throwing out sweeping statements with nothing to back them up. It's just sad when someone like NoHaikuForMe is pretty much the only one in this thread who presents FACTS.
And why is it that guys like tonestone and thatguy are so hellbent on mudslinging Linux? I find it so damn sad, since as a huge Haiku fan I want Haiku to attract developers, and the NUMBER ONE place from which Haiku has a hope of attracting them is other 'alternative' operating systems, of which Linux is by far the largest. And instead I see morons like thatguy popping up on Linux-oriented boards attacking Linux like some frothing-at-the-mouth madman while waving the Haiku banner. Totally idiotic.
It's no wonder this community is starting to seem stale when it's filled with such crazy haters. I'm really losing faith here, and seeing some weirdo like NoHaikuForMe come across as the 'voice of reason' in this thread just underlines that.
Well, think whatever you want; I certainly do not care. 80+% of computer users have spoken: the Linux desktop sucks, and no amount of evangelizing will change that fact.
I have never said Linux was inefficient. I said Linux is a big mess, and frankly there is no refuting that statement. That, and I dislike the attitudes of most Linux users.
The only people spreading lies here are the Linux users. They must feel threatened by Haiku. Why else bother with the community here?
Re: writing applications
The FSF put Hurd on the back burner AGES ago, as soon as Linux started gaining traction, and focused on providing the GNU tools needed for Linux to become a self-sufficient system (compilers, core/binutils, libs, etc.); Hurd hasn't seen any serious development since.
Not right, because people are still working on Hurd, just at a slower pace. You do understand that there's a difference between slow development and no development? Also, the microkernel choice has changed a couple of times, setting Hurd back further.
Yet for some reason Debian offers a Hurd release. Development on Hurd is also still going on, as shown by the release of GNU/Hurd 0.401.
http://www.gnu.org/software/hurd/
"The Hurd is under active development, but does not provide the performance and stability you would expect from a production system. Also, only about every second Debian package has been ported to the GNU/Hurd. There is a lot of work to do before we can make a release."
http://www.debian.org/ports/hurd/
There is even an Arch Hurd ISO in the works.
http://www.archhurd.org/news/19/
Posted Nov 25, 2010
"How many developers are working on the GNU Hurd?
Not many. One handful work on it in their free time, and another two handful do help with Debian GNU/Hurd and Arch Hurd packaging. Also, an additional handful of former developers are still available for answering technical questions, but are not really participating in the current development anymore."
http://www.gnu.org/software/hurd/faq.html
Back this claim up, show me some benchmarks that show Linux to be huge, bloated and inefficient!
I've already done this. I provided a link to a news story where Linus admits to this himself. Of course you may argue that was in 2009, two years ago now, but the situation is unlikely to have improved. Direct quotes from Linus and the reporter prove this:
LinuxCon 2009: Linux creator Linus Torvalds says the open source kernel has become "bloated and huge," with no midriff-slimming diet plan in sight.
Citing an internal Intel study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.
"We're getting bloated and huge. Yes, it's a problem," said Torvalds.
"I mean, sometimes it's a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago...The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse."
http://www.theregister.co.uk/2009/09/22/linus_torvalds_linux_bloated_huge/
Linus directly says the kernel is getting bloated and huge himself!!! So now you don't even believe Linus? I also showed the code size of the kernel getting bigger and bigger very fast, which makes it more prone to code bloat. Do you even read any of my posts or links? I gave that same link above in another post in this very thread, and I am sure I provided the exact same quotes in that other post too.
The inefficiency issue was in comparison to previous releases of the Linux kernel. With every release the kernel was getting slower and slower, i.e., becoming inefficient; performance was dropping, taking a 12% hit. The benchmark was provided and run by Intel. Maybe you don't believe Intel either? I did say inefficient, right? I'm pretty sure I never claimed it performed worse or better than other OS kernels; there is a difference between the two. You do realize that?
I was trying to point out that because the Linux kernel is huge, bloated, inefficient and complex, many alternative OSes don't bother using it. Those are the reasons why the GNU project itself is still looking to move away from the Linux kernel to Hurd within the next 10 years. R. Stallman only recently accepted the Linux kernel, in 2010, because of 1) way too slow development on Hurd, 2) performance issues, because Hurd was still not finished, and 3) the vast number of drivers for the Linux kernel. Had there been more developers, Mr. Stallman would have been pushing for Hurd instead. Why should other OSes use the Linux kernel when they cannot even convince the GNU project to use it? Can you not see the irony there?
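As a side note on the arithmetic in those figures: "two percentage points per release" and "about 12 per cent over ten releases" don't quite compose, since a constant two-point drop would compound to roughly 18%, while a 12% cumulative drop works out to about 1.3% per release. A quick Python check, assuming for illustration an equal regression each release:

```python
# A cumulative drop to 88% of baseline over ten releases implies a
# constant per-release factor r with r ** 10 == 0.88.
cumulative_factor = 0.88  # i.e. a 12% total drop, per the cited Intel figures
releases = 10

r = cumulative_factor ** (1 / releases)
print(f"equivalent per-release drop: {1 - r:.2%}")       # ~1.27% per release

# Conversely, a flat 2% compounded drop each release would total:
flat_total = 1 - 0.98 ** releases
print(f"2% per release compounds to: {flat_total:.1%}")  # ~18.3% total
```

Either way the direction of the claim stands; only the per-release magnitude differs.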
Re: writing applications
The inefficiency issue was in comparison to previous releases of the Linux kernel. With every release the kernel was getting slower and slower, i.e., becoming inefficient; performance was dropping, taking a 12% hit. The benchmark was provided and run by Intel. Maybe you don't believe Intel either?
Bottomley says the results are from a "database benchmark that we can't name". That's right, they can't even tell you what they tested. Intel posted results from an unnamed OLTP benchmark. We might reasonably guess that this is TPC-C, probably on Oracle. We know the hardware involved is a monster Nehalem Xeon system with 72GB of RAM and 192 SSDs.
There's a fair amount of crazy involved here; their "benchmark" system does things no actual production database would do. For example, we know it ran at realtime priority, which means that changes to Linux aimed at realtime systems will skew the results. But the whole point of realtime priority is that you are willing to trade throughput for scheduling reliability, and the benchmark measures throughput but doesn't care about scheduling. So that doesn't make a whole lot of sense.
But despite these caveats Intel's benchmark posting to LKML was useful to identify and fix some regressions. By 2010 when Intel appear to have discontinued this project‡, the gap had narrowed from 12% to 0.8%. There is essentially no equivalent regression testing for Haiku of course.
I was trying to point out that because the Linux kernel is huge, bloated, inefficient and complex, many alternative OSes don't bother using it.
The results speak for themselves, don't they?
‡ The real purpose of these postings is pretty transparent, Intel engineers needed to learn how to tune a Linux system for TPC benchmarks, they learned a lot about doing that from the feedback they received, and this doubtless helped them achieve some record results for new Intel systems in 2010.
Re: writing device drivers
My RAM drive now works!
Thanks to the Haiku developers for the needed clues.
Now to add compression.
Re: Syllable vs Haiku
Syllable and Haiku are not related; they just share the same file system, that's it. Syllable is based on AtheOS, which was an AmigaOS 3 (aka "Classic") clone. Syllable is a joke, an OS for way-outdated hardware, you know, one of those floppy OSes like MenuetOS and KolibriOS; DexOS is more usable. The only time Syllable is better than Haiku is in a VM, and heck, even ReactOS is better in a VM. Haiku is better, and that's it.