Issue 43, 02 Jun 2003


  In This Issue:

  The Scientific Method by Michael Phipps
  The Content Co-Op by Michael Phipps
The Scientific Method by Michael Phipps 

Many people who major in computer science fail to grasp the reason that they have to take all of those science courses. While, yes, learning how to put chemicals together, or how to calculate newtons of force, is in and of itself a worthy endeavor, there is another reason that is more directly related to our future careers. That is to learn, understand and apply the Scientific Method.

The Scientific Method, as explained by a professor at the University of Rochester, is:

  1. Observation and description of a phenomenon or group of phenomena.
  2. Formulation of an hypothesis to explain the phenomena. In physics, the hypothesis often takes the form of a causal mechanism or a mathematical relation.
  3. Use of the hypothesis to predict the existence of other phenomena, or to predict quantitatively the results of new observations.
  4. Performance of experimental tests of the predictions by several independent experimenters and properly performed experiments.

When I was first exposed to the formal Scientific Method, I resisted it very strongly. Jumping to conclusions was far easier and more suited to my nature than slow, methodical proof of what seems "obvious". It took years of dealing with the consequences of that--false assumptions and long traversals of blind alleys--to teach me to work more methodically. This process is still evolving.

The first time I saw Bresenham's algorithm, my first thought was, "I wonder whether the added complexity of working only in integer math is worth it." I never got around to testing it out back then. I was talking to DarkWyrm a few days ago and wondered about it again aloud. I decided that this was the time to put my wonderings to rest.

I copied a version of the algorithm from the app_server code. I ran it, plotted out the points and ensured that it worked. My observation was that Bresenham's works well and is completely integer, and that processors have much faster floating point units than they used to. My hypothesis was that a floating point version would be as fast or faster and would be much simpler to write.
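
For reference, here is a minimal sketch of the integer-only idea. This is not the app_server code; it handles just the first octant, and plot() is a stand-in I made up for the real pixel write:

    #include <cstdio>

    // Stand-in for the real pixel write; here it just prints the point.
    void plot(int x, int y) { std::printf("(%d, %d)\n", x, y); }

    // Integer-only Bresenham for the first octant (0 <= dy <= dx, x0 <= x1);
    // the full algorithm mirrors this logic for the other seven octants.
    void bresenham_line(int x0, int y0, int x1, int y1)
    {
        int dx = x1 - x0;
        int dy = y1 - y0;
        int error = dx / 2;      // running error term; everything stays integer
        for (int x = x0, y = y0; x <= x1; x++) {
            plot(x, y);
            error -= dy;
            if (error < 0) {     // error crossed a pixel boundary: step in y
                y++;
                error += dx;
            }
        }
    }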

So I created the code you see in this file. Execution of that code showed that the slope-based code took twice as long as the Bresenham's code! Based on what I knew about the x86 architecture, that did not make any sense at all. The floating point operations should be within an order of magnitude of the performance of the integer operations, so the vastly reduced number of operations in the slope-based code should have made a huge difference.
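
The original file isn't reproduced here, but a slope-based version along the lines I describe might look like this--again just a sketch, for shallow lines only:

    // Same stand-in pixel writer as in the Bresenham sketch above.
    void plot(int x, int y);

    // Slope-based line drawing for shallow lines (0 <= dy <= dx, x0 <= x1):
    // one floating point addition per pixel, plus a round when plotting.
    void slope_line(int x0, int y0, int x1, int y1)
    {
        float slope = float(y1 - y0) / float(x1 - x0);
        float y = float(y0);
        for (int x = x0; x <= x1; x++) {
            plot(x, int(y + 0.5f));  // round to the nearest pixel row
            y += slope;              // the whole inner-loop update, in essence
        }
    }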

The beauty of Scientific Method, though, is that it doesn't leave you stranded in a case like this. There is a way to reconcile your results with reality, if you can let go of your ego enough to admit that you might be wrong. That way is to take your results, go back to step 1 and start over.

That is precisely what I did. I started again at step 1 with "slope-based code runs slower than predicted" as my observation. My hypothesis was that the compiler was doing a really poor job of compiling my floating point code. I predicted that when I disassembled the slope-based code, it would be inefficient. Sure enough, when I did so, it was very bad.
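
For anyone who wants to repeat that step, the usual way to inspect what the compiler produced (my suggestion for a toolchain like gcc, not necessarily the exact tools I used) is something like:

    gcc -S -O2 slope_line.cpp    # emit the generated assembly as slope_line.s
    objdump -d slope_line.o      # or disassemble an already-compiled object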

Whether your hypothesis is right or wrong, the Scientific Method always leads the scientist to start again with some new observation or question. In this case, it brought about a code rewrite.

Starting with a description of "slope code can be more efficient when properly written in x86 assembly", I wrote the code and found that I needed very few instructions in my rendering loop to draw the line. That led me to believe that the new code would run very fast.

The "messy point" of Scientific Method is "properly performed experiments". With computer science, this is made somewhat easier than with a "physical" science--rerunning our tests is a lot easier, not to mention the fact that we have the compiler to help up make fewer mistakes. In my case, I wrestled long and hard with the compiler to get it to do inline assembly with dependency checking on the registers and memory. I finally, at the advice of Voidref, gave up on inline assembly and moved the slope code into its own .s file. Then came the debugging--I got one plotted point instead of the hundreds I expected. With a little work, those bugs were fixed and I got (nearly) the same results with both algorithms.

Finally, the timing tests came into play. Where the C/C++ version of the slope-based code had taken twice as long as the Bresenham's version, the assembly version of the slope-based code was about 30% faster than the Bresenham's version.
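
The timing itself can be as simple as a loop around each routine. A rough sketch, assuming the two routines from the sketches above are linked in, and with plot() reduced to a cheap store so output cost does not swamp the measurement:

    #include <cstdio>
    #include <ctime>

    extern void bresenham_line(int x0, int y0, int x1, int y1);
    extern void slope_line(int x0, int y0, int x1, int y1);

    // Time many iterations of one line drawer with the C library clock.
    double time_it(void (*draw)(int, int, int, int))
    {
        std::clock_t start = std::clock();
        for (int i = 0; i < 100000; i++)
            draw(0, 0, 639, 213);   // one shallow line across a 640-pixel span
        return double(std::clock() - start) / CLOCKS_PER_SEC;
    }

    int main()
    {
        std::printf("Bresenham: %.3f s\n", time_it(bresenham_line));
        std::printf("slope:     %.3f s\n", time_it(slope_line));
        return 0;
    }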

At this point, I am out of time for further tests. There are a number of questions and further experiments that we should draw from this:

  1. This is an unfair test. Hand-coded assembly vs. compiled C is not necessarily a good comparison. While I could argue that the compiler badly skewed the slope version, I can't tell, without hand-coding Bresenham's, whether it would benefit from some tuning as well.
  2. I have never written x86 assembly code before today. It is very possible that someone with more knowledge could do better.
  3. I only tested a few lines for correctness. It is possible that there are cases where the two algorithms vary greatly in their results. If that is the case, it is probably worthwhile to plot the points and compare.
  4. My experimentation method was somewhat slipshod--I had > 20 applications running, untrustworthy memory, two processors, interrupts enabled, etc. While I don't think that the results were dramatically changed by these variables, they do exist.
  5. This experiment actually undermines the one reason that I thought that the floating point code should be better--more registers would be available for other use in the slope code than in the Bresenham's.
  6. The slope code has a fair amount of potential for use in cases where the granularity could be less than a pixel--rendering PostScript, say, or subpixel rendering. None of that was taken into account.

Still, overall, I am pleased with this experiment. It demonstrated the Scientific Method and finally put to rest, at least in part, a question that had been dwelling in the back of my mind for years.

 
The Content Co-Op by Michael Phipps 

It was once said that only Nixon could go to China. For those readers who are not familiar, let me briefly explain. In the United States, the Democratic Party is the more left-leaning party. If a Democrat were to extend diplomatic relations to China, it would look like "selling out" US interests to the Communists. Nixon, being a staunchly conservative Republican, could go to China without drawing too much criticism. Just to keep this non-partisan, I will point out that only Clinton, a fairly leftist Democrat, could have enacted the social program changes that he did.

It is much the same with online music. There is no way that Shawn Fanning (the creator of Napster) could go to the record companies and convince them to offer music online in a reasonable way. Nor could any one of a dozen dot-com companies. No, it took one of their own. Steve Jobs, as CEO of Pixar and with close ties to Disney, is surely seen as "one of them"--someone with a significant interest in protecting the rights of the creators. Not to mention, of course, the many legal engagements of Apple while under his control.

The future of the iTunes Music Store (iMS) seems to be assured. It has been a big success so far, bringing in all sorts of press attention and big sales. I wonder, though, if the model is a little short-sighted. As neat as it is to be able to get music instantly, the real money is, I think, in other forms of media. The iPod, iTunes and iMS together make for the ultimate tightly coupled user experience for music. What would a tightly coupled, excellent user experience be like for other forms of media?

I would be able to turn on my TV and navigate menus of some sort to *any* TV show that I want. A 40 year old Star Trek rerun shouldn't be any different from last night's episode of Law and Order. I should be able to pay per view, up to a maximum of maybe three viewings, after which it would be free. The program is streamed to my TV and is stored on a local hard drive for pausing/rewind/etc. There needs to be some policy/protection to ensure that I don't just save off files that I have chosen for PPV; content with a lifetime license, though, I should be able to save off. Maybe in a different format? While it doesn't interest me, I am sure that many people would be interested in a "what are people watching" type of feature. Maybe even some "what is Justin Timberlake watching?" feature. This can never happen, though, until enough people have high-bandwidth connections to content providers. No one wants to wait for something to load.

I am an avid reader. The above model would fit nicely with books, too. O'Reilly has done something like this already--a subscription model where you can choose, each month, a certain number of books that you have the right to read that month. A model like this could work, although I would also like the ability to buy certain books outright so that they never "count against" the number per month that I can rent.

What is really lacking for this to work is an iPod for books. As I wrote a few newsletters back, the Palm Pilot makes a serviceable e-book reader. My major complaint is that the screen is too small. On my IIIxe, the case is not quite sturdy enough either--if I put it in my pocket for any length of time, it loses its memory and needs to be completely reset. The ultimate device for something like this would be about the size of an oversized paperback, but could fold in half. It would be a quarter inch (0.63 cm) thick and run on AA batteries for at least 10 hours. It would use LCD technology, not emitting photons but instead reflecting them--easier on the eyes. Personally, I think that grey-scale is fine, but I am sure that some people would disagree with me. This device would communicate wirelessly to get new books. Additionally, it could be used as an alternate display device for a TV or a PC. The screen should be touch sensitive. And it should cost < $100.

Finally, the business model of iMS is very good, but it could get a little bit better. I have a friend who likes to listen to "new music". It doesn't matter what it is, so long as it is new. After about a week, he is sick of it and wants to hear the next new batch of music. iMS is a poor model for him. I have purchased many DVDs of movies, but I have almost never watched a movie more than once. Why? Good memory. I tend to remember them too well. I watch Star Wars about once a decade, for example. For people like us, an "a la carte" model is very expensive. On the other hand, straight rental can get very expensive, too. $3 per movie can easily cost $60 a month, which is why cable TV "works".

My proposal is different. It removes the need for commercials, pays people by their popularity and allows nearly anyone to publish content. A subscription service that is "all you can eat" would satisfy both the people who want the same content over and over and the people who want the next new thing. After the cost of doing business (bandwidth, servers, support, etc.) is paid, the remainder of the revenue is paid back to the content creators based on a few criteria. One is popularity--the more "users", the greater the compensation. Another is level of effort. I realize that this is somewhat subjective, but there needs to be a difference in compensation between someone who writes, say, a newsletter article, and someone who makes an 8-hour epic movie. Finally, user feedback should be an element of the compensation. If everyone who downloaded your song hated it, you should not be paid as much as someone who made great music.

This model could also be extended to software. And there, I think, is possibly the biggest benefit of all. I would love to have a library of 10,000 applications available to me at any time. Authors would have a reason to commit to upgrades and changes. Users would be repaying the authors without having to pony up for every application that they ever need. I would love to belong to this sort of a "co-op", both as a producer and a consumer. Watching any TV show ever made, listening to any song ever recorded, reading any book ever written and running any application ever coded would be worth a lot of money to me. Think about it this way--cable, in the US, is around $40 a month. XM radio is about $10 a month. Assume you purchase one application per month (~$40) and $20 worth of books/CDs. That would be $110 a month. For $99 a month, say, wouldn't you want to subscribe to the content co-op instead?