“The question of whether a machine can think is no more interesting than the question of whether or not a submarine can swim.” — Edsger Dijkstra
The trailer for the sequel to Tron looks awesome.
And this gives me an opportunity to vent one of my pet peeves about the original Tron.
In Tron, you have (in essence) a battle between three programmers: Flynn (hero), Bradley (hero), and Dillinger (villain). Now, what actual programs have these three delivered, per the movie?
Unsporting though it may be of me, I know which one I’d hire. With appropriate code reviews, of course.
In a thoughtful post at the Big Nerd Ranch blog, Joe Conway talks about the relatively new dot notation in Objective-C for invoking messages on objects.
The executive summary is, he doesn’t like it.
He’s making two arguments against dot notation:
I have no argument at all with point 1. I personally dislike the brackets notation, but that’s just years of writing C++. Objective-C is a different language, different syntax, get over it, write some code: no problem.
Point 2… well. Maybe this is also my years of C++, but I really don’t see the problem. For example, he writes:
In Objective-C, the square brackets [] were added for message sending. While the square brackets had previously only been used for indexing an array, the intent of the brackets can be easily determined by the context.
That’s true, but the original reason that Objective-C used brackets was not to differentiate message sending from structure access; it was because Objective-C was (and largely still is) a preprocessor on C, and using brackets in that way made it easy to parse and do the substitutions. I’m entirely in favor of making a virtue of that necessity, but let’s not forget that the original reason was arbitrary, and unique in the history of object-oriented extensions to C.
He goes on to say, in the example:
int x = foo.value;
What does that mean? Are we getting the value field out of the structure object foo? Are we executing a simple method that returns a value? Or, in this case, are we creating a network connection, pulling a value from a web server, turning that data into an integer and then returning it?
Well, yes, but we really don’t know any more in:
int x = [foo value];
except that we’re running some code. He writes:
Our first glance tells us we are definitely sending a message. That clues us in that more work is being done, not just a simple memory offset and an assignment.
OK, good, we do know that we’re sending a message, but… well, we still don’t know any more about the nature of the message than we did in the first example. Calling these things “foo” and “value” is stacking the deck a bit, too; if the example were either:
int remoteMemoryTotal = remoteConnectionPartner.getRemoteMemoryUse;
int remoteMemoryTotal = [remoteConnectionPartner getRemoteMemoryUse];
it’s a bit harder to argue that we have zero information about what’s going on.
My preference for dot notation largely comes from the desire for encapsulation. I like the idea that if I need to swap out an implementation detail, I can do so without a syntax change. For me, the fact that accessing a field off of an object, and invoking a method on that object use the same syntax is a feature, not a bug.
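That uniform-access idea can be sketched quickly; here it is in Python rather than Objective-C, purely for brevity, with hypothetical class names. A stored field can later be replaced by a computed property, and no call site has to change:

```python
# Uniform access: callers write `obj.value` either way, so the
# implementation detail can be swapped out without a syntax change.
# (Hypothetical example, sketched in Python for brevity.)

class Plain:
    def __init__(self):
        self.value = 42          # a simple stored field


class Computed:
    @property
    def value(self):
        return 6 * 7             # now computed by a method; callers unchanged


for obj in (Plain(), Computed()):
    print(obj.value)             # prints 42 both times
```

The call site is identical in both cases, which is exactly the feature-not-a-bug argument: the caller is insulated from whether “value” is data or code.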
Part of it is cultural, I suspect. Objective-C is an interesting hybrid language. The object model is very dynamic and fluid, very Smalltalk-y, but that’s added on top of C. Not a “C-like language,” but C. K&R C. Thus, there’s a huge difference (in philosophy as well as performance) between reading an int out of a structure and sending a message to an object that returns an int, and the syntax reflects that. In other languages, like C++ and Java, the philosophical difference is reduced (somewhat in C++, greatly in Java), and a founding principle of those languages was to make “get something from an object” be the fundamental operation.
He does make some good points about returning l-values from functions (something that C++ wrestled with unhappily), which is a requirement for writing really nice setters as well as getters using dot notation. All in all, I agree with his fundamental point that:
You aren’t at home. You’re working with another language.
… even if I don’t accept the premise that using dot notation to invoke methods is fundamentally a bad idea.
I have multiple reactions when I read that extremely talented Macintosh developers are boycotting the iPhone App Store.
First, I agree completely with the essential issues. The rejection and approval process is not one that inspires the least bit of confidence, especially for developers who might write exactly the kind of high-investment, sophisticated application that (a) makes the iPhone the leader, and (b) is likely to run afoul of an expansive or pedantic interpretation of one of the “rules.” (I put the term in quotes because the rules, as applied, are simply too vague to be considered such.)
As an analogy, suppose Apple told Adobe that it could not ship Photoshop for Macintosh OS X because it duplicates the functionality of Preview. That’s what they did with Google Voice. I agree that this particular scenario is unlikely, while observing that code signing for Mac OS X is going to be mandatory at some point in the future.
When I was at Apple during the early 1990s, the company culture was still one of absolute self-assurance and arrogance, even though those were the wilderness years. I can only imagine what it is like there now. One of the things that struck me about Phil Schiller’s response regarding the Ninjawords situation was that Apple responded, not in a case where they were clearly, undeniably wrong to reject an application, but in one where they felt they were clearly, undeniably right, and were being treated unfairly by that portrayal. It is a mark of how extremely arrogant Apple appears (and is) right now that Schiller can say, in effect, “No, you have it all wrong, we were correct and the developer is wrong, we did everything exactly right and have no case to answer,” and it is considered a step forward in good communication.
Let’s consider Schiller’s closing:
Apple’s goals remain aligned with customers and developers—to create an innovative applications platform on the iPhone and iPod touch and to assist many developers in making as much great software as possible for the iPhone App Store. While we may not always be perfect in our execution of that goal, our efforts are always made with the best intentions, and if we err we intend to learn and quickly improve.
Gruber considers this statement the “first proof I’ve seen that Apple’s leadership is trying to make the course correction that many of us see as necessary for the long-term success of the platform.” I think he’s being far too generous. Schiller’s statement contains absolutely nothing concrete. Of course they’re not perfect, and one would hope that “if they err” they would do something about it. Those are platitudes, not steps.
The only rational conclusion is: What you see now in the App Store is what you are going to get, absent the FCC or FTC slapping Apple around (and please do not hold your breath for anything substantive in that regard).
Second, I doubt that this is enough for me to stop developing for the iPhone. (The easy answer is “no,” because I have signed agreements to deliver iPhone applications, but what about my own spec work?) The reality is that if you are developing for mobile, you develop for the iPhone first and then worry about any other platform. (Microsoft has pretty much admitted this themselves.) It’s an impossibly large market and pile of money, and it is very difficult to consider walking away from it.
Third, we are watching an important shift happening in the software industry. There is a class of developer, console game developers, who must be regarding this with considerable amusement, since they’ve always developed in a walled garden with constant supervision from the platform developer. Desktop developers aren’t used to this. The only kind of retribution we’ve ever been given was the occasional breaking of a private API in the OS (and even that was often treated as a major betrayal on the OS vendor’s part).
From the point of view of desktop developers, the open development model is a natural law. From the point of view of the platform owners, it is a historical accident, like the lack of DRM on CDs, that they wish they could go back in time and fix. Anyone who thinks that Steve Jobs is not irritated that Adobe can sell Creative Suite Design Premium for $1,800 without paying Apple a penny does not understand Steve Jobs.
Jeff Davis has written a superb article about the problems with NULL in SQL. He has it exactly right when he says:
I think the best way to think about NULL is as a Frankenstein monster of several philosophies and systems stitched together by a series of special cases.
The closest single operational definition of NULL I can think of is, “This could be any value, so I-the-database will not treat it as being any particular value.” Of course, that immediately breaks given that aggregates ignore NULLs.
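You can watch those stitched-together philosophies collide in a few lines. This sketch uses Python’s built-in sqlite3 module for convenience; the semantics shown are standard SQL, not anything SQLite-specific:

```python
# NULL's special cases, demonstrated with Python's stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t (x) VALUES (?)", [(1,), (2,), (None,)])

# NULL is not equal to anything, not even itself: the comparison
# yields NULL (surfaced to Python as None), not TRUE or FALSE.
print(cur.execute("SELECT NULL = NULL").fetchone()[0])            # None

# But "could be any value" breaks down for aggregates, which skip NULLs:
print(cur.execute("SELECT SUM(x), AVG(x) FROM t").fetchone())     # (3, 1.5)

# COUNT(x) ignores NULLs; COUNT(*) does not. Same table, two answers:
print(cur.execute("SELECT COUNT(x), COUNT(*) FROM t").fetchone())  # (2, 3)
```

If NULL really meant “any value,” the AVG would be unknowable; instead the aggregate quietly pretends the row isn’t there. That inconsistency is the monster Davis is describing.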
This is, sadly, one of those things that you just have to learn. Painfully.
For the last few months, I’ve been honored to videotape PostgreSQL events. The last one was PGDay San Jose 2009, which was the day before the start of the 2009 OSCON. I didn’t attend OSCON this year, in part because one of the great pleasures of OSCON for me was getting to go to Portland; visiting San Jose just isn’t the same.
The recorded sessions for PGDay SJC are available:
Many thanks to Steve Crawford for the audio system and assistance with setup, tear-down, and coordination during the event.
So, I’ve settled on Mercurial as my VCS for my current software development. I looked at Git and Bazaar, and decided that Mercurial met my needs as closely as anything did. Git is, I’m sure, a thing of beauty and a joy forever, but the documentation and cultural milieu were quite off-putting; it’s like when you joined the computer club in high school for the first time, and everyone knew about a billion times more than you, and was happy to remind you at every turn. Bazaar is, I’m sure, lovely, but I just bonded with Mercurial better.
For a good comparison of Git and Mercurial, I would recommend Git vs Mercurial: Please Relax. I would especially recommend this bit of advice at the end:
- Evaluate your workflow and decide which tool suits you best.
- Learn how to use your chosen tool as well as you possibly can.
- Help newbies to make the transition.
- Shut up about the tools you use and write some code.
Tony [Comstock] has described Google as “lazy” for various sins: The sex-negative algorithm changes, considering [penis] a less naughty word than [clitoris], and so forth. And he has a good point.
But, really, aren’t we the lazy ones?
Just like Homo Sapiens has trouble imagining geological time, Homo Internetica has trouble imagining time periods more than a couple of years. In 1994, just having a web site was enough; we didn’t care how we ranked in search engines, because there weren’t any, and then when there were, they were lame. (Remember lists of links as the height of Internet culture?)
That gave way to the era of the search engine, an era that I believe is now coming to a close. Until a couple of years ago, the trick to success was making sure you were properly ranked on Google, and the world would be delivered to you.
And we got very lazy. We assumed that the world would always be that way, that Google would continue to deliver to us what they always did. I would hope that the events of September through December, 2006, would have disabused everyone of that notion.
Really, Google is trying to solve an impossible problem. What does it mean to search for “pump”? What could the top ten results possibly be? (This is why Wikipedia figures so large in Google search results; it’s an easy way of punting on impossible-to-figure queries.) For a while, just having a web site that was pump-related was enough, because not everyone had one of those.
Now, everyone does. It’s like having a telephone: If you don’t have a web site, you’re not serious about business. (In fact, I’d say at this point you could skip the telephone first. And a fax machine? Whatever; use eFax if you care.)
But when the entire world is on the Internet, then search becomes worse than useless. Google understands this problem, but their basic model, which is, “Type in a short phrase and we’ll tell you about web sites” is as much a part of their DNA as Microsoft’s Windows codebase, and just as much of an anchor. I have no idea what will replace it (if I did, I’d be out building it), but something will.
Today, trying to build a business around organic search is becoming counterproductive. What do you think of when you see a company called “AAAAA Aardvark Plumbing Service”? Not “quality,” but “oh, look, they’re gaming the indexing system.”
So, it’s time to stop thinking about organic search results. It’s over, it’s done. Even if [real sex] did the right thing (and I’m not claiming it does), there are probably 1,250 companies (minimum) that could make a plausible claim to having pages that are relevant to that term; are you really that interested in spending time fighting between results page 119 and 121?
It’s time to get back to selling our wares by, you know, finding people proactively and getting it into their hands. Networking, marketing, advertising, all of that boring tedious fiddly work.
Just like we had to back in 1993.
And some companies won’t make it, because the cost of doing that marketing will exceed the revenue that the results will produce. That’s not a comfortable truth, but it is a truth nonetheless. It will mean that the old traditional boogiemen of distributors and other gatekeepers will continue to be important, and will continue to get their cut.
Remember how people told us that the Internet would completely disintermediate everything, and it would be a direct artist-to-consumer paradise? They lied.
The organic search results gold rush has been over since September, 2006. Time to get back to work.
That which only exists on one disk, you do not truly possess
Recently, the very cool video site Vimeo announced that they would no longer be allowing videos which were just samples of the gameplay of video games. Needless to say, howling and gnashing of teeth followed. I don’t have a strong opinion on it either way (except that Vimeo is completely right and the people complaining can go hang), but one of the repeated comments baffled me. To wit, “Well, how long do I have before they are deleted? I need to back them up.”
Excuse me? The only copy of something that you presumably valued, since you were willing to take the time to record it and upload it to a video site so that we could all be bothered by it, exists only on some third-party video site?
Then I encourage Vimeo to delete all of those movies now, as an object lesson in proper digital asset management. Harsh, yes, but sometimes, that kind of lesson is the only one that sticks.
A friend of mine once bought a $200 beater car. On the way to a very, very important job interview, this $200 car broke down on the freeway. My friend blamed this on “bad luck.”
Needless to say, this was not truly bad luck.
A server failing is not “bad luck.” Computers fail. All the time. Bad luck is a meteor hitting the data center, or Godzilla rampaging through an Internet connection facility. A single, un-backed-up server losing its single, non-RAID disk is not “bad luck.” Depending on how prepared you are, it is either as boring as a kitchen light bulb burning out, or as disastrous as my friend’s experience. It is, however, something that you will have to confront sooner or later if you have any kind of public web presence.
Before you pick a $9.95 a month hosting plan, you might want to reflect on that.