bestkungfu weblog

Facebook needs accessibility help

Filed in: General, Wed, Jan 26 2011 20:17 PT

That title looks a little confrontational, I know. But no, seriously. They’re actually looking for someone with the skills necessary to help make the most popular site on the web a little more accessible to its users.

Okay, there’s the good news. Here’s the bad: while they’re looking for that person, they’re also busy reinventing my greatest nemesis: the CAPTCHA. Now, at first blush, it might look like it’s not going to be that big of a deal. Reportedly, Facebook is rolling out a service that will display three images of one of a given user’s friends, and prompt them to select that friend’s name from a list. This will appear as a prompt to users who are exhibiting suspicious activity, in an attempt to keep hacked accounts from spamming others, for example.

I will admit that asking people to identify themselves by verifying knowledge of their social network has some significant upside potential. However, by pinning that knowledge on being able to see and recognize an image, a couple groups of people get screwed in this deal. First and foremost are blind and low-vision users, who would fail this test just as readily as they’d fail to recognize mangled patterns of numbers and letters. Given that most of these users have no chance of passing this test, they would presumably be locked out of their own accounts with no real recourse to regaining access. My fear is that the tenor of any subsequent screening would be similar to what one might expect of someone from Nigeria who just failed the same test: that is to say, not pleasant. (Another group I’m less concerned about is people like Robert Scoble, who famously hit up against Facebook’s 5000-friend limit, and who I’d bet dollars to donuts would fail a photo ID on 4/5 of those friends. No offense, Robert. I’m just playing the odds.)

What really worries me about this isn’t that it’ll be a failure, but that it’ll probably be a success. Facebook has all the information it needs to serve up random bits of information that tie users together. If that happens, it won’t be long before we start getting these kinds of tests on our Ticketmaster orders in place of CAPTCHAs. Companies would love that kind of certainty because it would reduce fraudulent transactions dramatically. But what makes that a win for Facebook and its partners would also make an already-huge problem for blind and low-vision users even bigger. From what I see here, there isn’t any way to fall back to another kind of test, and this is precisely the role an accessibility specialist needs to play inside Facebook. Somebody has to jump in at this stage and say, “hey, guys, I think you forgot something.”

This one’s a freebie. (By the way, does this count toward my community service? Man, I’ll never punch that mascot again.) What I would say to them is that there is quite likely some other set of information that could be shared, that doesn’t require vision, but would still provide some kind of safeguard beyond the information that’s presently available. Provided the user’s email account hasn’t also been captured, something like a simple password reset could do the job. Failing that, Facebook needs at a minimum to provide a way for blind and low-vision users to contact a human being and prove their identity. That could be as simple as locking down access to a user’s private info once the account has been flagged as suspicious, and asking questions about that info. Sure, hackers could jump in and capture that information first, but once the more sophisticated hackers get wise to this new prompt, they’ll start capturing every friend’s photo as well. There’s no perfect security solution here, but we have to create some way to allow blind and low-vision users to protect themselves more or less equally.

Mind you, any blind user who’s created an account on Facebook has already had to defeat at least one CAPTCHA, which is another problem to solve, but remember that the user has no skin in the game on account creation. When your account may have been hacked, it’s critical, especially on a social networking site, to regain control of it as soon as possible to minimize the damage. (I wish I didn’t know as much about that feeling as I do. Thanks, Gawker.) So it’s important to the overall security of Facebook to ensure that legitimate users can retrieve their accounts quickly, irrespective of their sensory capabilities.

If you have the skills called for in Facebook’s job posting, let me know you’re interested and I can put you in touch (in confidence, if necessary) with someone there. I can’t think of an open position in accessibility that has a greater potential for one person to do a whole lot of good for a whole lot of people. And you will not want for design challenges.

Does the iPad 2 display connect to your brainstem?

Filed in: Apple, tech, Sun, Jan 16 2011 20:16 PT

I’m sure we could ask Apple, but they don’t comment on rumors.

Still, the mill continues to churn, and this week we saw double-sized glyphs embedded in a beta of iOS. From that, the conclusion being drawn is that the iPad 2 will have a pixel-doubled, 2048×1536 display. The question is, is that insanely great, or just insane? Let’s take a look at where we are today:

Device      Resolution  Size    PPI  Shipping
iPad        1024×768    9.7″    132  Yes
Galaxy Tab  1024×600    7″      171  Yes
Xoom        1280×800    10.1″   149  No
Atrix 4G    960×540     4″      275  No
iPhone 4    960×640     3.5″    326  Yes
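Those PPI figures are easy to sanity-check: pixel density is just the diagonal resolution in pixels divided by the diagonal screen size in inches. Here’s a quick sketch in Python (small discrepancies against spec sheets come from manufacturers rounding their quoted diagonals):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    # Pixels per inch: diagonal pixel count over diagonal screen size.
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(1024, 768, 9.7)))   # iPad: 132
print(round(ppi(1280, 800, 10.1)))  # Xoom: 149
```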

What we can see here is that Apple remains far ahead in pixel density among phones, but they’re behind even already-shipping devices in the tablet world. Once they fell behind other phones, they fired back with a display that the crop of devices announced at this year’s CES still haven’t caught up to.

I think it’s a foregone conclusion that the next iPad will follow the same pattern. Let’s be honest: the iPad’s display is good. Not great. It needs a boost.

But does it need to be quadrupled? Eh. I’m not so sure. First and foremost, there’s only one application that really benefits from that kind of pixel density, and that’s text. The iPhone 4 was a tremendous advance for readability, but bear in mind that a tablet is traditionally held farther from the eye than a phone. It doesn’t need to be as tack-sharp as an iPhone 4 to be unbelievable.

Second, video is another big use case, and a 2k 4:3 display doesn’t make sense for any common video size. If we take as a given that the iPad 2 will support 1080p video, that means a 1:1 representation of that video leaves black borders all the way around. And for 720p, the format iTunes actually delivers, pixel-doubling is no big improvement. It’s obvious that higher pixel density means better video, but if Apple likes controlling every pixel, then they’ll want a resolution that matches the video content they already distribute.

Third, a 2048×1536 display means over three million pixels, all of which need to be driven by a GPU. Just to keep up, the GPU would need to push four times as many pixels as the current one’s, which is a tall order for a single generational leap. I know that Hitachi has shown off a 302ppi, 6.6″ display, which suggests a retina-class tablet display could be made, but just because something can be built doesn’t mean it’ll ship. The screen itself is only part of the balance between performance, cost and battery usage.

Finally, according to iSuppli, the iPad display/touchscreen unit accounts for more than a third of the overall bill of materials for the 16GB version ($80 out of $229.35). How likely is it that they’ll quadruple the pixel count in that package without ultimately affecting the price? Don’t get me wrong: I’m sure Apple will squeeze more pixels out of that form factor. But if it’s actually 2048×1536, I will be extremely impressed.

So if a pixel-doubled iPad isn’t in the cards, let’s look at plan B. I’m just spitballing here, but I think the optimal display resolution for a next-generation iPad is 1440×1080. It’s a higher resolution than any announced tablet, and at 186ppi, it would be the sharpest tablet display on the market. It also displays 1080p video at full screen when zoomed in, along with 720p at a nice, clean 1.5x multiplier, a scaling job consumer electronics companies have handled well for years. All without breaking the bank either in cost or GPU: at 1.56Mpix, it nearly doubles (1.978x) the current display, while still requiring less than half as many pixels (49.4%) as a 2k display.
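The arithmetic behind those numbers is worth showing, assuming the same 9.7″ panel as the current iPad:

```python
import math

ipad_1  = 1024 * 768    # current iPad: 786,432 pixels
plan_b  = 1440 * 1080   # proposed resolution: 1,555,200 pixels (~1.56Mpix)
doubled = 2048 * 1536   # rumored pixel-doubled display: 3,145,728 pixels

print(round(plan_b / ipad_1, 3))            # 1.978x the current display
print(round(plan_b / doubled, 3))           # 0.494: less than half of a 2k display
print(round(math.hypot(1440, 1080) / 9.7))  # 186 ppi on a 9.7-inch panel
```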

Apple has become known for its hardware advances. They scored a huge coup with the display technology on their flagship device. But while I can’t completely rule it out either technically or economically, I just don’t think lightning will strike twice. If it does, I’ll be first in line. Again.

It’s the most wonderful time of the year

Filed in: consumerism, culture, design, tech, Wed, Jan 5 2011 21:05 PT

…by which I mean CES, of course.

Granted, half of the things presented this week will never see the light of day, and the other half will ship three to six months later than announced. But still, today alone was just staggering. Motorola gave an Apple-grade presentation (if you can look past its comically bad audio). Olympus finally announced a Micro Four Thirds camera I’m willing to take the plunge on. Even Microsoft may be at risk of becoming relevant again.

I think what can be said about this batch of announcements is that this is the year everything is good enough. What I mean to say is that, of all the devices I’ve seen in the last couple of days, nearly all of them are capable of convincing someone to give up the PC as their primary computing device. These aren’t just rehashed netbooks–relatively few, in fact, even have an Atom CPU–but devices everywhere from 3 inches on up that have enough juice to browse the web, handle email, play games, watch movies, find yourself on a map, and generally do what 90% of the market does with their PCs.

I’ve seen a lot of CES presentations in my time, but this is the first year that I’ve seen the writing on the wall for PCs as we know them. Now, okay, if you’re reading this, you’re one of two kinds of people: the ones who will be using your phone or tablet as your primary computing device by the end of the year, if you’re not already; or the ones who will still be lugging a 5-lb. clamshell device with a keyboard to your neighborhood Starbucks. Either way, in my opinion, you’re an outlier. You may need to type so frequently that a keyboard is always in your plans. Or you may be editing 4k video, or compiling an operating system. And that’s fine. PCs will still exist for those cases. But you’re still going to be affected by the trends in the industry.

What I want you to think about as you contemplate the death of the PC (or, say, Wintel, or the WIMP model, or what have you) is someone you know who’s not at all a geek. Maybe your mom, or the partner who stares glassy-eyed at you when you come home complaining about the latency of your DNS at work. Now, think: what do these people do with their computers all day? They browse the web. And by “the web”, I mean web-based email, Facebook, YouTube, Twitter, Netflix, their bank accounts, their stocks. Name me one thing 90% of these users need a Core i7 CPU for. Games? Only if they’re hardcore. Editing images or videos? Probably not worth the investment.

In the overall cost-to-benefit calculation, there’s going to be a lot more value given to size and battery life than to raw horsepower. And raw horsepower per dollar is really the only remaining benefit of the PC. They’re complicated, bulky, virus-prone, and get slower over time. I looked at my in-laws’ mid-tower Windows machine like it was a record player: it’s big, loud, sucks down a lot of juice… and most importantly, it was asleep most of the time I was there, since they got my hand-me-down netbook for a present.

Meanwhile, you can walk into any mobile phone store in the US today and pick up a 1GHz computer with a half-decent browser for anywhere from $200 to nothing. Then you can shove it in your pocket. That’s powerful. And what we’re seeing this week shows us that the gap between the desktop and the pocket is not only narrowing, but it’s morphing in all kinds of ways. If Motorola is to be believed, the tablet battle will be joined by the Android Honeycomb-powered Xoom this spring; there will be at least one 960×540 phone in the near future; and Windows 8 is aiming for low-power CPUs as well. Consumer electronics companies aren’t tailoring their offerings for power users: they’re aiming squarely at the non-geek in the house. (Don’t feel threatened. It’s for the best.)

This week, we’re seeing what the non-Apple players in the market see as the future of computing. And it looks to be the first time Apple has had to take the competition seriously.

Five Myths about Accessibility Myths

Filed in: accessibility, Tue, Jan 4 2011 05:06 PT

I’m an accessibility evangelist by trade. (No, seriously. It’s on my business card. My boss even knows.) Needless to say, such a job requires me to think seriously about what accessibility is all about, and how to communicate that to an audience that, frankly, has a lot of things on their mind, like keeping their jobs, turning a profit, or just not shooting up the place by the end of the day.

I would like to say that the material we have to offer people who want to learn about accessibility is plentiful (it is), and that it’s compelling enough to the constituencies we need to reach (it’s not).

The chief antipattern I can cite is the “accessibility myths” article. Search “accessibility myths” on Google and you’ll find a few hundred examples of this phenomenon. Ordinarily, these are blog posts citing a handful of elementary observations on the state of accessibility among designers and developers at the ground level. I know many, many people who have written them, nearly all of whose skills I respect. I want to assure each of you that I’m not targeting any one of their lists in particular, except as symptoms of a greater problem. (I’m hoping this disclaimer will protect me from pages of comments defending previous attempts at this kind of writing.) The problem is that when one sets out such a line of demarcation–your information is false, mine is true–you may not be reaching the right people with the right news. Or if you are, you may not have made them feel motivated toward your cause.

In an effort to focus our energies a little better, I’ve put together the problems I find with the “accessibility myths” genre. And since it is better, pedagogically speaking, to present material in a form that the audience finds comfortable, here they are: five myths about accessibility myths.

1. Addressing knowledge gaps as “myths” is productive.

So let’s say I just walked into your RSS feed one morning and said, here are the five things you think are right about Mac OS X, that are wrong. That is to say: dear reader, I know you don’t know me, and obviously I don’t know you, but I’m going to start talking about how wrong you are about what may or may not be a common issue.

Wow. No wonder we’re so popular.

It’s clear that accessibility advocates frequently encounter the same kinds of opposition, and often fight dated or inaccurate information while trying to improve access to web content. But to stand on a soapbox and decry it all as mythology can be especially alienating.

So let’s try not to belittle people who we’re trying to rally to our side. Make sense? Good. I just don’t understand why everybody says you’re so dumb.

2. “Myth” articles are compelling and convincing.

Now that we’ve irritated the people we most need to reach, it’s time to move on to the banalities that set each specific accessibility expert off. Most myth posts begin with a no-brainer like “accessibility is about creating a text-only site” or “accessibility is all about blind people and screen readers.” These are generalizations at their most basic and uncontroversial. If you actually encounter someone in the wild who believes something like this, they’re not misinformed: they’re uninformed. It’s not a myth to someone who hasn’t ever heard or believed it.

The second type of misstep is to claim something is a myth when it’s actually something reasonable people can debate. One that I can think of that keeps recurring is the statement that accessibility work is “time-consuming, expensive and very technical”. That came from RNIB’s myths article from 2009. Here’s the thing: quite often, accessibility work is time-consuming, expensive and very technical. Especially to someone who doesn’t know all they need to know about it, or someone who went too far down the wrong path before accessibility was called to his or her attention. That is to say, your most critical audience.

It’s not the best strategy to say to someone who’s suffering through accessibility remediation that they’re not experiencing what they’re experiencing. Or worse, to tell them that they’re only in the situation they’re in because they didn’t come to us earlier. It causes accessibility advocates to seem out of touch with the reality of the lifecycle of web design and development. It’s good for people to be made aware of the problems that can arise. But for those staring those problems in the teeth, they’re looking for solutions. What they’re getting is: “I told you this would happen.”

3. “Myths” are actually the reason accessibility isn’t happening.

One of the most alluring parts of mythologizing accessibility problems is that we can take our anecdotal evidence and construct an entire worldview around it. It’s a great way to vent our frustrations when we aren’t enabled to change the outcome of a given site’s design, for example.

But the way I see it, maybe the myth in most myths articles is that people are really legitimately thinking any of these things. Might some people still believe that a text-only site is an acceptable way to check off that accessibility box on their launch checklist? Maybe. Was it a problem in 2003? You bet. Now? Less so. And with much better arguments in its favor, like an approach that also integrates scenarios for mobile users.

One more thing here: the text-only thing isn’t a myth. It’s just what many people were taught, starting in 1998. (Remember that text-only sites are enshrined in paragraph 1194.22(k) of Section 508, which itself is a product of an assembly of accessibility experts, and which remains the primary reference for workers in all levels of government in the US.)

The goal of advocacy is to frame the debate regarding what you stand for, and why. When you begin the debate by arguing against what you’re not about, you are setting yourself up for failure. (Side note: remember this in 2012, Democrats.) It would be much better to say to people that times have changed, that we have better ways to do things, and that an integrated site is more equitable and less work-intensive overall than if we tell our readers they’re fooling themselves simply because they listened to the experts a dozen years ago.

4. “Myth” articles contain useful, actionable information.

Anybody who writes stuff like this knows that it’s not going to be any good until you come up with five red herrings to rail against. (Three, if you’re really an SEO person in disguise.) Where a lot of these articles go awry is in failing to provide information that people can integrate in their websites today. Go read a few dozen myths posts. Can you find more than a handful of real accessibility techniques there? Are there pointers to other resources so that people can learn more proactively? (Note to Roger Johansson: you, sir, are officially off the hook.)

When I put myself in the shoes of a web developer who’s investing five minutes into learning about accessibility, I tend to think I’d rather hear “you can use red and green if you’re also using other means to differentiate those elements” than “Myth: Red and green cannot be used.” This is a learning opportunity, not Opposite Day.

And while I’m piling on, if you can be confident about what people should and shouldn’t be doing for all content on the web, it should be easy enough to come up with some examples of what you’re talking about. Right?

5. “Myth” articles reach the audience they’re intended for.

Wanna know how I find out about new accessibility myths articles? My friends in the accessibility community retweet them. Incessantly. They’re inescapable. (At least, they used to be. Thanks, TweetDeck filters!) They touch a nerve with us, and we share them far and wide, like designers share that McSweeney’s Comic Sans piece. (Guilty.) We’re sharing the causes of our suffering. And there’s a place for that. But what is being practiced isn’t advocacy, it’s self-soothing.

Myths are “inside baseball” articles–the kind of thing you’d commiserate about over beers at your favorite accessibility conference. Are these kinds of posts really meant to recruit new accessibility advocates, or are they really only going to resonate with those of us who are already in the thick of things?

Think hard about who it is that you want to motivate with this kind of writing. If you put something like this together and find when you’re proofreading that it sounds like you’re venting about your day, it’s entirely likely that you’re preaching to the converted. And there are a lot of unconverted out there to choose from. Tailor your message to the needs of that audience, and you may have a better chance to make your case.

Dear future Android tablet users…

Filed in: General, Sun, May 16 2010 19:54 PT

So, I own an iPad. (And here I am working for Adobe. You may point and laugh… now.) I’ve had it out in public—including in Europe, where I might as well have worn an “ask me about the iPad” t-shirt. I’ve got the pattern down now: someone does a double-take, and I think, “Oh, shit. Here we go again. ‘Yes, I couldn’t resist. I bought it because (reasons), and I’m (mood) with it.’”

I enjoy using my iPad (nicknamed “killer”, by the way), mostly. iBooks and the Kindle app have been perfectly stable, which is good, because the moment my ebook reader crashes, it’s not a serviceable ebook reader. It’s the kind of thing you need to have nailed down.

I’ve got about four pages of downloaded applications, only one page of which consists of go-to apps: Twitter client, media streamer, remote keyboard and mouse. The rest are kind of a blur and a distraction. Games, magazines, utilities. I could do without them. That one page covers 95% of what I want to do.

What I can’t do is listen to NPR while I browse my email, and while I’ve been promised that will change this fall, I’m not a patient man. Nor am I the kind of fanboi who’s going to say the promise of multitasking tomorrow is as good as multitasking today.

There are about four things I expect out of a tablet: I want a good-looking screen, HD-quality video, the ability to run the occasional app in the background, and I want it to run all day on a single charge. The iPad meets two and a half of those requirements, if we take off a half-point for only playing 720p video.

Here’s the thing, though. Of my four main requirements, precisely zero of them are unique to Apple. Between now and Christmas, there are going to be dozens of Android-based tablets flooding the market. They’ll all be at or below the price point of the iPad. And you’ll be able to pick the winners fairly easily: they’re the ones that meet all four of my criteria.

So let’s say for the sake of argument that things shake out this way, and by the time you’re in an L-tryptophan coma, your Black Friday ads are loaded with non-iPad options. And let’s say the following morning you sneak into a Best Buy at 5am, scratch and claw your way to one of these devices, and start browsing the Android Market.

You are now invested in the success of Android.

When someone does a double-take in a coffee shop, you’re going to think, “Oh, shit. Here we go again. ‘Yes, I looked at the iPad, but I went with this because (reasons), and I’m (mood) with my purchase.’” The reasons are largely fixed in the hardware, so once you’ve rolled your Gooblet off the lot, you already have your answer to that. How you feel about your purchase, however, is going to depend largely on what you can do with it, and that has a lot to do with what software is available to you, and whether or how well it works.

There’s a tendency to be more forgiving of open-source platforms. Your Free Software Foundation adherents will insist that it’s better to have products that are clearly inferior because free-as-in-speech is better. But try telling that to your mom when she’s trying to make head or tail of your tablet while she maps out the nearest Apple Store.

For Android tablets to succeed, users of the platform need to fight the instinct to apologize for its shortcomings and those of its software. You need to be vocal, even brutal, about the problems you find. If an open-source product doesn’t cut it, call it out. If a payware application crashes left and right, let ‘em have it.

The tablet market is not the same as the Linux hobbyist market. The vast majority won’t be compiling their own apps, much less rewriting them. They shouldn’t be expected to. As a result, developers of tablet apps need to be very responsive to their users. If you want the Android platform to take off on tablets, the applications can’t be just great for Android apps. They need to be great apps, period. No editing text files, no byzantine dependencies, no open-source shovelware.

You, the user, should use all available avenues to establish the bar for your expected user experience, and hold developers to it. On the iPhone/iPad platform, ostensibly, the App Store serves as a quality filter. (In reality, there are mountains of shitty little applications, but there, the App Store at least does the service of letting us one-star them.) In the Android Market, users need to be just as brutal, demanding and vocal as they are in the App Store. Android tablets are not going to succeed by being almost as good as iPads, but, like, less evil. The best way to make them successful is to be very clear about where the system, the hardware and the applications fall short. The best way to make them fail is to pretend they’re good enough, when they’re not.

Realistic sentimentality

Filed in: General, Fri, Jun 12 2009 18:18 PT

“Whenever people say, ‘We mustn’t be sentimental,’ you can take it they are about to do something cruel. And if they add, ‘We must be realistic,’ they mean they are going to make money out of it.”

Brigid Brophy, author

Twitter, ye markup be non-standarrrrrd.

Filed in: Web, design, vent, Fri, Sep 19 2008 11:10 PT

Twitter unveiled a new redesign today. Very little of it is really noticeably different, until you look at the underlying code. Which, last I recall, used to be pretty good–they even used the fancy-pants XHTML 1.0 Strict doctype (though still using tables for layout, which the spec advises against).

But one thing about the latest version makes me wonder just what the hell they’re doing these days.

<center>.

The <center> element. In September 2008. From the “it” Web 2.0 company. Seriously.

I know this will make me sound like the annoying standardista, but how could anyone who still uses <center> still be doing web design professionally in, of all places, San Francisco? This is an element which has been deprecated for eleven years. Do we really have people who haven’t changed their coding practices since before 1997?

Sadly, yes. And the worse news is, they’re writing books. I just saw a book whose first edition was published in July 2008, which teaches users to use <center>, to do layout tables, to use CSS primarily just for font selections, and loads of other outdated guidance. This is material from the bad old days of web design, and it simply gets regurgitated over and over again. To quote the late, great George Carlin, “it’s all bullshit, and it’s bad for ya.”
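And it’s not as if the fix is hard. The standards-based replacements for <center> have been around since CSS1 in 1996; a minimal sketch (the width value here is just for illustration):

```html
<!-- Deprecated since HTML 4.0 (1997): -->
<center>Hello, world</center>

<!-- The CSS equivalent for centering inline content: -->
<p style="text-align: center;">Hello, world</p>

<!-- And for centering a fixed-width block element: -->
<div style="width: 600px; margin: 0 auto;">Hello, world</div>
```

In real code, those styles belong in a stylesheet rather than style attributes, which is rather the point.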

I don’t know what it is going to take to finally cull the proverbial herd of these kinds of authors and designers. But each time I see this, it makes me wonder when we can expect some kind of professionalism out of the average content producer. Many of us have been talking about this stuff for years. It’s de rigueur at many web conferences, to the point that people now roll their eyes at it. And yet, it continues. I also don’t know whether Twitter is doing this in-house or if they hired an external designer. But certainly, somebody there dropped the ball.

And I know that one <center> is not a big thing. It’s just a symptom of a larger disease: that of lazy, ignorant and/or incurious designers. When someone sticks to one way to do something without ever updating their own skill set, their designs get more and more inflexible. Which makes redesigns more and more difficult, and more expensive, all with less to show for it. Which brings us to the boxy gridlock we experienced in the 90s. Which is why standardistas get so angry about this stuff. We know that customers deserve better than this. We know that when customers find out how their designer painted them into a corner, it casts a shadow over all of us in those customers’ eyes.

The question that remains from all this is, how can the professionals in this field separate themselves from the amateurs? Really. I want suggestions. What concrete steps can we take to ensure that the good designers and developers, the ones who are always learning, who have a full and balanced skill set, don’t get lumped in (or worse, beaten out by) the ones who are locked in 1995? Who’s got an idea?

Introducing “Universal Design for Web Applications”

Filed in: Web, Web 2.0 Expo, accessibility, book, design, universal design, Mon, Sep 15 2008 14:46 PT

It’s funny how sometimes things get wrapped up in a little bow.

Last April, I was in San Francisco, giving my “Accessibility 2.0″ talk at the first O’Reilly Web 2.0 Expo. Out of that conference came the seed for the project that I’ve been working on, and now, I’m happy to unveil it. This Wednesday, I’m flying off to speak at Web 2.0 Expo New York, to give a talk called “Universal Design for Web Applications” with my longtime colleague Wendy Chisholm.

What’s gone on in the intervening 17 months has been our work on a book of the same name.

Universal Design for Web Applications just reached final manuscript status last Thursday. It’s scheduled to be published by O’Reilly in November.

We’re really excited about how the book turned out. We chose universal design as our standard to bear because we’re moving beyond accessibility, and applying the principles we’ve learned from accessible design to a whole new world of mobile devices like the iPhone, and lifestyle devices like the Asus Eee PC. The point here is that the days of knowing what your users’ screens look like are over. Even if accessibility weren’t a consideration, universal design is going to inform most of the big decisions web content producers are going to face in the near future. We in accessibility have been where those decision-makers will be, and we have a lot of advice to impart.

We have a lot of information on new topics like the Web Content Accessibility Guidelines 2.0 and the WAI-ARIA specification. We talk about video and script like they’re first-class citizens. And we do the same for Flash, Flex and Silverlight. The fact is that all of these technologies are going to be with us for a long time, and the faster we embrace them, and learn how to make them work for people, the better we will all be for it.

You can preorder UD4WA on Amazon. And come see us Wednesday at 9am in 1A21 & 22 at the Javits Center.

Here’s a shot of the cover:

Universal Design for Web Applications book cover, featuring a woodcut of an Italian greyhound

Olympics on the web, live, worldwide: when?

Filed in: culture, media, sports, Wed, Aug 6 2008 13:48 PT

On the eve of the first Olympics in which live and on-demand content will be available on the web in most countries, I have to wonder how long it will be until the IOC recognizes that they should no longer bother to embargo content to match the prime-time schedules of viewers around the world.

This time, broadcast licensees in many countries will be running their own Olympics video sites (and 77 more will have a YouTube channel, restricted to their countries by geolocation). This builds on the 2004 coverage, which was spectacular in the UK, thanks to the BBC, but generally pretty poor everywhere else. It served as a good proof of concept, at least. I do think, though, that the feedback this time around will be that users are confused or frustrated when content isn't where they expect it to be, since the networks will hold on to it until it has been broadcast.

Hardcore Olympics fans don't care when it's prime time. And they get impatient when they know the event is finished, but still, they don't see the results. On top of that, we have time-shifting technology, which levels the playing field for everyone. So when will the IOC finally realize what's good for them, and require broadcasters to show events online, in real time?

My guess is no later than 2016. Beijing is the largest experiment yet in web video, and they'll have enough time to apply its lessons before the 2010 Winter Games in Vancouver/Whistler. The Winter Games are much smaller, in terms of events, participants and viewers, so this could be a great dry run. London hosts in 2012, and their infrastructure is probably much better suited to a widescale video deployment. My only question is whether there are too many signed agreements already, which would preclude a full, real-time Games.

After London, it’s hazy. The Winter Games in 2014 are in Sochi, Russia, and even six years out, I don’t have high hopes for them to take the lead in Internet distribution. That leaves the 2016 Games, which are down to Madrid, Chicago, Tokyo and Rio de Janeiro. All but Rio could pull it off easily, and maybe with 8 years of preparation, Rio would be ready too.

Any longer than that, and I think people the world over will start to wonder when the Olympics, an event created to sponsor international unity, will live up to its billing and put the athletes in the spotlight, even when that spotlight falls at 3am Eastern, or Central European, or Japan Standard Time.

eBay, please don’t speak for me

Filed in: Web, design, vent, Mon, Jul 28 2008 09:06 PT

I just had a horrible experience with eBay, which I think is summed up in this message I just sent to them.

I have a serious complaint with the way eBay sends automated messages. I was forwarded a message sent from eBay to the seller, which reads:

“I would like to have the item shipped to the address below:”
…followed by the shipping address in my eBay account.

However, I had never asked eBay to give that address as the shipping address, and in fact, the payment info I sent on PayPal had the correct address. As a result, the seller shipped the package, then went out of her way to return to the post office to change the shipping address from the _right_ one to the _wrong_ one, all because eBay says I told her to. And now the package will be delayed by a full week, and will arrive at my home, where no one will be present to pick it up.

I hold eBay and its email notification system responsible for this.

eBay should never, under any circumstances, misrepresent its users by making statements beginning with the word “I”. You do not speak authoritatively on behalf of your users. I’m a designer and I understand the urge to sound human, but in this case, and I’m sure many others, you are doing more harm than good. I am extremely displeased that my package will be late, but even more upset that eBay sees fit to substitute its words for mine.

The same goes for any other e-commerce outfit out there. I know it sounds all down-home folksy to say things like “I would like to have the item shipped…” instead of “The buyer’s address is…”, but you don’t know what I want. You only know what your user tells you. And you should never communicate more than that.
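To put the same point in template terms: a notification system should report only the facts it actually holds, in its own voice. Here's a minimal sketch in Python (the function and field names are hypothetical, not eBay's actual system) contrasting the first-person wording with a neutral alternative:

```python
# Hypothetical notification template. The principle: an automated message
# should state what the system knows ("the address on file is..."), never
# put first-person words in the user's mouth ("I would like...").

def shipping_notice(buyer_name: str, address_on_file: str) -> str:
    # Bad (speaks as the buyer):
    #   "I would like to have the item shipped to the address below:"
    # Better (speaks as the system, and only asserts what it knows):
    return (
        f"The shipping address on file for {buyer_name}'s account is:\n"
        f"{address_on_file}\n"
        "If the address provided with payment differs, please use that one."
    )

print(shipping_notice("Matt", "123 Main St, Seattle, WA"))
```

The message never claims to know the buyer's intent, and it explicitly defers to the payment address, which is the information the seller actually needs.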
