Looks like Kristopher Tate took a short vacation to recharge his batteries. Upon return a couple of days ago he gave an update on the status of Zooomr Mark III, where the blame for the delay was laid squarely at an unnamed 3rd party storage provider.
They had some in-house storage infrastructure previously but wanted to outsource that function to someone who purported to have the ability to massively scale for them. Apparently it didn’t work out. Now, I wonder if that was Amazon S3, since Thomas Hawk did mention that Zooomr used Amazon EC2 and S3 . . .
I am willing to assume that S3 was too slow for them, but based on Thomas Hawk’s posting I also wonder if they are simply using this as a smoke-cover excuse to distance themselves from Amazon because of the blog-frenzy kerfuffle over Amazon’s efforts to protect its Alexa proprietary data and possibly its trademarks etc.
Whatever the reason behind the delay of Zooomr Mark III may be, I hope that they eventually arrive at their destination intact. I am intrigued by some of the new features slated for that release, especially their taking on the micro stock photography space.
Edit: April 15
There is an article on The Register today about Amazon’s S3 service and its lack of SLAs:
A barrier to adoption of Amazon’s web services is the absence of any SLA (Service Level Agreement), making some businesses reluctant to entrust data or critical services to Amazon. “They are absolutely correct,” says Vogels, with disarming frankness. “You have to understand that this is a nascent business. So we have to figure out on our side how to give these guarantees. It doesn’t make sense to guarantee things, and then not be able to meet those guarantees. It is better to explain to people that there are no guarantees at the moment, except high level statements that it is fast and reliable, instead of lying to them.”
That said, has anyone lost data? Have there been outages? “We’ve lost nobody’s data. We’ve had a few performance blips that didn’t affect everyone,” Vogels tells me. “We try to avoid that with Amazon.com also, where any outage has significant financial impact. We try to deploy the same techniques around S3 and EC2.”
Emphasis on the first sentence was added by me.
So what they are saying is they aren’t ready for commercial prime time. OK, fine. It’s still ok to experiment with your latest and greatest Web 2.0 startup that has a ‘beta’ (or alpha) moniker attached to its logo, though. Who knows, you might get lucky and be able to gain a competitive advantage before the bigger, slower, more established companies in your space can.
Too bad it didn’t work out on the first try for Zooomr, but they had to try it. They’d have been fools not to see this as a potentially huge advantage for them. I’ll bet they will revisit the Amazon S3 service every few months to see if it has improved or changed enough to make sense for another stab at using it.
Edit: April 23, 2007
Not everything’s perfect. For instance, the speed of delivery for data stored on S3 can be slow, because S3 lacks edge caching features standard to true content-delivery networks.
To get around that, SmugMug uses a tiered structure in which 90% of its data is stored on S3, and the most popularly accessed 10% remains with SmugMug. That way, S3 mostly serves as a type of archive or backup site, with almost all requests served up faster by SmugMug’s own servers.
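The tiered scheme described above can be sketched roughly like this. This is purely illustrative and not SmugMug’s actual code — the class name and the in-memory dictionaries standing in for S3 and the local servers are my own inventions:

```python
# Rough sketch of a tiered store: everything is archived to "S3",
# while the popular slice is also kept on fast local servers.

class TieredStore:
    def __init__(self, hot_keys):
        self.hot_keys = set(hot_keys)  # the popularly accessed ~10%
        self.local = {}                # stand-in for SmugMug's own servers
        self.archive = {}              # stand-in for S3 (holds everything)

    def put(self, key, data):
        self.archive[key] = data       # 100% of data goes to the archive
        if key in self.hot_keys:
            self.local[key] = data     # hot data is also stored locally

    def get(self, key):
        if key in self.local:          # fast path: served from local tier
            return ("local", self.local[key])
        return ("s3", self.archive[key])  # slow path: pulled from archive

store = TieredStore(hot_keys=["popular.jpg"])
store.put("popular.jpg", b"hot photo bytes")
store.put("obscure.jpg", b"rarely viewed bytes")
print(store.get("popular.jpg")[0])  # served from the local tier
print(store.get("obscure.jpg")[0])  # falls back to the archive
```

The point of the design is that the archive tier only has to absorb the long tail of rare requests, so its slower delivery barely matters in practice.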