We have written here extensively about the rise of web services and the transformation of the web into a platform. In our post When Web Sites Become Web Services we argued that more and more web sites will open up their information via an interface. In our post about Yahoo! Pipes we wrote about viewing the web as a massive, and in essence relational, database. And finally, in the post about the Future of RSS we looked at the past, present and future uses of the Really Simple Syndication protocol.
Today we will look at an example of putting all of these elements together. Wine.com has launched an innovative way to expose its catalog – via RSS with an API on top.
When you think of widgets you typically think of Web 2.0 companies. Flickr, Digg and del.icio.us were among the first services with widgets, and many more followed. Indeed, nowadays it could be seen as unusual for a startup not to have a widget strategy.
But the older and larger companies are still trying to wrap their heads around widgets. Just recently, we profiled NBC signing up to use Clearspring. This move clearly signals that NBC is serious about its widget strategy.
In this post we look at an unlikely widget company, Random House Publishing. A quick look at its web service and widget is enough to realize that the company “gets it”. But a deeper look reveals that Random House has not only caught the widget bug, it also has a broad and solid strategy around widgets. The publishing giant is using widgets to build its presence and brand awareness everywhere online.
We have recently written here about the ongoing transformation of Web Sites into Web Services. In that post we noted that with the rise of APIs, scraping technologies and RSS, web sites are really turning into data services and collectively the web is becoming one gargantuan database. As such, the web is quickly becoming a platform or foundation that powers new kinds of applications that remix information in ways not possible before. The web is also becoming much more connected – not just on the level of links, but at a much more fundamental, semantic level.
The big picture is always exciting and important, but the mechanics matter too. How exactly do we unlock and correlate information from separate web sites? Ideally, we’d like all web sites to offer simple and elegant APIs – like Amazon, del.icio.us and Flickr do today. Alas, this is not feasible today, and it isn’t clear that something like this can be done quickly at all on a Web scale. So in the meantime, solutions like Dapper – which help you process unstructured information from HTML, clean it, transform it and re-emit it as structured XML – are worth serious consideration. In this post we take a close look at all aspects of Dapper: how it works, what can be done with it, the company’s business model and the legal implications of the service.
Today’s Web has terabytes of information available to humans, but hidden from computers. It is a paradox that information is stuck inside HTML pages, formatted in esoteric ways that are difficult for machines to process. The so-called Web 3.0, which is likely to be a precursor of the real semantic web, is going to change this. What we mean by ‘Web 3.0’ is that major web sites are going to be transformed into web services – and will effectively expose their information to the world.
The transformation will happen in one of two ways. Some web sites will follow the example of Amazon, del.icio.us and Flickr and will offer their information via a REST API. Others will try to keep their information proprietary, but it will be opened via mashups created using services like Dapper, Teqlo and Yahoo! Pipes. The net effect will be that unstructured information will give way to structured information – paving the road to more intelligent computing. In this post we will look at how this important transformation is taking place already and how it is likely to evolve.
FeedBlendr is a web service that lets you remix your feeds. It has just launched the public beta of its second version. At first glance FeedBlendr does not appear to have a lot of bells and whistles, but it is an interesting and intelligent service that lets you easily remix many kinds of RSS feeds. And after closer examination, we can see that developer Beau Lebens has put a lot of love into the site – and actually there are bells and whistles after all. Let’s take a look…
A few weeks ago we were briefed by the co-founder of vFlyer, Oliver Muoto, about the changes and new features in their upcoming release. Oliver gave us a great overview of the service (which we will talk about a bit later) but he also shared with us an interesting map of the entire classifieds landscape. The map shows a lot of activity in the space and lots of players in different niches. And this is not surprising, since classifieds is a volume business. If you can attract and retain customers, then you will make money because the margins are there. So in this post we look at what is happening in the different corners of the classifieds landscape – and try to figure out who is doing well and where this market segment is heading overall.
We begin with an overview of the general marketplace – i.e. those companies that provide end-to-end service to sellers and buyers. All of these web sites allow you to both list and search for items in various categories. However the mechanisms and approaches of each are quite different.
Craigslist (Alexa 33) is the granddaddy of simplicity in Web 2.0. Craigslist allows its users to list geographical classifieds in categories such as personals, housing and jobs. Remarkably, the site only charges for listings in the San Francisco, Los Angeles and New York areas – which, according to the traffic stats on Alexa, amount to approximately 33% of listings.
Known for its extreme simplicity, code of honor, self-governance and great customer service, Craigslist has attracted the power users – at least in the metropolitan areas. However, the company has not made a major push to grab all it can. According to a recent article in the SF Chronicle, Craigslist is not focused on maximizing revenues ($25M in 2005), despite the fact that it attracts almost 10M visitors from the US monthly. Bottom line: Craigslist is leaving some money on the table. This is not, however, easy money to take – but others are certainly trying.
Next we look at Google Base – the web giant’s foray into the world of classifieds. When it first came out a year ago, there was speculation that it would force eBay and Craigslist out of business! But even this six-month-old article by Hitwise proves that rumor to be bogus. Google Base has not taken off, at least not yet. Why not? There are perhaps several factors. First, it may simply not be obvious to general users how to get to the site. And once on the site, it may not be obvious what to do.
There are definitely unique and positive things about Google Base. The bulk upload feature is great for power users. Perhaps Google’s smartest play with Base to date was the recent release of an API. Any application now has an easy way to publish classifieds to Base. Since the listings are instantly available via RSS and are searchable on Google, this is a very compelling proposition.
And what about the economics? Since Google makes money via context sensitive ads, it does not need to charge listing or transaction fees. Can this ultimately sway people away from Craigslist? Perhaps, but it seems that for this to happen Google needs to do more marketing and more UI work.
edgeio (Alexa 17,500) calls itself a distributed marketplace for classifieds. Publishing is always free of charge and can be done in a number of ways. Here are a few very handy ones: a blogger can create an instant listing in a post by using a special tag, people can submit listings by pointing edgeio to existing web sites or RSS feeds, and large enterprises can request a data pull via an API. But edgeio is not only creative when it comes to consuming data – it does a great job on distribution as well. The edgeio content is syndicated on blogs and websites in a variety of ways, from straight RSS to embedded widgets.
How does edgeio make money? There are several revenue streams. For example, if you do a search on its web site you will get context-sensitive ads. Advertisers can also bid on the placement of an ad based on the number of clicks, or pay a fixed, one-time fee for the same placement.
Overall, edgeio is an interesting, evolving service. The key question is whether it can attract enough users. The traffic chart from Alexa shows there is no definitive growth, so we will have to wait and see on this one.
From an infrastructure point of view, there is no reason to have a vertical marketplace. If you can list houses, you can just as easily list jobs and cars. But there is value from the user interface, semantic and specialization points of view.
Vertical marketplace sites can also offer domain expertise and value add. For example, Rent.com (Alexa 2,800) allows its users to find roommates – but also offers value add services like estimating and booking the moving. And Auto Trader (Alexa 800) offers an array of services, ranging from assessing the value of the car, to listing it, to getting a car loan.
In a sense these sites are more portals than just marketplaces or search engines, since they strive to deliver an end-to-end experience. Despite that, Alexa traffic patterns show an overall decline in traffic for these sites over the past half year.
Vertical Search Engines
We have written before here about the rise of vertical search engines. The classifieds market is no exception, perhaps even a vivid illustration. The chart at the top of this article lists a range of vertical search engines ranging from generic classifieds to a narrow specific vertical.
In the jobs market, Indeed.com (Alexa 1,900) and Simply Hired (Alexa 6,300) offer a very similar feature set. Both have two initial search boxes for position and location, then allow the user to refine the search results based on various criteria.
The contenders that we looked at in the housing market were PropSmart (Alexa 48,000) and Trulia (Alexa 4,500). Despite a big difference in traffic, both of these sites offer a similar interface. The initial search is done by geographic location, then the search results are displayed via a list and a map. Both sites offer nice ways of customizing the search results to help users find what they are looking for. Trulia has one feature, called “city guides”, that is nothing short of amazing: by typing in a city and state, you get a report card that ranks real estate activity and gives a ton of useful information.
Finally, we looked at two generic classifieds search engines – Oodle.com (Alexa 5,200) and Vast.com (Alexa 38,000). Again both are very similar; however, we were not overly impressed with the results. Vast did straight text search, which defeats the purpose of a vertical search engine (hint: it needs to understand the semantics). Oodle did better, but its interface was not as clean as Indeed.com’s. Overall, both sites seemed a bit overwhelming because of the number of different tabs. Breaking things down into refined verticals, adding more semantics and making results more relevant would make these engines more appealing.
vFlyer – make it once, post everywhere
To come full circle, now we review vFlyer (Alexa 41,000). vFlyer claims to do the heavy lifting in classifieds. Publishing a classified to the vFlyer service instantly makes it available on many major classified sites and search engines. It is not an easy job, but in our tests we found vFlyer to be comprehensive and easy to use.
The Classifieds market is complex and rich. There is plenty of money to be made and, as the above players show, plenty of room for innovation.
So what really stands out? Craigslist has certainly been doing a great job. In a way, you can argue that their approach is perfect – just let the users fill in the ad. But things start to get interesting with the rise of the vertical search engines, which are able to “parse” and “understand” the semantics of the ads fairly well.
But should users input classifieds using structured forms? We believe so, since people get impatient when search engines don’t deliver what they are looking for. So we think the semantics work being put in by classifieds services is well worth it – since it leads to more precise results and therefore faster transactions.
We have written before about the innovative Amazon Web Services Platform. This stack was officially announced by Amazon CEO Jeff Bezos during the recent Web 2.0 summit and is now considered part of the core business strategy for Amazon. While analysts, competitors and Wall Street are pondering what to make of this move from a business sense, in this post we look at who is utilizing Amazon Web Services – and how. This post is based on personal communication with those people, along with the set of success stories available on the Amazon Web Services site.
The fact is that many small, medium and even large businesses (even Microsoft) rushed to put Amazon Web Services to use. Why did they do it? Because Amazon offers a decade of experience in running one of the largest internet enterprises – and has wrapped this expertise into a set of pre-packaged services and APIs.
To remind you, here again is the Amazon Web Services Stack:
The Amazon Web Services stack is impressive in its scale and also well thought through. Amazon is methodical about this strategy and is aiming to create an offering which can truly be called the Operating System for the new Web. Many companies have already recognized the power and ROI of the Amazon platform and are literally betting their business on it.
Webmail.us – email hosting provider
Webmail.us is probably the most compelling success story for Amazon Web Services because of its huge ROI. It is an established business with over 27,000 customers. It had a real and simple business need – improve the cost and reliability of its backup system. After considering many alternative solutions, the company decided to utilize Amazon S3, the Simple Queue Service and the Elastic Compute Cloud to address all of its needs.
The company claims to have improved its backup process and cut costs by 75%. The Webmail.us success story on Amazon contains a paragraph that nicely summarizes the technical and business gains:
“Amazon’s Web Scale Computing model shifts the focus from do-it-yourself to let-the-experts-do-it. It allows businesses to scale up or down based on requirements and demand, and provides pay-as-you-go billing models. This combination allows businesses to turn fixed costs into variable costs, while knowing that their data or services will always be available.”
SmugMug – online photo provider
SmugMug is another interesting success story. It is a straightforward one, because it uses Amazon S3 exactly how it was intended to be used – for storing large media. Today SmugMug hosts on Amazon over half a billion photos. And here is the real “wow factor” in this story: one week after writing the first line of code, SmugMug was storing all of its new images in Amazon S3.
To pepper this with more numbers: SmugMug is now backing up all of its new images to S3, which amounts to 10 terabytes of data monthly. The SmugMug site has not gone down since adopting Amazon S3, and the company estimates it will save half a million dollars on disk storage annually. As the company points out, S3 makes it possible for SmugMug to compete head to head with bigger companies that have deep pockets, without having to raise massive amounts of cash for hardware. This is game-changing.
Altexa, ElephantDrive and JungleDisk – backup providers
As soon as S3 came out, many companies recognized an opportunity to deliver business and personal backup solutions. The model is simple – charge a small premium on top of the Amazon S3 storage costs.
With that approach it is essentially a user acquisition battle, where the implementation and marketing become paramount. Altexa targets small businesses, while ElephantDrive and JungleDisk target consumers – but all of them share the benefit and ease of use of S3. In their success stories, the companies emphasized incredibly quick (literally a few days) adoption, cost savings and reliability.
Scanbuy – mobile shopping solution provider
The success stories that we have covered so far mostly involve Amazon’s S3 storage service. Scanbuy, however, is utilizing the Amazon eCommerce API to bring unique comparison shopping solutions to mobile phones. Its claim to fame is allowing users to look up prices by simply scanning the barcodes of items in a store. This is a clever approach that is made possible by a combination of technologies.
One of the key technologies here is the Amazon eCommerce API, which offers unlimited and complete access to most Amazon items. Scanbuy uses the API to fetch the latest pricing information, letting users decide whether they are really getting a good deal in the store. And as the company explains, it simply could not have done what it is doing now without the Amazon eCommerce API.
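The post doesn’t show Scanbuy’s actual integration, but a barcode-to-price lookup of this kind can be sketched roughly as follows. The parameter names follow the eCommerce Service’s REST conventions of the time (ItemLookup by UPC); the access key is a placeholder and no request is actually sent here, so treat this as an illustrative sketch rather than Scanbuy’s implementation.

```python
# Sketch: looking up an item by a scanned UPC barcode via the Amazon
# eCommerce Service REST interface. ACCESS_KEY is a placeholder; the
# request is only constructed, not sent.
from urllib.parse import urlencode

ECS_ENDPOINT = "http://webservices.amazon.com/onca/xml"
ACCESS_KEY = "YOUR-ACCESS-KEY"  # placeholder


def build_item_lookup_url(upc: str, search_index: str = "All") -> str:
    """Build an ItemLookup request URL for a scanned UPC barcode."""
    params = {
        "Service": "AWSECommerceService",
        "AWSAccessKeyId": ACCESS_KEY,
        "Operation": "ItemLookup",
        "IdType": "UPC",
        "ItemId": upc,
        "SearchIndex": search_index,
        "ResponseGroup": "Medium",  # response group that includes pricing
    }
    return ECS_ENDPOINT + "?" + urlencode(params)


url = build_item_lookup_url("012345678905")
# A mobile client would now fetch this URL and parse the XML response
# for the current Amazon price, to compare against the in-store price.
print(url)
```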
So why are analysts unsure what to make of Amazon Web Services? A BusinessWeek article entitled Jeff Bezos’ Risky Bet frames the main concern: will businesses use this? Well, in this post we’ve shown that for some businesses the AWS stack provides a set of very compelling value propositions – both technical and business. And with real business success stories showing ROI and cost savings in the 50-75% range, it is basically a no-brainer.
We think that the real question is: does this work for Amazon? Is it ready to be a software and Web Services company? Will Amazon be able to scale this business indefinitely… and most importantly: are the margins high enough for it to be worthwhile? We have to believe that Jeff Bezos and the Amazon team did the math and that the answer is absolutely yes.
There is a very long, but interesting, cover story in today’s BusinessWeek entitled Jeff Bezos’ Risky Bet. The article focuses on the transformation of the e-commerce giant into a software company. The growing stack of Amazon Web Services clearly points to a sea change at the Seattle company. Indeed, Amazon is beginning to look more like an alternative Microsoft for the web computing era!
In short, Jeff Bezos’ big bet is a bet on the software infrastructure of the Web. We here at Read/WriteWeb think this is a visionary strategy by Amazon – and it is likely to pay off…
Amazon completes its Web Services stack
In August, I wrote a series of articles about Amazon’s Web Services strategy for a Web 2.0 magazine. The article that summed up what Amazon is up to was called: Amazon – the Real Web Services Company. Based on the piece in BusinessWeek, it is clear that during the Web 2.0 conference next week, Amazon’s Web Services strategy will become official. As a software engineer, I can’t hide my joy. This is indeed a triumph of software engineering – a large company has managed to productize the pieces of its own infrastructure.
Not only that, but Amazon is very serious about making money on this endeavor. The web giant is carefully and methodically rolling out the building blocks of its next generation Web Platform. It started with the Amazon eCommerce API and Alexa services. But not until the Simple Storage Service rolled out did it become clear that Amazon is building a full web services stack. Here is our diagram showing what it looks like:
Web as a Platform
Amazon’s Web Services stack is evidence of a new computing paradigm, where web services in aggregate give rise to a new web-based operating system. Like a classical operating system, this new one has the key ingredients – infinitely scalable storage, dynamic indexing service, adaptive grid, etc. These pieces, put together, provide a compelling new way to think about application development. Amazon is actively working to both define and implement the ingredients of this new Web Platform.
Why this makes sense
Building large-scale web software is a big challenge. Amazon solves this problem by offering the infrastructure that has powered one of the biggest online stores for the past decade. Amazon hides complexity behind simple, minimalist APIs and offers their services for a very reasonable cost. The Amazon team takes the concepts of search, storage, lookup and management of data – and turns them into pay-per-fetch and pay-per-space web services.
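To make “pay-per-fetch and pay-per-space” concrete, here is a back-of-the-envelope cost estimator. The per-GB rates below are illustrative placeholders (roughly in the range of S3’s early published pricing), not authoritative figures; the point is that the cost is entirely variable, with no fixed fees.

```python
# Back-of-the-envelope estimator for a pay-per-space, pay-per-fetch
# service. The rates are illustrative placeholders, not Amazon's
# actual published prices.
STORAGE_PER_GB_MONTH = 0.15   # dollars per GB stored per month
TRANSFER_PER_GB = 0.20        # dollars per GB transferred


def monthly_cost(stored_gb: float, transferred_gb: float) -> float:
    """Variable monthly cost: no servers to buy, no fixed fees."""
    return stored_gb * STORAGE_PER_GB_MONTH + transferred_gb * TRANSFER_PER_GB


# A small photo site storing 500 GB and serving 200 GB a month:
print(round(monthly_cost(500, 200), 2))  # 115.0
```

The appeal for a startup is that this number scales down to nearly zero when usage is low, instead of requiring hardware purchased up front.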
To begin with, it’ll be small and medium businesses that take up Amazon’s services. As BusinessWeek points out, Wall Street is not going to jump on this. But the SmugMug photo service did, and other startups and small businesses will follow suit. So even if large corporations do not come, there is plenty of money to be made. The Long Tail, anyone?
What can we expect?
In the near term, we will probably see more services from Amazon that focus on completing the Web Services stack. For example, S3 does not have querying capabilities – which is a fairly big limitation. The Elastic Compute Cloud is very powerful, but at the same time complex – so we can expect additional offerings that simplify deployment and management of the grid.
We are also likely to see other players entering the WebOS market. Google has already made moves with its Google Base API and is rumored to be working on the GDrive. Microsoft also has Live Drive in the works. Both Google and Microsoft are no doubt working on other web services initiatives. Also watch out for smaller but more innovative players, like 3Tera – which we profiled in September.
Regardless of the provider, WebOS services are going to be utilized by thousands of companies – and will power the next generation of web applications. Amazon is at this point leading the charge of the big Internet companies to capture this potentially huge market.
In upcoming posts, we will highlight the use cases for Amazon and other web services. In the meantime, let us know if you’re currently using Amazon Web Services – and what you think of the experience so far.
With this article I hope to reach out to the companies and thought leaders working in the attention space and start the dialog on the infrastructure for the Attention Economy. Note that the views presented in this article are my own and do not represent the views of the AttentionTrust organization.
I have previously discussed here the exciting developments happening in the attention space. The Attention Economy and Attention Architecture are in their early days, but there are clear indications of growth. Ever since Steve Gillmor and Seth Goldstein co-founded AttentionTrust, the topic of attention has been getting, well, a lot of attention. And very rightly so – AttentionTrust created principles that put the user in control of the data and open the door to a new, exciting set of personalization applications.
AttentionTrust was created based on four core principles that guarantee that users are in control of their data and that their privacy is always respected:
- Property: You own your attention and can store it wherever you wish. You have CONTROL.
- Mobility: You can securely move your attention wherever you want, whenever you want. You have the ability to TRANSFER your attention.
- Economy: You can pay attention to whomever you wish and receive value in return. Your attention has WORTH.
- Transparency: You can see exactly how your attention is being used.
These foundational principles embody the spirit of the Attention Economy and provide the rules and philosophy for building out the Attention Marketplace. In this article we will focus on exploring additional technical infrastructure that might be beneficial for enabling users and companies to participate in an open Attention Ecosystem. In particular, we will discuss the format for storing extended attention information, as well as the generalized interface for the Individual Information Storage Service. (Note: in previous articles I referred to this as the AttentionVault; the name was changed to remove any possible association with AttentionTrust and RootVault.)
The first blocks of the attention ecosystem infrastructure were mapped out when AttentionTrust created an add-on for the Firefox browser called AttentionRecorder.
The recorder, in its current form, provides the basic yet essential function of capturing a timestamped click stream, otherwise known as implicit attention.
Diagram 2 shows the XML output of the AttentionRecorder, which is essentially a set of HTTP transactions.
In accordance with the Property principle, the recorder offers the user a choice to either store these attention records in a local file or direct them to one of the approved attention services. If the user chooses remote storage, the records are sent via HTTP.
The current infrastructure facilitates the capture and storage of implicit attention. In a way, it defines the interface for storing this type of attention, but it intentionally does not focus on the interface for accessing the stored attention records. Also, the format that AttentionRecorder uses today is not generic, but rather specific to the click stream. In the following sections I will discuss how the current infrastructure can be extended to accommodate additional types of attention, and will discuss the interface for reading/writing data from/to the Individual Information Storage Service.
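Since the actual AttentionRecorder output appears only in the omitted diagram, here is an illustrative sketch of what one timestamped click-stream record might look like. The element names (record, url, referrer, timestamp) are hypothetical, not AttentionRecorder’s real schema.

```python
# Illustrative sketch of a timestamped click-stream (implicit
# attention) record. Element names are hypothetical, not
# AttentionRecorder's actual schema.
import time
import xml.etree.ElementTree as ET


def make_click_record(url: str, referrer: str, timestamp: float) -> ET.Element:
    """Build one implicit-attention record as an XML element."""
    rec = ET.Element("record")
    ET.SubElement(rec, "url").text = url
    ET.SubElement(rec, "referrer").text = referrer
    ET.SubElement(rec, "timestamp").text = str(int(timestamp))
    return rec


rec = make_click_record("http://example.com/page",
                        "http://example.com/", time.time())
# The recorder would append records like this to a local file, or POST
# them over HTTP to an approved attention service.
print(ET.tostring(rec, encoding="unicode"))
```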
The types of Attention
As we discussed, currently the AttentionRecorder captures implicit attention. There are also other types of attention. For example, bookmarks are a variant of explicit attention. The difference between implicit and explicit attention is that the user makes an explicit effort to store the information.
We can gain further insight into explicit attention by looking at the famous social bookmarking service del.icio.us. This service popularized the notion of tags – labels or attributes attached by the user to a piece of information. Seth Goldstein told me that del.icio.us’s founder, Joshua Schachter, thought of tags as crystallized attention. Tags illuminate the personal angle on the information – what exactly is the user paying attention to here?
Like tags, ratings capture another aspect of attention. For example, services like BlinkList and Ma.gnolia include rating information in addition to the URL. By rating an article, the user specifies the attitude or the outcome of the attention. Recognizing the importance and value of ratings, sites like Netflix and Amazon have always allowed users to rate movies, books, music, etc.
And there is yet another type of attention – explicit semantic attention, that is, attention related to objects like books, movies, music, wine and restaurants. The blueorganizer that we are developing at adaptiveblue focuses on capturing this kind of attention. Here is the XML illustrating how the blueorganizer represents books:
Abstracting Attention format
As shown in Diagram 1, the Attention Economy brings together users and services, creating an ecosystem where users choose which attention services they want to receive. From the examples above it is clear that there are different types of attention: implicit, explicit, tagged, rated and semantic are just a few that we looked at. So to enable a flexible ecosystem, where services can harness a variety of attention, we need to extend the current attention format to encompass the different types of attention.
Looking at the current format of the attention recorder and the format used by the blueorganizer, we note that they are quite different except for the URL, title and timestamp information. But really, the blueorganizer’s format just extends the attention recorder format by adding semantic information about the object – in this case, a book – contained in the page. So what should the generic attention format look like? To answer this question, we observe that it is not necessary to enumerate all possible types of attention in advance. Instead, we need to capture the common aspects of different attention types and then provide room for extensibility.
Since this is an informal article, I will just give you a flavor of what this format will be like. A more formal specification will emerge via discussions and feedback from the companies working in the attention space.
The format is very simple. The common attributes are grouped under the recordHead, while specific attributes are placed into the recordBody. Note that this format immediately facilitates decoupling and flexible pairing of attention types and attention services, as shown in Diagram 5.
For example, a personalized recommendation service might take advantage of bluemarks, while a personalized news service could leverage a combination of the user’s click stream, OPML and bookmarks. The generic attention format allows the user to be in control of different types of attention, and at the same time facilitates flexible attention services.
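The recordHead/recordBody split described above can be sketched as follows. Only those two element names come from the article; the remaining element names and the example field values are illustrative placeholders.

```python
# Sketch of the generic attention format: common attributes (URL,
# title, timestamp) go under recordHead; type-specific data goes under
# recordBody. Only recordHead/recordBody come from the article; other
# element names are illustrative.
import xml.etree.ElementTree as ET


def make_attention_record(record_type, url, title, timestamp, body_fields):
    """Build a generic attention record with a head and a typed body."""
    rec = ET.Element("attentionRecord", {"type": record_type})
    head = ET.SubElement(rec, "recordHead")
    ET.SubElement(head, "url").text = url
    ET.SubElement(head, "title").text = title
    ET.SubElement(head, "timestamp").text = str(timestamp)
    body = ET.SubElement(rec, "recordBody")
    for name, value in body_fields.items():
        ET.SubElement(body, name).text = value
    return rec


# An explicit, tagged and rated record (del.icio.us-style attention):
rec = make_attention_record(
    "explicit", "http://example.com/article", "An Article", 1170000000,
    {"tags": "web2.0 attention", "rating": "4"})
print(ET.tostring(rec, encoding="unicode"))
```

New attention types then only need to define what goes into their recordBody, leaving the head untouched – which is exactly the extensibility the format is after.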
Abstracting the Individual Information Storage Service
Another piece that needs to be in place in order to make Diagram 1 a reality is the interface for accessing the Individual Information Storage Service. Having this service is central to the Attention Economy, because it puts the user in control of the information. Without this storage, each personalization service would have to capture the user data separately and create, effectively, an information silo.
For example, sites like Amazon have a rich history of user interests, which cannot be utilized anywhere but on Amazon. Clearly, Netflix’s recommendation engine could give better suggestions if it had knowledge of the data in Amazon wishlists.
The API for the information storage puts the user in control and solves the attention silo problem. The question is what should this interface look like? It needs to be simple (think del.icio.us) and include just the essentials to enable reading and writing the attention records. Here are the key operations:
AddRecord – adds an attention record to the store.
DeleteRecord – deletes an attention record from the store.
UpdateRecord – replaces an attention record in the store.
ListRecords – lists the user’s records updated or added since a given time.
LastUpdated – returns the timestamp of the last add/update/delete.
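The five operations above can be sketched as a minimal in-memory store. In practice this interface would sit behind a simple (del.icio.us-style) REST API; only the operation names come from the list above – the storage details, method signatures and record shapes here are purely illustrative.

```python
# Minimal in-memory sketch of the Individual Information Storage
# Service interface. Only the five operation names come from the
# article; everything else is illustrative.
import itertools
import time


class AttentionStore:
    def __init__(self):
        self._records = {}            # record id -> (record, modified time)
        self._ids = itertools.count(1)
        self._last_updated = 0.0

    def _touch(self):
        self._last_updated = time.time()

    def add_record(self, record):
        """AddRecord: store a record, return its id."""
        rid = next(self._ids)
        self._touch()
        self._records[rid] = (record, self._last_updated)
        return rid

    def delete_record(self, rid):
        """DeleteRecord: remove a record from the store."""
        del self._records[rid]
        self._touch()

    def update_record(self, rid, record):
        """UpdateRecord: replace an existing record."""
        self._touch()
        self._records[rid] = (record, self._last_updated)

    def list_records(self, since=0.0):
        """ListRecords: records added or updated since a given time."""
        return [r for r, t in self._records.values() if t >= since]

    def last_updated(self):
        """LastUpdated: timestamp of the last add/update/delete."""
        return self._last_updated


store = AttentionStore()
rid = store.add_record({"url": "http://example.com", "type": "implicit"})
print(len(store.list_records()))  # 1
```

A service polling `last_updated()` can cheaply decide whether it needs to call `list_records(since=...)` at all – which is presumably why the operation is in the interface.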
The current attention infrastructure laid out by AttentionTrust is a major step towards enabling the Attention Economy. However, its primary focus is the capture and storage of the user’s click stream. To facilitate a truly open ecosystem, this infrastructure needs to be augmented with a generic attention format, allowing for other types of attention, and an Individual Information Storage API for both reading and writing records. With these two pieces in place, not only will users be in control of their information, they will also be able to receive a wide range of attention and personalization services. The chosen service providers will be able to utilize a variety of attention data and make their services more agile – but more importantly, better and more relevant to the end user.
In this post, we survey a range of client applications which utilize the new web platform. This is a follow-up to our Web Platform Primer post a few days ago, in which we explained the building blocks of the new Web infrastructure:
The Web Computing Platform
Essentially the building blocks are foundational services from Internet companies such as Amazon, Google and Microsoft – which combine to form a Web development platform. Indeed, a couple of days ago we saw Amazon add to the platform with a limited beta 'Compute' service, called Elastic Compute Cloud. All of these services facilitate a new breed of software: smart desktop and browser applications that use the Web Platform as their backbone.
Storage Services
In this category there are Amazon S3 and openomy. Amazon S3 has a wide variety of clients using it. Firstly, there are personal backup applications like Jungle Disk and Elephant Drive. Another common use case for S3 is storing large media files – the Amazon S3 success stories page features MediaSilo video storage and SmugMug on-line photo sharing. A webtop application called YouOS is also using Amazon S3 to store user information. Finally, there are two other applications listed in the success stories section: MyOwnDB, which allows users to define and store their personal information in the form of database tables; and the blueorganizer smart browser extension for Firefox, developed by my [Alex's] company adaptiveblue.
Messaging and Compute Services
In the previous article we gave an example of a Messaging service: Amazon Simple Queue Service. There are no success stories listed on the Amazon site for this service but – as we noted – it is likely that Amazon.com itself utilizes this service.
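The Simple Queue Service model is worth spelling out, because it differs from an ordinary queue: a receiver does not remove a message, it leases one. The message becomes invisible for a visibility timeout, and must be explicitly deleted once processed – otherwise it reappears for another worker. Below is a toy in-process model of that behavior (this mimics the semantics, not the real wire API):

```python
import time
import uuid


class SimpleQueue:
    """A toy in-process model of SQS-style message semantics.

    Senders enqueue messages; a receiver leases one, making it
    invisible for `visibility_timeout` seconds, and must delete it
    after processing or it becomes visible again.
    """

    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self._messages = {}  # message id -> (body, visible-after timestamp)

    def send(self, body):
        """Enqueue a message; it is immediately visible to receivers."""
        msg_id = str(uuid.uuid4())
        self._messages[msg_id] = (body, 0.0)
        return msg_id

    def receive(self):
        """Lease one visible message, or return None if none is visible."""
        now = time.time()
        for msg_id, (body, visible_after) in self._messages.items():
            if visible_after <= now:
                # Hide the message until the visibility timeout expires.
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        """Acknowledge a processed message so it never reappears."""
        self._messages.pop(msg_id, None)
```

The visibility timeout is what makes the design fault-tolerant: if a worker crashes mid-task, the message it leased simply becomes visible again and another worker picks it up.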
After our previous article was published earlier this week, Amazon released the first example of a black-box compute service – called Amazon Elastic Compute Cloud. The service is currently in limited beta, but we are likely to start hearing of success stories soon.
Information Services
We start this broad category with the applications that use the Amazon eCommerce Service, one of the most widely used APIs on the web. Among the success stories listed on Amazon's page, most fall into the category of shopping and store fronts. For example:
- ActionEngine and ScanBy use the Amazon API to enable wireless shopping.
- Associate-o-matic uses the Amazon API to help its customers create store fronts.
- Inside C uses the Amazon API to bring shopping into the instant messaging space.
There are other interesting uses of the API as well. For UNIX lovers there is the Amazon Command Line interface, marketed as 0-click shopping. Also there is RightCart, which enables a web-wide shopping experience on blogs and regular sites.
Note that adaptiveblue also uses the eCommerce API, to dynamically look up product information – when a user selects the title of a book, or the name of a gadget.
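The eCommerce Service's REST flavor makes lookups like these straightforward: a product query is just a GET request against a single endpoint with a handful of parameters. A sketch of building an ItemLookup request follows – the access key and ASIN are placeholders, and the request is constructed but not sent:

```python
import urllib.parse

# REST endpoint for the Amazon eCommerce Service.
ECS_ENDPOINT = "http://webservices.amazon.com/onca/xml"


def item_lookup_url(access_key_id, asin, response_group="ItemAttributes"):
    """Build a REST URL for the eCommerce Service ItemLookup operation.

    The caller supplies their developer access key and the product's
    ASIN; the service responds with XML product data.
    """
    params = {
        "Service": "AWSECommerceService",
        "Operation": "ItemLookup",
        "AWSAccessKeyId": access_key_id,
        "ItemId": asin,
        "ResponseGroup": response_group,
    }
    return ECS_ENDPOINT + "?" + urllib.parse.urlencode(params)
```

This is roughly the kind of request an application fires off when a user selects a book title – the XML response carries the title, author, price and image URLs needed to render a product view.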
The most popular information API is Google Maps. A comprehensive list of usages can be found at the Google Maps Mania blog. They range from housing market sites to travel logs. These, however, are mashups or utilities rather than applications – they do not provide an end-to-end user experience, but instead solve a particular information problem. In general, we are seeing a big surge in so-called mashups fueled by Information Services and Web 2.0 APIs. A comprehensive list of these mashups, along with APIs and other great information, is maintained by John Musser at Programmable Web.
The Alexa Web Search Platform was launched in December 2005. At the time Richard wondered if it would make Amazon a major search player. As of now there are no references to a major vertical search engine built on top of Alexa. The Alexa web site features a few applications – a Camera search and Zip File search – but that just scratches the surface of what is possible with the Alexa platform.
I still think that this platform will pick up and we will see some really interesting vertical search applications built on it. In the meantime, hardly a day goes by without the blogging community checking the Alexa Information service for traffic rankings: alexa.com and alexaholic.com.
Web 2.0 Services
Thanks to del.icio.us, APIs are back in style. So-called Web 2.0 companies rush to open up their information, in order to enable cross-pollination of data and mashups. Here is the current chart of Top APIs from Programmable Web:
Google Maps is a clear front runner. Among the other popular APIs are Flickr, Amazon, Yahoo! Maps and del.icio.us. Also, according to the 'last 14 days' chart, the YouTube API is on the rise.
It is exciting to see this new wave of applications developed on top of the emerging Web Platform. As the platform matures, we are sure to witness more and more applications using it as their primary infrastructure. This allows businesses to focus on innovation and domain knowledge, rather than worrying about the scalability of their backend systems.