There are two ways for users to know which of the two million websites to visit:
- Brand: the user knows that a site usually provides certain qualities; if the user likes these qualities, then it will probably be worth visiting more pages on that site
- Reputation: the user follows the advice of other users who know that a site has quality
Branding is the focus of most current Web projects: the theory is that building a powerful brand while the Web is still relatively small will allow a site to be profitable when the Web gets larger. An example of a good Web brand is news.com: when you want to know what's happening in the information industry, you will usually be able to find recent news at news.com.
Even though brands work well for a few, large sites, they are not a good mechanism to help users handle millions of sites. On the contrary, the nature of the Web encourages the formation of many smaller sites, and most of its value comes from such specialized sites. Thus, the Web needs a mechanism for making sense of overwhelming diversity.
Since there is no way for computers to automatically measure quality, we have to rely on human judgment for Web quality ratings. The reputation manager is a way to automate the processing of such human judgments; not a way to make the judgments themselves. In other words, quality needs to become an explicit attribute of Web objects.
My vision for a reputation manager involves the coordination of billions of individual quality judgments by hundreds of millions of users. Every time you encounter an information source on the Internet, your Web client software will present you with an opportunity to vote on its quality. Typically, this would be done by adding two buttons to the interface: a thumbs-up button and a thumbs-down button. A neutral rating would be given by doing nothing (since we want to minimize overhead in the user interface), but when a user encounters something particularly good, he or she would hit the "good" button. Similarly, disappointing services would be punished by a click on "bad."
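The voting interface described above can be sketched in code; this is a minimal illustration, assuming the client records one vote per information source, with doing nothing leaving the source neutral:

```python
# Hypothetical sketch of a client-side vote recorder: two buttons map to
# +1 and -1, and an unvoted source stays neutral by default.
class VoteRecorder:
    def __init__(self):
        self.votes = {}  # url -> +1 (thumbs up) or -1 (thumbs down)

    def thumbs_up(self, url):
        self.votes[url] = +1

    def thumbs_down(self, url):
        self.votes[url] = -1

    def rating(self, url):
        # No click means no entry, so the source defaults to neutral (0).
        return self.votes.get(url, 0)
```

Keeping the neutral case implicit (no stored vote) matches the goal of minimizing user-interface overhead: only the rare deliberate clicks generate data.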
The simplest reputation manager would compute the average rating for each information source, but more advanced services would use ideas from collaborative filtering and compute different ratings for different users. Basically, the reputation manager would find other users whose tastes are very similar to your own and give added weight to these users' ratings. Since the Web will have half a billion users in five years, it will always be possible to find other users who match your interests, no matter how obscure they are. Thus, the reputation manager can deal differently with people who love the Spice Girls and people who don't.
The reputation manager will collect ratings for entire websites, for individual pages, and for people who contribute comments to discussion groups or chat rooms. The resulting reputation can be used to direct users to sites that will be helpful or interesting, and it can be used to filter out the less valuable contributors in chat rooms and online discussions. In this way, the reputation manager becomes a more powerful version of the bozo filter.
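The bozo-filter use of contributor reputations might look like this; the threshold value and the representation of reputations as average +1/-1 ratings are assumptions made for illustration:

```python
def visible_messages(messages, reputations, threshold=-0.5):
    # messages: list of (author, text) pairs.
    # Hide contributions from authors whose reputation has sunk below the
    # threshold; authors without any ratings yet default to neutral (0.0).
    return [(author, text) for author, text in messages
            if reputations.get(author, 0.0) >= threshold]
```

Defaulting unknown authors to neutral matters: a filter that hid newcomers by default would prevent anyone from ever earning a reputation.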
Reputation management will be especially valuable when combined with micro-payments: once you have to pay for clicks, you will be motivated to find out in advance whether the destination website is any good.
Implementing Reputation on the Web
Initially, I expect reputation managers to become embedded in proxy servers for large corporations and offered as a value-added service by larger Internet Service Providers. In both cases, it is possible to collect data about the behavior of a large number of users at a single point. In about three years, reputation management will become an Internet-wide service that users can subscribe to by paying a small micro-payment for each recommendation.
There is already an independent reputation manager available on the Internet: Alexa. Unfortunately, it is a browser add-on and thus not fully integrated with the user's Web client software (of course, right now, browsers themselves serve only as a weak type of Web access client). The features of Alexa most relevant to this column are:
- Reputation statistics for most sites on the Web showing how frequently they are visited and how popular they are.
- Recommendation links to other good sites that are related to the current page.
Even without any fancy statistics, most websites could benefit from explicit use of quality ratings in their interface. A simple logfile analysis will show you what parts of your site are the most popular, and it would be reasonable to give these pages special prominence in search results and in listings. For example, my own list of old Alertbox columns highlights those columns that have attracted the largest readership in the past. Doing so prevents new readers from being overwhelmed by choices and allows them to focus on the links that are most likely to be of interest. A site with an even larger set of old material could provide a single page listing nothing but its top hits.
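The simple logfile analysis suggested above can be done in a few lines; this sketch assumes access logs in the common `"GET /path HTTP/1.0"` request format and simply counts requests per page:

```python
from collections import Counter

def top_pages(log_lines, n=10):
    # Count requests per URL path in common-format access log lines and
    # return the n most popular pages, which a site could then highlight
    # in its listings and search results.
    hits = Counter()
    for line in log_lines:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1].split()  # e.g. ['GET', '/page.html', 'HTTP/1.0']
        if len(request) >= 2:
            hits[request[1]] += 1
    return hits.most_common(n)
```

Even this crude popularity count gives a site enough data to single out its top hits without any explicit user voting.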
I have posted several reader comments, including a suggestion to use a user's bookmarks as an initial set of positive votes and a warning against an oligopoly of rating services.
See also: Reputation managers are happening (this topic revisited 1.5 years later).