Mathematical PageRanks for a simple network, expressed as percentages. (Google uses a logarithmic scale.) Page C has a higher PageRank than Page E, even though there are fewer links to C; the one link to C comes from an important page and hence is of high value. If web surfers who start on a random page have an 85% likelihood of choosing a random link from the page they are currently visiting, and a 15% likelihood of jumping to a page chosen at random from the entire web, they will reach Page E 8.1% of the time. (The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. In the presence of damping, Page A effectively links to all pages in the web, even though it has no outgoing links of its own.
PageRank is a link analysis algorithm, named after Larry Page[1] and used by the Google Internet search engine, that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by $PR(E)$.
The name "PageRank" is a trademark of Google, and the PageRank process has been patented (U.S. Patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; the shares were sold in 2005 for $336 million.[2][3]
Cartoon illustrating basic principle of PageRank
A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or usa.gov. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page, there is no support for that page.
Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[4] In practice, the PageRank concept has proven to be vulnerable to manipulation, and extensive research has been devoted to identifying falsely inflated PageRank and ways to ignore links from documents with falsely inflated PageRank.
Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com), the IBM CLEVER project, and the TrustRank algorithm.
PageRank was developed at Stanford University by Larry Page (hence the name Page-Rank[5]) and Sergey Brin as part of a research project about a new kind of search engine.[6] Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher the more links there are to it.[7] The project was co-authored by Rajeev Motwani and Terry Winograd. The first paper about the project, describing PageRank and the initial prototype of the Google search engine, was published in 1998;[4] shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web search tools.[8]
PageRank was influenced by citation analysis, developed in the 1950s by Eugene Garfield at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his important work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original paper.[4]
A small search engine called "RankDex" from IDD Information Services, designed by Robin Li, had been exploring a similar strategy for site-scoring and page ranking since 1996.[9] The technology in RankDex was patented by 1999[10] and used later when Li founded Baidu in China.[11][12] Li's work is referenced by some of Larry Page's U.S. patents for his Google search methods.[13]
PageRank is a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value.
A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank.
Assume a small universe of four web pages: A, B, C and D. Links from a page to itself, or multiple outbound links from one single page to another single page, are ignored. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial PageRank of 1. However, later versions of PageRank, and the remainder of this section, assume a probability distribution between 0 and 1. Hence the initial value for each page is 0.25.
The PageRank transferred from a given page to the targets of its outbound links upon the next iteration is divided equally among all outbound links.
If the only links in the system were from pages B, C, and D to A, each link would transfer 0.25 PageRank to A upon the next iteration, for a total of 0.75.
- $PR(A) = PR(B) + PR(C) + PR(D).$
Suppose instead that page B had a link to pages C and A, while page D had links to all three pages. Thus, upon the next iteration, page B would transfer half of its existing value, or 0.125, to page A and the other half, or 0.125, to page C. Since D had three outbound links, it would transfer one third of its existing value, or approximately 0.083, to A.
- $PR(A) = \frac{PR(B)}{2} + \frac{PR(C)}{1} + \frac{PR(D)}{3}.$
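With every page initialized to 0.25, this first iteration therefore gives
- $PR(A) = 0.125 + 0.25 + 0.083 \approx 0.458.$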
In other words, the PageRank conferred by an outbound link is equal to the document's own PageRank score divided by the number of outbound links L( ).
- $PR(A) = \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)}.$
In the general case, the PageRank value for any page u can be expressed as:
- $PR(u) = \sum_{v \in B_u} \frac{PR(v)}{L(v)},$
i.e. the PageRank value for a page u is dependent on the PageRank values for each page v contained in the set $B_u$ (the set containing all pages linking to page u), divided by the number $L(v)$ of links from page v.
The PageRank theory holds that even an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[4]
The damping factor is subtracted from 1 (and in some variations of the algorithm, the result is divided by the number of documents (N) in the collection) and this term is then added to the product of the damping factor and the sum of the incoming PageRank scores. That is,
- $PR(A) = \frac{1 - d}{N} + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right).$
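For instance, continuing the four-page example with $d = 0.85$ and $N = 4$, this formula gives, after the first iteration,
- $PR(A) = \frac{0.15}{4} + 0.85 \left( \frac{0.25}{2} + \frac{0.25}{1} + \frac{0.25}{3} \right) \approx 0.0375 + 0.390 \approx 0.427.$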
So any page's PageRank is derived in large part from the PageRanks of other pages. The damping factor adjusts the derived value downward. The original paper, however, gave the following formula, which has led to some confusion:
- $PR(A) = 1 - d + d \left( \frac{PR(B)}{L(B)} + \frac{PR(C)}{L(C)} + \frac{PR(D)}{L(D)} + \cdots \right).$
The difference between them is that the PageRank values in the first formula sum to one, while in the second formula each PageRank is multiplied by N and the sum becomes N. A statement in Page and Brin's paper that "the sum of all PageRanks is one"[4] and claims by other Google employees[14] support the first variant of the formula above.
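The relationship between the two is a simple rescaling: if $PR$ satisfies the first formula and one sets $PR'(A) := N \cdot PR(A)$ for every page, then multiplying the first formula by $N$ yields
- $PR'(A) = 1 - d + d \left( \frac{PR'(B)}{L(B)} + \frac{PR'(C)}{L(C)} + \cdots \right),$
which is exactly the second formula, with the $PR'$ values summing to $N$ rather than to one.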
Page and Brin confused the two formulas in their most popular paper "The Anatomy of a Large-Scale Hypertextual Web Search Engine", where they mistakenly claimed that the latter formula formed a probability distribution over web pages.[4]
Google recalculates PageRank scores each time it crawls the Web and rebuilds its index. As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents.
The formula uses a model of a random surfer who gets bored after several clicks and switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link. It can be understood as a Markov chain in which the states are pages, and the transitions, which are all equally probable, are the links between pages.
If a page has no links to other pages, it becomes a sink and therefore terminates the random surfing process. If the random surfer arrives at a sink page, they pick another URL at random and continue surfing again.
When calculating PageRank, pages with no outbound links are assumed to link out to all other pages in the collection. Their PageRank scores are therefore divided evenly among all other pages. In other words, to be fair with pages that are not sinks, these random transitions are added to all nodes in the Web, with a residual probability usually set to d = 0.85, estimated from the frequency that an average surfer uses his or her browser's bookmark feature.
So, the equation is as follows:
- $PR(p_i) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j)}{L(p_j)},$
where $p_1, p_2, \ldots, p_N$ are the pages under consideration, $M(p_i)$ is the set of pages that link to $p_i$, $L(p_j)$ is the number of outbound links on page $p_j$, and $N$ is the total number of pages.
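As a minimal per-page sketch of this formula (an illustration only, not the vectorized implementation given further below), assuming an adjacency matrix A in which A(i,j) = 1 when page i links to page j, and a vector pr holding the previous iteration's values:
<source lang="matlab">
% One damped update step for every page, following PR(p_i) = (1-d)/N + d * sum(...).
% Assumes A(i,j) = 1 if page i links to page j, and pr holds the previous estimates.
function pr_new = pagerank_step(A, pr, d)
  N = length(pr);
  L = sum(A, 2);                % L(j): number of outbound links of page j
  pr_new = zeros(N, 1);
  for i = 1:N
    incoming = find(A(:, i));   % the set M(p_i) of pages linking to p_i
    pr_new(i) = (1 - d) / N + d * sum(pr(incoming) ./ L(incoming));
  end
end
</source>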
The PageRank values are the entries of the dominant eigenvector of the modified adjacency matrix. This makes PageRank a particularly elegant metric: the eigenvector is
- $\mathbf{R} = \begin{bmatrix} PR(p_1) \\ PR(p_2) \\ \vdots \\ PR(p_N) \end{bmatrix}$
where R is the solution of the equation
- $\mathbf{R} = \begin{bmatrix} (1-d)/N \\ (1-d)/N \\ \vdots \\ (1-d)/N \end{bmatrix} + d \begin{bmatrix} \ell(p_1,p_1) & \ell(p_1,p_2) & \cdots & \ell(p_1,p_N) \\ \ell(p_2,p_1) & \ddots & & \vdots \\ \vdots & & \ell(p_i,p_j) & \\ \ell(p_N,p_1) & \cdots & & \ell(p_N,p_N) \end{bmatrix} \mathbf{R}$
where the adjacency function $\ell(p_i,p_j)$ is 0 if page $p_j$ does not link to $p_i$, and normalized such that, for each j,
- $\sum_{i = 1}^N \ell(p_i,p_j) = 1,$
i.e. the elements of each column sum up to 1, so the matrix is a stochastic matrix (for more details see the computation section below). Thus this is a variant of the eigenvector centrality measure used commonly in network analysis.
Because of the large eigengap of the modified adjacency matrix above,[15] the values of the PageRank eigenvector can be approximated to within a high degree of accuracy within only a few iterations.
As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equal $t^{-1}$ where $t$ is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
One main disadvantage of PageRank is that it favors older pages. A new page, even a very good one, will not have many links unless it is part of an existing site (a site being a densely connected set of pages, such as Wikipedia).
The Google Directory (itself a derivative of the Open Directory Project) allows users to see results sorted by PageRank within categories. The Google Directory is the only service offered by Google where PageRank fully determines display order.[citation needed] In Google's other search services (such as its primary Web search), PageRank is only used to weight the relevance scores of pages shown in search results.
Several strategies have been proposed to accelerate the computation of PageRank.[16]
Various strategies to manipulate PageRank have been employed in concerted efforts to improve search results rankings and monetize advertising links. These strategies have severely impacted the reliability of the PageRank concept, which purports to determine which documents are actually highly valued by the Web community.
Since December 2007, when it started actively penalizing sites selling paid text links, Google has combatted link farms and other schemes designed to artificially inflate PageRank. How Google identifies link farms and other PageRank manipulation tools is among Google's trade secrets.
PageRank can be computed either iteratively or algebraically. The iterative method is essentially the power method (power iteration);[17][18] the basic mathematical operations performed are identical.
At $t = 0$, an initial probability distribution is assumed, usually
- $PR(p_i; 0) = \frac{1}{N}.$
At each time step, the computation, as detailed above, yields
- $PR(p_i; t+1) = \frac{1-d}{N} + d \sum_{p_j \in M(p_i)} \frac{PR(p_j; t)}{L(p_j)},$
or in matrix notation
- $\mathbf{R}(t+1) = d \mathcal{M}\mathbf{R}(t) + \frac{1-d}{N} \mathbf{1}, \qquad (*)$
where $\mathbf{R}_i(t) = PR(p_i; t)$ and $\mathbf{1}$ is the column vector of length $N$ containing only ones.
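As a minimal sketch (assuming the column-stochastic link matrix $\mathcal{M}$ defined just below is available as M, and a damping factor d is given), iteration (*) can be written directly in MATLAB/Octave:
<source lang="matlab">
% Illustrative sketch of iteration (*); M and d are assumed to be defined already.
N = size(M, 2);
R = ones(N, 1) / N;            % uniform starting distribution PR(p_i; 0) = 1/N
for t = 1:50                   % fixed number of iterations, for illustration only
  R = d * M * R + (1 - d) / N * ones(N, 1);
end
</source>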
The matrix $\mathcal{M}$ is defined as
- $\mathcal{M}_{ij} = \begin{cases} 1/L(p_j), & \mbox{if } j \mbox{ links to } i \\ 0, & \mbox{otherwise} \end{cases}$
i.e.,
- $\mathcal{M} := (K^{-1} A)^T,$
where $A$ denotes the adjacency matrix of the graph and $K$ is the diagonal matrix with the outdegrees in the diagonal.
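A minimal sketch of this construction, assuming an adjacency matrix A with A(i,j) = 1 when page i links to page j, and that every page has at least one outbound link (dangling nodes are treated further below):
<source lang="matlab">
% Build the column-stochastic link matrix M = (K^-1 * A)^T from the adjacency matrix A.
% Assumes A(i,j) = 1 if page i links to page j and every row of A is nonzero.
outdegree = sum(A, 2);   % K's diagonal: number of outbound links per page
K = diag(outdegree);
M = (K \ A)';            % equivalent to (inv(K) * A)'
</source>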
The computation ends when for some small $\epsilon$
- $|\mathbf{R}(t+1) - \mathbf{R}(t)| < \epsilon,$
i.e., when convergence is assumed.
For $t \to \infty$ (i.e., in the steady state), the above equation (*) reads
- $\mathbf{R} = d \mathcal{M}\mathbf{R} + \frac{1-d}{N} \mathbf{1}. \qquad (**)$
The solution is given by
- $\mathbf{R} = (\mathbf{I} - d \mathcal{M})^{-1} \frac{1-d}{N} \mathbf{1},$
with the identity matrix $\mathbf{I}$. The solution exists and is unique for $0 < d < 1$. This can be seen by noting that $\mathcal{M}$ is by construction a stochastic matrix and hence has an eigenvalue equal to one as a consequence of the Perron–Frobenius theorem.
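A minimal sketch of this algebraic solution, assuming the matrix M and damping factor d from above:
<source lang="matlab">
% Solve (I - d*M) * R = (1-d)/N * 1 directly instead of forming the matrix inverse.
N = size(M, 2);
R = (eye(N) - d * M) \ ((1 - d) / N * ones(N, 1));
</source>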
If the matrix $\mathcal{M}$ is a transition probability, i.e., column-stochastic with no columns consisting of just zeros, and $\mathbf{R}$ is a probability distribution (i.e., $|\mathbf{R}| = 1$, $\mathbf{E}\mathbf{R} = \mathbf{1}$ where $\mathbf{E}$ is the matrix of all ones), then Eq. (**) is equivalent to
- $\mathbf{R} = \left( d \mathcal{M} + \frac{1-d}{N} \mathbf{E} \right)\mathbf{R} =: \widehat{\mathcal{M}} \mathbf{R}. \qquad (***)$
Hence PageRank $\mathbf{R}$ is the principal eigenvector of $\widehat{\mathcal{M}}$. A fast and easy way to compute this is using the power method: starting with an arbitrary vector $x(0)$, the operator $\widehat{\mathcal{M}}$ is applied in succession, i.e.,
- $x(t+1) = \widehat{\mathcal{M}} x(t),$
until
- $|x(t+1) - x(t)| < \epsilon.$
Note that in Eq. (***) the matrix on the right-hand side in the parenthesis can be interpreted as
- $\frac{1-d}{N} \mathbf{E} = (1-d)\mathbf{P} \mathbf{1}^t,$
where $\mathbf{P}$ is an initial probability distribution. In the current case
- $\mathbf{P} := \frac{1}{N} \mathbf{1}.$
Finally, if $\mathcal{M}$ has columns with only zero values, they should be replaced with the initial probability vector $\mathbf{P}$. In other words,
- $\mathcal{M}^\prime := \mathcal{M} + \mathcal{D},$
where the matrix $\mathcal{D}$ is defined as
- $\mathcal{D} := \mathbf{P} \mathbf{D}^t,$
with
- $\mathbf{D}_i = \begin{cases} 1, & \mbox{if } L(p_i) = 0 \\ 0, & \mbox{otherwise} \end{cases}$
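A minimal sketch of this correction, assuming M has been built as above but may contain all-zero columns for pages without outbound links:
<source lang="matlab">
% Replace every all-zero column of M with the uniform vector P (dangling-node fix).
N = size(M, 2);
P = ones(N, 1) / N;            % uniform initial probability vector
dangling = (sum(M, 1) == 0);   % row vector marking zero columns
D = P * dangling;              % rank-one matrix P * D^t
M_prime = M + D;
</source>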
In this case, the above two computations using $\mathcal{M}$ only give the same PageRank if their results are normalized:
- $\mathbf{R}_{\textrm{power}} = \frac{\mathbf{R}_{\textrm{iterative}}}{|\mathbf{R}_{\textrm{iterative}}|} = \frac{\mathbf{R}_{\textrm{algebraic}}}{|\mathbf{R}_{\textrm{algebraic}}|}.$
PageRank MATLAB/Octave implementation
<source lang="matlab">
% Parameter M                  adjacency matrix where M_i,j represents the link from 'j' to 'i', such that for all 'j' sum(i, M_i,j) = 1
% Parameter d                  damping factor
% Parameter v_quadratic_error  quadratic error for v
% Return v, a vector of ranks such that v_i is the i-th rank from [0, 1]

function [v] = rank(M, d, v_quadratic_error)

N = size(M, 2); % N is the number of documents (either dimension of the square matrix M)
v = rand(N, 1);
v = v ./ norm(v, 2);
last_v = ones(N, 1) * inf;
M_hat = (d .* M) + (((1 - d) / N) .* ones(N, N));

while (norm(v - last_v, 2) > v_quadratic_error)
    last_v = v;
    v = M_hat * v;
    v = v ./ norm(v, 2);
end

endfunction
</source>
Example of code calling the rank function defined above:
<source lang="matlab">
M = [0   0   0   0   1 ;
     0.5 0   0   0   0 ;
     0.5 0   0   0   0 ;
     0   1   0.5 0   0 ;
     0   0   0.5 1   0];
rank(M, 0.80, 0.001)
</source>
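As an illustrative (hypothetical) cross-check of the two computation methods described above, the normalized output of rank() can be compared with the normalized algebraic solution for the same example matrix:
<source lang="matlab">
% Hypothetical check: iterative (power-method) result vs. algebraic solution,
% both normalized to unit length as described in the text. Uses the example M above.
d = 0.80;
N = size(M, 2);
R_power = rank(M, d, 0.001);
R_algebraic = (eye(N) - d * M) \ ((1 - d) / N * ones(N, 1));
R_algebraic = R_algebraic ./ norm(R_algebraic, 2);
disp([R_power R_algebraic])    % the two columns should agree closely
</source>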
Depending on the framework used to perform the computation, the exact implementation of the methods, and the required accuracy of the result, the computation time of these methods can vary greatly.
The Google Toolbar's PageRank feature displays a visited page's PageRank as a whole number between 0 and 10. The most popular websites have a PageRank of 10; the least popular have a PageRank of 0. Google has not disclosed the specific method for determining a Toolbar PageRank value, which is to be considered only a rough indication of the value of a website.
PageRank measures the number of sites that link to a particular page.[19] The PageRank of a particular page is roughly based upon the quantity of inbound links as well as the PageRank of the pages providing the links. The algorithm also includes other factors, such as the size of a page, the number of changes, the time since the page was updated, the text in headlines and the text in hyperlinked anchor texts.[7]
The Google Toolbar's PageRank is updated infrequently, so the values it shows are often out of date.
The search engine results page (SERP) is the actual result returned by a search engine in response to a keyword query. The SERP consists of a list of links to web pages with associated text snippets. The SERP rank of a web page refers to the placement of the corresponding link on the SERP, where higher placement means higher SERP rank. The SERP rank of a web page is a function not only of its PageRank, but of a relatively large and continuously adjusted set of factors (over 200),[20][21] commonly referred to by internet marketers as "Google Love".[22] Search engine optimization (SEO) is aimed at achieving the highest possible SERP rank for a website or a set of web pages.
After the introduction of Google Places into the mainstream organic SERP, PageRank played little to no role in ranking a business in the Local Business Results.[23] While the theory of citations still plays a role in the algorithm, PageRank is not a factor since business listings, rather than web pages, are ranked.
The Google Directory PageRank is an 8-unit measurement. Unlike the Google Toolbar, which shows a numeric PageRank value upon mouseover of the green bar, the Google Directory only displays the bar, never the numeric values.
In the past, the PageRank shown in the Toolbar was easily manipulated. Redirection from one page to another, either via an HTTP 302 response or a "Refresh" meta tag, caused the source page to acquire the PageRank of the destination page. Hence, a new page with PR 0 and no incoming links could have acquired PR 10 by redirecting to the Google home page. This spoofing technique, also known as 302 Google Jacking, was a known vulnerability. Spoofing can generally be detected by performing a Google search for a source URL; if the URL of an entirely different site is displayed in the results, the latter URL may represent the destination of a redirection.
For search engine optimization purposes, some companies offer to sell high PageRank links to webmasters.[24] As links from higher-PR pages are believed to be more valuable, they tend to be more expensive. It can be an effective and viable marketing strategy to buy link advertisements on content pages of quality and relevant sites to drive traffic and increase a webmaster's link popularity. However, Google has publicly warned webmasters that if they are or were discovered to be selling links for the purpose of conferring PageRank and reputation, their links will be devalued (ignored in the calculation of other pages' PageRanks). The practice of buying and selling links is intensely debated across the Webmaster community. Google advises webmasters to use the nofollow HTML attribute value on sponsored links. According to Matt Cutts, Google is concerned about webmasters who try to game the system, and thereby reduce the quality and relevancy of Google search results.[24]
The original PageRank algorithm reflects the so-called random surfer model, meaning that the PageRank of a particular page is derived from the theoretical probability of visiting that page when clicking on links at random. However, real users do not randomly surf the web, but follow links according to their interest and intention. A page ranking model that reflects the importance of a particular page as a function of how many actual visits it receives by real users is called the intentional surfer model.[25] The Google toolbar sends information to Google for every page visited, and thereby provides a basis for computing PageRank based on the intentional surfer model. The introduction of the nofollow attribute by Google to combat Spamdexing has the side effect that webmasters commonly use it on outgoing links to increase their own PageRank. This causes a loss of actual links for the Web crawlers to follow, thereby making the original PageRank algorithm based on the random surfer model potentially unreliable. Using information about users' browsing habits provided by the Google toolbar partly compensates for the loss of information caused by the nofollow attribute. The SERP rank of a page, which determines a page's actual placement in the search results, is based on a combination of the random surfer model (PageRank) and the intentional surfer model (browsing habits) in addition to other factors.[26]
A version of PageRank has recently been proposed as a replacement for the traditional Institute for Scientific Information (ISI) impact factor,[27] and implemented at eigenfactor.org. Instead of merely counting total citations to a journal, the "importance" of each citation is determined in a PageRank fashion.
A similar new use of PageRank is to rank academic doctoral programs based on their records of placing their graduates in faculty positions. In PageRank terms, academic departments link to each other by hiring their faculty from each other (and from themselves).[28]
PageRank has been used to rank spaces or streets to predict how many people (pedestrians or vehicles) come to the individual spaces or streets.[29][30] In lexical semantics it has been used to perform Word Sense Disambiguation[31] and to automatically rank WordNet synsets according to how strongly they possess a given semantic property, such as positivity or negativity.[32]
A dynamic weighting method similar to PageRank has been used to generate customized reading lists based on the link structure of Wikipedia.[33]
A Web crawler may use PageRank as one of a number of importance metrics it uses to determine which URL to visit during a crawl of the web. One of the early working papers[34] that was used in the creation of Google is Efficient crawling through URL ordering,[35] which discusses the use of a number of different importance metrics to determine how deeply, and how much of a site, Google will crawl. PageRank is presented as one of a number of these importance metrics, though there are others listed, such as the number of inbound and outbound links for a URL and the distance from the root directory on a site to the URL.
PageRank may also be used as a methodology to measure the apparent impact of a community like the blogosphere on the overall Web itself. This approach therefore uses PageRank to measure the distribution of attention in reflection of the scale-free network paradigm.
In any ecosystem, a modified version of PageRank may be used to determine species that are essential to the continuing health of the environment.[36]
An application of PageRank to the analysis of protein networks in biology has recently been reported.[37]
In early 2005, Google implemented a new value, "nofollow",[38] for the rel attribute of HTML link and anchor elements, so that website developers and bloggers can make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing.
As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See: Spam in blogs#nofollow)
In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[39]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic has been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[40]
PageRank was once available for the verified site maintainers through the Google Webmaster Tools interface. However on October 15, 2009, a Google employee confirmed[41] that the company had removed PageRank from its Webmaster Tools section, explaining that "We’ve been telling people for a long time that they shouldn’t focus on PageRank so much; many site owners seem to think it's the most important metric for them to track, which is simply not true."[41] The PageRank indicator is not available in Google's own Chrome browser.
The visible page rank is updated very infrequently.
On 6 October 2011, many users mistakenly thought Google PageRank was gone. As it turns out, it was simply an update to the URL used to query the PageRank from Google.[42]
Google now relies on other strategies in addition to PageRank, such as Google Panda.[43]
- ^ "Google Press Center: Fun Facts". www.google.com. Archived from the original on 2009-04-24. http://web.archive.org/web/20090424093934/http://www.google.com/press/funfacts.html.
- ^ Lisa M. Krieger (1 December 2005). "Stanford Earns $336 Million Off Google Stock". San Jose Mercury News, cited by redOrbit. http://www.redorbit.com/news/education/318480/stanford_earns_336_million_off_google_stock/. Retrieved 2009-02-25.
- ^ Richard Brandt. "Starting Up. How Google got its groove". Stanford magazine. http://www.stanfordalumni.org/news/magazine/2004/novdec/features/startingup.html. Retrieved 2009-02-25.
- ^ a b c d e f Brin, S.; Page, L. (1998). "The anatomy of a large-scale hypertextual Web search engine". Computer Networks and ISDN Systems 30: 107–117. DOI:10.1016/S0169-7552(98)00110-X. ISSN 0169-7552. http://infolab.stanford.edu/pub/papers/google.pdf. edit
- ^ David Vise and Mark Malseed (2005). The Google Story. p. 37. ISBN 0-553-80457-X. http://www.thegooglestory.com/.
- ^ Page, Larry, "PageRank: Bringing Order to the Web", Stanford Digital Library Project, talk. August 18, 1997 (archived 2002)
- ^ a b 187-page study from Graz University, Austria, includes the note that human brains are also used when determining the page rank in Google[dead link]
- ^ "Google Technology". Google.com. http://www.google.com/technology/. Retrieved 2011-05-27.
- ^ Li, Yanhong (August 6, 2002). "Toward a qualitative search engine". Internet Computing, IEEE (IEEE Computer Society) 2 (4): 24–29. DOI:10.1109/4236.707687.
- ^ USPTO, "Hypertext Document Retrieval System and Method", U.S. Patent number: 5920859, Inventor: Yanhong Li, Filing date: Feb 5, 1997, Issue date: Jul 6, 1999
- ^ Greenberg, Andy, "The Man Who's Beating Google", Forbes magazine, October 05, 2009
- ^ "About: RankDex", rankdex.com
- ^ Cf. especially Lawrence Page, U.S. patents 6,799,176 (2004) "Method for scoring documents in a linked database", 7,058,628 (2006) "Method for node ranking in a linked database", and 7,269,587 (2007) "Scoring documents in a linked database"
- ^ Matt Cutts's blog: Straight from Google: What You Need to Know, see page 15 of his slides.
- ^ Taher Haveliwala and Sepandar Kamvar. (March 2003). "The Second Eigenvalue of the Google Matrix" (PDF). Stanford University Technical Report: 7056. arXiv:math/0307056. Bibcode 2003math......7056N. http://www-cs-students.stanford.edu/~taherh/papers/secondeigenvalue.pdf.
- ^ Gianna M. Del Corso, Antonio Gullí, Francesco Romani (2005). "Fast PageRank Computation via a Sparse Linear System". Internet Mathematics 2 (3). DOI:10.1.1.118.5422.
- ^ Arasu, A. and Novak, J. and Tomkins, A. and Tomlin, J. (2002). "PageRank computation and the structure of the web: Experiments and algorithms". Proceedings of the Eleventh International World Wide Web Conference, Poster Track. Brisbane, Australia. pp. 107–117. DOI:10.1.1.18.5264.
- ^ Massimo Franceschet (2010). "PageRank: Standing on the shoulders of giants". arXiv:1002.2858 [cs.IR].
- ^ Google Webmaster central discussion on PR
- ^ Aubuchon, Vaughn. "Google Ranking Factors - SEO Checklist". http://www.vaughns-1-pagers.com/internet/google-ranking-factors.htm.
- ^ Fishkin, Rand; Jeff Pollard (April 2, 2007). "Search Engine Ranking Factors - Version 2". seomoz.org. http://www.seomoz.org/article/search-ranking-factors. Retrieved May 11, 2009.
- ^ http://www.infoworld.com/t/search-engines/google-corrupt-search-me-428
- ^ "Ranking of listings : Ranking - Google Places Help". Google.com. http://google.com/support/places/bin/answer.py?hl=en&answer=7091. Retrieved 2011-05-27.
- ^ a b "How to report paid links". mattcutts.com/blog. April 14, 2007. http://www.mattcutts.com/blog/how-to-report-paid-links/. Retrieved 2007-05-28.
- ^ Jøsang, A. (2007). "Trust and Reputation Systems". In Aldini, A. (PDF). Foundations of Security Analysis and Design IV, FOSAD 2006/2007 Tutorial Lectures.. 4677. Springer LNCS 4677. pp. 209–245. DOI:10.1007/978-3-540-74810-6. http://www.unik.no/people/josang/papers/Jos2007-FOSAD.pdf.
- ^ SEOnotepad. "Myth of the Google Toolbar Ranking". http://www.seonotepad.com/search-engines/google-seo/myth-of-the-google-toolbar-ranking/.
- ^ Johan Bollen, Marko A. Rodriguez, and Herbert Van de Sompel. (December 2006). "Journal Status". Scientometrics 69 (3): 1030. arXiv:cs.GL/0601030. Bibcode 2006cs........1030B.
- ^ Benjamin M. Schmidt and Matthew M. Chingos (2007). "Ranking Doctoral Programs by Placement: A New Method" (PDF). PS: Political Science and Politics 40 (July): 523–529. http://www.people.fas.harvard.edu/~gillum/rankings_paper.pdf.
- ^ B. Jiang (2006). "Ranking spaces for predicting human movement in an urban environment". International Journal of Geographical Information Science 23 (7): 823–837. arXiv:physics/0612011. DOI:10.1080/13658810802022822.
- ^ Jiang B., Zhao S., and Yin J. (2008). "Self-organized natural roads for predicting traffic flow: a sensitivity study". Journal of Statistical Mechanics: Theory and Experiment P07008 (07): 008. arXiv:0804.1630. Bibcode 2008JSMTE..07..008J. DOI:10.1088/1742-5468/2008/07/P07008.
- ^ Roberto Navigli, Mirella Lapata. "An Experimental Study of Graph Connectivity for Unsupervised Word Sense Disambiguation". IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 32(4), IEEE Press, 2010, pp. 678–692.
- ^ Andrea Esuli and Fabrizio Sebastiani. "PageRanking WordNet synsets: An Application to Opinion-Related Properties" (PDF). In Proceedings of the 35th Meeting of the Association for Computational Linguistics, Prague, CZ, 2007, pp. 424–431. http://nmis.isti.cnr.it/sebastiani/Publications/ACL07.pdf. Retrieved June 30, 2007.
- ^ Wissner-Gross, A. D. (2006). "Preparation of topical readings lists from the link structure of Wikipedia". Proceedings of the IEEE International Conference on Advanced Learning Technology (Rolduc, Netherlands): 825. DOI:10.1109/ICALT.2006.1652568. http://www.alexwg.org/publications/ProcIEEEICALT_6-825.pdf.
- ^ "Working Papers Concerning the Creation of Google". Google. http://dbpubs.stanford.edu:8091/diglib/pub/projectdir/google.html. Retrieved November 29, 2006.
- ^ Cho, J., Garcia-Molina, H., and Page, L. (1998). "Efficient crawling through URL ordering". Proceedings of the seventh conference on World Wide Web (Brisbane, Australia). http://dbpubs.stanford.edu:8090/pub/1998-51.
- ^ Burns, Judith (2009-09-04). "Google trick tracks extinctions". BBC News. http://news.bbc.co.uk/2/hi/science/nature/8238462.stm. Retrieved 2011-05-27.
- ^ G. Ivan and V. Grolmusz (2011). "When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks". Bioinformatics (Vol. 27, No. 3. pp. 405-407) 27 (3): 405–7. DOI:10.1093/bioinformatics/btq680. PMID 21149343. http://bioinformatics.oxfordjournals.org/content/27/3/405.
- ^ "Preventing Comment Spam". Google. http://googleblog.blogspot.com/2005/01/preventing-comment-spam.html. Retrieved January 1, 2005.
- ^ "PageRank Sculpting: Parsing the Value and Potential Benefits of Sculpting PR with Nofollow". SEOmoz. http://www.seomoz.org/blog/pagerank-sculpting-parsing-the-value-and-potential-benefits-of-sculpting-pr-with-nofollow. Retrieved 2011-05-27.
- ^ "PageRank sculpting". Mattcutts.com. 2009-06-15. http://www.mattcutts.com/blog/pagerank-sculpting/. Retrieved 2011-05-27.
- ^ a b Susan Moskwa. "PageRank Distribution Removed From WMT". http://www.google.com/support/forum/p/Webmasters/thread?tid=6a1d6250e26e9e48&hl=en. Retrieved October 16, 2009
- ^ WhatCulture!. 6 October 2011. http://whatculture.com/technology/google-pagerank-is-not-dead.php. Retrieved 7 October 2011.
- ^ Google Panda Update: Say Goodbye to Low-Quality Link Building, Search Engine Watch, 08.02.11, http://searchenginewatch.com/article/2067687/Google-Panda-Update-Say-Goodbye-to-Low-Quality-Link-Building
- Altman, Alon; Moshe Tennenholtz (2005). "Ranking Systems: The PageRank Axioms" (PDF). Proceedings of the 6th ACM conference on Electronic commerce (EC-05). Vancouver, BC. http://stanford.edu/~epsalon/pagerank.pdf. Retrieved 2008-02-05.
- Cheng, Alice; Eric J. Friedman (2006-06-11). "Manipulability of PageRank under Sybil Strategies" (PDF). Proceedings of the First Workshop on the Economics of Networked Systems (NetEcon06). Ann Arbor, Michigan. http://www.cs.duke.edu/nicl/netecon06/papers/ne06-sybil.pdf. Retrieved 2008-01-22.
- Farahat, Ayman; LoFaro, Thomas; Miller, Joel C.; Rae, Gregory and Ward, Lesley A. (2006). "Authority Rankings from HITS, PageRank, and SALSA: Existence, Uniqueness, and Effect of Initialization". SIAM Journal on Scientific Computing 27 (4): 1181–1201. DOI:10.1137/S1064827502412875.
- Haveliwala, Taher; Jeh, Glen and Kamvar, Sepandar (2003). "An Analytical Comparison of Approaches to Personalizing PageRank" (PDF). Stanford University Technical Report. http://www-cs-students.stanford.edu/~taherh/papers/comparison.pdf.
- Langville, Amy N.; Meyer, Carl D. (2003). "Survey: Deeper Inside PageRank". Internet Mathematics 1 (3).
- Langville, Amy N.; Meyer, Carl D. (2006). Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press. ISBN 0-691-12202-4.
- Page, Lawrence; Brin, Sergey; Motwani, Rajeev and Winograd, Terry (1999). The PageRank citation ranking: Bringing order to the Web. http://dbpubs.stanford.edu:8090/pub/showDoc.Fulltext?lang=en&doc=1999-66&format=pdf&compression=.
- Richardson, Matthew; Domingos, Pedro (2002). "The intelligent surfer: Probabilistic combination of link and content information in PageRank" (PDF). Proceedings of Advances in Neural Information Processing Systems. 14. http://www.cs.washington.edu/homes/pedrod/papers/nips01b.pdf.
- Original PageRank U.S. Patent—Method for node ranking in a linked database—Patent number 6,285,999—September 4, 2001
- PageRank U.S. Patent—Method for scoring documents in a linked database—Patent number 6,799,176—September 28, 2004
- PageRank U.S. Patent—Method for node ranking in a linked database—Patent number 7,058,628—June 6, 2006
- PageRank U.S. Patent—Scoring documents in a linked database—Patent number 7,269,587—September 11, 2007