About the Alexa Traffic Rankings
A listing of all sites on the Web, sorted by traffic...
Alexa computes traffic rankings by analyzing the Web usage of millions of Alexa Toolbar users. The information is sorted, sifted, anonymized, counted, and computed, until, finally, we get the traffic rankings shown in the Alexa service. The process is relatively complex, but if you have a need to know, please read on.
What is Traffic Rank?
The traffic rank is based on three months of aggregated historical traffic data from millions of Alexa Toolbar users and is a combined measure of page views and users (reach). As a first step, Alexa computes the reach and number of page views for all sites on the Web on a daily basis. The main Alexa traffic rank is based on the geometric mean of these two quantities averaged over time (so that the rank of a site reflects both the number of users who visit that site as well as the number of pages on the site viewed by those users). The three-month change is determined by comparing the site's current rank with its rank from three months ago. For example, on July 1, the three-month change would show the difference between the rank based on traffic during the first quarter of the year and the rank based on traffic during the second quarter.
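The geometric-mean combination described above can be sketched in a few lines of Python; the ranks and the averaging window here are purely illustrative, not Alexa's actual data or code:

```python
import math

def traffic_rank_score(reach_rank, pageview_rank):
    """Combine a site's reach rank and page-view rank via the geometric
    mean, as described above. Lower scores correspond to better ranks."""
    return math.sqrt(reach_rank * pageview_rank)

# Hypothetical daily ranks for one site over three days
daily_reach_ranks = [120, 150, 100]
daily_pageview_ranks = [200, 180, 220]

# Average the combined score over the time period
scores = [traffic_rank_score(r, p)
          for r, p in zip(daily_reach_ranks, daily_pageview_ranks)]
average_score = sum(scores) / len(scores)
print(round(average_score, 1))
```

Because the geometric mean multiplies the two quantities, a site must do reasonably well on both reach and page views to earn a good combined rank; excelling at only one is not enough.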
What are sites and Web hosts?
Traffic is computed for sites, which are typically defined at the domain level. For example, the Web hosts www.msn.com, carpoint.msn.com and slate.msn.com are all treated as part of the same site, because they all reside on the same domain, msn.com. An exception is blogs or personal home pages, which are treated separately if they can be automatically identified as such from the URLs in question. Also, sites which are found to be serving the "same" content are generally counted together as the same site.
What is Reach?
Reach measures the number of users. Reach is typically expressed as the percentage of all Internet users who visit a given site. So, for example, if a site like yahoo.com has a reach of 28%, this means that of all global Internet users measured by Alexa, 28% of them visit yahoo.com. Alexa's one-week and three-month average reach are measures of daily reach, averaged over the specified time period. The three-month change is determined by comparing a site's current reach with its values from three months ago.
What are Page Views?
Page views measure the number of pages viewed by Alexa Toolbar users. Multiple page views of the same page made by the same user on the same day are counted only once. The page views per user numbers are the average numbers of unique pages viewed per user per day by the users visiting the site. The three-month change is determined by comparing a site's current page view numbers with those from three months ago.
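The once-per-user-per-day counting rule can be sketched like this; the log records are hypothetical and not Alexa's actual data format:

```python
def count_page_views(log):
    """Count page views, where repeat views of the same page by the
    same user on the same day are counted only once."""
    seen = set()
    for user, page, day in log:
        seen.add((user, page, day))
    return len(seen)

log = [
    ("alice", "/home", "2005-07-01"),
    ("alice", "/home", "2005-07-01"),  # repeat view, same day: not counted again
    ("alice", "/home", "2005-07-02"),  # new day: counted
    ("bob",   "/home", "2005-07-01"),  # different user: counted
]
print(count_page_views(log))  # 3
```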
How Are Traffic Trend Graphs Calculated?
The Trend graph shows you a three-day moving average of the site's daily traffic rank, charted over time. The daily traffic rank reflects the traffic to the site based on data for a single day. In contrast, the main traffic rank shown in the Alexa Toolbar and elsewhere in the service is calculated from three months of aggregate traffic data.
Daily traffic rankings will sometimes benefit sites with sporadically high traffic, while the three-month traffic ranking benefits sites with consistent traffic over time. Since we feel that consistent traffic is a better indication of a site's value, we've chosen to use the three-month traffic rank to represent the site's overall popularity. We use the daily traffic rank in the Trend graphs because it allows you to see short-term fluctuations in traffic much more clearly.
It is possible for a site's three-month traffic rank to be higher than any single daily rank shown in the Trend graph. On any given day there may be many sites that temporarily shoot up in the rankings. But if a site has consistent traffic performance, it may end up with the best ranking when the traffic data are aggregated into the three-month average. A good analogy is a four-day golf tournament: if a different player comes in first at each match, but you come in second at all four matches, you can end up winning the tournament.
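The three-day moving average used in the Trend graph can be sketched as follows; the daily ranks are hypothetical:

```python
def moving_average(values, window=3):
    """Return the moving average of `values` over the given window.
    The result has len(values) - window + 1 points."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Hypothetical daily traffic ranks for one site
daily_ranks = [1200, 900, 1500, 1100, 1000]
print(moving_average(daily_ranks))
```

Smoothing over three days damps out one-day spikes, which is why the Trend graph is steadier than the raw daily ranks would be.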
How are Movers & Shakers Calculated?
The movers and shakers list is based on changes in average reach (numbers of users). For each site on the net, we compute the average weekly reach and compare it with the average reach during previous weeks. The more significant the change, the higher the site will be on the list. The percent change shown on the Movers & Shakers list is based on the change in reach. It is important to note that the traffic rankings shown on the Movers & Shakers page are weekly traffic rankings; they are not the same as the three-month average traffic rankings shown in the other Alexa services and are not the same as the reach numbers used to generate the list.
Some Important Disclaimers
The traffic data are based on the set of toolbars that use Alexa data, which may not be a representative sample of the global Internet population. Known biases include (but are likely not limited to) the following:
Our users are disproportionately likely to visit sites that are featured on alexa.com such as amazon.com and archive.org, and traffic to these sites may be overcounted.
The extent to which our sample may overcount or undercount users of the various browsers is unknown. Alexa's sample includes users of Internet Explorer, Firefox and Mozilla browsers. The AOL/Netscape and Opera browsers are not supported, which means that sites operated by these companies may be undercounted.
The extent to which our sample may overcount or undercount users of various operating systems is unknown. Alexa's sample includes toolbars built for Windows, Macintosh and Linux.
The rate of adoption of Alexa software in different parts of the world may vary widely due to advertising locality, language, and other geographic and cultural factors. For example, to some extent the prominence of Chinese sites among our top-ranked sites reflects known high rates of general Internet usage in China, but there may also be a disproportionate number of Chinese Alexa users.
In some cases traffic data may also be adversely affected by our "site" definitions. With tens of millions of hosts on the Internet, our automated procedures for determining which hosts are serving the "same" content may be incorrect and/or out-of-date. Similarly, the determinations of domains and home pages may not always be accurate. When these determinations change (as they do periodically), there may be sudden artificial changes in the Alexa traffic rankings for some sites as a consequence.
The Alexa Toolbar turns itself off on secure pages (https:). Sites with secure page views will be under-represented in the Alexa traffic data.
In addition to the biases above, the Alexa user base is only a sample of the Internet population, and sites with relatively low traffic will not be accurately ranked by Alexa due to the statistical limitations of the sample. Alexa's data come from a large sample of several million Alexa Toolbar users; however, this is not large enough to accurately determine the rankings of sites with fewer than roughly 1,000 total monthly visitors. Generally, Traffic Rankings of 100,000+ should be regarded as not reliable because the amount of data we receive is not statistically significant. Conversely, the more traffic a site receives (the closer it gets to the number 1 position), the more reliable its Traffic Ranking becomes.
You may be wondering about "ia_archiver" and be curious about why it is visiting your site, or you may want to invite the robot to crawl your site. To block ia_archiver from crawling your site, please read below.
The Alexa crawler (robot), which identifies itself as ia_archiver in the HTTP "User-agent" header field, uses a web-wide crawl strategy. Basically, it starts with a list of known URLs from across the entire Internet, then it fetches local links found as it goes. There are several advantages to this approach, most importantly that it creates the least possible disruption to the sites being crawled.
We will not index anything you would like to remain private. All you have to do is tell us. How? By using the Standard for Robot Exclusion (SRE).
The SRE was developed by Martijn Koster at Webcrawler to allow content providers to control how robots behave on their sites. All of the major Web-crawling groups, such as AltaVista, Inktomi, and Google, respect this standard. Alexa Internet strictly adheres to the standard:
The Alexa crawler looks for a file called "robots.txt". Robots.txt is a file website administrators can place at the top level of a site to direct the behavior of web crawling robots.
The Alexa crawler will always pick up a copy of the robots.txt file prior to its crawl of the Web. If you change your robots.txt file while we are crawling your site, please let us know so that we can instruct the crawler to retrieve the updated instructions contained in the robots.txt file.
To exclude all robots, the robots.txt file should look like this:
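Following the Standard for Robot Exclusion, a wildcard user-agent record with a blanket disallow:

```
User-agent: *
Disallow: /
```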
To exclude just one directory (and its subdirectories), say, the /images/ directory, the file should look like this:
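A disallow rule naming only that directory:

```
User-agent: *
Disallow: /images/
```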
Web site administrators can allow or disallow specific robots from visiting part or all of their site. Alexa's crawler identifies itself as ia_archiver, and so to allow ia_archiver to visit (while preventing all others), your robots.txt file should look like this:
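An empty Disallow line gives ia_archiver full access, while the wildcard record excludes all other robots:

```
User-agent: ia_archiver
Disallow:

User-agent: *
Disallow: /
```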
To prevent ia_archiver from visiting (while allowing all others), your robots.txt file should look like this:
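A record naming ia_archiver alone excludes it without affecting other robots:

```
User-agent: ia_archiver
Disallow: /
```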
For more information regarding robots, crawling, and robots.txt visit the Web Robots Pages at www.robotstxt.org, an excellent source for the latest information on the Standard for Robots Exclusion.
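Rules like the ones above can be checked programmatically. Here is a sketch using Python's standard urllib.robotparser module; the robots.txt contents and URLs are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks ia_archiver from /private/ only
rules = """\
User-agent: ia_archiver
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# ia_archiver may fetch the home page but not anything under /private/
print(parser.can_fetch("ia_archiver", "http://example.com/"))           # True
print(parser.can_fetch("ia_archiver", "http://example.com/private/x"))  # False
```

In a real crawler you would call set_url() with the site's robots.txt address and read() to fetch it; parse() is used here so the example is self-contained.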
Fill out the form below to be crawled by Alexa.
There are a few reasons that Alexa may not have visited your site. Perhaps your site is new, or we haven't discovered any links on the web that lead to your site. Or perhaps we haven't had any Alexa users visit your site. It is also possible that your web site administrator has disallowed crawlers from visiting your site - please read the information about robots.txt that we have provided above.
In any event, simply by visiting your site with the Alexa Toolbar open, Alexa will learn of your site and add it to our list of sites to visit, thus ensuring your inclusion in the Alexa service and in the Alexa archive.
If you are the type of person who won't be satisfied until you get to click a button that says "Crawl My Site," then we have just the form for you. Simply type your site's web address into the box below, and then click the button. Alexa will include your site in the next crawl of the web, usually within 8 weeks of submission.
Alexa is gathering Web information and learning from content and paths to create the Alexa Service.
How and Why We Crawl the Web
Alexa is continually crawling all publicly available web sites to create a series of snapshots of the Web. We use the data we collect to create features and services:
Site Information: Traffic rankings, pictures of sites, links pointing to sites, and more
Related Links: Sites that are similar to the one you are currently viewing
Alexa has been crawling the Web since early 1996, and we have continually increased the amount of information that we gather. We are currently gathering approximately 1.6 Terabytes (1600 gigabytes) of Web content per day. After each snapshot of the Web, which takes approximately two months to complete, Alexa has gathered 4.5 Billion pages from over 16 million sites.
To access Alexa's vast information about the Web, please visit Alexa Web Information Service. To keep Alexa from crawling your site, please visit this page.
Gathering Web Usage Information
In addition to the Alexa Crawl, which can tell us what is on the Web, Alexa utilizes web usage information, which tells us what is being seen on the web. This information comes from the community of Alexa Toolbar users. Each member of the community, in addition to getting a useful tool, is giving back. Simply by using the toolbar, each member contributes valuable information about the web, how it is used, what is important and what is not. This information is returned to the community with improved Related Links, Traffic Rankings and more.
Finding Patterns in Data
The Alexa services are derived from our uniquely powerful combination of Web content and usage information.
Alexa gathers Site Stats from a variety of sources to provide key statistics about each site on the web. These include Traffic Rank and Speed, which are derived from Web usage information, as well as Other sites that link to this site and Online Since, both of which come from Web content. For an example of Site Stats, see the Alexa Overview page for Schwab.com.
Alexa provides contact information for Web sites by mining for Web content gathered in the crawl. This information includes Site Owner, Address, Phone Number and contact e-mail address. See Contact Info for Schwab.com.
Web usage information is utilized to provide information about the number of page views and number of users that Web sites receive. This data is also the basis for the Alexa traffic rank and traffic history graphs. See Traffic Details for Schwab.com.
Our goal for these features is to help people navigate the Web more efficiently by giving them all the information they need to make informed decisions about the sites they visit.
Whenever an Alexa Toolbar user visits a web page, the Alexa Toolbar retrieves information from the Alexa servers to suggest other pages that might be of interest to the user. To generate Related Links, we use several techniques, including:
The usage paths of the collective Alexa community: this is the most important source of our information, since these paths show us which web sites our users believe are important and interesting.
Clustering - the hundreds of millions of links on the Web can be used to find clusters of sites that are similar and relevant to one another. We mine this data by using custom databases to find and identify these clusters.
Users' suggestions - we consider our users' suggestions to augment our Related Links recommendations.
The Alexa Toolbar
The Alexa toolbar is a program written by Alexa Internet that users install into the browser. Every time the user changes pages, the Alexa toolbar communicates with Alexa servers to retrieve information which is then displayed in the toolbar.
Donation of the Information to the Internet Archive
As a service to future historians, scholars, and other interested parties, Alexa Internet donates a copy of each crawl of the Web to the Internet Archive, a (501(c)3) nonprofit organization committed to the long-term preservation and maintenance of a growing collection of data about the Web. At Alexa, we believe that saving and preserving our early digital heritage is important today and essential for future generations. We also believe that a public charity is the best kind of organization for preserving this global asset. More information about accessing archived materials is available at the Internet Archive.
Everything You Need to Know About Link Popularity
The number of websites that link to your website is one of the factors that help search engines determine your relevancy for a search term. Link popularity and gaining new links from outside websites to your website have proven to be a popular concept for people seeking to improve their search engine rankings.
What is link popularity and how exactly does it work? Search engines don't just look at the content of your website to determine if you are a match for a search. They also look at the number of outside websites that can validate, by linking, that you are a good match.
Search engines have also begun to rank the importance of the sites that link to you. This means if the New York Times links to your site, your credibility is higher than if Joe's Online Newspaper provides a link to your website. Search engines also consider the text contained in the link that is pointing to your website. If the text in the links contains keywords you are trying to compete for, the search engines consider your site to have even greater credibility.
Since link popularity has become a factor that people feel like they have some influence over in determining their search engine positioning, many solutions have been proposed for growing your online link popularity. One of the more popular ways is also one of the least effective.
Several software programs have been written that help you create lists of websites in your space that might be willing to link to you. These programs also help you gather the email addresses for these sites and even help you craft an email requesting that the site add a link to yours.
The concept sounds good but the results are often mixed. If you use one of these tools and simply follow the templates they give you, your email will read like a spam message that won't be taken seriously.
The best way to build long-term link popularity is to offer good content and features that provide real value to your audience. As people discover your website and realize its benefits, the likelihood of them linking to your website naturally increases.
There are several critical targets if you want to build up your link popularity without appearing to be a spammer. The first is good links from Yahoo and the Open Directory Project. Both of these sites are human based directories that have a lot of influence over search results. If your site is listed in the correct category and has a good description, links from these two websites are seen as validating you are the real thing.
The second place it's important to have a link from is topic specific or niche directories. These are websites that are dedicated to news and information that is an exact match for what you provide online. If you have a website that deals with tractor parts, being listed on sites that focus on tractors is very important. In the case where a niche website doesn't know about your website, it's okay to ask them to link to your website. But your message should be personalized to them and also tell them the benefit or feature their users will get from linking to you.
Another part of the web that helps build your link popularity is resource sites. Resource sites are lists of links that people put up on their own. These pages are often spread amongst friends and readers who find good information available from the resource. In order to reach this audience, a good PR campaign and press releases can ensure that these individuals know you exist and have a link to your website that they can easily include.
One of the most overlooked spots for building link popularity is links from partners and vendors for your business. Because you already have a business relationship with these companies or individuals, you are more likely to be able to request and receive a link from their website. These websites help validate your place online and also establish you within a community of websites online. If you are visible to the community, you are more visible to the search engines.
Link popularity also starts at home. You must make sure your link architecture is solid and easily followed by search engines. That's the first way that search engines see you. It's also the way that visitors find information within your website. The easier you make it on your audience to find good information, the more likely they are to link to it.
The final way you can work to increase your link popularity is to participate in newsletters and online forums that relate to your website. You don't want to just jump in and give a plug for your URL. You must participate in the discussion as an expert or authority who gives good advice. When you sign your name at the bottom of your posting, be sure to include a signature that includes a link to your website. If these forums and newsletters are archived and remain online, search engines continue to see them and the links they contain.
If your website doesn't have a lot of content and you are wondering how you can build your link popularity, you should think about building a tool or feature on your website that will be valuable to your online audience. Marketleap's Search Engine Marketing tools are a good example of a feature built to generate link popularity.
Marketleap's free tools provide unique data for search engine marketers that they can't find other places. We've also made it possible for people to place our tools on their sites easily by cutting and pasting a piece of HTML code into their web page. Because the tools are valuable to our community, many websites have linked to the tools or added the tool to their own website. Consequently, if you search for "link popularity" at Google, Marketleap will usually appear in the top 3 results on the first page.
Link popularity will continue to be an essential factor in successful search engine marketing initiatives for the foreseeable future. Links are a helpful tool for search engines trying to wade through billions of documents and find ones that are relevant to their users.
Meta tags give information about your site to web servers, search services, and other programs. Meta tags are different from other XHTML elements, because meta tags give information about the pages themselves rather than about the content of the pages. Meta tags are not displayed by browsers for human use but are processed by the browsers and other software for their own use.
Keep in mind that not all search services use meta tags, and the ones that do use them may not use all of them. Thus, meta tags may help you get more hits on your web site, but they are not a cure-all. For example, the current trend is that search services get keywords from the text in your pages instead of from meta tags. Still, some services may use meta tags, so the tags should be included in your pages.
Meta Tag Syntax:
Unless you have an editor that creates meta tags, you will have to go to the XHTML level of editing to insert them. Let's discuss the syntax of meta tags.
Because meta tags are XHTML tags, they are enclosed in angle brackets. Each meta tag has three dedicated words, meta, name and content. Meta identifies the tag as being a meta tag. Name assigns a property to the tag. Content assigns a value to the property.
When search services return results, the results include short descriptions of the pages. Some services obtain descriptions from the text of the page, while other services use descriptions given in description meta tags. By providing brief, legible descriptions of your pages in meta tags, you are able to control the descriptions used by the services that use meta tags. Keep the descriptions to 150 characters, including spaces and punctuation. Include in the description the keywords that are important to the page.
<meta name="description" content="A web site giving hints and tips for those who love camping in the outdoors the scouting way." />
People searching the Internet specify keywords to be searched. The search services check their databases for web pages that are indexed by the keywords. At the present time, most search services extract keywords from the page text, but some services may use the keywords meta tag. Keywords meta tags have the following syntax (all one line).
<meta name="keywords" content="scouts,camping,tents,hiking,cooking,food,hammocks,bsa" />
Notice the keywords are separated by commas, and the string of keywords is enclosed in quotation marks (double quotes). The number of characters allowed in the content of the meta tag by search services probably varies with the service, but plan for around 1000 characters. Spaces are not used after the commas, because they aren't needed and would only make the content of the meta tag longer. Have short phrases as well as single words between the commas.
Choose carefully the keywords you insert into meta tags. Pretend you are searching the web for sites that have themes similar to your site. List keywords you think people might typically use. Include popular synonyms of your keywords (since search services are now extracting keywords from text, the inclusion of synonyms is one of the most important uses of keyword meta tags). Also include common misspellings of the keywords. Include phrases of two or three words that might be used by persons searching for your site. These phrases are of special importance, because when people search on single words, they usually get more hits than they can examine. People then enter phrases of two, three, or more words to narrow the searches. If you place the same phrases in your meta tags, your site will more likely come up earlier in the searches. Make nouns plural to catch both singular and plural words.
Do not enter keywords more than once or twice. Words in phrases should not also be entered as individual words. Search engines check for repetitions of words and penalize pages that have high repetition rates. This penalization varies with the engine, so to be safe, enter words only one or two times.
The meta tag keyword list is ordered in importance from left to right. Add keywords that are unique to the page to the beginning of the meta tag list. Then end the list with keywords that are unique to your site. By doing this, the most important keywords will be placed at the beginning of the list.
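The ordering and de-duplication rules above can be sketched in Python; the helper name and the keyword lists are illustrative only:

```python
def build_keywords_content(page_keywords, site_keywords):
    """Build a keywords meta-tag content string: page-specific keywords
    first, site-wide keywords last, duplicates removed, comma-separated
    with no spaces after the commas (as recommended above)."""
    seen = set()
    ordered = []
    for kw in page_keywords + site_keywords:
        kw = kw.strip().lower()
        if kw and kw not in seen:
            seen.add(kw)
            ordered.append(kw)
    return ",".join(ordered)

content = build_keywords_content(
    ["tents", "camping gear"],             # unique to the page
    ["scouts", "camping", "camping gear"]  # site-wide; duplicate is dropped
)
print(content)  # tents,camping gear,scouts,camping
```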
A common but unethical practice is to create a framed page and link it to another site. The linked page will be displayed, but because the link is in a framed page, the browser will display the address of the framed page, not the address of the page at the link. The result is that the displayed page appears to be published under the domain of the framed page. To avoid this happening to your site, place the "window-target" meta tag in your pages.
<meta http-equiv="window-target" content="_top" />
Robots are spider or crawler programs that are "owned" by search services and spammers. The robots constantly roam the web and index web sites into their search databases. The robots meta tag, if it is honored by the spiders, allows you to control which pages are indexed and if local links on those pages are followed to obtain more pages to index. Replace xxx with "index", "noindex", "follow", "nofollow", "all", "none", or combinations of those words. "all" is the same as "index,follow", and "none" is the same as "noindex,nofollow".
<meta name="robots" content="xxx" />
Meta Tag Location
Meta tags are placed anywhere within the header of your page. For example, a keywords meta tag could be placed in the header as follows.
<meta name="keyword" content="dog,cat,horse,pig,cow,bird" />