Search Engine Optimization (SEO)

Improving search results

Use these tips and guidelines to improve the search experience for users trying to find your content through Google and most other search engines.

Content and Design
Frequently update content on key pages
Fresh content is one of the major factors in search engine optimization (SEO). Updating content on key pages is the simplest way to ensure search engine bots return frequently and sends positive signals to human visitors that important information is current. [Noel-Levitz 2012 E-Expectations Report]
Make web pages for users, not for search engines
Create a useful, information-rich site. Write pages that clearly and accurately describe your content. Don't load pages with irrelevant words. Think about the words users would type to find your pages, and make sure that your site actually includes those words.
Focus on text
Focus on the text on your site. Make sure that your TITLE and ALT tags are descriptive and accurate. Since the Google crawler doesn't recognize text contained in images, avoid graphical text; put important information in the page text itself and describe images with their ALT attributes. When linking to non-HTML documents, use descriptive anchor text that tells users (and crawlers) what each linked document contains.
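For example, markup along these lines gives the crawler text to index (the file names and wording here are purely illustrative):

<title>Undergraduate Admissions | University of Vermont</title>
<img src="campus-green.jpg" alt="Students crossing the UVM campus green in autumn" />
<a href="catalog.pdf">Undergraduate Course Catalog (PDF)</a>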
Make your site easy to navigate
Make a site with a clear hierarchy of hypertext links. Every page should be reachable from at least one hypertext link. Offer a site map to your users with hypertext links that point to the important parts of your site. Keep the links on a given page to a reasonable number (fewer than 100).
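A site map, for instance, can be nothing more than a nested list of plain hypertext links (the page names here are hypothetical):

<ul>
  <li><a href="/about/">About the Department</a></li>
  <li><a href="/courses/">Courses</a>
    <ul>
      <li><a href="/courses/fall/">Fall Course Offerings</a></li>
    </ul>
  </li>
  <li><a href="/contact/">Contact Us</a></li>
</ul>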
Ensure that your site is linked
Ensure that your site is linked from all relevant sites both within and outside of uvm.edu. Interlinking between sites and within sites gives the Google crawler more paths to your content and improves the quality of search results.
Technical
Make sure that the Google crawler can read your content
Validate all HTML content to ensure that the HTML is well-formed. Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If extra features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine crawlers may have trouble crawling your site.
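For example, Lynx's text dump shows roughly what a crawler sees (the URL is illustrative):

lynx -dump http://www.uvm.edu/yourdept/

If your navigation or content is missing from the output, crawlers are likely missing it too.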
Allow search bots to crawl your sites without session IDs or arguments that track their path through the site. These techniques are useful for tracking individual user behavior, but the access pattern of bots is entirely different. Using these techniques may result in multiple copies of the same document being indexed for your site, as crawl robots will see each unique URL (including session ID) as a unique document.
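For example, a crawler would index these hypothetical URLs as three separate documents, even though all three serve the same page:

http://www.uvm.edu/yourdept/page.html?sessionid=123
http://www.uvm.edu/yourdept/page.html?sessionid=456
http://www.uvm.edu/yourdept/page.html?sessionid=789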
Ensure that your site's internal link structure provides a hypertext link path to all of your pages. The Google search engine follows hypertext links from one page to the next, so pages that are not linked to by others may be missed.
Understand why some documents may be missing from the index
Each time that Google updates its database of web pages, the documents in the index can change. Here are a few examples of reasons why pages may not appear in the index:
  • Your content pages may have been intentionally blocked by a robots.txt file or ROBOTS meta tags (see the examples after this list).
  • Your web site was inaccessible when the crawl robot attempted to access it due to a network or server outage. If this happens, Google will retry multiple times, but if the site cannot be crawled, it will not be included in the index.
  • The Google crawl robot cannot find a path of links to your site from the starting points it was given.
  • Your content pages may not be considered relevant to the query you entered. Ensure that the query terms exist on your target page.
  • Your content pages contain invalid HTML code.
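For the first point above: a robots.txt file at the Web server root blocks crawlers by path. This hypothetical file, for example, blocks all crawlers from a /private/ directory:

User-agent: *
Disallow: /private/

The page-level equivalent is the ROBOTS meta tag described later in this guide:

<meta name="robots" content="noindex" />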
Avoid using frames
The Google search engine supports frames only partially. Frames tend to cause problems with search engines, bookmarks, e-mail links and so on, because frames don't fit the conceptual model of the web (where every document corresponds to a single URL).
Searches that return framed pages will most likely only produce hits against the "body" HTML page and present it back without the original framed "Menu" or "Header" pages. Google recommends that you use tables or dynamically generate content into a single page (using ASP, JSP, PHP, etc.), instead of using FRAME tags. This will ultimately maintain the content owner's originally intended look and feel, as well as allow most search engines to properly index your content.
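As a sketch of the single-page approach, a PHP page (the file names here are hypothetical) can assemble a shared header, menu, and footer into one document with a single URL:

<?php include 'header.php'; ?>
<?php include 'menu.php'; ?>
<p>Page content that search engines can index under one URL.</p>
<?php include 'footer.php'; ?>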
Avoid placing content and links in script code
Most search engines do not read any information found in SCRIPT tags within an HTML document. This means that content within script code will not be indexed, and hypertext links within script code will not be followed when crawling. When using a scripting language, make sure that your content and links are outside SCRIPT tags. Investigate alternative HTML techniques for dynamic pages, such as HTML layers.
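For example, a crawler will not follow the first link below because it is written by script, but it will follow the second (the URL is illustrative):

<script type="text/javascript">
  document.write('<a href="schedule.html">Course Schedule</a>');
</script>

<a href="schedule.html">Course Schedule</a>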
Measuring SEO Success Rates Using Google Analytics

To determine whether your SEO efforts are paying off and whether your content serves your users well, you'll need to track a few key metrics that show how users actually consume your content. Pages visited is a good start, but the metrics below will tell you more.

Time on Site:
Keeping track of time on site with a web analytics tool like Google Analytics is a good way to gauge whether people find your content worth reading. The target duration for a visit varies with the nature of your site and content: some page types are meant to be skimmed quickly, while others reward a longer read.
Bounce Rates:
Google has publicly stated that bounce rate does not factor into its ranking calculation; however, bounce rate (which a web analytics tool like Google Analytics can also report) can give you useful information about user experience. A healthy bounce rate for a site that produces a large volume of content is 70% or less. Keep a few things in mind when looking at this metric: evaluate it on a page-by-page basis, since each page will have its own bounce rate, and some pages will naturally run higher than others. That's okay; you would expect it from a 'Contact Us' page, for example.
Clickthrough Rates (CTR):
There are several different types of clickthrough rates you can look at, but we recommend two in particular. First, track the clickthrough rates of your search listings, which Google Analytics will provide if you have Webmaster Tools set up for your website. Generally speaking, CTRs will be lower where you do a poor job of communicating what your site is about -- meaning your page title, URL, and description don't align, or your site structure is poor. Second, track the CTRs for the various calls-to-action (CTAs) on your web pages. Remember, one of the main goals of getting your web pages to rank well in search is to get people to click through to an offer. Your CTA clickthrough rates on those pages therefore tell you how effectively your traffic is being routed to the landing pages for your offers.
Conversion Rates:
Once you get people from search engines to your landing pages, they still need to convert. Conversion rates should be tied directly to your business goals: a conversion might be completing an application, signing up for a mailing list, or downloading a brochure.
Social Signals:
Social media is about relationships, and your social signals are the metrics that help you determine whether your content is being shared in social media -- and the impact it's having. Beyond tracking the number of Likes and shares for your content, also consider traffic from social media (and from individual social networks), your overall social media reach, and how many leads and customers you can attribute to your social media presence. Remember that social media influences SEO, so don't ignore it.

Preventing Google™ from Searching Your Site

There may be occasions when you wish to prevent Google™ or other search engines from searching your Web site or a particular page on it. Complete Web sites can be excluded from search engines using the robots.txt file located at the Web server root; contact your server administrator for more information. At the page level, you can instruct search engines to ignore a page completely, to index a page but ignore all the links on it, or to prevent a cache (copy) of the page from being preserved on the Google™ servers. Additionally, you can prevent an entire Web site constructed with the UVM Web Publishing System from being indexed by search engines using the custom meta tags feature.

Preventing the Indexing of a Web Page

To prevent search engines from indexing a page on your site, place the following meta tag in the <head> section of your page:

<meta name="robots" content="noindex, nofollow" />
To allow search engines to index the page but instruct them not to follow outgoing links, eliminate the "noindex" portion of this example, leaving only "nofollow".
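In that case, the tag becomes:

<meta name="robots" content="nofollow" />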

Preventing a Caching of a Web Page

To prevent a search engine from keeping a copy (cached version) of your Web page, use the following meta tag in the <head> section of the desired Web page:

<meta name="robots" content="noarchive" />

Removing a Deleted Page from Google's™ Cache

If you have deleted a page but a cached version still shows up when you search Google.com, you can submit a request to Google for the "outdated link" to be removed.

Last modified November 02 2012 11:05 AM