Creating a video sitemap for Google search

Adding video to your website is a key element today, just as adding social media elements is a must-do in today's world of organic search. To gain page-one video placement, you need the video indexed by Google. We all know about submitting an XML sitemap to gain organic traffic to our content; however, many do not know to add a video sitemap as well. Check with your webmaster or agency to be sure they know to do this. The Quick Online blog had a great article on this a few months back, and it is worth revisiting, or visiting for the first time, especially if you are adding videos to your site or are already using video. The post also addresses hosting videos on your site versus feeding them from YouTube or DailyMotion.

How to Get Your Videos Indexed by Google, by the Quick Online blog
March 1st, 2010
Guest post by Keiros.
How to get your videos listed on Google? Have you noticed, while performing a search on Google, the video thumbnails displayed on the left side of some results? They are definitely great traffic attractors, and having such a video associated with your site when you sit in 5th or 6th position on the search results page may be better in terms of click-through rate than being in 1st position without one.
There is an additional bonus to having a video associated with your site: on some occasions Google displays a block titled “Video results for Keyword”, often in the 3rd to 5th position, containing several videos. When that is the case, you can jump several positions and land directly on the first page by joining an already existing video-results block.

The trick is to find a search query for which your page already ranks well but is not yet on the first page, and for which Google displays a Video Results block on the first page: you will benefit from the optimization work of those who made their way there.

Found the perfect search query? All you need to do now is get your video indexed by Google, and that's the easiest part!

1. Put a video on your page: I mean a real video, hosted on your site, not on YouTube or DailyMotion. You need a .flv file (or .mpg, .avi, .mp4) and a video plugin that supports these formats. Jing is perfect for recording onscreen action and making nice video tutorials.
2. Create a thumbnail of your video: this is the thumbnail Google will display on its search results page, so making a good-quality thumbnail from your video is an important step. Personally, I've found that a 160×120 GIF or JPG works well for this purpose, and it doesn't actually need to be extracted from your video. Any image pertinent to the content of your video will do the trick.
3. Create a video sitemap: video sitemaps are XML files similar to standard sitemaps but with a different syntax. See the example below.
4. Submit your video sitemap to Google: video sitemaps can be submitted in Google Webmaster Tools exactly like a standard sitemap. If you have not yet registered your site in Google Webmaster Tools, this is the perfect occasion to do it.

Video Sitemap example

(See the HTML on the Quick Online site.)
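
Since the original markup lives on the Quick Online site, here is a rough sketch of what such a file looks like, using Google's video sitemap namespace (all URLs, titles and file names below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <!-- the page on your site that embeds the video -->
    <loc>http://www.example.com/video-page.html</loc>
    <video:video>
      <!-- the thumbnail Google shows in results; a 160x120 jpg or gif works well -->
      <video:thumbnail_loc>http://www.example.com/thumbs/tutorial.jpg</video:thumbnail_loc>
      <video:title>How to create a video sitemap</video:title>
      <video:description>A short tutorial; put your main keywords here, meaningfully.</video:description>
      <!-- direct link to the video file hosted on your own site -->
      <video:content_loc>http://www.example.com/videos/tutorial.flv</video:content_loc>
    </video:video>
  </url>
</urlset>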

The video-specific XML tags are pretty straightforward, but if you need more detail, check the Creating Video Sitemaps page on Google Webmaster Central. Don't forget to optimize your video sitemap by inserting your main keywords in the video tags; done meaningfully, it will give your ranking an additional boost.

Guest author Keiros is an outsourcing and infrastructure management consultant, and a contributor to SEO Chat Forum. You can also write guest posts and share your Google tips with our readers.

Creating a sitemap on Google, Bing, Yahoo and Ask.

An XML sitemap is a list of the pages on your website, and each search engine needs it built to its own specifications and requirements. Creating and submitting a sitemap helps make sure that Google, Yahoo, Bing and Ask know about all the pages on your site, including URLs that may not be found by the normal crawling process. Having a sitemap for your website is key to getting crawled by the various search engine bots that I wrote about in a previous blog post on the 5 kinds of search bots. Your webmaster or agency should have all this information at their fingertips; if they do not, you may not be getting the search engine coverage you want for your site.

Knowing what to ask goes beyond "do we have a sitemap?" or "do we have an XML feed?" The key is to ask whether they have done all the formatting needed to make the XML sitemap work on all the search engines. Having each engine's requirements at your fingertips is priceless.

Here is the information I use from each of the search engines on what they require and how to submit:

____________________________________________________________________

Creating and submitting sitemaps to Google

The Google site says:

“About Sitemaps

Sitemaps are a way to tell Google about pages on your site we might not otherwise discover. In its simplest terms, an XML Sitemap—usually called a Sitemap, with a capital S—is a list of the pages on your website. Creating and submitting a Sitemap helps make sure that Google knows about all the pages on your site, including URLs that may not be discoverable by Google's normal crawling process.

In addition to regular Sitemaps, you can also create Sitemaps designed to give Google information about specialized web content, including video, mobile, News, Code Search, and geographical (KML) information.

When you update your site by adding or removing pages, tell Google about it by resubmitting your Sitemap. Google doesn’t recommend creating a new Sitemap for every change.”
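
For reference, here is a minimal sketch of a standard XML sitemap in the sitemaps.org format (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2010-03-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>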
______________________________________________________________________________

Bing Webmaster Tools

The Bing site says: "To submit an XML-based Sitemap to Bing:

Step 1: Copy and paste the entire URL below as a single URL into the address bar of your browser:

     http://www.bing.com/webmaster/ping.aspx?sitemap=www.YourWebAddress.com/sitemap.xml

Step 2: Change “www.YourWebAddress.com” to your domain name

Step 3: Press ENTER”
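
For example, if your site were example.com (a placeholder), the final URL would be:

     http://www.bing.com/webmaster/ping.aspx?sitemap=www.example.com/sitemap.xml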

_________________________________________________________________________

How Yahoo! supports sitemaps

“How Yahoo! Supports Sitemaps

Yahoo! supports the Sitemaps format and protocol as documented on www.sitemaps.org.

You can provide us a feed in the following supported formats. We recognize files with a .gz extension as compressed files and decompress them before parsing.

  • RSS 0.9, RSS 1.0, or RSS 2.0, for example, CNN Top Stories
  • Sitemaps, as documented on  www.sitemaps.org
  • Atom 0.3, Atom 1.0, for example, Yahoo! Search Blog
  • A text file containing a list of URLs, each URL at the start of a new line. The filename of the URL list file must be urllist.txt. For a compressed file the name must be urllist.txt.gz.
  • Yahoo! supports feeds for mobile sites. Submitting mobile Sitemaps directs our mobile search crawlers to discover and crawl new content for our mobile index. When submitting a feed that points to content on the mobile web, please indicate whether the encoding of the content was done with xHTML or WML.

    In addition to submitting your Sitemaps to Yahoo! Search through Yahoo! Site Explorer, you can:

  • Send an HTTP request. To send an HTTP request, please use our ping API.
  • Specify the Sitemap location in your site’s robots.txt file. This directive is independent of the user-agent line, so you can place it wherever you like in your file.
  • Yahoo! Search will retrieve your Sitemap and make the URLs available to our web crawler. Sitemaps discovered by these methods cannot be managed in Site Explorer and will not show in the list of feeds under a site.

    Note: Using Sitemap protocol supplements the other methods that we use to discover URLs. Submitting a Sitemap helps Yahoo! crawlers do a better job of crawling your site. It does not guarantee that your web pages will be included in the Yahoo! Search index.”
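
    For the plain-text option, urllist.txt is simply one URL per line, for example (example.com is a placeholder):

    http://www.example.com/
    http://www.example.com/about.html
    http://www.example.com/blog/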

    ___________________________________________________________________

    Ask.com webmaster FAQ

    “Web Search

    The Ask.com search technology uses semantic and extraction capabilities to recognize the best answer from within a sea of relevant pages. Instead of 10 blue links, Ask delivers the best answer to a user's question right at the top of the page. Using an established technique pioneered at Ask, our search technology uses click-through behavior to determine a site's relevance and extract the answer. Rather than presenting text snippets of the destination site, this technology presents the actual answer to a user's question without requiring an additional click-through. Underpinning these advancements are Ask.com's innovative DADS, DAFS, and AnswerFarm technologies, which break new ground in the areas of semantic search, web extraction and ranking. These technologies index questions and answers from numerous and diversified sources across the web. Ask then applies its semantic search advancements in clustering, rephrasing, and answer relevance to filter out insignificant and less meaningful answer formats. In order to extract and rank existing answers, as opposed to merely ranking web pages, Ask.com continues to develop unique algorithms and technologies that are based on new signals for evaluating relevancy specifically tuned to questions.

    The Ask Website Crawler FAQ

    Ask’s Website crawler is our Web-indexing robot (or crawler/spider). The crawler collects documents from the Web to build the ever-expanding index for our advanced search functionality at Ask and other Web sites that license the proprietary Ask search technology.

    Ask search technology is unique from any other search technology because it analyzes the Web as it actually exists — in subject-specific communities. This process begins by creating a comprehensive and high-quality index. Web crawling is an essential tool for this approach, and it ensures that we have the most up-to-date search results.

    On this page you’ll find answers to the most commonly asked questions about how the Ask Website crawler works. For these and other Webmaster FAQs, visit our Searchable FAQ Database.

    Frequently Asked Questions

    1. What is a website crawler?
    2. Why does Ask use a website crawler?
    3. How does the Ask crawler work?
    4. How frequently will the Ask Crawler index pages from my site?
    5. Can I prevent Teoma/Ask search engine from showing a cached copy of my page?
    6. Does Ask observe the Robot Exclusion Standard?
    7. Can I prevent the Ask crawler from indexing all or part of my site/URL?
    8. Where do I put my robots.txt file?
    9. How can I tell if the Ask crawler has visited my site/URL?
    10. How can I prevent the Ask crawler from indexing my page or following links from a particular page?
    11. Why is the Ask crawler downloading the same page on my site multiple times?
    12. Why is the Ask crawler trying to download incorrect links from my server? Or from a server that doesn’t exist?
    13. How did the Ask Website crawler find my URL?
    14. What types of links does the Ask crawler follow?
    15. Can I control the rate at which the Ask crawler visits my site?
    16. Why has the Ask crawler not visited my URL?
    17. Does Ask crawler support HTTP compression?
    18. How do I register my site/URL with Ask so that it will be indexed?
    19. Why aren’t the pages the Ask crawler indexed showing up in the search results?
    20. Can I control the crawler request rate from Ask spider to my site?
    21. How do I authenticate the Ask Crawler?
    22. Does Ask.com support sitemaps?
    23. How can I add Ask.com search to my site?
    24. How can I get additional information?

     

    Q: What is a website crawler?
    A: A website crawler is a software program designed to follow hyperlinks throughout a Web site, retrieving and indexing pages to document the site for searching purposes. The crawlers are innocuous and cause no harm to an owner’s site or servers.

    Q: Why does Ask use website crawlers?
    A: Ask utilizes website crawlers to collect raw data and gather information that is used in building our ever-expanding search index. Crawling ensures that the information in our results is as up-to-date and relevant as it can possibly be. Our crawlers are well designed and professionally operated, providing an invaluable service that is in accordance with search industry standards.

    Q: How does the Ask crawler work?

    • The crawler goes to a Web address (URL) and downloads the HTML page.
    • The crawler follows hyperlinks from the page, which are URLs on the same site or on different sites.
    • The crawler adds new URLs to its list of URLs to be crawled. It continually repeats this function, discovering new URLs, following links, and downloading them.
    • The crawler excludes some URLs if it has downloaded a sufficient number from the Web site or if it appears that the URL might be a duplicate of another URL already downloaded.
    • The files of crawled URLs are then built into a search catalog. These URLs are displayed as part of search results on sites powered by Ask's search technology when a relevant match is made.

     

    Q: How frequently will the Ask Crawler download pages from my site?
    A: The crawler will download only one page at a time from your site (specifically, from your IP address). After it receives a page, it will pause a certain amount of time before downloading the next page. This delay time may range from 0.1 second to hours. The quicker your site responds to the crawler when it asks for pages, the shorter the delay.

    Q. Can I prevent Teoma/Ask search engine from showing a cached copy of my page?
    A: Yes. We obey the “noarchive” meta tag. If you place the following command in your HTML page, we will not provide an archived copy of the document to the user.

    <META NAME="ROBOTS" CONTENT="NOARCHIVE">

    If you would like to specify this restriction just for Teoma/Ask, you may use “TEOMA” in place of “ROBOTS”.

    Q: Does Ask observe the Robot Exclusion Standard?
    A: Yes, we obey the 1994 Robots Exclusion Standard (RES), which is part of the Robot Exclusion Protocol. The Robots Exclusion Protocol is a method that allows Web site administrators to indicate to robots which parts of their site should not be visited by the robot. For more information on the RES, and the Robot Exclusion Protocol, please visit http://www.robotstxt.org/wc/exclusion.html.

    Q: Can I prevent the Ask crawler from indexing all or part of my site/URL?
    A: Yes. The Ask crawler will respect and obey commands that direct it not to index all or part of a given URL. To specify that the Ask crawler visit only pages whose paths begin with /public, include the following lines:

    # Allow only specific directories
    User-agent: Teoma
    Disallow: /
    Allow: /public

     

    Q: Where do I put my robots.txt file?
    A: Your file must be at the top level of your Web site, for example, if http://www.mysite.com is the name of your Web site, then the robots.txt file must be at http://www.mysite.com/robots.txt.

    Q: How can I tell if the Ask crawler has visited my site/URL?
    A: To determine whether the Ask crawler has visited your site, check your server logs. Specifically, you should be looking for the following user-agent string:

    User-Agent: Mozilla/2.0 (compatible; Ask Jeeves/Teoma)
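
    As an illustration (mine, not from the Ask FAQ): on a typical Apache setup you could search your access log for that string like this (the log path is an assumption; yours may differ):

    grep "Ask Jeeves/Teoma" /var/log/apache2/access.log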

     

    Q: How can I prevent the Ask crawler from indexing my page or following links from a particular page?
    A: If you place the following command in the HEAD section of your HTML page, the Ask crawler will not index the document and, thus, it will not be placed in our search results:

    <META NAME="ROBOTS" CONTENT="NOINDEX">

    The following command tells the Ask crawler to index the document, but not follow hyperlinks from it:

    <META NAME="ROBOTS" CONTENT="NOFOLLOW">

    You may set all directives OFF by using the following:

    <META NAME="ROBOTS" CONTENT="NONE">

    See http://www.robotstxt.org/wc/exclusion.html#meta for more information.

    Q: Why is the Ask crawler downloading the same page on my site multiple times?
    A: Generally, the Ask crawler should only download one copy of each file from your site during a given crawl. There are two exceptions:

    • A URL may contain commands that “redirect” the crawler to a different URL. This may be done with the HTML command:
      <META HTTP-EQUIV="REFRESH" CONTENT="0; URL=http://www.your page address here.html">

      or with the HTTP status codes 301 or 302. In this case the crawler downloads the second page in place of the first one. If many URLs redirect to the same page, then this second page may be downloaded many times before the crawler realizes that all these pages are duplicates.

    • An HTML page may be a “frameset.” Such a page is formed from several component pages, called “frames.” If many frameset pages contain the same frame page as components, then the component page may be downloaded many times before the crawler realizes that all these components are the same.

     

    Q: Why is the Ask crawler trying to download incorrect links from my server? Or from a server that doesn’t exist?
    A: It is a property of the Web that many links will be broken or outdated at any given time. Whenever any Web page contains a broken or outdated link to your site, or to a site that never existed or no longer exists, Ask will visit that link trying to find the Web page it references. This may cause the crawler to ask for URLs which no longer exist or which never existed, or to try to make HTTP requests on IP addresses which no longer have a Web server or never had one. The crawler is not randomly generating addresses; it is following links. This is why you may also notice activity on a machine that is not a Web server.

    Q: How did the Ask Website crawler find my URL?
    A: The Ask crawler finds pages by following links (HREF tags in HTML) from other pages. When the crawler finds a page that contains frames (i.e., it is a frameset), the crawler downloads the component frames and includes their content as part of the original page. The Ask crawler will not index the component frames as URLs themselves unless they are linked via HREF from other pages.

    Q: What types of links does the Ask crawler follow?
    A: The Ask crawler will follow HREF links, SRC links and re-directs.

    Q. Can I control the rate at which the Ask crawler visits my site?
    A. Yes. We support the “Crawl-Delay” robots.txt directive. Using this directive you may specify the minimum delay between two successive requests from our spider to your site.
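
    As an illustration (mine, not from the Ask FAQ): a robots.txt entry asking Ask's spider to wait at least ten seconds between requests (the value is arbitrary) would be:

    User-agent: Teoma
    Crawl-Delay: 10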

    Q: Why has the Ask crawler not visited my URL?
    A: If the Ask crawler has not visited your URL, it is because we did not discover any link to that URL from other pages (URLs) we visited.

    Q: Does Ask crawler support HTTP compression?
    A: Yes, it does. Both HTTP client and server should support this for the HTTP compression feature to work. When supported, it lets webservers send compressed documents (compressed using gzip or other formats) instead of the actual documents. This would result in significant bandwidth savings for both the server and the client. There is a little CPU overhead at both server and client for encoding/decoding, but it is worth it. Using a popular compression method such as gzip, one could easily reduce file size by about 75%.
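
    As an illustration (mine, not from the Ask FAQ): the negotiation happens in ordinary HTTP headers. The client advertises support, and a supporting server compresses the response (example.com is a placeholder):

    GET /page.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip

    HTTP/1.1 200 OK
    Content-Encoding: gzip
    Content-Type: text/html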

    Q: How do I register my site/URL with Ask so that it will be indexed?
    A: We appreciate your interest in having your site listed on Ask.com and the Ask.com search engine. Your best bet is to follow the open-format Sitemaps protocol, which Ask.com supports. Once you have prepared a sitemap for your site, add the sitemap auto-discovery directive to robots.txt, or submit the sitemap file directly to us via the ping URL. (For more information on this process, see Does Ask.com support sitemaps?) Please note that sitemap submissions do not guarantee the indexing of URLs.

    Create your Web site and set up your Web server to optimize how search engines look at your site’s content, and how they index and trigger based upon different types of search keywords. You’ll find a variety of resources online that provide tips and helpful information on how to best do this.

    Q: Why aren’t the pages the Ask crawler indexed showing up in the search results at Ask.com?
    A: If you don’t see your pages indexed in our search results, don’t be alarmed. Because we are so thorough about the quality of our index, it takes some time for us to analyze the results of a crawl and then process the results for inclusion into the database. Ask does not necessarily include every site it has crawled in its index.

    Q: Can I control the crawler request rate from Ask spider to my site?
    A: Yes. We support the “Crawl-Delay” robots.txt directive. Using this directive you may specify the minimum delay between two successive requests from our spider to your site.

    Q. How do I authenticate the Ask Crawler?
    A: User-Agent is no guarantee of authenticity, as it is trivial for a malicious user to mimic the properties of the Ask Crawler. To properly authenticate the Ask Crawler, a round-trip DNS lookup is required. First, take the IP address of the Ask Crawler and perform a reverse DNS lookup, ensuring that the IP address belongs to the ask.com domain. Then perform a forward DNS lookup with the host name, ensuring that the resulting IP address matches the original.
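
    As an illustration (mine, not from the Ask FAQ): the round trip from a shell, using the standard host command, with a made-up IP address and hostname; a real check uses the address you see in your own logs:

    host 66.235.124.7
    (reverse lookup: the answer should be a hostname in the ask.com domain, e.g. crawler4000.ask.com)
    host crawler4000.ask.com
    (forward lookup: the answer should resolve back to the original IP, 66.235.124.7)

    If the forward lookup does not return the IP you started with, the visitor is not the genuine Ask Crawler.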

    Q: Does Ask.com support sitemaps?
    A: Yes, Ask.com supports the open-format Sitemaps protocol. Once you have prepared the sitemap, add the sitemap auto-discovery directive to robots.txt as follows:

    SITEMAP: http://www.the URL of your sitemap here.xml

    The sitemap location should be the full sitemap URL. Alternatively, you can also submit your sitemap through the ping URL:

    http://submissions.ask.com/ping?sitemap=http%3A//www.the URL of your sitemap here.xml

    Please note that sitemap submissions do not guarantee the indexing of URLs. To learn more about the protocol, please visit the Sitemaps web site at http://www.sitemaps.org.

    Q: How can I add Ask.com search to my site?
    A: We've made this easy: you can generate the necessary code here.

    Q: How can I get additional information?
    A: Please visit our full Searchable FAQ Database.”
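
    One trick that works across all four engines is sitemap auto-discovery via robots.txt. As a sketch (example.com is a placeholder), a robots.txt at your site root that allows everything to be crawled and advertises your sitemap would read:

    User-agent: *
    Disallow:
    Sitemap: http://www.example.com/sitemap.xml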

    The FREE Tools list for social media on the web.

    Guest blogger Serena Carpenter agreed to let me share some information from the social media classes she teaches at Arizona State. She used this handout for an American Marketing Association meeting where she was a panelist. I found her to be a breath of fresh air: she really gets the space and is teaching students leading-edge information as it happens.

    She uses a wealth of great material with her students and is giving us access via her blog, which has plenty more you can read and learn from.

    FREE Tools by Serena Carpenter / Arizona State University – used with her permission.

    MONITOR YOUR REACH (Twitter tools featured below)

    • YouTube "Insight" (Video): View detailed statistics about uploaded videos, including views, popularity peaks and demographics.
    • SiteMeter (Traffic analyzer): Tracks location, keywords used, referrals, average site-visit length, average visits per day, etc. for your Web site.
    • Google Analytics (Traffic analyzer): Tracks how people get to your site, the content clicked on, average pages per visit, etc. for your Web site.
    • Google FeedBurner (Subscribers): Tracks the number of people who subscribe to your RSS feeds. Also submit your site to the Feed-Squirrel and OctoFinder Web sites.
    • Google Alerts (Monitor brand and conversations): Sends an email message when someone writes about you.
    • Site valuation and other stats sites: Website Outlook (http://www.websiteoutlook.com/www.problogger.net) – enter your URL to determine how much your site is worth, along with other stats. Similar sites include QuarkBase, urlfan, StatBrain, CubeStat, WebTrafficAgents and AboutTheDomain.
    • Social media monitoring sites: SM2 (http://sm2.techrigy.com/main/) offers tools to help businesses find out what people are talking about. You can measure your page's popularity using Social Meter (http://www.socialmeter.com/).
    • Analytics Toolbox from Mashable – 50 ways to monitor site traffic: http://mashable.com/2009/01/12/track-online-traffic/

    SHARING

    • YouTube (Video): Share your video postings on Facebook, Twitter or Google Reader.
    • Vimeo Widget (Video): Embed a visual montage of your latest videos on your site. You can cross-post video to YouTube and Vimeo.
    • SlideShare (Presentations): Add the Blog Sidebar Widget to highlight your latest presentations.
    • FriendFeed (Shares numerous social media sites; use Socialthing or OpenID): Share all of your social media activity with your Facebook friends or embed a widget on your site.
    • Delicious (Bookmarking): Share your linkroll or a badge on other sites by embedding code.
    • MediaWiki (http://www.mediawiki.org/wiki/MediaWiki): Create a wiki for your FAQ or customer-service knowledge base. Let your consumers enter the problems they've had via a public forum (the wiki), and provide your responses publicly as well.

    GET YOUR SITE INDEXED ON SEARCH ENGINES

    • Add your blog to community portals and search engines: Try adding your site to BlogCatalog, Blogged, Technorati, BlogPulse, Google Blogs or NetworkedBlogs (Facebook).
    • Notify search engines that your blog has updated: Sign up at http://pingomatic.com/ and http://blogsearch.google.com/ping.
    • Squidoo: A community website that allows people to create pages (called "lenses") on various topics. Creating a lens related to your blog and including your feed in that page will help your blog get indexed.

     TWITTER TOOLS

    1. TweetStats (http://tweetstats.com). Provides your average tweets per day.
    2. WeFollow (http://wefollow.com/). Add yourself to the Twitter search directory.
    3. Favstar (http://favstar.fm/). Shows you who favorites your tweets.
    4. Twinfluence (http://twinfluence.com/). Ranks your reach; expect it to be low.
      1. If you have a grade of 75%, it means that you have a higher reach than 75% of the other twitterers analyzed.
      2. If your rank is #400, that means there are 399 other twitterers in the system who have higher reach scores.
    5. TwitterGrader (http://twitter.grader.com/)

    Both sites grade you on the number of followers you have and your influence on these followers. Grader also measures the pace of your updates, while Twinfluence measures the informational value of your followers.

    6. Twitalyzer (http://www.twitalyzer.com/twitalyzer/index.asp). Measures your impact and success in social media.

    7. Retweetist (http://www.retweetist.com/). Lets you know which of your posts have been retweeted.

    8. Twazzup (http://www.twazzup.com/) and inView (http://myinview.com/login.php). Track mentions of you on Twitter.

    9. Link shorteners capture traffic to your Twitter feed. For example, use tr.im or bit.ly to shorten URLs, then later add a + to the short URL to see how many people clicked. Example: http://bit.ly/JTkk+

    10. Search Twitter (http://search.twitter.com/).

    270 Tools for Running Business Online  http://mashable.com/2008/09/21/270-online-business-tools/

    FixYa Social Tech Support  http://www.fixya.com

    Top 5 Business Blogging Mistakes and How to Fix Them  http://mashable.com/2009/09/21/business-blogging-mistakes/

    Where to find women consumers in social media.

    Mashable again has a knockout recap of info from Information is Beautiful on demographics by site. This is really valuable to CMOs in evaluating your team's plans or the agency's next proposal. It also says to me that if you are targeting men, Digg is one not to overlook. This also gives you a good checklist to be sure you are represented on all the main social media platforms.

    Here is the Link: http://mashable.com/2009/10/03/women-rule-the-social-web/

    Here is a recap: "Women rule social media, at least according to an infographic by Information is Beautiful. The stats, compiled by Brian Solis from Google Ad Planner data, show that equal numbers of men and women use sites like LinkedIn, DeviantArt and YouTube.

    When it comes to sites like Flickr, Facebook, Twitter, FriendFeed, MySpace and Bebo, however, women outnumber men. In fact, there's only one major holdout for men on the social web: social news site Digg, where 64% of users are male."


    SEO gut check for CMOs: how to check up on your website's back end.

    SEO, meta tags, alt tags and other "web team" responsibilities leave most CMOs in a blind spot of trust, or of ignorance, when it comes to the back end of their web site. Simple questions that I ask, such as "Does your web site have a sitemap?", get a blank look. "Are your meta tags in place, and still relevant?" Blank stare. "I don't know" or "Why do I need to know?" is the response I get from 99% of the senior execs with whom I speak.

    The question is: is it really important? The answer is yes. Not that meta tags are weighted as heavily as they used to be, but their presence speaks to the competency of the team building or maintaining your website. If the basic building blocks are missing, you most likely have a staff training or education issue and are wasting lots of money.

    The checkup is fairly simple once you know that there are many domain tools that will look at your site and give feedback. Your web team uses these tools all the time; so should you. An easy checkup tool is whois.domaintools.com. If you put in your web site name, it will show your current meta tags, alt tags, etc., and an overall SEO (search engine optimization) score.

    If you go to the right-hand column and click on the SEO Text Browser, it will give you page-by-page feedback on what is missing from their perspective. It will also give you a grade for each page, so you will be able to talk with the team about what is going on and how to improve.

    As an example, I just ran this tool on a client to see what areas needed attention, since they indicated at the meeting that the web consultant they were using had it all covered. Result: no meta tags, no alt tags, and a failing SEO score of 56%, since all the pages were built as .gif images and could not be spidered by search engines. Pretty web site with great design, not searchable, waste of money... not good.

    This is also a great tool to peek into the back end of your competition's website to see their score and what issues they have not addressed that you can use to your advantage. So do a CMO gut check on your SEO and website; it may be the best no-cost improvement you can make.

    TweetSaver is a way to manage your tweets for search, tagging and bookmarking

    What to do with Twitter as a CMO? Are you wondering how to measure ROI on, manage, and utilize your social media efforts on Twitter in a way that lets you optimize and manage your tweets more effectively?

    Take a look at TweetSaver. It costs $10.00 to $20.00 annually and may be a good solution for giving your Twitter program a central backup, letting you bookmark your tweets, and tagging tweets for organic search. This starts to monetize your Twitter efforts as content and SEO. TweetSaver gives you a way to archive, search, tag and share all of the content you are creating on Twitter.

    They say “One year of backups, searching, tagging, etc AND Save 50% for spreading the word about TweetSaver! When you sign up, we’ll send the following tweet on your behalf: I’m backing up, searching, and tagging all my tweets with http://tweetsaver.com, shouldn’t you?”

    Here is the link to check it out:  http://tweetsaver.com/

    Google Caffeine will change your search ranking for key terms.

    Update from Mashable: it seems that comparegoogle.com has stopped working, but you can check out a similar service at www.comparecaffeine.com.
    I had heard all about upgrading to Google Chrome to increase the speed of my searching as a consumer on Google. What I am now hearing about is Google Caffeine, and this is a much bigger item for the CMO need-to-know list.
    The big watch-list item for me is Google Caffeine. It will change the way your web pages are presented in organic search. Thus, that big SEO project you completed last year is about to need a facelift, and fast.
    I was trying to find some good information for myself and to recommend to you, and the best I found was from Webupd8. They have a good write-up on the subject. I tested the tool they recommend on a couple of my URLs and found that I will need to make some changes to win back some ranking; in some cases it will gain me a much better ranking than I had before for key terms, and I am thrilled.
    The Webupd8 blog says:
    “Google is in the process of implementing some very major changes to their search algorithms and this can therefore alter the existing rankings of your web pages in Google Search results of the future.

    For instance, if one of your web pages is currently ranked in Google at #3 for the phrase “wallpapers”, it’s quite possible that the position may shift either way after the new version of Google goes live.

    Google has created a preview site (Caffeine) where you can play around with the upcoming changes before these get pushed to the main google.com website. And that itself is expected to happen in the next few weeks.

    [Screenshot: Google search side-by-side Caffeine comparison]

    If you would like to compare the existing Google ranks of your web pages with the future ones, check out Compare Google – this tool places the top 20 results from the old and new search engines side by side so you can easily see how your site will perform in the future.

    Another Google comparison tool is located at Cartercole.com, which, unlike Compare Google, displays Google search results in their original form – this is perfect for queries that trigger Google Universal Search, so you can see which image and video-clip results will be displayed in Google Caffeine for your queries."