
Creating a video sitemap for Google search

Adding video to your website is a key element today, just as adding social media elements is a must in today’s world of organic search. To gain page-one video placement, you need to get the video indexed by Google. We all know about submitting an XML sitemap to gain organic traffic to our content, but many do not know to add a video sitemap as well. Check with your webmaster or agency to be sure they know to do this. The Quick Online Tips blog ran a great article on this a few months back, and it is worth revisiting, or visiting for the first time, especially if you are adding videos to your site or already use video. The post also addresses hosting videos on your own site versus feeding them from YouTube or DailyMotion.

How to Get Your Videos Indexed by Google, from the Quick Online Tips blog
March 1st, 2010
Guest post by Keiros.
How do you get your videos listed on Google? Have you noticed, while performing a search on Google, the video thumbnails displayed to the left of some results? They are definitely great traffic attractors, and having such a video associated with your site when you are in 5th or 6th position on the search results page may be better in terms of click-through rate than being in 1st position without one.
There is an additional bonus to having a video associated with your site: on some occasions, Google displays several videos in a block titled “Video results for Keyword“, often in the 3rd to 5th position. When that is the case, you can jump several positions and land directly on the first page by joining an already existing video results block.

The trick is to find a search query for which your page is already ranking well but not yet on the first page, and for which Google displays a Video Results group on the first page: you will benefit from the optimization work of those who made their way to the first page.

Found the perfect search query? All you need to do now is get your video indexed by Google, and that’s the easiest part!

1. Put a video on your page: I mean a real video, hosted on your site, not on YouTube or DailyMotion. You need a .flv file (or .mpg, .avi, .mp4) and a video plugin that supports these formats. Jing is perfect for recording on-screen action and making nice video tutorials.
2. Create a thumbnail of your video: this is the thumbnail that Google will display on its search results page, so making a good-quality thumbnail out of your video is an important step. Personally, I’ve found that a 160×120 GIF or JPG works well for that purpose, and it doesn’t actually need to be extracted from your video. Any image pertinent to the content of your video will do the trick.
3. Create a video sitemap: video sitemaps are XML files similar to regular sitemaps but with a different syntax. See the example below.
4. Submit your video sitemap to Google: video sitemaps can be submitted in Google Webmaster Tools exactly like a standard sitemap. If you have not yet registered your site in Google Webmaster Tools, this is the perfect occasion to do it.

Video Sitemap example

( See HTML on the Quickonline site)
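
Since the original example lives on the Quickonline site, here is a minimal sketch of what a video sitemap can look like, following Google’s video sitemap XML format (all URLs, the title, and the description below are hypothetical placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
      <url>
        <!-- The page on your site that embeds the video -->
        <loc>http://www.example.com/videos/sitemap-tutorial.html</loc>
        <video:video>
          <!-- Thumbnail Google can show in results (e.g., a 160x120 jpg) -->
          <video:thumbnail_loc>http://www.example.com/thumbs/sitemap-tutorial.jpg</video:thumbnail_loc>
          <video:title>Creating a video sitemap</video:title>
          <video:description>A screen recording that walks through building and submitting a video sitemap.</video:description>
          <!-- Direct link to the video file hosted on your own site -->
          <video:content_loc>http://www.example.com/media/sitemap-tutorial.flv</video:content_loc>
        </video:video>
      </url>
    </urlset>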

The video-specific XML tags are pretty straightforward, but if you need more detail, you can check this page on Google Webmaster Central: Creating Video Sitemaps. Don’t forget to optimize your video sitemap by inserting your main keywords in the video tags; done meaningfully, this will give your ranking an additional boost.

Guest author Keiros is an outsourcing and infrastructure management consultant, and a contributor to SEO Chat Forum. You can also write guest posts and share your Google tips with our readers.

Jing is free software that adds visuals to your online conversations

Jing, by TechSmith, is a tool that has captured my attention. It is a simple-to-use tool for instant sharing: screenshots, videos, and callouts, so you can show people what you are seeing on your screen with the click of a button and a link URL. It is much better than the old routine of taking a screenshot, saving it to Word, etc. It sits on your desktop as a yellow half-moon, ready for when you want to capture, with three choices: capture, history, and more.

Their website says it is for adding visuals to your online conversations. There is a free version and a paid version for adding video. With the free version you also get 2 GB of storage on Screencast.com, which is a nice benefit.

Here are a few ways they suggest using Jing:

  • Collaborate on a design project
  • Share a snapshot of a document
  • Narrate your vacation photos
  • Capture that pesky bug in action
  • Show Dad how to use iTunes
  • Comment verbally on students’ homework
  • Post tidbits from your life on Twitter or Facebook

What I also like is that it integrates with their other, more full-featured programs.

Screencast.com. It not only provides free storage and instant sharing of your Jing content; it’s also a super place to share your high-quality videos, images, presentations, and all manner of digital content.

 

Snagit. This Swiss-army knife of screen capture picks up where Jing leaves off: powerful image editing, scrolling window capture, cursor capture, tagging, search, and rugged good looks.

Camtasia Studio. Jing’s big brother on the video side, and the ideal choice for creating longer, more polished screen videos. Camtasia Studio can edit your Jing videos, too.

Bing has issued best practices guidelines for SEO, SEM

Search Engine Optimization Journal is an SEO blog that discusses search engine marketing, ranking, and positioning for new and advanced readers, and it has a very interesting article on Bing and branding your website. I found the article very interesting, since SEO and SEM have become such a part of our lives in marketing. Bing recommends an approach that builds a robust social marketing element into your daily routine. They also talk about a focus on link building as a key to the success of your website. I was also interested in the term going “unnatural” in the search engines, meaning manipulating the system in order to achieve higher rankings. Having Bing talk about link building and other best practices will make your next steps with your agency or team clearer. A good read.

Bing Gets Serious About Link Building

Written by Nick Stamoulis on Friday, February 26, 2010

I think one of the best things I have ever heard a major search engine’s webmaster blog say is to build your website like a brand. Bing’s exact words state: “Develop your site as a business brand and be consistent about that branding in your content.”

Too many people out there get antsy to have their internet marketing look like a recipe; if it looks like a recipe, it is a recipe for disaster. Bing’s official webmaster blog states that it is vital to treat your website like a brand. How would a brand build its image? If you take that step, you will build your business the right way rather than just chasing rankings. Rankings are important, but they are not the only and final goal of your internet marketing campaign. Building business is the most important aspect of your SEO campaign. Bing also recommends an approach that builds a robust social marketing element into your daily routine. Link building is vital to the success of a website, but it is important to do it in a way that grows your business and not just your rankings. Bing also refers to going “unnatural” in the search engines: an unnatural approach is one that blatantly attempts to manipulate the system in order to achieve higher rankings.

Bing’s exact words go as follows:
“So what does it mean to go unnatural? It means you’re trying to fake out the search engines, to try to earn a higher ranking than the quality of your site’s content dictates as natural through manipulation of search engine ranking algorithms.”

Bing isn’t the first search engine to come out with this type of best-practices guide. Google has been doing it for years; it’s just that not many people want to follow it. When the top two search engines in the world lay out a best-practices guide on how you should conduct your search engine optimization efforts, it is time to listen.

Please take a look at the Bing webmaster post that talks about link building and SEM; it is a great read:

http://www.bing.com/community/blogs/webmaster/archive/2009/11/20/link-building-for-smart-webmasters-no-dummies-here-sem-101.aspx

Creating a sitemap on Google, Bing, Yahoo and Ask.

An XML sitemap, or simply sitemap, is a list of the pages on your website, and each search engine needs it to its own specifications and requirements. Creating and submitting a sitemap helps make sure that Google, Yahoo, Bing, and Ask know about all the pages on your site, including URLs that may not be found by the normal crawling process. Having a sitemap is key to getting your website crawled by the various search engine bots that I wrote about in a previous post on the 5 kinds of Search Bots. Your webmaster or agency should have all this information at their fingertips; if they do not, you may not be getting the search engines you want coming to your site.

Knowing what to ask, beyond “do we have a sitemap?” or “do we have an XML feed?”, requires this knowledge. Asking whether they have done all the formatting needed to make the XML sitemap work with every search engine is key. Having at your fingertips exactly what each search engine needs is priceless.

Here is the information I use from each of the search engines on what they require and how to submit to them:

____________________________________________________________________

Creating and submitting sitemaps to Google

The Google site says:

“About Sitemaps

Sitemaps are a way to tell Google about pages on your site we might not otherwise discover. In its simplest terms, an XML Sitemap—usually called Sitemap, with a capital S—is a list of the pages on your website. Creating and submitting a Sitemap helps make sure that Google knows about all the pages on your site, including URLs that may not be discoverable by Google’s normal crawling process.

In addition to regular Sitemaps, you can also create Sitemaps designed to give Google information about specialized web content, including video, mobile, News, Code Search, and geographical (KML) information.

When you update your site by adding or removing pages, tell Google about it by resubmitting your Sitemap. Google doesn’t recommend creating a new Sitemap for every change.”
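
To make this concrete, here is a minimal sketch of a standard XML sitemap with a single, hypothetical URL entry, in the sitemaps.org format that Google, Bing, Yahoo!, and Ask all accept:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>      <!-- the page URL -->
        <lastmod>2010-03-01</lastmod>           <!-- last modification date -->
        <changefreq>weekly</changefreq>         <!-- how often the page changes -->
        <priority>0.8</priority>                <!-- relative priority, 0.0 to 1.0 -->
      </url>
    </urlset>
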
______________________________________________________________________________

Bing Webmaster Tools

“To submit an XML-based Sitemap to Bing:

Step 1: Copy and paste the entire URL below as a single URL into the address bar of your browser:

     http://www.bing.com/webmaster/ping.aspx?sitemap=www.YourWebAddress.com/sitemap.xml

Step 2: Change “www.YourWebAddress.com” to your domain name

Step 3: Press ENTER”
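
For example, with a hypothetical domain example.com, the finished ping URL would be:

     http://www.bing.com/webmaster/ping.aspx?sitemap=www.example.com/sitemap.xml

Loading that URL in your browser tells Bing where to fetch your sitemap.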

_________________________________________________________________________

How Yahoo! supports sitemaps

“How Yahoo! Supports Sitemaps

Yahoo! supports the Sitemaps format and protocol as documented on www.sitemaps.org .

You can provide us a feed in the following supported formats. We recognize files with a .gz extension as compressed files and decompress them before parsing.

  • RSS 0.9, RSS 1.0, or RSS 2.0, for example, CNN Top Stories
  • Sitemaps, as documented on  www.sitemaps.org
  • Atom 0.3, Atom 1.0, for example, Yahoo! Search Blog
  • A text file containing a list of URLs, each URL at the start of a new line. The filename of the URL list file must be urllist.txt. For a compressed file the name must be urllist.txt.gz.
  • Yahoo! supports feeds for mobile sites. Submitting mobile Sitemaps directs our mobile search crawlers to discover and crawl new content for our mobile index. When submitting a feed that points to content on the mobile web, please indicate whether the encoding of the content was done with xHTML or WML.

    In addition to submitting your Sitemaps to Yahoo! Search through Yahoo! Site Explorer, you can:

  • Send an HTTP request. To send an HTTP request, please send using our ping API.
  • Specify the Sitemap location in your site’s robots.txt file. This directive is independent of the user-agent line, so you can place it wherever you like in your file.
  • Yahoo! Search will retrieve your Sitemap and make the URLs available to our web crawler. Sitemaps discovered by these methods cannot be managed in Site Explorer and will not show in the list of feeds under a site.

    Note: Using Sitemap protocol supplements the other methods that we use to discover URLs. Submitting a Sitemap helps Yahoo! crawlers do a better job of crawling your site. It does not guarantee that your web pages will be included in the Yahoo! Search index.”
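
    As Yahoo! notes above, one option is to specify the Sitemap location in your site’s robots.txt file. A minimal sketch, using a hypothetical domain, looks like this:

    User-agent: *
    Disallow:

    Sitemap: http://www.example.com/sitemap.xml

    Because the Sitemap directive is independent of the user-agent line, any crawler that supports the protocol can pick it up from this one file.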

    ___________________________________________________________________

    Ask.com webmaster FAQ

    “Web Search

    The Ask.com search technology uses semantic and extraction capabilities to recognize the best answer from within a sea of relevant pages. Instead of 10 blue links, Ask delivers the best answer to a user’s question right at the top of the page. Using an established technique pioneered at Ask, our search technology uses click-through behavior to determine a site’s relevance and extract the answer. Rather than presenting text snippets of the destination site, this technology presents the actual answer to a user’s question without requiring an additional click-through. Underpinning these advancements are Ask.com’s innovative DADS, DAFS, and AnswerFarm technologies, which break new ground in the areas of semantic search, web extraction, and ranking. These technologies index questions and answers from numerous and diversified sources across the web, then apply Ask’s semantic search advancements in clustering, rephrasing, and answer relevance to filter out insignificant and less meaningful answer formats. In order to extract and rank answers, as opposed to merely ranking web pages, Ask.com continues to develop unique algorithms and technologies based on new signals for evaluating relevancy, specifically tuned to questions.

    The Ask Website Crawler FAQ

    Ask’s Website crawler is our Web-indexing robot (or crawler/spider). The crawler collects documents from the Web to build the ever-expanding index for our advanced search functionality at Ask and other Web sites that license the proprietary Ask search technology.

    Ask search technology is unique from any other search technology because it analyzes the Web as it actually exists — in subject-specific communities. This process begins by creating a comprehensive and high-quality index. Web crawling is an essential tool for this approach, and it ensures that we have the most up-to-date search results.

    On this page you’ll find answers to the most commonly asked questions about how the Ask Website crawler works. For these and other Webmaster FAQs, visit our Searchable FAQ Database.

    Frequently Asked Questions

    1. What is a website crawler?
    2. Why does Ask use a website crawler?
    3. How does the Ask crawler work?
    4. How frequently will the Ask Crawler index pages from my site?
    5. Can I prevent Teoma/Ask search engine from showing a cached copy of my page?
    6. Does Ask observe the Robot Exclusion Standard?
    7. Can I prevent the Ask crawler from indexing all or part of my site/URL?
    8. Where do I put my robots.txt file?
    9. How can I tell if the Ask crawler has visited my site/URL?
    10. How can I prevent the Ask crawler from indexing my page or following links from a particular page?
    11. Why is the Ask crawler downloading the same page on my site multiple times?
    12. Why is the Ask crawler trying to download incorrect links from my server? Or from a server that doesn’t exist?
    13. How did the Ask Website crawler find my URL?
    14. What types of links does the Ask crawler follow?
    15. Can I control the rate at which the Ask crawler visits my site?
    16. Why has the Ask crawler not visited my URL?
    17. Does Ask crawler support HTTP compression?
    18. How do I register my site/URL with Ask so that it will be indexed?
    19. Why aren’t the pages the Ask crawler indexed showing up in the search results?
    20. Can I control the crawler request rate from Ask spider to my site?
    21. How do I authenticate the Ask Crawler?
    22. Does Ask.com support sitemaps?
    23. How can I add Ask.com search to my site?
    24. How can I get additional information?

     

    Q: What is a website crawler?
    A: A website crawler is a software program designed to follow hyperlinks throughout a Web site, retrieving and indexing pages to document the site for searching purposes. The crawlers are innocuous and cause no harm to an owner’s site or servers.

    Q: Why does Ask use website crawlers?
    A: Ask utilizes website crawlers to collect raw data and gather information that is used in building our ever-expanding search index. Crawling ensures that the information in our results is as up-to-date and relevant as it can possibly be. Our crawlers are well designed and professionally operated, providing an invaluable service that is in accordance with search industry standards.

    Q: How does the Ask crawler work?

    • The crawler goes to a Web address (URL) and downloads the HTML page.
    • The crawler follows hyperlinks from the page, which are URLs on the same site or on different sites.
    • The crawler adds new URLs to its list of URLs to be crawled. It continually repeats this function, discovering new URLs, following links, and downloading them.
    • The crawler excludes some URLs if it has downloaded a sufficient number from the Web site or if it appears that the URL might be a duplicate of another URL already downloaded.
    • The files of crawled URLs are then built into a search catalog. These URLs are displayed as part of the search results on sites powered by Ask’s search technology when a relevant match is made.

     

    Q: How frequently will the Ask Crawler download pages from my site?
    A: The crawler will download only one page at a time from your site (specifically, from your IP address). After it receives a page, it will pause a certain amount of time before downloading the next page. This delay time may range from 0.1 second to hours. The quicker your site responds to the crawler when it asks for pages, the shorter the delay.

    Q. Can I prevent Teoma/Ask search engine from showing a cached copy of my page?
    A: Yes. We obey the “noarchive” meta tag. If you place the following command in your HTML page, we will not provide an archived copy of the document to the user.

    <META NAME="ROBOTS" CONTENT="NOARCHIVE">

    If you would like to specify this restriction just for Teoma/Ask, you may use “TEOMA” in place of “ROBOTS”.
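
    For example, a hypothetical page that should not be archived by Teoma/Ask specifically, while leaving other crawlers unaffected, would carry:

    <META NAME="TEOMA" CONTENT="NOARCHIVE">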

    Q: Does Ask observe the Robot Exclusion Standard?
    A: Yes, we obey the 1994 Robots Exclusion Standard (RES), which is part of the Robot Exclusion Protocol. The Robots Exclusion Protocol is a method that allows Web site administrators to indicate to robots which parts of their site should not be visited by the robot. For more information on the RES, and the Robot Exclusion Protocol, please visit http://www.robotstxt.org/wc/exclusion.html.

    Q: Can I prevent the Ask crawler from indexing all or part of my site/URL?
    A: Yes. The Ask crawler will respect and obey commands that direct it not to index all or part of a given URL. To specify that the Ask crawler visit only pages whose paths begin with /public, include the following lines:

    # Allow only specific directories
    User-agent: Teoma
    Disallow: /
    Allow: /public

     

    Q: Where do I put my robots.txt file?
    A: Your file must be at the top level of your Web site. For example, if http://www.mysite.com is the name of your Web site, then the robots.txt file must be at http://www.mysite.com/robots.txt.

    Q: How can I tell if the Ask crawler has visited my site/URL?
    A: To determine whether the Ask crawler has visited your site, check your server logs. Specifically, you should be looking for the following user-agent string:

    User-Agent: Mozilla/2.0 (compatible; Ask Jeeves/Teoma)

     

    Q: How can I prevent the Ask crawler from indexing my page or following links from a particular page?
    A: If you place the following command in the <HEAD> section of your HTML page, the Ask crawler will not index the document and, thus, it will not be placed in our search results:

    <META NAME="ROBOTS" CONTENT="NOINDEX">

    The following command tells the Ask crawler to index the document, but not follow hyperlinks from it:

    <META NAME="ROBOTS" CONTENT="NOFOLLOW">

    You may set all directives OFF by using the following:

    <META NAME="ROBOTS" CONTENT="NONE">

    See http://www.robotstxt.org/wc/exclusion.html#meta for more information.

    Q: Why is the Ask crawler downloading the same page on my site multiple times?
    A: Generally, the Ask crawler should only download one copy of each file from your site during a given crawl. There are two exceptions:

    • A URL may contain commands that “redirect” the crawler to a different URL. This may be done with the HTML command:
      <META HTTP-EQUIV="REFRESH" CONTENT="0; URL=http://www.your page address here.html">

      or with the HTTP status codes 301 or 302. In this case the crawler downloads the second page in place of the first one. If many URLs redirect to the same page, then this second page may be downloaded many times before the crawler realizes that all these pages are duplicates.

    • An HTML page may be a “frameset.” Such a page is formed from several component pages, called “frames.” If many frameset pages contain the same frame page as components, then the component page may be downloaded many times before the crawler realizes that all these components are the same.

     

    Q: Why is the Ask crawler trying to download incorrect links from my server? Or from a server that doesn’t exist?
    A: It is a property of the Web that many links will be broken or outdated at any given time. Whenever any Web page contains a broken or outdated link to your site, or to a site that never existed or no longer exists, Ask will visit that link trying to find the Web page it references. This may cause the crawler to ask for URLs which no longer exist or which never existed, or to try to make HTTP requests on IP addresses which no longer have a Web server or never had one. The crawler is not randomly generating addresses; it is following links. This is why you may also notice activity on a machine that is not a Web server.

    Q: How did the Ask Website crawler find my URL?
    A: The Ask crawler finds pages by following links (HREF tags in HTML) from other pages. When the crawler finds a page that contains frames (i.e., it is a frameset), the crawler downloads the component frames and includes their content as part of the original page. The Ask crawler will not index the component frames as URLs themselves unless they are linked via HREF from other pages.

    Q: What types of links does the Ask crawler follow?
    A: The Ask crawler will follow HREF links, SRC links and re-directs.

    Q. Can I control the rate at which the Ask crawler visits my site?
    A. Yes. We support the “Crawl-Delay” robots.txt directive. Using this directive you may specify the minimum delay between two successive requests from our spider to your site.
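
    A hypothetical robots.txt entry using that directive, asking Ask’s crawler to wait at least ten seconds between requests, would look like this:

    User-agent: Teoma
    Crawl-delay: 10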

    Q: Why has the Ask crawler not visited my URL?
    A: If the Ask crawler has not visited your URL, it is because we did not discover any link to that URL from other pages (URLs) we visited.

    Q: Does Ask crawler support HTTP compression?
    A: Yes, it does. Both HTTP client and server should support this for the HTTP compression feature to work. When supported, it lets webservers send compressed documents (compressed using gzip or other formats) instead of the actual documents. This would result in significant bandwidth savings for both the server and the client. There is a little CPU overhead at both server and client for encoding/decoding, but it is worth it. Using a popular compression method such as gzip, one could easily reduce file size by about 75%.
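
    For the curious, this negotiation happens through standard HTTP headers; a sketch of the exchange for a hypothetical page looks like this:

    GET /page.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip

    HTTP/1.1 200 OK
    Content-Encoding: gzip

    The client advertises the compression formats it understands, and the server sends a compressed body only if it supports one of them.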

    Q: How do I register my site/URL with Ask so that it will be indexed?
    A: We appreciate your interest in having your site listed on Ask.com and the Ask.com search engine. Your best bet is to follow the open-format Sitemaps protocol, which Ask.com supports. Once you have prepared a sitemap for your site, add the sitemap auto-discovery directive to robots.txt, or submit the sitemap file directly to us via the ping URL. (For more information on this process, see Does Ask.com support sitemaps?) Please note that sitemap submissions do not guarantee the indexing of URLs.

    Create your Web site and set up your Web server to optimize how search engines look at your site’s content, and how they index and trigger based upon different types of search keywords. You’ll find a variety of resources online that provide tips and helpful information on how to best do this.

    Q: Why aren’t the pages the Ask crawler indexed showing up in the search results at Ask.com?
    A: If you don’t see your pages indexed in our search results, don’t be alarmed. Because we are so thorough about the quality of our index, it takes some time for us to analyze the results of a crawl and then process the results for inclusion into the database. Ask does not necessarily include every site it has crawled in its index.

    Q: Can I control the crawler request rate from Ask spider to my site?
    A: Yes. We support the “Crawl-Delay” robots.txt directive. Using this directive you may specify the minimum delay between two successive requests from our spider to your site.

    Q. How do I authenticate the Ask Crawler?
    A: User-Agent is no guarantee of authenticity, as it is trivial for a malicious user to mimic the properties of the Ask Crawler. In order to properly authenticate the Ask Crawler, a round-trip DNS lookup is required. This involves first taking the IP address of the Ask Crawler and performing a reverse DNS lookup, ensuring that the IP address belongs to the ask.com domain. Then perform a forward DNS lookup with the host name, ensuring that the resulting IP address matches the original.
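
    This round trip is easy to script. Here is a minimal Python sketch (the IP address is a hypothetical placeholder; you would use one taken from your own server logs):

        import socket

        ip = "66.235.124.7"  # hypothetical crawler IP from your access logs

        try:
            # Reverse DNS: the IP should map to a host in the ask.com domain.
            hostname, _, _ = socket.gethostbyaddr(ip)
            if hostname == "ask.com" or hostname.endswith(".ask.com"):
                # Forward DNS: the hostname should resolve back to the same IP.
                _, _, forward_ips = socket.gethostbyname_ex(hostname)
                if ip in forward_ips:
                    print("Verified: this is a genuine Ask crawler")
                else:
                    print("Not verified: forward lookup does not match the IP")
            else:
                print("Not verified: reverse lookup is not in ask.com")
        except (socket.herror, socket.gaierror):
            print("Not verified: DNS lookup failed for this IP")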

    Q: Does Ask.com support sitemaps?
    A: Yes, Ask.com supports the open-format Sitemaps protocol. Once you have prepared the sitemap, add the sitemap auto-discovery directive to robots.txt as follows:

    SITEMAP: http://www.the URL of your sitemap here.xml

    The sitemap location should be the full sitemap URL. Alternatively, you can also submit your sitemap through the ping URL:

    http://submissions.ask.com/ping?sitemap=http%3A//www.the URL of your sitemap here.xml

    Please note that sitemap submissions do not guarantee the indexing of URLs. To learn more about the protocol, please visit the Sitemaps web site at http://www.sitemaps.org.
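
    For instance, with a hypothetical sitemap at http://www.example.com/sitemap.xml, the ping request would be:

    http://submissions.ask.com/ping?sitemap=http%3A//www.example.com/sitemap.xml

    Note that the colon after “http” in the sitemap URL is percent-encoded as %3A, following Ask’s template above.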

    Q: How can I add Ask.com search to my site?
    A: We’ve made this easy; you can generate the necessary code here.

    Q: How can I get additional information?
    A: Please visit our full Searchable FAQ Database.”

    RSS ranking tool: how effective is your RSS feed?

    RSS stands for Really Simple Syndication and is a way for those who wish to receive a feed of your information to have it sent to them every time you add content. If you have set up an RSS feed, you should now be asking how effective the feed is. This tool will help you check: http://www.rssmicro.com/feedsubmit.web

    What I really like about this tool is that the FeedRank algorithm checks 16 different areas and measures the quality of the content in your feed. This is critical to a successful RSS feed, whether it comes from your website, your blog, or any other form of RSS you use. Here is what they ask you for to get your ranking:

    Submit Your RSS/Atom Feed

    No registration is required, and the results are instant and tailored to your feed (the FeedRank algorithm checks 16 different areas and measures the quality of the content in your feed).

    The submission form asks for:

    • First name and last name (optional)
    • Email (optional): you can get updated information on your feed by email. RSSMicro says it respects your privacy and does not use your email for any purpose other than your feed submission.
    • Feed URL (required), for example http://www.mysite.com/myfeed. RSS feed auto-discovery is supported.
    • Category (optional), chosen from a list ranging from Arts, Blogs, and Business/Finance to Travel, TV, and Weather. If your RSS feed specifies its own category element, that takes precedence.
    • A verification code (required), entered from the image on the page.

    Note that submitting your RSS/Atom feed URL does not guarantee that the feed will be indexed by RSSMicro.

    Creating fan pages in Facebook for blogs or your company step by step

    Quick Online Tips did a great post on “How to create a Facebook page for your blog”. This information is really useful for anyone who wants to set up a fan page and integrate it with a blog. What I like about it is that they give a step-by-step walkthrough, with pictures showing what to look for at each step. This is a must-bookmark for future use.

    Quick Online Tips is a blog and email newsletter that I read, and it is a really great publication to follow on a daily basis.

    Here is their post on the subject.

    “Guest Post By Dee Barizo
    Do you know how to create a Facebook page for your blog? A couple of days ago, I was doing research on my blog’s niche. I wanted to know which social media sites were being used by my audience. Did they have blogs? Were they active on Twitter or forums? Did they participate in MySpace or Facebook?

    I found out that Facebook was a popular place to hang out. Also, my referral stats showed that I was getting traffic from Facebook even though I never promoted my blog there. With these things in mind, I created a Facebook Page for my blog. I’m planning to use the page to join the conversation on Facebook. Hopefully, the result will be more traffic and increased branding.

    Step by Step Guide to Create Facebook Pages

    Here’s a step by step guide for creating your own Facebook page.

    1. Log in to Facebook.

    To create a page, you must have a personal account. If you don’t have your own account, go to the Facebook home page and sign up.

    2. Go to Create New Facebook Page

    For category, choose “Brand, Product, or Organization”, then scroll down the drop-down menu and choose “Website”. In the next section, enter the name of your page. I just used the name of my blog. Click “Create Page”.

    [Screenshot: the Create New Facebook Page opening screen]

    3. Add a relevant image.

    Move the mouse pointer to the question mark on the left and you’ll see the words “Change Picture” pop up. Click it to add an image related to your blog. I used an image from my header but you can also use your logo or icon.

    [Screenshot: the Change Picture option]

    4. Include information about your blog.

    Click the “Info” tab and then “Edit Information” on the right.

    [Screenshot: the Edit Information link]

    Type in the relevant information including the date your blog was founded, the URL, and an overview.

    [Screenshot: the site information form]

    5. Import your RSS feed.

    By importing your RSS feed, your Wall will be populated with your newest posts. Also, it will be updated whenever you publish a new post. Go to the left sidebar and under your picture, click “Edit Page”.

    [Screenshot: the Edit Page link]

    Scroll down until you see the Notes application. Then click “Edit”.

    [Screenshot: editing the Notes application]

    Look to the right and click “Import a blog”.

    [Screenshot: the Import a blog link]

    Enter your RSS feed URL, click the check box, and click “Start Importing”.

    [Screenshot: entering the RSS feed URL]

    Click “Confirm Import” on the right.

    [Screenshot: the Confirm Import button]

    6. Write a short blurb about your blog.

    You’re back on the home page. Look to your right, click the fifth link under the picture, and write a short blurb.

    [Screenshot: writing the short blurb]

    Final Facebook Page

    You’re done! That’s all there is to it. Here’s the final product.

    [Screenshot: the final Facebook page]

    Oh, one last thing. Don’t forget to become a fan of your new page.

    [Screenshot: the Become a Fan button]

    What’s the big deal with iPhone apps?

    Guest blogger: Niall McSheffrey from TERRALEVER

    On occasion, I invite guest bloggers to share a subject they have expertise in. Niall McSheffrey from the online agency TERRALEVER recently met with me to discuss new trends for CMOs and his company’s expertise in iPhone apps. It was of great interest to me, so I asked him to write a post that I could bring to you. I value his insight and his expertise working on iPhone apps for Red Bull and other clients of his. We spoke about the shift to iPhone apps, why you should consider development in your 2010 plans, and what it takes to pull this off. His post on this subject should help CMOs and marketing directors across the country bring focus to this area with their agencies and teams.

    In April 2009, Apple announced that more than one billion apps had been downloaded for the iPhone. What’s the big deal for a company that currently holds 52% of the smartphone market and has over 10 million active users in the USA?

    The big deal is that smartphone applications are where upcoming companies and established brands are spending. In this space the iPhone is easily the leader, followed by the Microsoft and Google mobile platforms. There are expected to be over 300,000 iPhone apps by late 2010, and they can be used for almost everything (Apple’s website shows the top apps by category here). In fact, iPhone applications are changing the way people do business and choose their products. iPhone apps are not just a novelty; they are used for everyday business. Some examples:

    • Companies like Square (www.square.com) are setting up credit card transactions using iPhones.
    • Staying connected with your professional network using LinkedIn on your smartphone.
    • Tracking potential clients, logging time spent with clients, or doing your expense report on your iPhone (see business apps here).

    If you have a product or service that is marketed on a website, it is likely to work well as an iPhone application. The cost of developing an iPhone app begins at $25,000 and will most likely be higher depending on the app’s complexity. However, depending on the app, these costs can be mitigated through sales of the app in Apple’s App Store or through an upgraded version with more features. Additional revenue can also be gained through advertising in your app via a mobile advertiser such as AdMob.

    TERRALEVER | Niall McSheffrey | Business Development | P 480.347.5343 |
    www.terralever.com