
Jing is free software that adds visuals to your online conversations

Jing, by TechSmith, is a tool that has captured my attention. It is a simple-to-use tool for instant sharing. It can be used for screen shots, videos, and callouts, and to show people what you are seeing on your screen with the click of a button via a linking URL.  Much better than the process of screen shot, save to Word, etc.  It sits on your desktop as a yellow half-moon, ready for when you want to capture, with three choices: capture, history and more.

Their website says it is for adding visuals to your online conversation. There is a free version and a paid version that adds video. With the free version you also get 2 GB of storage on Screencast.com, which is a nice benefit.

Here are a few ways they suggest using Jing:

  • Collaborate on a design project
  • Share a snapshot of a document
  • Narrate your vacation photos
  • Capture that pesky bug in action
  • Show Dad how to use iTunes
  • Comment verbally on students’ homework
  • Post tidbits from your life on Twitter or Facebook

What I also like is that it is integrated with their other, more complex programs.

Screencast.com.  It not only provides free storage and instant sharing of your Jing content; it’s also a super place to share your high-quality videos, images, presentations, and all manner of digital content.

 

Snagit.  This Swiss-army knife of screen capture picks up where Jing leaves off: powerful image editing, scrolling window capture, cursor capture, tagging, search, and rugged good looks.

Camtasia Studio.  Jing’s big brother on the video side, and the ideal choice for creating longer, more polished screen videos. Camtasia Studio can edit your Jing videos, too.


LinkedIn Strategy for how you connect: LION, Turtle, Hound Dog or Alley Cat.

Have you ever wondered why some people put [LION] after their name on LinkedIn?  Most CMOs are still trying to avoid LinkedIn except for people they know or want to know.  For others, there are strategies to how, what and why.  For myself, I found out I am more of an Alley Cat.  I like to connect, and do it often, but I want and need a reason for the contact. I do turn down people I do not know.  So let’s try an experiment: here is my LinkedIn profile, http://www.linkedin.com/in/karenkallet.  If you tell me that you came via this CMO Guide to Social Media blog, I will accept you.  However, tell me why you want the connection.  That is the Alley Cat in me.

This post from the Community Marketing Blog lays out the why and what.  You need to decide which one you are, and know that if you see a tag on someone’s name, this is what it means.

 Are You a LION, Turtle, HoundDog, or Alley Cat? What’s Your LinkedIn Strategy?

LinkedIn is a fairly harmonious place.  People tend to act professionally, and when there are opposing opinions it typically becomes a case where people “agree to disagree”.  Things change, though, when you begin discussing LIONs.  Suddenly the conversation isn’t so rosy.

LIONs, for those who don’t know, are open networkers.  They connect to just about anyone.  They see opportunity increasing as the number of connections increases.  Those who disagree see LIONs as simply feeding their egos by counting connections, as if the purpose of LinkedIn is to proudly claim to have thousands of connections.

For the record, I don’t consider myself a LION, yet I’m an open networker.  When writing my first LinkedIn book I identified three LinkedIn connection strategies.  This year I added a fourth to define how I now connect.

How you choose to connect will impact how you use LinkedIn and in the end your chances of finding success.

Before we look at the four connection strategies I want to make one point.  How you choose to connect on LinkedIn should be of no concern to anyone else.  It’s your network and your strategy.  As long as it works for you, that’s all that matters.

The Four LinkedIn Connection Strategies:

The LION
As stated above, LIONs are completely open connectors.  They seek to increase their connections by actively sending out and accepting connection invitations.  While I’m sure there are a few who take pride in touting the specific number, the majority simply believe that large networks lead to more opportunity.

Steve Burda is a LION with over 30,000 connections.  I don’t know Steve, but I’ve seen countless references to his taking time to help others.  So yes, he has a large network, but no, it’s not about the number.  It’s about having the opportunity to help a significant number of people.  If this leads to new business for him, more power to him.

The Turtle
Turtles are the opposite of LIONs.  Turtles primarily connect only to those they know well.  They see value in having a tight network made up of individuals that they completely trust.  Their networks tend to be highly selective and can be counted on to pass on introductions, much like a private networking group.

I don’t know many Turtles, but the ones I do know are, like Steve, interested in being a productive resource for those they choose to connect to.  LinkedIn is a way to enhance their offline networking, making their existing relationships a little more connected.

The Hound Dog
When I first joined LinkedIn I was only aware of LIONs.  I knew right away that LinkedIn added an additional layer of connectivity to those I knew.  I also realized that it could help me meet other local business professionals that I did not know.

At each Chamber meeting they would pass out copies of everyone’s business cards.  After each meeting I would see who was on LinkedIn and then invite them to connect.  At the next Chamber meeting the connection provided a great ice breaker.  It also established connections with those people who only attended a single meeting.

I also used LinkedIn to seek out people I would like to connect with.  Doing this allowed me to establish connections with other business professionals who might help my clients, become a referral partner, and some who were prospects.  This ability to hunt for specific people led me to define the strategy as a Hound Dog.

A Hound Dog is someone who uses LinkedIn to connect to those they know, to connect to those they would like to know, and to accept invitations from those who would be beneficial connections.

For the first year that I was serious about using LinkedIn I followed this strategy.  Then one day I had a thought, “How do I know whether or not a connection I know could benefit from a connection that I didn’t know?”   The answer was that I didn’t know.

It was at this point that I changed my strategy for connecting on LinkedIn.

The Alley Cat
I still only send invitations to people I know or people that I have a specific reason for connecting to.  What changed is that I now accept invitations from just about anyone.  There is value in knowing your connections but there are also unexpected opportunities that develop from establishing new connections, known and unknown.

This connection strategy supports my overall LinkedIn strategy, which is this:  I seek to provide value to and help as many people as possible.  Much of that value is provided through the Social Media Sonar blog, sharing tips and strategies with others on how to more effectively utilize LinkedIn and social media/networking.  Sometimes it’s through being the hub that connects two people.  At other times it’s through conducting workshops, writing LinkedIn books and guides, etc.  The more people I am connected to, the more people I can share with.

I believe that to create opportunity you have to first be willing to help others.  Then, by consistently sharing value over time, you allow people to move through the Process of Familiarity, a process that has to happen before someone will choose to do business with you.

What I call the Process of Familiarity likely has been called many things by other people.   The three components are:

  • People need to Know You or at a Minimum Know Of You:  Often connecting or engaging in conversations will accomplish this.
  • People Must Like You or Have a Positive Opinion:  How you interact with others and the value of the content you share will help here.  If people like your content they will like you.
  • People Must Trust You:  Building trust depends on engaging in conversations and sharing value consistently over time.  As people see you on an ongoing basis and are exposed to the value you share, the “Like” will grow into “Trust”.

Through this process, here’s what I’ve seen happen.  Each week I write one or two blog posts that show people how to utilize LinkedIn.  I then use the tools LinkedIn provides to communicate that there is a new blog post.  People visit the blog for the first time or as repeat visitors.  At some point they check out my profile, learn what it is that I do, and see how I can help them.

If they like the content they begin to have a positive impression of me and this eventually moves to a sense of trust.  At this point if they ever have a need for my services I am top of mind and they will contact me.

Something else happens as well.  People like to share content on other Social Media sites so at some point they become my social media amplification system.  This introduces my blog to people outside of the communities I’ve built.

Wrap Up
The connection strategy you choose will depend upon how you want to use LinkedIn.  There is no right or wrong choice as long as your connection strategy supports the goals you have determined.  For me the change to an Alley Cat has helped generate 3 to 5 contacts per week about my services.

Which strategy are you using and why?  If you agree or disagree with the post please leave a comment.  Your perspective is as important as mine, so share it with everyone.

Sean Nelson is the author of the Social Media Sonar blog and has written three LinkedIn eBooks, including one of the first books detailing how to strategically use LinkedIn to grow your business, “LinkedIn Marketing Secret Formula”.  He is a Partner in SONARconnects.

Creating a sitemap on Google, Bing, Yahoo and Ask.

An XML Sitemap—or sitemap—is a list of the pages on your website, and each search engine needs it to its own specifications and requirements.  Creating and submitting a sitemap helps make sure that Google, Yahoo, Bing and Ask know about all the pages on your site, including URLs that may not be found by the normal crawling process.  Having a sitemap for your website is key to getting crawled by the various search engine bots that I wrote about in a previous blog post on the 5 kinds of search bots.  Your webmaster or agency should have all this information at their fingertips; if they do not, you may not be getting the search engines you want coming to your site.

Knowing what to ask, beyond “do we have a sitemap?” or “do we have an XML feed?”, requires this knowledge.  Ask whether they have done all the formatting needed to make the XML sitemap work on all the search engines.  Having at your fingertips what each search engine requires is priceless.
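Whatever tool generates it, the file itself is simple. A minimal sitemap in the sitemaps.org XML format might look like the following; the URL, date and change frequency are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- One <url> entry per page you want the crawlers to know about -->
    <loc>http://www.example.com/</loc>
    <lastmod>2010-01-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```

Only the `<loc>` element is required; the others are hints that crawlers may or may not honor.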

Here is the information I use from each of the search engines on what they require and how to submit:

____________________________________________________________________

Creating and submitting sitemaps to Google

The Google site says:

“About Sitemaps

Sitemaps are a way to tell Google about pages on your site we might not otherwise discover. In its simplest terms, a XML Sitemap—usually called Sitemap, with a capital S—is a list of the pages on your website. Creating and submitting a Sitemap helps make sure that Google knows about all the pages on your site, including URLs that may not be discoverable by Google’s normal crawling process.

In addition to regular Sitemaps, you can also create Sitemaps designed to give Google information about specialized web content, including video, mobile, News, Code Search, and geographical (KML) information.

When you update your site by adding or removing pages, tell Google about it by resubmitting your Sitemap. Google doesn’t recommend creating a new Sitemap for every change.”
______________________________________________________________________________

Bing Webmaster Tools

To submit an XML-based Sitemap to Bing:

Step 1: Copy and paste the entire URL below as a single URL into the address bar of your browser:

     http://www.bing.com/webmaster/ping.aspx?sitemap=www.YourWebAddress.com/sitemap.xml

Step 2: Change “www.YourWebAddress.com” to your domain name

Step 3: Press ENTER
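If you prefer to build the ping address programmatically, here is a small sketch. The endpoint is taken from the steps above; the sitemap location is a placeholder, and percent-encoding keeps the sitemap URL's own characters from breaking the query string:

```python
from urllib.parse import quote

# Bing's sitemap ping endpoint, from the steps above.
BING_PING = "http://www.bing.com/webmaster/ping.aspx?sitemap="

def bing_ping_url(sitemap_url):
    """Build the full ping URL for a sitemap location.

    The sitemap URL is percent-encoded so that any ':', '/', '?'
    or '&' characters inside it survive as part of the query value.
    """
    return BING_PING + quote(sitemap_url, safe="")

print(bing_ping_url("http://www.example.com/sitemap.xml"))
```

Opening the resulting address in a browser (Step 3 above) performs the ping.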

_________________________________________________________________________

How Yahoo! supports sitemaps

“How Yahoo! Supports Sitemaps

Yahoo! supports the Sitemaps format and protocol as documented on www.sitemaps.org .

You can provide us a feed in the following supported formats. We recognize files with a .gz extension as compressed files and decompress them before parsing.

  • RSS 0.9, RSS 1.0, or RSS 2.0, for example, CNN Top Stories
  • Sitemaps, as documented on  www.sitemaps.org
  • Atom 0.3, Atom 1.0, for example, Yahoo! Search Blog
  • A text file containing a list of URLs, each URL at the start of a new line. The filename of the URL list file must be urllist.txt. For a compressed file the name must be urllist.txt.gz.
  • Yahoo! supports feeds for mobile sites. Submitting mobile Sitemaps directs our mobile search crawlers to discover and crawl new content for our mobile index. When submitting a feed that points to content on the mobile web, please indicate whether the encoding of the content was done with xHTML or WML.

    In addition to submitting your Sitemaps to Yahoo! Search through Yahoo! Site Explorer, you can:

  • Send an HTTP request. To send an HTTP request, please send using our ping API.
  • Specify the Sitemap location in your site’s robots.txt file. This directive is independent of the user-agent line, so you can place it wherever you like in your file.
  • Yahoo! Search will retrieve your Sitemap and make the URLs available to our web crawler. Sitemaps discovered by these methods cannot be managed in Site Explorer and will not show in the list of feeds under a site.

    Note: Using Sitemap protocol supplements the other methods that we use to discover URLs. Submitting a Sitemap helps Yahoo! crawlers do a better job of crawling your site. It does not guarantee that your web pages will be included in the Yahoo! Search index.”
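The robots.txt option mentioned above is a single line, which can appear anywhere in the file; the sitemap location shown here is a placeholder:

```
Sitemap: http://www.example.com/sitemap.xml
```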

    ___________________________________________________________________

    Ask.com webmaster FAQ

    “Web Search

    The Ask.com search technology uses semantic and extraction capabilities to recognize the best answer from within a sea of relevant pages. Instead of 10 blue links, Ask delivers the best answer to a user’s questions right at the top of the page. By using an established technique pioneered at Ask, our search technology uses click-through behavior to determine a site’s relevance and extract the answer. Unlike presenting text snippets of the destination site, this technology presents the actual answer to a user’s question without requiring an additional click through. Underpinning these advancements are Ask.com’s innovative DADS, DAFS, and AnswerFarm technologies, which break new ground in the areas of semantic search, web extraction and ranking. These technologies index questions and answers from numerous and diversified sources across the web. Ask then applies its semantic search technology advancements in clustering, rephrasing, and answer relevance to filter out insignificant and less meaningful answer formats. In order to extract and rank answers, as opposed to merely ranking web pages, Ask.com continues to develop unique algorithms and technologies that are based on new signals for evaluating relevancy, specifically tuned to questions.

    The Ask Website Crawler FAQ

    Ask’s Website crawler is our Web-indexing robot (or crawler/spider). The crawler collects documents from the Web to build the ever-expanding index for our advanced search functionality at Ask and other Web sites that license the proprietary Ask search technology.

    Ask search technology is unique from any other search technology because it analyzes the Web as it actually exists — in subject-specific communities. This process begins by creating a comprehensive and high-quality index. Web crawling is an essential tool for this approach, and it ensures that we have the most up-to-date search results.

    On this page you’ll find answers to the most commonly asked questions about how the Ask Website crawler works. For these and other Webmaster FAQs, visit our Searchable FAQ Database.

    Frequently Asked Questions

    1. What is a website crawler?
    2. Why does Ask use a website crawler?
    3. How does the Ask crawler work?
    4. How frequently will the Ask Crawler index pages from my site?
    5. Can I prevent Teoma/Ask search engine from showing a cached copy of my page?
    6. Does Ask observe the Robot Exclusion Standard?
    7. Can I prevent the Ask crawler from indexing all or part of my site/URL?
    8. Where do I put my robots.txt file?
    9. How can I tell if the Ask crawler has visited my site/URL?
    10. How can I prevent the Ask crawler from indexing my page or following links from a particular page?
    11. Why is the Ask crawler downloading the same page on my site multiple times?
    12. Why is the Ask crawler trying to download incorrect links from my server? Or from a server that doesn’t exist?
    13. How did the Ask Website crawler find my URL?
    14. What types of links does the Ask crawler follow?
    15. Can I control the rate at which the Ask crawler visits my site?
    16. Why has the Ask crawler not visited my URL?
    17. Does Ask crawler support HTTP compression?
    18. How do I register my site/URL with Ask so that it will be indexed?
    19. Why aren’t the pages the Ask crawler indexed showing up in the search results?
    20. Can I control the crawler request rate from Ask spider to my site?
    21. How do I authenticate the Ask Crawler?
    22. Does Ask.com support sitemaps?
    23. How can I add Ask.com search to my site?
    24. How can I get additional information?

     

    Q: What is a website crawler?
    A: A website crawler is a software program designed to follow hyperlinks throughout a Web site, retrieving and indexing pages to document the site for searching purposes. The crawlers are innocuous and cause no harm to an owner’s site or servers.

    Q: Why does Ask use website crawlers?
    A: Ask utilizes website crawlers to collect raw data and gather information that is used in building our ever-expanding search index. Crawling ensures that the information in our results is as up-to-date and relevant as it can possibly be. Our crawlers are well designed and professionally operated, providing an invaluable service that is in accordance with search industry standards.

    Q: How does the Ask crawler work?

    • The crawler goes to a Web address (URL) and downloads the HTML page.
    • The crawler follows hyperlinks from the page, which are URLs on the same site or on different sites.
    • The crawler adds new URLs to its list of URLs to be crawled. It continually repeats this function, discovering new URLs, following links, and downloading them.
    • The crawler excludes some URLs if it has downloaded a sufficient number from the Web site or if it appears that the URL might be a duplicate of another URL already downloaded.
    • The files of crawled URLs are then built into a search catalog. These URL’s are displayed as part of search results on the site powered by Ask’s search technology when a relevant match is made.

     

    Q: How frequently will the Ask Crawler download pages from my site?
    A: The crawler will download only one page at a time from your site (specifically, from your IP address). After it receives a page, it will pause a certain amount of time before downloading the next page. This delay time may range from 0.1 second to hours. The quicker your site responds to the crawler when it asks for pages, the shorter the delay.

    Q. Can I prevent Teoma/Ask search engine from showing a cached copy of my page?
    A: Yes. We obey the “noarchive” meta tag. If you place the following command in your HTML page, we will not provide an archived copy of the document to the user.

    <META NAME="ROBOTS" CONTENT="NOARCHIVE">

    If you would like to specify this restriction just for Teoma/Ask, you may use “TEOMA” in place of “ROBOTS”.

    Q: Does Ask observe the Robot Exclusion Standard?
    A: Yes, we obey the 1994 Robots Exclusion Standard (RES), which is part of the Robot Exclusion Protocol. The Robots Exclusion Protocol is a method that allows Web site administrators to indicate to robots which parts of their site should not be visited by the robot. For more information on the RES, and the Robot Exclusion Protocol, please visit http://www.robotstxt.org/wc/exclusion.html.

    Q: Can I prevent the Ask crawler from indexing all or part of my site/URL?
    A: Yes. The Ask crawler will respect and obey commands that direct it not to index all or part of a given URL. To specify that the Ask crawler visit only pages whose paths begin with /public, include the following lines:

    # Allow only specific directories
    User-agent: Teoma
    Disallow: /
    Allow: /public

     

    Q: Where do I put my robots.txt file?
    A: Your file must be at the top level of your Web site, for example, if http://www.mysite.com is the name of your Web site, then the robots.txt file must be at http://www.mysite.com/robots.txt.

    Q: How can I tell if the Ask crawler has visited my site/URL?
    A: To determine whether the Ask crawler has visited your site, check your server logs. Specifically, you should be looking for the following user-agent string:

    User-Agent: Mozilla/2.0 (compatible; Ask Jeeves/Teoma)

     

    Q: How can I prevent the Ask crawler from indexing my page or following links from a particular page?
    A: If you place the following command in the section of your HTML page, the Ask crawler will not index the document and, thus, it will not be placed in our search results:

    <META NAME="ROBOTS" CONTENT="NOINDEX">

    The following commands tell the Ask crawler to index the document, but not follow hyperlinks from it:

    <META NAME="ROBOTS" CONTENT="NOFOLLOW">

    You may set all directives OFF by using the following:

    <META NAME="ROBOTS" CONTENT="NONE">

    See http://www.robotstxt.org/wc/exclusion.html#meta for more information.

    Q: Why is the Ask crawler downloading the same page on my site multiple times?
    A: Generally, the Ask crawler should only download one copy of each file from your site during a given crawl. There are two exceptions:

    • A URL may contain commands that “redirect” the crawler to a different URL. This may be done with the HTML command:
      <META HTTP-EQUIV="REFRESH" CONTENT="0; URL=http://www.your page address here.html">

      or with the HTTP status codes 301 or 302. In this case the crawler downloads the second page in place of the first one. If many URLs redirect to the same page, then this second page may be downloaded many times before the crawler realizes that all these pages are duplicates.

    • An HTML page may be a “frameset.” Such a page is formed from several component pages, called “frames.” If many frameset pages contain the same frame page as components, then the component page may be downloaded many times before the crawler realizes that all these components are the same.

     

    Q: Why is the Ask crawler trying to download incorrect links from my server? Or from a server that doesn’t exist?
    A: It is a property of the Web that many links will be broken or outdated at any given time. Whenever any Web page contains a broken or outdated link to your site, or to a site that never existed or no longer exists, Ask will visit that link trying to find the Web page it references. This may cause the crawler to ask for URLs which no longer exist or which never existed, or to try to make HTTP requests on IP addresses which no longer have a Web server or never had one. The crawler is not randomly generating addresses; it is following links. This is why you may also notice activity on a machine that is not a Web server.

    Q: How did the Ask Website crawler find my URL?
    A: The Ask crawler finds pages by following links (HREF tags in HTML) from other pages. When the crawler finds a page that contains frames (i.e., it is a frameset), the crawler downloads the component frames and includes their content as part of the original page. The Ask crawler will not index the component frames as URLs themselves unless they are linked via HREF from other pages.

    Q: What types of links does the Ask crawler follow?
    A: The Ask crawler will follow HREF links, SRC links and re-directs.

    Q. Can I control the rate at which the Ask crawler visits my site?
    A. Yes. We support the “Crawl-Delay” robots.txt directive. Using this directive you may specify the minimum delay between two successive requests from our spider to your site.
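For illustration, a robots.txt entry using this directive might look like the following; the delay value (in seconds) is an example:

```
User-agent: Teoma
Crawl-delay: 10
```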

    Q: Why has the Ask crawler not visited my URL?
    A: If the Ask crawler has not visited your URL, it is because we did not discover any link to that URL from other pages (URLs) we visited.

    Q: Does Ask crawler support HTTP compression?
    A: Yes, it does. Both HTTP client and server should support this for the HTTP compression feature to work. When supported, it lets webservers send compressed documents (compressed using gzip or other formats) instead of the actual documents. This would result in significant bandwidth savings for both the server and the client. There is a little CPU overhead at both server and client for encoding/decoding, but it is worth it. Using a popular compression method such as gzip, one could easily reduce file size by about 75%.

    Q: How do I register my site/URL with Ask so that it will be indexed?
    A: We appreciate your interest in having your site listed on Ask.com and the Ask.com search engine. Your best bet is to follow the open-format Sitemaps protocol, which Ask.com supports. Once you have prepared a sitemap for your site, add the sitemap auto-discovery directive to robots.txt, or submit the sitemap file directly to us via the ping URL. (For more information on this process, see Does Ask.com support sitemaps?) Please note that sitemap submissions do not guarantee the indexing of URLs.

    Create your Web site and set up your Web server to optimize how search engines look at your site’s content, and how they index and trigger based upon different types of search keywords. You’ll find a variety of resources online that provide tips and helpful information on how to best do this.

    Q: Why aren’t the pages the Ask crawler indexed showing up in the search results at Ask.com?
    A: If you don’t see your pages indexed in our search results, don’t be alarmed. Because we are so thorough about the quality of our index, it takes some time for us to analyze the results of a crawl and then process the results for inclusion into the database. Ask does not necessarily include every site it has crawled in its index.

    Q: Can I control the crawler request rate from Ask spider to my site?
    A: Yes. We support the “Crawl-Delay” robots.txt directive. Using this directive you may specify the minimum delay between two successive requests from our spider to your site.

    Q. How do I authenticate the Ask Crawler?
    A: User-Agent is no guarantee of authenticity, as it is trivial for a malicious user to mimic the properties of the Ask Crawler. In order to properly authenticate the Ask Crawler, a round-trip DNS lookup is required. This involves first taking the IP address of the Ask Crawler and performing a reverse DNS lookup, ensuring that the IP address belongs to the ask.com domain. Then perform a forward DNS lookup with the host name, ensuring that the resulting IP address matches the original.
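As a sketch of that round-trip check, here is a small Python function. The resolver functions are injectable so the logic can be exercised without live DNS; the IP addresses and host names used in the example are hypothetical:

```python
import socket

def is_ask_crawler(ip, reverse=None, forward=None):
    """Round-trip DNS check as described above.

    1. Reverse-resolve the IP to a host name.
    2. Confirm the host is in the ask.com domain.
    3. Forward-resolve the host and confirm it maps back to the IP.

    `reverse` and `forward` default to real DNS lookups but can be
    replaced with stand-ins for offline testing.
    """
    reverse = reverse or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward = forward or socket.gethostbyname
    try:
        host = reverse(ip)
    except OSError:
        return False  # no reverse record at all
    if not (host == "ask.com" or host.endswith(".ask.com")):
        return False  # reverse record outside the ask.com domain
    try:
        return forward(host) == ip  # must map back to the original IP
    except OSError:
        return False
```

The same pattern works for authenticating other crawlers; only the domain check changes.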

    Q: Does Ask.com support sitemaps?
    A: Yes, Ask.com supports the open-format Sitemaps protocol. Once you have prepared the sitemap, add the sitemap auto-discovery directive to robots.txt as follows:

    SITEMAP: http://www.the URL of your sitemap here.xml

    The sitemap location should be the full sitemap URL. Alternatively, you can also submit your sitemap through the ping URL:

    http://submissions.ask.com/ping?sitemap=http%3A//www.the URL of your sitemap here.xml

    Please note that sitemap submissions do not guarantee the indexing of URLs. To learn more about the protocol, please visit the Sitemaps web site at http://www.sitemaps.org.

    Q: How can I add Ask.com search to my site?
    A: We’ve made this easy, you can generate the necessary code here.

    Q: How can I get additional information?
    A: Please visit our full Searchable FAQ Database.”

    Do free promotion offers work? Follow the line to Dennys

    Here is a photo that I took of the Denny’s Free breakfast crowd from the road as I was leaving today.

    By the looks of things at my local Denny’s, the free breakfast promotion announced during the Super Bowl was a sellout.  I drove into the parking lot to go to another business and could not find a place to park in three parking lots!  The crowds were pouring out of their cars like they were going to the state fair.  The line was out the door and around the building.

    So give them credit for gaining trial.  Customer engagement at its finest.  Customers using texts, blogs and emails all about the promotion is a social media strategist’s dream.  What about retention?  I am not so sure.  Quality service is in the hands of the wait staff and cooks.  Will people stand in line for free food? Yes, they will.  Will they come back and pay? Denny’s will need to report on that.  ROI? Priceless.

    Trend Watch: Channels – SlideShare adds Content Channels.

    SlideShare is adding Content Channels, and this is a trend to get on board with now.  SlideShare is an online PowerPoint/slide show program that you can use for presentations and conference calls, feed to your blogs or to LinkedIn, and make available for organic search from the search engines.  This is sort of podcasting made easy for static presentations.  SlideShare is also a great place to do research on a subject, since many speakers put their presentations on SlideShare and you are allowed to download them.

    Here is the info on their new launch and an explanation of how some are using it, from their newsletter.

    “Today we added Channels – the newest kid on the SlideShare block. Channels are branded spaces on SlideShare for companies and brands. They provide an extra bit of oomph for companies with great content. 

    • If you are looking for interesting content, then go to the Channels section to find interesting channels to browse. We will feature those with the most interesting content. Some of the early channels are:
      • Microsoft Office has set up a channel focused on parenting topics (a project done in collaboration with our partners, Federated Media)
      • Ogilvy has set up a concept channel for Pharma
      • Razorfish Marketing uploads about interactive marketing & technology
      • Pew Internet has shared a lot of their research reports about the internet & internet usage
      • Whitehouse is sharing almost 1,000 presentations and documents
    • We’re also rolling out topical Channels that are curated by our content team. For example, channels on Cloud Computing and Social Media. You’ll see more of them in the coming months as we roll out new topics.

    If your company or organization is interested in channels, then get in touch by going here.”

    Free serif typefaces revealed on the Prisma Graphic blog.

    Looking at where the dollars go in your budget may leave you feeling you need more control. Your designers are always looking for new typefaces, and I bet if you check the budget, you are paying for them. PrismaGraphic.com, a top-line printer, does a great blog on industry tips. Check it out at the Prisma Graphic blog. I subscribe, and this post caught my eye:

    Huge Collection of Free Serif Fonts for Graphic Design

    February 4, 2010

    We understand that the graphic designers that we work with are always looking for new fonts.  In typography, serifs are semi-structural details on the ends of some of the strokes that make up letters and symbols. We have put together a list of some of the best serif fonts on the web.

    20 Classy Free Serif Fonts by D-List

    30 Extremely Elegant Serif Fonts by Webdesign Ledger

    20+ Beautiful Free Serif Fonts by Think Design Blog

    10 Free Serif Fonts by MyInkBlog

    Tapping into Mommy Bloggers to promote your product or services.

    Mommy Bloggers are the new alpha females of the marketing opinion world. An entire industry has grown up around them, with conventions where the Mommy Bloggers meet, exchange ideas and, even more important, promote their blogs to companies and marketers: to do product reviews, sell advertising, hold promotions and drive business for the companies that tap into this blogging marketing venue.

    There are two upcoming Mommy Blogging conventions you may want to check out: BlissDom ’10, Feb 4-6 in Nashville (which is already a sellout), and the blogger conference BlogHer, which is in NYC Aug 5-7. These conferences have big-time packaged goods sponsors all trying to woo the Mommy Bloggers. Sponsors on the BlissDom website include: ConAgra Foods, Orville Redenbacher, Hunt’s, Healthy Choice, Hebrew National, Arnold, Blue Bunny, Hallmark, Invisalign, iRobot, Lands’ End, Lifetime Moms, Red Plum.

    Mommy Bloggers sell sponsorships of themselves to represent you at the conference. The ads say: “Why I am the blogger your company should sponsor for BlissDom ’10.” It is done through blog posts as ads that will tell you things like: Blogger with 2,000+ subscribers seeks sponsorship to…

    The Mommy Bloggers use ad badges provided by the conference to show that they are looking for a sponsor, like the one shown here for the BlissDom ’10 conference. It is a way for them to get to the conference with their costs paid, and for you to get your product known in the Mommy Blogging world.

    The bloggers who go to these conferences are from all walks of life and have a wide range of interests. They go to network, attend seminars, and promote their blogs and Twitter accounts, as well as write posts promoting the sponsors who helped pay to get them there.

    Their blogs have a female-focused target: homemaking, babies, or products, all subjects aimed at women readers. Some have advertising on their blogs at cheap rates. Some show you what they have done for other companies; some have HTML badges for their blog that can be taken and promoted on other websites or blogs. The Mommy Bloggers do contests and sweepstakes, coupon codes and deals, product reviews, tips and recipes. A really nice business model, from their kitchen counter.
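    As a sketch of what one of those embeddable HTML badges typically looks like (the image URL, link and sizes here are made up for illustration; a real badge would use the code supplied by the conference or blogger):

    ```html
    <!-- Hypothetical conference badge: a linked image you paste into
         your blog sidebar. Swap in the real URLs from the badge code
         the conference or blogger provides. -->
    <a href="http://www.example-conference.com/">
      <img src="http://www.example-conference.com/badges/sponsor-me.jpg"
           alt="Sponsor me for the conference" width="125" height="125" />
    </a>
    ```

    Because the badge is just a link wrapped around an image, anyone who grabs the snippet and pastes it into their own blog is promoting the badge owner with a single click-through.
    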

    The BlogHer conference has been held for 5 years on the West Coast and Midwest and is now coming to NYC. Bloggers like Domestic Debacle, whose badge is shown on the left, promote that they are looking to represent you. As a disclosure, they did a promotion of a product called Fridge-It that I am involved in, and that is what got me interested in the subject of Mommy Blogging. The connection was initially made in the Twitter world and then moved into the blogging world.

    The BlogHer advertising materials on the website say that they reach more than 16 million unique women each month and state that BlogHer is the leading participatory news, entertainment and information network for women online.

    Here are some Mommy Blogger examples that will show and explain the concept. There are many, many more, and if you are a Mommy Blogger and want to add your blog to this list, please comment and give some info on your Mommy blog.

    Mommy Blogger examples: Thriftymaven.com, Domesticdebacle.com, Abusymommy.com, Thepennypinchingmama.blogspot.com

    Can I hear a wOOt?! BTW: What does wOOt mean? “W00t” was originally a truncated term used by players of the Dungeons and Dragons role-playing game for “Wow, loot!” The term was picked up by web culture in video game communities; it has since lost its original meaning and is used simply as an expression of excitement. Sort of the new Ya Hoo….