Day 1 at Mozcon 2012

It’s been a couple of years since I last attended Mozcon; three, to be exact. Wow, has it grown! I am always excited to be a part of the community at SEOmoz. Shout out to Charlene from SEOmoz for helping me out with my hotel issue. Alright, let’s get to the good stuff: today was a full day of great speakers.
Here are my top three picks for today:



Paddy Moogan | Distilled

Paddy spoke on 35 Ways To Get Links, covering many simple and some in-depth strategies for earning backlinks to your website. This is why I love coming to conferences: sometimes I get so stuck in routine that I forget about simple strategies that really work. If you have done link building in-house or for a client, you will relate to my love-hate relationship with it.



Wil Reynolds | SEER Interactive
Wil talked about Real Company Sh*t (RCS). Loved it! Loved it! Loved it! To sum it up, Wil talked about the difference between real content that is meaningful and genuine and the link building schemes that give what we do (SEO) a bad name. I absolutely agree with Wil about being inspired by companies that stand for something, and about the marketability of those companies.



Rand Fishkin | SEOmoz
I always love what Rand has to say. Rand kicked off the conference today and talked about the future of the SEOmoz tool set. Not only that, but he added value throughout the day with some great questions for the rest of the speakers.

All in all, I am never disappointed when I attend an SEOmoz conference. More to come tomorrow.

Lindsay Viscount
Owner/Creative Director
[email protected]

If you work in web design or web development, you probably can’t help but snarl when saying “Internet Explorer”. You might also share our aversion to anything IE-related, such as Microsoft’s Bing search engine (which also powers Yahoo search). We have largely ignored Bing in favour of Google, but we keep our minds open to anything that might affect SEO.

Just this month, Bing released the ‘Phoenix’ update for its Webmaster Tools. The interface has been overhauled and optimised for usability. At first glance it is definitely reminiscent of Google Analytics, but Bing’s interface is more straightforward: data is easier to access, with fewer clicks required to reach more in-depth information.

New (beta) features include the Link Explorer, which lets you view the backlinks of any domain (or of any individual page on it), and the SEO Analyzer/SEO Reports, which analyse the on-page SEO of any page from one of your verified domains.

The Bing Webmaster Tools also include a keyword research tool based on organic search results (results that exclude data from adCenter), as well as a built-in markup validator. The validator displays the page you validated as it appears in the browser, with convenient annotations; hovering over an annotation reveals deeper information.

While these tools exist elsewhere on the internet, and many of them are available in Google AdWords or Google Analytics, it is very handy to have them integrated into one SEO suite. As a package, it’s very attractive; attractive enough for me to test it out with our site. Initially, because of a bug, I wasn’t able to view any data. At the time of posting, however, the bug seems to be fixed, and I can really see the value of using Bing Webmaster Tools.

The value of Bing-specific data

Google dominates search; there’s no doubt about that. However, Internet Explorer is the default browser on any new computer running Windows, and since IE has Bing integrated as its default search engine, Bing has a large automatic user base made up of people who either don’t know the difference between search engines or don’t know how to change the default. Even so, Bing is nowhere close to Google in terms of user numbers.

Google has about 60-70% of the market share while Bing has roughly 20-30%. Is it worth doing two sets of keyword research when the keyword research I can do through Google is relevant to 60-70% of searchers rather than the 20-30% using Bing? If keyword research results from Bing are different from Google’s, should I simply ignore the data altogether?

The question on my mind is how relevant the data I’m getting from Bing really is: how similar are Bing and Google in how they rank and index pages? I found a great article on SEOmoz detailing the commonalities between Google and Bing. To summarize their findings, Google and Bing are quite similar, and the two are becoming more similar as time goes on (more likely a case of Bing becoming more like Google than the other way around).

I take the findings as confirmation that it’s not a waste of time to play with Bing’s Webmaster Tools, which makes me happy because I really like the simplicity of its interface and the depth of the data it provides. I wonder if we’ll see some changes from Google Analytics and Google AdWords in an attempt to provide a similar application that merges the two. Exciting!

When I was in school, I learned that having my websites’ code be 100% W3C standards compliant wasn’t just something to brag about; failing at it was something to be ashamed of. It’s an attitude I’ve maintained while working at Longevity Graphics, which has led to some pretty frustrating days. Some of our clients’ sites use CMSs such as WordPress, Drupal, or Perch, each with its own code structure that is not compliant with W3C standards. Even social media widgets such as the Facebook Like button and the Google Plus button don’t validate!

That got me wondering whether validation is truly crucial; specifically, how important is W3C validation for SEO? I did some research, and while I found some contrasting views, W3C validation does not seem to be an important ranking factor, with some caveats.

Out of curiosity, I ran Google’s home page through the W3C validator. The result: 34 errors and 3 warnings. That shouldn’t be surprising, as Matt Cutts himself has said explicitly that W3C validation does not give any site a ranking ‘boost’ (see Matt Cutts’ explanation here). If you don’t want to watch the video, I can paraphrase: Google cares more about loading times, browser compatibility, usability, and content than about validation for the sake of validation. That should be enough to end the debate, but there are a few things to bear in mind even if 100% W3C validation is not an SEO factor in itself.

The first thing to remember is that many of the things W3C validation checks are also important to SEO. For instance, alt attributes on images are mandatory for validation, and using relevant keywords in alt attributes is also important for SEO. So while W3C validation is not directly beneficial to SEO, there is a lot of overlap. Clean code should also help to reduce load times, and page speed is a ranking factor for Google. Faster load times also improve the crawlability of a site (especially important for large, deep sites such as online stores with dozens or hundreds of product pages): smaller files and cleaner code reduce the work search engines do while crawling, so they are less likely to time out before indexing deeply nested pages.
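As a small illustration of that overlap, here is a quick Python sketch (the class name and sample markup are my own, not from any particular tool) that scans a snippet of HTML for img tags missing the alt attribute the validator requires:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

page = '<p><img src="logo.png" alt="Company logo"><img src="banner.jpg"></p>'
checker = MissingAltChecker()
checker.feed(page)
# checker.missing is now ["banner.jpg"]
```

A list like this is a handy starting point for fixing the validation errors that actually matter for SEO.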

The bottom line is that the W3C validator points out a lot of issues that, when fixed, keep the code clean and make it easier for crawlers to index. A couple of validation errors on a page is not the end of the world, but we should take as many steps as possible to increase the crawlability of a site and ensure that all of its content is being indexed.

Have you noticed your site’s rankings drop considerably in the last few weeks? It might have been a result of Google’s latest search engine update, Penguin.

Launched on April 24th, Google’s Penguin update is all about promoting white-hat practices and upholding Google’s quality guidelines; it has little to do with directly improving search results. Penguin is specifically designed to penalize and de-rank sites that have boosted their rankings through grey-hat means. What is grey-hat, you ask?

Simply speaking, grey-hat techniques are SEO techniques intended to boost a site’s ranking without regard for user experience. They include keyword stuffing and link schemes, which have been all too common in the past. If your site’s ranking fell drastically as a result of Penguin, chances are high that you or your SEO person used one of these techniques.

According to Matt Cutts, the Penguin update is intended to reward sites that use white-hat SEO techniques. White-hat SEO is compliant with Google’s quality guidelines and includes things such as making sites load faster, writing informative and unique content, and improving usability. These are judged to be white-hat because they benefit the user as well as the search engines; they focus on building the quality of the site rather than on boosting its rankings.

The bottom line is, grey-hat techniques will not get you results as they once did – they will, in fact, be harmful to your rankings if implemented. Thankfully, there are SEO specialists like Longevity Graphics who hold fast to Google’s quality guidelines and pay close attention to what works and what doesn’t. The Penguin update is about promoting quality on the web, and that’s something we have always believed in.

Embedding WordPress RSS Feeds on an .ASP Website should be simple, RIGHT?

That’s what I thought too. I searched through WordPress forums, Google SERPs, webmaster forums, feed reader/burner software… no luck!

It is simple if you are working in PHP, but ASP is a little more difficult. I found a bunch of JavaScript snippets, but rendering the feed client-side completely defeats the SEO value of the blog feed.

I came across a resource that made it super simple, and I wanted to share it in case anyone else runs into the same issue:

https://bytescout.com/?q=/products/developer/rss2htmlpro/how_to_display_rss_using_asp.html

Worked like a charm and easy to implement. Hope this helps someone else.


What is Google PageRank?

PageRank is a link-analysis algorithm used by the Google search engine. It is based on the premise, prevalent in the world of academia, that the importance of a research paper can be judged by the number of citations the paper has from other research papers. Google has simply transferred this premise to its web equivalent: the importance of a web page can be judged by the number of hyperlinks pointing to it from other web pages.

What affects PageRank?

Simply put, a page’s PageRank is calculated as a sum over all of the pages linking to it: each linking page contributes its own PageRank divided by the number of outgoing links on that page. (The full formula also includes a damping factor, which models a surfer who occasionally jumps to a random page instead of following a link.)
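That calculation can be sketched in a few lines of Python. The four-page link graph below is a made-up example, and 0.85 is the damping factor commonly cited for the original formula:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with equal rank
    for _ in range(iterations):
        new_rank = {}
        for page in pages:
            # Sum each linking page's rank divided by its outgoing link count.
            incoming = sum(
                rank[src] / len(out)
                for src, out in links.items()
                if page in out
            )
            new_rank[page] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
    "contact": ["home"],
}
ranks = pagerank(links)
# "home" ends up with the highest rank: every other page links to it.
```

Notice that “contact”, which nothing links to, ends up with the lowest rank, which is exactly the citation intuition described above.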

Is PageRank important?

It is a common belief that PageRank isn’t important anymore. I, however, find it hard to believe that Google would maintain a PageRank system that isn’t used for something. It is true that websites with low PageRank can rank high within the search results pages for targeted keywords; many different algorithms are at play in determining those rankings.

So what is PageRank for, and how can we leverage it?

Generally, a website’s PageRank can be used as a starting point for gauging its domain authority. Where PageRank plays a big role in SEO is in evaluating potential backlinks to your website: having higher-PageRank domains pointing to your website helps to establish more authority in the eyes of the search engines.

To learn more about leveraging PageRank, and about how an extensive link building strategy can improve your website’s results and increase its traffic, contact one of our internet marketing specialists today.

[email protected]
