Analytics Success Lies in Digesting Related Segments


Our 10th analytics class focused on how to best analyze and understand data.

The top-level message I came away with is that a data metric shouldn’t be evaluated in isolation, because on its own it is meaningless. For example, knowing only that users spent an average of five minutes on a page offers no insight you can use to validate or adjust your strategy. It may be that only users from a paid referral on one day of the year spent that much time on the site, while on other days visitors from organic/social/direct referrals bounced after 10 seconds. That would call for course corrections to improve results, but without looking at the ‘big picture,’ these issues might go undetected.

Specifically, you should assess data metrics:

  • In the context of the organization’s goals, what competitors are doing, industry information, internal initiatives, and external events/trends…
  • Ideally in related segments, such as referral sources (paid/social/organic/direct), days of the week (weekends versus weekdays), times of day, or platforms used to access a site.
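As a rough illustration of why segments matter, here is a minimal sketch (in Python, with made-up visit records, so the sources and numbers are assumptions) that breaks average time on page out by referral source instead of reporting one blended average:

```python
from collections import defaultdict

# Hypothetical visit records: (referral source, seconds spent on page)
visits = [
    ("paid", 300), ("paid", 280), ("organic", 12),
    ("social", 8), ("direct", 10), ("organic", 9),
]

# Accumulate total seconds and visit count per referral source
totals = defaultdict(lambda: [0, 0])  # source -> [total seconds, count]
for source, seconds in visits:
    totals[source][0] += seconds
    totals[source][1] += 1

for source, (secs, count) in sorted(totals.items()):
    print(f"{source}: {secs / count:.0f}s average over {count} visit(s)")
```

The blended average here is roughly 103 seconds, which looks healthy; the segmented view reveals that only paid visitors stay for minutes while every other segment bounces in seconds.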

Sofia cautioned against comparing unrelated data metrics, such as tablet use with social referrals or creating compound or ‘super’ combinations like Alexa page rank with inbound links.

I found it interesting to note that higher-volume data (e.g. 10,000 contact page exits on every 100,000 visits) conveys the same proportion as smaller data (e.g. 1,000 contact page exits out of 10,000 visits), but either is more effective when expressed as a clear outcome, such as 10%, versus a ‘muddy’ ratio like 10,000:100,000 or 1,000:10,000. However, I think higher numbers do strengthen a metric’s value, since a rate measured over a larger sample is statistically more reliable.
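That intuition can be checked with a quick sketch using the standard error of a sample proportion (assuming simple random sampling, which real web traffic only approximates): the exit rate is 10% in both cases, but the larger sample pins it down more tightly.

```python
import math

def proportion_se(p: float, n: int) -> float:
    """Standard error of a sample proportion p measured over n visits."""
    return math.sqrt(p * (1 - p) / n)

small = proportion_se(0.10, 10_000)    # 1,000 exits out of 10,000 visits
large = proportion_se(0.10, 100_000)   # 10,000 exits out of 100,000 visits

# Approximate 95% confidence intervals around the 10% exit rate
print(f"n=10,000:  10% ± {1.96 * small:.2%}")
print(f"n=100,000: 10% ± {1.96 * large:.2%}")
```

The ten-times-larger sample shrinks the margin of error by a factor of about three (square root of ten), so the bigger numbers do make the 10% figure more trustworthy, even though the headline percentage is identical.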

For the class exercise, we re-examined the metrics set for our senior project, identified segments to assess, and considered ways they might be visually depicted.

For my project, here are some KPI segmentation and visualization options:

  • Percentage of conversions by each referral source, such as social (Facebook, Twitter, Reddit), organic, paid and direct in a specific month, compared to non-conversions by referral source in the same month.  (Visual depiction – 2 pie charts or a bar graph)
  • Specific Senior Care Share modules engaged in a specific month by referral sources. (Visual depiction – segmented bar chart)
  • Page views for a specific month, segmented by weekdays versus weekends. (Visual depiction – segmented bar chart, pie chart or even infographic with other metrics)
  • Page views in a specific month, segmented by referral sources. (Visual depiction – segmented bar chart)
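The first segmentation above can be sketched in a few lines; the referral sources are from the list, but all the counts below are hypothetical placeholders, not project data:

```python
# Hypothetical monthly counts per referral source: (conversions, total visits)
by_source = {
    "social":  (45, 1_500),
    "organic": (60, 2_400),
    "paid":    (30,   600),
    "direct":  (15,   500),
}

# Conversion rate within each segment = conversions / visits for that source
for source, (conversions, visits) in by_source.items():
    rate = conversions / visits
    print(f"{source:>7}: {rate:.1%} conversion rate "
          f"({conversions} of {visits} visits)")
```

Percentages computed this way drop straight into the pie charts or bar graph, and they avoid the ‘muddy ratio’ problem of comparing raw counts across sources with very different traffic volumes.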

However, as with all analysis, you should:

  • Find out what’s happening across the organization, such as other initiatives, business changes or help desk calls.
  • Consider external events that might impact data, such as holidays, market trends or even weather/power outages.
  • Possibly do surveys, click density analysis or other research…

…to get the full picture and extract maximum value from your data.

Illustration Source: Haml via Morgue File.

Drilling Down to Social Strategies

Our fourth analytics class focused on the first of two parts on analytics for social media. This first part highlighted Facebook, Twitter and LinkedIn in terms of what users can do on these sites and how you can use these platforms to measure online, as well as offline, initiatives.

As with all analytics, you need to identify your SMART goals, KPIs, metrics and measurement methods in advance to ensure you capture all the relevant data and don’t waste time or lose important stats.

I think it’s interesting to note that if you are running a campaign that includes an offline tactic (e.g. a coupon or redeemable voucher), you can measure its impact online — IF you plan in advance.  The reverse also applies. For example, you can measure the offline response of a Facebook ad by using “offer claims.”

For this week’s in-class exercise, we had to develop a social media strategy for promoting Parks Canada’s “Unplugged” campaign. For this, we had to identify SMART goals and tactics for achieving them, as well as KPIs and metrics to measure progress.  I found this exercise a little confusing because we had just learned that, traditionally, KPIs are metrics that relate directly to your goals and generally impact the business.  Specifically, KPIs traditionally apply to revenue-related metrics (e.g. cost per lead, return on investment) or direct conversions that impact the business (new memberships).  However, I learned that the KPIs for a social media campaign are an exception: they are social, as are the SMART goals that support the campaign (e.g. generating X number of tweets with a specific hashtag).  This is good to note, as ‘when’ I launch my senior project, I will likely do it with a social campaign to drive traffic to the site and ideally generate conversions (member sign-ups).

In the exercise, we had to create a strategy for Twitter, Facebook and LinkedIn. The first two were straightforward, as the campaign was consumer-facing, which both Facebook and Twitter can be.  However, LinkedIn forced us to think ‘outside the box’ as it’s more of a B2B channel.  This was a challenge but good because sometimes clients decide they want to use a specific social media channel and you need to find a way to make it relevant to your marketing needs.

The Heart of Analytic Success: SMART Goals and the Trinity Strategy


For our first analytics class, Sofia provided an overview of what analytics is, its value and how every successful project must start with SMART goals. Sofia defined analytics as the use of data to gain insights and make better decisions. I agree with this but would add that analytics helps you report up to decision-makers on how investment in your initiative has impacted behaviour and user experience to advance business goals, as reflected in outcomes. Analytics also helps you gain executive (or client) buy-in for future projects that build on a prior campaign’s analytics or results.

I think analytics is becoming increasingly important as the volume and variety of information, or ‘big data,’ continuously grows. Sofia defined big data as “a collection of data from traditional and digital sources inside and outside a company,” which companies are increasingly looking at for ongoing discovery and analysis to inform their decisions. This data includes many things we can measure and analyze from digital sources, such as how many people access a website through specific social media platforms and from what countries, as well as traditional sources, such as how much money an event raises or how many people attend it. What I found particularly new was learning the difference between ‘structured’ data, which is quantitative, such as how many visitors access your site via Twitter, and ‘unstructured’ data, which is more qualitative, such as comments posted on your company’s Facebook page.

Before you can apply analytics, you need to set goals for a website or digital property. Sofia explained that you summarize each goal in a sentence that includes Specific, Measurable, Attainable, Realistic and Time-bound (SMART) attributes. This is a slight variation from PR campaigns (which I’m familiar with), where goals are broad and objectives have SMART attributes. In PR, I’m also more used to setting goals and objectives for campaigns with a definitive end point.  In this first class, we did an exercise to identify a website’s goals. The feedback from this exercise gave me the impression that SMART goals can be for an overall website or company (such as the Toronto Star) rather than a set campaign. If that’s the case, how can you make these goals time-bound?  I think you might set them for an initial period, such as three or six months, and then reassess, but it would be good to know for certain.

Sofia told us the grandfather of analytics is Avinash Kaushik, who discusses his theories extensively in his blog: Occam’s Razor.  The blog is named after a 14th-century English logician and one of the alternate translations of his principle is: plurality should not be posited without necessity.  I think this suits analytics because it provides a methodology for specifically identifying what big data an organization needs to measure and why (necessity), versus trying to measure all the data it can access (plurality).

We also learned about Kaushik’s Trinity Strategy. This is a strategic approach to extract insights and metrics from a website/other platform, based on the users’ behaviour and experience, as well as the overall outcomes.  It’s imperative that these insights and metrics can be ‘acted on,’ that is used to make decisions that alter the organization’s approach or to design future initiatives.

For example, let’s say you implement a promotional campaign to sell featured books highlighted on a page in your ecommerce site. You also promote it through direct mail flyers and social media. Your goal may be to sell 50 copies of each featured book within one month. The overall process begins with the clickstream data, which is the data collected through the site. It can include who is accessing the book page, from where, via what devices or user paths, etc. From this data, you can measure and assess users’ behaviour in response to the promotion, such as:

  • How many came to the site via each social media platform?
  • How many were driven there by the flyer?
  • Which book images or captions attracted users when they first landed on the page, and which made no impact for the duration (as measured by a heat map)?

You then measure outcomes, such as how much revenue was generated through specific online book sales. The third element is the experience, which tells you why users behaved the way they did. My understanding is this data is often accessed through additional steps, such as user testing to measure the effectiveness of user paths, experimenting with site changes (e.g. trying book purchasing buttons in different positions) and customer/user surveys to assess how they feel about the site. These three elements, combined with competitive intelligence, help you uncover insights about what is attracting users to the site and getting them to buy the books, as well as what changes might improve outcomes. For example, in assessing this data, you may want to adjust the page layout if users consistently miss a specific book displayed.
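The behaviour and outcome halves of that book-campaign example can be sketched from raw clickstream-style records; every event, source name, and dollar figure below is invented for illustration:

```python
from collections import Counter

# Hypothetical clickstream events from the featured-books page:
# (referral source, book purchased or None, sale amount)
events = [
    ("facebook", "Book A", 20.0),
    ("flyer",    None,      0.0),
    ("twitter",  "Book B", 15.0),
    ("flyer",    "Book A", 20.0),
    ("facebook", None,      0.0),
]

# Behaviour: where visitors came from (social platforms, the mailed flyer, etc.)
visits_by_source = Counter(source for source, _, _ in events)

# Outcomes: revenue generated and copies sold per featured book
revenue = sum(amount for _, book, amount in events if book)
copies = Counter(book for _, book, _ in events if book)

print("Visits by source:", dict(visits_by_source))
print("Copies sold:", dict(copies), "| revenue:", revenue)
```

The third Trinity element, experience, is the part a tally like this cannot produce on its own, which is why it typically comes from the extra steps above: user testing, A/B-style experiments and surveys.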