Thursday, October 20, 2016

6 Cross-Channel Tactics That Successful Marketers Use That Most Don't

Orchestrating highly targeted and individualised marketing campaigns for your prospects and customers takes more than just great creative. It takes a cohesive team working together to deliver the optimal cross channel experience.



These are the tactics that the best marketers in the world employ and that, quite frankly, others do not.



1. Focus on technology as a way to enable good people to deliver great results, rather than just buying technology to try and solve problems (many of which are organisational or knowledge-based problems). The latter can result in lots of technology that goes unused or overlaps, and when the challenges still persist, the technology is blamed and replaced.



2. Have someone on the payroll whose job title ends in “Analyst” – this indicates that they are serious enough about gaining insight from data to drive their strategy forward, rather than relying on common sense and guesswork. They don't just follow “best practices”; they write their own. On top of that, there is no directly attributable “revenue” performance attached to analysis, so investing in it means you and your business understand its value.



3. Invest time in training and development of their teams – at a minimum, 10% of a team member's time should be dedicated to training and development. The fact is that cloud technology evolves very fast and you need to keep up. You also need to utilise as much of your technology investment as possible.



4. Build organisational bridges – use methods that improve ways of working between teams, such as virtual organisations which bring people together with common skills (e.g. copywriting, SQL data manipulation, HTML technical skills) to share ideas. We work in silos by default because they are the easiest model, and instead of trying to destroy that default, successful marketers are finding collaboration methods that work alongside a silo-based hierarchy.



5. Utilise their data assets beyond “single use data” – when a consumer browses a product online but doesn't buy it, of course you should trigger a behavioural program to try and turn that into a conversion. But the smartest marketers are using data aggregated over many sessions and from many sources to build an implied behavioural profile that can be used far beyond a single website abandonment event, achieving significant (50%+) database coverage from that asset (compared with an explicit preference centre, where the proportion of customers completing it is typically very low – around 5% at most).



6. Use business rules to manage clashes in messages to the customer – messages clash not only between channels but often within THE SAME channel. Using governance controls to prioritise messages and ensure offers and messages don't clash is vital, not only for improving the customer's experience but also for avoiding bidding against yourself with your offer budget.



At a minimum you should be managing these clashes in the same channel where data and deployment are connected in a single system, but the best marketers are achieving this across multiple channels using a DMP.
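
To make the idea of governance rules concrete, here is a minimal, illustrative sketch in Python. The message names, priority numbers and the one-offer-per-day cap are assumptions invented for the example, not a description of any particular marketing platform or DMP.

```python
# A minimal, illustrative sketch of clash-management rules: given several
# messages queued for one customer, send at most one message per channel and
# cap the number of discounted offers per day. Message names, priorities and
# the one-offer cap are assumptions for the example, not any vendor's API.

from dataclasses import dataclass

@dataclass
class Message:
    name: str
    channel: str    # e.g. "email", "display", "sms"
    priority: int   # lower number = more important
    is_offer: bool  # discounts count against the offer budget

def resolve_clashes(queued, max_offers_per_day=1):
    """Keep only the highest-priority message per channel, capping offers."""
    chosen, offers_used = [], 0
    for msg in sorted(queued, key=lambda m: m.priority):
        if any(c.channel == msg.channel for c in chosen):
            continue        # this channel already has a message today
        if msg.is_offer and offers_used >= max_offers_per_day:
            continue        # don't stack offers and outbid yourself
        chosen.append(msg)
        offers_used += msg.is_offer
    return chosen

queued = [
    Message("service reminder", "email", priority=1, is_offer=False),
    Message("10% off sale", "email", priority=2, is_offer=True),
    Message("20% off abandoned basket", "display", priority=3, is_offer=True),
]
for m in resolve_clashes(queued):
    print(m.channel, "->", m.name)
```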



Download Cross Channel Orchestration Fundamentals: Aligning Web With All Marketing Channels to learn how you can deliver the most meaningful, positive, and consistent customer experiences across all channels that enhance loyalty and deliver results.





Microsoft beats estimates with $22.3 billion in quarterly revenue

Cloud services were strong, but the company's consumer devices revenue was generally flat or down, except for Surface computers.



Please visit Marketing Land for the full article.


The 7 Epiphanies Needed to Intuitively Grasp Statistical Significance

There is only one danger more deadly to an online marketer than ignorance, and that danger is misplaced confidence.


Whenever a marketer omits regular statistical significance testing, they risk infecting their campaigns with dubious conclusions that may later mislead them. But because these conclusions were based on “facts” that the marketer “empirically” observed with their own two eyes, there is scant possibility that these erroneous ideas will ever be revisited, let alone questioned.


The continued esteem given to these questionable conclusions causes otherwise sane marketers to irrationally believe their sterile photos to be superior, their shoddy headlines to be superb, and their so-so branding to be sublime.


Statistical significance testing is the cure to this woe. There are math courses aplenty that describe this field, but the world has no need for another. Today, I propose something different: a crash course in the necessary intuition. The goal of this piece, then, is to instill in you a series of “a-ha” moments that'll make statistical significance click all while sending warm, fuzzy rushes of understanding into your mind.


Epiphany #1: Large sample sizes dilute eccentricity


Imagine you see a book rated an “average” of 5 stars on Amazon. If this average was based on the review of only a single reader, you would hardly think this book better than another which was rated lower (say 4.2 stars), albeit on the back of hundreds of reviews. Common sense informs you that the book with one rating might have been reviewed by a reader who, for purely idiosyncratic reasons, happened to adore it. But you, as a cautious potential purchaser, cannot tell whether that single review was more reflective of the reviewer than of the book. Without further information, you cannot be confident of the book's quality.


As you can see with this Amazon book review example, small sample sizes give eccentricity a chance to express itself. It's for this exact same reason that an advertising campaign report containing only five clicks makes for an unreliable source of truth. Here, it's possible that those five advert-clickers were oddly passionate fans of your product who happened to see your advert at the right time. The excellent results your advertising campaign seemed to enjoy may, in reality, have just been a fluke.


The intuitive idea we have just seen has not gone unnoticed by mathematicians. They have indeed packaged it up with a delightfully self-explanatory name: “The Law of Large Numbers”.
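
If you want to see the Law of Large Numbers at work, here is a small, illustrative Python simulation of the book-review scenario. The “true” quality and the size of each reader's idiosyncrasy are assumptions, but the pattern is the general one: tiny samples swing wildly, large samples settle near the truth.

```python
# A small simulation of the Law of Large Numbers using the book-review analogy:
# each "reader" rates a book on a 1-5 star scale around a true quality of 4.0,
# plus some personal idiosyncrasy. Averages over tiny samples swing wildly;
# averages over large samples settle close to the truth. Figures illustrative.

import random
random.seed(1)

def average_rating(n_reviews, true_quality=4.0):
    # Each review is the true quality plus reader idiosyncrasy, clamped to 1-5.
    ratings = [min(5, max(1, random.gauss(true_quality, 1.0))) for _ in range(n_reviews)]
    return sum(ratings) / n_reviews

for n in (1, 10, 100, 10_000):
    print(f"{n:>6} reviews -> average rating {average_rating(n):.2f}")
```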


Epiphany #2: P-values are trade-offs between certainty and experiment length


Imagine a gambler who bets his house on a coin being rigged. We would pity his stupidity if he bet after seeing a coin land “heads” only three times in a row. But no one would doubt his mental acuity if the coin had instead landed “heads” a million times in a row. Intuitively – and indeed mathematically – this is a sound bet.


But notice that the gambler is still betting. He can never be fully, totally, and absolutely certain that the coin is rigged. Even after seeing a million “heads” in a row, there is still an infinitesimal yet nevertheless existent chance that a fair coin could have given the result of “a million heads in a row”. But, practically speaking, this is exceedingly unlikely, so the gambler shouldn't let such a tiny shard of uncertainty deter him from making a fundamentally sound bet.


Now we have two extremes: flipping a coin three times, after which it is too early to make a confident bet; and flipping a coin a million times, after which it is exceedingly secure to bet. But what if our gambling man has a family wedding to attend that afternoon? He still wants to bet on the coin in confidence, but he doesn't want to wait around until it has been flipped “heads” a million times in a row. Translated into the business context, what if we, as advertisers, don't want to continue our Puppies vs Kitten Photo A/B test for 10 years before deciding which photo was better for sales? By waiting that long, we would have wasted 10 years in showing a proportion of our customers a photo that was comparatively ineffective at effecting sales. Had we figured out which photo was better earlier on, we could have put our best foot forward for a much longer period and earned higher profits all that while.


The crux of the matter is this: There is a trade-off between certainty and experiment length. We can see this intuitively by considering how our feelings of confidence would develop after an increasingly long series of coin flips. After 1 flip of “heads”, none of us would suspect the coin of being rigged. After 5 flips, we'd start seriously entertaining the thought. After 10 flips, most of us would strongly suspect it, but perhaps not enough to bet the house on it. After 100 flips, little doubt could remain in our minds, and we'd feel confident about making a serious wager. After 1,000 flips, we'd be screaming at the top of our lungs for the bookie to take our money.


As we have seen, the more consecutive “heads” flips we witnessed landing, the more certain we'd feel about the coin being rigged. But given that we are not immortal and that we will never reach 100% certainty with anything in our lives, we all must choose a pragmatic point where our uncertainty reaches a tolerably low level, a point where we put our hands up and say, “I've seen enough-let's do this thing”.


This trade-off point is quantified by statisticians with a figure they dub the p-value. Very roughly speaking, the p-value corresponds to the chance you have of being wrong about your conclusion. The p-value can thus be thought of as a preference, one that represents your desired trade-off between certainty and experiment length. Typically, marketers set their p-value to .05, which corresponds to having a 1 in 20 chance of being wrong. If you are risk averse about making mistakes, you could set your p-value to .01, which would mean you have only a 1 in 100 chance of being wrong (but your experiment would take much longer to attain this heightened level of certainty).
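
To put rough numbers on the trade-off, here is a small Python sketch of the coin example: it asks how many consecutive “heads” a fair coin would need to produce before the chance of such a streak drops below your chosen p-value. The thresholds are the standard ones mentioned above; everything else is just the arithmetic of 0.5 raised to a power.

```python
# The certainty-vs-length trade-off in miniature: the chance that a *fair* coin
# produces n consecutive "heads" is 0.5 ** n, so the stricter your p-value, the
# longer the streak (the experiment) you need before betting.

def flips_needed(p_value):
    n, chance_if_fair = 0, 1.0
    while chance_if_fair > p_value:
        n += 1
        chance_if_fair = 0.5 ** n   # probability a fair coin lands "heads" n times running
    return n

for p in (0.05, 0.01, 0.001):
    print(f"p-value {p}: bet after {flips_needed(p)} heads in a row")
```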


jelly-beans-statisical-significance
(Image Source)


Perhaps no industry is as wedded to the use of p-values as the pharmaceutical industry. As Ben Goldacre points out in his chilling book, Bad Pharma, there is terrible potential for the pharmaceutical industry to hoodwink doctors and patients with p-values. For example, a p-value of .05 means that one trial in 20 will incorrectly show a drug to be effective, even though, in actuality, that drug is no better than placebo. A dodgy pharmaceutical company could theoretically perform 20 trials of such a drug, bury the 19 trials showing it to be rubbish, and then proudly publish the one and only study that “proves” the drug works.


For the same probabilistic reasons, the online marketer who trawls through their Google AdWords/Facebook Ads/Google Analytics reports looking for patterns runs a big risk of detecting trends and tendencies which don't really exist. Every time said marketer filters their data one way or the other, they are essentially running an experiment. By sheer force of random chance, there will inevitably be anomalies, anomalies which the marketer will then falsely attribute to an underlying pattern. But these anomalies are often no more special than seeing a coin land “heads” five times in a row, something that happens by chance alone in roughly 1 in 32 experiments where you flip a fair coin five times.


Epiphany #3: Small differences in conversion rates are near impossible to detect. Large ones, trivial.


Imagine we observed the following advertising results:


advertising-results-conversions


Upon eyeballing the data, we see that the goat variant tripled its equestrian competitor's conversion rate. What's more, we see that there was a large number of impressions (1,000) in each arm of the experiment. Is this enough to satisfy the aforementioned “Law of Large Numbers” and give us the certainty we need? Surely these data mean that the “Miniature Goat” is the better photo in a statistically significant way?


Not quite. Without going too deep into the math, these results fail to reach statistical significance (where p=.05). If we concluded that the goat was the better photo, we would have a 1 in 6 chance of being wrong. Our failure to reach statistical significance despite the large number of impressions shows us that impressions alone are insufficient in our quest for statistically significant results. This might surprise you. After all, if you saw a coin land “heads” 1,000 times in a row, you'd feel damn confident that it was rigged. The math of statistical significance supports this feeling-your chances of being wrong in calling this coin rigged would be about 1 in 1,000,000,000,000,000,000,000,000,000,000… (etc.)
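
For the curious, here is roughly how that “1 in 6” figure can be reproduced with a one-sided two-proportion z-test, using only Python's standard library. The exact counts (1 and 3 conversions out of 1,000 impressions per arm) are assumptions inferred from the 0.1% and 0.3% conversion rates quoted later in the article, not figures from a real campaign.

```python
# A one-sided two-proportion z-test for the pony-vs-goat example. The counts
# (1 and 3 conversions out of 1,000 impressions each) are assumptions
# consistent with the 0.1% / 0.3% rates quoted in the article.

import math

def one_sided_p_value(conv_a, n_a, conv_b, n_b):
    """P(seeing a gap this large in B's favour if both variants truly convert equally)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 0.5 * math.erfc(z / math.sqrt(2))   # upper-tail normal probability

p = one_sided_p_value(conv_a=1, n_a=1000, conv_b=3, n_b=1000)
print(f"p-value ~ {p:.2f}  (about a 1 in {round(1 / p)} chance of being wrong)")
```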


So why is it that the coin was statistically significant after 1,000 flips but the advert wasn't after 1,000 impressions? What explains this difference?


Before answering this question, I'd like to bring up a scary example that you've probably already encountered in the news: Does the use of a mobile phone increase the risk of malignant brain tumors? This is a fiendishly difficult question for researchers to answer, because the incidence of brain tumors in the general population is (mercifully) tiny to start off with (about 7 in 100,000). This low base incidence means that experimenters need to include absolutely epic numbers of people in order to detect even a modestly increased cancer risk (e.g., to detect that mobile phones double the tumor incidence to 14 cases per 100,000).


Suppose that we are brain cancer researchers. If our experiment only sampled 100 or even 1,000 people, then both the mobile-phone-using and the non-mobile-phone-using groups would probably contain 0 incidences of brain tumors. Given the tiny base rate, these sample sizes are both too small to give us even a modicum of information. Now suppose that we sampled 15,000 mobile phone users and 15,000 non-users (good luck finding those).


At the end of this experiment, we might count two cases of malignant brain cancer in the mobile-phone-using group and one case in the non-mobile-using group. A simpleton's reading of these results would conclude that the incidence of cancer (or the “morbid conversion rate”) with mobile phone users is double that of non-mobile-phone users. But you and I know better, because intuitively this feels like too rash a conclusion-after all, it's not that difficult to imagine that the additional tumor victim in the mobile-phone-using group turned up there merely by random chance. (And indeed, the math backs this up: this result is not statistically significant at p=.05; we'd have to increase the sample size a whopping 8 times before we could detect this difference.)


Let's return to our coin-flipping example. Here we only considered two outcomes-that the coin was either fair (50% of the time it lands “heads”) or fully biased to “heads” (100% of the time it lands “heads”). Phrasing the same possibilities in terms of conversion rates (where “heads” counts as a conversion), the fair coin has a 50% conversion rate, whereas the biased coin has a 100% conversion rate. The absolute difference between these two conversion rates is 50% (100% – 50% = 50%). That's stonking huge! For comparison's sake, the (reported) difference between the miniature pony and miniature goat photo variants (from the example at the start of this section) was only .2%, and the suspected increase in cancer risk for mobile phone users was .01%.


Now we get to the point: It is easier to detect large differences in conversion rates. They display statistical significance “early” (i.e., after fewer flips or fewer impressions, or in studies relying on smaller sample sizes). To see why, imagine an alternative experiment where we tested a fair coin against one ever so slightly biased to “heads” (e.g., one that lands “heads” 51% of the time). This would require many, many coin flips before we would notice the slight tendency towards heads. After 100 flips we would expect to see 50 “heads” with a fair coin and 51 “heads” with the rigged one, but that extra “heads” could easily happen by random chance alone. We'd need about 15,000 flips to detect this difference in conversion rates with statistical significance. By contrast, imagine detecting the difference between a coin biased 0% to “heads” (i.e., always lands “tails”) and one biased 100% to “heads” (in other words, imagine detecting a 100% difference in conversion rates). After 10 coin flips we would notice that the results would be either ALL heads or ALL tails. Would there really be much point in continuing to flip 90 more times? No, there would not.


This brings us to our next point, which is really just a corollary of the above: Small differences in conversion rates are near impossible to detect. The easiest way to understand this point is to consider what happens when we compare the results of two experimental variants with identical conversion rates: After a thousand, a million, or even a trillion impressions, you still won't be able to detect a difference in conversion rates, for the simple reason that there is none!


Bradd Libby, of Search Engine Land, calculated the rough number of impressions necessary in each arm of an experiment to reach statistical significance. He then reran this calculation for various different click-through rate (CTR) differences, showing that the smaller the expected conversion rate difference, the harder it is to detect.


impressions-needed-ctr


Notice how in the final row an infinite number of impressions are needed; as we said above, we will never detect a difference, because there is none to detect. The consequence of all this is that it's not worth your time, as a marketer, to pursue tiny expected gains; instead, you'd be better off going for a big win that you have a chance of actually noticing.
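
As a rough illustration of the same point, here is a back-of-the-envelope sample-size calculation using a standard normal-approximation formula (95% confidence, 80% power). The 2% baseline rate and the lifts tested are illustrative assumptions, not the exact figures from Bradd Libby's table, but the shape of the result is the same: the smaller the lift, the more impressions you need, and a zero lift needs infinitely many.

```python
# Rough impressions needed per variant to detect a given lift in conversion
# rate, using the usual normal-approximation sample-size formula (alpha = .05
# two-sided, 80% power). The 2% baseline and the lifts tested are illustrative,
# not the exact figures from Bradd Libby's table.

import math

def impressions_per_arm(base_rate, lift, z_alpha=1.96, z_power=0.84):
    if lift == 0:
        return math.inf                      # no difference can never be detected
    new_rate = base_rate + lift
    variance = base_rate * (1 - base_rate) + new_rate * (1 - new_rate)
    return math.ceil((z_alpha + z_power) ** 2 * variance / lift ** 2)

for lift in (0.05, 0.01, 0.002, 0.0):
    print(f"detecting a {lift:.1%} lift over a 2% baseline: "
          f"{impressions_per_arm(0.02, lift)} impressions per arm")
```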


Epiphany #4: You destroy a test's validity by pulling the plug before its preordained test-duration has passed


Anyone wedded to statistical rigor ought to think twice about shutting down an experiment after perceiving what appears to be initial promise or looming disaster.

Medical researchers, with heartstrings tugged by moral compassion, wish that every cancer sufferer in a trial could receive what's shaping up to be the better cure-notwithstanding that the supposed superiority of this cure has yet to be established with anything approaching statistical significance. But this sort of rash compassion can have terrible consequences, as happened in the history of cancer treatment. For far too long, surgeons subjected women to a horrifically painful and disfiguring procedure known as the 'radical mastectomy'. Hoping to remove all traces of cancer, doctors removed the chest wall and all axillary lymph nodes, along with the cancer-carrying breast; it later transpired that removing all this extra tissue brought no benefit whatsoever.


Generally speaking, we should not prematurely act upon the results of our tests. The earlier stages of an experiment are unstable. During this time, results may drift in and out of statistical significance. For all you know, two more impressions could cause a previous designation of “statistically significant” to be whisked out from under your feet. Moreover, statistical trends can completely switch direction during their run-up to stability. If you peep at results early instead of waiting until an experiment runs its course, you might leave with a conclusion completely at odds with reality.


For this reason, it's best practice not to peek at an experiment until it has run its course-this being defined in terms of a predetermined number of impressions or a preordained length of time (e.g., after 10,000 impressions or two weeks). It is crucial that these goalposts be established before starting your experiment. If you accidentally happen to view your results before these points have been passed, resist the urge to act upon what you see or even to designate these premature observations as “facts” in your own mind.
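
If you want to convince yourself of how much damage peeking does, here is a small, illustrative Python simulation: two variants with identical conversion rates are compared, once by waiting for a fixed number of visitors and once by checking after every 100 visitors and stopping at the first “significant” gap. All parameters (conversion rate, traffic, check interval, number of trials) are assumptions chosen for the demonstration.

```python
# Why peeking hurts: both variants below convert at an identical 5%, yet if we
# check after every 100 visitors and stop as soon as the gap looks
# "significant", we declare a winner far more often than the nominal 5% of the
# time. All parameters (rate, traffic, check interval) are illustrative.

import math
import random
random.seed(7)

def two_sided_p(conv_a, n_a, conv_b, n_b):
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) or 1e-12
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return math.erfc(z / math.sqrt(2))

def run_test(peek, total=5000, rate=0.05, check_every=100):
    conv_a = conv_b = 0
    for i in range(1, total + 1):
        conv_a += random.random() < rate
        conv_b += random.random() < rate
        if peek and i % check_every == 0 and two_sided_p(conv_a, i, conv_b, i) < 0.05:
            return True                 # stopped early and declared a "winner"
    return two_sided_p(conv_a, total, conv_b, total) < 0.05

trials = 400
for peek in (False, True):
    false_wins = sum(run_test(peek) for _ in range(trials)) / trials
    print(f"peeking={peek}: false 'winner' declared in {false_wins:.0%} of tests")
```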


Epiphany #5: “Relative” improvement matters, not “absolute” improvement


Look at the following table of data:


photo-variation-conversion-data


After applying a statistical significance test, we would see that the 80s rocker photo outperforms the 60s hippy photo in a statistically significant way. (The numerical details aren't relevant for my point so I've left them out.) But we need to be careful about what business benefit these results imply, lest we misinterpret our findings.


Our first instinct upon seeing the above data would be to interpret it as proving that the 80s rocker photo converted at a 16% higher rate than the 60s hippy photo, where 16% is the difference by subtraction between the two conversion rates (30% – 14% = 16%).


But calculating the conversion rate difference as an absolute change (rather than a relative change) would lead us to understate the magnitude of the improvement. In fact, if your business achieved the above results, a switch from the incumbent 60s hippy pic to the new 80s rocker pic would cause you to more than double your number of conversions, and, all things being equal, you would, as a result, also double your revenue. (Specifically, you would have a 114% improvement, which I calculated by dividing the improvement in conversion rates, 16%, by the old conversion rate, 14%.) Because relative changes in conversion rates are what matter most to our businesses, we should convert absolute changes to relative ones, then seek out the optimizations that provide the greatest improvements in these impactful terms.
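
The arithmetic is simple enough to show in a couple of lines, using the 14% and 30% figures from the table above.

```python
# Absolute vs relative improvement for the hippy-vs-rocker example, using the
# 14% and 30% conversion rates from the table above.

old_rate, new_rate = 0.14, 0.30

absolute_change = new_rate - old_rate          # 16 percentage points
relative_change = absolute_change / old_rate   # ~114% relative improvement

print(f"absolute change: {absolute_change * 100:+.0f} percentage points")
print(f"relative change: {relative_change:+.0%} "
      f"(conversions multiply by about {new_rate / old_rate:.2f}x)")
```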


Epiphany #6: “Statistically insignificant” does not imply that the opposite result is true


What exactly does it mean when some result is statistically insignificant? The example below has a p-value of approximately .15 for the claim that the Mini Goat photo is superior, making such a conclusion statistically insignificant.


photo-variation-pony


Does the lack of statistical significance imply that there is a full reversal of what we have observed? In other words, does the statistical insignificance mean that the “Miniature Pony” variant is, despite its lower recorded conversion rate, actually better at converting than the “Miniature Goat” variant?


No, it does not-not in any way.


All that the failure to find statistical significance says here is that we cannot be confident that the goat variant is better than the pony one. In fact, our best guess is that the goat is better. Based on the data we've observed so far, there is (loosely speaking) an approximately 85% chance that this claim is true (1 minus the p-value of .15). The issue is that we cannot be confident of this claim's truth to the degree dictated by our chosen p-value – to the minimum level of certainty we wanted to have.


One way to intuitively understand this idea is to think of any recorded conversion rate as having its own margin of error. The pony variant was recorded as having a .1% conversion rate in our experiment, but its confidence interval might be (using made-up figures for clarity) .06% above or below this recorded rate (i.e., the true conversion rate value would be between .04% and .16%). Similarly, the confidence interval of the goat variant might be .15% above or below the recorded .3% (i.e., the true value would be between .15% and .45%). Given these margins of error, there exists the possibility that the pony's true conversion rate would be at the high end (.16%) of its margin of error, whereas the goat's true conversion rate would lie at its low end (.15%). This would cause a reversal in our conclusions, with the pony outperforming the goat. But in order for this reversal to happen, we would have had to take the most extreme possible values for our margins of error, and in opposite directions to boot. In reality, these extreme values would be fairly unlikely to turn up, which is why we say that it's more likely that the goat photo is better.
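
To make the margin-of-error idea a little more concrete, here is a rough normal-approximation (“Wald”) 95% confidence interval for each variant, again assuming 1 and 3 conversions out of 1,000 impressions. Note that the ±.06% and ±.15% figures above were explicitly made up for clarity; the intervals below are simply another illustration (and, with counts this small, a crude one), but they show how wide each margin of error is and why the two intervals overlap.

```python
# Rough 95% confidence intervals (normal approximation) for the two recorded
# conversion rates, assuming 1 and 3 conversions out of 1,000 impressions each.
# With counts this small the approximation is crude, but it shows how wide each
# variant's margin of error is and why the two intervals overlap.

import math

def wald_interval(conversions, impressions, z=1.96):
    rate = conversions / impressions
    margin = z * math.sqrt(rate * (1 - rate) / impressions)
    return max(0.0, rate - margin), rate + margin

for name, conversions in (("Miniature Pony", 1), ("Miniature Goat", 3)):
    low, high = wald_interval(conversions, 1000)
    print(f"{name}: recorded rate {conversions / 1000:.1%}, "
          f"95% CI roughly {low:.2%} to {high:.2%}")
```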


Epiphany #7: Any tests that are run consecutively rather than in parallel will give bogus results


Statistical significance requires that our samples (observations) be randomized such that they fairly represent the underlying reality. Imagine walking into a Republican convention and polling the attendees about who they will vote for in the next US presidential election. Nearly everyone in attendance is going to say “the Republican candidate”. But it's self-evident that the views of the people in that convention are hardly reflective of America as a whole. More abstractly, you could say that your sample doesn't reflect the overall group you are studying. The way around this conundrum is randomization in choosing your sample. In our example above, the experimenter should have polled a much broader section of American society (e.g., by questioning people on the street or by polling people listed in the telephone directory). This would cause the idiosyncrasies in voting patterns to even out.


If you ever catch yourself comparing the results of two advertising campaigns that ran one after the other (e.g., on consecutive days/weeks/months), stop right now. This is a really, really bad idea, one that will drain every last ounce of statistical validity from your analyses, because your experiment is no longer sampling randomly. Following this procedure is the logical equivalent of extrapolating America's political preferences after only asking attendees of a Republican convention.


To see why, imagine you are a gift card retailer who observed that 4,000% as many people bought Christmas cards the week before Christmas compared to the week after. You would be a fool if you concluded that the dramatic difference in conversion rates between these two periods was because the dog photo you advertised with during the week preceding Christmas was 40 times better at converting than the cat photo used the following week. The real reason for the staggering difference is that people only buy Christmas cards before Christmas.


Put more generally, commercial markets contain periodic variation-ranging in granularity from full-blown seasonality to specific weekday or time of day shopping preferences. These periodic forces can sometimes fully account for observed differences in conversion rates between two consecutively run advertising campaigns, as happened with the Christmas card example above. The most reliable way to insulate against such contamination is to run your test variants at the same time as one another, as opposed to consecutively. This is the only way to ensure a fair fight and generate the data necessary to answer the question 'which advert variant is superior?' As far as implementation details go, you can stick your various variants into an A/B testing framework. This will randomly display your different ads, and once the experiment ends you simply tally up the results.
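
As an illustration of what “randomly display your different ads” means in practice, here is a toy Python sketch of parallel assignment and tallying. The variant names, traffic volume and conversion rate are placeholders; in a real A/B testing framework the assignment and conversion tracking happen inside the tool.

```python
# The essence of running variants in parallel: each incoming visitor is
# randomly assigned to a variant the moment they arrive, so seasonality and
# competitor activity hit both arms equally. Variant names, traffic volume and
# the conversion rate are placeholders; a real A/B testing framework handles
# the assignment and conversion tracking for you.

import random
from collections import defaultdict

VARIANTS = ("dog_photo", "cat_photo")
shown = defaultdict(int)
converted = defaultdict(int)

def serve_ad(visitor_converts):
    variant = random.choice(VARIANTS)   # parallel, randomised assignment
    shown[variant] += 1
    converted[variant] += visitor_converts
    return variant

# Simulate a stream of visitors; in real life the conversion signal comes from
# your tracking, not a random draw.
random.seed(3)
for _ in range(10_000):
    serve_ad(visitor_converts=random.random() < 0.02)

for v in VARIANTS:
    print(f"{v}: {shown[v]} impressions, {converted[v]} conversions "
          f"({converted[v] / shown[v]:.2%})")
```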


Perhaps you are thinking, “My market isn't affected by seasonality, so none of this applies to me”. I strongly doubt that you are immune to seasonality, but for argument's sake let's assume your conviction is correct. In this case, I would still argue that you have a blind spot in that you are underestimating the temporally varying effect of competition. There is no way for you to predict whether your competitors will switch on adverts for a massive sale during one week only to turn them off during the next, thereby skewing the hell out of your results. The only way to protect yourself against this (and other) time-dependent contaminants is to run your variants in parallel.


Conclusion


Having been enlightened by the seven big epiphanies for understanding statistical significance, you should now be better equipped to roll up your sleeves and dig into statistical significance testing from a place of comfortable understanding. Your days of opening up Google AdWords reports and trawling for results are over; instead, you'll methodically set up parallel experiments, let them run their course, choose your desired trade-off between certainty and experiment length, gather adequate sample sizes for your expected conversion rate differences, and calculate business impact in terms of relative revenue differences. You will no longer be fooled by randomness.


About the Author: Jack Kinsella, author of Entreprenerd: Marketing for Programmers.




7 Top-Performing Unbounce Marketing Emails to Copy, Paste & Customize

featured-image-7-emails-to-steal-650


I've been wanting to write this post for a while, though I always had a slightly different format in mind. But after seeing this incredibly delightful and generous post by messaging app Drift, I knew I had to steal borrow their format (sharing is caring, right?).


If you're like me, you have multiple email accounts: one for work, one for pleasure and - admittedly - one for newsletters and other spam. As you can likely empathize with, I get a buttload of emails every day. Of which I read very few.


Here are some reasons why I actually read an email:



  • I have to. It's either necessary for me to do my job or for me to function in society (i.e., “Your cell phone bill is overdue”).

  • It's from a friend or family member. (My grandma is legit tech savvy.)

  • The email subject line is so gosh darn intriguing that I literally cannot not click.

  • It's from a source I trust, and/or the value is clearly indicated in the subject line or in the email opening.


That's pretty much it. And a quick poll of a few of my team members revealed they shared my sentiment.


giphy

^ Yes, my team members are Simpsons characters. Image via Giphy.

So, if you're writing an email on behalf of your business (or on behalf of your client), it better fall into one of the above categories. Otherwise, it'll end up in email heaven. Or if you don't believe in heaven… email nothingness.


Lucky for you, we've compiled our (Unbounce's) top-performing emails (based on open rate, click-through rate or other defined metric of engagement). AND WE WANT YOU TO STEAL THEM.


Here's what's included:



  1. Blog welcome email

  2. Re-engagement email

  3. Campaign follow-up email

  4. Holiday email

  5. Feature launch email

  6. Oops email

  7. Nurture track welcome email


Plus we've highlighted what made them so successful and provided actionable tips for translating what worked to your own emails.


And for your copy-and-paste pleasure, here they all are in a Google doc.


Get all of Unbounce's top-performing emails

Copy, paste and customize 'em for your own email campaigns.

By entering your email you'll receive weekly Unbounce Blog updates and other resources to help you become a marketing genius.

1. Start a conversation with your blog welcome email


This email was just one of the amazing things to come out of our two-week publishing hiatus. It's the welcome email that is sent after someone subscribes to the Unbounce blog:


Subject line: Just another welcome email


blog-welcome-email


We had three goals for re-working the blog welcome email: (1) inject personality, (2) get recipients to actually engage and (3) gain insights about what our readers actually struggle with as marketers.


Whereas our previous blog welcome email got at most one response per month, this email prompted 20 genuine responses in its first 30 days. Here's one:


blog-email-reply-1


This one, too:


blog-email-reply-2


If that wasn't enough, the email also sustained a ~60% open rate (multiple online sources state 50% is industry average for this type of email). We'd like to think it has something to do with our borderline self-deprecating subject line… but then, maybe we were just lucky.


Pro tip: Steer clear of generic subject lines such as, “Welcome to [company name]”. Instead, think about how to leverage pattern disruption to cut through the generic garbage and get noticed.

2. Revive dormant contacts with a re-engagement email


After the release of our web series The Landing Page Sessions, we sent this email to our unknown subscribers. (Note: We define unknown subscribers as people whose email is the only data we have, and thus cannot determine if they are market qualified.)


The goal of the email was to get recipients to check out the first season of the show, which showcases our product in a delightful and actionable way.


Subject line: Wanna binge watch the Netflix of marketing videos? 

The Importance of Storytelling in Content Marketing #MPB2B

shane-snow-mpb2b-2016


Once upon a time …


A long time ago in a galaxy far, far away …


There once was …


These aren't just classic fairy tale or movie beginnings. They represent memories. They evoke feelings. They get our minds churning. They signal that an exciting story is about to unfold-and those stories are powerful.


As content marketers, we need to be in the business of storytelling if we want our content to resonate and inspire. As Shane Snow, Chief Creative Officer and Co-Founder of Contently, said during his session Create, Connect, And Optimize Through Storytelling: The New Formula For Content Success at the MarketingProfs B2B Marketing Forum:


“When you engage in a story, your brain lights up.”


We want our audiences' brains to light up when they experience our content. And at a time when content is absolutely everywhere, weaving a good story can set it apart from the rest. Here's why:


#1 – We're programmed for stories.


Stories and the way we react to them are part of our human makeup, dating back to caveman times and evolving to where we are now. As Snow said, stories make our brains light up. Research shows that when people are engaged in a story, a number of different parts of the brain are connected and firing.


#2 – Stories break down barriers.


Stories help us understand each other. They connect us. They can inspire compassion. They can change our attitudes.


#3 – Stories build relationships.


Our human predisposition toward embracing stories, coupled with the connectedness stories give us, helps us build stronger relationships.


To open his session, Snow referenced a recent poll he had conducted. He asked his followers to vote on who they would elect as president of the United States: Harry Potter author J.K. Rowling or England's Queen Elizabeth.


The response? Overwhelmingly in favor of J.K. Rowling. Why? Because people felt like they knew her. There was a connectedness there-a relationship had been built.


How Do You Tell Better Stories?


Snow's framework for content marketing storytelling is simple: Create, Connect and Optimize.


Create: Create content that can tell your audience compelling stories at each stage of the buyer's journey. At the top, address shared interests and values. In the middle, tell stories about your company and your customers. At the bottom, dive into stories about your products and services. In addition, at each stage, the style of content you create could fall into three buckets: timely, seasonal or evergreen.


Connect: Choose how you will share your content with your audience. Utilize your owned site and your email list to share content for sure. When it comes to additional tactics such as social media or webinars, make your choice based on whether you're hoping for brand awareness or conversions.


Optimize: Once you've created and connected, you need to find out whether your content is actually effective. Start by looking at engagement metrics to determine where you're winning and losing. Then take steps to refine how your content is created and shared with people.


“Stories have the potential to unlock changes and kindness, and improve our humanity,” Snow said. “If we think about it this way, we can do a lot of good in the world.”


If you're attending MarketingProfs B2B Marketing Forum, we'd love to connect. Find us on Twitter at @TopRank, @leeodden, @azeckman, @amywhiggins, @leiladlf and @CaitlinMBurgess.








Tuesday, October 18, 2016

50 B2B Marketing Influencers Speaking at #MPB2B 2016

b2b marketing influencers


First of all, Happy 10th Birthday to Ann Handley and the MarketingProfs B2B Forum team!


Time flies when you're working hard, smart and with style – just as Sharon Hudson, Julie Pildner and their team have been doing for so many years to create a great experience for B2B Forum attendees.


There are a very small number of industry conferences that I look forward to attending year after year, and the B2B Forum is at the top of that list. Many other B2B marketing practitioners and leaders feel the same way, and that's why the content is so good year after year. MarketingProfs has built something solid both in terms of the event and the community.


Being a part of that community is a special thing and it's a joy to connect, collaborate and participate with so many top notch marketers. My participation in this year's event has touched several levels, ranging from Ashley and Tom from my team helping with some event promotions and Ann's opening keynote, to my presentation on activating B2B influencers and tools.


Speaking of influencers, below is a list of 50 speakers that are sure to influence your knowledge about B2B marketing this week at the B2B Forum in Boston.


As always, lists are great to be on, but can't include everyone. The ranking of this list is powered for the most part by Traackr, the influencer relationship management platform that we work with. The specific topic we sorted by was of course, “B2B marketing” and we just had to start it off with the queen bee of B2B, Ann Handley herself.


Ann Handley @marketingprofs

Chief Content Officer, MarketingProfs


Michael Brenner @brennermichael

CEO, Marketing Insider Group


Douglas Burdett @MarketingBook

Principal, Artillery Marketing


Lee Odden @leeodden

CEO, TopRank Marketing


Margaret Molloy @MargaretMolloy

Global CMO & Head of Business Development, Siegel+Gale


Matt Heinz @HeinzMarketing

President & Founder, Heinz Marketing


Doug Kessler @dougkessler

Creative Director & Co-Founder, Velocity Partners


Joe Pulizzi @joepulizzi

Founder, Content Marketing Institute


Jon Miller @jonmiller

CEO and Co-Founder, Engagio


Jason Miller @JasonMillerCA

Senior Manager – Global Content Marketing, LinkedIn


Carlos Hidalgo @cahidalgo

CEO & Principal, ANNUITAS


Ardath Albee @ardath421

CEO & B2B Marketing Strategist, Marketing Interactions


Jeffrey L. Cohen @jeffreylcohen

Director, Content Strategy at Oracle Marketing Cloud


Carlos Gil @CarlosGil83

Global Head of Social Media, BMC Software


Scott Brinker @chiefmartec

President, ion interactive, inc.


Mitch Joel @mitchjoel

President, Mirum Agency


Heidi Cohen @heidicohen

Chief Content Officer, Actionable Marketing Guide


Lauren Goldstein @laurenondemand

VP Strategy & Partnerships, Babcock & Jenkins


Christopher Penn @cspenn

Vice President of Marketing Technology, SHIFT Communications


Samantha Stone @samanthastone

Founder & CMO, The Marketing Advisory Network


Andy Crestodina @crestodina

Strategic Director, Orbit Media Studios, Inc.


Stephan Hovnanian @stephanhov

Content Solutions Architect, Bambu (by Sprout Social)


Jeremy Miller @StickyBranding

Brand Builder, Sticky Branding


Larry Kim @larrykim

Founder & CTO, WordStream, Inc


Steve Garfield @stevegarfield

Investor, Writer, Humorist


Jay Acunzo @jayacunzo

VP of Platform, NextView Ventures


Doug Bewsher @Dougino

CEO, Leadspace


Nick Westergaard @nickwestergaard

Chief Brand Strategist, Brand Driven Digital


Andrew Moravick @Amoravick

Senior Research Associate, Marketing Effectiveness & Strategy, Aberdeen Group


Scott Monty @scottmonty

CEO, Co-Managing Partner, Brain+Trust Partners


John Foley @johnfoleyjr

CEO/CMO, interlinkOne/Grow Socially


Andrea Vahl @andreavahl

Social Media Consultant and Speaker, Social Media Consultant


Amanda Nelson @amandalnelson

Senior Manager AppExchange Content & Community, Salesforce


Andrew Davis @drewdavishere

CEO, Monumental Shift


Tom Webster @webby2001

Vice President, Strategy and Marketing, Edison Research


Tamsen Webster @tamadear

Founder and CEO Strategic Speaking


Yoav Schwartz @yostar

Co-Founder, Uberflip


Paul Roetzer @paulroetzer

Founder and CEO, PR 20/20


Bob Ruffolo @BobRuffolo

Founder and CEO, IMPACT Branding & Design LLC


Jim Beretta @beretta

President, Customer Attraction Marketing Consulting


Dean Shaw @deanshaw

Global Chat Program Manager, SAS Institute


Marcus Sheridan @TheSalesLion

President, The Sales Lion


Oli Gardner @oligardner

Co-Founder, Unbounce


Sarah Lewis @sarahelizalewis

Assistant Professor, Harvard University & Author of The Rise


Michael Wesch @mwesch

University Distinguished Teaching Scholar/Associate Professor of Anthropology, Kansas State University


Erica McGillivray @emcgillivray

Senior Community Manager, Moz


Shane Snow @shanesnow

Chief Creative Officer and Co-Founder, Contently


Janet Driscoll-Miller @janetdmiller

President, Marketing Mojo


Chris Moody @cnmoody

Content Marketing Leader, GE Digital


Kristen Craft @thecrafty

Director of Business Development, Wistia


Thank you to all of the speakers who will be sharing their knowledge this week and especially to the team at MarketingProfs from Rob Zaleski to Ann Handley herself for creating such an amazing conference experience to look forward to. I know Ashley Zeckman, Caitlin Burgess and Leila De La Fuente from my team are looking forward to it!


There will be plenty of Tweeting from @toprank @azeckman @caitlinmburgess and @Leiladlf as well as liveblogging of the conference here on TopRank's Online Marketing Blog. Be sure to check back for updates.








How to Run an A/B Test in Google Analytics

Designs don't always work out as intended.


The layout looks good. The color choices seem great. And the CTA balances clever and clear.


But…


It's not working. All of it. Some of it. You're not completely sure, but something's gotta give.


Despite everyone's best intentions, including all the hours of research and analyses, things don't always work out as planned.


That's where continuous testing comes in. Not a one-and-done or hail & pray attempt.


Even better, your testing efforts don't need to be complex and time consuming.


Here's how to set up a split test inside Google Analytics in just a few minutes.


What are Google Analytics Content Experiments?


Let's say your eCommerce shop sells Pug Greeting Cards. (That's a thing by the way.)


Obviously, these should sell themselves.


But let's just suspend disbelief for a moment and hypothesize that sales are low because you're having trouble getting people into these individual product pages in the first place.


Your homepage isn't a destination; it's a jumping off point.


Peeps come in, look around, and click somewhere else.


Many times that's your Product/Service pages. Often it's your About page.


Regardless, the goal is to get them down into a funnel or path as quickly as possible, (a) helping them find what they were looking for while also (b) getting them closer to triggering one of your conversion events.


The magic happens on a landing page, where these two things – a visitor's interest and your marketing objective – intertwine and become one in a beautiful symphony.


So let's test a few homepage variations to see which do the best job at directing new visitors into your best-selling products.


One has a video, the other doesn't. One is short and sweet, the other long and detailed. One has a GIF, the other doesn't.


pug-chest-scratch


New incoming traffic gets split across these page variations, allowing you to watch and compare the number of people completing your desired action until you can confidently declare a winner.


(It's probably going to be the one featuring this video.)


Running a simple and straightforward split test like this is landing page optimization 101: you identify the specific page variables that produce the best results for your audience and multiply them across your site.


Google Analytics comes with a basic content experiments feature that will allow you to compare different page variations, split traffic to them accordingly, and get email updates about how results are trending and whether you're going to hit your defined objective or not.


But… they're technically not a straightforward A/B test. Here's why, and how that's actually a good thing.


Why Content Experiments Can Be Better than Traditional A/B Tests


Your typical A/B test selects a very specific page element, like the headline, and changes only that one tiny variable in new page variations.


The interwebs are full of articles where switching up button color resulted in a 37,596% CTR increase* because people like green buttons instead of blue ones. Duh.


(*That's a made up number.)


There are a few problems with your classic A/B test though.


First up, tiny changes often regress back to the mean. So while you might see a few small fluctuations when you first begin running a test, small changes usually only equal small results.


changes-regress-to-the-mean-ab-test
(Image Source)


The second problem is that most A/B tests fail.


And if that weren't bad enough, the third issue is that you're going to need a TON of volume (specifically, 1,000 monthly conversions to start with and a test of at least 250 conversions) to determine whether or not those changes actually worked or not.


Google Analytics Content Experiments use an A/B/N model instead, which is like a step in between one-variable-only A/B tests and coordinated, multiple-variable multivariate tests.


(After typing that last sentence, I realized only hardcore CRO geeks are going to care about this distinction. However it's still important to understand from a high level so you know what types of changes to make, try, or test).


You can create up to 10 different versions of a page, each with their own unique content or changes.


In other words, you can test bigger-picture stuff, like: “Does a positive or negative Pug value proposition result in more clicks?”


Generally these holistic changes can be more instructive, helping you figure out what messaging or page elements you can (and should) carry through to your other marketing materials like emails, social and more.


And the best part is that instead of requiring a sophisticated (read: time consuming) setup process to make sure all of your variable changes are statistically significant, you can use Google Analytics Content Experiments to run faster, iterative changes and learn on the go.


Here's how to get started.


How to Setup Google Analytics Experiments


Setting up Content Experiments only takes a few seconds.


You will, however, have to set up at least one or two page variations prior to logging in. That topic's beyond the scope here, so check out this and this to determine what you should be testing in the first place.


When you've got a few set up and ready to go, log in to Google Analytics and start here.


Step #1. Getting Started


Buried deep in the Behavior section of Google Analytics – you know, the one you ignore when toggling between Acquisition and Conversions – is the vague, yet innocuous sounding 'Experiments' label.


Chances are, when you click on it you'll see a blank screen that resembles this:


google-analytics-experiments-empty


To create your first experiment, click the button that says Create Experiment on the top left of your window.


With me so far? Good.


Let's see what creating one looks like.


Step #2. Choose an Experiment


Ok now the fun starts.


Name your experiment, whatever.


And look down at selecting the Objective. Here's where you can set an identifiable outcome to track results against and determine a #winning variation.


content-experiment-step-one


You have three options here. You can:



  • Select an existing Goal (like opt-ins, purchases, etc.)

  • Select a Site Usage metric (like bounce rate)

  • Create a new objective or Goal (if you don't have one set-up already, but want to run a conversion-based experiment)


The selection depends completely on why you're running this test in the first place.


For example: most are surprised to find that their old blog posts often bring in the most traffic. The problem? Many times those old, outdated pages also have the highest bounce rates.


Navigate to: Behavior > Secondary Dimensions + Google/Organic > Top Pageviews > Bounce Rate.


Here's an example:


google-analytics-source-medium


(Here are a few other actionable Google Analytics reports to spot similarly low hanging fruit when you're done setting up an experiment.)


Let's select Bounce Rate as the Objective for now, so we can make changes to the page layout, or increase the volume and quality of visuals, to get people to stick around longer.


After selecting your Objective, you can click on Advanced Options to pull up more granular settings for this test.


content-experiments-advanced-options


By default, these advanced options are off, and Google will “adjust traffic dynamically based on variation performance”.


However, if enabled, your experiment will simply split traffic evenly across all the page variations you add, run for two weeks, and shoot for a 95% statistical confidence level.


Those are all good places to start in most cases; however, you might want to change the duration depending on how much traffic you get (i.e., you can get away with shorter tests if this page will see a ton of traffic, or you might need to extend beyond two weeks if there's only a slow trickle).
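
If you want a quick sanity check on duration before you start, a back-of-the-envelope calculation like the one below can help (a hypothetical sketch, not a Google Analytics feature): divide the visits each variation needs for your chosen confidence level by the visits each variation will receive per day. All of the numbers in the example are illustrative assumptions.

```python
# A back-of-the-envelope duration check (a hypothetical sketch, not a Google
# Analytics feature): divide the visits each variation needs for your chosen
# confidence level by the visits each variation will receive per day. All of
# the numbers below are illustrative assumptions.

import math

def days_needed(daily_visits, n_variations, visits_needed_per_variation):
    visits_per_variation_per_day = daily_visits / n_variations
    return math.ceil(visits_needed_per_variation / visits_per_variation_per_day)

# e.g. 600 visits/day split across an original plus two variations,
# each needing roughly 4,000 visits:
print(days_needed(daily_visits=600, n_variations=3,
                  visits_needed_per_variation=4000), "days")
```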


So far so good!


Step #3. Configure Your Experiment


The next step is to simply add the URLs for all of the page variations you want to test.


Literally, just copy and paste:


content-experiment-add-urls


You can also give them helpful names to remember. Or not. It will simply number the variants for you.


Step #4. Adding Script Code to your Page


Now everyone's favorite part – editing your page's code!


The good news is that the first thing you see under this section is a helpful toggle button to just email all this crap code over to your favorite technical person.


If you'd like to get your hands dirty however, read on.


setting-up-experiment-code


First up, double check all of the pages you plan on testing to make sure that your default Google Analytics tracking code is installed. If you're using a CMS, it should be, as it's usually added site-wide initially.


Next, highlight and copy the code provided.


You're going to need to look for the opening head tag in the Original variation (which should be located literally towards the top of your HTML document). Search for <head> to make it easy:


head-tag-source-code


Once that's done, click Next Step back in Google Analytics to have them verify if everything looks A-OK.


Not sure if you did it right? Don't worry – they'll tell you.


For example, the first time I tried installing the code for this demo I accidentally placed it underneath the regular Google Analytics tracking code (which they so helpfully and clearly pointed out).


tracking-before-experiment-code-google-analytics


After double checking your work and fixing, you should see this:


review-and-start-content-experiment


And now you're ready to go!


See, that wasn't so bad now was it?!


Conclusion


Websites are never truly done and finished.


They need iteration, including constant analysis, new ideas, and changes that continually increase results.


Many times, that means analyzing and testing entire pages based on BIG (not small) changes like value propositions or layouts. These are the things that will deliver similarly big results.


Landing page optimization and split testing techniques can get extremely complicated and require special tools that only CRO professionals can navigate.


However, Google Analytics includes its own simple split testing option in Content Experiments.


Assuming you already have the new page variations created and you're comfortable editing your site's code, they literally only take a few seconds to get up-and-running.


And they can enable anyone in your organization to go from research to action by the end of the day.


About the Author: Brad Smith is a founding partner at Codeless Interactive, a digital agency specializing in creating personalized customer experiences. Brad's blog also features more marketing thoughts, opinions and the occasional insight.




Monday, October 17, 2016

Mastering the Art of Blogging: 6 Recipes to Help You Become A More Successful Content Creator

mastering-the-art-of-blogging


Last Saturday, Ashley Zeckman presented to a packed room of bloggers and shared a presentation titled: Mastering the Art of Blogging: 6 Recipes to Help You Become A More Successful Content Creator at the annual MN Blogger Conference.


The MN Blogger Conference offers a great opportunity for bloggers and content marketers to gather and learn insights on content strategy, search engine optimization, and social media. In her presentation, Ashley shared that unfortunately only 37% of companies have a documented content strategy (according to Content Marketing Institute and MarketingProfs), which presents an opportunity for content marketers to gain an advantage over competitors. Ashley outlined six recipes, or strategies, to gain that advantage. The six content marketing recipes include:



  1. Discovery Donuts

  2. Content Mission Canapés

  3. Best Answer Biscuits

  4. Less is More Lasagna

  5. Content Cheeseburger

  6. Influencer Ice Cream Sundae


Recipe 1: Discovery Donuts


The discovery donuts recipe focuses on understanding your audience. Ashley stated, “Today's consumers are overwhelmed with information and have become much more selective about which companies or brands they interact with.” To provide the best information for your audience, you need to build personas to target the right person within your audience. To build personas, ask your current customers directly through surveys, social polls, or regular phone conversations. Also, review your analytics to identify the content types or key phrases from internal search that people are finding on your website. Analytics is a great tool that can help content marketers build a strategy that targets each persona. The last way to build a persona is to conduct keyword research to discover the “best answer” for their pain point.


Recipe 2: Content Mission Canapés


Ashley quoted Joe Pulizzi: “To work, your [content] mission statement has to be all about the pain points of your readers.” To create a content mission statement that targets pain points, you should include:



  • Who you are.

  • Who you're trying to reach.

  • How you plan to reach them.

  • What you want to accomplish.


Recipe 3: Best Answer Biscuits


One successful content marketing strategy is to provide the “best answer” for your audience's questions. Lee Odden was quoted: “Create signals of authority that position you as the best answer wherever your customers are searching.” The reason you want to be the best answer is that 50% of content gets 8 shares or less (BuzzSumo & Moz). The best answer will continue to be found via search and help your target audience. Key to a best answer strategy is creating resource-rich content that ranks well in search engines. These pages should essentially be the “best answer” for a particular topic.


Creating a consistent experience is also an important part of a “best answer” strategy, since blogging today is actually a multi-channel approach. Everything from your blog posts to site content and social profiles needs to create a consistent experience. Provide a consistent look and feel, along with consistent messaging, so your audience knows they are interacting with you.


Recipe 4: Less is More Lasagna


We are publishing so much content that it is causing content overload. According to Ann Handley, “We don't need more content, we need better content.” The overall idea is to favor quality over quantity. Ashley explained why more content isn't better: “When you try to pump out more content, the lowered quality can actually negatively impact the perception of your blog or brand.” Instead, focus on creating resource-rich, high quality content.


To better reach prospective customers, identify key traits of your current ideal customers. Then develop profiles or personas to better target these identified audience members to focus on content impact. To create impact, make highly relevant content. Also, use contacts, tools and data to help determine relevance. Lastly, think about your distribution strategy.


Recipe 5: Content Cheeseburger


When it comes to content, create evergreen content that can be slightly modified (or not) and used again and again. The truth is, only a fraction of your audience saw it the first time, so why not share it again? Also, use analytics to determine which posts resonate most with your audience, and keep going back to identify which posts could be revised to meet changing audience needs. Another great content strategy is to start a series that keeps your readers coming back; you can then point back to the previous posts in the series to breathe new life into them.


Recipe 6: Influencer Ice Cream Sundae


Ashley shared that “you can begin including influencer contributions into your blog content, without even having to perform outreach.” You can also utilize influencers to facilitate greater reach for your content and improve its quality, and some say influencer content can inspire more buyer engagement, leads and booked deals.


Ongoing influencer involvement can also lead to organic advocacy of your brand, products and services. If you curate or co-create, you can reach out to influencers and make it easy for them to share. Influencers are busy people, so you want to remove as much friction as possible.




You can see Ashley's full presentation below on SlideShare:





