Making the case for self-reported attribution in your marketing programs

If you’re a marketer, measurement can be one of your biggest headaches. Between privacy changes eroding digital analytics and the growing pressure to justify a data-driven approach in the face of a recession, that isn’t going to change anytime soon.

One particularly pesky challenge is attribution. 

Marketers have become increasingly aware that software-driven attribution models (see below) have blind spots. Software attribution tends to reward the last touches in the customer journey and miss the experiences that influence behavior along the way. 

This is part of the reason brands tend to over-index on performance marketing and shy away from brand advertising. The same phenomenon affects content marketing initiatives, such as podcasts, blogging, community building, and social media — making it difficult to get them off the ground or give them enough runway to prove business value. 

In response, an old tactic — asking customers directly, “How did you hear about us?” — has made its way back into marketing attribution frameworks. Something so simple — and free, save for a little bit of someone’s time to implement — appealed to us. So we decided to test it.

The problem

A B2B client had invested heavily in content marketing in the form of blogs and a podcast. They felt like the programs were working, but they were unable to prove success through data. 

The hypothesis

We had a hunch that the programs were driving qualified leads, but our client was not set up to properly measure success — partly because they were unsure what metrics to track, and partly because of limitations in their Google Analytics and attribution modeling.

The test

We added a required, open-text field with a “How did you hear about us?” prompt on our client’s high-intent lead form. Making the response open text and required was critical to remove any biases that might come from a drop-down list and to ensure that we collected as many responses as possible. Over a 12-month period, we captured 314 leads.

For the test, we wanted to compare the last-touch channel attributed by HubSpot with the qualitative data collected from website visitors in the self-reported attribution field. 
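
If you want to run a similar comparison yourself, here is a minimal sketch of the analysis step. It assumes a CSV export of the leads containing the software-assigned last-touch channel and the raw free-text answers; the file name, column names, and keyword rules are placeholders of ours, not HubSpot’s.

```python
import pandas as pd

# Hypothetical export of the leads: one row per lead, with the channel the
# software assigned at the last touch and the raw "How did you hear about us?" answer.
leads = pd.read_csv("leads_export.csv")  # columns: hubspot_last_touch, self_reported

# Illustrative keyword rules for bucketing the free-text answers. Real responses
# are messy, so rules like these need tuning against the actual data.
def bucket(answer: str) -> str:
    text = answer.lower()
    if any(k in text for k in ("refer", "recommend", "association", "client")):
        return "referral"
    if any(k in text for k in ("google", "search", "bing")):
        return "search"
    if any(k in text for k in ("blog", "podcast", "article", "episode")):
        return "content"
    if any(k in text for k in ("conference", "lecture", "event")):
        return "event"
    return "other"

leads["self_reported_bucket"] = leads["self_reported"].fillna("").map(bucket)

# Cross-tabulate what the software says against what people say.
print(pd.crosstab(leads["hubspot_last_touch"],
                  leads["self_reported_bucket"],
                  normalize="index").round(2))
```

Keyword rules only get you a first pass; with a few hundred responses, a human read-through is still the most reliable way to code them.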

The results

Software attributed 88% of lead volume to direct and organic traffic, 7% to “other sources,” and nearly 4% to paid search — which is funny because they don’t run any paid search!

Insights from the self-reported attribution field on the lead form told a different — and valuable — story:

  • 43% of leads came from a client or trade association referral. Most named the specific person or organization, and we learned that one particular trade association was driving a ton of pipeline growth. We were also pleased by how many previous and existing clients referred business. 
  • 36% of leads came from search engines (mostly Google, with a few others). This was great validation of our SEO-driven content and local SEO initiatives. 
  • 9% of leads pointed specifically to blogs or the client’s podcast. In many cases, the person named the specific blog that resonated with them and even described how they searched Google to find it. 
  • Another 2.5% of leads pointed to specific conferences or lectures that our client had attended or spoken at. 

When we narrowed the view to “direct” or “dark” traffic only, the insights were equally enlightening: 60% of direct traffic self-identified as a customer or trade association referral, and another 25% pointed to SEO. 

[Pie chart: self-reported attribution of direct traffic]
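
A breakdown like this falls out of the same table. Continuing the sketch above, with hypothetical file and column names:

```python
import pandas as pd

# The same leads table after the free-text answers have been bucketed
# (hypothetical file and column names).
leads = pd.read_csv("leads_with_buckets.csv")

# Keep only the leads the software labeled as direct traffic and see what they told us.
direct = leads[leads["hubspot_last_touch"] == "Direct"]
print(direct["self_reported_bucket"].value_counts(normalize=True).round(2))
```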

Another positive? The richness of the responses. Prospects gave us detailed, specific answers in their own words (examples redacted to protect client anonymity).

What’s happening here?

Software attribution has its limitations. In this case, the model was overweighting the last touch in the customer journey: where the demand was captured. People were consuming a lot of information during their research and decision-making process, and when they were ready to buy, they came to the site directly (what typically shows up as “dark traffic”) or through branded organic search (searching for our client by name). 

When visitors arrive these ways — direct or through branded search — it suggests that something they encountered during their research, such as a meaningful piece of content or a word-of-mouth recommendation, created the demand. In the case of our client, that something was their thought leadership, in the form of blogs and a podcast, plus referrals from Facebook groups, existing clients, and trade associations. We know this because we asked website visitors and they told us — and, in some cases, they were wonderfully specific.

What are the limitations of attribution software?

  • Last-touch attribution models give all of the credit to the final touch in the customer journey — where the demand was captured — and none to what happened before it (the sketch after this list shows how these splits compare).
  • Blended attribution models assign equal weight to all touches, which doesn’t provide insight into the most meaningful experiences.
  • Even attribution models that give weight to the first touch (U-shaped and first-touch) generally require a website visit to register a touch, so word-of-mouth and content consumed on other platforms, such as podcasts or social media, go unmeasured. Google has also announced it will deprecate several of these early-touch attribution models. 
  • Universal Analytics (GA3) historically provided only a 90-day lookback window, and while GA4 is coming, browser-driven changes in cookie behavior will continue to be problematic for brands with longer sales cycles. (Note: GA4 brings data-driven attribution, which may solve some attribution issues but still requires a site visit for a touch to enter the model.)
  • Modern multitouch attribution platforms can help, but they can be too complex, too resource-intensive to manage, or simply too expensive for many companies to implement. They also typically require a site visit for a touch to be included in the attribution model.
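
To make the differences among these models concrete, here is a toy sketch of how the first three would split credit across one hypothetical journey. The touch names and the 40/20/40 U-shaped weighting are illustrative, not any vendor’s defaults:

```python
# One hypothetical journey. In practice, the podcast touch would never be recorded
# at all unless it led to a tracked site visit; that's the bigger blind spot.
touches = ["podcast", "blog post", "branded search", "direct visit"]

def last_touch(journey):
    # 100% of the credit goes to the final touch.
    return {journey[-1]: 1.0}

def linear(journey):
    # Every touch gets an equal share.
    return {t: 1 / len(journey) for t in journey}

def u_shaped(journey, ends=0.4):
    # First and last touches each get a large share; the middle splits the rest.
    middle = journey[1:-1]
    credit = {journey[0]: ends, journey[-1]: ends}
    for t in middle:
        credit[t] = credit.get(t, 0) + (1 - 2 * ends) / len(middle)
    return credit

for model in (last_touch, linear, u_shaped):
    print(model.__name__, model(touches))
```

Even the U-shaped split only sees touches the tool recorded in the first place, which is exactly the blind spot self-reported attribution helps fill.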

Does this mean you should ditch software attribution altogether? 

Absolutely not! Self-reported attribution is simply a valuable, additional data point in your attribution arsenal — and a low-cost, easy-to-implement one at that. It’s important to use both software and self-reported attribution for a holistic view of your marketing efforts across the customer journey. 

Self-reported attribution generally captures things such as blogs, podcasts, social media content consumed or shared on a platform, and digital or in-person word-of-mouth. Usually this is the most meaningful touchpoint or channel in the prospect’s eyes; in other words, it’s the experience that stuck with them the most. Generally it will give you “program”-level attribution (social or podcast) and, in rare cases, the exact piece of content. It’s still imperfect, of course; it relies on human memory, after all. 

Software attribution provides insight into how you captured the demand once it was created. Relying only on software attribution can cause marketing teams or company leadership to overinvest in what captured the demand, such as paid search, and underinvest in growth channels, such as content marketing. 

Both are critical to measure, as ideally you want to be running demand-gen and demand-capture programs simultaneously. Refine Labs has dubbed this a “hybrid attribution model.” 

Did we see a drop in conversion rate when we added the new field? 

Nope. The conversion rate before and after implementation was virtually unchanged. That’s not surprising: for B2B businesses like this one with longer sales cycles, prospects converting through the website’s lead form are typically highly motivated and won’t be deterred by an additional field. 

If you’re a B2C marketer, selling products where people can buy quickly based on emotion, it’s always worth testing a proposed change to ensure that it doesn’t negatively impact your conversion rate. 
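
If you want to check a change like this yourself, a quick before-and-after comparison is usually enough. Here is a sketch with made-up numbers (not our client’s):

```python
from math import sqrt
from statistics import NormalDist

# Made-up counts for illustration: form views and submissions in comparable
# windows before and after adding the "How did you hear about us?" field.
before_views, before_submits = 4200, 126
after_views, after_submits = 4350, 128

p_before = before_submits / before_views
p_after = after_submits / after_views
print(f"conversion before: {p_before:.2%}, after: {p_after:.2%}")

# Two-proportion z-test to gauge whether the difference is more than noise.
p_pool = (before_submits + after_submits) / (before_views + after_views)
se = sqrt(p_pool * (1 - p_pool) * (1 / before_views + 1 / after_views))
z = (p_before - p_after) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```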

What can marketers do?

  • Add self-reported attribution to your lead forms. Make it a required, open-text field. 
  • Interview prospects or existing customers to understand their buyer journeys and purchase habits. 
  • If you’re in B2B, add traffic-deanonymizing software such as Clearbit, Metadata, or ZoomInfo to better qualify website traffic that doesn’t convert.
  • Make branded search demand a KPI (see the sketch after this list).
  • Prioritize capturing first-party data. Create a newsletter or find other ways to capture emails early in the sales cycle (e.g., an offer) so you can continually measure people’s behavior in your CRM.
  • Run controlled pilots when testing new marketing programs to ensure you can effectively measure their impact. 
  • Understand how channels play together, and measure marketing programs holistically. This could also include moving to media mix modeling, depending on your media spend and the maturity of your marketing programs. 
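
On the branded-search KPI, here is a minimal sketch of one way to track it, assuming a query-level CSV export from your search analytics tool. The file name, column names, and brand terms are placeholders:

```python
import pandas as pd

# Hypothetical query-level export: one row per search query with its click count.
queries = pd.read_csv("search_queries.csv")  # columns: query, clicks

# Your own brand name plus common variants and misspellings.
BRAND_TERMS = ("acme", "acme co")  # placeholder brand terms

queries["branded"] = queries["query"].fillna("").str.lower().apply(
    lambda q: any(term in q for term in BRAND_TERMS)
)

clicks_by_type = queries.groupby("branded")["clicks"].sum()
branded_share = clicks_by_type.get(True, 0) / clicks_by_type.sum()
print(f"branded share of search clicks: {branded_share:.1%}")
```

Tracked over time, the branded share is a rough proxy for demand created outside the channels your analytics can see.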

If you want more content like this, explore our case studies, sign up for our newsletter, or follow us on LinkedIn. If you’re ready to talk to one of our experts, then check out our contact form, where you can see that we practice what we preach. If you happen to fill it out, be sure to mention this blog post!

Stay curious.
