A very warm welcome to all the new subscribers. I’m thrilled to have you as readers and truly appreciate your feedback and support.
In this week’s SPN:
Inspiration from work being rewarded in regions beyond Europe and the US
How to wrestle with performance and create a different view
Jobs & Opps that caught my eye this week
Let’s dig in!
Fundraise Up stands out in our industry as one of the verrry few nonprofit software providers to achieve ISO 27001 certification.
Lots of software solutions say they prioritize security, but an ISO certification takes it many steps further and represents a true, tangible commitment to security backed by rigorous standards and audits.
Choosing Fundraise Up means selecting a solution that not only meets your fundraising needs but also prioritizes the highest levels of security and compliance for your Org.
Game changer? It was for me.
Be Inspired
Spikes Asia and Dubai Lynx, the advertising awards shows for the Asia Pacific and Middle East regions respectively, took place last month, and both had something that’s been missing from Cannes Lions for a while - campaigns that swept the boards.
Having started out my nonprofit adventure on the founding team of UNICEF Kid Power - the world’s first wearable for good, which counted steps and turned them into lifesaving impact (nutrition) - I was tickled speckled pink to watch VML Melbourne’s “FitChix” campaign for Honest Eggs Co, which built the world’s first wearable step counter for chickens!
They printed eggs with the average step count of the chickens that laid them to demonstrate the brand’s commitment to animal welfare.
Campaign notes:
“By tracking our chickens and sharing their steps we brought honesty to a misleading industry, empowering every day people and farmers to help change the egg industry for good.”
Australians consume 6.6 billion eggs a year
Most eggs come from enclosed battery farms
The campaign created social media accounts for the chickens, attached the wearables made from hen-friendly material and put their workouts on Strava
Achieved 493% increase in online conversation
Achieved 222% increase in stockists and started selling out
“FitChix” won five Grands Prix in total, in Brand Experience & Activation, Creative Commerce, Healthcare, Integrated and Outdoor.
Dentsu Tokyo’s digital stamp-collecting campaign for Japan Railways Group (My Japan Railway) also went on a winning streak, taking Grands Prix in the Industry Craft, Digital Craft and Direct categories.
Leo Burnett Jeddah’s “ProtecTasbih” campaign for Saudia Airlines was inspired. They created the world’s first dual-purpose prayer beads that sanitize hands, promoting hygiene at one of the world’s largest gatherings of people - in Makkah for the Hajj pilgrimage and Umrah season. Sans alcohol, tea tree oil was the secret sauce! At Dubai Lynx, they won four Grands Prix, in the Brand Experience & Activation, Design, Direct and PR categories.
Of the three, FitChix and My Japan Railway are too old to be entered into any category at Cannes this year except Creative Effectiveness (which rewards long-term results), so this list isn’t much help when it comes to predicting where the big Lions are going to go in June.
Still, it’s interesting (and I find it energizing) to see the kind of work that gets rewarded in regions beyond Europe and the US!
Jobs & Opps 🛠️
Sierra Club: Associate Director, Digital Fundraising Strategy ($90,000 - $100,000)
National Geographic Society: Director, Marketing (Revenue Generating Programs) ($133,000 - $158,000)
UNHCR USA: Senior Director, Direct Response ($181,294 - $201,438)
Salesforce.Org: Director, Nonprofit Advisor ($228,970 - $306,250)
United Way of New York City: Senior Vice President, Community Impact Officer ($170,000 - $200,000)
Worth Rises: Director of Marketing & Comms ($135,000)
Action Against Hunger: Content Lead ($90,000 - $95,000)
Latinos For Education: Chief Advancement Officer ($220,000 - $250,000)
Girl Scouts: Chief Marketing Officer ($325,000 - $375,000)
IRC: Director, Fundraising Operations ($130,000 - $160,000)
AnitaB.org: SVP Brand and Revenue ($201,000 - $215,000)
Catholic Medical Mission Board (CMMB): Director of Brand Strategy & Media Relations (from $115,000)
Save the Children US: Advisor, Demand Generation ($68,850 - $94,050) and Managing Director, Social Media & Influencer Relations ($107,950 - $147,250)
American Cancer Society: Director, Media Strategy ($89,000 - $111,000)
A Better View of Performance
If I had to speculate, I’d venture that every Nonprofit Operator worldwide wrestles with Attribution at least a few times a week. At least I hope it’s not just me. The intricacies of Attribution are truly perplexing. There’s always a piece of the puzzle that doesn't quite fit. And then there’s the question of which model to adopt…
Is the last click more significant than the first click?
How do you factor in frequency?
Why should I place faith in Google’s “Data-Driven Attribution” when its inner workings remain a mystery?
And what's the deal with the fact that “Facebook doesn’t share data with Google Analytics and you need to decide how much you trust the platform?”
I struggle to trust any of them. But, I get it, no framework is without its flaws.
Brand Awareness metrics, for instance, are ineffective when viewed in isolation. Every report I’ve encountered invariably emphasizes the campaign's success in driving a “5x increase in ad recall and 3x increase in consideration,” yet this never translates into a noticeable uptick in donation volume.
MMM (media mix modeling) is slow. And I’ve yet to witness any Org fully utilizing it to inform all its spending decisions – the use cases are just too limited.
And so on for every other model.
Measuring digital performance is undeniably challenging. The industry's current frenzy over the deprecation of third-party cookies has held measurement and attribution in the spotlight – but does it truly alter the landscape, or are we merely swapping one enigma for another?
What do you mean you haven’t figured out measurement yet?!
I don’t think anybody has figured it out. Below I’ve dropped my “summary framework”. There’s no perfect model to answer every “what works and what doesn’t?” question, and any of them used alone is no better than a coin flip – otherwise, everybody would be using it already.
What I’ve shared is how I used various performance views throughout the donor lifecycle – each contributing to an overall picture for me and others in the Org. Perhaps this is helpful to you too? None of them require expensive, long-to-implement tools, and they should apply to Orgs of all sizes:
One note before we jump in: Most measurement frameworks are geared towards determining Channel, Ad, or Creative performance - not Audience performance. I discussed segmentation and finding the most likely-to-convert donors in SPN #15. Here I’ve focused on measuring Channel performance in reaching those audiences at various stages of the lifecycle. There’s no such thing as “Cost per Donation in Display” or “Cost per Donation in Search” – there’s only “Cost per Donation for Audience A in Display”.
A Summary Framework for Each Step of the “Funnel”
1. Journey Stage: Top of Funnel
Measure: Generating new, first-time donors
Key questions for this lifecycle phase: What/Who is the best audience? What is the right channel mix/budget for each channel to reach that audience?
How: Incrementality Testing and Geo Holdouts worked best for me.
When launching a new channel – or twice a year for channels that have already been running for a long time – this approach helped me (there’s a rough code sketch after the list):
Pulling the Geo Performance report at a ZIP code level for the given channel.
Separating them into 3 even segments based on spend to reflect Scale.
Within each segment, ranking ZIP codes from best to worst based on the revenue generated, ROI, and one more quality metric – I usually use Average Donation Value or Conversion Rate.
Creating a summarized, “final” rating as the sum of the three rankings above.
Within each of the 3 segments, separating ZIP codes into 4 quartiles based on this final rating.
Randomly picking 3 of the 12 resulting cells – and turning off the specific channel’s spend for half of the ZIP codes in those cells for a month.
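To make the mechanics concrete, here’s a minimal pandas sketch of that segmentation and holdout selection. Everything in it is a placeholder assumption on my part – the file name, the column names (zip, spend, revenue, roi, avg_donation), and the quality metric – so treat it as a template rather than a finished script:

```python
import pandas as pd
import numpy as np

# Hypothetical geo report: one row per ZIP code for the channel being tested.
geo = pd.read_csv("geo_performance.csv")  # columns: zip, spend, revenue, roi, avg_donation

# 1. Separate ZIP codes into 3 even segments based on spend, to reflect scale.
geo["spend_segment"] = pd.qcut(geo["spend"], q=3, labels=["low", "mid", "high"])

# 2. Within each segment, rank ZIPs on revenue, ROI, and one quality metric,
#    then sum the three ranks into a summarized "final" rating (lower = better).
for metric in ["revenue", "roi", "avg_donation"]:
    geo[f"rank_{metric}"] = geo.groupby("spend_segment")[metric].rank(ascending=False)
geo["final_rating"] = geo[["rank_revenue", "rank_roi", "rank_avg_donation"]].sum(axis=1)

# 3. Within each spend segment, split ZIPs into 4 quartiles on that rating:
#    3 segments x 4 quartiles = 12 cells.
geo["quartile"] = geo.groupby("spend_segment")["final_rating"].transform(
    lambda s: pd.qcut(s, q=4, labels=False) + 1
)
geo["cell"] = geo["spend_segment"].astype(str) + "-Q" + geo["quartile"].astype(str)

# 4. Randomly pick 3 of the 12 cells and hold out half of the ZIPs in each:
#    spend for this channel is turned off for the held-out ZIPs for a month.
rng = np.random.default_rng(42)
test_cells = rng.choice(geo["cell"].unique(), size=3, replace=False)
geo["holdout"] = False
for cell in test_cells:
    zips = geo.loc[geo["cell"] == cell, "zip"].to_numpy()
    held_out = rng.choice(zips, size=len(zips) // 2, replace=False)
    geo.loc[geo["zip"].isin(held_out), "holdout"] = True

print(geo.groupby(["cell", "holdout"]).size())
```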
I usually look at what I call the “dynamic control group”: for the month when media is turned off, I monitor the MoM metrics for the affected ZIP codes (is revenue going down? Is my count of donations decreasing?) versus the same MoM metrics for the non-affected ZIP codes in the same segment, comparing not the actual values but their change. This approach has consistently helped me make sense of whether each channel I’m running is contributing to campaign performance and whether that contribution justifies continued investment.
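And a sketch of that read-out, assuming a hypothetical ZIP-to-cell/holdout assignment file produced by the previous sketch plus a monthly revenue/donation rollup per ZIP (again, all file and column names are made up for illustration):

```python
import pandas as pd

# Hypothetical inputs: ZIP -> cell/holdout assignments for the 3 test cells
# (from the sketch above), plus a monthly revenue/donation rollup per ZIP.
assignments = pd.read_csv("holdout_assignments.csv")  # columns: zip, cell, holdout (bool)
monthly = pd.read_csv("zip_monthly.csv")              # columns: zip, month, revenue, donations

before, during = "2024-02", "2024-03"  # the month before the holdout vs. the holdout month

wide = monthly[monthly["month"].isin([before, during])].pivot(
    index="zip", columns="month", values=["revenue", "donations"]
)
wide.columns = [f"{metric}_{month}" for metric, month in wide.columns]  # flatten the columns

# Compare the month-over-month *change*, not the absolute values.
wide["rev_change"] = wide[f"revenue_{during}"] / wide[f"revenue_{before}"] - 1
wide["don_change"] = wide[f"donations_{during}"] / wide[f"donations_{before}"] - 1

# Held-out vs. untouched ZIPs within the same cell: if the held-out ZIPs drop
# noticeably more, the channel is contributing and the spend is easier to justify.
wide = wide.join(assignments.set_index("zip")[["cell", "holdout"]])
readout = wide.groupby(["cell", "holdout"])[["rev_change", "don_change"]].mean()
print(readout)
```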
Brand Awareness, measured as an increase in Branded Paid Search terms, is another of my favorite metrics for top-of-funnel channels. The Google Analytics path-to-conversion report is immensely helpful here, showing whether exposure to any of the channels is followed by a Branded Paid Search impression.
For Branding campaigns, I pull those numbers into an aggregated table, look at the “cost per generated paid search impression,” and then use that to compare branding channels or campaigns against one another.
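If it helps, this is roughly how I’d assemble that aggregated table in pandas – assuming you’ve already exported per-path data with a flag for whether a Branded Paid Search impression followed exposure, plus spend per channel (both hypothetical files and columns):

```python
import pandas as pd

# Hypothetical exports: one row per path with the exposing channel and a flag for
# whether a Branded Paid Search impression appeared later in that path, plus spend
# per branding channel for the same period.
paths = pd.read_csv("branding_paths.csv")  # columns: channel, branded_search_followed (0/1)
spend = pd.read_csv("channel_spend.csv")   # columns: channel, spend

generated = (
    paths.groupby("channel")["branded_search_followed"]
    .sum()
    .rename("branded_impressions_generated")
)
table = spend.set_index("channel").join(generated)
table["cost_per_branded_impression"] = table["spend"] / table["branded_impressions_generated"]

# A lower cost per generated Branded Paid Search impression = a more efficient branding channel.
print(table.sort_values("cost_per_branded_impression"))
```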
2. Journey Stage: Mid-Funnel
Measure: Converting one-time donors to recurring
Key question: Which channel (campaign or ad) contributes the most to moving donors to the next stage of engagement?
How: Measurement needs to focus on Conversion Rate Lift. The Conversion Paths report in GA comes in handy again.
For every new channel – and every 6 months for channels already running – run a “ghost holdout test” as follows (there’s a code sketch after the steps):
In GA, export all the conversion paths for the last month for a given audience (if I didn’t have good audience definitions, then I selected ZIP code blocks, same as I described in Step 1) – separating the ones including and excluding the specific channel I’m analyzing.
Export all these paths into an Excel file - there is an option in the GA export dropdown that immediately creates an easy-to-read spreadsheet with every touchpoint in its own cell and every path in its own row.
Duplicate the spreadsheet, then filter one version to include the pathways with the channel we are analyzing and the other to exclude them.
For the version of the file that includes the target touchpoints, create two more copies (three versions in total)
In the first, keep only the paths that “start with” the target touchpoint – i.e., it was the first click
In the second, keep only the paths that “end with” the target touchpoint – i.e., it was the last click
In the third, exclude both scenarios above – i.e., the target touchpoints appear only somewhere in the middle
With the 4 resulting files, you can compare each one’s Conversion Rate to the baseline – the paths excluding the analyzed channel – to immediately see whether spending money in the channel makes sense, what incremental lift in CVR it drives, and which attribution model (First Click, Last Click, or Linear) should be used for daily optimizations without deeper analysis.
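If the spreadsheet copies get unwieldy, the same four-way comparison can be sketched in a few lines of pandas. This assumes a simplified export with one path per row (“Channel A > Channel B > …”) and a converted flag – not the exact GA format – so adapt it to whatever your export actually looks like:

```python
import pandas as pd

# Hypothetical export: one path per row ("Channel A > Channel B > ..."), plus a
# converted flag. The channel name below is just an example.
paths = pd.read_csv("conversion_paths.csv")  # columns: path, converted (0/1)
target = "Paid Social"                       # the channel being analyzed

touches = paths["path"].str.split(" > ")
paths["has_target"]  = touches.apply(lambda p: target in p)
paths["first_click"] = touches.apply(lambda p: p[0] == target)
paths["last_click"]  = touches.apply(lambda p: p[-1] == target)
paths["middle_only"] = paths["has_target"] & ~paths["first_click"] & ~paths["last_click"]

def cvr(mask: pd.Series) -> float:
    """Conversion rate for the subset of paths selected by the mask."""
    subset = paths[mask]
    return subset["converted"].mean() if len(subset) else float("nan")

baseline = cvr(~paths["has_target"])  # paths that never touched the channel
report = {
    "baseline (channel excluded)": baseline,
    "first click": cvr(paths["first_click"]),
    "last click": cvr(paths["last_click"]),
    "middle only": cvr(paths["middle_only"]),
}

# The lift over baseline shows whether the channel earns its budget, and which simple
# model (First Click, Last Click, or Linear) best matches where it actually adds value.
for name, rate in report.items():
    print(f"{name}: CVR {rate:.2%}, lift vs. baseline {rate / baseline - 1:+.1%}")
```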
3. Journey Stage: Bottom of Funnel
Measure: Increasing the LTV of recurring donors
Key question: Does spending money in a particular channel lower churn?
How: Conversion pathways are a great resource here again, with a slight change in logic.
The process is two-fold:
The first step is to pull all the pathways for donors who have churned in the last period (I usually run these reports every 3 months) versus the ones who haven’t.
Count how many pathways the channel appears in within the churned donors’ bucket and convert that to a percentage – then do the same for the retained (not churned) donors’ pathways.
For example, if the channel is Paid Search, then:
“Paid Search -> Paid Search -> Organic” pathway should be counted once
“Organic -> Paid Search -> Display” pathway should be counted once
“Organic -> Display -> Paid Social” pathway shouldn’t be counted at all
That difference in percentages is a trustworthy way to estimate how much each channel decreases the average churn rate. That decrease can then be used to calculate the “cost to not lose a donor” – the average cost per touch in the channel, divided by the percentage decrease.
Comparing channels by the cost to not lose a donor helps identify channels worth spending on vs the ones that can be disabled.
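Here’s a rough sketch of that counting logic, assuming hypothetical pathway exports for churned and retained donors plus a per-channel average cost-per-touch file – all names are illustrative:

```python
import pandas as pd

# Hypothetical exports: one pathway per row ("Channel A > Channel B > ..."),
# split into donors who churned last period and donors who didn't, plus an
# average cost-per-touch figure for each channel.
churned  = pd.read_csv("churned_paths.csv")    # column: path
retained = pd.read_csv("retained_paths.csv")   # column: path
costs    = pd.read_csv("cost_per_touch.csv")   # columns: channel, avg_cost_per_touch

def share_with_channel(df: pd.DataFrame, channel: str) -> float:
    """Share of pathways in which the channel appears at least once."""
    return df["path"].str.split(" > ").apply(lambda touches: channel in touches).mean()

rows = []
for _, row in costs.iterrows():
    ch = row["channel"]
    churned_share = share_with_channel(churned, ch)
    retained_share = share_with_channel(retained, ch)
    churn_decrease = retained_share - churned_share  # crude proxy for churn reduction
    cost_to_keep = (
        row["avg_cost_per_touch"] / churn_decrease if churn_decrease > 0 else float("inf")
    )
    rows.append({
        "channel": ch,
        "share_in_churned_paths": churned_share,
        "share_in_retained_paths": retained_share,
        "cost_to_not_lose_donor": cost_to_keep,
    })

# Channels with the lowest cost-to-not-lose-a-donor are the retention spend worth keeping.
print(pd.DataFrame(rows).sort_values("cost_to_not_lose_donor"))
```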
Wrapping Up
The above measurement “plays” are not a replacement for a holistic, one-size-fits-all attribution model or MMM – they’re too labor-intensive to use every day, or even every week. But for Orgs looking to outperform the competition, MMM or Attribution simply don’t make the cut as the only way to gauge performance. I hope the above approaches help you test faster, improve performance, and/or have a better view of performance than a coin flip.
That’s all for today!
If you enjoyed this, please consider sharing it with your network. Thank you to those who do. If a friend sent you this, get the next newsletter by signing up.
And huge thanks to this Quarter’s sponsor Fundraise Up for creating a new standard for online giving.
Now onto the interesting stuff!
Reads From My Week
How Americans Use Social Media (Pew Research Center)
Media Lab learnings for 2024 planning (Think with Google)
TikTok’s Business, in Charts (WSJ)
Chips, tacos and the world's most innovative companies of 2024 (Fast Company)
How Streamers Are Fighting The Plight of Shrinking TV Ad Inventory (AdExchanger)
Influencers are now asking to be in our brand ad. And we should listen (The Drum)
A course-correction from minimalism continues to find purchase in marketing as brands chase young consumers (Marketing Dive)
47% of the US Population 12+ Has Listened To a Podcast In the Last Month (Podnews)
Privacy Compliance Is At The Top of The Tech Lab’s 2024 To-Do List (AdExchanger)
AI is getting to the interesting stuff - working with Liverpool FC on the best tactics for taking corners (Training Ground)