“The numbers and data is startling and there should never be anything in our business that is causing us to lose momentum that comes down to our own fault or process or tools. Do we have 100% understanding of the problems and what will fix the problems or do we just have guesses and need more time to investigate and get confirmation? We can not afford to have ANY problems feeding leads into our AEs - that part of our business has to be 10000000% perfect.”
^^ I started off this past week with that message from my CEO staring me in the face. Fun feeling, right?
But he was absolutely right. For the prior two weeks, I’d noticed something was off with our pipeline production and had brought that to light with our GTM leadership team, but I couldn’t pinpoint the cause with 100% confidence.
Sponsor: HockeyStack
Your CEO just slacked you: "We have our board meeting in 2 days. Send me a slide with what's working + anything they need to know."
Did your palms get sweaty reading that?
Or is this an easy ask because you can easily get to this info already?
I'm neurotic about forecasting + tracking performance. I have reports upon reports and dashboards upon dashboards.
So when Emir + Claudia over at HockeyStack asked me what my perfect dashboard would look like, I immediately started geeking out.
And when they sent this dashboard mock-up back to me yesterday, I was in nerd heaven 🙌
2025 is underway. And it's also never too late to add some solid reports or views to your dashboard, so hopefully this provides some inspiration to any of you who are looking to level up your dashboarding game.
The first inkling something was off
The beauty of focusing on a few core metrics is that it’s easy to spot when something isn’t quite right. Every week, I’m in our systems and reporting tools looking at three key metrics (high-intent handraisers, qualified opportunities, and closed won deals), the cost-pers + conversion rates between them, and how those numbers compare to previous period(s) and goals.
Every Friday, I send an update to my CEO. Part of that update is a weekly snapshot on top- and mid-funnel performance. This was the message a few weeks ago when I noticed something was off. Top of funnel performance had been the strongest it had ever been, but for some reason we weren’t seeing that same trend with our pipeline. These normally have a very strong direct correlation based on historical data, so that’s what signaled to me that either:
Something changed at the top of the funnel (who was coming in + the quality of them)
A core process between these two stages was breaking
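The weekly check described above boils down to simple arithmetic: compute the stage-to-stage conversion rate each week and flag any week that falls meaningfully below the historical baseline. Here’s a minimal sketch of that idea. Every number, stage name, and threshold below is a made-up placeholder, not the actual data from this story.

```python
# Hypothetical baseline: historically ~40% of high-intent handraisers
# become qualified opportunities. Flag any week below 80% of that.
HISTORICAL_HANDRAISER_TO_QUALIFIED = 0.40
ALERT_THRESHOLD = 0.80

weeks = [
    # (week label, high-intent handraisers, qualified opportunities)
    ("W1", 100, 41),
    ("W2", 120, 47),
    ("W3", 140, 38),  # top of funnel up, pipeline flat: the warning sign
]

for label, handraisers, qualified in weeks:
    rate = qualified / handraisers
    flagged = rate < HISTORICAL_HANDRAISER_TO_QUALIFIED * ALERT_THRESHOLD
    status = "INVESTIGATE" if flagged else "ok"
    print(f"{label}: {handraisers} handraisers -> {qualified} qualified "
          f"({rate:.0%}) [{status}]")
```

The point isn’t the code, it’s the habit: because the baseline correlation is written down somewhere, a divergence like W3 jumps out in one glance instead of hiding in the daily noise.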
I’ve learned my lesson about making knee-jerk responses to weekly data fluctuations, so after this first “down” week, I didn’t jump in and make any drastic changes, like adjusting our targeting or swapping out the marketing > sales handoff tech we use. I also had a feeling this wasn’t the result of one thing, but a multi-variable problem. So I wrote down a list of everything that could be a cause and began working my way down that list to rule each in or out.
The analysis
This was the “fun” part. (And by fun, I mean 100% manual, painstaking, row-by-row data analysis in Salesforce, Hubspot, spreadsheets, our demo routing tool, ad platforms, and more.)
But it was the work that needed to be done if I was going to get an answer about what was going on. After a few days of aggregating + analyzing data, my hypothesis that this was a multi-variable problem was validated: there were three core variables, each playing into this.
Two of them were slower, more drawn-out variables that had been quietly building over a handful of quarters. Each on its own was of minor significance, but combined and compounded over time, they had become significant. Think of the metaphor of a plane going off course by 1°. Not significant over a short distance, but when you’re flying across the country, that’s the difference between landing in Los Angeles and landing ~40 miles out in the Pacific Ocean.
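For the curious, the 1° metaphor checks out with basic trigonometry. Using a rough NYC-to-LA distance of ~2,450 miles (an approximation), a constant 1° heading error produces a lateral offset of distance × sin(1°):

```python
import math

# Rough great-circle distance, NYC to LA (approximate)
nyc_to_la_miles = 2450
heading_error_deg = 1.0

# Lateral offset after flying the full distance with a constant 1° error
offset = nyc_to_la_miles * math.sin(math.radians(heading_error_deg))
print(f"~{offset:.0f} miles off course")  # roughly the ~40 miles above
```

Small drift, long distance, big miss. Same with funnel metrics.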
Then there was the added kicker of a newly introduced variable that was unrelated to the other two, but contributed to the problem and was what pushed this over the edge into “something’s going on” territory.
The solution + a lesson in ownership
Since we were looking at a multi-variable problem, the solution wasn’t one blanket fix that would solve all three variables. Each had to be addressed individually since they were independent of one another. They also weren’t equal in terms of their impact on pipeline conversion, so they needed to be prioritized + addressed accordingly.
The newest variable was the most important one to solve for. It was the one impacting our conversion rate the most, as evidenced by the decrease in recent weeks. I wasn’t happy when I figured out what it was. And I’m even more unhappy when I look back at how I (didn’t) manage it.
The tool we use to let demo requesters pick a date + time for their demo with a member of our sales team updated its platform and was making all of its customers migrate to the new version. But rest assured, “you will see that we have automatically rebuilt your current setup in the new platform,” so this should be a seamless transition.
So the migration happened and all was fine. Meetings were still being scheduled, and the tool was working as it should. Or so it seemed on the surface. Each day, I’d see demo requests come in and meetings scheduled on our sales team’s calendars.
But when I zoomed out from the daily view to the weekly, that’s where the trends started to emerge. Hindsight being 20/20, I would have gone straight to the migration date and looked to see if things started getting “wonky” then. But since I wasn’t sure what all was factoring into this problem, this was just one of multiple areas I was auditing during the two-day deep dive. And once I saw that things did start getting “wonky” on that date, I immediately jumped in to see just how much damage was being done.
And here lies the biggest learning for me from this migration.
Do not take a software vendor’s “word” that they will migrate everything for you and that it will match exactly how it was before.
Yes, I work for a software vendor. Yes, I would still say this to any customer coming our way and asking about the migration.
My biggest mistake was that I didn’t take full ownership of this migration and immediately QA all of the data flowing in/out during the first few days. I only became aware of issues when a bug or hiccup surfaced. AKA, I was being reactive, not proactive. And I’ll tell you what, this lesson stung, but I can also promise you it’s going to stick with me for the rest of my career.
I had to own + explain to our leadership team why I was responsible for the recent decrease in pipeline. I couldn’t take the easy route of blaming the software vendor; I had to recognize that I was to blame for assuming a critical business function undergoing change would work 100% perfectly after the update.
And that leads me to today. I’m still in spreadsheets cleaning up the mess I made, triple-checking that everyone who came to us requesting a demo gets put in touch with a member of our sales team. A chunk of them simply fell into the abyss due to issues I should’ve caught in the early days of QAing.
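The reconciliation described above is, at its core, a set difference: everyone who requested a demo, minus everyone who actually got a meeting on a rep’s calendar. A hedged sketch of that check, where all emails are made-up placeholders and the real work would pull these lists from the CRM and scheduling tool:

```python
# Hypothetical exports: demo requesters (from the CRM) and booked
# meetings (from the scheduling tool), keyed by email address.
demo_requests = {"ada@example.com", "grace@example.com", "alan@example.com"}
meetings_booked = {"ada@example.com", "alan@example.com"}

# Anyone who requested a demo but never landed on a rep's calendar
fell_through = demo_requests - meetings_booked
for email in sorted(fell_through):
    print(f"Never booked: {email}")  # follow up manually
```

Running a check like this daily during the first week after a migration is exactly the few hours of proactive QA the lesson below is about.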
The lesson?
Spend a few hours QAing early on + stop the snowball in its tracks. Or assume a vendor/someone else QA’d everything perfectly + spend a week digging out from the avalanche it became.
Book quote of the week
Ever have a case of right book, right time? This passage couldn’t have tied in better with this past week.
“Fear of failure, I thought, will never be our downfall as a company. Not that any of us thought we wouldn’t fail; in fact we had every expectation that we would. But when we did fail, we had faith that we’d do it fast, learn from it, and be better for it.”
- Shoe Dog, Phil Knight
In case you missed these this week
See you next Saturday,
Sam