Number 1 issue with how the sales funnel is tracked: overwriting data.

Why? The buying process is not linear. Many buyers sent to sales do not become customers (arguably, for most companies, 99% of them don't). So the buyer is set to a Recycled status, marketing does its job, and after a while the buyer is sent to sales again. At this point, all the date stamps get overwritten or cleared (e.g. Sales Ready Date, Working Date, etc.).

So if you had 100 Sales Ready in June, and 50 of them were recycled and then became Sales Ready again in October, your report for # of Sales Ready in June now shows 50. If your conversion rate from Sales Ready to Pipeline for June was 20%, it has now doubled to 40%. You're cannibalizing your data, which is really confusing and annoying for the people looking at the reports. How can you plan or improve if your data is being lost?

The better way is to create a record of each "journey," like in the image, where we have 4 attempts through the funnel for 1 buyer. Create a record each time you send a buyer to sales. Update it as the buyer progresses. Close it if they get Recycled. And create a new record when they are sent to sales again.

We prefer to use a custom object in SFDC for this. It retains the data, keeps your reports looking good, and keeps your team/leadership sane.

#gtmoperations #marketingoperations #revenueoperations
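To make the pattern concrete, here is a minimal sketch in Python (not Salesforce config; the record, field, and function names are hypothetical stand-ins for the custom object) showing how each journey gets its own record and its own timestamps, so a recycle never erases the June data:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class JourneyRecord:
    """One funnel attempt for one buyer. Each attempt gets its own record,
    so earlier attempts (and their date stamps) are never overwritten."""
    buyer_id: str
    attempt: int
    sales_ready_date: datetime
    working_date: Optional[datetime] = None
    recycled_date: Optional[datetime] = None

journeys: List[JourneyRecord] = []

def send_to_sales(buyer_id: str, when: datetime) -> JourneyRecord:
    # Create a NEW record every time the buyer is sent to sales,
    # instead of clearing/overwriting dates on a single record.
    attempt = sum(1 for j in journeys if j.buyer_id == buyer_id) + 1
    record = JourneyRecord(buyer_id, attempt, sales_ready_date=when)
    journeys.append(record)
    return record

def recycle(record: JourneyRecord, when: datetime) -> None:
    # Close out the current journey; its timestamps stay intact for reporting.
    record.recycled_date = when

# June: buyer becomes Sales Ready, then gets recycled (dates illustrative).
first = send_to_sales("buyer-001", datetime(2024, 6, 10))
recycle(first, datetime(2024, 7, 2))

# October: same buyer is sent to sales again -> a second record, not an overwrite.
second = send_to_sales("buyer-001", datetime(2024, 10, 5))

# "# of Sales Ready in June" still counts the first journey.
june_sales_ready = [j for j in journeys if j.sales_ready_date.month == 6]
print(len(june_sales_ready))  # 1
```

In SFDC the same idea maps to one custom object record per attempt, related to the Contact/Account, rather than date fields on the Contact itself.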
UGH, that's frustrating. A. Never overwrite data. Compound, augment, make it a string, but never overwrite. I've seen this a lot in nonprofits, and it always bothered me that the fundraising and IT teams would say "YES" to overwriting the data. B. Always ensure date aggregation on each event. From first click to first download to every subsequent event, always keep a record of dates matched to a field. Very spot on, man. I hear you!
Great post! CS2 is the only RevOps agency I see talking about timestamping, and it's one of the most important parts of data collection. We need to make this a bigger conversation across the RevOps space, because incorrectly building timestamps across the customer journey is one of the primary reasons for data distrust and incorrect insights. On a tactical note, I personally prefer to do timestamping from event log data that is then aggregated based on criteria-defined fields. For example, the 'Sales Ready' criteria could be based on a meeting date log, a meeting transcript with pain and a compelling event confirmed, and an AE/BDR email log with a stakeholder intro. The dates across these criteria log events would then get aggregated into a Sales Ready timestamp, which can always be compared against the AE's manual update to the Sales Ready stage. I prefer this approach because you can optimize the criteria definitions: you keep the integrity of the original source of the timestamp and can easily adjust processes without having to rebuild date stamps and redo trending when criteria definitions are removed, updated, or added.
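A rough sketch of that event-log aggregation idea, assuming hypothetical criteria names and a simple event shape (this is illustrative, not the commenter's actual implementation):

```python
from datetime import datetime
from typing import Dict, List, Optional

# Hypothetical criteria that define "Sales Ready". Editing this set is how you
# would adjust the definition without rebuilding historical date stamps.
SALES_READY_CRITERIA = {
    "meeting_held",
    "pain_and_compelling_event_confirmed",
    "stakeholder_intro_email",
}

def sales_ready_timestamp(event_log: List[Dict]) -> Optional[datetime]:
    """Derive the Sales Ready timestamp from raw event logs: it is the moment
    the last required criterion was first satisfied. Returns None if any
    criterion is still missing."""
    earliest: Dict[str, datetime] = {}
    for event in event_log:
        kind, when = event["type"], event["timestamp"]
        if kind in SALES_READY_CRITERIA:
            if kind not in earliest or when < earliest[kind]:
                earliest[kind] = when
    if set(earliest) != SALES_READY_CRITERIA:
        return None  # criteria not fully met yet
    return max(earliest.values())

# Example event log (timestamps illustrative).
log = [
    {"type": "meeting_held", "timestamp": datetime(2024, 6, 3)},
    {"type": "pain_and_compelling_event_confirmed", "timestamp": datetime(2024, 6, 3)},
    {"type": "stakeholder_intro_email", "timestamp": datetime(2024, 6, 10)},
]
print(sales_ready_timestamp(log))  # 2024-06-10: when the last criterion was met
```

Because the derived timestamp is computed from the underlying log rather than stored as the source of truth, it can be recomputed and compared against the AE's manual stage update at any time.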
Well said!
🔥 Fixing MQL Hell with HubSpot and Salesforce CRM 🔥
💯 This is one of the biggest problems of what we affectionately call #MQLHell. I prefer using SF Tasks to capture the multiple interactions when integrating with HubSpot, but the general concept is the same. Ironically, this is exactly how HubSpot is tackling this same issue with their "Lead" object, which, unlike Salesforce, has a one-to-many relationship with the Contact object...
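A rough illustration of the "one Task per interaction" approach using the simple_salesforce Python client (credentials, IDs, and field values are placeholders, and your org may require different Task fields):

```python
from datetime import date
from simple_salesforce import Salesforce  # third-party Salesforce REST client

# Placeholder credentials; use your org's actual auth method in practice.
sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

def log_interaction(contact_id: str, subject: str, detail: str) -> str:
    """Create a Task per interaction instead of overwriting fields on the
    Contact, preserving the full history of touches."""
    result = sf.Task.create({
        "WhoId": contact_id,          # links the Task to the Contact/Lead
        "Subject": subject,
        "Description": detail,
        "ActivityDate": date.today().isoformat(),
        "Status": "Completed",
    })
    return result["id"]

# Each re-entry into the funnel becomes another Task on the same Contact,
# mirroring HubSpot's one-to-many Lead -> Contact relationship.
log_interaction("003XXXXXXXXXXXXXXX", "Sales Ready (2nd attempt)",
                "Recycled in July, re-qualified in October.")
```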