Q: Do you have any ideas or practices for scoring interactions not in a CRM? For example, we occasionally learn that a faculty member works closely with an alum. Those interactions never hit our CRM but are often very important/meaningful.
A: There is a lot to unpack here! Generally, our recommendation is to track all meaningful interactions in the CRM. Tracking faculty relationships is critically important to growing prospect engagement (the same is true of doctor/patient connections in healthcare!); we'd recommend recording them with their own solicitor type, such as "natural partner," and establishing that practice whether the information surfaces through regular quarterly meetings or more ad hoc conversations. Data related to majors, graduation years, and perhaps even records from the registrar can also help you make educated guesses about these relationships. Once a relationship is coded, it is important to enter relevant contact reports into the system. We would not typically expect faculty to do this directly; instead, Advancement staff should build relationships with deans and faculty, meeting periodically to review the list of individuals for whom they are a natural partner and to receive a "download" at that time. Support staff for those faculty partners can also be invaluable resources in this work.
Q: In scoring, how might you incorporate the quality of experiences/interactions? Is an individual’s negative experience the same score as an individual’s positive experience?
A: It is important that the engagement score is considered alongside other data when determining a prospect strategy. We don't recommend adding a separate measure of experience quality; rather, we assume that if a visit goes poorly, the constituent would be marked as disqualified in the system (with an appropriate, but not overly detailed, summary of the rationale). We also don't anticipate that a single negative experience would have a major impact on a score. That said, not all negative feedback is an indicator that a prospect shouldn't be prioritized for continued development; sometimes a negative survey response after an event can open a productive conversation. Someone who cares enough to provide feedback, even if it is negative, is demonstrating strong engagement.
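The point above can be sketched in code: feedback counts toward engagement because of the act of responding, not its sentiment, while disqualification lives as a separate flag rather than a score penalty. This is a minimal illustrative sketch; the field names and weights are assumptions, not a real CRM schema.

```python
# Illustrative sketch only: interaction types, weights, and fields are
# assumptions, not a real CRM data model.
from dataclasses import dataclass, field

@dataclass
class Constituent:
    name: str
    disqualified: bool = False          # tracked separately, never a score deduction
    interactions: list = field(default_factory=list)

# A survey response scores the same whether the feedback was positive or
# negative; responding at all is the engagement signal.
WEIGHTS = {"event": 2, "survey_feedback": 3, "visit": 5}

def engagement_score(c: Constituent) -> int:
    return sum(WEIGHTS.get(kind, 1) for kind in c.interactions)

alum = Constituent("A. Alum", interactions=["event", "survey_feedback"])
print(engagement_score(alum))  # 5
```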
Q: What would be the actions you would recommend if seeing, for example, a particular relationship manager (RM) has much lower engagement in their pool?
A: Prospect engagement can shift rapidly, which underscores the value of an automated dashboard like the Level 3 example we shared in the webinar. The best prospects will demonstrate high capacity and high engagement, so portfolios should be regularly reviewed with both criteria in mind. If there are unassigned prospects with higher engagement than the current portfolio, careful thought should be given to rebalancing it. It is also helpful to look at the scores for particular categories of engagement. For example, if an RM's portfolio scores high on Help but low in other categories, this might suggest that visiting with prospects at volunteer events is a particularly good use of the RM's time.
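A portfolio review along these lines can be sketched as a simple comparison: the RM's assigned pool versus unassigned prospects on overall engagement, plus a per-category breakdown. The data, category names, and RM identifier below are illustrative assumptions, not output from any particular dashboard.

```python
# Illustrative sketch: all names, scores, and categories are made up.
from statistics import mean

prospects = [
    # (name, assigned_rm, {category: score})
    ("P1", "rm_jones", {"help": 8, "give": 2, "attend": 1}),
    ("P2", "rm_jones", {"help": 7, "give": 1, "attend": 2}),
    ("P3", None,       {"help": 3, "give": 9, "attend": 6}),  # unassigned
]

def total(scores):
    return sum(scores.values())

assigned = [total(s) for _, rm, s in prospects if rm == "rm_jones"]
unassigned = [total(s) for _, rm, s in prospects if rm is None]

# Flag a possible rebalancing opportunity.
if unassigned and mean(unassigned) > mean(assigned):
    print("Unassigned prospects score higher on average; consider rebalancing.")

# Category view: a pool that skews toward "Help" may suggest volunteer
# events are where this RM's time is best spent.
help_avg = mean(s["help"] for _, rm, s in prospects if rm == "rm_jones")
print(f"Average Help score in rm_jones's pool: {help_avg}")
```

In practice this comparison would run against live CRM data on a schedule, which is exactly what an automated dashboard provides.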
Q: Do you find value in utilizing both descriptive scoring methods and predictive models together? I find that leadership and gift officers are more comfortable with the descriptive, as they have more control and can more easily understand how it adds up. I’ve been trying to incorporate both in our analyses in hopes it gets them more comfortable and accepting of predictive modeling.
A: Absolutely! In building a predictive model, we sometimes start with something descriptive to establish a basic understanding of what we are seeing before going deeper. It's smart to think about your audience and what they will understand and trust. The reverse also works: occasionally we start with a predictive model to identify the key variables, then build out additional descriptive models from them.