Is choosing a financial planning tool a straightforward process? I’ve been on the choosing end of a financial planning tool more than once in my life, enough to know it’s not as simple as it seems. I’ve had the experience both as a corporate buyer on behalf of advisors and as an advisor on behalf of clients, each with surprisingly different outcomes.
As a “corporate buyer,” I put on my rational “advisor hat” to assess the options. This had some interesting consequences … I was especially concerned with detail, the decimal-point accuracy of the calculations and the extent to which everything was covered. I counted the features, tabulated the options and carefully compared one tool against another. It seemed about as scientific a process as one could possibly perform, the winner outperforming its peers by a resounding “seven features more” than the others. A conclusive contest if ever I saw one!
What I wasn’t counting on was how the sum of the parts did not equal a positive user experience, for clients or for advisors. Needless to say, our tool of choice needed lots of training and drew an ongoing stream of mumbling and grumbling, with very little return in terms of productivity or happy client feedback.
Learning from my initial “rational approach,” my next lesson happened as an advisor. I swapped out the rational “advisor hat” and in its place put on a “client hat,” which led to a surprisingly different outcome. I stopped counting the features and focused squarely on the experience that could be created between the advisor and the client. I even considered “feelings.” Needless to say, we came to a remarkably different conclusion from an essentially similar set of options. Let’s just say the new outcome was materially more successful: not only did the solution dramatically cut down on the “training and complaining,” but clients loved it.
On reflection, I learned a few things … The biggest lesson is that it’s not always clear what we want from a planning tool. It’s not always clear what constitutes a “successful outcome.” We may think it’s obvious, but we can be easily misled. My first attempt focused on “efficiency optimisation,” a good outcome by any standard … “Does it do a Monte Carlo simulation? Does it calculate an RA deduction? Does it do … blah, blah, blah …” Unintentionally, I found myself measuring the options based on my needs “as an advisor.” I was looking for a solution to quieten my OCD, to relax the anxiety I felt about not knowing the future. None of which took me nearer to achieving my unsaid (actual) goal: to help clients live their best lives possible.
I realise now that to achieve my goals for my clients, I need to be asking a different question, measuring against a different definition of a “successful outcome” … “Does the tool enable an advice conversation, based on empathy and trust, that elevates the client and moves them forward?” Only by putting my own anxieties second to what clients need can I uncover a solid framework for choosing a solution that gets me closer to my true goal.
In conclusion, I realise that we put our ability to help clients move forward at risk when we put our need for everything to be 100% accurate in the driver’s seat. We need to choose tools for our practices that help clients effect positive change in their lives over getting an “estate duty calc” exactly right. Both of these outcomes might be deemed desirable, but in a world where the client experience is the deciding factor, we need to choose which one we will pursue first.