We decided not to use AI to build the custom investment portfolios. We use deterministic algorithms because we need control and predictability. A hallucination in a portfolio allocation is a non-starter for us.
The app shows users our underlying estimates and gives context for the math. We use GenAI only to generate the text explaining why the portfolio fits them.
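To make the split concrete, here is a minimal sketch of that architecture, assuming a simple rule-based allocator; the function names, tickers, and the "age in bonds" rule are all hypothetical, and the LLM step only builds the prompt here rather than calling a real model:

```python
# Sketch of the split: a deterministic allocator produces the numbers;
# GenAI is only handed the finished allocation to narrate, never to decide.
from dataclasses import dataclass


@dataclass
class RiskProfile:
    age: int
    risk_tolerance: str  # "low" | "medium" | "high"


def allocate(profile: RiskProfile) -> dict[str, float]:
    """Deterministic allocation: same inputs always yield the same weights."""
    # Illustrative "age in bonds"-style rule, not the product's actual model.
    bond_weight = min(max(profile.age / 100, 0.10), 0.60)
    if profile.risk_tolerance == "high":
        bond_weight *= 0.5
    elif profile.risk_tolerance == "low":
        bond_weight *= 1.5
    bond_weight = round(min(bond_weight, 0.80), 2)
    return {"VTI": round(1.0 - bond_weight, 2), "BND": bond_weight}


def explanation_prompt(profile: RiskProfile, weights: dict[str, float]) -> str:
    """The only place GenAI touches the flow: explaining the result.
    In production this string would go to an LLM; here we just build it."""
    lines = [f"{ticker}: {weight:.0%}" for ticker, weight in weights.items()]
    return (
        "Explain, in plain language, why this allocation suits an investor "
        f"aged {profile.age} with {profile.risk_tolerance} risk tolerance:\n"
        + "\n".join(lines)
    )


if __name__ == "__main__":
    profile = RiskProfile(age=35, risk_tolerance="medium")
    weights = allocate(profile)
    print(weights)                                # numbers come from the rules
    print(explanation_prompt(profile, weights))   # words would come from the LLM
```

The point of the separation is that a hallucination can only ever corrupt the prose, never the weights.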
Here is the weird part.
We're seeing a trust gap. Users find the transparent, logic-based advice interesting, but they hesitate to act. Yet these same users tell us they often ask ChatGPT for financial advice, even while admitting they should be skeptical of it.
It feels like people prefer a confident "black box" over a transparent "glass box" that shows its work.
We've tried bridging this by keeping it non-custodial (we don't touch the money) and sticking to blue-chip ETFs, but I'm trying to figure out the psychological block here.
Separately, we're building an agentic system to analyze user accounts and suggest specific financial planning actions like tax loss harvesting, but I'm hesitant to lean too hard into the AI label if it scares people.
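For the suggestion layer, the detection itself can stay rule-based even if an agent orchestrates it. A hypothetical sketch of a tax-loss-harvesting check, with made-up field names, thresholds, and a simplified 30-day wash-sale guard:

```python
# Hypothetical sketch of a deterministic, auditable harvesting check.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Lot:
    ticker: str
    cost_basis: float     # total dollars paid for the lot
    market_value: float   # current dollar value of the lot
    last_buy: date        # most recent purchase date for this ticker


def harvest_candidates(lots: list[Lot], today: date,
                       min_loss: float = 500.0) -> list[str]:
    """Flag lots with a meaningful unrealized loss, skipping any ticker
    bought within the last 30 days (simplified wash-sale guard)."""
    suggestions = []
    for lot in lots:
        loss = lot.cost_basis - lot.market_value
        recently_bought = today - lot.last_buy < timedelta(days=31)
        if loss >= min_loss and not recently_bought:
            suggestions.append(
                f"Consider harvesting {lot.ticker}: unrealized loss ${loss:,.0f}"
            )
    return suggestions


if __name__ == "__main__":
    lots = [
        Lot("VEA", cost_basis=12_000, market_value=10_900, last_buy=date(2024, 1, 5)),
        Lot("VTI", cost_basis=8_000, market_value=7_800, last_buy=date(2024, 5, 20)),
    ]
    for suggestion in harvest_candidates(lots, today=date(2024, 6, 1)):
        print(suggestion)
```

Because every suggestion traces back to an explicit rule, we can explain any recommendation after the fact, whether or not we market the layer as "AI."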
My question for this crowd: Are you fine with AI running your money if the UX is good? Or do you genuinely prefer a traditional, transparent algorithmic approach? What would actually get you to pull the trigger on a trade?
(Platform is https://www.FulfilledWealth.co if you want to check the approach)
At least with this project, there is some discernment about when LLMs are actually needed, rather than throwing them at everything whether they're needed or not.
> My question for this crowd: Are you fine with AI running your money if the UX is good? Or do you genuinely prefer a traditional, transparent algorithmic approach? What would actually get you to pull the trigger on a trade?
IANAL, but I don't think you want the liability risk of being unable to explain why the AI malfunctioned while managing someone else's money. Given that LLMs really cannot be held to account, you likely need plenty of disclaimers on your product so that users know what they are interacting with.
On the other hand, if something does go wrong and you can transparently explain why the system misbehaved, that is far more trustworthy than: 'We don't know where your money went; the AI hallucinated the result by misreading a single digit in the data.'
People trusting ChatGPT with their finances is a separate problem, and the diagnosis there is simply stupidity. Rest assured, the two questions shouldn't be conflated.