The reasons are manifold. There is clearly a generational difference: older investors feel uncomfortable letting a machine help them invest their money, while younger investors are more at ease with the concept of a robo-adviser. Interestingly, though, investors who like to use a robo-adviser tend to do so because it is easier to blame a machine for a failed investment than to blame a human adviser face to face. Yet the moment the robo-adviser becomes more human (e.g. by being given a name or humanised in some other way), investors become more reluctant to follow its advice.
Combine all these observations and you end up with the results of two new experiments by Camila Back, Stefan Morana, and Martin Spann, who asked students to participate in an experimental investment game in which they received €2,000 to invest in cash or a range of different risky assets. Experiment 1 pitted a control group that received no advice against a group that received advice from a robo-adviser. The group that got advice from the robo-adviser achieved returns more than twice as high as those of the control group.
The key to this higher performance was that the robo-adviser helped investors avoid the disposition effect. The disposition effect describes our inclination to sell winning stocks too soon and hold on to losing stocks for too long. Because stocks move in trends, selling winners too soon reduces the profits achieved on these investments, while holding on to losers too long increases the losses. Investors who relied on the robo-adviser tended to follow its advice and predominantly sold losing investments sooner, thus increasing their returns. After all, if they sold a losing stock and it subsequently recovered, they could always blame the robo-adviser for that decision.
In a second experiment, the researchers pitted an anonymised algorithm against a humanised robo-adviser to see what effect humanisation had on the investment results. It is commonly believed that investors are less reluctant to engage with a humanised robo-adviser than with an anonymous machine and thus should be more willing to accept its recommendations. Alas, the opposite happened.
When dealing with a humanised robo-adviser, investors were simply more reluctant to actively ask for advice than when dealing with an anonymous machine. Because the robo-adviser was more humanlike, it became harder for investors to blame it for decisions that went wrong, so they would rather not ask for advice in the first place. As I said before, companies should resist the temptation to humanise their robo-advisers and let the machine be a machine. They would likely do better for themselves and their clients.