User-Centric Product Design: Key Strategies for Boosting AI Adoption Rates
You’ve got your AI features up and running on your website. State-of-the-art technology, algorithms proven to deliver the best results to your users. Yet in practice, something disappointing comes into view: only a few people use your feature, or perhaps none at all. What’s happening?
A few weeks ago, I attended the annual "Beyond Technology: Utrecht AI Event," which brings together professionals to discuss AI topics beyond technology, including ethics, human behavior, and societal impact.
I caught a talk by Professor Judith Masthoff from Utrecht University on why users do or don’t adopt AI tools. Judging by the packed venue, the issue remains relevant even six years after the “85% of AI initiatives fail” statistic first made headlines. Masthoff showed that technology isn’t always the problem behind poor AI adoption rates. There are aspects of user experience and human behavior that we need to take into account when bringing AI into our workflows. I’ve noticed this in other sources too: users actively want personalization.
In this blog post, we will explore what prevents users from adopting AI and how to align it with their mental model. Cognitive load is crucial to consider when serving customers: you want your services to be as easy for them as possible. With that in mind, what do we need to do to make AI adoption easy for our end users?
4 factors of undertrust and how to overcome them
AI can be integrated into our lives in different ways: from simple “autosuggestions” to full-on AI agents doing everything for you. In Utrecht, we discussed various integration levels and how they affect daily life. Using a simple set of agree/disagree statements, we often concluded that “it depends on the use case.” Depending on how directly something affects you, you either want to do it yourself or trust someone (or something) else to do it for you.
This situation illustrates undertrust and overtrust. Both terms reflect how likely a user is to use a technology and not whether its response is accurate. For example, ChatGPT's tendency to agree with you can increase trust: confirmation bias is a form of overtrust. Cognitive ease is another. If an answer is easy to understand, you may be more likely to accept it.
For this blog post, I will focus on undertrust, aiming to understand why people don't use AI. Masthoff presented four specific factors, and online sources support these. Let’s review the factors and how to address them.

1. Risk aversion
That “it depends on the use case” discussion is really an example of risk aversion. The higher the risk, the less likely users are to use AI. So, minimize the risk the user has to take.
In product design, consider adding alternative choices and making the AI less of an “absolute choice maker” and more of a recommender for small issues. When tackling something big, I can recommend the programmer’s divide-and-conquer method: break the issue down into smaller problems that are easier to solve and less costly for the customer when they go wrong.
Provide links and references with your answers. This gives the user the confidence to trust your response: they can check what it’s based on. I personally use this with ChatGPT a lot when Google doesn’t do the job well enough: I want to verify that the response is accurate and indeed relevant to my question. However, it’s important not to overwhelm your user with information. That can cause hesitation and lower clickthroughs to your primary services, leading to the next hurdle: negativity bias.
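To make this concrete, here is a minimal sketch of what such a response shape might look like: one primary recommendation, a capped list of alternatives, and verifiable sources. All names here (`AiAnswer`, `render`, the caps) are hypothetical illustrations, not an existing API.

```python
from dataclasses import dataclass, field


@dataclass
class Source:
    title: str
    url: str


@dataclass
class AiAnswer:
    # The AI acts as a recommender, not an absolute choice maker:
    # one primary suggestion plus a few alternatives.
    primary: str
    alternatives: list[str] = field(default_factory=list)
    # References let the user check what the answer is based on.
    sources: list[Source] = field(default_factory=list)


def render(answer: AiAnswer, max_alternatives: int = 2, max_sources: int = 3) -> str:
    """Render the answer, capping the extras so the user isn't overwhelmed."""
    lines = [f"Suggested: {answer.primary}"]
    lines += [f"Alternative: {alt}" for alt in answer.alternatives[:max_alternatives]]
    lines += [f"Source: {s.title} ({s.url})" for s in answer.sources[:max_sources]]
    return "\n".join(lines)
```

The caps are the point: a couple of alternatives build confidence, while ten of them cause exactly the hesitation described above.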
2. Negativity bias: the first impression is most important
Negativity bias is perhaps the most well-known of the four: one bad experience leads to persistent undertrust. The skepticism lingers, even if subsequent answers improve. It relates to user experience: studies show that 1 in 3 customers leaves after just one bad interaction.
When it comes to generating results, negativity bias can be reduced by minimizing risk, as described above. Hallucination is also important to take into account, especially if you’re generating responses; the term has become so well-known that it was even named word of the year. By focusing on a specific use case, you can tackle this much more easily, since you don’t have to build a solution that works for every situation.
User experience should also be taken into account. Chatbots like ChatGPT and Gemini are very popular, but they might not be the optimal choice for every use case, leading users to back away. Align with what your users are used to and avoid inventing new interfaces. Maybe a simple, generic form is enough. We do this at CAIDEL too: instead of offering only a form experience, we also provide one that feels more like searching.
3. Confidence heuristics: design around your users
Confidence heuristics concern how we interpret AI responses. When AI sounds uncertain, you might be less likely to believe it: confidence signals expertise and accuracy.
This depends heavily on the users you serve, since different people interpret tone differently. Just sounding correct might not be enough. Perhaps adding alternatives, as described above, alongside a primary choice helps convey knowledge and authority. The goal is to align the AI’s tone with what users expect from a knowledgeable assistant.
4. Identity: customer is king
Finally, make sure your AI doesn’t take over tasks the user wants to do themselves. This can come across as a loss of control. Instead, present the AI as a servant that helps them along the way. If your users want to click “add to cart” themselves, offer the option in your response instead of doing it automatically. Both this and offering a select few alternatives give the user control, which helps overcome undertrust: the final choice stays in their own hands.
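As a sketch of this principle, the AI could return a proposed action that only runs after an explicit user confirmation. Nothing here is a real e-commerce API; `ProposedAction` and `apply_action` are hypothetical names for illustration.

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    """Something the AI suggests but never executes on its own."""
    label: str               # shown on the button, e.g. "Add to cart"
    item: str
    confirmed: bool = False  # flipped to True only by an explicit user click


def apply_action(action: ProposedAction, cart: list[str]) -> bool:
    # The final choice stays in the user's hands: without confirmation,
    # the cart is left untouched.
    if not action.confirmed:
        return False
    cart.append(action.item)
    return True
```

The design choice is that the default path does nothing: the AI proposes, the user disposes.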
How UX design helps overcome undertrust

I want to give specific attention to user experience and product design. You might have already noticed it showing up in the section above as one of the ways to overcome undertrust in general.
A good user experience combines trust with ease of use: both are equally important for designing a successful system. By optimizing the user experience through product design, you can overcome the four issues of undertrust outlined above.
Personal anecdote: as a more technical person, when designing the UI for the CAIDEL form builder, I created a top-left “toolbar” containing what were, to me, the common features: save, undo/redo, help, and navigation. We ran our first test with non-technical users last year, and one of the suggestions was to put the save button in the bottom right, the complete opposite of where I had placed it. That doesn’t mean my approach is wrong, but we were building the interface primarily for non-technical users.
We must follow users’ mental models, not our own, when designing interfaces. Know what your users expect. For example, don’t deviate from ingrained UX practices, such as highlighting required form fields. Instructions add cognitive load, even in forms. This can overwhelm users (where to start?) and lead to abandonment, as mentioned in my post on hesitation.
It also shows that introducing new things into a mental model is difficult. In an ideal world, you wouldn’t need to. Sometimes, that ideal world is more realistic than you might think. Google does this quite well: over almost 30 years, its primary interface hasn’t changed a bit. It’s still the logo with a search bar below it. In the background, though, the search engine (and many like it) has evolved from keyword search to natural language, and finally to AI overviews. As a user, you don’t have to click “generate AI overview”: Google has integrated it into the results you see. In my opinion: zero mental load.
In short, minimize friction. If a user habit is search, make your AI part of that search, and not something separate. Keep layouts clear, instructions minimal (users hate reading walls of text), and use established conventions. Every extra page or confusing message is a potential abandonment point, especially since users have limited attention for learning something new.
Conclusion: What Drives Successful Adoption?

Remember, cognitive load is real: people will gravitate to the path of least resistance. So prioritize what users value most, and make that instantly accessible. When introducing new features, do so gradually and in context. For example, show an AI suggestion only after a user has taken a few familiar steps, or let them opt in to a “smart assist” mode rather than forcing it on page load.
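A tiny gate like the following captures that idea. The function name and the three-step threshold are assumptions for illustration, not an established pattern from any library.

```python
def should_show_ai_suggestion(familiar_steps_taken: int,
                              smart_assist_opt_in: bool,
                              min_steps: int = 3) -> bool:
    """Surface the AI suggestion only once the user has settled into a
    familiar workflow, or when they explicitly opted in to smart assist."""
    return smart_assist_opt_in or familiar_steps_taken >= min_steps
```

Either path respects the user: the suggestion appears in context, never forced on page load.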
Ultimately, making AI adoption easy is about respect for user habits, clarity, and trust. Users don’t want AI for the sake of AI; they want solutions to their problems. If the AI element feels like a helpful shortcut within a known workflow, then adoption will follow. Keep the focus on solving user needs with minimal friction, and you’ll be well on your way to turning that elusive AI feature into something people actually use.