First of all: Is it hard to roll out AI software? Yes. Yes it is. For us, at least.

Why? Well, confidence levels vary widely across cultures and individuals based on any number of factors that users find extremely hard to verbalize. There’s a lot of “feeling” going on.

Usability is typically built on following patterns. With new tech, there are no patterns well-established enough to follow; this makes things hard for designers and users alike. There’s more, but I’ll focus on the Big 4 we talk about the most at CCG.

1. There are no experts.

The phrase “vibe coding” was coined in February of this year. (If you don’t know what that is: you describe in natural language what you want an app to do, and another app makes it for you. Yes, it’s incredible.) A lot of people do this, it is very successful, and Lovable, one of its biggest platforms, was given a couple billion last month. It was not a thing six months ago.

This is all moving very, very fast. And that means:

The number of people who know how to build software that uses LLMs as part of the UX, and how to roll it out, is vanishingly small.

2. It can screw up once. And then screw up a second time in a new way.

LLMs work on probabilities and guesses. Seriously. They are extremely informed guesses, but these things are basically chaining together likely strings of words. And when you ask them the same question again, they rerun the entire chain, not just part of it; they have no memory (in most cases).

So, probabilities being what they are, you’ll get two different answers from two different attempts. We have been trained since the beginning of technology to praise tech that does the opposite of this. If you shoot a rocket like this, it goes exactly there, and if you type 9x5 into a calculator you get 45 every single time.
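The contrast can be made concrete with a toy sketch. The vocabulary and probabilities below are invented for illustration; real models work over tens of thousands of tokens, but the principle is the same: sampling from a probability distribution can answer the same prompt differently on two attempts, while a calculator-style deterministic rule cannot.

```python
import random

# Toy next-token model: made-up probabilities for the word that
# follows the prompt "The rocket lands...".
NEXT_TOKEN_PROBS = {
    "safely": 0.55,
    "nearby": 0.30,
    "late": 0.15,
}

def sample_next_token(rng: random.Random) -> str:
    """Pick one token at random, weighted by the model's probabilities."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

def greedy_next_token() -> str:
    """Always pick the single most likely token (deterministic)."""
    return max(NEXT_TOKEN_PROBS, key=NEXT_TOKEN_PROBS.get)

# Two independent "attempts" at the same prompt can disagree:
run1 = sample_next_token(random.Random(1))
run2 = sample_next_token(random.Random(7))
print(run1, run2)  # may or may not match

# The calculator-style rule gives the same answer every single time:
print(greedy_next_token())
```

This is roughly what the “temperature” knob on LLM products controls: higher temperature leans on the weighted sampling above, while temperature zero approaches the greedy, same-answer-every-time behavior.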

In our experience, this means users think the software is broken. This creates enormous trust issues. And speaking of…

3. Will it take my job?

People think this stuff will take their jobs. They might be right, but most of the software we’re seeing really consists of assistants, not replacements. Investment in LLMs is (still) far outstripping returns, and the people who say these products can do anything tend to be the technology’s inventors, investors, CEOs, and the nerds who like inventors. Why would that be?

Also, many of the companies that have fired people and tried to replace them with AI have ended up hiring most of the people back. So there’s that.

4. Chat is not the UI for everything.

Most people, if they use AI at all, are used to chatting with it. So most developers build AI software that is operated by a chatbot, and that’s what most users expect.

A lot of the time, however, telling an LLM to adjust a tiny part of the document or project you’re working on is frustrating and unnecessary. The LLM will redo the whole thing (which can introduce new errors and is an enormous waste of energy and time), and it tends not to follow these kinds of narrow instructions well (which compounds the first problem and adds to the frustration). Then you repeat the process, which just makes everything worse. All to change an “8” to a “9” or shrink a separator by 5 pixels. A manual edit button would be better.
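The design point above can be sketched in a few lines. The `document` string and `apply_edit` helper are hypothetical names invented for this example; the point is that a direct edit touches only the span the user picked, deterministically, with no model round trip at all.

```python
def apply_edit(doc: str, old: str, new: str) -> str:
    """A manual, deterministic edit: replace only the first match,
    leaving the rest of the document untouched."""
    return doc.replace(old, new, 1)

# A tiny working document (contents invented for illustration):
document = "Order quantity: 8\nSeparator width: 20px"

# Changing an "8" to a "9" should not require regenerating everything:
document = apply_edit(document, "quantity: 8", "quantity: 9")
print(document)
```

Chat can still sit alongside this for big, fuzzy requests; the argument is only that it should not be the sole way to make a five-pixel change.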

Conclusion?

We can do this. Yes, we’re proud of ourselves.