AI Value Creation in UX/UI Design Tools
It’s easy to get very speculative about the applications of LLMs and generative AI to the field of product design. Venture firms seem to be publishing new AI tooling market maps weekly.
Evaluating tools can be daunting. I’ve created a map of how I think about the current landscape, both from the perspective of scalability and network effects and as an organization deciding what to experiment with. Note that this focuses on UI/UX design, so it excludes creative AI tools with image generation at their core (like Stable Diffusion, Midjourney, etc.).
The first thing to understand is that you can capture value by either:
a) providing proprietary data about your company so that the AI’s outputs generate a feedback loop that adds incremental, scalable value back to the company.
b) building an application that enables the AI to get good at a specific use case or task within your industry or business context, and to get better at it over time.
Off-the-shelf foundational model interfaces like ChatGPT (the normal version) are useful in that they boost productivity and creativity as AI assistants, but they are ultimately trained on generic public knowledge. They’re limited in their ability to leverage vertical expertise, like, say, evaluating a healthcare company’s user journey against US medical regulations. And because they’re not fine-tuned on any information from the company, they lack the context of company knowledge: its products, customers, sales performance, roadmap, and other data points.
We’re already seeing market saturation in “thin applications”: novel tools, often first to market, with use cases in specific domain areas. In most cases, these are great products and solid companies. These applications create value through the core task they perform, whether it’s one-click wireframes, a design co-pilot, a UX research assistant, or a fast copywriter. However, because there is no feedback loop to the underlying model, and therefore no incremental training or knowledge being built, the benefits do not scale beyond a single use. You don’t benefit from any network effects. Sameer Singh of Speedinvest wrote an excellent article about this concept.
There is a next tier of apps that are integrated into the organization’s workflow. They largely perform the same types of tasks as the thin applications (i.e., prompt-based generation) but have the network benefits of being connected to and integrated into existing designer workflows or creative suites. Take, for example, Canva’s ability to quickly create social content at scale, or Adobe Sensei’s proliferation throughout the Creative Cloud suite.
However, these are still all built on foundational LLMs, which are trained on publicly available, generic data. The key is proprietary data.
The fourth tier is uncharted territory as of this post: purpose-built models that are both equipped to handle specific vertical tasks and trained on proprietary data. These will likely start at the enterprise level due to the sheer amount of resources (and the right talent) needed to build them, not to mention the type of structured data required. They will almost certainly be built on top of foundational models from OpenAI, Anthropic, etc. This is where you’ll really start to see value amplified by second- and third-order effects, and design tools with strong moats built around their businesses.
This is where it gets interesting to think about the “third wave” of AI for design orgs. You can imagine the big players creating verticalized solutions and features for specific industries. How much more powerful can AI-enabled design software get when it is fine-tuned for teams that work on, let’s say, the nuances of e-commerce personalization and conversion optimization? Or the psychology of hospitality booking flows? It will be interesting to see how purpose-built design tools are monetized. Will it mirror the cloud industry and be priced per unit of compute? Will it follow traditional SaaS subscription pricing? Or will it come with implementation costs like big platform vendors?
There will probably be a significant half-step before we get to this wave. Foundational models are offering larger context windows (i.e., the ability to upload background and reference materials), effectively giving users a way to ground a model in their proprietary data, as we have already seen at the prosumer level with OpenAI’s recently launched Custom GPTs. As a result, we may see the “Augmenter” layer of apps become more efficient and effective within the enterprise.
The space is moving fast, so we’ll probably see some shifts by the end of the year.