Using large language models (LLMs) as creative collaborators

Albert Einstein is often, and probably incorrectly, credited with saying that “creativity is intelligence having fun”; the true source of the quote is unknown. As the most intelligent species on our planet, our creativity has been the highest form of intelligence having fun. That fun is expressed in many ways - art, music, painting and more. We’ve had tools to help with the many forms of creative expression, but a human has always been the “intelligence”.

Large language models (LLMs) are the latest tool to help us in our creative endeavours. However, this is no mere incremental improvement. LLMs could be what Andy Grove would call a 10x change, which he described as:

“What such a change does to a business is profound and how the business manages this transition determines its future.”

The concept of the last interface is that humans interact with a single AI system through which they can meet all of their technology needs. This article explains why LLMs may be a 10x technology and explores why they will become one of the last interfaces we use to interact with computers.

LLMs - The (second) last interface

LLMs challenge the long-held idea that humans have cornered the market on creativity. For the first time in history, machines can, on certain dimensions, rival human creativity. Specifically, LLMs can rival us in the realm of problem diagnosis and solution.

There are several fields where problems require diagnosis by a human, including healthcare, customer support, legal services, and urban planning. Creating high-impact software for these spaces is challenging, and many companies are struggling to take their first steps in AI. The difficulty comes from the fact that conventional software is rigid in its design: it attempts to guide the user through a linear process. Exploring problems in the fields above requires flexibility across numerous contexts that are difficult to design for in advance, ever-expanding, and dynamic in nature.

LLMs provide the versatility these contexts require by letting users interact in natural language. Natural language turns the exchange into a conversation with a computer: a non-linear, iterative, and explorative experience. Because diagnosis is itself iterative and explorative, conversation is one of the best ways to work through a complex problem, and it instantly elevates the experience above any conventional software. A computer that can take part in a contextually relevant conversation displays a level of creativity we have never seen from a machine before. The computer becomes a true conversational partner.

An experience inflection point

Some of society's most valuable services today are ones where experts diagnose complex problems and recommend courses of action. Access to doctors, lawyers and accountants is time-constrained and limited. This article will refer to the ability to diagnose complex problems and recommend actions as a form of creativity.

Utilising LLMs to design solutions around this time-constrained expertise will democratise access. The inflection point lies in bringing this particular form of human creativity to more people, faster and at a lower cost.

Healthcare - scaling your doctor's capabilities

One application of this concept is in medical services. Ireland, for example, faces significant challenges in delivering healthcare. Many towns have fewer doctors than the World Health Organisation (WHO) recommended density of 100 doctors per 100,000 citizens. The shortage has particularly affected people over the age of 75, who on average visit their doctor 6.5 times a year, compared with 3.8 times a year for the middle-aged and 3 to 3.7 times a year for younger people (Statista).

A conversation-based digital assistant with access to the patient’s medical records would give individuals greater independence in managing their health. With appropriate levels of training and autonomy, the assistant could prioritise and book appointments, suggest medication refills, or offer tailored advice. This would lower the barrier to entry across all demographics and give patients faster, more cost-effective, tailor-made support.
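
As an illustration only, here is a minimal sketch of how such an assistant might be wired together: the patient’s message and a summary of their record go to the model, and the model’s decision is routed to a small set of approved actions. The record store, the booking and refill helpers, and the call_llm stand-in are all hypothetical; a real system would sit behind strict clinical, safety, and privacy controls.

```python
# Hypothetical sketch of a patient-facing assistant. The record store, the
# action helpers, and call_llm are illustrative stand-ins, not a real API.

PATIENT_RECORDS = {
    "p-001": {"age": 78, "conditions": ["hypertension"], "repeat_meds": ["amlodipine"]},
}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a chat-completion call; a real system would query an LLM here."""
    return {"action": "request_refill", "medication": "amlodipine"}

def book_appointment(patient_id: str) -> str:
    return f"Appointment requested for {patient_id}."

def request_refill(patient_id: str, medication: str) -> str:
    return f"Refill of {medication} requested for {patient_id}."

def handle_message(patient_id: str, message: str) -> str:
    record = PATIENT_RECORDS[patient_id]
    decision = call_llm([
        {"role": "system", "content": "You are a healthcare assistant. "
                                      "Choose one approved action and return it as structured data."},
        {"role": "user", "content": f"Patient record: {record}\nMessage: {message}"},
    ])
    if decision["action"] == "book_appointment":
        return book_appointment(patient_id)
    if decision["action"] == "request_refill":
        return request_refill(patient_id, decision["medication"])
    return "Passing your question to a clinician."

print(handle_message("p-001", "I'm running low on my blood pressure tablets."))
```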

Time, cost, and uncertainty - the 3 obstacles

Paul Grice introduced a set of rules, his conversational maxims: Quantity (how much you say), Quality (how truthful you are), Relevance (staying on topic, which Grice called Relation), and Clarity (being clear and not confusing, which he called Manner). Together, they describe how people converse. Here’s how these rules transfer to LLMs.

Quantity: Say Just Enough

For LLMs, quantity means they need to provide just the right amount of information—not too much, and not too little. LLMs know a lot because they’re trained on vast amounts of text, but they don’t know everything, especially private or niche information. To make up for this, LLMs need a way to get extra info without constantly retraining.
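
The most common way to do that is to retrieve relevant material at request time and place only that material in the prompt, an approach usually called retrieval-augmented generation. Here is a minimal sketch; the toy keyword search and the call_llm helper are illustrative stand-ins, and production systems typically retrieve with vector search instead.

```python
# Minimal retrieval sketch: fetch only the snippets relevant to the question
# and pass them to the model, rather than retraining it on new data.

DOCUMENTS = [
    "Clinic opening hours are 09:00 to 17:00, Monday to Friday.",
    "Repeat prescriptions can be requested up to five days in advance.",
    "Out-of-hours care is provided by the regional on-call service.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Toy keyword-overlap ranking; real systems usually use embeddings."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for whatever chat-completion API you use."""
    return "Stubbed answer based on: " + messages[-1]["content"][:80]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return call_llm([
        {"role": "system", "content": "Answer briefly, using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ])

print(answer("When can I request a repeat prescription?"))
```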

Quality: Be Truthful

Quality means the LLM should provide accurate, reliable information. But there’s a catch: LLMs sometimes make things up when they don’t know the answer. They hallucinate. This is one of the bigger challenges in making LLMs reliable.
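
One pragmatic, partial mitigation is to ground the model in supplied sources, instruct it to admit when those sources don’t cover the question, and sanity-check the reply. The sketch below reuses the same hypothetical call_llm stand-in and is illustrative only.

```python
# Hypothetical grounding sketch: the model may only answer from the numbered
# sources, and uncited answers are treated as unreliable.

SOURCES = [
    "[1] The practice accepts new patient registrations on Tuesdays.",
    "[2] Flu vaccinations are offered from October each year.",
]

def call_llm(messages: list[dict]) -> str:
    """Stand-in for a real chat-completion call."""
    return "I don't know based on the provided sources."

def grounded_answer(question: str) -> str:
    reply = call_llm([
        {"role": "system", "content":
            "Answer only from the numbered sources and cite them like [1]. "
            "If the sources do not answer the question, say you don't know."},
        {"role": "user", "content": "\n".join(SOURCES) + f"\n\nQuestion: {question}"},
    ])
    # Crude check: an answer with no citation and no admission of uncertainty
    # is not returned to the user as fact.
    if "[" not in reply and "don't know" not in reply.lower():
        return "Unable to verify this answer against the sources."
    return reply

print(grounded_answer("Can I get a blood test on a Sunday?"))
```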

Relevance: Stay on Topic

An LLM needs to understand the context of a conversation and keep its responses on track. For example, if you're talking about kings, queens, bishops, and rooks, the LLM needs to know you're discussing chess, not royal families or birdwatching. Being able to keep up with the flow of the conversation is crucial.
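
In practice, much of this comes down to sending the earlier turns back to the model with every request, so a word like “bishop” is resolved in its chess context. A minimal sketch, again with a hypothetical call_llm stand-in:

```python
# Minimal conversation-state sketch: each new question travels with the
# earlier turns, so "the bishop" is read in its chess context.

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a chat-completion API."""
    return "In chess, the bishop moves diagonally, any number of squares."

class Conversation:
    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        reply = call_llm(self.messages)  # the full history goes with every call
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation("You are a helpful assistant.")
chat.ask("We're talking about kings, queens, bishops and rooks.")
print(chat.ask("How does the bishop move?"))
```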

Clarity: Be Clear

Clarity is all about making sure the LLM’s responses are easy to understand. Long, complicated, or vague answers can make conversations hard to follow.
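
A simple lever here is to ask for the shape of answer you want up front and to cap its length, rather than trimming a rambling reply afterwards. A small sketch, using the same hypothetical call_llm stand-in (max_tokens is an assumed parameter that caps the reply length):

```python
# Clarity sketch: constrain the answer's shape and length before asking.

def call_llm(messages: list[dict], max_tokens: int = 120) -> str:
    """Hypothetical stand-in; max_tokens is assumed to cap the reply length."""
    return "1. Book online. 2. Call the clinic. 3. Ask at reception."

def clear_answer(question: str) -> str:
    return call_llm(
        [
            {"role": "system", "content":
                "Answer in at most three short, numbered steps, "
                "in plain language and with no jargon."},
            {"role": "user", "content": question},
        ],
        max_tokens=120,
    )

print(clear_answer("How do I book an appointment?"))
```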

Even with these rules in mind, there are still a few hurdles to making LLMs work well:

  • Time: Training LLMs takes a long time and involves massive amounts of data. Even making small updates can take months, which doesn’t work well for fast-paced software development.
  • Cost: Running LLMs is expensive. They need powerful computers to function, and regular use can rack up significant costs; a rough back-of-envelope estimate is sketched after this list.
  • Uncertainty: Even with everything in place (good prompts, access to more data, and cost management), there’s still no guarantee that the conversation will always go smoothly. Things like tone, style, and emotional nuance are still tough for LLMs to get exactly right.
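
To make the cost point concrete, here is a back-of-envelope sketch. The per-token prices and usage figures are assumptions for illustration, not any vendor’s actual pricing; the point is that prompt length, conversation history, and volume multiply together quickly.

```python
# Back-of-envelope cost sketch. Prices and usage numbers are illustrative
# assumptions, not a specific vendor's pricing.

PRICE_PER_1K_INPUT_TOKENS = 0.003   # assumed, in dollars
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # assumed, in dollars

def monthly_cost(conversations_per_day: int,
                 turns_per_conversation: int,
                 input_tokens_per_turn: int,
                 output_tokens_per_turn: int) -> float:
    turns = conversations_per_day * turns_per_conversation * 30
    cost_in = turns * input_tokens_per_turn / 1000 * PRICE_PER_1K_INPUT_TOKENS
    cost_out = turns * output_tokens_per_turn / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return cost_in + cost_out

# e.g. 2,000 conversations a day, 6 turns each, with history included in each prompt
print(f"${monthly_cost(2000, 6, 1500, 300):,.2f} per month")
```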

Concluding thoughts on large language models

In summary, LLMs are set to transform how we interact with technology, moving beyond traditional software to become creative collaborators in problem-solving. This shift offers technical leaders an opportunity to scale human-like decision-making and expertise across industries.

However, challenges remain. Ensuring LLMs are reliable, adaptable, and transparent will be essential. As we integrate these systems, the focus should be on enhancing—not replacing—human creativity.
