How to Teach People to Use AI
It's not prompt engineering. It's clear writing.
Almost two years ago, my colleague Sebas and I started an AI workshop within Lufthansa Group. The format was simple: first we introduced the tools available on the market, then we split participants into groups to solve playful challenges. One example: “Ask ChatGPT to write a short story using only emojis.”
The game, designed by Sebas, turned into team exercises where colleagues learned by experimenting. Back then, LLMs were still in their early days. GPT-3.5 had just arrived, Sora was still a myth, voice models were far away, and AI-generated images were full of typos. That made the workshop easy to run, because most people had never used ChatGPT. They were amazed just to watch it draft an email. I still remember a colleague’s excitement at discovering: “I can just tell ChatGPT to make the text shorter!”
Two years later, AI development has taken several leaps. Image generation now follows instructions with precision; models offer “deep thinking” features; AI agents that can act on your behalf are emerging. Yet our workshop hasn’t evolved much. Unsurprisingly, we’ve received more feedback like, “I didn’t learn anything new because I already use ChatGPT every day.”
Sebas and I really want to upgrade our workshop, but we face several challenges:
A general audience has little interest in heavy technical content.
We could explain neural networks and training processes, but our participants are not engineers. Most people do not want to learn how AI works; they just want to know how to use it. And that makes perfect sense. You do not need to know how to build an air conditioner to enjoy its cool air.
No one fully understands how LLMs work.
LLMs (Large Language Models) are called ‘Large’ for a reason: they contain billions, often hundreds of billions, of parameters. Experts know how to train them, but no one can say exactly how all those parameters interact. Understanding the architecture won’t necessarily make you a better user.
AI is taking over the agency.
When we teach Photoshop, beginners start with the basics, like how to select an object. More advanced users move on to features such as creating masks.
AI tools are totally different. There are no extra menus or hidden buttons to learn. You do not click around. You simply tell the AI what you want. “Help me add text to the photo” is written in the same plain language as “Help me remove the dog and add an airplane in the background.”
In Photoshop, those two tasks require very different levels of skill and time. With AI, both prompts take the same skill to write: the ability to express what you want in natural language.
Because no one fully understands how all the parameters in large language models behave, people have experimented with prompts and tricks to get better results. But these shortcuts quickly become obsolete, because LLMs keep improving and learning to understand us more naturally. In the near future, if a human can understand what you mean, AI should be able to as well.
So, what should we teach people so they can use AI well?
Instead of more technical content, we should focus on the real interface of AI: natural language. The better you think, write, and communicate, the better you will use AI.
Ironically, the best training for that has existed for a long time: literature, philosophy, history… all the liberal arts nurture critical thinking and communication. In the last few decades, the liberal arts have been dismissed as impractical, while STEM (science, technology, engineering and mathematics) has been seen as the safe path. But as AI advances, the liberal arts may rebrand themselves as “AI Engineering” or “AI Management.” Studying liberal arts might make you a better programmer in the future.
In our next workshop, after showing all the tools, I don’t want to teach “prompt engineering.” I want to talk more about writing. Because in the long run, the best AI skill isn’t knowing the model’s parameters. It’s knowing how to express your own ideas clearly.

