Cultivating an “AI Mindset”: Interview with Tech Consultant Jens Meijen

Few subjects have triggered as much enthusiasm and fear as Artificial Intelligence (AI). There’s nothing new, however, about the worry that new technology may fundamentally change humanity. Whether it was the invention of the printing press, photography, television, or computers, people have often panicked when confronted with the accelerating pace of modernity. As with those earlier developments, it’s highly likely that the hype surrounding the roll-out of generative AI software such as ChatGPT will slowly give way to responsible, mindful responses.

To guide us through this gradual familiarisation with emerging AI-powered technologies, I am thrilled to have interviewed Jens Meijen, with whom I have collaborated on several occasions to create forward-thinking solutions to global problems. In the following article, Jens shares his unique perspective on AI training and risk management, emphasising the importance of cultivating an “AI mindset”. He also offers practical tips for those seeking to make the most of modern tools.

Our Guest

Portrait of Jens Meijen

Jens Meijen is finishing up his PhD at the University of Leuven and will join Stanford University as a Fulbright Fellow in early 2024. Alongside his research, he founded Ulysses AI, a responsible AI consulting firm where he leads consultancy, research, and training projects. His company focuses on AI risk management, AI literacy training, and strategic innovation. Ulysses AI has partnerships with Microsoft for Startups, the Flemish Government, Mission Impact Academy (a global AI school), and the Institute for Technology and Society in Rio.

“A crucial part of the AI mindset is to approach AI systems as tools that take a bit of agency away from us. You need that to work with AI responsibly.”

Interview

S: You’ve been training individuals and companies to make sense of AI and learn how to use it. How do you do that?

J: A lot of training programmes focus on specific tools, like ChatGPT or Midjourney. This is, of course, useful for quite a lot of people, but it’s not a long-term solution. You’ve probably seen the ridiculous speed at which the AI landscape is evolving, so there’s no guarantee that these concrete skills will be useful one year or even a few months from now. Even with the same AI systems, updates might make specific tricks (like types of prompt engineering) obsolete within mere days.

So, we believe that AI training requires a different approach than traditional training for digital tools. There’s nothing wrong with offering concrete tips and tricks, of course, but they should be supplemented, in my view at least, with an AI mindset.

S: An “AI mindset”, what does that mean?

J: It basically means that, to use AI responsibly, you should first focus on “soft skills” before moving on to the “hard skills”. Understanding the technical foundations of AI is one thing, but it’s something else entirely to fully grasp the impact of AI on people and society. AI, in whatever form, changes the way we process information and make decisions—how we work and think. How do you navigate a world full of AI-driven decisions and outputs? You need a different mindset for that: something like ChatGPT isn’t the same as PowerPoint or Slack. The relationship between the user and the technology is different.

A crucial part of the AI mindset is to approach AI systems as tools that take a bit of agency away from us. You need that to work with AI responsibly. Every decision you don’t take, or every decision influenced by an AI recommendation or output, is no longer fully yours. We offer accessible explanations of concepts like agency and free will and apply them to real-world examples in our training sessions. We also teach people how to evaluate potential misinformation and factual errors, how to spot AI-generated images and texts, and how to evaluate the risks of an AI system. You could say we help people detect “hallucinations” in AI output, but I think the term “hallucination” makes AI seem more human than it really is.

Image from the ‘Visualising AI’ project by Google DeepMind

S: So, it’s about being aware of the risks and limitations of AI?

J: That’s part of it, sure. A lot of people like to jump straight into the practical use cases. But that opens you up to a range of risks and makes you a lot less efficient. If you don’t understand why a Large Language Model like GPT-4 can’t reliably give you factual information, you’re much less likely to question its output. If you don’t understand the societal consequences of Generative AI, you’re more likely to deploy it unquestioningly and carelessly, which could lead to catastrophic results. You’re much better equipped to work with AI if you know why it’s risky.

“You should see the limitations of AI as a chance for you, as a human, to bridge those functional gaps.”

But it’s more than awareness. It’s also, for example, about being a careful explorer: you’re not supposed to be scared of AI, you should be interested in knowing more and finding ways to make it work for you. You should see the limitations of AI as a chance for you, as a human, to bridge those functional gaps. An AI mindset is an attitude to take towards AI as a whole, an attitude that helps you utilise its full potential while staying safe from risks.

Back when I was in primary school, we’d get internet literacy classes, where we’d learn how to verify sources and to never blindly trust what we read on the internet. But we were also taught how to use the internet as a powerful tool. That’s a good analogy for the AI mindset we’re trying to cultivate.

S: With the popularisation of Large Language Models, especially ChatGPT, writing has never been easier. What are your tips for making the most of them?

J: It’s really easy to use ChatGPT (that’s part of why it’s so popular) but it’s tricky to master. My tips are pretty simple: ideate and iterate. Use it to quickly generate new ideas for papers or projects, to take new angles to problems you describe in detail, to change your perspective on things you think you already know through and through.

ChatGPT’s writing is too bland and stale for it to write well on its own, but it’s great for what I call “splintering”: letting an initial idea, a little seed of inspiration, branch out in different directions without any effort. Just say ‘give me five ideas for X’, then pick one or two and ask it to elaborate. It’s simple but effective. Also, keep iterating on outputs: keep asking it to refine and revise its responses to really get where you want to be. A first prompt is great for exploring what you could do with an idea or what angle a text could take, but you usually need a few iterations to get something truly useful.
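For readers curious what this ideate-and-iterate loop can look like in code, here is a minimal sketch. It assumes the official openai Python client and an OPENAI_API_KEY environment variable; the model name and the prompts are illustrative placeholders, not Jens’s actual workflow.

```python
# Minimal sketch of the "splintering" workflow: ask for several ideas,
# pick one branch, then iterate until the output is actually useful.
# Assumes the official openai client (pip install openai) and an
# OPENAI_API_KEY environment variable; prompts and model are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def chat(history: list[dict]) -> str:
    """Send the running conversation and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works here
        messages=history,
    )
    return response.choices[0].message.content


# Step 1: splinter a seed idea into several directions.
history = [{"role": "user",
            "content": "Give me five ideas for a paper on AI literacy."}]
history.append({"role": "assistant", "content": chat(history)})

# Step 2: pick a branch and keep refining across turns.
for follow_up in ("Elaborate on the second idea.",
                  "Revise that into a one-paragraph abstract."):
    history.append({"role": "user", "content": follow_up})
    history.append({"role": "assistant", "content": chat(history)})

print(history[-1]["content"])
```

The point is the loop rather than the library: each turn feeds the full conversation back in, so the model refines its earlier answer instead of starting from scratch.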

Video from the ‘Visualising AI’ project by Google DeepMind

S: I have seen many professors get scared about students using ChatGPT to “cheat” on their written assignments. Its use, however, is notoriously difficult to monitor. What would you advise them to do to tackle this issue?

J: I’d say: embrace it. Tools like ChatGPT will change and likely become more specialised, but it’s unlikely they’re going away. You can’t systematically detect AI-generated text anyway, so why fight it? For the naysayers, not all is lost: with enough experience, you can learn how to spot AI-generated writing. If you don’t iterate your prompts (which a lot of students don’t do), it’s robotic, monotonous, and stiff.

If you really want to stop students from using AI, there’s a really simple trick. You can instruct students to write an essay in class, on class computers without internet access (or by hand if you’re feeling really cruel), and then you have a stylistic baseline to compare with any subsequent writing. If a student can’t spell to save their life in class, but suddenly writes perfectly, you know ChatGPT was involved. It just requires some linguistic affinity from you as a professor. But, in the long term, is that really a solution? It makes little sense to penalise students for using modern tools to improve their writing.
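For those wondering what such a stylistic baseline could look like in practice, here is a toy sketch in Python. The features, sample texts, and threshold are purely illustrative assumptions; crude statistics like these can flag a suspicious shift in style, but they are nowhere near a reliable detector.

```python
# Toy sketch of the stylistic-baseline idea: compare a few crude style
# features between in-class writing and a submitted essay. The features
# and the threshold are illustrative, not a dependable AI detector.
import re


def style_features(text: str) -> dict:
    """Compute a few simple stylometric features of a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }


def style_gap(baseline: str, submission: str) -> float:
    """Sum of relative differences across the features."""
    a, b = style_features(baseline), style_features(submission)
    return sum(abs(a[k] - b[k]) / max(a[k], 1e-9) for k in a)


in_class = "Short, plain sentences. Sum speling mistakes here and there."
submitted = ("The proliferation of generative language models necessitates "
             "a comprehensive reappraisal of assessment practices.")

if style_gap(in_class, submitted) > 0.5:  # the threshold is arbitrary
    print("Large stylistic shift; worth a conversation with the student.")
```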

For more specific and demanding courses, tools like ChatGPT are not even that useful if you don’t know how to prompt really well. It’s similar to the internet, back in the day: we were banned from using Wikipedia because it made written assignments too easy. Teachers adapted and found more original assignments; I’m sure they’ll adapt again.

The best approach I’ve seen was in a class on international politics where I gave a guest lecture on AI. Students were told to use ChatGPT or a similar tool to write policy recommendations. They were also instructed to include their prompts and why they used them. I think that’s a great way to approach AI in the classroom.


S: With your consultancy, Ulysses AI, you analyse the risks of AI systems. What are the most common flaws you encounter, and how do you usually advise addressing them?

J: We’ve seen that most AI tools are actually quite safe from a technical perspective. Most companies we’ve worked with and talked to who develop AI tools are very aware of the risks their tools might entail (even if they don’t always communicate that clearly). They usually build on fairly robust technical infrastructures and check for potential bias or discrimination. We help companies like these refine their risk management strategies and communicate their efforts better. Essentially, we translate their technobabble into understandable language.

“Responsible AI takes time and effort, but I can guarantee that it’s worth it.”

The real issues often arise in the implementation phase. I don’t want to get too technical here, but a lot of companies have extremely complex processes and structures that make it hard to really maintain an overview of the impact an AI system really has. For example, if you want to start using a tool like ChatGPT in a company, you need to start setting clear rules on its use, train staff so everyone’s on the same page and knows ChatGPT’s limitations, and ensure that people don’t blindly copy-and-paste output. Skipping any of those steps could lead to serious problems: clients really don’t like being addressed in the obvious ChatGPT ‘tone of voice’.

Or let’s say you want to start using an AI system that automatically suggests optimised prices for your products. That forces sales, marketing, and IT into a long-term joint effort. Once your technical team is on board, you still need to run experiments in “sandboxes” (isolated environments that simulate your real processes), collect feedback from everyone involved, and continuously rework the system until its performance is up to par. That’s a long process, and not everyone stays on board. Really implementing AI responsibly is a resource-intensive process. The real danger arises when individuals and businesses cut corners in an attempt to deploy more quickly. That’s when you can get in trouble. Responsible AI takes time and effort, but I can guarantee that it’s worth it.

Key Takeaways

My interview with Jens Meijen highlights two points. First, to make the most of AI tools, you should be familiar with both how they work and how they don’t. Understanding the limitations and risks of Large Language Models (LLMs) such as ChatGPT is fundamental, and learning to recognise AI-generated content is another skill that will prove crucial. Second, the rise of AI-powered technologies signals new opportunities to harness human creativity and critical thinking. When writing becomes easy, the emphasis should shift to the cognitive processes that surround it: asking the right questions, collecting relevant data, and scrutinising information.

More broadly, when AI takes some of our agency away, which decisions remain in our hands? Which skills can it not replace? Which data does it ignore? Which values can it not compute? These are some of the questions that will keep guiding you as you cultivate an AI mindset.

Learn from experts

Are you interested in a training session on how to use AI responsibly and effectively?
Get in touch with Jens via info@ulysses-ai.com.
