Few subjects have triggered as much enthusiasm and fear as Artificial Intelligence (AI). There’s nothing new, however, about the worry that new technology may fundamentally change humanity. Whether it was the invention of the printing press, photography, television, or computers, people have often panicked when confronted with the accelerating pace of modernity. As with those earlier developments, it’s highly likely that the hype surrounding the roll-out of generative AI software such as ChatGPT will slowly give way to responsible, mindful responses.
To guide us through this progressive familiarisation with emerging AI-powered technologies, I am thrilled to have interviewed Jens Meijen, with whom I have collaborated on several occasions to create forward-thinking solutions to global problems. In the following article, Jens shares his unique perspective on AI training and risk management, emphasising the importance of cultivating an “AI mindset”. He also offers practical tips for those seeking to make the most of modern tools.
Our Guest
Jens Meijen is finishing up his PhD at the University of Leuven and will join Stanford University as a Fulbright Fellow in early 2024. Alongside his research, he founded Ulysses AI, a responsible AI consulting firm where he leads consultancy, research, and training projects. His company focuses on AI risk management, AI literacy training, and strategic innovation. Ulysses AI has partnerships with Microsoft for Startups, the Flemish Government, Mission Impact Academy (a global AI school), and the Institute for Technology and Society in Rio.
“A crucial part of the AI mindset is to approach AI systems as tools that take a bit of agency away from us. You need that to work with AI responsibly.”
Interview
S: You’ve been training individuals and companies to make sense of AI and learn how to use it. How do you do that?
J: A lot of training programmes focus on specific tools, like ChatGPT or Midjourney. This is, of course, useful for quite a lot of people, but it’s not a long-term solution. You’ve probably seen the ridiculous speed at which the AI landscape is evolving, so there’s no guarantee that these concrete skills will be useful one year or even a few months from now. Even with the same AI systems, updates might make specific tricks (like types of prompt engineering) obsolete within mere days.
So, we believe that AI training requires a different approach than traditional training for digital tools. There’s nothing wrong with offering concrete tips and tricks, of course, but they should be supplemented, in my view at least, with an AI mindset.
S: An “AI mindset”, what does that mean?
J: It basically means that, to use AI responsibly, you should first focus on “soft skills” before moving on to the “hard skills”. Understanding the technical foundations of AI is one thing, but it’s something else entirely to fully grasp the impact of AI on people and society. AI, in whatever form, changes the way we process information and make decisions: how we work and think. How do you navigate a world full of AI-driven decisions and outputs? You need a different mindset for that; something like ChatGPT isn’t the same as PowerPoint or Slack. The relationship between the user and the technology is different.
A crucial part of the AI mindset is to approach AI systems as tools that take a bit of agency away from us. You need that to work with AI responsibly. Every decision you don’t take yourself, or every decision influenced by an AI recommendation or output, is no longer fully yours. In our training sessions, we offer accessible explanations of concepts like agency and free will and apply them to real-world examples. We also teach people how to evaluate potential misinformation and factual errors, how to spot AI-generated images and texts, and how to assess the risks of an AI system. You could say we help people detect “hallucinations” in AI output, but I think the term “hallucination” makes AI seem more human than it really is.
S: So, it’s about being aware of the risks and limitations of AI?
J: That’s part of it, sure. A lot of people like to jump straight into the practical use cases. But that opens you up to a range of risks and makes you a lot less efficient. If you don’t understand why a Large Language Model like GPT-4 can’t reliably give you factual information, you’re much less likely to question its output. If you don’t understand the societal consequences of generative AI, you’re more likely to deploy it unquestioningly and carelessly, which could lead to catastrophic results. You’re much better equipped to work with AI if you know why it’s risky.
“You should see the limitations of AI as a chance for you, as a human, to bridge those functional gaps.”
But it’s more than awareness. It’s also, for example, about being a careful explorer: you shouldn’t be scared of AI; you should be interested in knowing more and finding ways to make it work for you. You should see the limitations of AI as a chance for you, as a human, to bridge those functional gaps. An AI mindset is an attitude to take towards AI as a whole, one that helps you utilise its full potential while staying safe from its risks.
Back when I was in primary school, we’d get internet literacy classes, where we’d learn how to verify sources and to never blindly trust what we read on the internet. But we were also taught how to use the internet as a powerful tool. That’s a good analogy for the AI mindset we’re trying to cultivate.
S: With the popularisation of Large Language Models, especially ChatGPT, writing has never been easier. What are your tips for making the most of these tools?
J: It’s really easy to use ChatGPT (that’s part of why it’s so popular), but it’s tricky to master. My tips are pretty simple: ideate and iterate. Use it to quickly generate new ideas for papers or projects, to find new angles on problems you describe in detail, and to change your perspective on things you think you already know through and through.
ChatGPT’s writing is too bland and stale to produce good prose on its own, but it’s great for what I call “splintering”: letting an initial idea, a little seed of inspiration, branch out in different directions without any effort. Just say “give me five ideas for X”, then pick one or two and ask it to elaborate. It’s simple but effective. Also, keep iterating on outputs: just keep asking it to refine and revise its responses until you get to where you want to be. A first prompt is great for exploring what you could do with an idea or what angle a text could take, but you usually need a few iterations to get something truly useful.
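To make the ideate-and-iterate loop concrete, here is a minimal sketch of “splintering” as a two-step conversation, assuming the OpenAI Python SDK (v1+) and an API key in the environment; the model name and prompts are illustrative, not part of Jens’s method.

```python
# A minimal sketch of "splintering": ask for several directions, pick one,
# then keep iterating within the same conversation. Assumes the OpenAI
# Python SDK (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Step 1: splinter a seed idea into several directions.
history = [{"role": "user",
            "content": "Give me five angles for an essay on AI literacy."}]
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)

# Step 2: pick one branch and iterate on it in the same conversation.
history.append({"role": "assistant",
                "content": reply.choices[0].message.content})
history.append({"role": "user",
                "content": "Elaborate on the third angle and make it more concrete."})
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)
```

Keeping the full message history in the conversation is what makes the iteration work: each refinement request builds on everything generated so far.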
Video from the ‘Visualising AI’ project by Google DeepMind
S: I have seen many professors get scared about students using ChatGPT to “cheat” on their written assignments. It’s, however, notoriously difficult to monitor its use. What would you advise them to do to tackle this issue?
J: I’d say: embrace it. Tools like ChatGPT will change and likely become more specialised, but it’s unlikely they’re going away. You can’t systematically detect AI-generated text anyway, so why fight it? For the naysayers, not all is lost: with enough experience, you can learn how to spot AI-generated writing. If you don’t iterate your prompts (which a lot of students don’t do), the output is robotic, monotonous, and stiff.
If you really want to stop students from using AI, there’s a really simple trick. You can instruct students to write an essay in class, on class computers without internet access (or by hand if you’re feeling really cruel), and then you have a stylistic baseline to compare with any subsequent writing. If a student can’t spell to save their life in class, but suddenly writes perfectly, you know ChatGPT was involved. It just requires some linguistic affinity from you as a professor. But, in the long term, is that really a solution? It makes little sense to penalise students for using modern tools to improve their writing.
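As a toy illustration of that baseline idea, the sketch below compares a few crude surface statistics between an in-class essay and a later submission. The features, sample texts, and threshold are made-up assumptions for demonstration only, nowhere near a validated detector.

```python
# A toy sketch of the "stylistic baseline" idea: compare simple surface
# statistics of an in-class essay against a later submission. Features
# and threshold are illustrative assumptions, not a real detector.
import re

def style_features(text: str) -> dict:
    """Crude surface statistics of a writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "vocab_richness": len(set(words)) / max(len(words), 1),
    }

def style_drift(baseline: str, submission: str) -> float:
    """Sum of relative changes across features; higher means bigger drift."""
    a, b = style_features(baseline), style_features(submission)
    return sum(abs(b[k] - a[k]) / max(a[k], 1e-9) for k in a)

in_class = "Short sentences. Simple words. Some speling slips."
take_home = ("The submission, remarkably polished, exhibits elaborate "
             "syntax and flawless orthography throughout.")
print(f"drift = {style_drift(in_class, take_home):.2f}")
# Past some arbitrary threshold (say 0.5), the shift is worth a closer look.
```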
For more specific and demanding courses, tools like ChatGPT are not even that useful if you don’t know how to prompt really well. It’s similar to the internet, back in the day: we were banned from using Wikipedia because it made written assignments too easy. Teachers adapted and found more original assignments; I’m sure they’ll adapt again.
The best approach I’ve seen was in a class on international politics where I gave a guest lecture on AI. Students were told to use ChatGPT or a similar tool to write policy recommendations. They were also instructed to include their prompts and why they used them. I think that’s a great way to approach AI in the classroom.
S: With your consultancy, Ulysses AI, you analyse the risks of AI systems. What are the most common flaws you encounter, and how do you usually advise addressing them?
J: We’ve seen that most AI tools are actually quite safe from a technical perspective. Most of the companies we’ve worked with and talked to that develop AI tools are very aware of the risks their tools might entail (even if they don’t always communicate that clearly). They usually build on fairly robust technical infrastructures and check for potential bias or discrimination. We help companies like these refine their risk management strategies and communicate their efforts better. Essentially, we translate their technobabble into understandable language.
“Responsible AI takes time and effort, but I can guarantee that it’s worth it.”
The real issues often arise in the implementation phase. I don’t want to get too technical here, but a lot of companies have extremely complex processes and structures that make it hard to maintain an overview of the impact an AI system really has. For example, if you want to start using a tool like ChatGPT in a company, you need to set clear rules on its use, train staff so everyone’s on the same page and knows ChatGPT’s limitations, and ensure that people don’t blindly copy-and-paste output. Skipping any of those steps could lead to serious problems: clients really don’t like being addressed in the obvious ChatGPT “tone of voice”.
Or let’s say you want to start using an AI system that automatically suggests optimised prices for your products. That forces sales, marketing, and IT into a long-term joint effort. Once your technical team is on board, you still need to run experiments in “sandboxes” (isolated environments that simulate your real processes), collect feedback from everyone involved, and continuously rework the system until its performance is up to par. That’s a long process, and not everyone stays on board. Implementing AI responsibly is resource-intensive, and the real danger arises when individuals and businesses cut corners in an attempt to deploy more quickly. That’s when you can get in trouble. Responsible AI takes time and effort, but I can guarantee that it’s worth it.
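At its simplest, such a sandbox experiment might look like the hypothetical offline replay below: historical transactions are run through a placeholder pricing model and compared against actual revenue before anything touches production. Every name, number, and rule here is a made-up assumption for illustration.

```python
# A toy "sandbox" run: replay historical transactions through a candidate
# pricing model and compare projected revenue with what actually happened,
# without touching production. All data and rules are illustrative.

def suggest_price(cost: float) -> float:
    """Stand-in for the AI pricing model under evaluation."""
    return round(cost * 1.4, 2)  # placeholder rule; a real model goes here

# Made-up replay data: (product, unit cost, historical price, units sold).
historical_sales = [
    ("mug", 3.00, 4.50, 120),
    ("tee", 5.00, 9.00, 80),
]

actual = sum(price * units for _, _, price, units in historical_sales)
projected = sum(
    suggest_price(cost) * units  # naively assumes demand stays unchanged
    for _, cost, _, units in historical_sales
)

print(f"actual revenue: {actual:.2f}, projected: {projected:.2f}")
# Results (and their caveats) go back to sales and marketing for feedback
# before any deployment decision is made.
```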
Key Takeaways
My interview with Jens Meijen highlights two points. First, you should be familiar with how AI tools work, and how they fail, to make the most of them. Understanding the limitations and risks of Large Language Models (LLMs) such as ChatGPT is fundamental, and learning to recognise AI-generated content is another skill that will prove crucial. Second, the rise of AI-powered technologies signals new opportunities to harness human creativity and critical thinking. When writing becomes easy, the emphasis should shift to the cognitive processes that surround it: asking the right questions, collecting relevant data, scrutinising information, and so on. More broadly, when AI takes some of our agency away, which decisions remain in our hands? Which skills can’t it replace? Which data does it ignore? Which values can’t it compute? These are some of the questions that will keep guiding you as you cultivate an AI mindset.
Learn from experts
Are you interested in a training session on how to use AI responsibly and effectively?
Get in touch with Jens via info@ulysses-ai.com.