In recent years, the role of the solutions engineer has quietly become one of the most intriguing and misunderstood jobs in tech. Sitting at the intersection of software development, user research, design thinking, and now AI, it is a career path that requires creativity as well as technical problem-solving skills. To understand what this work actually looks like, I sat down with Alberto, a biomedical engineer turned developer who has built AI-powered tools in the medical sector and beyond. Our conversation explores how solutions engineers approach complex challenges, why multimodal AI is reshaping software interfaces, and how non-technical people can begin experimenting with app-building tools. It is also a reflection on curiosity, imagination, and the shifting boundaries of work in an AI-driven world.
Alberto Yanez is a biomedical engineer turned developer who has worked across several startups in a variety of roles related to software product development, a new and multi-faceted career path often described as “solutions engineering.”
What is a solutions engineer?
That’s a tricky question, because different people have different definitions, but for me it comes down to the core of the term itself: engineering solutions. That means looking at how different problems affect different people and markets, and going from start to finish to provide a solution that solves the problem. Personally, I’ve been working in the Software-as-a-Service (SaaS) sector for the past two years, and my specialty is solving problems in the health industry using AI software.
Can you tell me more about how web development, AI, and user research intersect in solutions engineering?
To design solutions with software, you have to put on different hats. Typically you start by doing some user research to understand the problem, interviewing different stakeholders to see what their key pain points are, and then trying to address that through the design of software that could solve the problem easily, in a few clicks. We look into the different technologies available that might help solve the problem. Sometimes it’s just a simple app with a front end, sometimes it’s a back end (the invisible part) and some business logic code. When the problem is more complex, we look for more advanced solutions. That’s where AI comes in. AI tools allow us to address more complex problems, for example in terms of language. For instance, AI can analyse the sentiment of what patients say, which can be useful in the medical field.
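As a toy illustration of the kind of sentiment analysis Alberto describes, here is a minimal keyword-based scorer. The word lists and scoring rule are invented for this sketch; a real system would call a trained model or an AI API rather than matching keywords.

```python
# Toy sketch of scoring patient feedback by sentiment.
# The keyword lists and the scoring rule are invented for illustration;
# a real system would use a trained model or an AI API instead.

NEGATIVE = {"pain", "worse", "anxious", "tired", "dizzy"}
POSITIVE = {"better", "improving", "relieved", "fine", "stronger"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: negative suggests distress, positive suggests improvement."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I feel much better and relieved today"))  # 1.0
print(sentiment_score("The pain is worse and I am anxious"))     # -1.0
```

The interesting design decision in a real product is not the scoring itself but what the score triggers: flagging a message for a clinician, adjusting a follow-up question, and so on.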
Another complex problem concerns “triage”: sorting patients by treatment priority. Until now, triage has been done manually. There have been attempts to design new solutions, but it’s difficult because there are many medical branches, and depending on the case you could have lots of different answers. To build triage software, you have to programme all of that, all the possibilities, and thanks to AI this is becoming easier: it analyses the patients’ answers to help with classification. You have to analyse if and how AI is suitable for the problem you’re trying to solve, and that’s where I see the intersection.
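At its core, triage is a priority sort, which can be sketched in a few lines. The symptom weights below are entirely hypothetical; in the approach Alberto describes, an AI model analysing the patient’s free-text answers would replace this lookup table as the source of the urgency score.

```python
# Toy sketch of triage: sorting patients by treatment priority.
# The symptom weights are hypothetical; in practice an AI model analysing
# the patient's answers would produce the urgency score.

from dataclasses import dataclass

URGENCY = {"chest pain": 3, "shortness of breath": 3,
           "fever": 2, "headache": 1, "rash": 1}

@dataclass
class Patient:
    name: str
    symptoms: list

def urgency_score(patient: Patient) -> int:
    # Unknown symptoms contribute 0 rather than raising an error.
    return sum(URGENCY.get(s, 0) for s in patient.symptoms)

def triage(patients):
    """Return patients ordered highest priority first."""
    return sorted(patients, key=urgency_score, reverse=True)

queue = triage([
    Patient("A", ["headache"]),
    Patient("B", ["chest pain", "shortness of breath"]),
    Patient("C", ["fever", "rash"]),
])
print([p.name for p in queue])  # ['B', 'C', 'A']
```

This also makes the testing burden Alberto mentions concrete: before trusting a model-produced score, you would run it against many known cases and check that the resulting ordering matches clinical expectations.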
Is that an example of a solution you have engineered recently?
Yes. For the latest startup I worked with, we built an app that connected patients with clinicians so that the relevant clinicians go to the patient’s house. So, we had to develop some way of sorting the patients. That’s when we introduced and tested how AI tools could solve that problem. It’s a bit tricky, and you have to test it a lot because it’s a delicate step, so you need a high success rate. It took us a while, but that was one of the applications of AI I integrated into my work.
Could you share another exciting problem that you addressed through your solutions engineering?
One thing that I’ve recently been trying, and that I love, is that with AI you can imagine an interface that you can speak with and that adapts depending on what you’re talking about. Imagine, for example, that you are trying to get a ticket for a cultural event. You could talk with AI-powered software whose interface switches depending on what you’re saying. So, if I say that I want to go to event B, the interface can direct me to that, and if I’m not sure yet which event I’d like to attend, the interface can switch to another function that helps me decide based on my preferences.
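A minimal sketch of this adaptive-interface idea looks like intent routing: classify what the user said, then pick which view to render. Intent detection is reduced here to naive keyword matching, and the view names are hypothetical; a production version would classify the utterance with an AI model.

```python
# Toy sketch of an interface that switches depending on what the user says.
# Intent detection is naive keyword matching for illustration only;
# a real version would classify the utterance with an AI model.

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    if "not sure" in text or "help me" in text or "recommend" in text:
        return "discover"
    if "go to" in text or "ticket" in text or "book" in text:
        return "book"
    return "fallback"

def route(utterance: str) -> str:
    """Pick which interface view to render for this utterance."""
    views = {
        "book": "booking_form",            # direct path: pick the event and pay
        "discover": "recommendation_quiz", # help the user choose an event
        "fallback": "main_menu",
    }
    return views[detect_intent(utterance)]

print(route("I want to go to event B"))            # booking_form
print(route("I'm not sure which event I'd like"))  # recommendation_quiz
```

The point of the sketch is the routing structure: once intent classification is delegated to a model, adding a new interface mode is just a new entry in the view table.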
Another exciting thing is multimodality: the use of multiple modes of communication, such as written and spoken language, but also images, sound, or gestures. With AI, the barriers between modalities are lowering and software interfaces are becoming increasingly fluid. It’s even possible that AI tools will create interfaces in real time depending on user input. I’m exploring that side of solutions engineering because it’s very exciting to see how everything blends together. I’d say it’s one of the most exciting things I’m testing right now.
Tell me more about multimodality. How do you make it happen, in practice?
Multimodality is the capacity to communicate in different ways, and AI solutions are progressively adapting to it. As you know, AI solutions started with text processing: you type a query and the software responds with the output you asked for. Then came the ability to upload images, video, or speech. Multimodality is the capacity to switch between these different modes of communication, seamlessly and without losing information. That’s the goal.
In the future, more dimensions will be added. For example, other modalities include converting one file format to another. That could also be done seamlessly by AI, and that’s the direction in which we are going.
If somebody is curious about building AI applications but they are not tech-savvy, what would you tell them?
I would tell them that it’s a great moment to start, because the tools are getting easier and easier to use. For example, some of the tools we have been talking about, such as Lovable, Replit, or Claude Code, are good options because they all work with prompts: in plain English you explain what you want, and the tool builds it for you.
What tech companies are mostly tackling right now with this functionality is the creation of web apps, but the same will surely translate to other things. Right now, these tools are very good at building simple front ends and back ends, but in the future we will see specialised tools that can create AI algorithms, for example. Replit recently announced the creation of agents just by prompting. This is advancing very quickly; what I say today may be outdated tomorrow. But the key takeaway is that it’s very easy right now to create simple software, and it will get even easier with time.
What I recommend is to be curious and to have imagination. I think that’s going to become more valuable: the capacity to join different things together and create something new from them, in a singular way, with your unique point of view. So, my advice is to be curious and to approach AI tools as if they were a new language. If you were to start learning French right now, you would start by learning the key words, important phrases, the pronunciation, and so on. With AI tools, it’s the same. If you were to specialise in creating software for biomedical products, like me, you’d need to learn the terminology of that industry and how it translates to web development, so that you can communicate it to AI tools. If you learn the correct terms, it’s faster and easier.
Top AI Tools to Build Software
Lovable: a no-code/low-code AI tool that builds full web apps from natural-language prompts, making software creation fast and intuitive.
Replit: an online development platform with AI features that lets users code, build, and deploy applications quickly in the browser.
Claude Code: an AI assistant that helps write, reason about, and generate code, supporting developers with problem-solving, testing, and application design.
What impresses you the most about AI software-building?
The learning path of software building has also been changing. It’s easier because it feels like a toy, like a game. You can learn by doing, while having fun. For example, if you were to continue using Lovable, you might start to see patterns: if you say “A” it gives you one result, and if you say “B” it gives you another. Just like that, you can learn by doing, fine-tuning your design, and incorporating new changes. It’s a dynamic learning process.
How do you actually test what comes up from those tools?
Starting with the technical side, normally you build different tests into your programme so that you can check it. For example, you test all the different inputs that people could come up with, to see if the login function works and if the system is robust enough. With AI tools, users can often type whatever they want, if it’s an application where you can write freely. So it’s good to create tests with AI, where the AI itself comes up with lots of different scenarios. You can prompt the AI to create specific tests and situations. Then, based on the answers, you can fine-tune your algorithm or agent, or whatever programme you’re building.
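This scenario-based testing can be sketched as a table of inputs and expected behaviours. In practice the scenario list would be generated by prompting an AI (“give me 50 unusual login inputs”); here a few cases are hard-coded so the example is self-contained, and the validator under test is hypothetical.

```python
# Sketch of scenario-based testing. The scenario list stands in for
# AI-generated test cases; the validator under test is hypothetical.

def validate_username(name: str) -> bool:
    """The function under test: a hypothetical login-name validator."""
    return name.isalnum() and 3 <= len(name) <= 20

# Each scenario pairs an input with the behaviour we expect.
scenarios = [
    ("alice99", True),
    ("", False),                        # empty input
    ("ab", False),                      # too short
    ("x" * 50, False),                  # far too long
    ("robert'); DROP TABLE--", False),  # injection-style input
]

failures = [(inp, got) for inp, expected in scenarios
            if (got := validate_username(inp)) != expected]
print("all passed" if not failures else f"failed: {failures}")
```

The same loop scales up naturally: swap the hard-coded list for model-generated scenarios, and the failures list tells you exactly which behaviours to fine-tune.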
And then on the user side, what we normally do is interview people, show them the tool, and ask them to use it freely. You observe facial cues: how they become frustrated with certain functions, whether they stop entirely, or overanalyse the software. Those are signs that something in the interface may not be totally clear to the user. You can also ask people to do something specific, like “open your profile”, and see function by function how the user responds, whether it’s easy for them to reach that point or whether they get lost. You can also ask them questions, for example whether something was missing, or general queries about the experience.
What is a common challenge for solutions engineers?
Creating AI applications is challenging because the technology is constantly changing and evolving, and you have to work hard to keep up. More specifically, creating AI applications that are open-ended is very challenging, because they are multimodal and users can interact with them in various ways. You have to explore different behaviours, and very often you see things you weren’t expecting. It’s tricky because sometimes you have to respond to questions you don’t know the answer to, and you have to make the software come up with something that will leave the user satisfied. So, providing a high-quality service with open-ended systems is quite difficult, but the field is evolving fast. We are gaining more insight and access to different approaches. As I was saying before, we can now generate different scenarios with AI, test them, and evaluate the software’s response. So, I think that’s becoming easier to tackle.
Looking into the future, 5-10 years ahead, how do you see the role of the solutions engineer evolving?
What I see is that we can wear more hats. In the future, we will be able to act as designer, engineer, and customer support all at once. You can be more fluid on the job, and that’s a great thing because, until now, you had a specific, more siloed job function. I think AI tools will open up aspects of work that you like, or that you think you are good at, but which you cannot use in your daily job. You could wear all these different hats, which will also change team dynamics, because it makes cooperation more fluid. It changes the perspective on roles, which from my point of view can be more enriching. It also comes with challenges, because you have to constantly keep up with advances, and I am not sure how the acceleration of AI progress will affect us all.