AI - More than ChatGPT

Hello and welcome to this special episode of the CORDIScovery Podcast.

We're diving into one of the most transformative forces shaping the world

today.

Artificial intelligence and machine learning.

From health to the environment, mobility, industry, culture, and beyond.

AI is changing how we live and work.

I'm joined by representatives of four projects

that have received funding from the EU Horizon Europe program,

and these projects show the potential of AI and machine

learning to drive innovation and competitiveness, while ensuring

that they develop in a trustworthy, ethical and human

centered way.

So, I have with me Thomas Gutt, who is director

of funding projects and coordination at Infineon Technologies in Munich.

Thomas is the coordinator of the AIMS5.0 project.

Welcome, Thomas. Yeah, it's a pleasure to be here.

And I have on my left George Nikolakopoulos

representing the Persephone project.

George is a professor in robotics and artificial intelligence

at the Lulea University of Technology in Sweden.

And he's had some collaborations with NASA in the past as well, I believe.

Correct. Thank you for the invitation.

Nice meeting you.

Very good to have you with us, George.

We also have Sophia Alexandersson with us at the table.

Sophia is representing the Muse-IT project.

She's chief executive and artistic director of ShareMusic and Performing

Arts, the Swedish Knowledge Centre for Artistic Development and Inclusion.

Welcome, Sophia. Thanks for having me.

And finally, we have Alessandro Kartsiaklis.

Originally from Italy, then moved to Greece.

And these days, based in Denmark.

He is currently the CEO of NaviBlind,

having begun his career in the organization as an intern.

So, it's been a really impressive rise through the ranks.

And we're very happy to have you with us here, Alessandro.

And it's great to be here.

Thanks for having us.

So, Thomas, if I could start with you.

Your project uses artificial intelligence

to improve the sustainability of European industry.

At least that's the objective, I believe.

Can you explain how you're doing that?

Yeah.

Sustainability is an important topic, and AI in manufacturing is another very big topic.

We are using AI in our daily communication now, for example when we use ChatGPT, but using it in industry seems to be much more difficult.

But there is a huge potential to use that.

Yeah.

That's why we are doing this with a broad consortium.

We have 53 partners from nine industrial domains.

What we want to do is not just focus on one domain; we think we can learn from each other, and by this we incorporate more than one domain.

Yeah.

Could you give us an example?

I mean, in what kind of practical ways

could your project potentially improve eco-friendly manufacturing?

Learning is a very good example.

That means, we are examining all the data

with AI, learning much more about our products.

And with that, creating higher yields.

And if we have higher yields, that means with the same input energy,

we can produce more chips than before.

And hence, the energy per chip that was used is reduced.

That's one of the ways to sustainability.

Another way is, for instance, driving out

unwanted materials like PFAS.

These are different ways to sustainability.

Okay.

And I think you're developing through the project some very practical tools that companies can use: guidelines, checklists, and that kind of thing, if they're interested in applying AI in their manufacturing processes.

Can you tell us a bit more about that?

Yeah, we have, let's say, two groups of work packages.

One is where all the use cases are; those will be our results.

But there are also input technologies.

For instance, the AI gym, as we call it, which is a collection of different AI tools that will be compared, allowing the partners to compare the tools and learn from each other.

I think that's something which makes us special.

And we are putting a lot of focus on this, so that one can learn from the other.

Okay.

And when you say an AI gym, is the idea that you go from one exercise to another to flex different AI muscles?

Yeah. Like this. Yeah.

Improving our work in the gym, like one machine where you improve your muscles.

But you also see the machine next to it and check whether it's maybe more effective, or more useful for what you are doing.

And it's like being together in this room and learning from each other.

Okay.

Thanks for that.

Let me turn to you next, George.

As I said, you're representing the Persephone project here.

We're actually recording this podcast at the Research and Innovation Days, where we have an exhibition space in which all of you are actually presenting your projects.

But definitely one of the big attractions downstairs is your robotic dog.

So perhaps you could tell us a little bit

about that and how it's part of your project to develop

solutions to explore and extract deep mineral deposits?

Yes, exactly.

And I think the robot dog is a very nice example of a project

that generates autonomy, for example,

for inspecting deep and abandoned mines.

And what we have in the exhibition is live proof: the exact technology that has been operating in a mine is here.

People can see it, touch it, and interact with a fully autonomous device.

So, the project is focusing on how to inspect

these deep and abandoned mines and search for minerals

in order to then process all the data.

And all this could be used as a tool to take decisions: if the abandoned mines are to be reopened, how safe are operations, what minerals exist, and what actions should be taken?

Right.

And can you just explain briefly why it is important to go deeper

to extract minerals? Yes, of course.

Because, specifically in abandoned mines, human access is completely forbidden.

It's absolutely not a place to go.

You cannot even enter one or two meters.

And the reason is simple.

These mines were opened and then closed 100 years ago, 60 years ago.

So, they are not producing minerals anymore.

So, it's completely forbidden for humans to enter.

So, the only tool that you have is these robots, to go inside, inspect the environment, take some samples, and do drilling.

We do that also in the Persephone project.

It's the same situation with the deep mines that we're talking about, with deposits more than one kilometer deep.

So, it's very difficult conditions, very warm.

And the only solution, again, is to send robots, autonomous

robots.

There are no humans in the loop; there are no remote pilots.

They go in to inspect the situation, track the ore and take decisions on how mine planning and mine development should take place.

Right. So, it's to do with human safety.

It's to do with accessing areas you wouldn't be able to access.

Number one, human safety.

The second is that we collect billions of data points which, as you mentioned AI, only AI tools can process in a multi-dimensional approach.

So we combine samples,

spectral cameras, and then we take decisions about how the ore lies in the mine.

That's the second reason. Okay.

There's a lot of talk about digital twins.

I believe your project also has that kind of dimension.

Can you say a bit about that? Yes.

So, in autonomous systems, the fundamental operation is localization:

answering "where am I?" for a robot; every robot must know its position.

So, we use advanced sensors, lidars, to track the space; we do spatial recognition, but we also use sensors to build a map where robots or other mining machines can operate.

And with these robots, we can, let's say, create digital twins: map representations, or point clouds as we call them in robotics, spanning kilometers.

And we can combine existing maps with new maps.

We can combine maps coming from different robots.

It's called multi-session robotic mapping.

And then we can localize on these maps measurements of ore density and ore type, and provide this information to experts to take decisions on how the mine will develop.

Okay.

Thanks very much George.

I'll come back to you in a moment in the discussion.

But let's move on.

Now I'd like to bring in Sophia who's here as I said,

representing the Muse-IT project.

I think the fundamental aim of this project is to improve access

to cultural heritage through technology.

And I know you have a special music platform

that you would like to talk about, but can you just begin

by giving us a kind of overall introduction to the project?

All right.

So, Muse-IT is really focusing on how we can find new ways of dealing with arts and culture, going beyond, perhaps, sight and hearing, and using other senses like haptics.

So, it's about multimodality.

It's about making sure that everyone has equal rights not only to experience art, but also to explore it in the way you would actually like, maybe with your own participation in focus.

So, we have been working for the last three years in many different ways to explore this, everything from developing haptics onwards.

For example, let's say you have a painting on a wall.

How can we explore that, not only by looking at it, not only by having someone describe it, but maybe by having a sound which is equivalent to the painting, or a haptic sensation from it?

Yeah, yeah.

And have you been working with museums and cultural centers

to actually introduce some of these accessible options?

Yeah, we have.

We have 12 partners in the project.

We span from companies to cultural institutions, to ourselves; I wouldn't say a disability organization, but a knowledge centre focusing on disability inclusion.

So we have, of course, been doing lots of workshops, in particular in our parts of the project.

We have been meeting lots of individuals, disabled individuals and people who work with them, to really take this user-centred approach to design, not trying to create something which nobody uses or needs.

So yes, we have been meeting a lot, and I would say cultural institutions are also a very important target group for the project.

Okay.

And are there any particular obstacles you've encountered in

introducing this kind of technology into those spaces?

No, I can't really say that.

I think it's more about what the barriers are today.

I think accessibility has very much been about focusing on physical things, first of all.

But take the University of Borås, which is the lead partner: they have been working a lot with deafblind people and really looking into technology, especially haptics and so on, which also opens up other ways of working, to really overcome these types of barriers.

But what we have really been trying to do is to not have the disability itself in focus, but rather to make sure that everyone can participate.

I see, and, I promised

that I would give you a chance to talk about your remote performance

platform for co-creating music, which sounds really exciting.

So as far as I understand, the platform ensures synchronized

sounds and interaction using haptics and biodata.

It allows musicians to rehearse, perform, express feelings, and experiment together

as if they were together physically in the same space.

You know, where did the idea come from?

It's actually an idea we've been working on quite a lot for many years at ShareMusic.

So actually, when we got the invitation to this consortium, we said we would love to work on and develop this further, because we do think it fits this call.

So, I think it was really kind of an emergency situation that came up during the pandemic.

We all had to stay at home, and especially for musicians, you know, it became a really hard thing.

How do you play together?

Because, I mean, obviously with Zoom and everything, you know, you had to sing Happy Birthday to each other, and it just sounded horrible because you were not in sync.

Yeah. The latency.

But for a lot of disabled people, the situation we had during the pandemic is like their daily life.

They have limited possibilities to take part in work life, to study, lots of things.

Not for all of them, but for some people, these distances are there.

So, for us, this situation has been something we have been looking into in general.

How can we give people equal access to participate, even have it as their own work?

So, the idea really came out of that, and then we also had the chance to be introduced to JackTrip, a technology developed at Stanford, which is about working with low latency.

So, what we have done during the project is add new technology, developing both the JackTrip peripheral and other technologies.

So, we are coming to the next phase, because what is missing when you are online?

It's the emotions.

How do you share the emotions?

We are in the same room here right now.

We can feel each other.

You know, our bodies are picking up things, but that is really hard when we go online, and it is very important when you want to play and collaborate with people.

So we have been looking into ways of how

you can share that, you know, online.

So what we are sharing downstairs

here now is that people can actually try two things.

One is we are using face recognition.

Then

we ask people to smile into the camera, and then on the screen you can see your emotions projected.

I see.

And if there are any musicians out there who are listening to us

or watching this podcast, is this technology freely available?

Can people access the platform?

Not the platform itself.

It's still a proof of concept right now.

So we need to take it to the next level; you know, the buffering technology needs to be more advanced.

And then in the end, of course, it's going to be, we think, a service which can be available through licensing.

That's the plan at least. But I mean JackTrip in itself is available

and some other technologies are also available.

Okay. Thanks very much, Sophia.

And now I'd like to turn to Alessandro, who's with us from NaviBlind,

which is a project that's developed a GPS navigator that enables blind

and visually impaired people to walk independently to a destination.

And I believe you actually have something you can show us to explain exactly how it works. Yes, I do.

So that's the hardware that we are using.

It contains

a small satellite navigation unit.

I won't say GPS, because GPS is the American system.

We also have the European Galileo and the Russian and Chinese counterparts.

So here in the cap,

we have a small GNSS processing unit.

It really is small; for those who are listening to us, who can't see it, it is about 2 or 3 cm by 4 or 5 cm.

Maybe a bit more.

But yeah, it's still very light and compact.

This allows us to get an accuracy in positioning that can be down to a few centimeters in very open spaces.

But we try to provide navigation with an accuracy

of at least 30 to 40 cm in urban environments.

So that we can monitor step by step the route of our users

and also tell them when to turn, when to stop

and provide them with a quick description of the most important features.

Right.

And is this still at the prototype stage, or are you making it available?

No, this is actually our MVP.

So we had our prototype for a couple of years.

Okay.

We had around 20 users trying it, and they loved it, because when we moved to the MVP, all of them signed up.

So right now, we have around 20 active, independent users.

And are they using it every day in their lives?

Pretty much. Yes. Okay. Yes.

It's mostly useful when they have to learn a new route

they are not familiar with.

So it's also an alternative to mobility and orientation training, which can take up to one and a half months.

Right.

With our solution, we of course spend some time doing quality assurance on the routes, but within 48 hours the users have the route they have requested.

And what kind of feedback do you get from the users?

What kind of difference does it make to them on a day-to-day basis?

Well, for many of them it's life changing.

So, basically the idea for this product stemmed

from a personal necessity of a friend of ours, Emile,

who is today also communication manager in the company.

He moved from a small city outside Copenhagen

to Copenhagen a few years ago to study at university.

So he suddenly found himself in a completely new environment.

He had to learn how to get to public transport.

He had to learn how to go to university, to the local shops.

So, what we set out to do with NaviBlind was to create

a first network of points of interest for him.

And this was the way he started his master's years, with NaviBlind guiding him through the network of the city.

Okay.

So, yeah, for users this technology can be life-changing because, as I mentioned, it can bring down planning and preparation time for mobility by a big percentage.

And it also enables users to actually be independent in navigation, to choose active mobility over a taxi service and to explore new places.

So we try to offer leisure, work, educational and socializing opportunities for all users.

Right.

Right.

Listening to all of you, it seems to me

that one of the things you have in common is that you all work with quite

a lot of different partners within consortia for the projects.

But also you have different stakeholders.

You're working with user groups, with disabled people in the case of MuseIT and NaviBlind, and with the mining industry and the manufacturing industry.

How have you found that experience during the projects?

You know, what obstacles have you faced?

What lessons have you learned so that you could maybe share

with others who are embarking on similar projects?

I have a very nice example.

In robotics, when we say exploration,

we mean how the robot will explore an unknown environment.

But in the mining industry,

when you say exploration, it means how you can find the minerals.

So, it took us maybe two years to establish a common vocabulary.

And, of course, Persephone is not our first project.

But when we started working in the mining industry it was a problem, one that we overcame.

And another thing is that the mining industry wants to see solutions, autonomy and automation, in real life; they want to see things demonstrated in real-life conditions.

And I think that is something that gave us very good inspiration to develop systems that have resiliency and are able to operate under unknown conditions in unstructured and harsh environments.

So we learned a totally different way of thinking and developing systems.

Very interesting.

Sophia, do you have a story you could share?

Well, I think in general what has been really interesting during these three years has been that we all come from very different perspectives, but we have had inclusion and access as the core points.

For an organization like ShareMusic that has been working for

such a long time, certain things are just in our backbone, really.

You know, you don't even think about it.

And then you have to

rephrase it, perhaps as you say, or you have to re-describe it.

And also, one main question for me is really how you can embed accessibility and inclusion so that it really becomes the norm.

Then you don't have to add it as a kind of add-on when you work with it.

I feel that is really a big takeaway.

How do you do that? Right.

Thanks very much.

How about you, Alessandro, in your project NaviBlind?

We keep learning on multiple levels.

So when we started, for example,

one of our main concerns was

the technology, how to make it compact, how to make it reliable.

Then, as we moved on, we saw that there is also an aspect of interpretation of guidance, or the correct way to provide instructions to the users.

We also, at some point, had to talk about design and stylistic aspects, which we hadn't thought so much about.

So, the choice of different colors in the cap

was something that was suggested by the users themselves.

And now that we are expanding into the German market from our base in Denmark, we see that we have to do a lot of work to translate the requirements of the German audience, which may sound strange, but it is quite different from the Danish one.

So, yeah.

It's not just about language.

It's about acceptance of the technology.

Yes, but not only that; also lifestyle, the rapport with public organizations of different kinds.

So yeah, it's very interesting, and we keep learning.

And Thomas, I mean, you have a huge consortium, as you said earlier on; it must not always have been easy to get all of those partners together and agree on a common direction.

I would say you need a strong steering team.

And as I mentioned, equilibrium is important: equilibrium between industry and academic partners.

Yeah.

And I think this is really a very healthy connection.

Because the industry provides interesting data and gets to know the future talents.

And the academic partners can work with their students, going really deep into topics.

It's something which is very healthy for both sides.

And that's what I like.

And we really focus on that.

We have this in a good equilibrium.

That means the number of industry partners is roughly the same as the number of partners from academia.

Very interesting.

Well, I mean, it sounds like there have been a lot of learnings in, you know, the different projects, a lot of progress made, but still some way to go for all of you.

You have ambitions.

You're all working in areas where things are moving very quickly.

Robotics, AI, technology more generally.

If you were to take your crystal ball and look ahead five years into the future, where would you ideally like to see your projects developing? Sophia, perhaps you'd like to go first?

All right.

I think it’s pretty clear where we are heading.

So for the remote performance platform, I think in five years' time, we do foresee that we will be ready to take it to the market, to share it.

Yeah.

Well, good luck with that.

Thank you. How about you, Thomas?

Five years in the future, well,

I'm really hoping that we see a much deeper integration of AI in manufacturing, because I think it's really helpful, and we will all benefit a lot from that.

That's what we'll see in five years.

And people will be going to work out in your AI gym.

Yes, yes, but the people are still part of the project, part of the work.

That's the reason why we have this 5.0.

Because Industry 5.0 is re-incorporating the person, the worker, in the process; we are not replacing them with robots.

But we create a good new collaboration between the machines, the robots, and the human being.

Very important. Alessandro?

Yes. Well, we hope that in five years we will have entered larger markets, at least the European market, but also internationally.

We hope we will keep developing

new sensors and new tools that we can integrate into our solution,

but also develop new solutions for accessibility and mobility more generally, not only for blind and partially sighted people, but for everyone who faces mobility issues in daily life.

So potentially a huge market for you there.

Yeah.

And do you think you can stay here in Europe and do that?

We would like to; I mean, in the phases of our future development, we are incorporating EU technologies as much as we can.

So even when it comes to AI, we would like to try European models first, also supporting the AI strategy of the European Union and the European Commission.

So, as much as we can, European models; and if we cannot, of course, we prioritize the safety of our users and the user experience.

Thanks very much.

And how about you, George?

For us, we call it "zero-entry mining".

It means that you create mines

where there is no human in the loop, totally zero entry.

I am afraid that this will not happen in five years, but I envision a future where there will be more autonomy in mining machines, a reduction of humans operating in risk areas, smaller mining machines, a smaller impact on the environment, and totally safe and responsible extraction of minerals.

Okay.

Well, I'm afraid we've run out of time.

We could continue, I'm sure, on these fascinating topics,

but I'd like to thank our guests, George, Alessandro, Thomas and Sophia.

Thank you very much for joining us.

And thanks also to you for tuning in to this

special edition of the CORDIScovery podcast.

You can follow us on Spotify and Apple Podcasts.

And please do check out the podcast homepage on the CORDIS website.

Subscribe to make sure the hottest research and EU-funded science isn't passing you by,

and you can also find more information and project examples

on the CORDIS website, on the European Commission's Research

and Innovation website and in our online magazine Horizon.

Media information
ID I-283740
Date 21/01/2026
Duration 28:33
Languages Original
Category Vodcast
Personalities Tony Lockett
Institution European Commission
Views 63