
by Administrator | 22 Mar 2018

AI can help humans and robots work together better

Interviewing Selene Baez Santamaria, Data Scientist at myTomorrows, about Artificial Intelligence.

There are many exciting areas in Artificial Intelligence to be studied and researched thoroughly: a future where humans work with cooperative machines capable of learning and interacting with people is closer than we think. What is technically possible, and what are the ethical and moral consequences of these possibilities?


Q: Selene, you have developed a great interest towards robotics and you are dedicating your studies and career to this field. Tell us more about your vision and goals.


My particular interests lie in the design of collaborative robotics, with an emphasis on agents that require specialized intelligence to cooperate with people in everyday scenarios. This entails that robots must engage in an ongoing learning process for modeling the human world, resulting in an ever-expanding knowledge base.


I am deeply committed to making robotics accessible to the public. I firmly believe robots should complement society by filling a unique role alongside their human counterparts. It follows that, to achieve optimal human-robot cooperation, robots must have their own ways of acting and reacting to their environments while still respecting basic human reasoning.



Q: You are deeply committed indeed! You have a degree in Artificial Intelligence, and while you work at the university you also hold another job in the same field.

Yes, currently I have two jobs: Research Fellow at the VU and Data Scientist at myTomorrows. Both allow me to work on different aspects of Artificial Intelligence (AI) in different domains.


One of the major challenges for AI applications is interacting with a complicated world that can only be understood shallowly and partially. In the field, there is a clear division between:


a) top-down approaches — based on symbolic representations and explored via Semantic Technologies

b) bottom-up approaches — focused on sub-symbolic representations and implemented with Machine Learning techniques


For the event @Paradiso, we will have a robot on stage that combines these two approaches. It uses low-level machine learning techniques for face identification, as well as gender, name and speech recognition. At the same time, it uses a semantic model to reason over the facts it learns about the world. The talk presents the robot as an imperfect machine and aims to show that natural language is an intuitive way to correct and teach such robotic agents.
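The combination described here, sub-symbolic perception feeding facts into a symbolic reasoning layer, can be sketched in a few lines of Python. Everything below (the function names, the hard-coded observations standing in for ML models, and the toy "is_a" inference rule) is purely illustrative and is not the actual robot's implementation:

```python
def perceive(image_id):
    """Stand-in for bottom-up, sub-symbolic perception.

    In a real system this would be ML models for face, gender,
    name and speech recognition; here it is a hard-coded lookup.
    """
    observations = {
        "img_001": [("Alice", "is_a", "Person"),
                    ("Alice", "gender", "female")],
        "img_002": [("Bob", "is_a", "Person")],
    }
    return observations.get(image_id, [])


class KnowledgeBase:
    """Minimal top-down, symbolic store: (subject, predicate, object)
    triples plus one toy inference rule (transitivity of 'is_a')."""

    def __init__(self):
        self.triples = set()
        # Background knowledge the agent starts with.
        self.triples.add(("Person", "is_a", "Agent"))

    def learn(self, facts):
        """Add newly perceived facts to the knowledge base."""
        self.triples.update(facts)

    def infer_is_a(self, subject):
        """Follow 'is_a' links transitively to collect all types."""
        types, frontier = set(), {subject}
        while frontier:
            node = frontier.pop()
            for s, p, o in self.triples:
                if s == node and p == "is_a" and o not in types:
                    types.add(o)
                    frontier.add(o)
        return types


kb = KnowledgeBase()
kb.learn(perceive("img_001"))    # bottom-up: perception yields symbolic facts
print(kb.infer_is_a("Alice"))    # top-down: reasoning over the learned facts
```

The design point is the interface between the two halves: perception only has to emit triples, and the reasoner never needs to know whether a fact came from a neural model or was told to it in language, which is what makes language-based correction of the agent possible.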


Can computers learn to recognize the meaning of language? Join Selene and Prof. Vossen at the Paradiso event to find out, and hear how technology will shape our lives in the near future.