Intelligent AI is not for tomorrow


Two weeks ago, I had the opportunity to attend a talk by Yann LeCun, a famous researcher and director of Facebook's artificial intelligence laboratory (FAIR), about AI and its future. It was (unfortunately) not technical at all, but it was very interesting to get the point of view of one of the most knowledgeable people on Earth on this topic.


Apart from a historical overview and a few definition reminders, Yann LeCun delivered three important messages:

Yes, AI is the future (but not the Answer to the Ultimate Question of Life, the Universe, and Everything)

When asked about (useful) applications of AI, LeCun mentioned smart cars, healthcare applications and many others. He was confident that these applications will positively change our societies and take on greater roles. While reminding us of the need for ethical oversight of these technologies (see below), he seemed sceptical about machine-domination-over-humans scenarios and was more concerned about the misuse of AI by malevolent people.

He also agreed that the rise of such technology will create a "work shift" toward more specialized jobs, but he did not think humanity will become unemployed because of task automation. On that point I am less confident, but maybe that is because I have read too many cyberpunk books…

Intelligent AI in 40 years

When discussing the possibility of machines dominating humans, LeCun predicted that no AI as smart as a human will emerge for another 30-40 years. He thinks that, by that time, researchers will also have improved their understanding of AI and will thus avoid creating some Skynet-like AI by mistake.

I was quite surprised by this 30-40 year horizon, as I had read many articles about much faster progress. LeCun's point was that current AIs are better than humans at specialized tasks (chess, Go, Jeopardy!, data analysis…) but lack the ability to truly learn from everything and build a global vision of the world. He mentioned what every baby learns (e.g. gravity) from experience of the world alone (without supervision or reinforcement), something machines seem far from achieving, even though they can be trained on specific tasks at which they surpass humans.

Overall, LeCun was more cautious about the timeline of AI progress than most people claim to be, though confident about the benefits of this technology.

Elon Musk has something in mind

With the question of Terminator-like machines came the claims of Elon Musk, the brilliant founder of SpaceX and other great companies, about the dangers of AI technologies. LeCun said Musk is less virulent in private and noted that his OpenAI project is not exactly an anti-AI project but rather an ethical AI project (which seems a more reasonable point of view). He also suspects that Musk has another goal in mind beyond simply alerting people to the possible threats of AI technologies. Musk's own dream is Mars colonization: when people invest a bit less in AI, they have more chances of investing in Martian projects…

Side note:

One interesting question asked of LeCun was about the interpretability of AI decisions to humans, a topic I am very sensitive to as a bioinformatician who uses machine learning to decipher biology. LeCun's answer echoed what I see in labs on a daily basis: considering that most humans make decisions, even at an expert level, that they cannot explain even to themselves, how could a machine that makes decisions with many more parameters explain them to humans?
