Rebooting AI: Deep learning, meet knowledge graphs

“This is what we need to do. It’s not popular right now, but that’s why the things that are popular are not working.” That is a gross simplification of what scientist, best-selling author and entrepreneur Gary Marcus has been saying for a number of years now, but at least it’s a simplification he has made himself.

The “popular things that do not work” part refers to deep learning, and the “what we need to do” part refers to a more holistic approach to AI. Marcus does not lack ambition; he is set on nothing less than rebooting AI. Nor does he lack qualifications; he has been working to figure out the nature of intelligence, artificial or otherwise, more or less since his childhood.

Questioning deep learning may sound controversial, considering that deep learning is seen as the most successful subdomain of AI at the moment. Marcus, for his part, has been consistent in his critique. He has published work highlighting how deep learning fails, exemplified by language models such as GPT-2, Meena, and GPT-3.

Marcus recently published a 60-page paper entitled “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence.” In this work, Marcus goes beyond critique and makes concrete proposals for moving AI forward.

As a precursor to Marcus’ upcoming keynote on the future of AI at the Knowledge Connexions conference, ZDNet engaged with him on a wide array of topics. Picking up from where we left off in the first part, today we expand on specific approaches and technologies.

Robust AI: 4 blocks versus 4 lines of code

Recently, Geoff Hinton, one of the forefathers of deep learning, claimed that deep learning is going to be able to do everything. Marcus believes that the only way to make progress is to put together building blocks that are already there, but that no current AI system combines.

Building block No. 1: A connection to the world of classical AI. Marcus is not suggesting getting rid of deep learning, but using it in conjunction with some of the tools of classical AI. Classical AI is good at representing abstract knowledge, representing sentences or abstractions. The goal is to have hybrid systems that can use perceptual information too.

No. 2: We need to have rich ways of specifying knowledge, and we need to have knowledge at scale. Our world is filled with lots of little pieces of knowledge. Deep learning systems mostly aren’t; they are mostly just filled with correlations between particular things. So we need a lot of knowledge.

No. 3: We need to be able to reason about these things. Let’s say we know something about physical objects and their position in the world, for example, a cup. The cup contains pencils. AI systems then need to be able to realize that if we cut a hole in the bottom of the cup, the pencils might fall out. Humans do this kind of reasoning all the time, but current AI systems do not.

No. 4: We need cognitive models: things inside our brains, or inside computers, that tell us about the relationships between the entities we see around us in the world. Marcus points to some systems that can do this some of the time, and to why the inferences they can make are far more sophisticated than what deep learning alone does.
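To make the building blocks a little more concrete, here is a deliberately minimal sketch in Python of how they could fit together. Everything in it is hypothetical and ours, not Marcus’: the perceive function stands in for a trained deep learning model emitting symbols, the KnowledgeBase class stands in for knowledge at scale, and can_fall_out stands in for reasoning over a cognitive model of a scene.

from typing import Iterable, Optional, Tuple

Fact = Tuple[str, str, str]  # (subject, relation, object) triples

def perceive(scene_id: str) -> list:
    # Stand-in for the deep learning side: a perception model that emits symbols.
    # Hard-coded here; in a real system this would be a trained network.
    return [("cup_1", "is_a", "cup"), ("pencil_1", "inside", "cup_1")]

class KnowledgeBase:
    # A tiny symbolic store: building block No. 2 (knowledge) in miniature.
    def __init__(self) -> None:
        self.facts = set()

    def add(self, facts: Iterable[Fact]) -> None:
        self.facts.update(facts)

    def query(self, s: Optional[str] = None, r: Optional[str] = None,
              o: Optional[str] = None) -> list:
        # Return all stored triples matching the given pattern (None = wildcard).
        return [f for f in self.facts
                if (s is None or f[0] == s)
                and (r is None or f[1] == r)
                and (o is None or f[2] == o)]

def can_fall_out(kb: KnowledgeBase, item: str, container: str) -> bool:
    # Building block No. 3, reasoning: containment fails if the container
    # has a hole in the bottom.
    contained = kb.query(item, "inside", container)
    holed = kb.query(container, "has_hole", "bottom")
    return bool(contained and holed)

kb = KnowledgeBase()
kb.add(perceive("scene_001"))                 # perceptual facts from the "neural" side
kb.add([("cup_1", "has_hole", "bottom")])     # the world changes: we cut a hole
print(can_fall_out(kb, "pencil_1", "cup_1"))  # True: the pencils could fall out

The point of the toy example is not the code itself, but the division of labor: perception produces symbols, symbols are stored as explicit knowledge, and a separate reasoning step draws conclusions that no amount of pattern matching over pixels would yield on its own.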

To us, that looks like a well-rounded proposal. But there has been some pushback, from people no less than Yoshua Bengio. Bengio, Geoff Hinton and Yann LeCun are considered the forefathers of deep learning and recently won the Turing Award for their work.


There is more to AI than Machine Learning, and there is more to Machine Learning than deep learning. Gary Marcus argues for a hybrid approach to AI by reconnecting it with its roots. Image: Nvidia

Bengio and Marcus have engaged in a debate in which Bengio acknowledged some of Marcus’ arguments, while at the same time choosing to draw a metaphorical line in the sand. Marcus mentioned that he finds Bengio’s earlier take on deep learning to be “more on the hype side of the spectrum”:

“I think Bengio was of the view that if we had enough data, we would solve all the problems. And he now sees that that’s not true. In fact, he has softened his rhetoric quite a bit. He has acknowledged that there was too much hype, and recognized the limits of generalization that I have been pointing out for a long time, although he did not attribute this to me. So he has recognized some of the limits.

On this one point, though, I think he and I are still pretty far apart. We talked about what things you need to build into a system. There is going to be a lot of knowledge. Not all of it will be innate; much of it will be learned, but there may be some core that is innate. And he was willing to acknowledge one particular thing because, he said, well, that’s just four lines of computer code.

He didn’t quite draw the line and say nothing more than five lines. But he said it would be hard to encode all of this stuff. I think that’s silly. We have gigabytes of memory now that cost nothing, so you could easily accommodate the physical storage. It’s really about building and debugging and getting the right amount of code.”

Innate knowledge and the 20-year-old hype

Marcus went on to offer a metaphor. The genome, he said, is a kind of code that has evolved over a billion years to build brains autonomously, without a blueprint, adding that it is a very sophisticated system he wrote about in a book called The Birth of the Mind. There is plenty of room in that genome to have some basic knowledge of the world.

This is obvious, Marcus argues, from observing a precocial animal such as a horse, which just gets up and starts walking, or an ibex that climbs down the side of a mountain when it is a few hours old. There must be some innate knowledge there about what the visual world looks like and how to interpret it, how forces apply to your own limbs, and how that relates to balance, and so on.

There is much more than four lines of code in the human genome, the reasoning goes. Marcus believes that most of our genome is expressed in our brain as the brain develops, so much of our DNA is actually about building strong starting points in our brains that allow us to then acquire more knowledge:

“It’s not nature versus nurture, as in the more nature you have, the less nurture you have. It’s not like there’s a winner there. It’s actually nature and nurture working together. The more you have built in, the easier it is to learn about the world.”


Exploring intelligence, artificial or otherwise, almost inevitably becomes philosophical. The innateness hypothesis concerns whether certain primitives, such as language, are built-in elements of intelligence.

Image: BlikPixel

Marcus’ point about there being more than enough storage to go around resonated with us, and so did the part about adding knowledge to the mix. After all, more and more AI experts are acknowledging this. We would argue that the hard part is not so much how to store this knowledge, but how to encode it, connect it, and make it usable.

Which brings us to a very interesting, and also hyped, point and technology: knowledge graphs. The term “knowledge graph” is essentially a rebranding of an older approach, the semantic web. Knowledge graphs may be hyped right now, but if anything, it is a 20-year-old hype.

The semantic web was created by Sir Tim Berners-Lee to bring symbolic AI approaches to the web: distributed, decentralized, and at scale. Parts of it worked well, others less so. It went through its own trough of disillusionment, and it is now seeing a kind of vindication, in the form of schema.org spreading across the web and knowledge graphs being hyped. Most importantly, however, knowledge graphs are seeing real-world adoption. Marcus referenced knowledge graphs in his “Next Decade in AI” paper, and that was a trigger for us.

Marcus recognizes that there are real problems to be solved in order to pursue his approach, and that a great deal of effort must go into constraining symbolic search well enough for it to work in real time on complex problems. But he sees Google’s knowledge graph as at least a partial counterexample to this objection.

Deep learning, meet knowledge graphs

When asked whether he thinks knowledge graphs can play a role in the hybrid approach he advocates, Marcus was positive. One way of thinking about it, he said, is that there is a huge amount of knowledge represented on the Internet that is largely available for free and is not being leveraged by current AI systems. However, much of that knowledge is problematic:

“Most of the world’s knowledge is imperfect in some way or another. But there’s an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have AI systems be able to do the same.

Some examples are, first and foremost, Wikipedia, which says so much about how the world works. If you have the kind of brain a human has, you can read it and learn a lot from it. If you are a deep learning system, you can get basically nothing out of it, or hardly anything at all.

Wikipedia is the stuff at the front of the house. At the back of the house are things like the semantic web, which tags web pages so that other machines can use them. There is all kinds of knowledge there, too, and it is also being left on the floor by current approaches.

The kinds of computers we dream of, which can help us, for example, synthesize the medical literature or develop new technologies, will need to be able to read these things.

We need to get to AI systems that can use the collective human knowledge that is expressed in language form, and not just as a spreadsheet, to really move forward and build the most sophisticated systems.”
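To give a sense of what this machine-readable knowledge looks like in practice, here is a small illustrative Python snippet that pulls a few facts from a public knowledge graph over SPARQL. Wikidata and its query endpoint are our choice of example, not something Marcus names, and the query follows a standard Wikidata pattern; treat it as a sketch rather than a recipe.

import requests

# Ask Wikidata for a handful of items that are instances of "house cat" (Q146).
QUERY = """
SELECT ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "kg-demo/0.1 (example)"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"])

A deep learning model sees Wikipedia as a wall of tokens; a query like this sees entities and typed relations between them, which is exactly the kind of structure a hybrid system could reason over.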


A hybrid approach to AI, mixing and matching deep learning and knowledge representation as exemplified by knowledge graphs, may be the best way forward.

Marcus went on to add that with the semantic web, it turned out to be harder than expected to get people to play along and be consistent about it. But that does not mean there is no value in the approach, or in making knowledge explicit; it just means we need better tools to make use of it. That is something we can subscribe to, and something many others are working on as well.

It has become clear that we cannot really expect people to manually annotate every piece of content they publish with RDF vocabularies. So much of this now happens automatically, or semi-automatically, via content management systems. WordPress, the popular blogging platform, is a good example: many plugins annotate content with RDF (in its developer-friendly JSON-LD form) upon publishing, with little or no effort required, ensuring better SEO in the process.
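As an illustration of the kind of markup such plugins emit, here is a minimal, hypothetical schema.org JSON-LD snippet, built and serialized in Python; the field values are placeholders, not the output of any specific WordPress plugin.

import json

# Hypothetical metadata for a blog post, following schema.org's BlogPosting type.
article_metadata = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "Deep learning, meet knowledge graphs",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2020-11-01",
}

# A plugin would typically embed this in the page head inside a
# <script type="application/ld+json"> tag for search engines to pick up.
print(json.dumps(article_metadata, indent=2))

Search engines and other crawlers read this structured layer rather than parsing the prose, which is how content published this way ends up feeding knowledge graphs.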

Marcus believes that machine annotation will get better as machines get more sophisticated, and that there will be a kind of upward ratcheting effect as we get to AI that is more and more sophisticated. Right now, AI is so unsophisticated that it doesn’t really help that much, but that will change over time.

The value of hybrids

More generally, Marcus believes that people have come to recognize the value of hybrids, especially in the last year or two, in a way they did not before:

“People fell in love with this notion of ‘I just pour all the data into this one magic algorithm and it’s going to get me there’. And they thought it would solve driverless cars and chatbots and so on.

But there has been a wake-up call: ‘Hey, this doesn’t really work, we need other techniques’. So I think there has been a lot more hunger to try different things, and to try to find the best of both worlds, in the last couple of years, as opposed to maybe the five years before that.”

Amen to that, and, as mentioned earlier, it looks like the state of the art in the real world is close to what Marcus describes, too. We will pick this up and wrap up next week, with more on techniques for knowledge infusion and semantics at scale, and a look into the future.