AI capable of human-like problem solving

LONDON - AI has moved a step closer to achieving human-like thought, after a new project developed machines capable of abstract thought that passed parts of an IQ test.

Experts from DeepMind, which is owned by Google parent company Alphabet, put machine learning systems through their paces with IQ tests, which are designed to measure a number of reasoning skills.

The puzzles in the test involve a series of seemingly random shapes, which participants need to study to determine the rules that dictate the pattern. Once they have worked out the rules of the puzzle, they should be able to accurately pick the next shape in the sequence.

DeepMind researchers hope that developing AI capable of thinking outside the box could lead to machines dreaming up novel solutions to problems that humans may never have considered.

A specially designed software system built for the task achieved a score of 63 per cent on the IQ-style puzzles.

Researchers at Google’s DeepMind project in London used puzzles known as ‘Raven’s Progressive Matrices’.

Developed by John C Raven in 1936, the Matrices measure participants’ ability to make sense and meaning out of complex or confusing data.

They also test the taker's ability to perceive new patterns and relationships, and to forge largely non-verbal constructs that make it easy to handle complexity.
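As an illustration of the task format, the following toy example reduces each panel to a single attribute, the number of shapes it contains, and hides a simple progression rule. This is a deliberate simplification for illustration, not the actual puzzle generator used in the study.

```python
# A toy Raven-style puzzle: each panel is summarised by one attribute
# (the number of shapes), and the hidden rule is a constant progression.
# Illustrative only; real matrices combine several attributes and rules.

def infer_step(sequence):
    """Infer the constant step of an arithmetic progression, or None."""
    steps = {b - a for a, b in zip(sequence, sequence[1:])}
    return steps.pop() if len(steps) == 1 else None

def pick_answer(context, candidates):
    """Choose the candidate that continues the progression in `context`."""
    step = infer_step(context)
    if step is None:
        return None
    target = context[-1] + step
    return next((c for c in candidates if c == target), None)

# Context panels contain 1, 2 and 3 shapes; the hidden rule is "+1 per panel".
context = [1, 2, 3]
candidates = [6, 4, 2, 5]
print(pick_answer(context, candidates))  # -> 4
```

A solver, human or machine, must first infer the rule from the context panels and only then score the candidates against it, which is what makes the task a test of reasoning rather than recognition.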

‘Abstract reasoning is important in domains such as scientific discovery where we need to generate novel hypotheses and then use these hypotheses to solve problems,’ David Barrett at DeepMind told New Scientist.

‘It is important to note that the goal of this work is not to develop a neural network that can pass an IQ test.’

Human candidates sitting the tests can give themselves a boost through heavy preparation, learning the types of rules that govern the patterns in the matrices. That means they are drawing on learned knowledge rather than using abstract thought.

This is a particular problem for AI systems, which learn by feeding vast amounts of data to neural networks and could easily just be taught to pick up on these patterns without needing to employ abstract thinking.

Instead, the researchers trained a range of standard neural networks on puzzles governed by a single property within a matrix, then tested them on puzzles involving properties they had not seen, so memorised rules could not help. They found the networks performed extremely poorly, scoring as low as 22 per cent.
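The snippet below is a hypothetical illustration of such a held-out-property split. The `puzzles` structure and the attribute names are assumptions made for the example, not the study's actual data format.

```python
# Hypothetical generalisation split: train only on puzzles whose rule
# manipulates one property, then test on puzzles governed by properties
# never seen in training. Field names here are illustrative assumptions.

puzzles = [
    {"rule_attribute": "shape_count", "data": ...},
    {"rule_attribute": "colour",      "data": ...},
    {"rule_attribute": "size",        "data": ...},
]

train_attributes = {"shape_count"}  # the single property seen in training

train_set = [p for p in puzzles if p["rule_attribute"] in train_attributes]
test_set  = [p for p in puzzles if p["rule_attribute"] not in train_attributes]

print(len(train_set), "training puzzles;", len(test_set), "held-out puzzles")
```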

However, a specially designed neural network that could infer relationships between different parts of the puzzle scored the highest mark of 63 per cent.
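The article does not spell out the architecture, but DeepMind has previously published relation networks (Santoro and colleagues, 2017), which score every pair of panel embeddings with a shared network. The sketch below is a minimal module in that spirit, assuming each panel has already been embedded as a vector by, say, a convolutional network; it is an illustration, not the study's exact model.

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Relation-network-style scorer: embeds every ordered pair of panel
    vectors with a shared MLP g, sums the results, and maps the sum to a
    score with a second MLP f. A sketch, not DeepMind's exact architecture."""

    def __init__(self, panel_dim, hidden_dim=256):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * panel_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.f = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, panels):
        # panels: (batch, n_panels, panel_dim), e.g. the context panels
        # plus one candidate answer, each already embedded as a vector.
        b, n, d = panels.shape
        left = panels.unsqueeze(2).expand(b, n, n, d)
        right = panels.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([left, right], dim=-1).reshape(b, n * n, 2 * d)
        relations = self.g(pairs).sum(dim=1)  # aggregate pairwise features
        return self.f(relations).squeeze(-1)  # one score per panel set

# Score each of 8 answer candidates by appending it to the context panels;
# the highest-scoring candidate is the model's answer.
model = RelationModule(panel_dim=64)
completed_puzzles = torch.randn(8, 9, 64)  # 8 candidate-completed puzzles
print(model(completed_puzzles).argmax().item())
```

Summing over all pairs makes the module insensitive to panel order, which is what lets it pick up relationships between any two parts of the puzzle rather than memorising fixed positions.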

Due to the design of the tests, it was not possible to compare these scores directly with human performance, as the AI systems had prior training on how to approach them.

Researchers found that participants with a lot of experience of the tests, making them comparable to the trained machines, could score more than 80 per cent. Newcomers to the tests often failed to answer all the questions.

The full findings are awaiting peer review but can be viewed on the pre-print repository arXiv.
