The ideas of the previous chapters fit together into a coherent, symbiotic unit: the master

network. The master network is neither a network of physical entities nor a simple, clever

algorithm. It is rather a vast, self-organizing network of self-organizing programs, continually

updating and restructuring each other. In previous chapters we have discussed particular

components of this network; but the whole is much more than the sum of the parts. None of the

components can be fully understood in isolation.

A self-organizing network of programs does not lend itself well to description in a linear

medium such as prose. Figure 11 is an attempt to give a schematic diagram of the synergetic

structure of the whole. But, unfortunately, there seems to be no way to summarize the all-important details in a picture. In Appendix 1, lacking a more elegant approach, I have given a

systematic inventory of the structures and processes involved in the master network:

optimization, parameter adaptation, induction, analogy, deduction, the structurally associative

memory, the perceptual hierarchy, the motor hierarchy, consciousness, and emotion.

These component structures and processes cannot be arranged in a linear or treelike structure;

they are fundamentally interdependent, fundamentally a network. At first glance it might appear

that the master network is impossible, since it contains so many circular dependencies: process A

depends on process B, which depends on process C, which depends on process A. But, as

indicated in the previous chapters, each process can be executed independently — just not with

maximal effectiveness. Each process must do some proportion of its work according to crude,

isolated methods — but this proportion may be reduced to a small amount.
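The point about circular dependency with crude fallbacks can be pictured in a small sketch. Everything below is my own illustration, not the book's formal machinery: the process names, the numeric "work" functions, and the way helper output improves a result are all placeholders.

```python
# Hypothetical sketch: mutually dependent processes with crude fallbacks.
# Names and numeric functions are placeholders for illustration only.

class Process:
    def __init__(self, name, crude_method):
        self.name = name
        self.crude_method = crude_method  # isolated, low-quality method
        self.helpers = []                 # processes this one depends on

    def run(self, task):
        # Improve on the crude result using helper output when available;
        # without helpers the process still runs, just less effectively.
        result = self.crude_method(task)
        for helper in self.helpers:
            result += 0.5 * helper.crude_method(task)
        return result

# A dependency cycle: induction -> analogy -> deduction -> induction.
induction = Process("induction", lambda t: 1.0)
analogy = Process("analogy", lambda t: 2.0)
deduction = Process("deduction", lambda t: 3.0)
induction.helpers = [analogy]
analogy.helpers = [deduction]
deduction.helpers = [induction]

# Despite the cycle, each process can execute on its own.
print(induction.run("task"))  # 1.0 + 0.5 * 2.0 = 2.0
```

The cycle causes no paradox because no process blocks waiting on another; each merely works better when the others' output is available.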

Figure 11 and Appendix 2 provide the background necessary for my central hypothesis: that

the master network is both necessary and sufficient for intelligence.

As in Chapter 4, let us define general intelligence as the average of L-intelligence, S-intelligence, and R.S.-intelligence, without specifying exactly what sort of average is involved.

Then, using the ideas of Chapters 4 and 5, one may easily prove the following:

Theorem 12.1: For any computable set of patterns C, and any degree D of general

intelligence, there is some master network which has general intelligence D relative to C.
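Since the kind of average is deliberately left open, one minimal reading is a weighted arithmetic mean. The sketch below is only an illustration of that reading; the weights are arbitrary placeholders, not anything the text specifies.

```python
def general_intelligence(l_int, s_int, rs_int, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine L-, S-, and R.S.-intelligence into one score.

    The text deliberately leaves the kind of average unspecified; the
    weighted arithmetic mean used here is just one placeholder choice.
    """
    wl, ws, wr = weights
    return wl * l_int + ws * s_int + wr * rs_int

print(general_intelligence(0.6, 0.9, 0.3))  # about 0.6
```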

However, it is also clear from Chapter 5 that most of the master network is not essential for this

result. In particular, we have:

Theorem 12.2: Theorem 12.1 holds even if the perceptual and motor control hierarchies only

have one level each, and even if the global optimizer works by the Monte Carlo method.

In fact, even this assumes too much. The essential core of the master network consists of the induction processor, the global optimizer, and the parameter adaptor. One may show:

Theorem 12.3: Theorem 12.1 holds even if all perception, induction and parameter adaptation

are executed by Monte Carlo optimization, and the analogy and deduction processors do not

exist.

FEASIBLE INTELLIGENCE

The problem is that Theorem 12.1 and its offshoots do not say how large a master network

needs to be in order to attain a given degree of intelligence. This is absolutely crucial. As

discussed in Chapter 11, it is not physically possible to build a Turing machine containing

arbitrarily many components and also working reasonably fast. But it is also not possible to build

a quantum computer containing arbitrarily many components and also working reasonably fast.

Quantum computers can be made smaller and more compact than classical Turing machines, but

Planck’s constant would appear to give a minimum limit to the size of any useful quantum

computer component. With this in mind, I make the following hypothesis:

Hypothesis 12.1: Intelligent computers satisfying the restrictions imposed by Theorem 12.3,

or even Theorem 12.2, are physically impossible if C is, say, the set of all N’th order Boolean

functions (N is a very large number, say a billion or a trillion).
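Reading "N'th order Boolean function" as a Boolean function of N variables (my assumption; the text does not define the term here), the size of such a set C grows doubly exponentially, which conveys just how demanding the hypothesis is:

```python
def num_boolean_functions(n):
    """A Boolean function of n variables is a truth table with 2**n rows,
    each entry 0 or 1, so there are 2**(2**n) distinct functions."""
    return 2 ** (2 ** n)

for n in range(1, 6):
    print(n, num_boolean_functions(n))
# 1 4
# 2 16
# 3 256
# 4 65536
# 5 4294967296
```

At N in the billions, the truth table for a single function already dwarfs any physically realizable memory, let alone the space of all such functions.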

This is not a psychological hypothesis, but it has far-reaching psychological consequences,

especially when coupled with the hypotheses made at the end of Chapter 4, which may be

roughly paraphrased as

Hypothesis 12.2: Every generally intelligent system, relative to the C mentioned in Hypothesis 12.1, contains a master network as a significant part of its structure.

Taken together, these two hypotheses imply that every intelligent system contains every

component of the master network.

In conclusion, I think it is worth getting even more specific:

Hypothesis 12.3: Intelligent computers (relative to the C mentioned in Hypothesis 12.1) in which a high proportion of the work of each component of the master network is done independently of the other components are physically impossible.

All this does not imply that every intelligent system — or any intelligent system — contains

physically distinct modules corresponding to "induction processor", "structurally associative

memory," and so on. The theorems imply that the master network is a sufficient structure for

intelligence. And the hypotheses imply that the master network is a necessary part of the structure

of intelligence. But we must not forget the definition of structure. All that is being claimed is that

the master network is a significant pattern in every intelligent system. According to the

definition given in Chapter 4, this means that the master network is a part of every mind. And,

referring back to the definition of pattern, this means nothing more or less than the following:

representing (looking at) an intelligent system in terms of the master network always yields

a significant amount of simplification; and one obtains more simplification by using the

entire master network than by using only part.
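As a loose operational analogue (my own illustration, with ordinary data compression standing in for the book's pattern-theoretic complexity measure), "yielding simplification" can be pictured as a representation compressing its object:

```python
import zlib

def simplification(data: bytes) -> float:
    """Ratio of raw size to compressed size: values above 1 mean the
    compressor's representation of the data yields simplification.
    A crude stand-in for the pattern-theoretic notion in the text."""
    return len(data) / len(zlib.compress(data))

highly_patterned = b"abcabcabc" * 100
less_patterned = bytes(range(256)) * 4
# The more patterned the object, the more simplification a good
# representation of it yields.
print(simplification(highly_patterned) > simplification(less_patterned))  # True
```

The claim about minds is analogous: describing an intelligent system via the master network compresses the description, and the full network compresses it more than any proper part does.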

PHILOSOPHY OR SCIENCE?

To demonstrate or refute these hypotheses will require not only new mathematics but also new

science. It is clear that, according to the criterion of falsification, the hypotheses are indeed

scientific. For instance, Hypothesis 12.2 could be tested as follows:

1) prove that system X is intelligent by testing its ability to optimize a variety of complex

functions in a variety of structurally sensitive environments

2) write the physical equations governing X, mathematically determine the set of all patterns in

X, and determine whether the master network is a significant part of this set

We cannot do this experiment now. We must wait until someone constructs an apparently

intelligent machine, or until neuroscientists are able to derive the overall structure of the brain

from the microscopic equations. But, similarly, no one is going to make a direct test of the Big

Bang theory of cosmology or the theory of evolution by natural selection, at least not any time

soon. Sometimes, in science, we must rely on indirect evidence.

The theory of natural selection is much simpler than the theory of the master network, so

indirect evidence is relatively easy to find. The Big Bang theory is a slightly better analogy: it is

not at all simple or direct. But the theories of differential geometry, functional analysis,

differential equations and so forth permit us to deduce a wide variety of indirect consequences of

the original hypothesis. In principle, it should be possible to do something similar for the theory

of the master network. However, the master network involves a very different sort of

mathematics — theoretical computer science, algorithmic information theory, the theory of

multiextremal optimization, etc. These are very young fields and it will undoubtedly be difficult

to use them to derive nontrivial consequences of the theory of the master network.

Source: A New Mathematical Model of Mind
