Intelligence as Flexible Optimization, Revisited

    As above, let us consider dynamical systems on spaces SxE, where S is the state space of a
system and E is the set of states of its environment. Such dynamical systems represent
coevolving systems and environments.
   We shall say that such a dynamical system contains an S.-sensitive environment to extent e if it is S.-sensitive to degree at least e for every system S; and so forth for L.-, R.S.- and S.S.-sensitivity. One could modify this approach in several ways, for instance to read "for almost any system S," but at this stage such embellishments seem unnecessary. This concept addresses the "unpredictable conditions" part of our definition of intelligence: it says what it means for a system/environment dynamic to present a system with unpredictable conditions.
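   In ad hoc notation (not the text's own), the largest such extent can be written as an infimum over systems. Writing deg_{S.}(S,E) for the degree of S.-sensitivity of the dynamical system on SxE when the environment E is coupled with the system S, the sketch is:

       e^{*}(E) \;=\; \inf_{S}\, \deg_{S.}(S, E),

so that the environment is S.-sensitive to extent e exactly when e \le e^{*}(E); the L.-, R.S.- and S.S.- cases read the same way.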

    Next we must deal with "appropriateness". Denote the appropriateness of a state S_t in a situation E_{t-1} by A(S_t, E_{t-1}). I see no reason not to assume that the range of A is a subset of the real number line. Some would say that A should measure the "survival value" of the system state in the environment; or, say, the amount of power that S obtains from the execution of a given action. In any case, what is trivially clear is that the determination of appropriate actions may be understood as an optimization problem.
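    To make the optimization reading concrete, here is a minimal sketch in Python. It assumes a finite set of candidate next states and a given real-valued appropriateness function standing in for A; the names are illustrative, not part of the formalism.

    # Sketch: choosing an "appropriate" next state as an optimization problem.
    # `appropriateness` plays the role of A(S_t, E_{t-1}); `candidates` is an
    # assumed finite collection of possible next states.
    def most_appropriate(candidates, previous_environment, appropriateness):
        """Return the candidate state S_t that maximizes A(S_t, E_{t-1})."""
        return max(candidates,
                   key=lambda state: appropriateness(state, previous_environment))

    In practice one rarely has such a function in hand, let alone the ability to enumerate candidate states; the point is only that, once A is fixed, "act appropriately" means "solve this maximization".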

    One might argue that it is unfair to assume that A is given; that each system may evolve its
own A over the course of its existence. But then one is faced with the question: what does it
mean for the system to act intelligently in the evolution of a measure A? In the end, on some
level, one inevitably arrives at a value judgement.
 Now we are ready to formulate the concept of intelligence in abstract terms, as "the ability to maximize A under unpredictable conditions". To be more precise, one might define a system to possess S.-intelligence with respect to A to degree ||h|| if it has "the ability to maximize A with accuracy a in proportion b of all environments with S.-sensitivity c", where h(a,b,c) = abc and || || is some measure of size, some norm. And, of course, one might define L.-, R.S.- and S.S.-intelligence with respect to A similarly.
    But there is a problem here. Some functions A may be trivially simple to optimize. If A were
constant then all actions would be equally appropriate in all situations, and intelligence would be a moot point. One may avoid this problem as follows:
Definition 4.5: Relative to some computable set of patterns C, a system S possesses S.-intelligence to a degree equal to the maximum over all A of the product [S.-intelligence of S with respect to A, relative to C] * [computational complexity of optimizing A]. L.-, R.S.- and S.S.-intelligence may be defined similarly.
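Read schematically, Definition 4.5 is a product maximized over appropriateness functions. The sketch below assumes a finite family of A's and assumes both bracketed factors are supplied as functions; in general the maximum ranges over all A and neither factor is effectively computable, so this is conceptual only. The dependence on the pattern set C is suppressed here, as it is in the discussion that follows.

    # Schematic rendering of Definition 4.5 over an assumed finite family of
    # appropriateness functions. `intelligence_wrt(system, A)` and
    # `optimization_complexity(A)` are placeholders for the two bracketed
    # factors in the definition; the pattern set C is left implicit.
    def s_intelligence(system, appropriateness_functions,
                       intelligence_wrt, optimization_complexity):
        return max(intelligence_wrt(system, A) * optimization_complexity(A)
                   for A in appropriateness_functions)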

This, finally, is our working definition of intelligence. In terms of Sternberg’s triarchic theory, it is essentially a contextual definition. It characterizes the intelligence of a given entity in terms of its interactions with its particular environment; and what is intelligent in one environment may be unintelligent in another. Unfortunately, at present there is no apparent means of estimating the intelligence of any given entity according to this definition.
  For simplicity’s sake, in the following discussion I will often omit explicit reference to the
computable set C. However, it is essential in order that intelligence be possible, and we will
return to it in the final chapter. Anything that is done with d# can also be done with dC.
I believe that high S.S. intelligence is, in general, impossible. The reason for this is that, as will become clear in Chapter 9, perception works by recognizing patterns; so that if patterns in the past are no use in predicting patterns in the future, mind has no chance of predicting anything. I suggest that intelligence works by exploiting the fact that, while the environment is highly L.-, S.- and R.S.-sensitive, it is not highly S.S.-sensitive, so that pattern recognition does have predictive value.

   The master network, described in Chapter 12, is a system S which is intended to produce a
decent approximation to appropriate behavior only in environments E for which the relevant
dynamical system on SxE is not extremely S.S.-sensitive — and not even in all such
environments. It is hypothesized to be a universal structure among a certain subset of L., S.- and R.S.-intelligent systems, to be specified below. Thus, a more accurate title for this book would be The Structure of Certain Liapunov, Structural and Reverse Structural Intelligent Systems.

   In other words: roughly speaking, the main goal of the following chapters is to explore the
consequences of the contextual definition of intelligence just given — to see what it implies about the structure and experiential dynamics of intelligence. To be more precise about this, we shall require a bit more formalism.
