Requisite Cognitive Skills for Decisions About Systems

Gary Chicoine

MetaBridge Ltd., Edradour House, Pitlochry, Scotland, PH16 5JW, UK

Abstract

The organisational learning orientation to systems thinking requires managers to think together and share mental models using a variety of related techniques. It is generally recognised that this is no easy matter, since the demand for new ways of thinking puts stress on old habits of mind. The purpose of this paper is to show how treating this difficulty as a purely technical problem falls short by overlooking the cognitive dimension: what new events have to happen in the brains of the managers. Cognitive biology gives a starting point for considering how decision behaviour relates to implicit mental models. The consequences of mental models lacking feedback and systemic coherence are illustrated by an analysis of faulty thinking in privatisation. A classification of system and feedback types emphasises that, in managing organisations, uni-dimensional systems thinking is not adequate. The requisite multi-dimensional systems thinking calls for holistic multi-factor thinking and multi-future thinking combined with causal feedback thinking. A crucial link between practical consulting, applied cognitive science and applied systems science is visual facilitation, which increasingly exploits the power of interactive visual representations of the mental models behind decisions.




The Riddle of the Sphex

Dennett (1984), following Hofstadter (1982), referred to Wooldridge (1963, 1968) on some interesting features of a problem in cognitive biology. The problem centres on the much-discussed behaviour of Sphex ichneumoneus, the digger wasp. We would like to refocus this problem from the perspective of concern about the quality of decision-making about systems, or even whether what appears to be decision-making about systems is actually decision-making or something else.

The digger wasp gets her name from her habit of digging a burrow in which to lay her eggs. However, she does not simply lay her eggs in the burrow she digs. The decision subsystem of her cognitive system, her internal board of directors or ministerial cabinet in her brain, her planning faculty or memory-of-the-future (prospective memory), seems to give her a vision of the future of her offspring in the burrow. She seems to consider a rather gloomy scenario of the future in which the offspring, newly hatched, may starve. In the light of this scenario, she supplements the decision to dig the burrow with a further decision to sting and paralyse a cricket which her offspring can feed on after they hatch out. However, she will not merely drag the comatose cricket into the burrow and lay her eggs. She seems to be aware of the need for responsible monitoring of the entire system for quality control, so she brings the comatose cricket up to the threshold of her burrow, but does not take it immediately in. Instead she seems to engage an oversight committee in her brain, some faculty of holistic intelligence, judgmental monitoring, or audit function: she goes into the burrow and checks it out thoroughly, looking and feeling around in it, to make sure everything is as it should be before bringing in the cricket and laying the eggs therein. It is significant that her scenario-planning capacity, which deals with preparing for the future contingency of a possible food shortage, is supplemented by this other cognitive faculty of responsible holistic awareness during implementation. Though she has been in the burrow before (after all, she is the one who dug it), she has no way of knowing that the system has remained stable and appropriate for her purposes, no way of knowing that Reality is still what it used to be. She apparently wants to see if there has been a reality-shift in the system before committing resources.

As human beings, with incredibly sophisticated decision and judgement subsystems or faculties, we naturally see the need for considering alternative futures and activating awareness of new, different emergent realities before committing resources. What is incredible here is that the digger wasp, even with her rudimentary little cognitive system or brain, is able to be as sophisticated and conscious in decision-making as we are. She too, like us, possesses "free will" and "consciousness". Like us, she never makes unconscious, stupid, automatic pseudo-decisions, nor does she display gross lack of awareness of the system or of the causal feedback loops in it.

There has been further research in the cognitive biology of the Sphex, and as a result our respect and admiration for the digger wasp has somewhat deteriorated. If a researcher moves the comatose cricket a few inches away from the burrow threshold while our responsible and fully conscious wasp is doing her final check of the burrow, then on coming out she does not find the cricket on the threshold; she drags it the few inches back to the threshold and performs an utterly unnecessary, redundant, useless and expensive second check, looking and feeling around in the burrow, before she comes out to bring in the cricket and lay her eggs. In fact, she will make this meaningless "final check" up to forty times in a row if the researcher keeps moving the cricket a few inches every time she is in the burrow being "responsible" and "conscious" of the "real world".

Apparently, merely having a brain, a cognitive system, does not guarantee that a decision-making entity is exhibiting genuine free will and consciousness. It would of course be rather frightening if our leaders of government and industry, as well as our best academic minds, were to display symptoms of Sphexish pseudo-will and pseudo-awareness in their decision-making about systems of vital and urgent concern to us, such as the national economy, the competitive viability of the company, or fields of important scientific research and the thoughtful disciplinary dialogue of all fields taken together as the whole enterprise of coordinated research and development of technology. It would be painful indeed if our decision-makers could be shown to lack rudimentary cognitive faculties like holistically responsible awareness of multiple realities, prospective memory of multiple futures, or causal perception of feedback. The guidance and control of critical large systems would then be blindly dogmatic, unconscious, short-sighted, over-confident in prediction and expectation, and prone to incorrect intervention, failing to see the causal feedback loops through which incorrect interventions, or the lack of correct ones, manifest as failures of those systems. We would be facing a massive problem in cognitive anthropology far more serious than our little problem in cognitive biology. We would have to face the Riddle of the Sphex becoming the Riddle of Human Decision-Making.

To achieve graceful entry (Holland et al, 1986) into dealing with the learning deficiencies (Senge, 1990) of decision-makers who make decisions about important large systems (Steinbruner 1974, Axelrod 1976, van Gigch 1987), we need some bridge between important decision-makers commanding vast armies or budgets and lowly, insignificant digger wasps. That bridge is the householder making decisions about the thermostat for home heat control. This may not be as easy as it sounds when we consider the recent statistic that 20% of Americans believe that the sun orbits the earth, and 17% of the rest believe the earth goes around the sun once per day (Mestel 1994).


Everyday Decisions About Thermostats

Indeed, there is considerable evidence of a lack of feedback cognition even in situations as simple as a thermostat regulating the heat of a room or home (Kempton, 1987). It appears that from 26% to 50% of people have a very shaky perception of how a thermostat works, resulting in billions of dollars of energy loss each year in America (Figure 1).

It seems that something like a "valve theory" is all too often used in decision-making about temperature settings rather than a "feedback theory". The valve theory is the "folk theory" which perceives the thermostat as something like an accelerator in an automobile or an oven burner knob (which can turn the amount of flame and heat up or down). This leads to turning the thermostat setting to an extremely high temperature for a while, then discovering the room temperature is uncomfortably high, then turning it too low for a while, experiencing the room as too cold, then turning the setting too high again, and so on. We are all familiar with variations of this causal behaviour loop in dealing with people who do not understand feedback heat regulators.
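The valve-versus-feedback contrast can be made concrete in a few lines of simulation. What follows is a minimal sketch of our own (not from Kempton's study): a crude one-room heating model with hypothetical parameters, comparing a user who trusts the feedback loop with a valve-theory user who keeps swinging the setting between extremes. The extreme settings buy no finer control; they only widen the temperature swings and waste heat through overshoot.

```python
# Minimal illustrative sketch (our own, hypothetical parameters): a room
# with simple Newtonian heat loss, heated by an on/off thermostat.

def simulate(setpoint_of_hour, hours=24, outdoor=5.0, heat_rate=3.0, loss_rate=0.1):
    """Return hourly room temperatures and total furnace-on hours."""
    temp, furnace_hours, trace = 18.0, 0, []
    for hour in range(hours):
        if temp < setpoint_of_hour(hour):     # feedback: the thermostat switches
            temp += heat_rate                 # the furnace on or off; it is not a valve
            furnace_hours += 1
        temp -= loss_rate * (temp - outdoor)  # heat loss to the outdoors
        trace.append(temp)
    return trace, furnace_hours

feedback_user = lambda hour: 20                        # leaves the setting alone
valve_user = lambda hour: 30 if hour % 6 < 3 else 10   # cranks it up, then down

for name, user in [("feedback user", feedback_user), ("valve user", valve_user)]:
    trace, on_hours = simulate(user)
    print(f"{name}: furnace-on hours = {on_hours}, "
          f"temperature swing = {max(trace) - min(trace):.1f} degrees")
```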

It is precisely in this kind of area that we must face Sphex in human form. This kind of problem in cognitive anthropology (Dougherty, 1985), which is the Folk Theory Problem, as well as the difficulty of proceeding beyond folk theory to improved cognition free of bias or faulty judgement (Nisbett and Ross, 1980), may require some other strategy than scholarly lament that people have failed to take a course in statistics. We believe that a more fruitful area of research would be the study of the acquisition of requisite cognitive skills (Anderson, 1981), the development of better models of the psychology of human intelligence (Sternberg, 1982), and the search for more informed criteria as to the kinds of intelligence we should be measuring for (Gardner, 1983). By seeing this as an adult education problem with a challenge of self-improvement, we might be able to frame it for people in such a way that they can exercise a more authentic free will and consciousness about feedback situations, rather than behaving as automata cycling through a neuro-subroutine in the form of a "folk theory".

We do not believe there is much to be gained by telling people they are mere "folk theorists" and embarrassing them more than they already are about their lack of requisite cognitive faculties, skills, intelligence and conscious free will. Something gentle like, "May I show you how a thermostat works?", might be more to the point, even if we are experiencing the horror of perceiving a human digger wasp blindly wielding a valve theory. Perhaps we can speak of the glorious possibility of the household becoming a "learning organisation" (Senge, 1990).

What is interesting here is that very often it is exactly the thermostat that is used as an analogy when systems thinkers are trying to transmit the truth of causal feedback loops to decision-makers. If the faculty of perceiving feedback is missing in such decision-makers, using the thermostat analogy may be counter-productive; otherwise we would not have needed Kempton's (1987) paper in cognitive anthropology.

There are problems about whether there actually is something called "consciousness" functioning in human brains (Dennett, 1991), and even as we speak, cognitive neuroscience is amassing some evidence for Eliminative Materialism (Churchland, 1989), which would indeed relegate "consciousness", "free will", "decision-making" and "the human soul" to the status of naive intuitional folk theories held in the brains of biological robots.

If human decision-makers are going to retain their non-sphexish human dignity and human spirit, it may take more than Qualia Theory (Lycan, 1990) to save the day and restore shattered faith in our fundamental human being (Heidegger, 1927) and freedom (Sartre, 1943). We believe, in fact, that it is through demonstrating requisite cognitive faculties and skills that decision-makers will prove they are embodiments of conscious selfhood beyond the mere biological machinery of the brain. It would not be pleasant for any decision-maker to have a vehement neuroscientist storm into his or her office and say, "I, as a properly functioning biological robot, find you malfunctioning in the area of causal feedback loop cognition and have recommended that you be decommissioned!"

We will not have to go to the extreme of requiring decision-makers to do original thinking about their conscious selfhood (Husserl, 1933). All we have to do is facilitate their emergent cognitive faculties into viable functioning. But before we do this, let us face the fact that decision-makers in responsible positions are sometimes struggling with the control mechanisms of their economies, companies and other institutions in a manner that is not always productive of the general good. For, after all, we do want decision-makers to make conscious, intelligent decisions about those systems under their control that affect our world, don't we? And we must not forget that there are still many influential critics who believe that causal feedback loop analysis and simulation is itself a kind of wrong folk theory or deviation from societal thinking norms (Bloomfield 1986). Many people laughed at the system dynamics world models (Meadows et al, 1972) of the Club of Rome because those models directly demonstrated the systemic irresponsibility of decision-makers and pointed at an embarrassing lack of simple feedback cognition on the part of government and industry. Wounded human sphexishness has been fighting this battle against systems thinking on all fronts ever since. In fact, world modelling has become an "undiscussable", as in any organisation suffering from "defensive routines" (Argyris, 1985).

Figure 1 - Minds thinking valve or thermostat

Decision-makers may want to foster better national use of home thermostats to save billions of dollars in wasted global petroleum-based resources, and they may even acknowledge that citizens might benefit from seeing the feedback in the little things of everyday use (Norman, 1988), but even now it would be hard to predict meaningful systems thinking, global scenario planning and activation of requisite holistic intelligence in all the key players. Of course, the old Club of Rome world simulations were missing certain critical variables, such as Resource Terrorism, as demonstrated to the rest of the world by Iraq during the Gulf War in setting oil-wells on fire, and Ecological Terrorism, also by Iraq, in deliberately pouring oil into the sea during that same conflict. How are we to deal with cunning and hostile forms of valve theory that destroy resources and the environment?

The Spectre of Privatisation

We could not account for the overwhelming predominance of Valve-Theory-driven decisions by using either the Analytic Paradigm (AP) or the Cybernetic Paradigm (CP) of decision processes (Steinbruner, 1974). Nor could AP and CP sufficiently explain why decision-makers, when faced with intense uncertainty, exclude all values but one (such as staying in power, getting re-elected, showing a profit to the shareholders very soon, dominating the field and getting the biggest research grants, and so on); or why they find it impossible to conceive of any negative outcome of their decisions and tend to make use of only partial information selected from an overly limited set of variables. One-valued logic would be the natural result of a brain assuming linear causation without interactive feedback loops involving other values.

The imagining of only a positive outcome would be the function of a first-scenario logic that never seriously considers alternative scenarios. Working with a narrow-band logic in collecting and utilising information would be the function of a cognitive system without the requisite holistic intelligence of extended awareness or search for the full truth of the situation. Observed behaviour thus gives an impression of rather driven, over-confident, and wilfully blind entities. Decision-makers need to develop a set of requisite cognitive faculties that are reasonably operational: considering all relevant values, engaging in scenario planning, and seeking maximum information variety with the greatest possible consciousness.

We think we all agree that we would like our decision-makers to use a new paradigm in their decisions about complex systems. Currently, however, it is easier to find examples of ongoing old paradigms of the decision processes at work. We see the setting of interest rates by the government as something like a thermostat being treated as a valve, and we have to face something potentially even more damaging: privatisation.

The decision to privatise works with a simple logic that says getting to a "market economy" and "reducing government spending" are so desirable that we do not have to understand the prerequisites for a genuine market economy nor consider more than one value in government budgeting. Clearly, to have viable economic competition, we need parallel services in any region, such as two or three different rail-services operating on parallel sets of tracks, or two or three phone companies with two or three parallel sets of phone lines, or two or three parallel sets of water lines maintained by two or three water companies, each with their own purification facilities. The customers in various regions would thus have a real choice of alternative services and there would be an authentic market economy of these services. But the government would have to first be willing to fund the building up of parallel infrastructures for the public good, and that would cost a lot of money, meaning a temporary tax-increase along with problems educating the public as to why we need parallelism. So the apparently cheap solution is to fragment the single system into separated regions or severed functions to make them at least "privately owned" and thus remove the government from accountability as to the outcomes of the systems in question (Figure 2).

What we should be asking now, as an urgent issue in system science, is whether privatisation is systemically wise or foolish. The first principle that comes to mind is the Eighty-Twenty Law (Beer, 1979), which states:

"In any large, complex system eighty percent of the output will be produced by only twenty percent of the system."

With just this principle as a basis for thinking, we might be able to formulate a new system principle as follows:

The Law of Devolution: "When an existing system is fragmented into separate parts that are no longer organised as a whole, the effect on the whole will be a reduction of efficiency to eighty percent of the former efficiency of the organised whole, at an increased cost of twenty percent to the environment of the former whole."

PRIVATISATION                        COMPETITION
• mutually exclusive services        • parallel services
• pseudo-choice                      • real alternatives
• simplistic infrastructure          • complex infrastructure
• simplistic capital investment      • robust capital investment
• cost increases by fragmentation    • cost reduction through market effectiveness

Figure 2 - The Privatisation Delusion

But in order to see how this works, we will have to come to understand an old Russian, A. Bogdanov (1912), and his own original system science, which he called "Tektology".


Obvious Implications of Tektology

According to Bogdanov (1912), the General System principle that the whole is greater than the sum of the parts in any given system is not actually true. In fact, he states that the degree of organisation in a system is crucial to the performance of the system. In his terminology, he preferred the term "complexes" for systems and only wanted to designate an organised complex as a "system". In order to retrieve Tektology from obscurity, we would prefer to refer to all complexes (whatever their degree of organisation) as systems. So, a system can be in one of three tektological conditions:

1 - Organised System: Where the whole is indeed greater than the sum of the parts.

2 - Neutral System: Where the whole is equal to the sum of the parts.

3 - Disorganised System: Where the whole is less than the sum of the parts.

Bogdanov (1912) gives an example of how this works by having us consider the time it takes a man to clear a field. He points out that two men working together can clear the field in less than half the time it takes one man. Why is this? One man, faced with a two-hundred-pound stone, would have a harder time with it than he would carrying a hundred-pound stone twice over. Two men, however, can tackle a two-hundred-pound stone that would defeat the single man. So the two men working together to clear the field form an organised system. If the field were divided in half, with each man assigned to clear his half without helping the other (regional privatisation), this would comprise a neutral system. If the field were treated as a battleground or arena for the two men to fight and struggle against each other over the work and interfere with one another's efforts, this would comprise a disorganised system.

ORGANISED SYSTEM    NEUTRAL SYSTEM    DISORGANISED SYSTEM
1 + 1 = 3           1 + 1 = 2         1 + 1 = 1
generative          static            degenerative

Figure 3 - The Tektological Laws of Systems (Bogdanov)

Let's now take a close look at a simple tektological analysis of regional privatisation to create a neutral system from a previously organised system.

Let us assume that in the organised system we are paying two men an hourly wage to clear a field. According to the Eighty-Twenty Law, eighty percent of the efficiency of the system is performed by twenty percent of the system. It is obvious where this productive twenty percent is in our field with two men working together, for it would comprise their efforts of removing very heavy objects together. The other eighty percent of time removing other things is not what makes their system pay off from working together. So, what happens when we create a neutral system of two half-fields where the two men are not allowed to work together? We lose precisely that twenty percent super-efficient effort they used to make handling very large objects together, which they now must somehow struggle with separately, each in their own half-field. The loss of twenty percent efficiency reduces the overall system to eighty percent of its former performance. Also, due to the individual struggling with the heavy objects, the two men take longer clearing their half-fields, so that the hourly wages total for both men taken together increases by twenty percent. So by disorganising an organised system into a neutral system we are reducing efficiency and increasing cost to the employer, which causes him to have to raise prices for the products or services he sells.
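Because the Law of Devolution is stated as two fixed ratios, the bookkeeping above can be run directly. Here is a minimal sketch; the baseline figures of 100 are arbitrary, and only the 80% and 120% factors come from the law as stated. It makes a corollary visible: if output falls to 80% while cost rises to 120%, the cost per unit of output rises by half.

```python
# Minimal sketch of the Law of Devolution as stated in the text.
# Baseline figures are arbitrary; only the 0.8 and 1.2 factors come from the law.

def devolve(output, cost):
    """Fragment an organised system into a neutral one: output falls to 80%
    of the former whole, cost to the environment rises by 20%."""
    return 0.8 * output, 1.2 * cost

output, cost = devolve(100.0, 100.0)   # e.g. field cleared per week, weekly wage bill
print(f"output: {output:.0f} (was 100), cost: {cost:.0f} (was 100)")
print(f"cost per unit of output: {cost / output:.2f}x that of the organised system")
```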

All interventions in systems that increase or decrease the degree of organisation in a system should therefore have their dynamics simulated according to tektological principles, and we hope to see serious research along these lines in the near future. However, in order to do this properly, we will have to create a new system of mathematics and computation where:

1 - For organised systems: 1 + 1 = 3

2 - For neutral systems: 1 + 1 = 2

3 - For disorganised systems: 1 + 1 = 1

Only with the services of generative integers (1 + 1 = 3) and degenerative integers (1 + 1 = 1) will we have the correct logic for the mathematical models of the kinds of systems we are simulating. This in turn will help us better understand how learning happens in the human brain, in fact as a generative act of 1 + 1 = 3, as shown by Holland et al (1986). From the cognitive point of view, all creativity and innovation is based on generative thinking, which organises discontinuous ideas to generate new ideas (Figure 3). We should also be able to show mathematically why destructive criticism and argument are counter-creative in groups of people and inhibit creative innovation, for tektologically the group's thinking is disorganised, and a simulation of its dynamics therefore implies degenerative integers, as in the radioactive decay of the uranium atom. For, according to Bogdanov, the principles of any science of organisation or tektology must apply to all forms of organisation, whether organic, social, cognitive, or mental. Bogdanov also points out, and we believe rightly so, that all systems are open (interactive with their environment and each other) and dynamic (changing).
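One illustrative way to make such generative and degenerative integers computable (our sketch, not Bogdanov's own notation) is to let the sum of the parts depend on the degree of organisation, with each junction between organised parts contributing one unit of synergy and each junction between disorganised parts destroying one:

```python
# Tektological addition, sketched: the whole depends on the degree of
# organisation of the parts (our illustration, not Bogdanov's notation).

def tekt_sum(parts, organisation):
    """organisation: +1 organised, 0 neutral, -1 disorganised."""
    junctions = len(parts) - 1                 # one junction per pair of joined parts
    if organisation > 0:
        return sum(parts) + junctions          # generative: whole exceeds the sum
    if organisation < 0:
        return max(sum(parts) - junctions, 0)  # degenerative: whole falls short
    return sum(parts)                          # neutral: whole equals the sum

print(tekt_sum([1, 1], +1))   # 3   organised:    1 + 1 = 3
print(tekt_sum([1, 1],  0))   # 2   neutral:      1 + 1 = 2
print(tekt_sum([1, 1], -1))   # 1   disorganised: 1 + 1 = 1
```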

Now, since all systems are dynamic, we can look for the predominating causal feedback condition according to the state of organisation, and we find the following five matching system dynamics (a minimal simulation sketch follows the list):

1) Organising system: Predomination of a generative positive feedback causal loop. A living embryo in a womb would be an example.

2) Growing neutral system: Accumulative negative feedback causal loop. This is a homeostatic system that is expanding in some way. More births than deaths in a population would be an example.

3) Stabilised neutral system: Equalised negative feedback causal loop. An equal number of births and deaths in a population would be an example.

4) Decaying neutral system: Diminishing negative feedback causal loop. More deaths than births in a population would be an example.

5) Disorganising system: Degenerative positive feedback causal loop. A decomposing corpse would be an example.
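The three neutral-system examples can all be expressed with one population update rule in which births add and deaths subtract; only the relation between the two rates changes the class of the system. A minimal sketch with hypothetical rates:

```python
# Minimal sketch of the population examples above (rates are hypothetical).

def project(population, birth_rate, death_rate, years=10):
    """Project a population forward: births add, deaths subtract each year."""
    trace = [round(population)]
    for _ in range(years):
        population += (birth_rate - death_rate) * population
        trace.append(round(population))
    return trace

print("growing neutral    (births > deaths):", project(1000, 0.03, 0.01))
print("stabilised neutral (births = deaths):", project(1000, 0.02, 0.02))
print("decaying neutral   (births < deaths):", project(1000, 0.01, 0.03))
```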

Armed with a tektological comprehension of feedback in conjunction with degree of organisation (Figure 4), we can gain insights into the behaviour of complex systems. For instance, work being done by Brian Arthur et al (Waldrop 1992) is moving in this direction, for they are simulating and studying what happens with generative positive feedback loops. Arthur (1990) published a very courageous article of economic scientific heresy in Scientific American that has still not been properly digested by economists and financial investors. It is not within the scope of this paper to discuss Complexity Theory or things like surprising or apparently random, chaotic emergence, but we would like to see complexity researchers review Bogdanov's Tektology carefully and see some of the implications that become obvious for good decision-making, which, as we have already pointed out, requires the proper System Paradigm of feedback cognition. Otherwise the stupidity of privatisation and other similar efforts will continue to wreak havoc. Note, for instance, the recent crashes of old Aeroflot passenger jets in the new regional carriers, which lack money for proper maintenance. The now divided-up, privatised Aeroflot is less efficient and more costly (in lives), and there is little prospect of an early improvement or lessening of the danger of flying on such regional carriers' planes. This problem is, we hope all will eventually acknowledge, perfectly predictable and describable with Tektology. George Gorelik (1987) has made some effort to get Tektology looked at, but the whole thing is so far treated as obscure and irrelevant. What the East liked to suppress, the West now likes to ignore. But tektological sorts of things are happening.

SYSTEM CLASS                                   DOMINANT FEEDBACK
1. Organising system ("babyhood")              generative positive
2. Growing neutral system ("childhood")        accumulative negative
3. Stabilised neutral system ("adulthood")     equalised negative
4. Decaying neutral system ("old age")         diminishing negative
5. Disorganising system ("death")              degenerative positive

Figure 4 - The Five Classes of Systems


Visual Facilitation

Pure systems science research is meaningless unless both the scientist and the lay decision-maker possess the requisite cognitive skills for systems thinking and, we should add, for decision thinking in general. In order to achieve better decision-making about systems, we should be absolutely clear about the urgency of rapid induction of as many decision-makers as possible into at least the three main requisite cognitive skills of multi-factor awareness, multi-future scenario planning, and causal feedback comprehension. Since it is unlikely that decision-makers will all be cognitive scientists, corporate planners or systems scientists, what is the method of induction? We think it is the visual facilitation of groups of decision-makers working with live, engaging real-world problems and opportunities. There is evidence that practical problem solving only happens when problem-solving techniques are learned with living situations of immediate concern rather than as academic exercises (Holland et al, 1986). In addition, we are all familiar with Miller's Law (Miller, 1956), which states that a human cognitive system working in the darkness of the brain's internal semantic memory cannot keep track, on average, of more than seven, plus or minus two, items or issues at any one time. We have been told by people in industry, on more than one occasion, that even the most competent decision-makers at a meeting cannot handle more than three or four important factors together coherently at one time. However, with group visual facilitation techniques, the number of issues that can be coherently arrayed and handled by decision brains dramatically increases. Arnheim (1969) and McKim (1972) made pioneering efforts in demonstrating the usefulness of graphical facilitation of thinking. In the field of facilitating decision-making in general, a great deal of practical field research has been documented by Eden and Radford (1990), as well as Friend and Hickling (1987), and Checkland and Scholes (1990).

Particular emphasis has been placed by Hodgson (1992) on combining mobile representation, effective thinking frameworks as transitional objects, and interactive facilitation skills. Such techniques use a combination of "low tech" tools, such as whiteboards and magnetic shapes (idons), with visual tracking software to relieve individual and group memory. Critical in these processes is the capability to stimulate organisational self-reflection (Finney and Mitroff 1986) to ensure that the cognitive skills are activated as "theory in use" rather than "espoused theory". This accumulating praxis with decision-makers, as a field of applied cognitive research, has led us to isolate three main techniques of visual facilitation that directly stimulate the three main requisite cognitive skills. These are hexagon modelling, for shared mapping of all relevant factors or issues in a situation; scenario matrix modelling, for mapping decision-sets across alternative futures; and causal loop modelling, for mapping system dynamics for possible simulation and testing of assumptions about the systemic effects of proposed decisions generated by hexagon modelling, scenario matrix modelling and other techniques (Figure 5).
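Of these three techniques, causal loop modelling is the most directly computable. As a minimal sketch (a generic representation of our own, not the format of any particular idon software), a causal loop diagram can be held as a set of signed links, and the polarity of any loop found by multiplying the signs of its links: an even number of negative links gives a reinforcing (positive) loop, an odd number a balancing (negative) one. The thermostat from earlier supplies the example:

```python
# Minimal sketch: a causal loop diagram as signed links (our own generic
# representation). Loop polarity is the product of the link signs.

links = {                        # (cause, effect): +1 same direction, -1 opposite
    ("gap", "heating"): +1,      # larger gap below the setpoint -> more heating
    ("heating", "temperature"): +1,
    ("temperature", "gap"): -1,  # higher temperature -> smaller gap
}

def loop_polarity(nodes):
    """Polarity of the loop that visits `nodes` in order and closes on itself."""
    sign = 1
    for cause, effect in zip(nodes, nodes[1:] + nodes[:1]):
        sign *= links[(cause, effect)]
    return "reinforcing (positive)" if sign > 0 else "balancing (negative)"

print(loop_polarity(["gap", "heating", "temperature"]))   # balancing (negative)
```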

 

Figure 5 - The New Requisite Cognitive Skills for Decisions About Systems

 

The reason these techniques work so well is that pattern recognition is the main cognitive strength of the human brain, just as reproducing complex patterns mentally ("in the head with eyes closed") is its greatest weakness (Cohen, 1985), which harks back to Miller's Law. If we want to think better, we have to get our thinking externally represented, where we can simultaneously juggle complex factors in a variety of alternative, comprehensible patterns. When we do this, our learning as individuals becomes linked through shared meaning (Novak and Gowin, 1984). Decision-making is thus freed from old sphexlike cognitive deficiencies in the brains of those controlling and guiding systems, and this is achieved in a way that is objective and interesting for all who must share decision thinking and action. We even envision a future when groups of decision-makers will enter their thinking world together in full stereoscopic visual representation through Virtual Reality conferencing. There are already signs of the first glimpses of this on the horizon as a serious scientific investigation (Helsel, 1992).


REFERENCES

Anderson, J.R., ed. (1981). Cognitive Skills and Their Acquisition; Hillsdale, NJ: Erlbaum.

Argyris, C. (1985). Strategy, Change and Defensive Routines; London: Pitman.

Arnheim, R. (1969). Visual Thinking; Berkeley: University of California Press.

Arthur, W.B. (1990). "Positive Feedbacks in the Economy"; Scientific American, 262, Feb. '90, pp. 80-85.

Axelrod, R., ed. (1976). Structure of Decision; Princeton, NJ: Princeton University Press.

Beer, S. (1979). The Heart of the Enterprise: The Managerial Cybernetics of Organisation; Chichester: John Wiley & Sons.

Bloomfield, B.P. (1986). Modelling the World: The Social Constructions of Systems Analysis; Oxford: Basil Blackwell.

Bogdanov, A. (1912). The General Science of Organisation: Essays in Tektology (Translated by Gorelik, G.); Seaside, CA: Intersystems Publications.

Checkland, P., Scholes, J., eds. (1990). Soft Systems Methodology in Action; Chichester: John Wiley & Sons.

Churchland, P.M. (1989). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science; Cambridge, MA: MIT Press.

Cohen, R. ed. (1985). The Development of Spatial Cognition; Hillsdale, NJ: Erlbaum.

Dennett, D.C. (1991). Consciousness Explained; London: The Penguin Press.

Dennett, D.C. (1984). Elbow Room: The Varieties of Free Will Worth Wanting; Oxford: Clarendon Press.

Dougherty, J.W.D., ed. (1985). Directions in Cognitive Anthropology; Urbana: University of Illinois Press.

Eden, C., Radford, J., eds. (1990). Tackling Strategic Problems: The Role of Group Decision Support; London: Sage.

Finney, M., Mitroff, I.A. (1986). Strategic Plan Failures; in The Thinking Organization; San Francisco: Jossey-Bass.

Friend, J., Hickling, A. (1987). Planning Under Pressure: The Strategic Choice Approach; Oxford: Pergamon Press.

Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences; London: William Heinemann Ltd.

Gigch, J.P. van, ed. (1987). Decision Making About Decision Making: Metamodels and Metasystems; Tunbridge Wells, Kent: Abacus Press.

Gorelik, G. (1987). Bogdanov's Tektologia, General Systems and Cybernetics; Cybernetics and Systems: An International Journal, 18: 157-175.

Heidegger, M. (1927). Being and Time (Translated by Macquarrie, J. & Robinson, E.); Oxford: Basil Blackwell.

Helsel, S.K., ed. (1992). Beyond the Vision: The Technology, Research and Business of Virtual Reality; Proceedings of Virtual Reality '91, the Second Annual Conference; London: Meckler.

Hodgson, A.M. (1992). Hexagons for Systems Thinking; European Journal of Operational Research, 59: 220-230, North-Holland.

Hofstadter, D.R. (1982). "Can Creativity Be Mechanised?"; Scientific American, 247, Sept. '82, pp. 18-34.

Holland, J.H., Holyoak, K.J., Nisbett, R.E., Thagard, P.R. (1986). Induction: Processes of Inference, Learning, and Discovery; Cambridge, MA: MIT Press.

Husserl, E. (1933). Cartesian Meditations: An Introduction to Phenomenology (Translated by Cairns, D.); London: Martinus Nijhoff Publishers.

Kempton, W. (1987). Two Theories of Home Heat Control; in Cultural Models in Language and Thought; Holland, D. & Quinn, N., eds.; Cambridge: Cambridge University Press.

Lycan, W.G. ed. (1990). Mind and Cognition: A Reader; Oxford: Basil Blackwell.

McKim, R.H. (1972). Experiences in Visual Thinking, Boston, MA: PWS Engineering.

Meadows, D.H., Meadows, D.L., Randers, J., Behrens III, W.W. (1972). The Limits to Growth: A Report for the Club of Rome's Project on the Predicament of Mankind; New York: Universe Books.

Mestel, R. (1994). Education: final frontier; New Scientist, London, 26 February 1994.

Miller, G.A. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information; Psychological Review, 63: 81-97.

Nisbett, R. & Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgement; Englewood Cliffs, NJ: Prentice-Hall, Inc.

Norman, D.A. (1988). The Psychology of Everyday Things; New York: Basic Books.

Novak, J.D., Gowin, D.B. (1984). Learning How to Learn; Cambridge: Cambridge University Press.

Sartre, J.P. (1943). Being and Nothingness: An Essay on Phenomenological Ontology (Translated by Barnes, H.E.); London: Methuen & Co.

Senge, P.M. (1990). The Fifth Discipline: The Art and Practice of the Learning Organisation; New York: Doubleday Currency.

Steinbruner, J.D. (1974). The Cybernetic Theory of Decision: New Dimensions of Political Analysis; Princeton, NJ: Princeton University Press.

Sternberg, R.J., ed. (1982). Advances in the Psychology of Human Intelligence, Vol. 1; Hillsdale, NJ: Erlbaum.

Waldrop, M.M. (1992). Complexity: The Emerging Science At the Edge of Order and Chaos; London: Viking.

Wooldridge, D. (1963). The Machinery of the Brain; New York: McGraw-Hill.

Wooldridge, D. (1968). Mechanical Man: The Physical Basis of Intelligent Life; New York: McGraw-Hill.
