Sunday, February 1, 2009

A discussion on Ontology of numbers

Summary of Collins A.: ON THE QUESTION ‘DO NUMBERS EXIST?’ in The Philosophical Quarterly, Vol. 48, No. 190 (1998). pp. 23-36

1.0 Introduction

In human thought, numbers appear to have originated from various physical situations encountered by man – the difference between a herd and a single goat, the difference between three deer and one deer, the correspondence between the number of shadows and the number of people, and so on. At some point, the abstract property common to, say, two groups (e.g. 5 arrows and 5 birds) was recognized, and this is what is called a number. This is a primitive notion of number; in contemporary times we have various kinds of numbers – the rationals, the irrationals, the imaginary numbers, etc. But all these kinds of numbers are equally abstract. So the question arises: where do numbers exist, and do they exist at all? The paper is an attempt to answer this question. The author treats the problem as having a linguistic origin. He rejects the usual answers to the problem, namely realism and nominalism, so I will first explain these usual answers in brief before presenting the author's views.

2.0 Realism and Nominalism

“Realism is a philosophy of mathematics and an ontological commitment” (Collins, p. 23). That is, realism is the belief that properties, usually called universals, exist independently of the things that manifest them. Therefore, if we removed all rectangular objects from the universe, the universal ‘rectangle’ would still exist. Thus there are two aspects to realism. First, there is a claim about existence: billiard tables, football grounds and books exist, and so does rectangularity. The second aspect of realism concerns independence: the fact that the billiard table exists and is rectangular is independent of anything anyone happens to say or think about the matter. The question of the nature and plausibility of realism is a controversial issue (Miller, 2002).

Nominalism is an anti-realist stand: the doctrine that abstract concepts, general terms, or universals have no independent existence but exist only as names. The relation between a universal and its name is conventional.

But there is a problem with nominalism, as pointed out by Prof. Gomatam during the class discussion: a universal, say rectangularity, cannot be attributed to arbitrary things, so universals are not just names.

3.0 The short argument

It is a familiar thought that we might posit numbers to explain the known arithmetical truths and scientific truths whose expression requires numerical representation (Collins, pp. 23). To explain this, the author gives an example:

There are four prime numbers smaller than 8.

If this fact is known, then we know that numbers exist. This is what the author calls the short argument for the existence of numbers. Further, the author believes that if there were no debate between realism and nominalism, the short argument would be satisfactory. I am unable to agree with this point. First of all, the condition of ‘primality’ comes into the discussion only when we talk about numbers. It seems that first we have numbers and then we define the condition of primality on them. Thus, the fact that “there are four prime numbers less than 8” cannot be known without prior knowledge of numbers. It seems that the short argument is trying to prove the existence of numbers in hindsight, i.e. by first assuming them. I think Quine is also pointing to this fact in the following quote: “…indispensability of mathematical entities and the intellectual dishonesty of denying the existence of what one daily presupposes” (Quine as quoted by Putnam in Collins, 1998, pp. 26). I think it is right to say that if numbers exist, it follows that ‘there are four prime numbers smaller than 8’. Although this does not answer the original question, it shows the problem in the short argument. Even if we consider scientific truths, for example that the velocity of light is 3 × 10^8 m/s, the same problem arises. The short argument says that since we know this fact, it implies that numbers exist. But the concept of velocity (= distance/time) comes into the discussion only if we have prior knowledge of numbers. The author claims that the short argument leaves no space for the positing of numbers: if one knows that there are four primes less than 8, it entails that one already has knowledge of numbers and need not posit them, whereas both realism and nominalism posit numbers.

Quine is a soft realist. As quoted by Putnam, Quine thinks it is intellectual dishonesty to use and talk about mathematical entities, yet deny their existence. Quine treats everything as a myth, but he treats the myth of physical objects as superior to, say, Homer’s gods. He says, “…The myth of physical objects is epistemologically superior to most in that it has proved more efficacious as a device for working a manageable structure into the flux of experience”. Hartry Field has a related view: he believes that if numbers can be shown to be a merely useful myth, then they can be treated as fictional and thus as non-existent.

4.0 Linguistic solution

The author says that the short argument is sufficient to show the existence of numbers, but that the question about the way they exist should not be asked, as it is inappropriate. He accepts that numbers do not exist as physical objects do, yet it is still right to say that they exist. As pointed out by Prof. Gomatam, it seems that what the author wants to say is that by asking about the way numbers exist, we are making a category mistake. For example, if a pen is in a pocket, we can ask someone to pull it out; whereas if someone says he has an idea in his head, we do not ask him to pull it out.

Another problem in thinking about the existence of numbers is that they are defined by giving them negative attributes: they are non-physical, non-spatial, non-temporal, non-corruptible, non-contingent, and they do not enter into causal relationships (Collins, pp. 30).


5.0 Conclusion

I am unable to agree that the short argument is sufficient to establish the existence of numbers, but I am inclined to believe that numbers exist, based on Quine’s argument. Field also recognizes Quine’s argument as the strongest argument for realism. Quine argues that talk about mathematical entities is indispensable, so denying their existence amounts to intellectual dishonesty. Yet the question of what it is for them to exist is not answered. The author argues that the question is wrong because our discourse about numbers does not generate it. Also, because numbers are given only negative attributes, the question of what it is for them to exist sounds wrong to me.


6.0 References:

1. Miller, A (2002), Stanford Encyclopedia of Philosophy, http://setis.library.usyd.edu.au/stanford/entries/realism/.

2. Collins, A. (1998), ON THE QUESTION ‘DO NUMBERS EXIST?’, The Philosophical Quarterly, Vol. 48, No. 190, pp. 23-36.

SCIENTIFIC REALISM AND QUANTUM PHYSICS

Summary of Priest G., “Primary Qualities Are Secondary Qualities Too”, British Journal for Philosophy of Science, Vol. 40, (1989), pp. 29-37.

1.0 Introduction

In the paper, the author is comparing the current conflict between quantum physics and scientific realism with the scientific revolution of the 17th century. According to the author, quantum physics is indicating a change in the 17th century scientific conception of matter. By doing so he is arguing that such a change in the conception of matter will lead to a realistic interpretation of quantum mechanics.

2.0 Primary and Secondary Qualities

The mechanistic conception of matter, which was formed primarily by the work of Galileo and Descartes, characterizes matter by its extension and locatability in space and time. These are what are called primary properties. Matter would have these properties even if no conscious observer were present. But some properties of matter, like colour and smell, will not be present without a conscious observer. Such properties are called secondary properties, and they arise from the interaction between an observer and the object.

With the advent of atomic and wave theories, it became possible to show that the dispositions which give rise to secondary properties were really aggregate primary properties of the micro-structure of matter.

According to the author, a similar kind of revision in the conception of matter is indicated by quantum mechanics. In quantum mechanics, some properties, like the coefficients of the eigenstates, are observer-independent, and hence such properties are analogous to the primary properties of the mechanistic conception of matter. Conversely, some properties like spin, which are primary in the mechanistic conception, are observer-dependent, and such properties are analogous to the secondary properties of the mechanistic conception.

3.0 EPR

The EPR argument brings out the strongest objection to realism. According to realism, happenings at one place cannot affect happenings at other places instantaneously, whereas EPR seems to say the opposite.

The author argues that the problem arises when the idea that there are two particles (in the context of EPR) interacting in space-time is forced onto the situation. In fact, there is no problem for realism if we accept that the Ψ state is what describes reality and thus that there are no two particles out there.

How Physicalism And ‘Common Sense’ Description Of The World Can Be Made Compatible?

1.0 Introduction

It is commonly known that physical theories conflict with our ordinary common-sense views. For example, it is our ordinary experience that the sun revolves around the earth, whereas scientific theory says that both the sun and the earth revolve about a point called the centre of mass (for the sun and the earth, this approximately means that the earth revolves around the sun, which is the opposite of our ordinary experience). In the paper, the author argues for an interpretation of physicalism which is compatible with common sense.


The author argues that scientific discoveries cannot contradict in any fundamental way those tenets of common sense that are based on ordinary experience, if we believe that scientific investigation involves a refinement of common sense; scientific discoveries can only undermine those of our ordinary views about the world that are based on inadequate or distorted observation. I am unable to agree with this argument. It is our ordinary experience that we have free will (i.e. we have the capability to make choices and to decide among them); I think this experience is neither an inadequate nor a distorted observation. But no physical theory can account for it; rather, current physical theories reject it as entirely deceptive.

2.0 Tentative Realism

Physics is a precise discipline, that is, at any time most physicists agree as to which theories are acceptable. But there is no general agreement on the kind of interpretation to be given to the mathematical formalism of a physical theory.

Now the question arises whether a successful mathematical formalism, given a physicalist interpretation, constitutes a possible theory of physics, as opposed perhaps to a theory of metaphysics. To this the author answers by presenting Popper’s solution of demarcating physics from metaphysics on the basis of experimental falsifiability.


That is, a theory belongs to physics if it is experimentally falsifiable. From this requirement it becomes clear that the kind of interpretation of the mathematical formalism demanded by physicalism is what is called ‘tentative realism’. The fact that the theory must be open to experimental refutation ensures that it is meaningful to call the theory false, which in turn ensures that it must be meaningful to call the theory true.


3.0 An Acceptable Physicalism

The author's main aim in the paper is to present a kind of physicalism that is compatible with common sense. In order to do that, I think he is using the concept of drawing distinctions propounded by Spencer-Brown in his book ‘Laws of Form’. Physical theory and common-sense theory draw different kinds of distinctions in the world; thus, they classify things in terms of different kinds of resemblances between things. The author suggests that the physicalism needed in the present context should classify things in the following way:
  1. Things are classified in the simplest possible way, i.e. in terms of causal sequences.
  2. Things are classified in terms of only those resemblances which any intelligent being, however its sensory equipment may be constructed, can discern, discover, or become aware of. That is, the classification is sense-independent.

On the other hand, common sense theory classifies things in terms of resemblances which are discernible to human beings or are associated with their experiences.

Now, common-sense theory has a property called ‘colour’. This property is discernible to humans because of their sensory equipment, but it falls outside the periphery of a physicalism satisfying the above two requirements. In this way the common-sense theory and the physicalist theory are compatible.


References:

1. http://www.nick-maxwell.demon.co.uk/About%20Me.htm
2. Maxwell N. (1965: May – 1966: Feb), “PHYSICS AND COMMON SENSE”, British Journal for Philosophy of Science, Vol. 16, pp. 295-311.

Why Did Einstein Say that “God Does Not Play Dice”?

1.0 Introduction

Apart from the fact that Einstein contributed fundamentally to physics, he is also known for his life-long opposition to the most successful physical theory – quantum theory. In this context he made his famous statement, “God does not play dice [with the universe]”. It is difficult to find the exact reasons why the man who at first said that quantum theory was revolutionary later found objections to the developments in the field (especially the Copenhagen interpretation). The author tries to uncover the reasons which motivated Einstein to take this stand. In order to do that, the author presents Einstein’s conception of scientific realism (which is also the title of the paper).


2.0 Einstein a Realist

We will start with a quote by Einstein, “It is basic for physics that one assumes a real world existing independently from any act of perception” (Einstein, quoted in Gomatam, pp. 4). From this quote it appears that Einstein was a realist in the sense in which it is commonly accepted. But on further analysis we find that such a conclusion is not tenable. As Einstein said, “I agree physics concerns the ‘real’, but I am not a realist.”

Einstein differs from the conventional views of scientific realism on two counts. First, he was not interested in the “context of justification”; he was more interested in the relation between physics and reality at the level of theory creation, i.e. the “context of creation”. Secondly, even at the level of theory creation, his conception of realism involves not the relation between theory and reality, but that between the physicist and reality (Gomatam, pp. 1).

Einstein’s claim is that physics is the attempt at the conceptual construction of a model of the real world and of its lawful structure, and that by means of conceptual thinking we can grasp reality. Now we can see some insight into a prospective solution to the original question of why Einstein said that “God does not play dice”: according to Einstein, reality can be grasped by means of conceptual thinking, but this is not possible in the Copenhagen interpretation of quantum physics.


3.0 Relation between creator-physicist and reality

The author brings out a very important point about the process of theory creation as viewed by physicists themselves. “Going by their [physicists’] testimonies, the creation of a successful theory is grounded in a profound grasp of the physical reality that is neither purely conceptual nor deduced logically from experiences. It seems to be, as it were, a mystical insight into nature, ‘akin to a religious worshipper’ as Planck puts it” (Gomatam, pp. 6).

According to Einstein, physics concerns the real in the sense that physics is an effort of the physicist to express the grasp of reality that he has in his thinking through mathematically constructed concepts that have empirical usefulness. The author points out that Einstein’s realism concerns the relationship between the creator-physicist and reality. From this it is quite clear that there is a component of subjectivity in Einstein’s realism. MacKinnon remarks on this: “The unparalleled success of Einstein’s early efforts gave his realism an extremely personal quality…” Given this form of realism, it becomes clear why Einstein could not fully accept the Copenhagen interpretation: it settled for a probabilistic, non-visualizable account of physical behavior, so that even the physicist (the creator of the theory) did not grasp reality. Hence he said, “Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the ‘old one’.”

4.0 Conclusion

The paper very clearly brings out some aspects of Einstein’s worldview which help clarify the reasons for his objection to quantum theory as the final theory. The fact that Einstein’s realism has a subjective component can be attacked, but Planck and possibly Newton held similar views, and it is not possible to ignore the views of physicists of such stature.


5.0 References:

1. Gomatam R. (2005) ‘Einstein’s Conception of Scientific Realism’, Unpublished Manuscript.

2. http://en.wikipedia.org/wiki/Einstein.

Saturday, January 24, 2009

Worrall J.: ‘Structural Realism: The Best of Both Worlds?’

RECOVERING A WEAKER FORM OF REALISM – STRUCTURAL REALISM

1.0 INTRODUCTION

Scientific realism is the doctrine that the entities postulated by scientific theories are real entities in the world, with approximately the properties attributed to them by the best available scientific theories. The doctrine was widely accepted until the late nineteenth century, primarily due to the success of Newtonian mechanics. But with the developments in physical theories, especially quantum physics and relativity, it became very difficult to hold on to the realist position. In the paper, the author's aim is to recover some sort of realism. He argues in favor of structural realism, a position also held by Poincaré. I myself tend to believe in realism. I aim to work in physics and I find it very difficult to accept that my work is not about the real world. I think it is appropriate to quote Planck here: “Why should anybody go to the trouble of gazing at the stars if he did not believe that the stars were really there? … .” (Planck, quoted in Gomatam, pp. 5)

2.0 NO-MIRACLES ARGUMENT

This is the main argument for a realist. The argument says that it would be a miracle if a theory made many correct empirical predictions without what the theory says about the fundamental structure of the universe being correct, or “essentially” correct. Since miracles are not to be accepted, it is plausible to conclude that presently accepted theories are “essentially” correct. The argument requires empirical predictions for which the theory has not been engineered – for example, the prediction made by GTR about the bending of light rays when they pass near the sun.

Although the argument is forceful, it runs into problems right away. Newton’s theory of gravitation had a wide range of predictive success – the return of Halley’s Comet, the discovery of Neptune, etc. So according to the no-miracles argument, Newton’s theory captures the fundamental structure of the universe. But we know that Newton’s theory was rejected in favor of Einstein’s GTR. What is even more striking is that the two theories are logically inconsistent; that is, if GTR is true then Newton’s theory is false, and vice versa. Einstein’s theory is not a mere extension of Newton’s theory. This shows that the development of science is not cumulative. Thus the no-miracles argument is not sufficient to establish realism.

Here the author points out that no present-day realist would claim that we have grounds for holding that presently accepted theories are true. Instead, a scaled-down version called modified realism is propounded. Modified realism says that our present theories in mature science are approximately true.


3.0 PESSIMISTIC INDUCTION

This is the main argument against realism. The history of science is full of theories which at different times and for long periods were empirically successful, and yet were shown to be false in the claims they made about the world. It is similarly full of theoretical terms featuring in successful theories which do not refer. Therefore, by a simple induction on scientific theories, our current successful theories are likely to be false, and many or most of the theoretical terms featuring in them will turn out to be non-referential. Hence the empirical success of a theory provides no warrant for the claim that the theory is approximately true. This argument is known as the pessimistic induction; it was first propounded by Poincaré.

If the above argument is accepted, then realism is undoubtedly untenable. Here the author shows two ways of arguing that this picture of theory-change is wrong. First, the successful empirical content of a previously accepted theory is in general carried over to the new theory, but its basic theoretical claims are not. Thus theories are best construed as making no claims beyond their empirical consequences; this position is called pragmatic anti-realism. The other alternative uses the fact that although scientific theories are not explanatorily cumulative, they are predictively cumulative. Here it is believed that the observation-transcendent parts of scientific theories are not just codification schemes; they are accepted descriptions of the reality hidden behind the phenomena, and our present best theories are our best shots at the truth. Yet there is no reason to believe that these present theories are closer to the truth than their rejected predecessors. This is known as conjectural realism, given by Popper. As a realist, I find this position fairly acceptable, but scientists generally agree that present theories are closer to the truth than previous ones. Another problem is that conjectural realism makes no concessions to the no-miracles argument (the main argument for realism).


4.0 STRUCTURAL REALISM

So the aim now is to accommodate both the intuitions underlying the no-miracles argument and the historical facts about theory-change in science; such a position would be more plausible than pragmatic anti-realism and conjectural realism. Structural realism is such a position. To explain it, I will give the example used by the author. Fresnel’s theory of light was based on the assumption that light consists in periodic disturbances originating in a source and transmitted by an all-pervading mechanical medium (the ether). Fresnel’s theory had predictive successes, like the prediction of a white spot at the center of the shadow of an opaque disc held in light; thus it would be accepted as mature science. But after Maxwell’s theory, light came to be viewed as a periodic disturbance, not in an elastic medium, but in the electromagnetic field. There is still continuity in the shift from Fresnel to Maxwell, but it is a continuity of structure, not of content. We can say that Fresnel completely misidentified the nature of light, but it is no miracle that his theory enjoyed predictive success, because it attributed the right structure to light.


5.0 CONCLUSION

From the paper, I conclude that the doctrine of structural realism is able to accommodate both the no-miracles argument and the pessimistic induction. This is important because both arguments are very strong, and falsifying either of them is not easy. Thus structural realism seems to me the most appropriate stand for a realist.


6.0 REFERENCES

1. Worrall J. (1989), ‘Structural Realism: The Best of Both Worlds?’, Dialectica, Vol. 43, pp.99-124.
2. Gomatam R. (2005), ‘Einstein’s Conception of Scientific Realism’, Unpublished manuscript.

Chaos Theory

A. BRIEF HISTORY

1. Father of Chaos Theory – Henri Poincaré


Poincaré is regarded as the last "universalist", capable of understanding and contributing to virtually all parts of mathematics. He is considered the father of chaos theory, although the term ‘chaos’ was not coined by him. He worked on the famous three-body problem. Due to the efforts of Poincaré, today we know that the problem is not solvable analytically, but in 1889 this was not known. The three-body problem is a specific case of the n-body problem (n >= 3). Using Newton’s laws, the motion of two masses interacting under gravitational forces can be expressed as a set of differential equations which has a closed-form solution; in mathematical parlance, the two-body system is “analytically solvable”. Poincaré showed that a system of three bodies interacting under gravitational forces cannot be solved analytically: its differential equations have no closed-form solution. Intuitively, this indicates that long-range prediction of the orbits is not possible. The solutions in such cases can only be approximated by using computational methods.
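A sketch of the "computational methods" mentioned above, written as plain Python (the masses, positions, step size and G = 1 are my own illustrative choices, not from any source): a crude semi-implicit Euler integrator marches Newton's equations forward for three gravitating bodies even though no closed-form solution exists.

```python
def accelerations(positions, masses, G=1.0):
    """Pairwise Newtonian gravitational acceleration on each body (2D)."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def step(positions, velocities, masses, dt):
    """One semi-implicit Euler step: update velocities, then positions."""
    acc = accelerations(positions, masses)
    velocities = [[v[0] + a[0] * dt, v[1] + a[1] * dt]
                  for v, a in zip(velocities, acc)]
    positions = [[p[0] + v[0] * dt, p[1] + v[1] * dt]
                 for p, v in zip(positions, velocities)]
    return positions, velocities

# Three bodies: no closed-form solution exists, but we can still
# march the equations forward step by step.
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.0], [0.0, 0.5], [-0.5, 0.0]]
masses = [1.0, 0.1, 0.1]
for _ in range(100):
    pos, vel = step(pos, vel, masses, 0.01)
```

This is exactly the trade Poincaré's result forces: we give up an exact formula and settle for a step-by-step approximation whose error grows with the prediction horizon.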

2. Edward Lorenz

Poincaré’s monumental discovery of what is now called deterministic chaos was neglected, probably because at the time scientists were primarily interested in relativity theory and quantum physics. Also, without a computer it is very difficult to exhibit the behavior of a chaotic deterministic system.

If we are allowed to say that Poincaré gave birth to chaos theory, then it can safely be said that Lorenz gave it a rebirth. Edward Lorenz was a meteorologist working at MIT on long-range prediction of weather using computer models. The models consisted of differential equations no more complicated than Newton’s laws of motion. He created a basic computer program using mathematical equations which could theoretically predict what the weather might be. One day he wanted to run a particular sequence again, and to save time he started it from the middle of the sequence. After letting the sequence run, he returned to find that it had evolved completely differently from the original. At first he couldn’t comprehend such different results, but then he realized that he had restarted the sequence with his recorded results to 3 decimal places, whereas the computer had stored them to 6 decimal places. As the program was deterministic, one would expect a sequence very close to the original; however, this tiny difference in initial conditions had given him completely different results.
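Lorenz's restart can be re-enacted in miniature (a toy of mine: his actual model was a system of weather equations, for which the chaotic logistic map f(x) = 4x(1-x) stands in here). Restart a run from a value rounded to 3 decimal places and watch the two runs diverge:

```python
# Toy re-enactment of Lorenz's restart with a simple chaotic map.

def f(x):
    return 4.0 * x * (1.0 - x)

x_full = 0.123456             # the machine's stored value ("6 decimals")
x_restart = round(x_full, 3)  # the re-entered value ("3 decimals")

a, c = x_full, x_restart
max_gap = 0.0
for n in range(50):
    a, c = f(a), f(c)
    max_gap = max(max_gap, abs(a - c))

# The runs start only 0.000456 apart, yet within a few dozen steps
# the gap grows to order one: the trajectories are decorrelated.
```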

This sensitivity to initial conditions became known as the Butterfly Effect, because it evokes the idea that a butterfly flapping its wings could produce a tiny change in the atmosphere and trigger a series of changes which could eventually lead to a hurricane on the other side of the world. The Butterfly Effect is a key characteristic of a chaotic system.

B. LOGISTIC FUNCTION

In this section, I will show the behaviour of a very simple function known as the logistic function. This function is used consistently in the literature to introduce chaos theory, because it is the simplest one-dimensional, nonlinear (x-squared term), single-parameter (b) model that shows an amazing variety of dynamical responses. The function can be intuitively seen as a simulation of the population dynamics of some species over time.

Logistic Function: f(x) = b*x*(1-x) = b*x – b*x^2, where b is a constant known as the effective growth rate.

The population size at the nth year, f^n(x) (the composition of the function f applied n times), is defined relative to the maximum population size the ecosystem can sustain, and is therefore a number between 0 and 1. The parameter b is restricted to between 0 and 4 to keep the system bounded and the model physically meaningful.
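The iteration f^n(x0) can be written directly in a few lines of Python (a minimal sketch; the function names are my own):

```python
def logistic(x, b):
    """One step of the logistic map f(x) = b*x*(1 - x)."""
    return b * x * (1.0 - x)

def orbit(x0, b, n):
    """The iterates f(x0), f^2(x0), ..., f^n(x0)."""
    xs = []
    x = x0
    for _ in range(n):
        x = logistic(x, b)
        xs.append(x)
    return xs

# For 0 <= x0 <= 1 and 0 <= b <= 4 every iterate stays in [0, 1];
# e.g. for b = 2 the orbit converges to the fixed point 1 - 1/b = 0.5.
xs = orbit(0.2, 2.0, 50)
```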

The aim is to know what happens to the long-term behavior of the system for a fixed value of the parameter b and any given initial condition, say x0. So I vary the value of b and show the various kinds of dynamical responses of the function.

(a) 0 <= b <= 1
For values of b in [0,1], if we start iterating the equation from any value of x, the value of x will settle down to 0. The point zero is called a fixed point of the system and is stable for b in [0,1]. A stable fixed point has the property that points near it are moved even closer to it under iteration; for an unstable fixed point, nearby points move away as time progresses. The behaviour of the logistic function for b=0.9 is shown on the graph below:
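The settling to 0 is easy to check numerically (my own sketch, using the b = 0.9 value from the text):

```python
# For b = 0.9 each step shrinks x, since f(x) = 0.9*x*(1-x) <= 0.9*x
# on [0, 1], so the iterates decay geometrically to the fixed point 0.
b = 0.9
x = 0.7
for _ in range(200):
    x = b * x * (1.0 - x)
# x is now below 0.7 * 0.9**200, i.e. essentially 0
```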

(b) 1 < b < 3

For values of b in (1,3), the iterates, instead of being attracted to zero, are attracted to a different fixed point. The fixed point x = 0 is unstable for b in (1,3), and a new fixed point x = 1 - 1/b exists for b >= 1. The behaviour of the logistic function for b=2.8 is shown on the graph below:
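A quick numerical check (my own sketch, with b = 2.8 as in the text) confirms convergence to x = 1 - 1/b:

```python
# For b = 2.8 the iterates are attracted to x = 1 - 1/b. The derivative
# there is f'(x*) = 2 - b = -0.8, so |f'| < 1 and the point is stable.
b = 2.8
x = 0.2
for _ in range(500):
    x = b * x * (1.0 - x)
fixed_point = 1.0 - 1.0 / b   # = 0.642857...
```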

(c) 3 <= b <~ 3.5


Apart from the initial values 0 and (1 - 1/b), for all other initial values the logistic equation no longer converges to any fixed point. Instead, the system settles down to a period-2 limit cycle; that is, the iterates of the logistic equation oscillate between two values. The fixed points 0 and 1 - 1/b still exist, but they are unstable. The behaviour of the logistic function for b=3.2 is shown on the graph below:
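The period-2 cycle for b = 3.2 can be exhibited numerically (my own sketch):

```python
# After the transient dies out, the iterates alternate between two
# values x1, x2 with f(x1) = x2 and f(x2) = x1.
b = 3.2
x = 0.2
for _ in range(1000):          # let the transient die out
    x = b * x * (1.0 - x)
x1 = x
x2 = b * x1 * (1.0 - x1)       # the other point of the 2-cycle
x1_again = b * x2 * (1.0 - x2) # applying f twice returns to x1
```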

(d) 3.5 <~ b <~ 3.65


On further increasing b, the period-2 limit cycle becomes unstable and a period-4 limit cycle is created. The behaviour of the logistic function for b=3.52 is shown on the graph below:
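Likewise for b = 3.52 (my own sketch), the iterates settle onto four distinct values visited in order:

```python
# For b = 3.52 the attractor is a period-4 limit cycle.
b = 3.52
x = 0.2
for _ in range(4000):          # discard the transient
    x = b * x * (1.0 - x)
cycle = []
for _ in range(4):
    x = b * x * (1.0 - x)
    cycle.append(x)
x_next = b * x * (1.0 - x)     # a fifth step returns to cycle[0]
```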

(e) b=4


For b=4, it is found that there are orbits of all periods. This is known as chaotic behaviour. The behaviour of the logistic function for b=4 is shown on the graph below:
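The claim that there are orbits of all periods can be illustrated concretely for period 3 (a sketch of mine; it uses the known closed form x_n = sin^2(2^n θ) for b = 4, which the text does not discuss). Choosing θ = π/7 makes three angle-doublings return the angle modulo π, so the orbit has exact period 3; by the Li-Yorke theorem ("period three implies chaos"), a period-3 orbit forces orbits of every period.

```python
import math

def f(x):
    return 4.0 * x * (1.0 - x)

# x_n = sin^2(2^n * theta) solves x_{n+1} = 4*x_n*(1 - x_n), since
# 4*sin^2(t)*cos^2(t) = sin^2(2t). With theta = pi/7, 8*theta equals
# theta modulo pi, giving an exact period-3 orbit.
x0 = math.sin(math.pi / 7) ** 2
orbit3 = [x0, f(x0), f(f(x0))]   # three distinct points
back = f(f(f(x0)))               # f^3 returns to the start
```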

C. BIFURCATION DIAGRAM


The behaviour of the logistic function for different values of b can be shown very elegantly on a diagram known as the bifurcation diagram, on which the limiting values of the logistic function are plotted against the value of b.
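The diagram's content can be approximated in text form (my own sketch): for each value of b, discard a transient and count the distinct limiting values. Periodic windows give small counts (the "branches" of the diagram); chaos gives many.

```python
# Text-mode slice of the bifurcation diagram.

def attractor(b, x0=0.2, transient=2000, keep=64, tol=1e-3):
    """Distinct limiting values of the logistic map, clustered by tol."""
    x = x0
    for _ in range(transient):      # let the transient die out
        x = b * x * (1.0 - x)
    points = []
    for _ in range(keep):           # sample the attractor
        x = b * x * (1.0 - x)
        if all(abs(x - p) > tol for p in points):
            points.append(x)
    return sorted(points)

for b in (0.9, 2.8, 3.2, 3.52, 4.0):
    print(b, "->", len(attractor(b)), "limiting value(s)")
```

The counts reproduce the cases above: one value for b = 0.9 and b = 2.8, two for b = 3.2, four for b = 3.52, and a large cloud of values for b = 4.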

D. FORMAL DEFINITION OF CHAOS

Before I give the formal definition of chaos, I will define three terms.

Definition 1 (Density). A set A is dense in a set B if for every ε > 0 and every b ∈ B there is an a ∈ A such that |a - b| < ε.

Definition 2 (Sensitive dependence on initial conditions). A one-dimensional map f: D → D has sensitive dependence on initial conditions if there exists M > 0 such that for any x ∈ D and any ε > 0 there exist y ∈ D and n ∈ N such that |x - y| < ε and |f^n(x) - f^n(y)| > M. Intuitively, however close two starting points are, some iterate eventually drives them at least a fixed distance M apart.

Definition 3 (Topological transitivity). f: D → D is topologically transitive if for any ε > 0 and x, y ∈ D, there exist z ∈ D and k ∈ N such that |x - z| < ε and |y - f^k(z)| < ε. Intuitively, points arbitrarily close to x are eventually mapped arbitrarily close to y.

Definition of chaos:

Let f: D --> D be a one dimensional map. We say that f is chaotic if
1. The set P(f) of periodic points is dense in D.
2. f has sensitive dependence on initial conditions.
3. f is topologically transitive.
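Condition 2 can be witnessed numerically for the logistic map with b = 4 (an illustration, not a proof; the seed, the perturbation size and the value of M are my own choices):

```python
# A perturbation of 1e-12 is amplified by f(x) = 4x(1-x) until the two
# orbits differ by more than M = 0.25.

def f(x):
    return 4.0 * x * (1.0 - x)

M = 0.25
x, y = 0.3, 0.3 + 1e-12
max_gap = 0.0
for n in range(200):
    x, y = f(x), f(y)
    max_gap = max(max_gap, abs(x - y))
# max_gap now exceeds M: the initially indistinguishable orbits separated
```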

Summary of the Papers “The Statue and Clay” By Judith Jarvis Thomson and “Things, stuffs and coincidence” by Nikos Psarros

Both papers attempt a solution to the problem of coinciding objects. Thomson takes the example of a piece of clay called “CLAY” and a statue of King Alfred made of clay called “ALFRED”. The problem of coinciding objects is about the relationship between CLAY and ALFRED. The two authors present different solutions, but both agree that CLAY and ALFRED should not be given the same ontological status; i.e. both reject the identity thesis, which claims that CLAY and ALFRED are the same.

The Statue and Clay
If I bought ten pounds of clay at 9 AM (CLAY) and made a statue from it on the table at 2 PM (ALFRED), then
is ALFRED = CLAY?
At 9 AM there was only CLAY and no ALFRED, but at 2 PM there is CLAY as well as ALFRED occupying the same place at the same time. Moreover, if I break the statue at 5 PM, then again there will be just CLAY and no ALFRED. So, can we say that being a statue is just a temporary property of CLAY? This thesis could be applied to any artifact, and it gives an answer to the identity thesis: we can say that CLAY = ALFRED at 2 PM and that CLAY ≠ ALFRED at 9 AM and 5 PM. The temporary property of being an artifact can be instantiated from time to time.

But the author points to a stronger argument against the identity thesis: the replacement argument. Suppose I replace one hand of ALFRED with a new one and place the old one on the floor. Then surely CLAY is no longer wholly on the table, but it can be accepted that ALFRED is still on the table, because in ordinary thought artifacts are generally capable of undergoing replacement of parts. If we accept this, then we face a paradox akin to the poor man paradox: if a poor man is given a penny, that does not change his status as a poor man, and another penny does not change the situation either; but if he is given pennies one by one, then ultimately he becomes rich. So how many pennies are needed to make him rich? Similarly, if a large part of ALFRED is replaced piece by piece, at some point we will have to accept that the old ALFRED has been replaced. In such a situation ordinary thought does not supply an answer.

Thomson accepts that the difficulties are serious. “Some philosophers therefore conclude that artifacts cannot undergo replacement of any part, and others that there are no artifacts at all” (Thomson, 1998, pp. 153). Thomson rejects both conclusions and simply supposes that ALFRED is on the table after the replacement of one hand but CLAY is not. She then goes on to define what “constituting” is. The two-place relation ‘x constitutes y’ is a temporary relation, because CLAY constituted ALFRED before the replacement but not after it. This can be overcome by using a three-place relation instead: the three-place relation ‘x constitutes y at t’ is a permanent relation.

Now, Thomson establishes a logical framework for further arguments:
(i) x exists at t --> x is a part of x at t.
(ii) x is part of y at t --> x and y both exist at t.
(iii) x is part of y at t <--> the place occupied by x at t is part of the place occupied by y at t.

Now she defines the phrase ‘x constitutes y at t’ (or ‘CLAY constitutes ALFRED at t’) by giving logical statements. The first statement of the definition essentially asserts that CLAY and ALFRED are parts of each other. Since we want CLAY to constitute ALFRED, we should not also have that ALFRED constitutes CLAY. The next two statements of the definition endow CLAY with a property that ALFRED does not have, so that CLAY constitutes ALFRED while ALFRED does not constitute CLAY. The second statement of the definition says that CLAY has a part that CLAY cannot lose but which ALFRED can lose. The third statement says that ALFRED has no part which it cannot lose but which CLAY can lose.
So the second and third statements of the definition establish that CLAY is more tightly tied to its parts than ALFRED is. This establishes an ontological difference between CLAY and ALFRED, which makes us conclude that CLAY is not identical to ALFRED but merely constitutes ALFRED.


Things, stuffs and coincidence
Psarros approaches the problem from a “language-analytical point of view”. He believes that the problem arises when a piece of bronze and a bronze statue are given the same ontological status. According to him, “bronze is an abstract substance that does not refer to a thing, but merely to a specific way of talking about a common substantial aspect of things like a bronze statue and a bronze bar” (Psarros, 2001, pp. 27).

The following figure shows his approach:

[Figure omitted: Psarros’s scheme of abstraction from particulars to universals]

According to him, the common aspect of the bronze bar and the bronze statue (the particulars) is called bronze, which is abstract and thus a universal. We can proceed further by treating bronze and iron as particulars and calling the common aspect between them substance, which is a universal. In this way he makes the relation between bronze and the bronze statue clear.

Conclusion
In both papers, the authors point out that the problem of coinciding objects arises when we give the same ontological status to CLAY and ALFRED, or to bronze and the bronze statue. Further, the two solutions are quite similar. Thomson gives CLAY a status higher than ALFRED and shows that CLAY has a part that CLAY cannot lose but which ALFRED can lose; this is quite similar to the solution given by Psarros, in which he gives bronze the status of a genus or universal with respect to the bronze statue or the bronze bar.


References:

1. Thomson, J.J. (1998), “The statue and the clay”, NOUS, 32:2, (1998), pp. 149-157.
2. Psarros, N. (2001), “Things, stuffs and coincidence”, HYLE--International Journal for Philosophy of Chemistry, Vol. 7, No. 1 , pp. 23-29.
3. http://setis.library.usyd.edu.au/stanford/entries/temporal-parts/.

Summary of the Paper “What do brain data really show?” By Valerie G. Hardcastle and C. Matthew Stewart.

1. Introduction

In the paper, the authors try to show that the current approach of neuroscientists, localizing and modularizing brain functions, has no firm basis. This approach is inspired by the study of the body, where it is clear that the body is composed of a very large number of parts and that each part is highly specialized to perform a specific function in the service of the survival and reproduction of the organism. Using the body as a model for the brain, neuroscientists guess that the brain, too, is composed of one or more functional parts, each of which is also specialized to facilitate the survival and reproduction of the organism. One by one, the authors show that the techniques used by neuroscientists are not compatible with the approach stated above. According to the authors, “… these are just prejudices, nothing more. And the underlying assumptions could very well be wrong” (Hardcastle, V and Stewart, C. M., pp. S73).


2. Localization and Single Cell Recordings

Neuroscientists study the brain using single cell recordings: they insert an electrode in or near a cell and then record whether the activity of the cell changes when the animal is stimulated in some way. If it does, the degree of stimulus is varied and further recordings are made. If it does not, the stimulus is changed or a nearby cell is examined.

The authors point out that neuroscientists assume the brain to be a discrete system while making single cell recordings. Moreover, the maximum number of simultaneous recordings made so far is around 150, which is very small compared to the number of neurons in the various brain areas. Based on the activity of the neurons, neuroscientists try to make connections between cognitive tasks and brain areas; these inferences are further complicated by the fact that there are many different types of neurons, with different response properties and different interconnections with other cells.


3. Lesion Studies and the Assumption of Brain Constancy

By using the data from single cell recordings, neuroscientists estimate the behavior of certain brain areas and then they try to disrupt the hypothesized functions by placing lesions in otherwise normal animals.

These methods have serious technical difficulties. “But lesion studies are notorious for highly variable functional damage… A genuine replication of a lesion study in neurosciences is a practical impossibility” (Hardcastle, V and Stewart, C. M., pp. S76). Moreover, it is well known that any functional change in the nervous system leads to compensatory changes elsewhere, so the changes seen after a lesion is placed might be due to these compensatory changes.

The authors give the example of TMJ (temporomandibular joint syndrome) to show that we still do not have a sketch of the correct neuroanatomy. TMJ is accompanied by ringing in the ear, and until recently it was not known that an auditory nerve runs alongside the lower jaw. This makes the task of deriving the functions of brain areas even more difficult.

4. Functional imaging to the rescue?

fMRI and other imaging techniques are the best known non-invasive recording devices currently available. Using these techniques, we can observe the activity of the whole brain at one time, tied to some cognitive activity. But the authors point out that these techniques have low spatial and temporal resolution. According to them, fMRI “…has a spatial resolution of 0.1mm and each scan samples about 5 secs” (Hardcastle, V and Stewart, C. M., pp. S77), whereas cell activity is about three to four orders of magnitude smaller and faster.

Moreover, the subtraction method used by experimenters to interpret the data carries the underlying assumption that the two conditions under which the experiment is conducted differ only in the cognitive task under consideration. This assumption is not necessarily true.
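As a toy illustration of the subtraction method (the activation numbers below are invented for illustration only):

```python
# Invented activation values per brain region, for illustration only.
task    = [1.0, 3.5, 2.0, 5.0]   # activation during the cognitive task
control = [1.0, 1.5, 2.0, 2.0]   # activation during the control condition

# The subtraction method attributes the difference to the task itself --
# valid only if the two conditions differ in nothing but that task.
difference = [t - c for t, c in zip(task, control)]
print(difference)   # [0.0, 2.0, 0.0, 3.0]
```

If the two conditions also differ in, say, attention or arousal, the non-zero entries cannot be attributed to the task alone.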

Lastly, the imaging techniques show the metabolic activity in the brain, so they ignore the components of cognition that may not require any changes in the metabolism.


5. Conclusion

The authors show that the simplifying assumptions of discreteness and constancy of function are not justified. Further, they argue that the neuroscientists’ bias towards modularizing brain data has no experimental backing; it is just a prejudice. Therefore, the claims about functions are not warranted, as local and specific functions were assumed prior to gathering the data for the claim.

References:

Hardcastle, V and Stewart, C. M. (2002), “What do brain data really show?”, Philosophy of science, 69, pp. S72-S82.

Summary of Steven Harnad’s Paper – “Correlation vs Causality: How/Why the Mind-Body Problem is hard”

In the era of modern philosophy, the distinction between mind and body was drawn by Descartes. Descartes’ statement “I think, therefore I exist” shows that he considered the existence of mind to be indubitable, unlike the existence of matter. With the progress of science and the contributions of Newton, Darwin and others, the existence of mind came to be doubted, whereas matter’s existence came to be taken as a given.

The classical mind-body problem is about how two different things (i.e. mind and body) can interact with each other; thus, dualism is presupposed. In the contemporary setting, dualism is not presupposed. Harnad’s paper is about the mind-body problem in the contemporary setting.

Harnad’s paper is a reply to Humphrey’s paper “How to Solve the Mind-Body Problem.” In his paper, Humphrey gives arguments that he believes will help solve the mind-body problem. Harnad believes that the mind-body problem is unsolvable, and he gives counter-arguments to Humphrey’s claims. Harnad’s main line of attack is the following: “Correlations confirm that Mind does indeed supervene on Body, but causality is needed to show how/why Mind is not supererogatory: that’s the hard part”. Harnad states it more clearly as, “I have feelings. Undoubtedly the feelings are in some way caused by and identical to brain processes/structures, but it is not clear how and even less clear why.” Using this line of attack he is able to ward off all the arguments that Humphrey has given.

Humphrey gives the example of electrical discharge and lightning. He says that although we may not be able to say what makes an electrical discharge manifest also as lightning, we can predict the occurrence of lightning whenever there is an electrical discharge. According to him, in a similar way, one day we might collect so much information about mind-brain correlations that we can predict which mental state will supervene on any specific brain state. Harnad counters that there is no causal model that unifies mind and body, as the correlated phenomena are not of the same kind, unlike electrical discharge and lightning; so the analogy is not applicable here.

Humphrey attributes a lot of importance to Reid’s distinction between perception and sensation. According to Reid,
sensation has to do with the self, with bodily stimulation, with feelings; perception, by contrast, has to do with judgments about the objective facts of the external world. But Harnad rejects the idea that there is any insight in Reid’s viewpoint. According to him, it is the relation between feelings and brain states that the M/BP is concerned with, not that between feelings and other objects, whether external or internal.

Humphrey uses Reid’s concept of sensation to explain the evolution of the privatization of sensory activity. According to Humphrey, the command signals begin to loop back upon themselves, becoming in the process self-sustaining and self-creating, and such recursive signals enter a new intentional domain. But again Harnad dismisses such arguments. He argues that all such activities could be carried out without feelings too, so Humphrey’s explanation begs the question. Harnad also regards the internal loops as too easy a mechanism to give rise to mental states.

Harnad argues not only that the hard problem cannot be solved in the way Humphrey proposes, but that it is not solvable at all. Although he does not give reasons for why he thinks so, he makes the point that it is not because of any limitation of the human mind, as McGinn believes. According to him, every mental capacity has both an easy and a hard aspect: the functional aspect is easy, the feeling aspect is hard. But it is the feeling part that makes it mental, and this is the hard M/BP.

Saturday, January 17, 2009

Stored-program Concept

A. HISTORY
The advent of the “stored-program” concept is considered a milestone in the history of computers. The basic idea of the stored-program concept is to store the instructions (of a program) in the computer memory along with the data.

The idea is implicit in the design of the Turing machine, which Alan Turing described in 1936. He described an abstract digital computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on and modifying its own program; but Turing did not recognize its importance in the context of designing electronic computers.

The idea was applied in the design of the EDVAC (Electronic Discrete Variable Automatic Computer) by John von Neumann, John W. Mauchly and J. Presper Eckert. In June 1945, von Neumann published a paper entitled ‘First Draft of a Report on the EDVAC,’ in which he presented all of the basic elements of a stored-program computer. As a result, he got most of the credit for the stored-program concept, and machines built using the concept are called “von Neumann machines” [2]. The term is now generally avoided in professional circles, as it is considered an injustice to Mauchly and Eckert. The EDVAC took a long time to build; before it was completed, a team coordinated by Maurice Wilkes at Cambridge University built the EDSAC, which thus became the first stored-program computer actually built.

The motivation for introducing the stored-program concept came from the problems faced while working on the ENIAC (Electronic Numerical Integrator and Computer). The ENIAC was the first general-purpose computer (i.e. it could run different programs), but new instructions had to be fed in by changing the wiring of the computer [3]. This made the computer not only cumbersome and error-prone but also made using it a specialized task. To overcome these problems, it was proposed during the design of the next computer that the instructions should be stored with the data in memory, so that they could be changed easily. This also allows a program to branch to alternative instruction sequences based on previous calculations.


B. IMPLEMENTATION OF STORED-PROGRAM CONCEPT
Now I will show how the stored-program concept is implemented in a basic computer (the one used in Morris Mano (1993)). The organization of the basic computer is defined by its internal registers, its timing and control structure, and the set of instructions that it uses.

B.1.THE BASIC COMPUTER HAS EIGHT REGISTERS, A MEMORY UNIT, AND A CONTROL UNIT.
B.1.a. REGISTERS AND MEMORY
The eight registers, their capacities and their functions are shown in the following table.

[Table omitted: the eight registers, their capacities and functions]

Program counter (PC) is the register which stores the address of next instruction to be executed.

The memory has 4096 words with 16 bits per word. To specify the address of each word, a 12-bit number is needed (because 2^12 = 4096).
Paths are required to transfer information from one register to another and between the memory and the registers; the wiring would be excessive if connections were made between the outputs of each register and the inputs of all the other registers. So a common bus is used: the outputs and inputs of the memory and of all the registers are connected to the bus, and selection rules are defined to coordinate its use.


B.1.b. COMPUTER INSTRUCTIONS
The basic computer has three instruction code formats, as shown in the figure below. Each format has 16 bits.

[Figure omitted: the three instruction code formats]

The type of instruction is recognized by the control unit from bits 12 through 15 of the instruction. If the opcode (bits 12-14) is not 111, it is a memory-reference instruction; the bit in position 15 then specifies the addressing mode (not needed for the present purpose) and the remaining 12 bits specify the memory address. If the opcode is 111, the control checks bit 15: if bit 15 is 0 it is a register-reference instruction, and if it is 1 it is an input-output instruction. The basic computer instructions are shown in the figure below.

[Figure omitted: table of basic computer instructions]

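The decoding rule just described can be sketched as follows (a sketch of the rule as stated above; the example instruction words are illustrative):

```python
# Decode the type of a 16-bit basic-computer instruction word:
# bits 12-14 are the opcode, bit 15 is the mode/type bit.

def instruction_type(word):
    opcode = (word >> 12) & 0b111   # bits 12-14
    bit15 = (word >> 15) & 1
    if opcode != 0b111:
        return "memory-reference"   # bit 15 then selects the addressing mode
    return "register-reference" if bit15 == 0 else "input-output"

print(instruction_type(0x1ABC))   # memory-reference
print(instruction_type(0x7800))   # register-reference
print(instruction_type(0xF800))   # input-output
```

Note that the 12 low-order bits are only an address for memory-reference instructions; for the other two types they select the operation.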
B.1.c TIMING AND CONTROL

The master clock generator supplies pulses to all flip-flops and registers in the system. The pulses do not change the state of a register unless the register is enabled by a control signal. The control signals are generated in the control unit and provide control inputs for the common bus and the processor registers, and the micro-operations for the accumulator. The block diagram of the control unit is shown in the figure below.

[Figure omitted: block diagram of the control unit]

An instruction is read and placed in the instruction register (IR). The IR is divided into three parts in the figure. The operation code in bits 12 to 14 is decoded using a 3 × 8 decoder. Bit 15 is transferred to the flip-flop designated I. Bits 0 to 11 are applied to the control logic gates. The output of the sequence counter (SC) is decoded into 16 timing signals T0 to T15. SC can be incremented or cleared synchronously: it is incremented with every positive clock transition unless its CLR input is active. For example, SC is incremented to provide the timing signals T0, T1, T2, T3 and T4 in sequence; at time T4, SC is cleared to 0 if decoder output D3 is active.

The last three waveforms in the figure show how SC is cleared when D3T4 = 1. This is done by connecting the output of an AND gate, whose inputs are D3 and T4, to the CLR input of SC.

[Figure omitted: timing waveforms for the clearing of SC]

B.2 INSTRUCTION CYCLE

A program in the memory consists of a sequence of instructions. The program is executed by going through a cycle, which in turn is subdivided into a sequence of sub-cycles:
1. Fetch an instruction.
2. Decode the instruction.
3. Read from memory if instruction is indirect.
4. Execute the instruction.

B.2.A FETCH AND DECODE.
Initially, PC is loaded with the address of first instruction in the program and SC is cleared to 0, providing a timing signal T0. The micro-operations for fetch and decode can be specified as below:
T0: AR <-- PC
T1: IR <-- M[AR], PC <-- PC + 1
T2: D0,…,D7 <-- decode IR(12-14), AR <-- IR(0-11), I <-- IR(15)

Since only AR is connected to the address inputs of the memory, it is necessary to transfer the address from PC to AR at T0. At T1, the instruction read from the memory is placed in the IR, and PC is incremented to prepare it for the address of the next instruction. At T2, the operation code in IR is decoded, the indirect bit is transferred to the flip-flop I, and the address part of the instruction is transferred to AR.

The figure shows how the first two register transfer statements are implemented in the bus system. At T0, the contents of PC are placed on the bus by making the selection inputs S2S1S0 equal to 010 and enabling the LD input of AR. The other transfers are carried out similarly.

[Figure omitted: bus implementation of the register transfers for fetch]

B.2.B DETERMINE THE TYPE OF INSTRUCTION.

Timing signal T3 becomes active after the decode. During T3, the control unit determines the type of instruction that was just read from the memory. The flow chart shows how the control determines the instruction type after decoding.

The three instruction types are subdivided into four separate paths. The selected operation is activated with the clock transition associated with timing signal T3. This can be symbolized as follows:
D'7IT3: AR <-- M[AR]
D'7I'T3: Nothing
D7I'T3: Execute a register-reference instruction
D7IT3: Execute an input-output instruction

After the instruction is executed, SC is cleared to 0 and the control returns to the fetch phase with T0 = 1. To execute an instruction, various micro-operations need to be carried out; I will show how one memory-reference instruction is executed. The table below lists the seven memory-reference instructions.

[Table omitted: the seven memory-reference instructions]

B.2.C EXECUTION OF THE INSTRUCTION.

I will describe the execution of the first instruction AND. This is an instruction that performs the AND logic operation on pairs of bits in AC and the memory word specified by the effective address. The data must be read from the memory and transferred to a register, so that it can be operated on with logic circuits. The micro-operations that execute this instruction are:
D0T4: DR <-- M[AR]
D0T5: AC <-- AC ∧ DR, SC <-- 0
In this way, other instructions can also be executed.
So it is clear that instructions and data are stored in the same memory; what makes the difference is the way each is interpreted. The 16-bit instruction code is placed in IR, while the 16-bit data word is placed in DR, and the control checks the opcode only in IR, never in DR.
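This separation of interpretation rather than of storage can be sketched in a toy fetch-decode-execute sequence (a simplified sketch, not Mano’s full machine; the memory contents below are illustrative):

```python
# Toy sketch: instructions and data share one memory; only the register a
# word is routed through (IR vs DR) decides how it is interpreted.

memory = [0] * 4096
memory[0] = (0b000 << 12) | 100    # AND instruction (opcode 000), address 100
memory[100] = 0b0000111100001111   # a data word

AC = 0b0000000011111111            # accumulator
PC = 0                             # program counter

# Fetch: the word addressed by PC goes to IR, i.e. is read as an instruction.
AR = PC
IR = memory[AR]
PC = PC + 1

# Decode: opcode from bits 12-14, address part from bits 0-11.
opcode = (IR >> 12) & 0b111
AR = IR & 0xFFF

# Execute AND (D0): the word at AR goes to DR, i.e. is read as data.
if opcode == 0b000:
    DR = memory[AR]
    AC = AC & DR

print(bin(AC))   # 0b1111: only bits set in both AC and DR survive
```

The same word in memory would be treated as an instruction if PC ever pointed at it, which is exactly the point of the stored-program concept.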

C. DRAWBACKS OF STORED-PROGRAM CONCEPT
The stored-program concept has found universal acceptance; most computers built today are based on it. But with advances in hardware, the concept has come to limit the efficiency of the computer: nowadays the data transfer rate between the CPU and memory is very low compared to the rate at which the CPU works, so the CPU has to spend a lot of time waiting. This is called the “von Neumann bottleneck”, a term coined by John Backus in his 1977 ACM Turing Award lecture. According to Backus:
‘Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it.’ [4].
One solution to the von Neumann bottleneck is to introduce a fast ‘cache’ memory between the CPU and the conventional memory. Another candidate is the “Harvard architecture” (the term originates from the Harvard Mark I, a relay-based computer), in which data and instructions are stored separately. It has an advantage over von Neumann machines in that it can access data and instructions in the same CPU cycle, which a von Neumann machine cannot. Many modern chip designs incorporate aspects of both the Harvard and the von Neumann architecture.


D. REFERENCES:

1. http://www.maxmon.com/1946ad.htm
2. http://www.kingsford.org/khsWeb/computer/MicroComputers/History_of_computers/04_the_stored_program_computer.htm
3. http://en.wikipedia.org/wiki/Stored_program
4. http://www.kingsford.org/khsWeb/computer/MicroComputers/History_of_computers/03_first_generation_computers.htm
5. http://www.maxmon.com/1944ad.htm
6. http://plato.stanford.edu/entries/computing-history/#ENIAC
7. http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI03.html
8. http://webopedia.internet.com/TERM/V/Von_Neumann_machine.html
9. http://www.csc.liv.ac.uk/~ped/teachadmin/histsci/htmlform/lect4.html
10. http://courses.cs.deu.edu.tr/cse223/notes/assemblyprog/node11.html
11. http://www.ntu.edu.au/faculties/technology/schelectrical/staff/saeid/teaching/sen464/lectures/ch3/tsld013.html
12. http://concise.britannica.com/ebc/article?eu=404957
13. Mano, M. Morris, (1993),Computer System Architecture, Pearson Education: Singapore.

Does a rock implement every Finite-State Automaton? by David Chalmers

Putnam’s Claim and its consequences

· Every ordinary open system is a realization of every finite automaton.
· If the above statement and the claim of artificial intelligence are true, it implies that even a rock has a mind.


Chalmers Objections

· Putnam’s argument requires “unnatural physical states involving arbitrary disjunctions”.
· Putnam’s system does not satisfy the right kind of state-transition conditionals.


Conditionals

· Conditionals are “if . . . then . . .” claims.
· The standard form for a material conditional is as follows: If P then Q.
· The truth functional characteristics of the material conditional are defined by the following table:

P Q | If P then Q
T T | T
T F | F
F T | T
F F | T
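The truth functional behaviour referred to above can also be generated mechanically (a small sketch; `implies` is my own helper name):

```python
# The material conditional "If P then Q" is false only when P is true
# and Q is false; every other row of the truth table is true.

def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```
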


Problem with material conditionals

· Material conditionals can be true even when there is clearly no connection between the antecedent and the consequent, for example "If grass is green then the sky is blue".
· For this reason, material conditionals have often been thought to be an inadequate way of representing the causal connection between antecedent and consequent.


Counterfactuals

· As an example of why counterfactual conditionals are important, consider an event A causing an event B. Now, minimally if A really caused B then A and B must have actually occurred, and that entails the truth of the material conditional "If A then B". But the material conditional does not capture the sense in which there is supposed to be some connection between A's occurrence and B's occurrence.
· That fact is captured by the truth of the following counterfactual conditional "If, all else being equal, A had not occurred, then B would not have occurred".


Putnam’s system does not satisfy the right kind of strong conditionals.

· There are two ways in which Putnam’s system fails to satisfy the strong conditionals:

(a) The first concerns the state-transitions that are actually exhibited.
(b) The second concerns the unexhibited state-transitions.


The state-transitions that are actually exhibited.

· Consider the transition from p to q. For the system to be a true implementation, this transition must be reliable. But Putnam’s system is open to all sorts of environmental influences.
· The construction is entirely specific to the environmental conditions as they were during the time period in question.
· Slight change in environmental circumstances may lead to a different state.


Unexhibited state transitions

· To qualify as an implementation, it is not only required that the state-transitions that are manifested on this run are mirrored in the structure of the physical system.
· It is required that every state transition be so mirrored, including the many that may not be manifested on a particular run.


Possible reply
· To overcome the first objection – It is required that the system reliably transits through a sequence of states irrespective of environmental conditions. A system containing a clock will satisfy this objection. (HOW?)
· To overcome the second objection – It is required that there are extra states to map unexhibited states in a particular run. This can be ensured if the system has a subsystem with an arbitrary number of different states.
· Thus, Putnam’s result is preserved only in a slightly weakened form.
· Further, Chalmers argues that all this has demonstrated is that inputless FSAs are an inappropriate formalism.
· Then he introduces a notion of combinatorial state automaton or CSA.
· The only advantage it has is that its states are not monadic but have some internal structure.
· But for every CSA there is an FSA that can implement it.

Self-reference

Self-reference denotes any situation in which someone or something refers to itself. Objects that refer to themselves are called self-referential. Any object that we can think of as referring to something, or that has the ability to refer to something, is potentially self-referential. This covers objects such as sentences, thoughts, computer programs, models, pictures, novels, etc.

Perhaps the most famous case of self-reference is found in the Liar sentence:
“This sentence is not true”.

The Liar sentence is self-referential because of the occurrence of the indexical “this sentence” in the sentence. It is also paradoxical. That self-reference can lead to paradoxes is the main reason why so much effort has been put into understanding, modeling, and “taming” self-reference. If a theory allows for self-reference in one way or another it is likely to be inconsistent because self-reference allows us to construct paradoxes, i.e. contradictions, within the theory.

I will try to give an account of the situations in which self-reference is likely to occur. These can be divided into situations involving reflection, situations involving universality, and situations involving ungroundedness.

PARADOXES: Richard’s Paradox, Berry’s Paradox, Liar’s Paradox.

REFLECTION:
Artificial Intelligence
A very explicit form of reflection is involved in the construction of artificial intelligence systems, for instance robots. Such systems are called agents. Reflection enters the picture when we want to allow agents to reflect upon themselves and their own thoughts, beliefs, and plans. Agents that have this ability are called introspective agents.


UNIVERSALITY:

When we make a statement about all entities in the world, this will necessarily also cover the statement itself. Thus such statements will necessarily be self-referential. We call such statements universal.

If R is the reference relation of our natural language then the sentence
“All sentences are false”

will be universal. The problem with universality is that reflection and universality together necessarily lead to self-reference, and thereby are likely to give rise to paradoxes.


UNGROUNDEDNESS AND SELF-REFERENCE:

Self-reference often occurs in situations that have an ungrounded nature.
A dictionary gives a simple example of ungroundedness. Let R be the reference relation of Webster’s dictionary; that is, let R contain all pairs (a, b) for which b is a word occurring in the definition of a. Since every word of the dictionary refers to at least one other word, every word will be the starting word of an infinite path of R. Here is a finite segment of one of these paths, taken from the 1828 dictionary:

regain → recover → lost → mislaid → laid →
position → placed → fixed → . . .

Since there are only finitely many words in the English language, any infinite path of words will contain repetitions. If a word occurs at least twice on the same path, it will be contained in a cycle. Thus, in any dictionary of the entire English language there will necessarily be words defined indirectly in terms of themselves. That is, any such dictionary will contain (indirect) self-reference.
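The pigeonhole argument above can be sketched with a toy reference relation (the word-to-word map below is an invented fragment, not Webster’s actual text):

```python
# Follow a reference relation R until a word repeats; since R is finite,
# a repetition is forced, and the repeated segment is a cycle, i.e.
# indirect self-reference. The entries of R here are illustrative only.

R = {"regain": "recover", "recover": "lost", "lost": "mislaid",
     "mislaid": "laid", "laid": "position", "position": "placed",
     "placed": "fixed", "fixed": "placed"}

def find_cycle(start):
    """Walk R from start until a word repeats; return the cycle found."""
    seen = []
    word = start
    while word not in seen:
        seen.append(word)
        word = R[word]
    return seen[seen.index(word):]

print(find_cycle("regain"))   # ['placed', 'fixed']
```

Because every path in a finite relation must eventually revisit a word, `find_cycle` terminates for any starting word in R.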


IMPORTANCE OF SELF-REFERENCE
Although it is very difficult to tame self-reference (it can be done in some cases, e.g. Russell's Paradox), it has become very important, as it shows up in almost all the sciences.

Mathematics – Russell’s Paradox, Gödel’s incompleteness theorem.
Physics - Heisenberg’s Uncertainty principle, Space-time and rotation.
Computer Sciences – Recursion, self-modifying codes, AI – agent acting on itself.

Studying self-reference can also help in the study of the mind, i.e. subjectivity, because that too involves self-reference: in that case the mind would be studying itself.

Chaos theory in Neural Networks

ABSTRACT

Neural networks based on chaos theory can simulate the functioning of the brain in a better way, as most natural processes are chaotic. These neural networks not only help in a better understanding of brain processes, but they also have real-world applications. Here, I present a review of the literature to put forth the importance of chaotic neural networks.


KEYWORDS
Chaotic networks, neural networks, learning, memory, brain processes.


1. INTRODUCTION.

Research results show that chaotic dynamics is present in the functioning of biological neural systems, ranging from the chaotic firing of individual neurons to chaotic spatio-temporal patterns in the EEG. Chaotic behavior of neural networks has also been found to have applications in biological modeling. In this paper, I first describe chaos theory and its main characteristics; next, I show that neural networks are motivated by biological processes in the brain; and finally, I present arguments for the use of chaos theory in neural networks.


2. MEANING OF CHAOS.

Chaos theory studies systems whose behavior lies between rigid regularity and randomness. The common meaning of the word “chaos” suggests complete disorder, but that is not how chaotic systems behave, although statistically chaotic activity is indistinguishable from randomness.

The typical features of chaos include:
a. Nonlinearity – All chaotic systems are non-linear.
b. Determinism – There are deterministic underlying rules to specify the future state of the system.
c. Sensitivity to initial conditions – Slight changes in initial conditions lead to radically different behavior in the final state of the systems.

This can be illustrated with the simple example of a pseudo-random number generator. Its rule is a deterministic formula, e.g. X(n+1) = c·X(n) mod m. Yet the resulting sequences are very irregular, unpredictable, and sensitive to initial conditions.
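A short sketch of such a linear congruential generator makes the three features visible (the multiplier and modulus below are illustrative choices, not taken from the text):

```python
# Linear congruential generator: X(n+1) = c * X(n) mod m.
# Deterministic rule, yet the output looks irregular, and nearby
# seeds produce rapidly diverging sequences.
def lcg(seed, c=75, m=2**16 + 1, n=10):
    xs = []
    x = seed
    for _ in range(n):
        x = (c * x) % m
        xs.append(x)
    return xs

seq_a = lcg(1000)  # same seed always gives the same sequence (determinism)
seq_b = lcg(1001)  # a seed off by 1 gives a completely different sequence
```

Running the same seed twice reproduces the sequence exactly (determinism), while the two nearby seeds disagree after the very first step (sensitivity to initial conditions).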

In fact, many physical systems are chaotic; nature seems to use chaotic systems more often than it uses linear ones.

3. NEURAL NETWORKS.

An Artificial Neural Network (ANN) is an information-processing paradigm inspired by the way biological nervous systems, such as the brain, process information. In spite of the growing number of publications in the field, there is no consensus on a precise definition of ANNs, because there are so many types of them. Sometimes the more general term “connectionism” is used for ANNs: connectionism is a methodology of building a complex system by combining connected elements that are similar or identical.
The basic nonlinear elements of an ANN are called neurons. An ANN is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. Learning in biological systems involves adjustments to the synaptic connections between neurons, and this is true of ANNs as well.
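The description above (weighted connections between simple nonlinear units, with learning as weight adjustment) can be sketched as follows; the function names, parameter values, and the Hebbian-style update rule are my own illustrative choices, not from any particular library or from the papers reviewed here:

```python
import math

# A single artificial neuron: weighted sum of inputs passed through
# a nonlinearity (here tanh).
def neuron(weights, inputs, bias=0.0):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(s)

# Learning as weight adjustment: a Hebbian-style rule that
# strengthens connections in proportion to input and output activity.
def hebbian_update(weights, inputs, rate=0.1):
    out = neuron(weights, inputs)
    return [w + rate * out * x for w, x in zip(weights, inputs)]

w = [0.5, -0.3]
out = neuron(w, [1.0, 2.0])          # activation in (-1, 1)
w_new = hebbian_update(w, [1.0, 2.0])  # adjusted synaptic weights
```

A full network is then just many such units wired together, with the update rule applied across all connections.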

Applications of neural networks:
In real life applications, neural networks perform particularly well on the following common tasks:
1. Function approximation.
2. Pattern recognition.
3. Time-Series Forecasting.

The functioning of ANNs is based on the functioning of neurons in the human brain; therefore, they are also a good tool for studying the brain.

4. MOTIVATION OF USING CHAOS IN ANNS.

There is a wealth of experimental evidence, from EEG, fMRI, and MEG studies, indicating the significance of complex, non-periodic spatio-temporal oscillations at microscopic, mesoscopic, and macroscopic levels of neural organization. Initially it was believed that chaotic behaviour could be responsible for epilepsy, insomnia, and other such neural disorders, but now the positive aspects of chaotic behaviour are also being considered. According to Freeman, chaotic behaviour is what helps us recognize a familiar face almost instantaneously; further, he believes that chaotic activity in the brain is necessary for learning.
Now I will present some points which motivate the usage of chaos theory in ANNs:


1. Assuming that brain processes exhibit chaotic behaviour, models of ANNs which use chaos theory can help in a better understanding of brain processes.

“One way to find answers to questions about neural chaos is to build models that produce similar emergent chaotic behavior in artificial neural networks” (Andras, 2004).

Another benefit of designing chaotic neural networks is that simple yet biologically plausible networks can be built which exhibit the complex activity of neurons. Being simple, these networks make brain processes easier to understand.

2. Chaos could explain the capacity of human memory.

(a) “A study of the various routes to chaos in dynamical systems reveals that significant computation occurs at the onset of chaos. At first blush this is not surprising since statistical mechanics views these as phase transitions with infinite temporal correlations. In computational terms, processes that are in a critical state, like those at the onset of chaos considered here, have an infinite memory” (Crutchfield, 1993).

(b) “Indeed, chaos offers many advantages over alternative memory storage mechanisms used in artificial neural networks. One is that chaotic dynamics are significantly easier to control than other linear or non-linear systems, requiring only small appropriately timed perturbations to constrain them within specific Unstable Periodic Orbits
(UPOs). Another is that chaotic attractors contain an infinite number of these UPOs. If individual UPOs can be made to represent specific internal memory states of a system, then in theory a chaotic attractor can provide an infinite memory store for the system” (Crook and Scheper, 2001).

3. Chaos in memory search

(a) “When a trajectory moves along a chaotic attractor, it moves sequentially from one part to another. If we associate various parts of the attractor with different patterns, then the trajectory will wander between them. In principle, this wandering can be used for the recognition or association purposes: if a trajectory spends most of its time near one of the patterns, then the latter can be considered as “recognized", and if in the sequence of visited patterns there are stable combinations, those patterns may be considered as “associated" with one another. Note that sequences of patterns can be stored into Hopfield-type networks. There is a possibility that chaos may help vary these combinations to learn new ones or to allow one pattern to participate in a number of associations simultaneously” (Potapov and Ali, 2001).
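A toy illustration of this idea (my own sketch, not the model of Potapov and Ali): iterate a chaotic map, treat sub-intervals of its attractor as stored "patterns", and count which pattern's region the wandering trajectory visits; the most-visited pattern can be read off as "recognized".

```python
# Iterate the chaotic logistic map x -> r*x*(1-x) on [0, 1] and treat
# the four quarters of the interval as four stored "patterns".
# The trajectory wanders among them; visit counts act as a crude
# recognition signal.
def logistic_search(x0=0.3, r=3.99, steps=1000):
    visits = [0, 0, 0, 0]
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        visits[min(int(x * 4), 3)] += 1  # which quarter of [0,1] was hit
    return visits
```

In this caricature the "recognized" pattern is simply `visits.index(max(visits))`; in the models cited above the patterns are stored in the network's dynamics rather than hand-assigned to intervals.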

(b) “The network stems from the study of the neurophysiology of the olfactory system. It is shown that the network serves as an associative memory, which possesses chaotic dynamics. The problem addressed is machine recognition of industrial screws, bolts, etc. in simulated real time in accordance with tolerated deviations from manufacturing specifications… The existence of the chaotic dynamics provides the network with its capability to suppress noise and irrelevant information with respect to the recognition task” (Yao, Freeman, Burke and Yang, 1990).

4. Learning

(a) Freeman sees chaos as the building block that sets the stage for new learning. The brain is constantly being attracted towards a chaotic state in order not to bias new information in the environment by preexisting learned attractors.


(b) “If we consider a neural network as an element of a larger system interacting with the world, then dynamical chaos can emerge in rather simple models. A number of such models are known, for example, in artificial intelligence. Moreover, systems interacting with their surroundings need a source of `initiatives' to pursue exploration and learning from experience. Dynamical chaos can serve as a source of such initiatives” (Potapov and Ali , 2001).



Thus, chaotic neural networks could help in understanding the various mysteries of the brain, and such networks may also have many more real-world applications.

5. CONCLUSION

I have presented arguments for the use of chaos theory in neural networks. It can be seen that such neural networks can help in understanding the functioning of memory, the capacity of memory, learning, etc. In Munakata, Sinha and Ditto (2002), logical operations such as AND, OR, NOT, XOR, and NAND are realized using chaotic elements; they provide a theoretical foundation for a computer architecture based on a principle entirely different from silicon chips. Chaos computing may also lead to dynamic architectures, where the hardware design itself evolves during the course of computation. Thus, apart from helping us understand brain processes, chaos can also help in designing artificial brains.


6. REFERENCES
1. Crutchfield, J.P. (1993), ‘Critical Computation, Phase Transitions, and Hierarchical Learning’, Santa Fe Institute Studies in the Sciences of Complexity, http://www.santafe.edu/research/publications/wpabstract/199310061.

2. Andras, P. (2004), ‘A Model for Emergent Chaotic Order in Small Neural Networks’, Technical Report Series, University of Newcastle upon Tyne, CS-TR-860, pp. 1-10.

3. Crook, N. and Scheper, T. (2001), ‘A Novel Chaotic Neural Network Architecture’, European Symposium on Artificial Neural Networks, ISBN 2-930307-01-3, pp. 295-300.


4. Potapov, A.B. and Ali, M.K. (2001), ‘Nonlinear dynamics and chaos in information processing neural networks’, Differential Equations and Dynamical Systems, 9(3&4), pp. 259-319.

5. Albers, D.J. and Sprott, J.C. (1998), ‘Routes to Chaos in Neural Networks with Random Weights’, International Journal of Bifurcation and Chaos, 8(7), pp. 1463-1478.

6. Munakata, T., Sinha, S. and Ditto, W.L. (2002), ‘Chaos Computing: Implementation of Fundamental Logical Gates by Chaotic Elements’, IEEE Transactions on Circuits and Systems, 49(11).

7. Freeman, W.J., ‘Chaos In The CNS: Theory And Practice’, http://sulcus.berkeley.edu/FLM/MS/WJF_man2.html.