First of all, some pol sci 101: the state is run for the benefit of its citizens. Where it ceases to be so run, it loses validity, and the citizens can and should withdraw their consent, right up to the point that armed revolt becomes legitimate if – as happened in the British occupation of Ireland – the occupier continues to defy the expressed democratic will of the people.
Since 9/11, we have seen a quickening of another impulse: the state as a hoarder of information garnered about its citizens, on whom it feels empowered to spy. Increasingly, any surplus created by the economic activity of these citizens is diverted into a disembodied and criminal financial services “industry”, which is again attempting a comeback and will create the winner in the current US presidential “race”. Many brave hacktivists have defied this impulse, sometimes paying with long jail sentences. Other hackers are of course destructive adolescents, about which more later.
The USA was explicitly set up by radicals who wisely judged that anyone who preferred security over liberty deserved neither. The EU, au contraire, has just inflicted a massive debt on Ireland that vastly outweighs the EEC/EU's paltry contributions through structural funds, “research” grants, etc. The research in particular generally competes with and destroys vastly superior native work.
Universities are meant to be loci of free speech, where the state amplifies the freedom to allow proper intellectual development and checking of ideas. That is the main reason that this tenure fight was initially worth the candle; however, DCU morphed into something very dangerous indeed after SIPTU's idiocy in 2003 and this post is an attempt to illustrate that. When we set DCU up, it was essentially as a group of cultural nationalists and we could have kept it there, sans SIPTU.
My home institution is now Stanford. It features a variety of opinions, ranging from professors who took refuge from UC Berkeley when asked to sign a loyalty oath to, at the other extreme, the Hoover Institution, once home to Danny O'Hare and now Condi Rice. We disagree with each other, and are encouraged to do so. Such disagreement would not be allowed at the center about to be put in place at DCU. Stanford's motto features freedom; it is also commercially the most successful university in the world. Freedom and success go together in this world.
The Irish state should focus on producing a plan, and accompanying narrative, that generates respect and consent from its citizens, not set up a new spook center. That narrative would stop the “going postal” incidents that are beginning to proliferate; it would also stop the disaffected youth attempting something like the following, assembled from material culled over the past fortnight from “The Daily Show” et al. by someone not even interested:
1. Use something like SQL injection, mentioned on Jon Stewart, to wage cyber-warfare against targeted sites.
2. Set up a Google e-mail address and claim responsibility on a venue like boards.ie as the Cyber-IRA, with whatever moniker one chooses – “P. O'Brien”? The famous recent FBI sting featured a boards.ie poster.
3. Disconnect the internet connection for the night, generating a new IP address.
4. Continue as before.
5. Alternatively, a more secure option: travel using cash only, and stop in different towns using their computers. Etc.
Why not just introduce a just system of redistribution in our society and stop trying to spy on us? Then the above will almost certainly not happen.
Monday, June 25, 2012
DCU is the wrong place for sensitive research like this
Let's get the narrative straight again:
- An attempt, unprecedented in world history, to introduce summary dismissal for academics was made at DCU starting in 1995 or so;
- This included DCU's president arguing in the High Court that he had the right summarily to dismiss anyone working at DCU;
- Eventually, this dark initiative was stopped at the Supreme Court, mainly because the current Chief Justice (who obviously strategically delayed both the verdict and the announcement of the remedy in the Cahill case) is vehemently in favor of academic freedom;
- The rest of DCU's criminality – including, and indeed especially, abuse of students – is documented on this blog.
So now we find that DCU is chosen to conduct research into “terrorist” groups. Nothing wrong there, you might say; it will prevent deaths. Don't want any more Breiviks, right? Of course, Breivik, like McVeigh, Kaczynski, etc., would know how to fly below the radar.
More troublingly, the lead in the study was outed – by Stratfor itself – as a subscriber, using her DCU address, to the database of that CIA-emulating agency.
The rampant criminality at DCU for the best part of a score of years should mean that it is prevented from snooping on our most sensitive communications.
Seán Ó Nualláin PhD, Stanford, 25ú Meitheamh 2012
Saturday, June 23, 2012
Inner and outer empiricism in consciousness research
Along with many other academics, I have been compelled to sign over
copyright of much of my work (not my books, as it happens, due to a very
clever Dublin copyright lawyer who also handles my music).
This paper was published in 2006 in the reputable "New Ideas in Psychology":
http://www.sciencedirect.com/science/article/pii/S0732118X06000195
and benefitted from an excellent commentary in a follow-up article;
https://iris.nuigalway.ie/uat/!W_VA_PUB_OTHER.EDIT?POPUP=TRUE&object_id=1126289
Here it is;
Inner and Outer Empiricism in Consciousness Research
Seán Ó Nualláin
New Ideas in Psychology, Volume 24, Issue 1, April 2006, Pages 30-40
With great fanfare, a “new” science of
consciousness has been launched, not to say inflicted on us, starting
perhaps with the first Tucson conference in 1994. Debate has focussed
around issues like neural correlates of consciousness; what can only
be described as extreme experimentation with, to take one example,
ferrets' brains being rewired to see whether visual cortex can behave
auditorily, and vice versa, has been legitimized. In the meantime,
more thoughtful researchers will admit that we have not identified a
single neural correlate of consciousness; that consciousness studies
has become a canvas on which we project all the problems of the
cognitive sciences; and, indeed, that the major effect of all this
debate is possibly the global warming caused by the peregrinations of
the stars of the discipline. In the meantime, simple questions like
the ones Kafka asked a century ago about what it feels like to be a
cog in a bureaucracy remain unasked.
In
this paper, we study a possible new perspective on methodology in
this area; between “inner” empiricism, an attempt to isolate
exact conditions and methodology for a science of experience, and
“outer” empiricism, a projection of the methodology of Galilean
empirical science onto phenomenal space.
If
there is a science of consciousness distinct from cognitive science,
that science of consciousness is inevitably committed to inner
empiricism, however crude such inner empiricism may initially be. In
fact, the claim made by such researchers as Daniel Dennett (1992)
that introspective data may be scientifically disregarded is
equivalent to the claim that outer empiricism is sufficient to handle
all the phenomena of mind. In other words, we can study such
phenomena as qualia by projecting the methods of classical science
into phenomenal space and thus, incidentally, “disqualifying”
them. In particular, Dennett (1988) has great difficulties with the
notion of qualia. Invoking the name of his teacher, he insists that
qualia should be “quined.” The rationale is essentially that of
William of Ockham: nothing can be said about qualia from the
scientific, third-person perspective (Flanagan, 1992, p. 61), and
they have no explanatory role. He betrays his behaviorist origins:
since qualia have no effect on external behavior, we can deny their
efficacy.
Daniel
Dennett's position is considerably strengthened by its context in an
overall Darwinian Weltanschauung: Design, including
consciousness, is algorithmic. To refute Dennett, we must produce a
more encompassing and/or more valid Weltanschauung (on which,
more later).
Dennett
(1995, p. 59) allows himself considerable latitude as to the
definition of algorithmic: “Are there any limits at all on
what may be considered an algorithmic process? I guess the answer is
no.”
However,
the algorithmic level has a definite role when it comes to answering
questions about design: “If what strikes you as puzzling is the
uniformity of the sand grains or the strength of the blade, an
algorithmic explanation is what will satisfy your curiosity.” (p.
59)
For
Dennett, as he claims also for Darwin, “evolution is an algorithmic
process” (1995, p.61). I have written an extended critique of the
specifics of Dennett's view of consciousness in Ó Nualláin
(1995, pp. 293-295); similarly, his views on what
constitutes an algorithm, and evolution, seem destined to ensure that
all objections fade into the warm blur of his personality, and
Conway's Game of Life. What we are concerned with here is
something quite specific; i.e., first-person methodology. So the
details of that critique are not relevant. What is relevant are the
denials of qualia and thus of the whole first-person enterprise and
how we respond to these denials.
For
Dennett, there is no need to invoke first-person accounts of
anything. Subjectivity does not really exist; it just seems to
exist, and he is concerned to dispel the illusion. He can point to
the vast amount of data supporting the neo-Darwinian position to
buttress his case.
As
hard science, a position like Dennett's is, thus, possibly
irrefutable. Arguments like David Chalmers’ (1996) can be adapted
to prove that there are no “scientific” data that require
subjectivity for their explanation. However, at this point we have
left a fundamental and utterly certain fact behind: that each of us
knows that we are, that “I am.” The difficulty this presents to
conventional science was pointed out with great elegance by
Kierkegaard and Dostoevsky. The latter insists on an “I,” however
inelegant, that reason, however elegant, cannot handle.
Kierkegaard is as scathing as the Dostoevsky of Notes from the
Underground about the possibility of science ever encroaching on
the higher realms of subjectivity (as distinct from those pointed to
in work like that of John Taylor's referenced below).
Kierkegaard’s
masterpiece, The Sickness unto Death, begins; “Man is
spirit. But what is spirit? Spirit is the self. But what is the self?
The self is a relation which relates to its own self... the self is
not the relation but that the relation relates to its own self... So
regarded man is not yet a self” (1941, p.17).
Confused?
You're meant to be: “.. it is the practical work of religion to
make it possible for natural man truly to experience his own
nothingness, his own lack of being” (Needleman, 1982a, p. 36).
This
experience is the first step on the royal road to consciousness. The
first-person methodologies we are about to examine in this article
begin with an experience of nothingness. These methodologies have
historically been embedded in the esoteric core of religious
traditions. Let us continue with the Danish proto-existentialist. He
is concerned with what he sees as the religious task of pointing out
our nothingness, and then: “For Kierkegaard, the highest truth of
which man is capable is an event that takes place within the psyche
when consciousness simultaneously confronts God and the ego”
(Needleman, 1982b, p. 207).
In
the science of consciousness, we are trying to do justice to both
scientists and existentialists. Let's see what such a science might
look like.
Consciousness
studies and cognitive science are two distinct, if overlapping areas.
Cognitive science studies that aspect of mind which can be
informationally described; consciousness studies attempts also to
provide a framework that can do justice to phenomenal experience.
Work like John Taylor’s (in Ó Nualláin et al.,1997)
exemplifies how cognitive science and neuroscience can contribute to
consciousness studies; he states, for example, that we focus on only
one thing at a time because only one informational channel exists. It
is the claim of researchers like Penrose and Hameroff that links
directly from quantum mechanics to subjectivity can similarly be
posited; for example, coherent quantum states are required to explain
our (supposed) continuous sense of self. (There may be a more
fundamental contribution from Von Neumann’s work and the analysis
of observer status. This we examine below.) Anthropology has vast
amounts to say about different types of experience across cultures,
and its findings may eventually be subsumed into cognitive science.
The distinction between the content of consciousness and
consciousness itself becomes paramount here; computer science,
similarly, may tell us much about the former.
Computer
science can significantly inform us about this because, pace
Dennett, the notion of an algorithm is a non-trivial one and we
can describe the vast majority of what goes on in our minds
algorithmically. Let us, for the sake of argument, slightly simplify
matters and say that an algorithm for X is a way to effectively
compute X. As you read these lines, edge-detectors are parsing the
visual scene, letters and words are being extracted, sentences are
being parsed and, finally, a representation is being delivered to
consciousness, a meaning representation which will hopefully coincide
with my intention. The processes which combine to produce this
representation, this content of consciousness, are algorithmic and
unconscious. What then is consciousness, as distinct from the
contents thereof?
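To make the intended sense of “algorithm” concrete, here is a deliberately trivial sketch (my own illustration in Python, not part of the published paper): an effective procedure for computing X, where X is the set of edge locations in a one-dimensional intensity signal – a toy version of the edge detection mentioned above.

# Toy illustration only: an "algorithm for X" as an effective procedure
# for computing X. Here X = the edge locations in a 1-D intensity signal.
def detect_edges(signal, threshold=10):
    """Return indices where adjacent intensity values differ by more than threshold."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

# A flat region, a sharp jump up, and a sharp jump back down.
intensities = [5, 5, 6, 50, 51, 50, 7, 6]
print(detect_edges(intensities))  # -> [3, 6]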
This
point is so central that its explanation could well occupy the rest
of this paper. It suffices to say that ―
at least in this writer's opinion ―
the only justification for the existence of a science of
consciousness is the existence of a distinction between its contents,
the subject matter of cognitive science, and consciousness itself. We
shall find it necessary to travel by a rather circuitous route to
this particular conclusion; specifically, we revisit the experience
of self and nothingness. It is possible that the exercises that
constitute the discipline of inner empiricism are useful only to
those who, with Kierkegaard, know that man is not yet a self, that they
themselves lack being, lack presence. (I am aware that this sentence
will have the same effect that a mathematical equation famously has
in a popular science text—it will halve the readership!) I cannot
find words more cogent than those of Needleman (1988, p.21):
... the traditional teachings
- as expressed in the Bhagavad Gita, for example- make a fundamental
distinction between consciousness on one hand and the contents of
consciousness such as our perception of things, our sense of personal
identity, our emotions and our thoughts in all their color and
gradations on the other hand.
Needleman's
work, as we see again below, has shown
an
increasing concern that this distinction be experienced, rather than
intellectualized, and that this experience be continually
re-engaged:
... anything in ourselves, no matter how
fine, subtle or intelligent... becomes transformed into contents
around which gather all the opinions, feelings and distorted
sensations that are the supports of our secondhand sense of identity.
In short, we are told that the evolution of consciousness is always
'vertical' to the constant stream of mental, emotional and sensory
associations. (1988, p. 21)
Indeed,
to "understand" consciousness, we must continually strive
for a better state of consciousness.
The
philosophy of mind allows us to segue into inner empiricism, the
matter at hand. In particular, Chalmers’ zombie arguments strike me
as having an unintended consequence as already noted. Chalmers can
conceive of a zombie equivalent of himself who, without any
subjective experience, can yet know that his bibulous behavior now
will lead later to a headache.
(I
must admit that the mental act by which David can conceive of such an
outcome is outside the range of this writer. For me, to have a
headache requires that I, as subject, experience something specific.
However, I am also willing to concede, with Husserl, that I am a
perpetual beginner in phenomenology.)
Chalmers’
stance is vehemently in favor of the strong AI position that there is
a non-empty set of computations, executions of which constitute a
mind (Chalmers, 1996, p. 314). However, the contention that
experience always must be taken as over and above the facts of the
world finds a place (p.333) in Chalmers' opus. I believe these views
to be incompatible; let us focus on the zombie-related point. What
Chalmers is above all doing is making a distinction between
subjectivity itself and the contents of consciousness. This
distinction is not a priori accessible to the rest of us as to him.
(One is reminded of the claims that Bob Dylan, the
genius that he is, wrote his songs without even having to
think.) Chalmers can be read as an inner empiricist, in the tradition
of the modern founder of this field, Jacob Needleman: “He is
calling him to remember what the Zen Buddhists call the original
face... It is not metaphor. There is a face within us, a source of
attention and consciousness, that is eternal. It is a literal fact.
A material fact” (Needleman, 1998, pp. 54-55). Moreover, as in
Descartes’ Meditations, what is happening here is actually a
mental exercise.
The
imperative is to distinguish the experience from the content of
experience. Why do this? Essentially, because like techniques in
conventional science, it leads us to ever more valuable insights.
What's being argued, then, is that inner empiricism is a discipline,
a “science,” and that it has been practiced with great
sophistication in the past. The results of these explorations are
codified in religious traditions.
A
consensus from these traditions is that consciousness can only be
understood in the context of a
Weltanschauung. One’s own experience has a cosmic
dimension. By "Weltanschauung" in this context, I
intend the classic Bergsonian notion of an all-encompassing
worldview. However, I emphasize also that one's "subjective"
experience should be seen in this worldview as manifesting a cosmic
dynamic. As has so often been the case in this article, let us cut
straight to Needleman for an exemplification:
In Judaism, Christianity and Islam, we are
most familiar with its expression in the teaching, that man is made
in the image of God — God, the Creator and preserver of the
Universal Wholeness in its gradations and levels. The traditions of
India speak of the Divine, Cosmic Man whose dispersal into fragments
constituted the creation of the world and whose re-collection is the
sole essential task of human life. In Buddhism we find the doctrine
that all the levels of being, from mineral up through the Gods, are
contained in Man — Man, the center and Man the all-embracing void.
The traditions of China revolve around the idea of the King, the
Great Man who governs the parts of existence. One could go on with
this listing. (Needleman, 1988, pp. 22-23)
Yet,
man is not yet a self. I am not yet that man who can be a microcosm.
I need to work on myself, to engage in the activities of inner
empiricism which are simultaneously an exploration and an arduous
path of self-transformation. In order to know consciousness, to be a
self, it is necessary to elevate my level of consciousness.
By
complete contrast, Dennett’s system is a Weltanschauung that
brackets the subjective; I will not be so opaque as to say that
Dennett denies its existence in any other than a methodological
manner. What he is trying to achieve is a third-person account of
all that is, including those states we currently class as subjective.
The key is neo-Darwinism, construed as the algorithmic creation of
what appears to be design. Subjectivity is a bag of parallel
algorithms with a virtual machine riding shotgun on them to finesse
the type of serial purposive experience or behavior that we attribute
to ourselves. Or, in other terms, man is not yet a self. Dennett
(whose ideas closely resemble Baars’ and Smolensky’s on this
score) may be closer to Needleman and to the tradition of inner
empiricism than he thinks. What he depicts is fallen man, man without
a self. Like Michael Gazzaniga, as we see below, his mistake is that,
having realized his and our basic nothingness, he refuses to
countenance anything that can fill it.
What
can fill it, according to the traditions, is nothing less than the
cosmos itself: “Zen insight is not our insight, but Being's
awareness of itself in us” (Merton, quoted in Conner, 1987).
Or
from the sanatana dharma tradition: "The Rishi attempted
to realize Brahman, the hidden power of the universe, in himself"
(Griffiths, 1982). Atman, this realization, is his own hidden ruler.
The
way to this realization seems at first sight full of paradox: “It
is the Not-I that is most of all the I in each one of us” (Merton,
quoted in Conner, 1987).
There
is a remarkable convergence across the religious traditions here: in
the Christianity of John's gospel, the hierophant Jesus proclaims:
“Before Abraham came to be, I am” (John 8:58).
Many
mental exercises are proposed to help induce this realization.
Chalmers' unintended instances we have looked at; for Gurdjieff, the
starting point was the shocking realization of oneself as automatic
and fragmented. The final insight can be gently introduced with such
metaphors as this one from the Upanishads: “Two birds, inseparable
friends, cling to the same tree. One eats the sweet fruit, the other
looks on without eating” (quoted in Griffiths, 1982).
Much
more sophisticated such exercises are also implicit in the use of
koans; finally, the physical and emotional "brains" are
controlled by movements, work, and music right across the religious
spectrum. The aim is always to point to the reality underlying the
"empirical self", the "imposter", or what
Gazzaniga calls the "interpreter".
We
now come to one of the central positive contributions of this
article, which is to rectify Gazzaniga's and others' overlooking of
these traditions. More specifically, we wish to investigate the
possibility of creating a science of consciousness that can do
justice to the myriad findings emerging from neuroscience and the
cognitive sciences on one hand, and the knowledge carried by the
religious traditions on the other. We ultimately wish to confront an
even more awesome question: how can I use the findings of outer
empiricism and the exercises of inner empiricism to be more fully
myself? And what is the relationship of the “myself” which
emerges from this engagement with first-person methodologies, and
that which I am now? Inner empiricism is of course by definition a
first-person methodology, but the person inevitably becomes
transformed by it.
I
wish to make these points absolutely clear. Outer empiricism is the
set of conventional scientific techniques devoted to the “easy”
problem, the projection of standard Western science onto phenomenal
space. Inner empiricism is applied experientialism, a set of
exercises operating on the physical, emotional, and intellectual
parts of the psyche. Using these exercises, one is invited to arrive
at a better concept of the nature of consciousness by transforming
oneself, by becoming a self. Resources exist in the religious
traditions to help this process of self-transformation. The first
step is the most difficult one: that of realizing there is a
problem, that I am not. Only then is one open to the suggestion that
there is a substrate of consciousness, an observer, independent of
mental contents. It is precisely the perception of the ephemeral
nature of these contents, and one's identification with them, that
provides the dynamic toward self-integration. We just looked at some
catalysts for these perceptions (e.g. John 8:58).
Of
course, all of this is realization at purely the intellectual level.
Let us look briefly, (and given this print medium, inadequately) at
physical exercise within various traditions. Advaitins will
stress that there can't be yoga without yoke-ing (a jeu de mots
that works in practically all Indo-European languages, westward to
the Gaelic equivalent chuing). One's body is to be experienced
as continuous with one's physical environment. One exercise to induce
this realization is to conceive of physical sensations as fish in an
ocean encompassing both body and physical environment. Above all, one
is to rethink what the body is (answer, ultimately, a container
metaphor imposed on these sensations.) Once that is done, the final
non-dual truth is revealed; the fish also are composed of water.
Another yoga exercise designed to confound proprioceptive habits is
to conceive of one's chest as co-extensive with the wall in front,
and one's back as co-extensive with the wall behind. These
visualization exercises inform the experience of physical postures,
where energy-flows through the body are attended to.
Physical
exercises in dual traditions differ in explicit intent. The Gurdjieff
movements, themselves physically complicated, are meant to be done
simultaneously with mental math and devotional prayer. (Peter Brook's
film Meetings with Remarkable Men features these movements,
normally not shown to non-initiates.) Finally, the traditions always
use music to educate the emotional center. Monastic music tends to be
modal (i.e., non-diatonic) and to have a restricted tonal range. The
goal is serenity; the emotional highs and lows used by diatonic music
as it creates and releases tension are inappropriate. Experiencing
such music is an interesting first-person exercise; its simple
melodies initially attract, then become boring, and finally settle,
unmovable, in one's mind. I do not believe it beyond the scope of a
contemporary religious genius to produce a new set of intellectual
exercises using the advances in logic of the twentieth century, modal
jazz, and perhaps the kind of physical exercises developed by Moshe
Feldenkrais. (In particular, Gödel can be seen as a
koan-master).
It
behooves us at this point to look more conventionally at the
traditions. In particular, the eastern approaches to reality called
Karma Yoga, Bhakti Yoga and Jnana Yoga stress, respectively, action,
devotion and knowledge. Yet underlying even these is the quest for
presence.
And when, it is seen that none of these is
complete in itself, one can invent an integral yoga to integrate all
the various paths... But can you see who is this integrator?... Real
integration is only possible through pure and simple observation and
perception - by seeing the fragments. This very seeing is
integration. (Kaushik, 1977, pp. 49-50)
Let
us be utterly clear again. Realizing the existence of the
interpreter, the empirical self, is the royal road to consciousness,
to self-integration. It is the leitmotif of inner empiricism as a
first-person methodology. To arrive at having everything, it is first
necessary to lose all.
Good
science has been defined as the art of making errors that lead to
making further, more interesting errors. All scientific theories are
ultimately proven incorrect. The science of consciousness I propose
that combines inner empiricism with outer empiricism starts with the
following “Copernican” revolution: a substrate of subjectivity is
posited as existing independently of the contents of consciousness.
Copernicus' system famously needed epicycles, just like the Ptolemaic
system it superseded. Similarly, the existence of that substrate may
be convincingly demonstrated only through immersion in a set of
physical, mental and emotional exercises, as proposed in the
spiritual traditions.
There
are good reasons for supplementing Chalmers’ hard/easy distinction
with inner empiricism versus outer empiricism. If the latter is an
error, it is one that facilitates our access to the findings of
millennia of work in the spiritual traditions, rather than an error
that proposes a perhaps doomed search for a single magic bullet.
Let
us revisit the above points s-l-o-w-l-y. First of all, scientific
activity can usefully be divided into "normal" and
"abnormal" phases (Kuhn, 1996). The former features work
within a received paradigm (and no, I'm not going to define that
word): “Normal science does not aim at novelties of fact and theory
and, when successful, finds none” (Kuhn, 1996. p. 52).
However,
the progress of science demands that a different phase must also
obtain from time to time: “... Research under a paradigm must be a
particularly effective way of inducing paradigm change” (p. 52).
In
other words, even “normal” science contains within itself the
seeds of the destruction of the paradigm which regulates its
activity. We do not need to go so far as assenting to a Popperian
falsificationist ethos; the goal in all normal science is
(ultimately) to prove oneself wrong. How does this relate to our
proposal for a science of consciousness? Essentially, in contending
that the proposal of inner empiricism as the central first-person
methodology is, at the most, an intriguing mistake.
The
evidence that the ancients were well acquainted with the wiles of the
“interpreter” is impressive in extent. Its workings are
documented similarly even in spiritual traditions that seem to the
noninitiate to diverge wildly thereafter. Thus, from the advaita
tradition:
(18)... What is this thing called mind.
What is its nature? Such an enquiry will reveal that there is no such
thing.
(19)... 'Mind' is but a name for
thoughts. Of all thoughts, that of 'I' is the support. Therefore,
what is called 'mind' is truly the thought 'I'.
(21)... The 'I' thought, the pseudo -
'I' is inconstant but the real ‘I’ is always there, it is
eternal. (Mahadevan, 1977, p. 90)
From
David-Neels' work on Buddhism: “A person... is an assembly composed
of a number of members” (Walker, 1957, p.76). Walker, like
Lancaster, comments on the Buddhist strand in Gurdjieff's thought:
"The similarity between Buddha's thinking and the ideas we were
learning from Ouspensky was obviously a very close one" (p.77).
I
wish now to confront one of the conceptual tensions bedeviling
“Western” versus “Eastern” thought, tensions that have
damaged interreligious dialogue. For advaitins and others,
consciousness is synonymous with Being and Reality. It is the
ultimate creative principle. For others, consciousness seems
exclusively mental; an event, perhaps, when self-observation has been
achieved, or a result of the harmony of the three centers, (mental,
emotional, physical). I believe that this tension can only be
resolved within a worldview in which the events of one's life are
seen as part of a cosmic dynamic. We find tantalizing hints in von
Neumann about the possibility of grounding the perspective in
physical law.
John
von Neumann's brilliant work on the foundations of QM has found
modern interpreters in Stanley Klein and Henry Stapp (1993). Both of
these point to the sequence of preparation - evolution - measurement
of a quantum system (see also Ó Nualláin, 2002). At
measurement, or observation, von Neumann showed that the line
dividing observer and system can consistently be drawn at any point.
This leads Klein to an exaltation of duality; there is an absolute
distinction between observer and observed. However, advaitins
like Francis Lucille (see below) have pointed to a non-dualist use of
von Neumann. It is indeed possible that von Neumann's work affords an
at least metaphorical framework in which these apparently
diametrically opposed perspectives can be synthesized. I hint below
at the framework developed in Ó Nualláin (2002), in
which this synthesis is explored, in a neo-Hegelian model, at a
psychological level.
For
the advaitin, consciousness is the fundamental reality. It is
the wave function just before breakdown, or the observer just before
consciousness, which are the same thing. It is spread throughout the
entire universe, and contains infinite possibilities. What we know as
the world, and the events of our lives, are mere superimpositions on
it. The metaphor Ramana Maharshi uses is that of the blank cinema
screen, which remains the same as the movie is projected on it. In
meditation in a correct setting, this reality is allowed to reoccupy
those parts of the psyche from which the ego customarily bars it.
Ultimately, there is no distinction between subject and object.
Let
us revisit these points. Francis Lucille, who comes from the advaita
stream initiated in the West by Jean Klein (1988) exemplifies the
viewpoint I have just outlined. Unfortunately, he has yet to publish,
so what follows is a potted summary of his tapes. Lucille claims to
have found a key that gives a unity to his spiritual experience and
knowledge of QM. Let us revisit the notion of a Weltanschauung; it is
a worldview in which one's phenomenal experience is seen in the
context of a cosmic dynamic. Of course, to adapt the character Haines
in Joyce's Ulysses, physics is a happy hunting ground for a sick
mind. What makes Lucille impressive is not just his physics knowledge
(gained through an education at the Ecole Normale Supérieure),
but the undoubted presence of the person, which obviously is
incommunicable through this medium.
In
the advaita tradition, being, consciousness and reality are
synonyms. It is important to note that this concept of consciousness
does not connote a disembodied observer; there is no subject /object
differentiation. As we've seen, it resembles a cinema screen on which
all that we experience as ourselves and the world is projected. Let's
try to interrelate this with QM. QM agrees with the ancient idea that
everything consists of vibrations.
Schrödinger's equation describes the determination and evolution processes that regulate the “thatness” of all matter. The solution to this equation is ψ, the wave function.
Now
for the tricky bit. The square of the modulus of the wave function at
any point represents the probability that the particle is (found) at
that point once an observation is performed. However, before
observation, the particle is, as it were, smeared all over the
cosmos, containing infinite potential for any measurable property.
For example, we might decide to measure its momentum property, rather
than position. Lucille's major insight is that this pre-observation
state of the particle is identical with what Advaitins call
consciousness.
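(For the reader who wants the physics stated explicitly, the two standard textbook relations being gestured at here – not drawn from Lucille's tapes or from anything specific to this paper – are the Schrödinger equation governing the deterministic evolution of the wave function between preparation and measurement, and the Born rule governing observation:

i\hbar \, \frac{\partial \psi(x,t)}{\partial t} = \hat{H}\,\psi(x,t)

P(x,t) = \lvert \psi(x,t) \rvert^{2}

The first describes the “smeared”, all-possibilities state; the second says that, upon measurement, the squared modulus of ψ at a point gives the probability density of finding the particle there.)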
Moreover,
this state is identical with the state of the observer just prior to
observation. The idea central to Vedanta, that Atman and Brahman are
one, finds a correlate at the physical level.
Let
us now refer again to yoga. As taught in the advaita tradition, it
serves to alert us to the non-dual reality through body sensing.
Indeed, we can gain further justification for this intuition by
referring to the physical interchange of particles between the body
and the rest of the world. On an intellectual level, we need to be
alert to the wiles of the interpreter, which Lucille calls the ego.
In fact, he claims that we never initiate an action; behavior is
automatic, and the ego steps in to take responsibility for it, in the
move Gazzaniga attributes to the interpreter. The result of this
abdication of will is, I believe, quietism: “We are not the doer
but the witness of doing” (Klein, 1988, p.109).
Similarly,
Krishna instructs Arjuna in the Bhagavad Gita to distance himself
from his actions, to see everything sub specie aeternitatis:
“Never was there a time when I did not exist, nor you, nor
all these kings; nor in the future shall any of us cease to be”
(2:12).
The
reader's stamina has been sorely tested by the switching in this
paper from philosophy of mind to comparative religion to QM; I beg
her indulgence for one more piece of cross-disciplinary speculation,
explored at length in Ó Nualláin (1998). Advaita
teaches not renunciation of the world, but realization through
non-attachment to one's actions as one develops the observer in
oneself. Yet we go through most of our lives actively becoming
through the world, which seems to be full of attractor surfaces that
spur us upward.
The
neo-Hegelian framework I have developed (Ó Nualláin,
1998; ms. available on request) affords a synthesis, using Von
Neumann's work, in which the two impulses are seen as fundamentally
one. Nor does the Orient have a monopoly on non-dualism. Meister
Eckhart’s experience thereof is a starting point. “I am
converted into Him in such a way that he makes me one with his being,
not similar” (Eckhart, quoted in Forman, 1991, p.14).
It
does not take a wild leap to find a continuous line of transmission
between this Rhineland mystic and Schelling, in whom the non-dual
European tradition reaches something of an apotheosis. The difference
from Oriental thought manifest in its progressivist ethos is a minor
detail. What counts is that the union “in which these two
absolutenesses (absolute objectivity and absolute subjectivity) are
again one absoluteness” is posited by Schelling (1978) as by Ramana
Maharshi as the ultimate non-dual reality.
Synthesis
seems at first unlikely. However, I am briefly going to outline the
synthesis I am working on. Advaitins like Jean Klein are
careful to point out that their system cannot help one with one’s
career; it is essentially spirituality for retirees (and none the
worse for that). Similarly, the more authentic Western paths like
Cistercianism insist on a radical renunciation of the world. It is as
if the “acts of alienation” we require to become professionals,
parents and individuals in society are to be thwarted. In physics
terms, the wave-function is to differentiate into subject and object
as little as possible. Focus on the eternal, undifferentiated state
must be maintained.
In
this scheme the movement of consciousness in Gurdjieff's system is
the movement of differentiation. At that moment, the observer is
absolutely distinct from the observed. So it is in life when the
imperatives that the unfolding of Geist (in the Hegelian
sense) forces on us individually are fulfilled. Yet this moment of
fulfillment is of course unsustainable, and the remainder of
Gurdjieff's system facilitates some recollection of Being in the
midst of becoming. To put it in other terms, it allows us the most
subtle and important gift of all; participation in our own lives.
We
are, in a sense, right back where we started. To study consciousness
is not just to engage in outer empiricism, in standard Western
science. It is also to engage in the first person exercises, physical
and emotional as well as intellectual, that have been preserved in
the religious traditions and that we call “inner empiricism.” To
study consciousness, we attempt to become conscious, to become a
self; ultimately, fully to be.
References
Chalmers, D. (1996). The conscious mind.
Oxford: Oxford University Press.
Conner, J. (1987) The original face in
Buddhism and the True Self of Thomas Merton. Cistercian Studies,
22(4), 343-351.
Dennett, D. (1988). Quining qualia. In A.J.
Marcel and E. Bisiach (Eds.) Consciousness in Contemporary
Science. Oxford: Oxford University Press.
Dennett, D. (1992). Consciousness
explained. London: Allen Lane.
Dennett, D. (1995). Darwin's dangerous
idea. London: Penguin.
Dostoevsky, F. (DATE). Notes from
underground. PLACE: PUBLISHER.
Flanagan, O. (1992). Consciousness
reconsidered. Cambridge MA: MIT Press.
Forman, R. (1991). Meister Eckhart.
Rockport, MA: Element.
Gazzaniga, M. (NEED REFERENCE)
Griffiths, B (1982). The mystical tradition
in Indian theology. Monastic Studies, 13, 159-174.
Hameroff, S. (NEED REFERENCE)
Kafka, F. (DATE) The trial. Place:
Publisher.
Kaushik, R.R.P.(1977). Light of
exploration. Ipswich: Journey.
Klein,
J. (1988). Who am I? Longmead: Element
Kierkegaard, S. (1941). The sickness
unto death (Trans. E. H. Hong
and H. V. Hong) . Princeton, NJ: Princeton University
Press. (Original work published 1849).
Kuhn, T. (1996). The structure of
scientific revolutions. [which edition?] Chicago: University of
Chicago Press.
Mahadevan, T.M.P (1977). Ramana Maharshi. London: Unwin.
Needleman, J. (1982a). The Heart of
philosophy. San Francisco: Harper.
Needleman, J. (1982b). Consciousness and
tradition. New York: Crossroad.
Needleman, J. (1988). The cosmos.
New York: Arkana.
Needleman, J. (1998). Time and the soul.
New York: Doubleday.
Ó Nualláin, S. (1995). The
search for mind. Norwood: Ablex. (Revised edition 2002. Bristol:
Intellect)
Ó Nualláin, S., McKevitt, P.,
& Mac Aogáin, E. (Eds.) (1997). Two sciences of mind.
Philadelphia: John Benjamins.
Ó Nualláin, S. (1998).
[title?]
Ó Nualláin, S. (2002). Being
human. Bristol: Intellect.
Penrose, R. (NEED REFERENCE).
Schelling, F. W. J. (1978). System of
transcendental idealism (P.
Heath, Trans.). Charlottesville VA: University Press of
Virginia. (Original work published 1800.)
Stapp, H. (1993). Mind, matter and
quantum mechanics. Berlin: Springer-Verlag.
Walker,
K. (1957). Gurdjieff. London: Unwin.
Saturday, June 16, 2012
Why DCU must be wound down quickly
A recent report in DCU's student paper, "The College View", revealed disturbing facts that many of us had suspected for some time:
http://thecollegeview.com/2012/06/15/dcu-bailed-out-subsidiaries-at-a-cost-of-e9-8-million-in-2010/
Ironically, the misappropriation and misuse of public funds will not surprise anyone - this after all is not just Ireland, but northside Dublin. What is really disturbing is that these funds were run through a company called "DCU Commercial Ltd". While we are assured that DCU owns this entity, it is not actually located at DCU. If one checks the Irish companies' database www.cro.ie, one finds that DCU Commercial Ltd was set up at the same time as the university, and is based IN THE ARTHUR COX BUILDING! Cox is DCU's mafia law firm, the author of three illegal disciplinary statutes and about a thousand employment contracts that would not survive a court challenge.
This follows a run of very strange examples of the tail wagging the dog. The current president of DCU had his e-mail diverted to Marion Burns, head of HR, and as close to a functional illiterate as exists in upper management anywhere. It is indeed possible that she was one of the levers through which Cox - whoever they ultimately are - controlled DCU. The late Mella Carroll, the High Court judge and chancellor (i.e. head) of DCU whose ignorance of communications critical of her within DCU's upper management became a national story, did not have access to her own e-mail; it was diverted to Tom Hardiman, at that time head of IBM and later of another Dublin law firm. Hardiman, as head of DCU, presided over the introduction of the illegal statute no. 3, thrown out by Ireland's High and Supreme Courts at huge expense to the taxpayer.
There was no reason for DCU Commercial Ltd to exist in such independence; DCU is itself a legal entity, which can "sue or be sued" according to its charter. The explanation invited is that the "education" going on at DCU - regardless of the occasional heroics of the scholars there - is simply a fig leaf to hide a festering wound in the Irish state, an abyss of crime and abuse that will gradually reveal itself over the next few years.
A final comment: DCU was never properly accredited, so its degrees are in themselves not worth the paper they are written on. However, the many diligent and bright students who went through there have a right to have their degrees transferred to a properly constituted entity, like a new college in the NUI.
Seán Ó Nualláin PhD, Stanford, 16ú Meitheamh (Lá Bláth) 2012
PS It is now increasingly plausible that this scenario is in fact correct;
1970s: Danny O'Hare, Ed Walsh and their Knights of Columbanus/Columbus/FF buddies decide to save Ireland by setting up - ahem! - a new type of university.
1980s: Both NIHEs are attracting staff with higher salaries and pushing students into very pressurized degree programmes that make the students ideal corporate fodder (I rarely meet a graduate of the NIHED/DCU computing degree still working at the technical end in computing, as they're burned out).
1985: The FWUI force Danny into accepting tenure and academic freedom. His master-plan ever after is to get rid of both.
1989: After a rather compromised accreditation, NIHED becomes DCU. We now enter phase two: get rid of all the staff who set it up, replace them with non-tenured people, and run the education wing as a front for DCU Commercial Ltd, headquartered at the building named after Arthur Cox.
A slight problem: DCU is in fact running very well as a normal small university. Ah well - breaking eggs and omelettes, etc.
1994 (approx.): Chuck Feeney, apparently unsolicited, continues his career of interference with universities worldwide by donating approximately $40 million to DCU. It is known that several of Feeney's other "gifts" - like those to the Centre for Public Inquiry and the health service - came with considerable strings attached, and his closeness to Bertie Ahern was such that he was heard threatening Ahern, while the latter was Taoiseach, with repercussions if privatization of the health service was not accelerated. Of course, Feeney claimed to have given all "his" money away by the early 80s.
1995: A new "agreement" with the union is published at DCU; the union was never even consulted, and signatures are actually fraudulently appended, photocopied from the earlier 1985 agreement, to indicate assent - including that of Pat Cullen, who had left DCU five years before and had been in Australia since then. Prosecution for fraud? No way - this is Ireland.
1997: A new Universities Act is passed, requiring only "consultation" rather than agreement with the union over disciplinary procedures. If we fast-forward to 2012, we find DCU management desperately trying to get agreement from the union, as their legitimacy is now nil. Education is for scholars, not management, in real life.
2000: An industrial relations whizz-kid is brought in to implement the final solution: academics to be transferred like superstar soccer players (actually used as an argument by DCU's barrister Sreenan at the Supreme Court and, no, I could NOT have invented this), the university to be used for "inward investment" outside the Department of Education, and students to work as cheap employees doing commercial projects (Aalborg University visit to DCU, 1997).
2000-2012: Cover is given by the Irish state for all these activities, even when the law is clearly being broken with intimidation and bribery of students; no visitor is ever appointed (Mella Carroll was there to frighten off any visitors, whom the law requires to be High Court judges).
2012: Academia internationally is revolutionized by premier universities going online, making all these ill-laid plans irrelevant, even if they had worked. If DCU is to survive, it must be as the normal small university it once was, within the national university system and with regular re-accreditation as happens in the USA. As it stands, it is a menace to the world of scholarship, as well as to its own staff and students.
PPS We are now at the core of the whole issue. I realize this blog comes across as angry, verging on infantile, at times, but it is an honest record of a very significant set of events.
In 2003, there was a chance for staff (faculty and non-faculty together) not just to rescind what turned out to be an illegal disciplinary procedure - which later caused mayhem and was in any case rescinded after the EAT and Supreme Court - but also to take the university back from management. It is appropriate for me to say that I did not let my colleagues down; I refused two deals from management, at great professional and financial cost to myself. The response from SIPTU was to collude with management, and in so doing, to defy the will of their members. That was also the end of DCU.
Friday, June 15, 2012
Code and context in gene expression, cognition, and consciousness
Published in “The
codes of life” edited by Marcello Barbieri (Springer-Verlag)
Seán O Nualláin, lecturer, symbolic systems program,
Stanford; visiting scholar, molecular and cell biology, Berkeley
Abstract/summary
“Code”
and “context” are two of the most ambiguous words in English, as their cognates
no doubt are in other languages. In this paper, we are concerned to bring these
notions down to earth, and use them as structuring schemas to understand
various types of biological and cognitive activities. En route, we consider
work such as Barbieri’s and Smith’s that impinge on our concerns in various
ways. The notion of code, of course, has been discussed with respect to what
constitutes the original genetic code, and indeed what the original
autocatalytic sets were to facilitate the emergence of a biological code in the
first place, and where in organisms they could be located. These issues are not
our central concern here.
We first revisit the main themes of previous work that
provides a new framework for considering gene expression vis a vis language
production and comprehension. It is decided that the gene-expression/language
production analogy is a worthwhile one, and that both evince a useful dichotomy
of code and context. Yet language itself, as exemplified in the case of
verbally able autistic people, is not sufficient for communication. We then
consider what the code for intersubjectivity is, if not language. It is argued
that selfhood, which is the medium in which communication takes place, may be
an artifact of data-compression occurring in the brain. Yet we have managed to
extend this epiphenomenal attribute into a vehicle for “code” by narrating to
ourselves. These narratives, which on a personal level maintain what is often
the illusion of personal potency, likeability, and logical consistency, have on
the social level the capacity to produce consensually-validable identifications
like national sentiment, those parts of gender which are constructed, and other
affiliations. We finish with an extended framework in which qualia –
generically viewed, subjective states – are naturalised in a way that does
justice both to ourselves and the world that we inhabit.
Introduction
In Barbieri (2003) the existence of a multiplicity of
codes in nature is insisted on. A code is defined as a set of rules relating
entities between independent worlds. He argues that along with obvious genetic
and linguistic codes, we must consider splicing codes, signalling codes,
compartment codes and apoptosis codes. These points granted, many apparently counterintuitive
conclusions follow. “Meaning” arises
when any given object is related to another through a code. Therefore, meaning
can be said to inhere at the molecular level when the code is between organic
molecules. At the genetic level, we speak of copy-making; at the protein level,
we can speak of code-making and “meaning”. He would agree with the “language of
thought” proposals from Fodor and his followers (see O Nualláin, 2003, 91-92,
and below) that we can - and, according to Fodor, must
in order to explain many cognitive phenomena – speak about a mental code or
“language of thought” as emphatically as we speak about a genetic code at the
origin of life. Finally, the notion of adapter, defined as “catalyst plus
code”, is introduced. Peirce’s celebrated trio of sign, meaning, and
interpretant can be biologised as sign, meaning, and codemaker. Barbieri (in
press) extends the previous argument to insist that sequences and codes are
“nominable” entities.
An analogous project, arising from an impulse to extend the notion of “computation”, complements Barbieri’s. Brian Smith (1996) is unwilling, a priori, to assert even the existence of objects, let alone that of subjects. His project is an attempt to construct a theory of “computation in the wild”: a notion of computation broad enough to encompass word-processing at one extreme and the most athletic feats of Lisp hackers at the other. After finding all previous accounts of computation qua effective computability, Turing machines, and so on, inadequate, Smith comes to a radical conclusion: a theory of computation is also a general metaphysics. To explain computation as a concept requires a thorough ontology of the world, and metaphysical preparation.
In the beginning, according to Smith, was the raw material for objects;
proportionally extensive fields of particularity (P. 191). A metaphysical
clearing has to occur before we even speak of the existence of different
objects, let alone their relation to each other through a code. He argues that
the cognitive acts of sensation, perception, and syntactic analysis can be
compiled together in a concept called “registration” which results in the emergence
of s-objects (roughly speaking, subjects). “Registration” is of course broader
than cognition, and can occur at “lower” biological levels than the cognitive;
therefore, Smith’s is a hospitable formalism, with respect to this principle at least, for biosemiotics. Objects emerge
when the registration is altered to cater to the possibility of independently
registered “worlds”; to use cognitive science terminology, allocentric versus
egocentric representation. Smith’s conclusion is revolutionary. He argues that
computation is quite as intentional, in the classical Brentano sense of
relating to objects, as any human cognitive process.
Combining
aspects of the two proposals in ways that do violence to neither, we may indeed
speak about “meaning” in a way that does not confine it to human cognitive
process. In this paper, we are attempting to extend analyses such as those of
Barbieri and Smith in a way that psychologizes these analyses in the case of
human cognition, and en route transforms
our notion of subjectivity. So the goal here resembles that of Smith; to
deconstruct and then return our notion of the “subject” or its related term
“self” in the manner that he does for “object”, “true”, “formal”, and so on.
The path is as follows: We first revisit the main themes of previous work that
provides a new framework for considering gene expression vis a vis language
production and comprehension. In many ways, this program is consistent with and
extends Piagetian genetic epistemology (Piaget, 1972). We finish with an
extended framework in which qualia – generically viewed, subjective states –
are naturalised in a way that does justice both to ourselves and the wonderful,
terrible world that we inhabit.
Gene-expression and linguistic behaviour
In our previous work it was noted that the analogy between
gene-expression and language-production is useful, both as a fruitful research
paradigm and also, given the relative lack of success of natural language
processing by computer (NLP), as a cautionary tale for molecular biology. In particular, given our concern with the HGP and human health, it is noticeable that only 2% of diseases can be traced back to a straightforward genetic cause. As a consequence, we argue that the HGP will have to be redone for a variety of metabolic contexts in order to found a sound technology of genetic engineering (Ó Nualláin, forthcoming a; Ó Nualláin and Strohman, forthcoming).
In essence, the analogy works as follows: first of all, at the orthographic or phonological level, depending on whether the language is written or spoken, we can map from phonetic elements to nucleotide sequences. The claim is made that nature has designed highly ambiguous codes in both cases, and left disambiguation to the context. At the next level, that of syntax, we note phenomena like alternative splicing in gene-expression that indicate a real syntactic level in the genome analogous to that in NL. It is argued, in turn, that the semantic level in language corresponds to protein production, and that, in apparent paradox, proteins do not in themselves specify “meaning”. That is reserved for the concept of function in the environment. To take one example, while rock pocket mice in Arizona acquire an adaptive darkening melanin through expression of genes related to the MC1R protein, their fellow species members 750 km away in New Mexico use an entirely different mechanism for the same end. Sibling species are of quite different genetic origin; the swallowtail butterfly mimics its poisonous counterparts using entirely different genetic mechanisms. Conversely, Watson (1977, p. 7) postulated thirty years ago that “a multiplicity of proteins will be generated from ‘single’ genes”. The suspension of disbelief required for the HGP’s hype to take was considerable, given the scientific sophistication of many of the boosters of the project.
There has been an enormous amount of work on the
nature and complexity of the genetic code. Di Giulio (2005) reviews
stereochemical and physicochemical hypotheses about the origin of the genetic
code before coming to the conclusion that the coevolution hypothesis about this
origin, with its reference to the biosynthetic relationship between amino
acids, and corroboration by analysis of metabolic pathways, is most nearly correct. Gene assembly in ciliates has spawned a
massive amount of work (e.g. Daley et al, 2003); the existence of two different
nuclei, micronucleus and macronucleus, in a single cell, and the development of the latter from the former have provided much grist to the mill of formalists
anxious to apply the artillery of formal language theory to genetics. Yet this
will not provide a context-general calculus relating nucleotide to phenotype
any more than generative grammar facilitated the advent of computers that could
translate between arbitrary texts of arbitrary languages at will.
Similarly, in NL, correct interpretation of language
strings often requires the processing of “pragmatic” factors. Conversely,
“semantics” often does yield meaning; indeed there are situations where a
single word or phoneme can give us all the meaning we want, without any
explicit syntactic or semantic processing (Fire!!). Since Austin (1962)
distinguished the “perlocutionary” import of phrases from the other levels of
messages conveyed, it has been clear that sentences like “Would you like to
help me?” are not necessarily amenable to pure semantic interpretation to
arrive at their meaning. Humans who are non-opaque to such indirection quickly
realize that understanding of such sentences initially requires an act of
consciousness, an act of considering themselves as objects in the environment
of the speaker. Of course, as we shall see, many humans never quite get it; at
one extreme, sufferers from severe autism, and at the other, the milder
versions of self-centredness that we classify as Asperger’s.
Are there counterparts in nature? If the nexus of
organism and environment is considered over time, perhaps we can suggest that
there are. While it is absurd to suggest that the swallowtail butterfly is
“trying” to look like others, natural selection in the specific environment
pushes it in a direction that makes it seem as though it had conscious knowledge of its own appearance. In
the schema considered here, the HGP did little more than elicit some
context-independent relations between
keywords and semantic primitives. The cautionary lesson is that the HGP has to
be redone in countless metabolic contexts, just as NLP needs at least another
generation of work before the Turing test can be passed outside wholly trivial
micro-domains.
Some simple, yet very significant, points need to be
made, particularly in the context of the emerging field of evolutionary
developmental biology (evo-devo): First
of all, considered with respect to the Chomsky Hierarchy, the system of
gene-expression with its myriad feedback mechanisms is context-sensitive.
Therefore, to take but one example, the stance in Salzberg et al. (1998) that
there are no long-distance dependencies in the genome is absurd. The issue of
formal complexity in the case of the expression of certain ciliar genomes, as
we saw, has inspired an impressive
corpus of papers. Secondly, the notion of
the “field” is as current in biology, thanks to painstaking observation
of the development of the fruitfly embryo through the expression of Hox genes,
as it has been in Physics post-Faraday. In order to clarify matters, we will
refer to these field effects as due to “domain” for the rest of this paper.
The field effects are considerable. Hox genes, which
determine many aspects of morphogenesis,
operate in a context-sensitive manner with a recursive, phrase-structure
type complexity. A context-sensitive rule for language might specify that in context A, a sentence is to be described as a noun phrase followed by a verb-phrase:
S → NP VP / A __    (i.e., S rewrites as NP VP only in the context A)
Similarly, Hox genes function with genetic switches at
two levels. One set belongs to the Hox genes themselves, and specifies their
expression in the different longitudinal segments of the animal. The other set
of switches involves recognition by Hox proteins in order to vary the expression
of different genes. Thus, to take one example, the BCMP gene will express proteins
belonging to the outer ear, or rib cartilage,
depending on the field. One gene might generate different proteins, according
to Watson (ibid.); conversely, one gene might be controlled by a variety of
other genes, giving rise to recursivity and ambiguity fully as complex as
anything in NL.
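The parallel can be made concrete with a small sketch. The toy grammar below is in Python; the rules and the "contexts" A and B are invented for illustration and are not a model of real Hox regulation. It rewrites the same symbol differently depending on the symbol to its left, in the spirit of the rule S → NP VP / A __ given above.

# Each rule: (left_context, symbol) -> expansion; invented for illustration.
RULES = {
    ("A", "S"): ["NP", "VP"],   # S expands this way only after context A
    ("B", "S"): ["VP"],         # a different expansion after context B
}

def rewrite(symbols):
    """One left-to-right pass of context-sensitive rewriting."""
    out = []
    for i, sym in enumerate(symbols):
        left = symbols[i - 1] if i > 0 else None
        expansion = RULES.get((left, sym))
        out.extend(expansion if expansion else [sym])
    return out

print(rewrite(["A", "S"]))   # ['A', 'NP', 'VP']
print(rewrite(["B", "S"]))   # ['B', 'VP']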
Let us continue with our analysis of the effects of
external dynamics on formal symbolic systems with reference to another
innovation from physics; Einstein’s work on gravitation. Einstein famously
inspired many excursions to view solar eclipses by predicting that a massive
body like the sun would distort space-time sufficiently for stars to appear
displaced from their normal positions. Moreover, this distortion of space-time
would be more and more marked as one approached the surface of the massive body
in question. The argument in the previous papers leading to this one,
similarly, is that restriction of domain causes the layers of language –
phonetic/orthographic, syntactic, semantic, and pragmatic – to get compressed
until, ultimately, a single phoneme can have massive meaning effects. This
restriction of domain is, of course, nothing other than the process of living
itself.
Cognition
Of
course, language is no more the only symbolic cognitive system than proteins
manifest the only biological code. In Ó Nualláin (2003), chapter 7, the
following attributes are predicated of human symbol systems: A hierarchical organization,
formal complexity of a certain degree, a recursive structure, processing within
micro-domains called contexts,
metaphor, emotional impact, ambiguity, systematicity,
duality of structure, the notion of a native language, and creativity. In
western classical music, to take one example, harmonic contexts are defined so
specifically that a dominant seventh chord will always demand resolution to the
tonic. In Jazz, that is of course not the case; songs like “Yesterdays”, to
take but one example, involve circles of dominant seventh chords following each
other like a snake swallowing its tail.
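For readers who want the harmonic point spelled out, the short sketch below (Python; the starting chord and the length of the chain are arbitrary choices of ours) generates such a circle of dominant sevenths, each root resolving down a perfect fifth to the next.

# Pitch spellings are simplified to one name per pitch class.
NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def dominant_chain(start, length):
    """Roots of a chain of dominant 7th chords, each resolving down a fifth."""
    pc = NAMES.index(start)
    chain = []
    for _ in range(length):
        chain.append(NAMES[pc] + "7")
        pc = (pc - 7) % 12        # down a perfect fifth (7 semitones)
    return chain

print(dominant_chain("E", 5))     # ['E7', 'A7', 'D7', 'G7', 'C7']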
In visual art, there is an enormous cultural and historical component as
well. Piet Mondrian can draw a jumble of boxes in a minimalist context and
label the result “Broadway Boogie Woogie”, neatly inviting musical experience to enter also. Braque and
Picasso, some decades beforehand, arrive at a system of depiction that
considers the depicted object from a variety of different perspectives, thus
violating classical canons; the result, cubism, can be considered an innovation
in formal complexity. Likewise, Uccello’s rediscovery of the laws of
perspective in early renaissance Italy adds another layer of formal complexity.
Finally, in their choice of art, the public often chooses the less formally
complex of two forms, when given a choice; the monodic melodies of Monteverdi
gained prominence over Palestrina’s dense harmonies.
Fodor (1975) has vehemently argued for the existence of an
innate, language-independent, language of thought. He insists that computation
over representations is the essence of cognition, and that therefore there must
be a language of thought in which representations are couched for cognition to
occur. He exploits the fact that absolute innatism, like its cognate absolute
idealism, is irrefutable; we must, he argues, possess the complete set of
representations from birth. Essentially, the
argument boils down to the fact that, if concept Y is logically distinct
from concept X, it cannot possibly have been learned as an extension of concept
X; if it is not logically so distinct,
it essentially is concept X. It is therefore either a distinct innate concept,
or an extension of an innate concept. To his opponents like Talmy who insist on
compositional semantics, with new meanings emergent from combinations of old ones, Fodor offers ridicule; it is like saying that “New
York” is simply an up-to-date version of the Yorkshire
town. Talmy remains unconverted.
A coda to Fodor’s “language of thought” argument would
insist on its manifestation in various symbolic modalities like pictorial
representation and music as well as language itself. There is a lot of value in
this argument. We might like to explore whether folk musics around the world
contain equal levels of formal complexity, in the same way that languages do,
to take one example. Therefore, we might conclude that the relative harmonic
and melodic simplicity of Balkan music is counterbalanced by its rhythmic
sophistication, with use of time signatures like 11/8 and 7/8 that are rare
indeed in the “wild”. Conversely,
contemporary “Irish” music (much of which is Scottish) is rhythmically
and harmonically simple, but extremely intricate melodically. In the absence of
this research, we do have some critical evidence in musical scores for how
music is processed in the real world by a sophisticated audience. For example,
Beethoven’s Eroica symphony features the use of diminished chords repeatedly to
change key in its first movement and moves imperceptibly (once the diminished
innovation is accepted) away from its “home key” before returning. In this case, the key changes can be regarded
as shifts in context. Mozart, in his last G minor symphony, uses dissonance a
great deal in a way that foreshadows many much later innovations; this is an
elaboration in “code”.
Gombrich (1959) is explicitly a history of elaborations in code in pictorial representation. Constable’s
landscapes are, according to Gombrich, explicitly scientific. They are a
Popperian attempt to set up refutable hypotheses about the constraints on
visual experiences. We have spent some time on music, vision, and language at
this level because we do not know what the neural code really is. If there is a
unified such code, it must exist at a very high level of abstraction if
implementations in media as disparate as dance, music, architecture, and so on
are to be achieved. One hypothesis, featured in the contributions by Kime, Aizawa,
and Hoffman to Ó Nualláin et al. (1997) is that the neural code is describable
by mathematical formalisms akin to tensors and Lie groups. Specifically, these
formalisms allow entities to transfer between systems of different
dimensionalities while maintaining their quantitative identity. These principles of invariance we can perhaps identify with Kantian quantities.
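A minimal numerical sketch of that claim, assuming nothing more than an orthonormal embedding (the particular matrix below is an arbitrary example of ours, not drawn from the cited contributions): a two-dimensional vector is carried into three dimensions while its length, the invariant "quantity", is preserved.

import math

def norm(v):
    """Euclidean length of a vector given as a tuple of floats."""
    return math.sqrt(sum(x * x for x in v))

# Orthonormal columns (1,0,0) and (0,1,0): the embedding preserves length.
E = [(1.0, 0.0),
     (0.0, 1.0),
     (0.0, 0.0)]

def embed(v2):
    """Carry a 2-D vector into 3-D via the orthonormal columns of E."""
    return tuple(sum(E[r][c] * v2[c] for c in range(2)) for r in range(3))

v = (3.0, 4.0)
w = embed(v)
print(norm(v), norm(w))   # 5.0 5.0: the "quantity" survives the change of dimension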
Therefore, at an explicitly symbolic level, we can say
a great deal about code and context in the symbolic modalities of language,
vision, and music. Of course, the analysis has been extended to other aspects
of human endeavor like architecture and dance. Undoubtedly, code processing is
contextual, and skilled artists play on this to provide us highly ambiguous
structures for aesthetic affect. When it comes to specifying the code at a
neural level, we know little.
Code and context in consciousness and
intersubjectivity
With respect to this
final section, we know even less at the neural level about consciousness
than we know about the production of symbolic expression. However, we have a
plethora of data, available to the most unfunded observer: introspective experience. We will
conclude that, whereas Smith’s (op. cit.) caveats about the necessity for the construction of subjects and objects are correct, the path to a resolution of
this issue may feature the type of non-linear dynamics that he does not
reference.
The argument is that our experience of self reflects a
process of context-specification, as well as generating the illusion of agency,
in the hope it will become reality. We narrate to ourselves continually,
assigning to ourselves the authorship of acts that in fact happened
automatically; we identify ourselves with respect to our nation, gender,
football team, or choice in food from second to second. In actual fact, EEG
work by Walter Freeman, inter alia,
indicates that our gross brain state is changing much faster than that; perhaps 3 to 7 times a second (Freeman, 2003, 2004; Ó Nualláin, forthcoming b). The “conscious moment” at a perceptual level is on
the order of 100 milliseconds; stimuli presented to each eye at a gap greater
than this will not be subject to binocular fusion, and will be perceived as two
different objects. However, the cortex can be destabilised by a saccadic
eye-movement in 3 milliseconds (Freeman, 2004). So we are getting a sample in consciousness of events that, whatever we imagine about them, are happening at a speed that is, most of the time, well beyond our conscious control. It makes
sense to talk of the conscious “perceptual” moment as being in the order of a
tenth of a second, and the “subjective” self-related moment as being perhaps an
order of magnitude longer.
However, we have educational and training systems that
use the accumulated experience of
humanity to facilitate our getting partial control of our mental processes in
restricted contexts. The lavish funds required to establish and run
universities, schools, and research institutes bear witness to just how
distractible we are, and just how much support we need to keep up a train of
thought. One estimate is that we consciously process a few hundred bits out of
the hundreds of billions that impinge on us each second. The major task for the
brain, then, is filtering; context-specification.
Therefore, our phenomenal experience, with its
ever-changing self that yet claims permanent ownership of the whole organism,
is mainly detritus from a process of context-specification that occurred
automatically whole tenths of a second before. The self is fundamentally the
part of this process accessible to the slow rhythms of consciousness. The
pathology of the self mechanism is manifest in many forms of asomatognosia,
where the patient will insist that the left side of her body does not belong to
her, or will with equal determination consistently claim that she has three
hands.
Now
for code. Amartya Sen has recently stressed how we are all multiple; just as
his native India, he stresses, is both Hindu and Muslim, so are we all different
identities at different moments. Indeed, to continue, these identifications are
themselves a code, a snapshot of the Other that allows us to compress much
information. They function at a level above that of language and other symbol
systems. They become more elaborate, as when the 19th century notion
of the nation-state compiled feelings of place and race into one, highly morally ambiguous
entity. At their best, they allow a shorthand for dealing with others, and
indeed with ourselves.
To summarise up to this point, then, at the phenomenal
level, as at the organic molecular level and levels in between, there are clear processes corresponding to context and
code. Perhaps we shall eventually learn to ontologise the world such that our
identifications, and the physiological processes on which they supervene, seem
clearly to reflect the structure of both the objective and intersubjective
world. With this ontology complete, we can identify whatever remains outside
its grasp as our authentic and true selves, inviolable by neural contingency.
It may not be clear that there is actually a problem
to be solved with respect to the code used in communication between people.
Surely the existence and proper use of language is sufficient? Unfortunately,
as exemplified in autism, this is not the case. Williams (1992) is an at times
harrowing account of the transition from autism to the real social world. It is
all the more powerful as its author was at all times verbally fluent:
“My
progress at school was not too bad. I loved letters and learned them quickly.
Fascinated by the way they fitted together into words, I learned those, too. My
reading was very good, but I had merely found a more socially acceptable way of
listening to the sound of my own voice. Though I could read a story without
difficulty, it was always the pictures from which I understood the content”
(ibid., p. 25)
Freeman (2004) anticipates this, and argues that we
effectively need a new type of code to understand intersubjective communication
as mere symbols are not enough. Meaning is the result of synchrony at a neural
level between two individuals, and need not necessarily refer to anything in
the “outside” world. Even with a full semantic analysis of each sentence, we
have not necessarily achieved meaning. He exploits the work of Barham (1996) who argues that biological
systems, like the brain, are often kept at far from equilibrium states and need
high energy oscillators in the environment to stabilise them. Freeman argues that the neural part of this
story has been confirmed with EEG work by himself and others.
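To give a flavour of what "synchrony" can mean formally, here is a toy pair of coupled phase oscillators; the simple Kuramoto-style coupling and the particular parameter values are assumptions of ours for illustration, not Freeman's or Barham's actual model. Despite different natural frequencies, the coupling pulls the pair into a fixed phase relationship.

import math

w1, w2 = 1.00, 1.20        # natural frequencies (radians per unit time)
K = 0.5                    # coupling strength
theta1, theta2 = 0.0, 2.0  # initial phases
dt = 0.05                  # Euler integration step

for _ in range(2000):
    d1 = w1 + K * math.sin(theta2 - theta1)
    d2 = w2 + K * math.sin(theta1 - theta2)
    theta1 += d1 * dt
    theta2 += d2 * dt

gap = (theta2 - theta1) % (2 * math.pi)
print(round(gap, 3))       # settles near 0.2: the pair phase-locks despite different frequencies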
Therefore, the “code” for social life is at a higher
level than mere symbolic expression. As a consequence, the way we mediate our
experience to ourselves, which in general is an introjection of the tropes of
social life into phenomenal space, is going to exploit this code. At first
analysis, the code involves coherent “selves”, as described above, which yet
can change from second to second in phenomenal experience; dialogue with others
stabilises them. Obviously, much further work of the type that Freeman and his
colleagues have done is necessary; however, the case for the existence of a conscious code at a meta-level above the verbal, exemplified by the case of Williams (1992), is one that has many strengths.
Conclusion
Therefore, we can conclude that the duopoly of code and context re-creates itself at each level we have considered in this paper. To ignore context is to risk wasting further billions on doomed extensions of the HGP, on simplistic natural language processing projects, and on programs for curing autism that fail to reflect the subtlety of the disorder in many cases. It is also to risk the generation of a new type of cognitive science, as remote from the reality of the grace and grit of real human existence as its intellectual forebears.
References
Austin, J. (1962) How to Do Things with Words. Oxford: OUP.
Barbieri, M. (2003) The Organic Codes: An Introduction to Semantic Biology. Cambridge: CUP.
Barbieri, M. (forthcoming) "Life and semiosis: the real nature of information and meaning" Semiotica.
Barham, J. (1996) "A dynamical model of the meaning of information" Biosystems 38, 235-241.
Daley, M., O. Ibarra, and L. Kari (2003) "Closure and decidability of some language classes with respect to ciliate bio-operations" Theoretical Computer Science, Vol. 306, Issues 1-3, pp. 19-38.
Di Giulio, M. (2005) "The origin of the genetic code: theories and their relationship, a review" Biosystems 80, 175-184.
Fodor, J. (1975) The Language of Thought. New York: Crowell.
Freeman, W. (2003) "A neurobiological theory of meaning in perception, part 1" International Journal of Bifurcation and Chaos, Vol. 14, No. 2, 515-530.
Freeman, W. (2004) "How and why brains create meaning from sensory information" International Journal of Bifurcation and Chaos, Vol. 13, No. 9, 2493-2511.
Gombrich, E. (1959) Art and Illusion. London: Phaidon.
Ó Nualláin, Seán, P. McKevitt and E. Mac Aogain (eds.) (1997) Two Sciences of Mind. Amsterdam: Benjamins.
Ó Nualláin, Seán (2003) The Search for Mind, third edition. Exeter, England.
Ó Nualláin, Seán (2004) Being Human: The Search for Order, second edition. Exeter, England.
Ó Nualláin, Seán (forthcoming a) "Subjects and objects" Biosemiotics.
Ó Nualláin, Seán (forthcoming b) "Ask not what you can do for your self" Cognitive Processing.
Ó Nualláin, Seán and Richard Strohman (forthcoming) "Genome and natural language: how far can the analogy be extended?" Perspectives in Biology and Medicine.
Piaget, J. (1972) Principles of Genetic Epistemology. London: Routledge.
Salzberg, S., D. Searls, and S. Kasif (1998) Computational Methods in Molecular Biology. Elsevier Science.
Smith, B. (1996) On the Origin of Objects. MIT Press.
Watson, J. (1977) Cold Spring Harbor Laboratory Annual Report.
Williams, D. (1992) Nobody Nowhere. New York: Random House.