Kuhn’s scientific values for theory-choice

First, Kuhn argued that there is no way to use theories themselves to choose among conflicting theories.  Well taken.

Second, Kuhn advanced ‘scientific values’ as influences on the choice of paradigm:

  • Accuracy
  • Consistency
  • Scope
  • Simplicity
  • Fruitfulness

Kuhn contrasts ‘values’ with ‘rules’.  Interesting to note here both Wittgenstein’s fascination with rules and Kripke’s interpretation of them within the context of the private language argument.  Wittgenstein placed ‘values’ outside the proper bounds of language, and reserved a deep silence for them.  I’ve always thought that the vast majority, less principled than he, make liberal use of their ‘values’ both when deciding which rule-systems to select and when deciding how to apply rules.

These are from ‘The Structure of Scientific Revolutions’, as cited in Rorty’s ‘Philosophy and the Mirror of Nature’.


Rorty on Hermeneutics

from ‘Philosophy and the Mirror of Nature’, p. 325


“Hermeneutics does not need a new epistemology any more than liberal political thought needs a new paradigm of sovereignty.  Hermeneutics, rather, is what we get when we are no longer epistemological.”


The post-epistemological human.  Left with only hermeneutics.


I have a striking thought here: there has been, or should be, a post-epistemological AI.

Rorty on Pragmatism

from ‘Pragmatism as Romantic Polytheism’ in The Revival of Pragmatism


“Poetry cannot be a substitute for monotheistic religion, but it can serve the purposes of a secular version of polytheism.” p. 23


“There is no such thing as truth.  What has been called by that name is a mixture of agreement, the love of gaining mastery over a recalcitrant set of data, the love of winning arguments, and the love of synthesizing little theories into big theories.” p. 28


These quotes aren’t directly related to this class, but I’m pulling out all of the works I interleaved annotations into over winter break and mining them for quotes.  We’ll see what pattern emerges.


The interesting thing about Rorty’s position in this article (and this is true of much of his work, the later Wittgenstein, etc.) is that it never seeks to justify.  It’s purely descriptive, with a little persuasion.  There’s no need for controls or rules, only the agreement attendant on sufficient communication.  By that I mean Rorty is interested in us understanding him, but he never attempts to defend theses or begin from first principles.  The focus is different from that of other philosophers, and from much of the dialogue between humans, especially in the religious and political realms.


But Rorty’s disposition is at home at a cocktail party, or a barbecue, or a lighthearted discussion amongst friends.  And the capacity to assume that disposition is something very necessary for sentience.  In a way, it is sentience poking out from underneath language, systems, intelligence.  And now you see where I’m going with this: how could we get Rorty’s disposition to poke out from a system?

Neosentience and Umwelt



“biological foundations that lie at the very epicenter of the study of both communication and signification in the human [and non-human] animal.”


Jakob von Uexküll and Thomas A. Sebeok


Each organism has a distinct way of interacting with the world, and this defines the possibility space of its consciousness.  Taken together, the sensory faculties and integrative neuronal functions an organism possesses form a Cartesian product of possibility spaces.  This yields a massive space for sentience and also reinforces Seaman and Rossler’s focus on multimodal sensation for neosentient systems.
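The combinatorics here can be made concrete.  A toy sketch (the modality names and their discrete state counts are my own illustration, not drawn from Seaman and Rossler or von Uexküll): the combined possibility space is the Cartesian product of the per-modality spaces, so its size is the product of the per-modality sizes.

```python
from itertools import product

# Hypothetical sensory modalities, each coarsened to a few discrete states.
# The names and state counts are illustrative only.
modalities = {
    "vision": ["dark", "dim", "bright"],
    "touch": ["none", "light", "firm"],
    "smell": ["absent", "present"],
}

# The organism's possibility space is the Cartesian product of its
# modalities: every combination of simultaneous sensory states.
possibility_space = list(product(*modalities.values()))

print(len(possibility_space))  # 3 * 3 * 2 = 18 combined states
```

With realistic modalities (continuous, high-dimensional, temporally extended) each factor is enormous, which is why the product space grows so fast.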

Stanley Fish on Watson the Supercomputer and Rules


Interesting, his assertion that computers don’t start from a Wittgensteinian ‘form of life’.  Neosentient computers would, and that’s precisely why they’re exciting.

What computers can’t do, we don’t have to do because  the worlds we live in are already built; we  don’t walk around putting discrete items together until they add up to  a context; we walk around with a contextual sense — a sense of where we are and what’s at stake and what our resources are — already in place;  we  inhabit worldly spaces already organized by purposes, projects and expectations. The computer inhabits nothing and has no purposes and because it has no purposes it cannot alter its present (wholly predetermined) “behavior” when it  fails to advance the purposes it doesn’t have. When as human beings we determine that  “the data coming in make no sense”  relative to what we want to do, we can, Dreyfus explains “try a new total hypothesis,” begin afresh. A computer, in contrast, “could at best be programmed to try out a series of hypotheses to see which best fit the fixed data.”

Fish draws a line between computation and (though he doesn’t use this term) sentience.  How might we move beyond mere computation?  The contingency of large possibility-space interfaces in a network-brain (see my emergence through autonomy post) could generate the flexibility that Fish speaks of.  Watson, as impressive as it is, is still a hardwired network of binary processing chips.  If we could make the unit of processing a much more complicated thing, a simulated neuron, we’d be much more likely to endow the system with the properties of living systems.
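To make “a simulated neuron” less abstract: even the simplest standard model, a leaky integrate-and-fire unit, carries continuous internal state and history, unlike a stateless logic gate.  A minimal sketch (the threshold and leak parameters are arbitrary illustrations):

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron (illustrative parameters)."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # continuous internal state
        self.threshold = threshold
        self.leak = leak            # fraction of potential retained each step

    def step(self, input_current):
        """Decay, integrate input, and fire (returning True) at threshold."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after a spike
            return True
        return False

# The same constant input yields output that depends on accumulated history.
neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]
print(spikes)  # spikes on steps 3, 6, and 9, once potential accumulates
```

The point of the sketch is only that the unit’s output is a function of its past, not just its present input, which is one small step toward the flexibility Fish describes.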

Brains and Super Brains: Seaman and Rossler

Second Class of Brains
Almost no one knows that there is not only a brain equation for solitary brains, but there is also one for “super” brains made out of many brains in a group. Super brains are still autistic, innocent like the mole rat society which implements the still to be written down super brain equation.
Seaman and Rossler, Neosentience, p. 233


  1. The Brain Equation.  If there is a brain equation, I think it will be kind of an average measure of network behavior.  I suspect it’s computationally irreducible in Wolfram’s sense.
  2. Super Brains.  An interesting idea, especially the autism.  Can demes or species acquire consciousness the way that we, colonies of unicellular clones (humans), acquire consciousness?  The individual would need to be subsumed, subjugated.  Some individuals would be the equivalent of skin cells, etc.  And the system would need to be resilient to malevolent cancers.
  3. Super Brain Equation.  Again computationally irreducible.  But see my post on emergent computational interactions.
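For intuition on computational irreducibility in Wolfram’s sense, Rule 110 is his standard example: to know the state at step n, you have to simulate all n steps; there is no known shortcut formula.  A minimal sketch (grid size, initial condition, and step count are arbitrary):

```python
# Rule 110 cellular automaton, Wolfram's canonical example of a system
# whose evolution is (conjecturally) computationally irreducible.
RULE = 110

def step(cells):
    """One update: each cell's new value is the rule bit indexed by its
    three-cell neighborhood (with wraparound at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1          # single live cell in the middle
for _ in range(10):    # no way to jump ahead: iterate step by step
    cells = step(cells)
```

If a brain equation (or super brain equation) is irreducible in this sense, writing it down would not let us predict its behavior any faster than running it.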

Gruber quote on ontology

“An ontology is an explicit specification of a conceptualization. The term is borrowed from philosophy, where an Ontology is a systematic account of Existence. For AI systems, what “exists” is that which can be represented. When the knowledge of a domain is represented in a declarative formalism, the set of objects that can be represented is called the universe of discourse. This set of objects, and the describable relationships among them, are reflected in the representational vocabulary with which a knowledge-based program represents knowledge. Thus, in the context of AI, we can describe the ontology of a program by defining a set of representational terms. In such an ontology, definitions associate the names of entities in the universe of discourse (e.g., classes, relations, functions, or other objects) with human-readable text describing what the names mean, and formal axioms that constrain the interpretation and well-formed use of these terms. Formally, an ontology is the statement of a logical theory.”[1]

Bold is mine.  Tom Gruber, Stanford, from here

So to develop AI further, and enable an electrochemical computer, we need a boundless ontology.  Or perhaps boundless/finite and bounded/infinite are each acceptable.  Either way, the logical structure needs to have an exponentially emergent universe of discourse.
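Gruber’s definition can be made concrete in a few lines.  A toy declarative ontology (every name here is my own illustration, not Gruber’s): the universe of discourse is exactly the set of representable objects, and the axioms constrain which assertions are well formed.

```python
# A toy ontology in Gruber's sense: representational terms plus formal
# axioms constraining their use. All names are illustrative.
ontology = {
    "classes": {"Agent", "Artifact"},
    "relations": {"creates": ("Agent", "Artifact")},  # (domain, range)
}

# Assertions made in the declarative formalism.
facts = [("creates", "rorty", "philosophy_and_the_mirror_of_nature")]

def well_formed(fact, ontology):
    """Axiom check: a fact may only use a declared relation."""
    return fact[0] in ontology["relations"]

# The universe of discourse: the objects the facts can talk about.
universe_of_discourse = {term for fact in facts for term in fact[1:]}

print(all(well_formed(f, ontology) for f in facts))
```

An “exponentially emergent universe of discourse” would then mean a formalism in which this set grows by composition of terms rather than staying fixed at specification time.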

It seems that the combination of a vast universe of discourse and a powerful entrainment mechanism is central to human acculturation.  Can we specify or design their counterparts in an artificial system?