
Origins of Mind: 05

How do humans first come to know simple facts about particular physical objects?

The question for this course is ... well, our current question is about physical objects: how do humans first come to know simple facts about particular physical objects?
In attempting to answer this question, we are focussing on the abilities of infants in the first six months of life.
What have we found so far? ...
Here’s what we’ve found so far.
We examined how three requirements on having knowledge of physical objects are met. Knowledge of objects depends on abilities to (i) segment objects, (ii) represent them as persisting and (iii) track their interactions. To know simple facts about particular physical objects you need, minimally, to meet these three requirements.

Three requirements

  • segment objects
  • represent objects as persisting (‘permanence’)
  • track objects’ interactions
The second discovery concerned how infants meet these three requirements.

Principles of Object Perception

  • cohesion—‘two surface points lie on the same object only if the points are linked by a path of connected surface points’
  • boundedness—‘two surface points lie on distinct objects only if no path of connected surface points links them’
  • rigidity—‘objects are interpreted as moving rigidly if such an interpretation exists’
  • no action at a distance—‘separated objects are interpreted as moving independently of one another if such an interpretation exists’

Spelke, 1990
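Read literally, the first two principles are ‘only if’ conditionals about surface points. As a rough first-order rendering (the formalisation and predicate names are mine, added only to make the logical form explicit), let x and y range over surface points, let SameObject(x,y) mean that x and y lie on the same object, DistinctObjects(x,y) that they lie on distinct objects, and Connected(x,y) that a path of connected surface points links them:

\begin{align*}
\textit{cohesion:} \quad & \forall x \, \forall y \; \big( \mathit{SameObject}(x,y) \rightarrow \mathit{Connected}(x,y) \big) \\
\textit{boundedness:} \quad & \forall x \, \forall y \; \big( \mathit{DistinctObjects}(x,y) \rightarrow \neg \mathit{Connected}(x,y) \big)
\end{align*}

Note the direction of the conditionals: the principles state necessary rather than sufficient conditions, so they constrain, but do not by themselves determine, how a scene is segmented.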

The second was that a single set of principles is formally adequate to explain how someone could meet these requirements, and to describe infants' abilities with segmentation, representing objects as persisting and tracking objects' interactions.
This is exciting in several ways. \begin{enumerate} \item That infants have all of these abilities. \item That their abilities are relatively sophisticated: it doesn’t seem that we can characterise them as involving simple heuristics or relying merely on featural information. \item That a single set of principles underlies all three capacities. \end{enumerate}

three requirements, one set of principles

three requirements, one set of principles: this suggests that infants’ capacities are characterised by a model of the physical.

Three Questions

1. How do four-month-old infants model physical objects?

2. What is the relation between the model and the infants?

3. What is the relation between the model and the things modelled (physical objects)?

[slide: model] three requirements, one set of principles: this suggests that infants’ capacities are characterised by a model of the physical (as opposed to being a collection of unrelated capacities that only appear to, but don’t really, have anything to do with physical objects).
1. How do four-month-old infants model physical objects?
In asking how infants model physical objects, we are seeking to understand not how physical objects in fact are but how they appear from the point of view of an individual or system.
The model need not be thought of as something used by the system: it is a tool the theorist uses in describing what the system is for and broadly how it works. This therefore leads us to a second question ...
2. What is the relation between the model and the infants?
3. What is the relation between the model and the things modelled (physical objects)?

the Simple View

A natural answer is the Simple View: the principles of object perception are things that we know or believe, and we generate expectations from these principles by a process of inference.
The Simple View is worth taking seriously for several reasons. First, it requires no theoretical or conceptual innovation.
Second, as we saw, the Simple View appears to be quite widely supported by developmental psychologists including Baillargeon and, in writings from the last millennium, Spelke too.
Third, it can be deduced from the Uncomplicated Account of Minds and Actions. That is, the developmental evidence together with a background theory about minds and actions which seems implicit in much philosophy and perhaps ordinary thinking too commits us to the Simple View.

Uncomplicated Account of Minds and Actions

For any given proposition [There’s a spider behind the book] and any given human [Wy] ...

1. Either Wy knows that there’s a spider behind the book, or she does not.

2. Either Wy can act for the reason that there is, or seems to be, a spider behind the book (where this is her reason for acting), or else she cannot.

3. The first alternatives of (1) and (2) are either both true or both false.

\subsection{Uncomplicated Account of Minds and Actions} For any given proposition [There’s a spider behind the book] and any given human [Wy] ... \begin{enumerate} \item Either Wy knows that there’s a spider behind the book, or she does not. \item Either Wy can act for the reason that there is, or seems to be, a spider behind the book, or else she cannot. \item The first alternatives of (1) and (2) are either both true or both false. \end{enumerate}
The evidence suggests that infants can act for reasons which are simple facts about particular physical objects.
And according to the Uncomplicated Account, this entails that those simple facts about particular physical objects are things they believe.
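To make the structure of this deduction explicit, here is one schematic way of laying it out (the formalisation is mine, offered only as a sketch): let K stand for ‘the infant knows (or believes) that this object is there’ and A for ‘the infant can act for the reason that this object is there’.

\begin{align*}
\text{(P1)} \quad & K \leftrightarrow A && \text{Uncomplicated Account, clause (3): the first alternatives of (1) and (2) stand or fall together} \\
\text{(P2)} \quad & A && \text{the developmental evidence: infants look, and so act, for such reasons} \\
\text{(C)} \quad & K && \text{from (P1) and (P2)}
\end{align*}

So anyone who accepts both the evidence and the Uncomplicated Account is committed to the Simple View; rejecting the conclusion means giving up one of the premises.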
 

A Problem

 
\section{A Problem}
 
As just mentioned, the Simple View is the view that the principles of object perception are things that we know or believe, and we generate expectations from these principles by a process of inference.

scientific, not intuitive, arguments against the simple view

Why must we reject the simple view?
Some philosophers have offered intuitive arguments against the Simple View. \citet{Bermudez:2003dj}, for instance, holds that those without the ability to use a language cannot make inferences; and \citet{Davidson:1975eq} holds that those without language cannot think at all. It may be hard to accept that four-month-old infants are in the business of inferring truths about particular objects’ locations from abstract principles. (And perhaps it is no less hard to accept that adults typically do this in segmenting objects.) But scientific and mathematical discoveries sometimes require us to reject intuitions, even intuitions about very fundamental things like space and time. For this reason there seems to be slim prospect of effectively challenging the Simple View on the basis of intuitions about the nature of knowledge, belief and inference. Doing so is also unnecessary as there are scientific reasons for rejecting the simple view.
I think we shouldn't try to challenge the simple view on the basis of intuition.
And we don't need to because there are also scientific reasons for rejecting the simple view.
One set of reasons concerns the apparent discrepancy between looking times and manual search ...
*(The basic idea is to say there's a discrepancy regarding BOTH (a) permanence and (b) causal interactions)

conflicting evidence: permanence

Baillargeon et al 1987, figure 1

Recall this experiment which used habituation to demonstrate infants' abilities to represent objects as persisting while unperceived (in this case, because occluded). Infants can do this sort of task from 2.5 months or earlier \citep{Aguiar:1999jq}.
But what happens if instead of measuring how infants look, we measure how they reach?
\citet{Shinskey:2001fk} did just this. Here you can see their apparatus. They had a screen that infants could pull forwards to get to an object that was sometimes hidden behind it. They made two comparisons. First, were infants more likely to pull the screen forwards when an object was placed behind it? Second, how did infants' performance compare when the barrier was not opaque but transparent?

Shinskey and Munakata 2001, figure 1

Here are their results with 7-month-old infants.

Shinskey and Munakata 2001, figure 2

Now we have the beginnings of a problem. The problem is that, if the Simple View is right, infants should succeed in tracking persisting objects regardless of whether we measure their eye movements or their reaching actions. But there is a gap of around five months between looking and reaching.

Responses to the occlusion of a desirable object

  • look (from 2.5 months)

    (Aguiar & Baillargeon 1999)

  • reach (7--9 months)

    (Shinskey & Munakata 2001)

The attraction of the simple view is that it explains the looking. The problem for the simple view is that it makes exactly the wrong prediction about the reaching.
Can we explain the discrepancy in terms of the additional difficulty of reaching? A lot of experiments have attempted to pin the discrepancy on this, or on other extraneous factors like task demands. But none of these attempts have succeeded. After all, we know infants are capable of acting because they move the transparent screen.
As Jeanne Shinskey, one of the researchers most dedicated to this issue, says,

‘action demands are not the only cause of failures on occlusion tasks’

\citep[p.\ 291]{shinskey:2012_disappearing}

Shinskey (2012, p. 291)

If there were just one discrepancy, concerning performance, we might be able to hold on to the Simple View. But there are systematic discrepancies along these lines.
Related discrepancies concerning infants' understanding of physical objects occur in the case of their abilities to track causal interactions, too.

conflicting evidence: causal interactions

Recall this experiment about causal interactions, which used a habituation paradigm. Now imagine a version which involved getting infants to reach for the object rather than simply looking. What would the results be? There is an experiment much like this which has been replicated several times, and which shows a discrepancy between looking and searching. Basically infants will look but not search.

Spelke et al 1992, figure 2

*todo
*todo
*todo
Here are the looking time results.
You can even do looking time and reaching experiments with the same subjects and apparatus \citep{Hood:2003yg}.
2.5-year-olds look longer when the experimenter removes the ball from behind the wrong door, but don't reach to the correct door

search

Here are the search results (shocking).

Hood et al 2003, figure 4

*todo: describe
**todo: Mention that \citep{mash:2006_what} show infants can also predict the location of the object (not just identify a violation, but look forward to where the object is)
Amazingly, 2-year-old children still do badly when only the doors are opaque, so that the ball can be seen rolling between the doors, as in this diagram \citep{Butler:2002bv}.

The Simple View
generates
multiple
incorrect predictions.

This is the end of the road for the Simple View. If, like Baillargeon, you want to cling to the Simple View, then you need something very convincing to say about the fact that it appears to generate multiple incorrect predictions.
Similar discrepancies between looking and reaching are also found in some nonhuman primates, both apes and monkeys (chimpanzees, cotton-top tamarins and marmosets). (Some of this is based on the gravity tube task and concerns gravity bias.)

‘A similar permanent dissociation in understanding object support relations might exist in chimpanzees. They identify impossible support relations in looking tasks, but fail to do so in active problem solving.’

\citep{gomez:2005_species}

(Gomez 2005)

Likewise for cotton-top tamarins (Santos et al 2006) and marmosets (Cacchione et al 2012).

Note that this research is evidence of dissociations between looking and search in adult primates, not infants. This is important because it indicates that the failures to search are a feature of the core knowledge system rather than a deficit in human infants.

‘to date, adult primates’ failures on search tasks appear to exactly mirror the cases in which human toddlers perform poorly.’

\citep[p.\ 17]{santos:2009_object}

(Santos & Hood 2009, p. 17)

What about the chicks and dogs?

What about the chicks and dogs? This isn't straightforward. As I mentioned earlier, \citet{kundey:2010_domesticated} show that domestic dogs are good at solidity on a search measure. And as we covered in seminars, \citet{chiandetti:2011_chicks_op} demonstrate object permanence with a search measure in chicks that are just a few days old. Indeed, for many of the other animals I mentioned, object permanence is measured in search tasks, not with looking times. To speculate, it may be that the looking/search dissociation is more likely to occur in adult animals the more closely related they are to humans. But let's focus on the fact that you get the looking/search dissociation in any adult animals at all. This is evidence that the dissociation is a consequence of something fundamental about cognition rather than just a side-effect of some capacity limit.

The Simple View
generates
multiple
incorrect predictions.

This really is the end of the road for the Simple View. But it actually gets worse ...
occlusion endarkening
violation-of-expectations

Charles & Rivera (2009)

Because this point is controversial, I want to mention one further piece of the puzzle. Five-month-olds not only sometimes fail to search for hidden objects but ...
... they also sometimes fail to look longer when a momentarily hidden object fails to reappear as if by magic. Infants will reach for an object hidden in darkness \citep[e.g.][]{jonsson:2003_infants}. But what happens if instead of measuring reaching we measure looking times? \citet{charles:2009_object} compared what happens when an object is momentarily hidden behind a screen with what happens when an object is momentarily hidden by darkness. They used a trick with light and mirrors so that for some of the infants, the object did not reappear when the screen came up or the light returned. Surprisingly, five-month-old infants’ looking times indicated that an expectation had been violated only when the object was hidden behind a screen but not when hidden by darkness.
I think this pattern of findings is good evidence against the hypothesis that four- or five-month-olds have beliefs about, or knowledge of, the locations of unperceived objects. After all, a belief is essentially the kind of state that can inform actions of any kind, whether they involve looking, searching with the hands or anything else.
NB: Charles & Rivera did the v-of-e part; the manual search part has been done by others.

Charles & Rivera, figure 1 (part)

Charles & Rivera, 2009 fig 3a-c

(There was also a fade condition which I’m not discussing.)

Charles & Rivera, 2009 figs 5-6

The results are complicated. They compare occlusion to empty and endarkening to empty.
They comment that, for occlusion vs empty: ‘the condition-by-outcome interaction usually interpreted as ‘having object permanence’, though definitely not present in the first trial pair, became visible by the fourth trial pair (see Figure 5).’ This is why you see that figure there.
Occlusion: But actually their analysis depends on ‘The significant three-way interaction existed because the disparity between infants’ looking at the two outcomes increased across trial pairs in the Empty Condition, but decreased across pairs in the Occlusion Condition, F(3, 108) = 2.83, p < .05, partial η2 = .07.’
Occlusion: (Note: ‘The predicted two-way condition-by-outcome interaction was not significant, F(1, 36) = 0.43, p > .51, partial η2 = .01, but one main effect and the alternatively predicted three-way interaction were significant.’)
Occlusion: ‘Infants did not show the two-way interaction between condition and outcome; however, they did show the specific three-way interaction interpretable in terms of infants’ exhibiting an understanding of object permanence, but needing time to acclimate to the procedures. That is, the pattern suggests that infants came to expect the occluded object to reappear over the course of trials in the Occlusion Condition, but came to expect it to be gone over the course of trials in the Empty Condition.’
[The complication here is the main effect: infants look longer when there’s something than when there’s nothing, not surprisingly. A better approach might be to do Wynn’s 1992 (‘two mouse’) experiment with occlusion and endarkening---in her design there is always something to look at, and she found that infants don’t prefer to look at one vs two mice.]
When you compare the Empty and Endarkening conditions, you don’t get an interaction. ‘The three-way interaction found in the Occlusion vs. Empty experiment was not present here, F(3, 108) = .176, p > .90. The lack of interaction is clearly visible in ... Figure 6.’
‘Infants’ looking patterns in the Darkness Condition were almost identical to those of the Empty Condition. ... If the results of the Occlusion Condition are taken to indicate that infants expect occluded objects to continue existing, then infants’ behavior in the Darkness Condition must be taken to indicate that infants do not expect endarkened objects to continue existing.’
[NOTE: Imperfect to compare Endarkening and Empty since Empty involves the screen coming down but no change in luminance (as far as I can tell --- it’s not clear exactly how they did that from the procedure.)]
[NOTE: I’m not discussing the Fade condition but the results are interesting: ‘Infants in the Fade Condition behaved similarly to infants in the Occlusion Condition, demonstrating expectations for the reappearance of faded objects. This is the opposite of the expectation adults have under similar conditions, and the opposite of what would be expected under the ecological hypothesis. The pattern of results across all conditions – expectation for the reappearance of occluded and faded objects, but not for endarkened objects – cannot be reconciled with the traditional ecological hypothesis.’]
occlusion endarkening
violation-of-expectations

Charles & Rivera (2009)

Why is this a major challenge for the Simple View? Because it shows that to defend the Simple View, it’s not enough to explain failures of manual search.
To defend the Simple View, you also have to explain failure in a violation-of-expectations task when manual search succeeds.

The Simple View
generates
multiple
incorrect predictions.

This really is the end of the road for the Simple View. We must reject this view because it makes systematically incorrect predictions about actions like searching for occluded objects and about looking behaviours involving endarkened objects.
Why is this a problem? Because, as we'll see, it is hard to identify an alternative.
 

Like Knowledge and Like Not Knowledge (SHORTENED)

 
\section{Like Knowledge and Like Not Knowledge (SHORTENED)}
 
I'm sorry to keep repeating this but I want everyone to understand where we are. There are principles of object perception that explain abilities to segment objects, to represent them while temporarily unperceived and to track their interactions. These principles are not known. What is their status?

a problem

Three Questions

1. How do four-month-old infants model physical objects?

2. What is the relation between the model and the infants?

3. What is the relation between the model and the things modelled (physical objects)?

The Simple View answers this question. But the Simple View is incorrect. So we need an alternative answer. And it is difficult to find one because ...

Four-month-olds can act (e.g. look, and reach) for the reason that this object is there.

Four-month-olds cannot believe, nor know, that this object is there.

Uncomplicated Account of Minds and Actions

For any given proposition [There’s a spider behind the book] and any given human [Wy] ...

1. Either Wy knows that there’s a spider behind the book, or she does not.

2. Either Wy can act for the reason that there is, or seems to be, a spider behind the book (where this is her reason for acting), or else she cannot.

3. The first alternatives of (1) and (2) are either both true or both false.

\subsection{Uncomplicated Account of Minds and Actions} For any given proposition [There’s a spider behind the book] and any given human [Wy] ... \begin{enumerate} \item Either Wy knows that there’s a spider behind the book, or she does not. \item Either Wy can act for the reason that there is, or seems to be, a spider behind the book, or else she cannot. \item The first alternatives of (1) and (2) are either both true or both false. \end{enumerate}

generality of the problem

The problem is quite general. It doesn't arise only in the case of knowledge of objects but also in other domains (like knowledge of number and knowledge of mind). And it doesn't arise only from evidence about infants or nonhuman primates; it would also arise if our focus were exclusively on human adults. More on this later. For now, our aim is to better understand the problem as it arises in the case of knowledge of objects.
domain | evidence for knowledge in infancy | evidence against knowledge
colour | categories used in learning labels & functions | failure to use colour as a dimension in ‘same as’ judgements
physical objects | patterns of dishabituation and anticipatory looking | unreflected in planned action (may influence online control)
number | --""-- | --""--
syntax | anticipatory looking | [as adults]
minds | reflected in anticipatory looking, communication, &c | not reflected in judgements about action, desire, ...

‘if you want to describe what is going on in the head of the child when it has a few words which it utters in appropriate situations, you will fail for lack of the right sort of words of your own.

‘We have many vocabularies for describing nature when we regard it as mindless, and we have a mentalistic vocabulary for describing thought and intentional action; what we lack is a way of describing what is in between’

(Davidson 1999, p. 11)

Recall what Davidson said: we need a way of describing what is in between thought and mindless nature. This is the challenge presented to us by the failure of the Simple View.
 

The Problem with the Simple View: Summary

 
\section{The Problem with the Simple View: Summary}
I'm sorry to keep repeating this but I want everyone to understand where we are. There are principles of object perception that explain abilities to segment objects, to represent them while temporarily unperceived and to track their interactions. These principles are not known. What is their status?

How do humans first come to know simple facts about particular physical objects?

The question is ... How do humans first come to know simple facts about particular physical objects?
Here’s what we’ve found so far.
We examined how three requirements on having knowledge of physical objects are met. Knowledge of objects depends on abilities to (i) segment objects, (ii) represent them as persisting and (iii) track their interactions. To know simple facts about particular physical objects you need, minimally, to meet these three requirements.

Three requirements

  • segment objects
  • represent objects as persisting (‘permanence’)
  • track objects’ interactions
The second discovery concerned how infants meet these three requirements.

Principles of Object Perception

  • cohesion—‘two surface points lie on the same object only if the points are linked by a path of connected surface points’
  • boundedness—‘two surface points lie on distinct objects only if no path of connected surface points links them’
  • rigidity—‘objects are interpreted as moving rigidly if such an interpretation exists’
  • no action at a distance—‘separated objects are interpreted as moving independently of one another if such an interpretation exists’

Spelke, 1990

The second was that a single set of principles is formally adequate to explain how someone could meet these requirements, and to describe infants' abilities with segmentation, representing objects as persisting and tracking objects' interactions.
This is exciting in several ways. \begin{enumerate} \item That infants have all of these abilities. \item That their abilities are relatively sophisticated: it doesn’t seem that we can characterise them as involving simple heuristics or relying merely on featural information. \item That a single set of principles underlies all three capacities. \end{enumerate}

Three Questions

1. How do four-month-old infants model physical objects?

2. What is the relation between the model and the infants?

3. What is the relation between the model and the things modelled (physical objects)?

2. What is the relation between the model and the infants?

the Simple View

A natural answer is the Simple View: the principles of object perception are things that we know or believe, and we generate expectations from these principles by a process of inference.
occlusion endarkening
violation-of-expectations

Charles & Rivera (2009)

The Simple View generates incorrect predictions.

The Simple View
generates
multiple
incorrect predictions.

However, as we saw, the Simple View must be wrong because it generates incorrect predictions. This was the lesson of the discrepancy between looking and search measures for both infants' abilities to represent objects as persisting and their abilities to track causal interactions.
As I've just been arguing, the failure of the Simple View presents us with a problem. The problem is to understand the nature of infants' apprehension of the principles given that it doesn't involve knowledge. This is a problem that will permeate our discussion of the origins of mind because problems of this form come up again and again in different domains. It isn't the only problem we'll encounter, but none of the problems are more important or more general than this one.
 

What Is Core Knowledge?

 
\section{What Is Core Knowledge?}
 
I talked about the notion of core knowledge in the very first lecture, but since then I have not appealed to the notion. This is deliberate because the notion is tricky; so I thought it would be good to postpone our discussion of it for as long as possible. Now I can put it off no longer.

What is core knowledge? What are core systems?

The first, very minor thing is to realise that there are two closely related notions, core knowledge and core system.
They are related like this: roughly, core knowledge states are the states of core systems. More carefully:
For someone to have \textit{core knowledge of a particular principle or fact} is for her to have a core system where either the core system includes a representation of that principle or else the principle plays a special role in describing the core system.
So we can define core knowledge in terms of core system.

‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’

Carey and Spelke, 1996 p. 517

‘core systems are

  1. largely innate
  2. encapsulated
  3. unchanging
  4. arising from phylogenetically old systems
  5. built upon the output of innate perceptual analyzers’

(Carey and Spelke 1996: 520)

representational format: iconic (Carey 2009)

What do people say core knowledge is?
\subsection{Two-part definition}
There are two parts to a good definition. The first is an analogy that helps us get a fix on what is meant by 'system' generally. (The second part tells us which systems are core systems by listing their characteristic features.)
‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’ \citep[p.\ 517]{Carey:1996hl}
So talk of core knowledge is somehow supposed to latch onto the idea of a system.
What do these authors mean by talking about 'specialized perceptual systems'?
They talk about things like perceiving colour, depth or melodies.
Now, as we saw when talking about categorical perception of colour, we can think of the 'system' underlying categorical perception as largely separate from other cognitive systems--- we saw that it could be knocked out by verbal interference, for example.
So the idea is that core knowledge somehow involves a system that is separable from other cognitive mechanisms.
As Carey rather grandly puts it, understanding core knowledge will involve understanding something about 'the architecture of the mind'.
Illustration: edge detection.
‘core systems are: \begin{enumerate} \item largely innate \item encapsulated \item unchanging \item arising from phylogenetically old systems \item built upon the output of innate perceptual analyzers’ \citep[p.\ 520]{Carey:1996hl} \end{enumerate}
\textit{Note} There are other, slightly different statements \citep[e.g.][]{carey:2009_origin}.
‘We hypothesize that uniquely human cognitive achievements build on systems that humans share with other animals: core systems that evolved before the emergence of our species. The internal functioning of these systems depends on principles and processes that are distinctly non-intuitive. Nevertheless, human intuitions about space, number, morality and other abstract concepts emerge from the use of symbols, especially language, to combine productively the representations that core systems deliver’ \citep[pp.\ 2784-5]{spelke:2012_core}.
This, then, is the two-part definition: an analogy and a list of features.
There is one more feature that I want to mention; this is important although I won't discuss it here. To say that a representation is iconic means, roughly, that parts of the representation represent parts of the thing represented. Pictures are paradigm examples of representations with iconic formats. For example, you might have a picture of a flower where some parts of the picture represent the petals and others the stem.
\subsection{The Core Knowledge View}
The \emph{Core Knowledge View}: the principles of object perception are not knowledge, but they are core knowledge. And we generate expectations from these principles by a process of inference.

Why postulate core knowledge?

The Simple View

The Core Knowledge View

The first problem we encountered was that the Simple View is false. But maybe we can appeal to the Core Knowledge View.
According to the Core Knowledge View, the principles of object perception, and maybe also the expectations they give rise to, are not knowledge. But they are core knowledge.
This raises some issues. Is the Core Knowledge View consistent with the claims that we have ended up with, e.g. about categorical perception and the Principles of Object Perception characterising the way that object indexes work? I think the answer is, basically, yes. Categorical perception involves a system that has many of the features associated with core knowledge.
[*looking ahead (don’t say):] Consider this hypothesis. The principles of object perception, and maybe also the expectations they give rise to, are not knowledge. But they are core knowledge. The \emph{core knowledge view}: the principles of object perception are not knowledge, but they are core knowledge. But look at those features again --- innate, encapsulated, unchanging and the rest. None of these straightforwardly enable us to predict that core knowledge of objects will guide looking but not reaching. So the \emph{first problem} is that (at this stage) it's unclear what we gain by shifting from knowledge to core knowledge.
domain | evidence for knowledge in infancy | evidence against knowledge
colour | categories used in learning labels & functions | failure to use colour as a dimension in ‘same as’ judgements
physical objects | patterns of dishabituation and anticipatory looking | unreflected in planned action (may influence online control)
number | --""-- | --""--
syntax | anticipatory looking | [as adults]
minds | reflected in anticipatory looking, communication, &c | not reflected in judgements about action, desire, ...
The Core Knowledge View may also help us to resolve discrepant findings in other domains ...

Why postulate core knowledge?

The Simple View

The Core Knowledge View

 

Objections to Core Knowledge

 
\section{Objections to Core Knowledge}
 

‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’

Carey and Spelke, 1996 p. 517

‘core systems are

  1. largely innate
  2. encapsulated
  3. unchanging
  4. arising from phylogenetically old systems
  5. built upon the output of innate perceptual analyzers’

(Carey and Spelke 1996: 520)

representational format: iconic (Carey 2009)

Recall that we defined core systems by listing properties. [(Actually it was a two-part definition so there’s hope.)]

multiple definitions

One objection is that there are multiple definitions, each slightly different from the others, and no obvious way to choose between them.
But although this indicates that we need to impose some theoretical discipline, it doesn’t seem like an objection that could show there is a deep problem with the notion of Core Knowledge.
Here is a second objection ...
One reason for doubting that the notion of a core system is explanatory arises from the way we have introduced it. We have introduced it by providing a list of features. But why suppose that this particular list of features constitutes a natural kind? This worry has been brought into sharp focus by criticisms of 'two systems' approaches. (These criticisms are not directed specifically at claims about core knowledge, but the criticisms apply.)

‘there is a paucity of … data to suggest that they are the only or the best way of carving up the processing,

‘and it seems doubtful that the often long lists of correlated attributes should come as a package’

\citep[p.\ 759]{adolphs_conceptual_2010}

Adolphs (2010 p. 759)

‘we wonder whether the dichotomous characteristics used to define the two-system models are … perfectly correlated …

[and] whether a hybrid system that combines characteristics from both systems could not be … viable’

\citep[p.\ 537]{keren_two_2009}

Keren and Schul (2009, p. 537)

This is weak.
Remember that criticism is easy, especially if you don't have to prove someone is wrong.
Construction is hard, and worth more.
Even so, there is a problem here.

‘the process architecture of social cognition is still very much in need of a detailed theory’

\citep[p.\ 759]{adolphs_conceptual_2010}

Adolphs (2010 p. 759)

Is definition by listing features (a) justified, and is it (b) compatible with the claim that core knowledge is explanatory?

So far I've been explaining objection (a). Now let me say a bit more about (b) ...
We can get the strongest objection by asking ...

Why do we need a notion like core knowledge?

domain | evidence for knowledge in infancy | evidence against knowledge
colour | categories used in learning labels & functions | failure to use colour as a dimension in ‘same as’ judgements
physical objects | patterns of dishabituation and anticipatory looking | unreflected in planned action (may influence online control)
number | --""-- | --""--
syntax | anticipatory looking | [as adults]
minds | reflected in anticipatory looking, communication, &c | not reflected in judgements about action, desire, ...
So why do we need a notion like core knowledge? Think about these domains. In each case, we're pushed towards postulating that infants know things, but also pushed against this. Resolving the apparent contradiction is what core knowledge is for.
Key question: What features do we have to assign to core knowledge if it's to describe these discrepancies?
In the case of physical objects, we want to explain this puzzling pattern of findings ...
occlusion endarkening
violation-of-expectations

Charles & Rivera (2009)

If this is what core knowledge is for (if it exists to explain these discrepancies), what features must core knowledge have?

If this is what core knowledge is for, what features must core knowledge have?

‘Just as humans are endowed with multiple, specialized perceptual systems, so we are endowed with multiple systems for representing and reasoning about entities of different kinds.’

Carey and Spelke, 1996 p. 517

‘core systems are

  1. largely innate
  2. encapsulated
  3. unchanging
  4. arising from phylogenetically old systems
  5. built upon the output of innate perceptual analyzers’

(Carey and Spelke 1996: 520)

representational format: iconic (Carey 2009)

Which of these features explain the discrepancy between measures on which infants do, and measures on which they do not, manifest their abilities to track physical objects?
Why do they fail on some search tasks but pass some v-of-e tasks when the mode of disappearance is occlusion?
And, equally pressingly, why do they do the converse (pass search, fail v-of-e) when the mode is endarkening?
Encapsulated: there are limits on what information can get into the system. But if we are to explain any successes, it must be possible for information about the locations of physical objects to get into the system. So there’s no way we can use encapsulation to explain the puzzling developmental findings.
So, to return to my question,

If this is what core knowledge is for, what features must core knowledge have?

not being knowledge

The answer seems to be: none of the features that are stipulated in introducing it. This gives us a \textbf{first objection}: there seems to be a mismatch between the definition and its application.
[The feature we most need is actually missing from their list: limited accessibility. But this thought comes later.]
summary

objections to the Core Knowledge View:

  • multiple definitions
  • justification for definition by list-of-features
  • definition by list-of-features rules out explanation
  • mismatch of definition to application

The Core Knowledge View
generates
no
relevant predictions.

 

Core System vs Module

 
\section{Core System vs Module}
 
The problems with core knowledge look like they might be the sort of problem a philosopher might be able to help with.
Jerry Fodor has written a book called 'The Modularity of Mind' about what he calls modules. And modules look a bit like core systems, as I'll explain. Further, Spelke herself has at one point made a connection. So let's have a look at the notion of modularity and see if that will help us.

core system = module?

‘In Fodor’s (1983) terms, visual tracking and preferential looking each may depend on modular mechanisms.’

\citep[p.\ 137]{spelke:1995_spatiotemporal}

Spelke et al 1995, p. 137

So what is a modular mechanism?
\subsection{Modularity}
Fodor’s three claims about modules:
\begin{enumerate}
\item they are ‘the psychological systems whose operations present the world to thought’;
\item they ‘constitute a natural kind’; and
\item there is ‘a cluster of properties that they have in common’ \citep[p.\ 101]{Fodor:1983dg}.
\end{enumerate}

Modules

  1. they are ‘the psychological systems whose operations present the world to thought’;
  2. they ‘constitute a natural kind’; and
  3. there is ‘a cluster of properties that they have in common … [they are] domain-specific computational systems characterized by informational encapsulation, high-speed, restricted access, neural specificity, and the rest’ (Fodor 1983: 101)
Modules are widely held to play a central role in explaining mental development and in accounts of the mind generally.
Jerry Fodor makes three claims about modules:
What are these properties?
Properties of modules:
\begin{itemize}
\item domain specificity (modules deal with ‘eccentric’ bodies of knowledge)
\item limited accessibility (representations in modules are not usually inferentially integrated with knowledge)
\item information encapsulation (roughly, modules are unaffected by general knowledge or representations in other modules)
\item innateness (roughly, the information and operations of a module are not straightforwardly consequences of learning; but see \citet{Samuels:2004ho}).
\end{itemize}
  • domain specificity

    modules deal with ‘eccentric’ bodies of knowledge

  • limited accessibility

    representations in modules are not usually inferentially integrated with knowledge

  • information encapsulation

    roughly, modules are unaffected by general knowledge or representations in other modules

    For something to be informationally encapsulated is, roughly, for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)
  • innateness

    roughly, the information and operations of a module are not straightforwardly consequences of learning

Domain specificity
limited accessibility
Let me illustrate limited accessibility ...
Limited accessibility is a familiar feature of many cognitive systems. When you grasp an object with a precision grip, it turns out that there is a very reliable pattern. At a certain point in moving towards the object your fingers will reach a maximum grip aperture, which is normally a certain amount wider than the object to be grasped, and then start to close. Now there's no physiological reason why grasping should work like this, rather than the hand closing only once you contact the object. Maximum grip aperture shows anticipation of the object: the mechanism responsible for guiding your action does so by representing various things, including some features of the object. But we ordinarily have no idea about this. The discovery of how grasping is controlled depended on high-speed photography. This is an illustration of limited accessibility. (This can also illustrate information encapsulation and domain specificity.)

maximum grip aperture

(source: Jeannerod 2009, figure 10.1)

Glover (2002, figure 1a)

Illusion sometimes affects perceptual judgements but not actions: information is in the system; information is not available to knowledge \citep{glover:2002_visual}.
See further \citet{bruno:2009_when}: They argue that Glover & Dixon's model \citep{glover:2002_dynamic} is incorrect, at least for grasping (pointing is a different story), because it predicts that the presence or absence of visual information during grasping shouldn't matter. But it does.

van Wermeskerken et al 2013, figure 1

You also get evidence for information encapsulation in four-month-olds. To illustrate, consider \citet{vanwermeskerken:2013_getting} ...
A Ponzo-like background can make one frog appear further away than the other.

van Wermeskerken et al 2013, figure 2

This affects which object four-month-olds reach for, but does not affect the kinematics of their reaching actions.
What are these properties?
  • domain specificity

    modules deal with ‘eccentric’ bodies of knowledge

  • limited accessibility

    representations in modules are not usually inferentially integrated with knowledge

  • information encapsulation

    roughly, modules are unaffected by general knowledge or representations in other modules

    For something to be informationally encapsulated is, roughly, for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)
  • innateness

    roughly, the information and operations of a module are not straightforwardly consequences of learning

Information encapsulation
Innateness
So these are the key properties associated with modularity.
We've seen something like this list of properties before ... Compare the notion of a core system with the notion of a module
The two definitions are different, but the differences are subtle enough that we don't want both. My recommendation: if you want a better definition of core system, adopt core system = module as a working assumption and then look to research on modularity because there's more of it.

‘core systems are

  1. largely innate,
  2. encapsulated, and
  3. unchanging,
  4. arising from phylogenetically old systems
  5. built upon the output of innate perceptual analyzers’

(Carey and Spelke 1996: 520)

Modules are ‘the psychological systems whose operations present the world to thought’; they ‘constitute a natural kind’; and there is ‘a cluster of properties that they have in common’

  1. innateness
  2. information encapsulation
  3. domain specificity
  4. limited accessibility
  5. ...

core system = module ?

I think it is reasonable to identify core systems with modules and to largely ignore what different people say in introducing these ideas. The theory is not strong enough to support lots of distinctions.

Will the notion of modularity help us in meeting the objections to the Core Knowledge View?

Recall that the challenges were these:
  • multiple definitions
  • justification for definition by list-of-features
  • definition by list-of-features rules out explanation
  • mismatch of definition to application
Let’s go back and see what Fodor says about modules again ...
Consider the first objection, that there are multiple definitions ...
Not all researchers agree about the properties of modules. That they are informationally encapsulated is denied by Dan Sperber and Deirdre Wilson (2002: 9), Simon Baron-Cohen (1995) and some evolutionary psychologists (Buller and Hardcastle 2000: 309), whereas Scholl and Leslie claim that information encapsulation is the essence of modularity and that any other properties modules have follow from this one (1999b: 133; this also seems to fit what David Marr had in mind, e.g. Marr 1982: 100-1). According to Max Coltheart, the key to modularity is not information encapsulation but domain specificity; he suggests Fodor should have defined a module simply as 'a cognitive system whose application is domain specific' (1999: 118). Peter Carruthers, on the other hand, denies that domain specificity is a feature of all modules (2006: 6). Fodor stipulated that modules are 'innately specified' (1983: 37, 119), and some theorists assume that modules, if they exist, must be innate in the sense of being implemented by neural regions whose structures are genetically specified (e.g. de Haan, Humphreys and Johnson 2002: 207; Tanaka and Gauthier 1997: 85); others hold that innateness is 'orthogonal' to modularity (Karmiloff-Smith 2006: 568). There is also debate over how to understand individual properties modules might have (e.g. Hirschfeld and Gelman 1994 on the meanings of domain specificity; Samuels 2004 on innateness).
In short, then, theorists invoke many different notions of modularity, each barely different from others. You might think this is just a terminological issue. I want to argue that there is a substantial problem: we currently lack any theoretically viable account of what modules are. The problem is not that 'module' is used to mean different things; after all, there might be different kinds of module. The problem is that none of its various meanings have been characterised rigorously enough. All of the theorists mentioned above except Fodor characterise notions of modularity by stipulating one or more properties their kind of module is supposed to have. This way of explicating notions of modularity fails to support principled ways of resolving controversy.
No key explanatory notion can be adequately characterised by listing properties because the explanatory power of any notion depends in part on there being something which unifies its properties and merely listing properties says nothing about why they cluster together.
So much the same objections which applied to the notion of core knowledge appear to recur for the notion of a module. But note one interesting detail ...

Modules

  1. they are ‘the psychological systems whose operations present the world to thought’;
  2. they ‘constitute a natural kind’; and
  3. there is ‘a cluster of properties that they have in common … [they are] domain-specific computational systems characterized by informational encapsulation, high-speed, restricted access, neural specificity, and the rest’ (Fodor 1983: 101)

Will the notion of modularity help us in meeting the objections to the Core Knowledge View?

  • multiple definitions
  • justification for definition by list-of-features
  • definition by list-of-features rules out explanation
  • mismatch of definition to application
We’ve been considering the first objection, that there are multiple definitions ...
What about the objection that picking out a set of features is unjustified ...

Modules

  1. they are ‘the psychological systems whose operations present the world to thought’;
  2. they ‘constitute a natural kind’; and
  3. there is ‘a cluster of properties that they have in common … [they are] domain-specific computational systems characterized by informational encapsulation, high-speed, restricted access, neural specificity, and the rest’ (Fodor 1983: 101)
Interestingly, Fodor doesn't define modules by specifying a cluster of properties (pace Sperber 2001: 51); he mentions the properties only as a way of gesturing towards the phenomenon (Fodor 1983: 37) and he also says that modules constitute a natural kind (see Fodor 1983: 101 quoted above).

Will the notion of modularity help us in meeting the objections to the Core Knowledge View?

  • multiple definitions
  • justification for definition by list-of-features
  • definition by list-of-features rules out explanation
  • mismatch of definition to application
We’ve been considering the first objection, that there are multiple definitions ...
The same point applies to the claim that defining module by listing features is unexplanatory: if we are not listing features but identifying a natural kind, then the objection doesn’t quite get started.
As far as the ‘justification for definition by list-of-features’ and ‘definition by list-of-features rules out explanation’ problems go, everything rests on the idea that modules are a natural kind. I think this idea deserves careful scrutiny, but as far as I know there's only one paper on this topic, which is by me. I'm not going to talk about the paper here; let me leave it like this: if you want to invoke a notion of core knowledge or modularity, you have to reply to these problems. And one way to reply to them--- the only way I know---is to develop the idea that modules are a natural kind. If you want to know more, ask me for my paper and I'll send it to you.
Recall the discrepancy in looking vs search measures. What property of modules could help us to explain it?

Spelke et al 1992, figure 2

Hood et al 2003, figure 1

occlusion endarkening
violation-of-expectations

Charles & Rivera (2009)

  • domain specificity

    modules deal with ‘eccentric’ bodies of knowledge

  • limited accessibility

    representations in modules are not usually inferentially integrated with knowledge

  • information encapsulation

    roughly, modules are unaffected by general knowledge or representations in other modules

    For something to be informationally encapsulated is, roughly, for its operation to be unaffected by the mere existence of general knowledge or representations stored in other modules (Fodor 1998b: 127)
  • innateness

    roughly, the information and operations of a module are not straightforwardly consequences of learning

We already considered innateness and information encapsulation.
To say that a system or module exhibits limited accessibility is to say that the representations in the system are not usually inferentially integrated with knowledge.
This is a key feature we need to assign to modular representations (=core knowledge) in order to explain the apparent discrepancies in the findings about when knowledge emerges in development.
Limited accessibility explains why the representations might drive some actions (e.g. certain looking behaviours) but not others (e.g. certain searching actions).
But the bare appeal to limited accessibility leaves open why it is the looking and not the searching that is preserved (rather than conversely). Further, we have to explain why searching fails with occlusion whereas looking fails with endarkening. Clearly we can’t explain this pattern just by invoking limited accessibility.
And, of course, to say that we can explain something by invoking limited accessibility is to say too much. After all, limited accessibility is more or less what we're trying to explain. But this is the first problem --- the problem with the standard way of characterising modularity and core systems merely by listing features.

core system = module

Some, but not all, of the objections to the Core Knowledge View are overcome:

  • multiple definitions
  • justification for definition by list-of-features
  • definition by list-of-features rules out explanation
  • mismatch of definition to application

The Core Knowledge View
generates
no
relevant predictions.

The main objection is unresolved