WORD GRAMMAR
Word Grammar (WG) is a general theory of language structure. Most of the work to
date has dealt with syntax, but there has also been serious work in semantics and
some more tentative explorations of morphology, sociolinguistics, historical
linguistics and language processing. The only areas of linguistics that have not
been addressed at all are phonology and language acquisition (but even here see
van Langendonck 1987). The aim of this article is breadth rather than depth, in the
hope of showing how far-reaching the theory's tenets are. The overriding consideration, of
course, is the same as for any other linguistic theory: to be true to the facts of language
structure. However, our assumptions make a great deal of difference when approaching
these facts, so it is possible to arrive at radically different analyses according to whether we
assume that language is a unique module of the mind, or that it is similar to other parts of
cognition. The WG assumption is that language can be analysed and explained in the same
way as other kinds of knowledge or behaviour unless there is clear
evidence to the contrary. So far, this strategy has proved productive and largely
successful, as we shall see below. As the theory's name suggests, the central unit of
analysis is the word, which figures in every kind of analysis:
Grammar. Words are the only units of syntax (section 8), as sentence structure
consists entirely of dependencies between individual words; WG is thus clearly part
of the tradition of dependency grammar dating from Tesnière (1959; Fraser 1994).
Phrases are implicit in the dependencies, but play no part in the grammar.
Moreover, words are not only the largest units of syntax, but also the smallest. In
contrast with Chomskyan linguistics, syntactic structures do not, and cannot,
separate stems and inflections, so WG is an example of morphology-free syntax
(Zwicky 1992, 354). Unlike syntax, morphology (section 7) is based on constituent structure,
and the two kinds of structure are different in other ways too.
Semantics. As in other theories, words are also the basic lexical units where
sound meets syntax and semantics, but in the absence of phrases words also
provide the only point of contact between syntax and semantics, giving a radically
`lexical' semantics. As will appear in section 9, a rather unexpected effect of basing
semantic structure on single words is a kind of phrase structure in the semantics.
Situation. We shall see in section 6 that words are the basic units for contextual
analysis (in terms of deictic semantics, discourse or sociolinguistics).
Words, in short, are the nodes that hold the `language' part of the human network
together. This is illustrated by the word cycled in the sentence I cycled to UCL,
which is diagrammed in Figure 1.
Figure 1
As can be seen in this diagram, cycled is the meeting point for ten
relationships which are detailed in Table 1. These relationships are all quite
traditional (syntactic, morphological, semantic, lexical and contextual), and
traditional names are used where they exist, but the diagram uses notation which is
peculiar to WG. It should be easy to imagine how such relationships can multiply to
produce a rich network in which words are related to one another as well as to
other kinds of element including morphemes and various kinds of meaning. All
these elements, including the words themselves, are `concepts' in the standard
sense; thus, a WG diagram is an attempt to model a small part of the total
conceptual network of a typical speaker.
_____________________________________________________________________
related concept C          relationship of C to cycled    notation in diagram
_____________________________________________________________________
the word I                 subject                        `s'
the word to                post-adjunct                   `>a'
the morpheme /saikl/       stem                           straight downward line
the word-form /saikld/     shape                          curved downward line
the concept `ride-bike'    sense                          straight upward line
the concept `event e'      referent                       curved upward line
the lexeme CYCLE           cycled isa C                   triangle resting on C
the inflection `past'      cycled isa C                   triangle resting on C
me                         speaker                        `speaker'
now                        time                           `time'
_____________________________________________________________________
Table 1
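The claim that a word is a meeting point of relationships can be made concrete as a small labelled graph. The sketch below encodes the ten relationships of Table 1 as (head, label, related-concept) triples; the node and relation names come from the table, but the tuple-and-dictionary encoding itself is an illustrative assumption, not WG notation.

```python
from collections import defaultdict

# The ten relationships of 'cycled' from Table 1, as labelled edges.
relations = [
    ("cycled", "subject",      "the word I"),
    ("cycled", "post-adjunct", "the word to"),
    ("cycled", "stem",         "the morpheme /saikl/"),
    ("cycled", "shape",        "the word-form /saikld/"),
    ("cycled", "sense",        "the concept 'ride-bike'"),
    ("cycled", "referent",     "the concept 'event e'"),
    ("cycled", "isa",          "the lexeme CYCLE"),
    ("cycled", "isa",          "the inflection 'past'"),
    ("cycled", "speaker",      "me"),
    ("cycled", "time",         "now"),
]

# An adjacency view: every labelled relation reachable from a node.
network = defaultdict(list)
for head, label, concept in relations:
    network[head].append((label, concept))
```

On this encoding, `network["cycled"]` lists all ten links at once, which is the point of the diagram: the word is simply the node where syntactic, morphological, semantic, lexical and contextual links converge.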
Language as part of a general network
The basis for WG is an idea which is quite uncontroversial in cognitive science:
The idea is that memory connections provide the basic building blocks
through which our knowledge is represented in memory. For example, you
obviously know your mother's name; this fact is recorded in your memory.
The proposal to be considered is that this memory is literally represented by
a memory connection, ... That connection isn't some appendage to the
memory. Instead, the connection is the memory. ... all of knowledge is
represented via a sprawling network of these connections, a vast set of
associations. (Reisberg 1997:257-8)
In short, knowledge is held in memory as an associative network. What is more
controversial is that, according to WG, the same is true of our knowledge of words,
so the sub-network responsible for words is just a part of the total `vast set of
associations'. Our knowledge of words is our language, so our language is a
network of associations which is closely integrated with the rest of our knowledge.
However uncontroversial (and obvious) this view of knowledge may be in
general, it is very controversial in relation to language. The only part of language
which is widely viewed as a network is the lexicon (Aitchison 1987:72), and a
fashionable view is that even here only lexical irregularities are stored in an
associative network, in contrast with regularities which are stored in a
fundamentally different way, as `rules' (Pinker and Prince 1988). For example, we
have a network which shows for the verb come not only that its meaning is `come'
but that its past tense is the irregular came, whereas regular past tenses are
handled by a general rule and not stored in the network. The WG view is that
exceptional and general patterns are indeed different, but they can both be
accommodated in the same network because it is an `inheritance network' in which
general patterns and their exceptions are related by default inheritance (which is
discussed in more detail in section 4). To pursue the last example, both patterns
can be expressed in exactly the same prose:
(1) The shape of the past tense of a verb consists of its stem followed by -d.
(2) The shape of the past tense of come consists of came.
The only difference between these rules lies in two places: `a verb' versus come,
and `its stem followed by -d' versus came. Similarly, they can both be
incorporated into the same network, as shown in Figure 2 (where the `stem' and
`shape' links are labelled for convenience, and the triangle once again shows the
`isa' relationship by linking the general concept at its base to the specific example
connected to its apex).
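The mechanism of default inheritance can be sketched in a few lines of code. The class and function names below are invented for illustration; WG itself specifies no implementation. The general pattern (rule 1) is stored on a node for `past tense of a verb', and the exception (rule 2) is a local value on the node for come's past tense, which blocks inheritance.

```python
class Node:
    """A concept node; 'isa' links it to a more general concept."""
    def __init__(self, name, isa=None, **attrs):
        self.name = name
        self.isa = isa        # the 'isa' (triangle) link
        self.attrs = attrs    # locally stored facts, e.g. stem, shape

    def get(self, attr):
        # Default inheritance: a locally stored value wins;
        # otherwise climb the 'isa' chain to the general pattern.
        node = self
        while node is not None:
            if attr in node.attrs:
                return node.attrs[attr]
            node = node.isa
        return None

def realise(node):
    # A shape may be stored directly (an exception) or inherited as
    # a rule that computes it from the node's own stem.
    val = node.get("shape")
    return val(node) if callable(val) else val

# Rule (1): the general pattern, stored once on the category.
# (The regular suffix is written here simply as "d", so that
# "cycle" + "d" spells "cycled".)
verb_past = Node("past tense of a verb",
                 shape=lambda n: n.get("stem") + "d")

# Regular case: no local shape, so rule (1) is inherited.
cycle_past = Node("past tense of CYCLE", isa=verb_past, stem="cycle")

# Rule (2): a locally stored shape overrides the inherited default.
come_past = Node("past tense of COME", isa=verb_past,
                 stem="come", shape="came")
```

Here `realise(cycle_past)` yields "cycled" by inheritance, while `realise(come_past)` yields "came" because the local value pre-empts the general rule; both patterns live in one and the same network, as the text claims.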
Prepared by:
Baby Rhea G. Palattao
MAED English
Submitted to:
Francis Mervin L. Agdana, Ph.D.
Professor