“Six: New directions for models in linguistic theory” in “Graphic Representation of Models in Linguistic Theory”
New directions for models in linguistic theory
We have taken two approaches to graphic representation in linguistics: the philosophy of science, and the principles of graphic design; now, the two perspectives converge. We began the discussion of graphic representation in linguistics by looking at the three kinds of figures that are currently used, three graphic least common denominators. This inevitably imposed a taxonomy of graphic representation in linguistics, and it then became possible to see the homonymy and synonymy that hold between figures and analytic techniques. When we considered as well the evidence that graphic representation functions as a model for linguistic science, the problem it presents for the development of linguistic theory became apparent. We shall try now to determine just where the problem lies, and we shall consider a solution that our theory of graphic representation does not encompass but that it does lead to.
Two-dimensional models in linguistics
Before we can do so, we must reduce our taxonomy of graphic representation in linguistics still further. For when we consider the fact--which we have noted a number of times--that all figures in linguistics are built out of only two art elements, line and space, the number of basic types shrinks from three to two. These two are branching diagrams and tables.
How does this come about? Partly as a result of the scant vocabulary of linguistic representation; partly as a result of the existence of two major themes in linguistic analysis, constituent analysis (of which genesis and taxonomy are special kinds) and componential analysis (of which conflation is a special kind); they correspond respectively to branching diagrams and tables. The branching diagram par excellence is, of course, the tree; the dynamic, hierarchical quality inherent in tree diagrams is the heart of constituent analysis. The table par excellence is the matrix; the static, nonhierarchical quality inherent in matrix diagrams is the heart of componential analysis. The two themes are thus reflected in two kinds of figures, two styles of linguistic architecture.20
The third type of figure, box diagrams, can easily be construed as either branching diagrams or tables. This is clearly the case with block diagrams, which are isomorphic with either a tree (for the meaning of constituent analysis) or a matrix (for the meaning of conflation). They are clearly built on an armature of one or the other:
Thus block diagrams are at most allomorphs--better, allographs--of the tree or the matrix--that is, of branching diagrams or tables. So is the Chinese box, isomorphic with a tree for constituent analysis:
More accurately, it is isomorphic with two trees, or rather with a sort of Rorschach tree, the top and bottom halves of which, however, do not differ:
Isomorphism is important because mere synonymy is not sufficient to reduce two figures to allomorphs of a single figure. If it were, the correspondences between figures and meanings for graphic representation in linguistics would be easily unraveled. Isomorphic representations are as close to the same figure as it is possible to be without being the same figure. Neither adds anything to what is represented by the other; neither subtracts anything from what is represented by the other; the figures express the same meaning in the same way. (For conflation block diagrams, this is not quite true; but they are clearly tables just as matrices are, for they are only matrices in which empty cells are given a particular interpretation. It is this interpretation that changes the meaning from componential analysis to conflation.)
The cube is more difficult. Because it is constructed out of line and space, and because of the way in which its parts express the meaning of componential analysis, it can be read as a branching diagram,
in which points are the units of the system and the lines branching from point to point carry shared features. This is the visual ambiguity of the cube. But there is also a sense in which the cube, read as a three-dimensional figure, is a tabular representation, inasmuch as it is the representation on a fixed figure of systems which vary. The number of features is fixed at three, and the number of units at eight, because the number of dimensions in the figure is three, and the number of corners, eight. The position of the units is also fixed, and so empty space has meaning.
The possibility of saying something about system or pattern by letting a slot stand empty is characteristic of tables as against branching diagrams. In a tree (with the exception of a tree for componential analysis, a key) there is no way to express the lack of a unit which ought to be there, to express conspicuous absence. Given a tree and a unit which is lacking in the system represented by that tree, who can say, without knowing in advance the structure of the system represented, just where it is missing? More important, who can infer from the shape of the tree that there is something missing? In a cube, on the other hand, the absence of a unit is immediately felt, as we saw in the figure from Austerlitz. Moreover, the cube is isomorphic with other fixed figures more table-like in appearance. Thus
may equally well be represented as
or turned inside-out (Conklin, personal communication) to
Whether the cube is a branching diagram or a table, then, depends on how it is read. The point is that it, like the other box diagrams, can be fitted into a two-part scheme consisting of branching diagrams and tables; it is essentially a two-dimensional construction. In fact, the very ambiguity of the cube suggests that this two-part scheme is not a dichotomy but a continuum, the two poles of which are two essentially opposite figures.
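The fixed geometry just described--three features, eight units--can be made concrete in a short sketch. The feature names below are invented for illustration; any three binary features yield the same cube structure:

```python
from itertools import product

# Hypothetical feature names, for illustration only.
features = ("voiced", "nasal", "back")

# The eight corners of the cube: every combination of three binary values.
units = list(product((0, 1), repeat=3))
assert len(units) == 8  # fixed at eight, as the text notes

def shared_features(a, b):
    """Features on which two units agree -- what an edge of the cube carries."""
    return [f for f, x, y in zip(features, a, b) if x == y]

# Read as a branching diagram: an edge joins two corners that differ
# in exactly one feature, so each edge carries two shared features.
edges = [(a, b) for a in units for b in units
         if a < b and sum(x != y for x, y in zip(a, b)) == 1]
assert len(edges) == 12  # a cube has twelve edges
assert all(len(shared_features(a, b)) == 2 for a, b in edges)
```

Read as a table, the same structure shows why empty space has meaning: with positions fixed at the corners, an absent unit leaves a detectable hole in `units`.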
By now it is apparent that for this two-part scheme to work, whether it is a dichotomy or a continuum, the concept of branching diagrams must be enlarged to include more than trees, as the concept of tables has been enlarged to include different kinds of fixed figures. In enlarging the notion of branching diagrams to embrace all the kinds found in linguistics, we shall need to depart from the definition of tree diagrams established so far.
We must first of all dispense with the requirement of a unique beginner. There will then be room for such branching diagrams as the vines of Morin and O'Malley:
There is after all no a priori reason why constituent-analysis trees should have a unique beginner: a great many sentences in fact "begin" at more than one spot. The sentence above, for instance, is no more than
(Morin, personal communication)
The requirement of a unique beginner is a requirement of meaning--the meaning of taxonomy--and not of form. This becomes clear when we consider the reading imposed on tree diagrams by dependency grammar. The graphic representation employed by Tesnière, for example, bears a strong resemblance to the vines of Morin and O'Malley, though the latter were developed independently (Morin, personal communication):
(Tesnière 1959:253)
That the resemblance is no more than that, however--a graphic similarity only--is apparent from discussions of the meaning of such trees in dependency grammar generally.21 Morin and O'Malley's diagram expresses the complicated network of relations holding among various constituents in the deep structure--relations of various kinds--and expresses them in such a way as to avoid repeating elements (such as I or you in the sentence given above) simply because they participate in more than one relation.
The diagrams of dependency grammar, on the other hand, express one sort of relation only: that of dependency, or government. As a consequence, the tree in dependency grammar differs radically not only from that of Morin and O'Malley but also from that of transformational grammar, and in fact from constituent-analysis trees in general. Essentially, dependency grammar supplements phrase-structure grammar by making room for the expression of government: Robinson (1970b:268) quotes Postal, who points out that, in a PS rule like NP→T N, the information that N is the head is necessarily lost.22 Yet this ultimately imposes on the tree diagram a reading altogether different from that of constituent analysis. Here the vertical dimension means "dependency" rather than "constituency," so that the branches are to be construed, reading down, as "x governs y," and, reading up, as "y is dependent on x." This is in contrast to the readings required for constituent-analysis trees, which are respectively "x dominates (in the sense of 'contains') y" and "y is a constituent (a subset) of x." The horizontal dimension then has no meaning in dependency trees, since no longer, as in constituent-analysis diagrams, are elements on the same horizontal subsets of the same (immediately dominating) set.
Thus dependency grammar imposes a different reading of the graphic elements of the tree. The difference is underscored by the graphic variant used by such linguists as Anderson (1971a, 1971b), Hays (1964), and Robinson (1970a, 1970b), in which slanting, solid-line branches represent relations of dependency, while straight, broken-line branches represent lexical realization. A diagram such as
is to be read: "V, realized as be, governs NP." (The distinction between broken lines for realization and solid lines for constituency is to be found in constituent-analysis trees as well; but since it is not possible for a node to participate simultaneously in constituency and realization, the graphic distinction is redundant. What is instead relied on to give the information that lexical realization, rather than constituency, is intended, is the convention that terminal elements stand in that relation to the non-branching nodes by which they are immediately dominated.23) Clearly, then, the reading imposed by dependency grammar constitutes a fifth meaning for the tree. Does it therefore depart from all other meanings for the figure? It would seem so: government is a concept apart from the concepts of genesis, taxonomy, componential analysis, and constituent analysis--all of which share, as we have seen, a least common denominator, the notion of successive replacement, linked at least to some degree in each with the notion of sets and subsets. Dependency trees, in contrast, express a notion that does not at all include the idea of successive replacement; to the contrary, it precludes it: for an element to govern or be governed by another, both must be present. And this idea is, moreover, expressed simultaneously with another--that of realization--which is not (as in constituent-analysis trees) automatically entailed by the first, but wholly independent of it.
It has become apparent, from the examples given above, that in dispensing with the requirement that a tree have a unique beginner, we must dispense as well with the requirement that its branches not converge. This, too, seems to be a restriction imposed by the meaning of taxonomy; and this, too, is better foregone for syntactic analysis. Both the vines of Morin and O'Malley and the tree diagrams of dependency grammar are likely to give rise to a network--a tree without a unique beginner and with converging branches. In Morin and O'Malley's diagram, for instance, convergence at the lower S-node is expressive of the double constituent function of that S, as object of both accuse and assume, while the redivergence of the branches expresses the fact that the lower S itself decomposes into constituents. In Tesnière's diagram, convergence expresses the double dependency of le maître (on both aime and déteste); and it is not hard to imagine, for the kind of diagram employed by other versions of dependency grammar, cases in which convergence would be necessary--that is, wherever, in more complicated structures, an element governs or is governed by more than one other element.24
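The idea of a network--a branching diagram in which a node may have more than one parent--can be sketched directly. The node labels below are loosely modeled on Morin and O'Malley's example (two verbs sharing a lower S as object); the structure is illustrative, not their actual diagram:

```python
# A minimal sketch of a "network": a branching diagram with
# converging branches, kept as a dict from node to children.
graph = {
    "S0": ["accuse", "assume"],
    "accuse": ["S_low"],    # branches converge:
    "assume": ["S_low"],    # both verbs point at the same lower S,
    "S_low": ["NP", "VP"],  # which then redivides into constituents
    "NP": [],
    "VP": [],
}

def parents(g, node):
    """All nodes whose branches lead to the given node."""
    return [p for p, kids in g.items() if node in kids]

# Convergence means a node may have more than one parent --
# exactly what a strict (taxonomic) tree forbids.
assert parents(graph, "S_low") == ["accuse", "assume"]
```

Sharing `S_low` between two parents is what lets the diagram avoid repeating an element simply because it participates in more than one relation.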
Admitting convergence makes the definition of branching diagrams roomy enough to accommodate as well some other, less obviously tree-like figures. These include Chomsky's finite-state diagram,
(Chomsky 1957:19)
--really a left-to-right tree, a tree lying on its side, with convergence. They also include "morpheme order diagrams" (Hoenigswald 1950) such as the "freightyard,"
and the "maze":
(Hockett 1958:291)
There are alternatives to a figure like the "rollercoaster,"
(Hockett 1958:291)
which, because it sets out all the morphemic alternatives in a row, must use varying configurations of line to express possible combinations of morphemes. As Hockett (1958:292) points out, the rollercoaster cannot, like the freightyard and the maze, display morphemes paradigmatically along the vertical dimension at the same time as it represents them syntagmatically as possible combinations of morphemes. The freightyard and the maze are isomorphic--in fact, very nearly identical--figures, differing in that the freightyard is explicitly treelike in form. (The freightyard is actually a directed graph, as we shall see later on). Both the freightyard and the maze, however, could with little alteration become a left-to-right converging tree:
Weinreich (1966:408) points out that convergence diagrams are the alternative to rearranging the order of elements on the tree. A fixed order of elements, he says, is incompatible with a fixed configuration of branching. We must amend this to add: unless elements are repeated. Katz and Postal's lexical entry tree for bachelor, for example, hangs onto both a fixed order of semantic features and a fixed configuration of branching (a tree that fulfills the two requirements we have just jettisoned, those of a unique beginner and non-convergence)--but only at the expense of repeating features. If we allow convergence, on the other hand, we need not repeat features to preserve their order:
Similarly, the problem of designing a branching diagram that simultaneously represents the hierarchy of features and allows to be inferred all and only the correct implicational relations among them, is very simply solved by permitting convergence. A convergence diagram obviates the difficulties posed, for instance, by the representation of the lexical feature hierarchy from Bever and Rosenbaum discussed in Chapter 1:
Such a representation not only captures the essential unique implication which concerns Bever and Rosenbaum--that is, [+Human] implies [+Animate] implies [-Plant]--but also conveys accurately the non-unique implications of [-Human] (which implies either [+Animate] or [-Animate]) and [-Animate] (which implies either [+Plant] or [-Plant]). Moreover, such a representation is isomorphic with the facts of the case: for a blending of two meanings, taxonomy and componential analysis, a blending of two trees--one reading down, the abstract taxonomic relation among the features-as-classes; another reading up, the practical implicational relation among the features-as-components. (In its doubleness, the convergence diagram recalls the "Rorschach" tree used by Pike; the former is a better representation for the theoretical concept we are discussing here insofar as it expresses graphically the notion of a blending of two different analytic techniques, while the latter expresses graphically the notion of two different applications of the same analytic technique, that of taxonomy.) Clearly, then, the difficulties of lexical representation discussed by Bever and Rosenbaum arise at least in part from insisting on an unsuitable representation, from denying the admissibility of convergence in tree diagrams. A convergence diagram is a better expression of the theoretical concept--in actuality, as we have seen, a double concept--involved; and, as Chomsky and Halle point out, "the more direct the relationship between classificatory and . . . -etic matrices, the less complex--the more highly valued--will be the resulting grammar" (1968:381). Thus we have, again, a demonstration of the influence of model upon theory.
The third and last alteration needed to make the definition of tree diagrams fit all kinds of branching diagrams concerns directionality. Let such diagrams be either dynamic or static, either with or without directionality, and the definition is capacious enough to include network diagrams. (This is the sort of figure which the cube turns into if it is read as a configuration of point and line.) Networks generally express componential analysis, as in the following example:
(Jakobson and Lötz 1949:157)
If parallel lines are used, as they are here, to express the same feature, the figure can convey an analysis in several features. Where most of the branching diagrams we have looked at are dynamic, having a beginning (or several beginnings) and an end, network diagrams are static. They have neither beginning nor end.
The definition of branching diagrams that has emerged so far is no more than a definition of graphs.25 For what is a graph but the joining of points by lines? What we have called dynamic branching diagrams--trees, branching diagrams with more than one beginner, and convergence diagrams--are "directed graphs," graphs with a definite irreversible direction for every "edge" (Ore 1963:53).26 The directedness of directed graphs is usually shown by arrows, as in
(Ore 1963:67)
Morin and O'Malley's vine diagram (which, as its designers point out [182], is a type of directed graph) would be
Hockett's various morpheme structure diagrams would be
What we have called network diagrams, in contrast, are undirected graphs (Ore 1963:52). They would look just as they do ordinarily, like the network for Jakobson's componential analysis of Russian declension,
(The chart of correspondences between form and meaning for graphic representation in linguistics is a network diagram.)
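The contrast between directed and undirected graphs just drawn from Ore can be sketched computationally. The tiny three-vertex graph below is an assumption for illustration:

```python
# A graph joins points ("vertices") by lines ("edges"); in a directed
# graph each edge has a definite, irreversible direction.
directed_edges = {("a", "b"), ("b", "c")}  # arrows: a -> b -> c

def reachable(edges, start):
    """Vertices reachable from start, respecting edge direction."""
    seen, frontier = set(), {start}
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier |= {v for u, v in edges if u == node} - seen
    return seen

# Directed (a dynamic branching diagram): the arrows cannot be run
# backward, so from "c" nothing further is reachable.
assert reachable(directed_edges, "c") == {"c"}

# Undirected (a static network diagram): make every edge reversible.
undirected = directed_edges | {(v, u) for u, v in directed_edges}
assert reachable(undirected, "c") == {"a", "b", "c"}
```

The same edge set serves for both readings; only the presence or absence of directionality distinguishes the dynamic from the static figure.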
We have pared the number of figures in use for linguistics to two, branching diagrams (or graphs) and tables, having gone, as it were, from the surface structure to the deep structure of graphic representation in linguistics. It seems clear that the tangle of synonymy and homonymy can be laid at the door of this severely limited inventory of figures. With a number of different analytic techniques (though perhaps no more than variations on the two great themes of constituent analysis and componential analysis, they are still different), with a number of different schools and theories and periods of linguistic science, how can such slender resources stretch far enough? Moreover, it is likely that the influence of graphic representation on the development of linguistic theory depends on the figures available. What allows the reduction of all the figures in linguistics to two is the restriction of the vocabulary of graphic representation to the two art elements of line and space, coupled with the limitations of two-dimensionality. In a sense, then, the problem of graphic representation suggests its own solution: widen the resources of representation; increase the dimensions to three.
Three-dimensional models in linguistics
"The heart of all major discoveries in the physical sciences," says Toulmin (1953:34), "is the discovery of novel methods of representation, and so of fresh techniques by which inferences can be drawn." Three-dimensional models are certainly novel methods of representation for linguistic science; but, as we have suggested, they are in a sense the natural inheritors of graphic representation: they grow out of the kind of representation they replace; for three-dimensional representation simply expands the vocabulary of art elements to include volume. And three-dimensional models furnish fresh techniques by which inferences can be drawn about a linguistic explicandum--a fresh neutral analogy. Not only do three-dimensional models open up new territory to be explored, but they also suggest points at which it would be wise to re-examine old ground. They can correct the neutral analogy furnished by two-dimensional models in which territory was wrongly claimed for the positive analogy, resulting in the unwarranted development of linguistic theory. Three-dimensional alternatives to graphic representation are thus indirect evidence for the claim that graphic representation influences the direction taken by linguistic theory.
We have already had evidence of the necessity for--indeed, the inevitability of--three-dimensional models in linguistics, in the attempt to get over into three dimensions that is seen in the modification of transformational-generative theory. The notions of rule, of the transformational cycle, of the dichotomy between deep and surface structure, all represent such an attempt. The dichotomy between deep and surface structure in particular-- imposing the requirement of holding simultaneously in the mind two mutually exclusive views of the explicandum--is not restricted to transformational grammar. Stratificational grammar is another theory that implicitly imposes such a requirement: as its very name suggests, it takes at any given moment in the analysis a multiple view of the datum. Still another is dependency grammar. This is apparent in the work of Tesnière; such a representation as
(Tesnière 1959:347)
is intended as collapsing or telescoping deep and surface structure. As Tesnière points out, this diagram "fait intervenir quatre phrases, qu'il est parfaitement possible d'additionner" ["brings into play four sentences, which it is perfectly possible to add together"]:
Tesnière himself insists on the fundamental discrepancy between the two-dimensional model and the multi-dimensional explicandum: "Il y a donc antinomie entre l'ordre structural, qui est à plusieurs dimensions (réduites à deux dans le stemma), et l'ordre linéaire, qui est à une dimension" ["There is thus an antinomy between structural order, which has several dimensions (reduced to two in the stemma), and linear order, which has one dimension"] (21 [italics mine]).
Transformational-generative theory has so come to rely on the notion of movement, expressed as rules, as to make of it a productive device in every instance where the two-dimensional model falls short of the explicandum--that is, where exploration of the neutral analogy it presents suggests modification of the theory. An example is the modification of the concept of markedness proposed by Chomsky and Halle (1968) and implemented in a modified form for syntax by Lakoff (1970). Chomsky and Halle's revised markedness theory replaces the old +/-/0 values in phonological matrices (grid diagrams) for lexical items with the values m[arked]/u[nmarked]/+/- and the addition of marking conventions. The marking conventions, not part of the grammar but rather "universal rules of interpretation" (403), serve to convert m and u values to + and -; they are thus rules upon rules. These marking conventions, moreover, are extremely complex: they are context sensitive (e.g., "in initial position before a consonant, the consonant that is [u continuant] is interpreted as [+continuant]; in other positions it is interpreted as [-continuant]" [412]); they incorporate the whole set of implicational relations existing among features (410); and they serve as the measure of the "naturalness" of phonological systems (411) and phonological rules (420). Such complexity argues strongly for the view that these rules add a dimension to the grammar.
The notional addition of a dimension, though it gives evidence of the necessity of three-dimensional models for linguistic theory, is nonetheless merely notional; it remains to demonstrate the usefulness of actual three-dimensional models in linguistics. The first of two three-dimensional models that we shall consider here is a phrase-structure mobile. The design is William G. Moulton's (personal communication); it consists in putting a phrase-structure (immediate-constituent) tree into three dimensions, suspended by its unique beginner, S, so that it floats in space:
We can learn from it not only about the explicandum, English sentences, but also about the influence of tree diagrams on linguistic theory. The problems presented by tree diagrams include the ordering of constituents in the deep structure; the difficulty of representing discontinuous constituents; the necessity for positing such transformations as pruning, extraposition, subject-raising. All are rooted in the linearity of two-dimensional representation. It is reasonable, then, to suppose that transplanting the tree diagram from a two-dimensional medium into a three-dimensional one will remove these difficulties.
The order of constituents in deep structure is one drawback that the phrase-structure mobile does not have. Because the tree is suspended by its beginner, the terminal elements--the deep structure constituents--hang free; they shift this way and that as the mobile moves, so that it is impossible to fix them in any order. It is important that this happens without change in the hierarchical structure imposed on the explicandum by its analysis into immediate constituents: because this hierarchical structure lies wholly on the vertical dimension, it is preserved. The model of the phrase-structure mobile, then, allows us to conceive of, to visualize, constituent analysis without ordered constituents. It expunges from the neutral analogy the potentially misleading element of linear order.
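The mobile's property of hierarchy without linear order can be captured in a sketch: store each node's constituents as a set rather than an ordered sequence. The category and word labels below are illustrative:

```python
# A sketch of the phrase-structure mobile: hierarchical structure is
# kept along the "vertical" dimension (nesting), but children are a
# frozenset, so no left-to-right order of constituents is imposed.

def node(label, *children):
    return (label, frozenset(children))

# "The dog barked", built twice with constituents deliberately
# swung into different left-to-right arrangements.
s1 = node("S", node("NP", node("Det", "the"), node("N", "dog")),
               node("VP", node("V", "barked")))
s2 = node("S", node("VP", node("V", "barked")),
               node("NP", node("N", "dog"), node("Det", "the")))

# As with the mobile, rearranging the hanging elements leaves the
# hierarchical structure -- hence the analysis -- unchanged:
assert s1 == s2
```

An ordered-tuple version of the same trees would compare unequal, which is exactly the misleading element of linear order that the mobile expunges from the neutral analogy.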
Representation of discontinuous constituents is another difficulty that the phrase-structure mobile avoids. Because there is no linear order of terminal elements, there is no need to grapple with the problem of discontinuous constituents, terminal elements that violate that order. The ultimate constituents--the terminal elements--are not linked by the exigencies of two-dimensionality into a continuous chain. Even constituents in the same construction are related not horizontally but vertically, by virtue of the hierarchical structure imposed by constituent analysis; in the phrase-structure mobile we have been looking at, for instance, the two constituents
and back again; this in no way alters their relationship to each other or to the other elements in the sentence. Conversely, if we have a sentence like I looked him up, in which looked ... up is a discontinuous construction, we must, in two- dimensional representation, disentangle the two constructions that make up the sentence:
In three-dimensional representation, the constituents would not be constrained by linear order. They would float free, bound only by the ties of hierarchical structure along the vertical dimension. Since constituents are not continuous to begin with, there is no need to discontinue constituents that have been fortuitously thrown together.
This is, admittedly, a less clear case of the advantages to be gained by increasing two dimensions to three. In the first place, one pays a price for the advantage of not having to represent discontinuity, and that is not being able to represent continuity. It is not possible to show the linearity that is, after all, characteristic of speech if not of grammatical structure. Thus there is, in the second place, the question of whether the neutral analogy furnished by the phrase-structure mobile is not in its own way misleading. For if the counterpoint between linear order and hierarchical structure is characteristic of language, it is misleading for the three-dimensional model to indicate only hierarchical structure. The phrase-structure mobile suggests that the aspect of the neutral analogy furnished by the two-dimensional model that concerns the continuity of terminal elements ought to be consigned to the negative analogy. The question is, what is it we are investigating? What is the explicandum for which the phrase-structure mobile is an analogue? If it is grammatical structure--deep structure--and not speech--surface structure--then the mobile is a better model than the two-dimensional tree.
The problem of such transformations as pruning, extraposition, raising, and the like, is only partly circumvented by the phrase-structure mobile. We should still like to delete nodes that do not branch, that is, horizontal bars with nothing hanging from them; to remove segments and hang them elsewhere; to call moving a segment higher up on the structure "raising" it. If these are misleading notions, if they reflect nothing in the explicandum, no processes that actually go on in the world (wherever it is) of grammar, then they are the point at which the three-dimensional model fails us, the point at which it is outgrown. The important thing is that the two-dimensional figure does function as a model for linguistic science, and that there does lie a better model just beyond it. That going from a two-dimensional to a three-dimensional model, a mere change of representation, removes apparent theoretical givens like the order of constituents in deep structure, shows the very real influence of graphic representation on linguistic theory.
The second of our three-dimensional models for linguistics is a torus for phonological space. A torus, a geometrical solid whose surface is formed by rotating a circle about a vertical axis (Hilbert and Cohn-Vossen 1952:200), looks like this:
(Hilbert and Cohn-Vossen 1952:200)
In a torus model of phonological space, the usual matrix arrangement for consonants,
is first wrapped around from top to bottom to form a cylinder:
It is then wrapped from left to right as well, so that the front consonants meet the back ones:
The three-dimensional torus is a more accurate model for phonological space than the two-dimensional matrix for two reasons: first, the three-dimensional model allows a fuller and more accurate expression of the componential analysis of sound systems, whether it is traditional articulatory analysis or distinctive feature analysis; second, the three-dimensional model reflects better the facts of sound change. Before we discuss the reasons why the three-dimensional model is a better fit for the explicandum of phonological space, however, we shall look more closely at its construction.
The circle is the one simple geometric shape that did not figure in our taxonomy of diagrams in linguistics; it does figure among the forerunners of the torus, in Grimm's Kreislauf.
(Schleicher 1888:97)
The circular configuration reflects the facts of sound change: the Germanic and High German Consonant Shifts, in which Indo-European Aspiratae (A) [bh dh gh] became Germanic Mediae (M) [b d g] and subsequently Old High German Tenues (T) [p t k]; Indo-European Mediae [b d g] became Germanic Tenues [p t k] and subsequently Old High German Aspiratae [pf ts x]; and Indo-European Tenues [p t k] became Germanic Aspiratae [f θ x] and subsequently Old High German Mediae [b d g]. Representing these sound changes by a circle,
collapses into a single configuration the following:
The Kreislauf constitutes a longitudinal section of the torus. Slicing through the torus from top to bottom yields
The Kreislauf, then, must also be equivalent to a matrix arrangement, wrapped from top to bottom, that is,
can become
The other direction, wrapping from left to right, was proposed by Moulton (MS) as an alternative to the matrix, to account for sound changes like the shift from Old English /x/ to Modern English /f/ (as in tōh > tough) and to accommodate sounds with double articulation (for instance, [kp], [gb]). In both cases, only the two ends of the spectrum along the horizontal in the matrix are involved; the problem, then, is how to get from one end of the matrix to the other without traversing the middle. Moulton's solution is to curve phonological space--though still in two dimensions--so that every row in the matrix becomes a circle:
(Moulton MS:20)
The whole horizontal dimension of the matrix would of course be involved; this does not necessarily yield a three-dimensional model, however. The result could be a set of concentric circles:
This is presumably what Moulton has in mind, since he at no time mentions a three-dimensional model. But for our purposes, the three-dimensional wrapping of the matrix from left to right as well is the better conception.
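The double wrapping can be sketched as modular indexing: on a torus, adjacency is computed modulo the number of rows and columns, so the top row meets the bottom and the leftmost place of articulation meets the rightmost. The small grid of symbols below is an assumption for illustration, not the text's actual consonant chart:

```python
# A sketch of wrapping a consonant matrix into a torus.
grid = [["p", "t", "c", "k"],   # rows: rough manner classes
        ["b", "d", "j", "g"],   # columns: front-to-back places
        ["f", "s", "S", "x"]]   # of articulation
ROWS, COLS = len(grid), len(grid[0])
pos = {grid[r][c]: (r, c) for r in range(ROWS) for c in range(COLS)}

def torus_neighbors(sound):
    """Sounds adjacent on the torus: indices wrap in both directions."""
    r, c = pos[sound]
    return {grid[(r - 1) % ROWS][c], grid[(r + 1) % ROWS][c],
            grid[r][(c - 1) % COLS], grid[r][(c + 1) % COLS]}

# On the flat matrix, /x/ and /f/ sit at opposite ends of a row;
# on the torus the row closes into a circle and they are adjacent,
# as the tough-type shift suggests.
assert "f" in torus_neighbors("x")
```

Nothing three-dimensional need be rendered to exploit the model: the wrap-around arithmetic alone gives the torus its advantage over the matrix, a path between the ends that does not traverse the middle.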
Among the reasons for preferring the three-dimensional model for phonological space is that it is a better representation of the componential analysis of sound systems. The ordering of sounds sharing the same place of articulation on the torus was suggested by the Kreislauf. The usual order of sounds in the matrix is:
or, in terms of articulatory features,
The ordering in the torus, in contrast, allows any sounds sharing a feature to be contiguous:
In terms of distinctive feature theory, each circle--each cross-section of the torus--constitutes a correlation:
Wrapping the matrix from top to bottom, then, allows a better expression of componential analysis; so does wrapping it from left to right. It not only provides a natural place to put sounds like [kp] in a traditional articulatory analysis, as Moulton's curved phonological space was designed in part to do. It also makes contiguous the sounds that share the feature [+Grave], which the matrix cannot do.
The second reason for preferring the torus to the matrix is that it mirrors the historical facts of sound change. It is already clear that this is so for the two sound shifts diagrammed by the Kreislauf. It is true as well for a number of sound changes involving--heretofore inexplicably--the two ends of the matrix without going through the middle, like the shift from Old English /x/ to Modern English /f/, inexplicable in terms of the two-dimensional matrix model. What route did the change take through phonological space? Did /x/ creep around behind the figure? And what was it doing travelling in the wrong direction (from right to left) in the first place? Theories of sound change developed from the matrix model can, it is true, explain such changes by positing intermediate stages in the shift--for instance, [x] > [θ] > [f]; but there is not always evidence for such intermediate stages--indeed, in many cases, the evidence tells against them (Moulton MS:20). Moreover, they cannot account for the fact that the sounds travel in the wrong direction. It is this sort of sound change that the wrapping of the matrix from left to right is designed to accommodate.
Both of the three-dimensional models we have looked at, the phrase-structure mobile and the torus for phonological space, are improvements on two-dimensional models, the tree and the matrix respectively--the two kinds of figures current in linguistics. Both grow out of two-dimensional representation. Neither is without its flaws: the phrase-structure mobile, while it escapes some of the pitfalls of tree diagrams, does not escape others; the torus, though it is a better fit for the explicandum of phonological space, is an oversimplified representation of it insofar as it does not have room for vowels and semi-vowels.
Thus our theory of graphic representation, with its double perspective of the philosophy of science and the principles of design, has led us back again to the question with which we began. What is it in graphic representation that enables it to furnish models for linguistic science? Though our theory does not solve all the problems it uncovers in graphic representation in linguistics, it does uncover them; though it does not answer all the questions it raises, it does raise them. It is clear that as long as graphic representation furnishes linguistic science with models, as long as it influences the development of linguistic theory, the solution to the problems of model-building lies in more model-building. It lies in the direction in which our theory of graphic representation leads: in the understanding of the function of models according to the philosophy of science; in the understanding of the construction of models according to the principles of design.