5 Seeing is believing. --Plautus (ca. 200 B.C.)
After having traversed the channel, a signal arrives at a receiver (fig 2-1) where its information is extracted. The receiver's eye and nervous system act not only as filters of signals but also as processors of the information they encode. Therefore, the receiver can constrain how information is encoded in optical signals. This chapter concerns the primary features of visual physiology and perception that prove useful in understanding the optical design of animal signals.
The primary carriers of information in visual signals are spatiotemporal arrays of photons that differ in total energy and frequency composition (ch 2), which the eye perceives in terms of brightness, hue and saturation. Although all three variables are intimately related physically, physiologically and psychologically, it is useful to consider first achromatic sensation based solely on brightness and then to extend treatment to chromatic sensation involving hue and saturation as well.
The primary process in photoreception is the transduction of electromagnetic energy to neural energy. This complex process is achieved by large molecules of visual pigments in the eye that absorb photons, changing their electromagnetic energy to molecular energy that initiates chemical changes in the molecule. The visual pigment molecule consists of a relatively small chromophore (light-absorbing part, derived from vitamin A) attached to a large protein moiety. The energy added by photons breaks bonds holding the two parts together and the separation (in vertebrate eyes) or partial separation (in invertebrate compound eyes) of the two parts initiates a chain of complex chemical reactions that eventually give rise to electrical signals in neurons of the visual system.
Every known visual pigment molecule absorbs over a broad spectral range, and its absorption spectrum resembles a probability-density distribution for reasons discussed in ch 3. Dartnall (1953) found that when absorption spectra were plotted by frequency rather than wavelength (eq 3.1), all curves possessed nearly the same shape, but were displaced on the frequency scale. Later, Munz and Schwanzara (1967) showed that there were slight but consistent differences in the shapes of absorption spectra of pigments based on vitamin A1 and A2. In actuality, visual pigment absorption spectra measured in intact photoreceptors by methods of microspectrophotometry do not conform exactly to the shape of the two theoretical spectra (e.g., Liebman and Entine, 1968).
If the receiver's eye contains photoreceptor cells all having the same visual pigment, the eye's sensitivity to light should be proportional to its probability of absorbing photons at a given spectral frequency. Therefore, behavioral and physiological determinations of spectral sensitivity should correlate with visual pigment absorption spectra, and where relevant comparisons have been made the correlation is good (e.g., fig 5-1). The absorption spectra of visual pigments, and hence the spectral sensitivity of the eye containing them, are thought to be evolutionarily adapted to the spectral distribution of ambient light in the species' habitats (e.g., McFarland and Munz, 1975b).
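Dartnall's observation lends itself to a simple numerical illustration. The sketch below (a rough caricature, not the Dartnall nomogram itself) assumes a Gaussian-shaped absorption template of fixed width on a frequency axis and merely slides it to different peak frequencies; the peaks and bandwidth are invented for illustration, not measured values.

```python
import numpy as np

def absorption(freq_thz, peak_thz, width_thz=60.0):
    """Relative absorption of a hypothetical visual pigment.

    Assumes (as an illustration of Dartnall's observation) that all
    pigments share one Gaussian-shaped template on a frequency axis,
    merely displaced to different peak frequencies.  Real templates
    are skewed, so this is only a sketch.
    """
    return np.exp(-0.5 * ((freq_thz - peak_thz) / width_thz) ** 2)

freqs = np.arange(400, 751, 25)              # THz, roughly the visible range
for peak in (500.0, 560.0, 620.0):           # illustrative pigment peaks
    curve = absorption(freqs, peak)
    print(f"peak {peak:.0f} THz:",
          " ".join(f"{a:.2f}" for a in curve))
```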
Pit-vipers and a few other animals transduce electromagnetic radiation in the infrared frequencies (ch 3) by an entirely different mechanism. In pit-vipers, a membrane covering a small pit absorbs IR quanta, thereby heating the pit and activating heat-sensitive receptor cells that line it. Such pit-organs cannot form images in the manner of complex eyes, nor do animals use IR signals in social communication. Reception in the near ultraviolet frequencies by insects and some other animals is accomplished by visual pigments in complex eyes or simpler ocelli, and hence such UV reception is true vision, albeit of frequencies invisible to us human animals.
Fig 5-1. Relative spectral sensitivity measured behaviorally and electrophysiologically with relative absorption of a visual pigment plotted for comparison (after Blough, 1957). The data are expressed as logarithms of the fractions of the maximum sensitivity or pigment extinction; behavioral data are from a tracking technique using operant conditioning with the domestic pigeon; electrophysiological data from microelectrode recordings in the pigeon's retina; and pigment data from the eye of the domestic fowl.
As a footnote to the transduction process, it may be noted that the structural orientation of elements in photoreceptors may render them differentially sensitive to planes of polarized light (Waterman and Horch, 1966). More than a hundred invertebrate species are known to perceive the plane of polarization, and recent evidence suggests that some fishes, an amphibian, man himself and the pigeon possess at least marginal sensitivity (Kreithen and Keeton, 1974).
Finally, for the sake of completeness it is useful to point out that even higher animals possess sensitivity to light that is not mediated by the principal eyes or simple ocelli. Adler (e.g., 1970) has studied receptors on the dorsal surface of the brain of amphibians that help control daily rhythms and orientational behavior. These receptors are structurally somewhat similar to eyes in the usual sense, but in birds the receptors controlling rhythms are known to be deep in the brain and presumably are not eye-like (see Menaker et al., 1970). Furthermore, pigeon neonates have dermal sensitivity to light that does not involve the head at all (Heaton and Harth, 1974), and Needham (1974) states that probably all animals have dermal light sensitivity (see also Steven, 1963). In this volume, I am concerned primarily with stimuli perceived by the primary, paired, lateral eyes of animals that communicate optically.
When irradiance on the eye is increased, more photons are absorbed by the visual pigment molecules, and the visual sensation of light increases in magnitude. One way to measure the visual response is by placing one electrode on the surface of the eye and another elsewhere on the animal. When light is shone upon the eye, a complex electrical potential called the electroretinogram (ERG) is recorded between the electrodes, and the magnitude of this ERG measures (approximately) the visual sensation of brightness. The experiment yields a sigmoid function (fig 5-2) when only one kind of visual pigment molecule is activated by light.
The oldest problem in psychophysics is the exact nature of the function that relates visual response or sensation (ψ) to physical intensity (irradiance, I).
Fig 5-2. Intensity-response function of an ERG from the South American bullfrog (after Sustare, 1976). The response measure increases according to a sigmoid function of log intensity, so that near the center of the curve a given difference in log intensity between two stimuli (∆I) causes a large difference in response (∆ψ), whereas near the extreme parts of the curve the same ∆I causes only a small change in response.
There is agreement that the sensation ψ increases monotonically with intensity, but the function is never linear (ψ ≠ t + kI, where t is a threshold and k a proportionality constant). The Weber-Fechner relation postulates that ψ ∝ log I and the Stevens power relation asserts that log ψ ∝ log I; both relations fit only portions of empirical intensity-response curves (fig 5-3).
Fig 5-3. Idealized intensity-response function like that of fig 5-2. The middle part of the curve shows a nearly linear increase in response with log intensity (Weber-Fechner "law"), whereas the first part of the curve resembles a power function (Stevens "law").
Hailman and Jaeger (1976) point out that empirical curves such as that of fig 5-2 fit Crozier's (e.g., 1940) model of the integral of a log-normal probability-density function; the exact nature of the function, however, is not of critical concern here.
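The competing relations are easy to compare numerically. The following sketch evaluates an illustrative Weber-Fechner form (ψ proportional to log I) and an illustrative Stevens power form (ψ proportional to I raised to an assumed exponent of 0.33) over several decades of intensity; the constants are arbitrary and are not fitted to any data in this chapter.

```python
import numpy as np

# Illustrative constants only; neither k nor the exponent n is taken
# from the text, and real fits differ among sense modalities.
k_wf, k_st, n = 1.0, 1.0, 0.33

def weber_fechner(I):
    """psi proportional to log I (Weber-Fechner relation)."""
    return k_wf * np.log10(I)

def stevens(I):
    """log psi proportional to log I, i.e. a power function (Stevens relation)."""
    return k_st * I ** n

for I in 10.0 ** np.arange(0, 7):            # seven decades of intensity
    print(f"I = 1e{int(np.log10(I))}:  "
          f"Weber-Fechner psi = {weber_fechner(I):5.2f}   "
          f"Stevens psi = {stevens(I):8.2f}")
```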
Whatever its exact form, the perceived contrast between two stimuli should be proportional to the difference in the sensations they evoke. Therefore, the perceived difference between two stimuli of different radiance depends not only upon the radiance-difference itself, but also upon where this difference occurs on the intensity scale. Specifically, a given difference (ΔI) will evoke the greatest difference in brightness sensation (Δψ) near the inflection point of the sigmoid curve (fig 5-2). This implication is important for understanding visual adaptation, considered below.
The perception of brightness is actually very complex, and entire volumes are devoted to the subject (e.g., Hurvich and Jameson, 1966). One principal phenomenon is of particular interest in optical signals of animals. When the eye perceives two adjacent stimuli of contrasting brightness, the difference in brightness is accentuated at the mutual border. Such lateral inhibition tends to emphasize the outline of a dark object against a lighter background (or vice versa), so that visual shape is enhanced by sharp borders of contrast (ch 7). More is said about lateral inhibition near the end of this chapter.
An important perceptual phenomenon occurs when very bright light is in the visual field. Such light enters the vertebrate, camera-like eye not only through the pupil but apparently also directly through the surrounding parts, and is then refracted and scattered by the ocular media to cause a general haze of light within the eye (Minnaert, 1954). This phenomenon, called dazzling, obscures vision in the field surrounding the source, decreases the sensitivity of the eye, and may cause momentary giddiness or pain. An apparent use of a dazzling signal is discussed in ch 6.
When the eye has been in the dark it becomes more sensitive to light (dark-adaptation), and in bright conditions it becomes less sensitive (light-adaptation). We are familiar with this phenomenon of adaptational shifts over large ranges of ambient irradiance, but it occurs over small ranges as well. The effect of adaptation is to shift the sigmoid intensity-response curve (figs 5-2 and 5-3) toward the left (dark-adaptation) or right (light-adaptation) on its intensity axis. The usefulness of this shift is evident: by adapting such that the inflection point of the curve is near the average ambient intensity, the eye maximizes the contrast between stimuli that deviate from the average (Hailman and Jaeger, 1976).
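Both points--maximal contrast near the inflection and the adaptational sliding of the curve--can be sketched with a logistic stand-in for the sigmoid intensity-response function (the text itself favors Crozier's integrated log-normal; the logistic here is only a convenient approximation with invented constants).

```python
import numpy as np

def response(log_I, half_log_I, slope=1.5):
    """Sigmoid intensity-response function (logistic stand-in for
    Crozier's integrated log-normal); half_log_I is the inflection
    point, which adaptation slides along the log-intensity axis."""
    return 1.0 / (1.0 + np.exp(-slope * (log_I - half_log_I)))

delta_log_I = 0.3                      # a fixed intensity difference (log units)
adapt_point = 3.0                      # assumed ambient (adapting) log intensity

# Delta-psi produced by the same delta-I at different points on the curve:
for log_I in (1.0, 3.0, 5.0):
    d_psi = response(log_I + delta_log_I, adapt_point) - response(log_I, adapt_point)
    print(f"log I = {log_I}:  delta-psi = {d_psi:.3f}")

# Light-adaptation: the curve slides right, so a stimulus pair near the
# new ambient level again yields nearly the maximal contrast.
for new_adapt in (3.0, 5.0):
    d_psi = response(5.0 + delta_log_I, new_adapt) - response(5.0, new_adapt)
    print(f"adapted to log I = {new_adapt}:  delta-psi at log I = 5.0 is {d_psi:.3f}")
```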
One can measure the time-course of dark-adaptation by determining the minimum intensity of light perceived by an animal (threshold) as a function of its time in the dark. For example, one may simply project lights of various intensities and ask a human observer whether he or she can see them. Animals can be trained by operant conditioning techniques to answer the same question by responding in one way when they see light and in another way when they do not. Therefore, an animal tracks its visual threshold in such behavioral experiments, and one expects the threshold to decrease smoothly as a function of time in the dark. For some animals, smooth dark-adaptation curves result, but for many species there is a sharp break somewhere in the curve (fig 5-4). This break suggests that there are two populations of receptors having different characteristics, and in vertebrate retinas these populations are known as rods and cones, named for the shape of their outer segments (the portions that contain the visual pigments).
Fig 5-4. Dark-adaptation curves for several species. As a function of time in the dark, the eye becomes more sensitive (threshold falls). The break in the curve is attributed to differences between rod and cone vision in duplex retinae: the cone sensitivity falls to an asymptotic threshold, but as dark-adaptation in rods continues the visual threshold falls to still lower values until rod sensitivity also reaches an asymptotic value. Fully dark-adapted values for man and an owl are included for comparison.
Experiments establish that rods are solely responsible for perception in the dimmest part of the intensity range (lower portions of curves in fig 5-4) and primarily cones for the brightest part. Cones also mediate color perception, but for the present that complicated subject may be deferred; color vision is not known to occur at very dim intensities mediated by rod vision alone. Eyes with two populations of photoreceptors such as rods and cones show visual duplexity, and such eyes are found among many invertebrates as well as vertebrates.
During the evening, when the ambient light falls from levels of cone-dominated to rod-mediated vision, the brightnesses of objects relative to one another change. Purkinje noticed this shift in the apparent brightnesses of flowers in his garden at sunset, and this Purkinje shift implies that the spectral sensitivity of the eye changes. Using an operant conditioning method similar to that used for determining the time-course of dark-adaptation, it is possible to measure the spectral thresholds of animals under high and low light levels. As expected, the former curve of photopic thresholds lies at much higher intensities than the latter curve of scotopic thresholds (fig 5-5). The Purkinje shift is shown more clearly by inverting the threshold scale to create curves of spectral sensitivity, and expressing both curves in terms of their maxima (fig 5-6, p. 127): curves of relative spectral sensitivity. From such curves it is apparent that under scotopic conditions a light of 600 THz will appear brighter than a light of the same physical intensity at 500 THz, but under photopic conditions, equally intense lights of these two frequencies will have the reverse brightness relations.
Because every known eye is differentially sensitive across its visible spectrum according to one or more curves of spectral sensitivity (e.g., fig 5-6), stimuli of the same total physical irradiance may evoke different visual brightnesses when they differ in spectral composition. It therefore proves useful to devise a measuring scale that incorporates spectral sensitivity such that two stimuli that appear equally bright have the same value on the scale. Such scales have been devised for human vision, and they are the primary basis for the science of photometry.
Fig 5-5. Scotopic and photopic spectral thresholds of the pigeon (after Blough, 1957). Light of higher energy is needed to stimulate the cones of photopic vision than the rods of scotopic vision.
The Commission Internationale de l'Eclairage (C.I.E.) has established two curves of standardized visual efficiency that are very close to the empirically measured scotopic and photopic spectral sensitivity curves of the human eye. Because photometry is concerned primarily with vision at high light levels, the C.I.E. photopic curve is used as the basis for defining photometric units of measurement. Just as the distinction is made between the radiometric concepts of radiance and irradiance (ch 3), photometry distinguishes the photometric concepts of luminance and illuminance, respectively, as the quantities of standardized brightness sensation.
Fig 5-6. Purkinje-shift in spectral sensitivities, based on the data of fig 5-5 (opposite), plotted as log relative sensitivity curves. Each curve is normalized to its own maximum sensitivity to show more clearly the shift in spectral placement of the curves.
Photometry has its own standard of luminous intensity called the candela (cd), or new candle, based originally on the apparent brightness of an actual burning candle. A point source of one candela emits one lumen of luminous flux into each steradian (the unit of solid angle). The light falling upon an illuminated surface may then be measured in lumens/m2, the unit of illuminance called the lux. Similarly, a luminous surface may be characterized by the luminous intensity it emits per unit of projected area (candelas/m2), the unit of luminance called the nit. Photometry is initially confusing because of its diversity of units in different systems. For example, the English unit of illuminance often used is the foot-candle, and that of luminance the foot-lambert.
Radiance can be converted to equivalent luminance, and irradiance to illuminance, by two mathematical conversions. First, one must know the entire spectral distribution of the radiance (or irradiance) so that it can be adjusted according to the C.I.E. photopic luminosity curve. Then the result must be related by a conversion factor curiously named the least mechanical equivalent of light, the value of which is about 679 lumens/W.
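For readers who wish to perform the conversion numerically, a sketch follows. It assumes a spectral irradiance sampled at a few wavelengths and a photopic luminosity function V(λ) sampled at the same points; the handful of V(λ) values below are rounded approximations rather than the official C.I.E. tabulation, and the conversion factor used is the modern standardized 683 lumens/W (close to the value of about 679 cited above).

```python
import numpy as np

# Approximate C.I.E. photopic luminosity values V(lambda); these are
# rounded illustrative figures, not the official tabulation.
wavelengths_nm = np.array([450, 500, 550, 600, 650])
V = np.array([0.038, 0.323, 0.995, 0.631, 0.107])

K_m = 683.0   # lm/W; modern standard value (the text cites about 679)

def illuminance(spectral_irradiance_w_m2_nm):
    """Convert spectral irradiance (W m^-2 nm^-1, sampled at the
    wavelengths above) to illuminance in lux by weighting with the
    photopic luminosity function and integrating over wavelength."""
    weighted = spectral_irradiance_w_m2_nm * V
    return K_m * np.trapz(weighted, wavelengths_nm)

# Example: a hypothetical flat (equal-energy) spectrum of 1 mW m^-2 nm^-1.
flat = np.full_like(V, 1e-3)
print(f"illuminance = {illuminance(flat):.1f} lux")
```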
In some cases, the photopic spectral sensitivity of animals is sufficiently close to the C.I.E. photopic luminosity curve that photometric units of luminance and illuminance are meaningful measures of stimulus brightnesses (fig 5-7). In other cases, such use of photometric quantities would be misleading, and photometric systems for the animal concerned should be devised.
Fig 5-7 (opposite). Photopic spectral sensitivity curves of some terrestrial vertebrates compared with the C.I.E. photopic luminosity curve, upon which photometric units of luminance and illuminance are based. The curves of the pigeon (after Blough, 1957) and bullfrog (after Sustare, 1976) are reasonably close to the C.I.E. standard, but the curve of the stump-tailed macaque (after Schierer and Blough, 1966) differs markedly.
Finally, confusion can arise over the use of names applied to light. Heretofore, "intensity" has been used to mean either radiance or irradiance, and it is also used this way in the literature for either luminance or illuminance. The photometric standard (candela) is a unit of the quantity "luminous intensity" often abbreviated simply to "intensity." Therefore, context should specify which meaning of intensity is intended, and for most uses one must further distinguish radiometric from photometric quantities, as well as light emitted from a surface from that falling upon a surface. Lastly, "brightness" has been used to mean the subjective (photometric) intensity of light, but "surface brightness" refers specifically to luminance.
Color-blind animals and those communicating in very low light levels receive optical information solely in spatiotemporal arrays of brightnesses, but for most species with well-developed optical communication, color plays an important informational role. This section reviews some aspects of color vision relevant to reception of optical signals.
If an eye has only one kind of receptor activated by light under given conditions (e.g., fig 5-1, p. 119), it cannot distinguish two frequencies of light. For example, a dim light at 550 THz may have the same brightness as an intense light at 450 THz. If two receptor-types having different spectral sensitivity curves (e.g., fig 5-6) are simultaneously stimulated, however, certain judgments of frequencies independent of brightness might be possible. Scotopic and photopic systems are not ordinarily used together for such purposes; they are systems designed instead to operate at different ranges of light-level, and hence extend the sensitivity range of the eye.
Suppose for a moment that the two curves of fig 5-6 could be used simultaneously, and that their absolute sensitivities at their peaks were the same; they might be two types of cones having different visual pigments. In this hypothetical case, a stimulus at 450 THz would stimulate the two populations of receptors quite differently, but one at 550 THz would cause about equal activation of the two systems (p. 127). Therefore, by comparing activation of the two populations of receptors, the eye could make some primitive spectral discriminations. Such a two-receptor system is called dichromatic color-vision (not to be confused with the dichromatism of colored substances mentioned in chs 3 and 4).
The difficulty with a dichromatic system is that certain frequencies cannot be distinguished. For example, in fig 5-8a a light of reference frequency r that stimulates both receptors in some ratio could be mimicked by a combination of two other lights, each of which stimulated only one (or primarily one) receptor by virtue of being outside the spectral range of sensitivity of the other (fig 5-8b). By independently adjusting the intensities of these two superimposed frequencies a perfect metameric match to frequency r would theoretically be possible.
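The equivocation can be demonstrated with a small calculation. The sketch below assumes two Gaussian receptor sensitivity curves (invented peaks and bandwidth) and solves a 2 × 2 linear system for the intensities of two fixed lights, one near each spectral extreme, that reproduce the pair of receptor excitations evoked by a reference frequency r; because only two excitation values must be matched, two adjustable lights always suffice.

```python
import numpy as np

def sensitivity(freq_thz, peak_thz, width_thz=60.0):
    """Illustrative Gaussian spectral sensitivity of one receptor type."""
    return np.exp(-0.5 * ((freq_thz - peak_thz) / width_thz) ** 2)

peaks = (500.0, 620.0)                      # two hypothetical receptor types

def excitations(freq_thz, intensity=1.0):
    """Pair of receptor excitations produced by a monochromatic light."""
    return np.array([intensity * sensitivity(freq_thz, p) for p in peaks])

r = 560.0                                   # reference frequency (THz)
target = excitations(r)

# Two matching lights chosen toward the spectral extremes (stimuli i and ii).
f_i, f_ii = 460.0, 680.0
A = np.column_stack([excitations(f_i), excitations(f_ii)])
intensities = np.linalg.solve(A, target)    # intensities of i and ii

print("excitations from r:        ", target)
print("excitations from the match:", A @ intensities)
print("required intensities of i and ii:", intensities)
```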
If three or more populations of receptors having different spectral sensitivities were used, confusions would become more difficult to create. However, the neural coding required to compare the responses of the different populations and extract from the comparisons the frequency-composition of the stimulus becomes correspondingly complex. It is therefore of little surprise that most known animal color-vision systems utilize three or four different receptor populations as the best compromise. In the eye of the goldfish, for example, there are three different populations of cone photoreceptor cells (fig 5-9) that make up the trichromatic color-vision system, and similar systems are known in such species as the honeybee, rhesus macaque and human animal.
Fig 5-8. Equivocation in a dichromatic system. An animal having two receptor-types in its eye would confuse a reference stimulus of frequency r with a combination of two other stimuli (i and ii) whose intensities were properly adjusted to create a metameric match. Such equivocation is more difficult to achieve with three or more overlapping sensitivity curves.
Fig 5-9. Trichromatic system of three cone pigments determined by microspectrophotometry of the goldfish retina (after Marks, 1965).
spectral discrimination and hues
If the visual system is extracting spectral information by means of comparing the activity of different receptor-types, one expects the precision of information to be different in different parts of the spectrum. The precision may be measured by the minimum detectable difference in frequency between two stimuli, called in psychophysics the just-noticeable difference (jnd) or difference threshold. The jnd in discrimination problems is never an absolute value: it is always based on some criterion of reliable discrimination, such as detecting the difference nine times in 10 trials, or 99 times in 100. When the same criterion of discrimination is used for experiments throughout the spectrum, the relative jnd values may be plotted as a spectral discrimination curve.
From the trichromatic nature of many color vision systems (e.g., fig 5-9) one may guess at the general shape of the spectral discrimination curve to be expected. In the spectral region of high sensitivity of one receptor (i.e., its peak) and low sensitivity of the other two, one expects poor precision because small changes in frequency cause only small changes in the ratios of activity among receptor-types. In regions where the receptors are more similar in sensitivity and where sensitivity changes greatly with small changes in frequency (i.e., on the slopes of the sensitivity curves), one expects good precision in resolving frequencies. Therefore, fig 5-9 (p. 131) suggests that spectral discrimination curves will be roughly shaped like a "W," rising rapidly toward large jnd values at the ends of the spectrum where one receptor is primarily active, falling toward small jnd values more centrally in the spectrum, but having a small hump in the middle where one receptor-type has its peak. Empirically determined spectral discrimination curves have precisely this expected shape (fig 5-10).
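A crude caricature of this argument can be computed. Assuming three Gaussian cone curves with invented peaks and bandwidths, the sketch below normalizes the three excitations to their sum (a ratio code independent of overall intensity), measures how fast that code changes per THz, and takes the relative jnd as the reciprocal of that rate; with these particular assumptions the jnd rises steeply toward the spectral extremes and shows a central hump, roughly the "W" of fig 5-10.

```python
import numpy as np

def sensitivity(f, peak, width=40.0):
    """Illustrative Gaussian cone sensitivity (frequency f in THz)."""
    return np.exp(-0.5 * ((f - peak) / width) ** 2)

peaks = np.array([460.0, 540.0, 620.0])     # assumed cone peaks, THz

def chromatic_signal(f):
    """Receptor excitations normalized to their sum: a crude ratio code
    that is independent of overall stimulus intensity."""
    e = sensitivity(f, peaks)
    return e / e.sum()

freqs = np.arange(410.0, 671.0, 10.0)
df = 1.0
for f in freqs:
    # How fast the ratio code changes per THz; the relative jnd is taken
    # as inversely proportional to that rate of change.
    rate = np.linalg.norm(chromatic_signal(f + df) - chromatic_signal(f)) / df
    jnd = 1.0 / rate
    print(f"{f:5.0f} THz   relative jnd ~ {jnd:7.1f}")
```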
From the shape of the spectral discrimination curve one may derive other expectations concerning the subjective impression of different frequencies. Where discrimination is poor (peaks of the curve in fig 5-10), the animal confuses adjacent stimuli readily, so that these must appear subjectively similar. One expects, therefore, to experience three quite different sensations corresponding to the three spectral regions of large jnd values: the primary color sensations. We humans cannot ask animals what they experience, but we each know that for us high frequencies (short wavelengths) evoke violet and blue sensations, middle frequencies green and yellow sensations, and low frequencies (long wavelengths) orange and red sensations. There is as yet no precise mathematical model that relates such hue names of human sensation to the human spectral discrimination curve, but the generally expected relationship exists. Because we can make much finer spectral discriminations than indicated by the spectral blocks represented by hue names, the number of hues a human culture recognizes linguistically is somewhat arbitrary, although their underlying sensory bases are not.
Fig 5-10. Spectral discrimination curves for two species (after Hailman, 1967b). Both curves show the general "W" shape expected from a trichromatic system of color vision.
Mixtures of frequencies evoke sensations to which we apply the same names as used for monochromatic stimuli, even though we readily distinguish them. In fact, monochromatic radiation is very rare in nature; we experience primarily spectrally complex stimuli composed of many different frequencies. When all the frequencies are of approximately equal physical intensity, the sensation is achromatic: shades of gray from white to black that differ only in intensity, as in the sensations of scotopic vision. When a broad band of frequencies makes up the stimulus, we identify it linguistically by a hue name such as blue that corresponds to the monochromatic hue at the frequency of the peak intensity of the complex stimulus.
However, such complex stimuli appear to be mixtures of a monochromatic hue and white light. Pure, almost monochromatic stimuli are said to be saturated, whereas those that seem to be mixed with achromatic light are desaturated. In sum, we distinguish three elements of sensation about a visual stimulus: its hue (correlated primarily with spectral position), saturation (correlated primarily with spectral purity) and brightness (correlated primarily with physical intensity).
Finally, one can note that when a spectrally complex stimulus is double-peaked in its spectral energy distribution it may be difficult to predict what hue sensation it evokes: the primary sensation may be that associated with one or the other of its peaks or some mixture of sensation, or even some achromatic sensation (particularly if the brightness is low). One special kind of double-peaked stimulus evokes an entirely new hue sensation not matched by any monochromatic stimulus: mixtures of the spectral extremes (red and violet or blue) evoke the sensation purple, which is a uniquely non-spectral chromatic hue. It is easy to see why purple is a unique sensation: purple lights stimulate primarily the two extreme receptors (fig 5-9) and hence have a physiological effect different from that of any single monochromatic light.
Figure 5-8 (p. 131) presented a case of chromatic equivocation in which two stimuli that have different physical bases (frequency compositions) are confused by the eye. Indeed, the whole point of utilizing three receptor-populations with different spectral sensitivities is to reduce such chromatic equivocation. Yet, in well-defined situations equivocation still occurs as metameric matches of stimuli.
Suppose one stimulus (monochromatic or spectrally complex) is shown on a screen adjacent to another screen illuminated by three superimposed lights that form the second stimulus. With only a few well-defined exceptions that are not of present concern, the three superimposed lights may be monochromatic or complex lights and may be chosen at will from all possible lights. It turns out that if the intensities of the three lights are independently adjusted, one can always create a visual match between the two screens.
This counter-intuitive result is subject to only one minor qualification: in order to create some matches, one of the three lights must be superimposed on the reference stimulus of the first screen rather than combined with the other two experimental stimuli. The reason we do not experience such chromatic equivocation all the time in our daily lives is that the three matching lights must have just the right combination of intensities, which is sufficiently improbable to make metameric matching more important as an experimental tool for studying vision than as a source of serious visual confusion. We do, however, often see two surfaces, such as dyed clothing and painted wood, as having the same color even though their spectral compositions may be very different.
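A numerical sketch of such a matching experiment follows, again with invented Gaussian cone curves rather than measured ones. Given a spectrally complex reference stimulus, the intensities of three narrow-band primaries needed for a metameric match are found by solving a 3 × 3 linear system that equates the three cone excitations; a negative solution corresponds to the qualification just mentioned, meaning that primary would have to be added to the reference side instead.

```python
import numpy as np

wl_axis = np.arange(400.0, 701.0, 5.0)            # wavelength grid, nm

def cone(wl, peak, width=40.0):
    """Illustrative Gaussian cone sensitivity (not measured data)."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

S = np.vstack([cone(wl_axis, p) for p in (450.0, 530.0, 610.0)])  # 3 x N

def cone_excitations(spectrum):
    """Sum a spectral power distribution against the three cone curves."""
    return S @ spectrum

# A spectrally complex reference: a broad band centred near 575 nm.
reference = np.exp(-0.5 * ((wl_axis - 575.0) / 60.0) ** 2)

# Three narrow-band primaries for the matching screen.
primaries = np.vstack([np.where(np.abs(wl_axis - p) < 5, 1.0, 0.0)
                       for p in (460.0, 540.0, 620.0)])           # 3 x N

A = np.vstack([cone_excitations(p) for p in primaries]).T         # 3 x 3
target = cone_excitations(reference)
intensities = np.linalg.solve(A, target)

print("primary intensities for a metameric match:", intensities)
print("(a negative value means that primary must be added to the reference side)")
```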
Goldsmith (1961) devised a tentative colorimetric system for the honey bee based on matching experiments performed by Daumer (1956). Because the honey bee is sensitive to ultraviolet radiation, Goldsmith chose as theoretical matching stimuli one monochromatic stimulus in the UV (at 360 nm); the other two (440 and 588 nm) lie within our visible spectrum as well as that of the bee. Any stimulus may be plotted according to the percentage of its component energies at these three wavelengths, and since two percentages determine the third by subtraction the plot may be made on ordinary Cartesian coordinates (fig 5-11).
This chromaticity diagram of the bee is read according to the following example: a monochromatic stimulus of 490 nm may be matched by a combination of about 65% 588-nm light (x-axis) and 35% 440-nm light (y-axis), with little or no 360-nm light required. The broken curve in fig 5-11 is the spectral locus of monochromatic lights, within which all real stimuli plot as points. Notice that the white-point (labeled W in fig 5-11) is matched by about 55% 588-nm light and about 30% 440-nm light, and therefore requires about 15% 360-nm light for a match. In other words, achromatic stimuli of the bee require some ultraviolet components of energy. Flowers that appear white to us because they reflect about equal amounts of energy through our visible spectrum may or may not appear achromatic to bees, depending upon whether the stimuli do or do not reflect a sufficient UV component.
Fig 5-11. A chromaticity diagram of the honey bee (after Goldsmith, 1961). A monochromatic light of wavelength 474 nm is confused by the bee with a light having about 30% 588-nm and 70% 440-nm components. Some stimuli cannot be matched with combinations of 588- and 440-nm light, and require in addition some UV component of 360 nm, chosen as the third reference wavelength for this particular diagram. There is an indefinitely large number of such possible diagrams, using various combinations of three reference stimuli.
colorimetry and stimulus-space
The C.I.E. chromaticity diagram for standardized human color-vision is more complexly derived than Goldsmith's diagram for the honey bee, but has the same major features. The standard matching stimuli are mathematically defined complex lights that are purely imaginary. Data from any matching experiment, however, may be transformed mathematically for plotting on the C.I.E. diagram. Suppose a complex light plots at point P in fig 5-12. A line from the white-point W through P intersects the spectral locus at point λd, which is the dominant wavelength of the stimulus. Because the human eye sees various similar stimuli as achromatic, the dominant wavelength has meaning only in relation to a given white, the white in fig 5-12 being that of equal energy throughout the visible spectrum. The ratio of the lengths WP to Wλd is the excitation purity, a measure of the saturation of the complex light. Excitation purity varies from zero (white light) to one (monochromatic light).
Fig 5-12. The standard C.I.E. chromaticity diagram, with the axes omitted. The white-point (W) is equal energy. Every real stimulus plots as a point (e.g., P) and a line from W through P intersects the spectral locus at the dominant wavelength (λd). The ends of the spectral locus are connected by a line called the purple locus. Purple stimuli (P') are designated as explained in the text.
When a stimulus plots in the lower portion of the C.I.E. chromaticity diagram, the line from white (W) through its point (P') may intersect the straight line at the bottom of the diagram connecting the extremes of the spectral locus (fig 5-12, previous page). This line is called the purple locus, and since purples have no monochromatic referents they are designated differently from other stimuli. A line is drawn from P' through W to intersect the spectral locus, and the point of intersection is the complementary dominant wavelength, λ'd. Whereas the dominant wavelength is that monochromatic stimulus added to white to produce a visual match with a given light, the complementary dominant wavelength is that monochromatic light to be subtracted from white to produce a visual match with a purple stimulus.
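The graphical construction can be carried out as a small computation. The sketch below assumes that the chromaticity coordinates of the white-point, of the stimulus, and of a short list of points along the spectral locus are already in hand (the coordinates given are invented placeholders, not the C.I.E. tabulation); it extends the ray from W through the stimulus point, finds where the ray crosses the locus, and reports the interpolated dominant wavelength together with the excitation purity WP/Wλd.

```python
import numpy as np

# Invented placeholder chromaticity data: (wavelength nm, x, y) points
# along a spectral locus, and an equal-energy white-point.  These are
# NOT the C.I.E. tabulated values.
locus = [(450, 0.16, 0.02), (490, 0.05, 0.30), (520, 0.07, 0.83),
         (560, 0.37, 0.62), (600, 0.63, 0.37), (650, 0.73, 0.27)]
W = np.array([0.33, 0.33])

def dominant_wavelength_and_purity(P):
    """Extend the ray from W through stimulus point P, find where it
    crosses the spectral locus, interpolate the wavelength there, and
    return it with the excitation purity WP / W-lambda_d."""
    d = P - W
    for (wl1, x1, y1), (wl2, x2, y2) in zip(locus, locus[1:]):
        a, b = np.array([x1, y1]), np.array([x2, y2])
        # Solve W + t*d == a + s*(b - a) for t > 0 and 0 <= s <= 1.
        M = np.column_stack([d, a - b])
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        t, s = np.linalg.solve(M, a - W)
        if t > 0 and 0.0 <= s <= 1.0:
            hit = a + s * (b - a)
            wl_dominant = wl1 + s * (wl2 - wl1)
            purity = np.linalg.norm(P - W) / np.linalg.norm(hit - W)
            return wl_dominant, purity
    return None, None     # the ray meets the purple locus instead

P = np.array([0.45, 0.41])                      # an arbitrary stimulus point
wl_d, purity = dominant_wavelength_and_purity(P)
print(f"dominant wavelength ~ {wl_d:.0f} nm, excitation purity ~ {purity:.2f}")
```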
By means of tolerably straightforward calculations, the chromaticity coordinates of any stimulus may be calculated for plotting on the C.I.E. diagram. One may then graphically determine the dominant wavelength (or complementary dominant wavelength) and excitation purity of the stimulus. If the luminance of the stimulus is also calculated, any stimulus may be expressed by three numbers. Table 5-I shows the relations among physical variables, psychophysical specification and subjective sensations of stimuli.
Table 5-I
Variables of Surface Coloration of a Stimulus
psychophysical quantity | subjective sensation | physical correlate
dominant wavelength     | hue                  | spectral peak
excitation purity       | saturation           | spectral variance
luminance               | brightness           | radiance
A number of different schemata have been devised for plotting any stimulus in a three-dimensional space. A painter named Munsell devised a subjective system based on variables he called hue, chroma and value, which are roughly related to dominant wavelength, excitation purity and luminance, respectively. The Ostwald system is similar, except based explicitly on the C.I.E. variables of colorimetry. It may be pictured spatially as two cones base-to-base, with the upper apex being pure white and the lower one being black, the axis between them representing the achromatic locus. The distance laterally from the central axis is saturation and the surface of the figure is the locus of monochromatic radiation. A section normal to the axis roughly resembles the C.I.E. chromaticity diagram. The ornithologist Ridgway devised a similar system (with variables he called color, tint and shade) that was used to describe the coloration of bird species in hundreds if not thousands of technical papers. E.H. Burtt, Jr. and I devised an Ostwald-like system derived from C.I.E. variables in which dominant wavelength, excitation purity and relative luminance were plotted as rectilinear variables in three dimensions, thus providing a cube in which all stimuli plot as single points (Burtt, 1977). Many similar systems have been utilized.
Optical signals encode information in spatiotemporal arrays whose component parts may be describable in terms of surface hue, saturation and brightness. However, the way in which the components combine to make larger assemblages perceived as wholes must also be scrutinized: the spatiotemporal arrays are perceived as patterns, flicker, movement and so on. These higher levels of sensory processing are, in general, not precisely understood physiologically, but many have been characterized psychophysically to an extent that they prove useful in understanding the optical design of social signals.
Various simple temporal phenomena in vision probably imply limitations of the visual system in receiving and processing stimuli. If one stares for a moment at a dark shape on a light background and then looks at a homogeneous dark background, she sees the same shape as a light figure on the dark background: the negative after-image. Reversed polarity of the contrast yields reversed polarity of the negative after-image, which is due to fatigue processes in the visual system. Sometimes, under certain conditions, the same polarity of contrast occurs in the after-image, in which case it is a positive after-image, the causes of which are not well understood. Furthermore, if the shape or its background is chromatic, the negative after-image may also be chromatic, but of the complementary color.
When some stimulus is rapidly alternated between dim and bright states, one perceives the alternation as flicker when the light is bright, but as a steady light (fusion) when it is dim. The flash-rate at which the transition occurs from flicker to fusion is the critical fusion frequency (cff), and in a variety of animals the cff as a function of log intensity describes a sigmoid curve (fig 5-13). Presumably, the fusion results from slow decay processes of the visual system causing a persistence of sensation in the absence of stimulation, not unlike positive after-images in general effect.
Suppose a light is turned on in the left part of the visual field, and then turned off just before another light is turned on in the right part of the field. When the angular distance between the lights is not too great, their brightnesses are similar and the extinguishing of the first light is closely followed by lighting of the second, one perceives a single light stimulus that seems to move from left to right. This phi phenomenon shows that the perception of movement may be related intimately to the perception of flicker with a spatial component. Indeed, the experiments upon which fig 5-13 is based did not use flickering light at all. Rather, the animals were placed inside rotating drums with alternating black and white bars. When the speed of the drum's rotation is low or the light is bright, the animal moves either its eyes (visual nystagmus) or entire head (optokinetic or optomotor response) in an attempt to track the moving stripes. However, the animal can turn only so far without moving bodily, so upon reaching the end of its arc it snaps back to the forward orientation and immediately tracks a new moving stripe. When the speed of rotation is great or the light is dim, the animal shows no such responses because the stripes are visually fused and the animal cannot perceive any motion of the drum.
Fig 5-13. Flicker fusion curves of two reptiles (after Bartley, 1951). At flash rates above the plotted critical fusion frequency (cff) for a given intensity the animal perceives a steady light (fusion) instead of a flickering one.
Black and white stripes on a rotating drum (or sectors on a spinning wheel) fuse to an achromatic color of some intermediate intensity. If the stripes or sectors are of different hues, these colors fuse to either achromatic or chromatic colors, depending on conditions. For example, the apparent color of a wheel composed of a red and a yellow semicircle fuses to orange when rapidly spinning. When three colors widely separated in the spectrum (such as red, blue and green) are used in the right proportions, the fused color is achromatic, usually perceived as some level of gray. These phenomena may be termed movement-fusions.
In some ways, simple spatial phenomena of vision resemble simple temporal phenomena. The spatial resolution of the eye may be measured by its acuity, the ability to discriminate closely spaced stimuli such as lines ruled on a glass plate. The acuity of an animal's eye depends upon many factors, such as the dioptic (focusing) system, the density of photoreceptors and so on. Like temporal resolution, spatial resolution depends upon the brightnesses of the stimuli and their spectral compositions, but experimental methods of determining visual acuity vary widely so that direct comparisons are difficult. Figure 5-14 shows an acuity curve for the honey bee and two curves for man determined by different methods. All three curves show a sigmoid relation with log intensity.
Fig 5-14. Visual acuity curves of the honey bee and man as a function of log intensity. Acuity is expressed as 1/(resolvable angle in minutes of arc) so that the maximum resolution of the bee is about 60' or 1° (after Pirenne, 1967) and that of man is about 0.5' (after Bartley, 1951). The two curves for man show results of two testing methods.
Visual acuity is not a single, invariant function of intensity, however. For example, the acuity of amphibious animals in air and water may be quite different (Schusterman and Barrett, 1973). Also, the pigeon (and probably many other birds) is near-sighted for stimuli located directly forward and viewed binocularly, whereas it is far-sighted for stimuli located laterally and viewed monocularly (Catania, 1964).
A newer approach to the eye's spatial resolution is the measurement of contrast-detection as a function of spatial frequency. A grating of stripes somewhat like those used in some acuity studies is shown to a subject whose response may be recorded behaviorally or electrophysiologically. The stripes, however, are not simple light and dark alternations; instead, the intensity varies continuously according to a sinusoidal function, luminance being highest in the middle of the light portion and lowest in the middle of the dark portion of the cycle. Even simple sinusoidal functions, however, look like fuzzy dark and light stripes instead of continuously changing intensity because the visual system is tuned to certain spatial frequencies. Spatial frequency perception is studied by varying the frequency of simple displays, combining frequencies (especially harmonics), varying the amplitude of luminance, and so on.
A surprising result has emerged from the study of contrast sensitivity at different spatial frequencies. Obviously, the ability to see the pattern depends upon the contrast-threshold (and hence its inverse, contrast-sensitivity). It turns out that contrast-sensitivity depends upon the spatial frequency: sensitivity is maximum at some particular frequency and declines with increasing or decreasing frequency (fig 5-15, next page). To appreciate this phenomenon, consider a striped pattern with relatively low contrast. At an optimum distance, the striped pattern is detectable, but at a greater distance the spatial frequency is higher (more stripes per unit of visual angle), so sensitivity declines and the striped pattern becomes invisible. Similarly, at close distances (lower spatial frequency, with fewer stripes per unit of visual angle), the stripes also disappear because the contrast falls below threshold. The phenomenon is so counter-intuitive that many persons find it hard to believe even when shown the proper stimuli. This phenomenon, plus species-differences in the optimum spatial frequency (fig 5-15), has important potential implications for animal signals (chs 7 and 8) as well as being an experimental tool for studying the resolving phenomena of the eye.
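The distance argument can be made concrete with a sketch. It assumes a band-pass contrast-sensitivity function of log-parabolic form whose peak frequency, bandwidth and maximum sensitivity are invented (not read from fig 5-15), and asks at which viewing distances a grating of fixed physical period and fixed low contrast remains visible; moving away raises the spatial frequency in cycles per degree, moving closer lowers it, and in either direction sensitivity eventually drops below what the grating's contrast requires.

```python
import numpy as np

def contrast_sensitivity(cpd, peak_cpd=4.0, log_bandwidth=0.5, max_sens=200.0):
    """Band-pass contrast-sensitivity function (log-parabolic sketch);
    peak frequency, bandwidth and peak sensitivity are arbitrary."""
    return max_sens * np.exp(-0.5 * ((np.log10(cpd) - np.log10(peak_cpd))
                                     / log_bandwidth) ** 2)

grating_period_m = 0.05        # one light+dark cycle spans 5 cm on the wall
grating_contrast = 0.05        # low-contrast stripes (5 %)

for distance_m in (0.2, 1.0, 5.0, 50.0, 250.0):
    # Angle subtended by one cycle, hence spatial frequency in cycles/degree.
    cycle_deg = np.degrees(2 * np.arctan(grating_period_m / (2 * distance_m)))
    cpd = 1.0 / cycle_deg
    visible = grating_contrast > 1.0 / contrast_sensitivity(cpd)
    print(f"{distance_m:6.1f} m  {cpd:7.2f} c/deg  "
          f"{'visible' if visible else 'invisible'}")
```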
Fig 5-15. Contrast-sensitivity curves for two species (after Campbell and Maffei, 1974).
As one might expect, a mosaic of small dots that are below the resolving power of the eye fuses perceptually. Black and white dots, such as those used in newspaper photographs, fuse into shades of gray. Colored dots fuse into intermediate colors including achromatic colors, which is the perceptual principle used in grain color films and in color television. The mixing of brightnesses or colors in such spatial arrays is appropriately called spatial-mosaic fusion.
There is no unified explanation of how visual systems recognize even simple two-dimensional patterns. Traditional associationist psychology held that the recognition of even simple diagrams as distinctive wholes was slowly acquired through learning, whereas ethologists and Gestalt psychologists tended to consider the pattern-perception of at least key social signals to be a property of neural organization largely uninfluenced by learning (see Hailman, 1970 for a discussion). In recent years, cognitive psychologists have begun to favor the latter view, finding that the human eye may see contours of shapes that are not actually present in the visual stimulus (e.g., Lawson and Gulick, 1967; Gregory, 1972; Coren, 1972). For example, in fig 5-16 one sees a white square that is not, in fact, present. Such illusions do not in all cases depend upon regular geometric contours, but perception of shape does appear to involve "rules" that are being systematically articulated. Perception of two-dimensional pattern is not a simple process of learning to sort matrices of dark and light areas into categories that are subsequently recognized; rather, the visual system is predisposed toward processing sensory data according to rather complex organizational principles.
Fig 5-16. A visual illusion in which the eye sees a white square.
Perhaps the most significant advance in the neurophysiological bases of pattern-perception is developing from research on receptive fields of individual neurons in the visual system of vertebrates. Each visual neuron responds to key stimuli in only part of the total visual field of the eye, the neuron's field being known as its receptive field. Kuffler (1953) found that receptive fields of the cat's visual neurons were not homogeneously sensitive to light: some areas responded best to an increase in intensity and others to a decrease; some best to light in the center, others to light in the periphery. This breakthrough led to many investigations of receptive field geometries and neural mechanisms in a variety of other animals, including rabbits, turtles, salamanders, pigeons, frogs, etc.
The basic organization of the vertebrate retina, although differing in details among species, may be summarized from studies of the mudpuppy (e.g., Dowling and Werblin, 1969; Werblin and Dowling, 1969; Werblin, 1971, 1972). The most direct path of neural data goes from the photoreceptors (rods and cones) via bipolar cells to ganglion cells, whose axons make up the optic nerve and send data from the eye to the brain. Therefore, a light stimulus to a photoreceptor activates the ganglion cell(s) directly in line with it in the retina. Light peripheral to this central axis activates other photoreceptors, which are connected not only to bipolar cells, but also to horizontal cells that carry data laterally in the retina and may inhibit the photoreceptors or bipolar cells of the central axis. Therefore, light stimuli directly in line with a ganglion cell may excite it whereas light peripheral to the axis may suppress its activity. A deeper layer of amacrine cells carries even more peripheral sensory data to the central ganglion cell. This basic organization of the retina, which underlies the phenomenon of lateral inhibition mentioned earlier in this chapter, controls the retina's adaptational state (also see above) and its relative general sensitivity to light.
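This circuitry, excitation along the direct path with inhibition fed in laterally, is often caricatured as a difference-of-Gaussians receptive field. The one-dimensional sketch below (with arbitrary center and surround widths) convolves such a field with a step edge in intensity; the dip on the dark side and the peak on the light side of the border are the signature of lateral inhibition and of the edge enhancement discussed earlier.

```python
import numpy as np

def difference_of_gaussians(x, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround receptive field: narrow excitatory center minus
    broad inhibitory surround (widths are arbitrary illustrative values)."""
    center = np.exp(-0.5 * (x / sigma_center) ** 2) / sigma_center
    surround = np.exp(-0.5 * (x / sigma_surround) ** 2) / sigma_surround
    return center - surround

x = np.arange(-15, 16)
kernel = difference_of_gaussians(x)

# A one-dimensional scene: dark field (intensity 1) meeting light field (2).
scene = np.where(np.arange(80) < 40, 1.0, 2.0)

response = np.convolve(scene, kernel, mode="same")

# Far from the border the (nearly zero-sum) field responds only weakly and
# about equally to both fields; at the border the dark side is suppressed
# and the light side enhanced (a Mach-band-like overshoot).
for i in (20, 38, 39, 40, 41, 60):
    print(f"position {i:2d}  intensity {scene[i]:.0f}  response {response[i]: .3f}")
```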
If one records electrically from the ganglion cell, one expects to find a central area of the receptive field where light activates the cell and a peripheral area where light suppresses activity, and just this pattern is found in the cat (fig 5-17a). Cessation of peripheral light is followed by an after-discharge of activity in the ganglion cell. Not only these "on-center, off-periphery" cell geometries, but also the reverse polarities are found in the cat. Not all vertebrate ganglion cells have such simple bullseye-like receptive fields, however. In the frog, there are five classes of complexly organized types, such as the one that responds only to dark, convex shapes that move through the ganglion cell's receptive field (Maturana et al., 1960). In the pigeon's retina some cells are specialized to detect edges in particular orientations or movement in particular directions (Maturana and Frenk, 1963).
Fig 5-17. Two schematized receptive fields. A given neuron is stimulated by light in a restricted part of the animal's total visual field: the cell's receptive field. Certain cells respond maximally to a dark center with light surround (left), others with the reverse polarity. Higher in the pathway from receptors to brain other cells may respond maximally to dark bars in the receptive field (right), and different cells may have different preferences for orientation of the bar.
In a classic series of investigations, Hubel and Wiesel (e.g., 1959, 1961, 1962, 1963, 1965) explored the receptive fields of cells more central in the visual pathways. After processing in the lateral geniculate, sensory data are sent to the visual cortex, where "simple" cortical cells respond to particular orientations of slits of light (or dark) in their receptive fields (fig 5-17b). Cells within one column of the cortex respond best to slits in a particular orientation, those in other columns to different orientations. Receptive fields at this level in the cat are much larger than, and apparently an integration of, the bullseye-like fields aligned in the retina. "Complex" cortical cells respond not only to the orientation of a light slit, but selectively to its direction of movement within the receptive field. "Hypercomplex" cells respond to a slit regardless of where it is placed within the field and may also show preferences for particular lengths of slits. In the frog, central visual cells have such complex properties that they must be described in nearly ethological terms: a "newness" detector responds when an insect-like stimulus enters the receptive field and a "sameness" unit responds as long as the stimulus remains, even if it ceases to move--yet responds more vigorously when it moves again (Lettvin et al., 1961).
From such studies of the receptive fields of individual cells in vertebrate visual systems one may already reach several general conclusions. (1) Visual systems are highly organized to extract information about two-dimensional patterns of light, implying that some patterns will be easier to detect than others. (2) As one moves centrally within the visual system, cells respond to more specific kinds of stimuli, and at one level it is possible to have different cells responding preferentially to different kinds of key stimuli. (3) The organization differs in different species, so without specific studies it is difficult to suggest what patterns will be most effectively detected. In visual systems used primarily for specific visual tasks--such as the detection of predator and prey in frogs or the coordination of flight in pigeons--the organization of receptive fields appears to be more specialized, although too few species have been studied to yield final conclusions.
The commonest receptive-field geometry is the bullseye type, which seems almost an inevitable consequence of lateral inhibition working through the anatomical substrate of the retina. Bullseye-like receptive fields are known primarily from ganglion cells, but also from cells at higher centers, in a variety of animals including the spider monkey (Hubel and Wiesel, 1960), rhesus macaque (Wiesel and Hubel, 1966), domestic rabbit (H.B. Barlow et al., 1964), laboratory rat (Brown and Rojas, 1965), Mexican ground squirrel (Michael, 1968) and goldfish (Jacobson and Gaze, 1964). Perhaps because of this widespread organizational principle in receptive fields, bullseye-like stimuli are readily recognized by animals (see Wickler, 1968: 64-70). So far as I am aware, there has been only one attempt to connect receptive-field geometries with the characteristics of optical social signals (Hailman, 1971; see also figure 15 in Hailman, 1977a for a summary), but the topic may be a fruitful one for future research.
Whatever the exact organization of sensory inputs, the visual systems of invertebrate and vertebrate animals alike enhance the boundaries between light and dark in spatial patterns by the process of lateral inhibition, mentioned previously. An important consequence of lateral inhibition has been recognized for a long time in human vision (see Minnaert, 1954: 105-106). The boundary between light and dark is not only enhanced, but also shifted toward the darker side, a phenomenon confusingly called irradiation (to be distinguished from irradiance, ch 3). Most persons are familiar with irradiation, even though they may not have realized it. When dark telephone lines cross against a bright sky, the point of intersection disappears, and there seems to be a gap in the wires. Or, when the sun rises (or sets) over water, the horizon seems to dip in front of the sun. It is possible that this principle is utilized in some optical signals (ch 8).
Man and other animals often have good visual judgment concerning the distance of some perceived object. Psychologists have admirably elucidated the multiplicity of visual cues used in such depth perception, only some of which depend upon possessing two eyes. Almost a dozen sources of depth information have been described.
One class of cues concerns properties of the environmental scene that can be captured in still photographs. These are the cues that provide the illusion of depth in pictorial representations on a flat surface. Perspective or convergence is the cue provided by parallel lines meeting at the horizon, such as railroad tracks. A related cue is the known size of an object, whose image becomes smaller at greater distances. There is a case that falls between perspective and known-size cues: repetitive units that become smaller with increasing distance, such as telephone poles. Imaginary lines connecting their bases and apexes meet at the horizon, providing a perspective cue that may operate in conjunction with the learned size of telephone poles. Another type of cue, which may be considered a subset of convergence, is the texture gradient. Because of perspective, the units of texture--such as the cobblestones of a street--become smaller in the distance.
A third major photographable cue is superposition, in which nearer objects may partially obscure the view of more distant objects behind them. A fourth, and rather subtle, cue is elevation. Objects higher in a scene, particularly if they extend above the horizon, appear farther away than lower objects. This principle follows logically from perspective, but perceptually it appears to be distinct.
Two photographable cues depend upon environmental conditions: brightness and distinctness. Closer objects tend to look brighter than farther objects because of absorption and scattering by the medium between object and viewer (ch 3). This cue is particularly useful in turbid waters, as every SCUBA diver learns. Similarly, nearby objects appear more distinct because their images are less blurred by the medium than are those of farther objects.
To these six depth cues that can be captured photographically may be added several others that rely on mechanisms of perception. Accommodation, the focusing of an image onto the retina, provides a seventh cue. In order to accommodate, the dioptics of the eye must be altered (such as changing the shape of the lens) for different distances. In man, a distant object demands a thin lens and a nearby object a thick lens for proper accommodation. If the observer can internally sense the thickness of her lens (e.g., by spindles in the muscle fibers that alter shape), this feedback provides a depth cue.
Some cues derive from viewing the world from different points in space. In motion parallax, nearby objects such as fence posts along a road travel rapidly to the rear of one's visual field, whereas distant objects such as hills move backward more slowly. (Very distant objects such as the moon appear to be stationary.) A simple form of motion parallax is displacement parallax, in which a one-eyed animal can look at a scene from one place and then move its head sideways to look from another spot: closer objects change their spatial relationships with one another more drastically than do farther objects, which tend to stay in the same place within the visual field.
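The geometry of motion parallax is simple enough to state numerically. The sketch below assumes an observer translating sideways at 1 m/s and computes the angular velocity of objects viewed at right angles to the motion, which falls off as the reciprocal of their distance; the speeds and distances are arbitrary examples.

```python
import numpy as np

observer_speed = 1.0          # sideways speed of the observer, m/s (arbitrary)

def angular_velocity_deg_per_s(distance_m):
    """Angular velocity of an object viewed at right angles to the motion;
    for lateral translation this is simply speed / distance (in rad/s)."""
    return np.degrees(observer_speed / distance_m)

for d in (2.0, 10.0, 50.0, 1000.0):           # fence post ... distant hill
    print(f"object at {d:7.1f} m sweeps past at "
          f"{angular_velocity_deg_per_s(d):8.3f} deg/s")
```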
Animals with two eyes can compare both views at once, the differing images on the two retinas being called binocular disparity. Furthermore, in order to fixate nearby objects, the two eyes must be rotated inward, and this cue of ocular convergence is apparently perceived by feedback from sensory cells in the muscles that control eye-movements.
There is an illusion of depth due to color used by some painters (presumably consciously). This chapter avoids dioptic considerations of eyes because the structure of animal eyes varies so greatly, but the color-illusion of depth demands an excursion into structure. The principal refracting surface in the vertebrate eye is the outer surface, called the cornea (fig 5-18a, next page). The lens within the eye is either moved backward and forward like a camera lens (e.g., in birds) or its shape is changed (e.g., in man) to effect the fine adjustments in accommodation (focusing). Because the index of refraction varies with the frequency (eq 3.8), different colors are brought to focus at different distances behind the lens (fig 5-18b). This is the phenomenon of chromatic aberration mentioned in ch 3.
The human eye accommodates through changes in the shape of the lens, so that it must be made thinner to focus violet rays on the retina (fig 5-18c) and thicker for red rays (fig 5-18d). However, a thin lens also brings distant objects into focus (fig 5-18e) and a thick lens accommodates nearby objects (fig 5-18f). As noted above, the eye can sense these changes in accommodation through internal receptor cells in the accommodation mechanism, and use such sensory data to judge distance. Therefore, accommodating on violet surfaces provides internal data signaling distant objects, whereas accommodating on red surfaces provides data signaling nearby objects. I delight in viewing my reproduction of Paul Klee's "Around the Fish," a painting of red fish on a blue platter in which the fish appear to stand out from the platter. This illusion, although often subtle, may be used in the design of some optical signals (ch 8).
A remarkable aspect of vision is that certain perceptions occur without an immediate sensory basis, and must depend upon simultaneous contrasts of some complexity or upon past experience, or both. One perceives a basketball as spherical, even when there are no evident shadows to provide clues as to its three-dimensional shape. This shape constancy is due in part to having viewed basketballs from every direction, and in so doing having found that they always present a circular outline. Apparently, much of our perceptual identification of three-dimensional shape depends upon such experience in viewing objects from many angles, and upon forming a mental picture of the shape.
Fig 5-18. Depth illusion based on color due to chromatic aberration in the eye. The cornea is the principal refracting surface of the human eye (top left), but fine changes in accommodation (focus) are effected by changes in the shape of the lens. The lens, however, shows chromatic aberration (top right), so that violet rays require a lens-shape (middle left) like that used to focus on distant objects (bottom left), and red rays require a shape (middle right) used to focus on nearby objects (bottom right). For this reason, a red spot may seem closer to the observer than its blue or violet background when the colors are actually on a plane. (Angles of rays and shapes of lenses exaggerated greatly in diagrams.)
Constancy of brightness is very persistent. A white triangle upon a gray background still appears like a white triangle on a gray background when viewed in extremely dim light--even though the gray in bright light may be absolutely brighter than the white in dim light. Part of this constancy may be explained by adaptational shifts of the eye, but after these are taken into account the constancy still remains, and therefore must be due to the simultaneous contrast between the two surfaces. For this reason, the apparent surface brightness of an object is not directly proportional to its photometric luminance.
Similarly, objects appear to have a relatively constant color under differing conditions of illumination, although homogeneously colored surfaces viewed in isolation lose such color constancy. SCUBA divers identify color patterns of fishes at various depths where the actual spectral composition of the stimuli varies markedly. This phenomenon of spatial induction of color sensations has proved a continuing problem in the generation of a comprehensive theory of color vision.
Most of the research on perceptual constancies relates to human vision or that of a few domesticated laboratory animals. Very little is known about the roles of simultaneous contrast and past experience in animal vision, yet various kinds of perceptual constancies might be important to understanding the design of optical signals. For example, certain signal shapes might be designed by evolution to be recognizable from various angles, or color patterns might be designed to be recognizable under varying conditions of illumination. Such possibilities deserve experimental attention.
Unlike the channel (ch 3) and the sender (ch 4), the receiver imposes many important limitations upon the encoding of information in optical signals. Visual pigments, which absorb photons entering the eye, are more sensitive to some frequencies than others and have a limited dynamic range of photon-fluxes over which they can generate differential neural signals. The spectral and intensity ranges of the eye may be extended by having populations of different receptors that operate best in low or high light levels, or that have sensitivity curves located in different parts of the spectrum. At least three such receptor types are needed for reliable color vision, but even with three types color-equivocation may occur. Perception is built upon spatiotemporal arrays of photon-fluxes, but its characteristics are many and complicated. There are both temporal and spatial resolving limits that constrain the receiver's ability to accept information encoded temporally and spatially in light. Many perceptual phenomena, particularly those involving illusions and constancies, suggest that perception is a complex, active process that depends upon events in the entire visual field as well as patterns of expectation in the visual processing system itself, some of which are established through prior visual experience. This chapter could do no more than point out some of the aspects of animal sensation and perception that must be taken into account if one is to understand the design-characteristics of optical signals. Only the most evident and well-known aspects of vision are mentioned; many others may well play a role in understanding the optical design of signals.
Recommended Reading and Reference.
This chapter differs from others in that it reviews only selected topics from its potential subject matter--those I felt necessary for understanding later chapters. Therefore, the Appendix presents an unusually long list of relevant articles from Scientific American through which the reader can expand upon the subjects of this chapter without delving into highly technical literature. A few technical volumes, though, are worth mentioning. The Visual Pigments by Dartnall (1957), although outdated, is still a clear introduction to pigment chemistry. Rodieck's (1973) Vertebrate Retina provides excellent coverage of its subject. The vision volumes of the Handbook of Sensory Physiology (e.g., Dartnall, 1972; Fuortes, 1972) cover virtually all the physiological aspects of vision mentioned in this chapter, as well as much other material. The most authoritative reference on photometry and colorimetry is still LeGrand's (1957) classic Light, Color and Vision, but a book by one of LeGrand's translators covers much of the same ground and is available in a Dover reprint (Walsh, 1958). An excellent reference that ranges from molecular aspects of vision to the complexities of visual perception is Pirenne's (1967) Vision and the Eye. There are, of course, many other fine sources of information about vision and visual perception.