
Multisensory Integration in the Estimation of Relative Path Length




Multisensory Integration in the Estimation of Relative Path Length

Hong-Jin Sun, Jennifer L. Campos, and George S. W. Chan


Department of Psychology

McMaster University

Hamilton, Ontario, Canada, L8S 4K1

All correspondence to:


Dr. Hong-Jin Sun

Department of Psychology

McMaster University

1280 Main Street West

Hamilton, Ontario, CANADA, L8S 4K1

Tel: 905-525-9140 ext. 24367

Fax: 905-529-6225

sunhong@mcmaster.ca

Running head: Multisensory Integration in Self-motion

Number of Manuscript Pages: 30

Number of Figures: 9

Total Number of Pages: 39
Abstract
One of the fundamental requirements for successful navigation through an environment is the continuous monitoring of distance travelled. To do so, humans normally use one or a combination of visual, proprioceptive/efferent, vestibular, and temporal cues. In the real world, information from one sensory modality is normally congruent with information from other modalities; hence, studying the nature of sensory interactions is often difficult.

In order to decouple the natural covariation between different sensory cues, we used virtual reality technology to vary the relation between the information generated from visual sources and the information generated from proprioceptive/efferent sources. When we manipulated the stimuli such that the visual information was coupled in various ways to the proprioceptive/efferent information, human subjects predominantly used visual information to estimate the ratio of two traversed path lengths. Although proprioceptive/efferent information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive/efferent information was inconsistent with the visual information. These results convincingly demonstrated that active movement (locomotion) facilitates visual perception of path length travelled.



Keywords: locomotion, distance perception, optic flow, proprioception, virtual reality

One of the fundamental requirements for successful navigation in an environment, and for the establishment of a mental representation of one's location in space, is the continuous monitoring of distance travelled. Both visual and nonvisual sources of information can potentially be used in this task.

Visual information is often considered to be critical for spatial processing. For instance, the dynamic retinal information generated by the observer’s self-motion (optic flow) (Gibson, 1950; Lee, 1980; Sun, Carey, and Goodale, 1992; Warren and Hannon, 1990), as well as the motion of objects in the environment (Regan and Hamstra, 1993; Sun and Frost, 1998), can specify the spatial-temporal relation between the observer and environmental landmarks. However, only a few studies have examined the role of optic flow in the estimation of distance travelled (Bremmer and Lappe, 1999; Redlick, Jenkin, and Harris, 2001; Witmer and Kline, 1998). Results have demonstrated that visual information alone can be used to accurately discriminate and reproduce traversed distances (Bremmer and Lappe, 1999).

During navigation, idiothetic information is internally generated as a function of our body movements in space (Chance, Gaunet, Beall, and Loomis, 1998; Mittelstaedt and Mittelstaedt, 2001). This information can be derived from sensory information about movements from muscles and joints ("inflow" or proprioceptive input) and motor efferent signals ("outflow"). The combination of these two kinds of information will be hereafter referred to as proprioception. Another source of idiothetic information is vestibular information, which is generated following a change in linear or rotational movement velocity. Idiothetic information alone can be used to monitor locomotion as shown by studies demonstrating that humans are able to walk to a previously viewed target without vision (Bigel and Ellard, 2000; Elliott, 1986; Loomis, Da Silva, Fujita, and Fukusima, 1992; Rieser, Ashmead, Talor, and Youngquist, 1990; Steenhuis and Goodale, 1988; Thomson, 1983). In addition, duration of travel and cognitive factors may also affect the estimation of distance travelled (Sadalla and Magel, 1980; Foos, 1982; Witmer and Kline 1998).

In addition to studies examining the contributions of particular sensory cues to locomotion, the integration of visual and nonvisual cues has been explored in a variety of tasks, such as: the estimation of path length (Harris, Jenkin, and Zikovitz, 2000; Ohmi, 1996); the reproduction of angular displacement (Lambrey, Viaud-Delmon, and Berthoz, 2002); path integration (Kearns, Warren, Duchon, and Tarr, 2002; Klatzky, Loomis, Beall, Chance, and Golledge, 1998); wayfinding (Grant and Magee, 1998); and maintaining a constant walking speed on a treadmill (Prokop, Schubert, and Berger, 1997).

When multiple sources of information are available in a real-world scenario, they often provide redundant and concordant information, making it very difficult to study the individual contributions of each. Creating cue conflict in multisensory and intrasensory tasks has been a popular approach to identifying the individual contributions of different sensory cues. An influential method involves wearing “displacement prisms”, which create a spatial discrepancy between the shifted spatial layout obtained through vision and the correct spatial layout provided by other sensory modalities such as proprioception (e.g., Pick, Warren, and Hay, 1969; Warren, 1979). This prism paradigm has typically been used to study the perception of spatial direction as measured through target-pointing responses. A similar method can be adapted to study how travelled distance is perceived during locomotion, which involves dynamic visual and nonvisual information as well as the integration of information over time.

In order to study visual and nonvisual interactions during locomotion, Rieser, Pick, Ashmead, and Garing (1995) adopted an innovative approach to uncouple the natural covariation of vision and proprioception. Subjects were asked to walk on a treadmill at one speed (biomechanical feedback) while being pulled by a tractor at either a faster or a slower speed (environmental flow feedback). Subjects' path length estimations were altered after having experienced a new relation between locomotor activity and the resulting optic flow. However, these results may not generalize, because the sensory stimulation may not have been well controlled (e.g., the presence of extraneous visual and nonvisual stimulation) and subjects may have been aware of the experimental manipulation.

In the current study we adopted an efficient, reliable, and systematic approach by simulating locomotor experiences in a virtual environment. Subjects’ task involved moving forward down a straight, virtual hallway. The perceptual experience of moving in the virtual environment was created when subjects pedalled a stationary bicycle while simultaneously being presented with the corresponding optic flow information. The optic flow information was yoked directly to subjects’ pedalling movements. The relation between visual and proprioceptive information was varied through software by manipulating optic flow gain (OFG – the relation between subjects’ pedalling speed and the resulting speed of the visual flow field). In this task, proprioceptive information was available while vestibular information was mainly excluded.

Our paradigm was different from commonly utilized cue-conflict paradigms in terms of how the “conflict” was created. In a prism paradigm, when a horizontal wedge prism is used, there is an absolute discrepancy at any given moment in egocentric directional information received from vision (displaced direction) and from proprioception (correct direction). However, for locomotion tasks, without a scaling factor, there is no absolute coupling between optic flow and proprioceptive information at any single moment. Optic flow, however, can provide relative velocity or distance information (Bremmer and Lappe, 1999).

We thus used a psychophysical ratio estimation task comparing two travelled path lengths to examine how subjects were able to estimate path length travelled. Our virtual reality (VR) paradigm allowed us to change the relation between visual information and proprioceptive information across different travelled path lengths. This approach permitted the first assessment of the individual contributions of optic flow information and of proprioceptive information in relative path length estimation.

Materials and Methods

Subjects


Thirty-three subjects (17 females and 16 males), ranging in age from 19 to 24 years, participated in this study; all had normal or corrected-to-normal visual acuity. All subjects gave their informed consent prior to their inclusion in the study. This research was approved by the McMaster University Research Ethics Board and was therefore performed in accordance with the ethical standards specified by the 1964 Declaration of Helsinki.

Stimuli


The VR interface was a modified stationary mountain bicycle (see Figure 1A). The rear tire of the bicycle was equipped with an infrared sensor that translated pedalling speed information to an SGI Onyx2 with an Infinite Reality2 Engine. The visual environment was developed using Open Inventor and was presented to subjects via a head-mounted display (V8, Virtual Research; liquid crystal display with a resolution of 640x480 per eye and a field of view of 60° diagonal), which presented the same image to both eyes. The environment consisted of a straight, empty, seemingly infinite hallway, with walls, a floor, and no ceiling. Both the floor and the two walls were covered with a completely random dot texture (see Figure 1B).

Insert Figure 1 about here


Procedure


Prior to the experiment, each subject was given explicit instructions on the task and was provided with a series of practice trials (without feedback) to get used to both the equipment and the task.

Subjects pedalled the stationary bicycle at a comfortable speed and attempted to maintain that pedalling speed throughout each trial. As a direct effect of their own pedalling movements, subjects simultaneously experienced the visual flow pattern via the head-mounted display in real time. The information about subjects' pedalling movement was used both to update the visual display and to subsequently analyze subjects' movement dynamics (e.g., pedalling speed).

Each trial required subjects to travel through two path lengths: an initial “reference” path length followed immediately by a “judgment” path length. The end of each path was indicated by a loud clicking sound, at which point subjects were required to stop pedalling. At the end of the trial, subjects were asked to give a verbal estimation of the judgment path length as a percentage of the reference path length. Subjects were instructed to provide estimates that were as precise as possible, and subjects' actual estimations tended to fall in 5% increments. No feedback of any kind was given. The reference path length was always kept fixed at a length equivalent to a real-world length of 32 metres, while the judgment path length was varied randomly among a number of lengths.

As stated previously, OFG is defined as the relation between the information received visually and the information received through nonvisual (proprioceptive) sources. Given that subjects' pedalling speed was on average 1 pedal rotation/sec, when an OFG of 1 was specified by the VR software, subjects visually perceived a speed of 4 m/sec. Thus, if subjects produced 8 pedal rotations (over 8 seconds), they would visually experience a reference path length of 32m. For the same pedalling speed, when presented with an OFG of 2, subjects would visually perceive a speed of 8 m/sec and would therefore only need to produce 4 pedal rotations (over 4 seconds) in order to visually experience the same path length of 32m. The OFG manipulation can be thought of as somewhat analogous to switching gears on a bicycle without the normal accompanying change in required muscle strength or exertion.
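For concreteness, the mapping from pedalling and OFG to visually specified distance can be written as a short sketch (the 4 m of visual travel per pedal rotation at an OFG of 1 is taken from the example above; the function and constant names are ours):

```python
# Sketch of how optic flow gain (OFG) maps pedalling onto visually
# specified distance, following the worked example in the text.
# Assumes 4 m of visual travel per pedal rotation when OFG = 1.
METRES_PER_ROTATION_AT_UNIT_GAIN = 4.0

def visual_distance_m(pedal_rotations: float, ofg: float) -> float:
    """Visually specified distance travelled for a given number of
    pedal rotations under a given optic flow gain."""
    return pedal_rotations * METRES_PER_ROTATION_AT_UNIT_GAIN * ofg

print(visual_distance_m(8, 1.0))  # 32.0 m: 8 rotations (~8 s) at OFG = 1
print(visual_distance_m(4, 2.0))  # 32.0 m: only 4 rotations (~4 s) at OFG = 2
```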

The experiment consisted of four conditions administered in a random order. In condition 1, relative path length estimation was tested using a simulated normal bicycle riding experience. In this condition, the OFG was held constant at 1 throughout the test. As a result, the relation between the information received visually and the information received through nonvisual sources was consistent throughout the condition. This condition will hereafter be referred to as the VNcon condition. In this condition, the reference path length was held at a visually specified length of 32m. The judgment path length was varied randomly among 21 lengths ranging from 18% to 200% of the reference path length. Each subject was tested with four blocks of 42 trials.

In condition 2 the OFG was varied randomly. Such variations produced an inconsistent relation between the visual and nonvisual information between trials as well as between the reference and judgment path lengths presented during each trial. This condition will hereafter be referred to as the VNincon condition. In this condition, one of four values of OFG (0.31, 0.77, 1.23, and 1.69) was assigned randomly to both the reference path length and the judgment path length. The reference path length was held constant at a visually specified length of 32m, but because the OFG was varied among four levels, the path length specified by proprioceptive information varied among four values. The judgment path length specified by vision was varied randomly among nine levels. In order to include all four optic flow gains for each of the nine visually specified lengths, the length specified by proprioceptive information would have to vary among 36 levels. Each subject was tested with four blocks of 36 trials.
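As an illustration of this design, one VNincon block can be sketched as the full crossing of the four OFG levels with the nine visually specified judgment lengths (36 trials). The nine judgment lengths and the exact randomization of the reference-path OFG are not enumerated in the text, so the values and the simple random assignment below are placeholders:

```python
import itertools
import random

OFG_LEVELS = [0.31, 0.77, 1.23, 1.69]
REFERENCE_VISUAL_M = 32.0
# Placeholder judgment lengths (% of the 32 m visual reference); the nine
# levels actually used in the experiment are not listed in the text.
JUDGMENT_VISUAL_PCT = [25, 50, 75, 100, 125, 150, 175, 200, 225]

def make_vnincon_block(seed=None):
    """One block of 36 VNincon trials: 4 OFGs fully crossed with 9 visually
    specified judgment lengths. The reference path is always a visually
    specified 32 m; the proprioceptively specified length of each path is
    its visual length divided by the OFG assigned to that path."""
    rng = random.Random(seed)
    trials = []
    for ofg_judg, pct in itertools.product(OFG_LEVELS, JUDGMENT_VISUAL_PCT):
        ofg_ref = rng.choice(OFG_LEVELS)  # reference-path OFG (assignment scheme assumed)
        judg_visual_m = REFERENCE_VISUAL_M * pct / 100.0
        trials.append({
            "ref_visual_m": REFERENCE_VISUAL_M,
            "ref_proprio_m": REFERENCE_VISUAL_M / ofg_ref,
            "judg_visual_m": judg_visual_m,
            "judg_proprio_m": judg_visual_m / ofg_judg,
            "ofg_ref": ofg_ref,
            "ofg_judg": ofg_judg,
        })
    rng.shuffle(trials)
    return trials

assert len(make_vnincon_block(seed=1)) == 36
```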


In condition 3, subjects viewed the same pattern of optic flow as in the VNincon condition through the head-mounted display; however, their movement was generated by operating a computer mouse rather than by riding the bicycle. Because subjects did not pedal the bicycle, the proprioceptive information provided by leg movements was removed. This condition will hereafter be referred to as the P(-) condition. Each subject was tested with four blocks of 36 trials.
In condition 4, visual information was absent and will hereafter be referred to as the V(-) condition. All aspects of this condition were the same as those in the VNcon condition, with the exception that subjects pedalled in the dark. Each subject was tested with four blocks of 42 trials.

For the VNcon, VNincon, and V(-) conditions, if subjects pedalled at a constant speed, the information from proprioceptive sources and duration of travel were always matched; thus, it would not be possible to distinguish the contributions of each source. In the P(-) condition, although proprioceptive information was removed, duration of travel remained a possible source of nonvisual information.


Results

Figures 2-3 and 5-7 illustrate the relation between subjects' relative path length estimates (Lest) and the correct path length (L) for the various conditions. If subjects estimated path length perfectly, their responses would fall on the diagonal dashed line. The difference between the mean estimated response and the correct response is considered constant error.
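As a concrete statement of the two error measures used throughout the Results (constant error here, and the variable error introduced below), the following sketch uses hypothetical estimates for a single judgment path length:

```python
import statistics

def constant_error(estimates_pct, correct_pct):
    """Constant error: mean estimate minus the correct relative path length."""
    return statistics.mean(estimates_pct) - correct_pct

def variable_error(estimates_pct):
    """Variable error: the spread (standard deviation) of the estimates."""
    return statistics.stdev(estimates_pct)

# Hypothetical Lest values (%) for a judgment path whose correct L is 150%
estimates = [120, 130, 125, 115, 135]
print(constant_error(estimates, 150))  # -25.0, i.e., underestimation
print(variable_error(estimates))       # ~7.9
```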

Errors for the VNcon Condition


Figure 2 illustrates the relation between the mean Lest and L when OFG was consistent (VNcon condition). When the judgment path length was longer than the reference path length (correct response higher than 100%), subjects tended to underestimate the length. As the path length increased, the constant error increased somewhat proportionally. Errors were minimal when the judgment path lengths were shorter than the reference path lengths. The standard deviation of the path length estimate tended to increase slightly as the judgment path length increased.

Insert Figure 2 about here


Errors for the VNincon and P(-) Conditions

Errors should subjects exclusively use visual information

In the VNcon condition, the path length information obtained from visual inputs, nonvisual inputs, or both, would lead to the same estimate. Thus, the contributions of different sensory cues were not distinguishable in this case. However, for the VNincon condition, if subjects were to rely exclusively on information from visual inputs or from nonvisual inputs (proprioception and/or duration of travel), this would lead to different responses. The correct path length, if subjects’ responses were based exclusively on visual cues (Lv) or exclusively on nonvisual cues (Ln), was calculated independently. Lv was calculated using the ratio of judgment path length to reference path length.
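A minimal sketch of the visually specified correct answer (the function name and numbers are ours):

```python
def correct_visual_pct(judgment_visual_m: float, reference_visual_m: float) -> float:
    """Lv: the visually specified judgment length as a percentage of the
    visually specified reference length."""
    return 100.0 * judgment_visual_m / reference_visual_m

print(correct_visual_pct(48.0, 32.0))  # 150.0, i.e., judgment = 150% of reference
```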


During the VNincon condition, the relation between the mean Lest and the mean Lv is illustrated in Figure 3A. The constant errors in relative path length estimation were larger in magnitude when the judgment path length was either much longer or much shorter than the reference path length (further away from 100%). When the judgment path length was longer than the reference path length, subjects tended to underestimate the length, whereas when the judgment path length was shorter than the reference path length, they overestimated. The size of the constant errors appeared to be larger in the VNincon condition (Figure 3A) than in the VNcon condition (Figure 2).
Insert Figure 3 about here

In the P(-) condition (Figure 3B), the magnitude of error was larger than in the VNincon condition (Figure 3A), although the pattern of error was similar: there was an underestimation for the longer path lengths and an overestimation for the shorter path lengths. To compare the size of error between the VNincon and P(-) conditions, we compared the slopes of the regression lines for the two conditions; this test demonstrated that the magnitude of error for the P(-) condition was significantly greater (t-test, p < 0.05; Zar, 1996, pp. 353-357).
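The slope comparison can be sketched as follows. This is one standard form of the t test for comparing two regression slopes (cf. Zar, 1996) and is offered only as an illustration, not the authors' exact analysis; it assumes each condition is summarized by a regression of estimated on correct relative path length.

```python
import numpy as np
from scipy import stats

def compare_regression_slopes(x1, y1, x2, y2):
    """t test comparing the slopes of two simple linear regressions
    (one common form of the test described in Zar, 1996)."""
    x1, y1, x2, y2 = map(np.asarray, (x1, y1, x2, y2))
    b1, a1 = np.polyfit(x1, y1, 1)          # slope, intercept for condition 1
    b2, a2 = np.polyfit(x2, y2, 1)          # slope, intercept for condition 2
    resid1 = y1 - (a1 + b1 * x1)
    resid2 = y2 - (a2 + b2 * x2)
    df = len(x1) + len(x2) - 4
    pooled_ms = (np.sum(resid1**2) + np.sum(resid2**2)) / df
    se_diff = np.sqrt(pooled_ms / np.sum((x1 - x1.mean())**2)
                      + pooled_ms / np.sum((x2 - x2.mean())**2))
    t = (b1 - b2) / se_diff
    p = 2 * stats.t.sf(abs(t), df)
    return b1, b2, t, p
```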

As a result of the higher magnitude of error observed in the P(-) condition, estimates appeared to be similar across path lengths (Figure 3B). In order to test whether subjects were performing at chance levels, we conducted an ANOVA, which demonstrated that the estimates for the different path lengths did in fact differ significantly (F(8,296) = 114, p < 0.001).

For a given path length, the variation of subjects’ relative path length estimations is considered variable error. Overall, the standard deviation of the path length estimate tended to increase slightly as the judgment path length increased. This is evident in the VNincon condition (Figure 3A), but less so in the P(-) condition (Figure 3B).


Errors should subjects exclusively use visual information: Effect of OFG


For both the VNincon condition and the P(-) condition, the above analysis was conducted on data collapsed across all levels of OFG. To assess whether the error fluctuated as a function of OFG, Figures 4A and 4B illustrate subjects' estimates broken down by the four levels of OFG (applied to the judgment path length) for the VNincon and P(-) conditions respectively. The magnitude of error appeared to differ as a function of OFG, with larger OFGs resulting in larger constant error. A 4 x 9 (4 OFGs x 9 path lengths) within-subjects ANOVA was conducted for each condition. For the VNincon condition, there was no significant effect of OFG (F(3, 96) = 1.77, p > 0.05); however, there was a significant effect of path length (F(8, 256) = 59.44, p < 0.001), as well as a significant interaction between OFG and path length (F(24, 768) = 2.18, p < 0.001). For the P(-) condition, there was a significant effect of OFG (F(3, 96) = 6.58, p < 0.001), a significant effect of path length (F(8, 256) = 156.39, p < 0.001), and a significant interaction between OFG and path length (F(24, 768) = 2.54, p < 0.001). These results suggest that the effect of optic flow is non-linear and that such non-linearity is not uniform across path length ratios.
Insert Figure 4 about here

Errors should subjects exclusively use nonvisual information


The correct path length, if subjects’ responses were based exclusively on nonvisual cues (Ln), was calculated using the ratio of travel duration of the judgment path length to the travel duration of the reference path length. The duration of travel was calculated by assuming that subjects always pedalled at a constant speed.
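As a sketch of this calculation, under the constant-speed assumption each travel duration is proportional to the proprioceptively specified length of the path, i.e., its visually specified length divided by that path's OFG; any assumed constant speed cancels out of the ratio (variable names are ours):

```python
def correct_nonvisual_pct(judg_visual_m, ofg_judg, ref_visual_m, ofg_ref):
    """Ln: ratio of travel durations, assuming a constant pedalling speed.
    Duration is proportional to visual length / OFG, so the speed cancels."""
    t_judg = judg_visual_m / ofg_judg
    t_ref = ref_visual_m / ofg_ref
    return 100.0 * t_judg / t_ref

# Example: both paths are a visually specified 32 m, but with different OFGs,
# so the nonvisually specified answer differs greatly from the visual 100%.
print(correct_nonvisual_pct(32.0, 1.69, 32.0, 0.31))  # ~18.3%
```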

Figures 5A and 5B illustrate the relation between the mean Lest and the Ln for the VNincon and P(-) conditions respectively. Figures 6A and 6B further clarify this relation by focusing on the range between 25% and 175% of the correct answers in Figures 5A and 5B. Both constant error and variable error were high. This suggests that when visual information was inconsistent with nonvisual information, relative path length estimation was not directly derived from nonvisual sources.


Insert Figure 5 about here

Insert Figure 6 about here




Error for the V(-) Condition

When visual information was absent (V(-) condition; Figure 7), the pattern and magnitude of error were comparable to those in the VNcon condition (Figure 2). This indicated that nonvisual information alone was sufficient for relative path length estimation. The variable error increased proportionally as the judgment path length increased. The magnitude of the variable error in the V(-) condition was somewhat larger than that in the VNcon condition.

Insert Figure 7 about here




Summary of Errors


Figure 8 illustrates the mean unsigned errors (combination of constant and variable errors) for all path lengths and for all subjects tested in each condition. There was no significant difference between the VNcon and the V(-) conditions (t(32) = .5, p > 0.05), indicating comparable performance for the conditions in which visual and nonvisual cues were consistent and when nonvisual cues were presented alone.

In addition, the magnitude of error for the VNcon condition was smaller than the magnitude of error observed for both the VNincon and P(-) conditions assuming subjects relied exclusively on visual information (t(32) = 2.78, p < 0.001).

For both the VNincon and the P(-) conditions, the magnitude of error for Lv was much less than for Ln, indicating vision’s dominance (VNincon, t(32) = 10.26, p < 0.001; P(-), t(32) = 5.0, p < 0.001) (see Figure 8). This comparison was made by averaging only the data for Ln within the range equivalent to that of Lv (i.e., when the correct answer was between 25% and 175% - see Figures 6A and 6B). Note that such a comparison represents a more conservative estimate of the overall difference in error observed between the two cues. If the entire range of path lengths as specified by nonvisual information (5% - 950%) were considered, the error should subjects rely exclusively on nonvisual cues would be much higher, with a mean error of 99.6 for the VNincon condition and 98.1 for the P(-) condition.

Assuming that subjects used exclusively visual information, the error was significantly smaller for the VNincon condition than for the P(-) condition (t(32) = 2.78, p<.01), indicating that the presence of proprioceptive cues improved performance even though they were inconsistent in relation to the visual cues. For Ln, there was no significant difference between the VNincon and the P(-) conditions (t(32) = 1.3, p>.05).


Insert Figure 8 about here

Effect of OFG on Pedalling Speed


It is important to note that for the VNincon condition, the calculation of the correct path length specified by nonvisual information was made on the assumption that subjects pedalled the bicycle at a constant speed. We therefore also examined empirically whether subjects' actual pedalling speed fluctuated in response to the different OFGs presented. To do this, each subject's average pedalling speed for each trial was calculated from the last two seconds of pedalling movement. Speed values were then averaged for each subject across all trials for each OFG used. As shown in Figure 9, subjects tended to pedal slightly slower for higher OFGs. These differences in pedalling speed were significant, as indicated by a one-way ANOVA comparing the four levels of OFG (F(3, 96) = 27.4, p < 0.001). However, the differences in pedalling speed (ranging from approximately 1.1 to 0.9 pedal rotations/sec) were minimal compared to the corresponding differences in OFG (ranging from 0.31 to 1.69).
Insert Figure 9 about here
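The pedalling-speed analysis described above can be sketched as follows; the data structures are hypothetical placeholders, and speeds are assumed to be recorded in pedal rotations per second:

```python
import numpy as np

def mean_speed_last_two_seconds(timestamps_s, speeds_rot_per_s):
    """Mean pedalling speed over the final two seconds of one trial."""
    timestamps_s = np.asarray(timestamps_s)
    speeds_rot_per_s = np.asarray(speeds_rot_per_s)
    window = timestamps_s >= (timestamps_s[-1] - 2.0)
    return float(speeds_rot_per_s[window].mean())

def mean_speed_by_ofg(trials):
    """trials: list of dicts with keys 'ofg', 'timestamps_s', 'speeds_rot_per_s'."""
    by_ofg = {}
    for trial in trials:
        speed = mean_speed_last_two_seconds(trial["timestamps_s"],
                                            trial["speeds_rot_per_s"])
        by_ofg.setdefault(trial["ofg"], []).append(speed)
    return {ofg: float(np.mean(speeds)) for ofg, speeds in by_ofg.items()}
```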
Discussion

The present study investigated the relative contributions of optic flow information and proprioceptive information to human performance on relative path length estimation. During the VNcon and V(-) conditions, subjects produced the least amount of error, suggesting proficiency in relative path length estimation using either combined cues or nonvisual cues alone. In the VNincon condition, estimations appeared to be based predominantly on Lv. Thus, when visual and proprioceptive information were inconsistent with each other, subjects responded in the manner predicted if optic flow were the dominant source of information. What is of particular interest is that, although proprioceptive information was not used directly, its mere availability increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive information was inconsistent with the visual information.



Visual Dominance and the Unique Role of Proprioceptive Information

The visual dominance reflected in the path length estimates for the VNincon condition is similar to what has been observed in the cue-conflict literature examining visual-tactile interactions in near-body space (Warren and Rossano, 1991; Welch, 1978; Welch and Warren, 1986). For example, it has been shown that when subjects wear displacement prisms, which cause a spatial discrepancy between the shifted spatial layout obtained through vision and the correct spatial layout provided by nonvisual cues, they rely predominantly on vision to perform spatial tasks (e.g., Pick, Warren, and Hay, 1969; Warren, 1979).


In our VNincon condition, proprioceptive information actually conflicted with the visual information. It would be reasonable to assume that if this conflicting proprioceptive information were removed, as was the case in the P(-) condition, the accuracy of relative path length estimation based on visual information should increase. In fact, we found the opposite. The finding that subjects' performance based on vision dropped with the removal of proprioceptive cues demonstrated, for the first time, that the proprioceptive information involved in active movement improves the encoding of the resulting optic flow information, even when the proprioceptive information is misleading.

It seems reasonable to claim that combining proprioceptive information with visual input would be advantageous. When subjects are actively engaging in a locomotor activity, it is possible that they may have a stronger tendency to register the resulting visual flow information, making the relative path length estimation based on vision more likely and more accurate. Indeed, literature has demonstrated that idiothetic updating facilitates visual performance regarding both stationary landmarks (e.g., object recognition, Harman, Humphrey, and Goodale, 1999) and the spatial relation between objects (Simons and Wang, 1998). This improved perception due to action is an interesting cross-modal interaction that complements theoretical frameworks that discriminate vision-for-perception from vision-for-action (Milner and Goodale, 1996).



Cue Interaction Models

To some extent, our results are in accordance with the general ideas specified by the statistically optimal integration model that has recently received convincing support from work on visual-haptic integration (Ernst and Banks, 2002; van Beers, Wolpert, and Haggard, 2002; van Beers, Sittig, and van der Gon, 1999) and on the integration of different visual cues in depth perception (Johnston, Cumming, and Landy, 1994; Young, Landy, and Maloney, 1993). This theory predicts that sensory information from multiple sources is weighted according to the estimated reliability of each source relative to the estimated reliabilities of other available sources. A number of aspects of our study are consistent with this model.
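For reference, the combination rule assumed by this model can be stated generically: each cue is weighted by its relative reliability (the inverse of its variance). The sketch below illustrates that rule with hypothetical numbers; it is not a model fitted to the present data.

```python
def optimal_combined_estimate(estimates, variances):
    """Inverse-variance (reliability) weighted cue combination, as assumed by
    the statistically optimal integration model (e.g., Ernst and Banks, 2002)."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total
    return combined, combined_variance

# Hypothetical example: a reliable visual estimate (140%, low variance) and a
# noisy nonvisual estimate (100%, high variance) yield a combined estimate
# that lies close to the visual one.
print(optimal_combined_estimate([140.0, 100.0], [25.0, 400.0]))  # (~137.6, ~23.5)
```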

In our study, when the relation between visual and nonvisual information was variable, subjects tended to rely more heavily on visual information. Such a result may reflect the perceived reliability of these two cues in the current path length estimation task. It has traditionally been demonstrated that visual cues are, in general, more informative for large-scale spatial processing than any other sensory information (Warren and Rossano, 1991).

With regards to nonvisual information, duration of travel may potentially contribute to path length estimation. Therefore, in the VNincon condition, visual information, proprioceptive information, and duration of travel may all be contributing cues. In the P(-) condition, only visual information and duration of travel were available. If the combined visual-proprioceptive information available in the VNincon condition was perceived as being more reliable, this combination would be more heavily weighted during path length estimation compared to situations in which proprioceptive cues were absent (P(-)). Consequently, the apparent advantage of using a combination of cues appeared to persist even when the two sources of information were not consistent across trials.

In contrast to our results, Kearns et al. (2002) assessed path integration using a VR walking paradigm. It was demonstrated that when nonvisual information generated from walking was combined with intermittently available optic flow information, subjects relied more heavily on nonvisual cues. Although their results seemed to contradict our results, this discrepancy may in fact be a direct consequence of the differences in experimental design. For instance, in the Kearns et al. experiment, optic flow information was only intermittently available and therefore may have been perceived as being less reliable than nonvisual information. Further, because in their study locomotion was achieved through walking, this may have offered a more reliable source of nonvisual information than pedalling a stationary bicycle – a task not as common as walking.

Although our results regarding visual-proprioceptive interaction appear to be consistent with the general assumptions of the statistically optimal integration model in terms of cue weighting based on perceived reliability, the exact nature of the cue combination requires further examination. Experiments supporting the optimal integration model typically use very subtle sensory conflicts (e.g., Ernst and Banks, 2002), and their results suggest that subjects' estimations were based on a weighted average of the individual cues (Clark and Yuille, 1990). Our experiment involved a greater sensory "conflict" between visual and proprioceptive information. Our results indicated that, although nonvisual information was not used directly, the mere availability of proprioceptive information increased the accuracy of relative path length estimation based on visual cues, even though the proprioceptive information was inconsistent with the visual information. Therefore, the interaction between visual and proprioceptive information found in our study appears to take a different, non-linear form compared to the linear weighting reported in other literature (e.g., Johnston, Cumming, and Landy, 1994; Young, Landy, and Maloney, 1993).

Similar to the cue interaction mechanisms discussed in the context of visual-haptic interactions, Mittelstaedt and Mittelstaedt (2001) described three forms of interaction that may occur between vestibular and proprioceptive cues when estimating path length: vestibular cues could be suppressed by the proprioceptive cues, cancelled by the proprioceptive cues (see also Mergner and Rosemeier, 1998), or combined with them to form an optimized weighted average. The results of the current study examining relative path length estimation revealed a unique interaction between visual and proprioceptive cues that may not be entirely explained by other proposed theories. In order to gain a better understanding of such mechanisms, further experimentation involving the quantification of path length discrimination thresholds would complement the current findings involving ratio estimation.

The Effect of Optic Flow on Pedalling Speed

In our study, in addition to being required to monitor and report the path length travelled, subjects were also responsible for maintaining a constant pedalling speed. Both of these tasks required subjects to monitor visual and nonvisual information.

Results showed that pedalling speed varied slightly as a function of OFG, but not to the degree that would be predicted should pedalling speed be completely determined by OFG (Figure 9). The strong contribution of visual information to the estimation of path length in the VNincon condition appears to be inconsistent with the small effect of OFG on the rate of pedalling. The fact that subjects relied more heavily on different sources of sensory information when estimating path length than when maintaining a constant pedalling speed may reflect differences in task requirements. When subjects were required to monitor and report path length, visual information seemed to play a more important role, perhaps reflecting the fact that vision is normally the more reliable and more readily available source of information for spatial perception. In contrast, when subjects were required to maintain a constant pedalling speed, they may have devoted more attention to proprioceptive information, as it typically provides more reliable locomotor information.

The magnitude of the effect of optic flow on pedalling speed is consistent with the effect of optic flow found in studies using a treadmill-walking task. Prokop, Schubert, and Berger (1997) conducted a study in which subjects were instructed to walk at a constant speed on a closed-loop treadmill, while the magnitude of optic flow was manipulated. Their results demonstrated that subjects modulated their movements as a result of variations in optic flow magnitude. However, the degree to which subjects modified their movements was far less than predicted should subjects only rely on optic flow. Overall, both Prokop et al. and the current study suggest that both visual and proprioceptive information play a role in modulating self-motion – findings that are generally consistent with other similar studies (Konczak, 1994; Stappers, 1996; Varraine, Bonnard, and Pailhous, 2002).

As shown in the VNincon condition, not only did pedalling speed vary slightly as a function of OFG (Figure 9), but the verbal estimates of relative path length also differed according to OFG (Figure 4). In other words, the effect of visual speed was non-linear. Mittelstaedt and Mittelstaedt (2001) also showed that path length estimation varied in response to changes in movement speed when idiothetic cue availability was manipulated. In our study, although subjects’ pedalling speed remained within a small range, the variation in OFG and thus the variation in visual speed led to a non-linearity similar to that observed with regards to idiothetic cues (Mittelstaedt and Mittelstaedt, 2001). The exact pattern and nature of such visual non-linearity remains to be examined extensively and could be better understood with additional experimental conditions designed to isolate the contributions of optic flow.

Conclusion

This is the first study to investigate the relative contributions of visual and proprioceptive information in relative path length estimation. Further, it complements literature investigating the relative contributions of these sources of information to locomotion in general. The results of this study will allow for further examination of the exact nature of such interactions. For instance, it is important to evaluate whether the beneficial effect of proprioceptive information is strictly a perceptual phenomenon or results from the fact that subjects have active control over their movement and/or experience increased levels of attention. Ultimately, this paradigm allows for further insight into how the brain integrates different sources of information.



ACKNOWLEDGEMENTS


The authors wish to thank Drs. D. Maurer, R. Racine, L. Allan, and C. Ellard for helpful discussions and comments on the manuscript and N. Jones, K. Strode, and D. Zelek for their assistance in collecting data. We would also like to thank Dr. B. Frost, G. Barrett, and R. Dupras for discussions and assistance in the initial design and construction of the VR interface and software. We also thank two anonymous reviewers for helpful comments on the earlier versions of this article. This work was supported by grants from the Natural Science and Engineering Research Council of Canada and the Canadian Foundation for Innovation to H.-J. S.

References

Bigel, MG, Ellard, CG (2000) The contribution of nonvisual information to simple place

navigation and distance estimation: An examination of path integration. Can J Exp Psychol 54: 172-185

Bremmer, F, Lappe, M (1999) The use of optical velocities for distance discrimination

and reproduction during visually simulated self motion. Exp Brain Res 127: 33-42

Chance SS, Gaunet F, Beall AC, Loomis JM (1998) Locomotion mode affects the

updating of objects encountered during travel: The contribution of vestibular and proprioceptive inputs to path integration. Presence-Teleop Virt Env 7: 168-178

Clark, JJ, Yuille, AL (1990) Data fusion for sensory information processing systems.

Boston: Kluwer Academic Publishers

Elliott, D (1986) Continuous visual information may be important after all: A failure to

replicate Thomson (1983). J Exp Psychol Hum Percept Perform 12: 388-391

Ernst, MO, Banks, MS (2002) Humans integrate visual and haptic information in a

statistically optimal fashion. Nature 415: 429-433

Foos, PW (1982) Distance estimation and information load. Percept Mot Skills 54(1): 79-

82

Gibson, JJ (1950) The perception of the visual world. Houghton Mifflin: Boston, MA



Grant, SC, Magee LE (1998) Contributions of proprioception to navigation in virtual environments. Hum Factors 40(3): 489-497

Harman, KL, Humphrey, GK, Goodale, MA (1999) Active manual control of object views facilitates visual recognition. Curr Biol 9: 1315-1318

Harris, LR, Jenkin, M, Zikovitz, DC (2000) Visual and non-visual cues in the perception

of linear self motion. Exp Brain Res 135: 12-21

Johnston, EB, Cumming, BG, Landy, MS (1994) Integration of stereopsis and motion

shape cues. Vision Res 34: 2259-2275

Johnston, EB, Cumming, BG, Parker, AJ (1993) Integration of depth modules: stereopsis

and texture. Vision Res 33: 813-826

Kearns, MJ, Warren, WH, Duchon, AP, Tarr, MJ (2002) Path integration from optic flow

and body senses in a homing task. Perception 31: 349-374

Klatzky, RL, Loomis, JM, Beall, AC, Chance, SS, and Golledge, RG (1998) Spatial

updating of self-position and orientation during real, imagined, and virtual locomotion. Psychol Sci 9: 293-298

Konczak J (1994) Effects of optic flow on the kinematics of human gait: a comparison of

young and older adults. J Mot Behav 26:225-36

Lambrey S, Viaud-Delmon I, Berthoz, A (2002) Influence of a sensorimotor conflict on

the memorization of a path traveled in virtual reality. Cog Brain Res 14: 177-186

Lee, DN (1980) The optic flow field: The foundation of vision. Philos Trans R Soc Lond

B Biol Sci 290: 169-179

Loomis, JM, Da Silva, JA, Fujita, N, Fukusima, SS (1992) Visual space perception and

visually directed action. J Exp Psychol Hum Percept Perform 18: 906-921

Mergner, T, Rosemeier T (1998) Interaction of vestibular, somatosensory and visual

signals for postural control and motion perception under terrestrial and

microgravity conditions – a conceptual model. Brain Res Brain Res Rev 28: 118-135

Milner, AD, Goodale, MA (1996) The visual brain in action. Oxford: Oxford University

Press

Mittelstaedt, ML, Mittelstaedt, H (2001) Idiothetic navigation in humans: estimation of

path length. Exp Brain Res 139: 318-332

Ohmi, M (1996) Egocentric perception through interaction among many sensory systems.

Cog Brain Res 5: 87-96

Pick, HL, Jr, Warren, DH, Hay, JC (1969) Sensory conflict in judgments of spatial

direction. Percept Psychophys 6: 203-205

Prokop T, Schubert M, Berger W (1997) Visual influence on human locomotion. Exp

Brain Res 114: 63-70

Redlick, FP, Jenkin, M, Harris, LR (2001) Humans can use optic flow to estimate distance of travel. Vision Res 41: 213-219

Regan, D, Hamstra, SJ (1993) Dissociation of discrimination thresholds for time to

contact and for rate of angular expansion. Vision Res 33: 447-462

Rieser, JJ, Ashmead, DH, Talor, CR, Youngquist, GA (1990) Visual perception and the

guidance of locomotion without vision to previously seen targets. Perception 19: 675-689

Rieser, JJ, Pick, HL, Ashmead, DH, Garing, AE (1995) Calibration of human locomotion

and models of perceptual-motor organization. J Exp Psychol Hum Percept Perform 21: 480-497

Sadalla, EK, Magel, SG (1980) The perception of traversed distance. Environ Behav

12(1): 65-79

Simons, DJ, Wang, RF (1998) Perceiving real-world viewpoint changes. Psychol Sci 9:

315-320


Stappers PJ (1996) Matching proprioceptive to visual speed affected by nonkinematic

parameters. Percept Mot Skills 83:1353-4

Steenhuis, RE, Goodale, MA (1988) The effects of time and distance on accuracy of

target-directed locomotion: Does an accurate short-term memory for spatial location exist? J Mot Behav 20: 399-415

Sun, H-J, Carey, DP, Goodale, MA (1992) A mammalian model of optic-flow utilization

in the control of locomotion. Exp Brain Res 91: 171-175

Sun, H-J, Frost, BJ (1998) Computation of different optical variables of looming objects

in pigeon nucleus rotundus neurons. Nat Neurosci 1: 296-303

Thomson, JA (1983) Is continuous visual monitoring necessary in visually guided

locomotion? J Exp Psychol Hum Percept Perform 9: 427-443

van Beers, RJ, Sittig, AC, van der Gon, JJD (1999) Integration of proprioceptive and

visual position-information: An experimentally supported model. J Neurophysiol 81: 1355-1364

van Beers, RJ, Wolpert, DM, Haggard, P (2002). When feeling is more important than

seeing in sensorimotor adaptation. Curr Biol 12: 834-837

Varraine E, Bonnard M, Pailhous J (2002) Interaction between different sensory cues in

the control of human gait. Exp Brain Res 142: 374-384

Warren, DH (1979) Spatial localization under conflict conditions: Is there a single

explanation? Perception 8: 323-337

Warren, DH, Rossano, MJ (1991) Intermodality relations vision and touch. In M.A.

Heller and W. Schiff (Eds.), The psychology of touch. Hillsdale, NJ, Erlbaum, pp119-137

Warren, WH, Hannon, DJ (1990) Eye movements and optical flow. J Opt Soc Am A 7: 160-169

Welch, RB (1978) Perceptual modification. New York: Academic Press

Welch, RB, Warren, DH (1986) Intersensory interactions. In K.R. Boff, L. Kaufman, JP,

Thomas (Eds.), Handbook of perception and human performance. New York, Wiley, pp 25.1-25.36

Witmer, BG, Kline, PB (1998) Judging perceived and traversed distance in virtual

environments. Presence-Teleop Virt Env 7: 144-167

Young, MJ, Landy, MS, Maloney, LT (1993) A perturbation analysis of depth perception

from combinations of texture and motion cues. Vision Res 33: 2685-2696

Zar, JH (1996) Biostatistical Analysis 3rd Ed. New Jersey, Prentice-Hall Inc., pp 353-357

Figure Captions

Figure 1. (A) Subjects wore a head-mounted display and rode a stationary bicycle in order to move through the virtual world. (B) The visual environment consisted of a straight, empty, and seemingly infinite hallway.

Figure 2. The relation between subjects’ relative path length estimates (Lest) and the correct path length (L) in the VNcon condition. The diagonal dashed line indicates perfect performance. The error bars represent standard deviation.

Figure 3. The relation between subjects’ relative path length estimates (Lest) and the correct path length as specified by vision (Lv) for the VNincon condition (A) and for the P(-) condition (B).

Figure 4. The relation between subjects' relative path length estimates and the correct path length as specified by vision (Lv), plotted according to the different levels of optic flow gain (OFG) used in the judgment path, for the VNincon condition (A) and for the P(-) condition (B).

Figure 5. The relation between subjects' relative path length estimate (Lest) and the correct path length as specified by nonvisual information (Ln) across the entire range of path lengths tested, for the VNincon condition (A) and for the P(-) condition (B).

Figure 6. The relation between subjects’ relative path length estimate (Lest) and the correct path length as specified by nonvisual information (Ln) for the range of path lengths between 25% and 175% of the correct answers, for the VNincon condition (A) and for the P(-) condition (B).

Figure 7. The relation between subjects’ relative path length estimate (Lest) and the correct path length as specified by nonvisual information (Ln), for the V(-) condition.

Figure 8. The mean unsigned errors across all path lengths for all conditions. V: visual information; N: nonvisual information; P: proprioceptive information; VN: combination of visual and nonvisual information.

Figure 9. The relation between subjects’ average pedalling speed and OFG for the VNincon condition.






