Charles M Higgins
Publications
PMID: 22744199; Abstract:
Abstract Collision avoidance models derived from the study of insect brains do not perform universally well in practical collision scenarios, although the insects themselves may perform well in similar situations. In this article, we present a detailed simulation analysis of two well-known collision avoidance models and illustrate their limitations. In doing so, we present a novel continuous-time implementation of a neuronally based collision avoidance model. We then show that visual tracking can improve performance of thesemodels by allowing an relative computation of the distance between the obstacle and the observer.We compare the results of simulations of the two models with and without tracking to show how tracking improves the ability of the model to detect an imminent collision.We present an implementation of one of thesemodels processing imagery from a camera to showhow it performs in real-world scenarios. These results suggest that insects may track looming objects with their gaze. © The Author(s) 2012.
PMID: 15490223; Abstract:
Behavioral experiments suggest that insects make use of the apparent image speed on their compound eyes to navigate through obstacles, control flight speed, land smoothly, and measure the distance they have flown. However, the vast majority of electrophysiological recordings from motion-sensitive insect neurons show responses which are tuned in spatial and temporal frequency and are thus unable to unambiguously represent image speed. We suggest that this contradiction may be resolved at an early stage of visual motion processing using nondirectional motion sensors that respond proportionally to image speed until their peak response. We describe and characterize a computational model of these sensors and propose a model by which a spatial collation of such sensors could be used to generate speed-dependent behavior.
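The spatiotemporal-frequency ambiguity can be demonstrated with a standard correlation-type motion detector (a generic sketch with illustrative parameter values, not the nondirectional sensor model proposed in the paper): two gratings drifting at the same speed but with different spatial frequencies evoke different mean responses, so the output does not unambiguously encode image speed.

```python
import math

def correlator_response(spatial_freq, speed, tau=0.05, spacing=0.01,
                        duration=2.0, dt=0.001):
    """Mean response of an opponent correlation-type (Hassenstein-
    Reichardt) motion detector to a drifting sine grating: two inputs
    separated by `spacing`, each delayed copy multiplied with the
    neighbor's undelayed signal. Parameters are illustrative."""
    def lum(x, t):
        return math.sin(2 * math.pi * spatial_freq * (x - speed * t))
    total = 0.0
    n = int(round(duration / dt))
    for i in range(n):
        t = tau + i * dt
        left, right = lum(0.0, t), lum(spacing, t)
        left_d, right_d = lum(0.0, t - tau), lum(spacing, t - tau)
        total += left_d * right - right_d * left  # opponent correlation
    return total / n

# Identical image speed (1 unit/s), different spatial frequencies:
r_low = correlator_response(2.0, 1.0)
r_high = correlator_response(8.0, 1.0)
print(f"spatial freq 2: {r_low:+.3f}, spatial freq 8: {r_high:+.3f}")
```

The two responses differ despite identical stimulus speed, which is the ambiguity the abstract refers to: the detector is tuned in spatial and temporal frequency, not in speed.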
Abstract:
Visual motion information provides a variety of clues that enable biological organisms from insects to primates to efficiently navigate in unstructured environments. We present modular mixed-signal very large-scale integration (VLSI) implementations of the three most prominent biological models of visual motion detection. A novel feature of these designs is the use of spike integration circuitry to implement the necessary temporal filtering. We show how such modular VLSI building blocks make it possible to build highly powerful and flexible vision systems. These three biomimetic motion algorithms are fully characterized and compared in performance. The visual motion detection models are each implemented on separate VLSI chips, but utilize a common silicon retina chip to transmit changes in contrast, and thus four separate mixed-signal VLSI designs are described. Characterization results show that each sensor's response saturates with the contrast of moving stimuli, and that the direction of motion of a sinusoidal grating can be detected at contrasts below 5% and over more than an order of magnitude in velocity, while retaining modest power consumption. © 2005 IEEE.
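The idea of implementing temporal filtering by integrating spikes can be sketched in software (a schematic analogy, not the mixed-signal circuitry described in the paper): each incoming spike deposits a contribution that decays exponentially, so the integrated value is a first-order low-pass filter of the spike train.

```python
import math

def integrate_spikes(spike_times, t_eval, tau=0.020):
    """Exponentially decaying integration of a spike train: each spike
    contributes exp(-(t - t_spike)/tau), yielding a first-order
    low-pass filter of input activity. The time constant is an
    illustrative assumption, not a value from the paper."""
    return sum(math.exp(-(t_eval - s) / tau)
               for s in spike_times if s <= t_eval)

# A denser spike train (higher input activity) yields a larger
# filtered value at the same readout time:
sparse = [0.000, 0.010, 0.020, 0.030, 0.040]   # 100 Hz train
dense = [i * 0.002 for i in range(25)]         # 500 Hz train
lo = integrate_spikes(sparse, 0.050)
hi = integrate_spikes(dense, 0.050)
print(f"100 Hz train -> {lo:.3f}, 500 Hz train -> {hi:.3f}")
```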
Abstract:
The Hassenstein-Reichardt (HR) correlation model is commonly used to model elementary motion detection in the fly. Recently, a neuronally-based computational model was proposed which, unlike the HR model, is based on identified neurons. The response of both models increases as the square of contrast, although the response of insect neurons saturates at high contrasts. We introduce a saturating nonlinearity into the neuronally-based model in order to produce contrast saturation and discuss the neuronal implications of these elements. Furthermore, we show that features of the contrast sensitivity of movement-detecting neurons are predicted by the modified model. © 2004 Elsevier B.V. All rights reserved.
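The contrast-squared behavior of the HR correlation model, and the compressive effect of a saturating nonlinearity, can be sketched numerically. This is a generic illustration: applying tanh to the photoreceptor input signals is an assumed placement of the nonlinearity, not necessarily the circuit of the modified neuronally-based model.

```python
import math

def hr_mean_response(contrast, saturate=False, tau=0.05, spacing=0.01,
                     spatial_freq=2.0, speed=1.0, duration=2.0, dt=0.001):
    """Mean response of an opponent Hassenstein-Reichardt correlator to
    a drifting sine grating. With saturate=True, a tanh nonlinearity is
    applied to each input signal (illustrative placement)."""
    def lum(x, t):
        s = contrast * math.sin(2 * math.pi * spatial_freq * (x - speed * t))
        return math.tanh(s) if saturate else s
    total = 0.0
    n = int(round(duration / dt))
    for i in range(n):
        t = tau + i * dt
        left, right = lum(0.0, t), lum(spacing, t)
        left_d, right_d = lum(0.0, t - tau), lum(spacing, t - tau)
        total += left_d * right - right_d * left  # opponent correlation
    return total / n

# Without saturation, doubling contrast quadruples the response:
lin_ratio = hr_mean_response(0.2) / hr_mean_response(0.1)
# With the input nonlinearity, the response compresses at high contrast:
sat_ratio = (hr_mean_response(2.0, saturate=True)
             / hr_mean_response(1.0, saturate=True))
print(f"linear ratio ~{lin_ratio:.2f}, saturated ratio ~{sat_ratio:.2f}")
```

The linear model's ratio is 4 (square-law in contrast), while the saturated version grows far more slowly at high contrast, mirroring the contrast saturation observed in insect movement-detecting neurons.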