Disambiguating Visual Motion Through Contextual Feedback Modulation
From MaRDI portal
Publication: 3160475
DOI: 10.1162/0899766041732404
zbMath: 1084.68906
DBLP: journals/neco/BayerlN04
OpenAlex: W2082223698
Wikidata: Q45033858
Scholia: Q45033858
MaRDI QID: Q3160475
Publication date: 9 February 2005
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/0899766041732404
Related Items (9)
- Biologically plausible learning in neural networks with modulatory feedback
- A dynamical model of visual motion processing for arbitrary stimuli including type II plaids
- A Bio-Inspired, Computational Model Suggests Velocity Gradients of Optic Flow Locally Encode Ordinal Depth at Surface Borders and Globally They Encode Self-Motion
- Computing with a Canonical Neural Circuits Model with Pool Normalization and Modulating Feedback
- Globally consistent depth sorting of overlapping 2D surfaces in a model using local recurrent interactions
- Bifurcation analysis applied to a model of motion integration with a multistable stimulus
- Motion detection, noise reduction, texture suppression, and contour enhancement by spatiotemporal Gabor filters with surround inhibition
- A neurally plausible model of the dynamics of motion integration in smooth eye pursuit based on recursive Bayesian estimation
- Interaction of feedforward and feedback streams in visual cortex in a firing-rate model of columnar computations