One of the primary ways we interact with the world is using our hands. In macaques, the circuit
spanning the anterior intraparietal area, the hand area of the ventral premotor cortex, and the primary
motor cortex is necessary for transforming visual information into grasping movements. We
hypothesized that a recurrent neural network mimicking the multi-area structure of the anatomical
circuit and using visual features to generate the muscle dynamics required to grasp objects would
explain the neural and computational basis of the grasping circuit. Modular networks with object
feature input and sparse inter-module connectivity outperformed other models at explaining neural
data and reproducing the inter-area relationships present in the biological circuit, despite the absence
of neural data during network training. Network dynamics were governed by simple rules, and targeted
lesioning of modules produced deficits similar to those observed in lesion studies, providing a
potential explanation for how grasping movements are generated.
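The connectivity pattern described above can be illustrated with a minimal sketch; this is not the authors' implementation, and every numeric choice below (three modules of 50 units each, 10 visual feature inputs, 8 muscle outputs, 10% inter-module sparsity, random untrained weights, the Euler time step) is an assumption made only for illustration. In the study the weights would be learned from task data, whereas here they are random.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_per_mod, n_out, p_sparse = 10, 50, 8, 0.1   # hypothetical sizes and sparsity
n_mod = 3                                            # AIP-, F5-, and M1-like modules
N = n_mod * n_per_mod

# Dense recurrence within each module, sparse connections only between neighbouring modules.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
mask = np.zeros((N, N))
for m in range(n_mod):
    s = slice(m * n_per_mod, (m + 1) * n_per_mod)
    mask[s, s] = 1.0                                 # within-module block kept dense
for m in range(n_mod - 1):
    a = slice(m * n_per_mod, (m + 1) * n_per_mod)
    b = slice((m + 1) * n_per_mod, (m + 2) * n_per_mod)
    mask[a, b] = rng.random((n_per_mod, n_per_mod)) < p_sparse   # sparse feedback
    mask[b, a] = rng.random((n_per_mod, n_per_mod)) < p_sparse   # sparse feedforward
W *= mask

W_in = rng.normal(0.0, 1.0, (n_per_mod, n_in))                        # visual features -> first module
W_out = rng.normal(0.0, 1.0 / np.sqrt(n_per_mod), (n_out, n_per_mod)) # last module -> muscle output

def step(x, u, dt=0.02, tau=0.1):
    # One Euler step of rate dynamics: tau * dx/dt = -x + W * tanh(x) + input.
    inp = np.zeros(N)
    inp[:n_per_mod] = W_in @ u                       # only the first module receives visual input
    x = x + dt / tau * (-x + W @ np.tanh(x) + inp)
    muscles = W_out @ np.tanh(x[-n_per_mod:])        # muscle command read out from the last module
    return x, muscles

x = np.zeros(N)
features = np.ones(n_in)                             # placeholder object features
for t in range(100):
    x, muscles = step(x, features)

In a sketch like this, the targeted lesioning mentioned above would correspond to zeroing a module's block of W (or its outgoing weights) and observing how the muscle output degrades.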
Presented on 18.12.2019 by Marie Schmidt