
VisualGrasping (SIGNED)

Visually guided grasping and its effects on visual representations

Project "VisualGrasping" data sheet

The following table provides information about the project.


Organization address
postcode: 35390

Contact info
n.a. (no contact details provided)

 Coordinator country: Germany [DE]
 Total cost: 159,460 €
 EC max contribution: 159,460 € (100%)
 Programme: H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
 Call code: H2020-MSCA-IF-2017
 Funding scheme: MSCA-IF-EF-ST
 Starting year: 2018
 Duration: from 2018-04-02 to 2020-07-02


Take a look at the project's partnership.

# participants  country  role  EC contrib. [€] 


 Project objective

I ask how vision guides grasping, and conversely, how learning to grasp objects constrains visual processing. Grasping an object feels effortless, yet the computations underlying grasp planning are nontrivial, and there is an extensive literature describing the multifaceted features of visually guided grasping. I aim to bind this fragmented body of knowledge into a unified framework for understanding how humans visually select grasps. To do so I will use motion-tracking hardware (already in place at the University of Giessen) to measure and model human grasping patterns to 3D objects. I will rely on Dr. Fleming's unique expertise with physical simulation to simulate human grasping with objects varying in shape and material. Joining behavioral measurements with computer simulations will provide a powerful data- and theory-driven approach to fully map out the space of human grasping behavior.

The complementary goal of this proposal is to understand how grasping constrains visual processing of object shape and material. I plan to tackle this goal by building a computational model of visual processing for grasp planning. Both Dr. Fleming and I have previous experience with computational modelling of visual function. I will exploit powerful machine learning techniques to infer what kinds of visual representations are necessary for grasp planning. I will train deep neural networks (for which the hardware and software are already in place and in use by the Fleming lab) using extensive physics simulations. Dissecting the learned network architecture and comparing the network's performance to human behavior will tell us what information about shape, material, and objects the human visual system encodes to plan motor actions.

In short, with this research I aim to determine how processing within the human visual system is shaped by, and guides, hand motor action.


List of publications.

2019 · Guido Maiello, Vivian C. Paulun, Lina K. Klein, Roland W. Fleming, "Object Visibility, Not Energy Expenditure, Accounts For Spatial Biases in Human Grasp Selection," i-Perception 10(1). ISSN: 2041-6695, DOI: 10.1177/2041669519827608 (last update 2019-11-18)

2018 · Guido Maiello, Vivian C. Paulun, Lina K. Klein, Roland W. Fleming, "The Sequential-Weight Illusion," i-Perception 9(4). ISSN: 2041-6695, DOI: 10.1177/2041669518790275 (last update 2019-11-18)

Are you the coordinator (or a participant) of this project? Please send me more information about the "VISUALGRASPING" project.

For instance: the website URL (it has not been provided by EU-opendata yet), the logo, a more detailed description of the project (in plain text, as an RTF file or a Word file), some pictures (as image files, not embedded in a Word file), a Twitter account, a LinkedIn page, etc.

Send me an email and I will put them on your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "VISUALGRASPING" is provided by the European Open Data Portal: CORDIS open data.

More projects from the same programme (H2020-EU.1.3.2.)

SIMIS (2020)

Strongly Interacting Mass Imbalanced Superfluid with ultracold fermions


iRhomADAM (2020)

Uncovering the role of the iRhom2-ADAM17 interaction in inflammatory signalling


POMOC (2019)

Charles IV and the power of marvellous objects
