Abstract: We are interested in investigating how to design flexible and inclusive technology that supports interaction between individuals with different sensory abilities. We refer to this as crossmodal collaboration. In particular, we investigate technologies that engage more than one sense across modalities, such as combining graphics, data sonification and haptic feedback, as a potential means for supporting richer and more meaningful interactions between sighted and visually impaired users with and through technology. Our prior work in this context includes developing novel technology for domains spanning diagram editing (Metatla et al. 2012), information seeking (Sahib et al. 2013), music and sound production (Metatla et al. 2016), and more recently learning and teaching (Metatla 2017).