PhD Studentship available

PhD Studentship available to work as part of the EPSRC research project: “Crossmodal Interactive Tools for Inclusive Learning” (EP/N00616X/2). The project aims to investigate novel multisensory learning and teaching technologies that support inclusive education for visually-impaired and sighted children in mainstream schools. We use an iterative user-centred approach, combining participatory design activities with empirical research into multimodal and crossmodal interaction, to find out how different senses can be effectively integrated with visual capabilities to support inclusive learning. We are engaged with local schools to design, implement and research augmented multisensory tools for teaching and learning purposes, focusing on accommodating curriculum requirements and the social processes surrounding collaborative learning.

I am on the Program Committee for Audio Mostly’2017 Conference

I am part of the Program Committee for Audio Mostly’2017, an interdisciplinary conference on the design and experience of interaction with sound. This year’s conference theme is “Augmented and Participatory Sound/Music Experiences”. The conference will be collocated with the 3rd Web Audio Conference (WAC) “Collaborative Audio” (21-23 August 2017), with one day, Wednesday the 23rd, shared between the two events. Check out the conference website for more details.

HaptiSonic Artefacts: short paper accepted at CHI’2017 workshop

I will be at the CHI’2017 conference, presenting a paper on HaptiSonic artefacts as part of the Things of Design workshop. The workshop explores design research, a growing mode of research within the HCI community, and the role of the artefact in generating knowledge outcomes from research through design (RtD).

Postdoctoral position available in multisensory interaction and education

Applications are invited for a Research Associate/Senior Research Associate position in the Bristol Interaction Group (BIG Lab) within the Department of Computer Science at the University of Bristol. The role is part of the EPSRC project “Crossmodal Interactive Tools for Inclusive Learning”, which aims to investigate novel multisensory learning and teaching technologies that support inclusive interaction between visually-impaired and sighted children in mainstream schools.

Welcome to New PhD Student Mohammed Alshahrani

Welcome to Mohammed Alshahrani, who joins us to start his PhD examining multimodal and crossmodal interaction to improve the accessibility and usability of mobile technology for the elderly population.

Journal Paper Accepted at JMUI

Our journal paper entitled “Audio-Haptic Interfaces for Digital Audio Workstations: A Participatory Design Approach” has been accepted for publication in the Journal on Multimodal User Interfaces.


We examine how auditory displays, sonification and haptic interaction design can support visually impaired sound engineers, musicians and audio production specialists in accessing digital audio workstations. We describe a user-centred approach that incorporates various participatory design techniques to help make the design process accessible to this population of users. We also outline the audio-haptic designs that result from this process and reflect on the benefits and challenges that we encountered when applying these techniques in the context of designing support for audio editing.

Journal paper accepted at PeerJ

Our journal paper entitled “Sonification of reference markers for auditory graphs: Effects on non-visual point estimation tasks” has been accepted for publication in PeerJ.

Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings: pitch-only, one-reference, and multiple-references, which we designed to provide information about distance from an origin. We assess the effects of these sonifications on users’ performance when completing point estimation tasks in a between-subject experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users’ accuracy with a trade-off in task completion times, and that the multiple-references mapping is particularly effective when dealing with points positioned at the midrange of a given axis.
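To illustrate the general idea behind the three mapping conditions, here is a minimal sketch of a pitch-based sonification with optional reference tones. The frequency range, the exponential value-to-pitch mapping, and all function names are my own assumptions for illustration, not the parameters used in the paper:

```python
# Hypothetical sketch of pitch-based sonification mappings with
# reference tones; frequency range and mapping curve are assumptions.

F_MIN, F_MAX = 220.0, 880.0  # two-octave range, A3 to A5 (assumed)

def value_to_pitch(v: float) -> float:
    """Map a normalised data value v in [0, 1] to a frequency in Hz.

    An exponential mapping is used so that equal data steps correspond
    to equal musical intervals (pitch is perceived logarithmically).
    """
    if not 0.0 <= v <= 1.0:
        raise ValueError("value must be normalised to [0, 1]")
    return F_MIN * (F_MAX / F_MIN) ** v

def sonify_point(v: float, mapping: str) -> list[float]:
    """Return the tone sequence (in Hz) for one data point under a mapping.

    'pitch-only'          -> just the data tone
    'one-reference'       -> origin reference tone, then the data tone
    'multiple-references' -> origin, midpoint and maximum references,
                             then the data tone
    """
    data_tone = value_to_pitch(v)
    if mapping == "pitch-only":
        return [data_tone]
    if mapping == "one-reference":
        return [value_to_pitch(0.0), data_tone]
    if mapping == "multiple-references":
        return [value_to_pitch(0.0), value_to_pitch(0.5),
                value_to_pitch(1.0), data_tone]
    raise ValueError(f"unknown mapping: {mapping}")
```

Under this sketch, a listener estimating a point's position can compare the data tone against the fixed reference tones, which is one plausible reading of why reference markers improve estimation accuracy.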

[NEW GRANT] Crossmodal Interactive Tools for Inclusive Learning (CrITIcaL) – 2016/2021

Sponsored by EPSRC Early Career Fellowship

Role: Principal Investigator

I have been awarded a five-year Early Career Fellowship to research and develop interactive learning tools to make mixed classrooms more inclusive of visually impaired students.

I will be exploring answers to questions such as: how do people learn together when they have access to different sets of sensory modalities? And, how can we exploit crossmodal interaction to design more inclusive collaborative user interfaces?

Full paper accepted at CHI’2016

Our paper entitled “Tap the ShapeTones: Exploring the effects of crossmodal congruence in an audio-visual interface” has been accepted at CHI’2016.

Abstract: There is growing interest in the application of crossmodal perception to interface design. However, most research has focused on task performance measures and often ignored user experience and engagement. We present an examination of crossmodal congruence in terms of performance and engagement in the context of a memory task of audio, visual, and audio-visual stimuli. Participants in a first study showed improved performance when using a congruent visual mapping, which was cancelled by the addition of audio to the baseline conditions, and a subjective preference for the audio-visual stimulus that was not reflected in the objective data. Based on these findings, we designed an audio-visual memory game to examine the effects of crossmodal congruence on user experience and engagement. Results showed higher engagement levels with congruent displays, with some reported preference for the potential challenge and enjoyment that an incongruent display may support, particularly at increased task complexity.