I am part of the Program Committee for Audio Mostly’2017, an interdisciplinary conference on the design and experience of interaction with sound. This year’s conference theme is “Augmented and Participatory Sound/Music Experiences”. The conference will be collocated with the 3rd Web Audio Conference (WAC) “Collaborative Audio” (21-23 August 2017), with one day, Wednesday the 23rd, shared between the two events. Check out the conference website for more details. Continue reading “I am on the Program Committee for Audio Mostly’2017 Conference”
I will be at CHI’2017 conference, presenting a paper on HaptiSonic artefacts as part of the Things of Design workshop. The workshop explores design research, as a growing mode of research within the HCI community, and the role of the artifact in generating knowledge outcomes from research through design (RtD). Continue reading “HaptiSonic Artefacts: short paper accepted at CHI’2017 workshop”
Applications are invited for a Research Associate/Senior Research Associate position in the Bristol Interaction Group (BIG Lab) within the Department of Computer Science at the University of Bristol. The role is part of the EPSRC project “Crossmodal Interactive Tools for Inclusive Learning”, which aims to investigate novel multisensory learning and teaching technologies that support inclusive interaction between visually impaired and sighted children in mainstream schools. Continue reading “Postdoctoral position available in multisensory interaction and education”
Welcome to Mohammed Alshahrani who joins us to start his PhD on examining multimodal and crossmodal interaction to improve the accessibility and usability of mobile technology for the elderly population. Continue reading “Welcome to New PhD Student Mohammed Alshahrani”
Our journal paper entitled “Audio-Haptic Interfaces for Digital Audio Workstations: A Participatory Design Approach” has been accepted for publication in the Journal on Multimodal User Interfaces.
We examine how auditory displays, sonification and haptic interaction design can support visually impaired sound engineers, musicians and audio production specialists in accessing digital audio workstations. We describe a user-centred approach that incorporates various participatory design techniques to help make the design process accessible to this population of users. We also outline the audio-haptic designs that result from this process and reflect on the benefits and challenges that we encountered when applying these techniques in the context of designing support for audio editing.
Our journal paper entitled “Sonification of reference markers for auditory graphs: Effects on non-visual point estimation tasks” has been accepted for publication in PeerJ.
Research has suggested that adding contextual information such as reference markers to data sonification can improve interaction with auditory graphs. This paper presents results of an experiment that contributes to quantifying and analysing the extent of such benefits for an integral part of interacting with graphed data: point estimation tasks. We examine three pitch-based sonification mappings — pitch-only, one-reference, and multiple-references — that we designed to provide information about distance from an origin. We assess the effects of these sonifications on users’ performance when completing point estimation tasks in a between-subjects experimental design against visual and speech control conditions. Results showed that the addition of reference tones increases users’ accuracy with a trade-off in task completion times, and that the multiple-references mapping is particularly effective when dealing with points that are positioned at the midrange of a given axis.
Our paper entitled “Tap the ShapeTones: Exploring the effects of crossmodal congruence in an audio-visual interface” has been accepted at CHI’2016.
Abstract: There is growing interest in the application of crossmodal perception to interface design. However, most research has focused on task performance measures and often ignored user experience and engagement. We present an examination of crossmodal congruence in terms of performance and engagement in the context of a memory task involving audio, visual, and audio-visual stimuli. Participants in a first study showed improved performance when using a congruent visual mapping, which was cancelled by the addition of audio to the baseline conditions, and a subjective preference for the audio-visual stimulus that was not reflected in the objective data. Based on these findings, we designed an audio-visual memory game to examine the effects of crossmodal congruence on user experience and engagement. Results showed higher engagement levels with congruent displays, with some reported preference for the potential challenge and enjoyment that an incongruent display may support, particularly with increased task complexity.
A plug-in which makes Digital Audio Workstations accessible to audio producers with visual impairments, allowing them to record and edit digital audio by using sound to represent signal levels. Project Poster
ShapeTones is a memory game that we have developed for the iPhone/iPad. We designed it so that it could be played using audio, visuals or audio-visual feedback. Our aim was to make an accessible game that can be played by everyone including people with vision or hearing impairments.
The event was run “in partnership by Nesta and SoundLab, a Digital R&D for the Arts project”, and “offered the opportunity to try a range of cutting-edge digital music making technologies, watch performances, play in a pop-up band and debate about how music making tools can be made even more accessible.”
Our Accessible Peak Level Meter, which won the award for Best Solution by a Large Organisation at the Americans with Disabilities Act 25th Anniversary event organised by AT&T and NYU, has been featured in AccessWorld Magazine, the online magazine of the American Foundation for the Blind.