Human-Centred Computing (HCC).
Formal and Cognitive Foundations.
The principal aim of this line of research is to investigate:
- the formal and computational foundations of AI and cognitive technologies, with an emphasis on human-centred knowledge representation, semantics, commonsense reasoning, the integration of reasoning and learning, and visuospatial representation and reasoning. A particular focus is on integrated commonsense reasoning and learning about space and motion in human-scale embodied multimodal interaction.
- behavioural / empirical methods in cognitive science, psychology, and neuroscience aimed at investigating human intelligence from the viewpoints of embodiment, multimodal interaction, and visuospatial thinking. A special emphasis is on visuospatial cognition and computation, e.g., against the backdrop of visual perception, high-level event perception, motion, and narrative-driven perceptual sensemaking.
Minds. Experiences. Technologies.
Key drivers and thematic points of interest include:
- Human-Centred Technology Design
- Embodied Multimodal Interaction
- Commonsense   /   Space.  Action.  Motion.  Interaction.
- Applications:   Interpretation and Synthesis of Embodied Cognitive Experiences
(e.g., in autonomy, social interaction, creative design, clinical diagnostics, assisted learning)
Human-Centred Technology Design
Design and development of artificial cognitive technologies where human-centred modelling, interpretation, simulation, and synthesis of embodied cognitive experiences (e.g., encompassing multimodal perception and interaction) are critical. We particularly encourage work emphasising the development of general methods for human-centred AI, keeping in mind: interpretability, generality or domain-independence, elaboration tolerance, benchmarking, and reusability.
Embodied Multimodal Interaction
The multimodality alluded to stems from the inherent synergistic value of integrated processing and interpretation of the range of data sources common to cognitive interaction systems, computational cognition, and human-computer interaction scenarios. Mutually interacting modalities of interest include, but are not limited to:
- vision, gesture, speech, language, visual attention, facial expressions, tactile interactions, olfaction
- human expert guided (fine-grained) event segments
(e.g. coming from behavioural or environmental psychologists, designers, annotators, crowd-sourced data)
- qualitative analysis based on dialogic components and think-aloud protocols
From a data standpoint, the focus is on: high-precision visual fixation with stationary and mobile eye-tracking, immersive / virtual-reality eye-tracking, high-resolution EEG, full-body precision motion tracking, human pose and gesture data, biological motion data, haptic touch data, auditory stimuli (e.g., focussing on speech acts, speaker diarisation), and basic physiological markers. Behavioural data collected via think-aloud protocols is also in focus. Technically, this translates to data such as:
- Visuospatial Imagery, e.g.: image, video, point-clouds; crowd-sourced data, survey data
- Movement and Interaction Data, e.g.: indoor or outdoor settings pertaining to the motion of people / things at arbitrary spatial and temporal scales; sensory-motor data about the interaction of people with things (e.g., in activities of everyday living)
- Neurophysiological and other Human Behaviour Data, e.g.: eye-tracking and related human behaviour data in psychophysical, oculomotor, and VR experiments on human vision; fMRI and EEG data occurring in neuroscience and brain-computer interfaces
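As a minimal illustration of the kind of processing such eye-tracking data typically undergoes, the sketch below implements a standard dispersion-threshold (I-DT) fixation detector; the thresholds and the interface are hypothetical choices, not a method prescribed here.

```python
def detect_fixations(samples, max_dispersion=1.0, min_samples=5):
    """Dispersion-threshold (I-DT) fixation detection sketch.

    samples: list of (x, y) gaze positions at a fixed sampling rate.
    Returns a list of (start_index, end_index) fixation windows whose
    dispersion (x-range + y-range) stays within max_dispersion.
    """
    fixations = []
    start = 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        xs = [p[0] for p in samples[start:end]]
        ys = [p[1] for p in samples[start:end]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
            # grow the window while dispersion stays under threshold
            while end < len(samples):
                xs.append(samples[end][0])
                ys.append(samples[end][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                end += 1
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1
    return fixations
```

In practice the dispersion and duration thresholds are calibrated to the tracker's sampling rate and the stimulus geometry; the resulting fixation windows are the usual input to higher-level analyses such as areas-of-interest or scanpath comparison.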
Commonsense   /   Space. Action. Motion. Interaction.
- Space and Language
- Deep Semantics - Space and Motion
- Vision and Narrative
- Integration of Vision and AI
- Integration of Reasoning and Learning
- Data-Centred Methods for Psychology
Space and Language  /  spatio-linguistic grounding of visual perception and embodied interaction; basic ontological building blocks of human-centred visuospatial computing in diverse areas
Deep (Visuospatial) Semantics  /  denoting the existence of systematic formalisation(s) and declarative programming methods (e.g., pertaining to space and motion) supporting query answering, relational learning, non-monotonic abductive inference, and embodied simulation.
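As a toy illustration of declarative-style query answering over qualitative spatial relations, the sketch below classifies a small fragment of RCC-style topological relations between axis-aligned rectangles and answers "which pairs stand in relation R" queries over a scene; the relation names, scene format, and function names are illustrative assumptions, not the formalisation referred to above.

```python
def rcc_relation(a, b):
    """Classify the topological relation between two axis-aligned
    rectangles a, b = (xmin, ymin, xmax, ymax); a small, illustrative
    fragment of RCC-style relations."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    if ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0:
        return "dc"        # disconnected
    if a == b:
        return "eq"        # equal
    if ax0 >= bx0 and ay0 >= by0 and ax1 <= bx1 and ay1 <= by1:
        return "in"        # a inside b
    if bx0 >= ax0 and by0 >= ay0 and bx1 <= ax1 and by1 <= ay1:
        return "contains"  # b inside a
    return "po"            # partially overlapping / touching

def holds(relation, scene):
    """Query answering: yield all ordered object pairs in `relation`."""
    for p, pa in scene.items():
        for q, qa in scene.items():
            if p != q and rcc_relation(pa, qa) == relation:
                yield (p, q)
```

For example, with a scene `{"cup": (2, 2, 3, 3), "table": (0, 0, 10, 5)}`, the query `holds("in", scene)` yields the pair `("cup", "table")`; the declarative methods alluded to above generalise this idea to motion, change, and abductive explanation rather than single-snapshot queries.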
Vision and Narrative  /  computational models of narrative, and narrative-based semantic interpretation and sensemaking of human behaviour data, predicated on the narratological notion that "sensemaking is an act of story formation".
Integration of Vision and AI  /  systematically formalised integrative AI methods & tools (e.g., combining visual processing, knowledge representation & reasoning, and learning).
Integration of Reasoning and Learning  /  reasoning-learning synergies aimed at exploiting "reasoning to learn" and "learning to reason" settings within cognitive systems; relational causal knowledge discovery motivated by semantic interpretation of human behaviour for the experimental sciences.
Data-Centred Methods for Psychology  /  AI + ML + Data Mining for analysing "big data" emanating from human-behavioural research in psychology and, more broadly, the humanities and social sciences. The aim here is twofold: not only to better interpret human behaviour, but also to use it so that technological artefacts may better learn / adapt to contextual and normative aspects of human interactions and expectations.
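As a minimal sketch of the first step in such behavioural data analysis, the example below aggregates per-condition reaction times from hypothetical experiment records (the trial data, field names, and rounding are invented for illustration only).

```python
from statistics import mean, stdev

# Hypothetical trial records from a behavioural experiment:
# (participant, condition, reaction_time_ms)
trials = [
    ("p1", "congruent", 412), ("p1", "incongruent", 547),
    ("p2", "congruent", 388), ("p2", "incongruent", 601),
    ("p3", "congruent", 430), ("p3", "incongruent", 555),
]

def summarise(trials):
    """Aggregate reaction times per condition: a typical first step
    before any model-based or relational analysis."""
    by_condition = {}
    for _, condition, rt in trials:
        by_condition.setdefault(condition, []).append(rt)
    return {c: (round(mean(rts), 1), round(stdev(rts), 1))
            for c, rts in by_condition.items()}
```

Descriptive aggregates of this kind feed into the interpretation side of the twofold aim above; the knowledge-discovery side would layer relational and causal analysis on top of the same records.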
Applications   /   E.g., Designing Embodied Cognitive Experiences
(Example) Application domains of interest include autonomous vehicles, multimodal social interaction in robotics, creative design technologies, and clinical diagnostic and intervention tools. Furthermore, embodied design thinking and human-centred design engineering from the viewpoints of diverse design domains are also of interest: design areas under consideration include AI for architecture and built-environment design, product design, (digital) visuoauditory narrative media design, and visual art design in its many diverse forms. The "synthesis" of embodied visual, visuolocomotive, and visuoauditory cognitive experiences from the viewpoints of evidence-based architectural design and media design (e.g., film, animation, immersion) is a topic of ongoing interest.