Artificial and Human Intelligence research addresses the formal & cognitive foundations for human-centred computing, and the human-centred design,
development, and usability of cognitive technologies aimed at human-in-the-loop assistance & empowerment in decision-making, planning,
creative-technical problem-solving, and automation.
The workshop features invited and contributed research advancing the formal and cognitive
foundations of human-centred computing, particularly from the viewpoints of theories and methods developed within the fields of:
- Artificial Intelligence
- Cognitive Science - Psychology
- Cognitive Neuroscience
- Visuospatial Cognition and Computation
- Human-Computer Interaction
- Design Science - Design Cognition
A detailed research statement relevant to the workshop is available at:
Related initiatives Open for Application
Institute on Artificial and Human Intelligence (Sweden)
Apply > www.codesign-lab.org/institute2020/
Application domains being addressed include, but are not limited to:
- Autonomous driving
- Embodied cognitive vision for robotics
- Social interaction in cognitive robotics
- Architectural design cognition
- Clinical diagnostic technologies
- Social signal processing
- Technology-assisted learning
- Digital media design
- Visual art design
- Vision and social media
- Cognitive moving image studies
- Visual art, cultural heritage, fashion
- Vision and VR, AR
- Vision for psychology / behavioural studies
- Vision for social sciences, humanities
- Vision for industrial applications
Technical Focus   /   Human-Centred Computing (HCC). Formal and Cognitive Foundations.
The principal emphasis of the workshop is on:
- the formal and computational foundations of AI and cognitive technologies, with a principal emphasis on human-centred knowledge representation, semantics, commonsense reasoning, integration of reasoning & learning, and visuospatial representation and reasoning. A particular focus is on integrated commonsense reasoning & learning about space and motion in human-scale embodied multimodal interaction.
- behavioural / empirical methods in cognitive science & psychology and neuroscience aimed at investigating human intelligence from the viewpoints of embodiment, multimodal interaction, and visuospatial thinking.
A special emphasis of the workshop is on visuospatial cognition and computation, e.g., against the backdrop of aspects pertaining to visual perception, high-level event perception, motion, and narrative-driven perceptual sensemaking.
Particular themes of high interest solicited by the workshop include:
- knowledge representation - semantics - commonsense - declarative methods
- semantic interpretation of multimodal human behaviour data
- integration of reasoning and learning - explainability - neurosymbolism
- synergy of computational and behavioural / empirical methods
- data-centred methods for psychology - psychology-driven AI - ``in the wild'' naturalistic experimentation
- embodiment - visuospatial thinking - motion & interaction - visuospatial perception and cognition
- design science - design cognition and computation - designing embodied cognitive experiences
The workshop particularly emphasises:  (i). ``In-the-wild'' ecologically valid naturalistic (embodied multimodal interaction) settings;  (ii). Bottom-up interdisciplinarity, e.g., combining methods in AI and cognitive psychology; and  (iii). Design-thinking as a human-centred perspective for engineering (``usable'') cognitive technologies aiming to assist, empower, and augment human capability.
We welcome contributions addressing the workshop themes from formal, cognitive, computational,
engineering, empirical, psychological, and philosophical perspectives. Select indicative topics include:
- knowledge representation - semantics
- reasoning about space, actions, and change
- commonsense reasoning
- computational cognitive systems
- embodied visuoauditory perception
- declarative spatial reasoning
- deep (visuo-spatial) semantics
- integrated reasoning and learning
- non-monotonic reasoning
- visual computing
- cognitive vision
- commonsense scene understanding
- semantic question-answering with image, video, point-clouds
- concept learning and inference from visual stimuli
- explainable visual interpretation
- learning relational knowledge from dynamic visuo-spatial stimuli
- knowledge-based vision systems
- ontological modelling for scene semantics
- motion representation and reasoning (e.g., for embodied control)
- attention, anticipation, action
- declarative reasoning about space and motion
- computational models of narrative
- narrative models for storytelling (from stimuli)
- vision and linguistic summarization (e.g., of social interaction, human behavior)
- vision, AI, and eye-tracking
- high-level visual perception and eye-tracking
- egocentric vision, perception
- visual perception and embodiment
- biological and artificial vision
- biological motion
- visuoauditory perception
- multimodal media annotation tools
WORKSHOP PARTICIPATION / CONTRIBUTION POSSIBILITIES.
We invite contributions or participation in whatever form is feasible for those interested.
Please email the workshop contact to inquire about possibilities.
I. COVID / CORONA TIMES
The workshop will primarily feature invited talks based on recently published and / or emerging lines of research.
Furthermore, presentations by young / early career researchers will also be featured. If you would like to propose a presentation
or have some other idea to enrich the workshop, please feel welcome to contact the workshop chair.
You are also welcome to follow the PRE COVID / CORONA TIMES guidelines should you so desire (see below).
II. PRE COVID / CORONA TIMES
Submitted papers must be formatted according to the ECAI 2020 guidelines.
Contributions may be submitted as:
- Technical papers     (max 7 pages content + max 1 page only references)
- Position / vision statements     (max 5 pages content + max 1 page only references)
- Work in progress reports or ``new'' project / initiative positioning     (max 5 pages content + max 1 page only references)
- Poster abstract     (e.g., for early stage PhD candidates)     (max 2 pages + max 1 page only references)
- System demonstrations     (max 2 pages + max 1 page only references)
Each contribution type will be allocated an adequate presentation duration, to be determined by the workshop organisation.
Poster contributors are additionally expected to bring their poster for presentation and discussion during a poster session.
Submissions should include a label describing the category of submission (as per the five contribution types above) as a footnote on the first page of the paper.
All submissions should be made (in English) electronically as PDF documents via the EasyChair paper submission site.