As robots and autonomous systems move away from laboratory setups towards complex real-world scenarios, both the perception capabilities of these systems and their abilities to acquire and model semantic information must become more powerful. The autonomous acquisition of information, the extraction of semantic models, and exploration strategies for deciding where and how to acquire the most relevant information pertinent to a specific semantic model are the research foci of an annual series of workshops at ICRA, called Semantic Perception, Mapping and Exploration (SPME).
Semantic perception for intelligent systems such as robots has seen considerable progress recently, with many new and interesting techniques being developed in parallel by different research groups. Moreover, with the advent of inexpensive and accurate 3D imaging sensors, there has been an explosion of interest in 3D point clouds across a broad range of disciplines. Reflecting this trend, this edition of the workshop series puts a special focus on (3D) semantic perception.
While much work on 3D perception is freely available, and initiatives such as PCL and 3DTK are enabling the community to build on previous results and push the frontiers further, open questions remain. No consensus has yet emerged on the standard solutions, features, and algorithms needed for semantic perception, mapping, and exploration, or on whether current approaches are viable in the long run. This workshop provides a venue for discussing the definition and uses of semantic information for and by perception, and for identifying the most important directions of future research and the new tools needed to support it.
This edition of the annual workshop series focuses on (3D) semantic perception. Topics of interest include, but are not necessarily limited to:
- Extracting semantic information from visual sensors, 3D sensors, or different sensor modalities
- Semantic scene interpretation (and decomposition into parts of interest)
- Semantic object perception (incl. localization, identification, anchoring)
- Categorization or classification of objects, rooms, and environments
- Modeling (of objects and environments), registration using semantic information, etc.
- Specifying and exploiting background knowledge for semantic perception and mapping