Touch is a crucial sensor modality for both humans and robots, as it allows us to directly sense object properties and interactions with the environment. Recently, touch sensing has become more prevalent in robotic systems, thanks to the increased accessibility of inexpensive, reliable, and high-resolution tactile sensors and skins. Just as the widespread availability of digital cameras accelerated the development of computer vision, we believe that we are rapidly approaching a new era of computational science dedicated to touch processing.
With the increased popularity of generative AI and large language models (LLMs) that incorporate multimodal data (vision, audio, text), touch remains a notably absent sensing modality, yet it is essential for enabling AI systems to interact with and understand the physical world in a human-like manner. To this end, emerging touch sensors, traditionally focused on measuring forces and surface geometry, are beginning to incorporate additional modalities found in human skin, such as temperature, deformation, and vibration. This advancement brings touch sensing closer to the richness of human perception and positions it as a key component of future multimodal AI systems.
However, as the field gradually transitions from hardware development to real-world applications, a key question is becoming critically important: how do we make sense of touch? While the output of modern high-resolution tactile sensors and skins resembles images from computer vision, touch presents challenges unique to the modality. Unlike images, touch data is inherently temporal, intrinsically active, and highly local: only a small subset of 3D space is sensed through a 2D contact surface. We believe that AI/ML will play a critical role in processing touch efficiently as a sensing modality. This raises important questions about which computational models are best suited to leverage the unique structure of touch, much as convolutional neural networks exploit the spatial structure of images.
Advances in touch processing will benefit a wide range of fields, including tactile sensing and haptics. For instance, better tactile processing, from contact with the environment to interpretation by the system, will enable embodied AI and robotic applications in unstructured environments, such as agricultural robotics and telemedicine. Understanding touch will also help provide sensory feedback to amputees through sensorized prostheses and enhance future AR/VR systems. Furthermore, multimodal sensing combines data from different sensor types, such as vision, sound, and touch, to build a richer understanding of the environment. This integration improves the ability of embodied AI systems to interpret complex scenarios, adapt to dynamic tasks, and operate more safely and efficiently.
Important dates
- Submission deadline: 22 August 2025 (AoE, Anywhere on Earth)
- Notification: 22 September 2025 (AoE)
- Camera ready: 29 September 2025 (AoE)
Schedule
Time (Local Time, CST) | Title | Speaker |
---|---|---|
08:50 - 09:00 | Opening Remarks | Organizers |
09:00 - 09:30 | Invited Talk 1 | |
09:30 - 10:00 | Lightning Talks | |
10:00 - 10:30 | Coffee Break | |
10:00 - 11:00 | Poster and Demo Session | |
11:00 - 11:30 | Invited Talk 2 | |
11:30 - 12:00 | Invited Talk 3 | |
12:00 - 13:30 | Lunch Break | |
13:30 - 14:00 | Invited Talk 4 | |
14:00 - 14:30 | Invited Talk 5 | |
14:30 - 15:00 | Invited Talk 6 | |
15:00 - 15:30 | Coffee Break | |
15:00 - 16:00 | Poster and Demo Session | |
16:00 - 16:30 | Invited Talk 7 | |
16:30 - 17:00 | Invited Talk 8 | |
17:00 - 17:50 | Panel Discussion | |
17:50 - 18:00 | Award Ceremony | |
Speakers
- Matei Ciocarlie (Columbia University, United States)
- Randy Flanagan (Queen’s University, Canada)
- Yu She (Purdue University, United States)
- Carmelo (Carlo) Sferrazza (UC Berkeley, United States)
- Monroe Kennedy (Stanford University, United States) (Confirmed after Proposal Deadline)
- Lorenzo Jamone (University College London, United Kingdom)
- Harold Soh (National University of Singapore, Singapore)
- Merle Fairhurst (TU Dresden, Germany)
Organizers
- Roberto Calandra (TU Dresden, Germany)
- Haozhi Qi (UC Berkeley, United States)
- Perla Maiolino (University of Oxford, United Kingdom)
- Mike Lambeta (Meta AI, United States)
- Jitendra Malik (UC Berkeley, United States)
- Yasemin Bekiroglu (Chalmers University of Technology, Sweden)
Call for Papers
We welcome submissions focused on all aspects of touch processing, including but not limited to the following topics:
- Computational approaches to process touch data.
- Learning representations from touch and/or multimodal data.
- Tools and libraries that can lower the barrier of touch sensing research.
- Collection of large-scale tactile datasets.
- Applications of touch processing.
We encourage relevant works at all stages of maturity, ranging from initial exploratory results to polished full papers. Accepted papers will be presented in the form of posters, with outstanding papers being selected for spotlight talks.
Contacts
For questions related to the workshop, please email workshop@touchprocessing.org.