RwoR: Generating Robot Demonstrations from Human Hand Collection for Policy Learning without Robot
Liang Heng, Xiaoqi Li, Shangqing Mao, Jiaming Liu, Ruolin Liu, Jingli Wei, Yu-Kai Wang, Yueru Jia, Chenyang Gu, Rui Zhao, Shanghang Zhang, Hao Dong
ViTacFormer: Learning Cross-Modal Representation for Visuo-Tactile Dexterous Manipulation
Liang Heng, Haoran Geng, Kaifeng Zhang, Pieter Abbeel, Jitendra Malik
[Project] [PDF] [Code]
CrayonRobo: Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation
Xiaoqi Li, Lingyun Xu, Mingxu Zhang, Jiaming Liu, Yan Shen, Iaroslav Ponomarenko, Jiahui Xu, Liang Heng, Siyuan Huang, Shanghang Zhang, Hao Dong
3D Weakly Supervised Visual Grounding at Category and Instance Levels
Xiaoqi Li, Jiaming Liu, Nuowei Han, Liang Heng, Yandong Guo, Hao Dong, Yang Liu
MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation
Rongyu Zhang, Menghang Dong, Yuan Zhang, Liang Heng, Xiaowei Chi, Gaole Dai, Li Du, Yuan Du, Shanghang Zhang
[PDF]
Last updated: July 2025