Consolidated References
104 references
[1] Fang, H.-S., & Agrawal, P., et al. (2025). DEXOP: Dexterous Manipulation with Passive Exoskeleton. IEEE RA-L. https://arxiv.org/abs/2509.04441
[2] Zheng, R., et al. (2026). EgoScale: Egocentric Video Pretraining for Scalable Robot Learning. arXiv. https://research.nvidia.com/labs/gear/egoscale/
[3] Yang, B., et al. (2026). AoE: Always-on Egocentric Data Collection for Robot Learning. arXiv.
[4] Kareer, S., et al. (2024). EgoMimic: Scaling Imitation Learning via Egocentric Video. arXiv. https://arxiv.org/abs/2410.24221
[5] Dan, P., et al. (2025). X-Sim: Cross-Embodiment Simulation for Robot Learning. CoRL 2025 Oral. https://portal-cornell.github.io/X-Sim/
[6] Lum, T. G. W., et al. (2025). Human2Sim2Robot: Dexterous Manipulation Transfer via Simulation. CoRL 2025.
[7] Liu, V., et al. (2025). EgoZero: Robot Policy Learning from Egocentric Video without Robot Data. arXiv.
[8] Chen, H., et al. (2025). VidBot: Learning Robot Manipulation from Internet Videos. CVPR 2025.
[9] Ye, S., et al. (2025). LAPA: Latent Action Pretraining from Videos. ICLR 2025.
[10] Hoque, R., et al. (2025). EgoDex: A Large-Scale Egocentric Dexterous Manipulation Dataset. arXiv.
[11] BuildAI (2025). Egocentric-10K: 10,000 Hours of Factory Egocentric Video. Hugging Face.
[12] Grauman, K., et al. (2022). Ego4D: Around the World in 3,000 Hours of Egocentric Video. CVPR 2022.
[13] SJTU (2024). AirExo: Low-Cost Exoskeletons for Learning Whole-Arm Manipulation in the Wild. ICRA 2024.
[14] Chi, C., et al. (2024). Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots. RSS 2024. https://umi-gripper.github.io/
[15] Sunday Robotics (2025). ACT-1: A Robot Foundation Model. Technical Report. https://www.sundayrobotics.com/act-1
[16] Park, M., & Park, Y.-L., et al. (2024). Stretchable Glove for Hand Motion Estimation. Nature Communications. https://www.nature.com/articles/s41467-024-50101-w
[17] Yin, J., et al. (2025). OSMO: A Large-Scale Tactile Glove for Human-to-Robot Manipulation Transfer. arXiv. https://arxiv.org/abs/2512.08920
[18] Liu, Q., et al. (2025). VTDexManip: Visual-Tactile Dexterous Manipulation Dataset. ICLR 2025.
[19] Ren, T.-A., et al. (2025). TacCap: FBG-Based Optical Tactile Thimble. arXiv.
[20] Zhang, H., et al. (2025). DOGlove: Open-Source Haptic Feedback Glove. RSS 2025.
[21] Xu, M., et al. (2025). DexUMI: Universal Manipulation Interface for Dexterous Hands. arXiv.
[22] Si, Z., et al. (2025). ExoStart: Exoskeleton-Aided Dexterous Manipulation from One Demo. arXiv.
[23] SJTU (2025). AirExo-2: In-the-Wild Data Collection for Robot Learning. CoRL 2025 Oral.
[24] Hoque, R., et al. (2025). EgoDex: Egocentric Dexterous Manipulation Dataset. arXiv.
[25] BuildAI (2025). Egocentric-10K: Factory Egocentric Video Dataset. Hugging Face.
[26] PhysBrain (2025). Egocentric2Embodiment Pipeline. arXiv.
[27] Ha, H., et al. (2024). UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers. arXiv.
[28] Sharma, A., et al. (2025). Self-supervised perception for tactile skin covered dexterous hands (Sparsh-skin). arXiv. https://arxiv.org/abs/2505.11420
[29] Zhao, Z., et al. (2025). Embedding high-resolution touch across robotic hands enables adaptive human-like grasping (F-TAC Hand). Nature Machine Intelligence. https://arxiv.org/abs/2412.14482
[30] Ye, Q., et al. (2026). Visual-Tactile Learning for Dexterous Manipulation. Science Robotics.
[32] Kim, T., et al. (2026). UMI-FT: Compliant Manipulation via Universal Manipulation Interface with Force/Torque Sensing. arXiv.
[33] Kim, M., et al. (2025). EquiTac: Tactile Equivariance for Efficient Manipulation Learning. arXiv.
[34] Almeida, J. D., Falotico, E., Laschi, C., & Santos-Victor, J. (2025). The Role of Touch: Towards Optimal Tactile Sensing Distribution in Anthropomorphic Hands for Dexterous In-Hand Manipulation. IEEE ICNSC 2025. https://arxiv.org/abs/2509.14984
[35] Zhang, N., Ren, J., Dong, Y., Gu, G., & Zhu, X. (2025). Soft Robotic Hand with Tactile Palm-Finger Coordination (TacPalm SoftHand). Nature Communications 16:2395. https://doi.org/10.1038/s41467-025-57741-6
[36] Shaw, K., et al. (2023/2024). VideoDex: Learning Dexterous Manipulation from Internet Videos. CoRL 2023 / IJRR 2024.
[37] Ghunaim, Y., et al. (2025). Human2Bot: Zero-Shot Robot Learning from Human Videos. Autonomous Robots.
[38] Sunday Robotics (2025). ACT-1: Humanoid Hand for Human-Level Manipulation. Sunday Robotics Blog.
[39] Habilis Team (2026). Habilis-β: On-Device VLA for Sustained Autonomous Operation. arXiv.
[40] Fang, H.-S., et al. (2025). DEXOP: Dexterous Manipulation with Passive Exoskeleton. IEEE RA-L.
[41] Goswami, R. G., et al. (2025). DexWM: Dexterous World Models from Human and Robot Data. arXiv.
[42] Yang, R., et al. (2025). EgoVLA: Egocentric Vision-Language-Action Model. arXiv.
[43] RoboWheel (2024). HOI-Based Cross-Embodiment Robot Learning. arXiv.
[45] DexH2R (2024). Task-Oriented Residual RL for Dexterous Manipulation Transfer. arXiv.
[46] Li, K., et al. (2025). ManipTrans: Efficient Bimanual Dexterous Manipulation Transfer via Residual Learning. CVPR 2025.
[47] Park, S., et al. (2025). Learning to Transfer Human Hand Skills for Robot Manipulations. arXiv:2501.04169.
[48] Chen, L. Y., et al. (2024). Mirage: Cross-Embodiment Zero-Shot Transfer via Cross-Painting. RSS 2024.
[49] H2R (2025). Human-to-Robot Video Augmentation for Pretraining. arXiv.
[50] Lepert, M., et al. (2025). Masquerade: Scaling In-the-Wild Human Video to Bimanual Robot Policy Learning. arXiv.
[51] Yin, J., et al. (2025). OSMO: A Large-Scale Tactile Glove. arXiv. https://arxiv.org/abs/2512.08920
[52] Liu, V., et al. (2025). EgoZero: Smart Glasses to Robot Policy. arXiv.
[56] Zhu, Z., et al. (2023). A Soft Robotic Glove with Integrated Sensing and Haptic Feedback. Engineering. https://doi.org/10.1016/j.eng.2023.01.011
[57] Zhang, H., et al. (2025). DOGlove: Dexterous Manipulation with a Low-Cost Open-Source Glove. RSS 2025.
[58] Shanghai AI Lab (2026). TAG: Teleoperation Accessible Glove. arXiv.
[59] Sundaram, S., et al. (2019). Learning the Signatures of the Human Grasp Using a Scalable Tactile Glove. Nature. (STAG)
[60] Han, S., et al. (2024). ViTaM: A High-Resolution Vision-Based Tactile Sensor. Nature Communications.
[61] Wang, C., et al. (2024). DexCap: Scalable and Portable Dexterous Manipulation Data Collection. RSS 2024.
[62] Liu, Q., et al. (2025). VTDexManip: Visual-Tactile Dataset for Dexterous Manipulation. ICLR 2025.
[63] Yang, R., et al. (2025). EgoVLA: Egocentric VLA with MANO. arXiv.
[64] Hoque, R., et al. (2025). EgoDex: Learning Dexterous Manipulation from Egocentric Video. arXiv.
[67] Liu, S. Q., & Adelson, E. H. (2024). A Passively Bendable, Compliant Tactile Palm with RObotic Modular Endoskeleton Optical (ROMEO) Fingers. ICRA 2024. https://arxiv.org/abs/2404.08227
[68] Pozzi, M., Malvezzi, M., Prattichizzo, D., & Salvietti, G. (2024). Actuated Palms for Soft Robotic Hands: Review and Perspectives. IEEE/ASME Transactions on Mechatronics, 29(2):902–921.
[69] ROBOTIS (2025). HX5-D20 Dexterous Robot Hand. https://www.robotis.com/
[70] Montana, D. J. (1988). The Kinematics of Contact and Grasp. IJRR, 7(3).
[71] Murray, R. M., Li, Z., & Sastry, S. S. (1994). A Mathematical Introduction to Robotic Manipulation. CRC Press.
[72] Mason, M. T., & Salisbury, J. K. (1985). Robot Hands and the Mechanics of Manipulation. MIT Press.
[73] Li, Y., et al. (2024). MultiGrasp: Multi-Object Grasping with Dexterous Hands. IEEE RA-L. https://arxiv.org/abs/2310.15599
[74] Li, H., et al. (2025). SeqMultiGrasp: Sequential Multi-Object Grasping via Diffusion. arXiv. https://arxiv.org/abs/2503.12579
[75] Wan, W., et al. (2025). SeqGrasp: Sequential Grasping via Opposition Space. arXiv. https://arxiv.org/abs/2503.11806
[76] Zheng, R., et al. (2026). EgoScale: Egocentric Video Pretraining. arXiv.
[77] Yang, B., et al. (2026). AoE: Always-on Egocentric Data Collection. arXiv.
[78] Dan, P., et al. (2025). X-Sim: Cross-Embodiment Simulation. CoRL 2025 Oral.
[79] Liu, V., et al. (2025). EgoZero: Robot Policy from Egocentric Video. arXiv.
[80] Liu, Q., et al. (2025). VTDexManip: Visual-Tactile Dataset. ICLR 2025.
[81] Chen, L. Y., et al. (2024). Mirage: Cross-Painting Transfer. RSS 2024.
[82] Ha, H., et al. (2024). UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers. arXiv.
[83] Zhou, Y., Lee, W. S., Gu, Y., & She, Y. (2026). Tactile-reactive gripper with an active palm for dexterous manipulation. npj Robotics, 4, 13. https://www.nature.com/articles/s44182-026-00079-y
[84] DexH2R (2024). Task-Oriented Residual RL for Dexterous Transfer. arXiv.
[86] Lum, T. G. W., et al. (2025). Human2Sim2Robot. CoRL 2025.
[89] Zheng, R., et al. (2026). EgoScale. arXiv.
[90] Li, K., et al. (2025). ManipTrans. CVPR 2025.
[92] DexH2R (2024). Task-Oriented Residual RL. arXiv.
[94] Kareer, S., et al. (2024). EgoMimic. arXiv.
[97] Ye, Q., et al. (2026). Visual-Tactile Learning. Science Robotics.
[98] DexH2R (2024). Residual RL. arXiv.
[99] Hoque, R., et al. (2025). EgoDex. arXiv.
[100] BuildAI (2025). Egocentric-10K. Hugging Face.
[104] Richardson, B. A., Grüninger, F., Mack, L., Stueckler, J., & Kuchenbecker, K. J. (2025). ISyHand: A Dexterous Multi-finger Robot Hand with an Articulated Palm. IEEE-RAS Humanoids 2025. https://arxiv.org/abs/2509.26236
Acknowledgments
This book is a research-strategy survey document prepared for joint research with Professors 박용래 (Yong-Lae Park) and 박종우 (Jong-Woo Park) of Seoul National University. It was written to provide the supporting groundwork for the TacGlove, TacTeleOp, and TacPlay research projects.
This project was produced using the Harness skill by 황민호 (Min-Ho Hwang).
AI tools were used in the production of this work. Claude (Opus 4.6) was used for the literature survey, content generation, and manuscript drafting.