Yan Shen

ORCID: 0000-0001-9262-9049

Affiliations:
  • Peking University, Beijing, China
According to our database, Yan Shen authored at least 12 papers between 2022 and 2024.

Bibliography

2024
NaturalVLM: Leveraging Fine-Grained Natural Language for Affordance-Guided Visual Manipulation.
IEEE Robotics Autom. Lett., December, 2024

GarmentLab: A Unified Simulation and Benchmark for Garment Manipulation.
CoRR, 2024

NaturalVLM: Leveraging Fine-grained Natural Language for Affordance-Guided Visual Manipulation.
CoRR, 2024

Broadcasting Support Relations Recursively from Local Dynamics for Object Retrieval in Clutters.
Proceedings of the Robotics: Science and Systems XX, 2024

ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
ImageManip: Image-based Robotic Manipulation with Affordance-guided Next View Selection.
CoRR, 2023

Learning Environment-Aware Affordance for 3D Articulated Object Manipulation under Occlusions.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Manipulation.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations.
Proceedings of the 34th British Machine Vision Conference 2023, 2023

2022
DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Object Manipulation.
CoRR, 2022

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects.
Proceedings of the Tenth International Conference on Learning Representations, 2022
