Xiaoyu Shi

ORCID: 0009-0003-3696-4442

Affiliations:
  • Chinese University of Hong Kong, Multimedia Laboratory, Hong Kong


According to our database, Xiaoyu Shi authored at least 18 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2024
AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning.
CoRR, 2024

AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data.
Proceedings of the SIGGRAPH Asia 2024 Technical Communications, 2024

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling.
Proceedings of the ACM SIGGRAPH 2024 Conference Papers, 2024

Three Things We Need to Know About Transferring Stable Diffusion to Visual Dense Prediction Tasks.
Proceedings of the Computer Vision - ECCV 2024, 2024

Be-Your-Outpainter: Mastering Video Outpainting Through Input-Specific Adaptation.
Proceedings of the Computer Vision - ECCV 2024, 2024

BlinkVision: A Benchmark for Optical Flow, Scene Flow and Point Tracking Estimation Using RGB Frames and Events.
Proceedings of the Computer Vision - ECCV 2024, 2024

2023
FlowFormer: A Transformer Architecture and Its Masked Cost Volume Autoencoding for Optical Flow.
CoRR, 2023

Context-TAP: Tracking Any Point Demands Spatial Context Features.
CoRR, 2023

A Unified Conditional Framework for Diffusion-based Image Restoration.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Context-PIPs: Persistent Independent Particles Demands Context Features.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

BlinkFlow: A Dataset to Push the Limits of Event-Based Optical Flow Estimation.
IROS, 2023

VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

FlowFormer++: Masked Cost Volume Autoencoding for Pretraining Optical Flow Estimation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

A Simple Baseline for Video Restoration with Grouped Spatial-Temporal Shift.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
No Attention is Needed: Grouped Spatial-temporal Shift for Simple and Efficient Video Restorers.
CoRR, 2022

FlowFormer: A Transformer Architecture for Optical Flow.
Proceedings of the Computer Vision - ECCV 2022, 2022

2021
Decoupled Spatial-Temporal Transformer for Video Inpainting.
CoRR, 2021

FuseFormer: Fusing Fine-Grained Information in Transformers for Video Inpainting.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021
