Merey Ramazanova

ORCID: 0000-0002-6234-0831

According to our database, Merey Ramazanova authored at least 11 papers between 2021 and 2024.


Bibliography

2024
Combating Missing Modalities in Egocentric Videos at Test Time.
CoRR, 2024

Exploring Missing Modality in Multimodal Egocentric Datasets.
CoRR, 2024

Evaluation of Test-Time Adaptation Under Computational Time Constraints.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
Revisiting Test Time Adaptation under Online Evaluation.
CoRR, 2023

OWL (Observe, Watch, Listen): Audiovisual Temporal Context for Localizing Actions in Egocentric Videos.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Just a Glimpse: Rethinking Temporal Information for Video Continual Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
OWL (Observe, Watch, Listen): Localizing Actions in Egocentric Video via Audiovisual Temporal Context.
CoRR, 2022

SegTAD: Precise Temporal Action Detection via Semantic Segmentation.
Proceedings of the Computer Vision - ECCV 2022 Workshops, 2022


2021
Ego4D: Around the World in 3,000 Hours of Egocentric Video.
CoRR, 2021
