Alena Shilova

ORCID: 0000-0002-1795-8421

According to our database, Alena Shilova authored at least 14 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Optimal Re-Materialization Strategies for Heterogeneous Chains: How to Train Deep Neural Networks with Limited Memory.
ACM Trans. Math. Softw., June 2024

2023
AdaStop: sequential testing for efficient and reliable comparisons of Deep RL Agents.
CoRR, 2023

2022
Entropy Regularized Reinforcement Learning with Cascading Networks.
CoRR, 2022

Survey on Large Scale Neural Network Training.
CoRR, 2022

MadPipe: Memory Aware Dynamic Programming Algorithm for Pipelined Model Parallelism.
Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2022

Survey on Efficient Training of Large Neural Networks.
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

2021
Memory Saving Strategies for Deep Neural Network Training.
PhD thesis, 2021

Efficient Combination of Rematerialization and Offloading for Training DNNs.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Memory Efficient Deep Neural Network Training.
Proceedings of the Euro-Par 2021: Parallel Processing Workshops, 2021

Pipelined Model Parallelism: Complexity Results and Memory Considerations.
Proceedings of the Euro-Par 2021: Parallel Processing, 2021

2020
A Makespan Lower Bound for the Tiled Cholesky Factorization Based on ALAP Schedule.
Proceedings of the Euro-Par 2020: Parallel Processing, 2020

Optimal GPU-CPU Offloading Strategies for Deep Neural Network Training.
Proceedings of the Euro-Par 2020: Parallel Processing, 2020

2019
Optimal checkpointing for heterogeneous chains: how to train deep neural networks with limited memory.
CoRR, 2019

Training on the Edge: The why and the how.
Proceedings of the IEEE International Parallel and Distributed Processing Symposium Workshops, 2019
