TrungTin Nguyen

ORCID: 0000-0001-8433-5980

According to our database, TrungTin Nguyen authored at least 18 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model.
CoRR, 2024

Accelerating Transformers with Spectrum-Preserving Token Merging.
CoRR, 2024

Risk Bounds for Mixture Density Estimation on Compact Domains via the h-Lifted Kullback-Leibler Divergence.
CoRR, 2024

CompeteSMoE - Effective Training of Sparse Mixture of Experts via Competition.
CoRR, 2024

Bayesian Likelihood Free Inference using Mixtures of Experts.
Proceedings of the International Joint Conference on Neural Networks, 2024

On the Asymptotic Distribution of the Minimum Empirical Risk.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Structure-Aware E(3)-Invariant Molecular Conformer Aggregation Networks.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Towards Convergence Rates for Parameter Estimation in Gaussian-gated Mixture of Experts.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024

2023
HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts.
CoRR, 2023

Demystifying Softmax Gating in Gaussian Mixture of Experts.
CoRR, 2023

Demystifying Softmax Gating Function in Gaussian Mixture of Experts.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

A Non-asymptotic Risk Bound for Model Selection in a High-Dimensional Mixture of Experts via Joint Rank and Variable Selection.
Proceedings of the AI 2023: Advances in Artificial Intelligence, 2023

2022
Summary statistics and discrepancy measures for approximate Bayesian computation via surrogate posteriors.
Stat. Comput., 2022

2021
A non-asymptotic model selection in block-diagonal mixture of polynomial experts models.
CoRR, 2021

A non-asymptotic penalization criterion for model selection in mixture of experts models.
CoRR, 2021

2020
An l1-oracle inequality for the Lasso in mixture-of-experts regression models.
CoRR, 2020

