Hongyi Wang

Affiliations:
  • Carnegie Mellon University, Machine Learning Department, Pittsburgh, PA, USA
  • University of Wisconsin-Madison, USA (PhD 2021)


According to our database, Hongyi Wang authored at least 43 papers between 2017 and 2024.

Bibliography

2024
Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild.
CoRR, 2024

FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations.
CoRR, 2024

ViT-1.58b: Mobile Vision Transformers in the 1-bit Era.
CoRR, 2024

Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models.
CoRR, 2024

SHED: Shapley-Based Automated Dataset Refinement for Instruction Fine-Tuning.
CoRR, 2024

TrustLLM: Trustworthiness in Large Language Models.
CoRR, 2024

RedCoast: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations, 2024

Does Compressing Activations Help Model Parallel Training?
Proceedings of the Seventh Annual Conference on Machine Learning and Systems, 2024

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Fusing Models with Complementary Expertise.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
LLM360: Towards Fully Transparent Open-Source LLMs.
CoRR, 2023

PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices.
CoRR, 2023

Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs.
CoRR, 2023

SlimPajama-DC: Understanding Data Combinations for LLM Training.
CoRR, 2023

Memory-adaptive Depth-wise Heterogenous Federated Learning.
CoRR, 2023

FedNAR: Federated Optimization with Normalized Annealing Regularization.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Cuttlefish: Low-Rank Model Training without All the Tuning.
Proceedings of the Sixth Conference on Machine Learning and Systems, 2023

MPCFormer: Fast, Performant and Private Transformer Inference with MPC.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
MPCFormer: fast, performant and private Transformer inference with MPC.
CoRR, 2022

Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation.
CoRR, 2022

Rare Gems: Finding Lottery Tickets at Initialization.
CoRR, 2022

Rare Gems: Finding Lottery Tickets at Initialization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

AMP: Automatically Finding Model Parallel Strategies with Heterogeneity Awareness.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

On the Utility of Gradient Compression in Distributed Training Systems.
Proceedings of the Fifth Conference on Machine Learning and Systems, 2022

Efficient Federated Learning on Knowledge Graphs via Privacy-preserving Relation Embedding Aggregation.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

2021
Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients.
CoRR, 2021

A Field Guide to Federated Optimization.
CoRR, 2021

Pufferfish: Communication-efficient Models At No Extra Cost.
Proceedings of the Fourth Conference on Machine Learning and Systems, 2021

Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification.
Proceedings of the Fourth Conference on Machine Learning and Systems, 2021

2020
Accordion: Adaptive Gradient Communication via Critical Learning Regime Identification.
CoRR, 2020

FedML: A Research Library and Benchmark for Federated Machine Learning.
CoRR, 2020

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Federated Learning with Matched Averaging.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding.
CoRR, 2019

Demonstration of Nimbus: Model-based Pricing for Machine Learning in a Data Marketplace.
Proceedings of the 2019 International Conference on Management of Data, 2019

DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

2018
DRACO: Robust Distributed Training via Redundant Gradients.
CoRR, 2018

ATOMO: Communication-efficient Learning via Atomic Sparsification.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

The Effect of Network Width on the Performance of Large-batch Training.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

DRACO: Byzantine-resilient Distributed Training via Redundant Gradients.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Recognizing actions during tactile manipulations through force sensing.
Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017
