Sang Michael Xie

ORCID: 0000-0002-0820-2753

According to our database, Sang Michael Xie authored at least 25 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
A Survey on Data Selection for Language Models.
Trans. Mach. Learn. Res., 2024

Meta-Designing Quantum Experiments with Language Models.
CoRR, 2024

Connect Later: Improving Fine-tuning for Robustness with Targeted Augmentations.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
Holistic Evaluation of Language Models.
Trans. Mach. Learn. Res., 2023

Data Selection for Language Models via Importance Resampling.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models.
Proceedings of the 40th International Conference on Machine Learning, 2023

Reward Design with Language Models.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation.
Proceedings of the 39th International Conference on Machine Learning, 2022

An Explanation of In-context Learning as Implicit Bayesian Inference.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Extending the WILDS Benchmark for Unsupervised Adaptation.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets.
CoRR, 2021

Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization.
Proceedings of the 38th International Conference on Machine Learning, 2021
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery.
Remote. Sens., 2020

WILDS: A Benchmark of in-the-Wild Distribution Shifts.
CoRR, 2020

Simplifying Models with Unlabeled Output Data.
CoRR, 2020

Understanding and Mitigating the Tradeoff between Robustness and Accuracy.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Adversarial Training Can Hurt Generalization.
CoRR, 2019

Differentiable Subset Sampling.
CoRR, 2019

Reparameterizable Subset Sampling via Continuous Relaxations.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019

2018
Semi-supervised Deep Kernel Learning: Regression with Unlabeled Data by Minimizing Predictive Variance.
Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

2016
Transfer Learning from Deep Features for Remote Sensing and Poverty Mapping.
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016