Micah Goldblum

ORCID: 0000-0002-8266-2424

Affiliations:
  • University of Maryland, College Park, MD, USA


According to our database, Micah Goldblum authored at least 105 papers between 2019 and 2024.


Bibliography

2024
A Simple Baseline for Predicting Events with Auto-Regressive Tabular Transformers.
CoRR, 2024

Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices.
CoRR, 2024

Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking.
CoRR, 2024

Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models.
CoRR, 2024

LiveBench: A Challenging, Contamination-Free LLM Benchmark.
CoRR, 2024

Just How Flexible are Neural Networks in Practice?
CoRR, 2024

Large Language Models Must Be Taught to Know What They Don't Know.
CoRR, 2024

Adaptive Retention & Correction for Continual Learning.
CoRR, 2024

Measuring Style Similarity in Diffusion Models.
CoRR, 2024

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion.
CoRR, 2024

TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks.
CoRR, 2024

Compute Better Spent: Replacing Dense Layers with Structured Matrices.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Non-Vacuous Generalization Bounds for Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Position: The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

On the Reliability of Watermarks for Large Language Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

NEFTune: Noisy Embeddings Improve Instruction Finetuning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Identifying Attack-Specific Signatures in Adversarial Examples.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2024

2023
Towards Transferable Adversarial Attacks on Image and Video Transformers.
IEEE Trans. Image Process., 2023

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses.
IEEE Trans. Pattern Anal. Mach. Intell., 2023

Perspectives on the State and Future of Deep Learning - 2023.
CoRR, 2023

A Simple and Efficient Baseline for Data Attribution on Images.
CoRR, 2023

Baseline Defenses for Adversarial Attacks Against Aligned Language Models.
CoRR, 2023

Bring Your Own Data! Self-Supervised Evaluation for Large Language Models.
CoRR, 2023

A Cookbook of Self-Supervised Learning.
CoRR, 2023

The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning.
CoRR, 2023

Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Understanding and Mitigating Copying in Diffusion Models.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Simplifying Neural Network Training Under Class Imbalance.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

What Can We Learn from Unlearnable Datasets?
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

When Do Neural Nets Outperform Boosted Trees on Tabular Data?
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Seeing in Words: Learning to Classify through Language Bottlenecks.
Proceedings of the First Tiny Papers Track at ICLR 2023, 2023

Transfer Learning with Deep Tabular Models.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

The Lie Derivative for Measuring Learned Equivariance.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

How Much Data Are Augmentations Worth? An Investigation into Scaling Laws, Invariance, and Implicit Regularization.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Panning for Gold in Federated Learning: Targeted Text Extraction under Arbitrarily Large-Scale Aggregation.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Loss Landscapes are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

STYX: Adaptive Poisoning Attacks Against Byzantine-Robust Defenses in Federated Learning.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Universal Guidance for Diffusion Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

A Deep Dive into Dataset Imbalance and Bias in Face Identification.
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023

2022
What do Vision Transformers Learn? A Visual Exploration.
CoRR, 2022

K-SAM: Sharpness-Aware Minimization at the Speed of SGD.
CoRR, 2022

On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition.
CoRR, 2022

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning.
CoRR, 2022

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise.
CoRR, 2022

A Deep Dive into Dataset Imbalance and Bias in Face Identification.
CoRR, 2022

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking.
CoRR, 2022

Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Autoregressive Perturbations for Data Poisoning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification.
Proceedings of the International Conference on Machine Learning, 2022

Bayesian Model Selection, the Marginal Likelihood, and Generalization.
Proceedings of the International Conference on Machine Learning, 2022

Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations.
Proceedings of the International Conference on Machine Learning, 2022

The Uncanny Similarity of Recurrence and Depth.
Proceedings of the Tenth International Conference on Learning Representations, 2022

The Close Relationship Between Contrastive Learning and Meta-Learning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Stochastic Training is Not Necessary for Generalization.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

Poisons that are learned faster are more effective.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

Towards Transferable Adversarial Attacks on Vision Transformers.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Active Learning at the ImageNet Scale.
CoRR, 2021

Comparing Human and Machine Bias in Face Recognition.
CoRR, 2021

Identification of Attack-Specific Signatures in Adversarial Examples.
CoRR, 2021

Datasets for Studying Generalization from Easy to Hard Examples.
CoRR, 2021

MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data.
CoRR, 2021

SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training.
CoRR, 2021

Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release.
CoRR, 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations.
CoRR, 2021

What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors.
CoRR, 2021

Thinking Deeply with Recurrence: Generalizing from Easy to Hard Sequential Reasoning Problems.
CoRR, 2021

Technical Challenges for Training Fair Neural Networks.
CoRR, 2021

Encoding Robustness to Image Style via Adversarial Feature Perturbations.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Adversarial Examples Make Strong Poisons.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks.
Proceedings of the 38th International Conference on Machine Learning, 2021

Data Augmentation for Meta-Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

The Intrinsic Dimension of Images and Its Impact on Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition.
Proceedings of the 9th International Conference on Learning Representations, 2021

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2021

Adversarial attacks on machine learning systems for high-frequency trading.
Proceedings of the ICAIF'21: 2nd ACM International Conference on AI in Finance, 2021

2020
Adversarial Robustness and Robust Meta-Learning for Neural Networks.
PhD thesis, 2020

Analyzing the Machine Learning Conference Review Process.
CoRR, 2020

Random Network Distillation as a Diversity Metric for Both Image and Text Generation.
CoRR, 2020

An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process.
CoRR, 2020

Prepare for the Worst: Generalizing across Domain Shifts with Adversarial Batch Normalization.
CoRR, 2020

Adversarial Attacks on Machine Learning Systems for High-Frequency Trading.
CoRR, 2020

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks.
Proceedings of the 37th International Conference on Machine Learning, 2020

Truth or backpropaganda? An empirical investigation of deep learning theory.
Proceedings of the 8th International Conference on Learning Representations, 2020

Understanding Generalization Through Visualizations.
Proceedings of the "I Can't Believe It's Not Better!" at NeurIPS Workshops, 2020

WITCHcraft: Efficient PGD Attacks with Random Step Size.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2020

Adversarially Robust Distillation.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Robust Few-Shot Learning with Adversarially Queried Meta-Learners.
CoRR, 2019

