Nicolas Flammarion

According to our database, Nicolas Flammarion authored at least 64 papers between 2015 and 2024.

Bibliography

2024
Simplicity bias and optimization threshold in two-layer ReLU networks.
CoRR, 2024

Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants.
CoRR, 2024

Does Refusal Training in LLMs Generalize to the Past Tense?
CoRR, 2024

Implicit Bias of Mirror Flow on Separable Data.
CoRR, 2024

Is In-Context Learning Sufficient for Instruction Following in LLMs?
CoRR, 2024

Competition Report: Finding Universal Jailbreak Backdoors in Aligned LLMs.
CoRR, 2024

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks.
CoRR, 2024

JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models.
CoRR, 2024

Early alignment in two-layer networks training is a two-edged sword.
CoRR, 2024

Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

First-order ANIL provably learns representations despite overparametrisation.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Leveraging Continuous Time to Understand Momentum When Training Diagonal Linear Networks.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024

2023
On Adaptivity in Quantum Testing.
Trans. Mach. Learn. Res., 2023

Why Do We Need Weight Decay in Modern Deep Learning?
CoRR, 2023

Model agnostic methods meta-learn despite misspecifications.
CoRR, 2023

Quantum Channel Certification with Incoherent Strategies.
CoRR, 2023

(S)GD over Diagonal Linear Networks: Implicit Regularisation, Large Stepsizes and Edge of Stability.
CoRR, 2023

On the spectral bias of two-layer linear networks.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Saddle-to-Saddle Dynamics in Diagonal Linear Networks.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

(S)GD over Diagonal Linear Networks: Implicit bias, Large Stepsizes and Edge of Stability.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Penalising the biases in norm regularisation enforces sparsity.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Sharpness-Aware Minimization Leads to Low-Rank Features.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

SGD with Large Step Sizes Learns Sparse Features.
Proceedings of the International Conference on Machine Learning, 2023

A Modern Look at the Relationship between Sharpness and Generalization.
Proceedings of the International Conference on Machine Learning, 2023

Linearization Algorithms for Fully Composite Optimization.
Proceedings of the Thirty Sixth Annual Conference on Learning Theory, 2023

Quantum Channel Certification with Incoherent Measurements.
Proceedings of the Thirty Sixth Annual Conference on Learning Theory, 2023

2022
An Efficient Sampling Algorithm for Non-smooth Composite Potentials.
J. Mach. Learn. Res., 2022

Sequential algorithms for testing identity and closeness of distributions.
CoRR, 2022

On the effectiveness of adversarial training against common corruptions.
Proceedings of the Uncertainty in Artificial Intelligence, 2022

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Towards Understanding Sharpness-Aware Minimization.
Proceedings of the International Conference on Machine Learning, 2022

ARIA: Adversarially Robust Image Attribution for Content Provenance.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

Accelerated SGD for Non-Strongly-Convex Least Squares.
Proceedings of the Conference on Learning Theory, 2022

Label noise (stochastic) gradient descent implicitly solves the Lasso for quadratic parametrisation.
Proceedings of the Conference on Learning Theory, 2022

Trace norm regularization for multi-task learning with scarce data.
Proceedings of the Conference on Learning Theory, 2022

Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Linear Speedup in Personalized Collaborative Learning.
CoRR, 2021

A Continuized View on Nesterov Acceleration for Stochastic Gradient Descent and Randomized Gossip.
CoRR, 2021

A Continuized View on Nesterov Acceleration.
CoRR, 2021

Last iterate convergence of SGD for Least-Squares in the Interpolation regime.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Sequential Algorithms for Testing Closeness of Distributions.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

RobustBench: a standardized adversarial robustness benchmark.
Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, 2021

2020
RobustBench: a standardized adversarial robustness benchmark.
CoRR, 2020

Optimal Robust Linear Regression in Nearly Linear Time.
CoRR, 2020

Online Robust Regression via SGD on the l1 loss.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Understanding and Improving Fast Adversarial Training.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent.
Proceedings of the 37th International Conference on Machine Learning, 2020

Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search.
Proceedings of the Computer Vision - ECCV 2020, 2020

2019
Is There an Analog of Nesterov Acceleration for MCMC?
CoRR, 2019

Escaping from saddle points on Riemannian manifolds.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Fast Mean Estimation with Sub-Gaussian Rates.
Proceedings of the Conference on Learning Theory, 2019

2018
Sampling Can Be Faster Than Optimization.
CoRR, 2018

Gen-Oja: A Simple and Efficient Algorithm for Streaming Generalized Eigenvector Computation.
CoRR, 2018

Gen-Oja: Simple & Efficient Algorithm for Streaming Generalized Eigenvector Computation.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo.
Proceedings of the 35th International Conference on Machine Learning, 2018

Averaging Stochastic Gradient Descent on Riemannian Manifolds.
Proceedings of the Conference On Learning Theory, 2018

2017
Stochastic Approximation and Least-Squares Regression, with Applications to Machine Learning.
PhD thesis, 2017

Robust Discriminative Clustering with Sparse Regularizers.
J. Mach. Learn. Res., 2017

Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression.
J. Mach. Learn. Res., 2017

Stochastic Composite Least-Squares Regression with Convergence Rate $O(1/n)$.
Proceedings of the 30th Conference on Learning Theory, 2017

2015
From Averaging to Acceleration, There is Only a Step-size.
Proceedings of The 28th Conference on Learning Theory, 2015
