Julien Launay

According to our database, Julien Launay authored at least 23 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Optical training of large-scale Transformers and deep neural networks with direct feedback alignment.
CoRR, 2024

2023
The Falcon Series of Open Language Models.
CoRR, 2023

The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only.
CoRR, 2023

AlGhafa Evaluation Benchmark for Arabic Language Models.
Proceedings of ArabicNLP 2023, Singapore (Hybrid), December 7, 2023

The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data Only.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model.
CoRR, 2022

What Language Model to Train if You Have One Million GPU Hours?
CoRR, 2022

Scaling Laws Beyond Backpropagation.
CoRR, 2022

What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?
CoRR, 2022

PAGnol: An Extra-Large French Generative Model.
Proceedings of the Thirteenth Language Resources and Evaluation Conference, 2022

What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization?
Proceedings of the International Conference on Machine Learning, 2022

Adversarial Robustness by Design Through Analog Computing And Synthetic Gradients.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022

What Language Model to Train if You Have One Million GPU Hours?
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

2021
Is the Number of Trainable Parameters All That Actually Matters?
CoRR, 2021

ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients.
CoRR, 2021

Photonic Differential Privacy with Direct Feedback Alignment.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Is the Number of Trainable Parameters All That Actually Matters?
Proceedings of the I (Still) Can't Believe It's Not Better! Workshop at NeurIPS 2021, 2021

LightOn Optical Processing Unit: Scaling-up AI and HPC with a Non von Neumann co-processor.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021

2020
Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment.
CoRR, 2020

Light-in-the-loop: using a photonics co-processor for scalable training of neural networks.
CoRR, 2020

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

2019
Principled Training of Neural Networks with Direct Feedback Alignment.
CoRR, 2019

Energy-Aware Resources in Digital Twin: The Case of Injection Moulding Machines.
Proceedings of the 9th Workshop on Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future (SOHOMA 2019), 2019

