Alberto Delmas Lascorz

ORCID: 0009-0006-2487-250X

According to our database, Alberto Delmas Lascorz authored at least 25 papers between 2014 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2024

Atalanta: A Bit is Worth a "Thousand" Tensor Values.
Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2024

2022
APack: Off-Chip, Lossless Data Compression for Efficient Deep Learning Inference.
CoRR, 2022

2021
Boveda: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick.
Proceedings of the Fourth Conference on Machine Learning and Systems, 2021

2020
BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization.
CoRR, 2020

Late Breaking Results: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

2019
Accelerating Image-Sensor-Based Deep Learning Applications.
IEEE Micro, 2019

ShapeShifter: Enabling Fine-Grain Data Width Adaptation in Deep Learning.
Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019

Laconic deep learning inference acceleration.
Proceedings of the 46th International Symposium on Computer Architecture, 2019

Bit-Tactical: A Software/Hardware Approach to Exploiting Value and Bit Sparsity in Neural Networks.
Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019

2018
Value-Based Deep-Learning Acceleration.
IEEE Micro, 2018

Laconic Deep Learning Computing.
CoRR, 2018

DPRed: Making Typical Activation Values Matter In Deep Learning Computing.
CoRR, 2018

Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How.
CoRR, 2018

Exploiting Typical Values to Accelerate Deep Learning.
Computer, 2018

Identifying and Exploiting Ineffectual Computations to Enable Hardware Acceleration of Deep Learning.
Proceedings of the 16th IEEE International New Circuits and Systems Conference, 2018

Loom: exploiting weight and activation precisions to accelerate convolutional neural networks.
Proceedings of the 55th Annual Design Automation Conference, 2018

2017
Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks.
CoRR, 2017

Cnvlutin2: Ineffectual-Activation-and-Weight-Free Deep Neural Network Computing.
CoRR, 2017

Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability.
CoRR, 2017

Dynamic Stripes: Exploiting the Dynamic Precision Requirements of Activation Values in Neural Networks.
CoRR, 2017

IDEAL: image denoising accelerator.
Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, 2017

Bit-pragmatic deep neural network computing.
Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, 2017

Bit-Pragmatic Deep Neural Network Computing.
Proceedings of the 5th International Conference on Learning Representations, 2017

2014
An improved FPGA-based specific processor for Blokus Duo.
Proceedings of the 2014 International Conference on Field-Programmable Technology, 2014
