Milos Nikolic

Orcid: 0000-0002-4168-0837

Affiliations:
  • University of Toronto, Canada


According to our database, Milos Nikolic authored at least 15 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Atalanta: A Bit is Worth a "Thousand" Tensor Values.
Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2024

2022
Schrödinger's FP: Dynamic Adaptation of Floating-Point Containers for Deep Learning Training.
CoRR, 2022

2021
Boveda: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick.
Proceedings of Machine Learning and Systems 2021, 2021

2020
BitPruning: Learning Bitlengths for Aggressive and Accurate Quantization.
CoRR, 2020

Late Breaking Results: Building an On-Chip Deep Learning Memory Hierarchy Brick by Brick.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

2019
Accelerating Image-Sensor-Based Deep Learning Applications.
IEEE Micro, 2019

ShapeShifter: Enabling Fine-Grain Data Width Adaptation in Deep Learning.
Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019

Characterizing Sources of Ineffectual Computations in Deep Learning Networks.
Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, 2019

Laconic deep learning inference acceleration.
Proceedings of the 46th International Symposium on Computer Architecture, 2019

Bit-Tactical: A Software/Hardware Approach to Exploiting Value and Bit Sparsity in Neural Networks.
Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019

2018
Laconic Deep Learning Computing.
CoRR, 2018

DPRed: Making Typical Activation Values Matter In Deep Learning Computing.
CoRR, 2018

Bit-Tactical: Exploiting Ineffectual Computations in Convolutional Neural Networks: Which, Why, and How.
CoRR, 2018

Identifying and Exploiting Ineffectual Computations to Enable Hardware Acceleration of Deep Learning.
Proceedings of the 16th IEEE International New Circuits and Systems Conference, 2018

Characterizing Sources of Ineffectual Computations in Deep Learning Networks.
Proceedings of the 2018 IEEE International Symposium on Workload Characterization, 2018
