Malte J. Rasch
Orcid: 0000-0002-7988-4624
According to our database, Malte J. Rasch authored at least 32 papers between 2006 and 2024.
Collaborative distances:
Bibliography
2024
Multi-Function Multi-Way Analog Technology for Sustainable Machine Intelligence Computation.
CoRR, 2024
State-Independent Low Resistance Drift SiSbTe Phase Change Memory for Analog In-Memory Computing Applications.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits 2024, 2024
Improving the Accuracy of Analog-Based In-Memory Computing Accelerators Post-Training.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2024
Proceedings of the IEEE International Conference on Software Services Engineering, 2024
2023
Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference.
CoRR, 2023
Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators.
CoRR, 2023
Architectures and Circuits for Analog-memory-based Hardware Accelerators for Deep Neural Networks (Invited).
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023
Impact of Phase-Change Memory Drift on Energy Efficiency and Accuracy of Analog Compute-in-Memory Deep Learning Inference (Invited).
Proceedings of the IEEE International Reliability Physics Symposium, 2023
AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing.
Proceedings of the IEEE International Conference on Edge Computing and Communications, 2023
2022
Pattern Training, Inference, and Regeneration Demonstration Using On-Chip Trainable Neuromorphic Chips for Spiking Restricted Boltzmann Machine.
Adv. Intell. Syst., 2022
Impact of Phase-Change Memory Flicker Noise and Weight Drift on Analog Hardware Inference for Large-Scale Deep Learning Networks.
Adv. Intell. Syst., 2022
Analog-memory-based 14nm Hardware Accelerator for Dense Deep Neural Networks including Transformers.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022
2021
Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices.
Frontiers Comput. Neurosci., 2021
A Flexible and Fast PyTorch Toolkit for Simulating Training and Inference on Analog Crossbar Arrays.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021
2020
Training Large-scale Artificial Neural Networks on Simulated Resistive Crossbar Arrays.
IEEE Des. Test, 2020
Synchronized Analog Capacitor Arrays for Parallel Convolutional Neural Network Training.
Proceedings of the 63rd IEEE International Midwest Symposium on Circuits and Systems, 2020
2019
Neural network accelerator design with resistive crossbars: Opportunities and challenges.
IBM J. Res. Dev., 2019
Zero-shifting Technique for Deep Neural Network Training on Resistive Cross-point Arrays.
CoRR, 2019
Training Large-Scale Spiking Neural Networks on Multi-core Neuromorphic System Using Backpropagation.
Proceedings of the Neural Information Processing - 26th International Conference, 2019
2018
2015
Frontiers Comput. Neurosci., 2015
2013
Design principles of the sparse coding network and the role of "sister cells" in the olfactory system of Drosophila.
Frontiers Comput. Neurosci., 2013
Frontiers Comput. Neurosci., 2013
2012
2011
Proceedings of the Advances in Neural Networks - ISNN 2011, 2011
2007
Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, 2007
2006
Proceedings of the Advances in Neural Information Processing Systems 19, 2006
Proceedings of the 14th International Conference on Intelligent Systems for Molecular Biology 2006, 2006