Dongseok Kwon

Orcid: 0000-0001-7676-8938

According to our database, Dongseok Kwon authored at least 24 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2024
Toward Optimized In-Memory Reinforcement Learning: Leveraging 1/f Noise of Synaptic Ferroelectric Field-Effect-Transistors for Efficient Exploration.
Adv. Intell. Syst., June, 2024

SNNSim: Investigation and Optimization of Large-Scale Analog Spiking Neural Networks Based on Flash Memory Devices.
Adv. Intell. Syst., April, 2024

Si-Based Dual-Gate Field-Effect Transistor Array for Low-Power On-Chip Trainable Hardware Neural Networks.
Adv. Intell. Syst., January, 2024

FPIA: Field-Programmable Ising Arrays with In-Memory Computing.
Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design, 2024

2023
Analog Synaptic Devices Based on IGZO Thin-Film Transistors with a Metal-Ferroelectric-Metal-Insulator-Semiconductor Structure for High-Performance Neuromorphic Systems.
Adv. Intell. Syst., December, 2023

1/f Noise in Synaptic Ferroelectric Tunnel Junction: Impact on Convolutional Neural Network.
Adv. Intell. Syst., June, 2023

2022
Novel, parallel and differential synaptic architecture based on NAND flash memory for high-density and highly-reliable binary neural networks.
Neurocomputing, 2022

Neuron Circuits for Low-Power Spiking Neural Networks Using Time-To-First-Spike Encoding.
IEEE Access, 2022

On-Chip Trainable Spiking Neural Networks Using Time-To-First-Spike Encoding.
IEEE Access, 2022

First Demonstration of 1-bit Erase in Vertical NAND Flash Memory.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits 2022), 2022

2021
On-chip trainable hardware-based deep Q-networks approximating a backpropagation algorithm.
Neural Comput. Appl., 2021

Hardware-based spiking neural network architecture using simplified backpropagation algorithm and homeostasis functionality.
Neurocomputing, 2021

Pulse-Width Modulation Neuron Implemented by Single Positive-Feedback Device.
CoRR, 2021

Direct Gradient Calculation: Simple and Variation-Tolerant On-Chip Training Method for Neural Networks.
Adv. Intell. Syst., 2021

Spiking Neural Networks With Time-to-First-Spike Coding Using TFT-Type Synaptic Device Model.
IEEE Access, 2021

Hardware-Based Spiking Neural Network Using a TFT-Type AND Flash Memory Array Architecture Based on Direct Feedback Alignment.
IEEE Access, 2021

2020
Efficient precise weight tuning protocol considering variation of the synaptic devices and target accuracy.
Neurocomputing, 2020

Hardware Implementation of Spiking Neural Networks Using Time-To-First-Spike Encoding.
CoRR, 2020

Low-Power and High-Density Neuron Device for Simultaneous Processing of Excitatory and Inhibitory Signals in Neuromorphic Systems.
IEEE Access, 2020

NAND Flash Based Novel Synaptic Architecture for Highly Robust and High-Density Quantized Neural Networks With Binary Neuron Activation of (1, 0).
IEEE Access, 2020

2019
Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices.
Neural Comput. Appl., 2019

Investigation of Neural Networks Using Synapse Arrays Based on Gated Schottky Diodes.
Proceedings of the International Joint Conference on Neural Networks, 2019

Review of candidate devices for neuromorphic applications.
Proceedings of the 49th European Solid-State Device Research Conference, 2019

2018
Hardware-based Neural Networks using a Gated Schottky Diode as a Synapse Device.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2018
