Shitong Shao

ORCID: 0000-0003-4689-6140

According to our database, Shitong Shao authored at least 35 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Spiking Spatiotemporal Neural Architecture Search for EEG-Based Emotion Recognition.
IEEE Trans. Instrum. Meas., 2025

2024
MDDP: Making Decisions From Different Perspectives in Multiagent Reinforcement Learning.
IEEE Trans. Games, September, 2024

Attention-Based Intrinsic Reward Mixing Network for Credit Assignment in Multiagent Reinforcement Learning.
IEEE Trans. Games, June, 2024

PELE scores: pelvic X-ray landmark detection with pelvis extraction and enhancement.
Int. J. Comput. Assist. Radiol. Surg., May, 2024

2D-SNet: A Lightweight Network for Person Re-Identification on the Small Data Regime.
IEEE Trans. Biom. Behav. Identity Sci., January, 2024

Generalized Contrastive Partial Label Learning for Cross-Subject EEG-Based Emotion Recognition.
IEEE Trans. Instrum. Meas., 2024

Multi-perspective analysis on data augmentation in knowledge distillation.
Neurocomputing, 2024

Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection.
CoRR, 2024

DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation.
CoRR, 2024

Bag of Design Choices for Inference of High-Resolution Masked Generative Transformer.
CoRR, 2024

Golden Noise for Diffusion Models: A Learning Framework.
CoRR, 2024

A Unimodal Speaker-Level Membership Inference Detector for Contrastive Pretraining.
CoRR, 2024

IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced Video Synthesis.
CoRR, 2024

Alignment of Diffusion Models: Fundamentals, Challenges, and Future.
CoRR, 2024

Elucidating the Design Space of Dataset Condensation.
CoRR, 2024

Self-supervised Dataset Distillation: A Good Compression Is All You Need.
CoRR, 2024

Your Diffusion Model is Secretly a Certifiably Robust Classifier.
CoRR, 2024

Precise Knowledge Transfer via Flow Matching.
CoRR, 2024

Rethinking Centered Kernel Alignment in Knowledge Distillation.
CoRR, 2024

Rethinking Centered Kernel Alignment in Knowledge Distillation.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

Auto-DAS: Automated Proxy Discovery for Training-Free Distillation-Aware Architecture Search.
Proceedings of the Computer Vision - ECCV 2024, 2024

Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
MS-FRAN: A Novel Multi-Source Domain Adaptation Method for EEG-Based Emotion Recognition.
IEEE J. Biomed. Health Informatics, November, 2023

Hybrid knowledge distillation from intermediate layers for efficient Single Image Super-Resolution.
Neurocomputing, October, 2023

A Bi-Stream hybrid model with MLPBlocks and self-attention mechanism for EEG-based emotion recognition.
Biomed. Signal Process. Control., September, 2023

Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling.
CoRR, 2023

Black-box Source-free Domain Adaptation via Two-stage Knowledge Distillation.
CoRR, 2023

DiffuseExpand: Expanding dataset for 2D medical image segmentation using diffusion models.
CoRR, 2023

Spatial-Temporal Constraint Learning for Cross-Subject EEG-Based Emotion Recognition.
Proceedings of the International Joint Conference on Neural Networks, 2023

Teaching What You Should Teach: A Data-Based Distillation Method.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023

2022
Learning What You Should Learn.
CoRR, 2022

BiSMSM: A Hybrid MLP-Based Model of Global Self-Attention Processes for EEG-Based Emotion Recognition.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2022, 2022

Bootstrap Generalization Ability from Loss Landscape Perspective.
Proceedings of the Computer Vision - ECCV 2022 Workshops, 2022

AIIR-MIX: Multi-Agent Reinforcement Learning Meets Attention Individual Intrinsic Reward Mixing Network.
Proceedings of the Asian Conference on Machine Learning, 2022

What Role Does Data Augmentation Play in Knowledge Distillation?
Proceedings of the Computer Vision - ACCV 2022, 2022
