Zhenxiang Xiao

ORCID: 0009-0003-8085-3520

According to our database, Zhenxiang Xiao authored at least 14 papers between 2022 and 2024.

Bibliography

2024
Anatomy-Guided Spatio-Temporal Graph Convolutional Networks (AG-STGCNs) for Modeling Functional Connectivity Between Gyri and Sulci Across Multiple Task Domains.
IEEE Trans. Neural Networks Learn. Syst., June 2024

Brain Structural Connectivity Guided Vision Transformers for Identification of Functional Connectivity Characteristics in Preterm Neonates.
IEEE J. Biomed. Health Informatics, April 2024

Coupling Visual Semantics of Artificial Neural Networks and Human Brain Function via Synchronized Activations.
IEEE Trans. Cogn. Dev. Syst., April 2024

Instruction-ViT: Multi-modal prompts for instruction learning in vision transformer.
Inf. Fusion, April 2024

Regularity and variability of functional brain connectivity characteristics between gyri and sulci under naturalistic stimulus.
Comput. Biol. Medicine, January 2024

Fusing multi-scale functional connectivity patterns via Multi-Branch Vision Transformer (MB-ViT) for macaque brain age prediction.
Neural Networks, 2024

2023
Characterizing functional brain networks via Spatio-Temporal Attention 4D Convolutional Neural Networks (STA-4DCNNs).
Neural Networks, January 2023

Holistic Evaluation of GPT-4V for Biomedical Imaging.
CoRR, 2023

Ophtha-LLaMA2: A Large Language Model for Ophthalmology.
CoRR, 2023

Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT.
CoRR, 2023

2022
Modeling spatio-temporal patterns of holistic functional brain networks via multi-head guided attention graph neural networks (Multi-Head GAGNNs).
Medical Image Anal., 2022

A Unified and Biologically-Plausible Relational Graph Representation of Vision Transformers.
CoRR, 2022

Eye-gaze-guided Vision Transformer for Rectifying Shortcut Learning.
CoRR, 2022

Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning.
CoRR, 2022

