Shengjie Luo

Orcid: 0000-0003-1770-4592

According to our database, Shengjie Luo authored at least 17 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Let the Code LLM Edit Itself When You Edit the Code.
CoRR, 2024

GeoMFormer: A General Architecture for Geometric Molecular Representation Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024

2023
Mix MSTAR: A Synthetic Benchmark Dataset for Multi-Class Rotation Vehicle Detection in Large-Scale SAR Images.
Remote. Sens., September, 2023

Rethinking the Expressive Power of GNNs via Graph Biconnectivity.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

One Transformer Can Understand Both 2D & 3D Molecular Data.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Salient-VPR: Salient Weighted Global Descriptor for Visual Place Recognition.
IEEE Trans. Instrum. Meas., 2022

Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets.
CoRR, 2022

Your Transformer May Not be as Powerful as You Expect.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track.
CoRR, 2021

Do Transformers Really Perform Bad for Graph Representation?
CoRR, 2021

Revisiting Language Encoding in Learning Multilingual Representations.
CoRR, 2021

Do Transformers Really Perform Badly for Graph Representation?
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training.
Proceedings of the 38th International Conference on Machine Learning, 2021
