Massimiliano Lupo Pasini

Orcid: 0000-0002-4980-6924

According to our database, Massimiliano Lupo Pasini authored at least 28 papers between 2017 and 2024.

Bibliography

2024
Anderson acceleration with approximate calculations: Applications to scientific computing.
Numer. Linear Algebra Appl., October 2024

A Perspective on Scalable AI on High-Performance Computing and Leadership Class Supercomputing Facilities [Industrial and Governmental Activities].
IEEE Comput. Intell. Mag., August 2024

AI for Materials Design and Discovery Using Atomistic Scale Information [Industrial and Governmental Activities].
IEEE Comput. Intell. Mag., May 2024

Transferring predictions of formation energy across lattices of increasing size.
Mach. Learn. Sci. Technol., 2024

Scalable Training of Graph Foundation Models for Atomistic Materials Modeling: A Case Study with HydraGNN.
CoRR, 2024

Scaling Ensembles of Data-Intensive Quantum Chemical Calculations for Millions of Molecules.
Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2024

MDLoader: A Hybrid Model-driven Data Loader for Distributed Deep Neural Networks Training.
Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2024

2023
Stable parallel training of Wasserstein conditional generative adversarial neural networks.
J. Supercomput., 2023

Hierarchical Model Reduction Driven by Machine Learning for Parametric Advection-Diffusion-Reaction Problems in the Presence of Noisy Data.
J. Sci. Comput., 2023

DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies.
CoRR, 2023

A deep learning approach for adaptive zoning.
CoRR, 2023

DDStore: Distributed Data Store for Scalable Training of Graph Neural Networks on Large Atomistic Modeling Datasets.
Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, 2023

2022
Multi-task graph neural networks for simultaneous prediction of global and atomic properties in ferromagnetic systems.
Mach. Learn. Sci. Technol., 2022

Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules.
J. Cheminformatics, 2022

A deep learning approach to solve forward differential problems on graphs.
CoRR, 2022

A deep learning approach for detection and localization of leaf anomalies.
CoRR, 2022

Multi-task graph neural networks for simultaneous prediction of global and atomic properties in ferromagnetic systems.
CoRR, 2022

Computational Workflow for Accelerated Molecular Design Using Quantum Chemical Simulations and Deep Learning Models.
Proceedings of the Accelerating Science and Engineering Discoveries Through Integrated Research Infrastructure for Experiment, Big Data, Modeling and Simulation, 2022

Machine Learning for First Principles Calculations of Material Properties for Ferromagnetic Materials.
Proceedings of the Accelerating Science and Engineering Discoveries Through Integrated Research Infrastructure for Experiment, Big Data, Modeling and Simulation, 2022

2021
Scalable balanced training of conditional generative adversarial neural networks on image data.
J. Supercomput., 2021

A scalable algorithm for the optimization of neural network architectures.
Parallel Comput., 2021

Stable Anderson Acceleration for Deep Learning.
CoRR, 2021

Fast and Accurate Predictions of Total Energy for Solid Solution Alloys with Graph Convolutional Neural Networks.
Proceedings of the Driving Scientific and Engineering Discoveries Through the Integration of Experiment, Big Data, and Modeling and Simulation, 2021

Stable Parallel Training of Wasserstein Conditional Generative Adversarial Neural Networks.
Proceedings of the International Conference on Computational Science and Computational Intelligence, 2021

2020
A parallel strategy for density functional theory computations on accelerated nodes.
Parallel Comput., 2020

2019
Convergence analysis of Anderson-type acceleration of Richardson's iteration.
Numer. Linear Algebra Appl., 2019

A greedy constructive algorithm for the optimization of neural network architectures.
CoRR, 2019

2017
Analysis of Monte Carlo accelerated iterative methods for sparse linear systems.
Numer. Linear Algebra Appl., 2017
