Memory-efficient tensor parallelism for long-sequence Transformer training
Regular Papers | Updated: 2025-06-09
    • Enhanced Publication
    • Chinese title: 面向长序列Transformer训练的内存高效张量并行方法 (Memory-efficient tensor parallelism method for long-sequence Transformer training)
    • Frontiers of Information Technology & Electronic Engineering, Vol. 26, Issue 5, Pages: 770-787 (2025)
    • DOI: 10.1631/FITEE.2400602
    • CLC: TP183
    • Received: 17 July 2024; Revised: 23 February 2025; Published online: 2 April 2025; Published: May 2025


  • Peng LIANG, Linbo QIAO, Yanqi SHI, et al. Memory-efficient tensor parallelism for long-sequence Transformer training[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(5): 770-787. DOI: 10.1631/FITEE.2400602.
