Volume 26, Issue 10, 2025
  • Special Feature on Theories and Applications of Financial Large Models


    Shuoling LIU, Xiaojun ZENG, Xiu LI, Qiang YANG

    Vol. 26, Issue 10, Pages: 1767-1770(2025) DOI: 10.1631/FITEE.2520000
      
    Updated:2025-11-18

    Large investment model

    In the field of quantitative investment, the large investment model (LIM) has been introduced to address the challenges of diminishing returns and increasing labor and time costs. The authors establish the LIM system, which employs end-to-end learning and universal modeling to create an upstream foundation model capable of autonomously learning comprehensive signal patterns from diverse financial data. These "global patterns" are then transferred to downstream strategy modeling, optimizing performance for specific tasks. This offers a path to enhancing both performance and efficiency at scale in quantitative investment research.

    Jian GUO, Heung-Yeung SHUM

    Vol. 26, Issue 10, Pages: 1771-1792(2025) DOI: 10.1631/FITEE.2500268
    Abstract: Traditional quantitative investment research is encountering diminishing returns alongside rising labor and time costs. To overcome these challenges, we introduce the large investment model (LIM), a novel research paradigm designed to enhance both performance and efficiency at scale. LIM employs end-to-end learning and universal modeling to create an upstream foundation model, which is capable of autonomously learning comprehensive signal patterns from diverse financial data spanning multiple exchanges, instruments, and frequencies. These "global patterns" are subsequently transferred to downstream strategy modeling, optimizing performance for specific tasks. We detail the system architecture design of LIM, address the technical challenges inherent in this approach, and outline potential directions for future research.
    Keywords: Artificial general intelligence; End-to-end; Large investment model; Quantitative investment; Foundation model; Multimodal large language model

    In the field of financial large language models, a comprehensive survey explores the interaction between knowledge distillation and FinLLMs. The authors establish a structured taxonomy and a comprehensive evaluation framework, providing a clear roadmap to accelerate the development of distilled FinLLMs.

    Jiaqi SHI, Xulong ZHANG, Xiaoyang QU, Junfei XIE, Jianzong WANG

    Vol. 26, Issue 10, Pages: 1793-1808(2025) DOI: 10.1631/FITEE.2500282
    Abstract: Financial large language models (FinLLMs) offer immense potential for financial applications, but excessive deployment expenditure and considerable inference latency constitute major obstacles. As a prominent compression methodology, knowledge distillation (KD) offers an effective solution to these difficulties. This work conducts a comprehensive survey of how KD interacts with FinLLMs, covering three core aspects: strategy, application, and evaluation. At the strategy level, the review introduces a structured taxonomy to comparatively analyze existing distillation pathways. At the application level, it puts forward a logical upstream–midstream–downstream framework to systematically explain the practical value of distilled models in the financial field. At the evaluation level, to tackle the absence of standards in the financial field, it constructs a comprehensive evaluation framework spanning multiple dimensions such as financial accuracy, reasoning fidelity, and robustness. In summary, this research aims to provide a clear roadmap for this interdisciplinary field and to accelerate the development of distilled FinLLMs.
    Keywords: Financial large language models (FinLLMs); Knowledge distillation; Model compression; Quantitative trading
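    The logit-matching objective that underlies much of the distillation literature surveyed here can be sketched as follows. This is the generic temperature-scaled formulation popularized by Hinton et al., not a specific pathway from the paper; all names are illustrative:

    ```python
    import numpy as np

    def softmax(z, T=1.0):
        """Temperature-scaled softmax over the last axis."""
        z = np.asarray(z, dtype=float) / T
        z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        """KL(teacher || student) between temperature-softened distributions,
        scaled by T^2 so gradients stay comparable across temperatures."""
        p = softmax(teacher_logits, T)  # soft targets from the teacher
        q = softmax(student_logits, T)  # student predictions
        kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
        return float(np.mean(kl) * T**2)
    ```

    A higher temperature exposes the teacher's "dark knowledge" (relative probabilities of wrong classes), which is what makes a distilled FinLLM cheaper to run while retaining much of the teacher's behavior.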

    A survey on large language model-based alpha mining

    In the field of quantitative research, this study presents a structured review of emerging LLM-based alpha mining systems, pointing toward a redefinition of quantitative research. The analysis suggests that LLMs serve as a scalable interface for amplifying both domain expertise and algorithmic rigor.

    Junjie ZHANG, Shuoling LIU, Tongzhe ZHANG, Yuchen SHI

    Vol. 26, Issue 10, Pages: 1809-1821(2025) DOI: 10.1631/FITEE.2500386
    Abstract: Alpha mining, which refers to the systematic discovery of data-driven signals predictive of future cross-sectional returns, is a central task in quantitative research. Recent progress in large language models (LLMs) has sparked interest in LLM-based alpha mining frameworks, which offer a promising middle ground between human-guided and fully automated alpha mining approaches and deliver both speed and semantic depth. This study presents a structured review of emerging LLM-based alpha mining systems from an agentic perspective, and analyzes the functional roles of LLMs, ranging from miners and evaluators to interactive assistants. Despite early progress, key challenges remain, including simplified performance evaluation, limited numerical understanding, lack of diversity and originality, weak exploration dynamics, temporal data leakage, and black-box risks and compliance challenges. Accordingly, we outline future directions, including improving reasoning alignment, expanding to new data modalities, rethinking evaluation protocols, and integrating LLMs into more general-purpose quantitative systems. Our analysis suggests that LLMs serve as a scalable interface for amplifying both domain expertise and algorithmic rigor: they amplify domain expertise by transforming qualitative hypotheses into testable factors, and enhance algorithmic rigor through rapid backtesting and semantic reasoning. The result is a complementary paradigm, where intuition, automation, and language-based reasoning converge to redefine the future of quantitative research.
    Keywords: Alpha mining; Quantitative investment; Large language models (LLMs); LLM agents; FinTech
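    "Transforming qualitative hypotheses into testable factors" can be made concrete with the rank information coefficient (IC), the standard cross-sectional yardstick in alpha mining. A minimal sketch, assuming illustrative data rather than anything from the paper:

    ```python
    import numpy as np

    def _ranks(x):
        """Ordinal ranks 0..n-1 (ties broken by argsort order)."""
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(len(x))
        return r

    def information_coefficient(factor, fwd_returns):
        """Spearman rank IC: Pearson correlation between the cross-sectional
        ranks of a factor and the ranks of next-period returns."""
        rf, rr = _ranks(factor), _ranks(fwd_returns)
        rf -= rf.mean()
        rr -= rr.mean()
        return float((rf @ rr) / np.sqrt((rf @ rf) * (rr @ rr)))
    ```

    An LLM-proposed hypothesis such as "recent winners keep winning" becomes a concrete factor (e.g. trailing one-month return), and its IC against realized forward returns is the first quantitative test it must pass before backtesting.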

    In the field of financial large language models, researchers introduce AnalyScore and Stocksis to address the lack of standardized evaluation metrics and insufficient analytical depth. FinSphere, an AI agent built on these resources, generates professional-grade stock analysis reports, outperforming both general-purpose LLMs and domain-specific FinLLMs.

    Shijie HAN, Jingshu ZHANG, Yiqing SHEN, Kaiyuan YAN, Hongguang LI

    Vol. 26, Issue 10, Pages: 1822-1831(2025) DOI: 10.1631/FITEE.2500414
    Abstract: Current financial large language models (FinLLMs) exhibit two major limitations: the absence of standardized evaluation metrics for stock analysis quality and insufficient analytical depth. We address these limitations with two contributions. First, we introduce AnalyScore, a systematic framework for evaluating the quality of stock analysis. Second, we construct Stocksis, an expert-curated dataset designed to enhance the financial analysis capabilities of large language models (LLMs). Building on Stocksis, together with a novel integration framework and quantitative tools, we develop FinSphere, an artificial intelligence (AI) agent that generates professional-grade stock analysis reports. Evaluations with AnalyScore show that FinSphere consistently surpasses general-purpose LLMs, domain-specific FinLLMs, and existing agent-based systems, even when the latter are enhanced with real-time data access and few-shot guidance. The findings highlight FinSphere's significant advantages in analytical quality and real-world applicability.
    Keywords: Large language model (LLM); Instruction-tuned financial LLM; Real-time stock analysis; Evaluation framework and dataset

    In the financial industry, the development of large language models (LLMs) has created transformative opportunities, especially in financial trading. The authors establish an intelligent trade order recognition pipeline that addresses the challenge of integrating LLMs with trading systems.

    Yu KANG, Xin YANG, Ge WANG, Yuda WANG, Zhanyu WANG, Mingwen LIU

    Vol. 26, Issue 10, Pages: 1832-1846(2025) DOI: 10.1631/FITEE.2500285
    Abstract: The development of large language models (LLMs) has created transformative opportunities for the financial industry, especially in the area of financial trading. However, how to integrate LLMs with trading systems remains a challenge. To address this problem, we propose an intelligent trade order recognition pipeline that enables the conversion of trade orders into a standard format for trade execution. The system improves the ability of human traders to interact with trading platforms while addressing the problem of misinformation acquisition in trade execution. In addition, we create a trade order dataset of 500 entries to simulate real-world trading scenarios. Moreover, we design several metrics to provide a comprehensive assessment of dataset reliability and the generative capability of large models in finance, evaluating five state-of-the-art LLMs on our dataset. The results show that most models generate syntactically valid JavaScript object notation (JSON) at high rates (about 80%–99%) and initiate clarifying questions in nearly all incomplete cases (about 90%–100%). However, end-to-end accuracy remains low (about 6%–14%), and missing information is substantial (about 12%–66%). Models also tend to over-interrogate: roughly 70%–80% of follow-ups are unnecessary, raising interaction costs and potential information-exposure risk. The research also demonstrates the feasibility of integrating our pipeline with real-world trading systems, paving the way for practical deployment of LLM-based trade automation solutions.
    Keywords: Large language model; Financial instruction; Evaluation; Dataset construction
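    The JSON-validity, end-to-end-accuracy, and missing-information metrics described above can be sketched as follows. The required fields are an assumed schema for illustration, not the paper's actual order format:

    ```python
    import json

    # Hypothetical minimal order schema; the paper's real format may differ.
    REQUIRED_FIELDS = {"symbol", "side", "quantity", "order_type"}

    def evaluate_outputs(model_outputs, gold_orders):
        """Score a list of raw LLM output strings against gold order dicts:
        JSON-validity rate, exact-match (end-to-end) accuracy, and the
        average number of required fields missing per output."""
        valid = exact = missing = 0
        for out, gold in zip(model_outputs, gold_orders):
            try:
                parsed = json.loads(out)
            except json.JSONDecodeError:
                continue  # syntactically invalid JSON: counts against validity
            valid += 1
            missing += len(REQUIRED_FIELDS - parsed.keys())
            if parsed == gold:
                exact += 1
        n = len(model_outputs)
        return {"json_valid_rate": valid / n,
                "end_to_end_accuracy": exact / n,
                "missing_fields_per_output": missing / n}
    ```

    This separation mirrors the paper's finding that syntactic validity can be high while end-to-end accuracy stays low: a model can emit well-formed JSON that still omits or mangles the fields needed for execution.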

Videos

  • 2023 Issue 1 | Scalability and efficiency challenges for the exascale supercomputing system: practice of a parallel supporting environment on the Sunway exascale prototype system (00:02:51, 2023-12-30)
  • 2023 Issue 6 | Model division multiple access for semantic communications (00:02:30, 2023-12-30)
  • 2022 Issue 10 | Discussion on a new paradigm of endogenous security towards 6G networks (00:02:15, 2023-12-30)
  • 2022 Issue 12 | Technology trends in large-scale high-efficiency network computing (00:02:22, 2023-12-30)
  • 2022 Issue 6 | Self-deployed execution environment for high performance computing (00:02:48, 2022-08-03)
  • 2022 Issue 2 | A full-process intelligent trial system for smart court (00:02:24, 2022-05-17)
  • 2022 Issue 3 | Automatic protocol reverse engineering for industrial control systems with dynamic taint analysis (00:02:37, 2022-05-17)
  • P1 Speech by Academician Baoyan Duan (00:05:36, 2022-04-17)
  • P2 Speech by Professor Min Sheng, Xidian University (00:02:27, 2022-04-17)
  • P3 Speech by Professor Yunsong Li, Xidian University (00:02:37, 2022-04-17)
