Volume 26, Issue 8, 2025
  • Regular Papers

    Xiaosong ZHANG, Yukun ZHU, Xiong LI, Yongzhao ZHANG, Weina NIU, Fenghua XU, Junpeng HE, Ran YAN, Shiping HUANG

    Vol. 26, Issue 8, Pages: 1243-1278(2025) DOI: 10.1631/FITEE.2500053
    Abstract: Noncooperative computer systems and network confrontation present a core challenge in cyberspace security. Traditional cybersecurity technologies predominantly rely on passive response mechanisms, which exhibit significant limitations when addressing real-world complex and unknown threats. This paper introduces the concept of "active cybersecurity," aiming to enhance network security not only through technical measures but also by leveraging strategy-level defenses. The core assumption of this concept is that attackers and defenders, in the context of network confrontations, act as rational decision-makers seeking to maximize their respective objectives. Building on this assumption, this paper integrates game theory to analyze the interdependent relationships between attackers and defenders, thereby optimizing their strategies. Guided by this foundational idea, we propose an active cybersecurity model involving intelligent threat sensing, in-depth behavior analysis, comprehensive path profiling, and dynamic countermeasures, termed SAPC, designed to foster an integrated defense capability encompassing threat perception, analysis, tracing, and response. At its core, SAPC incorporates theoretical analyses of adversarial behavior and the optimization of corresponding strategies informed by game theory. By profiling adversaries and modeling confrontation as a "game," the model establishes a comprehensive framework that provides both theoretical insights into and practical guidance for cybersecurity. The proposed active cybersecurity model marks a transformative shift from passive defense to proactive perception and confrontation. It facilitates the evolution of cybersecurity technologies toward a new paradigm characterized by active prediction, prevention, and strategic guidance.
    Keywords: Active cybersecurity; Intelligent threat sensing; In-depth behavior analysis; Comprehensive path profiling; Dynamic countermeasures
    Updated:2025-09-04
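
To make the game-theoretic core described in the abstract concrete, here is a purely illustrative Python sketch (not taken from the paper): a toy attacker-defender game with hypothetical strategy names and payoffs, solved by the defender's minimax rule so that the chosen defense maximizes the worst-case outcome against a rational attacker.

```python
# Hypothetical illustration (not from the paper): a tiny attacker-defender
# game solved by minimax over pure strategies, in the spirit of the
# game-theoretic strategy optimization that SAPC builds on.

# Rows: defender strategies, columns: attacker strategies.
# Entries are made-up defender payoffs (higher is better for the defender).
DEFENDER_STRATEGIES = ["patch", "deceive", "isolate"]
ATTACKER_STRATEGIES = ["exploit", "phish", "lateral_move"]
PAYOFF = [
    [ 3, -1,  0],   # patch
    [ 1,  2, -2],   # deceive
    [-1,  0,  2],   # isolate
]

def defender_minimax(payoff):
    """Pick the defender strategy whose worst-case payoff, against a
    rational attacker's best response, is largest."""
    best_row, best_value = None, float("-inf")
    for i, row in enumerate(payoff):
        worst_case = min(row)   # a rational attacker minimizes the defender's payoff
        if worst_case > best_value:
            best_row, best_value = i, worst_case
    return best_row, best_value

if __name__ == "__main__":
    i, v = defender_minimax(PAYOFF)
    print(f"Defender plays '{DEFENDER_STRATEGIES[i]}' with guaranteed payoff {v}")
```
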
  • Regular Papers

    Sisi SHAO, Zhibo HE, Shangdong LIU, Weili ZHANG, Fei WU, Fukang ZENG, Jun ZUO, Longfei ZHOU, Yukun NIU, Yimu JI

    Vol. 26, Issue 8, Pages: 1279-1292(2025) DOI: 10.1631/FITEE.2400251
    Abstract: Mimic active defense technology effectively disrupts attack routes and reduces the probability of successful attacks by using a dynamic heterogeneous redundancy (DHR) architecture. However, current approaches often overlook the adaptability of the adjudication mechanism in complex and variable network environments, focusing primarily on system security while neglecting performance considerations. To address these limitations, we propose a DHR architecture based on output difference feedback and system benefit control. This architecture introduces an adjudication mechanism based on output difference feedback, which enhances adaptability by considering the impact of each executor's output deviation on the global decision. Additionally, the architecture incorporates a scheduling strategy based on system benefit, which models the quality of service and switching overhead as a bi-objective optimization problem, balancing security with reduced computational costs and system overhead. Simulation results demonstrate that our architecture improves adaptability to different network environments and effectively reduces both the attack success rate and the average failure rate.
    Keywords: Mimic defense; Adjudication mechanism; Scheduling strategy; Executor output difference; System benefit
    Updated:2025-09-04
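
As a rough illustration of the output-difference-feedback idea (a hedged sketch under our own assumptions, not the authors' implementation), the following Python snippet weights each executor's vote by its historical deviation from the adjudicated result, so executors that have diverged more often contribute less to future decisions. The class and executor names are hypothetical.

```python
# Hypothetical sketch: weighted-majority adjudication with deviation feedback,
# echoing (but not reproducing) the paper's output-difference-feedback mechanism.
from collections import defaultdict

class FeedbackAdjudicator:
    def __init__(self, executor_ids):
        # Running count of how often each executor disagreed with the decision.
        self.deviations = {e: 0 for e in executor_ids}

    def adjudicate(self, outputs):
        """outputs: dict executor_id -> output value. Returns the weighted-majority value."""
        scores = defaultdict(float)
        for e, value in outputs.items():
            # Executors that deviated more in the past get a smaller weight.
            weight = 1.0 / (1.0 + self.deviations[e])
            scores[value] += weight
        decision = max(scores, key=scores.get)
        for e, value in outputs.items():   # feed the output difference back
            if value != decision:
                self.deviations[e] += 1
        return decision

adj = FeedbackAdjudicator(["ex1", "ex2", "ex3"])
print(adj.adjudicate({"ex1": "A", "ex2": "A", "ex3": "B"}))  # -> "A"
```
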
  • Regular Papers

    Shaowu XU, Xibin JIA, Qianmei SUN, Jing CHANG

    Vol. 26, Issue 8, Pages: 1293-1304(2025) DOI: 10.1631/FITEE.2500164
    Abstract: Temporal attention mechanisms are essential for video action recognition, enabling models to focus on semantically informative moments. However, these models frequently exhibit temporal infidelity, i.e., misaligned attention weights caused by limited training diversity and the absence of fine-grained temporal supervision. While video-level labels provide coarse-grained action guidance, the lack of detailed constraints allows attention noise to persist, especially in complex scenarios with distracting spatial elements. To address this issue, we propose temporal fidelity enhancement (TFE), a competitive learning paradigm based on the disentangled information bottleneck (DisenIB) theory. TFE mitigates temporal infidelity by decoupling action-relevant semantics from spurious correlations through adversarial feature disentanglement. Using pre-trained representations for initialization, TFE establishes an adversarial process in which segments with elevated temporal attention compete against contexts with diminished action relevance. This mechanism ensures temporal consistency and enhances the fidelity of attention patterns without requiring explicit fine-grained supervision. Extensive studies on the UCF101, HMDB-51, and Charades benchmarks validate the effectiveness of our method, with significant improvements in action recognition accuracy.
    Keywords: Action recognition; Disentangled information bottleneck; Temporal modeling; Temporal fidelity
    Updated:2025-09-04
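
For readers unfamiliar with the mechanism whose fidelity TFE targets, the NumPy sketch below (an assumption-laden toy, not the TFE code; the function name and shapes are ours) computes softmax temporal attention weights over per-segment features and the resulting attention-pooled clip representation. TFE's contribution lies in correcting how such weights become misaligned, which this toy does not attempt.

```python
# Toy temporal attention pooling over video segments (illustrative only).
import numpy as np

def temporal_attention_pool(segment_features, query):
    """segment_features: (T, D) array, query: (D,) array.
    Returns attention weights (T,) and the attention-pooled clip feature (D,)."""
    scores = segment_features @ query / np.sqrt(segment_features.shape[1])
    scores -= scores.max()                                  # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()         # softmax over time
    pooled = weights @ segment_features                     # weighted temporal pooling
    return weights, pooled

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))    # 8 video segments, 16-dim features
query = rng.normal(size=16)
w, clip_feat = temporal_attention_pool(feats, query)
print(np.round(w, 3))
```
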
  • Regular Papers

    Maokun ZHENG, Zhi LI, Long ZHENG, Weidong WANG, Dandan LI, Guomei WANG

    Vol. 26, Issue 8, Pages: 1305-1323(2025) DOI: 10.1631/FITEE.2400766
    Abstract: Diffusion tensor imaging (DTI) is a widely used imaging technique for mapping the microstructure and structural connectivity of living human brain tissue. Recently, deep learning methods have been proposed to rapidly estimate diffusion tensors (DTs) using only a small number of diffusion-weighted (DW) images. However, these methods typically use DW images obtained with fixed q-space sampling schemes as the training data, limiting their application scenarios. To address this issue, we develop a new deep neural network called q-space-coordinate-guided diffusion tensor imaging (QCG-DTI), which can efficiently and accurately estimate DTs under flexible q-space sampling schemes. First, we propose a q-space-coordinate-embedded feature consistency strategy to ensure the correspondence between q-space coordinates and their respective DW images. Second, a q-space-coordinate fusion (QCF) module is introduced, which efficiently embeds q-space coordinates into multiscale features of the corresponding DW images by linearly adjusting the feature maps along the channel dimension, thus eliminating the dependence on fixed diffusion sampling schemes. Finally, a multiscale feature residual dense (MRD) module is proposed, which enhances the network's feature extraction and image reconstruction capabilities by using dual-branch convolutions with different kernel sizes to extract features at different scales. Compared to state-of-the-art methods that rely on a fixed sampling scheme, the proposed network can obtain high-quality diffusion tensors and derived parameters even when using DW images acquired with flexible q-space sampling schemes. Compared to state-of-the-art deep learning methods, QCG-DTI reduces the mean absolute error by approximately 15% for fractional anisotropy and around 25% for mean diffusivity.
    Keywords: Diffusion tensor imaging; Diffusion tractography; Deep learning; Fast diffusion tensor estimation; Q-space-coordinate information
    Updated:2025-09-04
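
For context, the conventional baseline that deep DTI estimators such as QCG-DTI aim to accelerate is linear least-squares tensor fitting, which naturally handles arbitrary q-space sampling because every b-value and gradient direction enters the design matrix explicitly. The NumPy sketch below implements that standard DTI fit (textbook math, not the paper's network); the synthetic test tensor at the end is made up for illustration.

```python
# Linear least-squares diffusion tensor fit: S = S0 * exp(-b * g^T D g).
import numpy as np

def fit_diffusion_tensor(signals, s0, bvals, bvecs):
    """signals: (N,) DW signals, s0: non-DW signal, bvals: (N,), bvecs: (N, 3) unit vectors.
    Returns the 3x3 symmetric diffusion tensor."""
    gx, gy, gz = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
    # Design matrix for the 6 unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz].
    B = -bvals[:, None] * np.stack(
        [gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz], axis=1)
    y = np.log(signals / s0)
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = d
    return np.array([[Dxx, Dxy, Dxz], [Dxy, Dyy, Dyz], [Dxz, Dyz, Dzz]])

# Synthetic check with a made-up prolate tensor and 30 random gradient directions.
rng = np.random.default_rng(1)
bvecs = rng.normal(size=(30, 3)); bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(30, 1000.0)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
signals = 1.0 * np.exp(-bvals * np.einsum("ij,jk,ik->i", bvecs, D_true, bvecs))
print(np.round(fit_diffusion_tensor(signals, 1.0, bvals, bvecs), 6))
```
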
  • Regular Papers

    Zuyi WANG, Zhimeng ZHENG, Jun MENG, Li XU

    Vol. 26, Issue 8, Pages: 1324-1340(2025) DOI: 10.1631/FITEE.2400960
    Abstract: End-to-end object detection methods have attracted extensive interest recently because they alleviate the need for complicated human-designed components and simplify the detection pipeline. However, these methods suffer from slower training convergence and inferior detection performance compared to conventional detectors, as their feature fusion and selection processes are constrained by insufficient positive supervision. To address this issue, we introduce a novel query-selection encoder (QSE) designed for end-to-end object detectors to improve the training convergence speed and detection accuracy. QSE is composed of multiple encoder layers stacked on top of the backbone. A lightweight head network is added after each encoder layer to continuously optimize features in a cascading manner, providing more positive supervision for efficient training. Additionally, a hierarchical feature-aware attention (HFA) mechanism is incorporated in each encoder layer, including in- and cross-level feature attention, to enhance the interaction between features from different levels. HFA can effectively suppress similar feature representations and highlight discriminative ones, thereby accelerating the feature selection process. Our method is highly versatile, accommodating both CNN- and Transformer-based detectors. Extensive experiments were conducted on the popular benchmark datasets MS COCO, CrowdHuman, and PASCAL VOC to demonstrate the effectiveness of our method. The results showed that CNN- and Transformer-based detectors using QSE achieve better end-to-end performance in fewer training epochs.
    Keywords: End-to-end object detection; Query-selection encoder; Hierarchical feature-aware attention
    Updated:2025-09-04
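
The cascaded-supervision idea, a lightweight head after every encoder layer so that each layer receives positive supervision, can be sketched in PyTorch as follows. This is our own hedged reading with assumed dimensions, generic transformer layers, and a plain classification head; it is not the QSE/HFA implementation.

```python
# Illustrative cascade of encoder layers with per-layer auxiliary heads
# (assumed shapes; not the released QSE code).
import torch
import torch.nn as nn

class CascadedEncoder(nn.Module):
    def __init__(self, dim=256, num_layers=3, num_classes=80):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
            for _ in range(num_layers))
        # One lightweight head per encoder layer supplies extra positive supervision.
        self.heads = nn.ModuleList(nn.Linear(dim, num_classes) for _ in range(num_layers))

    def forward(self, tokens):
        aux_logits = []
        for layer, head in zip(self.layers, self.heads):
            tokens = layer(tokens)
            aux_logits.append(head(tokens))   # per-layer predictions for auxiliary losses
        return tokens, aux_logits

tokens = torch.randn(2, 100, 256)             # (batch, backbone tokens, channels)
feats, aux = CascadedEncoder()(tokens)
print(feats.shape, len(aux))
```

During training, a detection loss computed on each element of aux_logits (summed with the final loss) is what provides the "more positive supervision" the abstract refers to.
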
  • Regular Papers

    Changtong ZAN, Liang DING, Li SHEN, Yibing ZHAN, Xinghao YANG, Weifeng LIU

    Vol. 26, Issue 8, Pages: 1341-1355(2025) DOI: 10.1631/FITEE.2400458
    Abstract: Large language models (LLMs) exhibit remarkable capabilities in various natural language processing tasks, such as machine translation. However, the large number of LLM parameters incurs significant costs during inference. Previous studies have attempted to train translation-tailored LLMs of moderate size by fine-tuning them on translation data. Nevertheless, when performing translation in zero-shot directions that are absent from the fine-tuning data, the problem of ignoring instructions and thus producing translations in the wrong language (i.e., the off-target translation issue) remains unresolved. In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability of translation-tailored LLMs, particularly for maintaining accurate translation directions. We first fine-tune LLMs on translation data to elicit basic translation capabilities. In the second stage, we construct instruction-conflicting samples by randomly replacing the instructions with incorrect ones. Then, we introduce an extra unlikelihood loss to reduce the probability assigned to those samples. Experiments on two benchmarks using the LLaMA 2 and LLaMA 3 models, spanning 16 zero-shot directions, demonstrate that, compared to the competitive baseline (translation-fine-tuned LLaMA), our method effectively reduces the off-target translation ratio (by up to 62.4 percentage points), thus improving translation quality (by up to +9.7 bilingual evaluation understudy (BLEU) points). Analysis shows that our method preserves the model's performance on other tasks, such as supervised translation and general tasks. Code is released at https://github.com/alphadl/LanguageAware_Tuning.
    Keywords: Zero-shot machine translation; Off-target issue; Large language model; Language-aware instruction tuning; Instruction-conflicting sample
    Updated:2025-09-04
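
The unlikelihood term described in the abstract can be written compactly: for an instruction-conflicting sample, the loss penalizes the probability the model still assigns to the wrongly instructed target tokens. The PyTorch sketch below uses assumed tensor shapes and a hypothetical pad_id; it is our reading of the idea, not the authors' released code at https://github.com/alphadl/LanguageAware_Tuning.

```python
# Token-level unlikelihood loss on instruction-conflicting targets:
# L = -log(1 - p(target token)) averaged over non-pad positions (assumed shapes).
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, target_ids, pad_id=0, eps=1e-6):
    """logits: (B, T, V), target_ids: (B, T) tokens of the conflicting sample."""
    log_probs = F.log_softmax(logits, dim=-1)
    p_target = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1).exp()
    loss = -torch.log((1.0 - p_target).clamp_min(eps))   # push p(target) down
    mask = (target_ids != pad_id).float()
    return (loss * mask).sum() / mask.sum().clamp_min(1.0)

logits = torch.randn(2, 5, 32000)                  # toy batch of decoder logits
targets = torch.randint(1, 32000, (2, 5))          # toy conflicting-sample targets
print(unlikelihood_loss(logits, targets).item())
```

In training, this term would be added to the ordinary cross-entropy loss on well-formed samples, so the model keeps its translation ability while learning to withhold probability from off-target outputs.
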

Videos

  • 2023 Issue 1 | Scalability and efficiency challenges for the exascale supercomputing system: practice of a parallel supporting environment on the Sunway exascale prototype system (00:02:51) | 2023-12-30 | Plays: 23
  • 2023 Issue 6 | Model division multiple access for semantic communications (00:02:30) | 2023-12-30 | Plays: 13
  • 2022 Issue 10 | Discussion on a new paradigm of endogenous security towards 6G networks (00:02:15) | 2023-12-30 | Plays: 2
  • 2022 Issue 12 | Technology trends in large-scale high-efficiency network computing (00:02:22) | 2023-12-30 | Plays: 2
  • 2022 Issue 6 | Self-deployed execution environment for high performance computing (00:02:48) | 2022-08-03 | Plays: 8
  • 2022 Issue 2 | A full-process intelligent trial system for smart court (00:02:24) | 2022-05-17 | Plays: 8
  • 2022 Issue 3 | Automatic protocol reverse engineering for industrial control systems with dynamic taint analysis (00:02:37) | 2022-05-17 | Plays: 5
  • P1 Speech by Academician Baoyan Duan (00:05:36) | 2022-04-17 | Plays: 11
  • P2 Speech by Professor Min Sheng, Xidian University (00:02:27) | 2022-04-17 | Plays: 6
  • P3 Speech by Professor Yunsong Li, Xidian University (00:02:37) | 2022-04-17 | Plays: 11