Huang, Qiongyan and Xia, Yuhan and Long, Yunfei and Fang, Hui and Liang, Ruiwei and Guan, Yin and Xu, Ge (2025) Prompt4LJP: prompt learning for legal judgment prediction. Journal of Supercomputing, 81 (2). DOI https://doi.org/10.1007/s11227-025-06945-0
Abstract
The task of legal judgment prediction (LJP) involves predicting court decisions based on the facts of the case, including identifying the applicable law article, the charge, and the term of penalty. While neural methods have made significant strides in this area, they often fail to fully harness the rich semantic potential of language models (LMs). Prompt learning is a novel paradigm in natural language processing (NLP) that reformulates downstream tasks into cloze-style or prefix-style prediction challenges by utilizing specialized prompt templates. This paradigm shows significant potential across various NLP domains, including short text classification. However, the dynamic word lengths of LJP labels present a challenge to the general prompt templates designed for single-word [MASK] tokens commonly used in many NLP tasks. To address this gap, we introduce the Prompt4LJP framework, a new method based on the prompt learning paradigm for the complex LJP task. Our framework employs a dual-slot prompt template in conjunction with a correlation scoring mechanism to maximize the utility of LMs without requiring additional resources or complex tokenization schemes. Specifically, the dual-slot template consists of two distinct slots: one dedicated to factual descriptions and the other to labels. This approach effectively tackles the challenge of dynamic word lengths in LJP labels, reformulating the LJP classification task as an evaluation of the applicability of each label. By incorporating a correlation scoring mechanism, we can identify the final result label. The experimental results show that our Prompt4LJP method, whether using discrete or continuous templates, outperforms baseline methods, particularly in charge and term of penalty prediction. Compared to the best baseline model EPM, Prompt4LJP achieves F1-score improvements of 2.25% and 4.76% (for charge prediction and term of penalty prediction, respectively) with discrete templates, and 3.24% and 4.05% with the continuous template, demonstrating Prompt4LJP's ability to leverage pretrained knowledge and adapt flexibly to specific tasks. The source code can be obtained from https://github.com/huangqiongyannn/Prompt4LJP.
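To make the dual-slot idea concrete, the sketch below shows one way such a template could be scored with an off-the-shelf masked LM: the fact description and each full (possibly multi-word) label fill their own slots, so the [MASK] position only ever has to predict a single verbalizer word, and classification reduces to picking the label whose prompt scores highest. This is a minimal illustration, not the paper's implementation; the model name, template wording, "yes"/"no" verbalizer, and helper names are all assumptions, and the authoritative code is in the linked repository.

```python
# Minimal sketch of a dual-slot, label-applicability prompt (illustrative only).
# Assumptions: a BERT-style masked LM, an English template, and a yes/no verbalizer;
# the published Prompt4LJP code (https://github.com/huangqiongyannn/Prompt4LJP)
# defines the actual templates and correlation scoring mechanism.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def applicability_score(fact: str, label: str) -> float:
    """Score how applicable `label` is to `fact` via a cloze-style prompt.

    The fact and the full label each occupy their own slot, so the [MASK]
    token predicts only a one-word verbalizer and the dynamic length of
    LJP labels never touches the masked position.
    """
    prompt = f"Facts: {fact} Is the charge {label} applicable? {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    probs = torch.softmax(logits[0, mask_pos[0]], dim=-1)
    yes_id = tokenizer.convert_tokens_to_ids("yes")
    no_id = tokenizer.convert_tokens_to_ids("no")
    # Applicability score: relative probability mass on the positive verbalizer token.
    return (probs[yes_id] / (probs[yes_id] + probs[no_id])).item()


def predict(fact: str, candidate_labels: list[str]) -> str:
    """Return the candidate label whose prompt scores highest for this fact."""
    return max(candidate_labels, key=lambda lbl: applicability_score(fact, lbl))


if __name__ == "__main__":
    fact = "The defendant took goods from a store without paying and resisted arrest."
    print(predict(fact, ["theft", "robbery", "intentional injury"]))
```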
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Natural language processing; Legal judgment prediction; Prompt learning; Legal application; Masked language model |
| Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 01 May 2025 14:49 |
| Last Modified: | 01 May 2025 14:49 |
| URI: | http://repository.essex.ac.uk/id/eprint/40455 |
Available files
Filename: Prompt4LJP prompt learning for legal judgment prediction.pdf
Embargo Date: 22 January 2026