Frontiers of Information Technology & Electronic Engineering. 2022 Mar 18;23(2):186–206. doi: 10.1631/FITEE.2100041

A full-process intelligent trial system for smart court


Bin Wei 1,✉,#, Kun Kuang 2,✉,#, Changlong Sun 2,3, Jun Feng 4, Yating Zhang 3, Xinli Zhu 5, Jianghong Zhou 2, Yinsheng Zhai 5, Fei Wu 2
PMCID: PMC8930487

Abstract

In constructing a smart court, to provide intelligent assistance for more efficient, fair, and explainable trial proceedings, we propose a full-process intelligent trial system (FITS). In FITS, we introduce the essential tasks for constructing a smart court, including information extraction, evidence classification, question generation, dialogue summarization, judgment prediction, and judgment document generation. Specifically, the preliminary work involves extracting elements from legal texts to help the judge identify the gist of the case efficiently. With the extracted attributes, we can verify the validity of each piece of evidence by checking its consistency with the other evidence. During the trial process, we design an automatic questioning robot to assist the judge in presiding over the trial. It consists of a finite state machine that represents procedural questioning and a deep learning model that generates factual questions by encoding the context of utterances in a court debate. Furthermore, FITS summarizes, in real time, the focuses of controversy that arise during a court debate under a multi-task learning framework, and generates a summarized trial transcript in the dialogue inspectional summarization (DIS) module. To support the judge in making a decision, we adopt first-order logic to express legal knowledge and embed it in deep neural networks (DNNs) to predict judgments. Finally, we propose an attentional and counterfactual natural language generation (AC-NLG) method to generate the court's judgment.
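
To make the automatic questioning component more concrete, the following is a minimal sketch of a finite state machine for procedural questioning, in the spirit of the design described above. The states, transitions, and question templates are hypothetical illustrations introduced only for exposition; they are not the actual states or questions used in FITS, and the deep learning model that generates factual questions is omitted here.

```python
# Minimal sketch of a finite state machine (FSM) for procedural questioning.
# The states, transitions, and question templates below are hypothetical
# illustrations; they are NOT the actual states or questions used in FITS.

PROCEDURAL_FSM = {
    # state: (question asked in this state, {trigger -> next state})
    "identity_check": ("Please state your name and relationship to the case.",
                       {"done": "claims"}),
    "claims":         ("Plaintiff, please state your claims.",
                       {"done": "defense"}),
    "defense":        ("Defendant, please respond to the claims.",
                       {"done": "evidence"}),
    "evidence":       ("Please present and cross-examine the evidence.",
                       {"done": "closing"}),
    "closing":        ("Both parties, please make your closing statements.",
                       {"done": "end"}),
    "end":            (None, {}),
}


def run_procedural_questioning(start: str = "identity_check"):
    """Walk through the procedural stages, yielding one question per state."""
    state = start
    while state != "end":
        question, transitions = PROCEDURAL_FSM[state]
        yield state, question
        # In a real courtroom setting, the transition would be driven by the
        # dialogue content; here we simply follow the single 'done' trigger.
        state = transitions["done"]


if __name__ == "__main__":
    for stage, q in run_procedural_questioning():
        print(f"[{stage}] {q}")
```

In the full system described in the paper, the transition between stages would be determined from the courtroom dialogue itself, and the factual questions within each stage would be produced by the deep learning model that encodes the context of utterances.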

Key words: Intelligent trial system, Smart court, Evidence analysis, Dialogue summarization, Focus of controversy, Automatic questioning, Judgment prediction

Acknowledgements

We thank all members of the FITS project team, especially the natural language processing team. In particular, we would like to thank Xiaozhong LIU, Lin YUAN, Huasha ZHAO, Yi YANG, Tianyi WANG, Xinyu DUAN, Qiong ZHANG, Xiaojing LIU, and Feiyu GAO.

Contributors

Bin WEI, Kun KUANG, Changlong SUN, and Jun FENG discussed the organization of this paper from the perspectives of both law and computer science. Bin WEI mainly drafted Sections 1, 3, 4, and 10. Kun KUANG mainly drafted Sections 6 and 7. Changlong SUN mainly drafted Sections 2 and 9, and provided the judicial big data and technical models for the experiments in Section 8. Jun FENG mainly drafted Section 5 and conducted the experiments in Section 8. Fei WU, Xinli ZHU, and Jianghong ZHOU guided the research. All authors revised and finalized the paper.

Compliance with ethics guidelines

Bin WEI, Kun KUANG, Changlong SUN, Jun FENG, Yating ZHANG, Xinli ZHU, Jianghong ZHOU, Yinsheng ZHAI, and Fei WU declare that they have no conflict of interest.

Footnotes

Project supported by the Key R&D Projects of the Ministry of Science and Technology of China (No. 2020YFC0832500), the National Key Research and Development Program of China (No. 2018AAA0101900), the National Social Science Foundation of China (No. 20&ZD047), the National Natural Science Foundation of China (Nos. 61625107 and 62006207), the Key R&D Project of Zhejiang Province, China (No. 2020C01060), and the Fundamental Research Funds for the Central Universities, China (Nos. LQ21F020020 and 2020XZA202)

These authors contributed equally to this work

Contributor Information

Bin Wei, Email: srsysj@zju.edu.cn.

Kun Kuang, Email: kunkuang@zju.edu.cn.

Changlong Sun, Email: changlong.scl@taobao.com.

Jun Feng, Email: JuneFeng.81@gmail.com.

Fei Wu, Email: wufei@zju.edu.cn.

