A survey of safety and trustworthiness of large language models through the lens of verification and validation X Huang, W Ruan, W Huang, G Jin, Y Dong, C Wu, S Bensalem, R Mu, ... arXiv preprint arXiv:2305.11391, 2023 | 38 | 2023 |
A hierarchical HAZOP-like safety analysis for learning-enabled systems Y Qi, PR Conmy, W Huang, X Zhao, X Huang arXiv preprint arXiv:2206.10216, 2022 | 8 | 2022 |
STPA for learning-enabled systems: a survey and a new practice Y Qi, Y Dong, S Khastgir, P Jennings, X Zhao, X Huang 2023 IEEE 26th International Conference on Intelligent Transportation …, 2023 | 6* | 2023 |
Safety analysis in the era of large language models: a case study of STPA using ChatGPT Y Qi, X Zhao, S Khastgir, X Huang arXiv preprint arXiv:2304.01246, 2023 | 5 | 2023 |
Building Guardrails for Large Language Models Y Dong, R Mu, G Jin, Y Qi, J Hu, X Zhao, J Meng, W Ruan, X Huang arXiv preprint arXiv:2402.01822, 2024 | 2 | 2024 |
Direct Training Needs Regularisation: Anytime Optimal Inference Spiking Neural Network D Wu, Y Qi, K Cai, G Jin, X Yi, X Huang arXiv preprint arXiv:2405.00699, 2024 | | 2024 |