Ofsted will train inspectors on artificial intelligence use and explore how the technology can help the school watchdog to make “better decisions”.

The government requested regulators and agencies set out their strategic approach to AI by the end of April.

In its response, Ofsted said it already used AI, including in its risk assessment of “good” schools, to help decide whether to carry out full graded inspections or short ungraded visits.

But Ofsted is “also exploring how AI can help us to make better decisions based on the information we hold”, to work “more efficiently” and “further improve” how it inspects and regulates.

The biggest benefits from AI could include assessing risk, working more efficiently through automation and making best use of the data – particularly text.

It will also “develop inspectors’ knowledge” about the technology so they “have the knowledge and skills to consider AI and its different uses”.

Ofsted won’t inspect AI tool quality

Ofsted said it supported the use of AI by schools where it improves the care and education of children.

When inspecting, it will “consider a provider’s use” of AI “by the effect it has on the criteria set out” in its existing inspection frameworks. But “importantly” it will not directly inspect the quality of AI tools.

“It is through their application that they affect areas of provision and outcomes such as safeguarding and the quality of education,” Ofsted said.

“Leaders, therefore, are responsible for ensuring that the use of AI does not have a detrimental effect on those outcomes, the quality of their provision or decisions they take.”

Ofsted warned the effect of the new technology on children is still “poorly understood”, so it will seek to better understand providers’ use of AI and the research on its impact.

“By better understanding the effect of AI in these settings, we can consider providers’ decisions more effectively as part of our inspection and regulatory activity.”

‘Modest number’ of AI malpractice cases

Exams regulator Ofqual said there had been “modest numbers” of AI malpractice cases in coursework, with some leading to sanctions against students.

In its evidence, the regulator said it would add AI-specific categories for exam boards to report malpractice. It has also requested “detailed information” from boards on how they are managing AI-related malpractice risks.

The regulator has adopted a “precautionary principle” towards AI use, but remains open to new, compliant innovations.

But Ofqual told exam boards last year that using AI as the sole marker of work does not comply with its regulations, and that using the technology as the sole form of remote invigilation is also “unlikely” to be compliant.

It has launched an “innovation service” to help exam boards understand how their innovations meet regulatory requirements.