NASUWT has set out six principles for the ethical development and application of artificial intelligence and digital technologies in education.

Introduction
NASUWT principles

  1. A public good and human right
  2. Promoting human expertise, human review and human interaction
  3. Safety and privacy
  4. Protecting teachers' jobs and securing workers' rights
  5. Promoting equality, diversity and inclusion
  6. Strategic vision for the use of AI

Some key messages
Further information

Introduction

NASUWT has established six principles for the ethical design, development, procurement and application of AI-enabled technologies.

These principles underpin our advice and guidance on AI-enabled technologies.

They are also intended to support teachers, leaders and NASUWT representatives to make informed judgements about whether, when and how AI-enabled technologies are used in schools and education settings.

NASUWT principles

A public good and human right
  1. Education is a public good and a human right. AI-enabled technologies must support education goals and the values of a democratic society.

This means:

  • High-quality, inclusive and equitable education for all.

  • Recruiting, developing and retaining a high-quality teaching workforce.

  • Decisions about whether, when and how AI-enabled technologies are used are based on the needs of learners and the professional judgements of teachers, not on commercial interests.

  • AI-enabled technologies are designed, developed, procured and implemented in ways that respect democratic values such as fairness, justice, transparency, accountability and sustainability.

  • Teachers and leaders are supported to make informed decisions about the ethical use of AI-enabled technologies.

Promoting human expertise, human review and human interaction
  2. AI-enabled technologies are designed, developed and implemented in ways that promote human expertise, human review and human interaction.

This means:

  • AI-enabled technologies should empower the teacher to exercise their professional autonomy and agency. They must not replace the professional judgement of the teacher.

  • AI-enabled technologies must not replace direct human interaction between the teacher and learner as the teacher plays a critical role in supporting the social and emotional dimensions of learning, and in personalising education to meet the diverse needs of learners.

  • Decisions about the design, development, procurement, implementation, review and continued use of AI-enabled technologies are negotiated and agreed with recognised workforce unions, including NASUWT.

  • Those affected by the AI-enabled technology are consulted and their views and needs inform decisions about whether, when and how AI-enabled technology is used.

  • Teachers are empowered to make informed decisions about whether, when and how AI-enabled technologies should be used, including through training, professional development and learning, and through access to ongoing support.

Safety and privacy
  3. Safety and privacy are ensured.

This means:

  • AI-enabled technologies must be designed, developed, procured and implemented in ways that uphold rights to data privacy, data protection and safety.

  • Data protection and privacy impact assessments, along with equality impact assessments, are undertaken to identify risks and action taken to secure effective practice, including action to mitigate risks.

  • Risk assessments are conducted before an AI-enabled tool or technology is purchased or leased and, where appropriate, this includes examining how the tool or technology interacts with other school information and management systems.

  • It is clear what happens to personal data (of learners, staff and others), including where the data is stored, and that the data is secure.

  • Companies providing a data management tool or service demonstrate that they implement these rights.

Protecting teachers' jobs and securing workers' rights
  4. AI-enabled technologies are designed, developed, procured and implemented in ways that protect teachers’ jobs and secure workers’ rights.

This means:

  • AI-enabled technologies do not replace or displace teachers, including supply teachers.

  • AI-enabled technologies do not de-professionalise teaching nor remove core professional responsibilities from the role of the teacher.

  • Teachers and leaders have the right to a work-life balance, including the right to switch off. Steps are taken to prevent teachers and leaders feeling under pressure to work outside of hours or to take on additional tasks.

  • AI-enabled technologies do not add to teachers' and leaders' workload. Workload impact assessments are undertaken to identify and mitigate risks. Pre-existing generators of workload must also be addressed. Any new tasks should result in old tasks being removed.

  • AI-enabled technologies are not used for high-stakes punitive purposes such as monitoring or judging a teacher’s practice. If AI tools enable monitoring of a teacher’s practice, this information is controlled by the teacher and is only used for self-reflection and personal development purposes.

  • Teachers and leaders are supported to make effective use of AI-enabled technologies.

Promoting equality, diversity and inclusion
  5. AI-enabled technologies support actions to promote equality, equity, diversity and inclusion.

This means:

  • AI-enabled technologies support personalisation (rather than standardising education) and are selected as part of an inclusive approach that recognises the diverse backgrounds and needs of learners.

  • AI-enabled technologies are implemented in ways that reduce and remove inequities. This includes addressing issues related to resources and infrastructure.

  • Action is taken to remove the risks of bias, discrimination and exclusion, including through the use of equality impact assessments, equality monitoring, reviews and evaluations.

  • Action is taken to develop the digital literacy of staff and learners so that they can make ethically informed decisions about AI-enabled technologies. This includes making them aware of the risks of bias arising from algorithms and datasets.

  • Feedback about the use and equality impact of AI-enabled technologies is sought. The voices of users and disadvantaged and under-represented groups are also sought and influence decisions.

Strategic vision for the use of AI
  6. There is a strategic vision for the use of AI-enabled technologies and a participatory approach to their governance.

This means:

  • School managers are responsible and accountable for the ethical use and effective implementation of AI-enabled tools.

    • There is an AI strategy (which may be part of a digital strategy) and a senior manager is responsible for the strategy.

  • Governance of AI-enabled technologies is based on the principles of inform, consult, collaborate and empower.

  • AI-enabled tools are trialled and reviewed before decisions are made to purchase or lease them.

  • Teachers and other users of AI-enabled technologies receive training and support so that they can provide meaningful feedback about the appropriateness and effectiveness of tools.

  • Companies are held accountable for the products they provide and manage.

    • Contracts include clauses relating to ethical development and use, including clauses about privacy, equality and inclusion, and the storage and use of data.

Some key messages

Decisions about whether it is appropriate to use AI-enabled technologies will depend on the purposes to which they are being put. In other words, it is necessary to consider the context.

AI-enabled technologies present both opportunities and risks, so it is vital to establish both the intended and the potential purposes. Key questions include:

  • Who is making the decisions?

  • Who has influence?

  • Whose interests are being served?

AI-enabled technologies have implications for teachers’ jobs and working conditions. They could help to address workload burdens and support planning and preparation, but they could also displace and de-professionalise the teacher, or even replace teachers. There are also very significant issues about access to training and support, including the time needed to access them.

Further information

NASUWT advice and guidance

Data protection and privacy

Remote and hybrid education

TUC

Webpage providing TUC advice and guidance on AI along with links to TUC work on AI. This includes toolkits, reports and practical advice and guidance for union representatives on AI in the workplace and securing workers’ rights.

Websites containing resources, advice and support on AI in education

  • Educate Ventures Research provides a wide range of resources, including a newsletter – The Skinny, to support teachers and leaders to make informed decisions about the use of AI in schools.

  • AI in education provides resources, practical guidance and information about the use of AI in education settings.

  • GenEd AI is a partnership between Education Scotland and Daydream Believers and provides links to resources, advice, events and research about the use of AI in education.

  • Department for Education (DfE) policy paper, Generative AI in education: This applies to England but includes information that is relevant across the UK. It provides an overview of Government advice on the use of GenAI in education and links to advice and resources that schools should use when using GenAI. It also provides links to advice on safety standards, data protection and intellectual property.

  • Joint Council for Qualifications (JCQ) for guidance on the use of AI in assessments. This applies across the UK.

  • Wales hwb for advice about Generative AI in education and education digital standards more generally. 

 


