Facial recognition in schools – Getting it right
A compelling privacy issue of our time is the use of facial recognition (FR) and the potential invasiveness of this technology. Quite apart from the use of FR in civil surveillance and law enforcement, which is highly topical and has been the subject of proposed (and actual) technology bans around the world, we are increasingly aware of multinational companies (hello Facebook, TikTok, Google) that have been creating and storing scans of faces for years, with a person’s supposed permission buried deep in the legalese of user Terms and Conditions. Others – like Clearview AI and 7-Eleven – have been censured by Australia’s privacy regulator for using the technology in an intrusive and disproportionate way.
In the UK, recent discussion has been focused on the use of FR in school canteens for taking payments from students. Collecting sensitive personal information from children to streamline the lunch service appears disproportionate to the need and leads one to wonder whether privacy (as the default position our children should expect while in the care of their schools) had a ‘seat at the table’ during those early conversations held between the school and the vendor.
As someone who created a privacy technology solution for child-safe organisations to provide a better, easier way to protect images of children, I think FR (and the privacy safeguards that must come with it) is a topic we need to talk about, not run from.
FR as a tool
When I have conversations about the use of FR, I share the analogy that FR is merely a tool… like a shovel. You can use a shovel to dig a hole and plant a tree, but you can also use that same shovel to hurt someone. When someone gets hurt, do you blame the shovel or the person wielding it?
In a 2020 New York Times article, Bruce Schneier acknowledged the tech backlash associated with facial recognition technology and argued that it is the uses to which facial recognition is often put (and not the technology itself) that should be of greatest concern. In my lane of photo management, I believe that FR technology on its own is not the burning platform; it’s the purposes for which it is used (roll call… really?!), and how it is implemented (sadly, this is often without a person’s knowledge or consent), secured and tested for accuracy that bring privacy concerns to the fore.
FR in schools
Companies that offer FR solutions as part of their tech-pack or service offering have good guidance and tools at their disposal to ‘get it right’, and they should be proactive in using these tools to benefit the community they serve.
Likewise, every school has a duty of care to ensure the handling of student data meets mandatory compliance requirements imposed by privacy law. So, when introducing any new technology, parents must be informed that the tech is being used, for what purpose, and – where required (such as when using a student’s biometric information, as in the case of FR) – the school must obtain valid consent from students and/or their parents.
FR can be used for good, if solutions are purpose-built and the technology is used properly. For example, FR can support schools in curating and managing images of children in their care so not a single student photo is shared, used, or published publicly without the proper consent filtering.
If FR is proposed for use in school settings, the school must look at how the technology (big picture, not just the FR component) was built in the first place – is privacy at its core?
Privacy by Design (PbD) is a design principle that puts privacy and rights at the centre of the design and development of products and services. It ensures privacy is built into the design of the project or system being developed up front, as part of the initial specifications, as opposed to being retrofitted or “bolted on” later. It focuses on ways that technology companies can minimise risks by anticipating, detecting, and eliminating privacy harms before they occur. A must-have if any school wants to reduce risk.
Making the FR decision
Let’s be honest. Every day for say… the past 10 years, schools have subjected students to FR every time they simply posted photos featuring children on channels such as Facebook, TikTok, Twitter, and Instagram or created profiles for students using Google education suite tools.
In this way, it would seem that schools have already adopted FR as the norm (after all, that’s how these platforms can ‘tag’ students and differentiate them from each other in school galleries). However, many schools are either unaware of this or unconcerned about the long-term implications for student privacy.
Photo management is an essential function of schools, and student photos are used for a variety of legitimate purposes – from confirming the class roll each morning, to helping canteen staff identify which kids have allergies, to acknowledging the academic or sporting achievements of students in the yearbook. There are good, privacy-forward alternatives to the ‘Facebook’ brand of photo management, purpose-built for education. Schools can choose to deploy technologies – like pixevety – that support students in protecting their digital identities now and into the future.
Where FR technology is considered as part of photo management, schools should always keep in mind that automated face or biometric identification can only go so far – although 80% of the hard work is done by machine, human intervention will always be required to ensure identity tagging of faces is done correctly. I provide more information about this in related blog posts, as well as on the pixevety website.