EYES WIDE SHUT – FACIAL RECOGNITION TECHNOLOGY AT SCHOOLS

March 14, 2019
by pixevety

I was asked recently to write about this topic for an online publication and it intrigued me to continue the discussion here on LinkedIn as it is a topic that is growing in public interest.

Every school has a duty of care to ensure that the personal information it handles about students, staff and families (including photos and videos) meets the mandatory compliance requirements imposed by Australian privacy law and endorsed by the Australian education sector.

As a photo management solution, pixevety uses facial recognition technology for image subject identification only. Our platform is engineered for child-safe organisations to improve their compliance in the management and administration of photos and videos: images are uploaded into a closed private gallery for each organisation (a separate environment per school); member photos are identified and tagged; individual consent settings are then matched to each image; and the image collection is managed by the school (to organise, share and so on). The use of facial recognition in this context is to ensure the privacy wishes of parents are fulfilled by the school when it comes to the handling of children’s images.
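To make that workflow concrete, here is a minimal sketch of the consent-matching step. It is purely illustrative: the class and field names (MemberConsent, allow_external_use and so on) are my own assumptions, not pixevety's actual data model or API.

```python
# Illustrative sketch of matching consent settings to a tagged image.
# All names here are hypothetical, not pixevety's actual implementation.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MemberConsent:
    member_id: str
    allow_internal_use: bool = False   # e.g. newsletters or yearbooks within the school
    allow_external_use: bool = False   # e.g. public website or social media

@dataclass
class GalleryImage:
    image_id: str
    tagged_member_ids: List[str] = field(default_factory=list)  # filled in during identification

def can_share_externally(image: GalleryImage, consents: Dict[str, MemberConsent]) -> bool:
    """External sharing is allowed only if every tagged member has consented;
    a missing consent record is treated as a 'no'."""
    for member_id in image.tagged_member_ids:
        consent = consents.get(member_id)
        if consent is None or not consent.allow_external_use:
            return False
    return True
```

In this sketch, a single child without the relevant consent blocks external use of the whole image, which is the behaviour described above.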

It has been my view for some time that a face on its own is not a reliable biometric identifier, and it is a stretch to treat it as one. Using a face isn’t as accurate as, say, a fingerprint or an iris scan. In recent times the technology’s accuracy has improved thanks to greater corporate investment and innovation in AI. It has become more reliable, but it is far from perfect and will always require some level of human intervention to ensure it is used correctly.
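To illustrate the kind of human oversight I mean, here is a small, hypothetical sketch (not any particular vendor's implementation) in which low-confidence matches are never auto-tagged but are instead queued for a person to review.

```python
# Hypothetical sketch: auto-tag only above a confidence threshold;
# anything uncertain goes to a human reviewer rather than being trusted blindly.
REVIEW_THRESHOLD = 0.90   # illustrative value, not a vendor recommendation

def handle_match(image_id: str, member_id: str, confidence: float,
                 auto_tags: list, review_queue: list) -> None:
    if confidence >= REVIEW_THRESHOLD:
        auto_tags.append((image_id, member_id))                   # confident enough to tag
    else:
        review_queue.append((image_id, member_id, confidence))    # a human makes the call
```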

But as social media and user-generated content have taken over, many companies and researchers have been exploiting our photos, scraping faces from Facebook, Google, Instagram and YouTube, as this recent article shares. We are hearing more about the importance of privacy from these social media players, yet they continue to use facial recognition in ways that do not protect consumer privacy. This week the founder of the World Wide Web marked its 30th birthday with a thought piece in which he said:

“Companies must do more to ensure their pursuit of short-term profit is not at the expense of human rights, democracy, scientific fact or public safety. Platforms and products must be designed with privacy, diversity and security in mind.”

I do feel the online corporate world is slowly evolving into a .TRUST world, defined by an intent to do the right thing. I am not just talking about corporate governance and what’s right for shareholders (that’s a given) but also what’s right for customers: a truly all-encompassing era in which digital health and wellbeing come first, and that will define the longevity of any business, let alone an online one. The online corporate world is maturing faster than ever, and we must whole-heartedly welcome this across all sectors (and I know most schools are) but with our eyes wide open. It is worth remembering that the auto industry took almost 80 years to create some form of safety standard; the comparison highlights how quickly the online world has evolved, at roughly four times the pace of traditional sectors and with double the adoption rate over a similar period.

Facial recognition in a .TRUST world

As mentioned in my recent article, I’ve been reading with great interest the discussions around biometrics and the use of facial recognition technologies in schools. The term “biometrics” is derived from the Greek words for life (“bio”) and measure (“metric”) and relates to the activity of identifying a person, whether they have been identified previously or are entirely unknown. The human preoccupation with being able to uniquely identify a person is not new and can be traced back to early cave paintings. Today, biometrics, and in particular facial recognition (that is, technologies that enable the accurate identification of a person’s face), have been through a long period of development. While it is exciting to imagine a technology that could uniquely and reliably distinguish one face from another, particularly in a law enforcement context, the advancement of facial recognition has become a double-edged sword: for the community it raises serious privacy concerns, while for the authorities, concerns about false recognitions and public perceptions of “identifying the many to catch the few” have been prominent.

I don’t believe that the use of facial recognition technology – or any other technology – in schools is the issue. It is the purpose for which the technology is being used that requires further scrutiny and due diligence.

Facial recognition was not designed to be an evil technology. However, if it is used irresponsibly, or without due consideration for privacy, trust in those using the technology (whether that’s law enforcement bodies, municipal governments, schools, social media platforms or anyone else) will erode.

Unlike fingerprints, it is easy to capture an image of someone’s face without consent, and many people may not be aware that there are two approaches to facial recognition emerging in the marketplace, illustrated in the short sketch after the list:

  1. Cooperative facial recognition, as it sounds, involves the voluntary consent of individuals to have their photos taken and used for identification and protection/security purposes, and privacy is carefully woven into that technology’s design.
  2. Non-cooperative facial recognition solutions are typically deployed in the mainstream, open-web environment with minimal cooperation or consent from individuals; they are not designed around privacy and therefore carry a greater potential for invasion of privacy.
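As a rough illustration of the difference, the sketch below (entirely hypothetical, not any vendor's code) gates face-template enrolment on explicit consent, which is essentially what separates a cooperative system from a non-cooperative one.

```python
# Hypothetical sketch: a cooperative system only builds a face template once
# explicit consent is recorded; a non-cooperative one skips that gate entirely.
from enum import Enum

class EnrolmentMode(Enum):
    COOPERATIVE = "cooperative"          # consent collected before enrolment
    NON_COOPERATIVE = "non_cooperative"  # faces harvested without asking

def enrol_face(member_id: str, has_explicit_consent: bool, mode: EnrolmentMode) -> None:
    """Refuse to create a face template unless the system is cooperative
    and consent has actually been given."""
    if mode is not EnrolmentMode.COOPERATIVE or not has_explicit_consent:
        raise PermissionError(f"Will not enrol {member_id}: no explicit consent on record")
    # ...compute and store the face template inside the school's own gallery here
```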

Facial recognition organisations should be seeking critical buy-in from users. Facebook’s controversial deployment back in 2010 was met with great concern from privacy advocates; however, the wider community did not understand what they were agreeing to in the first place, how the technology would be used, or for what true purpose. Years of security flaws and privacy concerns are only now starting to be acknowledged by Facebook.

It is only recently that consumers have become more alert to the risks posed by many of the technologies deployed by social media providers, and to the relevance of informed consent in those contexts. Facebook and others (like Amazon and Google) use facial recognition tools that utilise a global face template pool, meaning that the ability to recognise and name a face is available across all galleries and/or accounts maintained with those providers. They also hold databases of consumer images, and it is questionable whether these databases are designed with privacy in mind (i.e. images deleted when no longer required for the primary purpose). Obviously, this makes automated identification fast and easy, but it also facilitates widespread surveillance of the community by commercial organisations that are building global face (and other) databases, whether or not in a strictly lawful manner, for their own commercial purposes. In the last 24 hours, Google has been accused of spying on millions of Australians by building profiles containing “intimate lifestyle details” such as home and work addresses, plus “secret interests”.
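To make the architectural point concrete, here is a hypothetical contrast between a platform-wide face-template pool and one scoped to a single gallery. It is a sketch of the two designs described above, not a description of any provider's real system.

```python
# Hypothetical contrast: where face templates live determines how far an identity can travel.
from typing import Dict, List

class GlobalTemplateStore:
    """One pool shared across every account: a face enrolled anywhere is
    recognisable everywhere on the platform."""
    def __init__(self) -> None:
        self.templates: Dict[str, bytes] = {}   # person_id -> face template, platform-wide

class GalleryScopedTemplateStore:
    """One pool per school gallery: a template never leaves its gallery, so a face
    enrolled at one school cannot be matched anywhere else."""
    def __init__(self) -> None:
        self.by_gallery: Dict[str, Dict[str, bytes]] = {}  # gallery_id -> {member_id: template}

    def match(self, gallery_id: str, probe: bytes) -> List[str]:
        # Matching only ever looks inside the requesting gallery's own pool.
        candidates = self.by_gallery.get(gallery_id, {})
        # Placeholder comparison; a real system would use a similarity threshold.
        return [member_id for member_id, template in candidates.items() if template == probe]
```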

If you have the time to read Mark Zuckerberg’s latest manifesto on Facebook privacy, you will find it is more about Facebook applying band-aid solutions to the security flaws around posting content than about improving consent, reducing user surveillance and protecting the privacy footprint of Facebook customers.

“In short, he’s offering privacy on Facebook, but not necessarily privacy from Facebook.”

– Forbes, 8 March 2019

How should schools use facial recognition? With eyes wide open!

Finding technology solutions that can help automate and improve a school’s overall image management process is important, but finding a solution that is built on privacy principles and supports privacy compliance should be viewed as critical in the digital age of consent.

Working out whether the chosen technology provider uses a cooperative or non-cooperative data set is essential. Asking where the face templates are stored, and who owns that data, is also critical. Then focus on how the technology will be used within the school. Using facial recognition technology to, say, authenticate school members featured in images is an essential component of compliance, and in the past this has been a very time-consuming and laborious task for school administration staff and teachers, as well as highly subject to human error.

Put simply, if you don’t know who a student is, how can you link their consent wishes to all the images they are featured in and ensure they are used appropriately as a result? This is particularly important for special cases such as children in foster care or domestic violence situations who need greater protection.

The ideal system within which a school can safely operate is a closed-environment facial recognition function. This allows the name of an individual to be securely associated with their image in a gallery that is fenced off by an electronic barrier securing gallery-specific content and access. No physical photos are shared during the identification process, and parental consent is integrated in real time, always. The facial recognition tool thus enables privacy rather than exploiting it, and builds greater efficiency into daily school photo management processes. A system like this, where privacy is built in by design (that is, where its core purpose is to protect the digital identity of its customers, the children), also helps to protect the school and its staff from inadvertent duty-of-care and privacy risks.

And the fact that you have access to personal information doesn’t mean you have consent to do as you please with it. Consent must be voluntary, informed, current and specific; anything short of this is not consent. Added to this, facial recognition is clearly improving and maturing quickly, so how these processes are delivered must be carefully considered and communicated. Facial recognition can be used in many ways, and it will be the unique combination of purpose and morality that makes or breaks this business. The industry must be vigilant and proactive to reduce fear and welcome appropriate legislation; only then can the public benefit from a technology that purports to serve us.
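As a rough illustration of “current and specific”, here is a hypothetical consent record that is only honoured while it is unexpired and only for the exact purpose for which it was given. The field names are my own, not a standard.

```python
# Hypothetical sketch: consent checked for currency and specificity before any use.
from datetime import date

class ConsentRecord:
    def __init__(self, member_id: str, purpose: str, granted_on: date, expires_on: date):
        self.member_id = member_id
        self.purpose = purpose            # e.g. "school newsletter", never a blanket "any use"
        self.granted_on = granted_on
        self.expires_on = expires_on

    def permits(self, purpose: str, on: date) -> bool:
        """Valid only for the specific purpose it was given, and only while current."""
        return purpose == self.purpose and self.granted_on <= on <= self.expires_on
```

For example, consent given for the school newsletter in February 2019 and expiring at year end would not permit use on a public website, and would permit nothing at all once that date has passed.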

Schools should be conducting rigorous due diligence and creating binding contracts with Edutech providers who offer facial recognition, to ensure the personal information shared with them continues to be protected. Edutech vendors must listen, lead and develop online services with the ethics and values traditionally seen in mainstream business, and ensure student protection, privacy and safety measures are at the core of any product delivered.

When facial recognition is used in the correct way, within a controlled environment and for the right reasons, its application will only enhance the benefit to an entire school community and reduce harm.

