
    The federal government has committed to strengthening privacy laws in its first term. Companies that fall foul of their obligations will face increased fines and tougher regulatory penalties.


    At a time when the capacity to gather personal data is growing exponentially and the opportunities for misuse are vast, experts say Australia’s weak data privacy laws leave a glaring gap. Compared with regimes in the United States, the United Kingdom and the European Union, the Privacy Act 1988 (Cth) provides limited actionable remedies and its provisions are easily circumvented.

    Edward Santow, industry professor of responsible technology at the University of Technology Sydney (UTS), says the laws have not kept pace with the development of new technologies.

    This is particularly worrisome because biometrics and artificial intelligence (AI) are fuelled by vast quantities of personal information. The emerging metaverse will present even greater security risks, including an unprecedented scope for impersonation and harassment, noted McKinsey & Co in a March 2022 podcast, The promise and peril of the metaverse.

    Even in situations where the use of traditional forms of technology is in question, Australia’s laws fall short.

    “In the UK, if there is a News of the World type situation when somebody is hacking your phone calls and then publishing what they hear, it is a breach of privacy in tort law,” says Santow. “In Australia, there is no cause of action. It’s simply not protected by our laws. There may be other protections, but you have to sort of contort the law to avail [yourself] of them. There are so many scenarios where privacy is massively intruded on [in Australia] and there’s either no legal protection or an inadequate one.”

    As it stands, some of the privacy protections in Australia’s existing privacy law are easily bypassed. Often, this is in the pursuit of commercially valuable information, such as a shopper’s preferences and habits.

    “We are all familiar with the phenomenon of just clicking ‘accept’ on a lengthy privacy statement because you have no choice — you need to access a particular service and don’t have time to read these kinds of statements every day,” says Santow. “As a result, there is a veneer of privacy protections that don’t actually exist.”

    Kate Bower, consumer data advocate at consumer advocacy group CHOICE, believes the inadequacy of privacy provisions has contributed to a free-for-all approach by some players and a cavalier attitude more broadly.

    “In the past couple of decades, there has been a culture of wildly grabbing data from everywhere and then working out what to do with it later,” says Bower. “I think those days are now over.”

    Poised for change

    Following multiple reports advocating an overhaul during the past 10 years, Australia is on the cusp of bringing its data protection rules in line with other jurisdictions. In July, newly sworn-in Attorney-General Mark Dreyfus stated he was committed to introducing “sweeping reforms” to Australia’s privacy laws during the government’s first term. His department is currently analysing the feedback from its discussion paper that reviews the Privacy Act.

    Some industry stakeholders have called for the Privacy Act and other regulatory settings to be brought in line with Europe’s General Data Protection Regulation (GDPR), the world’s strongest set of data protection rules. GDPR came into force in 2018 and applies to any business operating in the EU, even if its headquarters are outside the region.

    GDPR’s seven principles underpin how personal data can be handled. These include lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability. Companies found to be in breach can face a fine of up to €20m ($29m) or four per cent of a firm’s global turnover, whichever is greater. Under the previous data protection regime, the largest fine was $877,000.

    The AICD submission to the Privacy Act review noted that further work was necessary to understand the benefits and costs of GDPR adoption — including how it would impact other key legislation.

    The AICD’s view is that international alignment is a complex policy question: moving to GDPR adequacy may carry real costs, which would need to be clearly outweighed by the benefits, such as lower business costs and improved data security practices.

    The former government’s draft Privacy Legislation Amendment Bill 2021 includes a maximum fine of $10m, three times the value of any benefit obtained as a result of the misuse of information, or 10 per cent of the entity’s annual turnover, whichever is greatest. The previous maximum fine was $2.1m. Australian Information Commissioner and Privacy Commissioner Angelene Falk welcomes the steeper fines.

    “These updates to penalties are needed to bring Australian privacy law into closer alignment with competition and consumer remedies, and provide a greater deterrence,” she said.
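    To make the proposed cap concrete, the sketch below applies one plausible reading of the three limbs described above, taking the greatest applicable amount. The function name, the figures and the treatment of an unquantifiable benefit are illustrative assumptions, not the bill’s drafting.

```python
from typing import Optional

# Illustrative sketch only: one plausible reading of the draft bill's cap,
# taking the greatest applicable limb. Names and figures are hypothetical.

def max_penalty(benefit: Optional[float], annual_turnover: float) -> float:
    """Return an illustrative maximum penalty in AUD."""
    limbs = [10_000_000.0]                    # fixed $10m limb
    if benefit is not None:
        limbs.append(3 * benefit)             # three times the benefit obtained
    else:
        limbs.append(0.10 * annual_turnover)  # 10% of annual turnover where the
                                              # benefit cannot be valued (assumption)
    return max(limbs)

# Example: an entity with $2bn annual turnover and an unquantifiable benefit
print(f"${max_penalty(benefit=None, annual_turnover=2_000_000_000):,.0f}")  # $200,000,000
```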

    The draft bill also includes the development of an online privacy code to regulate social media services, data brokerage services and large online platforms. The code will be developed by industry and will include requirements for these companies to be more transparent about how they handle personal information and to seek specific consent. It will also include more stringent privacy requirements for children.

    Upskill to avoid a tech wreck

    Bower believes it is a challenging time for directors of companies striving to balance the pursuit of innovation with respect for consumer privacy.

    “There are far more commercial, out-of-the-box AI solutions being spruiked nowadays,” she says. “The rapid increase in more invasive technologies means that directors need to get their data governance and AI ethics plans in order. It is easy for brand reputation to be lost by a scandal that may have been made as a result of middle management buying a piece of software with inbuilt invasive technology.”

    Jan Begg FAICD says directors have a governance obligation to know what processes and frameworks are in place around the use of any form of technology. Relying on what is strictly legal is not the best approach. “There’s always going to be a gap between what the regulations allow and what the technology can do, so it is constantly tricky,” says Begg, a non-executive director and a member of AICD’s Governance of Technology and Innovation Panel. “Directors are not in an organisation full-time, so they have varying knowledge about day-to-day activities.”

    With companies increasingly using AI to make decisions, Santow believes it is essential for directors to upskill. “It doesn’t mean that every company director needs to become a PhD-level data scientist, but it does mean they need to understand the risks and opportunities concerning AI,” he says.

    Face print concerns

    Falk’s office has recommended that facial recognition technology be banned due to potential privacy harms. Indeed, a person’s face print is the only form of biometric information that can be collected without their knowledge. (A voice print may be another, although CHOICE hasn’t yet found evidence of it being collected covertly in Australia.)

    Santow and his team at UTS are currently drafting a model facial recognition law that governments may adopt, slated for publication towards the end of the year. It aims to capture the benefits of the technology while protecting human rights, including the right to privacy. Toby Walsh, author of Machines Behaving Badly, supports the idea of a moratorium on facial recognition technology until there is a specific law to regulate its use.

    “I would encourage companies not to be early adopters of facial recognition,” says Walsh, who is also a laureate fellow and professor of artificial intelligence at the University of New South Wales. “There are significant risks, and the technology can be prone to error on people of colour and women, for example. Organisations that use it are exposing themselves to the potential PR, legal and financial risks of being found to be discriminating against these groups. And even if it works perfectly, there is still a real risk of alienating the public.”

    According to the 2020 Australian Community Attitudes to Privacy Survey conducted by the Office of the Australian Information Commissioner (OAIC), 52 per cent of Australians are uncomfortable with their biometric information being collected by a retailer, and just 25 per cent said they would be comfortable.

    Landmark case underway

    Facial recognition technology has been making headlines for all the wrong reasons of late, with Kmart, Bunnings and The Good Guys being the subject of investigations by the OAIC, following a referral from CHOICE. The Good Guys immediately paused its trial of the technology, while Bunnings and Kmart followed suit in late July.

    Out of the 25 major retailers in Australia that CHOICE contacted, only Kmart, Bunnings and The Good Guys were found to be analysing CCTV footage to create profiles or face prints of their customers. Bunnings stated the data was used to reduce theft by matching faces against a database of known shoplifters or people who had engaged in antisocial behaviour. Santow says there are issues over the transparency of such a database, and even whether it could be sold to other retailers.

    According to Bower, CHOICE was tipped off by a member of the public who had asked to exchange an item at Kmart. A customer service assistant said that they would first need to comb through CCTV footage to check that the customer hadn’t in fact stolen the product. The customer service assistant was unable to answer any further questions about the way the technology was being used. CHOICE subsequently referred the three retailers to the OAIC for potential breaches of the Privacy Act.

    “It’s likely to be a landmark case,” says Bower. “Kmart and Bunnings are very big businesses, so if the commissioner determines they’re in breach, the fine is likely to be large. If there isn’t a fine, then it will obviously send a clear message about our penalties under the current legislation.”

    In 2021, Falk determined that convenience store group 7-Eleven interfered with the privacy of 1.6 million customers by collecting their face prints as they completed customer satisfaction surveys. This was deemed not reasonably necessary to achieve business aims, and genuine consent was lacking. The company also used the personal information to understand the demographic profile of customers who completed the survey.

    However, Falk did not have the power to issue a fine, which can only be done by a court. Falk has called for the new privacy law to include a power for the OAIC to issue public infringement notices and similar pecuniary options that are currently available to other regulators.

    The existing Privacy Act states that intruding on someone’s privacy may only be permitted in certain circumstances, such as if free and informed consent is given. The decision concerning Kmart, Bunnings and The Good Guys will likely turn on whether the in-store signage mentioning the use of facial recognition is deemed sufficient.

    “Most people wouldn’t have noticed the signs, so it is a bit of a fiction to claim they truly consented,” says Santow. “The concept of consent is somewhat at odds with how corporate practice has evolved over time.”

    Watch for risk of discrimination

    The possibility that facial recognition will lead to racial discrimination has concerned human rights groups around the world. The quality of facial recognition technology algorithms varies considerably. The most accurate has an error rate of just 0.08 per cent, according to tests by the National Institute of Standards and Technology. However, this level of accuracy is only possible in ideal conditions where facial features are unobscured and the lighting is good. The error rate for individuals strolling past a camera in a public setting can be as high as 9.3 per cent.

    Facial recognition is least accurate on people from diverse backgrounds, women and those with a disability, because the initial training data sets consisted largely of white male faces. In the US, misidentifications have led to false arrests.

    Mistakes have also been made in China, where facial recognition is used for its social credit system. Individuals and organisations are tracked by the government to determine their “trustworthiness”. In 2018, prominent businesswoman Dong Mingzhu was “caught” jaywalking by an overzealous AI system that had detected her face in an advertisement on the side of a passing bus. Dong’s face was briefly displayed on a billboard used to shame those who flout traffic laws, before the error was realised. She reportedly responded by thanking the police for their hard work in promoting road safety.

    Walsh urges companies in Australia to think carefully about whether it is possible to obtain the benefits of facial recognition without identifying individuals. For example, a Tesla vehicle can detect when a driver is showing signs of fatigue, but it isn’t necessary for the driver to be identified in order to obtain the safety benefit.

    “There are clearly some good use cases [for this technology],” says Walsh. “Members of a terrorist gang can be identified in real time, but equally, if a government doesn’t like someone, they can track them when they exercise their legal right to protest about the climate emergency.”

    Santow’s “model law” adopts a risk-based approach to facial recognition: any intrusion on someone’s privacy must have a sufficiently compelling justification. A company that uses facial recognition for its own commercial purposes, such as to gain insights into its customers’ spending habits, would not be seen as having a compelling justification. A “ticking time bomb scenario” where the police need to track someone down would have a far stronger one.

    “Companies that use facial recognition need to put very strict limitations on how they use it, and those limitations need to be the sorts of things that uphold people’s human rights,” says Santow.

    Some companies are using facial recognition on their employees — for instance, to track engagement levels of remote team members.

    “When you’re talking about an employer-employee relationship, it’s much more complicated because there is an employment contract in place,” says Bower.

    Walsh believes employees may feel coerced into agreeing to be tracked and says emotion detection technology remains unreliable at best.

    “When it comes to using it on employees, there’s a question of whether they are meaningfully able to opt out — that it’s not going to harm their employment prospects,” he says.

    Set an AI strategy

    Santow believes it is critical for directors to have the tools they need to live out their organisation’s values when it comes to the use of AI. The board is integral in setting a strategy for the use of technology in the business, and Santow says that strategy must be underpinned by the principles of being fair, accurate and accountable.

    “Those three things should be at the heart of how a business makes all its decisions. Technology is no different. However, the thing about AI is that those three things are its weak points. It can quite quickly become unfair, inaccurate and unaccountable.”

    During his tenure as Human Rights Commissioner between 2016 and 2021, Santow warned banks to be on guard for biases in AI technology used to assess customer creditworthiness, which could lead to gender discrimination. “Banks around the world have AI-powered systems that comb through 30 or 40 years of decisions on granting home loans,” he says. “This was a time when women found it harder to get home loans — partly due to prejudice. The prejudice against women back then is being baked into the training data for artificial intelligence today.”
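    As a minimal sketch of how such a disparity can be surfaced before any model is trained, the snippet below assumes a hypothetical loans.csv of historical decisions with “gender” and “approved” columns; the file name, columns and the 0.8 rule-of-thumb threshold are illustrative assumptions, not a description of any bank’s system.

```python
import pandas as pd

# Hypothetical historical lending data with 'gender' and 'approved' (0/1) columns.
# A large gap in approval rates is a warning that a model trained on this data
# may reproduce the historical prejudice Santow describes.
loans = pd.read_csv("loans.csv")

approval_rates = loans.groupby("gender")["approved"].mean()
print(approval_rates)

# Disparate impact ratio: the lowest group rate divided by the highest.
# Values well below 1.0 (a common rule of thumb flags anything under 0.8)
# suggest bias that should be addressed before training.
ratio = approval_rates.min() / approval_rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```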

    As well as being discriminatory, biased AI leads to poor commercial decisions that can result in serious reputational harm. Using historical financial data in this way could also be deemed a breach of privacy, because customers have not consented to their financial information being used for a secondary purpose, or be seen as perpetuating discrimination.

    Ask the right questions

    Begg says a data governance framework should set processes around how customer data will be used. There also needs to be a workplace culture where management has regular dialogue with the board, and day-to-day practices are aligned with strategy. It can be difficult to spot a supplier that is using facial recognition technology, but it is necessary nonetheless.

    “If you want to find out whether your organisation is using facial recognition technology, ask management,” says Begg. “And if so, why? What is being captured and how is the data being used? How does it achieve the organisation’s objectives? Is it saving money? Is it legal?”

    Begg chairs the international standards committee ISO/IEC JTC1/SC40 (IT service management and IT governance). She believes establishing a governance framework for artificial intelligence is essential for any organisation that uses AI in its daily business — and that it is a boardroom-level responsibility. A new international standard, ISO/IEC 38507, provides best practice guidance.

    “The governance of data standards (ISO/IEC 38505-1) contains a list of things that directors should think about, such as the life cycle of data. What will be done with the data and at what point will it be disposed of?” she asks.

    Begg stresses that a board should ask management for a comprehensive business case, and that it shouldn’t be based solely on the financial outlay to purchase a piece of technology. Questions need to be asked about how it fits with the overarching strategy, what the benefits will be, whether there is a potential for reputational damage, and what can be done to mitigate that risk.

    “If you’re collecting anything from anybody, whether it’s credit card information, something physical or information about them, there are obligations — not just in the legislation, but in practical terms,” says Begg. “What is it going to cost the organisation to collect and store the information?”

    Regardless of the type of technology being implemented, Falk recommends using privacy by design as a guiding principle. This means collecting the minimum amount of data needed to achieve the business aim, and carrying out a separate impact assessment for every purpose, rather than using the data for a secondary purpose.

    “Boards need to make sure they are not implementing technology that will either be non-compliant with the law or break customers’ trust by collecting data in a way that customers find creepy,” says Bower.
