As cyberattacks become an ever-increasing reality for local businesses, and tech teams become the first line of defence, organisations must learn to overcome common communication roadblocks.
Communication between a board and the information technology team is critical for maintaining an organisation’s defensive cybersecurity posture and staying abreast of new developments, yet it is often inadequate.
“No matter what size your business is, a lack of communication runs the risk of misalignment between the company’s strategic goals and its technology initiatives,” says Daniel Sekers GAICD, chair and non-executive director at Votiro and a director at Divergent Group. “There have even been examples where it has led to the company’s downfall.”
It is also inherently challenging. Technology is a specialised knowledge area and security risks can be difficult to quantify. And while an organisation may be doing all it can to guard against a data breach, there is arguably no way to ensure the business is completely protected. Sekers believes that part of the challenge stems from the fact that such dialogue requires different personality types exchanging information that is complex and frequently subject to change.
“You’ve got technical experts working with non-technical decision-makers,” he says. “These different kinds of people look at the same issue through different lenses.”
When forming a working relationship between a board and a technology team, it is vital that there is a common understanding in areas such as risk appetite.
There are no silly questions
One of the most common issues is a lack of frankness when tech teams communicate security threats, says Darren Hopkins, partner and head of cybersecurity at McGrathNicol. There is a tendency to paint a rosier picture than reality dictates.
“Technical teams often see it as their duty to provide comfort and reassurance to boards,” says Hopkins. “When a board asks if everything is OK or whether more needs to be done, the tech person will often assure them that everything is in hand, but without providing any real strategic reporting to back that up.”
Hopkins frequently hears of the disconnect boards feel when a tech leader tells them that a cybersecurity situation is under control, yet they are simultaneously being asked to invest in a new security product or service. The board cannot see how the investment would benefit the business and is therefore less likely to give it the green light.
“Sometimes, a board member will see on the agenda that the IT person is coming to the meeting to present — and their immediate question is whether the CFO is available to attend to make a call on the expected purchase,” says Hopkins.
In Hopkins’ experience, the greatest disconnect occurs in the aftermath of a data breach. “I see this play out quite often,” he says. “The IT team says, ‘We’ve been warning the board and the executive that this is coming for some time. We’ve been asking for them to buy some systems or to support us with an investment, but it’s always been difficult.’ But the board says to me, ‘Well, they never actually told us why they needed it. There wasn’t a real articulation as to how the investment would mitigate the risk.’”
By the time the board is really listening, the damage has already been done and the organisation will likely have to spend more to put things right. This can lead to ongoing issues. For example, the tech team may be dissuaded from sounding the alarm in a timely way because it feels as though it lacks a seat at the decision-making table.
Hopkins’ tip for avoiding damaging miscommunications is to ensure that conversations are a genuine two-way dialogue. Boards need to ask follow-up questions and tech teams should avoid industry jargon. Relying on statistics alone — such as how many phishing attacks were thwarted in the month prior — is less effective than qualitative data, such as the reasons why some staff failed to spot the simulated phishing links that were sent to test the organisation’s defences.
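Hopkins’ point about qualitative reporting can be illustrated with a small sketch. Rather than handing the board a single count of thwarted phishing attempts, a tech team might summarise a phishing simulation by the reasons staff were caught out. The field names and example reasons below are hypothetical illustrations, not drawn from the article or any particular tool.

```python
from collections import Counter

def summarise_phishing_simulation(results):
    """Turn raw simulated-phishing results into a board-friendly summary.

    `results` is a list of dicts with a boolean "clicked" flag and, for
    those who clicked, a "reason" describing why the link fooled them.
    (Field names and scales are illustrative assumptions.)
    """
    total = len(results)
    failures = [r for r in results if r["clicked"]]
    reasons = Counter(r["reason"] for r in failures)
    return {
        "total_tested": total,
        "failed": len(failures),
        "failure_rate": len(failures) / total if total else 0.0,
        # The qualitative detail a board can act on: the most common
        # reasons staff were fooled, not just the raw click count
        "top_reasons": reasons.most_common(3),
    }

results = [
    {"clicked": True, "reason": "spoofed internal sender"},
    {"clicked": True, "reason": "spoofed internal sender"},
    {"clicked": True, "reason": "urgent payment request"},
    {"clicked": False},
]
summary = summarise_phishing_simulation(results)
```

A report built this way tells the board not only that three of four staff clicked, but that spoofed internal senders were the dominant weakness, which points directly at a training or email-authentication investment.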
Sekers observes that directors often refrain from asking for clarification because they fear appearing to be ill-informed. “It’s important to create an atmosphere of everyone feeling comfortable asking questions,” he says. “I do this by showing my own willingness to ask the silly question. In fact, there is no such thing. If it’s something that will impact your decision-making, it is undoubtedly pertinent.”
Focusing on the future
Professor Mary-Anne Williams MAICD, the Michael J Crouch Chair for Innovation at UNSW and a deputy director at UNSW AI Institute, says technology teams should be provided with the training and resources to improve communication skills, as this will help to create a culture of openness and transparency. She says there should also be clear channels, protocols and templates in place to help teams communicate their innovation goals and processes to the board.
It is critical that the tech team understands the organisation’s risk appetite. Williams’ research shows that an organisational culture that supports innovation also values learning from failure, as this empowers teams to test new ideas. However, many businesses lack a strong innovation culture, and in this environment, tech teams in particular are less likely to cultivate an experimental mindset.
“Risk-taking in innovation needs an environment where people can practise risk-taking skills to improve,” she says. When done well, it can lead to the development of transformative technologies, products and services, which can provide a competitive advantage and drive long-term growth.
Hopkins tells boards to encourage IT teams to focus on future-facing investment ideas as opposed to reactive purchases that fix or replace an existing tool. “Boards should be reaching out to their tech teams to help the business stay ahead of its peers,” he says. “An example right now is how ChatGPT could be used within a business. I believe AI will become an incredibly important part of the way we defend our platforms, so we need to be more forward-thinking in our conversations about technology investments.”
When the news is bad
If the news that the tech team conveys to the board is alarming, try to avoid responding in a way that could exacerbate the situation and lead to defensiveness.
“As a director, when something is said to me that is concerning, my immediate reaction is emotional,” says Sekers. “It is important to pause. It will give your brain the chance to process the information and consider what you need to do next.”
He recommends getting as much information as possible and trying to avoid downplaying the severity of the situation, as this will likely prove counterproductive. “Both tech teams and boards need to avoid blaming other people for problems,” he adds. “They also need to be realistic and to not make promises that they can’t keep.”
Playing it safe
Identifying threats is a core component of a comprehensive cybersecurity risk management plan.
Knowing which threats are most critical, based on their potential impact on the business, will help ensure each one is responded to in a timely manner. Other measures recommended by the security organisation Mimecast include:
- Conduct cybersecurity risk assessments
- Establish network access controls
- Implement firewalls and security software
- Create a patch management schedule
- Monitor network traffic
- Create an incident response plan
- Examine the physical security of your organisation
- Minimise your attack surface
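The prioritisation step above can be sketched as a simple risk-scoring routine: rank each identified threat by the product of its impact and likelihood, so the most critical items surface first. The 1–5 scoring scale and the example threats are assumptions for illustration, not Mimecast’s methodology.

```python
def prioritise_threats(threats):
    """Rank threats by a simple risk score (impact x likelihood).

    Each threat is a dict with "name", "impact" and "likelihood",
    scored 1 (low) to 5 (high); the scale is an illustrative assumption.
    """
    return sorted(
        threats,
        key=lambda t: t["impact"] * t["likelihood"],
        reverse=True,  # highest risk score first
    )

threats = [
    {"name": "phishing", "impact": 4, "likelihood": 5},        # score 20
    {"name": "ransomware", "impact": 5, "likelihood": 3},      # score 15
    {"name": "insider data leak", "impact": 3, "likelihood": 2},  # score 6
]
ranked = prioritise_threats(threats)
```

Even a crude ranking like this gives the board a shared, explicit basis for deciding which mitigations to fund first, rather than reacting to whichever threat was raised most recently.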
What to ask
As told to Sholto Macpherson
When considering the impact of generative AI, Kelly Brough, ANZ lead at Accenture Applied Intelligence, says directors should consider six key questions:
- Where does generative AI have the potential to shift our business model or our operations to create comparative advantage? How might we measure impact and monitor usage?
- What investment is required across our data foundations and applications to enable us to be ready to realise the benefits of generative AI?
- How can we ensure our people are upskilled in generative AI as either consumers, producers or leaders using generative AI in their work?
- As we begin to experiment with and ultimately deploy generative AI, how do we ensure we are adopting AI responsibly? What guardrails are required, and how might we ensure a human in the loop as we begin adoption?
- Are there additional security considerations to address in order to prevent new data vulnerabilities from using foundation models?
- Are we establishing effective ecosystem partners to enable and protect us as we commence our use of generative AI?
This article first appeared under the headline ‘Tech Whisperers’ in the August 2023 issue of Company Director magazine.