A panel discussion recently focused on how energy and utility organisations can head off major cyber threats. Hosted by the Australian Institute of Company Directors and BAE Systems Applied Intelligence, it provided plenty of insights for all types of organisations. Zilla Efrat reports.
Picture this: Your city has had no electricity all day because your power company has suffered a cyber-security attack, perhaps by a terrorist organisation, an unhappy staff member, a teenager enjoying a prank or even international criminals demanding a ransom before they will restore the electricity flow. It could even be another country engaging in a new type of warfare against Australia.
This may sound like the script for a Hollywood movie, but it is not as far-fetched as it seems.
In the past, the risks of a cyber-attack largely resulted in breaches of privacy, fraud or loss of competitive advantage or reputation. However, research by BAE Systems Applied Intelligence has found that many energy and utility (E&U) organisations and their suppliers believe it is only a matter of time before a major cyber-attack cripples a piece of critical operational infrastructure, such as an electricity grid, the water or gas supply or other necessities vital for society to exist.
In fact, we have already seen the forerunners of this type of attack in other parts of the world. In 2010, for example, the Stuxnet computer worm wrought havoc on equipment at Iran’s Natanz nuclear plant, damaging centrifuges used to enrich uranium. Then, in 2012, the world’s largest oil company, Saudi Aramco, suffered a malware attack that forced it to take down its computer systems for 12 days. Shortly afterwards, RasGas, Qatar’s second largest producer of liquefied natural gas, was hit with an “unknown virus” which forced it to shut down its systems.
Thus far, security measures in Australia have been largely able to deal with threats and, fortunately, there have only been a few malicious cyber incidents in the E&U sector.
However, the threats facing the sector are constantly evolving and intensifying.
Given these developments and the potentially devastating repercussions, the Australian Institute of Company Directors and BAE Systems Applied Intelligence hosted panel events in Melbourne and Sydney in June aimed at highlighting how E&U organisations can bolster their technical and governance practices around cyber security. The discussions also offer valuable insights for directors and executives from all kinds and sizes of organisations. An edited extract of the discussions follows.
Graham Burdis: Cyber-attacks don’t have to be that sophisticated. As a prank in 2008, a 14-year-old Polish student turned the city of Lodz’s tram system into his own personal train set. Using information obtained from the public library, the internet and by trespassing at tram depots, he built a device by modifying a TV remote control. He used it to manoeuvre the trams and change track points, triggering chaos that derailed four trams. The central control system offered no protection against the intrusion. So much has happened since then and so many systems are prone to attack.
Craig Searle: That is true. Australia has the dubious honour of experiencing the first true cyber-attack that had a physical impact. In 2000, Vitek Boden hacked into the Maroochy Shire Council’s wastewater plant and released about 10,000 litres of sewage into parklands. At the time, the chief information officer was on TV saying the council had done really well. He said: “He attempted to log in to the system 46 times and we defended him 46 times. He only got in on the 47th attempt.” But many wondered how someone could attempt attacks 46 times without the council noticing or responding. That was probably a low day for IT in Australia. But it did spur on some pretty interesting responses in the utilities and financial services industries around employee screening, identifying insider threats and creating a culture of security within organisations.
Burdis: He wasn’t an insider. He was a job applicant. He applied for a job at the council and did not get it. He then took it out on the council. What has been your experience of cyber-attacks?
Michael Lifson: At Macquarie Generation, we constantly get them. Many are standard, such as phishing emails. We arranged a “white hat” attack and we learnt a lot from that. It allowed us to understand what some of our weaknesses were.
Hugh Gleeson: In recent years, things have changed a lot in the E&U sector. First, the 9/11 experience opened up the issues of security and cyber security. As a result, we now have much better physical security around our sub-stations and a better consideration of critical infrastructure from a national level downwards. Cyber security is part of this and is certainly not being missed out in the process.
Utility businesses run very sophisticated IT systems these days. We have a smart meter system that collects 48 meter reads per customer per day, so we manage a lot of data, including private customer information. Our sophisticated IT systems extend across to the supervisory control and data acquisition (SCADA) systems.
When you are an IT-literate company, it is important to logically extend your thinking about security to every bit of IT that you have. The SCADA systems evolved as physically separate from the main IT systems and were run by different people. That, however, is changing rapidly and they are now increasingly linked into the main IT systems for good information management and control reasons. That means they need to fall under the umbrella of the IT security applied to your main IT systems.
Burdis: So how do organisations know they have been attacked?
Searle: It is very difficult if someone has done it well and it can be impossible to tell. No organisation has its security completely right yet. There are organisations that are further along that journey, but it is just that – a journey.
THE ISSUES
Human error
Burdis: With the industry having evolved from the more manual type of control systems of the past, what are the key challenges for E&U organisations now?
Lifson: Industries that come from that background have people who have worked in large physical assets for many years where the technology has been around for a long time. In the days when they started, there weren’t the same threats as there are today. People get that things change, but when it is a technology-type issue, they may not think about it in the same way as other things. Perhaps that’s because technology issues can be ephemeral and not physical.
Peter McIntyre: People who have grown up in the industry have a high degree of trust in its people and systems because they have worked alongside them for years. I think we have to be conscious that they have a certain degree of trust that can be exploited if we are not careful. At TransGrid, our biggest challenge is around people issues rather than the hard IT/SCADA issues. Our people use a range of IT systems that expose our business to risks. Our critical service is the electricity supply of the state. The SCADA system is very capable and I am confident that it is in very good hands. If we are aware of attempts to attack us – even through the corporate IT system or any of our sister organisations – we can push data out to technicians and we can physically air-gap the system. It is as easy as unplugging a connection and running it remotely. So I am confident that the SCADA system is in a better position than the corporate IT system. However, because of the broad range of employees we have, it is much harder to get a large number of employees to the right level of awareness than a small team of specialists.
Burdis: When we first started looking at cyber threats a few years back, it was all about firewalls and outside threats. Then it became more about the people than the systems.
Les Hosking: Human error is one of the main areas where I have seen security problems. It is so easy to accidentally attach confidential information to an email that should not be sent. It’s about continually educating staff, particularly older employees, that this is a new environment. We try to train staff and then automate as much of the system as we can to avoid that manual handling by staff.
Lifson: Staff training should be for both older and younger workers. If you look at the Facebook generation, many are on there without fully understanding the ramifications of their actions.
Searle: One of the challenges is to target education and training to the appropriate level of person. Too often we find that training and education are just cookie-cutter exercises and people do not get from them what they should, so those unsafe behaviours continue.
Hosking: There should also be a review of the corporate policy on internet access and staff should be made aware of this policy. I know of companies that haven’t upgraded their policies for a long time.
Burdis: Many businesses induct new employees, but may also have people sitting there for years who have never been through an induction.
Blurring the boundaries
Searle: We believe that social media is the greatest treasure trove for the bad guys. It provides huge amounts of freely available information. People can build up a really detailed profile not just of you, but of everyone you know and interact with. There is no real way for an organisation to know if it is being scoped out for an attack. Also, security training and education have really been about how you are expected to behave at the office, but they need to evolve to include how you should behave at home because the boundary between the work and home environments is becoming very blurred, if it exists at all any more. You see a lot of cases where someone’s personal life can now directly affect the enterprise. For example, there have been high-profile phishing attacks that initially targeted someone’s wife, partner or child. So people in senior positions need to be aware that it’s not just them that’s a target. The entire ecosystem of their relationships is now fair game. From the bad guys’ perspective, there is no such thing as “off limits”. And those types of attacks have a much better return on investment. Attackers are able to very quickly identify trust relationships, abuse them and use that to leverage access or information.
Lifson: The blurring of the boundaries between home and work has a special relevance in organisations where control systems focus on large assets. People sit in an office and stare at a computer all day. They go home and they do the same. So through their day jobs, they may be more internet savvy. But if they drive bulldozers or work with coal all day, they may go home and use the computer without understanding that what they are doing can have a connection to work.
McIntyre: Is there merit in companies helping their employees understand the security risks of their home computer? Or should they train them at work and hope the learnings go home with them?
Searle: We have seen two different approaches. One is making things like antivirus software available to family members at highly discounted rates. As part of that, there is an expectation that there will be some security training for the family. In the other approach, if you want to work from home, the company will pay for an ADSL line. It will give you an access point which is hard-coded to only work with your laptop and vice versa. And the ADSL line goes via the company networks. The company extends its boundary into the home. That’s a pretty heavy-duty approach and quite expensive, but organisations realise that the organisational boundary is now wider and that they have to do something different.
Burdis: Your helpdesk could also help staff with home issues.
Hosking: There is a debate as to whether you should use your own iPad or a corporate iPad for board papers. With a corporate iPad, the company can control what happens.
Insiders
Burdis: According to the Security Incidents Organization, unintentional incidents – caused by software or device flaws, human error or malware infection – accounted for 80 per cent of cyber incidents. Intentional attacks were the cause of only 20 per cent, and 53 per cent of these were by insiders, such as disgruntled employees – in other words, insiders caused roughly one in ten incidents overall.
Paul Adams: For many years we’ve strictly segregated the duties in accounting and finance. We don’t allow the same person to raise an invoice, do the accounts and send the money out of the business. The security people ensure there are checkers on checkers within the financial system because it has been the biggest area of risk. It has to be the same for IT risks. You have to look at what access people have to what. Can that same person upload, change and activate the software? What is the segregation of duties around that?
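To make the idea concrete, here is a minimal sketch of how such a segregation-of-duties check might look in code. It is illustrative only: the duty names and the change-record format are assumptions, not a description of any particular system.

```python
# Illustrative sketch: flag changes where one person performed more
# than one duty (e.g. uploaded and activated the same software).
# Duty names and the record format are hypothetical.

def segregation_violations(change_record):
    """Return the people who performed more than one duty on a change."""
    duties_by_person = {}
    for duty, person in change_record.items():
        duties_by_person.setdefault(person, set()).add(duty)
    return sorted(p for p, duties in duties_by_person.items()
                  if len(duties) > 1)

# The same person both uploads and activates the software:
change = {"upload": "alice", "approve": "bob", "activate": "alice"}
print(segregation_violations(change))  # ['alice']
```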
Janice van Reyk: At my organisations, segregation of duties and controls exist for IT in the same way as they do for financial controls. I would have thought that was good practice.
Sue O’Connor: I don’t think it is universal.
Van Reyk: It didn’t matter how much security was around. It didn’t stop the US from experiencing a Bradley Manning and an Edward Snowden. They were insiders with the highest levels of security.
Searle: Psychological profiling and those types of activities are purely “point in time” assessments. Someone joins the company, gets a promotion and passes the profile, but people’s circumstances change and things happen that affect them.
Adams: Constant monitoring of a select group of people doesn’t happen a lot, but it should be part of taking on such a role.
Lifson: You can see this thinking being escalated into other areas, such as the positive vetting of pilots, given the question marks over the Malaysia Airlines Flight 370 tragedy. The spotlight fell on the pilot. It’s a question of mental health and monitoring people because our lives are in their hands.
Searle: In a more informal sense, you could embed it in the organisation’s culture so that it becomes okay to raise issues, to say: “I think Bob’s having some troubles. Do we need to talk about whether it is appropriate for him to have certain levels of access?” That happens in the financial part of a business. In an IT security sense, that should also be considered as part of the culture.
Adams: It’s the same with safety. You have to ask people if they are okay today because if they aren’t, they could go out and hurt themselves.
Searle: When we do audits of the security culture, we test it in a number of ways. One is that when I come to present a report, I will try to get to the meeting room without going through any of the usual security, just to see if someone will stop me. We also do things like dropping USB drives and technical testing. But I think the big test for me is the reception I get when I present my findings. People go: “That is terrible. I am going to ensure everyone does training again and I will train them and train them until they get it 100 per cent right.” That is not the right answer. The right answer is the question: “What is it that we are not understanding or missing?”
Hosking: We do test to see if there is an awareness of cyber security. We send phishing emails to staff and watch how many times they are opened and who is opening them. We also have a culture where, if someone sees a staff member accessing the internet on an unsecured machine, it is reported.
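As a rough illustration of the measurement involved in such a test, the sketch below tallies the results of a simulated phishing campaign per team. The log format, names and teams are hypothetical assumptions, not a description of any panellist’s system.

```python
# Illustrative sketch: summarise a phishing simulation by team.
# The data format, names and teams are hypothetical.

from collections import Counter

sent = [("alice", "finance"), ("bob", "finance"), ("carol", "operations")]
opened = {"alice"}  # staff who opened the simulated phishing email

sent_by_team = Counter(team for _, team in sent)
opened_by_team = Counter(team for user, team in sent if user in opened)

for team, n_sent in sent_by_team.items():
    n_opened = opened_by_team[team]
    print(f"{team}: {n_opened}/{n_sent} opened ({n_opened / n_sent:.0%})")
```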
A chink in the chain
Searle: Partner organisations are another area that is becoming more of a threat to large organisations – weaknesses in the supply chain. If an attacker wants to go after Company A and knows that Company A has robust security controls, there isn’t a lot of return on investment for the hacker to spend days attacking it. So the hacker will move a few hops down the supply chain, enter there and abuse the partner links. In Canada, we saw the example where Potash Corporation’s law firm was attacked during a large-scale M&A deal and the information gained was used to influence the deal. You need to identify the trust relationships you have with your partners and examine what an attack on them would mean for you. What will you do? Do you have alternative mechanisms in place for supply? You also have to educate them about the issues. You have to ask them what they are doing about security. The next step is to ask what would happen if someone in your supply chain were to turn against you. In the utilities sector, it is also about being aware of where you fit in the supply chain. You have hundreds or thousands of customers. If you have an outage, what will they do?
Lifson: So much today is outsourced. People will say: “We are looking at this cloud application.” My first question will always be: “How secure is it?” They will reply: “Oh, they have all the security. It’s fine.” But all they have done is outsource the security to someone else, so your security is only as good as theirs.
Gleeson: If a company like Google doesn’t get the security right, its business model will fail. So while you have to probe and test and ask the right questions, there is every chance that it is going to be better at it than you.
Lifson: It is not in the interests of companies like Google or PayPal to get it wrong. You do rely on their size and reputations to protect you, but there are also a lot of smaller players in the chain.
Searle: Many organisations address this through “right to audit” clauses to check on their partners’ approach to security. Those clauses are rarely exercised. It is uncommon for someone to say: “Dear Google, I would like to audit your facilities.” But that is where it needs to go. Organisations need to be proactive. If they are uneasy about a partner, they should enforce that clause and have no fear in doing so. The issue then becomes: “How much is enough?” You can very quickly descend into paralysis by analysis by trying to address every single risk.
Air gaps
Burdis: It’s suggested that one of the best things you can do to protect control systems is to have air gaps.
Gleeson: Well, sort of yes and no. Air gaps can give you a false sense of security too. Someone could be taking information from your system, for example, by using a USB drive. You may think you have dealt with the risk because you believe you have closed it off, but in fact you haven’t.
Searle: You tend to find organisations exist in three different states. Either they believe their environment is completely air-gapped, or they are not quite sure, or they know that the environment isn’t air-gapped and have proactive procedures in place to address the risks. But for those organisations that believe there is an air gap, more often than not we are finding that this air gap is imaginary.
IN THE MIDST OF AN ATTACK
McIntyre: Many scenario exercises involve predetermined events to test your response. But what happens when you are making decisions in the face of total uncertainty? You first have to observe that something is happening, take the time to understand what it is and put it into perspective and then make decisions around that. It is very easy post-event to say: “Yes, it was clearly X. If I was in that person’s shoes, I would have done Y.” But as an event is unfolding, there is always the risk of conservatism when asking: “Do I disconnect this customer, shut down this critical service or cause this financial impact for my company on the assumption that something bad is starting to happen?” To be proactive and take action to mitigate an emerging risk with limited information is a big call. How can we ensure the operation managers have the confidence to be decisive and take action when sometimes it may be a false alarm or in hindsight, it may look like an overreaction?
Searle: The signal-to-noise ratio in an event can be extraordinarily low. Some of it is based on data, but part of it is just gut feel. You need to rely on someone who has years of experience and training and is a trusted oracle to make the call. If you consider what a cyber-attack against a facility will look like, it is highly unlikely to be a 14-year-old after school with some automated tools. It will be highly resourced, highly patient, well-trained attackers. There is every chance that you won’t know that they have been resident on your network for a long time or that they understand your network as well as you do. More likely than not, the attack will have some greater purpose for them, be they state-sponsored or corporate spies. At that point, there are very few signals that help you make a decision. So there has to be a handful of key decision points for the organisation. You should say: “Okay, if we see this, this and this, irrespective of whether we believe it might be a jump too far or an overreaction, we have to do this at that point.” Organisations need to identify what key pieces of information they need to make that kind of “go/no go” decision and how they will know if these are accurate.
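One way to picture such pre-agreed decision points is as a simple playbook: if an agreed combination of indicators appears, the agreed action is taken, however drastic it feels at the time. The sketch below is illustrative only; the indicator names and actions are assumptions, not anything the panellists described.

```python
# Illustrative sketch of pre-agreed "go/no go" trigger points.
# Indicator names and actions are hypothetical.

PLAYBOOK = [
    ({"unknown_remote_session", "setpoint_change_outside_window"},
     "air-gap the SCADA segment and run under local control"),
    ({"mass_credential_failures", "new_admin_account"},
     "disable remote access and activate the incident response team"),
]

def agreed_action(observed):
    """Return the pre-agreed action for the observed indicators, if any."""
    for required, action in PLAYBOOK:
        if required <= observed:  # all required indicators are present
            return action
    return None  # below every trigger point: keep monitoring

print(agreed_action({"unknown_remote_session",
                     "setpoint_change_outside_window"}))
```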
Lifson: That is a challenge. It assumes you know what to look for.
Searle: You start getting to things like information analytics. You may have two behaviours or things happening that in isolation are not interesting, but when you have something that can piece the two together, it tells a different story. Technology won’t be the sole basis on which to make the decision, but it will point you down the right path.
Lifson: This is where external providers can sometimes help. If they do this across industries, they can start picking up patterns.
McIntyre: We are now implementing software that monitors patterns of behaviour and identifies abnormal patterns which could be a sign or trigger point. But this is very new and we don’t have the experience yet to fully understand its usefulness.
Searle: From a security slant, we are moving away from signature-based analytics, if we haven’t already, because every organisation is different and it has tended not to be that useful. We are starting to look at behavioural analytics and at what behaviour is normal for an organisation at a point in time.
McIntyre: That’s about seeing that someone is logged in twice from two locations at the same time and knowing that can’t happen.
Searle: It can take information like logins and login correlations and put it into one spot to help you make decisions. It develops a small set of tools for you, but I think that is where we will end up because organisations, as they embrace big data, have more and more data which they can use.
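The concurrent-login example lends itself to a very small sketch. Assuming a hypothetical log of sessions as (user, location, start, end) records, the correlation is just a pairwise overlap test:

```python
# Illustrative sketch: flag a user with overlapping sessions from two
# different locations. The log format is a hypothetical assumption.

from collections import defaultdict

events = [  # (user, location, start, end), times in epoch seconds
    ("alice", "sydney_office", 100, 500),
    ("alice", "overseas_ip", 300, 700),   # overlaps the first session
    ("bob", "melbourne_office", 100, 400),
]

sessions = defaultdict(list)
for user, loc, start, end in events:
    sessions[user].append((loc, start, end))

for user, user_sessions in sessions.items():
    for i, (loc_a, start_a, end_a) in enumerate(user_sessions):
        for loc_b, start_b, end_b in user_sessions[i + 1:]:
            if loc_a != loc_b and start_a < end_b and start_b < end_a:
                print(f"ALERT: {user} logged in at {loc_a} and {loc_b} "
                      "at the same time")
```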
Lifson: That will be driven by retail companies looking at consumer behaviour. Then it will be pushed to the back part of the enterprise into things like security and will find lots of other uses.
McIntyre: In the power industry, there is one trend that will provide a natural protection against cyber threats. The industry has traditionally been centralised. You have your major generation sources, the traditional networks like your highways and then your back-street distribution to end users. If you had a disturbance of supply down those chains, all the users downstream would basically lose power. We are now seeing a move towards more local generation, solar being the classic example, and in five or 10 years’ time you will have a lot more local storage of power – for example, through batteries at houses or factories – which will probably have the capacity to feed power into and out of the network. This will make networks more resilient. You will have many sources of generation and an ability to move power within local networks as well as to and from the major networks. It will be very hard to cause widespread damage to communities where there is a multiplicity of power sources that can feed up and down.
LOOKING OUTSIDE
Working together
O’Connor: The Victorian auditor-general published a cyber security report last November from a whole-of-government perspective. It noted that, according to the Australian Signals Directorate (ASD), there had been 26 attacks on the government in 2012 and only half of those had been reported. It’s really important to be able to access this information from organisations like the ASD, but also to recognise that we are all interconnected.
Searle: There are bodies like the Trusted Information Security Network (TISN) that coordinate secure forums for organisations to share information, often under the ‘Chatham House’ rule (where participants are free to use the information received but the identity of speakers cannot be revealed).
Hosking: TISN and the Energy Security Group do help promote more collaboration. Given the frequency and speed of the threats that are occurring these days, the more discussion among people who are properly authorised and have the right to confidentiality is incredibly healthy. It’s not that you can stop everything, but shared knowledge can help.
Searle: By helping each other out, everyone benefits. This is an area where the energy industry is ahead of the game. This would not happen in some other market segments.
Van Reyk: I agree. It’s about collaboration. Government to government, business to business and employees to government, both nationally and internationally.
O’Connor: I support having emergency response rehearsals where a range of organisations, including the electricity and water utilities and the police, are involved. We did that in Victoria the week before the Black Saturday bushfires and it was tremendously valuable. We had to report to the board on the Wednesday and then the fires occurred on the Saturday. Two thirds of the homes destroyed were in our jurisdiction. We lost critical facilities and there were many learnings from the previous week’s exercise that, unfortunately, we had to apply. When something happens, it is not just what you do as a utility that counts. You are part of a broader response.
McIntyre: The electricity operators, through to the transmission companies and the jurisdictional representatives, have done exercises involving multiple states and participants to stimulate thinking around a major coordinated attack. The aim was to test how we would respond to various scenarios at various trigger points and how we would pick up signals from other states and organisations.
Hosking: I attended a three-day workshop on just that in Canberra. Everyone was there and we looked at worst-case scenarios and how we would deal with them.
Lifson: It is the role of government to bring people together. It is a coordinator and provides a forum in which to discuss things. But you can’t rely on the government to do it for you. I think one of the reasons the government, to a large extent, has sold off assets is to remove a lot of the political risk.
Adams: In defined critical infrastructure, the government has fulfilled its role by saying that you will have this exercise every year and you will do this and do that. This is different to saying: “We are going to come in and conduct it.” Instead, it is asking us to demonstrate that we have certain areas covered off.
Van Reyk: Every year we get an improvement from these exercises. Things are constantly re-evaluated and reassessed and everyone is increasing their level of maturity.
Adams: Those annual exercises help you build up those trusted relationships. Rather than blindly calling up some government department, you now know who to talk to.
Searle: These kinds of activities are becoming increasingly valuable, particularly for senior executives trying to understand organisational risks. There is a realisation that technical testing is a small but important piece of the puzzle and there needs to be a more holistic view. The exercises are useful on two levels. One is that often there are assumptions made about response capability or response processes and the exercises highlight existing flaws very well. Second, from a board perspective, they lead to a realisation that often there is no right or good decision. Often you are faced with the lesser of two evils. That raises some interesting questions for the board and can spur a real change in thinking.
Stakeholders
Burdis: Do customers or institutional investors ask questions about cyber security risks?
Gleeson: While they will always be concerned that something terrible could happen, customers are chiefly interested in the issue of their privacy. Security of supply is an issue, but I don’t think “cyber security” in customers’ minds has necessarily extended to that. That said, we do have plenty of experience in responding to supply disruptions. In the electricity industry, we practise this consistently because real incidents happen all the time. If you have a storm, you have to mobilise large numbers of people to fix lines. This year, for example, we had a technical fault in the transmission system that took out 200,000 households in Melbourne for a short period of time. That was a pretty significant upheaval that we had to manage. It wasn’t a cyber-security incident and supply was restored in minutes, but it did give us useful practice. If institutional investors hear stories about your company or other companies, they will ask questions. There is no doubt that if there was a financial impact on a company in the US related to a cyber-attack, the questions would be asked in Australia straight away.
Adams: To our large customers, cyber security is just a subset of the whole reliability issue. Normally it’s only when something pops up in the newspaper that investors start asking questions. They are more reactive than proactive.
The need to know
Burdis: Groups like Anonymous publicise what they have done and the company is left with egg on its face because it didn’t even know it had been attacked. In the US, the Department of Homeland Security is forcing companies to disclose incidents. Is there going to be a move towards forced disclosure in Australia at some point?
Hosking: If you are a listed company, you have to consider disclosure every time something happens that may affect your share price because of the continuous disclosure regime. On the other hand, critical infrastructure is usually not privatised, so there isn’t that obligation. I am not sure what disclosure achieves. It may just encourage other 14-year-olds to launch their own attacks.
Searle: Initially, when the state of California put in its disclosure laws, it was widely applauded. But these have resulted in some unintended and undesirable behaviours. I believe Australia will end up in the same position because there will be mandatory disclosure here. Reporting to a regulatory body is probably the way to go, as long as that body has the power to enforce penalties or changes. You might say Company X has been found in breach and has been fined.

That’s enough, as long as the public feels there are some checks and balances in place. I don’t think it serves a purpose to air all the details of what happened.
Hosking: It doesn’t solve your problem. In the energy industry, climate change activists are stepping up their activities and announcing what they have done. We are a target. You can’t prevent them, but if they knew that we had to report everything, they would probably do it more often.
McIntyre: It is less important to have penalties and enforcement provisions. It is more important to have disclosure so that the industry or the government is aware of what attacks have taken place. I would have thought that an obligation to report breaches is necessary from a national economic security perspective, if nothing else.
CYBER CURES
The interconnectedness of everything
Van Reyk: The modern system is highly interconnected and there is a veritable technology ecosystem. It is not security in isolation. Security has to be an end-to-end, whole of system approach where everyone is focused on the issue. Everything from robust prevention to quick detection to early response and rapid recovery is dependent on everything else in the system working.
Hosking: It should be an integrated approach where you apply protection, detection, containment and recovery controls across people, processes and technology. It’s not about just being good at disaster recovery or being perfect in protection or detection. You have to use multiple weapons and test them all together.
Lifson: The key thing here is that there is an assumption that it is going to happen. Many industries don’t make that assumption. They just focus on prevention.
Lessons from work, health and safety (WHS)
O’Connor: With cyber security, some boards and senior management teams have been focused on technology without considering the need for a system-wide approach. If you speak to people in cyber security, they are often on a mission to have separate IT sub-committees of the board and treat this as a separate issue. But we think there will be much more benefit and more favourable outcomes from using the things that already work, such as WHS systems. WHS is about safe places, safe processes, safe people and a safe culture. In the same way, cyber security is about safe systems, safe processes, safe people and a safe culture. You are not relying on one thing. It’s the Swiss cheese model: every slice has holes, but when you stack several slices the holes no longer line up. If you have a number of controls in place, one of them can fail, but the others will still work.
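A toy calculation shows why the layering works. It is a sketch only, assuming each control fails 10 per cent of the time and that failures are independent, which real controls rarely are:

```python
# Toy illustration of the Swiss cheese model: an attack succeeds only
# if every layer fails at once. Assumes each control fails 10% of the
# time and failures are independent (an idealisation).

p_fail = 0.1
for layers in (1, 2, 3, 4):
    p_breach = p_fail ** layers
    print(f"{layers} control(s): breach probability {p_breach:.4%}")
# With four layered controls the combined failure rate drops from
# 10% to 0.01%.
```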
Lifson: The challenge is that WHS is something tangible. It is physical. It can be visceral. But cyber security is something that people can’t really see. Until it becomes physical or tangible, it is going to be difficult.
Searle: From my limited knowledge of the manufacturing sector, you would say a large portion has a culture of safety. If you see someone behaving unsafely around a piece of machinery, it is considered not only standard practice, but good practice, to say: “Hey mate, don’t do that!” From a cyber security perspective, it should be the same thing. If you see someone behaving in an unsafe manner, you should not be seen as a dobber if you say that’s not right. You should be able to say why it’s not right without fear of any repercussions. Most organisations, particularly in Australia, tend to take things like WHS and financial controls very seriously, and I think that cyber security is something that needs to progress because it also has a direct business impact. In my time in the security industry, there has been a maturing, but it’s a long journey.
Hosking: It builds up over time. In the electricity industry, we still get electrocutions and people who don’t make their worksites safe. It’s a constant battle.
Searle: For too long, cyber security has been looked at as the department of “no”. It tends to be seen as prohibiting the business when, in fact, it should be finding ways to enable the business. At a board level, it is all about risk and opportunity. The security division’s responsibility is to reduce the risk to the point where the opportunity can be pursued.
O’Connor: For regional utilities, the highest risk on the register is typically motor vehicle accidents because of the large number of installations located around the country and the need for staff to drive between them. With the introduction of the next generation of SCADA systems, there is the opportunity to significantly reduce the number of motor vehicle accidents through remote monitoring of control systems and sites. This illustrates how newer systems can significantly reduce the corporate risk profile even though new risks are being introduced.
Gleeson: When we are introducing a new system, we will get an external adviser in to review how it affects security across the board. We recognise all these things are interlinked in different ways and that there can be unintended consequences. Through these reviews, we’ve picked up things and they have been quite valuable learning exercises. I think the most significant thing for boards is to recognise that this requires a holistic system approach. It is not just a matter of plugging one hole or just putting in really good password systems. There are many ways in which the cyber risk manifests itself and you need the Swiss cheese approach: using various ways and coming from various directions. Also, directors don’t have to be tech gurus. It is a matter of having that systems approach and drawing on international standards and practices that are well accepted in our industry. You then have to test yourselves against these through internal and external reviews from time to time. You may think you are doing it right, but you still have to ask the question: “Are we?”
Where to go for help
Searle: In the energy space, there is the North American Electric Reliability Corporation (NERC) cyber security standard, which is very popular in the US. However, in our view, it is probably not prescriptive enough. It talks about things you should do, not things that must be done. If you look at the Australian government’s Information Security Manual, it is much more prescriptive. If you translate some of the terms from the national security space to the energy or SCADA space, it becomes really applicable and useful in terms of identifying behaviours, technologies and procedures that are appropriate for managing that kind of environment. It gives you a very clear laundry list of things you should do. Also, look at the ASD’s top 35 strategies to mitigate cyber-attacks. If you implemented just the first four, you would address the bulk of the threats. There may be diminishing returns as you go further down the list, but at least it gives you a point from which to measure your risk appetite and a much better way to make an informed, risk-based decision about how much risk you want to absorb and how much you see as appropriate for your organisation.
McIntyre: As a provider of key infrastructure, we also tap into the government security advisers. We have security clearance status. Notwithstanding any monitoring that you do yourself, if the environment changes or there are known attacks on anything in Australia or on some infrastructure internationally, we will receive that advice, which puts our people on notice. So while you can rely on monitoring and common sense, that formal communication through the security advisers is really important. It helps you to be aware of what the environment is like and to consider whether you are potentially at a greater level of risk than you were 24 hours ago, or even three hours ago.
THE BOARD
Burdis: At the board level, are organisations fully aware of the risks associated with cyber security and the issues involved?
O’Connor: I can only give one of those “it depends” answers. In organisations where there is a very high level of understanding of the internet and cyber security, it is rated very highly. Typically, it is a top five risk. I have seen organisations that have, in the early stages, rated it somewhere around 15 or below. That reflects that people don’t know what they don’t know. Once they get that information, I have seen some take it from 15 up to the top five.
Lifson: It rarely gets discussed in detail at the board level. The audit committee seems to be where most of the discussion on cyber security and its risks takes place. You might have IT come along and talk to the board about systems and penetration testing. But my observation is that it is all being driven from the audit committee.
Gleeson: Implicitly the board is delegating this sort of risk to the audit committee and then occasionally it will come up to the full board.
Adams: We have a separate risk committee which tends to deal with these issues. They are discussed by the board in the broader sense, not in the detailed sense. The board deals with the issue by relying on management and the work process. The board’s meetings and agendas are long enough and directors are not going to be able to get into sufficient detail at board meetings. Having three-hour risk committee meetings and separate audit committee meetings that can actually drill into that detail is a good idea.
McIntyre: The mitigation and control of cyber security is overseen by the TransGrid board’s audit and risk committee. The board has had some education on cyber risks and has engaged consultants. For example, we had senior advisers to the US government come in and educate the board. So the issue is certainly also on our board’s radar.
Hosking: It’s the same in all of the companies I am involved in. The audit and risk committee is the first reference for formulating a risk profile for the company. It determines the list of the most important risks – IT is one of them. We then rely on internal management reporting and independent audits and advisory services to assure us that the risks are being properly monitored. Any important breach is immediately reported to the committee and the board. Breaches can range from phishing for banking or confidential details to attempts by national entities to access the system. We don’t believe it is appropriate for these things to be reported to a body separate to the audit and risk committee.
Van Reyk: You also need to see this in the context of digital technology offering many opportunities. Boards are in the business of trying to find the balance between opportunity and risk. They ensure there is a framework for dealing with a bunch of risks, of which cyber is just one. All sorts of risks go to particular committees to look at because they have the skill sets and more time to take deep dives. So a risk committee may look more closely at the operational and technology risks while the safety committee will look at WHS. I don’t see this as any different to fitting in with what is already good governance anyway.
Tips for other boards
Burdis: What are your tips for other boards dealing with this issue?
Lifson: The key thing is that at board level there should be some folks with the experience to understand the answers to the questions posed. The questions should be around protection, capability, impact and response because when the big one comes, it won’t be something we have seen before. This area has become so complex and will only get more complex, so if you don’t have some level of understanding to ask the right questions and understand the advice you get, you are not going to be able to do your duty as a director.
Hosking: Without getting technical in its questions, the board must be conscious of the risk and ask questions such as: “Are the roles and responsibilities of our employees in line with their accountabilities? Who owns this risk versus that risk? What is the approach to protection, detection, containment and recovery? Are we focused on one of these or all of them?” The board must then ensure it receives cogent answers and then have the answers tested by an external entity to see whether they work. If you haven’t asked the questions, you are not doing your job as a director. As with WHS, given the reliance on IT these days, you are responsible to your shareholders to ensure the very core thing that runs the business is properly protected and secure.
McIntyre: I have two things to add. One is education. Even an astute experienced director is unlikely to be an expert on IT or this kind of risk. So it is vital for boards to bring in people who are at the cutting edge of IT and security risks to get an awareness of the issues and what they should be asking about. Second, when you consider risk mitigation, controls and testing, you should assume that the worst plausible thing could happen and examine how you would operate in that environment and how you would recover.
Hosking: When a new system is “sold” to the board by management, remember that there is nothing that eliminates risk, only things that change the risk. So you need to be fully aware of what the changed risk will be and not just take the system at face value.
Searle: It comes down to two questions. Boards have to ask what their organisation’s crown jewels are in terms of data, and then whether it is doing enough to protect them. They also need to consider how these evolve over time because the environment and your crown jewels are not static. Asking these questions on a regular basis tends to elicit different responses at different times. It’s up to the board to ensure that, as the business evolves and changes, the right controls are in place for the right pieces of data. It also gets back to understanding that cyber security is a holistic exercise. It involves humans, processes, governance and technology. Also, look at your culture of security. From the highest levels of leadership down, security should be seen as a business enabler, as a fact of life rather than something that hinders business. Part of that is having the right training for the right people at the right level. The education cannot be “one size fits all”. We need to build awareness. It’s no different to the “stranger danger” campaign when I was a child. You don’t want to be a fearmonger or tell people to avoid the internet, but they do need to understand the risks.
Lifson: The difficulty for boards is that directors traditionally don’t have an IT background, so they are dependent on receiving information from management, IT people and perhaps some internal audit reviews. Because directors are not experts, they need to ensure they have the right people in place if anything happens.
O’Connor: When the board is making a decision about an investment or design or anything else, I like to see the question asked: “If we made this decision, how will it address our top 10 risks?” The answer could be: “This decision is part of the mitigation for risk number three which is cyber security or risk number one which is reducing motor vehicle accidents.” The board can then make that considered judgement about the investment, its return and its value in risk mitigation.
Burdis: I’ve heard it said that if you have an IT guy on the board, you have the wrong person. You just can’t keep up to date with IT developments when you are sitting on a board.
Gleeson: If you are relying on one thing to solve the problem for you – the air gap or the one director – you could be in trouble.
O’Connor: It all comes down to people. You have to ask whether you have the right skills at the boardroom table to ask the questions and to assess the answers. Do you have the right people in management and do they have the ability to deal with this issue? Do you have access to the most current information, whether it’s from the board, management or external providers? Does the board have the ability, in the face of new information, to listen and analyse new developments? It’s not a point-in-time thing.
Searle: Some boards have reached an admirable level of maturity around the way they view risks, but others are probably not at that point. We are being asked to provide awareness campaigns to educate directors about the risks. This can be done through activities like war game exercises where you walk the board through a role play. That tends to highlight that this type of risk is not just an IT risk. It is a business risk and often you will have to make decisions where no right decision exists. There is a worse decision and a “less worse” decision. But if a board treats this as it would any other enterprise risk, then the issue starts to become familiar ground and the board starts to ask questions – for example, who will get that dreaded 2am phone call that something horrible has happened?
Who is the right person to make the decisions? It has to be treated as a holistic problem rather than a bits and bytes issue.