Industrialised fraud, digital money mules, social engineering… BAE Systems’ Applied Intelligence Unit takes a military approach to cyberattacks on our assets. Gareth Evans, Senior Business Solutions Consultant for Fraud Prevention, says know your enemy, work with your allies and don’t be a sitting target.
Open banking, artificial intelligence (AI) and big data are all transforming the way people bank. But while greater connectivity promises wider benefits, it also paves the way for new vulnerabilities.
The data-driven concept of open banking is being hailed as a way to revolutionise the financial services industry, increasing competition and innovation in the market. Criminals, on the other hand, see a wealth of new vectors for fraud, making financial crime harder to identify and defeat.
How to be open and at the same time more secure is the contradiction that many are working to solve. One of the biggest is BAE Systems, better known perhaps for national and supranational defence and security projects, from long-range missiles to military communications. Its Applied Intelligence Unit took the company’s expertise in electronic warfare and used it to improve the defence of our finances. It explores ways to confront new-age cybercriminals and develop effective strategies for cybercrime management with a proprietary suite of commercial applications under its NetReveal brand.
Gareth Evans, BAE Systems’ senior business solutions consultant in fraud prevention, says that to be truly future-proofed, an institution must be a moving target – faster than the criminals seeking to attack it. On open banking, he is clear about the benefits – and the risks.
“It’s a huge opportunity for new players to come into the market, offer new products and services and change the interaction between the customer and the bank. It’s massive. But it also opens up more threats, more opportunities for fraudsters to exploit. I would say that both the biggest opportunity and the biggest threat at the moment sits within open banking,” says Evans.
Open banking and the revised Payment Services Directive (PSD2), which sits at the heart of the open banking initiative in Europe, herald a shift from a closed banking model to one where banks must be able to support customer requests to share data securely with other trusted third-party providers (TPPs), typically via application programming interfaces (APIs). As a result, a bank’s security perimeters are extended beyond its own infrastructure.
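In practice, that extended perimeter means a bank's systems must check, on every API call, that a third party is acting under a valid customer consent. The sketch below illustrates the idea; the names, fields and scopes are hypothetical, not any real PSD2 or NetReveal API.

```python
# Illustrative sketch (hypothetical names): how a bank's API gateway might
# check that a third-party provider's (TPP's) request is covered by a
# customer consent before releasing account data, in the spirit of PSD2.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Consent:
    customer_id: str
    tpp_id: str
    scopes: frozenset        # e.g. {"accounts:read"}
    expires_at: datetime


def authorise(consent: Consent, tpp_id: str, scope: str, now: datetime) -> bool:
    """Allow the call only for the named TPP, within scope and validity."""
    return (
        consent.tpp_id == tpp_id
        and scope in consent.scopes
        and now < consent.expires_at
    )


now = datetime(2019, 10, 1)
consent = Consent("cust-1", "tpp-42", frozenset({"accounts:read"}),
                  now + timedelta(days=90))

print(authorise(consent, "tpp-42", "accounts:read", now))   # named TPP, granted scope
print(authorise(consent, "tpp-42", "payments:write", now))  # scope not granted
print(authorise(consent, "tpp-99", "accounts:read", now))   # different TPP
```

The point of the sketch is that each new consent and each new TPP is another edge on the perimeter the bank has to police.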
“What we are seeing is the attack vectors widening,” says Evans. “In the past, people used to target your bank account because that was the obvious place to go after your money. Now, it’s not just a case of trying to get hold of your data via social media and looking at your online footprint – using Twitter to see when people are complaining, and then messaging them directly to say ‘hi, I’m calling from A, B, C bank, I understand you’ve got a problem. Can you give me your password, I’ll have a look’. Now, they’re using third parties. They’re saying ‘hey, I’m calling from Uber’, or ‘I’m calling from Airbnb’. Customers are conditioned not to give their bank account details or confidential information to a bank until they verify it, but I think they’re less ready to think like that when it’s somebody that’s not their bank calling them.
“In the same way, with social engineering we’ve seen people like solicitors and the invoicing or procurement departments of organisations being targeted in order to commit fraud further down the line – not just going after the bank. I think that will continue. The bigger the digital footprint we have and the more the banks open up, the wider that attack vector is going to get. It’s not just the obvious targets, it’s the route that gets to that target.”
Crowe Clark Whitehill, administrator of Cambridge Analytica, the political consultancy at the heart of the Facebook data scandal, says companies are now losing an average of seven per cent of annual expenditure to fraud, with the global cost topping £3 trillion. And, according to the Financial Conduct Authority (FCA), the total annual bill for UK banks for fighting cybercrime and online fraud is £6.7 billion.
Banks need to move away from a compliance-based view of security, where businesses look at the controls needed to protect their assets, and think more like a potential attacker for whom contact information is now of more value than assets, says Evans.
In July, the personal details of about 106 million individuals across the US and Canada were stolen in a hack targeting financial services firm Capital One. The data leak, which affected consumers and small business owners who had applied for credit card products, included names, addresses, phone numbers, self-reported income, credit scores and payment history. While no direct financial data was involved, the context created by combining that other information was a phisher’s paradise. More than a hundred million Capital One customers are now at serious risk of fraud.
The data breach, one of the largest in banking history, is believed to have been carried out by a lone hacker. So, is the greatest threat from the individual or organised gangs?
“It’s a combination of both,” says Evans. “Fraud itself has become industrialised. You have organised gangs who go after the data, but it’s also segmented. Rather than one person trying to commit all the fraud in one go, they will have people who will just be mining data, people who do the data management exercise, trying to break that down into usable data and getting rid of all the noise. You’ll then have people who’ll use that data to target specific banks, in specific regions, in specific countries. There are also people out there who are out recruiting mules, so that they can move the money from the victim’s account, through the mule accounts to the destination. And then there are people who are doing no more than building tools to capture that data.
“There’s a huge currency on the dark net, for instance, and in the black markets, selling fraud detection tools, cyberhacking tools, selling data, selling access to accounts. It’s an entire industry and there’s not one silver bullet to it. We need to understand the entire ecosystem.”
Evans cites AI as one of the key technologies capable of stemming loss from fraud by making detection of small-sum crime that adds up to big losses more robust. It can help banks identify who the genuine user is by building up a detailed picture of their behaviour and spotting anomalies, or patterns, in transactions that might indicate fraud.
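The idea of spotting anomalies against a customer's own history can be shown with a deliberately minimal sketch. This is not the approach of any named product: real fraud models use many more features (device, location, timing, counterparty) and learned thresholds, not a single z-score on spend.

```python
# A minimal sketch of anomaly detection on transaction amounts, assuming a
# per-customer spending history. Flag amounts more than `threshold`
# standard deviations from the customer's mean spend.
from statistics import mean, stdev


def is_anomalous(history, amount, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold


history = [24.5, 31.0, 18.75, 27.2, 22.0, 29.9, 25.4, 30.1]
print(is_anomalous(history, 28.0))    # typical spend -> False
print(is_anomalous(history, 950.0))   # far outside the pattern -> True
```

Even this toy version captures the principle Evans describes: the model's picture of "normal" is built per customer, so the same amount can be routine for one account and a red flag for another.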
“I think AI has so many uses across banking, some positive, some maybe not so positive,” says Evans. “From a fraud defence perspective, we use AI to help improve models. Traditionally, we used to look at detecting fraud through a series of questions, if you like, but with AI we can become a lot more granular with that.
“On the other side of the equation, banks are able to tailor user journeys to customers using AI, better understanding their behaviours and profiles. That can allow the bank to better communicate, to give a better experience, but it also makes it a more secure experience because it becomes more individual. So, simple things like the AI being able to understand what your communication preferences are, and how you initiate those communications, which can give me confidence that it is the bank that’s speaking to me and not a third party.”
Security and AI
Cloud technology is becoming an indispensable element of digital strategy for banks, enabling them to provide the on-demand services their customers are used to getting from other consumer industries. But concerns around security have often held banks back from embracing it fully.
“I honestly don’t feel that Cloud necessarily changes the bank’s security, either positively or negatively, because the Cloud infrastructure today and the closed datacentre that the banks have traditionally used, are both very secure,” says Evans. “Even if it’s a public Cloud, the security is the same. That being said, I’ve never truly seen the banks as the real point of vulnerability so much as the customers of the bank.

“Where I think the Cloud does add value is in smaller organisations that might interact with a bank as third-party payment providers, or other small companies that maybe don’t have the resources, financial and people, to shore up their defences. By collectively sitting under someone like Amazon, Google, or Microsoft, which do have those resources, they become more secure.”
BAE Systems helps banks hand repetitive work to robots and automates as much of the process as possible to free up employees. Machine learning, AI and robotic process automation, it says, have much to bring to the table, and successful organisations will be those that combine human and machine intelligence. AI can play an important role in routine cybersecurity functions, such as filtering out phishing emails, which are difficult for people to spot.
“So, one of the things we can do, specifically around social engineering and machine learning, is, rather than looking at binary scenario rules, which effectively ask ‘what are you doing and is this you?’, look at many more data points, analysing behaviours similar to those that have previously resulted in, say, a social engineering fraud.
“It becomes less reliant on understanding if you are you and rather asks ‘what is your behaviour and what does your behaviour look like against a selection of other behaviours that we’ve observed among other customers?’. We can identify examples of manipulation/social engineering by looking at patterns of behaviour, rather than at individual, linear behaviours. But that’s just one of the things that we’re doing with AI,” says Evans.
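The contrast Evans draws between a binary rule and a pattern-based check can be sketched in a few lines. Every feature, threshold and pattern here is invented for illustration; production systems learn these patterns from large volumes of labelled sessions rather than hard-coding them.

```python
# Sketch contrasting a single binary rule with a behavioural-pattern check.
# Features and patterns are illustrative only.

def rule_based(txn):
    # One linear rule: flag any transfer over a fixed limit.
    return txn["amount"] > 5000


def pattern_based(txn, known_fraud_patterns):
    # Compare several behavioural signals against combinations that have
    # previously ended in social-engineering fraud (simple exact matching
    # stands in for a learned model here).
    signals = (txn["new_payee"], txn["session_under_60s"], txn["on_phone_call"])
    return signals in known_fraud_patterns


# Patterns observed in past social-engineering cases (invented):
# (new payee?, very short session?, customer on a phone call?)
patterns = [(True, True, True), (True, False, True)]

txn = {"amount": 900, "new_payee": True,
       "session_under_60s": False, "on_phone_call": True}

print(rule_based(txn))              # amount is below the limit: missed
print(pattern_based(txn, patterns))  # behavioural combination: flagged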
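The contrast Evans draws between a binary rule and a pattern-based check can be sketched in a few lines. Every feature, threshold and pattern here is invented for illustration; production systems learn such patterns from large volumes of labelled sessions rather than hard-coding them.

```python
# Sketch contrasting a single binary rule with a behavioural-pattern check.
# Features and patterns are illustrative only.

def rule_based(txn):
    # One linear rule: flag any transfer over a fixed limit.
    return txn["amount"] > 5000


def pattern_based(txn, known_fraud_patterns):
    # Compare several behavioural signals against combinations that have
    # previously ended in social-engineering fraud (simple exact matching
    # stands in for a learned model here).
    signals = (txn["new_payee"], txn["session_under_60s"], txn["on_phone_call"])
    return signals in known_fraud_patterns


# Patterns observed in past social-engineering cases (invented):
# (new payee?, very short session?, customer on a phone call?)
patterns = [(True, True, True), (True, False, True)]

txn = {"amount": 900, "new_payee": True,
       "session_under_60s": False, "on_phone_call": True}

print(rule_based(txn))               # amount is below the limit: missed
print(pattern_based(txn, patterns))  # behavioural combination: flagged
```

The rule misses the transfer because the amount looks innocuous; the pattern check catches it because the surrounding behaviour matches sessions that previously ended in fraud.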
“We’re also looking to become more efficient in terms of how we manage fraud. It’s not just about identifying fraud, it’s about how you manage the experience around that by dynamically looking at things like the customer interaction, to step up authentication, increase the journey where it’s riskier and decrease it where it’s not.
“I often say that the most important thing in a fraud system is not actually detecting fraud, which sounds pretty strange, but it’s telling you, with confidence, when it’s not fraud – because if you know it’s not fraud, you can build a whole number of customer journeys into the equation and give the optimal customer experience, and you can allow that customer to have a really satisfactory view of your bank.”
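Evans's point about confidently clearing non-fraud enabling better journeys is essentially risk-based routing. A hypothetical sketch, with invented thresholds and action names (not a BAE Systems or NetReveal interface):

```python
# Hypothetical risk-based "step-up" routing: low-risk transactions flow
# through silently, mid-risk ones trigger an extra challenge, high-risk
# ones are held for review. Thresholds and labels are illustrative.

def route(risk_score: float) -> str:
    if risk_score < 0.2:
        return "approve"          # confident it's not fraud: frictionless journey
    if risk_score < 0.7:
        return "step_up_auth"     # ask for an extra factor before proceeding
    return "hold_for_review"      # likely fraud: block and refer to analysts


for score in (0.05, 0.4, 0.9):
    print(score, route(score))
```

The design choice is the one Evans describes: the value of the low band is that the bank can remove friction entirely for customers it is confident about, rather than treating every transaction with the same suspicion.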
A cross-industry response
Banks that future-proof their compliance and fraud teams represent a moving target to single bad actors and criminal enterprises, but there is also a strong argument for active and early co-operation with competitors, regulators and law enforcement. Merely responding to threats isn’t enough.
Banks prevented two-thirds of fraud attempts in 2018, according to trade body UK Finance, but a lack of support from authorities is making the job harder, believes Evans.
“There isn’t really a deterrent to fraud,” he says. “But if someone has successfully stolen the money, then the police can prosecute. So how do we work with law enforcement to get better at going after attempted fraud? We need to tackle the root cause and take out the mule accounts.”
A recent report from Her Majesty’s Inspectorate of Constabulary and Fire & Rescue Services in the UK revealed that one police force filed 96 per cent of well-evidenced fraud reports without further investigation.
“Within BAE, we’ve got something called the intelligence network, which is almost a kind of social experiment where we are trying to create thought leadership pieces in conjunction with financial services authorities to tackle subjects like cyberfraud,” says Evans.
“Rather than put the onus on the banks, independently, or on the vendors, to come up with a technological solution, we’re saying ‘how do we collectively, as an industry, work towards solving the problem? How do we tackle this?’.
“The BAE intelligence network helps in that task by running workshops, sharing thought leadership pieces, videos, etc. I think that, in many ways, the industry getting together to solve problems is probably a stronger solution than saying we just need to educate our customers, and putting a help page on a website.”
Clearly, new technology like AI must be part of the industry’s arsenal in responding to and deflecting fraud, but banks must also look at, and be prepared for, who else is using it. If financial institutions see its benefits, so too will the ‘bad guys’.
In March this year, cybercriminals used AI and voice technology to impersonate a UK business owner, resulting in the fraudulent transfer of $243,000 (£201,000), according to a report in the Wall Street Journal.
An unknown hacker group is said to have exploited AI-powered software to mimic the individual’s voice and fool his subordinate, the CEO of a UK-based energy subsidiary. The hackers were then able to convince the CEO to transfer what were presented as urgent funds destined for its German parent company.
This kind of attack could be a sign of things to come, according to some cybersecurity specialists, who expect to see a huge rise in machine-learned cybercrimes, raising the question: will protection of our funds and personal security ultimately come down to who has the smartest robot?
BAE Systems will be revealing its latest version of NetReveal at Sibos 2019. Two years in the making, it will help financial services out-think the bad guys… for now.