How effective processes boost stakeholder trust
While technology failures, whether accidental glitches or malicious attacks, can erode stakeholder trust in businesses, correctly implemented, digitised processes can go a long way towards reducing the risks and chaos caused by criminals and by careless or technologically naïve users.
That’s the view of Denis Bensch, CIO of FlowCentric Technologies, who notes that the digital age has brought with it many challenges like hackers, deepfakes, ransomware, prolific data generation, shadow IT, steep learning curves for many people, and occasionally clumsy, complicated or disconnected processes.
According to Bensch, systems and processes are added to companies over the years, often in an attempt to ‘keep up’. Without regular evaluations to ensure that the systems and processes are supporting the goals and performance of the business, and improving relationships with stakeholders, they become clumsy, inconsistent and laborious to navigate.
“Consistent, transparent, connected processes across systems, departments, internal and external resources and regions are absolutely essential to building and maintaining stakeholder trust, as these make for better service delivery and consistent quality,” he says.
Think of a customer whose only way to contact a company is a centralised call centre and who calls in with an urgent request to change delivery details. However, the only way the agent can deal with the request is to escalate it to another department, which then has to route it to the warehouse. But the warehouse either doesn’t receive the request or ignores it, and so proceeds with the incorrect delivery. The result is a frustrated call centre agent and a disgruntled, mistrustful customer.
“When departments operate in data silos because their systems don’t ‘talk to each other’, or the data needs to be input multiple times, it slows or even strangles productivity and exasperates customers as they are bounced from person to person, having to repeat their story multiple times. This hardly engenders trust in the organisation,” he adds.
Bensch believes strong processes can also go a long way towards reducing the risk of the new and innovative types of fraud enabled by rapid advances in technologies like artificial intelligence (AI) and machine learning (ML). This includes AI-enabled deepfakes, particularly deepfake audio, which is considered among the most advanced forms of cyber attack today.
One of the first reported deepfake attacks occurred in 2019, when the CEO of a UK-based energy firm received a call from his boss, the head of the firm’s German parent company, requesting that an urgent deposit be made into a particular company’s account. Except it wasn’t his boss, but a deepfake imitation. The company was defrauded of a significant sum of money.
“If a company has well defined and digitally enforced business processes in place, these types of scams are far less likely to occur,” Bensch says. “For example, in the case of the energy firm, the urgent transfer request could have speedily followed the correct process and chain of approval without noticeably reducing the speed at which things were actioned. This could have prevented the release of the money because the CFO, group CFO or other responsible party would have been looped in and could have stopped it or raised an alarm.”
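The kind of digitally enforced approval chain Bensch describes can be sketched in a few lines of code. This is a minimal illustration with hypothetical class and role names, not a description of any particular company’s system: a payment request simply cannot be released, however urgent the caller claims it is, until every required approver has signed off.

```python
# Minimal sketch (hypothetical names) of a digitally enforced approval chain:
# funds are released only once the full chain of approval is complete.
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str
    required_approvers: tuple = ("cfo", "group_cfo")
    approvals: set = field(default_factory=set)

    def approve(self, role: str) -> None:
        # Only roles in the defined chain count towards release.
        if role in self.required_approvers:
            self.approvals.add(role)

    def can_release(self) -> bool:
        # True only when every responsible party has been looped in.
        return set(self.required_approvers) <= self.approvals

request = PaymentRequest(amount=100_000, requested_by="ceo")
print(request.can_release())  # False: an urgent phone call alone releases nothing
request.approve("cfo")
request.approve("group_cfo")
print(request.can_release())  # True: the chain of approval is complete
```

The point of the sketch is that the control lives in the process, not in any one person’s judgment under pressure: a deepfaked voice can pressure an individual, but it cannot supply the missing approvals.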
And now the deepfake perpetrators are becoming even smarter, using voice in conjunction with e-mails to improve their chances of defrauding companies.
There is also growing recognition that the uncontrolled adoption of AI and ML can itself undermine stakeholder trust.
Bensch explains that this technology is increasingly being applied in areas in which important judgments and decisions have to be made – such as banking or human resources – in an attempt to reduce the impact of unconscious human biases.
However, this can backfire if the data and model used for the ML are themselves biased. A few years ago, Amazon attempted to build an AI/ML tool to help with its recruitment, but was forced to scrap the project when it was discovered that the system discriminated against women because the dataset used to train it was male dominated.
“AI and ML systems that lend themselves to errors or bias, or that can be easily hacked and manipulated, will seriously undermine stakeholder trust and pose significant risk to an organisation’s reputation and bottom line.
“To ensure stakeholder trust in the technology and avoid ML bias, one has to ensure that the data used for the system is as accurate and as free of prejudice as possible. In addition, systems must be put in place to identify and ameliorate any anomalies that may arise,” Bensch says.
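One simple, concrete form such a safeguard can take is an automated pre-training check on the data itself. The sketch below uses hypothetical data and a hypothetical threshold; it flags any group whose share of the records dominates the dataset, the kind of imbalance behind the male-dominated recruitment data described above.

```python
# Minimal sketch (hypothetical data and threshold) of a pre-training bias check:
# flag any group whose share of the records exceeds a chosen threshold.
from collections import Counter

def flag_imbalance(records, attribute, threshold=0.7):
    """Return the share of each group under `attribute` that exceeds `threshold`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total > threshold}

# Hypothetical applicant pool: 85 male records, 15 female records.
applicants = [{"gender": "male"}] * 85 + [{"gender": "female"}] * 15
print(flag_imbalance(applicants, "gender"))  # {'male': 0.85} -> review before training
```

A check like this identifies the anomaly before a model is trained on it; ameliorating it, by rebalancing or augmenting the data, is the harder second step Bensch alludes to.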
He also maintains there should be an international governing body managing and legislating technology in the same way that there are medical or engineering boards holding professionals accountable.
The European Union appears to be moving in this direction. In a recent statement, Guido Lobrano, vice-president for Europe for the Information Technology Industry Council, said that EU regulation of AI was being designed in a way that would increase user trust while simultaneously unlocking innovation.
“A key element in achieving business goals is stakeholder trust. Whether we’re drinking water from a tap, shopping online, sending an e-mail or applying for a bank loan, we have to trust that somebody, somewhere, has taken the necessary steps to make sure this can be done safely and fairly. It would be nearly impossible to ensure that this trust is earned and maintained without the checks and balances that sound, consistent and tightly controlled processes bring,” Bensch concludes.