
BFSI Sector Struggles To Balance AI Security & Accuracy: Report

The State of Data Infrastructure 2024 Report, based on insights from 231 global IT and business leaders, found that 48 per cent of BFSI professionals consider data security their top concern when implementing AI

The rapid evolution of artificial intelligence (AI) is putting unprecedented pressure on traditional data infrastructures, forcing businesses in the Banking, Financial Services, and Insurance (BFSI) sector to weigh trade-offs among security, data quality, and sustainability. A new survey by Hitachi Vantara, the data storage, infrastructure, and hybrid cloud management subsidiary of Hitachi, Ltd. (TSE: 6501), reveals that while financial leaders prioritise security, they risk compromising AI accuracy and long-term returns on investment.

According to the report, 48 per cent of BFSI professionals consider data security their top concern when implementing AI, and 84 per cent believe that losing data to a cyberattack or human error would be catastrophic. However, focusing on security at the expense of data quality is creating gaps in AI performance, including inaccurate models, poor decision-making, and diminished trust in AI-powered financial systems.

The Trade-Off Between Security & AI Accuracy

Data security remains a major priority for BFSI institutions, given the rising threat of cyberattacks and AI-related risks. The report highlights that:

Data in BFSI organisations is available only 25 per cent of the time when and where it is needed.

AI models used in BFSI firms are accurate just 21 per cent of the time.

36 per cent of respondents fear that internal AI systems could cause data breaches, while 38 per cent worry about ransomware attacks leading to data loss.

32 per cent of BFSI leaders believe an AI-enabled cyberattack could expose their organisation to security threats.

As BFSI firms embrace AI-driven operations, these security concerns highlight the urgent need for well-structured, resilient AI frameworks that prioritise both data security and quality.

Hemant Tiwari, Managing Director and Vice President of India and SAARC Region, Hitachi Vantara, stressed the importance of this balance:

“As AI rapidly transforms India’s financial sector, the real challenge is ensuring it operates on accurate, secure, and well-governed data. Trust is the foundation of India’s BFSI industry, and even a small AI-related error, whether it is an inaccurate prediction or a security breach, can weaken customer confidence and create regulatory challenges.

“While 48 per cent of global financial leaders prioritise security while implementing AI, India’s BFSI sector must take a more proactive approach. We need to build resilient data infrastructures that balance security with accuracy, governance, and sustainability. Institutions that establish reliable AI frameworks today will lead the next wave of financial innovation while maintaining trust and operational integrity.”

Accuracy Challenges & AI Risks In Financial Services

The BFSI sector relies on accuracy and trust in its AI applications, as errors in financial data can lead to serious consequences, including regulatory fines, reputational damage, and financial losses.

Mark Katz, CTO of Financial Services at Hitachi Vantara, highlighted the risks posed by inaccurate AI-generated responses:

“The business model in financial services is inherently tied to trust. Reputational harm is a significant risk, and so in our industry, the interaction between security and accuracy is a critical and complex challenge.

“For instance, if a chatbot inadvertently discloses sensitive information that was included in the training data, that will have serious repercussions. Additionally, the cost of a wrong answer or a hallucination poses a significant risk; if someone were to act on bad data, it raises all sorts of questions about liability.”

Despite these risks, AI adoption in BFSI is accelerating, with firms rushing to integrate AI-powered chatbots, fraud detection systems, and personalised financial services. However, 71 per cent of BFSI leaders admit they are testing AI models on live systems, while only 4 per cent use controlled sandbox environments for AI experimentation. This lack of proper testing and governance increases the risk of AI failures.

Building Resilient AI Infrastructure

To address these challenges, BFSI organisations need to adopt a more strategic approach to AI deployment.

Alenka Grealish, co-head of Generative AI Intelligence at Celent, emphasised the need for careful planning and governance:

“While the rapid adoption of generative AI in the financial services sector is exciting, financial institutions need to ensure they are taking a strategic approach.

Organisations must balance speed and innovation with a clear focus on security, accuracy and ethical responsibility. Those that prioritise thoughtful planning and robust frameworks will not only mitigate risks but also unlock the full potential of GenAI to drive sustainable growth and competitive advantage. In the process, they will build lasting trust with their stakeholders.”

The report outlines key strategies to help BFSI firms strengthen their AI infrastructure:

Responsible AI Experimentation: Instead of testing AI in live environments, BFSI firms should develop AI capabilities in secure sandboxes to reduce risks while refining AI applications. Currently, only 42 per cent of BFSI leaders are investing in structured AI training and experimentation.

Sustainability-Focused Infrastructure: From energy-efficient data storage to optimised software models, BFSI firms must embed sustainability practices into their AI strategies.

Simplified AI Management: To reduce complexity, BFSI organisations should unify their hybrid cloud environments, automate security processes, and use integrated data platforms to accelerate AI-driven insights.

Data Resilience and AI-Driven Security: With ransomware and AI-enabled cyber threats on the rise, BFSI firms must build robust backup and recovery mechanisms. This includes using self-healing storage, encrypted AI data, and automated recovery systems to mitigate potential data breaches.

Future Of AI In BFSI

The findings from Hitachi Vantara’s State of Data Infrastructure 2024 Report highlight the urgent need for BFSI firms to rethink their AI strategies. While security is essential, ignoring data accuracy and infrastructure resilience can lead to AI failures, financial risks, and regulatory challenges.

To thrive in the AI-driven financial sector, BFSI institutions must focus on building AI models that are accurate, secure, sustainable, and well-governed. Those that succeed in this balancing act will lead the future of financial innovation while maintaining customer trust and regulatory compliance.
