
Why Enterprises Are Moving Beyond Black-Box AI

One of the biggest risks in securing AI at scale is the black-box API approach: security cannot be enforced on systems that cannot be seen

As enterprises race to integrate artificial intelligence into their core operations, questions around security, transparency, and control have moved to the forefront. The growing interest in open-source AI reflects a broader shift in enterprise thinking away from opaque, vendor-controlled systems towards architectures that prioritise visibility, auditability, and data sovereignty. At the same time, organisations are grappling with increasingly complex hybrid and multi-cloud environments, evolving regulatory frameworks, and the long-term implications of advanced AI for cybersecurity models such as zero trust. Against this backdrop, BW Security World speaks with Abhinav Puri, VP & GM – Portfolio Solutions & Services, SUSE, to explore how open-source ecosystems, architectural flexibility, and behavioural security models are shaping the future of secure enterprise AI.

How can large enterprises leverage open-source AI to innovate quickly while ensuring strong security, transparency, and control over their data?

For large enterprises, open-source AI delivers two critical advantages. The first is visibility. Organisations can inspect and verify the models they use, ensuring there are no hidden vulnerabilities or embedded biases. The second is control. Enterprises can customise models to meet their specific requirements rather than being constrained by proprietary limitations.

When these two elements come together, innovation accelerates. Open-source ecosystems improve code quality, enhance model performance, and drive better utilisation, while also preventing vendor lock-in. Enterprises retain the freedom to choose tools and technologies without being tied to a single provider.

From a security standpoint, open-source AI offers a “glass box” approach. Unlike proprietary systems, organisations can examine the code, architecture, and, in many cases, the training methodology. This allows security teams to conduct deep audits before deploying models into production.

It is also important to recognise that not everything labelled open source truly adheres to open principles. Models that withhold training data or full weights, or that lack permissive licensing, undermine transparency. True openness is essential to building trust.

Data sovereignty further strengthens the security argument. Open-source AI allows organisations to self-host models on private infrastructure or within virtual private clouds, ensuring sensitive enterprise data never leaves their environment. Enterprises invest years in building competitive moats around their data and processes, and open-source AI enables them to innovate without exposing that intellectual capital.
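
As a rough illustration of what self-hosting looks like in practice, the sketch below assumes a permissively licensed open-weight model whose files have been mirrored to a local path and the open-source Hugging Face transformers library; the model path and prompt are illustrative, and this is not a description of any specific SUSE tooling.

```python
# Minimal sketch: running an open-weight model entirely on private infrastructure.
# Assumes the transformers library is installed and the model weights have
# already been mirrored locally; the path below is illustrative.
from transformers import pipeline

# Loading from a local directory means prompts and outputs never leave
# the organisation's own environment.
generator = pipeline(
    "text-generation",
    model="/models/open-llm",  # locally mirrored open-weight model (hypothetical path)
)

response = generator(
    "Summarise our internal incident-response policy in three bullet points.",
    max_new_tokens=128,
)
print(response[0]["generated_text"])
```

The point of the sketch is architectural rather than tooling-specific: because inference runs against local weights, sensitive data stays inside the private environment or virtual private cloud.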

How does an open-source ecosystem ensure AI models are scalable, secure, auditable, and protected from an integrity standpoint?

One of the biggest risks in securing AI at scale is the black-box API approach. Security cannot be enforced on systems that cannot be seen. An open-source ecosystem ensures transparency across every layer of the AI stack, making each component auditable.

Scalability does not require pushing more data into public clouds. Instead, it means bringing models closer to enterprise data. Hybrid architectures allow organisations to run AI workloads within private, sovereign environments, including air-gapped deployments where required.

Supply-chain security is another critical challenge. Developers often rely on unvetted containers or frameworks to accelerate pilots, which introduces significant risk at scale. Curated and vetted AI libraries help organisations standardise trusted components, allowing developers to move quickly without compromising security.
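
A minimal sketch of that kind of supply-chain gate, with hypothetical image references and digests, might check every artifact against a curated allow-list before it is admitted to production:

```python
# Illustrative supply-chain gate: only container images whose digests appear
# in a curated, vetted allow-list may be deployed. The registry name, image
# reference, and digest below are hypothetical.
import hashlib

CURATED_IMAGES = {
    # image reference -> expected SHA-256 digest of the vetted artifact
    "registry.internal/ai/inference-server:1.4":
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_vetted(image_ref: str, artifact_bytes: bytes) -> bool:
    """Permit deployment only if the image is on the curated list and its digest matches."""
    expected = CURATED_IMAGES.get(image_ref)
    if expected is None:
        return False  # unvetted image: reject by default
    return hashlib.sha256(artifact_bytes).hexdigest() == expected

# An image missing from the curated list is rejected before it reaches production.
print(is_vetted("docker.io/random/unvetted:latest", b""))  # False
```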

As AI workloads scale, compute requirements fluctuate significantly. Security must scale alongside them. Dynamic security models based on zero-trust principles ensure that as new workloads or inference pods spin up, they are immediately governed by segmentation and access policies.

Encryption is also embedded by default. Rather than treating security as a post-deployment configuration, it is integrated into the platform itself, reducing the attack surface as AI environments grow.
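
As a generic illustration of encrypting data before it ever reaches storage, the snippet below uses the widely available Python cryptography package; it is a simplified sketch of the principle, not a description of any particular platform's implementation.

```python
# Generic illustration of encryption-by-default: data is encrypted before it
# is written anywhere, rather than protection being bolted on after deployment.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, keys would come from a KMS/HSM
cipher = Fernet(key)

record = b"inference log: user prompt and model output"
stored = cipher.encrypt(record)  # only ciphertext ever reaches disk

# Decryption is an explicit, auditable step.
assert cipher.decrypt(stored) == record
```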

Zero trust is widely discussed but often interpreted differently. Is it a myth or a reality?

Zero trust is not a product or a label—it is a behaviour. Too often, the term is applied superficially to traditional security controls.

In practice, zero trust is behavioural and continuous. Security systems learn how applications normally behave—what processes they run and which services they communicate with. From this, policies are created that allow only explicitly trusted behaviour and block everything else.

If a workload suddenly attempts to communicate with an unexpected external service, it is blocked not because the destination is known to be malicious, but because it was never trusted in the first place. This behavioural enforcement is what makes zero trust real.
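
A conceptual sketch of that behavioural allow-list, with hypothetical workload names and services, could look like the following: observed "normal" behaviour becomes the policy, and everything outside it is denied by default.

```python
# Conceptual sketch of behavioural zero trust: behaviour observed during a
# learning phase becomes an explicit allow-list, and anything outside it is
# blocked, whether or not the destination is known to be malicious.
# Workload names, processes, and services are hypothetical.
LEARNED_BEHAVIOUR = {
    "payments-api": {
        "processes": {"gunicorn", "python"},
        "egress": {"postgres.internal:5432", "vault.internal:8200"},
    },
}

def allow_connection(workload: str, destination: str) -> bool:
    """Permit only connections the workload was observed to make during learning."""
    profile = LEARNED_BEHAVIOUR.get(workload)
    if profile is None:
        return False  # unknown workload: never trusted
    return destination in profile["egress"]

# An unexpected external call is denied even though nothing flags it as malicious.
print(allow_connection("payments-api", "api.example-exfil.com:443"))  # False
```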

Why do modern enterprises need architectural flexibility built on open standards when operating across hybrid and multi-cloud environments?

Hybrid and multi-cloud environments introduce significant complexity. Each platform comes with its own security models, policies, and controls, creating inconsistencies that attackers can exploit.

Open, standardised architectures allow organisations to define security policies once and enforce them consistently across environments. This unified security plane reduces configuration drift and closes gaps that often emerge between clouds and on-prem systems.
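
One way to picture the "define once, enforce everywhere" idea is a single declarative policy object applied unchanged to every environment; the sketch below is simplified, with illustrative environment names and policy fields.

```python
# Sketch of a unified security plane: one declarative policy is applied
# identically across on-prem, cloud, and edge environments, so there is
# no per-platform drift. Fields and environment names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityPolicy:
    default_deny: bool
    require_mtls: bool
    allowed_registries: tuple[str, ...]

BASELINE = SecurityPolicy(
    default_deny=True,
    require_mtls=True,
    allowed_registries=("registry.internal",),
)

ENVIRONMENTS = ["on-prem", "aws", "azure", "edge"]

# The same baseline is deployed everywhere; any divergence is a policy violation.
deployed = {env: BASELINE for env in ENVIRONMENTS}
assert all(policy == BASELINE for policy in deployed.values())
```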

Architectural flexibility also prevents security from becoming a source of vendor lock-in. Enterprises must be able to move workloads—and their security posture—between environments if commercial or regulatory conditions change. Open standards make this portability possible.

Consistency is central to effective defence. Vulnerability scanning, policy enforcement, and threat detection must deliver the same fidelity regardless of where workloads are running. Deep inspection at the application and container level ensures that security extends beyond basic network controls.

Operationally, a location-agnostic security model allows organisations to apply the same protections whether workloads are running on hyperscale infrastructure or developer machines. This enables rapid, global security updates across hybrid estates, significantly reducing exposure windows.

How are regulatory frameworks such as India’s DPDP Act influencing enterprise AI deployments?

Regulation is becoming the global default rather than the exception. Frameworks such as the DPDP Act, the EU AI Act, and emerging US regulations reflect a broader push towards sovereign, accountable AI systems.

Platforms built with regulation in mind provide organisations with full audit trails, provenance data, and strict access controls. Rather than limiting innovation, these capabilities give enterprises the confidence to deploy AI responsibly in regulated environments.

How could Artificial General Intelligence influence cybersecurity and enterprise security architecture?

In security, obscurity is a liability. Transparency builds trust and resilience.

As AI evolves towards more general intelligence, the need for open, verifiable infrastructure becomes even more important. Resilient systems are those that can be scrutinised, patched, and adapted continuously.

AGI has the potential to strengthen enterprise resilience by enabling more intelligent and adaptive security policies. However, this depends on infrastructure remaining flexible. Open-source innovation allows organisations to evolve architectures, integrate new tools, and respond to emerging threats without structural constraints.

Ultimately, trust comes from transparency, and resilience comes from adaptability—principles that will define the future of secure AI.

 
