Reliable figures are hard to come by but, as a rule of thumb, at least half of all IT security projects fail to fulfil their objectives. Why? Because, all too often, organisations try to impose a technology solution on a group of people – the end users – who have no say in how the solution is structured and who consequently withhold their cooperation. Orange Business recommends that technology be considered alongside processes and people to ensure the solution ‘lands’ properly in the organisation. In this blog, we apply this three-pillared approach to a key challenge facing security leaders seeking to operationalise GenAI projects: data leakage.
By their nature, GenAI projects carry sensitive data over an enlarged attack surface, increasing the exposure of that data to cybercriminals. Unless these risks can be managed and mitigated, it is unlikely the GenAI project will be operationalised on an enterprise-wide basis.
It would be naive to suggest that security leaders have failed to spot this connection. A survey we commissioned with GlobalData found that an overwhelming proportion – 96% – of enterprises said they needed to re-evaluate their cybersecurity strategy due to GenAI. However, Gartner recently claimed that 30% of GenAI projects will be abandoned in 2025 and gave inadequate risk controls as one of the primary reasons for this. So, while people understand the nature of the problem, many are a long way from being able to solve it.
My way or the highway?
It is tempting to believe that good security is just a question of selecting the most appropriate solution for your needs and rolling it out across your organisation. However – particularly post-COVID – the bar for user experience has been raised substantially: if a proposed security solution adds to the complexity of users’ working days rather than reducing it, they will either ignore it or raise complaints that render project implementation impractical. Nor is prohibition an effective alternative – if people want to use GenAI services, they will find a way to do so. And a large ‘shadow GenAI’ estate only compounds the problems that prohibition was intended to solve in the first place.
At Orange Business, we always promote enablement and education over prohibition. Of course, it’s entirely reasonable to block high-risk tools and behaviours, but you should provide alternatives to them. Contextualise the use of these services by providing ‘pop-ups’ that alert users to potential risks and encourage more compliant behaviour. And, in any organisation, there are always certain individuals whose opinions carry more weight than others – so engage with these opinion-formers early so they can serve as ambassadors for your project.
This is part of a change management mindset that treats People and Process as being of equal importance to Technology in any security project. There is no ‘silver bullet’ that will ensure the success of your GenAI security implementation, but by following this three-pillar methodology, you will drastically increase your chances of doing so.
To illustrate the value of this approach, we will look at one of the key security challenges created by GenAI services – and an area where best practice has yet to fully emerge: sensitive data propagation.
There’s a hole in my bucket
Significant risks arise when employees input confidential or sensitive information into GenAI tools, a practice that opens several avenues for sensitive data to leak to the outside world. Here's a breakdown of key ways this can occur:
- Unintentional Data Exposure: Employees may unknowingly share proprietary information, such as financial forecasts, acquisition plans, or unpublished research, with AI engines.
- Data Retention by AI Providers: Many GenAI tools retain user inputs to improve their models, creating a potential risk of sensitive data being stored or accessed by unauthorized parties.
- Regulatory Non-Compliance: Sharing sensitive data with external AI tools can violate data protection regulations like GDPR, CCPA, or HIPAA, leading to legal and financial consequences.
- Reputational Damage: A data breach involving confidential information can harm an organization’s reputation and erode customer trust.
The pillars of successful and secure GenAI adoption
To successfully adopt GenAI tools while mitigating these risks, organizations must start by defining the desired business outcomes across the critical dimensions of People, Process and Technology. Embracing the three-pillar approach will ensure that the security dimensions of the GenAI services are treated holistically and give your favoured solution the best chance of success.
1. Technology aspects
Technology forms the backbone of secure GenAI adoption, and robust security measures are necessary to prevent data leakage and unauthorised access. SASE-based solutions provide a comprehensive framework to address the technological challenges of GenAI adoption. Key aspects include:
- Granular Access Controls: Implement Role-Based Access Controls (RBAC) to ensure only authorized users can access GenAI tools. Limit access to sensitive data based on user roles and responsibilities.
- Conditional Access Policies: Use conditional access policies to enforce security measures based on context, such as device type, location, or user behaviour. For example, restrict access to GenAI tools from unmanaged devices.
- User Behavior Analytics (UBA): Monitor user activities to detect anomalies, such as excessive data uploads or unusual access patterns. UBA helps identify potential insider threats or compromised accounts.
- Data Loss Prevention (DLP): Deploy DLP solutions to prevent sensitive data from being shared with GenAI tools. DLP policies can block or redact confidential information before it is transmitted.
- Real-Time Ticket Creation & Handling: Integrate GenAI usage monitoring with IT service management (ITSM) tools to create real-time tickets for policy violations. This ensures swift incident response and resolution.
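To make the DLP idea above concrete, here is a minimal, illustrative sketch – not Orange Business tooling or any vendor’s API – of how an outbound GenAI prompt might be scanned for sensitive patterns and redacted before it leaves the network. The pattern set and the `redact_prompt` helper are hypothetical; a real DLP engine would use far richer detection (classifiers, exact-data matching, document fingerprinting):

```python
import re

# Hypothetical detection patterns, for illustration only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches from a prompt before transmission.

    Returns the redacted prompt plus the list of pattern names that
    fired, which could feed the real-time ticket creation step above.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

redacted, findings = redact_prompt(
    "Summarise this contract for jane.doe@example.com, card 4111 1111 1111 1111"
)
```

In a SASE deployment this kind of inspection would run in the cloud-delivered security stack rather than on the endpoint, so the same policy applies wherever the user connects from.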
2. People aspects for secure GenAI adoption
Employees are the first line of defense and the human element is critical in ensuring secure GenAI usage. Organizations must focus on their people and ensure they are aware of risks and trained on compliance. This should include:
- User Awareness Training: Educate employees about the risks of sharing sensitive data with GenAI tools. Training should include real-world examples and best practices for secure usage.
- Compliance Knowledge: Ensure employees understand regulatory requirements and the consequences of non-compliance. This includes training on data protection laws and internal policies.
3. Process aspects for secure GenAI adoption
Well-defined processes ensure consistent and secure usage of GenAI tools. This includes real-time guidance, approval workflows, and regular policy updates. Key considerations are:
- Contextual User Coaching: Provide real-time guidance to users when interacting with GenAI tools. For example, display warnings if sensitive data is detected in user inputs.
- Data Classification Methods: Implement robust data classification systems to identify and label sensitive information. This ensures that DLP policies can effectively protect critical data.
- Approval Workflow for AI Tools Usage: Establish approval workflows for accessing GenAI tools. Managerial or compliance approvals should be required before employees can use these tools for specific tasks.
- Regular Policy Review & Update Cycles: Continuously review and update policies to address emerging risks and changes in technology. Regular audits will ensure policies remain effective and relevant.
Conclusion
A recent survey found that 89% of global CEOs consider AI central to maintaining profitability – so the GenAI train has clearly left the station. However, with so many organisations failing to successfully operationalise these services, many will find that their train fails to arrive at its destination. The challenge is that – to a degree greater than that for any previous technology – GenAI demands a truly holistic, end-to-end mindset. It’s why Orange Business’s unique capabilities – as a connectivity expert, global system integrator, data specialist, cybersecurity leader and AI pioneer – make us such an attractive partner in this area.
This is also why the three-pillar approach is so relevant to those struggling to secure their GenAI services – and why we are such advocates of SASE-based solutions. SASE addresses the people, process and technology elements of GenAI security by integrating network security and connectivity into a unified framework – making it ideal for controlling sensitive data propagation in GenAI tools. Those who choose to put this theory into practice will find that they are unlikely to be among the 50% whose security projects fail – or the 30% that have their GenAI projects abandoned.
Rob is the Global SOC Lead Architect SASE at Orange Cyberdefense, focusing on Orange’s global strategy and offerings for Managed & Co-Managed SASE/SSE services. He works closely with our ecosystem partners, industry cybersecurity experts and our global SOC operations teams to differentiate our vendor-agnostic SASE offerings.