- Google advises prioritising growth over efficiency as the starting point of an effective AI strategy.
- Google also calls for secure data governance and responsible AI practices.
In recent months, interest in artificial intelligence (AI) has surged, with many business leaders eager to get started but often asking the same question: Where do we begin? This is where understanding AI’s potential for growth, efficiency, and future-proofing becomes essential.
AI tools are increasingly embedded in the apps we use daily, from social media to productivity software. “Most of the apps we use daily now feature generative AI, influencing consumer behaviour and how we interact with technology,” noted Caroline Yap, Managing Director of Global AI Business & Applied Engineering at Google, at the Google Cloud APAC Media Summit in Singapore.
This widespread adoption prompts questions about how companies can hire, retain, and reskill talent while determining which AI tools to leverage. Yap’s advice is straightforward: Start with a clear understanding of your business strategy and identify where AI can drive growth, efficiency, and innovation.
Starting with growth, not efficiencies
A recurring question from business leaders is: Why prioritise growth over efficiency? Yap believes that firms should focus on growth when adopting AI. “If you start by focusing solely on efficiency, you risk limiting innovation by simply addressing past issues,” she explained. By focusing on growth, companies can rethink how AI might shape their services and experiences to meet evolving consumer expectations.
“This approach enables companies to develop new experiences that align with changing consumer behaviours,” she said. Yap emphasised that focusing on growth first allows companies to design unique customer interactions, then implement AI solutions to make those experiences efficient.
To support this shift, Yap pointed to Google Cloud’s Vertex AI platform, which enables enterprises to securely design and deploy custom models. Tools such as Agent Builder and Model Garden help businesses address specific goals while retaining full control over their data. “We prioritise security, trust, and privacy in our AI models to allow businesses to use them with confidence,” Yap added.
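For readers wondering what getting started actually looks like, the sketch below shows a first call to a foundation model on Vertex AI using the Python SDK. It is a minimal illustration, not Google’s prescribed workflow: the project ID, region, and model name are placeholder assumptions, and the models available to an organisation depend on its own Vertex AI and Model Garden configuration.

```python
# Minimal sketch: calling a foundation model on Vertex AI with the Python SDK.
# "my-project", "us-central1", and the model name are placeholder assumptions;
# substitute a real project, region, and a model enabled in your environment.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarise this customer enquiry and suggest a next step: ..."
)
print(response.text)
```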
Rising pressures on today’s security organisations
Security is always a major concern for organisations introducing new technologies. According to Mark Johnston, Director in the Office of the Chief Information Security Officer (CISO) at Google Cloud APAC, security teams are increasingly strained as cybercrime grows in scale and complexity.
“Cyberattacks are happening at unprecedented rates, and we’re challenged to respond effectively,” he said. Adding AI introduces a new set of risks, but Johnston believes it can also be a valuable defensive tool.
However, Johnston recognised that AI integration must be handled with caution to prevent creating new risks. Companies require strong data governance to ensure that sensitive financial, personal, and proprietary information is not unintentionally disclosed. “AI must be applied responsibly,” he stated. “We provide not only products but also a methodology to help businesses assess and manage their software, data, and operational risks.”
The importance of data governance in AI
Johnston emphasised that proper data governance is essential for secure AI adoption. “Data management is critical, as it often limits our ability to understand AI-related risks,” he noted. When AI processes sensitive data like personally identifiable information (PII) and financial records, strong data governance is crucial to preventing privacy breaches and protecting intellectual property.
Organisations without robust data management risk violating privacy standards and exposing corporate data. “Handing over sensitive data to multiple models across various business functions without governance is risky,” Johnston explained.

Google’s Vertex AI platform supports data security by providing real-time data protection and data lineage visualisation, which traces data from origin to application. “Data governance maximises the value of AI while ensuring data security,” he added.
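As an illustration of what governance before the model sees the data can look like in practice, the sketch below uses Google Cloud’s Sensitive Data Protection (Cloud DLP) API to mask common PII before a prompt is sent to any model. This is an assumption-laden example rather than the specific tooling Johnston described: the project ID and the list of info types are placeholders chosen for the sketch.

```python
# Minimal sketch: masking PII with Cloud DLP before text reaches an AI model.
# The project ID and the info types below are assumptions for illustration.
from google.cloud import dlp_v2


def redact_pii(project_id: str, text: str) -> str:
    client = dlp_v2.DlpServiceClient()
    parent = f"projects/{project_id}/locations/global"

    # Replace detected values with their info-type name, e.g. [EMAIL_ADDRESS].
    response = client.deidentify_content(
        request={
            "parent": parent,
            "inspect_config": {
                "info_types": [
                    {"name": "PERSON_NAME"},
                    {"name": "EMAIL_ADDRESS"},
                    {"name": "CREDIT_CARD_NUMBER"},
                ]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value


# The masked output, not the raw record, is what would be sent to a model.
print(redact_pii("my-project", "Contact Jane Doe at jane@example.com"))
```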
Protecting AI models and intellectual property
For startups and companies developing AI models, securing intellectual property is vital. “For many AI startups, it’s not just about the models—they’re existential to the business,” Johnston pointed out. Google’s security team has discovered vulnerabilities in large language models (LLMs), including the company’s own Gemini model, highlighting the potential for remote access and data theft. Recognising the broader implications, Google shared these findings with the industry to improve model security.
“Model theft is a significant concern, as it threatens the integrity and investment in AI,” Johnston stressed. To address this, Google has developed tools to prevent unauthorised access to model weights and data. For example, Google’s AI Notebook Scanner detects vulnerabilities within Jupyter notebooks, ensuring developers work in secure environments.
The Google Secure AI Framework (SAIF)
Google introduced the Secure AI Framework (SAIF) to further support secure AI integration, offering guidelines on safe data usage, risk management, and responsible AI practices. Johnston emphasised that SAIF gives organisations a structure for expanding their AI capabilities without compromising security.
Together, Google’s SAIF and Vertex AI are designed to let organisations utilise AI’s revolutionary potential while adhering to high security standards, offering a secure and sustainable path forward for AI-driven innovation.
Navigating AI compliance and industry standards with Google
In addition to SAIF, Google recently joined efforts to establish the Coalition for Secure Artificial Intelligence (CoSAI) and contributed to ISO/IEC 42001, an international standard for AI management systems. These initiatives aim to create a common framework for safe AI applications across industries.
“Our goal is to make AI security consumable, understandable, and practical for any organisation looking to integrate AI into their operations,” Johnston emphasised.