Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, speaking during the company’s annual Dreamforce conference in San Francisco, California, on Sept. 17, 2024.
David Paul Morris | Bloomberg | Getty Images
LONDON – The UK chief executive of Salesforce wants the Labour government to regulate artificial intelligence, but says it’s important that policymakers don’t tar all technology companies developing AI systems with the same brush.
Speaking to CNBC in London, Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, said the American enterprise software giant takes all legislation “seriously.” However, she added that any British proposals aimed at regulating AI should be “proportional and tailored.”
Bahrololoumi noted that there’s a difference between companies developing consumer-facing AI tools, like OpenAI, and firms like Salesforce that make enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.
“What we look for is targeted, proportional, and tailored legislation,” Bahrololoumi told CNBC on Wednesday.
“There’s definitely a difference between those organizations that are operating with consumer facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, (but) we’re a B2B organization,” she said.
A spokesperson for the UK’s Department for Science, Innovation and Technology (DSIT) said that planned AI rules would be “highly targeted to the handful of companies developing the most powerful AI models,” rather than applying “blanket rules on the use of AI.”
That suggests the rules might not apply to companies like Salesforce, which don’t make their own foundation models the way OpenAI does.
“We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy,” the DSIT spokesperson added.
Data security
Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents,” essentially autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.
For example, one feature called “zero retention” means no customer data can ever be stored outside of Salesforce. As a result, generative AI prompts and outputs aren’t stored in Salesforce’s large language models, the programs that form the bedrock of today’s genAI chatbots, like ChatGPT.
With consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta’s AI assistant, it is unclear what data is being used to train them or where that data gets stored, according to Bahrololoumi.
“To train these models you need so much data,” she told CNBC. “And so, with something like ChatGPT and these consumer models, you don’t know what it’s using.”
Even Microsoft’s Copilot, which is marketed to enterprise customers, comes with heightened risks, Bahrololoumi said, citing a Gartner report that called out the tech giant’s AI personal assistant over the security risks it poses to organizations.
OpenAI and Microsoft were not immediately available for comment when contacted by CNBC.
AI concerns “apply at all levels”
Bola Rotibi, chief of enterprise research at analyst firm CCS Insight, told CNBC that, while enterprise-focused AI suppliers are “more cognizant of enterprise-level requirements” around security and data privacy, it would be wrong to assume regulations wouldn’t scrutinize both consumer and business-facing firms.
“All the concerns around things like consent, privacy, transparency, data sovereignty apply at all levels no matter if it is consumer or enterprise as such details are governed by regulations such as GDPR,” Rotibi told CNBC via email. GDPR, or the General Data Protection Regulation, became law in the UK in 2018.
However, Rotibi said that regulators may feel “more confident” in AI compliance measures adopted by enterprise application providers like Salesforce, “because they understand what it means to deliver enterprise-level solutions and management support.”
“A more nuanced review process is likely for the AI services from widely deployed enterprise solution providers like Salesforce,” she added.
Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new “agentic” AI technology by partners and customers.
Her remarks come after UK Prime Minister Keir Starmer’s Labour refrained from introducing an AI bill in the King’s Speech, which is written by the government to outline its priorities for the coming months. The government said at the time that it plans to establish “appropriate legislation” for AI, without offering further details.