The CEO of a leading global AI and conversational commerce company has called on Mark Zuckerberg to shut down Facebook until he can “fix it”, arguing that social media is harming communities and demonstrates what not to do in the current AI race.

“Shut it down tomorrow. You will win a Nobel Prize,” said Robert LoCascio, CEO of LivePerson, a NASDAQ-listed company with clients like Adobe, HSBC and IBM.

“You’re destroying society and ultimately you’ve got to go to bed and wake up and look at your kids, look at your family, and look at your community. Everyone knows [Facebook is harming society].”

LoCascio was speaking at a media roundtable on the ethics of artificial intelligence in Sydney yesterday where he used social media as an example of how not to develop AI, singling out Facebook and its manipulation by nefarious actors.

“Is what you’re creating, creating a better community? … I can definitely say social media does not create better community. That’s a fact. We don’t know yet with AI but I think we have to take responsibility [to make sure it does improve communities].”

Robert LoCascio, LivePerson Founder and Chief Executive Officer.

LoCascio said Zuckerberg’s obsession with expanding Facebook’s reach is “not about money, it’s about something else”.

“He [Mark Zuckerberg] is a billionaire … He could buy half of this town… Shut it down, fix it and then put it back up.

“It’s not good enough to let it run when we know it could be harming our community. You’re not making anything good. You don’t need the money.”

LoCascio’s broad point was that repeating the Silicon Valley approach to technology development (generally an “egotistical, self-centred perspective” with an unhealthy obsession with growth) in AI will lead to similarly poor, or potentially worse, outcomes.

He noted Facebook’s direct role in a genocide, election meddling and potentially mass shootings to drive home the point. But the problem extends to Facebook’s Silicon Valley neighbours too.

“I don’t know where they came from that group out there [in Silicon Valley] but they’re done. They created something, we’re all talking about it, it’s written in newspapers, everyday there’s an article about these people and we’re done [with Silicon Valley].

“You can’t run a company like that. They just need to go under.”

The LivePerson chief, who is based in New York and “can’t stomach” Silicon Valley, said while the vast majority of people making technology have good intentions, they often fail to consider the broader implications of their work and often suffer from hubris.

“There’s a perspective, as a technologist ‘we do good’. I actually would challenge us as an industry. We’re doing very bad things right now,” he said.

“I’m quite embarrassed by the things that are happening with my group [of technology makers].”


LivePerson, which counts Bupa and Bankwest among its local customers, provides AI-powered messaging platforms largely for conversational commerce. The company says it facilitates around 400 million conversations each year, with automation involved in around half of them.

It is a considerable player in the AI race but is urging the industry to slow down. LoCascio and other technology leaders have launched EqualAI, an initiative aimed at removing the potential bias of artificial intelligence and ensuring its use betters community outcomes.

He said EqualAI and LivePerson don’t have the scale of some of the Silicon Valley giants but they can still impact how AI is developed.

“I’m not Google, I’m not Facebook but we’re big enough, we have enough customers and we have enough impact that I think we can take a leadership role without threatening people.”

LoCascio said a more diverse group of technology makers and users is paramount to ensuring AI doesn’t follow social media down the path of being manipulated and weaponised. He also stressed the importance of government and stakeholders outside the industry.

“Let’s create an equal playing field with this [AI]. And that will never happen through competitive nature of businesses. Equal playing fields usually happen from some regulation. I don’t think it’s too late.”

Governments have had mixed but generally minimal involvement in AI regulation so far. In Australia, a discussion paper outlining potential ethical frameworks has been released, but the technology has already been readily available for several years.

Public cloud providers offer a range of AI and machine learning technologies delivered as a service but have distanced themselves from any responsibility for their misuse.

Local AI research

LivePerson’s own research suggests the local market wants the government to do more when it comes to AI ethics. The company commissioned research from Fifth Quadrant, which surveyed 562 IT, customer experience and digital decision-makers in Australian businesses with at least 20 employees.

Forty per cent of businesses believe the Australian government should be responsible for setting and enforcing AI ethics and principles, followed by an independent Australian body (25 per cent).

While 84 per cent of respondents either use or plan to use AI, only 40 per cent have AI standards or guidelines in place, according to the research.
