August 12, 2021

Written by Nicholas Gordon

Posted in: News, Featured

Employees may need to keep up ‘the pretense of working’ as automation spreads, says A.I. expert Kai-Fu Lee

As businesses begin to automate low-level service work, companies may start creating fake tasks to test employee suitability for senior positions, says Kai-Fu Lee, the CEO of Sinovation Ventures and former president of Google China.

“We may need to have a world in which people have ‘the pretense of working,’ but actually they’re being evaluated for upward mobility,” Lee said at a virtual event hosted by Collective[i], a company that applies A.I. to sales and CRM systems.

Work at higher levels of a company, which requires deeper and more creative thinking, is harder to automate and must be completed by humans. But if entry-level work is fully automated, companies don’t have a reason to hire and groom young talent. So, Lee says, companies will need to find a new way to hire entry-level employees and build a path for promotion.

It was one of several predictions Lee made about the possible social effects of widespread adoption of A.I. systems. Some were drawn from his upcoming book, AI 2041: Ten Visions for Our Future—a collection of 10 short stories, written in partnership with science fiction author Chen Qiufan, that illustrate ways that A.I. might change individuals and organizations. “Almost a book version of Black Mirror in a more constructive format,” joked Lee, a well-known expert in the field of A.I. and machine learning and author of the 2018 book AI Superpowers: China, Silicon Valley, and the New World Order.

Talk of A.I. and its role in social behavior often centers on the tendency of algorithms to reflect and exacerbate existing social biases. For example, a contest by Twitter to root out bias in its algorithms found that its image-cropping model prioritized thinner white women over people of other demographics. Data-driven models risk reinforcing social inequality, especially as more individuals, companies, and governments rely on them to make consequential decisions. As Lee noted, when a “company has too much power and data, [even if] it’s optimizing an objective function that’s ostensibly with the user interest [in mind], it could still do things that could be very bad for the society.”

Despite the potential for A.I. to do harm, Lee has faith in developers and A.I. technicians to self-regulate. He supported the development of metrics to help companies judge the performance of their A.I. systems, in a manner similar to the measurements used to determine a firm’s performance against environmental, social, and corporate governance (ESG) indicators. “You just need to provide solid ways for these types of A.I. ethics to become regularly measured things and become actionable.”

Yet he noted that more work needs to be done to train programmers, including the creation of tools to help “detect potential issues with bias.” More broadly, he suggested that A.I. engineers adopt something “similar to the Hippocratic oath in medical training,” referring to the set of professional ethics that doctors adhere to during their dealings with patients, most commonly summarized as “Do no harm.”

“People working on A.I. need to realize the massive responsibilities they have on people’s lives when they program,” Lee said. “It’s not just a matter of making more money for the Internet company that they work for.”

