
AI Snake Oil

What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference

Audiobook
0 of 2 copies available
Wait time: At least 6 months
This audiobook, narrated by Landon Woodson, reveals what you need to know about AI—and how to defend yourself against bogus AI claims and products. It comes with a bonus track featuring an illuminating discussion by Arvind Narayanan and Sayash Kapoor.

Confused about AI and worried about what it means for your future and the future of the world? You're not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works, why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil—products that don't work, and probably never will.

While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it's being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice. The book explains the crucial differences between types of AI, why organizations are falling for AI snake oil, why AI can't fix social media, why AI isn't an existential risk, and why we should be far more worried about what people will do with AI than about anything AI will do on its own. The book also warns of the dangers of a world where AI continues to be controlled by largely unaccountable big tech companies. By revealing AI's limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and home.
  • Reviews

    • Kirkus

      Starred review from July 1, 2024
      Two academics in the burgeoning field of AI survey the landscape and present an accessible state-of-the-union report. Like it or not, AI is widespread. The present challenge involves strategies to use it properly, comprehend its limitations, and ask the right questions of the entrepreneurs promoting it as a cure for every social ill. The experienced authors bring a wealth of knowledge to their subject: Narayanan is a professor of computer science at Princeton and director of its Center for Information Technology Policy, and Kapoor is a doctoral candidate with hands-on experience of AI. They walk through the background of AI development and explain the difference between generative and predictive AI. They see great advantages in generative AI, which can provide, collate, and communicate massive amounts of information. Developers and regulators must take strict precautions in areas such as academic cheating, but overall, the advantages outweigh the problems. Predictive AI, however, is another matter. It seeks to apply generalized information to specific cases, and there are plenty of horror stories about people being denied benefits, having reputations ruined, or losing jobs due to the opaque decision of an AI system. The authors argue convincingly that when individuals are affected, there should always be human oversight, even if it means additional costs. In addition, the authors show how the claims of AI developers are often overoptimistic (to say the least), and it pays to look at their records as well as have a plan for regular review. Written in language that even nontechnical readers can understand, the text provides plenty of practical suggestions that can benefit creators and users alike. It's also worth noting that Narayanan and Kapoor write a regular newsletter to update their points. Highly useful advice for those who work with or are affected by AI--i.e., nearly everyone.

      Copyright (2024) Kirkus Reviews. All rights reserved.

    • Publishers Weekly

      July 8, 2024
      Narayanan (coauthor of Bitcoin and Cryptocurrency Technologies), a computer science professor at Princeton University, and Kapoor, a PhD candidate in Princeton’s computer science program, present a capable examination of AI’s limitations. Because ChatGPT and other generative AI software imitate text patterns rather than memorize facts, it’s impossible to prevent them from spouting inaccurate information, the authors contend. They suggest that this shortcoming undercuts any hoped-for efficiency gains and describe how news website CNET’s deployment of the technology in 2022 backfired after errors were discovered in many of the pieces it wrote. Predictive AI programs are riddled with design flaws, the authors argue, recounting how software tasked with determining “the risk of releasing a defendant before trial” was trained on a national dataset and then used in Cook County, Ill., where it failed to adjust for the county’s lower crime rate and recommended thousands of defendants be jailed when they actually posed no threat. Narayanan and Kapoor offer a solid overview of AI’s defects, though the anecdotes about racial biases in facial recognition software and the abysmal working conditions of data annotators largely reiterate the same critiques found in other AI cris de coeur. This may not break new ground, but it gets the job done.

Formats

  • OverDrive Listen audiobook

Languages

  • English
