Taming Silicon Valley

  • Post category: Book

The similarities and contrasts between this book and AI Snake Oil are striking. For example, AI Snake Oil describes generative AI as something that largely works but is sometimes wrong, whereas this book is very concerned about how these systems have been rushed out the door in the wake of the unexpected popularity of ChatGPT, despite clear issues with hallucinations and unacceptable content generation.

Yet the books agree on many things too — the widespread use of creators’ content without permission, the weaponization of generative AI for political misinformation, the dangers of deep fakes, and the lack of any form of factual verification (or understanding of the world at all) in the statistical approaches used to generate the content. Big tech has no answer for these “negative externalities” it is enabling and would really rather we all pretend they’re not a thing. This book pushes much harder on how unregulated big tech is, and how it is repeatedly allowed to harm society in return for profit. It will be interesting to see whether any regulation with teeth is created in this space.


Taming Silicon Valley
Gary F. Marcus
Computers
MIT Press
September 17, 2024
247 pages

How Big Tech is taking advantage of us, how AI is making it worse, and how we can create a thriving, AI-positive world. On balance, will AI help humanity or harm it?


AI Snake Oil

  • Post category: Book

Nick recommended I read this book, so here it is.

The book starts by providing an analogy for how we talk about AI — imagine that all transport vehicles were grouped under one generic term instead of a variety like “car”, “bus”, “rocket”, and “boat”. Imagine the confusion in a conversation if I were talking about boats and you were talking about rockets. This is one of the issues with discussions of “AI” right now — there are several kinds of AI, but the commentary lumps them all together and conflates the various types. I think this is probably a specific example of what Ben Goldacre talks about in Bad Science — science reporting by non-scientists is often overly credulous and misses the subtleties.


AI Snake Oil
Arvind Narayanan, Sayash Kapoor
2024
348 pages

From two of TIME's 100 Most Influential People in AI, what you need to know about AI and how to defend yourself against bogus AI claims and products. Confused about AI and worried about what it means for your future and the future of the world? You're not alone.


The wonderful world of machine learning automated lego sorting

Inspired by Alastair D'Silva's cunning plans for world domination, I've been googling around for automated lego sorting systems recently. This seems like a nice tractable machine learning problem with some robotics thrown in for fun. Some cool projects if you're that way inclined:

  • Sorting 2 Metric Tons of Lego
  • A lego sorter using tensorflow

This sounds like a great way to misspend some evenings to me...
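
For the curious, here's a minimal sketch of what the image-classification half of such a sorter might look like in TensorFlow/Keras. It isn't taken from either project above; the data path, image size, and network layout are placeholder assumptions, and a real sorter would still need the camera rig and the conveyor/robotics side.

```python
# Minimal sketch: a small CNN that classifies photos of individual lego parts.
# Assumes images are organised as data/lego_parts/<part_name>/*.jpg; the path,
# image size, and architecture are placeholders, not from the linked projects.
import tensorflow as tf

IMG_SIZE = (128, 128)

# Builds a labelled dataset from the folder structure (one folder per part).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/lego_parts", image_size=IMG_SIZE, batch_size=32)

num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes),  # one logit per lego part class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

model.fit(train_ds, epochs=5)
```

From there the model's predictions would just drive whatever actuator flicks each brick into the right bin.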

