
Book Club: Technopoly

For book club this month we read Technopoly: The Surrender of Culture to Technology, which I have to admit was hard at times to read as a technophile. Still, sometimes the best books are the ones that force you to question your assumptions (and our book club selects books by popular vote), so I dug in.

First off, I will say most of the folks in our club did not like the book: they struggled to pin down the thesis, and even when a point here or there resonated, it didn't feel actionable. I will attempt to summarize the thesis I took away: part of what makes us human is lost as we become a more processed, controlled, technology-driven culture. The tradeoff is real, and Postman illustrates many things that change with new technology (religion, family, culture, politics, medicine, etc.), but he frames every change as a loss rather than simply a difference, and that is where most of us, as a book club full of technologists, struggled. Just because new technology is invented doesn't necessarily imply that society is net better or worse. Yes, it changes how society approaches something: if you always had to visit your family to connect with them and now you can do it on the phone or via text, is that really worse? The quality of the connection may not be as strong, but staying in touch more often, even as people have had to move farther from one another, feels like a better tradeoff than simply growing apart. Infant mortality reduced through vaccination is another case where technology is clearly net beneficial. Fundamentally, change shouldn't be treated as a zero-sum game.

Still, I actually really enjoyed the book, not for the anti-technology bent, but because I resonated with one key premise: a bias towards belief without knowledge and context leads to chaos, and we have to build educational systems and norms that ensure we don't fall into that trap. This line of thought feels prescient to me given what we are seeing with ChatGPT and LLMs. Fundamentally, these tools give definitive, and sometimes very wrong, answers, and people believe them because of the form they take. In the book, Postman talks about Eliza, an AI project that responded in the forms humans expected (as a teacher, therapist, etc.), and how in a study people reacted as if Eliza were in fact a real person when in reality it was just a program. That was an experiment, but we are living it daily with our LLMs. If we don't teach the humans interpreting the output a framework for critical thinking, then we will double down on the kind of bias and echo chambers that social media helped sow.
