Co-Intelligence: Living and Working with AI, by Ethan Mollick. 2024
About the Book
I read this for my book club. It’s a well-written book, relatively hype-free, and very much worth reading for those who don’t know much about AI. I felt it was a bit verbose, but that might be appropriate for the more general audience he is trying to reach. I did not learn a lot from the book, but then I’ve been following the discourse on LLMs for a long time, and have also been getting Mollick’s newsletter in my in-box, so this is neither surprising nor a mark against the book.
But now I’ve gone through my various marginalia and underlining, and feel that I got more out of the book than I had realized when I wrote the previous paragraph. A few of the high-water marks for me:
- The fourth ‘rule’ – assume this is the worst AI you will ever use – is a good reminder. We are used to software improving at a glacial pace, but that may not be true of LLMs. [C3]
- I appreciated the confirmation of my belief that an AI cannot track the reasons for its responses, and that any explanation of a response is a hallucination. [C5]
- It made me think more (in [C9]) about the consequences of the erasure of digital groundtruth by generative AI, and how that will undermine public confidence in ‘facts’ as presented online. Perhaps everyone will retreat into their own filter bubbles; or perhaps there will be a turn towards traditional curated media (though the phenomenon of Fox News makes this seem unlikely).
- An interesting argument is that the speed of innovation has been dropping by 50% every 13 years, presumably because one must know more and more to make progress. Perhaps AI can provide a remedy here.
That said, I didn’t think Mollick did a great job of delving into the potential of AIs to enable people to educate themselves. There is much to be said about the pros and cons of using AI in this way (e.g., AIs creating study guides, problem sets, etc.). Little of this is discussed.
Chapter by Chapter Notes
C1: Creating Alien Minds
- A brief history of AI, particularly the way it is used in business; a slightly more in-depth history of the rise of LLMs and generative AI.
- An introduction to the Transformer architecture with its ‘attention mechanism,’ and the resulting LLMs.
- An interesting note on whether LLMs violate copyright, since an LLM does not contain text, just weight vectors.
- A mention of RLHF (Reinforcement Learning from Human Feedback), which is how LLMs are ‘taught’ to avoid certain topics.
- Some good examples of how slightly changing prompts can significantly change the response of the LLM.
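The ‘attention mechanism’ the chapter introduces is, at its core, scaled dot-product attention: each query vector is scored against all key vectors, the scores are passed through a softmax, and the resulting weights mix the value vectors. A minimal pure-Python sketch of just that step (the function names are my own, not from the book):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V are lists of d-dimensional vectors (lists of floats)."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

A real Transformer stacks many such attention heads, with learned projection matrices producing Q, K, and V from the input; this toy version only illustrates the weighted-mixing step at the heart of it.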
C2: Aligning the Alien
- Discussion of various ways in which AI can have detrimental impacts
- More discussion of RLHF and guardrails, and how guardrails have been circumvented.
C3: Four Rules for Co-Intelligence
Didn’t learn anything here, but mainly because I’ve been experimenting myself and following Mollick’s newsletter. He also had a not unreasonable piece of advice: “Become the world expert in how to use AI to do a task you know well.”
For the record, the four rules are:
- Always invite AI to the table
- Be the human in the loop
- Treat AI Like a Person (but tell it what kind of person it is)
- Assume this is the worst AI you will ever use
This is not bad advice. I particularly like the last rule.
C4: AI as a Person
A discussion of how AIs can appear to be sentient. Possibly useful for those who have never interacted with an LLM, but otherwise I don’t think the chapter did much in the way of making useful points. Not sure that there are useful points to make here.
C5: AI as a Creative
- Returns to the point that LLMs don’t store text, they only store weights. So in a sense they don’t know anything.
- Nor can they actually give a real account of why they gave a particular answer, though of course they can generate a plausible explanation. This seems like an important thing for people to understand.
- LLMs are trained on text. The training does not take quality into account; it does not even distinguish between fiction and non-fiction. All it is doing is learning weights.
- We are back on the topic of hallucination, a term I very much dislike. But it does make a good point, which hadn’t sunk in for me: “Anything that requires exact recall is likely to result in a hallucination.”
- And this is a nice quote:
It [an AI] is not conscious of its own processes. So if you ask it to explain itself, the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result. The system has no way of explaining its decisions, or even knowing what those decisions were. Instead, it is (you guessed it) merely generating text that it thinks will make you happy in response to your query. LLMs are not generally optimized to say “I don’t know” when they don’t have enough information. Instead, they will give you an answer, expressing confidence.
—Ethan Mollick, Co-Intelligence, p. 96
- A lot of talk about creativity, which did not interest me.
- Cites studies (his own? and colleagues?) that show that using AI greatly decreases the time to perform creative tasks. …I agree this is likely.
- Makes the interesting point that ‘ceremonial tasks,’ such as writing recommendation letters, are likely to be rendered meaningless, or at least greatly reduced in value, as the letter no longer necessarily represents a significant time investment.
C6: AI as a Coworker
- Distinguishes between jobs and tasks — AI may radically change the way job-related tasks are carried out, but may not necessarily eliminate the job.
- Argues that the systems within which a job takes place play a crucial role in shaping the job — but I don’t think the argument is taken very far regarding its implications.
- Discusses different types of tasks: delegated tasks, and automated tasks. And different types of workers: Centaurs (with a clear strategic separation of tasks) and cyborgs (a more blended approach).
C7: AI as a Tutor
- Talks about using AI in education. Uses example of introduction of calculators in classrooms to think about this. I suspect some readers will find this very useful.
- Talks about flipped lectures and how AI might be used in the classroom.
- A rather disappointing chapter: I think there is huge potential in individuals using AI to educate themselves, and a lot to be said about the pros and cons of using AI in this way. There is also the prospect of AIs creating study guides, problem sets, etc. None of this is discussed.
C8: AI as a Coach
- Begins with a favorite point of mine about the danger of AI eliminating the on-ramps to expertise — it can do the tasks that were formerly assigned to interns, and eliminate the possibility of apprenticeship.
- Offers the prospect of AIs as coaches that will help novices and journeymen do the difficult reflective practice that builds expertise. Interesting, but he is just making all this up, as far as I can tell.
- Cites a study of his own that claims that the quality of the middle manager explains 20% of the revenue that a video game eventually produces. This would be a very difficult study to operationalize, and I’m a bit skeptical. But hard to say since I don’t understand how video game companies work, or the role middle managers play in them.
C9: AI as our Future
This chapter offers four scenarios for the future of AI (i.e., LLMs):
- 1. As Good as it Gets.
AI will not improve significantly from here on out, either because of technical limitations (running out of text to train on) or because of regulatory intervention. He argues that this isn’t a very likely future, but it is what most people and organizations are planning for. I agree that regulation is unlikely; less sure about the certainty of significant improvement.
- Makes the point (not sure why it is relevant to this scenario) that the erasure of digital groundtruth will undermine public confidence in ‘facts’ as presented online. Perhaps everyone will retreat into their own filter bubbles; or perhaps there will be a turn towards traditional curated media (though Fox News makes this seem unlikely).
- 2. Slow Growth.
The exponential growth in AI capability will slow to 10%-20% a year. He cites various reasons for this, from the cost of training, to technical limits for large LLMs (apparently Yann LeCun, Meta’s chief AI scientist, has argued this), to my own favorite, not-enough-high-quality text. In this scenario, tasks will change more than jobs, and more jobs will be created than destroyed.
- In my view it may be that LLMs have ‘used up’ the supply of high-quality information, and trying to train LLMs on broader swaths of material will introduce ‘semantic pollution.’
- Mollick also talks about the decline in the ‘speed of innovation’ (dropping by 50% every 13 years), and the fact that most major scientific contributions are made by scientists over 40 (whereas the opposite used to be the case). You must know more to make progress, and that slows progress. He suggests that perhaps AI can help here.
- 3. Exponential Growth. In this scenario the speed of AI growth continues. AI-assisted hacking, targeted marketing, AI-assisted law enforcement (and military, which, interestingly, he does not address) proliferate, and government policies/regulations cannot keep up. But maybe AI and robotics eliminate the need for a lot of human work, and things like basic income, a shortened workweek, and so forth usher in a post-scarcity economy.
- He also comments on AIs becoming better and more interesting companions than other humans, and the possibilities of a decrease in loneliness, but a rise in new forms of social isolation.
- 4. The Machine God. AI becomes sentient. There is not a lot to say here. Could be horrible, could be wonderful.
Afterword: AI as Us
Brief essay on how AI has grown out of our knowledge, and so includes our own biases, etc. Not very interesting.
# # #