Replies to Ancient Tweets
Occasionally someone tweets about Cold Takes such that I want to respond. But I can't respond over Twitter because I don't tweet. So here are my thoughts on some tweets people have made. The vibe I’m going for here is kind of like if someone delivered a sick burn in person, then received a FAX 6 years later saying “Fair point - but have you considered that it takes one to know one?” or something.
@MatthewJBar on an email I sent to Tyler Cowen on transformative AI (3/9/22)
In response to my email to Tyler Cowen, Matthew Barnett tweeted a number of disagreements. I think this is overall a thoughtful, intelligent response that deserves a reply, though I ultimately stand by what I said on nearly all points. I think most of the disagreements come down to confusions about what I meant in that abbreviated context, which isn't terribly surprising - I word things much more carefully when I have more space, but that was a deliberately shortened and simplified presentation.
Details:
Good response. I broadly agree, but Holden says "You can run the bio anchors analysis in a lot of different ways, but they all point to transformative AI this century; As do the expert surveys, as does Metaculus;"
— Matthew Barnett (@MatthewJBar) March 9, 2022
This is misleading. 🧵 https://t.co/T8eO7dzAYG
(1) The Bio Anchors analysis puts high probability on transformative AI this century, but this is because it assumes fast hardware and algorithmic progress will continue for many decades, which is questionable, as I argue here: https://t.co/ADPPpvT6gc
— Matthew Barnett (@MatthewJBar) March 9, 2022
My comment was intended to highlight that the Bio Anchors report took many approaches to modeling, rather than to claim there's no conceivable or plausible way to use its framework to reach a long-timelines conclusion; however, my wording didn't make that clear, and that's my bad. I do stand by the former message.
Barnett's critique of Bio Anchors points out that the report assumes a 2.5-year doubling time for hardware efficiency, and does not incorporate variation around this assumption into its uncertainty analysis. However:
- Barnett's critique doesn't propose an alternative trajectory of hardware progress he thinks is more likely, or spell out what that would mean for the overall forecasts, besides saying that the doubling time has been closer to 3.5 years recently.
- The Bio Anchors report includes a conservative analysis that assumes a 3.5-year doubling time, along with (I think more importantly) a cap on overall hardware efficiency that is only 4 orders of magnitude higher than today's, as well as a number of other assumptions more conservative than the main Bio Anchors report's. All of this still produces a "weighted average" best guess of a 50% probability of transformative AI by 2100, with only one of the "anchors" (the "evolution anchor," which I see as a particularly conservative soft upper bound) estimating a lower probability.
- This highlights that it isn't enough to simply assume a slower doubling time in order to expect transformative AI development to stretch past 2100 (the rough arithmetic sketch below gestures at why). You need to put in a lot of (IMO) overly conservative assumptions at once.
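A rough back-of-the-envelope sketch (not a calculation from the Bio Anchors report, and assuming an illustrative 2020 start year): at a fixed doubling time, how long does it take compounding hardware efficiency gains to hit a cap 4 orders of magnitude above today's level?

```python
import math

def years_to_cap(doubling_time_years: float, orders_of_magnitude: float = 4.0) -> float:
    """Years for hardware efficiency to improve by the given factor,
    assuming it compounds at a fixed doubling time."""
    doublings_needed = orders_of_magnitude * math.log2(10)  # ~13.3 doublings for a 10,000x gain
    return doublings_needed * doubling_time_years

START_YEAR = 2020  # illustrative start year, not a figure from the report

for doubling_time in (2.5, 3.5):
    years = years_to_cap(doubling_time)
    print(f"{doubling_time}-year doubling time: cap reached after ~{years:.0f} years "
          f"(around {START_YEAR + years:.0f})")

# Output:
# 2.5-year doubling time: cap reached after ~33 years (around 2053)
# 3.5-year doubling time: cap reached after ~47 years (around 2067)
```

On this sketch, moving from a 2.5-year to a 3.5-year doubling time delays hitting the cap by only about 13 years, and either way the cap binds well before 2100; that's part of why I see the cap and the other conservative assumptions, rather than the doubling time itself, as doing most of the work.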
I do think that in full context, the "conservative" assumptions about compute gains are in fact too conservative. This is simply an opinion, and I hope to gain more clarity over time as more effort is put into this question, but I'll give one part of the intuition: I think that conditional on hardware efficiency improvements coming in on the low side, there will be more effort put into increasing efficiency via software and/or via hybrid approaches (e.g., specialized hardware for the specific tasks at hand; optimizing researcher-time and AI development for finding more efficient ways to use compute). So reacting to Bio Anchors by saying "I think the hardware projections are too aggressive; I'm going to tweak them and leave everything else in place" doesn't seem like the right approach.
Overall, I think there are plenty of open questions and room for debate regarding Bio Anchors, but I think a holistic assessment of the situation supports a broad, qualitative claim along the lines of "It's pretty hard to see how the most reasonable overall usage of this framework would leave us with a bottom-line median expectation of transformative AI being developed after 2100."
(2) Metaculus has no consensus position on transformative AI, and it's unclear which question he's referring to. My own question is here (https://t.co/HaoWqFCyAy), and currently the median community timeline is >2100.
— Matthew Barnett (@MatthewJBar) March 9, 2022
I was referring to https://www.metaculus.com/questions/5121/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of-stronger-operationalization/ and https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/, which seem more germane than the link Barnett gives in the tweet above. (There are many ways transformative AI might not be reflected in economic growth figures, e.g., if economic growth figures don't include digital economies; if misaligned AI derails civilization; or if growth is deliberately held back, perhaps with AI help, in order to buy more time for improving things like AI alignment.) I also note that the forecast on Barnett's question has been particularly volatile; it has been below 2100 a number of times, including (barely) as I write this.
(3) There are few expert surveys that ask about transformative AI. Probably the only one is Grace et al. in 2016 (https://t.co/fQS80dCVoz). But responses varied depending on the question framing, with a median timeline of 2138 for full automation of labor.
— Matthew Barnett (@MatthewJBar) March 9, 2022
The 2138 response was for a subset of respondents; I am referring to the mainline forecast (more here).
Holden states, "Other angles of analysis (including the very-outside-view semi-informative priors) are basically about rebutting the idea that there’s a giant burden of proof here."
— Matthew Barnett (@MatthewJBar) March 9, 2022
But this is also misleading...
The linked report says, "pr(AGI by 2100) ranges from 5% to 35%, with my central estimate around 20%." That's not a "giant" burden, agreed. But it's still consistent with transformative AI after 2100. https://t.co/whz5JfC1oB
— Matthew Barnett (@MatthewJBar) March 9, 2022
I'm not sure how my statement is misleading, if we agree that the burden of proof isn't "giant."
Holden adds, "Specific arguments for “later than 2100,” including outside-view arguments, seem reasonably close to nonexistent"
— Matthew Barnett (@MatthewJBar) March 9, 2022
I disagree. I gave a few arguments here: https://t.co/FOpKqpo3f9
I'm going to stand by my statement here - these look to be simply ceteris paribus reasons that AI development might take longer than otherwise. I'm not seeing a model or forecast integrating these with other considerations and concluding that our median expectation should be after 2100. (To be clear, I might still stand by my statement if such a model or forecast is added - my statement was meant as an abbreviated argument, and in that sort of context I think it's reasonable to say "reasonably close to nonexistent" when I mean something like "There aren't arguments of this form that have gotten a lot of attention/discussion/stress-testing and seem reasonably strong to me or, I claim, a reasonable disinterested evaluator.")
Holden says, "Robin is also forecasting transformative AI of a sort... this century"
— Matthew Barnett (@MatthewJBar) March 9, 2022
As far as I can tell, no he isn't. @robinhanson's most recent public statements have indicated that he thinks AI is over a century away. For example, see here: https://t.co/C88otbUYfy
I think the confusion here is whether ems count as transformative AI.
- In the link Matthew gives above, Robin states: "Now of course, I completely have this whole other book, Age of Em, which is about a different kind of scenario that I think doesn’t get much attention, and I think it should get more attention relative to a range of options that people talk about. Again, the AI risk scenario so overwhelmingly sucks up that small fraction of the world. So a lot of this of course depends on your base. If you’re talking about the percentage of people in the world working on these future things, it’s large of course."
- In the context of that conversation, Robin is contrasting "AI" with "ems." But in a broader context, I think it is appropriate to think of the Age of Em scenario as a transformative AI scenario: it's one in which digital minds cause an economic growth explosion.
- (This is why I said "of a sort" in my abbreviated summary.)
@ezraklein on Rowing, Steering, Anchoring, Equity, Mutiny (11/30/21)
Ezra Klein on Rowing, Steering, Anchoring, Equity, Mutiny:
I liked this Rowing/Steering/Anchoring/Equity/Mutiny schema from Holden Karnofsky: https://t.co/WQe3wctuK9 pic.twitter.com/ZSreoovC9T
— Ezra Klein (@ezraklein) November 30, 2021
Some thoughts:
— Ezra Klein (@ezraklein) November 30, 2021
I'd like to see a lot more Steering in public debate, by which I mean more detailing of proposed far futures.
There's an opportunity for philanthropists and grant makers here. Journalism and academia undersupply Steering relative to its importance.
I think there's an obvious one Holden misses: Maintenance.
— Ezra Klein (@ezraklein) November 30, 2021
Sometimes the boat is going in a reasonable direction, or would be, but it's breaking down, or the crew is so poorly organized it can't make decisions, or even row effectively.
— Ezra Klein (@ezraklein) November 30, 2021
Those of us who focus on, say state capacity or democracy would fit here.
I'll take a moment here to plug the @NiskanenCenter's interesting new State Capacity project: https://t.co/CG4fZcplDg
— Ezra Klein (@ezraklein) November 30, 2021
My other critique is Holden frames these as separate and even competing orientations, and confusion between them as a generator of unnecessary disagreement.
— Ezra Klein (@ezraklein) November 30, 2021
That's surely sometimes right. But ideally, it's wrong.
I'd suggest people rarely truly just favor one or the other approach.
— Ezra Klein (@ezraklein) November 30, 2021
Better to think of these as an integrated framework where you should understand the answer to each question for your worldview, and the worldviews you're interested in or arguing with.
So you'd start by answering where you're trying to Steer. Do you even know?
— Ezra Klein (@ezraklein) November 30, 2021
Then: Does Rowing or Mutiny get you closer?
What Maintenance is needed to get there?
What does Equity look like between here and there?
What should you try and Anchor?
I basically think I just agree with all of this. My post didn’t present them as mutually exclusive, just as sources of confusion (see this table where I categorize different non-exclusive combinations). “Maintenance” is a good one.
@Evolving_Moloch on Pre-agriculture gender relations seem bad (11/29/21)
William Buckner on Pre-agriculture gender relations seem bad:
Notice how the only time Matt and many others bother to discuss hunter-gatherer societies at all it’s just to affirm some broad generalization and culture war. They don’t care about the topic, they’re just memeing for attention. https://t.co/xlmKPw8YES
— Will (@Evolving_Moloch) November 29, 2021
Also you should know that, while I do like the paper that post is based on, coding decisions like these are highly subjective and debatable. So re: 'possibility of female leaders' code, women lead women's spaces and initiations across HG societies (see https://t.co/z3YjZiuF5n) pic.twitter.com/WNo6Wi55PD
— Will (@Evolving_Moloch) November 29, 2021
Among Andaman Islanders specialist who scarifies *both sexes* at initiation is a women, & she chooses the design--would you consider this a leadership role? Paper above doesn't seem to. Think I would, but debatable! Which gets to issue: you have to read, can't just accept codes! pic.twitter.com/GUHHNTKrIz
— Will (@Evolving_Moloch) November 29, 2021
I've been working on project similar to this, coding male/female bias in various categories across HG societies for like 2 years now, & I've found it extremely challenging making those sort of coding decisions. There's great deal of unavoidable subjective decision making involved
— Will (@Evolving_Moloch) November 29, 2021
I feel spiritually very on board with a comment like “You have to read, can’t just accept codes!” I ideally would’ve read all of the details, and I hope to come back and do this someday. Why didn’t I?
- I think I did check out a couple, but it was really logistically difficult for a number of them, as they were sourced from expensive out-of-print books and things like that. (I had a similar issue with the data on early violence cited by Better Angels of Our Nature, and I’ve been slowly finding used versions of the books, compiling a little library, and planning to eventually dig through everything and see whether the whole picture comes crashing down. I might not finish that for a long time, but stay tuned!)
- I ultimately decided not to explore every angle I could, which could’ve taken weeks. Instead I figured: “I’ve dug further into this than any other concise presentation I’ve seen, and certainly further than the existing highly-cited sources (e.g., Wikipedia) seem to, so why not put out what I have, and if it spreads, maybe someone else will point out what I missed?” In a sense, then, this worked OK, and I basically endorse “Dig deeper than others have” as a better rule than “Do a full minimal-trust investigation every time.”
- Another factor was that I expect my conclusion to hold up based on a number of other subtler data points, such as the fact that Wikipedia’s “support” for good gender relations actually contained quite a bit of info that seemed to suggest bad gender relations, and the fact that the best source I’d found on hunter-gatherers overall seemed pretty down on the situation. (More at the full piece.)
- And indeed, if the above points are the biggest corrections out there, that really doesn’t seem to change the picture much. I do not think “women lead women’s spaces” is good enough! And yeah, I wouldn’t instinctively classify the example he gives as a “leader” in the sense of having political power over the society as a whole, though I’d guess there are a lot more details that matter in that case.