cjbarber 12 hours ago

This is written by Kevin Bryan from University of Toronto. He has good tweets on the economics of AI, too (https://x.com/Afinetheorem).

My recap of the PDF is something like:

1. There are good books about the near-term economics of AI.

2. There aren't many good books about "what if the AI researchers are right" (e.g. rapid scientific acceleration) and the economic and political impacts of those cases.

3. The Second Machine Age: Digital progress boosts the bounty and widens the spread, increasing relative inequality. Wrong on speed (e.g. self-driving tech vs regulatory change).

4. Prediction Machines: AI = cheaper prediction. Which raises the value of human judgement, because that's a complement.

5. Power and Prediction: Value comes when the whole system is improved not just from smaller fixes. Electrification's benefits arrived when factories reorganized, not just when they added electricity to existing layouts. Diffusion is slow because things need to be rebuilt.

6. The Data Economy: Data is a nonrivalrous asset. As models get stronger and cheaper, unique private data grows in relative value.

7. The Skill Code: Apprenticeship pathways may disappear. E.g. surgical robots prevent juniors from getting practice reps.

8. Co-Intelligence: Diffusion is slowed by the jagged frontier (AI is spiky). Superhuman at one thing, subhuman at another.

9. Situational Awareness: By ~2027, $1T/yr AI capex spend, big power demand, and hundreds of millions of AI researchers producing a decade of algorithmic progress in less than a year. (The author doesn't say he agrees, but says economists should analyze what happens if it plays out that way.)

10. Questions: What if the AGI-pilled AI researchers are right, what will the economic and policy implications be?

  • catigula 11 hours ago

    If AI researchers are wrong they're gonna have a lot of explaining to do.

    • rhetocj23 11 hours ago

TBH it's far more likely they are wrong than right.

Investors are incredibly overzealous not to miss out on a repeat of what happened with certain stocks during the personal computing, web 2.0, and smartphone diffusions.

      • catigula 11 hours ago

        There's a certain anthropic quality to the idea that if we lived in a doomsday timeline we'd be unlikely to be here observing it.

        • uoaei 2 hours ago

          Humanist, maybe. The anthropic argument is tautological: nothing is a doomsday without there being someone for whom the scenario spells certain doom.

    • blibble 11 hours ago

      they'll just move onto the next grift

      quantum? quantum AI? quantum AI on the blockchain?

  • tuatoru 12 hours ago

    This all sounds like it has been covered in detail by the "AI as a Normal Technology"[1][2] guys (formerly AI Snake Oil - they decided they preferred to engage rather than just be snarky).

    Invention vs innovation vs diffusion - this is all well-known stuff.

    It's a completely different episteme than the one IABIED guys have ("If Anyone Builds It, Everyone Dies").

    I don't think there can be any meaningful dialogue between the two camps.

    1. Substack: https://www.normaltech.ai/ book: https://www.normaltech.ai/p/starting-reading-the-ai-snake-oi...

2. "Normal technology" in the sense that fire, metals, agriculture, writing, and electricity are normal technologies.

    • dwohnitmok 5 hours ago

It feels kind of crazy to go from "AI is 'only' something like snake oil" to "AI is 'only' something like fire, metallurgy, agriculture, writing, or electricity" without some kind of mea culpa about what was wrong in their previous view. That's a huge leap to more or less imply "well, AI is just going to be comparable to the invention of fire. No biggie. Completely compatible with AI as snake oil."

      • uoaei 2 hours ago

        I think the point is more to posit that our civilization will come to normalize AI as a ubiquitous tool extremely quickly like the other ones mentioned, and to analyze it from that perspective. The breathless extremist takes on both sides are a bit tiresome.

dzink 6 hours ago

Electricity supply is not keeping up with AI datacenter electricity demand. In areas where solar is not as strong, prices for regular electricity subscribers are skyrocketing. With unemployment moving up and debt levels increasing all over the country, inflation caused by energy demand may be very risky.

leakycap 13 hours ago

Posted yesterday: https://news.ycombinator.com/item?id=45284985

Zero comments there, too. Likely because direct-linking a PDF to a "book" on the many-sided impacts of something that has barely started and we don't yet understand... seems less than useful.

  • cjbarber 13 hours ago

    Thanks for the note, yes, I suppose this is a format that's a bit hard to engage with. I'm not the author, but I've interacted with them and think they're very sharp!

hugh-avherald 5 hours ago

Does anyone have any clue as to why economists in particular have such awful LaTeX settings by default?

Animats 10 hours ago

"This essay reviews seven books from the past dozen years by social scientists examining AI’s economic impact."

Going back that far may not be too helpful. Three years ago, ChatGPT wasn't out. A year ago, LLM-type AI only sort of worked. Now, it's reasonably decent in some areas. If you look at AI impacts retrospectively, you probably underestimate them.

  • huitzitziltzin 5 hours ago

    If you read the review, you will see the author notes that issue pretty often and specifically discusses ways in which the oldest book he reviews (from 2014) both is and isn’t useful.

It’s worth reading.