Rich Sutton, the guy behind both "reinforcement learning" & "the Bitter Lesson", muses that Tech needs to understand the Bitter Lesson better:
https://youtu.be/QMGy6WY2hlM
Longer analysis:
https://youtu.be/21EYKqUsPfg?t=47m28s
To (try and) summarize those in the context of TFA: builders need to distinguish between policy optimisations and program optimisations
I guess a related question to ask (important for both startups and Big Tech) might be: "should one focus on doing things that don't scale?"
The bitterest lesson is that AI is improving. It didn't actually hit a wall. The first product was too early... it failed because AI was not good enough. Back then everyone said we had hit a wall.
Now the AI is good enough. People are still saying we hit a wall. Are you guys sure?
He learned a lesson about building a product with AI that was incapable. What happens when AI is so capable it negates all these specialized products?
AI is not in a bubble. This technology will change the world. The bubble is people like this guy trying to build GUIs around AI to smooth out the rough parts, which keep getting better and better.
Not all of us buy into that extrapolation.
> He learned lesson about building a product with AI that was incapable. What happens when AI is so capable it negates all these specialized products?
I don't know, ask me again in 50 years.
Nobody buys into it. That's the problem.
But you have to realize: before AI was capable of doing something like NotebookLM, nobody bought into it. And they were wrong. They failed to extrapolate.
Now that AI CAN do NotebookLM, people hold on to the same sentiment. You guys were wrong.
Your argument is a fallacy in three immediate ways:
1. We're not all the same person, to be clear.
2. It's also not the same argument as before. It's not the same extrapolation.
3. And being right or wrong in the past has no bearing on the current one.
NotebookLM doesn't need new AI. It's tool use and context. Tool use is awesome, I've been saying that for ages.
It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"
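To make concrete what I mean by tool use, here's a minimal sketch using the OpenAI Python SDK's function-calling interface; the search_notes tool is a hypothetical stand-in for illustration, not anything NotebookLM actually exposes:

    # Minimal sketch of what "tool use" means mechanically, via the OpenAI
    # Python SDK's function-calling interface. "search_notes" is a
    # hypothetical tool, not a real NotebookLM API.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "search_notes",  # hypothetical: searches the user's uploaded docs
            "description": "Search the user's uploaded documents for a query.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize my notes on the Bitter Lesson."}],
        tools=tools,
    )

    # The model doesn't act on its own: it only *requests* a tool call,
    # and the surrounding program decides whether and how to run it.
    for call in response.choices[0].message.tool_calls or []:
        print(call.function.name, call.function.arguments)

The point being: in a NotebookLM-style product, the model only asks for tool calls; all the plumbing around it is ordinary software, which is a long way from "AI replaces humans".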
>1. We're not all the same person, to be clear.
No. But you all run under the same label. This is common, if you didn't know: a group of people with certain shared beliefs can be called Republican, Democrat, or Catholic. I didn't name the label explicitly, but you are all in that group. I thought it was obvious I wasn't talking about one person. I don't think you're so stupid as to actually believe that, so don't pretend you misinterpreted what I said.
>2. It's also not the same argument as before. It's not the same extrapolation.
Seems like the same argument to me: you thought LLMs were stochastic parrots, inherently and forever limited by their very nature (a claim made with no proof).
The extrapolation has been the same since the dawn of AI: upwards. We may hit a wall, but nobody can know that for sure.
>3. And being right or wrong in the past has no bearing on the current one.
It does. Past performance is a good predictor of current performance. It's also common sense; why else do we have resumes?
You were wrong before, chances are... you'll be wrong again.
>It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"
You just make this statement without any supporting evidence? It's just wrong because you say so?
This is my statement: the trendline points to an eventual future, and that future remains an open possibility...
versus your conclusion, which is simply "it's wrong".
If AI becomes as good as you claim, there is no need for you. Since it can replace you in every endeavor and be better at it, ANY energy given to you is logically better invested by giving it to the AI. Stop wasting our collective resources.
It can. That's the future, bro. It will replace me, you, and all of us.
You're dropping that line as if it's absurd. Be realistic. Dark conclusions are not automatically illogical. If the logic points to me being replaced, then that's just reality.
Right now we don't know if I (or you) will be replaced, but the trendlines point to it as a possibility.
I've not been impressed since GPT-3.5.
I'm surprised at this; LLMs have had many developments since GPT-3.5, both technological and cultural. What kind of development would impress you?
This is a common sentiment from my peers who have not spent any real time with the frontier models in the last six months.
They tend to poke the free ChatGPT for ill defined requests and come away disappointed.
Same experience here, using new models. Every time it's a disappointment. Useful for search queries that are not too specialized. That's it.
I get pretty good results with Claude Code, Codex, and, to a lesser extent, Jules. They can navigate a large codebase and get me started on a feature in a part of the code I'm not familiar with, and they do a pretty good job of summarizing complex modules. With very specific prompts they can write simple features well.
The nice part is I can spend an hour or so writing specs, start 3 or 4 tasks, and come back later to review the results. It's hard to be totally objective about how much time it saves me, but it generally feels worth the $200/month.
One thing I'm not impressed by is reviewing code changes; that's been mostly a waste of time, regardless of how good the prompt is.
We've been trialing CodeRabbit at work for code review. I have various nits to pick, but it feels like a good addition.
Company expectations are higher too. Many companies expect 10x output now due to AI, but the technology has been growing so quickly that a lot of people and companies haven't realized we're in the middle of a paradigm shift.
If you're not using AI for 60-70 percent of your code, you are behind. And yes, $200 per month for AI is required.
Maybe if OpenAI let me generate an image through the API? That would impress me. Instead, they took away temperature and gave us verbosity and reasoning effort to think about every time we make an API call.
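For reference, this is roughly the shape of call I'm talking about, a sketch against the Responses API as I understand it; the model name and parameter values are illustrative, not exact:

    # Sketch of the parameter shift: on reasoning models, sampling knobs like
    # temperature give way to reasoning effort and verbosity. Model name and
    # values here are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()

    response = client.responses.create(
        model="gpt-5",                    # assumed reasoning-capable model
        input="Explain the Bitter Lesson in two sentences.",
        reasoning={"effort": "low"},      # the new knob, instead of temperature
        text={"verbosity": "low"},        # controls how long the answer gets
    )

    print(response.output_text)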
Then you should be very impressed, because they let you generate videos by API: https://platform.openai.com/docs/models/sora-2
That's a low bar.
>AI is not in a bubble. This technology will change the world.
The technology can change the world, and still be a bubble.
Just because neural networks are legit doesn’t mean it’s a smart decision to build $500 billion worth of datacenters.
The internet was a bubble! A while later, it took over planet Earth. But it was also a bubble.
You're right, we should've built $5 trillion worth /s.