I think you’ve misunderstood the article. There isn’t anything to refute as the original source is a thought exercise. Instead, these are things to keep in mind when considering AI speculation so the reader can come to conclusions on their own.
Most of the speculation isn’t worth getting upset over, as it’s likely just that: speculation. Even the most seasoned AI veterans have a hard time making predictions, so why would someone without much knowledge be able to?
As stated in the second section, they don’t understand the friction that still exists to get software into production. Most outsiders assume there isn’t any and base their assumptions on that.
You’re correct, and it’s important to keep in mind for those things as well. The reason it’s important here is that those failures matter much more. When building software, we don’t want functionality that can happen, we want functionality that will happen.
Yes, pretty much by definition. Average is good enough for some tasks and not for others. The best way to see this in action is to use agents.
Logan, Thanks for responding. "they don’t understand the friction that still exists to get software into production" -- this is where I was sort of hoping to snag a pithy quote from you that I could use! We see the same thing and struggle to temper the optimistic exuberance that a demo is somehow production worthy. I do think it's worth driving concretely into what's still missing; where the AI doesn't help. To your point perhaps, very little of building a production system is the happy path... it's the NFRs & ops. It seems like there's a lot of potential to address the drudge associated with this part of the problem but that's not where the focus is now. Or is it?
I expected a bit more from a refutation. If I understood the article, there are a few reasons why Citrini is off:
1. Historically, predictions about AI have been off. So this is a bet based on probability?
2. Software construction is poorly understood by those who don’t practice it. What don’t they understand that refutes the argument?
3. The reporting favors success stories and doesn’t cover the presumably far more frequent failures. Isn’t that so for the reporting on anything?
4. AI is kind of average and agents are unreliable. Are they really average? In the universe of people writing code the mean is not a high bar.
Not saying I disagree with your intuition. I just don’t see the support for it.