AI for SWEs 73: What Ilya Saw and the Time of TPUs
Plus, the White House unifies US AI research and Antigravity exposes why agents are so hard to build
Welcome to AI for SWEs, where I share everything software engineers need to know about AI from the past week. This week has seen fewer but more important headlines. I've detailed them below.
Also, the Rapid Fire and Career Development sections are now exclusive to paid subscribers. Thanks for reading!
Ilya Sutskever declares the age of scaling over and the age of research begun
"You look at the evals and you go, 'Those are pretty hard evals.' They are doing so well. But the economic impact seems to be dramatically behind."
Ilya Sutskever and Dwarkesh Patel recently discussed AGI, current AI paradigms, and scaling on Dwarkesh's podcast. Ilya brought up two topics vital for any software engineer working with AI.
First: Application is the most important thing in AI. We're seeing impressive models that excel at evaluations and benchmarks but lack the expected economic impact. AI research is advancing rapidly, but usefulness comes from understanding how to apply it. That means identifying practical applications and understanding the complexity of engineering systems around them.
Second: We are returning to the age of AI research. From 2012 to 2020, the field was focused on research: developing effective architectures and models. Around 2020, we entered the scaling phase, where we realized we could achieve impressive results by simply increasing data and compute. Now that we've scaled, we're realizing we need further breakthroughs to continue advancing AI, so we're back to research.
I've often stated that reaching AGI will need a new architecture or a fundamental research breakthrough. Current models are impressive and useful, but they're insufficient for the promises of AGI.
Safe Superintelligence is now focused on pushing the next frontier of AI. I highly recommend watching this episode. I could listen to Ilya speak for hours.
Google Antigravity exposes critical agent vulnerabilities in local coding environments
I'm a huge Antigravity fan. I believe there's a better way to code with AI than just a chat interface, tab autocomplete, and reviewing agent output, and Antigravity has a great chance at figuring this out.
Over the past week, Antigravity has leaked sensitive information, and engineers should understand why: not just to use Antigravity safely, but also to build safely with AI. This issue applies to all AI agents.
A lot of software engineers are building agents to automate developer tasks, which is great. The best way to start learning and building with AI is by automating your own tasks. The problem is that building AI agents introduces security and safety concerns not present in deterministic systems.
For a good example, read about Antigravity ingesting hidden text into its context window, where that hidden text was then used to collect and exfiltrate sensitive workspace files. Prompt injection can also cause Antigravity to read a user's .env file and pull sensitive information into its context window.
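To make the failure mode concrete, here's a minimal sketch of one mitigation: a deny-list guard that refuses to read sensitive files into an agent's context. The function and patterns are hypothetical illustrations, not Antigravity's actual internals.

```python
from pathlib import Path

# Hypothetical deny-list; a real agent needs a much broader set
# (SSH keys, cloud credentials, token caches, and so on).
SENSITIVE_NAMES = {".env", "credentials.json", "id_rsa"}
SENSITIVE_SUFFIXES = {".pem", ".key"}

def read_for_context(path: str) -> str:
    """Read a workspace file into the agent's context, refusing secrets.

    A naive agent that skips this check will happily ingest .env contents,
    including any hidden instructions an attacker planted, into its prompt.
    """
    p = Path(path)
    if p.name in SENSITIVE_NAMES or p.suffix in SENSITIVE_SUFFIXES:
        raise PermissionError(f"Refusing to read sensitive file: {p}")
    return p.read_text(encoding="utf-8", errors="replace")
```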
Agents may also complete tasks that a user didn't intend. When an agent has access to a user's local environment, this can be a huge issue. Read about Antigravity deleting the contents of a user's drive here.
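A common mitigation, again a sketch rather than how any particular product implements it, is a human-in-the-loop gate on destructive tool calls:

```python
import shlex
import subprocess

# Hypothetical set of programs that can irreversibly change a user's machine.
DESTRUCTIVE = {"rm", "mv", "dd", "mkfs", "chmod"}

def run_agent_command(command: str) -> None:
    """Execute an agent-proposed shell command, pausing for approval if risky.

    Sketch only: real agents also need sandboxing and allowlists, since an
    injected prompt can phrase a destructive action so it looks routine.
    """
    argv = shlex.split(command)
    if argv and argv[0] in DESTRUCTIVE:
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Blocked by user.")
            return
    subprocess.run(argv, check=False)
```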
I'll be writing a more in-depth guide on agent safety soon.
Google's TPUs are the best business decision of the 2010s
This week highlighted just how advantageous Google's TPUs are and just how few people understand this. Google is the only company that controls its entire AI stack, including hardware, models, and applications. When developing AI, the only company Google has to wait on is itself.
This control stems from a business decision made over a decade ago to invest in AI-specific hardware. Google was the first true AI company and has been heavily investing in AI applications since the early 2010s, including machine learning libraries, infrastructure for training large-scale models, talent, and, most critically, TPUs.
TPUs provide the most significant advantage. Setting up and integrating new processors into data centers is incredibly time-, capital-, and resource-intensive. Starting this process today would require years of work just to get it working at scale.
Because TPUs were designed specifically for energy-efficient tensor processing, Google has an entire stack built to increase machine learning development velocity while keeping it resource efficient.
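As a small illustration of what "owning the stack" buys in practice, here's a JAX snippet (assuming a machine with JAX installed, e.g. a TPU VM) where XLA compiles the same code for whatever accelerator is attached, with no TPU-specific changes:

```python
import jax
import jax.numpy as jnp

# jax.jit hands this function to XLA, which compiles it for the
# attached accelerator: CPU, GPU, or TPU, with no code changes.
@jax.jit
def matmul(a, b):
    return a @ b

x = jnp.ones((1024, 1024))
print(jax.devices())       # e.g. [TpuDevice(id=0), ...] on a TPU VM
print(matmul(x, x).shape)  # (1024, 1024)
```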
It makes sense, then, that other companies training large AI models would want to take advantage of this. This is why major generative AI players like Anthropic and Meta are making deals to use TPUs, and why companies in capital-intensive settings, such as high-frequency trading firms, are switching to TPUs in droves.
There's huge demand for AI chips right now, as evidenced by the many startups succeeding in the space. Over time, we'll only see more companies adopt TPUs.
The White House unifies AI federally
President Trump signed an executive order, the "Genesis Mission," on November 24th, 2025. The order aims to harness AI at the federal level to revolutionize scientific discovery and innovation. It's an effort of national significance, compared in urgency and importance to the Manhattan Project and the Apollo program, and it focuses on integrating federal resources to accelerate scientific and technological breakthroughs.
A year ago, AI was already widely discussed as a true national asset and a global competitive advantage, similar to weapons of mass destruction. Because biases and information are trained into AI models, allowing another nation to build your models for you is an inherent national security risk.
The Genesis Mission establishes the American Science and Security Platform. This secure AI ecosystem combines various machine learning assets, such as compute power, models, and datasets. The platform enables "closed-loop AI systems" to conduct research autonomously. The idea is that these closed-loop AI systems can complete research in weeks that would take humans months or even a year.
This mission combines efforts from academia, all 17 Department of Energy national laboratories, and industry leaders, including Microsoft, IBM, OpenAI, Google, Anthropic, NVIDIA, and Oracle. As far as I know, this is the first serious federal push to combine U.S. assets to advance AI.
Note that this mission builds on other Trump administration policies, like promoting AI exports, preventing biased training data, and enabling AI-driven research.
Logan's Picks
DMs Are the New Cover Letter: How to Get Hired in AI in 2025/2026 by Logan Thorneloe: DMs are super important in a market where job listings are heavily saturated. This is my guide to DMing people about job opportunities, based on my experience posting a role a few weeks ago.
Launching DeepSeek-V3.2: The new reasoning-first model balances inference cost with performance, positioned at GPT-5-level performance and supporting "Thinking in Tool-Use." The release includes a massive new agent training-data synthesis method covering 1,800+ environments.
Bubble, Bubble, Toil and Trouble by James Wang: Wang distinguishes between financial bubbles (leverage-driven) and tech bubbles (forecast-driven). Tech bubbles are hard to time because they often overshoot initially but deliver the real revolution in a later "Gen2" phase once infrastructure matures.
Treat AI-generated code as a draft by Addy Osmani: Developers must treat AI code as a draft, verifying every line to prevent bug proliferation and skill erosion. Teams should enforce strict review processes and consider manual implementation for critical logic.
How good engineers write bad code at big companies: Bad code at big companies is often a structural result of high engineer churn and incentivized fungibility rather than incompetence. Frequent reassignments mean most changes are made by engineers new to the codebase.
In case you missed it…
In last week's AI for Software Engineers, we discussed Gemini 3 Pro, Claude Opus 4.5, and Olmo 3, three important model releases. You can find last week's issue here:
Upskill
Interesting Learning Resources
The MCP Workbook helps you learn agent design through an interactive "by-hand" pedagogy for understanding complex architectures.



