AI for Software Engineers

🖥️ Weekly AI for SWEs

AI for SWEs 73: What Ilya Saw and the Time of TPUs

Plus, the White House unifies US AI research and Antigravity exposes why agents are so hard to build

Logan Thorneloe
Dec 02, 2025

Welcome to AI for SWEs where I share everything software engineers need to know about AI from the past week. This week has seen fewer but more important headlines. I’ve detailed them below.

Also, the Rapid Fire and Career Development sections are now exclusive to paid subscribers. Thanks for reading!

Ilya Sutskever declares the age of scaling over and the age of research begun

“You look at the evals and you go, ‘Those are pretty hard evals.’ They are doing so well. But the economic impact seems to be dramatically behind.”

Ilya Sutskever and Dwarkesh Patel recently discussed AGI, current AI paradigms, and scaling on Dwarkesh’s podcast. Ilya brought up two topics vital for any software engineer working with AI.

First: Application is the most important thing in AI. We’re seeing impressive models that excel at evaluations and benchmarks but lack the expected economic impact. AI research is advancing rapidly, but usefulness comes from understanding how to apply it. This means identifying practical applications and understanding the complexity of engineering systems around those applications.

Second: We are returning to the age of AI research. From 2012 to 2020, we were focused on research—developing effective architectures and models. Around 2020, we entered the scaling phase where we realized we could achieve impressive results by simply increasing data and compute. Now that we’ve scaled, we’re realizing that we need to explore further developments to continue advancing AI, so we’re back to research.

I’ve often stated that reaching AGI will need a new architecture or a fundamental research breakthrough. Current models are impressive and useful, but they’re insufficient for the promises of AGI.

Safe Superintelligence is now focusing on pushing the next frontier of AI. I highly recommend watching this episode. I could listen to Ilya speak for hours.

Google Antigravity exposes critical agent vulnerabilities in local coding environments

I’m a huge Antigravity fan. I believe there’s a better way to code with AI than just a chat interface, tab autocomplete, and reviewing agent output, and Antigravity has a great chance at figuring this out.

Over the past week, Antigravity has leaked sensitive information, and engineers should understand why—not just to use Antigravity safely, but also to build with AI. This issue applies to all AI agents.

A lot of software engineers are building agents to automate developer tasks, which is great. The best way to start learning and building with AI is by automating your own tasks. The problem is that building AI agents introduces security and safety concerns not present in deterministic systems.

For a good example, read about Antigravity ingesting hidden text into its context window and following that text’s instructions to collect and exfiltrate sensitive workspace files. Prompt injection can also cause Antigravity to read a user’s .env file and pull sensitive information into its context window.

Agents may also complete tasks that a user didn’t intend. When an agent has access to a user’s local environment, this can be a huge issue. Read about Antigravity deleting the contents of a user’s drive here.
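To make those failure modes concrete, here’s a minimal sketch of the kind of guardrails an agent’s file and shell tools need: redact sensitive files before they ever reach the context window, and require explicit human confirmation before destructive commands touch the local environment. The function names and patterns below are hypothetical choices of mine for illustration, not anything from Antigravity’s implementation.

```python
import fnmatch
import subprocess
from pathlib import Path

# Illustrative guard layer between an agent's tool calls and the local machine.
# Patterns and names are assumptions for this sketch, not Antigravity's API.
SENSITIVE_PATTERNS = [".env", "*.pem", "*.key", "credentials*"]
DESTRUCTIVE_COMMANDS = {"rm", "rmdir", "del", "format"}

def guarded_read(path: str) -> str:
    """Refuse to load secrets into the context window, even if a
    prompt-injected instruction asks for them."""
    if any(fnmatch.fnmatch(Path(path).name, pat) for pat in SENSITIVE_PATTERNS):
        return f"[redacted: {path} matches a sensitive-file pattern]"
    return Path(path).read_text()

def guarded_shell(command: str, confirm=input) -> str:
    """Require explicit human confirmation before destructive commands run."""
    parts = command.split()
    if parts and parts[0] in DESTRUCTIVE_COMMANDS:
        answer = confirm(f"Agent wants to run '{command}'. Type 'yes' to allow: ")
        if answer.strip().lower() != "yes":
            return "[blocked: destructive command was not confirmed]"
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout
```

The key design choice in this sketch is redaction rather than detection: if a secret never enters the context window, an injected instruction has nothing to exfiltrate.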

I’ll be writing a more in-depth guide on agent safety soon.

Google’s TPUs are the best business decision of the 2010s

This week highlighted just how advantageous Google’s TPUs are and just how few people understand this. Google is the only company that controls its entire AI stack including hardware, models, and applications. When developing AI, the only company Google has to wait on is itself.

This control stems from a business decision made over a decade ago to invest in AI-specific hardware. Google was the first true AI company and has been heavily investing in AI applications since the early 2010s, including machine learning libraries, infrastructure for training large-scale models, talent, and, most critically, TPUs.

TPUs provide the most significant advantage. Setting up and integrating new processors into data centers is incredibly time-, capital-, and resource-intensive. Starting this process today would require years of work just to get it working at scale.

Because TPUs were designed specifically for energy-efficient tensor processing, Google has an entire stack built to increase machine learning development velocity while staying resource efficient.

It makes sense, then, that other companies training large AI models would want to take advantage of this. This is why major generative AI players like Anthropic and Meta are making deals to use TPUs, and why companies in capital-intensive settings, such as high-frequency trading firms, are switching to TPUs in droves.

There’s huge demand for AI chips right now, as evidenced by the many startups succeeding in the space. And over time, we’ll only see more companies adopt TPUs.

The White House unifies AI federally

President Trump signed an executive order, the “Genesis Mission,” on November 24th, 2025. The order aims to harness AI at the federal level to revolutionize scientific discovery and innovation. It’s an effort of national significance, compared in urgency and importance to the Manhattan Project and the Apollo program, and focused on integrating federal resources to accelerate scientific and technological breakthroughs.

A year ago, AI was widely discussed as a true national asset and a competitive advantage on a global scale, similar to weapons of mass destruction. Considering that biases and information are trained into AI models, allowing another nation to build your models for you is an inherent national security risk.

The Genesis Mission establishes the American Science and Security Platform. This secure AI ecosystem combines various machine learning assets, such as compute power, models, and datasets. The platform enables “closed-loop AI systems” to conduct research autonomously. The idea is that these closed-loop AI systems can complete research in weeks that would take humans months or even a year.

This mission combines efforts from academia, all 17 Department of Energy national laboratories, and industry leaders, including Microsoft, IBM, OpenAI, Google, Anthropic, NVIDIA, and Oracle. As far as I know, this is the first serious federal push to combine U.S. assets to advance AI.

Note that this mission builds on other Trump-era policies like promoting AI exports, preventing biased data, and enabling AI-driven research developments.

Logan’s Picks

  • DMs Are the New Cover Letter: How to Get Hired in AI in 2025/2026 by Logan Thorneloe: DMs are super important in a market where job listings are heavily saturated. This is my guide to DMing others about job opportunities, based on my experience posting a job opening a few weeks ago.

  • Launching DeepSeek-V3.2: The new reasoning-first model balances inference cost with performance, positioned at GPT-5–level performance and supporting “Thinking in Tool-Use”. The release includes a new massive agent training-data synthesis method covering 1,800+ environments.

  • Bubble, Bubble, Toil and Trouble by James Wang: Wang distinguishes between financial bubbles (leverage-driven) and tech bubbles (forecast-driven). Tech bubbles are hard to time because they often overshoot initially but deliver revolution in a later “Gen2” phase once infrastructure matures.

  • Treat AI-Generated code as a draft by Addy Osmani: Developers must treat AI code as a draft, verifying every line to prevent bug proliferation and skill erosion. Teams should enforce strict review processes and consider manual implementation for critical logic.

  • How good engineers write bad code at big companies: Bad code at big companies is often a structural result of high engineer churn and incentivized fungibility rather than incompetence. Frequent reassignments mean most changes are made by engineers new to the codebase.

In case you missed it…

In last week’s AI for Software Engineers, we discussed three important model releases: Gemini 3 Pro, Claude Opus 4.5, and Olmo 3. You can find last week’s issue here:

AI for SWEs 72: Gemini 3 Pro, Claude Opus 4.5, and Olmo 3


Upskill

Interesting Learning Resources

  • The MCP Workbook helps you learn agent design through an interactive “by-hand” approach to understanding complex architectures.

