Google engineer finds rival’s AI replicates a year of work in one hour

CALIFORNIA, UNITED STATES — A Google principal engineer has revealed that Anthropic’s Claude Code AI coding assistant replicated a year-long distributed systems project in approximately one hour, underscoring generative AI’s accelerating impact on software engineering.
“I gave Claude Code a description of the problem, it generated what we built last year in an hour,” said Jaana Dogan, a Principal Engineer at Google Gemini, in an X post.
I'm not joking and this isn't funny. We have been trying to build distributed agent orchestrators at Google since last year. There are various options, not everyone is aligned… I gave Claude Code a description of the problem, it generated what we built last year in an hour.
— Jaana Dogan ヤナ ドガン (@rakyll) January 2, 2026
AI accelerates architectural prototyping
According to The Economic Times, Dogan’s experience shows that advanced AI coding tools can now serve as powerful accelerators in the initial design and prototyping phase of software engineering.
Given only a high-level description of the problem, with no proprietary code or internal documentation, Anthropic’s Claude Code produced output highly reminiscent of the architectural concepts her team had formulated.
Dogan clarified that the output was a prototype or architectural draft, not a finished system. The core revelation, however, was speed: the tool compressed a lengthy human design process into a single hour.
This demonstrates that current AI tools, when given clear, domain-specific inputs, can rapidly generate coherent technical drafts, potentially reshaping early-stage project planning.
Developer community divided over AI coding agents
In another report by The Times of India, the reaction to Dogan’s experience highlights a deeply polarized developer community regarding AI coding agents.
Dogan lamented that coding agents have created a major division, stating she has never seen a trend be as divisive and noting a spectrum of reactions from genuine learners to dogmatic skeptics.
“There is a ton of hype & fluff about coding agents, which unfortunately often drowns out the genuinely good work being done. Things don’t have to be this divisive,” she noted in another X post.
Since working on programming languages, I haven't seen this kind of polarized response from the developer community. There is a ton of hype & fluff about coding agents, which unfortunately often drowns out the genuinely good work being done. Things don't have to be this divisive.
— Jaana Dogan ヤナ ドガン (@rakyll) January 3, 2026
Despite working on a competing product, Dogan acknowledged Anthropic’s “impressive work” and expressed motivation to push the field forward, rejecting a zero-sum competitive view.
She encouraged skeptics to test AI tools in their own areas of deep expertise, framing it as a way to assess progress.
Her stance points to an industry at an inflection point, with Dogan predicting 2026 will be “an interesting year” for the evolution of these tools and the community’s response.
The ability of AI to produce in an hour what once took months of human design means the future of software engineering may rely less on the slow craft of drafting and more on the critical skills of judging, refining, and guiding AI-generated prototypes.

Independent