There is a five-floor office building in San Francisco where the first thing visitors receive is a small book. It is the size of the pocket Bibles that proselytisers hand out on street corners, and it contains a 14,000-word essay that the company’s CEO, Dario Amodei, wrote in 2024. The essay is called “Machines of Loving Grace,” and it envisions AI as the mechanism through which humanity defeats disease, conquers poverty, and achieves scientific progress at a pace that was previously inconceivable. It is a sincere and carefully argued document. Visitors are expected to read it.

That same building is the headquarters of Anthropic, which raised $30 billion in a single funding round in February of this year at a valuation of $380 billion, making it one of the most valuable private companies in human history. Its flagship product, the Claude family of AI models, is now used by eight of the Fortune 10. Its coding tool, Claude Code, is generating more than $2.5 billion in annualised revenue and is growing so fast that no analyst has produced a reliable quarterly projection. When Anthropic launches a new product or updates an existing one, software stocks crater. When it announced Cowork - a version of Claude Code designed for non-programmers - $300 billion disappeared from the combined market capitalisation of enterprise software companies in a single day. When it published a blog post claiming Claude Code could translate legacy COBOL systems into modern languages, IBM lost roughly $40 billion in market value in a single session.


The company doing all of this is, by its own explicit description, in the business of making artificial intelligence safe for humanity. Anthropic’s founding story, its corporate structure, its research agenda, and the small book it hands to visitors are all organised around a single proposition: that the most dangerous technology in the history of civilisation is being built, that someone is going to build it regardless, and that it is better for safety-focused people to be the ones building it than to leave the field to those who are not. Dario Amodei has warned publicly that AI could displace half of all entry-level white-collar jobs within one to five years and suggested that displaced workers could form “an unemployed or very-low-wage underclass.” Then he continued building the technology that will produce that outcome.

This article is the most complete account yet assembled of Anthropic’s specific and consequential role in the global IT sector layoff wave that is now eliminating more than 700 jobs per day. It covers Anthropic’s founding, its products, its stated values, its economic research, its enterprise partnerships, its specific role in documented layoffs at major IT employers, and the profound tension at the heart of an organisation that is simultaneously the most thoughtful analyst of AI’s risks to humanity and one of the most effective engines of those risks. No other analysis has placed Anthropic’s role in the 2026 IT employment crisis in full context. This one attempts to.


Part One: The Origin Story and What It Tells Us

From OpenAI to Anthropic: A Schism Built on Principles

Anthropic was founded in 2021 by Dario Amodei and Daniela Amodei, along with several other researchers who had been among the most senior figures at OpenAI. Dario had served as Vice President of Research at OpenAI. His sister Daniela was VP of Operations. Other co-founders included Tom Brown, Chris Olah, Sam McCandlish, Jack Clark, and Jared Kaplan - collectively a group that represented a significant portion of OpenAI’s most influential technical and research leadership.

The reason they left is important for understanding everything that followed. The Amodeis and their co-founders believed that OpenAI was not taking AI safety seriously enough given the pace at which capability was advancing. They wanted to build an organisation that would prioritise safety research alongside capability development, rather than treating safety as a secondary concern. The founding vision of Anthropic is therefore not primarily about building the most capable AI, or the most profitable AI, or the most widely deployed AI. It is about building AI that is, as the company’s tagline puts it, safe, beneficial, and understandable.

This founding premise creates the central paradox that runs through every aspect of Anthropic’s current situation. The company that exists to make AI safe is the company whose products are now most visibly accelerating the displacement of human workers. The organisation most focused on the long-term societal consequences of AI is generating some of the most immediate societal consequences of any AI company. Deep Ganguli, who leads Anthropic’s societal impacts team and whose job is to study the labour market effects of Claude, described the tension to Time magazine with unusual candour: “It feels like we might be speaking out of both sides of our mouths.”

This is not hypocrisy in the conventional sense. Anthropic has not changed its stated values. What has happened is that the company’s products have become so capable, and so widely adopted at enterprise scale, that the safety mission and the disruption consequences are no longer separable. You cannot deploy a system that writes 90% of Anthropic’s own code, automates enterprise software delivery for Infosys’s telecom and financial services clients, and is integrated into the workflows of eight Fortune 10 companies without producing material effects on the labour market for the workers doing those tasks.

The Governance Architecture and What It Signals

Anthropic was incorporated as a Public Benefit Corporation (PBC) rather than a standard C-corporation. This structure creates legal obligations to pursue public benefit alongside shareholder returns. The company also established a “Long-Term Benefit Trust” - a purpose trust holding Class T shares with the power to elect directors - specifically to ensure that long-term safety considerations cannot be overridden by short-term financial pressure even as the company raises billions from investors.

These governance features are not cosmetic. They have produced real consequences. In the most dramatic recent example, the Trump administration designated Anthropic a “supply-chain risk” in early 2026 and instructed federal agencies to stop using its technology - a designation normally reserved for companies under espionage suspicion. The trigger was Anthropic’s refusal to grant blanket permission for its AI tools to be used in autonomous weapons systems or mass surveillance applications. The company held its position despite the financial and reputational cost of losing a $200 million defence contract.

Within hours of the designation, OpenAI announced a new Pentagon deal - and CEO Sam Altman publicly stated that it included the same prohibitions on autonomous weapons and mass surveillance that Anthropic had sought. The episode illustrated both the genuine nature of Anthropic’s ethical commitments and the competitive dynamics of the AI market: being seen to hold principles became, paradoxically, good for Anthropic’s brand among the enterprise customers and consumers who switched to Claude in protest at OpenAI’s deal.

Understanding this governance architecture matters for interpreting Anthropic’s role in the layoff wave. The company is not cutting corners on safety to maximise short-term profits. It is building products as safely and responsibly as it knows how, and those products are still producing enormous workforce disruption. This suggests that the disruption is not primarily the result of irresponsible AI deployment. It is the result of AI becoming genuinely capable enough that even responsible deployment at scale has large employment consequences.

The Funding Trajectory: From $580 Million to $380 Billion in Four Years

Anthropic’s funding history is one of the most extraordinary capital formation stories in the history of technology. From its initial $580 million raise in 2022 - which included a $500 million investment from FTX under Sam Bankman-Fried, a relationship that became complicated when FTX collapsed - the company has raised successive rounds that collectively tell the story of how the market has valued frontier AI:

In 2023, Amazon announced its initial investment of $1.25 billion, with a commitment to invest a total of $4 billion. This partnership established AWS as Anthropic’s primary cloud and training infrastructure partner - a structural relationship that would become one of the most significant in the cloud industry.

In 2023, Google invested $500 million with a commitment to invest an additional $1.5 billion over time. This gave Google both a financial stake and a cloud services relationship, making Anthropic’s Claude models available through Google Cloud’s Vertex AI platform while competing directly with Google’s own Gemini models on the same platform.

By early 2025, Anthropic’s valuation had reached $61.5 billion in a round that included Lightspeed Venture Partners, Bessemer, Cisco Investments, Fidelity, Menlo Ventures, and Salesforce Ventures. Amazon subsequently doubled its total investment to $8 billion. In October 2025, Anthropic signed a landmark cloud deal with Google that would bring over one gigawatt of AI compute capacity online by 2026, powered by up to one million of Google’s custom Tensor Processing Units.

In November 2025, Microsoft announced an investment of up to $5 billion, and Nvidia announced an investment of up to $10 billion - making Claude the only frontier model available on AWS, Google Cloud, and Microsoft Azure simultaneously.

And then in February 2026, Anthropic closed the largest single funding round in private company history at the time: $30 billion, led by GIC and Coatue, at a post-money valuation of $380 billion. The round included participation from Microsoft, Nvidia, Sequoia Capital, D. E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, MGX, Qatar Investment Authority, and others. Anthropic’s annual revenue run-rate had reached $14 billion, growing more than 10x annually for three consecutive years.

The velocity of this capital formation is itself a data point about the disruption Anthropic is causing. Investors are not paying $380 billion for a company that is merely improving enterprise productivity at the margins. They are paying for a company that they believe will reshape fundamental categories of knowledge work - and the labour market for the people doing that work.


Part Two: Claude and Its Products - The Disruption Engine

The Claude Model Family: From Chatbot to Enterprise Infrastructure

Claude began as a conversational AI assistant - a response to ChatGPT - but has evolved into something considerably more consequential: an enterprise intelligence platform that is embedded in the operational workflows of organisations across every major industry sector.

The current Claude model family as of March 2026 consists of three tiers. Claude Haiku is the fastest and most cost-efficient model, designed for high-volume, latency-sensitive applications where speed matters more than maximum capability. Claude Sonnet is the mid-tier model, balancing capability and cost for the majority of enterprise applications. Claude Opus is the most capable model in the family, designed for complex reasoning tasks, extended research, and the most demanding enterprise workflows.

The latest versions - Claude Opus 4.6 and Claude Sonnet 4.6 - represent significant capability advances over their predecessors. Opus 4.6 features a one million token context window, which allows it to process and reason across entire codebases, legal document archives, financial reporting periods, or research corpora in a single session. The model supports Agent Teams - the ability for multiple Claude instances to collaborate on complex tasks, dividing work and coordinating outcomes in ways that resemble a distributed human team. Opus 4.6 tops the GDPval-AA benchmark, which measures performance on economically valuable knowledge work tasks across finance, legal, and other high-value domains.

These capability advances are not incremental. They are discontinuous jumps that repeatedly expand the range of tasks that Claude can perform reliably enough to replace human judgement in production settings. Each capability jump directly affects the employment calculus for workers in the domains Claude is entering.

Claude Code: The Product That Changed Everything

If there is a single product that most clearly illustrates Anthropic’s role in the IT sector layoff wave, it is Claude Code. Released as a research preview in February 2025 and moved to general availability in May 2025, Claude Code is an agentic coding assistant that operates in the terminal, integrates with Git workflows, and can autonomously write, debug, test, refactor, and deploy software with minimal human direction.

Claude Code is not a code completion tool in the mould of GitHub Copilot’s early iterations. The distinction is fundamental. Copilot autocompletes lines and functions based on context. Claude Code plans multi-step software development tasks, executes them across entire repositories, iterates on its own output when it encounters errors, interacts with external tools and APIs, and can take a natural language description of a desired system and produce a working implementation from scratch.

The numbers around Claude Code are staggering. Within two months of its launch, it had reached $500 million in annualised revenue - a pace that Anthropic claimed made it the fastest-growing product in history. By February 2026, Claude Code’s annualised revenue had reached $2.5 billion and was still accelerating. The number of weekly active users had doubled since January 2026 alone. Business subscriptions had quadrupled since the beginning of the year, and enterprise customers now account for more than half of Claude Code’s total revenue.

What Claude Code is doing to software development workflows is documented in the accounts of individual engineers as much as in aggregate revenue statistics. A senior engineer at Google described Claude Code recreating a year’s worth of software development work in an hour. Boris Cherny, the Anthropic head of product who created Claude Code in the company’s internal Bell Labs-style experimental division, disclosed on Lenny’s Podcast that he has not edited a single line of code by hand since November 2025: even the product’s creator no longer writes its code himself. Dario Amodei told an audience at the World Economic Forum in Davos that he has engineers within Anthropic who say “I don’t write any code anymore. I just let the model write the code. I edit it.” Amodei subsequently disclosed that approximately 90% of Anthropic’s code is now generated by Claude Code itself.

The “Claude Christmas” Moment

The inflection point that the software engineering community most clearly remembers is what engineers have taken to calling “Claude Christmas.” On November 24, 2025 - the week of the American Thanksgiving holiday - Anthropic released a significantly upgraded version of Claude Code. Engineers across Silicon Valley spent the holiday break experimenting with it. They emerged from those days deeply unsettled.

What they had discovered was not a tool that made them more productive. It was a tool that could autonomously build projects that they would have spent weeks coding by hand. The qualitative shift was from “AI helps me code faster” to “AI builds things I direct.” The difference matters enormously for employment implications. A tool that makes each programmer 50% more productive might reduce the number of programmers needed for a given project. But a tool that makes one programmer capable of doing the work of ten - without necessarily working ten times as many hours - changes the entire architecture of software development teams.

By the new year, the anxiety had spilled into public view. In San Francisco and San Mateo counties, where approximately 190,000 jobs are tied to the technology sector, the conversations that had previously stayed within private Slack channels became visible in public forums, opinion pieces, and the kind of gallows humour that surfaces in communities that are beginning to genuinely fear their own obsolescence. The phrase “permanent underclass” - referring to the class of formerly well-paid knowledge workers who would be locked out of economic mobility by AI - moved from abstract theoretical discussion to everyday tech conversation.

What Claude Code Specifically Does to Software Engineering Employment

The employment consequences of Claude Code operate at several levels that compound each other.

At the individual productivity level, software engineers using Claude Code consistently report productivity multiples of two to five times their previous output, with some reporting dramatically higher multiples for specific task categories like boilerplate generation, test writing, documentation, and legacy code refactoring. When one engineer can produce the output of two to five engineers using Claude Code, the immediate implication is that a team that previously needed ten engineers to deliver a certain velocity of software may now need two to five.

At the project scoping level, companies are beginning to reduce their software engineering headcount requirements when scoping new projects. Rather than estimating a project at six months and twelve engineers, engineering managers are beginning to estimate at three months and four engineers, with Claude Code absorbing the remainder. This is not theoretical. The evidence from companies including Block, Atlassian, and several enterprises in the financial services sector is that headcount requirements for software projects are being revised downward in direct correlation with Claude Code adoption.

At the hiring level, the effect on entry-level software engineering roles is the most immediately documented. Anthropic’s own economic research, published in March 2026, found a 14% drop in the job finding rate for workers in AI-exposed occupations in the post-ChatGPT era compared to 2022. Within the specific occupation of Computer Programmer - which Claude Code targets most directly - the observed AI coverage rate is 75%, meaning that three-quarters of the tasks performed by computer programmers are tasks that Claude is now actively handling in real enterprise deployments.

The BLS occupational projections, cited in Anthropic’s own research, show that occupations with higher observed AI exposure are projected to grow more slowly through 2034. Computer Programmers, consistently one of the most AI-exposed occupations by Anthropic’s measure, are in the category of roles facing the most structural headcount pressure.

Boris Cherny made the prediction explicit on Lenny’s Podcast: “I think by the end of the year, everyone is going to be a product manager, and everyone codes. The title software engineer is going to start to go away.” This is the creator of Claude Code, employed by the company building it, saying publicly that the software engineering job title - one of the most sought-after career designations of the last two decades - is being structurally eliminated by his own product.

Cowork: From Coding to General Knowledge Work

If Claude Code’s impact on software engineers represents Phase One of Anthropic’s disruption of the knowledge work labour market, Cowork represents Phase Two. Launched in January 2026, Cowork is a version of Claude Code’s agentic capabilities designed not for developers but for the broader population of knowledge workers: salespeople, lawyers, financial analysts, HR professionals, marketing managers, and executive assistants.

Boris Cherny and his team built Cowork using Claude Code itself, in approximately a week and a half. This detail is not incidental. The speed of development illustrates the self-accelerating character of the disruption: AI tools are being used to build the next generation of AI tools faster than any human engineering team could develop them.

Cowork can access files, control browsers through the Claude in Chrome extension, and manipulate applications - executing tasks rather than simply advising how to do them. Where Claude Code required engineers who understood command-line interfaces and Git workflows, Cowork requires only the ability to describe a task in natural language. The democratisation of agentic AI to non-programmers dramatically expands the range of roles and workers whose work can be automated.

When Anthropic launched Cowork’s industry-specific plugins in February 2026 - adding capabilities for private equity scenario modelling, HR job description development, design brief creation, vendor proposal summarisation, and sales operations management - the stock market reaction was immediate and severe. A software industry ETF fell nearly 6% in a single day, its worst session since April 2025. The sell-off was concentrated in exactly the companies whose products are most exposed to Cowork’s capabilities: enterprise analytics software, sales intelligence platforms, HR management software, and legal research tools.

The market reaction was not irrational. When Anthropic publishes that Cowork can “model scenarios in private equity work” through a FactSet plugin, the clear implication is that the analyst hours previously required to run those models are a target for displacement. When it can “develop job descriptions and offer letters in human resources” through an integration with HR workflows, the HR specialists doing that work face a restructuring of their role. When it can “put together creative briefs for design-related work,” the professionals whose careers were built around creative brief development need to find new sources of value that the tool cannot replicate.

Claude Cowork’s Specific Industry Plugins and Their Employment Targets

The eleven open-source Cowork plugins launched in early 2026 were not selected randomly. They represent Anthropic’s assessment of the highest-value, most immediately automatable workflows in the enterprise market. Each plugin is a precise targeting of a category of knowledge work employment.

The FactSet plugin for financial analysis targets the quantitative research and scenario modelling work performed by financial analysts at asset managers, investment banks, private equity firms, and corporate finance departments. The workflow Cowork automates - gathering financial data, building multi-scenario models, generating presentation-ready outputs - is precisely the workflow that junior and mid-level financial analysts perform as the primary deliverable of their role.

The S&P Global and LSEG plugins similarly target financial data analysis workflows in credit analysis, market research, and sector intelligence. The professionals whose careers are most exposed to these plugins are exactly the highly educated, well-compensated workers who have traditionally been considered secure because of the complexity of their analytical work - research analysts, credit analysts, market intelligence specialists.

The Apollo plugin for sales operations targets the workflow of sales development representatives and sales operations analysts who research prospects, build contact lists, personalise outreach, and track pipeline development. The entire sales development representative (SDR) category - a job tier that became a standard entry point into technology sales careers over the last decade - is directly in Cowork’s cross-hairs.

The HR workflow integrations with Google Drive, Gmail, Google Calendar, DocuSign, and other workplace tools target the broad category of HR professionals who manage documentation, communication, scheduling, and compliance processes. At large companies, entire teams of HR generalists, coordinators, and administrative specialists perform these tasks.

The design brief workflow targets creative project managers and marketing professionals who translate business objectives into creative briefs, manage agency relationships, and coordinate creative production workflows. This expands Anthropic’s disruption reach into the marketing and creative services sectors that had previously believed themselves relatively insulated from AI automation.

Claude Code in Enterprise: The Infosys Partnership

No single partnership better illustrates Anthropic’s specific and documented role in the IT sector layoff wave than its collaboration with Infosys, announced in February 2026. The deal creates a dedicated Anthropic Center of Excellence at Infosys, where the two companies will jointly develop and deploy AI agents for telecom, financial services, and manufacturing sector clients.

The partnership is significant for the IT layoff story at multiple levels. Infosys is simultaneously one of the world’s largest IT services employers and one of the companies most directly exposed to the headcount reduction pressure described in the previous article in this series. It employs approximately 317,000 people globally, with the majority in India, and has been managing a deliberate headcount reduction through slower hiring and managed attrition as AI tools improve delivery efficiency.

Now Infosys is partnering with the company whose tools are the primary mechanism of that efficiency improvement. The Anthropic-Infosys collaboration uses Claude Code to accelerate software delivery for enterprise clients, Claude’s agent SDK to build AI agents that can work across “long, complex processes” rather than one-off interactions, and the combined capabilities of Anthropic’s models and Infosys’s Topaz AI platform to modernise legacy systems and automate compliance-heavy workflows.

When Dario Amodei spoke about the partnership, he framed it as delivering value to clients who need “precision, compliance and deep domain knowledge.” That framing is correct. The partnership does deliver genuine value to clients. But the mechanism through which that value is delivered is the reduction of the human hours required to produce the same output - and the direct employment consequence is the contraction of the workforce that was previously delivering those hours.

Anthropic’s partnership announcement came, as the trade publication Capacity noted, “just weeks after Telstra outsourced jobs to Infosys in India while making AI-related job cuts.” The sequencing is instructive. A company cuts jobs attributing the reduction to AI. That same company or its peers then partner with Anthropic to further embed the AI tools that justified the cuts. The cycle is self-reinforcing.

Claude Gov and the National Security Dimension

One dimension of Anthropic’s enterprise footprint that receives less attention in the context of IT employment is Claude Gov - the company’s sovereign deployment option for government and national security clients. Launched in June 2025, Claude Gov is a version of Claude designed to operate in air-gapped or classified environments where standard cloud connectivity is not permitted.

By February 2026, Claude is in active use at multiple US national security agencies. Anthropic’s partnership with Palantir makes Claude the only AI model cleared for classified mission use. The Department of Defense committed $200 million to Anthropic as part of a broader AI integration programme that also involves Google, OpenAI, and xAI.

The employment implications of AI adoption within national security and government agencies are substantial and largely invisible in public layoff tracker data. Government employees are not counted in the TrueUp.io and Layoffs.fyi trackers that generate the widely cited layoff statistics. But the adoption of AI tools in intelligence analysis, signals processing, documentation, research synthesis, and administrative operations within the federal government represents a large category of knowledge work employment that is being restructured by the same AI tools affecting the private sector.

The specific designation of Anthropic as a supply-chain risk by the Pentagon - stemming from Anthropic’s refusal to allow its tools in autonomous weapons systems - adds a layer of complexity to this government relationship. Anthropic is both an approved supplier to multiple national security agencies and a company that the Pentagon’s acquisition office has flagged as a compliance risk. The contradiction reflects the broader tension in Anthropic’s position: the company is too capable to exclude from high-stakes applications and too principled to grant unconditional deployment permissions.


Part Three: Anthropic’s Own Research on What Claude Is Doing to Jobs

The Economic Index: Watching the Damage in Real Time

One of the most unusual things about Anthropic’s position in the IT employment crisis is that the company has built, and continues to publish, one of the most detailed systems in the world for tracking how its own AI is affecting the labour market. The Anthropic Economic Index - updated in regular reports - uses actual Claude usage data from millions of professional interactions to measure which occupations are seeing the highest rates of AI task coverage in production settings.

This is remarkable. Anthropic is not relying on theoretical analysis of which tasks AI could in principle perform. It is measuring, using real usage data from its own model, which tasks are already being performed by AI in actual workplace deployments. The data is not a forecast. It is a live measurement of the employment disruption already underway.

The fifth Economic Index report, covering February 2026 Claude usage data, builds on what the company calls an “economic primitives” framework - a way of mapping AI capabilities to the specific tasks within each occupation, rather than treating occupations as monolithic units that are either AI-exposed or not.

The Labour Market Impacts Study: Published March 5, 2026

On March 5, 2026, Anthropic published “Labor Market Impacts of AI: A New Measure and Early Evidence,” authored by Maxim Massenkoff and Peter McCrory. This study represents the most methodologically sophisticated analysis yet of what AI is actually doing to the US labour market, and it is particularly significant because it uses real Claude usage data rather than theoretical task mapping.

The study introduces what the authors call “observed exposure” - a measure that combines the theoretical capability of AI to perform a task with the actual observed rate at which Claude is performing that task in production deployments. This is different from previous measures of AI exposure, which typically estimated how capable AI was at various tasks without measuring whether those capabilities were actually being deployed at scale.
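The distinction is easy to state in code. The sketch below is a minimal illustration of how a measure of this kind relates theoretical and observed exposure at the task level; the task list, coverage numbers, and equal-weight averaging are invented for illustration, and are not the study’s actual methodology, which maps millions of real Claude conversations onto occupational task taxonomies.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    theoretically_automatable: bool  # could a frontier model do this task at all?
    observed_usage_share: float      # fraction of real deployments where AI does it

def theoretical_exposure(tasks: list[Task]) -> float:
    """Share of an occupation's tasks a frontier model could handle in principle."""
    return sum(t.theoretically_automatable for t in tasks) / len(tasks)

def observed_exposure(tasks: list[Task]) -> float:
    """Share of tasks actually being performed by AI in production usage data."""
    return sum(t.observed_usage_share for t in tasks) / len(tasks)

# Illustrative task list for a programming occupation (all numbers invented)
programmer_tasks = [
    Task("write boilerplate code", True, 0.9),
    Task("write unit tests", True, 0.7),
    Task("refactor legacy modules", True, 0.4),
    Task("debug production incidents", True, 0.2),
    Task("negotiate requirements with stakeholders", False, 0.0),
]

gap = theoretical_exposure(programmer_tasks) - observed_exposure(programmer_tasks)
print(f"theoretical: {theoretical_exposure(programmer_tasks):.0%}")
print(f"observed:    {observed_exposure(programmer_tasks):.0%}")
print(f"gap:         {gap:.0%}")  # the capability-deployment gap the study tracks
```

On invented numbers like these, the interesting output is the third line: the gap is the quantity whose closure the study treats as the leading indicator of labour market impact.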

The key findings of the study are simultaneously reassuring in their current numbers and alarming in their directional trajectory:

On unemployment, the study finds “no systematic increase in unemployment for highly exposed workers since late 2022.” The headline finding that generated most of the media coverage was that AI has not yet produced measurable unemployment. This is the “reassuring” finding.

On hiring, the picture is more ominous. The study finds a 14% drop in the job finding rate in the post-ChatGPT era compared to 2022 in AI-exposed occupations. This is the mechanism that economists have come to call “hiring freeze rather than firing” - AI is not yet causing mass layoffs in exposed occupations, but it is causing companies to reduce hiring in those occupations as productivity improvements reduce headcount requirements. For young workers particularly - those aged 22 to 25 entering the labour market - the study cites a 6% to 16% fall in employment in AI-exposed occupations.

On the capability-deployment gap, the study identifies what may be the most important finding for understanding the future trajectory of disruption. In computer and mathematical occupations, AI systems could theoretically handle 94% of tasks. Actual current AI usage covers approximately 33% of those tasks. This gap is enormous - and it is closing fast as AI capability improves and enterprise adoption accelerates. When that gap narrows substantially, the labour market impact will not be gradual. It will be sudden.

The study authors are careful to note that “AI is far from reaching its theoretical capability” and that “actual coverage remains a fraction of what’s feasible.” But this is cold comfort when the gap is closing at the observed pace and when the capability ceiling is being raised continuously by each successive Claude model release.

The Top Ten Most Exposed Occupations by Anthropic’s Own Data

Using the observed exposure measure from real Claude usage data, the Anthropic study identifies the occupations currently seeing the highest rates of AI task coverage in production settings. The list is instructive in ways that go beyond the expected narrative about AI and low-skill work:

Computer Programmers top the list at 75% observed task coverage. This is not a surprise given Claude Code’s explosive adoption, but the magnitude is striking. Three-quarters of the tasks that computer programmers perform as their core function are tasks that Claude is actively handling in enterprise deployments right now.

Customer Service Representatives rank second, reflecting the widespread deployment of Claude-powered customer service automation at enterprises ranging from telecom companies to financial institutions. The companies covered in the previous article in this series - BT, Vodafone, Salesforce, IBM - have all explicitly cited AI-driven customer service automation as a driver of headcount reduction.

Data Entry Keyers rank third at 67% observed coverage. The “primary task of reading source documents and entering data” is one of the cleanest examples of an AI-native workflow, and the automation of this function is already well advanced at the companies deploying Claude in data processing workflows.

The remaining top-ten positions include roles in technical writing, paralegal work, bookkeeping, medical transcription, and market research analysis. The common thread is that all of these roles involve documented, rule-following, pattern-matching work that can be described precisely enough for an AI model to execute reliably.

What is notable about the list is not simply which roles are highly exposed, but the profile of the workers in those roles. Anthropic’s research notes that workers in highly exposed occupations tend to be “older, female, more educated and higher-paid.” This directly contradicts the popular narrative that AI automation primarily threatens low-skill, low-wage workers. The current wave of Claude-driven displacement is concentrated in knowledge work performed by experienced, educated professionals - precisely the workers who were previously considered most secure from automation.

The Productivity-Unemployment Paradox

One of the intellectual contributions of Anthropic’s March 2026 research is its attempt to explain what appears to be a paradox: AI productivity improvements are real and measurable, but unemployment has not increased in aggregate. How can these two things both be true?

The study offers several mechanisms that can reconcile high AI exposure with stable unemployment:

The hiring slowdown mechanism: Companies are not firing existing workers in exposed occupations at a high rate. Instead, they are simply not hiring replacements when workers leave voluntarily. This produces headcount reduction through attrition that does not show up in unemployment statistics but does reduce employment levels over time.

The demand expansion mechanism: As AI makes certain tasks cheaper to perform, demand for those tasks may increase, partially or fully offsetting the productivity-driven reduction in human hours required per unit of output. This is the classic compensation effect in the history of automation - the efficiency gain generates enough new demand to keep workers employed at different tasks or at higher volumes.

The skills transition mechanism: Workers in highly exposed occupations may be shifting to AI-supervisory functions - overseeing, validating, and directing AI output rather than producing it themselves. This maintains employment while fundamentally changing what the work involves.

The time lag mechanism: AI adoption in enterprises takes time. Large companies have complex procurement processes, security review requirements, integration challenges, and organisational change management needs that slow the pace of AI deployment relative to the speed of capability improvement. The disruption that is visible in enterprise deployments today reflects AI capabilities from six to twelve months ago, because that is how long enterprise adoption typically lags behind capability release.

The study suggests that all of these mechanisms are partially operating simultaneously. The unemployment stability is real but may be temporary, sustained by adoption lags and demand expansion that will not necessarily persist as AI capabilities continue to accelerate and enterprise adoption catches up to those capabilities.

The Closing Gap: Why the Stability Cannot Be Assumed to Continue

The most intellectually important section of the Anthropic study is its analysis of what happens when the gap between observed exposure (approximately 33% of tasks in computer and mathematical occupations currently handled by AI) and theoretical capability (94% of those tasks potentially addressable) begins to close. The authors are appropriately cautious about predicting timelines, but the directional implication is unambiguous.

When observed coverage in those occupations approaches the 94% of tasks that is theoretically addressable, rather than the roughly 33% seen today, the employment implications are not a continuation of the current gradual hiring slowdown. They are a step-change restructuring of the software development labour market. The difference between “AI helps engineers work faster” (which produces gradual, manageable adjustment) and “AI handles most of what engineers do” (which produces structural restructuring of engineering teams) is not a matter of degree. It is a matter of kind.

Dario Amodei told The Economist that AI “might be 6 to 12 months away from when the model is doing most, maybe all, of what software engineers do end to end.” He said this at Davos in January 2026. Boris Cherny said the software engineer title would “start to go away” by the end of this year. These statements, from the people with the clearest visibility into the actual capability trajectory of Claude, suggest that the timeline for the gap to close is not measured in years. It is measured in months.


Part Four: The Market Destruction Anthropic Is Causing

The $2 Trillion Software Sector Sell-Off

One of the clearest indicators of the scale of disruption that Anthropic’s products are generating is what happens to the stock market when Anthropic announces something new. The enterprise software sector has lost approximately $2 trillion in combined market capitalisation from its peak as investors grow concerned about AI’s potential to render existing software products obsolete.

This is not abstract market volatility. Each Anthropic announcement that wipes tens or hundreds of billions from enterprise software valuations is the market’s aggregate assessment that the companies building those products have fewer years of earnings ahead of them than previously assumed, because Claude can now perform functions that those products were paid to enable.

When Anthropic launched Cowork’s enterprise plugins with integrations covering financial analysis, HR, sales, design, and operations, a software industry ETF fell 5.8% in a single session. The sell-off was specific: the companies that declined most sharply were precisely those whose products overlapped with Cowork’s new capabilities.

When Anthropic published its COBOL modernisation capabilities in Claude Code, IBM lost $40 billion in a single session. The market’s reasoning was straightforward: if AI can reliably translate COBOL mainframe code to modern languages, the human expertise in IBM’s mainframe services business - historically one of its most durable competitive advantages - faces compression. IBM’s mainframe consulting practice employs thousands of highly skilled engineers who command premium rates for their COBOL expertise. If Claude Code can perform that translation reliably, the economic foundation of that practice changes.

When Anthropic released cybersecurity capabilities in Claude Code that can “scan codebases for security vulnerabilities and suggest targeted software patches for human review,” cybersecurity software stocks declined. The market was pricing in the possibility that some of the functions performed by cybersecurity software platforms could be absorbed by a general-purpose AI tool rather than requiring specialised products.

The Software Replacement Threat in Detail

The threat that Anthropic poses to existing enterprise software products is more specific than a simple “AI can do it better” narrative. It operates through several distinct mechanisms.

The first mechanism is workflow absorption. When Cowork can perform a workflow that previously required a specialised software product - building a private equity scenario model, generating a sales prospecting list, reviewing a legal contract - the specialised product loses its necessity for that use case. Users who were paying for the specialised product because it was the only tool that could perform the workflow may now have an alternative that is cheaper and more flexible.

The second mechanism is integration displacement. Many enterprise software products derive significant value from their integrations - their ability to pull data from multiple sources, combine it, and produce outputs. Claude’s API access and its ability to integrate with any data source through the Claude developer ecosystem make it a potential integration layer that can replicate the data connection value of specialised tools without requiring the dedicated product.
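The mechanism is visible in a few lines of code. The sketch below uses the tool-use interface of the Anthropic Messages API, which lets a model request calls against arbitrary data sources; the CRM tool, its schema, and the model identifier here are hypothetical stand-ins, not a documented integration.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A hypothetical internal data source, standing in for the kind of connector
# a specialised enterprise product would normally provide.
pipeline_tool = {
    "name": "get_pipeline_data",
    "description": "Fetch open sales opportunities from the internal CRM.",
    "input_schema": {
        "type": "object",
        "properties": {
            "region": {"type": "string", "description": "Sales region to query."},
            "quarter": {"type": "string", "description": "e.g. '2026-Q1'."},
        },
        "required": ["region", "quarter"],
    },
}

response = client.messages.create(
    model="claude-sonnet-4-6",  # placeholder for whichever model tier is deployed
    max_tokens=1024,
    tools=[pipeline_tool],
    messages=[{
        "role": "user",
        "content": "Summarise EMEA pipeline risk for 2026-Q1 and flag stalled deals.",
    }],
)

# If the model decides it needs the data, it returns a tool_use block rather
# than a final answer; the calling code runs the query and sends the result back.
for block in response.content:
    if block.type == "tool_use":
        print("Model requested:", block.name, block.input)
```

Every connector wired up this way reproduces, in a few dozen lines, a data integration that a specialised enterprise product would otherwise have sold as its core value - which is precisely the displacement the market has been pricing in.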

The third mechanism is the democratisation of technical capability. Specialised software products often derive their pricing power from the fact that they require trained professionals to operate effectively. When Cowork makes the same capabilities accessible to non-technical users through natural language, the specialisation premium that supported those products’ pricing compresses.

The fourth mechanism is the API commoditisation of AI capability. Companies that built their products around proprietary AI models are now competing against general-purpose frontier models that are more capable in most dimensions. The era of proprietary AI as a product differentiator is closing rapidly as Anthropic, OpenAI, and Google release successive models that outperform specialised alternatives.

The COBOL Crisis: A Case Study in Market Disruption

IBM’s reaction to Anthropic’s COBOL capabilities deserves particular attention because it illustrates how even the most deeply moated professional knowledge categories can be disrupted by frontier AI. COBOL is a programming language developed in 1959 that still runs approximately $3 trillion in daily financial transactions through the mainframe systems of major banks, insurance companies, and government agencies. Maintaining, extending, and modernising these systems is a specialised skill that has become scarcer as the generation of engineers trained in COBOL has retired.

The specialised knowledge required to work with COBOL systems has historically made mainframe modernisation projects among the most expensive and time-intensive in enterprise technology. Consultants with genuine COBOL expertise command premium rates. Projects to migrate COBOL systems to modern languages are routinely scoped at tens of millions of dollars and three to seven years. IBM’s consulting practice built its most durable competitive advantage precisely on this expertise.

Anthropic’s claim that Claude Code could perform COBOL translation at enterprise quality was met with scepticism from IBM and other mainframe specialists, who correctly noted that translation of code is only one dimension of mainframe modernisation. Understanding the business logic embedded in decades of COBOL code, managing the transition without disrupting critical financial infrastructure, and validating the translated code in regulated environments are all genuine challenges that go beyond automated translation.
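A toy example makes the sceptics’ point concrete. The fragment below shows a hypothetical but typical piece of COBOL billing logic (in comments) alongside a Python translation; the code is illustrative only, and the hard part is not the syntax but preserving the fixed-point decimal semantics the original relies on.

```python
from decimal import Decimal, ROUND_HALF_UP

# Original COBOL (hypothetical, but typical of mainframe billing logic):
#
#   01 WS-BALANCE   PIC S9(7)V99 COMP-3.
#   01 WS-RATE      PIC SV9(4)   COMP-3.
#   01 WS-INTEREST  PIC S9(7)V99 COMP-3.
#   COMPUTE WS-INTEREST ROUNDED = WS-BALANCE * WS-RATE.
#
# PIC S9(7)V99 declares signed fixed-point currency with two implied decimal
# places, and ROUNDED specifies half-up rounding. A line-by-line translation
# to binary floats would compile, pass casual review, and still drift from
# the mainframe's answers at the cent level.

def monthly_interest(balance: Decimal, rate: Decimal) -> Decimal:
    """Replicate COMPUTE ... ROUNDED semantics on two-decimal currency."""
    return (balance * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# 100.50 * 0.0100 = 1.005 exactly; half-up rounding gives 1.01, where binary
# float arithmetic and Python's default half-even rounding may not.
print(monthly_interest(Decimal("100.50"), Decimal("0.0100")))  # 1.01
```

Syntax translation is the portion Claude Code automates readily; finding and preserving semantics like these across millions of lines of accumulated business logic is where the expensive human hours have historically gone.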

However, the market’s $40 billion reaction suggests that investors do not require Claude Code to completely replace IBM’s mainframe practice. They only require it to automate enough of the most time-consuming and expensive portions of that practice to compress the economics and reduce IBM’s pricing power. That is a lower bar, and it is one that current AI capabilities may already be approaching.


Part Five: The Tension Anthropic Lives With

The Dario Amodei Paradox: Seeing the Damage and Accelerating Anyway

Dario Amodei is unusual among the leaders of major AI companies in the directness and specificity of his warnings about AI’s potential to harm workers. He has said that AI could displace half of all entry-level white-collar jobs within one to five years. He has written about the possibility of AI creating “an unemployed or very-low-wage ‘underclass.’” He has said “I don’t know where these people will go or what they will do” and has specifically warned that the wealth concentration from the AI transition could exceed that of the Gilded Age, with individual fortunes potentially reaching into the trillions. He has publicly criticised other AI leaders for “sugar-coating” the disruption.

At the same time, Amodei continues to build and deploy the technology producing these outcomes at maximum speed. Anthropic’s product release velocity in 2026 has been extraordinary. More than thirty new products and features were launched in January alone. The Series G round of $30 billion was described in the funding announcement as fuel for “infrastructure expansion, research and continued investment in enterprise-grade products” - in other words, acceleration, not moderation. The projected revenue trajectory of $20 to $26 billion for 2026 implies a further significant expansion of enterprise adoption and, consequently, a further acceleration of the labour market disruption already underway.

How does Amodei reconcile his genuine concern about AI’s human costs with his role as the primary driver of those costs? His answer, articulated in his “Machines of Loving Grace” essay and in numerous public statements, has three components.

The first component is the inevitability argument. Powerful AI is going to be built. The question is not whether AI will be developed but who will develop it. If safety-focused researchers like Amodei do not build frontier AI, it will be built by people less focused on safety - and the outcomes will be worse. From this perspective, every month that Anthropic leads in capability while maintaining its safety commitments is a month when the development of the most powerful AI is in better hands than the alternative.

The second component is the long-run prosperity argument. The “Machines of Loving Grace” vision holds that sufficiently advanced AI will generate such enormous increases in human productivity - accelerating scientific discovery, enabling medical breakthroughs, solving previously intractable global problems - that the near-term disruption to labour markets is worth the long-run gains. This argument is explicitly utilitarian: the aggregate human welfare improvement from AI’s long-run benefits outweighs the individual human welfare losses from near-term labour market disruption.

The third component is the governance argument. The right response to AI’s labour market disruption is not to slow the technology but to build the policy, regulatory, and social infrastructure to manage the transition. Anthropic employs economists, policy experts, and social impact researchers specifically to study and advocate for appropriate governance responses. Amodei has called on governments to build safety nets, retraining programmes, and redistribution mechanisms to absorb the transition costs that AI is generating.

All three components of this argument are intellectually defensible. None of them provide much comfort to the software engineer in Bengaluru who is watching her joining letter remain unfulfilled, or the programme manager in Seattle who received a severance package after fifteen years with a company that just posted record revenues.

The Internal Tension: What Anthropic’s Own Employees Say

The tension that Amodei acknowledges exists not just in public philosophy but within Anthropic’s own workforce. Deep Ganguli, who leads the company’s societal impacts team and whose professional function is to study the labour market effects of Claude, told Time magazine: “It feels like we might be speaking out of both sides of our mouths.”

This is not a throwaway comment. Ganguli is the person within Anthropic who is paid to understand the employment consequences of the company’s products. When he says it feels like speaking out of both sides of the mouth, he is articulating the experience of an organisation that is simultaneously conducting the most serious research into AI’s social risks and deploying the technology creating those risks as fast as it can.

Boris Cherny, the creator of Claude Code, is another example of this internal tension. He told Fortune that the disruption his product is causing “shouldn’t be up to us” and that society needs to have a larger conversation. Anthropic takes the disruption “very, very seriously,” he said. And then: “I do think in the meantime, it’s going to be very disruptive, and it’s going to be painful for a lot of people.” He also noted that Anthropic is planning an IPO this year.

The IPO trajectory is itself a data point about the tension. An Anthropic IPO would create substantial personal wealth for its founders, employees, and investors. It would also create capital market pressure for continued revenue growth and product velocity - pressure that would make it structurally harder to moderate the pace of development for social reasons, even if individual employees wanted to.

The Funding Paradox: Safety Research Funded by Disruption

Anthropic’s official mission is “the responsible development and maintenance of advanced AI for the long-term benefit of humanity.” Its safety research programme is one of the most respected in the world. Its work on mechanistic interpretability - understanding how neural networks implement their computations at a mathematical level - has produced genuinely important scientific contributions. Its constitutional AI approach to training safer models is cited by researchers globally. Its alignment research team includes some of the best minds in the field.

This safety research is funded by the revenue generated by Claude’s enterprise deployment. Without Claude Code generating $2.5 billion in annualised revenue, without eight Fortune 10 companies paying for Claude access, without the enterprise automation capabilities that are displacing workers in IT services, BPO, and knowledge work, Anthropic would not have the resources to fund the safety research that is its stated primary mission.

The funding paradox therefore has a structural logic that goes beyond individual choices. The safety mission requires capital. Capital comes from enterprise adoption. Enterprise adoption causes labour displacement. Labour displacement generates the need for safety research and governance advocacy. The cycle is complete and self-sustaining.

This does not mean the safety research is insincere or the mission is fraudulent. It means that the economic constraints of building frontier AI create a situation where the organisation most capable of doing safety research is the organisation most dependent on the adoption of the tools creating the safety risks. This is not unique to Anthropic - the same dynamic characterises every safety-focused AI lab - but it is most visible at Anthropic because of how explicitly the company has articulated its safety mission relative to its commercial activities.


Part Six: The Employment Impact by Sector - Claude’s Specific Role

Financial Services: The Analyst Pipeline Disruption

Of all the enterprise sectors in which Anthropic has achieved deep adoption, financial services shows the clearest documented link between Claude deployment and workforce restructuring. Financial services firms have historically been among the most aggressive early adopters of AI tools, partly because the economic value of marginal improvements in analysis, risk management, and workflow efficiency is so high in a sector dealing with very large sums of money.

Anthropic’s Economic Index data shows that financial services occupations - particularly those in the Business and Financial category - rank among the highest in observed AI exposure. Roles including financial analyst, credit analyst, investment research analyst, and financial modelling specialist are all showing high rates of Claude task coverage in production deployments.

Several of the largest investment banks have deployed Claude through their Amazon Bedrock and Google Vertex AI relationships for internal research, document summarisation, financial modelling, and compliance documentation functions. While none have made public announcements linking Claude specifically to headcount changes, the pattern of hiring freezes and reduced analyst class sizes at several major banks correlates closely with the period of aggressive Claude enterprise deployment.

The most specific public data point comes from Anthropic’s own Cowork product announcement, which described partnership integrations with FactSet, S&P Global, and LSEG built specifically for private equity scenario modelling. These integrations are targeted directly at the analyst tier of investment management - the population of professionals who build the financial models that drive investment decisions. When Claude with FactSet access can build a scenario model in minutes that an analyst would take days to build, the economics of maintaining analyst-intensive research processes change fundamentally.

IT Services: The Infosys Partnership in Context

The Anthropic-Infosys partnership announced in February 2026 requires deeper analysis than it has received in most reporting, because it sits at the exact intersection of two simultaneous trends: the growth of Anthropic’s enterprise AI capabilities and the contraction of the Indian IT services workforce described in the previous article in this series.

Infosys, as previously documented, has approximately 317,000 employees with a declining headcount trajectory. It has been using its own Topaz AI platform to improve delivery efficiency, which in practice means the same project requires fewer person-hours and therefore less headcount. The partnership with Anthropic amplifies this dynamic by giving Infosys access to more capable underlying AI models and a more sophisticated agentic framework for building the automation tools that power Topaz.

The partnership announcement describes building “AI agents tailored to industry-specific operations” for telecom, financial services, and manufacturing clients. These are not agents that augment human workers. They are agents that replace specific categories of human workflow. The telecom clients are the same clients that companies like Tech Mahindra and Ericsson depend on, and those clients are already restructuring their IT spending in response to the AI tools that have made certain categories of telecom IT services work automatable. Introducing more capable Anthropic-powered agents into Infosys’s delivery for those clients accelerates the efficiency gains and, correspondingly, the headcount reduction.

Dario Amodei’s comment at the partnership announcement is worth quoting in full: “There’s a big gap between an AI model that works in a demo and one that works in a regulated industry - and if you want to close that gap, you need domain expertise. Infosys has exactly that kind of expertise across important industries: telecom, financial services and manufacturing.” What Amodei is describing is not just a business partnership. He is describing the mechanism through which frontier AI capability (Anthropic’s models) gets combined with industry-specific deployment expertise (Infosys’s domain knowledge) to produce AI agents that can reliably automate regulated industry workflows at scale. That is precisely the combination that produces the most severe employment consequences, because it overcomes the barrier that has previously slowed AI adoption in regulated industries: the difficulty of deploying AI reliably in contexts with strict compliance requirements.

Consulting and Professional Services

Anthropic’s most directly disruptive move into consulting and professional services may still be unfolding. When OpenAI announced in February 2026 that it was embedding engineers inside major consulting firms through its Frontier platform - creating the infrastructure for AI agents to be deployed directly into consulting workflows - Anthropic responded by accelerating the development of Cowork’s professional services plugins.

The competitive dynamic between Anthropic and OpenAI for the consulting sector is creating a race to the bottom on the economics of human consulting work. Both companies are actively competing to make their AI tools the standard for replacing the analytical, research, documentation, and synthesis work that consulting firms charge substantial fees for human consultants to produce. When a junior consultant who previously billed $150 per hour for research and analysis can now produce the same output in a fraction of the time using Claude or GPT-4o, the consulting firm either reduces its junior consultant headcount or adjusts its pricing - or more likely both.

The Big Four consulting firms - Deloitte, PwC, EY, and KPMG - are all active Claude enterprise customers. Accenture, which cut 11,000 employees as part of an AI-focused restructuring in 2025, has been one of the most aggressive consulting-sector adopters of AI tools. When consulting firms are simultaneously cutting headcount and deploying AI tools more aggressively, the connection is causal, not coincidental.

Legal Services

One of the most professionally significant sectors where Claude adoption is documented is legal services. Legal work has historically been considered relatively AI-resistant because of its dependence on professional judgement, contextual reasoning, and the liability frameworks that require licensed professionals to be responsible for legal advice.

The adoption of Claude in legal workflows has begun to challenge this resistance at the specific layer of legal research, document review, contract drafting, and regulatory analysis. These are functions that junior and mid-level lawyers, paralegals, and legal assistants have traditionally performed as the production layer beneath senior partner relationships.

Several major law firms are using Claude through enterprise agreements to handle first-pass contract review, regulatory change summaries, case research, and due diligence document processing. The economic effect is that billable hours for associate-level work on these functions compress, because AI-assisted associates can handle significantly more work per hour. For large law firms that have historically built their business around pyramids of associates billing thousands of hours per year, this compression forces a structural rethinking of the staffing model.

The legal sector stocks that declined when Anthropic announced Cowork’s professional services capabilities confirm the market’s view of the threat. Legal research and management software providers saw their shares fall as investors assessed the risk that Cowork’s natural language interface could absorb functions previously handled by specialised legal software products.

Healthcare and Life Sciences

Anthropic made Claude available to HIPAA-regulated healthcare enterprises in 2025, opening up a significant new market segment. Healthcare is one of the few sectors where AI adoption has been moderated by regulatory complexity, data privacy requirements, and the genuinely high stakes of decisions affecting patient care.

However, within healthcare the specific functions most vulnerable to Claude-based automation are not those involving clinical judgement - which remain appropriately protected - but those involving administrative, documentation, and analytical work. Medical transcription and clinical documentation, insurance pre-authorisation processing, medical coding and billing, and healthcare analytics are all categories where Claude’s capabilities are already being deployed in production environments.

The healthcare workforce impact is therefore concentrated not among clinicians - nurses, physicians, therapists - but among the large populations of administrative, documentation, and analytical workers who support the healthcare system. Medical transcriptionists, coding specialists, prior authorisation coordinators, and healthcare data analysts are the specific roles most exposed.

For these workers - many of whom work as contractors or in lower-wage positions without the financial cushions of highly paid tech workers - the displacement is financially severe relative to their resources.


Part Seven: What Anthropic’s Rise Means for India’s IT Sector Specifically

The Infosys Partnership: A Direct Line from Anthropic to Indian IT Employment

The previous article in this series documented the headcount reductions at TCS, Infosys, Wipro, HCL Technologies, and Tech Mahindra in detail. What that article did not fully develop was Anthropic’s specific causal role in those reductions. The Infosys partnership makes that link explicit.

Infosys’s Topaz platform, through which the company delivers its AI-enhanced services to clients, is now being powered in part by Claude. When Infosys completes a five-year financial services transformation project in three and a half years using AI-assisted delivery, Claude Code and Claude’s agent capabilities are part of the mechanism making that compression possible. When those compressed project timelines reduce the billable hours and therefore the headcount required per engagement, Infosys’s workforce contracts.

The Bengaluru, Hyderabad, and Pune employees who are on bench at Infosys while waiting for project redeployment are waiting, in part, because Claude is doing work that would previously have required their skills. This is not an exaggeration or a polemical claim. It is the direct implication of Infosys’s own technology strategy, which explicitly deploys Claude to improve delivery efficiency.

TCS’s situation is similar even without a formal Anthropic partnership announcement. TCS’s management has described AI tools - including large language models accessible through its commercial cloud relationships - as central to its delivery transformation. When TCS says it has 114,000 employees with “higher-order AI skills,” those skills include the ability to use and govern Claude and similar models in client delivery. The productivity improvement those skills enable is the mechanism through which TCS justifies reducing the headcount needed for project delivery.

The Fresher Pipeline Impact

For the hundreds of thousands of engineering students across India who are currently preparing for IT sector placement - using tools like the TCS NQT Preparation Guide and TCS ILP Preparation Guide on ReportMedic - Anthropic’s product trajectory has direct career implications.

The skills that create employment value in a world where Claude Code handles 75% of programming tasks are different from the skills that created employment value in a world where every line of code was human-written. The traditional IT services fresher hiring path - clear the NQT, join the ILP, become a proficient Java or testing engineer - is under direct pressure because the tasks those engineers were trained to perform are precisely the tasks most exposed to Claude’s current capabilities.

This does not mean that engineering careers are ending. It means that the skills most valuable at the entry point of an engineering career are changing faster than most educational institutions are adapting. Students who understand how to work with AI coding tools - how to direct Claude Code effectively, how to validate and review AI-generated code, how to architect systems that intelligently use AI components - are significantly better positioned than those who treat traditional programming skills as their primary career asset.

The deeper implication for Indian engineering education is structural. The NQT and ILP model was built around the assumption that a large volume of freshers would be hired and trained in standard development practices for deployment on large-scale IT services projects. When AI tools compress the number of humans required for those projects, the entire pipeline needs to be rethought - and Anthropic’s product roadmap suggests the rethinking needs to happen faster than the educational system is likely to move.

The GCC Opportunity as Anthropic Accelerates

The most optimistic scenario for Indian engineering talent in the Anthropic era is the GCC pathway. Global Capability Centres of Fortune 500 companies in Bengaluru, Hyderabad, and Pune are increasingly hiring specifically for AI-native roles - engineers who can build, deploy, validate, and govern AI systems including Claude-based agents. These roles pay significantly more than traditional IT services roles and offer exposure to the kinds of problems that are most intellectually interesting in the current period.

Anthropic itself, through its enterprise partnerships, is creating demand for a new category of Indian technical talent: engineers who understand how to build production-grade AI agents using Claude’s APIs, how to design agentic workflows for regulated industries, and how to validate AI outputs against compliance requirements. These are skills that Infosys’s Center of Excellence with Anthropic will develop, and that will command a premium in the broader market.

The challenge is the mismatch in scale. The number of high-quality AI-native roles available at GCCs and through Anthropic-related enterprise deployments is a fraction of the number of traditional IT services roles that are contracting. The opportunity is real but narrow relative to the employment base it would need to absorb.


Part Eight: Competitive Dynamics - Anthropic vs OpenAI and What It Means for Employment

The Race That Nobody Can Win by Slowing Down

Understanding Anthropic’s role in the IT layoff wave requires understanding the competitive dynamics of the frontier AI market, because those dynamics create structural pressures that prevent any responsible reduction in pace regardless of the social consequences.

Anthropic is in a race with OpenAI, Google DeepMind, Meta AI, and a growing field of well-funded challenger labs. The race is for enterprise revenue, for talent, for infrastructure capacity, and for the capability improvements that translate into both. In this race, the company that releases the most capable model fastest acquires the most enterprise customers, generates the most revenue, and uses that revenue to acquire more compute and attract more researchers, which produces more capable models, which attracts more customers - the self-reinforcing cycle that Amodei describes in technical terms as the “loop closing.”

No company in this race can choose to slow down for social reasons without suffering a competitive consequence. If Anthropic were to voluntarily moderate its product release pace to give the labour market more time to adjust, OpenAI would capture the enterprise customers that Anthropic declined to serve, generate the revenue that Anthropic declined to generate, and use it to advance capabilities that Anthropic chose not to advance. The safety mission that requires Anthropic to remain at the frontier would be undermined by the decision to prioritise short-term social considerations over competitive position.

This dynamic is not a rationalisation invented by AI companies to excuse their behaviour. It is a genuine structural constraint that operates like an arms race, in which every party would prefer slower, more orderly progress but none can unilaterally disarm without losing the strategic position that allows it to influence the outcome.

The employment consequence of this competitive dynamic is that the IT sector is not going to get a pause or a gradual deployment timeline that allows workers and institutions to adjust. The products are going to keep coming at the pace that the competitive market demands, and the labour market adjustments will have to happen in response rather than in anticipation.

The OpenAI Parallel and Why Both Companies Cause Harm

Anthropic and OpenAI are often positioned as being at opposite ends of the AI safety and responsibility spectrum. This positioning is partially true but substantially misleading in the context of employment disruption. Both companies are deploying frontier AI at the maximum speed their compute and talent allow. Both companies are building products that directly automate knowledge work. Both companies generate enormous economic value for their enterprise customers and their investors while generating labour market disruption for the workers whose functions those products are automating.

The differences between them are real but operate at a different level than the employment disruption question. Anthropic is more willing to refuse deployment requests that conflict with safety principles (as illustrated by the Pentagon controversy). Anthropic’s research into AI safety and alignment is more systematic. Anthropic’s communications about AI risks are more direct and less optimistic than OpenAI’s.

But for the programme manager at Amazon whose role was eliminated, or the financial analyst at a hedge fund whose responsibilities have been absorbed by Claude, or the QA engineer at TCS whose manual testing function has been automated, the difference between Anthropic and OpenAI is not salient. Both companies’ products are doing the same kind of work to the same categories of jobs at comparable speeds.

The practical implication is that the employment disruption is not something that can be addressed by replacing one AI company’s products with another’s. It is structural to the deployment of frontier AI capability in enterprise workflows, and it requires structural policy and social responses rather than product selection decisions.

The Stock Market Reaction as a Leading Employment Indicator

One of the most useful data sources for anticipating future employment disruption is the stock market reaction to Anthropic’s product announcements. When markets assess that Anthropic’s new capability poses a credible threat to an established software company or service sector, the negative stock price reaction is an efficient market estimate of the economic damage the capability is likely to cause. Tracking these reactions over time provides a forward-looking indicator of which sectors and worker populations will face the next wave of disruption.

The $2 trillion enterprise software sector sell-off that has accompanied Anthropic’s aggressive product roadmap in 2026 is therefore not just a financial event. It is a labour market signal. The sectors whose stocks have declined most sharply are the sectors where displacement is most imminent, because market pricing reflects the expected near-term changes in those sectors’ economics.

The enterprise analytics and business intelligence software sector has been one of the hardest hit, reflecting the threat Cowork’s financial analysis plugins pose to specialised analytics tools. The HR management software sector has declined as Cowork’s HR workflow capabilities demonstrate that AI can handle significant portions of HR operations. The legal research software sector has declined as Claude’s legal analysis capabilities expand. The IT services sector - including the Indian IT companies whose headcount is discussed throughout this analysis - has underperformed the broader market as investors reduce their revenue growth assumptions for companies whose billable hours are being compressed by AI.


Part Nine: The Forward View from Anthropic’s Roadmap

What Claude Opus 4.6 and the Next Model Release Signal

The current Claude Opus 4.6 release, with its one million token context window and Agent Teams capability, represents a specific advance in the employment disruption trajectory. The one million token context window means that Claude can process and reason across entire codebases, entire financial reporting archives, entire legal document sets, or entire research corpora in a single session - without the context compression that previously required human judgement to manage long-form analysis.

For software engineering, the practical implication is that Claude Code can now work with large, complex, interdependent codebases more effectively than it could with the previous 200,000 token limit. The long-context capability removes one of the main remaining human advantages in software maintenance: the ability to hold the context of a large codebase in mind while making changes that need to be consistent across it. Claude can now do this more reliably than most human engineers.

For financial and legal analysis, the implication is that Claude can review and synthesise entire deal rooms, entire regulatory change sets, or entire investment research archives in a single analysis session. The analyst hours previously required to work through these document sets sequentially are directly targeted by this capability.

The Agent Teams capability - multiple Claude instances working in parallel on complex tasks - begins to replicate the productivity model of human teams for AI-automatable workflows. If one Claude can do what one human can do in certain task categories, an Agent Team can do what a team of humans could do, at a fraction of the cost and with continuous availability.

The next capability advance that Anthropic is most likely to release, based on the pattern of recent releases and the competitive dynamics with OpenAI, is improved computer use - the ability for Claude to directly control computer interfaces, not just generate text outputs. Computer use enables Claude to perform any task that involves navigating software applications, filling forms, extracting data from visual interfaces, and managing files through graphical interfaces. When that capability matures to production reliability, the range of human workflows that Claude can automate expands dramatically beyond the text and code generation functions that define its current footprint.

The Agentic AI Market and the Employment Multiplier

The agentic AI market - the market for AI systems that can execute multi-step tasks autonomously, not just respond to individual prompts - has grown to approximately $9 billion in 2026 according to industry estimates. Anthropic and OpenAI are the two primary suppliers of the frontier models that power agentic deployments, with Claude’s Agent SDK providing the framework for building autonomous Claude agents across enterprise workflows.

The employment consequences of agentic AI are qualitatively different from those of conversational or generative AI. A conversational AI that helps a human write documents more quickly amplifies that human’s productivity but keeps them in the loop. An agentic AI that autonomously manages a workflow - researching, drafting, reviewing, approving, and communicating - removes the human from the loop entirely for that workflow. The employment implication is not “one AI-assisted human replaces two humans.” It is “one AI agent replaces one human workflow entirely.”
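To make the distinction concrete, here is a minimal sketch of an agentic loop built directly on the Anthropic Messages API. It illustrates the pattern only - it is not Anthropic’s Agent SDK or any production framework, and the `run_workflow_step` tool, its stubbed execution, and the model id are all invented for the example.

```python
# A minimal sketch of an agentic loop on the Anthropic Messages API.
# The tool, its stubbed execution, and the model id are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "run_workflow_step",  # hypothetical tool for this example
    "description": "Execute one named step of a back-office workflow.",
    "input_schema": {
        "type": "object",
        "properties": {"step": {"type": "string"}},
        "required": ["step"],
    },
}]

messages = [{"role": "user",
             "content": "Process this week's compliance filings end to end."}]

# The loop is what makes the system agentic rather than conversational:
# the model plans a step, the harness executes it, and the result feeds
# the next decision. No human sits between the steps.
while True:
    response = client.messages.create(
        model="claude-opus-4-6",  # illustrative model id
        max_tokens=1024,
        tools=tools,
        messages=messages,
    )
    if response.stop_reason != "tool_use":
        break  # the model considers the workflow complete

    messages.append({"role": "assistant", "content": response.content})
    results = [{
        "type": "tool_result",
        "tool_use_id": block.id,
        # Stand-in for actually executing the requested step.
        "content": f"step '{block.input['step']}' completed",
    } for block in response.content if block.type == "tool_use"]
    messages.append({"role": "user", "content": results})
```

In a purely conversational deployment the loop body would end after the first response; the autonomy lives in the harness’s willingness to keep iterating until the model stops requesting tools.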

Claude’s Agent Teams capability pushes this further by enabling collections of AI agents to work in parallel, with different agents specialised for different aspects of a complex workflow. This directly targets the project-based employment model that underlies IT services delivery at companies like TCS, Infosys, and Capgemini - where teams of humans with complementary skills collaborate on complex, multi-phase deliverables. When Agent Teams can replicate that collaborative model at a fraction of the cost, the headcount required for those projects compresses toward the minimum needed for human oversight and validation.

The IPO and What It Means for Pace

Anthropic’s reported preparations for an IPO - including retaining Wilson Sonsini as an adviser - point toward one of the most consequential financial events of the current AI cycle. An Anthropic IPO would almost certainly be among the largest in technology history given the company’s current $380 billion valuation, and it would create massive new wealth for founders, employees, and investors.

It would also create new structural pressures on the company’s pace. Public market investors expect quarterly revenue growth, market share expansion, and visible product momentum. These expectations are not inherently incompatible with safety - the two need not be in direct tension. But they create an institutional gravity toward prioritising product development velocity and market share over the more contemplative, governance-focused activities that safety research entails.

Anthropic’s governance architecture - the Public Benefit Corporation structure and the Long-Term Benefit Trust - was explicitly designed to resist this gravity. Whether those structures are robust enough to maintain the company’s current balance between safety research and product velocity under the scrutiny of public capital markets is an open question. The answer matters enormously for the trajectory of AI deployment and its employment consequences.


Part Ten: The Response Anthropic Believes Is Adequate

The Economic Policy Advocacy

Anthropic has been more active in policy advocacy around AI’s labour market consequences than most of its peers. The company has submitted comments to regulatory processes, engaged with Congressional and parliamentary staff, and published research that is intended to inform policy rather than simply advance the company’s commercial interests.

The policy prescriptions that Anthropic most consistently advocates for include: expanded social safety net programmes that can support workers during the transition period between AI-displaced roles and new employment; investment in reskilling and education programmes that give workers the ability to move into AI-adjacent roles; and transparency requirements that give workers better information about when and how AI is being used in employment decisions.

On the redistribution question, Amodei has endorsed the idea that the extraordinary wealth generated by AI - including the substantial wealth that Anthropic’s founders and investors will realise - should be subject to redistribution mechanisms that share those gains more broadly with workers displaced by the technology. He has been specific enough to speak of individual fortunes measured in “trillions” and to suggest that tax policy should be adapted to prevent that level of wealth concentration.

These are genuinely progressive policy positions, more progressive than those typically associated with Silicon Valley founders. The gap between Anthropic’s policy advocacy and the pace of its product deployment does not reflect hypocrisy so much as the structural constraint discussed earlier: Amodei can simultaneously hold that aggressive redistribution is required and that Anthropic must continue deploying at maximum speed, because the alternative is losing the competitive position that makes Anthropic’s safety research influential.

The Societal Impacts Team

Anthropic’s investment in a dedicated societal impacts team, led by Deep Ganguli, is one of the most visible institutional commitments to taking the employment consequences of AI seriously. The team’s work includes the Economic Index reports, the labour market impacts research, and ongoing analysis of how Claude is being used in practice and what that means for workers in exposed occupations.

The research produced by this team - including the March 2026 labour market study - represents a genuine contribution to the understanding of AI’s employment effects. It is more methodologically rigorous than most corporate impact research, more transparent about its data sources, and more forthcoming about the uncertainties in its findings.

But the team’s work is also downstream of the product decisions. The Economic Index reports come after Claude has been deployed, not before. The labour market research measures the disruption that is already happening, rather than informing whether to deploy capabilities that might cause disruption. The institutional architecture positions societal impact research as a monitoring and advocacy function rather than as a gating function for product releases.

This is probably unavoidable given the competitive dynamics. If Anthropic’s societal impacts team could veto product releases based on labour market concerns, the company would rapidly lose competitive position to rivals with no such constraint. But the institutional positioning of the team - important, well-funded, intellectually rigorous, and ultimately advisory rather than operational - is itself a statement about the relative weight of commercial and social considerations in the company’s decision-making.


Frequently Asked Questions

Q1: Is Anthropic directly causing the IT sector layoffs documented in 2026?

The causal chain is real but not always direct. Anthropic builds and deploys Claude, including Claude Code. Enterprise companies use Claude to improve the efficiency of software development, IT services delivery, financial analysis, legal research, and a widening range of knowledge work functions. The efficiency improvements reduce the number of human workers required for those functions. The headcount reductions documented at TCS, Infosys, Amazon, Microsoft, and others are the downstream consequence of that efficiency improvement, which Claude is powering in part. The Infosys partnership makes the link explicit: Anthropic is directly partnering with one of India’s largest IT employers to deploy AI agents that automate enterprise workflows, and Infosys is simultaneously managing a declining headcount trajectory. These two facts are connected.

Q2: What did Anthropic’s March 2026 labour market study actually find?

The study found three things that need to be understood together. First, there is currently no measurable increase in unemployment in AI-exposed occupations compared to non-exposed ones. Second, hiring has slowed by approximately 14% in AI-exposed occupations - workers are not being fired but are not being hired as replacements when they leave. Third, the gap between what AI is currently doing and what it is theoretically capable of doing is enormous and closing fast. Computer Programmer tasks, for example, show 75% current observed AI coverage against a 94% theoretical capability ceiling. The study’s conclusion is not “AI isn’t hurting workers yet.” It is “the labour market impacts so far are concentrated in reduced hiring, and the full impact is not yet visible because adoption lags capability.”

Q3: Why does Anthropic keep releasing disruptive products while publishing research about the harm they cause?

The answer is structural rather than personal. Anthropic must remain at the frontier of AI capability to fulfil its safety mission, because safety research is only influential if conducted by organisations that are building the most capable systems. If Anthropic slowed its product development for social reasons, it would lose competitive position to OpenAI and others with less focus on safety, producing worse outcomes. The company is therefore in a situation where the safety mission requires the commercial velocity that produces the disruption the safety mission is meant to prevent. This is the central tension that Anthropic employees, including Deep Ganguli, have publicly acknowledged as uncomfortable.

Q4: How significant is the $2 trillion enterprise software sell-off that Anthropic’s products have caused?

Extremely significant as a forward indicator. Stock price declines reflect investors’ revised expectations about future earnings for the affected companies. When enterprise software companies lose $2 trillion in aggregate market value in response to Anthropic’s product launches, it represents the market’s estimate that those companies’ products will generate less revenue over the coming years than previously assumed, because Claude’s capabilities are absorbing functions those products were paid to perform. The sectors experiencing the largest stock price declines - enterprise analytics, sales intelligence, HR software, legal research tools - are the sectors where the next wave of employment disruption is most likely to be concentrated.

Q5: What specifically makes Claude Code different from GitHub Copilot or other coding tools?

The fundamental difference is between assistance and agency. GitHub Copilot (in its initial versions) and most earlier AI coding tools are context-sensitive autocomplete systems: they suggest completions for lines or functions based on surrounding code context. Claude Code is an agentic system: it can receive a high-level description of a desired outcome, plan the steps to achieve it, execute those steps across files and repositories, interact with external services and APIs, and iterate on its own output when it encounters errors. One senior Google engineer described Claude Code recreating a year’s worth of software development work in a single hour. That scale of capability difference is what separates assistance from replacement.

Q6: Is Anthropic’s valuation of $380 billion justified?

At $14 billion in annualised run-rate revenue growing more than 10x annually, Anthropic is one of the fastest-growing technology companies in history. Its enterprise penetration - eight Fortune 10 clients, over 500 customers paying more than $1 million annually - reflects genuine product adoption rather than speculative positioning. The valuation multiple relative to revenue is high by historical software standards but is being applied to a company whose revenue trajectory, if maintained, would produce revenue levels that make the current multiple appear reasonable. The primary risk factors are competition from OpenAI, Google, and others, and the unresolved question of Anthropic’s path to profitability given its ongoing $3 billion annual cash burn on compute infrastructure.

Q7: What does Anthropic’s partnership with Palantir mean for government and defence employment?

The Anthropic-Palantir partnership makes Claude the only AI model cleared for classified mission use in the US intelligence community. Within intelligence and defence functions, Claude is being deployed for document analysis, intelligence synthesis, research support, and administrative automation - functions previously performed by human analysts, contractors, and administrative staff. Government employment data does not appear in the tech layoff trackers that generate public statistics, so the employment consequences within the national security community are essentially invisible in most analyses. But the adoption of Claude at multiple national security agencies represents a significant deployment of AI into government knowledge work that will have employment consequences that are as real as those in the private sector, simply less visible.

Q8: How does Anthropic’s Claude Gov product affect IT employment in India?

Claude Gov creates a specific employment challenge for the Indian IT companies that provide services and support to US federal agencies. Infosys, Wipro, and TCS all have significant US federal IT services businesses. When the US government uses Claude in classified deployments for intelligence analysis, document processing, and operational support, it reduces the scope of the human-staffed engagements that Indian IT companies provide to support those functions. The US government contracting market is a significant revenue source for several Indian IT companies, and the penetration of Claude Gov into classified deployments automates services that were previously delivered by human teams, including offshore delivery components.

Q9: What is the “economic primitives” framework that Anthropic’s Economic Index uses?

The economic primitives framework is a method of decomposing occupations into their constituent tasks and then measuring AI capabilities against those specific tasks rather than treating occupations as monolithic units. Rather than asking “is a financial analyst’s job exposed to AI?” - which produces an oversimplified yes or no - the framework asks “which specific tasks within a financial analyst’s workflow is AI currently performing in production settings?” This task-level decomposition produces much more actionable insights: it can identify that the data gathering and initial modelling tasks in financial analysis show high AI coverage while the client communication and interpretive judgement tasks show low coverage. The framework is methodologically superior to previous approaches and represents a genuine contribution to understanding how AI adoption is progressing through the economy.
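A rough sketch of the idea follows, with task weights and coverage figures invented purely for illustration - they are not Anthropic’s published numbers.

```python
# Illustrative task-level decomposition in the spirit of the "economic
# primitives" framework. All weights and coverage figures are invented.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    share_of_role: float  # fraction of the occupation's working time
    ai_coverage: float    # observed fraction of the task handled by AI

financial_analyst = [
    Task("data gathering",         0.30, 0.85),
    Task("initial modelling",      0.25, 0.70),
    Task("client communication",   0.25, 0.15),
    Task("interpretive judgement", 0.20, 0.10),
]

# Occupation-level exposure is a time-weighted average over tasks,
# not a yes/no verdict on the role as a whole.
exposure = sum(t.share_of_role * t.ai_coverage for t in financial_analyst)
print(f"Weighted AI exposure: {exposure:.0%}")  # ~49% for these invented figures
```

The task-level numbers, not the single aggregate, are what make the framework actionable: they show where within an occupation the automation pressure is concentrated.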

Q10: How is the competitive race between Anthropic and OpenAI affecting the pace of employment disruption?

The competitive dynamics between Anthropic and OpenAI are accelerating the pace of enterprise AI deployment, which accelerates the pace of employment disruption. When OpenAI announces new enterprise capabilities, Anthropic responds with updates and new features. When Anthropic releases Claude Code capabilities that threaten enterprise software markets, OpenAI accelerates its Codex product. Each competitive move forces a response, and the net effect is a product release velocity that is constrained only by compute capacity and engineering talent rather than by social considerations. Workers and institutions are given less time to adapt to each capability advance because the advance is followed immediately by the next one. The competitive dynamics are producing a pace of disruption that no individual company in the race has chosen to create, but that the collective competitive structure is generating.

Q11: Should Indian engineering students avoid preparing for software engineering careers?

No, but they should change what they are preparing for. The engineering fundamentals - algorithmic thinking, systems design, mathematical reasoning, software architecture - retain their value in the AI era even as the specific implementation tasks those skills were previously applied to become increasingly automatable. The students best positioned for the AI era are those who can understand, direct, validate, and govern AI systems rather than those who compete with AI on the production tasks. Building a portfolio that demonstrates AI tool proficiency - projects that show effective use of Claude Code, AI testing frameworks, AI-enhanced development pipelines - alongside strong fundamentals is the combination that creates career resilience. The TCS NQT Preparation Guide provides the foundational technical skills that remain relevant, while supplementing that preparation with hands-on AI tool experience creates the differentiated profile that the current hiring environment rewards.

Q12: What is Anthropic’s “Machines of Loving Grace” essay and why does it matter for understanding IT employment?

Published by Dario Amodei in 2024, the essay lays out Amodei’s vision for how advanced AI could transform human welfare. The essay is important for understanding Anthropic’s approach to the employment question because it frames the displacement of human workers as an acceptable near-term cost for the enormous long-term benefits that AI-enabled scientific and economic progress would generate. The essay is explicitly presented to every visitor at Anthropic’s San Francisco headquarters. It represents not just Amodei’s personal view but the philosophical foundation from which Anthropic’s product decisions flow. Understanding the essay helps explain why a company that acknowledges it may be creating a “permanent underclass” continues to accelerate the products producing that outcome: the utilitarian calculus, as Amodei articulates it, favours long-run human welfare even at the cost of near-term disruption.

Q13: How does the Pentagon supply-chain risk designation affect Anthropic’s enterprise relationships?

The designation creates legal and procedural complications for defence contractors and companies with significant US government revenue who use Anthropic products. Amazon and Google, both major investors in Anthropic, are substantial US defence contractors, and the designation creates a potential conflict between their Anthropic relationships and their defence contracting relationships. The expected resolution is either a negotiated agreement between Anthropic and the relevant government agencies about usage conditions, or a gradual replacement of Anthropic products in defence-adjacent applications with OpenAI or other providers who have accepted less restrictive usage conditions. The commercial impact on Anthropic is real but bounded - government and defence represent a small fraction of Anthropic’s overall revenue relative to commercial enterprise. The significance is more reputational and precedent-setting: it establishes that Anthropic will accept commercial costs to maintain its safety commitments.

Q14: Is Claude Code actually used in Indian IT services delivery today?

Claude is available through AWS Bedrock and Google Cloud Vertex AI, both of which are widely used by Indian IT services companies for cloud infrastructure. The Anthropic-Infosys partnership explicitly involves Claude in enterprise delivery workflows. While TCS and Wipro do not make specific disclosures about which AI models power their internal tools, their cloud partnerships with AWS and Google imply access to Claude alongside other frontier models. Claude Code has been adopted by Infosys developers, as Dario Amodei noted in the partnership announcement. The penetration is real and growing as the partnership expands, and the efficiency implications for delivery headcount are being realised across the industry.

Q15: What happens to the employment market if Anthropic achieves its prediction that AI handles “all of what software engineers do end to end”?

If Dario Amodei’s six to twelve month timeline for AI handling most or all software engineering work is realised, the employment consequences for the global software engineering workforce would be unprecedented in scope and speed. Approximately 26 million software developers work globally, with substantial concentrations in the United States, India, China, and the EU. A rapid transition to AI-driven software production would not eliminate all of these roles immediately - the supervisory, architectural, validation, and governance functions would retain human value for a significant transition period. But the economics of the software engineering profession - and the enormous investment that has been made in training the generation of engineers currently entering the workforce - would be fundamentally disrupted on a timeline that the educational and policy systems are not prepared to match.

Q16: Does Anthropic make money from the IT layoffs it contributes to?

Yes, directly. When a company deploys Claude to improve delivery efficiency and consequently reduces its human workforce, Anthropic receives the API revenue from the Claude usage that powered the efficiency improvement. The enterprise customers who are reducing headcount while increasing Claude usage are simultaneously growing Anthropic’s revenue and reducing the employment base in the sectors they are rationalising. Anthropic’s $14 billion revenue run-rate is built substantially on this dynamic: it is the financial expression of the efficiency gains that enterprises are realising by replacing human labour with Claude. This does not make Anthropic malicious or unique - every company selling productivity software benefits from the efficiency gains its customers achieve. But it does mean that Anthropic’s financial success is structurally correlated with the labour market disruption its products generate.

Q17: How is Anthropic’s approach to safety different from OpenAI’s, and does the difference matter for employment?

The most visible difference is in the refusal to allow certain high-risk deployments - the Pentagon controversy being the clearest example. Anthropic’s constitutional AI training approach, its mechanistic interpretability research, and its more systematic alignment research programme represent genuine differences in how the two companies approach AI safety. For employment specifically, the difference is less significant than for other safety dimensions. Neither Anthropic nor OpenAI has imposed voluntary constraints on the pace of capability deployment for social reasons. Both are deploying at the maximum speed competitive dynamics allow. The employment disruption from frontier AI deployment is structurally similar regardless of which company’s model is being used - the safety differences are meaningful for other risk categories but not for the labour market disruption question specifically.

Q18: What would a responsible Anthropic deployment policy look like if the company prioritised employment consequences?

This is a genuinely difficult question because the competitive dynamics make voluntary restraint strategically costly. But a more socially responsible deployment approach might include: mandatory advance notice periods before releasing capabilities shown by the Economic Index to cross employment displacement thresholds; revenue-sharing arrangements where Anthropic contributes a portion of the efficiency gains captured by enterprise customers to worker transition funds; active advocacy for specific, quantified redistribution policies rather than general statements about the need for safety nets; and partnership with labour organisations and worker advocacy groups to co-design deployment policies that maintain worker voice in the transition. None of these would be costless or competitively simple. But they represent the kind of specific commitments that would give substance to the general statements about taking disruption seriously.

Q19: What is the Anthropic Economic Index and how often is it published?

The Anthropic Economic Index is a regular publication from Anthropic’s societal impacts team that tracks how Claude is being used in professional and enterprise contexts, maps that usage to occupational data, and measures the observed AI exposure of different occupations and industries. It has been published in multiple editions through 2025 and into 2026, with the fifth report covering February 2026 usage data released in early March 2026. The Index represents a serious and methodologically sophisticated effort to understand AI’s labour market effects in near-real-time, using actual usage data from Anthropic’s own model rather than theoretical capability assessments. It is one of the most useful public data sources for understanding which occupations and industries are already experiencing significant AI-driven workflow change in production settings.

Q20: If Anthropic achieves its mission of making AI safe, does the employment disruption become acceptable?

This is the core philosophical question at the heart of Anthropic’s position, and there is no consensus answer. Amodei’s view, as expressed in “Machines of Loving Grace,” is essentially yes: if AI is deployed safely and the long-run gains in human welfare are realised, the near-term disruption was worth it. The workers displaced in the transition, in this view, are unfortunate casualties of a process that will ultimately make humanity better off. A competing view holds that the distributional consequences of AI deployment - who bears the costs and who captures the benefits - cannot be separated from the question of whether deployment is safe. A technology that concentrates enormous wealth at the top while displacing middle-class careers in the middle is not safe in the relevant social sense regardless of whether it is aligned in the technical sense. Both views have serious intellectual backing, and the difference between them reflects a deeper disagreement about what counts as harm, who counts as the relevant community, and what timescale is appropriate for evaluation.


Conclusion: The Paradox That Defines Our Moment

The portrait that emerges from this analysis is of a company that is, by almost any measure, doing exactly what it said it would do. Anthropic was founded to build powerful AI safely, to study its risks seriously, and to deploy it in ways that pursue long-run human benefit. It is doing all of those things. It has published some of the most serious research on AI risks and labour market impacts of any organisation. It has refused commercially costly deployments on principle. It has built safety research infrastructure that is genuinely advancing the science of making AI more understandable and aligned. And it has, simultaneously, raised $30 billion, achieved a $380 billion valuation, generated $14 billion in annualised revenue, and deployed products that are contributing to the most concentrated episode of white-collar job displacement in the history of the technology industry.

The paradox is that Anthropic’s success at its stated mission is inseparable from the disruption its products are generating. The company builds capable AI carefully and deploys it responsibly - and capable AI, responsibly deployed at enterprise scale, still displaces workers. The question that the history of technology never fully resolves is whether the long-run prosperity that technological disruption ultimately generates is worth the human cost of the transition, and whether that cost is distributed fairly enough that the workers who bear it do so as part of a just social arrangement rather than as the collateral damage of someone else’s progress.

What is certain is that the workers in Bengaluru, Seattle, Stockholm, and London who are experiencing the disruption right now do not have the luxury of the long run. They are navigating the transition in real time, with the skills they have, in the labour market that currently exists. For them, the most useful thing is not philosophical analysis of whether Anthropic’s utilitarian calculus is correct. It is accurate information about what is happening, which skills remain valuable, and which pathways are still open.

The previous article in this series covered the twenty largest IT companies globally and their specific layoff numbers. This article has attempted to add the dimension that was most conspicuously absent from most coverage: the role that Anthropic - the company building one of the most capable and most widely deployed AI systems in the world, while simultaneously studying and publishing the evidence of its own workforce displacement effects - plays in the story of 2026’s IT employment crisis.

That role is central, documented, and irreversible. The direction of travel is clear. The pace is faster than any institution is moving to manage the consequences. And the conversation that Boris Cherny said “shouldn’t be up to us” - about what the future of work looks like when AI can do most of what engineers do - is a conversation that is no longer hypothetical. It is the conversation that the IT industry is having right now, in earnings calls and bench management meetings and job forums and parliamentary committees, whether it has chosen to or not.


Research for this article draws on Anthropic’s published Economic Index reports and labour market research, earnings call transcripts and investor communications from Anthropic’s enterprise partners, reporting from Time, Fortune, CNBC, Quartz, SF Standard, Bloomberg, and CBS News, Wikipedia and Sacra financial data on Anthropic’s funding and revenue trajectory, and the original reporting of the IT sector layoff wave documented in this series. All figures are accurate as of the date of publication. For IT career preparation resources, the TCS NQT Preparation Guide and TCS ILP Preparation Guide at ReportMedic remain among the most comprehensive study tools available for engineering students navigating the current market.


Part Eleven: The Products in Detail - Every Claude Tool and Its Employment Footprint

To understand Anthropic’s specific contribution to the IT layoff wave with the precision that workers, policymakers, and industry observers need, it is necessary to examine each major Claude product not just in terms of its commercial features but in terms of its specific employment footprint - which occupations it targets, which workflows it absorbs, and which populations of workers are most directly affected.

Claude.ai - The Gateway Product

Claude.ai is Anthropic’s direct consumer and professional interface - the web application, mobile app, and desktop application through which individual users access Claude’s capabilities. With subscription tiers including Claude Pro ($20/month), Claude Teams ($30/month per seat), and Claude Max for high-volume professional users, Claude.ai serves the individual and small team market that represents the first point of contact between Claude’s capabilities and knowledge workers.

The employment significance of Claude.ai is primarily at the individual productivity level. A professional using Claude Pro to write, research, analyse, summarise, and communicate achieves productivity multiples that reduce the need for support staff and junior contributors in their immediate workflow. A marketing director using Claude to draft campaign briefs, a lawyer using it to summarise research, a consultant using it to synthesise interview notes - each of these use cases substitutes AI for the junior professional or support staff who would previously have done that work.

At the subscription scale Anthropic has reached - business subscriptions have quadrupled since the start of 2026 - the aggregate employment effect of Claude.ai’s productivity enhancement across millions of professional users is substantial even if each individual user’s impact seems modest. When Claude Pro substitutes for one hour per day of junior professional time across a million users, it displaces the equivalent of approximately 125,000 full-time positions.
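The arithmetic behind that estimate is a back-of-envelope calculation; the sketch below uses the hypothetical figures from the paragraph above, not measured data.

```python
# Back-of-envelope FTE displacement from small per-user time savings.
# The inputs are the hypothetical figures from the text, not measured data.
users = 1_000_000        # professional Claude users
hours_saved_per_day = 1  # junior-professional time substituted per user
workday_hours = 8        # hours in one full-time-equivalent working day

fte_displaced = users * hours_saved_per_day / workday_hours
print(f"{fte_displaced:,.0f} full-time equivalents")  # 125,000
```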

The products that have been most directly affected in the stock market as Claude.ai has expanded are the productivity, content creation, and research tools whose core value propositions are the same as Claude’s: helping knowledge workers produce higher-quality work faster. Grammarly, Notion AI, and similar tools have faced competitive pressure as Claude’s capabilities overlap and in many cases exceed their core features.

Claude API and Amazon Bedrock

The Claude API is the channel through which developers and enterprises integrate Claude capabilities directly into their products and workflows. Every API call represents Claude performing a function that was previously performed either by a human or by a less capable automated system. The API has two major institutional access points: direct API access from Anthropic and access through Amazon Bedrock, which is where the large majority of enterprise deployment happens.
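For context on what an API call looks like in practice, here is a minimal sketch of a direct integration, assuming the Anthropic Python SDK (`pip install anthropic`); the model id and prompt are illustrative, and a Bedrock deployment would route an equivalent request through AWS credentials instead.

```python
# A minimal direct Claude API call: one request, one unit of work that a
# human or a less capable automated system would previously have performed.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-6",  # illustrative model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarise this compliance report in three bullet points: ...",
    }],
)
print(response.content[0].text)
```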

Amazon Bedrock’s role in the employment story is structurally important. When an enterprise deploys Claude through Bedrock for a specific workflow, it is typically replacing a human process that was either labour-intensive or previously unautomated. The Bedrock relationship means that Amazon’s $8 billion investment in Anthropic produces two simultaneous effects: it makes Anthropic’s models more widely accessible to enterprises, and it makes those enterprises’ operations more efficient in ways that reduce their human headcount requirements. Amazon is therefore simultaneously a primary commercial beneficiary of Claude’s enterprise adoption, a major contributor to the capital funding Claude’s development, and one of the companies most aggressively cutting its own workforce through AI-driven efficiency.

The major verticals where Claude API deployment through Bedrock has the most documented employment impact are:

Financial services back-office operations: Automated document processing, compliance reporting, and client communication workflows.

Healthcare administrative functions: Prior authorisation processing, clinical documentation, medical coding, and insurance claims processing.

Legal services document review: Due diligence, contract review, regulatory analysis, and discovery processing.

IT services delivery: Code generation, testing automation, documentation, and project management support at scale - the functions most directly affecting Indian IT services employment.

Customer service and support: First-line customer inquiry handling, complaint resolution routing, and FAQ management.

In each of these categories, the API deployment pattern is similar: a company identifies a high-volume, well-defined workflow; deploys Claude to handle it with minimal human oversight; measures the productivity gain; and adjusts headcount accordingly. The workers previously performing those workflows experience either direct redundancy or a significant restructuring of their roles toward oversight and exception handling.

Claude Code - The Deepest Cut

Claude Code’s employment footprint has been discussed extensively in earlier sections, but several dimensions deserve additional development.

The legacy code modernisation use case is one of the most consequential for senior engineering employment. Many of the highest-compensated software engineers in enterprise IT - those earning $200,000 to $400,000 at major banks, insurance companies, and government contractors - earn those salaries specifically because they maintain expertise in legacy systems written in COBOL, Fortran, or early-generation Java that are difficult to understand, modify, and extend. These systems typically underpin the most critical operations of their owning organisations.

When Claude Code can reliably translate and modernise these systems, the scarcity premium that makes these engineers so valuable diminishes. The expertise in legacy COBOL that previously took decades of accumulated knowledge to master can be partially replaced by an AI that can read, understand, and translate that code with a context window large enough to hold an entire codebase.

IBM’s specific $40 billion stock market loss in response to Anthropic’s COBOL claim reflects the market’s estimate of how much of IBM’s enterprise services revenue is protected by the scarcity of this expertise. If that scarcity is reduced, IBM’s ability to charge premium rates for mainframe services and modernisation projects is reduced, and the employment of the highly skilled consultants who deliver those projects is at risk.

The open-source development ecosystem is another dimension of Claude Code’s employment impact that receives less attention. An enormous amount of software infrastructure - the web frameworks, security libraries, data processing tools, and development utilities that power the modern software stack - is maintained by small teams of developers, sometimes individuals, who are compensated through a combination of GitHub Sponsors, Open Collective funding, and corporate sponsorships. When Claude Code can perform the routine maintenance, bug fixing, and feature addition tasks that these maintainers previously had to do manually, the economic case for funding human maintainers weakens. Several prominent open-source maintainers have publicly noted that their sponsorship income has declined as enterprises have begun using AI tools to address the issues and extensions they were previously paying humans to implement.

Cowork - The Non-Coder Disruption

Cowork represents the extension of Claude Code’s agentic capabilities to the non-technical knowledge worker population - a population that is larger, more geographically distributed, and less financially resilient than the software engineering workforce. The eleven open-source plugins launched at Cowork’s general availability in January 2026 each warrant individual analysis of their employment implications.

The FactSet integration for private equity scenario modelling targets a workflow performed primarily by analysts at private equity firms, investment banks, and corporate finance departments. A senior PE analyst in a major fund might spend twenty to forty hours building a detailed financial model for a potential acquisition - gathering market data, structuring the model, running scenarios, and producing the presentation. The FactSet-powered Cowork workflow can produce a comparable output in a fraction of that time. At the analyst tier of finance, where headcount is already under pressure from AI tools, this capability directly affects hiring and retention decisions.

The S&P Global and LSEG integrations target credit analysis and financial research workflows at banks, asset managers, and insurance companies. These are populations of workers who have invested in highly specific technical credentials - CFA designations, credit analyst training, quantitative finance backgrounds - that are now being partially displaced by tools that do not require those credentials to produce comparable outputs.

The Apollo integration for sales operations targets the Sales Development Representative role specifically. The SDR function - researching prospects through tools like Apollo, personalising outreach based on that research, managing follow-up sequences, and qualifying leads - was one of the fastest-growing categories of tech company employment in the 2015 to 2022 period. Cowork with Apollo integration can perform this entire workflow autonomously, at a volume and personalisation level that no individual SDR could match. The role, already under pressure from earlier generations of sales automation tools, now faces existential pressure from Cowork.

The DocuSign and Google Drive integrations target document management, contract administration, and administrative workflow automation. These are functions performed at every level of enterprise and professional service organisations - legal assistants, executive assistants, HR coordinators, and operations administrators. The people most affected are those whose role consists primarily of managing, routing, and processing documents through defined workflows, tasks that Cowork can automate end-to-end.

Claude in Chrome - The Browser Agent Dimension

Claude in Chrome, Anthropic’s browser integration, allows Claude to operate directly within the web browser environment - navigating websites, filling forms, extracting data, and interacting with web applications in ways that directly replicate a significant portion of knowledge worker web-based activity. This capability is particularly relevant for roles whose primary activity involves working with web-based tools: CRM systems, project management platforms, web-based analytics dashboards, customer-facing web portals, and the many enterprise systems that are accessed through browser interfaces.

The practical employment implications are most acute for roles in customer-facing web administration, data collection and entry from web sources, online research synthesis, and any function that involves systematically interacting with web-based systems in defined, repeatable ways. When Claude in Chrome can log into a CRM, review customer records, generate follow-up communications, and update account statuses autonomously, the need for humans to perform those operations manually disappears.

Claude in Excel and Claude in PowerPoint

Two of Anthropic’s beta products are particularly targeted at the populations of knowledge workers whose primary productive function involves creating and maintaining Excel spreadsheets and PowerPoint presentations. These are, in aggregate, some of the most widely deployed work tools in the world. The typical financial analyst builds dozens of Excel models per year. The typical management consultant produces hundreds of PowerPoint slides. The typical project manager maintains complex Excel trackers across multiple simultaneous engagements.

Claude in Excel can create, modify, and analyse spreadsheets based on natural language instructions. A user can describe the structure of a financial model they want, specify the data sources, define the calculation logic, and have Claude produce a complete, formatted spreadsheet with working formulas, charts, and documentation. Claude in PowerPoint can create and modify presentations, generate slide content from briefs, reformat existing decks for different audiences, and produce entire presentation packages from source materials.

These beta products target, between them, an enormous share of the quantifiable work output of the financial services and consulting sectors - two of the highest-margin, highest-pay knowledge work sectors in the economy. When the primary deliverable of an analyst or consultant role can be produced autonomously by AI with minimal human direction, the economic model of selling analyst hours at premium billing rates changes fundamentally.

Claude Code for Cybersecurity

Anthropic’s addition of cybersecurity capabilities to Claude Code - specifically the ability to scan codebases for security vulnerabilities and suggest targeted patches - introduces Claude into the cybersecurity profession. This is a sector that has been considered relatively AI-resistant because of the creative and adversarial nature of security work - attackers constantly find novel vulnerabilities, and defenders must find them first.

The code vulnerability scanning function that Claude Code performs is not the creative, adversarial dimension of cybersecurity. It is the systematic, comprehensive review dimension - ensuring that known vulnerability patterns are not present in a codebase, that security best practices are followed, and that obvious attack surfaces are identified. This systematic review work has historically required significant human effort, particularly in large codebases. Automating it reduces the need for junior security analysts who perform this function.
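To make concrete what the "systematic review dimension" means, here is a deliberately crude sketch of the lexical layer of that work - walking a codebase and flagging known insecure patterns. The patterns and paths are illustrative assumptions; Claude Code's scanning reasons about code semantically rather than matching strings, but the category of reviewing labour being automated is the same.

```python
# Toy illustration of systematic security review: walk a repository and flag
# lines matching known insecure patterns. A crude lexical stand-in for the
# semantic analysis Claude Code performs, shown only to make the category
# of automated work concrete.
import re
from pathlib import Path

# Hypothetical pattern list; real reviews use curated, CWE-mapped rule sets.
SUSPECT_PATTERNS = {
    "shell injection risk": re.compile(r"subprocess\..*shell\s*=\s*True"),
    "eval on dynamic input": re.compile(r"\beval\("),
    "hardcoded credential":  re.compile(r"(password|api_key)\s*=\s*['\"]"),
}

def scan(repo_root: str) -> list[tuple[str, int, str]]:
    findings = []
    for path in Path(repo_root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for label, pattern in SUSPECT_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

for file, line, label in scan("./src"):  # "./src" is an assumed repo path
    print(f"{file}:{line}: {label}")
```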

The cybersecurity sector has been one of the few IT categories with consistent employment growth even through the broader layoff wave, because the threat landscape keeps expanding. Claude Code’s security scanning capability does not eliminate this growth, but it may slow it by allowing smaller security teams to review larger attack surfaces with the same or greater thoroughness.


Part Twelve: The Numbers Behind Anthropic’s Growth and Their Employment Mirror Image

Revenue Growth and What It Represents

Anthropic's revenue growth from approximately $1 billion at the start of 2025 to an estimated $19 billion in annualised revenue in March 2026 - growth of more than 18x in approximately 14 months - represents one of the most extraordinary revenue scaling events in the history of enterprise software. To put this in context: Salesforce, one of the most successful enterprise software companies ever built, took approximately eight years from its founding to reach $1 billion in annual revenue. Anthropic added $18 billion of annualised revenue in the fourteen months after crossing its own first billion.

This revenue growth has a direct mirror image in employment displacement. Every dollar of Anthropic’s revenue comes from an enterprise customer that is using Claude to perform a function. Some of those functions are genuinely new - things that were not previously done at all because they were too expensive or time-consuming to perform manually. But most of Anthropic’s enterprise revenue comes from customers using Claude to perform functions that were previously performed by human workers, performed by less capable software, or performed at lower quality or efficiency.

The revenue from Claude Code alone - $2.5 billion annualised as of February 2026 - represents enterprise spending on a product whose entire value proposition is that it reduces the human effort required to produce software. Every dollar spent on Claude Code is money that is not being spent on the human engineers who would previously have produced the same output. The ratio is not one-to-one - Claude Code enables individual engineers to produce more output, so some of the revenue comes from enabling expanded output rather than from replacing existing output. But the structural direction is clear: Claude Code revenue growth is inversely correlated with the growth in software engineering employment per unit of software output.

The Customer Concentration Data

Anthropic’s customer data tells an employment story that is sometimes obscured by the focus on total customer counts. The company reports more than 300,000 business customers and more than 500 customers spending over $1 million annually. Eight of the Fortune 10 are Claude customers.

The Fortune 10 relationship is particularly significant. The ten largest companies in the United States by revenue are Amazon, Walmart, Apple, UnitedHealth Group, Berkshire Hathaway, CVS Health, Exxon Mobil, Alphabet, Microsoft, and McKesson. At least eight of these companies are now Claude customers. Their combined employment is approximately 3 million people in the United States alone, and their combined global employment is considerably higher.

When Walmart, UnitedHealth Group, and CVS Health - three of the largest employers in the United States - are using Claude to automate administrative, analytical, and operational workflows, the scale of the employment impact is vast. These are not technology companies with relatively small workforces. They are companies with hundreds of thousands of workers in roles that include significant amounts of document processing, data analysis, customer communication, and administrative coordination - all functions with high Claude task coverage rates according to the Economic Index.

The 7x growth in customers spending over $100,000 annually on Claude in the past year indicates that the deepest enterprise adoption - where Claude is embedded in critical business workflows rather than used for peripheral tasks - is growing faster than overall customer counts. Deep workflow embedding is where the most permanent employment displacement occurs, because it represents replacement of human workflow rather than supplementation of human capability.

The Compute Investment and Its Implications

Anthropic has committed to $30 billion in computing capacity through its partnerships with Microsoft and Nvidia, in addition to its massive usage of AWS Trainium chips and Google TPUs. The company expects to bring over one gigawatt of AI compute capacity online in 2026. These are not incremental investments in marginal capability improvements. They are the infrastructure required to serve a customer base that is growing at the rate Anthropic’s revenue trajectory implies.

The compute investment signals continued capability advancement. More compute enables training of more capable models, which enables the deployment of more ambitious agentic systems, which enables the automation of more complex human workflows, which affects more knowledge worker employment categories. The $30 billion compute commitment is therefore a commitment to the continuation and acceleration of the disruption trajectory - at a scale that no previous enterprise software company has been able to sustain.


Part Thirteen: Geographic Specifics - Where Anthropic’s Impact Is Felt Most

San Francisco: The Epicentre’s Complicated Relationship with the Tool

San Francisco is simultaneously the city most dependent on technology sector employment and the home of Anthropic, the company most aggressively disrupting that employment base. The tension is not lost on the city’s residents or its economic planners.

As documented in the previous article in this series, San Francisco’s commercial office vacancy rate reached 36.7% in Q1 2026. The technology sector, which has been the primary driver of San Francisco’s extraordinary economic development over the last two decades, is contracting its physical footprint even as the AI companies within it are growing rapidly in revenue and valuation.

Labour economist Enrico Moretti at UC Berkeley offers the most cited optimistic view of San Francisco’s specific situation: “San Francisco is probably the only city in the US where, on net, the AI revolution will add to employment and not subtract.” His reasoning is that the early phase of AI development concentrates research and technical employment in San Francisco, and that as AI moves from research to commercial deployment, the scale of that deployment will create more employment in the Bay Area than the displacement it causes elsewhere.

Moretti’s argument may be correct at the level of aggregate employment in the San Francisco metro area. But it obscures the distributional consequences within the area: the software engineer in Menlo Park who was earning $250,000 developing enterprise applications and loses that job to Claude Code does not automatically transition into one of the AI research roles that Moretti expects to grow. The skills gap between displaced enterprise engineers and the researchers being hired at Anthropic and OpenAI is substantial.

Additionally, the jobs being created at Anthropic, which employs well under 10,000 people, are far fewer in number than the jobs being displaced across the technology sector by Anthropic’s products. The argument that Anthropic’s growth creates enough employment to offset the displacement is not supported by the ratio of Anthropic’s headcount to the scale of displacement its products generate.

Bengaluru: Ground Zero for the IT Services Transition

Of all the cities affected by Anthropic’s enterprise deployment, Bengaluru faces the most acute combination of scale, concentration, and limited alternative employment options. The city’s technology employment is almost entirely concentrated in IT services companies - TCS, Infosys, Wipro, and their peers - rather than distributed across a diverse technology sector that includes product companies, AI labs, and hyperscalers. When IT services employment contracts, Bengaluru’s economy contracts with it in ways that Silicon Valley does not, because there is no Anthropic-equivalent employer growing in Bengaluru to partially absorb the displaced workforce.

The Anthropic-Infosys partnership means that the AI tool most directly compressing Infosys’s delivery headcount is being built and managed by a company headquartered 12,000 kilometres away, whose leadership has acknowledged the disruption while continuing to accelerate it, and whose economic gains will accrue primarily to investors in the United States, Singapore, and the Gulf while the employment consequences are distributed among workers in Bengaluru, Hyderabad, and Pune.

This geographic asymmetry in who captures the gains and who bears the costs of the AI transition is one of the most politically and socially significant dimensions of the current moment. Anthropic’s $380 billion valuation reflects the aggregate assessment of global investors about the value of what the company has built. The workers in the Indian IT services industry whose employment it is displacing will not share in that valuation. The policy and redistribution frameworks that could bridge that gap do not yet exist.

Seattle: Amazon’s Company Town Feels the Ripples

Seattle’s specific relationship with the Anthropic story runs through Amazon. Amazon has invested $8 billion in Anthropic, making it the single largest investor. Amazon has also simultaneously eliminated approximately 30,000 corporate positions - many concentrated in Seattle and Bellevue - while describing the cuts as an “anti-bureaucracy push” partially enabled by AI tools. Claude, deployed through AWS Bedrock, is among the AI tools enabling that efficiency improvement.

The irony of Amazon investing in the company whose tools are partially justifying its own workforce reduction has not been lost on analysts or on displaced Amazon employees. From Amazon's perspective, the investment is strategically rational: if AI tools are going to compress headcount requirements, it is better to be the primary cloud provider for the most widely adopted AI model than to be the enterprise customer paying for someone else's AI deployment. The efficiency gains from AI-driven headcount reduction and the revenue gains from being Anthropic's cloud partner can coexist as positive financial outcomes for Amazon, regardless of the employment consequences for the workers eliminated in the process.

Seattle’s commercial real estate market reflects this dynamic. As Amazon reduces its headcount and its corresponding office space requirements, the sublease availability in Seattle’s tech corridor has increased 22% year over year. The businesses - restaurants, retail, services - that built themselves around Amazon employee spending are experiencing a secondary contraction that compounds the primary employment loss.


Part Fourteen: The Voices of Those Affected

What Software Engineers Are Actually Saying

The qualitative experience of the software engineers most directly affected by Claude Code's deployment is documented across public forums, social media, and journalistic accounts in ways that aggregate statistics cannot capture. The "Claude Christmas" accounts are the clearest collective record of what it actually feels like to encounter a tool that can replace significant portions of your professional function.

One Silicon Valley engineer, quoted anonymously in a February 2026 article in the SF Standard, described working with the November 2025 Claude Code release over the holiday break: “I gave it a project I had been avoiding for months - a complete rewrite of our recommendation engine. I expected to spend three weeks on it after the holiday. Claude Code had a working version in six hours. Not a prototype. A working version with tests and documentation. I sat with that for a long time.”

The psychological experience of watching an AI tool accomplish in hours what would take a skilled professional weeks is different in kind from watching it autocomplete a function. It is not an efficiency improvement to an existing professional identity. It is a displacement of that identity. The engineers who went through this experience are now navigating a professional identity crisis that has not been adequately addressed in the public discourse around AI and employment, which tends to be either triumphalist (new tools enable more creativity) or apocalyptic (all jobs will be replaced) without engaging with the specific emotional experience of the transition.

The “permanent underclass” gallows humour that has spread through San Francisco’s tech community reflects this specific psychological position: the awareness that one is on the wrong side of a capability gap that is widening faster than one can close it through retraining, combined with the financial security that still exists (for those still employed) and therefore prevents the urgency that might drive faster adaptation.

What Indian IT Workers Are Saying

The qualitative experience in India is different in its specific texture but parallel in its fundamental character. Engineering graduates who received TCS or Infosys offer letters in 2024 and are still waiting for joining dates in March 2026 have developed an entire language and community around what they call the “joining date limbo.” Forums including Reddit’s r/developersIndia, LinkedIn communities, and specific Telegram groups for affected candidates document the experience in detail.

What emerges from these communities is a picture of enormous uncertainty that has qualitatively different dimensions from the more financially cushioned uncertainty of Silicon Valley engineers. For the engineering graduate from a family that borrowed for the engineering education on the expectation of an IT sector job, the uncertainty is financial in a more immediate and severe sense. The TCS or Infosys offer letter was not just a career opportunity. It was a family economic event, a vindication of the educational investment, and a social status marker in a community where IT employment carries enormous meaning.

When that offer letter sits unfulfilled for seven, nine, twelve months while the companies cite “business conditions” and “fresher intake planning,” the human cost is measured not just in the income not earned but in the family conversations not had, the loans not paid, and the social expectations not met. The AI efficiency improvements that are partially responsible for this situation are generating their benefits in the quarterly earnings of companies headquartered far from the communities bearing the cost.


Part Fifteen: The Counterfactual - What If Anthropic Did Not Exist?

It is worth examining, briefly, the counterfactual: if Anthropic had not been founded, or had been less successful, would the IT sector layoff wave look different?

The honest answer is: not dramatically. OpenAI, Google DeepMind, Meta AI, and other frontier labs are all building AI systems with comparable capabilities on comparable timelines. The capabilities that Claude Code demonstrates - agentic software development, long-context reasoning, multi-step workflow automation - are not unique to Anthropic. OpenAI's Codex, Google's Gemini Code Assist, and Meta's Code Llama are all developing along parallel trajectories. If Anthropic did not exist, enterprise customers would be deploying the alternatives, and the employment consequences would be similar.

What is distinctive about Anthropic’s contribution is not the existence of the disruption but its velocity and its specific enterprise penetration pattern. Anthropic’s focus on enterprise reliability, safety, and compliance has made Claude the AI of choice for the most consequential enterprise deployments - in regulated industries, in national security, in the Fortune 10. These are the deployments with the largest employment footprints, and the quality and reliability of Claude’s performance in those contexts has driven adoption faster than would likely have occurred with less enterprise-focused alternatives.

Anthropic has also, through the Economic Index and its labour market research, produced the most detailed and credible public documentation of the employment disruption already underway. By making this data public, Anthropic has ensured that the disruption cannot be dismissed as theoretical or speculative - its own usage data proves that computer programmers are seeing 75% of their tasks handled by AI in production settings. This transparency is genuinely valuable for policy and for workers, even though it comes from the same company creating the disruption.

The counterfactual assessment therefore reaches a nuanced conclusion: the IT sector layoff wave would exist without Anthropic, but Anthropic’s specific contributions - enterprise reliability, safety reputation, and the Infosys-style partnerships that embed Claude in large-scale IT services delivery - have accelerated and deepened the disruption in specific enterprise and regulated industry contexts that are central to the employment narrative.


Postscript: The Small Book

The pocket-sized copy of “Machines of Loving Grace” that Anthropic hands to visitors is an act of unusual honesty. Most technology companies do not invite visitors to read extended documents about their belief systems before they proceed to business discussions. Anthropic’s decision to do this reflects the genuine seriousness with which the company’s founders hold the philosophical stakes of what they are building.

The essay is worth reading in full by anyone trying to understand Anthropic's specific role in the employment story of 2026. It is not a corporate communication document. It is a sincere attempt by a person who has thought deeply about the consequences of what he is building to articulate both why he is doing it and what he fears the consequences might be. The fears are present. The phrases "unemployed underclass" and "wealth concentration exceeding the Gilded Age" are not rhetorical flourishes. They are outcomes that Amodei considers genuinely possible and genuinely alarming.

What the essay does not contain is a mechanism for preventing those outcomes while continuing to build at the pace Anthropic is building. The essay calls for policy responses, for redistribution, for social safety nets, for the conversation about the future of work that should not be left solely to AI companies to define. What it does not contain is a slowdown.

The question that the twenty companies analysed in the previous article and Anthropic’s specific role analysed here collectively raise is not whether AI disruption is happening. The data is unambiguous on that point. The question is who should bear the cost of the transition and who should capture the gains. The answer that the current market structure is producing is deeply asymmetric: investors in Anthropic and its competitors are capturing enormous gains at $380 billion valuations, while the workers whose functions are being automated are absorbing the costs in the form of slower hiring, deferred joining dates, eliminated roles, and an uncertain retraining pathway into an economy whose shape nobody can clearly see.

Whether that asymmetry can be corrected, at the pace the disruption is happening, through the policy mechanisms that Amodei and others are advocating for, is the most important open question in the global labour market right now. The small book does not answer it. Neither does this article. But asking it clearly, without the softening language of “restructuring” and “workforce transformation” that obscures the human stakes, is the necessary beginning.




Part Sixteen: A Deeper Look at the Anthropic Economic Index - Methodology and Implications

How the Index Actually Works

The Anthropic Economic Index deserves extended examination because it is the most methodologically robust publicly available dataset for understanding which specific knowledge work tasks are already being automated by AI in production settings. Most AI employment analyses rely on theoretical capability assessments - asking whether AI could, in principle, perform a given task. The Economic Index measures what AI is actually doing, in real workflows, at enterprise scale.

The methodology begins with Anthropic’s first-party data on how Claude is being used across millions of professional interactions. The company can observe, at an aggregated and privacy-preserving level, the categories of tasks being submitted to Claude, the types of outputs being requested, and the professional contexts in which those requests are made. This usage data is then mapped to the Standard Occupational Classification (SOC) codes that the US Bureau of Labor Statistics uses to categorise employment, creating a bridge between AI usage patterns and employment data.

The measure of "observed exposure" is constructed for each occupation by combining two dimensions: what Claude is theoretically capable of doing with the tasks in that occupation (theoretical capability), and how frequently Claude is actually being used for those tasks in production (observed usage). The combined metric is more useful than either dimension alone: theoretical capability without observed usage may reflect potential that has not yet been deployed; observed usage without theoretical capability context may understate future exposure as capability improves.
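The precise weighting behind the Index is not described here, but the shape of the calculation can be sketched. The version below uses a geometric mean to combine the two dimensions and entirely invented figures; both choices are assumptions for illustration only.

```python
# Sketch of the *shape* of an observed-exposure calculation: combine a
# theoretical task-capability share with an observed usage share per SOC
# occupation. Weighting scheme and all figures are invented for illustration.
occupations = {
    # SOC code and title: (theoretical capability share, observed usage share)
    "15-1251 Computer Programmers":       (0.94, 0.75),
    "43-9021 Data Entry Keyers":          (0.90, 0.70),
    "23-1011 Lawyers":                    (0.60, 0.15),
    "25-2021 Elementary School Teachers": (0.45, 0.08),
}

for soc, (theoretical, observed) in occupations.items():
    # Geometric mean (an assumption): scores high only when tasks are both
    # automatable in principle AND automated in practice.
    exposure = (theoretical * observed) ** 0.5
    gap = theoretical - observed  # unrealised near-term automation potential
    print(f"{soc}: exposure={exposure:.2f}, capability gap={gap:.2f}")
```

The "capability gap" column in this sketch corresponds to the future-exposure argument made below: occupations with a large gap are those where adoption, not capability, is the current constraint.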

The February 2026 data shows several patterns that are important for understanding the current and near-future employment landscape:

The top exposed occupations by observed measure are concentrated in three SOC categories: Computer and Mathematical occupations, Office and Administrative Support, and Business and Financial Operations. These are not the manual, routine occupations that popular narratives about AI automation focused on in the 2015 to 2020 period. They are precisely the occupations that the white-collar professional class considers the most secure and career-stable.

The gap between observed and theoretical exposure is largest in Legal, Healthcare, and Education occupations. The theoretical AI capability to handle tasks in these occupations is substantial - judges, physicians, and teachers perform many tasks that AI could theoretically assist with or replace. The observed usage is lower than the theoretical ceiling, reflecting the legal and regulatory constraints, liability considerations, and institutional inertia that slow AI adoption in these professions. This gap represents future exposure potential: as constraints ease and tools improve, these occupations will see adoption acceleration.

The occupations with the smallest gap between theoretical and observed exposure - meaning AI adoption is most mature relative to capability - are Computer Programmers, Data Entry workers, and Customer Service Representatives. These are the occupations where the theoretical case for AI automation was clearest and where enterprise adoption has proceeded fastest. They are the leading edge of a disruption pattern that will extend to other occupations as the constraints limiting adoption in other areas are progressively removed.

The BLS Projection Correlation

One of the most important findings in Anthropic’s March 2026 study is the correlation between observed AI exposure and the Bureau of Labor Statistics’ employment projections through 2034. Occupations with higher observed AI exposure are projected to grow more slowly - or decline - through 2034, even in BLS projections that were made before the most recent accelerations in AI capability.

This correlation is significant because BLS projections are typically conservative - they are made by professional labour economists who historically underestimate the pace of technological displacement. If even conservative BLS projections show slower growth for AI-exposed occupations, the actual employment trajectory in those occupations is likely to be more adverse than the projections suggest.

The occupations showing both high current observed exposure and projected employment decline through 2034 include Computer Programmers, File Clerks, Data Entry Keyers, Word Processors, and Customer Service Representatives in insurance. The total employment in these specific occupations is in the millions. The employment decline projected by BLS for these roles over the next eight years, when adjusted for the likely acceleration of AI capability and adoption, may be substantially front-loaded rather than gradual.

What the Data Does Not Yet Show

Honest interpretation of the Economic Index requires acknowledging what the data cannot yet show. The Index measures current observed exposure based on current Claude usage patterns. It does not reflect the usage patterns that will exist when Claude Code's coverage of programming tasks climbs from the current 75% toward the 94% theoretical ceiling, or when Cowork's plugins are embedded in workflows across the Fortune 1000, or when the next generation of Claude models dramatically expands the capability ceiling in occupations currently below the observed exposure frontier.

The March 2026 study’s finding of “no systematic increase in unemployment for highly exposed workers” is therefore a statement about the past eighteen months, not a prediction about the coming eighteen months. The capabilities that are accelerating the disruption were not yet deployed for most of the period the study covers. Claude Code reached general availability only in May 2025. Cowork launched in January 2026. The full enterprise adoption of these tools at scale will be reflected in Economic Index data published in 2027 and 2028, not in the March 2026 report.

This temporal gap between capability release, enterprise adoption, and observable labour market impact is the mechanism through which AI companies can simultaneously claim that their research shows “no systematic increase in unemployment” and that their products are among the most consequential workforce disruption tools ever deployed. Both statements are accurate but refer to different time horizons.


Part Seventeen: The Constitutional AI Approach and Whether It Changes the Employment Calculus

Constitutional AI: What It Is and What It Is Not

One of Anthropic’s most distinctive technical contributions is Constitutional AI (CAI), the training methodology that uses a set of principles (the “constitution”) to guide Claude’s behaviour across a wide range of deployment contexts. CAI is designed to make Claude more helpful, harmless, and honest by training it to critique and revise its own outputs based on the constitutional principles rather than relying solely on human feedback for every possible response.
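The critique-and-revise pattern at the heart of CAI can be illustrated at inference time, with an important caveat: actual Constitutional AI applies this loop during training, baking the revisions into the model's weights via fine-tuning and reinforcement learning, rather than running it per request. The sketch below uses Anthropic's Python SDK; the model identifier and the one-line principle are placeholder assumptions, not Anthropic's published constitution.

```python
# Inference-time sketch of the critique-and-revise pattern that Constitutional
# AI uses during *training*. This only mirrors the mechanism's shape; real CAI
# distils the revisions into model weights. Principle text is a paraphrase.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-5"  # assumed model identifier
PRINCIPLE = "Choose the response that is most helpful while avoiding harm."

def ask(prompt: str) -> str:
    msg = client.messages.create(model=MODEL, max_tokens=1000,
                                 messages=[{"role": "user", "content": prompt}])
    return msg.content[0].text

draft = ask("Draft an email pressing a late-paying customer for payment.")
critique = ask(f"Critique this draft against the principle '{PRINCIPLE}':\n{draft}")
revision = ask(f"Rewrite the draft to address the critique.\n"
               f"Draft:\n{draft}\n\nCritique:\n{critique}")
print(revision)
```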

From a safety perspective, CAI represents a genuine technical advance: it produces models that are more reliably aligned with human values across a wider range of contexts than models trained purely on human preference data. This matters significantly for high-stakes deployments in regulated industries, national security, and applications where model behaviour needs to be predictable and auditable.

What CAI does not change is the employment impact calculus. Whether Claude declines harmful requests because of constitutional AI training or for any other reason, its core productivity-enhancing and workflow-automating capabilities - the features that drive employment displacement - are unchanged. Constitutional AI makes Claude safer to deploy at enterprise scale, but it also makes it more trustworthy and therefore more adoptable at enterprise scale, which accelerates rather than moderates the deployment that produces employment consequences.

In the IT employment context specifically, Constitutional AI’s primary relevance is that it makes regulated industry clients - banks, insurance companies, healthcare organisations, government agencies - more willing to deploy Claude in production workflows with real compliance and audit requirements. This is the specific adoption unlocking that Anthropic’s enterprise strategy has relied on: being the AI that regulated industries trust. That trust, built on technical safety features like CAI, translates directly into the enterprise deployment that drives the revenue growth and the workforce disruption simultaneously.

The Interpretability Research and What It Could Theoretically Enable

Anthropic’s mechanistic interpretability research - the work of understanding how Claude’s neural networks implement their computations at a mathematical level - represents a longer-horizon research programme that could, in principle, produce tools for understanding and predicting AI behaviour in ways that would help manage deployment risks.

For employment specifically, mature interpretability tools could theoretically enable more precise predictions of which tasks Claude would perform reliably in a given deployment context, allowing more targeted deployment that captures productivity benefits while preserving employment in tasks that require human judgement. Rather than deploying Claude broadly across all functions in a workflow and then discovering which human tasks it has displaced, interpretability tools might allow deployment to be precisely scoped to the tasks where AI assistance is genuinely beneficial without displacing human roles that remain valuable.

This is a theoretical future-state, not a current reality. Anthropic’s interpretability research is at an early stage, and the gap between academic advances in neural network interpretability and production deployment tools for enterprise AI governance is wide. But it represents one potential pathway through which safety-focused AI research could, in the long run, produce deployment practices that are more targeted and less broadly disruptive than the current “deploy broadly and measure consequences” approach.


Part Eighteen: What Workers and Students Should Do

The macro analysis of Anthropic’s role in the IT layoff wave is genuinely important for policy, for corporate governance, and for social understanding of the transition we are in. But it has limited direct utility for the software engineer navigating a changed job market, the engineering student planning a career, or the IT services professional trying to understand whether their specific role is at risk. This section provides the most evidence-based guidance available for those individual decisions.

For Current Software Engineers

The pattern of Claude Code adoption in enterprise environments suggests several specific career strategies for engineers navigating this environment.

Move up the abstraction stack. The roles most resilient to Claude Code disruption are those operating at higher abstraction levels than the tasks Claude Code handles most effectively. Claude Code excels at implementation - writing code to specification. It is considerably less capable at requirements analysis, architecture design, system integration strategy, and the product thinking that determines what should be built rather than how to build it. Moving toward roles that concentrate on the “what” and “why” of software systems rather than the “how” creates distance from the specific tasks most automated by Claude Code.

Develop AI-native skills. The engineers most valuable in the current environment are those who can work with Claude Code and similar tools not just as users but as designers and validators - understanding how to architect agentic systems, how to validate AI-generated code for correctness and security, and how to design workflows that intelligently incorporate AI components. These are skills that require investment in understanding AI system behaviour at a technical level, not just using AI tools as black boxes.
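One concrete form the validation skill takes is an acceptance gate: AI-generated code is merged only after it passes the project's own test suite in isolation. A minimal sketch, assuming a pytest-based project and hypothetical file paths:

```python
# Minimal acceptance gate for AI-generated code: run the project's tests
# against the candidate module in a sandboxed directory before merging.
# Paths and the pytest convention are assumptions about the host project.
import subprocess
import tempfile
from pathlib import Path

def accept_generated_code(generated: str, test_file: str) -> bool:
    """Write the generated module next to its tests and run pytest.

    Returns True only if the full suite passes; otherwise the generated
    code is rejected and never reaches the main branch.
    """
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "candidate.py").write_text(generated)
        Path(workdir, "test_candidate.py").write_text(Path(test_file).read_text())
        result = subprocess.run(
            ["python", "-m", "pytest", workdir, "-q"],
            capture_output=True, text=True, timeout=120,
        )
        return result.returncode == 0

candidate = "def add(a, b):\n    return a + b\n"
print(accept_generated_code(candidate, "tests/test_add.py"))  # hypothetical path
```

Engineers who build and own gates like this sit above the generation step rather than competing with it, which is the practical meaning of "designer and validator" rather than "user".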

Build domain depth alongside technical skill. The Infosys-Anthropic partnership and similar enterprise AI deployments create demand for engineers who combine technical AI skills with deep domain expertise in specific industries. The engineers who understand both how to build Claude-powered agents and how regulated financial services workflows actually operate are far more valuable than those with only one dimension. Domain depth in healthcare, financial services, manufacturing, or other regulated industries creates a combination that AI tools currently cannot replicate.

For Engineering Students Entering the Market

For students currently preparing for IT sector entry, the strategic landscape has changed in ways that the institutions preparing them have not fully absorbed.

The traditional Indian IT services entry path - clear the NQT, join the ILP, build Java or testing skills in the first years of employment - is under direct pressure because the tasks those engineers were trained to perform are precisely the tasks most exposed to Claude’s current capabilities. This does not mean the pathway has closed, but it means that the skills expected of entry-level engineers at the start of their careers are shifting faster than most college curricula are adapting.

Students building the strongest profiles for the current market are combining foundational engineering skills (data structures, algorithms, system design) with hands-on AI tool experience (Claude Code, Python AI libraries, cloud AI services) and demonstrable project portfolios that show AI-enhanced development capability. The TCS NQT Preparation Guide provides the foundational technical skills base that remains relevant and is still tested in selection processes, while the TCS ILP Preparation Guide covers the initial training programme. Building on these foundations with visible AI tool proficiency is the combination that differentiates candidates in the current hiring environment.

The GCC pathway - joining a Global Capability Centre of a US or European company in India rather than a traditional IT services firm - should be seriously explored by high-ability students. GCCs typically pay 40% to 60% more than IT services entry roles, offer technically more challenging work, and have a lower exposure to the specific AI efficiency pressures that are contracting IT services headcount. The jobs available at JPMorgan’s Bengaluru GCC, Goldman Sachs’s Hyderabad tech centre, or Walmart’s India technology organisation are qualitatively different from IT services roles and are growing while IT services roles contract.

For IT Services Mid-Career Professionals

The mid-career professional - the five to fifteen year experienced engineer, delivery manager, or project coordinator at a major Indian IT firm - faces the most complicated navigation challenge. They are too experienced to benefit easily from the mass fresher hiring that continues at reduced volumes, and they are exposed to the AI-driven compression of the delivery functions their roles support.

The most evidence-based advice for this cohort combines several strategies.

Invest in AI tool proficiency actively rather than treating it as a nice-to-have. The engineers who demonstrate genuine working knowledge of Claude Code, AI testing frameworks, and AI-enhanced delivery workflows are the ones being retained and redeployed rather than benched.

Pursue certifications and training in cloud platforms, specifically in the AI services layers of AWS (Amazon Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Azure AI Foundry). These are the infrastructure layers through which Claude and other AI models are deployed, and proficiency in managing those deployments is in genuinely high demand.

Build client-facing skills actively if currently in a back-office delivery role. Client relationship management - understanding client requirements, communicating project status, managing stakeholder expectations - is among the functions least automatable by current AI tools and most valuable to IT services companies that need to maintain revenue from AI-transformed delivery models.


All guidance in this section is based on the observable patterns in current hiring data, Anthropic’s Economic Index research, and the documented restructuring decisions at the twenty companies analysed in this series. It represents the best available evidence-based assessment as of the date of publication, in a rapidly changing environment where new developments may alter the outlook significantly.


Part Nineteen: Anthropic vs the World - A Competitive Analysis with Employment Implications

How Claude Compares to OpenAI’s ChatGPT/Codex on Employment Impact

The competitive distinction between Anthropic and OpenAI that matters most for the employment story is enterprise penetration depth rather than raw capability comparison. Both companies produce models at the frontier of AI capability. The differences in their employment impact are a function of how deeply each has been embedded in the enterprise workflows that are most employment-intensive.

Anthropic's penetration of the enterprise market climbed from approximately 4% of US companies paying for AI tools in early 2025 to 20% in January 2026, even as OpenAI's share fell from 50% to 27% over the same period. Measured by enterprise AI spending rather than customer count, Anthropic's share now stands at approximately 40%. That spending share represents a concentration in the enterprise segment that is particularly employment-significant: enterprise deployments in regulated industries and large organisations are the deployments with the highest human headcount exposure per deployment.

OpenAI’s Codex and ChatGPT Enterprise are broadly competitive with Claude Code and Claude for business in terms of raw capability. The enterprise market dynamic has shifted toward Anthropic for several reasons that are relevant to the employment story: the safety and compliance positioning that makes Claude more trustworthy for regulated industry deployment; the constitutional AI training that reduces output variability in high-stakes settings; and the specific relationship architecture with AWS Bedrock that embeds Claude as the default AI model for the largest enterprise cloud platform.

The practical employment implication is that the IT sectors where Claude has achieved the deepest penetration - financial services, government, healthcare, and the IT services sector through the Infosys partnership - are also the sectors where enterprise AI adoption is having the most concentrated employment consequences. OpenAI’s enterprise penetration is broader but shallower on average, with stronger concentration in the creative and marketing sectors. The employment impact is real in both cases, but Claude’s specific enterprise deepening in employment-intensive regulated industries gives Anthropic a disproportionate role in the employment story relative to its total market share.

The Google Gemini Comparison

Google’s Gemini models and Anthropic’s Claude compete directly on Google Cloud’s Vertex AI platform - a situation of unusual competitive intimacy where Google is simultaneously an investor in, a cloud provider for, and a direct competitor to Anthropic. The Gemini-Claude competition matters for the employment story because the enterprise customers choosing between them are often making the same deployment decisions with the same workforce implications.

Google’s specific advantage in enterprise AI deployment is its integration with Google Workspace - the productivity suite used by hundreds of millions of workers. Gemini’s deep embedding in Gmail, Google Docs, Google Sheets, and Google Meet gives it a deployment path into knowledge worker workflows that Anthropic has attempted to replicate through the Claude in Chrome extension and the Google Drive/Gmail integrations in Cowork. The competitive race between Anthropic and Google to embed AI agents most deeply in the productivity workflows of knowledge workers is directly correlated with the speed at which those workflows are automated and the associated employment consequences.

Meta’s Open Source Challenge

Meta's approach to the competitive landscape - releasing the Llama family of open-source models rather than selling proprietary API access - creates a different kind of competitive pressure with different employment implications. The Llama models, particularly Llama 4, have achieved performance levels that approach frontier proprietary models for many tasks, making them viable alternatives for enterprise deployments where cost is a primary consideration.

The employment implication of open-source AI models is potentially worse than the proprietary model dynamic, because the marginal cost of deployment is lower. A company deploying Claude through Bedrock pays per token - a cost that limits the economic incentive to automate workflows below a certain value threshold. A company deploying Llama on its own infrastructure has a near-zero marginal cost per token once the compute is provisioned, which means the economic threshold for workflow automation is dramatically lower. Functions that are too low-value to automate with paid API access may be economically viable to automate with self-hosted open-source models.
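The threshold argument is simple arithmetic. The sketch below uses assumed prices and token counts - none of them quoted rates - purely to show how a lower marginal cost pulls lower-value tasks over the automation line.

```python
# Back-of-envelope sketch of the automation threshold argument. All prices
# and token counts are assumptions for illustration, not quoted rates.
API_COST_PER_MTOK = 15.00      # assumed blended $/million tokens, paid API
SELF_HOSTED_PER_MTOK = 0.40    # assumed marginal $/million tokens, own GPUs
TOKENS_PER_TASK = 20_000       # assumed tokens to automate one task instance

api_cost = API_COST_PER_MTOK * TOKENS_PER_TASK / 1_000_000        # $0.30/task
hosted_cost = SELF_HOSTED_PER_MTOK * TOKENS_PER_TASK / 1_000_000  # $0.008/task

# A task is worth automating only if it creates more value than it costs;
# cutting marginal cost ~40x pulls far lower-value tasks over that line.
print(f"paid API:    automate tasks worth > ${api_cost:.3f}")
print(f"self-hosted: automate tasks worth > ${hosted_cost:.4f}")
```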

Anthropic’s response to the open-source competitive pressure has been to focus on the enterprise quality and reliability dimensions where proprietary models maintain advantages - the predictability, safety, and compliance features that regulated enterprises require and that open-source deployments struggle to guarantee. This is a defensible strategic position, but it means that the employment impact extends beyond Claude specifically to the broader frontier model ecosystem of which Claude is a part.

The Microsoft Azure AI Foundry Relationship

Microsoft’s investment of up to $5 billion in Anthropic and the availability of Claude on Azure AI Foundry creates a specific competitive and partnership dynamic that matters for the employment story in a way that is not immediately obvious. Microsoft Azure is the cloud platform of choice for many of the European enterprises covered in the previous article - SAP, Capgemini, and the major European financial institutions. When Claude is available through Azure AI Foundry, these European enterprises have access to Claude’s capabilities within their existing Azure infrastructure relationship, without requiring a separate Anthropic enterprise agreement.

This infrastructure embedding accelerates European enterprise adoption of Claude in ways that might otherwise have been slowed by procurement complexity. European restructuring - currently proceeding more slowly than in the US because of labour law constraints - may therefore accelerate somewhat as Claude's penetration through Azure AI Foundry deepens the availability of automation tools to European enterprises.


A Data Summary: Anthropic’s Numbers and Their Employment Mirror

For readers who want the key statistics in consolidated form, this summary presents the most important numbers from this analysis alongside their employment implications.

Anthropic’s Scale as of March 2026

Valuation: $380 billion post-Series G.

Revenue run-rate: approximately $19 billion annualised in March 2026, growing at approximately 10x annually for three consecutive years.

Enterprise customers: 300,000+ businesses; 500+ customers spending over $1 million annually; 8 of the Fortune 10.

Claude Code revenue: $2.5 billion annualised, more than doubled since the start of 2026.

Customers paying over $100,000 annually: grown 7x in the past year.

Share of US companies paying for AI tools: approximately 20%, up from approximately 4% a year earlier.

Share of US enterprise AI spending: approximately 40%.

The Employment Mirror

Observed AI task coverage of the Computer Programmer occupation: 75%, the highest of any occupation in Anthropic's Economic Index.

Theoretical AI capability ceiling for Computer and Mathematical occupations: 94%.

Gap between current coverage and ceiling: 19 percentage points, representing near-term automation potential not yet realised.

Drop in job finding rate for AI-exposed occupations since 2022: 14%.

Fall in employment among AI-exposed workers aged 22-25: 6% to 16%.

Enterprise software sector market capitalisation loss attributed to Anthropic product launches: approximately $2 trillion.

IBM single-day stock loss following the Claude Code COBOL announcement: approximately $40 billion.

The Pace Signal

At Anthropic’s current enterprise adoption growth rate, the observed AI coverage of the most exposed occupations will approach theoretical capability ceilings within a timeframe measured in months rather than years, based on the compound growth rates visible in the Economic Index data. When that occurs, the current labour market impact - characterised by hiring slowdowns rather than unemployment spikes - is likely to transition to a more severe phase characterised by direct role elimination rather than hiring moderation.
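The "months rather than years" claim is an extrapolation that can be made explicit. The sketch below assumes a constant compound monthly growth rate in observed coverage; that rate is not given here, so the 3% used is purely illustrative.

```python
# Sketch of the extrapolation behind the "months, not years" claim: time for
# observed coverage to reach the capability ceiling at a constant compound
# monthly growth rate. The 3% rate is an assumed figure for illustration.
import math

observed = 0.75        # current observed coverage, Computer Programmers
ceiling = 0.94         # theoretical capability ceiling
monthly_growth = 0.03  # assumed compound monthly growth in coverage

months = math.log(ceiling / observed) / math.log(1 + monthly_growth)
print(f"{months:.1f} months to ceiling at 3%/month")  # ~7.6 months
```

Halving or doubling the assumed growth rate moves the answer between roughly four and fifteen months; under no plausible rate consistent with the Index's recent trajectory does it stretch to years.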


This analysis was produced with research conducted as of March 25, 2026. The Anthropic Economic Index, competitive market share data, funding figures, and product announcements cited are drawn from primary sources including Anthropic’s own publications, CNBC, Fortune, Sacra Research, Time magazine, and SF Standard. The employment impact analysis draws on the methodology of Anthropic’s own March 5, 2026 labour market study. For career preparation resources that help engineering students and professionals build the skills most valued in the current AI-era IT market, the TCS NQT Preparation Guide and TCS ILP Preparation Guide at ReportMedic remain comprehensive foundations for technical skill development.


Epilogue: The Fifth Floor and the Small Book

Visitors to Anthropic’s San Francisco headquarters receive a copy of “Machines of Loving Grace” at reception. The building’s fifth floor, described by Time magazine as all warm wood and soft light, with windows looking out on a park and a portrait of Alan Turing on the wall, is where some of the most consequential decisions in the history of the technology industry are made.

Those decisions are made by people who are, by every available account, genuinely thoughtful about the consequences of what they are building. Dario Amodei thinks carefully about the “permanent underclass” risk. Deep Ganguli and his societal impacts team study the labour market data that documents the disruption Claude is producing. Boris Cherny says it will be “painful for a lot of people” and means it. These are not the cynical statements of people who have decided that profit justifies harm. They are the sincere statements of people navigating a genuine dilemma without a clean resolution.

The dilemma is this: the only way to prevent the worst possible versions of powerful AI from being built is to be at the frontier building the best possible versions. Being at the frontier requires revenue. Revenue requires enterprise deployment. Enterprise deployment disrupts employment. Disrupted employment is the cost of maintaining the frontier position that allows safety research to influence the trajectory of the technology. The logic is coherent. The consequences are real.

In Bengaluru, on the same day that Anthropic’s Series G was announced, hundreds of engineering graduates opened their phones and searched for updates on joining date notifications from TCS and Infosys. In Seattle, the programme managers whose roles had been eliminated in Amazon’s January sweep were updating their LinkedIn profiles with the careful language of the voluntarily mobile. In Stockholm, Ericsson engineers were attending consultations with their works council representatives about the timeline of the next round of redundancies. None of them received a copy of “Machines of Loving Grace.”

The history that Amodei invokes - the printing press that disrupted scribes, the mechanisation that disrupted weavers, the computing that disrupted keypunch operators - shows that technology transitions have always produced displacement before they produced abundance. The disrupted scribes did not live long enough to enjoy the literacy explosion the printing press eventually produced. The disrupted weavers did not benefit from the eventual prosperity of the industrial revolution. The question of whether the workers disrupted by AI will, in their own lifetimes, benefit from the prosperity the technology ultimately generates is not a historical question. It is a question about the policy decisions, redistribution mechanisms, and social institutions that societies will either build or fail to build in the next decade.

Anthropic has said that those decisions “shouldn’t be up to us.” That is probably correct. But Anthropic’s specific contribution to the urgency of those decisions - through the pace of its product development, the depth of its enterprise penetration, the velocity of its capability advancement, and the clarity of its own research documenting the disruption - means that it has a particular responsibility to be among the loudest advocates for the policies it says should be built.

The small book is the beginning of that advocacy. The advocacy that will determine whether it matters will not happen on the fifth floor of a San Francisco building with soft light and a park view. It will happen in legislatures and labour negotiations and educational institutions and community organisations and the individual career decisions of millions of workers trying to understand what the future holds for them.

This article has attempted to provide the most complete picture yet of one company’s specific and consequential role in the employment story of this moment. The picture is complicated because the company is genuinely complicated - more self-aware, more research-focused, and more honest about the costs of what it is building than most of its peers, and at the same time more effective at deploying those costs at enterprise scale than any predecessor. Understanding that complexity clearly is the prerequisite for responding to it adequately.


Published March 25, 2026. InsightCrunch is an independent technology analysis publication. For practice resources for IT careers including the TCS NQT Preparation Guide and full TCS ILP Preparation Guide, visit ReportMedic.