IBM down 10%: the signal isn’t “COBOL is dead”
A 10% single-day drop gets attention, but the interesting part isn’t “IBM loses because AI arrived.” The useful signal is that markets may be pricing in a shift in who captures value from mainframe modernization.
Multiple outlets reported that Anthropic positioned Claude Code as particularly strong for working with legacy code, including COBOL—reading it, explaining it, refactoring it, and supporting modernization/migration efforts where companies currently rely on expensive consulting, scarce skills, and slow processes.
If your business runs COBOL in production (banks, insurance, government, retail, logistics), the practical question is simple: can AI reduce cycle time and risk on the work that blocks roadmaps and burns budgets? And if it can, who captures that value—IBM as the platform vendor, AI tooling vendors like Anthropic, or the teams/integrators who operationalize the transformation?
Why an AI COBOL tool can worry IBM (for non-hype reasons)
IBM doesn’t “live off COBOL” in isolation. It benefits from ecosystems: mainframes, licensing, enterprise tooling, services, and long-term contracts. COBOL is part of the operational glue that keeps massive installations stable. When an external player claims it can significantly lower the barrier to working with legacy, it pressures three levers.
Lower operational lock-in
If a team can:
- understand code faster,
- generate tests,
- produce documentation,
- turn modules into services,
- modernize incrementally,
…then the cost of switching (or at least renegotiating) can drop. Even the perception of that risk can move a stock.
Margin compression on services
A lot of “legacy spend” is human work: impact analysis, reverse engineering, remediation, and test planning. If AI automates even 20–30% of repetitive tasks, value shifts toward whoever controls the pipeline and governance (prompts, policies, models), not necessarily whoever owns the hardware.
Narrative shift: from “fortress” to “transformable code”
Mainframes are often seen as solid but opaque. AI that makes COBOL more readable and manipulable reduces opacity. As opacity drops, competition rises.
The technical reality of COBOL + AI: where it shines and where it breaks
If you expect “press a button and migrate everything to Java,” you’re setting the project up for failure. AI can help a lot—but only inside a disciplined process.
Where AI pays off fast
- Explanation and onboarding: understanding modules, copybooks, flow, dependencies.
- Documentation: functional descriptions, data flow, diagrams, glossaries.
- Safe micro-refactoring: renames, isolating routines, extracting functions.
- Test scaffolding: harnesses, test cases, synthetic datasets (generated carefully, so edge cases reflect real data).
- Impact analysis: “if I change X, what else might break?”
Where you need guardrails
- Implicit business logic: undocumented rules and historical exceptions.
- Batch/JCL and scheduling: nightly windows, restartability, rollback behavior.
- Data formats: EBCDIC/ASCII, packed decimals, fixed-length records, VSAM.
- Compliance and secrets: you can’t ship sensitive data to a model without governance.
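The data-format guardrail is concrete: mainframe records often carry packed decimals (COBOL COMP-3), which look like binary garbage unless decoded nibble by nibble. As a minimal sketch of what "handle packed decimals" means in practice, here is a decoder in Node.js; the field layout in the example is hypothetical, and real jobs also need EBCDIC conversion and copybook-driven offsets.

```javascript
// Decode a COBOL COMP-3 (packed decimal) field from a Buffer.
// Each byte holds two BCD digits; the final low nibble is the sign
// (0xC/0xF = positive, 0xD = negative). `scale` is the number of
// implied decimal places from the PIC clause (e.g. PIC S9(3)V99 → 2).
function decodePacked(bytes, scale = 0) {
  let digits = "";
  for (let i = 0; i < bytes.length; i++) {
    const hi = bytes[i] >> 4;
    const lo = bytes[i] & 0x0f;
    if (i === bytes.length - 1) {
      digits += hi.toString();
      const negative = lo === 0x0d;
      const value = Number(digits) / 10 ** scale;
      return negative ? -value : value;
    }
    digits += hi.toString() + lo.toString();
  }
  throw new Error("empty packed field");
}

// Hypothetical PIC S9(3)V99 COMP-3 value: bytes 0x12 0x34 0x5C
const buf = Buffer.from([0x12, 0x34, 0x5c]);
console.log(decodePacked(buf, 2)); // 123.45
```

If an AI tool generates migration code that treats these bytes as text, the bug may only surface on negative amounts or odd digit counts, which is exactly why this belongs in the guardrail column.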
Bottom line: AI accelerates work; it doesn’t replace risk engineering.
If you run COBOL: a practical 4-phase approach
1) Inventory and classification (before prompting)
You need to know what you have:
- programs and call graphs,
- copybooks and record layouts,
- batch jobs and dependencies,
- DB/file access patterns,
- business criticality (core banking vs reporting).
Goal: a usable map—not perfect, but decision-ready.
2) Put AI into a controlled pipeline
The common mistake is using an assistant like a chat window. What you want is a pipeline that:
- retrieves context from internal repos (RAG),
- enforces policy (redaction, allowlists, logging),
- produces auditable outputs (diffs, tests, reports).
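The policy step is the part teams skip first and regret first. Here is a minimal sketch of a redaction gate that runs before any text leaves the pipeline; the two patterns are illustrative stand-ins, not a PII catalog, and a real gate would also cover account numbers, names from copybook fields, and secrets.

```javascript
// Minimal policy gate sketch: redact likely PII before text is sent to
// a model, and keep an audit record of every redaction that fired.
const REDACTION_RULES = [
  { name: "us-ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { name: "email", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g },
];

function applyPolicy(text, auditLog = []) {
  let redacted = text;
  for (const rule of REDACTION_RULES) {
    redacted = redacted.replace(rule.pattern, () => {
      auditLog.push({ rule: rule.name, at: new Date().toISOString() });
      return `[REDACTED:${rule.name}]`;
    });
  }
  return { redacted, auditLog };
}

const { redacted, auditLog } = applyPolicy(
  "Customer 123-45-6789 wrote to ops@example.com about batch JOB42."
);
console.log(redacted);
// → Customer [REDACTED:us-ssn] wrote to [REDACTED:email] about batch JOB42.
console.log(auditLog.length); // 2
```

The audit log is the point: when compliance asks what left the building, "here is every redaction event with a timestamp" is an answer; "we used a chat window" is not.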
3) Modernize incrementally
Three realistic paths:
- Encapsulation: expose COBOL functions via APIs first.
- Strangler pattern: replace parts gradually.
- Replatform: move runtime/compilation elsewhere (delicate, often vendor-specific).
Pick based on risk and ROI, not hype.
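Encapsulation usually begins with an adapter, not a rewrite: map a JSON request onto the fixed-length record the legacy program already consumes. The sketch below assumes a hypothetical CUSTREC layout (10-char id, 20-char name, 7-digit amount); your real layout comes from the copybook.

```javascript
// Encapsulation adapter sketch: turn a JSON payload into the
// fixed-length input record a legacy COBOL program expects, so the
// program can sit behind an API without being rewritten.
// Hypothetical layout standing in for a real copybook.
const LAYOUT = [
  { field: "customerId", length: 10 },
  { field: "name", length: 20 },
  { field: "amountCents", length: 7, numeric: true },
];

function toFixedRecord(payload) {
  return LAYOUT.map(({ field, length, numeric }) => {
    const raw = String(payload[field] ?? "");
    if (raw.length > length) {
      throw new Error(`${field} exceeds ${length} characters`);
    }
    // COBOL convention: numerics right-aligned zero-filled,
    // alphanumerics left-aligned space-filled.
    return numeric ? raw.padStart(length, "0") : raw.padEnd(length, " ");
  }).join("");
}

const record = toFixedRecord({
  customerId: "C-1042",
  name: "ACME LOGISTICS",
  amountCents: 125000,
});
console.log(record.length); // 37 (10 + 20 + 7)
```

An adapter like this is also where AI output gets cheap to verify: the record length and field offsets are mechanical checks, so a wrong layout fails loudly instead of corrupting a batch run.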
4) Verification: tests, tests, tests
If AI generates code, the key question isn’t “is it pretty?” but “is it equivalent?”
You need:
- golden files,
- regression tests on real (anonymized) datasets,
- batch output comparisons,
- production monitoring (canary/shadow runs).
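"Is it equivalent?" reduces, at its crudest, to comparing batch outputs record by record against a golden file. A minimal sketch; real comparisons layer tolerance rules (timestamps, run ids, rounding) on top of this exact match.

```javascript
// Golden-file check sketch: compare the modernized run's output against
// the legacy run's, record by record, and report the first mismatch.
function diffRecords(goldenText, candidateText) {
  const golden = goldenText.split("\n");
  const candidate = candidateText.split("\n");
  const max = Math.max(golden.length, candidate.length);
  for (let i = 0; i < max; i++) {
    if (golden[i] !== candidate[i]) {
      return {
        equivalent: false,
        line: i + 1,
        expected: golden[i] ?? "<missing>",
        actual: candidate[i] ?? "<missing>",
      };
    }
  }
  return { equivalent: true };
}

console.log(diffRecords("A|100\nB|200", "A|100\nB|200").equivalent); // true
console.log(diffRecords("A|100\nB|200", "A|100\nB|201").line); // 2
```

Wire this into CI against anonymized production extracts and every AI-generated change gets the same yes/no gate, which is what makes the canary and shadow runs later trustworthy.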
A concrete example: explain a COBOL module and generate tests with AI
Below is a minimal “skeleton” showing how to orchestrate:
1) context retrieval from your repo, 2) a structured request to the model, 3) verifiable output.
Orchestration (Node.js) with a structured prompt
```javascript
import fs from "node:fs";
import path from "node:path";

// Load a COBOL source file from disk.
function loadCobolModule(modulePath) {
  return fs.readFileSync(modulePath, "utf8");
}

function buildPrompt({ code, copybooks }) {
  return `You are a COBOL modernization assistant.
GOAL:
1) Explain what the program does in functional terms.
2) List inputs/outputs (files, records, key fields) and side effects.
3) Highlight business rules and risk points.
4) Propose a test suite (at least 10 cases) with example data.
CONSTRAINTS:
- Do not invent dependencies that are not present.
- If information is missing, list what you need.
- Output JSON with keys: summary, io, rules, risks, tests.
COBOL CODE:
${code}
COPYBOOKS (if any):
${copybooks.join("\n\n")}
`;
}

// Demo: replace with your provider call (Anthropic/OpenAI/on-prem)
async function callModel(prompt) {
  throw new Error("Implement your model call here");
}

async function main() {
  const code = loadCobolModule(path.resolve("./src/LEGACY01.cbl"));
  const copybooks = [
    fs.readFileSync(path.resolve("./copybooks/CUSTREC.cpy"), "utf8")
  ];
  const prompt = buildPrompt({ code, copybooks });
  const result = await callModel(prompt);
  console.log(result);
}

main().catch(console.error);
```

Key details:
- JSON output makes results auditable and diffable.
- Include copybooks or the model will guess record layouts.
- The “magic” is repeatability and governance, not chatting.
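Auditable also means machine-checkable: before trusting the model's response, validate that it is JSON with the keys the prompt demanded. A minimal guard sketch, matching the schema in the prompt above; the key list and the ten-test minimum mirror that prompt and would change with it.

```javascript
// Guard sketch: reject any model response that is not valid JSON with
// the required keys, so a retry or human review happens upstream
// instead of bad output flowing into the pipeline.
const REQUIRED_KEYS = ["summary", "io", "rules", "risks", "tests"];

function parseModelOutput(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: "response is not valid JSON" };
  }
  const missing = REQUIRED_KEYS.filter((k) => !(k in parsed));
  if (missing.length > 0) {
    return { ok: false, error: `missing keys: ${missing.join(", ")}` };
  }
  if (!Array.isArray(parsed.tests) || parsed.tests.length < 10) {
    return { ok: false, error: "fewer than 10 test cases" };
  }
  return { ok: true, value: parsed };
}

console.log(parseModelOutput("not json").ok); // false
```

Rejected responses are data too: log them, and you get a quality metric per module that tells you where the model is guessing.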
What teams should do in the next 2–4 weeks
If you’re responsible for legacy systems, the useful move isn’t picking a vendor immediately. It’s running a low-risk experiment that answers three questions.
1) Does AI improve code understanding?
Practical metrics:
- onboarding time for a module,
- fewer bugs caused by misunderstanding,
- documentation quality (reviewed by seniors).
2) Does AI reduce lead time on change requests?
Try on a real task:
- a small business rule change,
- with tests and output comparisons,
- measure engineering hours before/after.
3) Is governance sustainable?
Minimum checklist:
- where source/data go (cloud vs on-prem),
- logging and audit trails,
- secrets/PII policy,
- licensing/IP stance for code and generated output.
What the “IBM move” means for builders and buyers
This isn’t a verdict on mainframes. It’s a reminder:
- Competitive advantage isn’t “having COBOL.” It’s changing it without breaking it.
- AI shifts value toward tooling, process, and tests.
- Teams investing in incremental modernization (APIs, observability, test harnesses) gain leverage regardless of vendor.
Practical takeaways
- If you run COBOL, start with inventory + testing before any migration.
- Use AI to accelerate understanding, documentation, and scaffolding, not to replace analysis.
- Design verifiable outputs (JSON, diffs, golden tests) or you’re just chatting.
- Evaluate vendors on governance and integration: security, auditability, CI/CD fit, context handling.
IBM’s -10% is a story about perception and power in the modernization toolchain. For practitioners, it’s mainly a push to make legacy less mysterious and more testable—where AI delivers real value.