Few decisions in an enterprise engineering organisation feel as decisive as forking an open source dependency. It signals technical capability, independence, and ownership. It also signals the start of an ongoing commitment that many teams deeply underestimate.
This article is a framework for making the fork decision deliberately, not reactively.
What "forking" actually means in an enterprise context
There are two distinct things that get called "forking":
Soft fork (internal patch): You take a copy of the dependency, apply one or more patches, and consume your patched version. You do not intend to maintain this divergence long-term — it's a short-term fix while you wait for upstream to ship a fix or plan a proper migration.
Hard fork (maintained divergence): You take the codebase and intend to maintain it independently, applying your own security patches, compatibility updates, and potentially new features, indefinitely.
These have radically different cost profiles. Most articles about forking conflate the two. This one distinguishes them, because the decision logic is different.
When a soft fork is the right call
A soft fork is appropriate when:
- A CVE exists with a known fix that upstream is slow to release or has no capacity to release (project is semi-abandoned)
- A compatibility issue affects your specific environment and you need a short-term fix while you plan migration
- You need a behaviour change that is genuinely out of scope for upstream but is contained to one or two call sites
The key question for a soft fork: can you commit to migrating off it within a defined timeframe? If the answer is "yes, within this quarter," a soft fork is a legitimate bridge. If the answer is vague, you are likely starting a hard fork without admitting it.
The four conditions that justify a hard fork
A sustained hard fork of an upstream open source project is only justified when all four of the following are true:
1. The project is genuinely critical to your infrastructure
Not "it's convenient" or "migration would be a lot of work." Critical means: if this code broke catastrophically or was found to contain a high-severity exploitable vulnerability, you would have a production incident with immediate business impact.
Payment processing, authentication, core data storage, message brokering, and cryptography qualify. A utility library that formats dates does not.
2. No viable maintained alternative exists
Before forking, you must have genuinely evaluated replacements. This means actual spikes, not just a quick read of the alternatives page. If there's a maintained alternative with a 6-month migration effort, that is almost always preferable to a fork with a 5-year ownership burden.
"We can't migrate because it would be expensive" is a valid constraint, but it should be costed against the ongoing fork maintenance cost, not assumed to make the fork cheaper.
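That costing can be made concrete with a back-of-the-envelope comparison. The figures below are entirely hypothetical inputs, not benchmarks — the point is only that a fork's recurring maintenance cost compounds over the horizon while a migration is mostly a one-off:

```python
# Illustrative cost comparison: one-off migration vs. ongoing fork maintenance.
# All figures are hypothetical inputs chosen for the example, not benchmarks.

def total_cost_engineer_months(one_off: float, annual: float, years: int) -> float:
    """Total engineering cost over a horizon: upfront work plus recurring upkeep."""
    return one_off + annual * years

# Hypothetical scenario: a 6-month migration with minimal upkeep afterwards,
# versus a cheap-looking fork that needs ~3 engineer-months of upkeep per year.
migration = total_cost_engineer_months(one_off=6.0, annual=0.5, years=5)
fork = total_cost_engineer_months(one_off=1.0, annual=3.0, years=5)

print(f"Migration over 5 years: {migration} engineer-months")  # 8.5
print(f"Fork over 5 years: {fork} engineer-months")            # 16.0
```

The fork looks cheaper in quarter one and more expensive by year two — which is exactly the shape of cost that a point-in-time estimate hides.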
3. Your team has domain expertise in the relevant codebase
Forking a cryptographic library without cryptography engineers is not a security solution — it's a liability. Forking a database storage engine without engineers who understand page formats, write-ahead logging (WAL), and multi-version concurrency control (MVCC) is a production risk, not a risk mitigation.
Be honest about what you have. Forking is not a substitute for expertise; it concentrates the risk of its absence.
4. You can commit to a resourcing plan for ongoing maintenance
What is the minimum staffing to maintain this fork? For a non-trivial library, this typically means:
- At least one engineer as primary maintainer with meaningful capacity (not "20% of a rotating engineer")
- A process for monitoring upstream CVEs and evaluating which apply to your fork
- A documented process for applying security patches and releasing new versions internally
- A plan for what happens when that primary maintainer leaves
If you cannot resource this realistically, you are creating a slow-moving risk that will crystallise at the worst possible moment.
The hidden costs most teams discover after forking
Tracking upstream divergence
Once your fork diverges, every upstream commit is a diff you need to evaluate. Is this a security fix you need to backport? A bug fix that applies to your version? A refactor that conflicts with your patches? This analysis cost is invisible until you're doing it.
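A first-pass version of that evaluation can be partially mechanised. The sketch below buckets upstream commit subjects for review; the keyword heuristics and the example subjects are illustrative assumptions, and a real process would still review each commit rather than trust string matching:

```python
# Minimal triage sketch for the upstream-divergence review described above.
# Keywords and commit subjects are illustrative assumptions only.

SECURITY_HINTS = ("cve-", "security", "vuln", "overflow", "injection")

def triage(subject: str) -> str:
    """Bucket an upstream commit subject for fork-maintenance review."""
    lowered = subject.lower()
    if any(hint in lowered for hint in SECURITY_HINTS):
        return "backport-candidate"   # security fixes get priority review
    if lowered.startswith(("fix", "bugfix")):
        return "evaluate"             # may or may not apply to our version
    return "defer"                    # refactors, features, docs, etc.

# Hypothetical subjects, as produced by something like
# `git log --format=%s upstream/main ^our-fork/main`:
subjects = [
    "Fix CVE-2024-12345: heap overflow in parser",
    "fix: off-by-one in pagination",
    "Refactor internal config loading",
]
print([(s, triage(s)) for s in subjects])
```

The "defer" bucket is the one that bites later: every deferred refactor widens the divergence and makes the next security backport harder to apply cleanly.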
The "N+1" problem
Your fork exists to solve one problem. Once engineers know the fork exists, it becomes the path of least resistance for the next problem. "Just add it to the fork" is a tempting shortcut that gradually turns a targeted patch into a fully custom build of something you never intended to own.
Ecosystem compatibility
Plugins, extensions, adapters, and tooling are often built against the mainline package. As your fork diverges, ecosystem compatibility degrades. You may find yourself maintaining compatibility shims for tooling built against upstream, adding a second category of maintenance work.
Knowledge concentration
Internal forks tend to be understood in depth by two or three engineers and essentially no one else. When those engineers leave — and eventually they will — the organisation inherits a critical dependency that no current employee understands. This is a textbook bus factor risk.
An alternative worth considering: commercial OSS support
Between "accept the risk" and "fork it yourself" there is a third option that most teams overlook until they've tried forking: engaging enterprise support from a vendor with existing expertise in the relevant project.
For projects where the abandonment or EOL risk is the primary driver, enterprise support can provide:
- Security patches and CVE response under SLA, without requiring your team to understand the codebase deeply
- Maintained hardened builds for your target environment
- A contracted upgrade path when you're ready to migrate
This is particularly relevant for infrastructure-layer dependencies — databases, runtimes, message brokers, web servers — where the codebase complexity makes internal fork maintenance genuinely risky.
The decision checklist
Before committing to a fork, get explicit answers to these questions:
- Is this dependency critical enough to justify ongoing engineering maintenance cost?
- Have we evaluated all maintained alternatives and costed the migration accurately?
- Do we have engineers with the relevant domain expertise?
- Have we costed ongoing maintenance (CVE monitoring, patch development, testing, releasing)?
- Do we have a documented exit plan — either a future migration or a long-term support contract?
- If the two engineers who would own this fork left tomorrow, what happens?
If you can answer all of these confidently and the fork still makes sense, it may be the right call. If any answer is vague, the fork is an unmanaged risk masquerading as a technical decision.