A Business-Critical Mistake
One of our customers is currently facing a major challenge: they are being forced into a migration from a legacy API to a completely new version that is not backwards compatible. This is not just a technical inconvenience—it is a high-risk operation that threatens the stability of their business.
To be clear, I understand that APIs evolve. Technology moves forward, and sometimes a clean break is necessary to improve performance, security, or scalability. But what I find unacceptable is the way this migration is being handled, without proper regard for the risks and the impact on customers.
This is not an experimental side project. This is a business-critical API that drives the entire operation of our customer. They have a 24/7 operation with a dozen people working on an application that depends on this API. If the API is unavailable for even a few hours, it results in significant manual work, delays, and operational chaos.
A seamless transition is not a “nice to have” — it is an absolute requirement. Yet, they are being asked to switch to a new API without a safety net. My biggest concern is that there is no easy way back. Once we migrate, we are committed. If the new API does not behave as expected, we could be looking at severe economic damage. A breaking change without a rollback plan is a recipe for disaster.
The new API has a different data format and different unique identifiers. This means we need to build custom migration scripts, not only to translate data but also to keep our system backwards compatible during the transition. If anything goes wrong, we might be dealing with corrupt or mismatched data that is hard to fix.
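To make the scope of that work concrete, here is a minimal sketch of what such a translation step might look like. The record shapes (`LegacyOrder`, `NewOrder`), the field names, and the ID-mapping approach are assumptions for illustration, not the supplier's actual schema.

```typescript
// Minimal sketch of a one-way record translation with an explicit ID map.
// LegacyOrder / NewOrder are hypothetical shapes; the real schemas differ.

interface LegacyOrder {
  orderNo: string;        // legacy unique identifier
  customer: string;
  totalCents: number;
}

interface NewOrder {
  id: string;             // the new API assigns its own identifiers
  customerName: string;
  total: { amount: number; currency: string };
}

// Persisted mapping between old and new identifiers, so that references
// held elsewhere in our system keep resolving during and after the transition.
const idMap = new Map<string, string>();

function toNewOrder(legacy: LegacyOrder, newId: string): NewOrder {
  idMap.set(legacy.orderNo, newId); // record the mapping for later lookups
  return {
    id: newId,
    customerName: legacy.customer,
    total: { amount: legacy.totalCents / 100, currency: "EUR" }, // currency assumed for the example
  };
}

// Lookup used by code paths that still hold legacy identifiers.
function resolveLegacyId(orderNo: string): string | undefined {
  return idMap.get(orderNo);
}
```

Even in this toy version, the mapping table has to be stored and kept consistent; if it is lost or drifts, exactly the kind of mismatched data described above is the result.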
The supplier is giving us a sandbox environment to test the new API, which is a minimum requirement. However, experience tells me that issues only truly surface in production. Load, concurrency, and real-world data variations often reveal edge cases that were not caught in testing.
The migration is being forced on a company that operates 24/7. We cannot halt operations for a few hours to execute the migration. Even a few minutes of downtime could have ripple effects across the business.
What frustrates me the most is that it feels like the engineers who built this new API were allowed to freewheel, without thinking about the customers — the developers who need to implement it. API changes should be designed with migration in mind, with backwards compatibility, transition phases, and clear deprecation paths.
I am not saying that APIs should never change. But when a business-critical API is rewritten, the following should be non-negotiable:
- A clear migration plan with a rollback option. Customers should have a way to fall back if things go wrong.
- A transition period where both APIs run in parallel. This allows for a gradual migration and reduces risk.
- A compatibility layer or versioning strategy. Breaking changes should be rare, and old versions should be supported for a reasonable period (a rough sketch of such a layer follows after this list).
- More realistic testing environments. A sandbox is good, but suppliers should also provide ways to validate production-like scenarios before going live.
- Empathy for developers. API providers should work with their customers, not force change upon them without considering the consequences.
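As an illustration of the compatibility-layer point above, here is a rough sketch of an adapter that keeps the legacy-shaped interface in place while delegating to the new API underneath. The endpoint path, field names, and the injected ID lookup are invented for the example, not the supplier's real API.

```typescript
// Rough sketch of a compatibility layer: consumer code keeps calling the
// legacy-shaped interface, while the adapter talks to the new API.
// Endpoint paths and field names below are hypothetical.

interface LegacyOrder {
  orderNo: string;
  customer: string;
  totalCents: number;
}

interface LegacyOrderApi {
  getOrder(orderNo: string): Promise<LegacyOrder>;
}

class NewApiAdapter implements LegacyOrderApi {
  constructor(
    private baseUrl: string,
    // Lookup from legacy identifiers to new ones, e.g. the idMap shown earlier.
    private resolveId: (orderNo: string) => string | undefined,
  ) {}

  async getOrder(orderNo: string): Promise<LegacyOrder> {
    const newId = this.resolveId(orderNo);
    if (!newId) {
      throw new Error(`No mapping found for legacy order ${orderNo}`);
    }

    // Call the new API and translate its response back into the legacy shape.
    const response = await fetch(`${this.baseUrl}/v2/orders/${newId}`);
    if (!response.ok) {
      throw new Error(`New API returned ${response.status} for order ${newId}`);
    }
    const body = await response.json();

    return {
      orderNo,
      customer: body.customerName,
      totalCents: Math.round(body.total.amount * 100),
    };
  }
}
```

The point of a layer like this is that the consuming application does not have to change on the supplier's timeline: the old interface stays stable while the implementation behind it is swapped, which is exactly what a parallel transition period makes possible.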
This situation is a perfect example of why API design is not just a technical problem — it is a business decision. Poorly managed API migrations break trust with customers and can have severe operational and financial consequences.
If you are designing an API that others depend on, think beyond the code. Think about the businesses, teams, and workflows that rely on it. And if you are an API consumer facing a forced migration, push back — make sure your concerns are heard before it is too late.