Managing database changes manually is error-prone, time-consuming, and risky—something I experienced firsthand before joining the Harness Database DevOps team. This article shares my journey from fragile SQL scripts to fully automated, version-controlled schema migrations using Harness, significantly improving deployment speed, reliability, and traceability.
Managing database state has always been a nightmare for most development teams, and I was no exception. Joining the Harness Database DevOps team marked a pivotal moment in my career. Until then, I had managed application code through CI/CD pipelines but treated database changes as an afterthought: manual scripts, ad-hoc rollbacks for hotfixes, and environment drift. I quickly realized that managing SQL schema changes at scale was more complex than I had imagined, so I started by looking at how developers on the team had been deploying schema changes the old way.
In this article, I will take you on my journey from struggling with error-prone, manual SQL migrations to moving towards an automated, version-controlled workflow powered by Harness Database DevOps.
Most of the team projects I have seen used PostgreSQL. Initially, migrations were handled by writing ad-hoc SQL scripts and running them by hand. It didn't take long to see the drawbacks of this approach: every deployment felt risky, and setting up test environments was a headache. Below are some of the pain points I encountered.
Human error is inevitable when running database migrations by hand. Even minor typos or omissions in SQL scripts could break deployments. It’s all too easy to miss a comma, run scripts in the wrong order, or skip a necessary step.
These manual steps often led to late-night troubleshooting. For example, forgetting to run the script that adds a new column before the script that populates it would cause errors on deployment. Running the scripts manually also increased the odds of a script accidentally not getting run against a particular environment, which could cause a later script to break unexpectedly.
The team then had to roll back changes by hand or scramble to fix issues on the fly, consuming valuable engineering time. This overhead put a lot of pressure on the team, especially as we were rapidly growing our product features. Clearly, automation was needed.
Another challenge was setting up and verifying test environments. A database replica and a test PostgreSQL instance were maintained in each environment (development, staging, production), which seemed like a sound approach at first.
First, setting up this whole pipeline required a huge operational effort. Manually verifying the schema state also required a lot of effort and often required writing ad-hoc SQL queries to inspect tables and columns and compare environments. There was no single source of truth to answer “which migrations have been applied, and where?”
Second, maintaining a complete replica of the database was expensive, and third, there was no easy way to spin up a fresh environment with an exact copy of the production schema. More often than not, engineers had to apply a series of migration scripts by hand to reach a specific schema state for testing.
The more complex migrations became, the more risk they introduced. Some schema changes require taking locks or making structural transformations that can briefly stall the database. Manual migrations did not coordinate these transitions well with the application deployment.
Moreover, if something went wrong during a manual migration, for example when a script applied only part of its changes, there was no easy way to roll back. Databases hold stateful data, so undoing a migration by hand is extremely tricky, unlike redeploying an old version of application code.
One of the biggest problems was simply keeping track of what had been done. With manual processes, each engineer might have their own scripts or local changes, and it wasn’t easy to know which changes had been applied to the dev, staging, or production database. There was no automated tracking table or log, so if something failed in production, we couldn’t quickly tell whether a particular migration had already run there or not.
This lack of visibility and traceability is a classic cause of database drift. Database drift, also known as schema or version drift, happens when the database schema in one environment no longer matches the schema for the same database in a different environment. In most cases, drift occurred because hotfixes and patches were applied directly in production without the same changes being applied in lower environments (e.g., staging, dev, QA).
Harness Database DevOps provides native support for database schema management, meaning we can build deployment pipelines that include steps for both databases and applications. Instead of hand-crafting SQL on release day, every changeset now lives in Git and is picked up from there. When a developer commits a new migration, the Harness pipeline automatically invokes Liquibase against the target SQL instance. After every commit, the staging database can be migrated alongside the application, making it easy to verify that all migrations work correctly.
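As a rough sketch, a committed migration might be a Liquibase SQL-formatted changelog like the one below. The author name, file naming, and table definition are hypothetical, but the `--changeset` and `--rollback` comment syntax is standard Liquibase SQL format:

```sql
--liquibase formatted sql

--changeset alice:001-create-orders
-- Create the orders table; Liquibase records this changeset
-- in DATABASECHANGELOG so it is applied exactly once per database.
CREATE TABLE orders (
    id          BIGSERIAL PRIMARY KEY,
    customer_id BIGINT       NOT NULL,
    status      VARCHAR(32)  NOT NULL DEFAULT 'pending',
    created_at  TIMESTAMPTZ  NOT NULL DEFAULT now()
);
--rollback DROP TABLE orders;
```

Once this file is referenced from the changelog the pipeline points at, committing it is all a developer needs to do; the pipeline handles execution and tracking.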
By tagging changesets with contexts (dev, staging, prod), only the relevant scripts run in each environment. This makes it easy to, for example, apply reference data to test environments while keeping the same set of migrations for all environments. Similarly, merges to the staging branch trigger staging-only migrations, while pushes to production apply production-ready changes. This GitOps-style flow gives complete visibility into what ran where, when, and by whom.
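A minimal sketch of context tagging in a SQL-formatted changelog (the seed data and table names are illustrative). Running an update with `--contexts=dev` applies only changesets tagged `context:dev`, so this reference data never reaches production:

```sql
--liquibase formatted sql

--changeset bob:002-seed-reference-data context:dev
-- Test-only seed data: applied when the update runs with the dev context.
INSERT INTO order_statuses (code, label) VALUES ('pending', 'Pending');
--rollback DELETE FROM order_statuses WHERE code = 'pending';
```

The same changelog is deployed everywhere; the context filter, not a separate script set, decides what each environment receives.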
If a stage fails, Harness DB DevOps allows rollbacks to revert the database to its previous state, preserving data integrity and saving hours of recovery work.
The impact was immediate. Deployments that once required two engineers and an evening window now complete in minutes, without errors. Spinning up a new test database became trivial: point Liquibase at an empty schema and watch it replay every changeset. QA cycles also accelerated because test environments reflect the exact same schema version as production.
Traceability improved dramatically: Liquibase’s built-in DATABASECHANGELOG table shows which migrations ran, in which order, and when, while Harness dashboards give a high-level view of migration status across all environments. When a staging deployment fails due to an unexpected schema conflict, the pipeline pinpoints the offending changeset, and we can roll back with confidence.
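For instance, a quick query against the tracking table (these are standard DATABASECHANGELOG columns) answers “which migrations have been applied here, and in what order?”:

```sql
-- Show the ten most recently executed changesets on this database.
SELECT id, author, filename, dateexecuted, orderexecuted
FROM databasechangelog
ORDER BY orderexecuted DESC
LIMIT 10;
```

Because every environment carries its own copy of this table, comparing the output across dev, staging, and production makes drift immediately visible.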
With this automation in place, the team’s confidence has grown: we deploy to production without dread. Backed by automated rollback logic defined in each changeset, performance-impacting migrations now run on schedule, coordinated with application updates, eliminating unplanned downtime.
Two lessons stand out. First, keep changes small. Breaking a large refactoring into discrete, numbered files made testing and rollback straightforward. Second, automate everything. From checksum verification to context-based targeting, the fewer manual steps, the lower the risk of human error.
For teams drowning in manual migrations, the advice is clear: start small, but start now. Take just one service, convert its migrations into a structured ChangeLog, and plug it into your CI/CD pipeline with Harness Database DevOps - the platform purpose-built to bring automation, visibility, and peace of mind to database deployments. See how fast your team can go when the database stops being the bottleneck.
Database DevOps is not a luxury but a necessity for organizations that depend on data-driven features, compliance, and uninterrupted service. By embracing automated migrations, GitOps workflows, and robust rollback strategies, you empower your team to deliver value faster and more confidently. Your engineers will thank you, your on-call rotations will calm down, and your stakeholders will appreciate the consistency and transparency.
Simply connect your Git repository containing Liquibase ChangeLogs to Harness. Define a pipeline stage for “Database Migration,” specify your target environments (dev, staging, prod) with Liquibase contexts, and Harness handles the rest, triggering migrations on every commit and managing rollbacks on failure.
Harness DB DevOps natively supports all major databases, including SQL engines such as PostgreSQL, MySQL, Oracle, Microsoft SQL Server, and Google Cloud Spanner, as well as NoSQL databases such as MongoDB.
Each ChangeSet can include rollback logic defined in the Liquibase ChangeLog. If a migration fails on any environment, Harness automatically invokes the specified rollback commands, restoring the database schema to its previous stable state and notifying your team through configured alert channels.
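As a sketch, inline rollback logic in Liquibase’s SQL format looks like the following (the table and column names are illustrative). The `--rollback` comment tells Liquibase exactly how to undo the changeset if the pipeline needs to revert:

```sql
--liquibase formatted sql

--changeset carol:003-add-discount-column
-- Forward migration: add a nullable column, safe to apply online.
ALTER TABLE orders ADD COLUMN discount NUMERIC(10, 2);
-- Rollback command Liquibase runs if this changeset must be reverted.
--rollback ALTER TABLE orders DROP COLUMN discount;
```

Keeping the rollback next to the change it undoes means the revert path is reviewed and version-controlled together with the migration itself.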