As you’ll know if you’re a database developer, the whole “continuous integration and delivery” philosophy tends to get much more complicated when you have to preserve data and aren’t simply deploying code. This problem will be solved eventually, of course; the two podcasts linked below give some clues as to where the solution might come from.
Enrico Campidoglio uses some interesting vocabulary that makes database deployment issues clear. He also names some of the tools that can be used to apply his methods. Richard Campbell mentions an idea he has been talking about in several podcasts: a data service that writes everything to a NoSQL document database, which acts as an application-level “log”. As with a database transaction log, the events are “played” to the OLTP database, which transforms the data and makes it available for reporting.
These log events could be written to more than one RDBMS at the same time, which opens up the possibility of failing over the primary instance to the pre-change version if a release goes wrong. If the data in the old and new versions can be perfectly aligned, the continuous delivery approach looks much more realistic. Lots of problems come to mind of course, but it’s definitely food for thought.
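To make the idea concrete, here is a minimal sketch of that replay pattern. It uses Python with an in-memory list standing in for the NoSQL document log and two in-memory SQLite databases standing in for the primary and pre-change RDBMS instances; all of the names (`append_event`, `replay`, the `customer` table) are hypothetical, not anything from the podcasts.

```python
import sqlite3

def append_event(log, entity, payload):
    # Every application change is recorded as a document in the
    # application-level log (here just a list of dicts).
    log.append({"entity": entity, "payload": payload})

def replay(log, *connections):
    # "Play" the log into each relational database, much like
    # replaying a transaction log. Replaying the same events into
    # several databases keeps them aligned.
    for conn in connections:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS customer "
            "(id INTEGER PRIMARY KEY, name TEXT)"
        )
        for event in log:
            if event["entity"] == "customer":
                p = event["payload"]
                conn.execute(
                    "INSERT OR REPLACE INTO customer (id, name) VALUES (?, ?)",
                    (p["id"], p["name"]),
                )
        conn.commit()

log = []
append_event(log, "customer", {"id": 1, "name": "Ada"})
append_event(log, "customer", {"id": 2, "name": "Grace"})

# Replay into two independent databases: the new primary and a
# pre-change copy that stays available as a failover target.
primary = sqlite3.connect(":memory:")
fallback = sqlite3.connect(":memory:")
replay(log, primary, fallback)
```

Because both databases are built from the same event stream, the pre-change instance holds exactly the same data as the primary, which is what makes failing over after a bad release plausible.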