Database replication

I have been looking at database replication, on which there is quite a lot of information, but I don't know if it is what I am looking for, so I don't want to read into it too much at the moment.
The idea is a single (parent) database that holds all the required objects and the initial data. I know I can copy this database to create identical, independent duplicates.

Is there a way to have it so that when the parent database is changed, e.g. a procedure is tweaked or an extra column is added to a table, the change is replicated in the child databases copied from it, without affecting the data users have put into those databases in the meantime?

In other words, I would like a template database in which I only need to make the changes once, rather than having to duplicate the changes in each database clone manually.
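
To illustrate (with made-up object names), the kind of change I mean in the template database would be something like:

```sql
-- Hypothetical changes made in the parent/template database.
-- A nullable column leaves existing rows untouched.
ALTER TABLE dbo.Customers
    ADD Nickname nvarchar(50) NULL;
GO

-- A tweaked procedure.
ALTER PROCEDURE dbo.GetCustomer
    @CustomerId int
AS
    SELECT CustomerId, Name, Nickname
    FROM dbo.Customers
    WHERE CustomerId = @CustomerId;
GO
```

I'd want exactly those changes to show up in every clone, while the rows users have since inserted stay where they are.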

Log shipping would be a good option. It's light and easy to manage.


Thanks - that looks like it could help.

Log shipping isn't going to work, as you cannot have a log-shipped database set to read/write, which is what it seems you want to accomplish.

It sounds like you want to be able to control and manage code changes and replicate those changes to multiple databases without affecting the data that has been saved in the individual databases.

There is no method I know of that can accomplish that goal - which is why we have code repositories (Git, for example) to manage code changes and deployments. I don't think you would want automatic replication either, because you would not want changes made to procedures while you were still testing/validating them in the source database.

I have two options with the database I am developing, if it is going to be used by a number of people who will each use it individually. One is to add every user to one shared database. The disadvantage is that if a user stops using it after adding a lot of data, their data stays in the database forever, and with a lot of users that one database will be holding a lot of data. I wanted to avoid using a script that deletes a user's data from the tables.
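
To make option one concrete (again, made-up names), every table would carry the owner's key, and removing a user would mean a per-user clean-up script, which is what I'd rather avoid maintaining:

```sql
-- Hypothetical shared-database layout: each row is keyed by its owner.
CREATE TABLE dbo.Notes (
    NoteId int IDENTITY PRIMARY KEY,
    UserId int NOT NULL,           -- which user owns this row
    Body   nvarchar(max) NOT NULL
);
GO

-- The clean-up script I'd rather not have to maintain.
DECLARE @DepartingUserId int = 42;
DELETE FROM dbo.Notes
WHERE UserId = @DepartingUserId;
```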

The second option is to give each user a replica database of their own. The disadvantage of this, and what I was wondering about in a real-life scenario, is how you avoid having to update each individual database as the database model naturally develops. I can't imagine there isn't a way of managing these changes methodically rather than updating manually, which is prone to error and data loss (of the users' own data, which would be a disaster!). It may be that creating database clones isn't the norm and I am missing the point of what can or should be done. As you point out, there are alternatives such as Git, but I wouldn't know where to start, and it sounds like this could get complicated for what is in essence quite a simple ask.
Thanks for your input.

This is what we do:

  1. All of our devs have SQL Server installed on their dev machines and can develop against that at will.
  2. We also have a shared dev server that devs can push tested scripts to (manually, or in an automated fashion; think Jenkins feeding from GitLab).
  3. We have a QA server where testers test their stuff.
  4. Staging or Production database

Not sure if this helps?

Run SQL Server Object Explorer and update the target DB with any script changes.
Simple, and it works.

In either option - you have to build deployment scripts to update the objects in the database. If you add a column to a table, you would then have a script to add that column. If you modify a view then you would have a script that alters the view - same for procedures, functions, indexes, etc...
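
For example, a change script can be written so it is safe to re-run against any copy of the database (the object names here are just placeholders):

```sql
-- Add the column only if it isn't already there, so the script can re-run.
IF COL_LENGTH('dbo.Customers', 'Nickname') IS NULL
    ALTER TABLE dbo.Customers ADD Nickname nvarchar(50) NULL;
GO

-- CREATE OR ALTER (available from SQL Server 2016 SP1) deploys the
-- procedure whether or not it already exists.
CREATE OR ALTER PROCEDURE dbo.GetCustomer
    @CustomerId int
AS
    SELECT CustomerId, Name, Nickname
    FROM dbo.Customers
    WHERE CustomerId = @CustomerId;
GO
```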

You then manage your scripts in a code repository - with versioning - so you can build a set of deployment scripts depending on the version being deployed.
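
A common pattern, sketched here with a made-up table name, is to keep a version table in each database, so every deployment script records what it applied and skips work already done:

```sql
-- Hypothetical version-tracking table, created once per database.
IF OBJECT_ID('dbo.SchemaVersion', 'U') IS NULL
    CREATE TABLE dbo.SchemaVersion (
        Version   int NOT NULL PRIMARY KEY,
        AppliedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
GO

-- Each deployment script applies its changes only once per database.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = 2)
BEGIN
    -- ...schema changes for version 2 go here...
    INSERT INTO dbo.SchemaVersion (Version) VALUES (2);
END
GO
```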

This is regardless of how the system is deployed - whether it is deployed to a separate instance for each user - a separate instance per customer - or a single instance for all users/customers.

The advantage of having a single instance for all users is that an upgrade to the system upgrades all users. The advantage of having separate instances per user is that each user can determine when to upgrade... the disadvantage is that you will have multiple versions of your system deployed.
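
If you do end up with separate instances per user, a version table (assuming you track versions as sketched above) gives you a quick way to see how far each copy has been upgraded:

```sql
-- Run against each user's database to see which upgrade it has reached.
SELECT MAX(Version) AS CurrentVersion
FROM dbo.SchemaVersion;
```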

It might also be worth taking a look at SSDT together with Visual Studio; we've used this very successfully to "publish" changes to various different databases.

https://docs.microsoft.com/en-us/sql/ssdt/sql-server-data-tools?view=sql-server-2017

Or tools like:

https://www.red-gate.com/products/sql-development/sql-compare/
and
https://www.red-gate.com/products/sql-development/sql-data-compare/

(Disclaimer: I have no relationship with Redgate, but have used some of their tools successfully.)

Either of these two would give you options for keeping multiple databases in "sync".