I want to understand why I would use a Delta table over a regular SQL table for some datasets.
Creating a SQL table with data types etc. is easy for me.
The data is used daily, but live data is not needed, and in most cases the data is static.
I know how to create tables that can show what the data looked like on a certain day by recording only the changes and the dates of those changes.
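For context, the kind of point-in-time table I mean is roughly this (table and column names are just illustrative):

```sql
-- Hand-rolled change tracking: each row records a value
-- plus the date range during which it was valid.
CREATE TABLE dbo.PriceHistory (
    ProductId  int           NOT NULL,
    Price      decimal(10,2) NOT NULL,
    ValidFrom  date          NOT NULL,
    ValidTo    date          NULL      -- NULL = still current
);

-- "What did the data look like on 2023-06-01?"
SELECT ProductId, Price
FROM dbo.PriceHistory
WHERE ValidFrom <= '2023-06-01'
  AND (ValidTo IS NULL OR ValidTo > '2023-06-01');
```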
Delta tables in SQL Server are queried via PolyBase:
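As a sketch, PolyBase in SQL Server 2022 can read Delta data through an external table. The data source name, location, and columns below are placeholders:

```sql
-- Tell PolyBase the external files use the Delta format
CREATE EXTERNAL FILE FORMAT DeltaFormat
WITH (FORMAT_TYPE = DELTA);

-- Map an external table over a Delta folder in object storage
-- (MyAdlsSource is an assumed, previously created external data source)
CREATE EXTERNAL TABLE dbo.SalesDelta (
    SaleId   bigint,
    SaleDate date,
    Amount   decimal(10,2)
)
WITH (
    LOCATION    = '/delta/sales/',
    DATA_SOURCE = MyAdlsSource,
    FILE_FORMAT = DeltaFormat
);

-- Queried like any other table
SELECT TOP 10 * FROM dbo.SalesDelta;
```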
The Delta table itself is typically hosted in a Databricks environment (in Microsoft's case, Azure Databricks):
Delta is a storage format optimized for analytical workloads and processing, rather than the OLTP workloads commonly found in SQL Server. Databricks also supports other analytical tools and workloads such as ETL/ELT and data warehouse pipelines, as well as options for much higher parallelism than SQL Server would support.
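Worth noting for your point-in-time requirement: Delta's transaction log gives you historical queries ("time travel") without hand-rolled change tracking, at least within the log's retention window. A minimal Databricks SQL sketch (table name is illustrative):

```sql
-- Delta is the default table format in Databricks
CREATE TABLE sales (id INT, amount DECIMAL(10,2)) USING DELTA;

-- Query the table as it existed at a point in time
SELECT * FROM sales TIMESTAMP AS OF '2023-06-01';

-- Or by commit version number from the transaction log
SELECT * FROM sales VERSION AS OF 12;
```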
You wouldn't create a Delta table in SQL Server. The closest you would get is to create a table using clustered columnstore storage, which has similar optimizations for analytical querying. Neither it nor Delta tables is a good fit for a typical transactional workload, where there's more than roughly 10% write activity or a need for row-by-row ACID transaction processing.
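For reference, a clustered columnstore table in SQL Server is just a regular CREATE TABLE with an inline index clause (names are illustrative):

```sql
-- SQL Server table stored in clustered columnstore format,
-- optimized for scans and aggregations over large row counts
CREATE TABLE dbo.SalesAnalytics (
    SaleId   bigint        NOT NULL,
    SaleDate date          NOT NULL,
    Amount   decimal(10,2) NOT NULL,
    INDEX ccs CLUSTERED COLUMNSTORE
);
```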