We currently have an OLTP database that supports our application, which includes some canned reporting. We are building an ad-hoc reporting tool (by integrating with Dundas BI) to give our clients more reporting flexibility. I realize the best solution here is an ETL process that pushes data into an OLAP database, and we are exploring that (using Redshift), but that is more of a medium-term solution for us, and we would like to get a proof of concept out sooner.
So my questions are...
(1) Would it be better to use an ETL to denormalize some of the schema into flatter tables that resemble cubes, or to leave the OLTP structure as is? By flattening out the structure we save on joins, but (I think) we also reduce the flexibility of SQL Server's query optimizer.
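For context, here is a sketch of the kind of flattening I have in mind, using hypothetical table and column names (Orders, OrderLines, Customers, Products) rather than our real schema:

```sql
-- Flattened "fact-like" reporting table that pre-joins the normalized
-- OLTP tables. All names are hypothetical stand-ins for our schema,
-- and this assumes a reporting schema called rpt already exists.
CREATE TABLE rpt.OrderFacts (
    OrderLineId     int           NOT NULL PRIMARY KEY,
    OrderId         int           NOT NULL,
    OrderDate       date          NOT NULL,
    CustomerName    nvarchar(200) NOT NULL,
    CustomerRegion  nvarchar(100) NULL,
    ProductCategory nvarchar(100) NULL,
    Quantity        int           NOT NULL,
    LineTotal       money         NOT NULL
);

-- Initial full load: the joins happen once here, at ETL time,
-- instead of in every ad-hoc query.
INSERT INTO rpt.OrderFacts
    (OrderLineId, OrderId, OrderDate, CustomerName,
     CustomerRegion, ProductCategory, Quantity, LineTotal)
SELECT
    ol.OrderLineId,
    o.OrderId,
    o.OrderDate,
    c.CustomerName,
    c.Region,
    p.Category,
    ol.Quantity,
    ol.Quantity * ol.UnitPrice
FROM dbo.Orders     AS o
JOIN dbo.OrderLines AS ol ON ol.OrderId   = o.OrderId
JOIN dbo.Customers  AS c  ON c.CustomerId = o.CustomerId
JOIN dbo.Products   AS p  ON p.ProductId  = ol.ProductId;
```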
(2) Should we create very narrow indexes on every column in the tables to address the ad-hoc nature of the tool? This would prevent table scans, but would likely result in many key lookups. I presume key lookups are better than table scans in most cases?
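To make (2) concrete, this is the sort of single-column indexing I mean, again using the hypothetical names from above. My understanding is that a filter on an indexed column would seek on that index, and any other columns in the select list would come back via key lookups against the clustered index:

```sql
-- One narrow nonclustered index per filterable column.
CREATE NONCLUSTERED INDEX IX_OrderFacts_OrderDate       ON rpt.OrderFacts (OrderDate);
CREATE NONCLUSTERED INDEX IX_OrderFacts_CustomerRegion  ON rpt.OrderFacts (CustomerRegion);
CREATE NONCLUSTERED INDEX IX_OrderFacts_ProductCategory ON rpt.OrderFacts (ProductCategory);

-- A typical ad-hoc query: the seek happens on IX_OrderFacts_OrderDate,
-- then CustomerName and LineTotal are fetched via key lookups.
-- (If the predicate isn't selective enough, the optimizer may still
-- decide a scan is cheaper than the lookups.)
SELECT CustomerName, SUM(LineTotal) AS Revenue
FROM rpt.OrderFacts
WHERE OrderDate >= '2020-01-01'
GROUP BY CustomerName;
```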
(3) If we do use an ETL to flatten out the table structure, what is the best way to move over only the data that has changed since the last push?
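One approach I have been considering for (3) is a watermark based on a ModifiedDate column on the source tables, with the last successful load recorded in a small control table (all names hypothetical):

```sql
-- Sketch of an incremental load, assuming the source tables carry a
-- ModifiedDate column and that etl.Watermarks tracks the high-water
-- mark of the last successful run (all names hypothetical).
DECLARE @LastRun datetime2 =
    (SELECT WatermarkValue FROM etl.Watermarks WHERE TableName = N'OrderFacts');
-- Capture the new watermark BEFORE reading, so rows modified while the
-- load runs get re-processed next time rather than missed.
DECLARE @ThisRun datetime2 = SYSUTCDATETIME();

MERGE rpt.OrderFacts AS tgt
USING (
    SELECT ol.OrderLineId,
           o.OrderId,
           o.OrderDate,
           c.CustomerName,
           c.Region               AS CustomerRegion,
           p.Category             AS ProductCategory,
           ol.Quantity,
           ol.Quantity * ol.UnitPrice AS LineTotal
    FROM dbo.Orders     AS o
    JOIN dbo.OrderLines AS ol ON ol.OrderId   = o.OrderId
    JOIN dbo.Customers  AS c  ON c.CustomerId = o.CustomerId
    JOIN dbo.Products   AS p  ON p.ProductId  = ol.ProductId
    WHERE o.ModifiedDate  >= @LastRun
       OR ol.ModifiedDate >= @LastRun
) AS src
    ON tgt.OrderLineId = src.OrderLineId
WHEN MATCHED THEN
    UPDATE SET tgt.OrderDate       = src.OrderDate,
               tgt.CustomerName    = src.CustomerName,
               tgt.CustomerRegion  = src.CustomerRegion,
               tgt.ProductCategory = src.ProductCategory,
               tgt.Quantity        = src.Quantity,
               tgt.LineTotal       = src.LineTotal
WHEN NOT MATCHED THEN
    INSERT (OrderLineId, OrderId, OrderDate, CustomerName,
            CustomerRegion, ProductCategory, Quantity, LineTotal)
    VALUES (src.OrderLineId, src.OrderId, src.OrderDate, src.CustomerName,
            src.CustomerRegion, src.ProductCategory, src.Quantity, src.LineTotal);

-- Only advance the watermark once the load succeeds.
UPDATE etl.Watermarks
SET WatermarkValue = @ThisRun
WHERE TableName = N'OrderFacts';
```

One thing I'm aware of with this sketch: deletes in the source wouldn't be caught by the ModifiedDate filter, so I assume they would need separate handling (soft-delete flags, or a rowversion-based comparison).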
Of course, any other advice on how best to proceed, in both the short term and the medium term, would be appreciated.