I would have thought it should "depend" ... rather than "keep 20% as standard".
Having slack space makes sense where rows expand, or where an index has keys added in random order (i.e. FILLFACTOR < 100% to reduce the chance of index page splits), but for an index which is only ever added to with increasing-value keys it would surely be a waste?
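For example (table and index names are hypothetical), something along these lines:

```sql
-- An index on an ever-increasing key (identity, date) only ever
-- appends to the last page, so slack space is just wasted:
ALTER INDEX IX_Orders_OrderID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 100);

-- An index on a randomly-distributed key (e.g. a GUID) inserts
-- mid-index, so leaving slack reduces page splits:
ALTER INDEX IX_Orders_CustomerGuid ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80);
```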
So perhaps they should change their policy to a case-by-case basis ... although if they are only going to save 5% on a 350GB database maybe it's not worth it.
Any indexes (big ones!) that are unused?
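A sketch of the sort of check I mean (usage stats reset at each restart, so judge over a full business cycle before dropping anything):

```sql
-- Nonclustered indexes with no recorded seeks/scans/lookups
-- since the last restart are candidates for dropping:
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_updates           -- writes still paid to maintain it
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id   = i.object_id
      AND s.index_id    = i.index_id
      AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND COALESCE(s.user_seeks + s.user_scans + s.user_lookups, 0) = 0;
```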
TLog file bigger than it should be? Not backed up as often as it could be? (assuming FULL Recovery model)
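Quick way to see whether that's the problem (LOG_REUSE wait of LOG_BACKUP usually means log backups aren't frequent enough):

```sql
-- Why can't the log be truncated?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Current log size and percent actually used, per database:
DBCC SQLPERF (LOGSPACE);
```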
Archiveable data that could be moved to a separate database and stored more efficiently perhaps? Fewer indexes, or no slack space (on the grounds that it is essentially read-only).
Index rebuilds copy the large table to "fresh space", extending the file / leaving a large hole in it. (Not sure there is a solution to that! But we reorganise large tables rather than rebuilding them.)
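What we do is roughly this (table name hypothetical); REORGANIZE compacts pages in place rather than writing a complete new copy, so it doesn't need that big chunk of free space:

```sql
-- Reorganise in place instead of rebuilding:
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;

-- REORGANIZE does not update statistics (REBUILD does),
-- so follow up with:
UPDATE STATISTICS dbo.BigTable;
```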
Change indexes to be Filtered if there are a large number of irrelevant entries in them. For example, if a column which is NULLable has an index and is only ever queried for non-NULL values then a Filtered Index WHERE MyColumn IS NOT NULL could save some space.
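i.e. something like this (table/column names are just placeholders); only the non-NULL rows get stored in the index, which can shrink it dramatically for a sparse column:

```sql
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn_NotNull
    ON dbo.MyTable (MyColumn)
    WHERE MyColumn IS NOT NULL;
```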
Probably a bunch of other such space-saving tricks ... would be interested to hear of any that other folk use / consider worthwhile.