Forced Parameterization?

Based on this article...

https://technet.microsoft.com/en-us/library/ms175037(v=sql.105).aspx

"Databases that may benefit from forced parameterization are generally those that experience high volumes of concurrent queries"

We have very high loads on our databases. What is the best way to determine if we would benefit from this? How dangerous is it to make this change?
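
For context, the setting itself appears to be per-database and easy to revert, which is some comfort. A minimal sketch ("YourDb" is a placeholder name, not one of ours):

```sql
-- Check which databases already have the setting on (1 = forced, 0 = simple).
SELECT name, is_parameterization_forced
FROM sys.databases;

-- Turn it on for one database ("YourDb" is a placeholder name).
ALTER DATABASE YourDb SET PARAMETERIZATION FORCED;

-- Revert to the default if plan quality regresses.
ALTER DATABASE YourDb SET PARAMETERIZATION SIMPLE;
```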

Thanks.

What types of high loads are you talking about?
OLTP alone,
or OLTP along with queries against the same OLTP database?

I guess it is all relative. What is the best way to measure that? Here are some metrics...

46 databases on the DB Server
It is the largest server instance you can purchase on AWS
The 46 databases are 10 TB in size combined
I would estimate that we have 5-10 thousand concurrent users at peak times.
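
For the concurrency estimate, I have been taking rough snapshots like this (a sketch; the session_id > 50 filter is just the old convention for skipping system sessions):

```sql
-- Rough snapshot of concurrency at any moment: user sessions, and how many
-- of them have a request actually executing right now.
SELECT COUNT(*) AS user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

SELECT COUNT(*) AS active_requests
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- conventional filter to skip system sessions
```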

What prompted my question is that when I look at the most costly queries and their query plans, for most of them it says that they have only executed 1 or 2 times but that the plan was created within the last 15 minutes. This tells me there are high levels of query plan compilation, with near-identical queries each getting a fresh plan, which might go away with forced parameterization.
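
One way I have been trying to quantify that suspicion is to count single-use plans in the cache (a sketch against the plan-cache DMVs; if "Adhoc" plans with usecounts = 1 dominate, the cache is churning on unparameterized SQL):

```sql
-- Breakdown of the plan cache by object type: a high share of single-use
-- "Adhoc" plans suggests compilation churn from unparameterized queries.
SELECT cp.objtype,
       COUNT(*)                                          AS total_plans,
       SUM(CASE WHEN cp.usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans,
       SUM(CAST(cp.size_in_bytes AS bigint)) / 1048576   AS cache_mb
FROM sys.dm_exec_cached_plans AS cp
GROUP BY cp.objtype
ORDER BY cache_mb DESC;
```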

Oh. It acts as both an OLTP and OLAP database in that there is not a separate reporting server. We are in the process of migrating to Enterprise Edition with Always On and plan on pointing much of the reporting to the Read-Only replica.
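
For what it's worth, once the AG is up, pointing reporting at the secondary takes a routing config on the availability group plus a read-intent flag in the reporting connection strings. A sketch, where the AG and replica names (AG1, SQLNODE1, SQLNODE2) and URLs are placeholders, not values from this thread:

```sql
-- Allow read-intent connections on the secondary and give it a routing URL.
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'SQLNODE2' WITH
    (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                     READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.example.com:1433'));

-- Tell the primary where to route read-intent connections.
ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'SQLNODE1' WITH
    (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2', N'SQLNODE1')));
```

Clients then connect through the listener with ApplicationIntent=ReadOnly in the connection string to get routed to the secondary.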

I personally have never used the forced parameterization setting. Usually, before I make any config changes on our servers, I try to reason out with the devs what the underlying issue is: more RAM, config changes, and other setting tweaks made without first vetting the underlying issues can come back to bite you later on.
Missing indexes, no log backups ==> huge log files, badly written sprocs, mid-tier issues between the back end and the front end, etc. are the things I try to vet first before tweaking anything else.

Yeah. Unfortunately, our application uses LINQ and EF, which means the devs have full control over accessing the database. No stored procs in sight :frowning:

This makes it challenging to get that aspect of the performance issues addressed. It feels like we have already used the "throw more hardware at the problem" solution to the max. That's why I am starting to look at some of these parameters.

I guess I just need to keep sending poorly performing queries to the devs, hope they can reverse engineer them to find the source in the code, and hope they can tune them. It is frustrating.
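
In the meantime, this is the kind of extract I have been handing over (a sketch: top statements by total CPU from the cached plan stats, so the devs can match the text back to the LINQ that generated it):

```sql
-- Top 20 cached statements by total CPU, with the statement text carved out
-- of its batch so it can be traced back to the generating code.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time / 1000 AS total_cpu_ms,
       (qs.total_elapsed_time / qs.execution_count) / 1000 AS avg_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```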

I despise LINQ and EF. It is not because these technologies are bad in and of themselves; it is how they are implemented and the architectural approach. At the last place I worked, the pagination on one of the pages was horrendous for performance. On review of the .NET code we found out it was not designed to scale, but there were also a couple of approaches we could take on the DB side. So if you work collaboratively with the devs you can get great results; otherwise you as the DBA have to push back.
You can do a lot more to sniff out problematic calls to the back end and report them to the devs.
Some .NET C# devs think that just because they have LINQ and EF they are suddenly SQL experts.
If you can, enable some sort of logging and capture specific types of activity based on the source application name or the specific service account used by the app or web pool, to narrow down the offending calls to the database; something like the Extended Events sketch below.
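
A sketch of that kind of session (the app-name filter, duration threshold, and file name are placeholders, not values from this thread):

```sql
-- Capture slow calls from one application only. EF mostly arrives as RPC
-- (sp_executesql), so rpc_completed is the main event; batches are covered too.
CREATE EVENT SESSION capture_app_calls ON SERVER
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text)
     WHERE sqlserver.client_app_name = N'MyWebApp'   -- placeholder app name
       AND duration > 500000),                       -- > 0.5 s (microseconds)
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.sql_text)
     WHERE sqlserver.client_app_name = N'MyWebApp'
       AND duration > 500000)
ADD TARGET package0.event_file (SET filename = N'capture_app_calls.xel');

ALTER EVENT SESSION capture_app_calls ON SERVER STATE = START;
```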
You might also look into search technologies such as Solr or Elasticsearch to offload some of the strain on your SQL Servers.