I'd love it except for the data-breach blame part. The DBA can't be held responsible for things like phishing, infrastructure attacks (Spectre, etc.), and other forms of stupidity beyond the DBA's control. For example, I went through a problem where someone got into our network, somehow figured out my 13-character password on the AD side of the house, and started dropping databases using my login. They determined that the attacker got in through someone clicking on an O365 Outlook email (and I don't know how they figured that out). There's also this ransomware stuff and a whole bunch of things on the Windows side that DBAs shouldn't have to mess with, nor be responsible for, simply because it's all too big for just a DBA team to handle.
Shifting gears a bit, I will say that I was large and in charge of one application where I enforced users and the application having only PUBLIC privs and the privs to only execute stored procedures that I had personally reviewed. It was incredibly efficient and there were no performance issues. It wasn't because I was the person doing the enforcement and reviews of the stored procedures. It was because the folks I was working with actually understood how to write the stored procedures and why it was important. All I did was occasionally help the devs over a hurdle when writing the code and during reviews.
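The permissions model described above can be sketched in T-SQL roughly like this (a minimal sketch; the role and procedure names are made up for illustration, not taken from the original system):

```sql
-- The application role gets no table or view access at all --
-- no SELECT, INSERT, UPDATE, or DELETE grants anywhere.
CREATE ROLE AppUsers;

-- Only reviewed stored procedures are granted, one by one:
GRANT EXECUTE ON dbo.GetCustomer TO AppUsers;
GRANT EXECUTE ON dbo.SaveOrder   TO AppUsers;
```

With ownership chaining, the procedures can touch the underlying tables even though the caller never holds direct table privileges.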
In the end, the Devs were pretty amazed at how quickly we were able to put together both front-end and back-end code, and how easy it was to make changes and corrections even with me being a review bottleneck, because we ended up with nearly zero rework. A large part of our success was also that we formed a rock-solid team that spent zero time bickering among ourselves.
And the penetration testing done by a third party came back with zero suggestions, never mind faults. And, to be sure, we did have a "current user" variable for everything facing the front end and had other things within the stored procedures to track other sources of usage. It was pretty cool.
I was very fortunate to have the opportunity to do that once in my life. I very seriously doubt that opportunity will ever present itself again, but that shouldn't stop people from trying to get there. The closer you can get to that, the better your application and database will be. You'll also be amazed at how fast you get the job done, because you can approach the Nirvana of zero rework, and rework typically takes a whole lot longer than just doing it right the first time.
Not by my design, but we have that at work as well. The trouble is, whoever designed it really screwed up. Instead of doing the slightly more complicated thing of auto-magically converting users to integers, they used a VARCHAR(50) for the "LastModifiedBy" column. The problem with that is that it causes "ExpAnsive" updates, which take a perfectly designed, ever-increasing clustered index that should never fragment and fragment the bloody bejeezus out of it. And, no... lowering the Fill Factor on a 146GB CI to prevent the fragmentation is just stupid when it comes to both totally wasted memory and disk space. I do have a couple of workarounds, but you shouldn't need one. To make matters worse, such "poor man's auditing" is pretty much ineffective and certainly isn't tamper-proof. So far as I'm concerned, it's a total waste.
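For anyone following along, the design being criticized looks roughly like this (a sketch; all object names are illustrative, not the actual table):

```sql
-- A nullable, variable-width audit column on a table clustered
-- on an ever-increasing key.
CREATE TABLE dbo.BigAuditedTable
(
    RowID          INT IDENTITY(1,1) NOT NULL,
    -- ... other payload columns here ...
    LastModifiedBy VARCHAR(50) NULL,  -- NULL on insert, filled in later
    CONSTRAINT PK_BigAuditedTable
        PRIMARY KEY CLUSTERED (RowID)
);
-- Rows go in with zero bytes stored for LastModifiedBy. The first
-- UPDATE writes up to 50 bytes into a row on an already-full page,
-- the row no longer fits, and the page splits: an "ExpAnsive" update.
```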
And, no... please... that is NOT meant as a personal jab at you. I used to advocate the same thing until I actually sat down and figured out all of the actual damage it does when it comes to things like indexes.
Here's the 146GB CI that I'm talking about... notice the extremely low page densities at the right side of the chart... that's mostly from doing an update to the LastModifiedBy column and the logical fragmentation within that "silo" is also incredibly high.
There are no indices on the audit columns. What you're looking at is the clustered index. It's keyed on an "ever-increasing" value and should never fragment. The fragmentation is caused by the "ExpAnsive" growth of the ModifiedBy column going from NULL to some value, which causes the row to no longer fit on the page, causing a page split, which is what is causing both the logical and the rather nasty physical (page-density) fragmentation that you're seeing in the chart.
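If you want to see the same two numbers on your own clustered index, this is the standard DMV for it (the table name here is a placeholder):

```sql
-- Logical fragmentation and page density for the clustered index (index_id = 1).
SELECT  index_id,
        index_level,
        avg_fragmentation_in_percent,    -- logical fragmentation
        avg_page_space_used_in_percent   -- page density
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID('dbo.BigAuditedTable'), 1, NULL, 'DETAILED');
```

Note that the 'DETAILED' mode reads every page, so it's not free on a 146GB index; 'SAMPLED' is the cheaper alternative.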
Of course, the clustered index is the worst place for page splits to occur.
I don't know about using the objectUID coming from AD, but it won't matter if you start the ModifiedBy column as a NULL, because you'll still get the expansion of rows when you fill it. The only way to solve this issue is to stop the expansion, either by making the ModifiedBy column a fixed width (can be a total waste if sporadically populated); assigning blanks during the first insert equal in number to the average size, or slightly larger, of the entries in the column; defaulting it to a blank and moving it out of row so the fixed-length out-of-row pointer is always assigned; OR creating the column as an INT for a user number and having a trigger always look up the user name to retrieve that number. I suppose the AD objectUID could be used for that, but UIDs are typically 16 bytes where an INT is only 4 bytes.
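That last option (INT user number plus a trigger lookup) could be sketched like this; all object names, including the RowID key column, are illustrative assumptions, not the real schema:

```sql
-- Lookup table mapping login names to 4-byte user numbers.
CREATE TABLE dbo.AppUser
(
    UserID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    UserName SYSNAME NOT NULL UNIQUE
);

CREATE TRIGGER trg_SomeTable_Audit
ON dbo.SomeTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Resolve the current login to its user number; 0 = unknown.
    UPDATE st
       SET ModifiedByID = COALESCE(au.UserID, 0)
      FROM dbo.SomeTable st
      JOIN inserted i
        ON i.RowID = st.RowID            -- RowID = placeholder key column
      LEFT JOIN dbo.AppUser au
        ON au.UserName = SUSER_SNAME();
END;
```

Because ModifiedByID is a fixed 4 bytes whether it holds 0 or a real user number, the update never expands the row. (A trigger updating its own table only re-fires if the RECURSIVE_TRIGGERS database option is on, which it isn't by default.)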
I like the INT approach: make it NOT NULL and use a bogus value of -1 as the default? But maybe you are saying the data type is not the issue, but rather the fact that it is a nullable column that, when updated, causes fragmentation?
Going forward, I see the role of DBA as socially needed. Keeper of the password. Responsible.
Knowledgeable on the details. Provides data for audits (I have a product idea for audit companies, based on the above approach, if anyone is interested in writing it. Works for law enforcement too.)
I say 'socially needed' because trust has been violated worldwide. The technical approach to providing trust (e.g. blockchain) is insufficient to regain trust, societally. Responsible people are needed. I expect DBAs to be prestigious, public positions with lots of responsibility and respect. Officer of the corporation, with public accountability. An enviable position, due to automation.
Also, I expect a lot of software must be rewritten.
No... INT is a fixed-width datatype. It occupies the same space in a row whether it is NULL or NOT NULL, and so will never create an "ExpAnsive" update. That notwithstanding, NULL means "unknown" to me, and at the time of the INSERT for the row we absolutely know that it has not yet been modified. Combine that with my dislike of NULLs and the fact that SQL Server indexes don't care for them either, and the concept of making the column NOT NULL and populating it with something would be the way I would go, although I'd likely use a zero to represent "Not Modified Yet".
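In DDL, that NOT-NULL-with-a-zero-sentinel idea comes out as something like this (table and constraint names are illustrative):

```sql
-- 0 means "not modified yet"; no NULLs anywhere in the column.
ALTER TABLE dbo.SomeTable
    ADD LastModifiedByID INT NOT NULL
        CONSTRAINT DF_SomeTable_LastModifiedByID DEFAULT (0);
-- Later updates overwrite the same fixed 4 bytes with a real user
-- number, so the row never expands and no page split occurs.
```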
I agree. I wrote a mapping utility for stored procedures about 18 years ago (recently updated for all the datatypes in SQL Server). If null, parameters are filled with default values.
Secondly, as far as I can tell, SQL Server won't let you declare a parameter as NOT NULL, but the utility can force a parameter to not be null during mapping (this requires modifying the 'database reflection' stored procedure).
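That's right: T-SQL has no NOT NULL modifier on parameters, so the usual server-side workaround is an explicit guard at the top of the procedure. A sketch, with illustrative names:

```sql
CREATE PROCEDURE dbo.SaveWidget
    @WidgetName VARCHAR(50)
AS
BEGIN
    IF @WidgetName IS NULL
        THROW 50001, 'The @WidgetName parameter may not be NULL.', 1;
    -- ... rest of the procedure ...
END;
```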
EDIT: the utility also supports Nullable types in .NET. If the variable is Nullable, it will send null to the database, otherwise it sends the default value for that type.