I'm in the midst of a strange issue. A client moved their SQL Server database server from Hyper-V to VMware on Saturday. Yesterday they called me complaining about performance. In the brief time I had, with limited access, I saw many indicators of poor I/O (among other problems) and made a few recommendations.
They restarted their SQL server to install the VMware Tools and take care of a few other tasks. Upon restart, their data reverted (from their perspective) to the moment they first brought the database up on the new VM. It turned out the database files were still pointed at the old iSCSI drives instead of the new VMDK files, and nothing at all was being written to disk. They tried restoring their backup, but it fails every time before completion.
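For anyone hitting something similar: a quick way to confirm where SQL Server actually thinks its data and log files live is to query `sys.master_files`. This is a sketch of the first diagnostic step I'd take, not a query from the incident itself:

```sql
-- List every database file SQL Server knows about, with its path and state.
-- If physical_name still points at the old iSCSI volume rather than the new
-- VMDK-backed drive, the storage move never actually took effect.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name,       -- full OS path of the file
       state_desc           -- ONLINE, RECOVERY_PENDING, etc.
FROM   sys.master_files
ORDER  BY database_name, file_id;
```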
Did the database just fill up the dirty cache and never write to disk? How in the world did it continue to function for five days while presenting nothing more than slow I/O to the end users? A complicating factor is that they didn't have any alerts set up on the database that might have warned them of these kinds of problems. The event logs on the server itself do show numerous write errors.
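Besides the Windows event log, SQL Server records failed page reads and writes on its own side in `msdb.dbo.suspect_pages` (available on SQL Server 2005 and later). A sketch of the check I'd run to corroborate the event-log write errors:

```sql
-- Pages SQL Server has flagged for I/O problems (823/824 errors,
-- checksum failures, torn pages). Rows here corroborate the
-- write errors seen in the Windows event log.
SELECT database_id, file_id, page_id,
       event_type,      -- 1-3 = I/O error / bad checksum / torn page
       error_count, last_update_date
FROM   msdb.dbo.suspect_pages
ORDER  BY last_update_date DESC;
```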