I agree with you that in the normal case the data should be protected either by check constraints or by the validation logic of the application. However, I see three situations where the tool is useful:
One is where the application really is not validating the data. This was the initial reason I wrote the tool. It was for a newborn clinic in a hospital where the doctors entered the lengths of the newborn babies. Because the application was built on legacy technologies, the length wasn't validated by the application, and no one was around to fix it. They tried to correct the invalid lengths, but some invalid values remained, so they had to be filtered out for every statistic.
The second is when the validation logic changes, or when you are merging databases with different validation constraints, for example to load them into a data warehouse. In that case it can be helpful to identify the "invalid" entries at the source.
The third is when you receive a database and don't know what the data inside looks like. You can quickly define validation rules that flag uncommon values, and then investigate the entries carrying those values.
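As a rough illustration of that third case, a range-based rule could be sketched like this. The column name, the bounds, and the sample records are all hypothetical; this is not the tool's actual API, just the idea of flagging out-of-range values for inspection:

```python
# Sketch of a range-based validation rule: flag entries whose value
# falls outside a plausible range so they can be investigated.
# Column name, bounds, and sample data are hypothetical.

def find_uncommon(rows, column, low, high):
    """Return the rows whose value in `column` lies outside [low, high]."""
    return [r for r in rows if not (low <= r[column] <= high)]

records = [
    {"id": 1, "length_cm": 51.0},
    {"id": 2, "length_cm": 510.0},  # probably a misplaced decimal point
    {"id": 3, "length_cm": 48.5},
    {"id": 4, "length_cm": 0.0},    # likely a missing value stored as 0
]

# Assumed plausible newborn length: roughly 30-60 cm.
suspects = find_uncommon(records, "length_cm", 30.0, 60.0)
for r in suspects:
    print(r["id"], r["length_cm"])
```

The point is that the rule does not have to be exact: it only has to separate the common values from the ones worth a closer look.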