I set a database to the simple recovery model and tried to insert 125 million rows into a table. However, the transaction log grew from 25 MB to 47 GB. The server is clustered; could clustering have caused this?
Even in the simple recovery model, every modification is still written to the transaction log. A single large INSERT is one transaction, so the log cannot be truncated until that transaction commits, which is why the log file grows.
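One common workaround is to commit the insert in batches so that, under the simple recovery model, log space can be reused between batches. A sketch, assuming hypothetical table names `dbo.Source` and `dbo.Target` with a key column `Id`:

```sql
-- Insert in batches of 1,000,000 rows; under the SIMPLE recovery model,
-- committing each batch lets checkpoints truncate and reuse log space.
-- Table and column names here are illustrative, not from the original post.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.Target (Id, Payload)
    SELECT TOP (1000000) s.Id, s.Payload
    FROM dbo.Source AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.Id = s.Id);

    SET @rows = @@ROWCOUNT;   -- rows copied in this batch; 0 means done
    CHECKPOINT;               -- encourage log truncation between batches
END;
```

You can watch the log usage while the load runs with `DBCC SQLPERF(LOGSPACE);`.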
If you use bcp to perform the insert while the database is in the bulk-logged recovery model, the load can be minimally logged, so the log will not grow as much at insert time; the changed extents are instead written out when you take the next log backup.
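As a rough sketch of that approach (the database name, table name, server name, and file path below are all placeholders, not from the original post):

```sql
-- Switch to bulk-logged for the duration of the load, then switch back.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
```

```
rem Load from a character-format data file in 1,000,000-row batches.
rem -T = trusted (Windows) authentication, -c = character data, -b = batch size.
bcp MyDb.dbo.Target in C:\data\rows.dat -S MyServer -T -c -b 1000000
```

Remember to take a log backup and set the recovery model back afterwards if the database normally runs in full recovery.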
Hope this helps.
Irrespective of the recovery model, your log will grow when performing huge single-transaction inserts and updates.
Alternatively, as @dennisc suggested, use bcp or BULK INSERT.
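For completeness, the T-SQL equivalent of a bcp load is BULK INSERT. A minimal sketch, assuming a hypothetical comma-delimited data file and target table:

```sql
-- File path and table name are illustrative. TABLOCK helps the load qualify
-- for minimal logging; BATCHSIZE commits in chunks so the log can be reused.
BULK INSERT dbo.Target
FROM 'C:\data\rows.dat'
WITH (
    TABLOCK,
    BATCHSIZE = 1000000,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);
```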
Where are you inserting from and to? I ask because the concept of "minimally logged" operations might apply here. DennisC briefly touched on this in his post above, but some minimal logging can be achieved depending on the condition of the target table.
For example, does the target table already have data in it, or does it start out empty? There are multiple required conditions for minimal logging, but until we know the starting conditions of the source and target, there's no way to determine whether minimal logging can be accomplished, nor whether "batching" the data would be appropriate.
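To illustrate one of those conditions: under the simple or bulk-logged recovery model, an INSERT...SELECT into an empty heap taken with a table lock can qualify for minimal logging. A sketch with hypothetical table names:

```sql
-- dbo.Target is assumed to be an empty heap (no clustered index, no data).
-- WITH (TABLOCK) plus SIMPLE or BULK_LOGGED recovery is one of the standard
-- prerequisite combinations for a minimally logged INSERT...SELECT.
INSERT INTO dbo.Target WITH (TABLOCK) (Id, Payload)
SELECT Id, Payload
FROM dbo.Source;
```

If the target already has data or indexes, the prerequisites change (and on older versions trace flag 610 came into play), which is exactly why the starting conditions matter here.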