I don't know how many rows you generated, but consider the size of the statements that were generated: every row is a full-blown INSERT/VALUES with the full column list repeated for every single row. It's damned near as bad as XML.
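For illustration (the table and column names here are made up), the generated script has this shape:

```sql
-- One full statement per row; the repeated column list can dwarf the actual data.
INSERT INTO dbo.MyTable (SomeID, SomeName, SomeDate) VALUES (1, 'Alpha', '2024-01-01');
INSERT INTO dbo.MyTable (SomeID, SomeName, SomeDate) VALUES (2, 'Bravo', '2024-01-02');
-- ...and so on, once for every row in the table.
```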
Then, on top of all that, you're running it all in a single batch. Again, I don't know how many rows you actually ended up with, but there's an execution plan that gets generated for each and every one of those statements, even if it's a "reused" plan.
It would be much better if you used a single INSERT/SELECT to insert into your remote server. An even faster way would be to do the same thing that replication does on an initial table setup... BCP the data out of your source server in "native" format and then do either a BCP insert into the remote server or a BULK INSERT. If you can also take advantage of "Minimal Logging", it will be at least twice as fast and you can populate the remote table with the clustered index in place. If you look up Trace Flag 610, you can do even faster inserts and can also take advantage of parallel inserts, if the notion strikes you.
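Just as a sketch (server, database, and column names are all hypothetical; "SourceServer" would be a linked server), the single INSERT/SELECT run at the destination looks like this:

```sql
-- Run at the destination server, pulling across a linked server.
-- TABLOCK on the local target table is one of the prerequisites for
-- minimal logging (along with the right recovery model, etc.).
INSERT INTO dbo.MyTable WITH (TABLOCK)
       (SomeID, SomeName, SomeDate)
SELECT SomeID, SomeName, SomeDate
  FROM SourceServer.SourceDb.dbo.MyTable;
```

For the BCP route, it's something like `bcp SourceDb.dbo.MyTable out MyTable.dat -n -T -S SourceServer` followed by `bcp DestDb.dbo.MyTable in MyTable.dat -n -T -S DestServer` (the `-n` switch is what gets you "native" format), or a `BULK INSERT ... WITH (TABLOCK)` against the same file on the destination side.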
But those 10 bazillion INSERT/VALUES statements just aren't going to do it for you, even for such a relatively small table.