Option 1: Run away!
If there is a UNIQUE identifier in the table you could use that as your filter for "what is new", but it won't help with "what is changed". If, perchance, you need new rows more urgently than you need changed rows, then that might help.
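For illustration, a minimal "high-water mark" pull, assuming SQL Server, a linked server called REMOTE, and hypothetical table/column names (dbo.Staging_Orders and its columns are invented for this answer):

    DECLARE @LastID int =
        (SELECT ISNULL(MAX(ID), 0) FROM dbo.Staging_Orders);

    -- Pull only rows we have never seen before;
    -- rows changed in place will NOT be picked up by this
    INSERT INTO dbo.Staging_Orders (ID, CustomerName, Amount, ChangedOn)
    SELECT  R.ID, R.CustomerName, R.Amount, GETDATE()
    FROM    REMOTE.SourceDb.dbo.Orders AS R
    WHERE   R.ID > @LastID;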
You can use a checksum (e.g. SQL Server's CHECKSUM) to figure out which rows have changed (more easily than comparing every single column), but a checksum has a tiny chance of a false negative: two different rows can produce the same value, so a change is very occasionally missed.
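A sketch of that comparison, using the same hypothetical names as above (HASHBYTES with a strong algorithm makes collisions negligible, if that tiny risk bothers you):

    -- Rows whose current checksum no longer matches our copy's checksum;
    -- note CHECKSUM can collide, so a changed row is very occasionally missed
    SELECT  R.ID
    FROM    REMOTE.SourceDb.dbo.Orders AS R
    JOIN    dbo.Staging_Orders         AS S ON S.ID = R.ID
    WHERE   CHECKSUM(R.CustomerName, R.Amount) <> CHECKSUM(S.CustomerName, S.Amount);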
I take a very simple view in these circumstances: I require that our end knows when the row was changed. It always comes up in debate when the client is pointing fingers, and we have always saved our own time, and face, by being able to say "This last changed ON xxx" and diagnose where the problem stemmed from. If the other party has no ChangeDate and no PKeys, they haven't got a clue what changed, or when. So we have a DateTime column meaning "we last received a MODIFIED version of this row ON". We use a staging table for this, and then further process from the staging table into our own tables. The staging table is an exact replica of the remote, but only in the columns that are of interest to us (which might be all of them ...). (**)
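To make the shape concrete, a hypothetical version of such a staging table (the real one mirrors whatever subset of remote columns you actually care about):

    CREATE TABLE dbo.Staging_Orders
    (
        ID           int           NOT NULL PRIMARY KEY, -- remote PKey, copied verbatim
        CustomerName varchar(50)   NOT NULL,             -- exact copy of the remote column
        Amount       decimal(19,4) NOT NULL,             -- exact copy of the remote column
        ChangedOn    datetime      NOT NULL              -- "we last received a MODIFIED version ON"
    );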
Where there is no ChangeDate in the remote table we pull the whole table into a scratch-pad table, compare every single row and column, and update only those rows that are different (with [ChangedOn] set to the current date/time). (***)
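A sketch of that compare-and-stamp step, assuming a dbo.Scratch_Orders table that holds the freshly pulled copy (the EXISTS/EXCEPT trick gives a NULL-safe column comparison in T-SQL):

    -- Update rows where any column of interest genuinely differs,
    -- stamping [ChangedOn] as we go
    UPDATE  S
    SET     S.CustomerName = X.CustomerName,
            S.Amount       = X.Amount,
            S.ChangedOn    = GETDATE()
    FROM    dbo.Staging_Orders AS S
    JOIN    dbo.Scratch_Orders AS X ON X.ID = S.ID
    WHERE   EXISTS (SELECT X.CustomerName, X.Amount
                    EXCEPT
                    SELECT S.CustomerName, S.Amount);  -- NULL-safe "is different"

    -- Brand-new rows just get inserted with [ChangedOn] set
    INSERT INTO dbo.Staging_Orders (ID, CustomerName, Amount, ChangedOn)
    SELECT  X.ID, X.CustomerName, X.Amount, GETDATE()
    FROM    dbo.Scratch_Orders AS X
    WHERE   NOT EXISTS (SELECT 1 FROM dbo.Staging_Orders AS S WHERE S.ID = X.ID);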
We can then further process that [into our own tables], filtered to "only rows which are new/changed since last time".
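Something along these lines, where dbo.EtlRunLog is an invented bookkeeping table holding each job's last run time:

    DECLARE @LastRunAt datetime =
        (SELECT LastRunAt FROM dbo.EtlRunLog WHERE JobName = 'Orders');

    -- Only new/changed rows feed the next step into the APP DB
    SELECT  ID, CustomerName, Amount
    FROM    dbo.Staging_Orders
    WHERE   ChangedOn > @LastRunAt;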
So basically we have "figure out what has changed between the remote DB and our staging tables" and then "map that onto our APP DB's tables / columns". We mechanically generate all that code, including all validation logic (e.g. "remote column values have become longer than the local column"; see the sketch below). Sounds like, with hundreds of tables to do, it might be worth your while doing the same. We have a template of what we would like the ideal code to look like, and the actual tables and lists of column-actions are just "injected" into it, so wherever possible we make zero changes by hand. It's a boring job; the computer does it consistently every time.
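A hand-written stand-in for what the generator emits: the first query is the INFORMATION_SCHEMA metadata that feeds the template, the second is one flavour of generated validation (the column name and the width of 50 are assumptions carried over from the example above):

    -- Metadata that drives the template: every character column and its width
    SELECT  TABLE_NAME, COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH
    FROM    INFORMATION_SCHEMA.COLUMNS
    WHERE   TABLE_SCHEMA = 'dbo'
      AND   CHARACTER_MAXIMUM_LENGTH IS NOT NULL;

    -- One generated check: remote values that would no longer fit our local column
    SELECT  R.ID, LEN(R.CustomerName) AS RemoteLen
    FROM    REMOTE.SourceDb.dbo.Orders AS R
    WHERE   LEN(R.CustomerName) > 50;   -- 50 = our local column's declared width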
(**) Possibly pertinent point: we take the view that the staging table should be an identical replica of the remote. We NEVER massage column values when pulling data from the remote. That means that when we find a bug in the upstream code we can fix it, rerun the UPDATE (from ChangeDate >= '19000101'), and we don't have to pull ALL that data AGAIN from the remote. (If we need an additional column, then that does mean a big query against the remote.)
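That "rerun from the dawn of time" is just the normal downstream step with its filter opened right up, e.g.:

    -- Reprocess every staging row as if it had just changed;
    -- no traffic to the remote is needed
    SELECT  ID, CustomerName, Amount
    FROM    dbo.Staging_Orders
    WHERE   ChangedOn >= '19000101';   -- i.e. everything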
(***) If the remote is on a slow connection, or pulling TB over the wire would annoy everyone else(!), we mount a duplicate set of staging tables close to the source DB. We freshen those up with a full compare-everything pass, setting the ChangeDate column only on rows that are materially changed (i.e. ignoring any columns in the original tables that are not part of our process / staging table). We can then use that ChangeDate to move ONLY the changed rows across the wire, from the remote staging tables to our local staging tables.
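The cross-the-wire step might then look like this (MERGE keeps the local copy in step; EtlRunLog is the same invented bookkeeping table as above):

    DECLARE @LastPull datetime =
        (SELECT LastRunAt FROM dbo.EtlRunLog WHERE JobName = 'Orders_Pull');

    -- Only materially-changed rows travel across the slow link
    MERGE dbo.Staging_Orders AS S
    USING (SELECT ID, CustomerName, Amount, ChangedOn
           FROM   REMOTE.StagingDb.dbo.Staging_Orders
           WHERE  ChangedOn > @LastPull) AS R
          ON (S.ID = R.ID)
    WHEN MATCHED THEN
        UPDATE SET S.CustomerName = R.CustomerName,
                   S.Amount       = R.Amount,
                   S.ChangedOn    = R.ChangedOn
    WHEN NOT MATCHED THEN
        INSERT (ID, CustomerName, Amount, ChangedOn)
        VALUES (R.ID, R.CustomerName, R.Amount, R.ChangedOn);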