Filenames are not relevant to the SQL code, but perhaps you meant Database Name?
In principle if the table already exists on both the Local and the Remote you can do:
INSERT INTO MyRemoteServer.MyRemoteDatabase.dbo.MyRemoteTableName
SELECT *
FROM dbo.MyLocalTableName
or
FROM MyLocalDatabase.dbo.MyLocalTableName
If you need to "empty" the target table first then
TRUNCATE TABLE MyRemoteServer.MyRemoteDatabase.dbo.MyRemoteTableName
(You cannot do that on a table that is referenced by Foreign Keys, so you would have to use DELETE instead - but beware that DELETE will LOG every deleted row, so on a big table it will use a lot of Log File space.)
If your table has an IDENTITY column then you need to SET IDENTITY_INSERT ON for the (remote) table before the operation, and set it OFF afterwards - and while it is ON you must provide an explicit column list in the INSERT.
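The pattern looks like this (a sketch only, with made-up column names - note that SET IDENTITY_INSERT accepts at most a three-part name, so for a table on a Linked Server you may need to run the whole batch at the remote end, e.g. via EXECUTE ... AT MyRemoteServer):
SET IDENTITY_INSERT MyRemoteDatabase.dbo.MyRemoteTableName ON
INSERT INTO MyRemoteDatabase.dbo.MyRemoteTableName (MyID, SomeColumn)
SELECT MyID, SomeColumn
FROM dbo.MyLocalTableName
SET IDENTITY_INSERT MyRemoteDatabase.dbo.MyRemoteTableName OFF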
In order to be able to use "MyRemoteServer" that would first need to be set up as a LINKED SERVER.
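Setting that up is a one-time job - something like this (a sketch only; the security mapping in particular will depend on your environment):
EXEC sp_addlinkedserver @server = N'MyRemoteServer', @srvproduct = N'SQL Server'
-- Map local logins to the remote server; here each login connects with its own credentials
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'MyRemoteServer', @useself = 'TRUE'
(With @srvproduct = 'SQL Server' the @server name must be the actual network name of the remote instance.)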
The 4-part naming, i.e. MyRemoteServer.MyRemoteDatabase.dbo.MyRemoteTableName, is fine for one-off operations, but if you are going to write operational code, e.g. Stored Procedures and Views, that references the remote server (or even uses 3-part naming to reference a table in a different database [on the Local server]) then I strongly recommend that you always use SYNONYMS.
So ... in the Local database you create a SYNONYM called "dbo.MyRemoteTableName" which physically refers to MyRemoteServer.MyRemoteDatabase.dbo.MyRemoteTableName
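i.e.
CREATE SYNONYM dbo.MyRemoteTableName
FOR MyRemoteServer.MyRemoteDatabase.dbo.MyRemoteTableName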
Then (executing in your local database) your code becomes
INSERT INTO dbo.MyRemoteTableName
SELECT *
FROM dbo.MyLocalTableName
and in the event that anything about the Remote Server/Database/Table name changes you can just DROP and reCREATE the Synonyms pointing to the new, correct, location.
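For example, if the remote database were renamed (the new name here is made up):
DROP SYNONYM dbo.MyRemoteTableName
CREATE SYNONYM dbo.MyRemoteTableName
FOR MyRemoteServer.MyNewRemoteDatabase.dbo.MyRemoteTableName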
All this said, copying data to a remote server can be slow (or tie up the network) if the table is big. So you could consider another approach which is to export the table(s) to a file(s), transfer the file (ZIP it up / whatever) and then import the files(s) at the far end. There are various ways to do this, but we use the external program BCP.
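The basic shape of that, using BCP from the command line, is something like this (a sketch with made-up names; -n is native format, -T is Windows authentication):
bcp MyLocalDatabase.dbo.MyLocalTableName out MyTable.dat -n -S MyLocalServer -T
bcp MyRemoteDatabase.dbo.MyRemoteTableName in MyTable.dat -n -S MyRemoteServer -T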
Benefits of this also include being able to control the batch size of the task - the plain SQL INSERT INTO statement will do ALL rows in a single transaction, and that may be A Big Processing Job for the receiving server,
so using a Bulk Import (and setting a sensible Batch Size) helps with that. You can also pre-sort the data, when exporting it to the transfer file, into Clustered Index Order, AND provide a HINT to the Bulk Import that the file is pre-sorted, and SQL will import the data more efficiently.
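So the refined version might look like this (again a sketch - "MyID" standing in for your clustered index key, and the 10,000 batch size plucked out of the air):
bcp "SELECT * FROM MyLocalDatabase.dbo.MyLocalTableName ORDER BY MyID" queryout MyTable.dat -n -S MyLocalServer -T
bcp MyRemoteDatabase.dbo.MyRemoteTableName in MyTable.dat -n -b 10000 -h "ORDER(MyID ASC), TABLOCK" -S MyRemoteServer -T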
Another performance suggestion, for the import of larger tables, is to DROP all non-clustered indexes BEFORE the import, and recreate them AFTER.
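e.g. (index and column names made up):
DROP INDEX IX_MyRemoteTableName_SomeColumn ON dbo.MyRemoteTableName
-- ... do the import ...
CREATE NONCLUSTERED INDEX IX_MyRemoteTableName_SomeColumn ON dbo.MyRemoteTableName (SomeColumn)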
You should also consider how you handle any referential integrity - if you transfer multiple tables which have FOREIGN KEY relationships. Ideally you would import the PARENT Table first and then the CHILD Table, but sometimes relationships are circular - so sometimes it is easier to drop the Foreign Keys before and recreate them after.
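e.g. (constraint and column names made up; WITH CHECK makes SQL re-validate the existing rows so the constraint remains trusted):
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTable
-- ... import Parent and Child tables ...
ALTER TABLE dbo.MyChildTable WITH CHECK
ADD CONSTRAINT FK_MyChildTable_MyParentTable FOREIGN KEY (ParentID) REFERENCES dbo.MyParentTable (ParentID)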
Also watch out if you have TRIGGERS on the remote tables - you might want to DISABLE TRIGGERS before the import, and ENABLE afterwards.
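e.g.
DISABLE TRIGGER ALL ON dbo.MyRemoteTableName
-- ... do the import ...
ENABLE TRIGGER ALL ON dbo.MyRemoteTableName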
Beware of Log File space usage during long operations. If the target database is in FULL Recovery Model then I recommend that you set up a Log Backup to run EVERY MINUTE during the import. (How much that helps depends on whether the import is a single transaction and a single table - multiple transactions (batches) and multiple tables will let the log back up more often, in smaller pieces.)
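The statement the job runs is just this (path made up; in practice you would generate a unique file name for each backup):
BACKUP LOG MyRemoteDatabase TO DISK = 'X:\Backups\MyRemoteDatabase_Log.trn'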
Hopefully way more things there than you ACTUALLY need to worry about! Either way, should help you (as one of my clients often says ...) "cover your Front, Back and BOTH Sides" 