Bert Wagner has some good JSON blogs for SQL Server:
His post on parsing indeterminate JSON is my go-to; it should work for your case where columns are missing.
All of his JSON blogs are here (there are four pages' worth):
https://bertwagner.com/category/sql/development/json.html
He covers some pretty deep stuff.
Regarding AWS S3 files, if SSIS proves to be problematic, you may want to consider the AWS CLI tool, using the S3 commands:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/index.html
Depending on your file naming scheme, this may offer a faster way to process thousands of files, assuming you're going to pull them from S3 to a local or WAN file store in your domain. Unfortunately it can't select newer files based on date created or modified, but it does have a sync operation that only grabs files that don't already exist in the destination. If you're only ever getting new files, and not reloading modified files, that might be beneficial.
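For example, something along these lines (the bucket name and local path are just placeholders) would pull down only the .json files that aren't already in the local folder:

    REM Preview first: --dryrun lists what would be copied without transferring anything
    aws s3 sync s3://your-bucket/incoming/ D:\S3Files\incoming --exclude "*" --include "*.json" --dryrun

    REM Actual copy: sync only transfers objects that are missing locally or have changed
    aws s3 sync s3://your-bucket/incoming/ D:\S3Files\incoming --exclude "*" --include "*.json"

The --exclude/--include pair limits it to JSON files, which helps if the bucket has other object types mixed in.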
Once the files are downloaded to your side, an SSIS ForEach loop should work, or you can process the JSON files by importing them into an NVARCHAR(MAX) column in a SQL Server table. OPENROWSET, BULK INSERT, or the bcp utility will all handle that kind of raw load. From there, you'd go with OPENJSON.
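As a rough sketch, assuming a staging table, a sample file path, and JSON property names that are all made up for illustration, loading one file and shredding it could look like this:

    -- Staging table: one row per file, raw JSON in an NVARCHAR(MAX) column
    CREATE TABLE dbo.JsonStage
    (
        FileName NVARCHAR(260) NOT NULL,
        JsonData NVARCHAR(MAX) NOT NULL
    );

    -- Load a single file; the SSIS ForEach loop (or bcp/BULK INSERT) would feed in each file name
    INSERT INTO dbo.JsonStage (FileName, JsonData)
    SELECT 'file0001.json', j.BulkColumn
    FROM OPENROWSET(BULK 'D:\S3Files\incoming\file0001.json', SINGLE_CLOB) AS j;

    -- Shred the JSON; with the default lax paths, missing properties just come back as NULL
    SELECT s.FileName, d.*
    FROM dbo.JsonStage AS s
    CROSS APPLY OPENJSON(s.JsonData)
         WITH
         (
             OrderId   INT           '$.orderId',
             Customer  NVARCHAR(100) '$.customer.name',
             OrderDate DATETIME2(0)  '$.orderDate'
         ) AS d;

Since OPENJSON paths are lax by default, files that are missing a property won't error out; you just get NULLs, which fits the missing-columns situation you described.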
As a point of reference, I recently did a project that was the reverse of your situation: I had to extract JPEG images stored in binary columns in SQL Server and upload them to S3. It was all done with bcp and the AWS CLI S3 commands. In the end there were tens of millions of files totaling over 6 TB. It took about two weeks in various stages, mostly because our uplink to AWS was throttled.
Also note that you will pay network egress charges to pull data from S3. It's not much per GB, but it adds up, so check your actual invoiced amounts regularly (daily, at least) until you have a good picture of what it will cost.

And if you're going to be dealing with hundreds of thousands of files, you'll need to manage them somehow on your local file store. You DO NOT want more than a few thousand files per folder on an NTFS file system; you'll see abysmal performance, and possibly directory corruption, once you exceed 50K-100K files. (One option to consider is moving your processed files in S3 to another bucket, or to a folder within the same bucket, so they're not picked up by the next round of processing; see the example below. That will involve additional costs, though.)
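If you go the move-in-S3 route, that can also be done with the CLI (again, the bucket and prefix names here are just placeholders):

    REM Move processed objects under a "processed/" prefix so the next sync doesn't pick them up
    aws s3 mv s3://your-bucket/incoming/ s3://your-bucket/processed/ --recursive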