Easy enough to shrink it ... but all sorts of possible consequences, and those almost certainly need considering.
If it was a one-off batch process that went haywire, then shrinking is a reasonable solution. Provided the haywire process is not run again, or not very often (I would define anything more frequent than "once a year" as too often).
My suggestion would be:
Shrink the log file
Put an alert on it so that if (when?!!) it grows back again you know exactly when it happened.
Then find what the cause was and "improve" that so that the log file expansion doesn't happen.
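A minimal sketch of the shrink-and-monitor steps above, assuming a hypothetical database `MyDb` whose log file's logical name is `MyDb_log` (substitute your own names):

```sql
USE MyDb;

-- Shrink the log file down to a target of 1 GB (the size argument is in MB)
DBCC SHRINKFILE (MyDb_log, 1024);

-- A simple basis for an alert: have a scheduled job run this and warn
-- if log_size_mb climbs back above your threshold
SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_bytes  / 1048576 AS used_mb
FROM sys.dm_db_log_space_usage;
```

The threshold and polling frequency are up to you; the point is just to capture the timestamp of the regrowth so it can be matched against scheduled jobs.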
It might be that an Index Rebuild is causing this (although I'm not sure how likely that is in Simple Recovery Model - it would require a massive table within the DB), but if something like that is the case then the log file growth will happen every time the Index Rebuild runs (might be "every Saturday night", for example).
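One way to pin down exactly when the growth happens is to read the auto-grow events from the default trace (assuming the default trace is still enabled on your server); the timestamps can then be matched against your job schedule:

```sql
-- Find the default trace file, then pull out Log File Auto Grow events
DECLARE @trace nvarchar(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.IntegerData * 8 / 1024 AS growth_mb   -- IntegerData is in 8 KB pages
FROM sys.fn_trace_gettable(@trace, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name = N'Log File Auto Grow'
ORDER BY t.StartTime DESC;
```

Note the default trace only keeps a rolling window of recent files, so run this reasonably soon after the growth occurs.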
Repeatedly shrinking the LDF file, to keep it under control, will cause physical fragmentation of the file, and will cost CPU cycles and I/O every time SQL Server has to re-extend it after the shrink.
If it DOES need to be big then I would manually extend it to that size in a large increment, so that there aren't loads of VLFs as a consequence of loads of, say, 50MB file-extensions ... (although 1GB shouldn't be a problem for VLFs compared to a file much much bigger than that ... but Every Little Helps)
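Growing in one step might look like this, again with the hypothetical `MyDb` / `MyDb_log` names (the 8 GB target is just an example):

```sql
-- Grow the log to its working size in a single large increment,
-- rather than letting many small autogrowths create lots of VLFs
ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_log, SIZE = 8192MB);

-- Check the resulting VLF count (sys.dm_db_log_info requires
-- SQL Server 2016 SP2 or later)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'MyDb'));
```

On older versions, `DBCC LOGINFO` gives the same per-VLF information, one row per VLF.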