It should have been obvious, but the text files I alluded to in the previous post had headers, as they were set up for ETL based on a delimiter [in this case ^].
So I wrote code to sort and remove duplicates, but the sort pushed the header row to the last line of the de-duped file. I then added code to preserve the header row, writing it first and appending the unique de-duped 'body' to it, which left the header at both the beginning and the end of the output file. So I added extra code to remove the last 2 lines [assuming CRLF at end] of the output file. That all worked well, but of course it turned out the process would only work for that specific file format.
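For anyone following along, here's a rough Python sketch [not WB] of that sort-and-dedupe logic; splitting the header off before the sort is one way to avoid it ending up at the bottom. The file names are just placeholders:

```python
# Minimal sketch, assuming the header is always line 1.
# "input.txt" / "deduped.txt" are placeholder names.
def dedupe_sorted(in_path: str, out_path: str) -> None:
    with open(in_path, "r", encoding="utf-8", newline="") as f:
        lines = f.readlines()

    header, body = lines[0], lines[1:]
    unique_body = sorted(set(body))      # sort + remove duplicates, body only

    with open(out_path, "w", encoding="utf-8", newline="") as f:
        f.write(header)                  # header stays at the top
        f.writelines(unique_body)

dedupe_sorted("input.txt", "deduped.txt")
```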
So I went back to the original idea of a map with fileread => filewrite, and that worked for several files I tested with headers. But, as Tony alluded, it can take time depending on file size and the number of duplicates.
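In Python terms [again, not WB, just to show the shape of it], that fileread => filewrite map loop amounts to something like this: each full line is the map key, and a line is written out only the first time it's seen, so order [including the header on line 1] is preserved automatically:

```python
# Streaming sketch: one pass, line-as-key map, write on first sight.
# File names are placeholders.
def dedupe_streaming(in_path: str, out_path: str) -> None:
    seen: dict[str, bool] = {}           # the "map" from the post
    with open(in_path, "r", encoding="utf-8", newline="") as fin, \
         open(out_path, "w", encoding="utf-8", newline="") as fout:
        for line in fin:
            if line not in seen:         # first occurrence only
                seen[line] = True
                fout.write(line)

dedupe_streaming("input.txt", "deduped.txt")
```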
Maybe a stupid ask, but is there a way WB could import a text file [with headers, but irrespective of the delimiter, so each line is treated as a single value rather than a column-based record] into an array as UNIQUE values, then export the array to a de-duped output file with the header intact?
And would that make any difference in speed compared with using a map procedure [as suggested by Tony and others]?
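To illustrate what I mean in Python [once more, not WB]: read every line into an order-preserving collection of unique values, then write them back out. Because each whole line is the key, the delimiter never matters and the header stays on top:

```python
# Sketch of the "whole file into an array of unique lines" idea.
# dict.fromkeys keeps first-seen order, so the header (line 1) stays first.
def dedupe_in_memory(in_path: str, out_path: str) -> None:
    with open(in_path, "r", encoding="utf-8", newline="") as f:
        unique = list(dict.fromkeys(f))  # first occurrence wins

    with open(out_path, "w", encoding="utf-8", newline="") as f:
        f.writelines(unique)

dedupe_in_memory("input.txt", "deduped.txt")
```

In these sketches, at least, both the map loop and the in-memory array come down to one hash lookup per line, so I'd guess any speed difference would come from file I/O rather than the data structure, but I can't speak to how WB handles maps or arrays internally.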