First, thanks to all of you for your input.
Think of this app as a way to keep track of items in your collections. Using baseball cards as an example, the client would first establish what information or attributes they would like to track (and search for) about each card in the collection. Examples would be the condition of the card from "mint" to "poor", the date the card was printed (if that can be determined), what you paid for it, the latest appraised value, the team name, the location of the card in your storage bin, etc.
As each card is acquired, the app is used to create a record containing the values of those attributes. So, a sample record in the file could look like this...
C5D19871206P8A16T123L012R235
with C5 being its "Condition" 5, D19871206 being Dec 6, 1987, P8 ... paid $8 for it, A16 ... last appraisal was $16, T123 ... team 123, L012 ... location 12, and R235 ... record number 235. The record number could be used to provide further detailed information that is not searchable but is accessible via the app.
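To make the format concrete, here's a rough WinBatch sketch of pulling one record apart. It assumes the tag letters C, D, P, A, T, L, R each appear exactly once, in that order, and that the values between them are purely numeric; the GetField helper is just something I made up for illustration.

#DefineFunction GetField (rec, tag, nexttag)
   p1 = StrIndex (rec, tag, 1, @FWDSCAN)                                 ; locate this field's tag letter
   If nexttag == "" Then Return StrSub (rec, p1 + 1, StrLen (rec) - p1)  ; last field runs to end of record
   p2 = StrIndex (rec, nexttag, p1 + 1, @FWDSCAN)                        ; locate the next tag letter
   Return StrSub (rec, p1 + 1, p2 - p1 - 1)                              ; the value sits between the two tags
#EndFunction

rec   = "C5D19871206P8A16T123L012R235"
cond  = GetField (rec, "C", "D")   ; "5"
pdate = GetField (rec, "D", "P")   ; "19871206"
paid  = GetField (rec, "P", "A")   ; "8"
appr  = GetField (rec, "A", "T")   ; "16"
team  = GetField (rec, "T", "L")   ; "123"
loc   = GetField (rec, "L", "R")   ; "012"
recno = GetField (rec, "R", "")    ; "235"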
Once this file is even partially complete, you have a way to locate one or more cards in your collection and pull them out to maybe sell, display, or trade.
The search program in the app allows the client to make multiple selections with AND/OR logic across the attributes; for instance, locating all Jackie Robinson cards AND condition 4 AND appraised at $10 or more.
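In code, each record would get one test per selected attribute, ANDed together. Here's a sketch of that example test, reusing the GetField helper above. Note the sample format has no player field, so I'm pretending the T code identifies the player here, and the code 42 is made up.

#DefineFunction MatchRec (rec)
   player = GetField (rec, "T", "L") + 0   ; pretending the T code identifies the player
   cond   = GetField (rec, "C", "D") + 0   ; "+ 0" forces a numeric compare
   appr   = GetField (rec, "A", "T") + 0
   Return (player == 42) && (cond == 4) && (appr >= 10)
#EndFunction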
So, with each search I need to go through all the records, since I don't know how many Jackie Robinson cards I have, where they are in the file, what condition each card is in, etc. Beyond that, I might need to examine each record multiple times, depending on the number of attributes defined and the complexity of the search. That's why I'm concerned about the response time of a search over a 100K card collection.
By pre-loading the data file into memory before any search runs, I only have to read the records from disk once, so I'm really looking for the fastest way to step through the data sequentially in memory using either a WinBatch list, an array, or the Binary functions.
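For the Binary route, this is the sort of loop I have in mind: one BinaryRead pulls the whole file in, and the search never touches the disk again. It assumes one record per CR/LF-terminated line in a file I'm calling cards.dat, and it reuses the MatchRec sketch above.

fname = "cards.dat"                        ; placeholder data file, one record per line
hBuf  = BinaryAlloc (FileSize (fname))     ; buffer sized to hold the whole file
BinaryRead (hBuf, fname)                   ; the one and only disk read
eod   = BinaryEodGet (hBuf)

hits = 0
pos  = 0
While pos < eod
   crlf = BinaryIndexEx (hBuf, pos, @CRLF, @FWDSCAN, @TRUE)
   If crlf == -1 Then crlf = eod                 ; last record may lack a trailing CR/LF
   rec = BinaryPeekStr (hBuf, pos, crlf - pos)   ; lift one record out as a string
   If MatchRec (rec) Then hits = hits + 1
   pos = crlf + 2                                ; step past the CR/LF pair
EndWhile
BinaryFree (hBuf)
Message ("Search done", StrCat (hits, " matching cards"))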
I'm trying to make this as simple and as fast as possible, which is why I haven't considered using a database.
Based on what I've read in your posts, I will likely upgrade to the latest WB+Compiler, compile this as a 64-bit executable, and then test the performance of each memory store and search method. I am working on creating test versions of the record set with 100K, 500K, and 1M records, and I'll post the resulting performance benchmarks.
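The timing harness will be something along these lines. TimeDiffSecs only resolves whole seconds, which should be coarse but adequate at these record counts; the file names are placeholders, and ScanBinary stands for the scan loop above packaged as a function that returns the hit count.

files = "cards-100k.dat,cards-500k.dat,cards-1m.dat"   ; placeholder test files
For ii = 1 To ItemCount (files, ",")
   fname = ItemExtract (ii, files, ",")
   start = TimeYmdHms ()                               ; wall-clock start
   hits  = ScanBinary (fname)                          ; placeholder wrapper around the scan loop above
   secs  = TimeDiffSecs (TimeYmdHms (), start)         ; elapsed whole seconds
   Message (fname, StrCat (secs, " seconds, ", hits, " hits"))
Next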
Thanks again.
Scifidude