CPU bound - Context Switching - Slow thread processing

etl2016

hi,

My environment is as follows: a virtual machine with 16 virtual processors and 64 GB memory. I have a small test CSV file with one hundred and sixty thousand rows. I am spinning up 16 threads, one thread per CPU, so each thread gets a workload of ten thousand rows. Each thread converts its ten thousand rows into an equivalent in-memory DataTable. Up to this stage all the processing happens in a few seconds (within 7-10 seconds from the start of the program I have a collection of 16 in-memory DataTables, each holding ten thousand rows). Disk IO ends at this stage; from here on the processing is CPU bound.

Each of the 16 threads then loops through its ten-thousand-row in-memory DataTable and connects to the database to update one of the columns in its DataTable. The overall throughput is observed to be 90 minutes for those ten thousand rows, which is roughly 100 rows per minute. Each of the 16 threads takes exactly the same amount of time, and within those common 90 minutes all 16 threads process their respective workloads of ten thousand rows. This looks like very low throughput. A closer look at the database turnaround times shows they are very fast: the database responds within a few milli/microseconds. There is no disk IO inside the loop; as soon as the value is looked up in the database, the in-memory DataTable is updated, and each of these instructions itself completes extremely fast (from the times logged). However, between successive instructions within the loop there is a short delay, probably due to context/thread switching.

Question 1: In a hardware setup of 16 virtual processors with 64 GB memory, with the program spinning up threads at a 1:1 ratio (and almost no other user programs running in the VM), why is context switching happening at all?

Question 2: Can async-style TPL programming help here? I believe it is of little help (please correct me otherwise), because there is nothing the program has to do while waiting for the response from the database. There are only 3 or 4 instructions inside the for-loop that processes the ten-thousand-row in-memory DataTable, and each instruction depends on the completion of the previous one; there is nothing an instruction X can do out of order while its predecessor X-1 is still being processed. These 3 or 4 instructions have to run sequentially. X, X+1, X+2 each complete extremely fast, in milli/microseconds. The root cause seems to be the short delay between some X and X+1, giving an overall throughput of just over 1 row per second.

Could you please share your views?

thank you
 
constructed and fired, as in lines 7 to 17 in the earlier post. As the next step, I am trying to figure out how to associate the ten thousand responses fed back by Redis with their respective requests so that the {key, value} pairs are aligned correctly. TPL code to achieve this association is being constructed. Any inputs are welcome.
My reading of the documentation suggests that the responses come back in the same order as the requests in the batch.

As for async updates of the DataTable, I believe the DataTable methods were written before async/await came along and have not been retrofitted with newer async Task methods like the other older classes in the framework. My gut feeling is that your biggest gain may be to partition the batch results and update the table rows in parallel, but that can only be verified with a profiler. By firing off more threads you may be forcing more context switching and actually slowing things down, like before when you were running hundreds of threads.
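
A minimal sketch of that partitioning idea, assuming the batch results have already been fetched into an array that lines up index-for-index with the DataTable rows (the "Value" column name is a placeholder). Concurrent writes to a DataTable are not documented as thread-safe, so treat this only as a starting point to be checked with a profiler:

C#:
using System.Collections.Concurrent;
using System.Data;
using System.Threading.Tasks;
using StackExchange.Redis;

static class ParallelUpdateSketch
{
    // Split the row indexes into ranges and let each worker update a disjoint slice.
    public static void ApplyResults(DataTable table, RedisValue[] results)
    {
        var partitions = Partitioner.Create(0, table.Rows.Count);
        Parallel.ForEach(partitions, range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
            {
                // Each worker touches different rows, but verify DataTable behaviour under load.
                table.Rows[i]["Value"] = (string)results[i];
            }
        });
    }
}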
 
Unless our OP's description of the problem is incomplete, based on post #1, the database he is talking about is the Redis database. Since it's only a single key-value that he is updating, I doubt that having an EF wrapper around Redis is really going to do much more for the OP.

I also have some doubts about even needing those DataTables that he is putting the parsed CSVs into. Nothing in his description indicates that he is cross-referencing the current row with other rows in the DataTable when determining a key, or when computing a value if the key is not found in the database. Neither his pseudocode in post #9 nor post #12 shows any cross-row accesses. If there is no need to reference other rows, then a simple list of the rows parsed from the CSV would suffice (giving O(1) performance when retrieving an item from the list), as opposed to trying to get a row from the DataTable (which has O(log n) performance).
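
To illustrate that point, a minimal sketch that skips the DataTable entirely and keeps the parsed CSV rows in a plain list, so that element i is a direct indexed access (the two-column layout and the CsvRow type are assumptions for illustration; a real CSV may need a proper parser):

C#:
using System.Collections.Generic;
using System.IO;

// Hypothetical row type for the parsed CSV.
sealed record CsvRow(string Key, string Value);

static class CsvLoadSketch
{
    public static List<CsvRow> Load(string path)
    {
        var rows = new List<CsvRow>();
        foreach (var line in File.ReadLines(path))
        {
            var parts = line.Split(',');   // naive split, for illustration only
            rows.Add(new CsvRow(parts[0], parts[1]));
        }
        return rows;                       // rows[i] is an O(1) indexed lookup
    }
}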
 
hi, there has been a significant performance gain I noticed with the pipeline approach below.

C#:
List<Task> addTasks = new List<Task>();
for (int i = 0; i < In_memory_DataTable.Rows.Count; i++)
{
    DataRow row = In_memory_DataTable.Rows[i];
    // StackExchange.Redis: StringSetAsync lives on IDatabase, so MultiplexerConn is assumed
    // to be the IDatabase obtained via ConnectionMultiplexer.GetDatabase().
    // The "Key"/"Value" column names are placeholders; the original arguments were elided.
    Task<bool> addAsync = MultiplexerConn.StringSetAsync(row["Key"].ToString(), row["Value"].ToString());
    addTasks.Add(addAsync);
}

Task[] tasks = addTasks.ToArray();
Task.WaitAll(tasks);

however, the above was just an effort to assess the performance of the pipeline approach, which I have verified as being very fast (200k rows in less than a minute), so I am continuing in that direction.

Now, with the above working proof-of-performance pipeline approach, I am trying to implement my real functional requirement, along the lines of the "concurrency" suggested at the end of this article on the StackExchange.Redis library: Pipelines and Multiplexers

Approach being looked at (see the sketch after this list):

1) Asynchronously get the value corresponding to my key using the StackExchange.Redis library method StringGetAsync (where the key comes from each element of my in-memory DataTable, roughly 10k rows, with 16 such threads processing their respective 10k-row DataTable workloads simultaneously)
2) If the retrieval is successful, use the fetched value to update my in-memory DataTable
3) If not, construct a new value and set the new {key, value} pair in Redis using the StackExchange.Redis library method StringSetAsync(key, value)
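
A minimal sketch of steps 1 to 3 under the assumptions above: db is the IDatabase obtained from the ConnectionMultiplexer, the "Key"/"Value" column names and the BuildNewValue helper are placeholders, and each of the 16 threads would call this once for its own 10k-row DataTable. The gets are fired first so the library pipelines them, then each row is either updated from the fetched value or written back with a newly constructed one.

C#:
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using StackExchange.Redis;

static class GetOrSetSketch
{
    // Placeholder for whatever logic constructs a new value for a missing key.
    static string BuildNewValue(DataRow row) => row["Key"].ToString();

    public static async Task ProcessPartitionAsync(IDatabase db, DataTable table)
    {
        // Step 1: fire all the GETs up front so they are pipelined on the multiplexer.
        var gets = new Task<RedisValue>[table.Rows.Count];
        for (int i = 0; i < table.Rows.Count; i++)
        {
            gets[i] = db.StringGetAsync(table.Rows[i]["Key"].ToString());
        }
        await Task.WhenAll(gets);

        // Steps 2 and 3: use the fetched value, or construct a new one and write it back.
        var sets = new List<Task<bool>>();
        for (int i = 0; i < table.Rows.Count; i++)
        {
            RedisValue value = gets[i].Result;
            if (value.HasValue)
            {
                table.Rows[i]["Value"] = (string)value;
            }
            else
            {
                string newValue = BuildNewValue(table.Rows[i]);
                table.Rows[i]["Value"] = newValue;
                sets.Add(db.StringSetAsync(table.Rows[i]["Key"].ToString(), newValue));
            }
        }
        await Task.WhenAll(sets);
    }
}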

From research, it is understood that await is a more performant approach than ContinueWith, and I am considering using the approach suggested here: Process asynchronous tasks as they complete.
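
If the process-as-they-complete pattern turns out to be a better fit than a single WhenAll, here is a minimal sketch of it for the GET side, assuming the pending tasks are mapped back to their row index (that mapping, and the "Key"/"Value" column names, are illustrative assumptions):

C#:
using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;
using StackExchange.Redis;

static class AsTheyCompleteSketch
{
    public static async Task ProcessAsTheyCompleteAsync(IDatabase db, DataTable table)
    {
        // Map each pending GET back to the row that issued it.
        var pending = new Dictionary<Task<RedisValue>, int>();
        for (int i = 0; i < table.Rows.Count; i++)
        {
            pending[db.StringGetAsync(table.Rows[i]["Key"].ToString())] = i;
        }

        // Handle each response as soon as it arrives instead of waiting for the whole batch.
        // Note: WhenAny in a loop is O(n^2) over the task count, so it suits smaller batches.
        while (pending.Count > 0)
        {
            Task<RedisValue> finished = await Task.WhenAny(pending.Keys);
            int rowIndex = pending[finished];
            pending.Remove(finished);

            RedisValue value = await finished;
            if (value.HasValue)
            {
                table.Rows[rowIndex]["Value"] = (string)value;
            }
        }
    }
}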

Since the communication between my C# program and the Redis database endpoint has to pass through the StackExchange.Redis library, I am exploring what TPL/async functionality has already been implemented behind the scenes by StackExchange's StringGetAsync and StringSetAsync methods (such as kicking off Task.Run), so that such features need not (should not) be implemented redundantly in my program code.

Any inputs or suggestions of alternate approaches are welcome, thank you.
 
Bit the bullet, installed the library, and managed to come up with this in about 5 minutes. Does it all in one go, and you can manage behaviors using the When and CommandFlags enums... If you need a response, then `await` the response and remove the FireAndForget flag... If you need to retrieve the values before or after, use the same overload that takes and returns an array with StringGet. Don't foreach through it, let the library get and set them all at once. As always, best to let the library do its thing.

As a side note, with the library installed I realized you AGAIN posted incomplete code... I don't get what's so hard to understand: if you need help solving a problem with some code, you ought to post the EXACT COMPILABLE CODE in the first place. Not a reproduction, example, prototype, or workflow. The exact code. Otherwise you are asking everyone reading your posts and trying to help you to work extra hard to get the information they need to answer your question. It's inconsiderate.

Come on people, get it together.

C#:
        public void SetRedisValues(DataTable table, string colKey, string colValue)
        {
            using (var mux = ConnectionMultiplexer.Connect("localhost"))
            {
                var db = mux.GetDatabase();

                // One key/value pair per row, handed to the library in a single call.
                KeyValuePair<RedisKey, RedisValue>[] values = table.Rows.Cast<DataRow>()
                                                                   .ToDictionary(r => (RedisKey)r[colKey].ToString(),
                                                                                 r => (RedisValue)r[colValue].ToString())
                                                                   .ToArray();

                // Fire-and-forget: the command is queued on the multiplexer and no response is awaited.
                var task = db.StringSetAsync(values, When.Always, CommandFlags.FireAndForget);
            }
        }
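
For the retrieval direction mentioned above, a minimal sketch using the array overload of StringGet (the GetRedisValues method name is made up here; colKey/colValue are whatever the caller passes in, as in the method above, and the values come back in the same order as the keys):

C#:
        public void GetRedisValues(DataTable table, string colKey, string colValue)
        {
            using (var mux = ConnectionMultiplexer.Connect("localhost"))
            {
                var db = mux.GetDatabase();

                // One round trip: pass all the keys at once and get the values back in key order.
                RedisKey[] keys = table.Rows.Cast<DataRow>()
                                            .Select(r => (RedisKey)r[colKey].ToString())
                                            .ToArray();
                RedisValue[] values = db.StringGet(keys);

                for (int i = 0; i < keys.Length; i++)
                {
                    if (values[i].HasValue)
                    {
                        table.Rows[i][colValue] = (string)values[i];
                    }
                }
            }
        }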
 
Sorry and thanks Herman, the last bit had not been tried yet. I was still deciding between a couple of options. New to the .NET world, sorry.
 