I wrote another couple of things, and in the end tested your "look up the keys by looping the dictionary values, then look up in a 2D array" approach against a dictionary with a tuple key, and a sparse 2D array. I also converted yours to use decimal rather than double, for fairness against the other decimal approaches.
Then I also pondered "what happens if one day there is a file with tenth-of-a-degree precision on both axes?"
Here are the results:
BenchmarkDotNet v0.13.6, Windows 10 (10.0.19044.3086/21H2/November2021Update)
Intel Core i7-7820HQ CPU 2.90GHz (Kaby Lake), 1 CPU, 8 logical and 4 physical cores
.NET SDK 7.0.306
[Host] : .NET 7.0.9 (7.0.923.32018), X64 RyuJIT AVX2
DefaultJob : .NET 7.0.9 (7.0.923.32018), X64 RyuJIT AVX2
| Method | Mean | Error | StdDev | Gen0 | Allocated |
|---------------------- |-------------:|-----------:|-----------:|-------:|----------:|
| Cjard2dArray | 28.58 ns | 0.358 ns | 0.335 ns | - | - |
| Cjard2dArrayBigger | 56.34 ns | 0.689 ns | 0.576 ns | - | - |
| CjardDict | 49.82 ns | 0.656 ns | 0.513 ns | - | - |
| CjardDictBigger | 247.97 ns | 4.157 ns | 3.889 ns | - | - |
| SankaUD | 411.65 ns | 8.169 ns | 8.023 ns | 0.0648 | 272 B |
| SankaUD_Decimal | 508.33 ns | 3.820 ns | 3.573 ns | 0.0725 | 304 B |
| SankaUD_DecimalBigger | 48,769.85 ns | 562.027 ns | 498.222 ns | 0.0610 | 304 B |
The sparse 2D array is fastest, and this is entirely expected: there is very little work to do in performing a lookup, and each one takes about 30 nanoseconds. The dictionary with a tuple key is slower; there is more to do in creating a tuple, hashing it, finding its bucket in the backing array and jumping on if there are collisions. All in, it's around 60% slower, at 50 nanoseconds per operation. Both are an order of magnitude faster than the "loop through the dictionary" approach.
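To make that concrete, this is roughly what the two fast lookups boil down to. It's a sketch rather than the benchmarked code, and it assumes whole-degree latitude/longitude keys and decimal values:

```csharp
using System.Collections.Generic;

class LookupShapes
{
    // Sparse 2D array: offset the coordinates so -90..90 and -180..180 become
    // 0-based indices, then read the cell directly. There is nothing else to do.
    static readonly decimal[,] Grid = new decimal[181, 361];

    public static decimal ArrayLookup(int lat, int lon)
        => Grid[lat + 90, lon + 180];

    // Tuple key: build the tuple, hash it, find the bucket, compare keys and
    // possibly follow a collision chain. More work, but still no scanning.
    static readonly Dictionary<(int Lat, int Lon), decimal> Table = new();

    public static decimal DictLookup(int lat, int lon)
        => Table.TryGetValue((lat, lon), out var value) ? value : 0m;
}
```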
The really interesting thing is how the picture changes when we work on much bigger datasets. At tenth-of-a-degree precision we've gone from about 200 measurements to 6.5 million, and if we're considering datasets generated by machine, that might even be a small dataset.
With bigger arrays and bigger dictionaries comes a performance punch on the nose: the 2D array is half the speed, at 60 ns per lookup, and the dictionary with a tuple key is a fifth of the speed, at 250 ns per lookup. But these are drops in the ocean next to the looping approach, which takes 95 times longer, at nearly 49,000 nanoseconds per lookup...
---
And what does it look like if you build the dictionary the other way round and use it as it was intended, rather than looping through its values looking for your hit? That 49,000 ns comes down to 135 ns (not shown in the table above).
The key take-away here is that nested loops are a very slow way to do things. You need to understand that doing a FirstOrDefault on a dictionary is just going to loop through it one entry at a time, looking for whatever entry satisfies the predicate you passed in. On average, assuming random lookup values, that will find the wanted entry after searching half the collection, so in our bigger dataset a random lookup means checking around 3.25 million values before the wanted one is found.
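Roughly speaking, the contrast is between these two shapes. The names and the Reading record here are made up for illustration; the point is the access pattern, not the exact types:

```csharp
using System.Collections.Generic;
using System.Linq;

record Reading(decimal Lat, decimal Lon, decimal Temp);

class ScanVsKey
{
    // Slow: the dictionary is keyed by something else, so FirstOrDefault has no
    // index to use and walks the entries one by one until the predicate matches.
    public static decimal Scan(Dictionary<int, Reading> byId, decimal lat, decimal lon)
        => byId.Values.FirstOrDefault(r => r.Lat == lat && r.Lon == lon)?.Temp ?? 0m;

    // Fast: re-key the same data by the thing you actually search on...
    public static Dictionary<(decimal, decimal), decimal> ReKey(Dictionary<int, Reading> byId)
        => byId.Values.ToDictionary(r => (r.Lat, r.Lon), r => r.Temp);

    // ...then let Dictionary calculate where to look and go straight(ish) to it.
    public static decimal Lookup(Dictionary<(decimal, decimal), decimal> byCoord, decimal lat, decimal lon)
        => byCoord.TryGetValue((lat, lon), out var value) ? value : 0m;
}
```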
Dictionary was designed to be able to calculate where a wanted value is and go straight(ish) to it. It does some checking that it found what it wanted, because it's possible for two different keys to end up wanting the same place; Dictionary has a strategy for mitigating that, but it costs a bit of time. A sparse 2D array is guaranteed to be able to calculate where the wanted value is, and there will never be a collision with another value, but it can't be applied universally and it might burn a lot of memory in some cases. Dictionary is usually a good trade-off between time and space consumption.
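For completeness, this is roughly what "calculate where the wanted value is" looks like for the bigger, tenth-of-a-degree case. The bounds and scaling are assumptions about the data, not the benchmarked code:

```csharp
// Direct addressing at tenth-of-a-degree precision: every (lat, lon) pair maps to
// exactly one cell, so the lookup is pure arithmetic and there is nothing to
// collide with. The price is holding roughly 6.5 million cells in memory.
class TenthDegreeGrid
{
    // -90.0..+90.0 and -180.0..+180.0 in 0.1 degree steps: 1801 x 3601 cells.
    private readonly decimal[,] _cells = new decimal[1801, 3601];

    public decimal this[decimal lat, decimal lon]
        => _cells[(int)((lat + 90m) * 10m), (int)((lon + 180m) * 10m)];
}
```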