Handling dbContext with Async/Await

JasinCole

Well-known member
Joined
Feb 16, 2023
Messages
66
Programming Experience
1-3
Me again, but this time I know what my issue is. What's the best way to handle this situation?

The following code is causing an exception because I am trying to use the dbContext in one operation while another thread is still using the same dbContext and that operation hasn't finished.
What's the best way to create a scoped instance so I am not accessing the dbContext from two threads at the same time? I don't really need the UnitOfWork here because I am not trying to write to the database (failures don't require rollbacks), but I would still like to keep the dashboard in sync, such that any task that reads from the database should be correct across all information displayed. That is why I am creating the tasks and then awaiting WhenAll until each finishes, either as a failure or a success.
C#:
public partial class HomePageViewModel : ViewModelBase
{
    [ObservableProperty] private Task<IEnumerable<Lgract>>? _accounts;

    [ObservableProperty] private Task<(decimal, decimal, decimal)>? _agingPayables;

    public HomePageViewModel(IUnitOfWork srvc) : base(srvc) // Dependency Injection UnitOfWork - UnitOfWork holds the dbContext that gets passed to all the repository objects
    {
        InitializeAsync().SafeFireAndForget();
    }

    private async Task InitializeAsync()
    {
        var tasks = new Task[]
        {
            LoadAccountBalances(), // <- uses dbContext inside Srvc scope
            LoadAgingPayables()  // <- causes exception because a different thread is using the same dbContext inside Srvc scope
        };

        await Task.WhenAll(tasks);
    }
   
    private async Task<IEnumerable<Lgract>> LoadAccountBalances()
    {
        ...
       
        return await Srvc.Lgracts.GetAccountsByRecnumAsync(accounts.Select(act => act.Number).ToList());
    }

    private async Task<(decimal due7Days, decimal due30Days, decimal overdue30Days)> LoadAgingPayables()
    {
        ...

        return (dueInSevenDays, dueInThirtyDays, thirtyDaysOverdue);
    }
}
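One common fix (a sketch only, assuming EF Core's `IDbContextFactory<T>` is registered in DI; `AppDbContext` is a hypothetical stand-in for your own context type) is to give each concurrent load its own short-lived context instead of sharing the one inside `Srvc`:

```csharp
// Assumes services.AddDbContextFactory<AppDbContext>(...) at startup.
public partial class HomePageViewModel : ViewModelBase
{
    private readonly IDbContextFactory<AppDbContext> _contextFactory;

    public HomePageViewModel(IDbContextFactory<AppDbContext> contextFactory)
    {
        _contextFactory = contextFactory;
        InitializeAsync().SafeFireAndForget();
    }

    private async Task InitializeAsync()
    {
        // Each task owns its own context, so they can safely run in parallel.
        await Task.WhenAll(LoadAccountBalances(), LoadAgingPayables());
    }

    private async Task LoadAccountBalances()
    {
        await using var db = await _contextFactory.CreateDbContextAsync();
        // ... query account balances on db here ...
    }

    private async Task LoadAgingPayables()
    {
        await using var db = await _contextFactory.CreateDbContextAsync();
        // ... query payables on db here ...
    }
}
```

Because each context is created, used, and disposed inside one method, no two threads ever touch the same instance.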
 
Moving to Entity Framework...
 
After some more research I do understand the problem I am presented with. But it also seems like I made a fundamental mistake. According to MSDN, a unit of work should be short-lived. Now I am not 100% sure what that means exactly, but I do not think holding a dbContext for the life of the 'UserControl', as in this instance, counts as short-lived.

It reads like I need to create a new dbContext each time I want to query the database, possibly by using a factory to create instances on the fly. But I am also unaware of the cost implications of this approach and the correct process for creating that factory.

Moving to Entity Framework..
I see what you did there now. LOL
 
It reads like I need to create a new dbContext each time I want to query the database

Maybe not "each time", as that's a bit excessive, but my contexts typically live only as long as the unit of work takes to carry out. Loading 500 records, including lookup and/or creation of parent entities, is a unit of work and will entail use of a single context.

What you need to appreciate is that a context automatically tracks the entities put into it, which can be useful when working with related data as part of a logical business operation. Having the same context handle all related elements works out well in most scenarios. For example: your user's browser makes a request to edit a person; one context downloads the person and related data; the code builds viewmodels from it and serializes them to the front end. Later a response comes back from the user with the altered viewmodels; another context pulls the same person from the db; the code patches the changes onto the person and the context saves them.

The two contexts mentioned are different; you wouldn't hold a context open for an unknown amount of time while a user edits the person you sent them, just because it's tracking the person you downloaded and you want to optimise that away to a single download (remember the person for 25 minutes, patch it, save it). One context is used for the read, another for the check-and-update op. Split this way, they also lend themselves to delivery by DI, as each op (get person, update person) is triggered by a separate request.
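The read-then-patch flow above could be sketched like this (hypothetical `Person` entity, `PersonViewModel`, and `AppDbContext`; each operation gets its own short-lived context and nothing stays tracked between them):

```csharp
// Read op: one context downloads the person and related data.
public async Task<PersonViewModel> GetPersonAsync(int id)
{
    await using var db = await _contextFactory.CreateDbContextAsync();
    var person = await db.People.Include(p => p.Addresses)
                                .FirstAsync(p => p.Id == id);
    return PersonViewModel.From(person); // serialize to the front end
}

// Update op: a different context re-reads, patches, and saves.
public async Task UpdatePersonAsync(PersonViewModel vm)
{
    await using var db = await _contextFactory.CreateDbContextAsync();
    var person = await db.People.FirstAsync(p => p.Id == vm.Id);
    person.Name = vm.Name;           // patch the user's changes over the entity
    await db.SaveChangesAsync();     // this context tracks and saves the change
}
```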

But I am also unaware of the cost implications using this approach and the correct process of creating that factory.
Db contexts are lightweight and intended to be (re)newed on a whim.

Take a look at MSDN's advice on using contexts in Blazor: if DI supplied them directly there, they would be incredibly long-lived, so instead we register a factory and then use it to manufacture contexts on demand, together with a using statement to dispose of them.
 
I guess the bigger picture here for me is future-proofing my app. Right now I do not write to the database; I am only reading. I am working off the database that comes with our accounting software, and writing to that database atm is probably not the smartest. As of right now, mostly what I care about is just pulling the information to present it in a way that is more pertinent and robust than the way the original accounting software works. That approach doesn't really need the unit-of-work pattern per se. But eventually I may get to the point where I want to write to the database, and that should be done with the ability to roll back changes as needed.

I'll revisit this when I am done with the dashboard and see how it handles the I/O-bound workload. One thing I had overlooked that caught my eye today was database requests that weren't entirely needed; refactoring those into one request and creating a local copy as a list to run LINQ queries on the same data seemed like a good approach.
 
Also, as an aside, in Entity Framework, the DbContext already represents the unit of work. (EF also already implements the repository pattern as well.)
 
That approach doesn't really need the Unit-of-work pattern

I don't typically regard UOW as mandatorily including one or more write operations, by the way; my previous post might have implied that by accident.

In terms of determining how long a context should live, suppose you have some related work to do with a context: download all employees, their addresses, and their most recent appraisal, for example (that involves data from multiple tables), then generate a report PDF and send it for print. It didn't include any writing, but the context lifetime idea is the same: create one, use it, get rid of it (having done the work item).

creating a local copy as a list to run LINQ queries on the same data seemed like a good approach
It can be indeed. Typically the database holds the most finely grained detail available, and the network I/O between the db and the machine consuming the data is the slowest part of the chain. We thus make some judgement as to what granularity and volume of data we need, take that minimum across the network, and then post-process if we need even less granularity. It is good to avoid dragging lots of detail over the network just to summarize it on the consumer.

For example, imagine your employees log in and out multiple times a day, whenever they take a break, go home, move to a different area etc; you have hundreds of login records per employee per day and thousands of employees. You want a report of employee hours worked and department man-hours for the month.
It perhaps makes sense to do some grouping and summing on the db and boil the data down to one record per employee per day containing a single float of hours worked, along with the employee details (incl. department), then have the consuming machine run the sums to group by department and month.

This is likely preferable to downloading hundreds of thousands of login records and doing the math on the consumer, and possibly also preferable to hitting the db with two grouping queries, one for hours per employee-day and another for hours per department-month.
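The two grouping steps above can be sketched in plain LINQ (hypothetical `LoginRecord` and `Employee` shapes; in practice step 1 would be a query the provider translates to SQL so only the boiled-down rows cross the network, while step 2 runs on the consumer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public record LoginRecord(int EmployeeId, DateTime In, DateTime Out);
public record Employee(int Id, string Department);

public static class HoursReport
{
    // Step 1: hours worked per employee per day (push this grouping to the db).
    public static List<(int EmployeeId, DateTime Day, double Hours)> PerEmployeeDay(
        IEnumerable<LoginRecord> logins) =>
        logins.GroupBy(l => (l.EmployeeId, l.In.Date))
              .Select(g => (g.Key.EmployeeId, g.Key.Date,
                            g.Sum(l => (l.Out - l.In).TotalHours)))
              .ToList();

    // Step 2: department man-hours per month, computed over the small result set.
    public static Dictionary<(string Dept, int Month), double> PerDepartmentMonth(
        IEnumerable<(int EmployeeId, DateTime Day, double Hours)> rows,
        IEnumerable<Employee> employees)
    {
        var deptById = employees.ToDictionary(e => e.Id, e => e.Department);
        return rows.GroupBy(r => (deptById[r.EmployeeId], r.Day.Month))
                   .ToDictionary(g => g.Key, g => g.Sum(r => r.Hours));
    }
}
```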
 