Make HTTP Requests

What is the proper way to create a console app using .NET 7 and use HttpClient to hit an API endpoint that requires OAuth?
 
Solution
would still have same hurdles of passing in basic auth
Please read the Flurl docs on their website; doing auth is essentially a one-line operation:

[screenshot of the Flurl docs showing the one-line auth call]


Flurl works fluently; you set everything up in one line like "api.com".WithThis(..).AddThat(...).SetOther().Foo().Bar().GetJsonBlah()
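
For illustration, a minimal sketch of that one-liner (the URL, credentials, token variable, and Thing type are placeholders, not from the Flurl docs):

C#:
using Flurl.Http; // Flurl.Http NuGet package

// basic auth folded into the fluent chain
var things = await "https://api.example.com/things"
    .WithBasicAuth("username", "password")
    .GetJsonAsync<Thing[]>();

// or, if you already hold an OAuth bearer token
var things2 = await "https://api.example.com/things"
    .WithOAuthBearerToken(accessToken)
    .GetJsonAsync<Thing[]>();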
So the TotalResults is 16876, the API will only return 1,000 at a time, so I would need to make 16876/1000 requests in order to get all of the records.

I've never set up something like this... I assume I would then use this for my skip and take params and continue on until all data has been returned.
 
It wasn't clear that the API you were calling only returns 1000 items at a time. It sounded like the API returns 16876 items, but you could only handle 1000 items at a time.

Now that you have made that clear, read the documentation about that API. How do you tell it to give you the first 1000, and then the succeeding pages?

Edit after:
For example, here's Atlassian's documentation on how their API supports pagination and how to use it:

As far as I know, there is no standard way of doing REST API pagination, so it'll be up to each API author on how they want to do things. So you'll need to read the documentation for the API that you are using.
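
For example, three common shapes you might run into (made-up endpoints, purely illustrative):

Code:
GET /things?skip=2000&take=1000    -- offset/limit style
GET /things?page=3&pageSize=1000   -- page number style
GET /things?cursor=abc123          -- cursor style; each response hands you the next cursor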
 
so that is a big hurdle: one, my inexperience, but two, the lack of documentation on the API. So they tell us that we can get the TotalRecords that our query params return, and then we must use skip and take to query/query/query until we get all of the results. But the limitations are that only 1,000 records are returned at a time, and we can only make 15 requests every 60 seconds

Then we process them however we need on our end.

Since there are so many stipulations/requirements in place by the API with very little documentation, I wasn't sure if a third-party lib or NuGet package etc would handle this best.
 
So they tell us that we can get the TotalRecords that our query params return, and then we must use skip and take to query/query/query until we get all of the results. But the limitations are that only 1,000 records are returned at a time, and we can only make 15 requests every 60 seconds

Break the problem into smaller problems. Solve the problems one at a time to solve the bigger problem.

If their API documentation doesn't tell you how to skip and take, but you are in contact with the developers, then ask them to tell you specifically how to do that and/or update their documentation to show how to do that.

If their API has throttling (most APIs do, to prevent denial of service attacks), then you simply have to implement queuing on your end. You need to implement retrying on your end anyway. Most production quality code implements some form of retrying strategy that has some back-off schedule. Since you need that back-off scheduling for retrying anyway, you would do the same kind of scheduling for each of the pages.
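
As a sketch of the kind of back-off retry described here, with plain HttpClient (the attempt count and delays are arbitrary choices, not a prescription):

C#:
using System;
using System.Net.Http;
using System.Threading.Tasks;

// retry with exponential back-off: wait 1s, 2s, 4s, 8s between attempts
static async Task<HttpResponseMessage> GetWithRetryAsync(HttpClient client, string url)
{
    const int maxAttempts = 5;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            var response = await client.GetAsync(url);
            if (response.IsSuccessStatusCode)
                return response;
        }
        catch (HttpRequestException) when (attempt < maxAttempts)
        {
            // transient network failure; fall through to the delay below
        }

        if (attempt >= maxAttempts)
            throw new HttpRequestException($"Gave up on {url} after {maxAttempts} attempts");

        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
    }
}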

If you don't want to roll your own scheduling system, most people just take an off-the-shelf message queuing system and queue messages with specific times for when the messages should be processed. So in your case, each queued item would be one of the pages, plus one queued item to process all the pages once they are available.
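
A toy version of that idea, using an in-memory PriorityQueue as a stand-in for a real message queue with delayed delivery (the page count and 4-second spacing are placeholders, and FetchPageAsync is a stub):

C#:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// stand-in for the real API call for one page of results
async Task FetchPageAsync(int page)
{
    Console.WriteLine($"fetching records {page * 1000} to {page * 1000 + 999}");
    await Task.CompletedTask;
}

// queue each page with the earliest time it should be processed
var queue = new PriorityQueue<int, DateTime>();
for (int page = 0; page < 17; page++)
    queue.Enqueue(page, DateTime.Now.AddSeconds(page * 4)); // 15 per minute

// worker: take the next due item, waiting if it isn't due yet
while (queue.TryDequeue(out int page, out DateTime runAt))
{
    var wait = runAt - DateTime.Now;
    if (wait > TimeSpan.Zero) await Task.Delay(wait);
    await FetchPageAsync(page);
}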
 
If you don't want to roll your own scheduling system, most people just take an off-the-shelf message queuing system and queue messages with specific times for when the messages should be processed. So in your case, each queued item would be one of the pages, plus one queued item to process all the pages once they are available.

I followed you up until this part.

Breaking it into smaller parts would def make it easier, makes it more manageable, and will def help me wrap my mind around how to write the code to skip/take until all results have been returned. Should be a somewhat basic loop

Like this? (well, I just realized I also have to advance my skip value each pass, so I've added that below; take just stays at the batch size)
pagination:
var res = await response.Content.ReadFromJsonAsync<T>();

int totalRecords = res.TotalRecords;
int batchSize = 1000;   // the "take" value: records per request
int skip = 0;
int iterations = 0;
// round up so the final partial batch isn't dropped
int maxIterations = (totalRecords + batchSize - 1) / batchSize;

do
{
    Console.WriteLine("API Request Number: {0}", iterations);

    // TODO: make the next request here, passing skip and batchSize
    skip += batchSize;
    iterations++;

} while (iterations < maxIterations);
 
Or is it best practice to use just the base http client?

Do you know how to use http client correctly? Flurl does, which is why I use it. RestSharp does, which is why I have used it. They're both wrappers around httpclient and add useful things, as well as taking a load off me in terms of writing http client handling stuff. If you want to do it yourself there are a couple of blogs that detail it:
You're using HttpClient wrong and it is destabilizing your software
You're (probably still) using HttpClient wrong and it is destabilizing your software

You don't have to use third party libs, but then again, you don't have to use Windows/Linux either; you can take a few years out and write your own operating system. By the same token, you could grow your own breakfast cereals, make your own bowls and milk your own cow too.. But people tend not to remake everything from the ground up; they build their own innovations on other people's abstractions. Some people do reinvent wheels when they perceive that the existing solutions don't give them a choice or feature they require. If that fits your argument, then it's a reason to write your own RestSharp, Flurl, or even AutoRest/NSwag/Kiota/OAG/SC..

..but for sure if the API you're consuming exposes an OpenAPI spec I'd consider using a generator against it and moving on to write interesting code; writing an API client is generally incredibly tedious. At least consider a library to do the calling
 
If you want full control to be able to monitor and tweak performance (or manage any potential security issues), then writing your own code would be the way to go

I don't completely agree with this; good libraries are fully testable and already have teams of people with vested interests in providing a secure code base, coupled up to millions of users applying real world testing and potentially feeding back. It's like employing a team of developers and testers who do nothing but work on a component of your app. Even better, the code of these libraries discussed here is open source so if you do find an issue you can do something about it as easily as if it were your own code.
 
So the TotalResults is 16876, the API will only return 1,000 at a time, so I would need to make 16876/1000 requests in order to get all of the records.
Technically it's (16876/1000)+1 - you can't make 0.876 of a request (if you're doing double math) and integer math would omit the last page of 876 results
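
In integer math, the usual way to round up is (totalCount and pageSize are named here just for the example):

C#:
// (16876 + 999) / 1000 == 17 pages; the 17th holds the final 876 records
int pages = (totalCount + pageSize - 1) / pageSize;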

Every API is different as to how it paginates so no, there isn't something built that automatically paginates for you. The notion is simple though, typically you supply some parameter to start from, so you just run your calls in a loop that starts x=0, gets 1000 results from api.com/getthings?start=0&num=1000 and then increments x. In flurl that might look like:

C#:
for (int x = 0; x < totalCount; x += 1000)
{
  var r = await "api.com/getthings"
    .SetQueryParams(new { start = x, num = 1000 })
    .GetJsonAsync<MyThing[]>();
}

Compared to a non paginated version

C#:
var r = await "api.com/getthings"
   .GetJsonAsync<MyThing[]>();

Pagination isn't particularly onerous.

I tend to read the API docs too and find out if there is a request limit, like 2 per second, and then add some delay code to keep to that

C#:
var nextReq = DateTime.Now;

for (int x = 0; x < totalCount; x += 1000)
{
  if (nextReq > DateTime.Now) await Task.Delay(nextReq - DateTime.Now);

  var r = await "api.com/getthings"
    .SetQueryParams(new { start = x, num = 1000 })
    .GetJsonAsync<MyThing[]>();

  nextReq = nextReq.AddMilliseconds(667); // keep to 3 req in 2 sec, a bit slower than the 4 req in 2 sec rate limit
}

Edit: I just read that you're allowed 15 in 60. You might want to keep a queue of the times you've made requests, and if the queue length is <15 or the oldest of the last 15 requests was >60s ago, make the request; otherwise wait for 60 - (now - oldest).TotalSeconds seconds. This way you can burst up to 15 requests, then wait the minimum time. Cap the queue at 15 items.

If that doesn't make sense or is too hard, just make one request every 4 seconds (nextReq.AddSeconds(4))
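
A minimal sketch of that queue idea, assuming the 15-requests-per-60-seconds limit described above (the rest is illustrative):

C#:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

var recent = new Queue<DateTime>(); // times of the last 15 requests, oldest first

async Task WaitForSlotAsync()
{
    if (recent.Count == 15)
    {
        // window is full: wait until 60s after the oldest request, then drop it
        var oldest = recent.Dequeue();
        var wait = TimeSpan.FromSeconds(60) - (DateTime.Now - oldest);
        if (wait > TimeSpan.Zero) await Task.Delay(wait);
    }
    recent.Enqueue(DateTime.Now); // record this request's send time
}

// usage, inside the paging loop shown above:
// await WaitForSlotAsync();
// var r = await "api.com/getthings"...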
 
Since there are so many stipulations/requirements in place by the API with very little documentation, I wasn't sure if a third-party lib or NuGet package etc would handle this best.

An API with lots of rules and no documentation is definitely something that a third party library can't handle fully automatically, unless it's powered by AI! In this sense you are the AI handling the rules
 
Edit: I just read that you're allowed 15 in 60. You might want to keep a queue of the times you've made requests, and if the queue length is <15 or the oldest of the last 15 requests was >60s ago, make the request; otherwise wait for 60 - (now - oldest).TotalSeconds seconds. This way you can burst up to 15 requests, then wait the minimum time. Cap the queue at 15 items.

If that doesn't make sense or is too hard, just make one request every 4 seconds (nextReq.AddSeconds(4))

Ohhhh - that makes sense. Wow,

Last question, and I think I'm all set on this. What variable/datatype do I store each result in while I'm waiting for each subsequent request to be made?
 
Ohhhh - that makes sense. Wow,
Forum etiquette; please don't quote a massive post in its entirety and then (essentially) just say "thanks" under it.

If you hit Reply and it quotes a massive post, edit the other person's words down please. Do not quote anything when the post you're replying to is the last one in the thread and you have no further responses to make to specific parts of their post
 
@cjard - I'm missing something... what did I mix up here?
flurl:
    var r = await "https://api.xxxxx.com"
        .AppendPathSegment("/hipments/search")
        .SetQueryParams(new { skip = x, take = 1000, shipmentDateBegin = "08/01/2023", shipmentDateEnd = "08/10/2023"} )
        .WithBasicAuth("username", "password")
        .GetJsonAsync<T[]>();
 
Is that in a generic method that declares T (and why would a particular endpoint vary what it returned anyway)?

Are the dates really presented like that? I'd have thought an ISO style date would be more appropriate for an API to support..

Assuming you got some error message somewhere, telling us exactly what it is would be the biggest single most useful step you can take when seeking assistance
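
If the API does accept ISO-style dates, producing one in .NET is a one-liner (the exact format the API wants is an assumption):

C#:
var begin = new DateTime(2023, 8, 1).ToString("yyyy-MM-dd"); // "2023-08-01"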
 