API Gateway Throttling Option

Hey everyone,

Just curious, but I couldn’t find anything about throttling options for API Gateway. Is there a way to do this with some sugar?

Thanks

Currently, there’s no sugar for API Gateway throttling. Wondering what it would look like.

For throttling, were you thinking of UsagePlans? https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigateway-usageplan.html

Or is there something else?
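
For reference, creating a usage plan with throttling limits looks roughly like this with the AWS SDK for Ruby. This is just a sketch; the API ID, stage name, and limits are placeholders:

```ruby
require "aws-sdk-apigateway"

# Rough sketch: create a usage plan that throttles an existing REST API stage.
# The api_id, stage name, and limits below are placeholders.
client = Aws::APIGateway::Client.new

client.create_usage_plan(
  name: "basic-throttling",
  api_stages: [{ api_id: "a1b2c3d4e5", stage: "dev" }],
  throttle: {
    rate_limit: 100.0, # steady-state requests per second
    burst_limit: 200   # maximum burst of concurrent requests
  },
  quota: {
    limit: 10_000,     # total requests allowed per period
    period: "DAY"
  }
)
```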

Yeah, basically trying to prevent overconsumption. I’m guessing it would work like the caching options. It’s under the stage in the UI, I believe.

However, for more refined throttling, is it possible to use Rack middleware? I have read the docs but don’t quite understand how Jets handles the context across requests in a use case like the Rack::Attack gem…

I see, not just rate limiting and throttling, but also IP blocklists for malicious traffic.

RE: However, for more refined throttling, is it possible to use Rack middleware?

I read through the Rack::Attack README and source code. It should be possible to add the Rack::Attack middleware with Jets. Middleware docs here: https://rubyonjets.com/docs/rack/middleware/

I think that may work fine for your needs, so maybe try that.
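
Something like this is what I’d sketch, assuming middleware gets registered via config.middleware as the docs above describe. The limit, period, and blocklisted IP are arbitrary examples:

```ruby
# config/application.rb -- sketch: register Rack::Attack as middleware,
# assuming Jets picks it up via config.middleware (see middleware docs above).
require "rack/attack"

Jets.application.configure do
  config.middleware.use Rack::Attack
end

# config/initializers/rack_attack.rb -- example rules; the limit, period,
# and blocklisted IP are arbitrary placeholders.
Rack::Attack.throttle("requests by ip", limit: 60, period: 60) do |request|
  request.ip
end

Rack::Attack.blocklist("block a known bad ip") do |request|
  request.ip == "1.2.3.4"
end
```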


There’s another good path for this: eventually adding AWS WAF support to Jets. The WAF runs in front of API Gateway, so the request never even makes it to the Lambda functions. Here’s the flow:

User/Client > WAF > APIGW > Lambda > Business Logic

The Rack::Attack middleware works too but the flow is a little different:

User/Client > APIGW > Lambda > Rack::Attack > Business Logic

An advantage of WAF is that AWS already provides managed lists of known malicious IP addresses. It also provides SQL injection and cross-site scripting protection. Additionally, you can customize WAF rules.

There’s a cost associated with the WAF service. Unsure whether it’s offset by the savings of not hitting Lambda. Probably not.
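
In the meantime, a WebACL can be attached to an API Gateway stage by hand. A minimal sketch with the AWS SDK for Ruby (wafv2), where both ARNs are placeholders for your own resources:

```ruby
require "aws-sdk-wafv2"

# Sketch: attach an existing regional WAFv2 WebACL to an API Gateway stage.
# Both ARNs below are placeholders for your own resources.
wafv2 = Aws::WAFV2::Client.new

wafv2.associate_web_acl(
  web_acl_arn: "arn:aws:wafv2:us-west-2:112233445566:regional/webacl/demo/abc123",
  resource_arn: "arn:aws:apigateway:us-west-2::/restapis/a1b2c3d4e5/stages/dev"
)
```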

Note: FWIW, API Gateway already comes with built-in AWS Shield Standard for free. It doesn’t do IP blocklists like Rack::Attack; it’s more generalized. Here’s the description:

Automated mitigation techniques are built into AWS Shield Standard, giving you protection against common, most frequently occurring infrastructure attacks. Automatic mitigations are applied inline to your applications so there is no latency impact. AWS Shield Standard uses several techniques like deterministic packet filtering and priority-based traffic shaping to automatically mitigate attacks without impact to your applications. You can also mitigate application layer DDoS attacks by writing rules using AWS WAF. With AWS WAF you only pay for what you use. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks.

Yes, exactly, that’s what I’m thinking. In my particular scenario, only one function needs to be throttled, and the rest of the system should be fine. It would probably be the same as using

User/Client > APIGW > Lambda > Rack::Attack > Business Logic

but I could also use it as an authorizer. But I don’t think we can have multiple authorizers, correct?

My question for you @tung is: what would be the limitations of using something like Rack::Attack as middleware that stores its information in the running function context, which could potentially shut down? For instance, if the function is doing a cold start, there wouldn’t be any information about previous Rack::Attack IP addresses, etc., correct? That should be fine, because it would only happen during a DDoS or something. But for the sake of argument, would that information just be lost as GC happens on Lambda? And how about having lots of information in a short period of time: do you know of any memory limits on the context of warm functions?

RE: but I could also use it as an authorizer.

Interesting. I guess you can abuse authorizers in that way. By leveraging the authorizer TTL, it’ll avoid hitting Lambda and possibly cache the deny. On the flip side, if the request is authorized by regularly logging in, it’ll bypass the protection. They bypass it once, grab the authorizer token, and then can go all in. So my gut says it’s probably not the right path to use authorizers in that way. :thinking:

RE: But I don’t think we can have multiple authorizers, correct?

You can have multiple authorizers associated with different functions. You cannot have multiple authorizers associated with the same function.

RE: what would be the limitations of using something like Rack::Attack as middleware that stores its information in the running function context, which could potentially shut down? For instance, if the function is doing a cold start, there wouldn’t be any information about previous Rack::Attack IP addresses, etc., correct? … would that information just be lost as GC happens on Lambda?

Have a look at Rack::Attack StoreProxy. It seems like it stores info like IP addresses in different cache providers: Redis and Memcached. That storage would not be affected by the Lambda functions recycling. Please confirm, though. :ok_hand:
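
If you point Rack::Attack at an external store, the counters live outside the function entirely. A sketch, assuming a reachable Redis endpoint (the REDIS_URL is a placeholder) and the redis gem in the Gemfile:

```ruby
# config/initializers/rack_attack.rb -- sketch: keep Rack::Attack counters in
# Redis so they survive cold starts and Lambda container recycling.
# Assumes the redis gem and a reachable REDIS_URL; ActiveSupport ships with Jets.
require "rack/attack"

Rack::Attack.cache.store = ActiveSupport::Cache::RedisCacheStore.new(
  url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0")
)
```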

RE: do you know of any memory limits on the context of warm functions?

Generally, the function memory limit is whatever you configure for the function. It’s a function property. Docs here: http://rubyonjets.com/docs/function-properties/ Currently it can be anywhere from 128 MB to 3 GB. That will likely change as AWS continues to improve Lambda.
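
For example, the global setting looks roughly like this; 1024 MB and 30 seconds are arbitrary example values:

```ruby
# config/application.rb -- sketch: set global function properties
# (per the function-properties docs above). Values are just examples.
Jets.application.configure do
  config.function.memory_size = 1024
  config.function.timeout     = 30 # seconds
end
```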