Posts about Actix
Is it normal that memory usage keeps going up with each request and doesn't go down? Will it eventually plateau? I built a web server with Rust and Actix. The CPU usage stays at less than 1%, but the memory usage climbs and doesn't seem to plateau; it never goes back down, even days after the server was last hit. Is this normal behavior, or is there some sort of issue? I could be worried about nothing and Actix could just be warming up to a steady memory usage state. I am new to Rust and Actix, so any guidance would help a ton. Thank you!
https://i.redd.it/4nxe080axqq91.png
Hi,
I have an actix web server running (release) on a c5.large linux instance (2 vCPU and 4 GB mem). While running the benchmark from the instance itself (locally), I am getting around 64,000 req/sec.
Testing from the instance itself
However, when accessing the same endpoint over the internet (from my local machine), I am only able to clock ~1,100 req/sec.
Testing over the internet
On the other hand, I've deployed a simple Node.js/Express application as well.
Locally benchmarking the express server produces these results -
Node.js - locally on the instance
Benchmarking over the internet -
It's doing roughly the same req/sec as the actix-web server, even though actix-web is almost 15x faster when benchmarked locally.
Why is that?
Does that mean this actix-web server can only handle a maximum of ~1,100 requests/sec for a simple hello-world response? Or does it have something to do with the way `wrk` benchmarks?
In a hypothetical scenario, if 60,000 people make a request to my endpoint at the same time, what would be the result?
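For reference, a minimal actix-web hello-world endpoint of the kind being benchmarked here would look roughly like this (a sketch only; the original handler isn't shown in the post, and the route and port are assumptions):

```rust
use actix_web::{get, App, HttpResponse, HttpServer, Responder};

// Hypothetical hello-world handler, similar to what the benchmark above hits.
#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Bind to all interfaces so the endpoint is reachable both locally and over the internet.
    HttpServer::new(|| App::new().service(hello))
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}
```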
https://github.com/jacob-pro/actix-extensible-rate-limit
Recently I've written a new rate limit middleware for Actix-Web.
You may be asking - why do we need another crate?
The initial motivation was that a lot of the existing crates unfortunately don't seem to be maintained or up to date with the latest actix-web 4.0.
But the primary benefit is that I found some of the existing crates quite restrictive: they seem to be designed for very specific use cases and are difficult to extend if you want to do something different.
This crate is intended to support both basic and complex scenarios (see the sketch after this list):
Derive a custom rate limit key from the request context (based on a header, the client IP address, the request path, a combination of these, or implement it yourself with an arbitrary future).
You can pass in dynamic rate limits and intervals based on the request context; for example, you could write a future that maps a user ID to a particular RPS limit and pass this to the rate limiter.
In-memory and Redis backends are provided, but you can also implement your own backend if you want to do something different.
You can set a custom 429 response, and you can choose to transform response headers based on the rate limit result (e.g. adding x-ratelimit-remaining).
You can choose to roll back a rate limit count after the request has completed based on the response code. E.g. you may not want 5xx errors to count against a user's rate limit.
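For illustration, here is a rough sketch of how such a middleware could be wired into an actix-web app. The type and builder names (`InMemoryBackend`, `SimpleInputFunctionBuilder`, `RateLimiter::builder`) are recalled from the crate's README and may not match the current API exactly, so treat this as an assumption rather than authoritative usage:

```rust
use actix_extensible_rate_limit::{
    backend::{memory::InMemoryBackend, SimpleInputFunctionBuilder},
    RateLimiter,
};
use actix_web::{App, HttpServer};
use std::time::Duration;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Assumed API: a shared in-memory backend holding the rate limit counters.
    let backend = InMemoryBackend::builder().build();
    HttpServer::new(move || {
        // Allow 60 requests per minute, keyed on the client's real IP address.
        let input = SimpleInputFunctionBuilder::new(Duration::from_secs(60), 60)
            .real_ip_key()
            .build();
        let middleware = RateLimiter::builder(backend.clone(), input)
            .add_headers() // attach x-ratelimit-* headers to responses
            .build();
        App::new().wrap(middleware)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
```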
I would be glad to hear any feedback / suggestions :)
I've built a simple REST API to run YoloV5 predictions using the opencv and actix-web crates. Just provide an ONNX model and you're ready to go. A Dockerfile is available too.
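As a rough idea of what such an endpoint can look like in actix-web (this is not the actual code from the repo; the route, the `Detection` type, and the `run_inference` helper are made up for illustration, with the OpenCV/ONNX inference stubbed out):

```rust
use actix_web::{post, web, App, HttpResponse, HttpServer, Responder};
use serde::Serialize;

// Hypothetical prediction type; the real project defines its own output format.
#[derive(Serialize)]
struct Detection {
    label: String,
    confidence: f32,
    bbox: [f32; 4],
}

// Placeholder for the actual OpenCV DNN / ONNX inference step.
fn run_inference(_image_bytes: &[u8]) -> Vec<Detection> {
    Vec::new()
}

// Accept raw image bytes and return the detections as JSON.
#[post("/predict")]
async fn predict(body: web::Bytes) -> impl Responder {
    let detections = run_inference(&body);
    HttpResponse::Ok().json(detections)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(predict))
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}
```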
This is my first real project in Rust so be kind :)
Github: https://github.com/masc-it/yolov5-api-rust
Loving the language btw, coming from C++ and python. This is just paradise.