
Tuesday, January 31, 2023

Actionhero Web Response Compression

Our company's recent internal tool was essentially a database API with basic CRUD actions. There was some business logic to make sure things worked correctly, and I still like actionhero for my NodeJS APIs. The bulk of the project was in the web app (built with Quasar on top of Vue). I also wanted to try Docker and AWS Fargate, so the instances run actionhero directly (no nginx proxy).

Recently the company created a "big project". Most previous projects had 5-10 inner data hierarchies; this one had 206. The "/getFullProject" action got a real-world stress test! I thought VueJS would have issues, since in the past too many rendered objects or large arrays would slow down reactivity, but it works fine. Yay, version 3 : )

The hangup was the JSON download size: 120MB+, and it took over a minute. Too slow. The options were a larger network tier for Fargate, lazy-loading those data hierarchies from the web app (which means API updates), or using nginx as a web proxy to gzip everything. However, I wanted a quicker solution and attempted to compress the JSON payloads directly from actionhero.

The `compression` project on npm is meant for ExpressJS, but could we make it work in actionhero? Yes. Yes we can. (kind of : )


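Here is a simplified sketch of that middleware, registered from an actionhero initializer. The "gzip response" name and the option values are illustrative, not the exact code:

// initializers/compression.js -- a simplified sketch; the middleware name and
// option values are illustrative
const compression = require("compression");
const { api, Initializer } = require("actionhero");

// standard ExpressJS-style (req, res, next) middleware from the npm `compression` package
const compressor = compression();

const gzipResponseMiddleware = {
  name: "gzip response",
  global: false, // opt-in per action, not applied to everything
  priority: 1000,
  postProcessor: async (data) => {
    if (data.connection.type !== "web") return;

    // the underlying NodeJS http request/response objects
    const { req, res } = data.connection.rawConnection;

    // `compression` only compresses compressible Content-Types, so the header
    // has to be in place before the body is written
    res.setHeader("Content-Type", "application/json; charset=utf-8");

    // let `compression` wrap res.write / res.end; it checks the client's
    // Accept-Encoding header and gzips anything written afterwards
    await new Promise((resolve, reject) => {
      compressor(req, res, (error) => (error ? reject(error) : resolve()));
    });

    // we are taking over the response, so tell actionhero not to render it
    data.toRender = false;
    res.end(JSON.stringify(data.response));
  },
};

module.exports = class Compression extends Initializer {
  constructor() {
    super();
    this.name = "compression";
  }

  async initialize() {
    api.actions.addMiddleware(gzipResponseMiddleware);
  }
};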
The result was the middleware above, which I applied to only a single action: my "/getFullProject" action. The `compression` project works on actionhero's raw request and response (which are the underlying NodeJS http/https objects). I had to turn off actionhero's normal response rendering (data.toRender = false), just as you would when sending a file buffer or similar. The Content-Type header also needed to be set before trying to compress; that requirement is specific to the compression library (yay OpenSourceSoftware). There are some issues with error handling, as modifying the `rawConnection` messes with how actionhero processes responses. And probably some other issues to be found! But it's an example of applying ExpressJS middleware inside actionhero.
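Opting a single action into the middleware is then just a matter of listing it by name in the action definition. Roughly like this, depending on the actionhero version (buildFullProject is a stand-in for the real data loading):

// actions/getFullProject.js -- sketch; buildFullProject is hypothetical
const { Action } = require("actionhero");

module.exports = class GetFullProject extends Action {
  constructor() {
    super();
    this.name = "getFullProject";
    this.description = "return the full project, all data hierarchies included";
    this.middleware = ["gzip response"]; // opt this one action into the compression middleware
  }

  async run(data) {
    // build the large payload as usual; the postProcessor compresses it on the way out
    data.response.project = await buildFullProject(data.params);
  }
};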

For me, this reduced the 120MB+ payload to under 8MB, and the download time from over a minute to ~20s. All Node changes; no API or infrastructure changes.

It's still a slow action. I might change the Docker setup to use an nginx proxy, and then lazy-load if needed. But for now, this experiment was sufficient and successful.
