
I recently set out to optimize the loading speed of my blog, and the most important part of this task was compressing content during transmission. Searching online, I found a Node.js compression library named compression. However, after applying it, the transmitted content was not compressed as expected. This article documents the troubleshooting process.
Using the Node.js compression Library
According to the official usage guide [ref1], we wrote the following demo:
```js
import express from 'express';
import compression from 'compression';

const app = express();
app.use(compression());

app.get('*', (request, response) => {
  response.end('<html><body>' + 'Hello World!'.repeat(1000) + '</body></html>');
});

app.listen(5000, '0.0.0.0', () => console.log(`listening on port 5000 ...`));
```
It was expected that by including an `Accept-Encoding` header in the request, the response returned by Express would carry a `Content-Encoding` matching one of the methods listed in `Accept-Encoding`. However, this was not the case, as the following request shows:
```
>> curl -v http://127.0.0.1:5000/ -H 'Accept-Encoding: gzip'
*   Trying 127.0.0.1:5000...
* Connected to 127.0.0.1 (127.0.0.1) port 5000
* using HTTP/1.x
> GET / HTTP/1.1
> Host: 127.0.0.1:5000
> User-Agent: curl/8.12.1
> Accept: */*
> Accept-Encoding: gzip
>
* Request completely sent off
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Date: Thu, 27 Mar 2025 03:48:20 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
* Connection #0 to host 127.0.0.1 left intact
<html><body>Hello World!……</body></html>
```
As can be seen, the response was not compressed: there is no `Content-Encoding` header at all. The demo code comes straight from the official documentation, so why did this discrepancy occur?
Finding the Cause in the Source Code
Fortunately, the compression library is open source. Reviewing its source code, we discovered a function called `shouldCompress`:

```js
function shouldCompress (req, res) {
  var type = res.getHeader('Content-Type')

  if (type === undefined || !compressible(type)) {
    debug('%s not compressible', type)
    return false
  }

  return true
}
```
Judging from the function name, its purpose is to determine whether compression is needed, with the decision based on the `Content-Type` header.
This involves a small piece of knowledge about Express [ref2]: calls to the `end` function do not automatically append certain HTTP headers to the response. This directly leads to `shouldCompress` retrieving `undefined` for the `Content-Type` header, so the function returns `false`, indicating that the content should not be compressed.
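To see this concretely, here is a minimal sketch (the `/raw` route is purely illustrative) showing that nothing has set the header by the time we respond:

```js
app.get('/raw', (request, response) => {
  // Nothing has set a Content-Type on this response yet, so this
  // logs `undefined`: exactly what shouldCompress will later see
  console.log(response.getHeader('Content-Type'));
  response.end('<p>plain end, no Content-Type header</p>');
});
```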
At this point, the solution became clear: either add the corresponding HTTP header manually, or use a call that adds it automatically (for example, the `send` method sets this header by itself). Here, we opted to add the header explicitly:
```js
app.get('*', (request, response) => {
  response.type('text/html'); // explicitly set the Content-Type header
  // same body as in the first demo
  response.end('<html><body>' + 'Hello World!'.repeat(1000) + '</body></html>');
});
```
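Alternatively, a sketch of the `send` approach mentioned above: when given a string and no type has been set, Express's `send` fills in `Content-Type: text/html; charset=utf-8` on its own, so compression kicks in without any extra call:

```js
app.get('*', (request, response) => {
  // `send` inspects the body and sets Content-Type by itself
  response.send('<html><body>' + 'Hello World!'.repeat(1000) + '</body></html>');
});
```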
Note: the content passed to `response.end` should be sufficiently long; if it is too short, the response may also go uncompressed (because the compressed version might be larger than the original).
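This size cutoff is actually a documented option of the middleware: `threshold` (default `'1kb'`) tells it to skip responses smaller than the given size. If you want to compress everything regardless of length, you can lower it:

```js
// threshold is a documented option of the compression middleware;
// setting it to 0 disables the size check entirely
app.use(compression({ threshold: 0 }));
```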
Now, let's look at the returned content:
```
>> curl -v http://127.0.0.1:5000/ -H 'Accept-Encoding: gzip'
*   Trying 127.0.0.1:5000...
* Connected to 127.0.0.1 (127.0.0.1) port 5000
* using HTTP/1.x
> GET / HTTP/1.1
> Host: 127.0.0.1:5000
> User-Agent: curl/8.12.1
> Accept: */*
> Accept-Encoding: gzip
>
* Request completely sent off
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: text/html; charset=utf-8
< Vary: Accept-Encoding
< Content-Encoding: gzip
< Date: Thu, 27 Mar 2025 04:20:48 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
Warning: Binary output can mess up your terminal. Use "--output -" to tell curl to output it to your terminal anyway, or consider "--output <FILE>" to save to a file.
* client returned ERROR on write of 10 bytes
* Failed reading the chunked-encoded stream
* closing connection #0
```
The output indicates that the body is now binary, and the corresponding `Content-Encoding: gzip` header is present among the HTTP headers.
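As a side note, curl can request and transparently decompress the response in one go with the `--compressed` flag (for example, `curl --compressed -v http://127.0.0.1:5000/`), which avoids dumping binary gzip data into the terminal.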
However, as debugging continued, I found that some responses were still not being compressed.
Other Cases Where Compression Does Not Occur
First, let's take a look at the following code:
```js
import fs from 'fs';
import express from 'express';
import compression from 'compression';

const app = express();
app.use(compression());

app.get('*', (request, response) => {
  response.type('image/jpeg');
  response.end(fs.readFileSync('./example.jpeg'));
});

app.listen(5000, '0.0.0.0', () => console.log(`listening on port 5000 ...`));
```
It is essentially the same as the code at the beginning of this article; the only changes are that the returned content is now an image (binary data), and this time the `type` function marks the content as `image/jpeg` in the response headers.
According to the analysis above, this content should be compressed. However, in reality, it was not compressed:
```
>> curl -v http://127.0.0.1:5000/ -H 'Accept-Encoding: gzip' | hexdump -C | head -3
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 127.0.0.1:5000...
* Connected to 127.0.0.1 (127.0.0.1) port 5000
* using HTTP/1.x
> GET / HTTP/1.1
> Host: 127.0.0.1:5000
> User-Agent: curl/8.12.1
> Accept: */*
> Accept-Encoding: gzip
>
* Request completely sent off
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: image/jpeg
< Date: Thu, 27 Mar 2025 04:36:08 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
{ [32588 bytes data]
100 67275    0 67275    0     0  43.1M      0 --:--:-- --:--:-- --:--:-- 64.1M
* Connection #0 to host 127.0.0.1 left intact
00000000  ff d8 ff e0 00 10 4a 46  49 46 00 01 01 01 00 48  |......JFIF.....H|
00000010  00 48 00 00 ff db 00 43  00 03 02 02 03 02 02 03  |.H.....C........|
00000020  03 03 03 04 03 03 04 05  08 05 05 04 04 05 0a 07  |................|
```
There is no `Content-Encoding` header in the response; the body is just the raw JPEG image.
What caused this content not to be compressed? It seems we need to continue searching for answers within the source code of compression.
Back to the Source Code
Did you notice that in the `shouldCompress` function above, there is another function call we simply ignored:
```js
function shouldCompress (req, res) {
  var type = res.getHeader('Content-Type')

  if (type === undefined || !compressible(type)) {
    debug('%s not compressible', type)
    return false
  }

  return true
}
```
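As an aside, this default check is not set in stone: the middleware accepts a `filter` option, and `compression.filter` exposes the default logic shown above, so you can extend it rather than replace it. A sketch based on the library's documented API (the `x-no-compression` header is just an illustrative convention):

```js
app.use(compression({
  filter: (req, res) => {
    // let clients opt out of compression with a custom header
    if (req.headers['x-no-compression']) {
      return false;
    }
    // fall back to the standard shouldCompress behaviour
    return compression.filter(req, res);
  }
}));
```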
So, what does the `compressible` function do? Here is the relevant part of its source (the two helper regexes are included from the module for completeness):
```js
var db = require('mime-db')

// helper regexes defined at the top of the compressible module
var COMPRESSIBLE_TYPE_REGEXP = /^text\/|\+(?:json|text|xml)$/i
var EXTRACT_TYPE_REGEXP = /^\s*([^;\s]*)(?:;|\s|$)/

function compressible (type) {
  if (!type || typeof type !== 'string') {
    return false
  }

  // strip parameters
  var match = EXTRACT_TYPE_REGEXP.exec(type)
  var mime = match && match[1].toLowerCase()
  var data = db[mime]

  // return database information
  if (data && data.compressible !== undefined) {
    return data.compressible
  }

  // fallback to regexp or unknown
  return COMPRESSIBLE_TYPE_REGEXP.test(mime) || undefined
}
```
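This logic is also published as a standalone npm package of the same name (compression depends on it), so you can probe it directly; a quick sketch:

```js
import compressible from 'compressible';

console.log(compressible('text/html'));                       // true
console.log(compressible('image/jpeg'));                      // false
console.log(compressible('application/json; charset=utf-8')); // true: parameters are stripped first
```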
In fact, it checks the given content type against the `mime-db` library, a JSON database containing information on various MIME types. For instance, the entry for the JPEG type involved here is as follows:
```json
{
  ...
  "image/jpeg": {
    "source": "iana",
    "compressible": false,
    "extensions": ["jpeg", "jpg", "jpe"]
  },
  ...
}
```
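You can confirm the lookup yourself; a minimal sketch reading the same database:

```js
import db from 'mime-db';

console.log(db['image/jpeg'].compressible); // false
console.log(db['text/html'].compressible);  // true
```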
With `compressible` set to `false`, it turns out that JPEG is considered non-compressible.
According to Wikipedia, JPEG is a lossy compressed image format [ref3], meaning images of this type have already been compressed, so compressing them again would be pointless.
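A quick sanity check with Node's built-in zlib, reusing the example.jpeg from earlier, shows why: gzipping an already-compressed JPEG typically saves little or nothing (and can even grow the file slightly):

```js
import { gzipSync } from 'zlib';
import { readFileSync } from 'fs';

const jpeg = readFileSync('./example.jpeg');
const gzipped = gzipSync(jpeg);

// expect roughly similar sizes; gzip cannot squeeze much out of JPEG data
console.log(`original: ${jpeg.length} bytes, gzipped: ${gzipped.length} bytes`);
```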
Therefore, not all files transmitted over HTTP are worth compressing, and the absence of a `Content-Encoding` header in the response does not necessarily mean there is a problem with compression.
Final Effect
As shown in the figure below, the response is compressed to under 10 kB while the original file is around 70 kB, significantly reducing bandwidth pressure when site traffic is high.

Well, another casual article written, feeling quite happy ;)