I sent a few slightly tipsy St. Patrick’s Day Slack messages asking our dev team what it meant for product images and for Pixelz. Specifically, I wanted to know:
Is quality really better? Are file sizes really smaller? What’s the catch?
The immediate answers were “Maybe,” “It depends,” and “Holy shit, it’s SLOOOOW!”
By “slow,” we found that encoding an image as JPEG via Guetzli can take literally 1,000 times as long as our current preferred method, MozJPEG. Encoding that currently takes 2-4 seconds took 30 minutes or more in tests. Thirty minutes for a single image!
Guetzli was consistently several hundred times slower than MozJPEG, regularly measuring in minutes what MozJPEG did in tenths of a second.
There are a few other significant tradeoffs as well: Guetzli doesn’t support progressive image loading (where images load from blurry to sharp, instead of top to bottom) and only supports sRGB. Using sRGB is a best practice for images on the web, but so is progressive loading. Its absence is a big loss.
Pixelz Verdict: MozJPEG Remains First Choice for Product Images on the Web
- MozJPEG files had fewer bytes 6 out of 8 times
- MozJPEG and Guetzli were visually indistinguishable
- MozJPEG encoded literally hundreds to a thousand times faster than Guetzli
- MozJPEG supports progressive loading
I had to see it for myself (Could it really be THAT slow? And what about quality?), so I ran more tests on my own lightweight system.
My testing results matched our dev team’s, and as I read deeper I saw they mostly aligned with Google’s own documentation. Maybe Google Guetzli just isn’t intended for e-commerce or for use on larger images. Perhaps it’s a proof of concept for psychovisual encoding rather than a practical tool...
Maybe Guetzli will continue to improve, but the bottom line is that at this time there are better solutions for compressing product images.
A Brief History of JPEG Encoders
Before we dive into the tests, let’s take a quick walk down memory lane to get some context on JPEG encoding. If you don’t care, skip straight to the tests.
JPEG is an image file format that’s been around since the early 1990s, and it uses lossy compression. Ever since its introduction, people have worked on improving the compression algorithms with three different (and sometimes conflicting) goals: speeding up encoding time, increasing quality, and reducing byte size.
The JPEG Tripod: Speed, Quality, Byte Size
Encoding time is how long it takes to compress or decompress a JPEG. For example, if you capture an image in-camera using JPEG as your file format, the time it takes to save your photo is encoding time. When you’re working on an image in Photoshop and select “Save as JPEG,” the time it takes to save your image is encoding time.
Quality is more difficult to quantify. Most compressors let you pick a JPEG quality number from 1-100, but this isn’t actually standardized. Because JPEG is a lossy format, information is lost every time you compress; however, not all information is created equal. Algorithm improvements are usually based around increased knowledge of human visual perception. If the eye doesn’t notice it, why store it?
File byte size is determined by image dimensions, quality, and encoding method. Reducing file size saves on disk space and bandwidth, which provides cost savings while accelerating web page loading speed.
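To make those three legs concrete, here’s a minimal sketch of measuring them from the command line. It assumes a libjpeg-style cjpeg binary and a photo.ppm test image (both placeholder names): the -quality flag is the quality knob, time reports encoding time (its “real” and “user”+“sys” lines are the real and CPU times I report in the tables below), and ls -l gives the byte size.

$ time cjpeg -quality 90 photo.ppm > photo-q90.jpg
$ ls -l photo-q90.jpg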
Comparing Current JPEG Encoders
Libjpeg is the baseline JPEG encoder built into Windows and Mac operating systems. It’s maintained by the Independent JPEG Group, iterates only occasionally, and tries to balance encoding speed, quality, and file size.
Libjpeg-turbo is intended to be a higher-performance replacement for libjpeg, and it is in fact the default JPEG library for most Linux distributions. “Performance” here means using less CPU time during encoding and decoding. Libjpeg-turbo forked from libjpeg in 2010 and claims to be “generally 2-6x as fast as libjpeg, all else being equal.”
MozJPEG is intended for a specific use case: images on the web. Mozilla forked libjpeg-turbo in 2014 so they could focus on reducing file size, cutting bandwidth use and getting images on the web to load faster. It does this through progressive coding and trellis quantization, at the cost of encoding time, in the range of 4-7x slower. You can read the libjpeg-turbo creator’s thoughts on MozJPEG and why the two projects’ goals are incompatible.
Guetzli is what brings us here today, and it is focused on image quality. Google uses a new psychovisual model to decide where image quality loss will be least noticeable. The tradeoff is encoding time, on the order of 800-1,000x slower than MozJPEG. Google claims file byte size about 20-30% smaller than libjpeg, which is comparable to MozJPEG.
Google Guetzli vs MozJPEG Head-to-Head Comparison
Test Computer Overview
MacBook Air (13-inch, Early 2015)
Processor: 1.6 GHz Intel Core i5
Memory: 4 GB 1600 MHz DDR3
Encoders
Mozilla MozJPEG
Google Guetzli
Methodology
I’m going to export product images from Photoshop as quality 100 JPG and then run each through both MozJPEG and Guetzli at quality 90. I’ll give before-and-after images for quality comparison, record CPU encoding time, and compare file sizes.
Commands:
$ time mozcjpeg -quality 90 -progressive [input] > [output]
$ time guetzli --quality 90 [input] [output]
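To run a whole test set in one pass, I used a small loop along these lines. This is a one-shot sketch: it assumes the quality 100 exports live in a tests/ directory and that both encoders are on the PATH under these names.

$ for f in tests/*.jpg; do
>   time mozcjpeg -quality 90 -progressive "$f" > "${f%.jpg}-moz.jpg"
>   time guetzli --quality 90 "$f" "${f%.jpg}-guetzli.jpg"
> done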
I’ll compare images across a range of megapixel sizes, from 1MPix to 48MPix. Our product image report shows that 1MPix is the most common size for a product image, but it’s a plurality, not a majority. Product image dimensions are all over the map.
I’m also trying a wide range of megapixel sizes because I want to see how accurate two of Google’s usage notes on Guetzli are:
Note: Guetzli uses a large amount of memory. You should provide 300MB of memory per 1MPix of the input image.
Okay, that might matter once I get past 12 MPix images. How long should this take, anyway?
Note: Guetzli uses a significant amount of CPU time. You should count on using about 1 minute of CPU per 1 MPix of input image.
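Doing the math on those two notes: my largest test image is 48MPix, so Guetzli should want roughly 48 × 300MB ≈ 14.4GB of memory and about 48 minutes of CPU time for that single file. This MacBook Air has 4GB of RAM, so if the memory note holds, swapping should stretch real encoding time out even further.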
That's pretty dramatic. Hmmm.... Without further ado, let’s get cracking!
Guetzli vs MozJPEG at 1 Megapixel
1 MPix JPEGs exported as quality 100 and 90 from Photoshop. Quality 100 then compressed via MozJPEG and Guetzli with byte size, real encoding time, and CPU encoding time provided.
Lightning Analysis
Average results, rounded to hundredths:
|  | Byte Compression | Encoding Real Time | Encoding CPU Time |
| --- | --- | --- | --- |
| MozJPEG | 74.47% | 0.40s | 0.34s |
| Guetzli | 74.59% | 1m26.73s | 1m24.57s |
Bytes: Byte size is a wash. In three of four instances Moz produced smaller images, but the average tilts marginally in Guetzli’s favor because the one instance it won was the largest image.
Time: Look at those encoding times! Moz averaged less than half a second, while Guetzli was pushing a minute and a half. If you’re processing hundreds or thousands of images (or in our case, tens of thousands of images a day), that really adds up.
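Back-of-the-envelope: at Guetzli’s ~85 seconds of CPU time per 1MPix image, 10,000 images is roughly 850,000 CPU-seconds, or almost ten CPU-days of encoding. At MozJPEG’s 0.34s average, the same batch takes under an hour of CPU time.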
Quality: I can’t tell the difference. Can you?
Let’s take a look at larger images and see how that affects our results.
Guetzli vs MozJPEG at 12 MPix - 48 MPix
JPEGs exported as quality 100 and 90 from Photoshop. Quality 100 then compressed via MozJPEG and Guetzli with byte size, real encoding time, and CPU encoding time provided.
Lightning Analysis
Average results:
|  | Byte Compression | Encoding Real Time | Encoding CPU Time |
| --- | --- | --- | --- |
| MozJPEG | 74.92% | 5.52s | 5.41s |
| Guetzli | 71.12% | 68m34.67s | 48m59.81s |
Bytes: MozJPEG averaged 3.8 percentage points better byte compression, which makes them roughly equal when you consider the small sample size.
Time: Again, wow! The fastest Guetzli encoding took 20 minutes, while the slowest took over 3 hours. If you’re a photographer or anyone else looking to store full-size JPGs locally, Guetzli is completely impractical. Even iPhones shoot 12MPix these days, and our test at that size took 25 minutes to encode.
Parting Google Guetzli Thoughts
The idea of psychovisual encoding improvements sounds cool. It is cool. At this time, however, there simply isn’t enough of a visual improvement to make Google Guetzli worth the absolutely astonishing processing time it takes, at least not for e-commerce.
In the interest of fairness, I should note that our test uses a small sample size and is for a specific use case: product photography workflows. Product images typically have simple backgrounds, full focus, and a high megapixel count; maybe Guetzli would perform better with another kind of image. We also care deeply about encoding speed because getting images from the studio to the web as fast as possible is critical to e-commerce success.
If you’re compressing product images for use on the web, MozJPEG remains the encoder of choice.