PageSpeed Insights Results

Since the PageSpeed Insights (PSI) testing tool was introduced, it has recommended lossless compression. This means that no quality is lost; in fact, if even a single pixel changes, you can no longer call it lossless. On the flip side, webpagetest.org (WPT) has always recommended lossy compression at a quality level of 50. Why is that relevant? It matters because Google supports/sponsors WPT. I don’t know exactly what their involvement is, but they are listed prominently on the about page for WPT.

As such, I’ve long wondered when the time would come that Google would start changing their image compression recommendations to line up with WPT. On December 12, it finally happened, but I delayed writing about it because some thought it was an accident, an oversight: someone had accidentally flipped a switch, and surely Google would revert the changes in short order. At this point, Google will not be reverting anything. Most likely, they are fine-tuning the quality detection used by the testing tool to give more consistent results.

So what does that mean for us, the website owners and developers, the content creators, and publishers?

First, it does NOT mean that you should just blindly follow their recommendations. The PageSpeed Insights tool is a computer after all, and it cannot determine the absolute best quality-to-compression ratio 100% of the time. It DOES mean that Google is recognizing a changed landscape in the world of image optimization tools. Not just a changing landscape; the future is already here. In the past, lossy compression was seen as a way to get great compression and speed, but it was not for sites that needed great-looking images. That time is no more, and there are many tools available that can give you the best of both worlds: high compression and high image quality. EWWW Image Optimizer is one of those, and we use the amazing TinyJPG and TinyPNG tools to do so.

So one would think you could just compress your images with whatever lossy tool you want, even Photoshop’s “Save for Web”, and problem solved, right?

Well, yes… and no. You can achieve the savings that PSI recommends with several different methods. You can even achieve better quality than PSI’s recommendation when using an intelligent algorithm like TinyJPG, EWWW IO, or Imagify. You will achieve your goal of a faster website with lighter page loads, but there is something you need to watch out for when using PSI to confirm those results.

PSI’s switch to lossy compression means retesting has a fatal flaw. To understand that flaw, we have to know how PSI determines the optimal compression level for your images. PSI currently uses the freely available ImageMagick tool for compression, and it likely uses it for quality detection as well.

So how do they know how much to compress my images?

The most likely method is to use the compression library to determine what quality level was used on the image. So they would use ImageMagick to detect the existing quality of your images, and then the number ImageMagick reports is compared against the quality level PSI has established as ideal: 85.
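
A rough sketch of that kind of check is below, using ImageMagick’s identify command (which can report its estimate of a JPEG’s quality via the %Q format escape) and a placeholder filename of photo.jpg. PSI’s actual detection code is not public, so this only illustrates the general idea:

```python
import subprocess

# ImageMagick's "identify" can report its estimate of the quality level a
# JPEG was encoded at (the %Q format escape). This is an approximation of
# the approach described above, not PSI's actual code.
IDEAL_QUALITY = 85  # the target quality PSI is said to use
FILENAME = "photo.jpg"  # placeholder filename

result = subprocess.run(
    ["identify", "-format", "%Q", FILENAME],
    capture_output=True, text=True, check=True,
)
detected = int(result.stdout.strip())

print(f"Detected quality: {detected}")
if detected > IDEAL_QUALITY:
    print("This image would likely be flagged for further compression.")
else:
    print("This image is already at or below the target quality.")
```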

While this approach works well the first time around, problems arise once you compress an image and then retest it.

So, what about that flaw, and what’s wrong with the PSI testing method?

There are several problems. First, the JPG quality level is an arbitrary measure. Every tool is free to implement its own quality scale and interpret it however it wants. While most tools use a 1-100 scale, Photoshop used to have a 1-12 slider, but no matter the numbers, the interpretation is arbitrary.

Second, the quality level is not stored in the image, so the only way to find out what quality level was used on an image is to guess. The only way to make an educated guess is to look for patterns in the image that tell you how much compression was applied by a given tool. Stock libjpeg is different from mozjpeg, mozjpeg is very different from libjpeg-turbo, and the more sophisticated compression engines are a completely different ball of wax. Even if PSI built in tests for every JPG compression tool out there, how would they know what tool was used to compress an image in the first place? They can’t, so they fall back to the library they are using (ImageMagick) and estimate how much compression was applied, based on what ImageMagick would do to an image at the ideal quality level. This works fine on relatively uncompressed images, but once you’ve compressed your images as much as they tell you to, the algorithm starts throwing false positives, especially if you are using something besides ImageMagick.
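
To see why this is guesswork, look at what a JPEG actually stores. The sketch below (assuming Pillow is installed, with photo.jpg as a placeholder filename) dumps the quantization tables, which are the only compression “fingerprint” an encoder leaves behind; any quality estimate has to be reverse-engineered from them, and different encoders produce different tables for the same nominal quality:

```python
from PIL import Image

# A JPEG never records "quality=85"; it records quantization tables.
# Different encoders produce different tables for the same nominal quality
# setting, which is why any detector has to guess.
img = Image.open("photo.jpg")  # placeholder filename

for table_id, table in img.quantization.items():
    # Larger values mean coarser quantization, i.e. heavier compression.
    print(f"Table {table_id}: min={min(table)} max={max(table)} "
          f"avg={sum(table) / len(table):.1f}")
```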

When using lossy compression, there is no perfect method for detecting how much compression needs to be performed, but as I’ve told many customers over the years, there is a chance that the algorithm, no matter how finely crafted, will degrade the image a little more on every subsequent pass. The loss gets smaller and smaller after the first attempt, but because we are dealing with lossy compression, and with computers attempting to approximate what the human eye will tolerate, there is always the potential for further pixel loss if you recompress an image with any method.
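
If you want to see that generational loss for yourself, here is a minimal sketch (again assuming Pillow, with placeholder filenames and a quality setting of 85) that re-saves a JPEG several times and counts how many pixels drift on each pass:

```python
from PIL import Image, ImageChops

# Re-save a JPEG repeatedly at the same quality and measure how many pixels
# change on each pass. Filenames and the quality value are placeholders;
# the point is that every lossy pass can nudge the pixels again.
img = Image.open("photo.jpg").convert("RGB")

for generation in range(1, 6):
    img.save("recompressed.jpg", "JPEG", quality=85)
    reopened = Image.open("recompressed.jpg").convert("RGB")

    # Count pixels whose value changed at all during this pass.
    diff = ImageChops.difference(img, reopened).convert("L")
    changed = sum(1 for value in diff.getdata() if value)

    print(f"Pass {generation}: {changed} pixels changed")
    img = reopened
```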

A short analogy can help explain the difference between PSI’s compression and EWWW IO’s methods. Using a tool like ImageMagick is like using a chainsaw to compress your images. It’s fast, but rough around the edges, and it leaves a noticeable impact on your images. Using EWWW IO, with the industry-leading TinyJPG compression, is like using a chisel to sculpt your image precisely to the best quality and compression level. It takes longer, sometimes a lot longer, but the end result is a finely-tuned image where the compression is not noticeable.

If you then use a tool like PSI to test the compression of your hand-chiseled images, what happens? Well, PSI starts off with the assumption that everyone is using chainsaws on their images. It looks for those noticeable changes (often referred to as “artifacting”) and attempts to calculate the quality of the image based on the effects of using a chainsaw. It’s looking for the rough edges to tell how much compression was applied. But if you’ve used EWWW IO, there will be a serious absence of rough edges, and that’s why PSI can fail to detect when EWWW IO has already compressed an image. Could we compress it further, since Google thinks your visitors will be ok with chainsaw-compressed images? Certainly, but who wants chainsaw images when you can achieve the same (or better) compression with the chisel that is EWWW IO?

So what’s the point of testing with PSI?

It gives you a target to aim for, and recommendations to make your site faster. If you achieve what their test says regarding image optimization, you will have a faster site, and that boosts both your SEO and your bottom line. But unlike the days when they recommended lossless compression, retesting to verify that you’ve completed the steps will not always be accurate, and will sometimes be horribly inaccurate.

If you still want to do lossless compression, and you want a way to test that, there is good news. As of now, the PSI results that Gtmetrix.com reports are still lossless. They don’t appear to actually use PSI in their testing; they’ve just replicated the test, and they have not (yet) updated their results to match PSI’s new lossy recommendations.

So welcome to the future! Using EWWW IO’s API can give you amazing image quality and great compression. There are no sacrifices here, just faster page loads. There’s really no reason not to use the new generation of lossy compression tools to make your site stand out from the rest.

UPDATE: Google updated their compression recommendations on March 24, 2017, so this article has been heavily revised to reflect those changes.