Wednesday, June 12, 2013

.Net's Built-In JPEG Encoder: Convenient and Terrible

Those coding in .Net may not have discovered the System.Drawing namespace, which lets you load up an image in any popular web format (gif, jpg, png) without writing any extra code, manipulate it in as elaborate a way as you'd like, and save it back out to any of those web formats.

If you have discovered it, you may also have noticed that the jpg codec is just terrible. For my task I was loading up a photo and drawing something geometric on it - the photo being the reason to save it back out as a jpeg. But for these examples I'll just use a blue background and a yellow box. I realize something that plain and geometric is a worst case for jpeg (png would be preferred), but if there were a photo in the background png would be a poor choice, and the jpeg results from this bad codec would be no better.

This is a development blog so if you're wondering how to draw a yellow rectangle on a blue background, here's the code:

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq; // needed for .First()


// Draw a yellow box on a blue background.
var image = new Bitmap(400, 400);
var g = Graphics.FromImage(image);
g.FillRectangle(new SolidBrush(Color.FromArgb(0xa2, 0xbf, 0xdf)), 0, 0, 400, 400);
g.DrawRectangle(new Pen(Color.FromArgb(0xf2, 0x9a, 0x02), 19), 40, 40, 100, 100);

// Find the built-in JPEG encoder and save at 100% quality.
var jpegCodec = ImageCodecInfo.GetImageEncoders().First(enc => enc.FormatID == ImageFormat.Jpeg.Guid);
var jpegParams = new EncoderParameters(1);
jpegParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
image.Save(App.AppRoot + @"\test.jpg", jpegCodec, jpegParams);

App.AppRoot is a class I include in all my apps - how you decide where to write files is up to you, but you can determine the project root here.
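If you want to generate the whole comparison set yourself, the same save call can be run at several quality levels in a loop. A minimal sketch - the output file names and current-directory destination are my own choices, not from the original code:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

class QualitySweep
{
    static void Main()
    {
        // Same test image as above: a yellow box on a blue background.
        using (var image = new Bitmap(400, 400))
        {
            using (var g = Graphics.FromImage(image))
            {
                g.FillRectangle(new SolidBrush(Color.FromArgb(0xa2, 0xbf, 0xdf)), 0, 0, 400, 400);
                g.DrawRectangle(new Pen(Color.FromArgb(0xf2, 0x9a, 0x02), 19), 40, 40, 100, 100);
            }

            var jpegCodec = ImageCodecInfo.GetImageEncoders()
                .First(enc => enc.FormatID == ImageFormat.Jpeg.Guid);

            // Save once per quality level compared in this post.
            foreach (long quality in new long[] { 60, 80, 100 })
            {
                var jpegParams = new EncoderParameters(1);
                jpegParams.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                image.Save("test-q" + quality + ".jpg", jpegCodec, jpegParams);
            }
        }
    }
}
```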

Note: Blogger isn't a very good blogging platform, and among its flaws is that it re-encodes images and adds an annoying white border to them, as you'll see in the images below. Each is also linked to the actual output file, which I've uploaded separately outside of Blogger. You can also see a lossless PNG with all 6 here.

.Net 4.5 Windows, JPEG 60% quality

Terrible, so let's get rid of all the compression artifacts by turning quality up to 100%.

.Net 4.5 Windows, JPEG 100% quality

Still pretty bad, and really not acceptable for "100% quality" given that's the max you can possibly ask for. You can easily see the edges of the box are still blurry, and there are still noticeable artifacts inside the box itself. Were we using a photo background instead, artifacts might be less noticeable in the photo - but the photo would likely worsen the artifacts in the box itself. This would be fine if this were the 60% or 80% setting, but 100% should sacrifice file size to get you as close to the original image as possible.

But perhaps the problem is the JPEG format itself. Here's Photoshop:

Photoshop CS5 60%

Photoshop CS5 100%

So clearly the JPEG standard is not the issue - you can render a standard JPEG that's nearly identical to the original. It's also worth noting that Photoshop's 100% comes out at 6.3k, while .Net's comes out at 7.3k, despite the quality disparity.

But that's closed source, and it's clear Adobe has invested heavily in its JPEG encoder. How about an open source JPEG encoder, with a ragtag bunch of open source coders working on it?

Gimp 2.8 60%

Gimp 2.8 100%

.Net's default JPEG encoder is so bad that Gimp's 60% effort is about equivalent to the top quality level .Net can deliver. If Microsoft were to just use Gimp's codec as-is (available cross-platform including on Windows), that would be a major step up in quality.

I experimented with other encoder implementations; ArpanJpegEncoder is an OK reference project, but as quality goes it's barely better than the built-in .Net encoder. LibJpeg, one of the most popular encoders on the Linux side, has been partially ported to pure C# and is available for free from BitMiracle. Its output is substantially better, and closer to the Gimp output above - unsurprising, given Gimp appears to use the original C version of LibJpeg. However, BitMiracle's high-level API forces LowDetail chroma subsampling. I forked the code to add support for HighDetail at the top JpegImage level.
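For reference, using BitMiracle's high-level API looks roughly like this - a sketch from memory, so check the library's documentation for the exact class and parameter names in your version:

```csharp
using System.Drawing;
using System.IO;
using BitMiracle.LibJpeg; // NuGet package from BitMiracle; API may differ by version

class LibJpegSketch
{
    static void Main()
    {
        using (var bitmap = new Bitmap(400, 400))
        using (var output = File.Create("test-libjpeg.jpg"))
        {
            // ... draw on the bitmap as in the earlier example ...

            // Wrap the GDI+ bitmap and write it out through LibJpeg's encoder.
            var jpeg = new JpegImage(bitmap);
            jpeg.WriteJpeg(output, new CompressionParameters { Quality = 100 });
        }
    }
}
```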

By far the best implementation I've been able to get working is unfortunately not C# at all - Magick.Net. You can install it via NuGet pretty easily - the AnyCPU Q8 version is a safe default if you're unsure. It also requires not one but two special installs on any server you run it on, both from the Visual Studio SDKs - see the docs. Because it's just a wrapper around the famous ImageMagick project you likely won't be making many improvements yourself, but it is able to deliver relatively high-quality, low-filesize images (Photoshop still beats it, though - more for some images than others, for some reason).
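With Magick.Net the save itself is only a few lines. A sketch, assuming the NuGet package is installed - the input and output file names are placeholders of my own:

```csharp
using ImageMagick; // NuGet: the AnyCPU Q8 Magick.NET package

class MagickSketch
{
    static void Main()
    {
        // Load any format ImageMagick understands (source file name is a placeholder).
        using (var image = new MagickImage("test.png"))
        {
            image.Quality = 85;              // JPEG quality used on write
            image.Write("test-magick.jpg");  // output format inferred from the extension
        }
    }
}
```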


  1. Hi,

    The images links are dead :(

    1. You're right! Sorry about that. Fixed. And updated to discuss the implementation we use today.

  2. Hi Chris, Thanks for writing the article, I had no idea the .NET jpeg encoder was that bad!

Every time I do a search related to jpeg your name seems to pop up, so I'm wondering if I could possibly ask for your assistance on something I'm working on?

    I'm currently writing a cross platform imaging library for .NET based on CoreFX. You can see it here.

    As part of that library I have been writing codecs for various imaging formats and I've cracked most of them, the only one I can't seem to fathom is jpeg.

I've experimented with LibJpeg.NET but found it to be far too slow, and also unable to process images with only one row, so I'm working on a baseline encoder. I cannot figure out what algorithm to use to collect the correct YCbCr components for chroma subsampling, though, so everything I encode is broken. Is this something you could shed some light on? I'm finding the original code almost impossible to follow.

    Kindest Regards


    1. Unfortunately I don't fully understand JPEG either; in the LibJPEG case I was just muddling through the code like I would anything someone else wrote - not reading with a full understanding of the JPEG standard.

      That said what you're asking sure seems like something that would be laid out in the JPEG standard, which is an open standard. I believe you're looking for Page 3 in this PDF:

This article was probably written back when the .Net version was not yet 4.6. Also, the image chosen for the test is not a good example; if you use a real-life photo the results will be completely different. I made an application and tried this with different images, and the results were not so different between .Net and the other library.

    1. The image is chosen to maximize JPEG artifacts, to simplify comparing .Net's and modern encoders. You can see despite the difficulty, Photoshop and GIMP can do it without noticeable artifacts at modest quality levels. There is no quality level where .Net can.

While "a real life photo" will often be less challenging, that's not always true - hard lines do appear in photos - and these failings are still indicative of the quality - and file size, remember - you can expect. In short, the .Net encoder should just not be used, for anything.

      Magick.Net for example is a free alternative and delivers better results in every regard.

  4. 1. JPEG is by nature designed to be LOSSY (although it supports LOSSLESS), and can't be compared with PNG, because PNG is designed to be LOSSLESS. Also, JPEG was designed in the last millennium (and left there on purpose for backward compatibility). .NET also has many other Windows encoders, like Windows Presentation Foundation (System.Windows.Media, from 2006) and similar - have you tried those? There are also the JPEG Lossless, JPEG-LS Lossless, and JPEG XR Lossless standards (the last one is actually not from the JPEG Group but rather from Microsoft, and is supported by WPF). Unfortunately I haven't found many JPEG encoders/decoders that support the entire standards - the LOSSLESS features especially are not supported by many, where LOSSLESS actually gives awesomely better compression than the ones we use daily. TIFF also gives nice compression and can be LOSSLESS. But the world tends to change slowly - mainly for backward support.

    2. Why didn't you diff the resulting images and give some actual difference in pixel values, rather than just claiming there are artifacts? People see colors differently, and some are also color blind (8% of the male population, ~320 million males).

    3. Also, loosely comparing compression/quality settings from different encoders is bad. Some encoders use floats 0..1, some use integers 1..100, some use 1..12 (PS), and some have predefined constants. These do not map one-to-one (1:1). You would need to read the documentation to know how the compression/quality value from each encoder maps to the original standard, and then compare them from the standard's viewpoint - likewise for progressive compression and similar, when we talk about the world wide web.

    4. Image quality also depends on decoders, which are differently optimized - they generally give the same results, but some are tuned to give better image quality when paired with the same company's encoder (subtle JPEG AppX headers in JPEG streams). Some use floats, some use ints, giving a loss in signal information, and so on...

    And to sum it up, different image formats are for different uses.
    If you want the original image pixel values to be preserved on some hard media, you should always use a LOSSLESS method.
    The only question then is how much storage you have. I.e.: the less storage you have, the better compression you need, and the more CPU power you need to compress.
    It's a rather simple equation for all encoders/decoders.
    If you don't need the original image pixel values but need something close to them, use a LOSSY method with higher quality.
    The same formula as above: less storage, more CPU power to compress.