I know you all immediately wondered, "better compression?" We're already working on that. And parallel encoding/decoding, too! Just like with this update, we want to make sure we do it right.
We expect the next PNG update (Fourth Edition) to be short. It will improve interoperability between High Dynamic Range (HDR) and Standard Dynamic Range (SDR) content. While we work on that, we'll be researching compression updates for PNG Fifth Edition.
One thing I'd like to see from image formats and libraries is better support for very high-resolution images. Like, cases where you're zooming into and out of a very large, high-resolution image and probably only looking at a small part of it at any given point.
I was playing around with some high-resolution images a while back, and I was quite surprised to find how poor the situation is. Try viewing a very high-resolution PNG in your favorite image-viewing program, and it'll probably choke.
At least on Linux, the standard native image viewers don't seem to do a great job here, and as best I can tell, the norm is to use web-based viewers. These work around poor image-format support for high resolutions by generating versions of the image at multiple pre-scaled levels and slicing each level into tiles, saving each tile as a separate image, so that a web browser just pulls down a handful of appropriate tiles from a web server. Viewers and library APIs need to be able to work with the image without having to decode the whole thing.
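The tile-pyramid trick those viewers use is simple enough to sketch. Here's roughly the idea in Python with Pillow; the tile size, level numbering (level 0 = full resolution), and output layout are arbitrary choices of mine, not any particular viewer's scheme.

    from pathlib import Path
    from PIL import Image

    TILE = 256  # tile edge length in pixels (arbitrary choice)

    def build_tile_pyramid(src_path, out_dir):
        # Pillow refuses very large images by default as a decompression-bomb
        # guard; disable that here since we trust the source image.
        Image.MAX_IMAGE_PIXELS = None
        img = Image.open(src_path)
        out = Path(out_dir)
        level = 0
        while True:
            w, h = img.size
            # Slice the current level into TILE x TILE pieces and save each one.
            for ty in range(0, h, TILE):
                for tx in range(0, w, TILE):
                    tile = img.crop((tx, ty, min(tx + TILE, w), min(ty + TILE, h)))
                    dest = out / f"level{level}" / f"{tx // TILE}_{ty // TILE}.png"
                    dest.parent.mkdir(parents=True, exist_ok=True)
                    tile.save(dest)
            if w <= TILE and h <= TILE:
                break
            # Halve the image for the next, coarser level.
            img = img.resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS)
            level += 1

A viewer then only fetches the tiles from the level whose scale best matches the current zoom and whose coordinates intersect the viewport.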
gliv used to do very smooth GPU-accelerated panning and zooming --- I'd like to be able to do the same for very high-resolution images, decoding and loading visible data into video memory as required.
The only image format I could find that seemed to do reasonably well was pyramidal TIFF.
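If you want to produce one yourself, pyvips (the libvips Python binding) can write a tiled, pyramidal TIFF and later decode just a region of one level. This is only a rough sketch; the filenames, tile size, and crop rectangle are made up for illustration.

    import pyvips

    # Write a tiled, pyramidal TIFF: each successive level is halved, and every
    # level is stored as 256x256 tiles, so readers can fetch small pieces.
    img = pyvips.Image.new_from_file("huge.png")
    img.tiffsave("huge_pyramid.tif", tile=True, tile_width=256, tile_height=256,
                 pyramid=True, compression="deflate")

    # Later, open a coarser level (TIFF page) and decode only a small window of
    # it; libvips only touches the tiles that window overlaps.
    level2 = pyvips.Image.new_from_file("huge_pyramid.tif", page=2)
    viewport = level2.crop(1024, 1024, 512, 512)  # left, top, width, height
    viewport.write_to_file("viewport.png")

That partial-decode behaviour is basically the property I wish PNG viewers and libraries had natively.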
I would guess that better parallel encoding and decoding support goes hand in hand with solving this, since limiting the portion of the image that one needs to decode is probably necessary both for parallel decoding and for efficient high-resolution processing.
Lossless WebP still gets way better compression than PNG, though; this doesn't change that. Although they mention they're looking to improve it in the next version, so we'll see.
Because most of my software doesn't support webp, including but not limited to my Mac, phone, and messaging apps. Until everything supports it, I have to keep converting webp files to use them, so I just don't bother saving anything as webp. In fact, I have a Firefox extension that lets me save webp images as other formats.
It may be good to use for the web, but it’s not yet good for me.
Crazy, huh? But APNG was so well done that it just showed the first frame like a normal PNG in any browser that didn't support it, which was amazing. I used to have an avatar that had the TF2 engineer as the first frame and the spy as the second.
I'm glad it is now. I remember a decade or so ago, I wrote an APNG decoder, so I was deep in the world of APNG.
And I remember reading various things that made me think MNG was the 'more official' flavour of "animated PNG", and it was absurd to me, because APNG seemed like a much more approachable spec. I'm glad the winds have turned...
I remember MNG and never understood why APNG wasn't officially recognized. I didn't know it was widely supported already. Why do people still create and use GIFs on the internet if there is a superior format?
On the "better compression" front, I'd also add that I doubt that either PNG or WebP represent the pinnacle of image compression. IIRC from some years back, the best known general-purpose lossless compressors are neural-net based, and not fast.
These guys apparently ran a number of tests. In their results, a neural-net-based compressor named "NNCP" got the best compression ratio, beating out the also-neural-net-based PAC, which I think is the one I was remembering.
The compression time for either was far longer than for traditional non-neural-net compressors like LZMA: NNCP took about 12 times as long as PAC, and PAC about 127 times as long as LZMA, which would put NNCP at roughly 1,500 times the runtime of LZMA.