JavaScript alone is not a simple beast. It needs to be fast enough for modern web apps, which means a JIT; it also needs sandboxing, plus all of the standard web APIs a browser has to implement. All of this also needs to be robust: browsers ingest the majority of what people see on the Internet, and they have to handle every single edge case gracefully. Robust software is incredibly difficult. Security in a browser isn't easy either, since you're parsing a bunch of untrusted HTML, CSS, and JavaScript.
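To make the "handle every single edge case gracefully" bit concrete: HTML parsing has mandatory error recovery baked into the spec, so even garbage input has to produce a sensible DOM. You can see it right from the console with the standard DOMParser API:

```js
// Feed the parser hopelessly malformed markup: per the HTML spec's
// error-recovery rules, it never throws, it just repairs the tree.
const doc = new DOMParser().parseFromString(
  "<p><b>unclosed tags <i>everywhere",
  "text/html"
);
console.log(doc.body.innerHTML);
// -> "<p><b>unclosed tags <i>everywhere</i></b></p>"
```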
Then there is the monster that is CSS and layout. I can't imagine being one of the people who have to write code dealing with that; it'd drive me crazy.
Then there are all of the image formats, HTML5 canvases, videos, PDFs, etc.
Adding on to this: while this article is fast approaching 20 years old, it gets into the quagmire that is web standards. Roughly 10 (now ~30) years of untrained amateurs (and/or professionals) doing their own interpretations of what the web standards mean, plus another decade or so before that in which there were no standards at all, has led to browsers needing to gracefully handle millions of contradictory instructions coming from different authors' web sites.
Here’s a bonus: the W3C standards page. Try scrolling down it.
Thanks for these explanations, that makes a lot more sense now. I didn't even think to consider that browsers might be using something other than an off-the-shelf implementation for image and other file formats… lol
Sorry I didn’t mean to imply they don’t use shared libs, they definitely do, but they have to integrate them into the larger system still and put consistent interfaces over them.
Yeah, I realize that. My go-to comparison would be PDF: where Firefox has PDF.js (I think?), Chromium just… implements seemingly the entire (exhaustive!) standard itself.
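For what it's worth, you can see how "all JavaScript" Firefox's approach is by driving PDF.js yourself; it's published as the pdfjs-dist npm package. A rough sketch (worker setup omitted, and treat the exact API surface as approximate since it has shifted between major versions):

```js
import * as pdfjsLib from "pdfjs-dist";

// Render page 1 of a PDF into a <canvas> element.
async function renderFirstPage(url, canvas) {
  const pdf = await pdfjsLib.getDocument(url).promise;
  const page = await pdf.getPage(1); // pages are 1-indexed
  const viewport = page.getViewport({ scale: 1.5 });
  canvas.width = viewport.width;
  canvas.height = viewport.height;
  await page.render({ canvasContext: canvas.getContext("2d"), viewport }).promise;
}
```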