
This situation reminds me of the days when we, at the W3C HTML5 WG, were trying to sneak a SQL specification under the HTML5 "umbrella". And one particular flavor of it: SQLite's SQL as it was defined at the moment of writing. It didn't get through, for many good reasons, as we know.

As for GPU exposure to the Web ...

It makes sense only once we have a stable and unified GPU abstraction. As of now, DirectX 12, Vulkan and Metal are close but different.

Like WebGL, which is (more or less) unified OpenGL. But even that looks too foreign to HTML/CSS/script: immediate-mode rendering inside the architecturally retained display model of web documents.

And conceptually: the HTML5 umbrella is large but not infinite. 3D rendering is too far from HTML's "endless flat text tape" model.

I remember those <applet> days, when the browser was used as a component delivery platform for stuff that did not fit into the DOM and its structured yet styled text. That was conceptually the right way "to grasp the immensity".

These days, with WebAssembly, we have another incarnation of the <applet> idea, and I think GPU functionality belongs to it rather than to HTML. GLSL would be expressed in WebAssembly bytecode terms rather than in JS.

Web standards have to be backward compatible, and I doubt that the current, still ugly GPU paradigms will survive in the long run. Say tomorrow someone comes up with a practical voxel-based system to replace the current abstract vector ones: what will we do?



> Say tomorrow someone comes up with a practical voxel-based system to replace the current abstract vector ones: what will we do?

For one thing, any new approach will still have to solve the same problems around data transfer and formats that make up a lot of current APIs.

And for another, no matter how good this hypothetical new approach is, the old one will still be around to handle existing content and workflows forever anyway.


The SQL situation is a bummer. It made sense to avoid a monoculture, but IndexedDB is a near-useless wreck due to its terrible performance, and it has no meaningful querying primitives at all, so you get to implement them yourself and the performance is even worse.

Wish we could've just had two SQL implementations. There are other ones out there: Microsoft ships JET and an embeddable version of MSSQL, and someone could've embedded Postgres or something. As it is, people who care about performance are just going to compile SQLite down to wasm/asm.js and run it in a worker.


Canvas is immediate-mode and already breaks through the DOM. There are also popular movements to make the DOM appear more as an immediate-mode abstraction (React). In contrast, most retained-mode 3D abstractions are seen as pretty bad.


The majority of game engines' scene graphs are retained-mode 3D abstractions.

Everyone thinks they can do better with an immediate-mode API; then we start building some kind of data structure to track what needs to be drawn and when, with the result resembling the traditional joke of the half-implemented Lisp or ORM, but applied to retained-mode rendering.

Sure, some experts manage to get it right, and those are the ones that get to write AAA game engines, but many don't.
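The pattern described above can be sketched in a few lines. This is a toy illustration, not any real engine's design: an "immediate mode" drawing API that, in order to know what actually needs redrawing, quietly rebuilds a retained display list and diffs it against last frame's.

```javascript
// Hypothetical sketch: the retained-mode bookkeeping creeping back
// into an "immediate mode" wrapper.
function createRenderer() {
  let current = [];   // commands recorded this frame
  let previous = [];  // last frame's commands, retained for diffing

  return {
    rect(x, y, w, h) { current.push(["rect", x, y, w, h]); },
    // Returns true if this frame differs from the previous one,
    // i.e. a real redraw would be needed.
    endFrame() {
      const changed =
        current.length !== previous.length ||
        current.some((c, i) => JSON.stringify(c) !== JSON.stringify(previous[i]));
      previous = current;
      current = [];
      return changed;
    },
  };
}
```

The caller sees a stateless, call-per-frame API, but `previous` is exactly the retained structure the caller thought they had avoided.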


>> The majority of game engines' scene graphs are retained-mode 3D abstractions.

... but they are not built on an abstract retained mode API. That would be the wrong level of abstraction for a Web API that people can build on properly. I think that's the point here.

Unfortunately, Web APIs tend to be far too high level while missing out on low-level hooks, like the disaster that is Web Audio (and media playback in general).


Everyone needs their own retained-mode API, for sure; it's just too different to generalize. Ideally, you would want to take your model and directly turn that into your UI, adding dependency tracking or domain knowledge to handle invalidation. React and its ilk make it easier to do just that.
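A minimal sketch of that dependency-tracked invalidation idea (hypothetical, and not React's actual mechanism): reads during render are recorded through a Proxy, and a write re-renders only if it touches a property the render actually depended on.

```javascript
// Hypothetical sketch: record which model properties a render reads,
// and invalidate only when one of those properties is written.
function createStore(model, render) {
  let deps = new Set();
  let renderCount = 0;

  // Reads through this proxy are recorded as dependencies.
  const tracked = new Proxy(model, {
    get(target, prop) { deps.add(prop); return target[prop]; },
  });

  function rerender() {
    deps = new Set();   // re-record dependencies on every render
    renderCount++;
    render(tracked);
  }
  rerender();           // initial render

  return {
    set(prop, value) {
      model[prop] = value;
      if (deps.has(prop)) rerender();  // skip writes nobody depended on
    },
    get renderCount() { return renderCount; },
  };
}
```

Re-recording `deps` each render keeps the dependency set current even when the render function takes different branches from one frame to the next.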


But, the real difficulty seems to be writing a retained mode abstraction that actually fits everyone's use case.

I can't think of a retained mode API that works as well for RTS games as it does for FPSs, to say nothing of non-game uses like CAD.


What are the issues you think Unity, Unreal, CryEngine, Cocos2d-x, libGDX, OGRE and SceneKit have with RTS rendering?


> "This situation reminds me of the days when we, at the W3C HTML5 WG, were trying to sneak a SQL specification under the HTML5 "umbrella". And one particular flavor of it: SQLite's SQL as it was defined at the moment of writing. It didn't get through, for many good reasons, as we know."

Can you elaborate on the arguments against a dialect of SQL being available for use with local storage? I can only think of arguments in favour of it. Would be good to understand the grounds on which the idea was dropped.


The lesson of the Web is that when you expose some interface for people to use, they will start depending on documented features, undocumented features, and downright bugs of the first popular implementation. Second and subsequent implementations will need to spend a bunch of time reverse-engineering those bugs so they can be documented and reliably implemented in future so that existing websites keep working.
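A toy illustration of that dependence on accidents (the function names are hypothetical, not any real library): version 1 of an API happens to return results in sorted order as a side effect of its implementation, callers start relying on that order, and a rewrite that honors the same documented contract silently breaks them.

```javascript
// Hypothetical example of depending on an implementation accident.
function searchV1(items, term) {
  // v1 sorts a copy internally for an unrelated reason, so results
  // come out alphabetically ordered -- behavior never documented.
  return [...items].sort().filter((s) => s.includes(term));
}

function searchV2(items, term) {
  // v2 honors the same documented contract ("returns all matches")
  // but preserves input order, breaking order-dependent callers.
  return items.filter((s) => s.includes(term));
}

const pages = ["zebra.html", "apple.html", "mango.html"];
// A caller that assumes the first match is alphabetically first:
const firstV1 = searchV1(pages, ".html")[0]; // "apple.html"
const firstV2 = searchV2(pages, ".html")[0]; // "zebra.html" -- "broken"
```

Once enough sites depend on v1's ordering, every other browser must reverse-engineer and reproduce it, which is exactly the cost described above.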

The only alternative is to make sure all new interfaces have multiple, popular implementations simultaneously, so that authors can't afford to take advantage of bugs in one implementation. This is the sort of activity you see in WHATWG these days.

The problem with "a SQL dialect" being available for use on the Web is that every browser intended to use SQLite as the backend. Nobody wanted to invest the time and effort to write a second, compatible, equally-reliable implementation; nobody wanted to exhaustively research and document bugs and flaws in that specific version of SQLite; nobody wanted to commit to back-porting security fixes from later versions, or forward-porting the required bugs to later versions.

And since nobody wanted to do the work to make it possible, using SQL for local storage remains impossible.


The W3C attempted to say that version X.Y.Z of SQLite would be the SQL standard for the web, mandating 1) that it be frozen in time, and 2) that the only way to be compliant was to put SQLite into all browsers, bit for bit. There would be no other way to be compliant.

As much as I wanted it, and was bummed that Mozilla protested so much, they were doing the right thing by saying that SQLite cannot be used as a standard; it isn't a spec.

What one can do is compile SQLite to wasm (or asm.js, via Emscripten), or write a SQL engine in JS, and use that in the browser. That is totally fine.

https://github.com/kripken/sql.js


As someone who works on a product which compiles SQLite to JS to read a SQLite-based file format in the browser: this is a terrible, terrible “solution” and doesn’t scale for files larger than X MB, where X is a device-dependent limit based on the memory available to the browser.


Those memory limits are independent of and orthogonal to SQLite, but I totally understand the frustration.

https://www.html5rocks.com/en/tutorials/offline/quota-resear...

https://developer.chrome.com/apps/offline_storage


Is there a JS SQL/relational library out there that saves to local storage?


HTML and CSS are slowly losing ascendancy as the main render surface for web apps. The canvas provides a standard pixel buffer for both JS and compiled applications, so HTML's limitations are no longer a restriction.


Except that rendering everything to canvas is not accessible to people with certain disabilities.

Also mentioned recently here https://news.ycombinator.com/item?id=16347216



Simply doing things with HTML doesn't magically solve accessibility issues, either.

A simple DOM representation could be used as a fallback, if possible.


Sadly, I would wager that 99% of companies don't care, unless mandated by government regulation.


The whole point of the web was not to have people write directly to bitmaps, but to leave the presentation up to the rendering platform.

In the quest for eye candy we're losing what made the web great to begin with.


It's not about eye candy; the Web has become an application platform (against all resistance), and it can't rely on standards bodies to deliver every bit and piece required. Even in 2018, web applications are shunned by many professionals for their poor quality and performance.


> compiled applications

A lot of HN comments dismissed my prediction[1] that WebAssembly will bring opaque "compiled applications" that treat the canvas as a "standard pixel buffer", allowing adblocking CSS and request filters to be bypassed.

A lot of people seem to be focusing on the benefits of new technological changes ("no longer a restriction"), when they should first be concerned with the potential risks that change will create.

[1] https://news.ycombinator.com/item?id=10211050


>my prediction[1] that WebAssembly will bring opaque "compiled applications" that treat the canvas as a "standard pixel buffer", allowing adblocking CSS and request filters to be bypassed.

They could already do that. It's called serving the whole page as an image.


Or as a Flash blob, which used to be distressingly common.


<canvas>, by its specification, mandates that the pixel buffer be placed in memory with per-pixel access. This reduces GPU acceleration options to almost nothing.


I don't believe that's true for any modern browsers. They all use hardware accelerated rendering in the common case of drawing to a canvas that will be composited on-screen, and make a copy into process memory only when you access the pixels with a call like getImageData().


Composite operations ( https://developer.mozilla.org/en-US/docs/Web/API/CanvasRende... ) are achievable only on bitmap/frame buffers. And OpenGL's frame buffers are not famous for their performance.


I was referring to compositing the DOM objects onto the screen, which is GPU-accelerated in most cases.

The internal canvas compositing operations you mention are also GPU-accelerated in most cases.


Last I checked, many browser vendors only GPU accelerate specific blend modes. If you do anything fancy your canvas ops silently drop to software.


What are you talking about? You’re able to access random pixels, yes, but first you need to copy them to normal RAM. Canvas 2D and WebGL are both hardware accelerated.


I’ve actually noticed a return recently to simple “HTML native” UI design on the web. Not as many people trying to emulate an “OS native” experience by adding a bunch of extra DOM elements to render borders and effects and such.


From your lips to God's ears.


This is a bit of a rambling mess, but to just make one point: the winner has been chosen. Android is implementing Vulkan, and so Vulkan it will be.


It will be Vulkan and DX12. Microsoft is nothing if not a stubborn 800 lb gorilla.


Just like Apple, Sony and Nintendo.


Eh, not for WebGL it won't. Nobody even implements DX other than high-powered desktop GPUs.

Will it live on? Sure, we will still have games and Xboxes. But if you're going to pick a standard that can work on mobile and desktop, there is no contender other than Vulkan.


> Nobody even implements DX other than high-powered desktop GPUs

err, Intel iGPU, available in something as tiny/low-power as a LattePanda board… and all the Windows tablets…


1. Vulkan is an optional 3D API on Android 7.0 and later devices.

2. Vulkan is not allowed in UWP apps.

3. Even on the Switch, Vulkan is not the primary 3D API.


1. It is mandatory if the device claims to support VR mode;

2. Nobody (in the Joel on Software sense) is writing high-performance 3D UWP apps anyway.


1. Basically Google and Samsung

2. UWP requirements apply to any Windows Store game, and Microsoft has already started to be more of a bully about it.

3. There is hardly any Vulkan game worth playing that isn't DirectX 12 as well.


1. So basically the biggest Android vendor?

2. Both of them... If you are a gamer, you are going to Steam (and Valve is not a friend of the Windows Store idea). I'm not happy with this situation either, as I prefer GOG, and GOG is a distant second.

3. Doom, Wolfenstein II, F1 2017, The Talos Principle from the other side of the genre spectrum, or the upcoming Star Citizen. The reverse is more true: there's no DX12 game worth playing that isn't also a Vulkan game.


1. Not everyone is rich enough to buy flagship phones.

2. Lots of gamers just use Xbox or PS4 (no Vulkan there). And on PC, not everyone uses Steam. Plus, Microsoft has already started to be more aggressive regarding games on Windows 10, with the Age of Empires remaster being the first example.

3. Well, it is a matter of taste; not everyone craves FPSs. Then there is also the small matter that Vulkan is not supported on Xbox anyway, while DX12 is, with much better developer tooling.


1. Then a game console is not very relevant either.

2. Age of Empires is Microsoft's game in the first place; of course they will want their assets to use their technologies. Third-party adoption is a rounding error. On the PC, except for 1) hardcore indie game players and 2) games locked to the publisher's platform, everyone uses Steam. In the second case, where the games are exclusive to publishing platforms (Origin, Uplay), those have a similar attitude to the Windows Store as Steam does.

The Windows Store is an existential threat to them. They will ignore it as long as possible.

3. Sure, that's why I mentioned The Talos Principle (a puzzle game). The Xbox API will be handled exactly as the PS API is.


WebGL can cover 87.3% of game revenue.


AAA titles won't favour WebGL until the platform can compete with native, hand-tuned C++.


AAA will eventually be a sliver of total gaming revenue.


If Vulkan is the future, I wonder how mobile devs will make money if they can't target iOS.


It is a non-issue, thanks to middleware.



