There were discussions recently about the feasibility of changing Blender's coordinate system. The answer was that it was highly unlikely to happen, because there is no single place where it is defined; instead, it is assumed throughout the code that Z is up, in probably tens of thousands of lines... and the work to regularize that would be gargantuan with no clear benefit, since exporters and importers are supposed to handle these transformations anyway.
As an engineer, Z-up right-handed is the only one that makes sense. All engineering follows this convention; motion sensors or GPS topography, for example, use it.
Camera calibration also uses a right-handed system, but typically rotated so that X is right, Y is down, and Z is depth. That way, the projection formulas give the pixel column, row, and depth values.
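To make that convention concrete, here's a minimal sketch of a pinhole projection in that frame. The intrinsics (fx, fy, cx, cy) are made-up illustrative values, not from any particular calibration:

```python
# Pinhole projection in the usual camera-calibration frame:
# X right, Y down, Z forward (depth), right-handed.
# fx, fy, cx, cy are hypothetical intrinsics for illustration only.

def project(point, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3D camera-frame point to (column, row, depth)."""
    x, y, z = point
    col = fx * x / z + cx   # X right   -> pixel column
    row = fy * y / z + cy   # Y down    -> pixel row
    return col, row, z      # Z forward -> depth

# A point 1 unit right, 0.5 down, 2 ahead of the camera:
print(project((1.0, 0.5, 2.0)))  # -> (720.0, 440.0, 2.0)
```

With Y pointing down, increasing row numbers go down the image, which is exactly how image arrays are stored.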
IMO there's no correct answer. It depends on the purpose.
Intuitively, XY forms the plane where things are and Z is used for the "additional dimension". We think in 2D first and then move to 3D, so that's the natural way to think about it.
So if you make a Cities: Skylines-type 2D game, the XY plane forms the ground. Which means that when you want to make the game 3D, Z means up.
But if you're making a platformer (like the old Mario games), XY is the vertical plane, like when drawing a graph on paper. Then if you want to make your game 3D (as in, using 3D models for assets while the movement stays 2D), Z points forwards.
Sometimes, even Y-down makes sense, as in UI programming. This is especially important for window resizing: when you resize the window, you want everything to look roughly the same, just with more space. That is easier if (0, 0) is at a point that doesn't move when resizing, like the top-left corner, since the bottom-left corner moves if you make the window taller.
I think the preference for Y as the vertical axis in gaming comes from a legacy of orienting the work around screen space (roughly 50 years ago). It was more efficient to have a memory layout that made it easy to supply data for generating a TV signal. Since a CRT raster scans from upper-left to lower-right, line by line, that became the standard for screen space: inverted Y. So screen memory (or however the screen is represented in RAM) starts at address/offset zero and increments from there. All the other features, like hardware sprites, used the same convention since it was faster and simpler (easier math).
When consumer 3D cards were relatively new, this was inherited, most notably when the hardware started to ship with "z-buffer" support. Carrying all this baggage into 3D engines just made sense, since there was never any confusion about the screen orientation of it all. Why bother flipping axes around between world, camera, and screen space?
In Physics we mostly used right-hand, but X-right, Y-up, and Z pointing towards the viewer.
But those are details. The only important choice is between left- and right-handed, as that affects the signs in the cross product (and some other formulas; generally everything that cares about which rotation direction counts as positive).
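To make that concrete, here's a small sketch (my own illustration, not from the thread) that classifies a basis by the sign of the scalar triple product x · (y × z), which is exactly the sign that flips between the two conventions:

```python
# Handedness check via the scalar triple product x . (y x z):
# +1 for a right-handed basis, -1 for a left-handed one.

def handedness(x, y, z):
    # y x z, using the standard component formula for the cross product
    cx = (y[1] * z[2] - y[2] * z[1],
          y[2] * z[0] - y[0] * z[2],
          y[0] * z[1] - y[1] * z[0])
    s = x[0] * cx[0] + x[1] * cx[1] + x[2] * cx[2]
    return 1 if s > 0 else -1

# Y-up with Z toward the viewer (OpenGL-style): right-handed
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, 1)))   # 1

# Y-up with Z into the screen (Direct3D-style): left-handed
print(handedness((1, 0, 0), (0, 1, 0), (0, 0, -1)))  # -1
```

The cross-product component formula itself never changes; what flips is whether its result agrees with or opposes your "positive rotation" axis.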
I haven't really done much in game design, but here's how I understand it.
In game design, let's go with 2D first. You mark position with numbers. In 2D you have X and Y. So if you say to move to (5, 5), then from the starting point that will be 5 right and 5 up.
3D works the same way, but a bunch of different groups were working on it in different ways without coordinating. So you wound up with multiple systems in which X, Y, and Z mean different things. In the majority of 3D systems, Y represents up/down. (The other two directions are still split roughly half and half between systems.)
From the chart I would guess that in Y-up right-handed, positive X is to the right. You look at the palm of your right hand.
That way, when you develop a 2D platformer, you use a standard XY coordinate system. Switching to 3D then logically adds the Z axis as depth, not height. A movie is usually shot that way as well.
Of course, that analogy breaks down as soon as your base game is top-down, like a city planner or such.
Anyways, with standards it's often best to just go with what most others do. So kudos to Unreal for not being stubborn.
Okay, am I understanding this correctly? In Y-up right-hand, positive X is “to the left”?
No, the X axis is usually the only consistent one (it increases from left to right). With right-handed coordinate systems you can have Z going into the screen (Y-down) or Z coming out of the screen (Y-up).
I find trying to make sense of terms like "to the left" tricky when we can rotate the directional cube any way we want. For example, in my drawing for "Y-up, left-handed", the red X axis points leftwards. However, we could rotate the unit-vector cube so that the X axis points right and the Y axis points up (i.e. the orientation we're most familiar with from 2D graphs). The Z axis would then point away from us, into the plane of the paper/screen.
In contrast, if we oriented the Y-up right-handed cube the same way, the Z axis would come out of the plane of the screen/page, towards us.
These distinctions only matter when we add a third dimension, so left- or right-handedness is basically the question: "when we add the third axis to a 2D square made by the other two axes, does the third axis come towards us or away from us?" I apologise if this hasn't made things any clearer; I can make it make sense by imagining the rotations in my head, but not everyone can visualise them like that.
Because a flat surface is an x-y plane. The ground is a "flat" surface, and so the z dimension is height.
For me, that's the only way that makes sense. But I program robots for a living, so I'm used to dealing with coordinate systems where the flat reference is the ground. Programmers seem to be using the screen as the flat reference. If I were building a game world, I'd probably use z-up convention.
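Going between the two worlds is just a rotation. This is my own sketch of one common mapping (a quarter turn about X); it's not a universal standard, and some pairs of conventions additionally need a handedness flip:

```python
# Converting a point between a Z-up world (robotics/engineering style)
# and a Y-up world (common in game engines). This mapping is a pure
# rotation about X, so it preserves handedness; conventions that also
# differ in handedness would need an extra axis negation.

def z_up_to_y_up(p):
    x, y, z = p
    return (x, z, -y)   # old "up" (z) becomes the new y

def y_up_to_z_up(p):
    x, y, z = p
    return (x, -z, y)   # inverse rotation

pt = (1.0, 2.0, 3.0)                    # z-up: 3 units above the ground
print(z_up_to_y_up(pt))                 # -> (1.0, 3.0, -2.0)
print(y_up_to_z_up(z_up_to_y_up(pt)))   # round-trips to (1.0, 2.0, 3.0)
```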
"Programmers seem to be using the screen as the flat reference."
In screen coordinates, the origin is the top-left corner of the screen, and the Y axis increases towards the bottom of the screen. So Y still isn't "up".
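The conversion between a "math" view (origin at bottom-left, Y up) and screen coordinates is just a flip against the viewport height. A minimal sketch, ignoring the off-by-one for pixel centers:

```python
# Screen space puts (0, 0) at the top-left with Y increasing downward.
# Flipping against the viewport height converts to/from math-style
# coordinates with the origin at the bottom-left and Y increasing up.

def math_to_screen(x, y, height):
    return (x, height - y)

def screen_to_math(x, y, height):
    return (x, height - y)   # the flip is its own inverse

h = 480
print(math_to_screen(10, 0, h))    # bottom edge -> (10, 480)
print(math_to_screen(10, 480, h))  # top edge    -> (10, 0)
```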
Yeah, it depends on whether you expect the 2D view to be on the floor or on the wall. If it's on the floor, Z is up. If it's on the wall, Z is forwards and backwards (depth). Personally I think the wall makes way more sense, since we already expect from the 2D view that Y is up and down; it feels weird to shift it to forwards and backwards when switching to 3D.
Every kid learning math has to learn at the very least X and Y coordinates for graphs. That's why I think Y is the natural up; it just makes more sense to stay in line with what everyone already knows instead of flipping it around.
Hah, reading this not long after watching Freya Holmér's latest video/talk is fun. "Everybody is disagreeing, except everyone is agreeing that Unreal is wrong."
The image is by her (it's also used in the presentation), and she mentioned wanting to make it easier/possible to change handedness. She'd already had an "apology" from Sweeney about the Unreal configuration. Interesting to see they're taking it further than that.