• finitebanjo@lemmy.world

    “It’s a bit technical,” begins Birdwell, “but the simple version is that graphics cards at the time always stored RGB textures and even displayed everything as non-linear intensities, meaning that an 8-bit RGB value of 128 encodes a pixel that’s about 22% as bright as a value of 255, but the graphics hardware was doing lighting calculations as though everything was linear.

    “The net result was that lighting always looked off. If you were trying to shade something that was curved, the dimming due to the surface angle aiming away from the light source would get darker way too quickly. Just like the example above, something that was supposed to end up looking 50% as bright as full intensity ended up looking only 22% as bright on the display. It looked very unnatural; instead of a nice curve, everything was shaded way too extreme, rounded shapes looked oddly exaggerated, and there wasn’t any way to get things to work in the general case.”
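
    For anyone curious, here’s a rough sketch in Python of what the fix looks like. The names and the plain 2.2 power curve are my own simplification (real sRGB uses a piecewise formula, and this obviously isn’t Valve’s actual code): decode the stored value to linear light, do the lighting math there, then re-encode for the display.

        # Rough illustration of gamma-correct shading, using a plain 2.2 power
        # curve instead of the exact sRGB piecewise formula.

        def decode_gamma(v8):
            """8-bit stored value -> linear light intensity in 0..1."""
            return (v8 / 255.0) ** 2.2

        def encode_gamma(linear):
            """Linear light intensity in 0..1 -> 8-bit value for the display."""
            return round(255.0 * linear ** (1.0 / 2.2))

        texture = 255   # full-brightness texel
        shade = 0.5     # surface tilted 60 degrees from the light, so N·L = 0.5

        # What the old hardware effectively did: scale the encoded value directly.
        naive = round(texture * shade)
        # 128, which the display shows at only ~22% brightness.

        # Gamma-correct: decode, light in linear space, re-encode at the end.
        correct = encode_gamma(decode_gamma(texture) * shade)
        # 186, which the display shows at ~50% brightness.

        print(naive, correct)   # 128 186

    The gap between 128 and 186 is exactly the “way too dark” shading Birdwell is describing.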

    • gazter@aussie.zone

      Made harder by brightness being perceived non-linearly. We can detect a change in brightness much more easily in a dark area than in a light one: if the RGB value is 10% and shifts to 15%, we’ll notice it. But if it’s 80% and shifts to 85%, we probably won’t.
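
      You can put rough numbers on that with CIE L*, which approximates how bright a luminance level actually looks. Treating those percentages as linear luminance (my assumption; if they’re meant as encoded values the gap narrows but points the same way):

          # CIE L* lightness: roughly how bright a luminance level *looks*
          # (0 = black, 100 = white). Standard CIE formula.

          def cie_lightness(y):
              """Relative luminance Y in 0..1 -> CIE L* in 0..100."""
              return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

          dark_step = cie_lightness(0.15) - cie_lightness(0.10)    # ~7.8 L* steps
          bright_step = cie_lightness(0.85) - cie_lightness(0.80)  # ~2.2 L* steps

          print(f"{dark_step:.1f} vs {bright_step:.1f}")

      Same 5-point change in luminance, but it’s more than three times the perceptual jump at the dark end.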

      • kyle@lemm.ee

        Reminds me of a video I watched where they tested human perception of 60/90/120 FPS. They showed an all-white screen and flashed a single black frame at each refresh rate, and participants pressed a button if they saw the change. Then they repeated it with an all-black screen and a single flashed frame of white.

        At 60 and 90 FPS, most participants saw the flash. At 120 FPS, they only saw the change going from black to white.
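
        Part of that is just how briefly the flash is on screen; a single frame lasts one refresh interval (my arithmetic, not figures from the video):

            # How long a single flashed frame stays on screen at each rate.
            for fps in (60, 90, 120):
                print(f"{fps} FPS -> one frame lasts {1000 / fps:.1f} ms")
            # 60 FPS -> one frame lasts 16.7 ms
            # 90 FPS -> one frame lasts 11.1 ms
            # 120 FPS -> one frame lasts 8.3 ms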