Mapping to the PICO-8 palette, perceptually

September 7th, 2025

Given a palette and an image, how would you map each pixel to a color in the palette? In this article we’ll investigate how an advanced perceptual color space (CAM16-UCS) compares to simpler alternatives.

For this experiment I chose a somewhat strange cropped still from the Big Buck Bunny short and the PICO-8 fantasy console’s 16-color palette:

A simple way to map pixels to palette colors is to compute the Euclidean distance between each pixel and every palette color, and choose the palette color with the shortest distance. Note that this method completely discards the image’s spatial structure, which puts a limit on how good the results can look: we can’t, for example, allocate more shades to smooth regions to avoid banding. But it is the simplest approach.
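
For concreteness, here’s a compact NumPy sketch of that brute-force search; the full version with pluggable distance functions is in Appendix B, and nearest_palette_indices is just an illustrative name:

import numpy as np

def nearest_palette_indices(pixels, palette):
    # pixels: (H, W, 3) float array, palette: (K, 3) float array, in the same color space.
    # Compute every pixel-to-palette-color distance and pick the smallest per pixel.
    delta = pixels.reshape(-1, 1, 3) - palette.reshape(1, -1, 3)  # (H*W, K, 3)
    dist = np.sqrt(np.sum(delta**2, axis=-1))                     # (H*W, K)
    return np.argmin(dist, axis=-1).reshape(pixels.shape[:2])     # (H, W)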

Done in plain sRGB, the results are interesting but not very faithful. However, if we weight the RGB channels by their luminance contribution and increase the contrast a bit (libimagequant’s color space; see weighted_rgb_distance in Appendix B), sRGB starts to look decent:

We don’t need to limit ourselves to sRGB-derived spaces. The CAM16 color appearance model includes a “UCS” (Uniform Color Space), in which Euclidean distances should correspond more closely to perceived color differences. So is it better for this task? Well, it’s hard to say whether CAM16-UCS really looks better below than Oklab, or than CIELAB with a 3x lightness weight and the HyAB distance (an absolute lightness difference plus a Euclidean chroma difference):

Pixel mapping results in CAM16-UCS, Oklab, and weighted CIELAB color spaces.
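
To make the comparison concrete, here is roughly how the weighted-CIELAB variant can be wired together with the functions from Appendix B, using srgb_u8 and palette_srgb_u8 as defined elsewhere in this post. This is a sketch rather than the exact script behind the figure, and it assumes a colour-science version that exposes XYZ_to_Lab (the Oklab variant is analogous with XYZ_to_Oklab):

import colour

# Convert both the image and the palette into the space where distances are measured.
xyz_img = colour.sRGB_to_XYZ(srgb_u8 / 255.0)
xyz_pal = colour.sRGB_to_XYZ(palette_srgb_u8 / 255.0)
lab_img = colour.XYZ_to_Lab(xyz_img)   # CIELAB, L* roughly in [0, 100]
lab_pal = colour.XYZ_to_Lab(xyz_pal)

# Nearest palette color per pixel under the 3x lightness-weighted HyAB distance.
# map_pixels_to_palette and L_weighted_hyab_distance are defined in Appendix B.
inds = map_pixels_to_palette(lab_img, lab_pal, func=L_weighted_hyab_distance)
result = palette_srgb_u8[inds]          # back to sRGB for display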

Perhaps this problem is so ill-defined that no color space can help us when the image’s spatial structure is not taken into account. Another factor is that CAM16-UCS depends on “viewing conditions”: assumptions about the adaptation state of the viewer’s eyes, which depends on the ambient lighting and on how long the eyes have had to adapt to the stimulus.

In the colour Python library’s API, we can easily toggle between “average”, “dim”, and “dark” viewing conditions, which are apparently defined in the earlier CIECAM02 color appearance model:

import numpy as np
import colour

# Convert sRGB to the XYZ color space (img is the input image loaded earlier)
srgb_u8 = np.array(img)[...,:3]
xyz = colour.sRGB_to_XYZ(srgb_u8/255.0)

# CAM16-UCS in dim viewing conditions
dim_surround = colour.VIEWING_CONDITIONS_CAM16["Dim"]
ucs16_dim = colour.XYZ_to_CAM16UCS(xyz, surround=dim_surround)

# CAM16-UCS in average (well-lit) viewing conditions
avg_surround = colour.VIEWING_CONDITIONS_CAM16["Average"]
ucs16 = colour.XYZ_to_CAM16UCS(xyz, surround=avg_surround)
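
Whichever surround we pick, the palette has to go through the same conversion before distances are computed; otherwise we would be comparing coordinates from two different appearance models. A minimal sketch of the rest of the mapping, using map_pixels_to_palette from Appendix B:

# Convert the PICO-8 palette with the same surrounds as the image before mapping.
palette_xyz = colour.sRGB_to_XYZ(palette_srgb_u8 / 255.0)
palette_ucs_dim = colour.XYZ_to_CAM16UCS(palette_xyz, surround=dim_surround)
palette_ucs_avg = colour.XYZ_to_CAM16UCS(palette_xyz, surround=avg_surround)

# Plain Euclidean nearest-color mapping in CAM16-UCS for both conditions.
inds_dim = map_pixels_to_palette(ucs16_dim, palette_ucs_dim)
inds_avg = map_pixels_to_palette(ucs16, palette_ucs_avg)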

There is definitely a difference between the two, but neither looks like the Oklab result. Interestingly, the original blog post that introduced Oklab says it “should assume normal well lit viewing conditions.”

So, in conclusion, CAM16-UCS didn’t look better than Oklab or weighted CIELAB. All three beat luma-weighted sRGB, though. The CAM16-UCS results were also surprisingly far from Oklab’s, which I can’t explain. I suppose that for acceptable quality in this task you have to look at the actual image, not just its individual pixel colors.


Just for fun, I tried pushing the lightness weight so high that we’re effectively reproducing a greyscale image with the PICO-8 palette:

Color distance from lightness alone, ignoring chromaticities.

Unfortunately that doesn’t work either, because the reds and greens really stick out. I think this is the Helmholtz-Kohlrausch effect in action, which Oklab’s lightness estimate does not take into account.

I’m writing a book on computer graphics. Sign up here if you’re interested.

Appendix A: More result images

For reference, here are all the result images from my experiment in a single plot. Plain CIELAB looked so poor that I used the weighted HyAB variant in the earlier comparisons.

Appendix B: Source code

Here are the functions I used for pixel mapping. As mentioned earlier, I used the colour library for the color space conversions.

NumPy Python code
import numpy as np

# The PICO-8 palette
palette_srgb_u8 = np.array([
    (0x00, 0x00, 0x00),
    (0x1d, 0x2b, 0x53),
    (0x7e, 0x25, 0x53),
    (0x00, 0x87, 0x51),
    (0xab, 0x52, 0x36),
    (0x5f, 0x57, 0x4f),
    (0xc2, 0xc3, 0xc7),
    (0xff, 0xf1, 0xe8),
    (0xff, 0x00, 0x4d),
    (0xff, 0xa3, 0x00),
    (0xff, 0xec, 0x27),
    (0x00, 0xe4, 0x36),
    (0x29, 0xad, 0xff),
    (0x83, 0x76, 0x9c),
    (0xff, 0x77, 0xa8),
    (0xff, 0xcc, 0xaa),
    # (0x29, 0x18, 0x14), # the secret palette begins here :)
    # (0x11, 0x1d, 0x35),
    # (0x42, 0x21, 0x36),
    # (0x12, 0x53, 0x59),
    # (0x74, 0x2f, 0x29),
    # (0x49, 0x33, 0x3b),
    # (0xa2, 0x88, 0x79),
    # (0xf3, 0xef, 0x7d),
    # (0xbe, 0x12, 0x50),
    # (0xff, 0x6c, 0x24),
    # (0xa8, 0xe7, 0x2e),
    # (0x00, 0xb5, 0x43),
    # (0x06, 0x5a, 0xb5),
    # (0x75, 0x46, 0x65),
    # (0xff, 0x6e, 0x59),
    # (0xff, 0x9d, 0x81),
], dtype=np.uint8)


def euclidean_distance(a, b):
    delta = a - b
    return np.sqrt(np.sum(delta**2, axis=-1))

def weighted_rgb_distance(a, b):
    # Weighting and internal gamma chosen to match libimagequant.
    # See https://github.com/ImageOptim/libimagequant/blob/6aad8f20b28185823813b8bd6823171711480dca/src/pal.rs#L12C1-L19C38
    # Convert from sRGB to an internal 1.754 gamma, giving more weight to bright colors.
    # The exponent below equals 0.57/0.4545 = 0.57 / (1/2.2).
    # NOTE: Does not match libimagequant's behavior exactly, just my best attempt.

    power = 2.2/1.754
    channel_weights = np.array([[[0.5, 1.00, 0.45]]])
    aw = a * channel_weights
    bw = b * channel_weights
    delta = aw**power - bw**power
    return np.sqrt(np.sum(delta**2, axis=-1))

def hyab_distance(a, b):
    delta = a - b
    dL = np.sum(np.abs(delta[..., 0:1]), axis=-1)
    dab = np.sqrt(np.sum(delta[..., 1:3]**2, axis=-1))
    return dL + dab

def L_weighted_hyab_distance(a, b):
    delta = a - b
    dL = np.sum(np.abs(delta[..., 0:1]), axis=-1)
    dab = np.sqrt(np.sum(delta[..., 1:3]**2, axis=-1))
    return 3 * dL + dab

def L_15_weighted_hyab_distance(a, b):
    delta = a - b
    dL = np.sum(np.abs(delta[..., 0:1]), axis=-1)
    dab = np.sqrt(np.sum(delta[..., 1:3]**2, axis=-1))
    return 1.5 * dL + dab

def L_distance(a, b):
    # Lightness difference only (squared, which is fine for argmin).
    delta = a - b
    dL = np.sum(delta[..., 0:1]**2, axis=-1)
    return dL

def map_pixels_to_palette(img, palette, func=euclidean_distance):
    """
    Find the index of a palette color closest to each input image pixel color.
    """

    a = img.reshape(-1, 1, 3)
    b = palette.reshape(1, -1, 3).astype(np.float32)
    dist = func(a, b)
    inds = np.argmin(dist, axis=-1)

    H, W, _ = img.shape

    return inds.reshape(H, W)


"""
# Usage example:
inds = map_pixels_to_palette(srgb_u8, palette_srgb_u8)
y = np.take(palette_srgb_u8, inds.reshape(-1), axis=0).reshape(H, W, 3)
"""