I made an example image of the kind of effect I think gradient meshes should be made for, but which currently has to be built in a far more laborious way:
This is an SVG built from about 950 objects (double the number needed for the original effect, so the colours could be blended with blurring).
It was made by cloning a single path with 12 nodes, resulting in about 11,400 nodes in total, even though only a portion of each filled clone is visible in the final picture.
And since they are clones, you can change not only the gradient's stops but even its direction and shape, and every complex gradient on the ring updates live.
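A minimal sketch of that idea (the IDs and shapes here are hypothetical, not taken from the original file): one shared gradient definition referenced by many `<use>` clones, so editing the single gradient restyles every clone at once:

```xml
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 200 200">
  <defs>
    <!-- One shared gradient: edit its stops or direction and
         every clone below updates live. -->
    <linearGradient id="sharedGrad" x1="0" y1="0" x2="1" y2="1">
      <stop offset="0" stop-color="#ff0000"/>
      <stop offset="1" stop-color="#0000ff"/>
    </linearGradient>
    <!-- The original path (simplified here to a rectangle
         instead of the 12-node path). -->
    <rect id="petal" x="90" y="10" width="20" height="60"
          fill="url(#sharedGrad)"/>
  </defs>
  <!-- Clones arranged around a ring; each reuses the same
       shape and therefore the same gradient. -->
  <use xlink:href="#petal" transform="rotate(0 100 100)"/>
  <use xlink:href="#petal" transform="rotate(60 100 100)"/>
  <use xlink:href="#petal" transform="rotate(120 100 100)"/>
  <use xlink:href="#petal" transform="rotate(180 100 100)"/>
  <use xlink:href="#petal" transform="rotate(240 100 100)"/>
  <use xlink:href="#petal" transform="rotate(300 100 100)"/>
</svg>
```

The real file would carry hundreds of such clones; the point is only that the colour information lives in one place.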
With gradient meshes, as far as I know, you cannot define how the interpolation steps between the defined colours follow each other, i.e. the exact shape of those transitions.
And you definitely cannot change all the colours of an existing gradient mesh precisely, all at once.
It would be like swapping the texture on a previously unwrapped 3D model.
In 3D (Blender, for example) you unwrap a model, create a layout from the unwrapped faces, and can then map an image exactly onto that UV layout.
Then you can change the image used; you can even play videos on a 3D shape's surface.
How can it be that in 3D this idea works like a charm, while 2D lacks the possibility?