Rolling your own custom Spine runtime: questions

Hey all,

I am currently looking into integrating the Spine animation runtime with our custom WebGL renderer. I wrote a small demo covering the essentials (skeleton and texture handling), but I have two questions that I can't find much about in the codebase or docs:

  1. Looking at the .atlas files generated by Spine, I can't work out what the offsets field does. I understand bounds, which would be the image subregion within the sprite atlas, but I don't see how offsets relates to bounds and affects rendering.

Right now I am taking only bounds into account when displaying my sprite sheets, and my textures are clearly wrong. Is it something like this:

Where the black borders are bounds and the red borders are offsets?

  2. How do I use the bone's a, b, c, d, worldX and worldY properties to construct a proper 3x3 world transform matrix?

Looking at your three.js integration, I saw that you calculate the quads' world positions via the attachment.computeWorldVertices method and do batch rendering, writing the world-space vertex positions directly into the batch as floats.

However, my engine has a proper scene graph that I'd like to utilise rather than doing lower-level vertex array manipulation. Here is my current update method:

public update(dt: number): void {
   // Advance the animation, pose the skeleton, then compute bone world transforms.
   this.state.update(dt);
   this.state.apply(this.skeleton);
   this.skeleton.updateWorldTransform();

   const skeleton = this.skeleton;
   const worldMatrix = Matrix3.create();

   for (let i = 0; i < skeleton.drawOrder.length; i++) {
      const node = this.children[i];
      if (!node) {
         continue;
      }
      const slot = skeleton.drawOrder[i];

      // Build a 3x3 affine matrix from the bone's world transform:
      //   | a  b  worldX |
      //   | c  d  worldY |
      //   | 0  0  1      |
      Matrix3.set(
         worldMatrix,
         slot.bone.a,
         slot.bone.b,
         slot.bone.c,
         slot.bone.d,
         slot.bone.worldX,
         slot.bone.worldY
      );
      Matrix3.copy(worldMatrix, node.transform);
   }
}

At init time I created a Node for each Bone and, as you can see, in update I construct a 3x3 matrix out of the a, b, c, d, worldX and worldY elements. This almost works, but my model appears inverted:

Is there a better way to go about this? Looking at the Bone class, I don't see an exposed 3x3 matrix I can use, and I'm not sure how to construct one out of these fields.

Thanks for the great software, I am really excited to get this running! 🙂

Atlas offsets are for whitespace stripping.
Atlas export format
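
In other words, bounds is where the trimmed pixels live in the atlas page, and offsets tells you where to put them back relative to the original, untrimmed image. A minimal sketch of how the two fit together (the AtlasRegion type and localQuad helper are made up for illustration; the field order follows the .atlas file):

// Hypothetical types for illustration; names mirror the .atlas fields.
interface AtlasRegion {
   // bounds: the packed subimage inside the atlas page, in pixels.
   x: number; y: number; width: number; height: number;
   // offsets: whitespace stripped before packing, plus the untrimmed size.
   offsetX: number;        // pixels stripped from the left edge
   offsetY: number;        // pixels stripped from the bottom edge (Y-up)
   originalWidth: number;  // untrimmed image size
   originalHeight: number;
}

// Corners of the packed pixels relative to the untrimmed image's
// bottom-left corner: inset the quad by the stripped whitespace.
function localQuad(r: AtlasRegion) {
   return {
      left: r.offsetX,
      bottom: r.offsetY,
      right: r.offsetX + r.width,
      top: r.offsetY + r.height,
   };
}

If you draw only the bounds-sized quad without this inset, trimmed images will appear shifted, which matches the wrong textures you're seeing.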

If your scene graph uses a Y-down coordinate system, you'll need to flip everything over. It seems spine-ts doesn't have a setting for that, but some of our other runtimes do, e.g.:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.1-beta/spine-c/spine-c/src/spine/Bone.c#L75
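
To build the 3x3 matrix from those fields: a, b, c, d, worldX and worldY are the six entries of the bone's world affine transform, and the bottom row is fixed. Flipping for Y-down then amounts to negating the second row. A minimal sketch with plain objects, since I don't know your Matrix3 layout:

// Spine's Bone stores its world transform as the affine matrix
//   | a  b  worldX |
//   | c  d  worldY |
//   | 0  0  1      |
// Pre-multiplying by the Y flip diag(1, -1, 1) negates the second row,
// converting Spine's Y-up output for a Y-down scene graph.
type BoneTransform = { a: number; b: number; c: number; d: number; worldX: number; worldY: number };

function flipBoneY(bone: BoneTransform): BoneTransform {
   return {
      a: bone.a,  b: bone.b,  worldX: bone.worldX,
      c: -bone.c, d: -bone.d, worldY: -bone.worldY,
   };
}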

It's likely more efficient not to put every attachment into a scene graph. If your matrix operations rely on WebGL then you'd likely be flushing the renderer, which will prevent batching. Even if that isn't a problem, you'll probably need to deal with adding and removing scene graph nodes as attachments are shown and hidden during animations. All that is likely not worth the benefits of using the scene graph. Transforming a scene graph node to match a bone can still be done when needed.

Thank you for the reply! Seems that I missed the atlas packing specification on your website.

Our renderer allows for the scene graph to be batched behind the scenes. Very good point about showing / hiding attachments though.

Transforming a scene graph node to match a bone can still be done when needed

Do you have any code samples I can refer to? Somewhere a scene graph is built at init time and then transformed via matrices composed from the a, b, c, d, worldX and worldY components?

If you are asking about how to make a scene graph node follow a bone, spine-unity has BoneFollower:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.1-beta/spine-unity/Assets/Spine/Runtime/spine-unity/Components/Following/BoneFollower.cs

That's a bit complex because it's mixed with Unity stuff, but the idea is simple: convert the Spine bone transform into the skeleton scene graph node's coordinates, then convert to the target scene graph node's coordinates. See the sketch below.
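
As a sketch in TypeScript (Node, Mat3, Matrix3.multiply, Matrix3.invert and matrixFromBone are placeholders standing in for whatever your engine provides):

// Placeholder types standing in for your engine's API.
interface Mat3 { /* 3x3 matrix storage */ }
interface Node { worldTransform: Mat3; localTransform: Mat3; parent: Node; }
declare const Matrix3: {
   multiply(a: Mat3, b: Mat3): Mat3;
   invert(m: Mat3): Mat3;
};
// Builds the 3x3 matrix from a, b, c, d, worldX, worldY as described above.
declare function matrixFromBone(bone: { a: number; b: number; c: number; d: number; worldX: number; worldY: number }): Mat3;

// Make `target` follow `bone`: bone space -> world space -> target's parent space.
function followBone(
   bone: { a: number; b: number; c: number; d: number; worldX: number; worldY: number },
   skeletonNode: Node,
   target: Node
): void {
   const boneMatrix = matrixFromBone(bone);
   // 1. Into world space via the skeleton node's world transform.
   const world = Matrix3.multiply(skeletonNode.worldTransform, boneMatrix);
   // 2. Into the target's parent space, so it can be assigned as a local transform.
   target.localTransform = Matrix3.multiply(Matrix3.invert(target.parent.worldTransform), world);
}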

If you meant an example that uses scene graph nodes to render, spine-canvas in spine-ts does that:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.0/spine-ts/spine-canvas/src/SkeletonRenderer.ts
I still contend it will be more efficient to treat the skeleton as a single scene graph node.

Thank you for your answer. Following the spine-canvas example, I did manage to get the dragon properly animated using our engine.

However, after reading through more of your examples, I'm starting to realise that some attachments can't be expressed as quads in a scene graph, such as MeshAttachment, which is built from multiple polygons with different GL.TRIANGLE* modes.

Am I correct? In that case, we would need to resort to the lower-level batch renderer and dynamically transform our vertices into world space with attachment.computeWorldVertices, I guess?

Thanks for all the info BTW!

Yes, to support mesh rendering you'll need to be able to render (and batch) triangles. A mesh is a single polygon (which may be concave) without holes. The triangulation is provided for you.

Note that even though a region attachment is a rectangle, game toolkits that can only render rectangles will not be able to render a region attachment that is scaled into a rhombus. If you can render quads then you have region attachments covered, but still need triangles for meshes.

Yes, the way to render is to have the attachment compute the world vertices. The game toolkit needs to support an API low level enough to render (and batch) triangles. Some game toolkits don't, so in those cases we provide a batching mechanism. That's not ideal because usually it means the batching can only be done within a single skeleton. For example:
https://github.com/EsotericSoftware/spine-runtimes/blob/4.1-beta/spine-cocos2dx/spine-cocos2dx/src/spine/v4/SkeletonBatch.cpp
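
The loop itself is short. A rough sketch against the spine-ts core API (the TriangleBatcher interface is hypothetical, texture/UV/color plumbing is omitted, and signatures differ slightly between versions):

import { MeshAttachment, RegionAttachment, Skeleton } from "@esotericsoftware/spine-core";

// Hypothetical stand-in for your engine's triangle batching API.
// UVs come from attachment.uvs; color from the slot and attachment colors.
interface TriangleBatcher {
   draw(positions: Float32Array, indices: number[]): void;
}

const QUAD_TRIANGLES = [0, 1, 2, 2, 3, 0]; // the fixed triangulation for region quads

function renderSkeleton(skeleton: Skeleton, batcher: TriangleBatcher): void {
   for (const slot of skeleton.drawOrder) {
      const attachment = slot.getAttachment();
      if (attachment instanceof RegionAttachment) {
         const world = new Float32Array(8); // 4 corners * (x, y)
         // Note: 4.1+ takes the slot here; 4.0 took slot.bone.
         attachment.computeWorldVertices(slot, world, 0, 2);
         batcher.draw(world, QUAD_TRIANGLES);
      } else if (attachment instanceof MeshAttachment) {
         const world = new Float32Array(attachment.worldVerticesLength);
         attachment.computeWorldVertices(slot, 0, attachment.worldVerticesLength, world, 0, 2);
         batcher.draw(world, attachment.triangles);
      }
   }
}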

No problem! I hope you find integrating Spine to be relatively easy!

(our cocos2d-x renderer actually does cross-skeleton batching, cause it's magic 😃)

The spine-cpp docs have a section on implementing your own renderer, which shows you what kind of low-level interface to the GPU you need at a minimum. It's not that complex really: spine-cpp Runtime Documentation: Implementing Rendering

The spine-ts WebGL backend is not trivial, but also not rocket science. The key classes are Mesh, PolygonBatcher, and SkeletonRenderer: https://github.com/EsotericSoftware/spine-runtimes/tree/4.0/spine-ts/spine-webgl/src

Some toolkits like Phaser actually use those classes directly to integrate Spine into their WebGL pipeline. You could consider doing the same to save yourself time and maintenance burden.
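
The essential trick inside PolygonBatcher is small: append each triangle list's vertices to one growing buffer and copy its indices rebased by the number of vertices batched so far, flushing only when the texture changes or a buffer fills. A simplified sketch of the idea, not the actual spine-ts code:

// Sketch of index rebasing, the core of triangle batching.
class BatchBuffers {
   vertices: number[] = [];
   indices: number[] = [];

   add(positions: Float32Array, indices: number[]): void {
      // All previously batched vertices shift this list's indices up.
      const firstVertex = this.vertices.length / 2; // 2 floats (x, y) per vertex
      for (const i of indices) this.indices.push(firstVertex + i);
      for (const p of positions) this.vertices.push(p);
   }
}

Because the indices are rebased per submission, differing index layouts across attachments don't require separate draw calls.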

Thanks for all the info. Using spine-core as inspiration, I managed to onboard skeletal animations using our batch renderer. On top of that, we have support for multiple textures per fragment shader, allowing us to pack textures into 2048x2048 megatextures and batch across multiple models / skeletons via RegionAttachment, effectively drawing multiple models with one draw call. MeshAttachment however breaks this and forces flushes, due to each mesh having its own index layout. Not sure if I can get around that...

Thanks!