Trying to understand the local/world rotation values
Hey,
I'm trying to understand the relationship between the rotation of a bone and the corresponding attachment (image). You may remember me from such episodes as "Generic runtime for Java/Android" (viewtopic.php?f=7&t=1433).
So I am just using the SRT values of a bone in my own rendering engine, but the rotation values are causing problems for me. In the SpineBoy demo, for example, the torso attachment has no rotation itself, but the bone does:
http://cl.ly/image/3v3p2u2D2p45
"skins": {
"default": {
...
"torso": {
"torso": { "x": 44.57, "y": -7.08, "rotation": -94.95, "width": 68, "height": 92 }
}
And the bone for the torso is listed as:
{ "name": "torso", "parent": "hip", "length": 85.82, "x": -6.42, "y": 1.97, "rotation": 94.95 }
Now obviously these two rotation values cancel to 0, but I just want to make sure I understand the relationship here.
Currently I'm rendering the images along with the value of "worldRotation", and although their motion appears to be correct (i.e. the amount of delta rotation in each frame), some of the images are just, well, wrong:
http://cl.ly/image/1b3e0i0m1S1I
Can I safely assume that in order to correctly render an image I take the worldRotation and adjust it by whatever value is in the skin for that bone?
I would first make sure all your bones are rendering with the correct position, rotation, and scale. Only then would I tackle rendering region attachments in the correct place.
Bones have a hierarchy, and a parent bone's SRT affects child bones (unless inherit scale or rotation is disabled for the child). Region attachments have an "offset SRT" that is relative to the bone. The translation describes the center of the region in the bone's local coordinate system. The origin for the region's rotation and scale is the center of the region. The offset SRT never changes; the region always has the same SRT relative to the bone. The bone is how the image is manipulated. Make sense? Feel free to ask for clarification or more specific questions.
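To sketch that composition in code (hypothetical field names, not the actual runtime API; parents must be updated before their children):

class Bone {
    Bone parent;
    float x, y, rotation, scaleX, scaleY; // local SRT, as in the JSON
    float worldX, worldY, worldRotation, worldScaleX, worldScaleY;

    // Walk top-down: a bone's world SRT is its local SRT composed with
    // its parent's world SRT (assuming inherit rotation/scale are enabled).
    void updateWorldTransform() {
        if (parent == null) {
            worldX = x;
            worldY = y;
            worldRotation = rotation;
            worldScaleX = scaleX;
            worldScaleY = scaleY;
            return;
        }
        double r = Math.toRadians(parent.worldRotation);
        float cos = (float) Math.cos(r), sin = (float) Math.sin(r);
        // The local translation is scaled, then rotated, by the parent.
        float px = x * parent.worldScaleX, py = y * parent.worldScaleY;
        worldX = parent.worldX + px * cos - py * sin;
        worldY = parent.worldY + px * sin + py * cos;
        worldRotation = parent.worldRotation + rotation;
        worldScaleX = parent.worldScaleX * scaleX;
        worldScaleY = parent.worldScaleY * scaleY;
    }
}

The attachment's offset SRT then composes with the bone's world SRT in exactly the same way, one level further down.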
Yep.. makes sense, but your explanation and what I see in the data don't seem to add up in my brain :/
My bones appear to be rendering correctly. I have a simple render system I'm using to debug, which just draws primitives (dots and lines) for the bones based on their world coordinates. This looks correct when I run the animation, but when I try to add the actual images it goes a bit pear-shaped.
Let's take the "head" slot definition in the skin:
"head": {
"head": { "x": 53.94, "y": -5.75, "rotation": -86.9, "width": 121, "height": 132 }
}
The docs state that "X" for example is:
The X position of the image relative to the slot’s bone. Assume 0 if omitted.
But this is what's confusing to me. The actual position of the head does not seem to be 53.94 pixels to the right of the "head" bone (to which it is attached). I assumed this meant that the center of the image is 53.94 pixels to the right of the bone's world X position, but this is obviously not the case.
So.. what I have is:
Bone: worldX, worldY as computed by the runtime
Skin: x, y as taken from the JSON
I must be missing some other data. You mentioned the hierarchy. Should I be calculating the offset by using the sum of all offsets from all parents of a given bone?
Oh btw.. I'm not using attachments at all. I replaced the attachment loader in the runtime because I didn't want the overhead of the vertex calcs that happen during the animation. Maybe this was a mistake?
A bone points along the positive X axis of its coordinate system. The Y axis is left (positive) or right (negative) of the bone. You can see this by choosing Parent in Spine and then translating an image and watching the X/Y values.
OK.. I've read and re-read your comment a couple of times.. just trying to see what you mean.
So, would I be correct in saying that the X,Y offset in the skin is the offset relative to the bone as if the bone were lying at an angle of 0 degrees relative to the origin (i.e. "pointing right")? Hence if a bone has a rotation of 90 (pointing up), the X value in the skin offset would, in world coordinates, correspond to a positive Y value, and a positive Y value in the skin offset would correspond to a negative X value in world coordinates?
That kind of makes sense to me, as you wouldn't know what the angle of a bone is at any given time, so the offset needs to be "pinned" to a fixed orientation. Or I could have completely misunderstood :/
Nah, that doesn't describe it. The image's offset is relative to the bone. The translation is in the bone's coordinate system. 10,0 is 10 units on the X axis, 0 on Y. If the bone is rotated 45 degrees, the 10 units on X will be up and to the right in world coordinates. The image's offset SRT doesn't change relative to the bone, but the image's world size and position can change as the bone's SRT is changed.
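A quick worked check of that 45-degree case (plain trigonometry, nothing runtime-specific):

double r = Math.toRadians(45); // the bone's world rotation
double worldOffsetX = 10 * Math.cos(r) - 0 * Math.sin(r); // ~7.07
double worldOffsetY = 10 * Math.sin(r) + 0 * Math.cos(r); // ~7.07
// So the (10, 0) local offset lands up and to the right of the bone
// in world coordinates, as described.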
Maybe show some code on how you are trying to draw images and I can give better direction.
Hmm, OK. Your last reply reads to me as saying the same thing as my explanation. I'm trying to figure out the difference.
Anyway, here's the sample code. Bear in mind that this is just for debugging and I wouldn't be using the canvas system in the actual game:
For each bone...
canvas.save();
canvas.translate(bone.worldSRT.x + bone.attachment.skin.x + x, size.y - (bone.worldSRT.y + bone.attachment.skin.y + y));
canvas.rotate(-(bone.worldSRT.rotation + bone.attachment.skin.rotation));
BodyPart bp = (BodyPart) bone.attachment.userData;
canvas.drawBitmap(bp.bitmap, -bp.hW, -bp.hH, paint);
canvas.restore();
bone.worldSRT is obviously the world SRT values, taken directly from the Spine runtime.
bone.attachment.skin is the raw skin data taken from the JSON document.
x/y are just the absolute positions of the entire object in world space.
size.y is just the device height, as the Y axis is inverted in the Android canvas (0,0 is top left).
The bitmap being drawn is not taken from an atlas but (again, just for the purposes of debugging) is just the raw image taken from spineboy sample.
The above renders as:
http://cl.ly/image/010N2p2G1E2v
Based on your explanation I'm still not understanding what the missing ingredient is. I get that the offset is in the bone's coordinate system, but if the bone is at 0,0 with 0 rotation, then bone coordinates are the same as world coordinates, so the translation from the origin should just be the sum. At least that's what my brain tells me.
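In terms of the snippet above, the ingredient that seems to be missing is rotating the skin offset by the bone's world rotation before adding it. A sketch reusing the snippet's own variable names (and ignoring bone scale):

// Rotate the attachment's local offset into world space before translating.
double r = Math.toRadians(bone.worldSRT.rotation);
float offX = (float) (bone.attachment.skin.x * Math.cos(r) - bone.attachment.skin.y * Math.sin(r));
float offY = (float) (bone.attachment.skin.x * Math.sin(r) + bone.attachment.skin.y * Math.cos(r));
canvas.translate(bone.worldSRT.x + offX + x, size.y - (bone.worldSRT.y + offY + y));
canvas.rotate(-(bone.worldSRT.rotation + bone.attachment.skin.rotation));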
It's too hard to see if it is right just from looking at your code. Have you tried drawing debug lines for the bones so you are sure they are in the right place? Next I would create a simpler project with maybe 2 bones and one attached image so you can more easily see where the image should be rendered versus where it is actually rendered. Start with no rotation on the bones and then move to more complex configurations.
I've looked at a lot of scrambled skeleton images, just like your screenshot, while developing Spine and all the runtimes. If you don't have things exactly right, then they are very wrong! It's a bit tricky to debug, so try to simplify the problem as much as possible.
Note that unless you draw by specifying the image vertices, you won't be able to use non-uniform scaling for images whose axes are not aligned with the bone they are attached to. See here for more:
viewtopic.php?f=3&t=706&p=3228#p3228
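Computing the region's four world-space corners per vertex looks roughly like this — a sketch in the spirit of what a runtime does for region attachments, reusing the hypothetical Bone fields from the earlier sketch (the real spine-c function differs in the details):

static float[] computeWorldCorners(Bone bone, float offsetX, float offsetY,
        float offsetRotation, float width, float height) {
    float hw = width / 2, hh = height / 2;
    float[] local = { -hw, -hh, hw, -hh, hw, hh, -hw, hh };
    double or = Math.toRadians(offsetRotation);
    float ocos = (float) Math.cos(or), osin = (float) Math.sin(or);
    double wr = Math.toRadians(bone.worldRotation);
    float wcos = (float) Math.cos(wr), wsin = (float) Math.sin(wr);
    float[] world = new float[8];
    for (int i = 0; i < 8; i += 2) {
        // Offset rotation and translation first (relative to the bone)...
        float bx = local[i] * ocos - local[i + 1] * osin + offsetX;
        float by = local[i] * osin + local[i + 1] * ocos + offsetY;
        // ...then the bone's world scale (possibly non-uniform), rotation,
        // and translation. Scaling each corner individually is exactly what
        // a single rotated bitmap draw can't express.
        bx *= bone.worldScaleX;
        by *= bone.worldScaleY;
        world[i] = bx * wcos - by * wsin + bone.worldX;
        world[i + 1] = bx * wsin + by * wcos + bone.worldY;
    }
    return world;
}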
From the screenshot, it looks like the positions aren't being offset from the positions of the parent bone.
Yeah.. I think I've just about reached the end of the time I wanted to spend on this anyway. I am obviously already rendering using verts, but because my game objects are positioned from Box2D coordinates, I'm doing the transforms based initially on raw x/y coords, and the actual matrix transformations are done in the shaders, so it's a little cumbersome for me to use the vert arrays that are written to by spine-c. That's why I wanted to get just raw SRT values from the runtime. I could transfer the full vert array back up from native to Java, but that's a lot of state change going on and I think it would just destroy the frame rate.
I think the Spine app is awesome, and I know it's still relatively early days, but I still maintain it would be valuable to create a generic runtime that didn't make any assumptions about the rendering environment and just provided raw SRT values. Point taken about the deformation, though; based on the video I saw of your plans for the next version, this will become even more of an issue.
A pure Java generic runtime would be ideal, but as you've said if nobody is asking for it then there's little motivation for you to implement it.
Thanks for all your help. Good luck with your Kickstarter campaign.
Actually.. last attempt. I just realized I can transfer bytes back and forth between Java and C using native buffers from the Java side, so I could transfer the vert buffer from the spine-c runtime into Java. It's going to be going straight back down again, but in a different buffer object. I'll have to see whether that's a performance issue or not.
Anyway.. if I were to do this, which vertex array is the right one? Looking at the spine-c runtime, I'm guessing the important stuff is here:
https://github.com/EsotericSoftware/spi ... ent.c#L100
I guess I would send back the contents of *vertices in this case?
I also notice that spRegionAttachment_computeWorldVertices does not appear to actually be called anywhere in the runtime, so I'm assuming it's intended for the implementor (me) to call.
Transferring data using a direct buffer will be fast. JNI has overhead, so do it in one JNI call to get all the data you need.
Look at an implementation that uses spine-c:
https://github.com/EsotericSoftware/spi ... ml.cpp#L88
You'd have a loop like that and then pack the vertex positions, UVs, and vertex colors for each region attachment.
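A rough sketch of the Java side of that, assuming a hypothetical native method (nativeFillVertices) that walks the skeleton in spine-c, calls spRegionAttachment_computeWorldVertices per region attachment, and packs x, y, u, v, r, g, b, a floats per vertex into a direct buffer in one JNI call:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class SkeletonBridge {
    // Hypothetical native method: fills the direct buffer with packed
    // vertex data and returns the number of vertices written.
    private static native int nativeFillVertices(long skeletonPtr, FloatBuffer out);

    private static final int FLOATS_PER_VERTEX = 8; // x, y, u, v, r, g, b, a

    // Allocate once and reuse; direct buffers are visible to native code
    // without copying, so one JNI call per frame moves all the data.
    private final FloatBuffer vertices = ByteBuffer
            .allocateDirect(2048 * FLOATS_PER_VERTEX * 4) // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();

    public FloatBuffer update(long skeletonPtr) {
        vertices.clear();
        int count = nativeFillVertices(skeletonPtr, vertices);
        vertices.limit(count * FLOATS_PER_VERTEX);
        return vertices; // hand straight to the GL vertex attribute setup
    }
}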