• Unity
  • Adding animations at runtime (external files)

Hi,

I'm working on a project where we use a single rig but have hundreds of (custom) animations.

Basically, our use-case is dependent on data at runtime, and we can't get what we want from normal mixing/crossfade.
We also have many skins, and a single (relatively unchanging) skeleton.

I am curious if:

A) Is there a way to import animations at runtime nicely?
(i.e., something like how Blender/Maya/etc. can save out animations without the rig data)

B) Is there a workflow for adding them at runtime?
...and would it be close to this:

  1. master project (potentially has all animations), skeleton that doesn't change
  2. export each individual animation
  3. use a custom script to mix/match the animations by combining them into a JSON at runtime
  4. import new mixed skeleton data
  5. use skeleton as normal in gameplay
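Step 3 above can be sketched in a few lines. This is only a sketch, assuming the Spine JSON export format where animations live under a top-level `"animations"` object keyed by animation name; `merge_animations` is a hypothetical helper, not part of any runtime:

```python
import json

def merge_animations(skeleton_json, animation_fragments):
    """Combine a base skeleton JSON (exported without animations) with
    individually exported animation fragments at runtime.

    Assumes each fragment is a JSON document with a top-level
    "animations" object keyed by animation name."""
    skeleton = json.loads(skeleton_json)
    skeleton.setdefault("animations", {})
    for fragment in animation_fragments:
        data = json.loads(fragment)
        # A fragment may itself be a full export; take only its animations.
        skeleton["animations"].update(data.get("animations", {}))
    return json.dumps(skeleton)
```

The merged JSON could then be fed to the normal skeleton loading path as if it had been exported that way.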

We're still experimenting and looking for a better flow.
Just wondering if anyone has a better solution offhand, or a link to something my google-fu hasn't found yet...

Thanks,

It's hard to suggest solutions without understanding why you need to separate your skeleton and animations. If you could expand on that more, it might help.

Most timelines don't store references to objects in the skeleton. The only timeline that does is EventTimeline, the rest store only an index in the skeleton's bones array, slots array, path constraint array, etc. This means if you loaded a skeleton with no animations and another skeleton that is identical (really identical, that is important) except it has animations, you should be able to use the animations with the first skeleton. EventTimeline stores Event objects for each key, which each have a reference to an EventData in the SkeletonData. This means events triggered by an EventTimeline will be passed an Event with an EventData reference to the second skeleton.

Hi Nate,

Thanks for getting back to me w/ that suggestion.
We were hoping to separate animations from skeleton because of concern over load times.
I'll admit we're kind of abusing the system with hundreds of animations. 🙂

Your suggestion might work, though: loading a few (~20) skeletons with animations, then applying those animations to the first (identical) skeleton, rather than loading one skeleton with hundreds of extra animations.
We were hoping the skeleton and animations were completely separate, to avoid loading a second skeleton JSON (though ours isn't too large).
(We'll also try to switch back to binary once we have a good workflow with the JSON.)

I'm not sure how, or whether, our event timelines are keyed at all (if it's what I think it is, it should be clean).
So we'll have to double-check that.

I'll do some more research, but if you/others think of a better workflow please post 🙂

Thanks again!

FWIW, I would not bother with this until it's proven that loading binary data is too slow. JSON is definitely the wrong choice when performance is a concern.

Nate :

It's hard to suggest solutions without understanding why you need to separate your skeleton and animations. If you could expand on that more, it might help.

I'm interested in a similar case. In my case I'd like to download additional animations over the air which are not included in the app.

Nate :

Most timelines don't store references to objects in the skeleton. The only timeline that does is EventTimeline, the rest store only an index in the skeleton's bones array, slots array, path constraint array, etc. This means if you loaded a skeleton with no animations and another skeleton that is identical (really identical, that is important) except it has animations, you should be able to use the animations with the first skeleton. EventTimeline stores Event objects for each key, which each have a reference to an EventData in the SkeletonData. This means events triggered by an EventTimeline will be passed an Event with an EventData reference to the second skeleton.

Is there a standard way to save/load animations without duplicating the skeleton data?

Thanks!

There is not, sorry. You have two hurdles:

1) Separating the animation data from the rest of the data. JSON processing is easy, so getting only the animation data is straightforward. For binary you'd likely write a tool based on SkeletonBinary, read and discard bytes until the animation section is reached, then read and copy out the bytes for the animations.

2) Loading only the animations, using an existing SkeletonData. You could modify SkeletonJson or SkeletonBinary for this. There is already a readAnimation function, it is just not exposed.
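For the JSON case, hurdle 1 really is straightforward. A minimal sketch, assuming the export keeps animations under a top-level `"animations"` key (the binary case would need the SkeletonBinary-based tool described above); `split_skeleton` is a hypothetical helper:

```python
import json

def split_skeleton(full_json):
    """Split a full Spine JSON export into a skeleton without animations
    plus one standalone fragment per animation, so each animation can be
    shipped separately."""
    data = json.loads(full_json)
    animations = data.pop("animations", {})
    fragments = {
        name: json.dumps({"animations": {name: anim}})
        for name, anim in animations.items()
    }
    return json.dumps(data), fragments
```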

You'll have to take special care not to modify the skeleton, else loading the animations will fail spectacularly. Sending the whole skeleton would have to be significantly larger than sending only the additional animations, else this would not be worth the trouble. That is the case in most scenarios, since bandwidth is plentiful and skeleton data is small. Be sure to send your skeleton data as binary with compression (deflate is generally good enough).

Nate :

There is not, sorry. You have two hurdles:

1) Separating the animation data from the rest of the data. JSON processing is easy, so getting only the animation data is straightforward. For binary you'd likely write a tool based on http://esotericsoftware.com/spine-api-reference#SkeletonBinary, read and discard bytes until the animation section is reached, then read and copy out the bytes for the animations.

2) Loading only the animations, using an existing http://esotericsoftware.com/spine-api-reference#SkeletonData. You could modify http://esotericsoftware.com/spine-api-reference#SkeletonJson or http://esotericsoftware.com/spine-api-reference#SkeletonBinary for this. There is already a readAnimation function, it is just not exposed.

I see. That would work. Thanks!

Nate :

You'll have to take special care not to modify the skeleton, else loading the animations will fail spectacularly.

Understood.

Nate :

Sending the whole skeleton would have to be significantly larger than sending only the additional animations, else this would not be worth the trouble. That is the case in most scenarios, since bandwidth is plentiful and skeleton data is small. Be sure to send your skeleton data as binary with compression (deflate is generally good enough).

In the scenario I am evaluating, it seems that the skeleton is 100 times larger than most animations. The data needs to be downloaded on mobile devices which puts bandwidth at a premium.

To be sure, are you looking at Spine binary data that has been deflate compressed?

Nate :

To be sure, are you looking at Spine binary data that has been deflate compressed?

I'm not sure what you mean? Compressing the skel.bytes file reduces it by 30%.

The way I measured the animation size was to export once without animations (no_anims.skel.bytes) and once with animations (with_anims.skel.bytes). Then I compared the exported sizes and concluded a 100:1 ratio between the skeleton and animations.

I just wanted to make sure you had compared compressed .skel files. Comparing uncompressed .skel files may not show the same ratio. Also, 100x a small amount could still be a small amount. :nerd: Consider downloading an image could be a few hundred kilobytes and it's unlikely people would be looking for ways to reduce it. A megabyte or two is unlikely to be an issue, even on mobile. All that said, it could be you do have a huge amount of data and really would benefit from splitting the skeleton and animation. I just have to be skeptical by default because in most cases it's not needed. 🙂
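Measuring the ratio on compressed data takes only a few lines of stdlib Python; `zlib.compress` is deflate, matching the compression suggested earlier. The file names in the comment are the ones from the earlier measurement, used here only as an illustration:

```python
import zlib

def compressed_size(path):
    """Deflate-compress a file and return the compressed byte count, so the
    skeleton-to-animation size ratio is measured on the bytes that would
    actually go over the wire."""
    with open(path, "rb") as f:
        return len(zlib.compress(f.read(), level=9))

# e.g. compare compressed_size("with_anims.skel.bytes")
#           to compressed_size("no_anims.skel.bytes")
```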

Your guidance has been much appreciated!

In my specific scenario, I'm looking at supporting thousands of animations, not all of which are needed at the same time or even by the same user. The binary skeleton file is over 15 MB and the animations are under 100 KB, so it seemed that splitting them would be beneficial. I'd also like to understand why the skeleton is so large in the first place, but for the moment I'm assuming it needs to be that way.

By digging into the suggested SkeletonBinary loading code, I should be able to build a custom workflow that will load, split, and save out the animations, and also ensure that the skeletons always match.

That is quite a big skeleton! It should be pretty apparent where the size goes just by scrolling through the JSON data. My guess would be that you have many weighted mesh attachments and each has many vertices. You can try using prune. Filter the tree to only show meshes, select all meshes, open the Weights view (alt+W), and click Prune. You'll then need to check that all your meshes still deform as you like. It will take some trial and error to come up with the largest prune percentage you can use without losing deform quality.

Just posting an update -

Like WildMonkey, our team is also doing a mobile/OTA-type scenario, so we're having to massage the data a bit first.

Thus far, we're able to mix and match the JSON before runtime. The next step is doing it at runtime, and then with the binary format.

One of our devs was experimenting with a preprocessor-type approach: the (master) binary would be on disk, and we'd read it in via a stream and push just the parts we need into Spine.
We don't have a working prototype of the process yet; we ran into a couple of minor issues getting it just right, but it still looks doable.

We'll try to keep you posted once we get something working.
Cheers,