mth@Posted: Sun Jul 06, 2008 10:05 am :
I have a question - is it possible to import a head mesh with its phoneme set into Max, using der_ton's MD5 importer, so that I can animate the speech manually?



rich_is_bored@Posted: Sun Jul 06, 2008 11:46 am :
The lipsync functionality in Quake 4 is third-party technology licensed from Annosoft. The phonemes are applied in real time during gameplay, so there are no skeletal animation files for you to import.

The only data you have access to is the phonemes themselves, and I can assure you that no one outside of Annosoft and their clients has the means or support to develop tools that work with that data. Ironically enough, Quake 4 has a built-in lipsync editor, but Raven isn't allowed to release the files required for us to be able to use it.

That said, is it possible to make Quake 4 bake an animation out to an MD5anim? Perhaps. But I don't see any console commands to do that, so it's really a question of to what extent the lipsync functionality is exposed in the SDK. It might be possible to dump out the positions of the various facial bones during speech with a bit of modification, although I don't recommend going that route as it's a questionable activity.

My advice would be to either edit the lipsync declarations using the information we've uncovered here...

http://www.modwiki.net/wiki/Lipsync_(decl)

... or to manually animate the speech from scratch.
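
For what it's worth, those declarations conceptually boil down to a run of phoneme codes with timing, tied to a sound. Purely as an illustration of the idea - this is NOT the actual Quake 4 syntax, the real format is documented on the modwiki page above - it amounts to something like:

// hypothetical shape only, not the real .lipsync format
lipSync marine_line1 {
    sound    marine_line1_voice   // placeholder sound shader name
    phoneme  AA  120              // placeholder phoneme code plus duration
    phoneme  T   60
    phoneme  IY  150
}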



mth@Posted: Sun Jul 06, 2008 1:57 pm :
Thank you rich, I didn't realize they are done in real time. I thought the phonemes were declared as keyframes in the md5mesh itself and the *.lipsync files only controlled their blending and order. What I wanted to do was animate all the talking myself in Max, but using the already created phonemes, which would save A LOT of time, and then import the MD5 into D3 to make a Q4 character cameo. Looks like there's no way of doing this.

But I've got another question. Since the lipsync software from Annosoft is an ultra-cheap tool ($11 - that is a really user-friendly price!), would it be able to lipsync a model whose body mesh and head mesh are one piece? Or would all the phoneme data be lost in the process of connecting the two into a single MD5 and exporting it with der_ton's exporter?



Brain Trepaning@Posted: Sun Jul 06, 2008 5:01 pm :
FWIW, for Doom 3, I used a free real-time lip-sync app for 3ds Max 5 called AutoVox (13 MB). I then exported the heads as MD5s and attached the head and body with the DEF file. The program only works with Max 4 and 5, but I supply it here in the event someone is able to fix it up to work with more recent versions of 3ds Max. To use it, you can talk directly into a microphone. Of course, as it is, you require 3ds Max 4 or 5 (5 has extra features available), but I cannot supply those 3ds Max versions.
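
For reference, the DEF hookup is just a couple of key/value pairs on the character's entityDef. A rough sketch (all the names here are placeholders, and the base def you inherit from depends on your own setup):

entityDef char_my_talker {
    "inherit"     "character_default"       // whichever base character def you use
    "model"       "char_my_talker_body"     // the body's modelDef
    "def_head"    "head_my_talker"          // entityDef of the separately exported head
    "head_joint"  "Shoulders"               // body joint the head gets attached to
}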



mth@Posted: Sun Jul 06, 2008 6:28 pm :
Brian, as always, you kick some serious ass here; once again you provide an amazing tool, thank you! :D
How does it work? Does it need phonemes, or does it just use the bones attached to the lips?

Btw, how do things look with the Q4 MD5s? Do they have a bone structure on the lips?



Brain Trepaning@Posted: Sun Jul 06, 2008 11:25 pm :
It works with a handful of morph targets in 3ds Max, plus Windows' speech recognition. You actually have to train your computer first to recognize your voice, but it's a one-time process. Once the lip sync was created in 3ds Max, I would export the head using der_ton's GENERAL EXPORTER and just trigger the appropriate md5anim with the appropriate sound file. This completely bypasses any internal lip-syncing program in the game. The head is just a mesh and animations. Once the morphs are all set up, it takes just a few minutes to get a talking head into a Doom 3 engine game. Again, I am hopeful someone could reconfigure the whatsits of AutoVox to work with newer versions of 3ds Max. Undoubtedly, the creator of the program got snapped up by some big company and that was the end of AutoVox's development.
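
To give an idea of the hookup: one way to tie the exported md5anim to its audio is a frame command in the head's modelDef, so the voice sound shader fires the moment the talk anim starts. A rough sketch with placeholder names (the paths and shader names are just examples, not files that ship with the game):

model head_my_talker {
    mesh             models/md5/heads/my_talker.md5mesh
    anim idle        models/md5/heads/my_talker_idle.md5anim
    anim talk_line1  models/md5/heads/my_talker_line1.md5anim {
        frame 1 sound_voice my_talker_line1_voice   // sound shader containing the recorded line
    }
}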



LDAsh@Posted: Mon Jul 07, 2008 11:00 am :
I don't think those .lipsync files are really hard to figure out; they are chock-full of examples themselves, so we already have a collection of words figured out for us, and new ones shouldn't be too hard to work out given the examples already provided, at least not after some trial and error.
The only thing I haven't figured out yet is how to time them to the audio. Maybe that's done in a different file, and hopefully it's not part of the .lipsync files themselves. If it is, I have no idea how one would get them to sync up to the audio.
(edit: I can read - the durations are defined in the codes. Difficult, but not impossible.)



6th Venom@Posted: Mon Jul 07, 2008 5:44 pm :
There is also Voice-O-Matic, but it's $349 USD...
Just test the trial version to see what it looks like. There's also the Character Pack (retail price: $1990 USD, educational price: $149 USD)
with pretty much all the 3ds Max Di-O-Matic plugins inside.

I'm sure you can find cheaper plugins too... :D



lowdragon@Posted: Mon Jul 07, 2008 7:28 pm :
... any usable gmax scripts, for that matter? However, extracting a base (MD5) head to build additional (new) head_models shouldn't be that difficult. I don't know much about that topic really, but if they added the basic "phrasing" (like a, e, i, o, u, etc.) and animation blending, it's just a matter of setting up an animation timeline (for said head_models)?!



mth@Posted: Mon Jul 07, 2008 7:56 pm :
6th Venom wrote:
There is also Voice-O-Matic, but it's $349 USD...
Just test the trial version to see what it looks like. There's also the Character Pack (retail price: $1990 USD, educational price: $149 USD)
with pretty much all the 3ds Max Di-O-Matic plugins inside.

I'm sure you can find cheaper plugins too... :D



But AutoVox is completely free. And after seeing it today I must say it kicks ass.
Sometimes I'm so happy that my university is too cheap to buy newer versions of certain software... :D


