rich_is_bored@Posted: Sat Aug 30, 2008 11:09 am :
We've discussed various methods of creating bump maps from photo source in the past, but this is a new technique and the results look really good, so it might come in handy for texture creation ...

http://aig.cs.man.ac.uk/research/daedal ... nation.php

I knew there was a way to do this using color math; I just didn't have the attention span to figure it out on my own. There's a link to a PDF that explains the math behind it.



mavrik65@Posted: Sat Aug 30, 2008 3:12 pm :
University of Manchester, woo, go England! I'd like to see this on a model.



rich_is_bored@Posted: Sun Aug 31, 2008 4:29 am :
You already have. They're called bump maps. :lol:

This is just a new and cheap way to create them.



lowdragon@Posted: Sun Aug 31, 2008 10:47 am :
So it's a substitute (or supplement) for an issue that already came up in the first place while processing known data.

Extracting decent depth maps from photo source is bloody annoying, but dealing with an approximation that isn't just less accurate than the real deal but also contains wrong data, just to spare another artist, isn't the way to go imho.

Isn't it like getting a new car with an old engine, or putting up children's playgrounds in toxic wastelands?

However, if it's just part of the shading process and the system itself (whichever virtual displacement mapping applies here) keeps getting more precise (which will not happen), it's going to be a neat feature to have.

Of course, since I'm no shader artist, I could be totally wrong :)



rich_is_bored@Posted: Sun Aug 31, 2008 12:09 pm :
You sacrifice accuracy, but the studies they've conducted indicate that nobody seems to notice. In other words, it's good enough.

Besides, this isn't about what big-budget studios can afford to do with thousands of dollars and a laser-scanning system or a few more modelers on the payroll. This is about what you or I can do with a digital camera at no cost.

I must be on the wrong site. :)



Kristus@Posted: Sun Aug 31, 2008 12:16 pm :
Aside from some rough stuff you can get out of CrazyBump, I'm starting to wonder if this whole deal of getting them from photo source is getting more troublesome than it's worth. I was watching that vid and noticed that a lot of effort seemed to be going into the process.

Wouldn't it just be easier to sculpt it up in Mudbox or similar?



rich_is_bored@Posted: Mon Sep 01, 2008 5:20 am :
The only reason it seems so complex is that the technique is explained in great detail. In practice you'd be using a program someone else wrote to do the image processing. It wouldn't be much different from using a plugin for Photoshop (hint: NVidia's normal map plugin).
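
For a flavour of what such a plugin does under the hood (my own rough sketch, not NVidia's actual code), a grayscale-to-normal converter mostly just takes image gradients and packs them into RGB. Assuming numpy:

Code:
    import numpy as np

    def height_to_normal(height, strength=2.0):
        # height: 2D float array in [0, 1]; returns an HxWx3 normal map in [0, 1].
        dy, dx = np.gradient(height)  # finite-difference slopes of the height field
        # A heightfield z = h(x, y) has normal proportional to (-dh/dx, -dh/dy, 1).
        n = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
        n /= np.linalg.norm(n, axis=2, keepdims=True)  # unit length per pixel
        return n * 0.5 + 0.5  # remap [-1, 1] to [0, 1] for storage as RGB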

Of course, since no program is supplied, writing your own would be necessary, but the paper certainly helps in that regard.

At any rate, I found it interesting, and if I had the time and coding experience to write a test application in short order, I'd bust this out and have some practical examples for you to look at. But until then we'll just have to wait.



kat@Posted: Wed Sep 03, 2008 10:09 am :
rich_is_bored wrote:
You sacrifice accuracy, but the studies they've conducted indicate that nobody seems to notice. In other words, it's good enough.
No one noticed because they *are* photographs. People think the textures in HL2 are real for the same reason, so judging the validity of a depth effect by whether the *photos* appear real isn't an ideal way to judge the methodology, because people are looking at the photos and not the effect ("does it look real" isn't the same as "do you get a better sense of depth from 'a' or 'b'"). It also depends who you ask... most gamers and consumers can't tell and won't know the difference, so in that respect, yes, it is good enough ;)

Quote:
Besides, this isn't about what big-budget studios can afford to do with thousands of dollars and a laser-scanning system or a few more modelers on the payroll. This is about what you or I can do with a digital camera at no cost.
You'd be surprised at how many artists, and as a result studios, use CrazyBump because it saves them money.

Quote:
I must be on the wrong site. :)
Not really... if, as you say above, it means an application that's as simple to use as CrazyBump, then it'll get used by studios and artists because it'll save them oodles of money.

Overall it's a nice technique, but I'm confused about whether what they're doing is simply creating diffuse and normal maps from photographs, or whether it's interpretive code that's to be used for rendering their particular flavour of image assets in game.

Looking at it critically, the results suffer the same 'depth' issues associated with using straight photos for this sort of work.



rich_is_bored@Posted: Wed Sep 03, 2008 12:48 pm :
I guess the video is a bit misleading. This isn't a rendering technique; it's a means to derive a diffuse map and a normal map from a pair of photos. They illustrate the results using displacement and self-shadowing, but that's only for demonstration.
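
To make the "color math" a bit more concrete, here's the textbook cousin of the idea: photometric stereo. This is not the paper's algorithm (they get away with a pair of photos, while the classic version wants three or more known light directions), but it shows how intensities under known lights turn into a per-pixel linear system whose solution is albedo times normal. A rough sketch, assuming numpy:

Code:
    import numpy as np

    def photometric_stereo(images, light_dirs):
        # images: list of k HxW float arrays of the same surface under k lights.
        # light_dirs: (k, 3) array of unit light direction vectors, k >= 3.
        h, w = images[0].shape
        I = np.stack([im.ravel() for im in images])        # (k, H*W) intensities
        L = np.asarray(light_dirs, dtype=float)            # (k, 3)
        # Lambertian model: I = L @ g per pixel, where g = albedo * normal.
        g, *_ = np.linalg.lstsq(L, I, rcond=None)          # least squares, (3, H*W)
        g = g.T.reshape(h, w, 3)
        albedo = np.linalg.norm(g, axis=2)                 # length of g is the albedo
        normals = g / np.maximum(albedo[..., None], 1e-8)  # unit normals
        return albedo, normals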

That said, in a game that doesn't use displacement and self-shadowing, the resulting normal maps aren't going to be any more spectacular than they are in Doom 3. They'll still suffer from "flatness" when viewed at extreme angles.

I suppose you're talking about the sharp edges between disconnected surfaces that make a normal map pop. But that's inherent when using a model as source material, because ray tracing can be scaled indefinitely. The only practical limit is how long you're willing to wait for it to render.

But the benefits of ray tracing can be had when dealing with 2D source material as well. It's simply a matter of working at a higher resolution. The higher the resolution of your source material, the more "samples per pixel" you get when you scale it down.
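
To put that in code (my own sketch, assuming numpy and normals stored as floats in [-1, 1]): shrinking a high-res normal map by block-averaging gives you exactly those extra samples per pixel. The one gotcha is renormalizing afterwards, since averaged unit vectors come out shorter than unit length.

Code:
    import numpy as np

    def downsample_normals(normals, f):
        # normals: HxWx3 floats in [-1, 1]; f: integer shrink factor.
        # Assumes H and W are divisible by f.
        h, w, _ = normals.shape
        # Each output pixel averages an f x f block: f*f source samples per pixel.
        blocks = normals.reshape(h // f, f, w // f, f, 3).mean(axis=(1, 3))
        # Averaged unit vectors are shorter than unit length; renormalize.
        return blocks / np.linalg.norm(blocks, axis=2, keepdims=True)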



kat@Posted: Wed Sep 03, 2008 2:36 pm :
What I meant was that this 'new' technique suffers the same problem inherent in using photo sources directly... the interpretation of depth is dependent on tonal differences. So it doesn't matter how much clever math you throw at something: if the underlying technique relies on colour and tone for depth information, it will always result in the same fundamental problem showing up; one that, granted, only someone clued into the techniques of asset generation will spot.
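
To put that schematically (a toy illustration, not anyone's actual pipeline): if depth is read off tone, a perfectly flat surface with a dark stain acquires fake relief.

Code:
    import numpy as np

    flat_wall = np.full((4, 4), 0.8)  # a physically flat, evenly lit surface
    flat_wall[1:3, 1:3] = 0.2         # dark paint: same geometry, darker tone
    fake_height = flat_wall           # tone-as-depth reads the dark patch as a pit
    print(fake_height)                # "depth" varies even though the wall is flat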

As I see it right now, its only benefits are speed and cost; as you said, anyone with a digital camera can pull off half-decent content.

I wasn't talking about flatness from obliques btw :wink:


