Creating Next-gen Assets from Scanned Data

Here at the studio we are always looking into new ways to do things, and as I'm working on some graveyard assets for Arena at the moment, I thought it would be a cool idea if I could just go to a cemetery and scan some statues to use as a base for my models. Starting everything from scratch is cool, but when you need a huge number of assets with a huge amount of detail, you'll have to start looking for other ways to deliver quality content on time.

So here's the deal: you can start your organic models either by sculpting them in an app like ZBrush, or by taking a scanned model as the starting point for your high poly mesh. The first option is nice if you have enough time and good, fast sculpting skills, though in either case you'll need a solid organic modeling background anyway. (Unless you use decimation, CHEATER!)

I ended up deciding to use a bit of each, not least because it's not every day that you find a 10-foot goat-pawed demon statue in a graveyard, unless you live in Norway (hail satan).

As a proof of concept I decided to make an ancient asset using a scanned model from Abby Crawford, an archaeologist with amazing scans on Sketchfab.

In this article we will go through the steps to create an asset using scanned data, along with some useful tips to succeed at it. So without further ado, here we go:

 

Step 1: Scanning a reference model using Agisoft Photoscan

If you don't know it yet, Agisoft Photoscan is one of those programs that impresses you at first sight. It uses a series of photographs to create a point cloud through multi-view triangulation.

image11 (image by http://www.clemson.edu/)

 

So the very first thing you'll need is a good camera, preferably a DSLR like a Canon or a Nikon, but a good digital camera like a Sony Alpha/NEX should be fine. (You can always use your point-and-shoot or cellphone camera, but keep in mind that the detail level will be directly proportional to the sensor quality.)


Here are the basic steps to get a mesh from your photos (a rough scripted version of the same flow follows the list):

  1. Shoot photos from multiple angles around the subject
  2. Remove the background (depending on the type of photo)
  3. Align Photos
  4. Refine point cloud
  5. Build Geometry
  6. Build Texture
  7. Export
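
If you prefer to script the Photoscan side, the same flow maps roughly onto its built-in Python console. Treat the sketch below as a guide to the order of operations rather than copy-paste code: the method names and enum values are assumptions based on the 1.x Python API and may differ in your version, so check Agisoft's Python API reference first.

```python
# Rough sketch of the list above, scripted through Photoscan's Python console.
# Method names/enums are assumptions from the 1.x API; verify against the docs.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# 1. Add the photos shot around the subject
chunk.addPhotos(["photo_01.jpg", "photo_02.jpg", "photo_03.jpg"])

# 3. Align photos (builds the sparse point cloud)
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()

# 4./5. Densify the point cloud and build geometry from it
chunk.buildDenseCloud()
chunk.buildModel(surface=PhotoScan.Arbitrary)

# 6. Build a UV layout and bake the texture from the source photos
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# 7. Export the raw scan for cleanup in Blender/ZBrush/etc.
chunk.exportModel("scan_raw.obj")
```
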

image16 (image by http://www.edwardtriplett.com)

 

I do not intend to go into further detail on this step, since the focus of this text is not the scanning itself; maybe I'll cover that in a future article. You can learn the basics about it in the support section from Agisoft: http://www.agisoft.com/support/tutorials/beginner-level/

To all my FOSS friends out there, there are Open Source solutions to generate a point cloud from photogrammetry as well: you can use VisualSFM for this, and Meshlab to build the geometry from the cloud data. Here's a great article from digital compositor Jesse Spielman about the subject: Open Source Photogrammetry Workflow

The free 123D from Autodesk can do the job as well, but if you want really accurate, detailed 3D scans, Photoscan is the way to go.

 

Step 2: Cleaning up scanned geometry

Ok, so here's where I started from. Usually the scan will not give you perfect results, since projection gaps are a hard thing to avoid; the best way to work around them is to do some cleanup afterwards in a modeling package.

We have a few ways to do this; you can use ZBrush or 3D-Coat, for example. Particularly if you are working on a very organic scanned mesh, you can easily cut pieces, fill holes and rebuild geometry in these apps, especially with the voxel-based tools in 3D-Coat and ZRemesher in ZBrush. You can use Netfabb for it as well, as it is a repair and analysis tool for 3D printing data.

As I'm doing a tombstone, the parts the scan didn't capture very well were mostly hard surface, so I decided to do the cleanup straight in Blender.

(Actually, I only did this at the final stage, shame on me, but let's pretend you don't know that.)
So this is the scan that I used as a base:

image15

Note that we have a bunch of scan clipping near the ground; we need to fix that!

The first idea I had was to work directly on the raw scanned mesh and just retouch the generated bake maps later, which I'd say was a very dumb idea. You might even be able to repaint the diffuse without problems, but then the normal map… even with a normal map color wheel, you will never get the edges/seams perfectly right.
So I decided to re-model the broken pieces to get a normal map from those parts that would merge seamlessly with the scan data. Just some box modeling to create a base mesh that could be sculpted in Mudbox:

image02

After some brush work, trying to match the feel of the reference, we have a final high poly mesh:

image03

If it were a different, more serious error, I would probably need 3D-Coat or ZBrush to fix it. 3D-Coat has nice tools for fixing scanned meshes, and in case you need to re-sculpt some parts of the scanned mesh you'll probably need to remesh the scan first, so these tools come in handy.

Tip: You can create a bounding box around the good parts of the mesh and use a boolean modifier to get rid of the defective parts (unless the software you're using has poor boolean algorithms).
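
If you like scripting, here's a minimal sketch of that trick in Blender's Python console, assuming the scan and the trim box are objects named "Scan" and "TrimBox" (both names are placeholders, and the calls below use the 2.7x API, so adjust for your version):

```python
import bpy

# Placeholders: "Scan" is the raw scanned mesh, "TrimBox" is a cube scaled
# to enclose only the good part of the scan.
scan = bpy.data.objects["Scan"]
trim_box = bpy.data.objects["TrimBox"]

# Add a boolean modifier that keeps only what is inside the box.
mod = scan.modifiers.new(name="TrimDefects", type='BOOLEAN')
mod.object = trim_box
mod.operation = 'INTERSECT'

# Apply the modifier so the defective geometry is actually removed
# (Blender 2.7x style; newer versions changed the active-object API).
bpy.context.scene.objects.active = scan
bpy.ops.object.modifier_apply(apply_as='DATA', modifier="TrimDefects")

# Hide the cutter afterwards.
trim_box.hide = True
```
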

Step 3: Retopology

With the high poly model ready to go, now we'll "just" need to translate that geometry information into a mesh light enough to be displayed in-game in real time. We will basically re-model everything from scratch using the high poly as a guide; it sounds disgusting, and it is. 😀

Thankfully we have some tools to help us on our journey; most 3D programs have specific tools for this kind of work, such as a snap-to-face tool or mesh projection. (Can we use auto-retopo? No. Cheater.)

I used snap to face in Blender for this, so I could extrude my polygons and they would snap to the high poly faces, simple as magic.
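
For reference, those are just the magnet/snapping options in the 3D View header; toggled from Blender's Python console they look roughly like this (property names are from the 2.7x series and may differ in newer versions):

```python
import bpy

ts = bpy.context.scene.tool_settings

# Turn snapping on and make moved/extruded vertices stick to the faces
# of the high poly mesh underneath (Blender 2.7x property names).
ts.use_snap = True
ts.snap_element = 'FACE'

# Project individual elements onto the surface so new geometry stays on it.
ts.use_snap_project = True
```
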
So here we go, set aside a big chunk of your day for this:

image08

Oh, that’s easy.

Try to block out the main shape elements first, adding only the necessary number of vertices.

With every vertex you put into play, keep in mind how you intend to join everything together later; think of the big picture!

image00

Ok, maybe I’ll need some coffee.

image19

HOLY JESUS

When things start to look frightening, just take a break, eat some carbs (hell yeah, oops, no!11), and try to set anxiety aside. Don't try to accelerate things that are meant to be slow, or you could end up with a poor job of weird cubist work.

image09

If I could just use a mirror 🙁

Notice how you can achieve complex shapes relatively quickly and easily by doing retopo on scanned objects! All the structure is already built; you just need to "draw over" it. Remember when, as a kid learning to draw, you put a blank sheet of paper on top of a drawing and traced it? It's exactly like that!

image14

“…Will I take a shock if I touch the computer motherboard? hmm… oh wait, I’m modeling. ”

EDIT: (yes, definitely)

Be careful not to get distracted and die while doing this.

image20

ZzZzzzZZZZZZzzzzzzZZzzzZZzzz…

Sometimes it's better to model small pieces as a separate mesh from the main object; this can save a good amount of polys 😉

image12

OH WAIT, done!

that’s eaaasy dude 😀

Digressions aside, the challenge when doing retopology for game content is to cut out as many polygons as possible while maintaining a good silhouette. For this model I preferred to sacrifice the polycount a bit in order to get a nice silhouette: the final model has around 3k polys. It could be optimized down to around 1k polys while still keeping a good silhouette, and if you don't care that much about it, it could even be pushed to something around the 400-poly level. (And you can always use a displacement map, if you have a DX11-capable video card.)

But screw that; my main concern with this model was to make a good article for you, and in context it will be a central piece of the game's puzzle core, so a little extra detail really helps to separate it from the other assets.

 

Step 4: UV Unwrap

Oh man, I love this step, even with basically 100% 98% of the world's population hating it. It's really nice to slow down a bit after all the headaches of retopology, and when you finally understand what an unwrap really is and how to do it properly, it starts to look like something enjoyable. (At least for me 8D)

I usually start by putting seams in places that will have obvious material changes (like a wooden wardrobe with a mirror in the middle, for example); after that, things start to get more abstract. As a good starting point, edges that bend beyond 90 degrees are a good place to put seams. It also depends on what you're planning to do next: if you plan to hand paint your textures, your seams and layout will need to be arranged one way; if you will be painting in a 3D app, you may take another approach, and so on…

So there is no strict rule; the rule is to make good-looking textures without visible seams.
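
If you want to see how that looks in Blender operators, here's a minimal sketch, run from Edit Mode with your chosen seam edges selected; the island margin is an arbitrary value, so tune it to your texture size:

```python
import bpy

# With the edges you chose for seams selected in Edit Mode:
bpy.ops.mesh.mark_seam(clear=False)

# Select everything and unwrap using the marked seams.
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)
```
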

Want to optimize your UV layout?

How about SCALE EVERYTHING UP and just try to pack everything back?! 😀

image17

BETTER THAN A JIGSAW PUZZLE

image05

Seriously, this is nice as hell.

You can go even further with optimizations by doing insane flipping and twisting of your UV islands, but try not to do things like putting islands at a 45-degree angle or giving mesh parts different axis orientations, since all of this can lead to shading problems on bake and can also be a bit odd to work with (especially if you plan to use something like dDo or Substance Designer to create textures). But hey, it's art! Do whatever you want, as long as it looks good. Unless you are Hitler.

Before exporting the mesh to bake, I select all the seams and mark them as sharp edges, even if they're in a section that is meant to be smooth. Don't worry, the normal map will take care of that later; by doing this you'll avoid a bunch of smoothing errors and probably get a cleaner bake. Trust me, trade secret.

(Also, if you use Blender, don't forget to enable Smooth Groups, Write Normals and Keep Vertex Order in the export window when exporting the mesh as .obj.)
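
In Blender terms, those two steps (marking the seam edges as sharp and exporting with the right OBJ options) can be scripted roughly like this; the flags are the 2.7x OBJ exporter options mentioned above, and the file path is just a placeholder:

```python
import bpy

# In Edit Mode, with the UV seam edges selected, mark them as sharp so the
# baker sees a hard split exactly where the UV islands split.
bpy.ops.mesh.mark_sharp(clear=False)
bpy.ops.object.mode_set(mode='OBJECT')

# Export the low poly with the options mentioned above (Blender 2.7x OBJ exporter).
bpy.ops.export_scene.obj(
    filepath="lowpoly_for_bake.obj",  # placeholder path
    use_selection=True,
    use_smooth_groups=True,   # "Smooth Groups"
    use_normals=True,         # "Write Normals"
    keep_vertex_order=True,   # "Keep Vertex Order"
)
```
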

 

Step 5: Baking

With our UVs crafted, we now need to bake the scan onto the low poly mesh. To do that I'll be using xNormal, a wonderful piece of software if I may say so. Since I see lots of people with questions about which settings to use when baking maps in xNormal, I'll show the settings I usually use that give me good results overall:

Ambient Occlusion:

Captura de tela 2015-03-30 21.18.52

 

Example:

cdf_occlusion

Cavity:

Captura de tela 2015-03-30 21.19.16

Example:

cdf_cavity

Curvature:

Captura de tela 2015-03-30 21.19.31

(turn the rays up to 512 on this one to be even more awesome)

Example:

cdf_curvature

 

So, how do you make your normal maps always look good?

You’ll need a cage, that’s it.

For those who don't know, the cage is the mesh that defines the boundaries of your projection. When you set a cage in xNormal, the projection stops being an explicit normal projection and becomes an averaged one; what this means in real life is that you won't have seams anymore. (If you did everything right until now.)

To get a better understanding of this subject, I reeeally recommend this thread from Polycount. (God bless you, EarthQuake.)

After that, I doubt you will have problems with your bakes anymore.

 

Step 6: Texturing

As we are working with scanned data that was captured with color information, we can also use it as a starting point for the texturing!

In xNormal, besides the standard bakes we usually do (normal map, height map, cavity, curvature, etc.), we have the option to extract color information from one model and project it onto another, and this is really great for what we need.

To bake the texture, just add the base texture that will be projected to the "Base texture to bake" field, on the high definition meshes tab:

1a

Then check the "Bake base texture" checkbox under maps to render:

2a

That's it!

Grave_final_baseTexBaked

 

For the missing parts, you can open the map in Photoshop, for example, and fill those areas using the other textures as a reference, or you can even open it in a 3D painting app like Mudbox and paint straight onto it.

As this is just a base, we won’t need super accurate textures, just a good base to work with.

 

image04

 

Now let’s talk about concepts for a second: you don’t really want to just replicate the scanned object and use it as it is in your game, right? (RIGHT?)

We are in a creative role, so don't limit yourself to what is given to you. The whole point of using a scanned model as a starting point is to add some extra organic detail that we probably wouldn't achieve easily doing everything by hand, and also to speed things up a bit. That way you can use your imagination to transform a scan of a headphone into a sci-fi spaceship, for example, even though you would never find a headphone-shaped spaceship near your house to scan, anyway.

So in this case, I started searching for references to make a tombstone, and that's where I found that scan. But I also needed an obelisk for a key piece of the game, and I figured this scan could turn into a nice fantasy obelisk. I hope Shelby forgives me for that.

In order to do that, I needed to clean up some of the baked maps to fit my idea. The first thing was to clear out all the text from the tomb using the clone brush, both in the diffuse and in the normal map (we probably could have done that earlier with the scan open in Mudbox, but that's fine).

After that, I used nDo to create the new normal map for the areas that I wanted, so I could add my own carved text to it.

With all the textures in hand, I started the final texturing work in dDo. What an amazing piece of software!

I added some edge damage and highlights, a bit of dust, wear and tear, removed a bit of shading, and here's the final diffuse map:

image18

If you want to make truly PBR textures, you would need to remove a lot more shading information from your diffuse, but I don't care that much about that here, since this was more of a proof-of-concept test.

Textures from scanned meshes can be a bit blurry too (like mine). To solve that, if the scan was made with a good camera and you have the Photoscan project to mess around with, you can re-project the textures onto an optimized UV layout after some cleanup of the scanned mesh; that way you will be able to get more resolution from your bake.

The last thing to do here is to add a detail normal map.
Just as textures from scans are usually blurry, the mesh is too, so to get some definition in the normal map we'll need to do a few tricks to avoid overly flat shading.

You can use Crazy Bump for that: just open your diffuse texture there and raise only the fine detail sliders. That way we'll get all the small details from our texture map, which will add an extra layer of texture to our shading.

Captura de tela 2015-04-01 17.36.13

(you can later mask out the areas you don't need in Photoshop)
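
If you don't have Crazy Bump at hand, the idea behind that fine-detail extraction is roughly "high-pass the diffuse, treat it as a height field, turn the gradients into normals". Here's a small numpy/Pillow sketch of that idea, not Crazy Bump's actual algorithm; the blur radius and strength are arbitrary knobs, and you may need to flip the green channel depending on your engine's convention:

```python
import numpy as np
from PIL import Image, ImageFilter

def detail_normal_from_diffuse(path, blur_radius=4, strength=2.0):
    """Fake a fine-detail normal map from a diffuse texture:
    high-pass the luminance, treat it as a height field, convert to normals."""
    gray = Image.open(path).convert("L")
    # High-pass: original minus a blurred copy keeps only the fine detail.
    low = gray.filter(ImageFilter.GaussianBlur(blur_radius))
    height = (np.asarray(gray, dtype=np.float32) - np.asarray(low, dtype=np.float32)) / 255.0

    # Height-field gradients -> tangent-space normals.
    gy, gx = np.gradient(height)
    nx, ny, nz = -gx * strength, -gy * strength, np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    normal = np.stack([nx, ny, nz], axis=-1) / length[..., None]

    # Pack the [-1, 1] vectors into 0-255 RGB (flip ny first if your engine
    # expects the other green-channel convention).
    rgb = ((normal * 0.5 + 0.5) * 255).astype(np.uint8)
    return Image.fromarray(rgb, "RGB")

detail_normal_from_diffuse("diffuse.png").save("detail_normal.png")
```
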

After that, just merge it with your main normal map. To do that correctly, you can use a utility called Normal Map Combiner, by Planet in Action; that way you'll merge all the vector information together the proper way.

(nDo2 and Crazy Bump have tools to easily do that, too)
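
If you'd rather combine them yourself, the important part is to blend the vectors instead of just overlaying the images in Photoshop: sum the X/Y perturbations, keep the base Z, and renormalize (a "UDN-style" blend). A minimal numpy/Pillow sketch, assuming both maps are the same resolution and the file names are placeholders:

```python
import numpy as np
from PIL import Image

def decode(path):
    """0-255 RGB normal map -> float vectors in [-1, 1]."""
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 127.5 - 1.0

def combine_normals(base_path, detail_path, out_path):
    base, detail = decode(base_path), decode(detail_path)

    # UDN-style blend: add the XY perturbations, keep the base Z,
    # then renormalize so every pixel is a unit vector again.
    combined = np.empty_like(base)
    combined[..., 0] = base[..., 0] + detail[..., 0]
    combined[..., 1] = base[..., 1] + detail[..., 1]
    combined[..., 2] = base[..., 2]
    combined /= np.linalg.norm(combined, axis=-1, keepdims=True)

    rgb = ((combined * 0.5 + 0.5) * 255).astype(np.uint8)
    Image.fromarray(rgb, "RGB").save(out_path)

combine_normals("bake_normal.png", "detail_normal.png", "final_normal.png")
```
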

normal3

One thing I really wanted in this asset was some kind of light emission in certain parts. To achieve that, I painted a mask to be used in the emissive input of the renderer:

image07

The idea is that when the player gets close, the text will be revealed and we'll have a mystic light showing some awesome (not entirely defined yet) stuff.

And, I think we are done!

image06

Notice how we could turn a tombstone into a fantasy monument; that's the beauty of our work, using imagination to transform things. I'll probably come back to tweak everything later, but this was a nice start to prove that using scanned objects is a reliable concept.

Now just put all the textures into your engine of choice and tweak all the settings to your taste!

 

Here we can see some renders of the progress:

image01

This is a test version I did using the first baked normal map, without changing the written stuff.

image13

Wireframe of the first test in Marmoset Toolbag 2 (yeah, the background totally makes sense)


I hope I could give you some ideas or inspiration to try out new stuff on this subject. Over the course of our development I'll probably be making more scanned assets (like statues, tree trunks, root textures and so on) and will also be sharing a workflow for our approach, since I'm planning to invest some money in a good camera and build a small scanning rig here at my workstation to scan some statues to use as a base for complex models (yeah, I'm planning to buy a series of plaster statues to use as reference for the environment, you know how cool that is???!!1). That's why I did this test first, so don't forget to subscribe and follow our progress as well!

Want to take a closer look at this model? With you in mind, we'll be uploading our assets to Sketchfab; that way, you can be more involved with our project 😉

So check out the final model here:

Grave Obelisk
by Guilherme Henrique
on Sketchfab

Hope you like it, see you next time!

 


Some useful links:

An article about the use of scanned models in the game "The Vanishing of Ethan Carter".

Polycount wiki

xNormal Webpage

Quixel (developer of dDo) Webpage

Blender Webpage
