Physically Based Materials Workflow

Check out this public session on the Physically Based Materials workflow and how it can transform arch viz.

The Substance Days event featured masterclasses, keynotes and talks by some of the top Substance users from around the world. What impressed most was the ability of these artists to create the impressive textures they do. If your current texture pipeline, like many, revolves around photographs, scans and Photoshop, your first exposure to Substance might seem daunting, as it really does require a completely different thought process. While much of the work is done in a node graph similar to many shader pipelines, building procedural materials requires that you look at the materials around you as puzzles to be solved. As a result, some of these node trees can get pretty intense.

The lineup of speakers this year was impressive to say the least, and there was also a chance to ask Scott a few questions about his presentation. He provided insight into his team's current pipeline and detailed the work they recently completed on the new NVIDIA headquarters.

Hit that Play button to watch, listen & learn from the experts in the industry…

Credits to Cgarchitect & Allegorithmic

Check out more recommended tutorials below:

Physically Based Materials Workflow

Check out this public session on the Physically Based Materials workflow and how it can transform arch viz. During the Substance Days event ...
Read More

Benchmark your render speed

Test how fast your machine can render using the V-Ray Benchmark app. V-Ray Benchmark is a standalone application which includes ...
Read More

God Rays Effect

Take a look at how to create the volumetric effect known as “God Rays” using “VRayEnvironmentFog”. You ...
Read More

Understanding VrayLight Select

If you're a 3D artist and hungry to learn, this is a must-see: Understanding the Light Select Render Element in V-Ray ...
Read More

Max to VR Interactive Workflow

So you've heard about 3ds Max Interactive and you want to get your hands on it, huh? 3ds Max is ...
Read More

Next-Gen GPU Rendering

http://on-demand.gputechconf.com/gtc/2017/video/s7463-blagovest-taskov-next-generation-gpu-rendering-high-end-production-features-on-gpu.mp4   ...
Read More

Scan anything with Photogrammetry

As Allegorithmic deepens its roots in the production and delivery of preset content (textures, materials, etc.), we would like to ...
Read More

Smartphone as Texture Scanner

https://alg-releases.s3.amazonaws.com/2Texture_DebutDuBP_webm_0.webm How to use a smartphone as a texture scanner: a Creative Technologist at Allegorithmic shows us how to ...
Read More

Procedural Wood Pixel Patterned

Procedural Wood Pixel Patterned for Archviz Here is the entire graph of the Pixel Patterned substance. This wooden block assembly ...
Read More

V-Ray Denoiser Quick Tutorial

Take a look at how the V-Ray Denoiser works, and what can be achieved with it. You’ll also learn how ...
Read More

Interactive Denoising with Nvidia AI

Interactive Denoising with Nvidia Artificial Intelligence. Good design is obvious: You know it when you see it. What’s less apparent ...
Read More

Vray Resumable Rendering

What is resumable rendering? In short, resumable rendering is the ability to have incomplete renders resume where they left off ...
Read More

Botelya ng Buhay

Check out some tricks and tools of the trade in this making-of, "Bottles of Life", and see where ...
Read More

What is Visual Composition

Composition! Everybody has heard of it, but what is it? Imagine purchasing a book and opening it, only to find out that ...
Read More

Exterior Lighting Setup

One of the most anticipated tutorials is finally out. We admire his work and the way he delivers his renderings ...
Read More

Camera Simulator

This tool will help you understand and play with basic DSLR camera controls and see the outcome, without buying an expensive actual ...
Read More

Photorealism Explained

How to achieve CG Photorealism. Some tips and tricks from the guru over the last 11 years to make images ...
Read More

V-Ray RT GPU Shaders Breakdown

Be inspired by the images and shaders breakdown video tutorial below: Basil, wood, details shaders breakdown. V-Ray RT GPU videotutorial ...
Read More

Videos

Tips & Tricks Videos Click on the top playlist button for more recommended tutorials ...
Read More

Benchmark your render speed

Test how fast your machine can render using the V-Ray Benchmark app.

V-Ray Benchmark is a standalone application which includes a single GPU scene and a single CPU scene. Just download the app here or via this link and run the test. You can also check the benchmark results below or at this link to compare your computer specs and render speed with others, or share your results online.

You can also add notes to let others know what mods you’ve made, like water cooling and overclocking. Best of all, V-Ray Benchmark is free and does not require a V-Ray license.

Note: Results of V-Ray Benchmark are based on the included CPU and GPU scenes.

Feel free to share a screenshot of your results on our social media with #3dteamz V-Ray Benchmark.

 

Credits to Chaosgroup

Understanding VrayLight Select

If you're a 3D artist and hungry to learn, this is a must-see: Understanding the Light Select Render Element in V-Ray for 3ds Max.

Overview


The Light Select Render Element represents the lighting contribution from one or more selected lights in the scene. Each Light Select channel can output selected lights’ raw, diffuse, or specular contributions to the illumination, or overall (normal) contribution. When multiple lights are selected, all the contributions from the selected lights are combined into a single render element. Multiple VRayLightSelect elements can be rendered for a single scene, and lights may be included in more than one VRayLightSelect element.

This element is similar to the Lighting Render Element. However, the Lighting element combines the effect of all lights in the scene, while the Light Select element allows a user-selected light or set of lights to be broken out, showing their individual effect(s) on the scene’s illumination. By using these render elements, specific lights in the resulting render can be adjusted (color, temperature, brightness, etc.) in a composite without the need for re-rendering.

For example, by generating a Light Select element for all of the backlights in a scene, an artist may adjust the backlighting of the rendered scene easily in the composite without affecting the rest of the scene’s illumination.
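To make the idea concrete, here is a minimal numpy sketch of that kind of additive relight in comp (placeholder arrays stand in for the real EXR loads, and the key/fill/back names are just assumptions for the example):

```python
import numpy as np

# A minimal sketch (not V-Ray's API): assume each VRayLightSelect AOV
# has been loaded as a linear float32 image of shape (height, width, 3).
h, w = 4, 4  # tiny placeholder images instead of real EXR loads
key_light  = np.full((h, w, 3), 0.6, dtype=np.float32)
fill_light = np.full((h, w, 3), 0.2, dtype=np.float32)
back_light = np.full((h, w, 3), 0.1, dtype=np.float32)

# Grade one light in post without re-rendering: warm up the backlights.
back_light *= np.array([1.2, 1.0, 0.8], dtype=np.float32)

# If every light appears in exactly one element, summing the elements
# recreates the full Lighting contribution.
lighting = key_light + fill_light + back_light
```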

Hit the Play button to watch:

Compositing Equation


The following diagram shows the compositing formula to recreate all the light in a scene in its most basic form, but only when each light in the scene is accounted for in exactly one VRayLightSelect element for a particular mode. If a particular light is used in more than one VRayLightSelect element, this equation will result in brighter lighting than intended because that light will contribute lighting more than once.
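In its most basic form, the relationship the diagram illustrates can be written as (a sketch using generic element names):

```latex
\text{Lighting} = \sum_{i=1}^{N} \text{VRayLightSelect}_i
```

where each light in the scene is included in exactly one of the N elements.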

 

Notes


  • When using VRayLightSelect, a good practice is to have one VRayLightSelect element for each light source in your scene. This way the Lighting element can be recreated from them, and adjusted as needed while compositing without re-rendering.
  • If the VRayLightSelect element mode is set to Direct illumination, the specular contribution is added to the render element. While this is perfect for simple compositing, a better workflow is for each light to be selected for two VRayLightSelect Render Elements, one with its mode set to Direct diffuse and another with its mode set to Direct specular. In this way, the specular and diffuse lighting can both be controlled independently at a composite level.

 

Source Credits to: Chaosgroup

 

Max to VR Interactive Workflow

So you’ve heard about 3ds Max Interactive and you want to get your hands on it, huh?


3ds Max is now combined with 3ds Max Interactive, a powerful VR engine that gives you the ability to go from Max to VR in just a few clicks. If you’re a current subscriber, you’ll notice a new Interactive menu when you open 3ds Max 2018.1: this will launch 3ds Max Interactive, our new 3D to VR creative workflow for design viz artists like you.

Ready to take the leap? Starting today, you can download both 3ds Max 2018 Update 1 and 3ds Max Interactive as two separate downloads from your desktop account, provided you’re a current 3ds Max, suite or collection subscriber.

Here’s how to get set up:

Are you more of a step-by-stepper? Here’s a how-to in a few steps:


UPDATE 3DS MAX TO 2018.1

1. Launch the Autodesk Desktop App.


2. Once in the Autodesk Desktop App, open the My Updates menu in the top left corner. Click on Autodesk 3ds Max 2018.1 Update, then click Update to start the download. 3ds Max 2018.1 will automatically install once the download is complete.

3. Don’t have the Autodesk Desktop App? No problem. Open up a browser and head on over to your Autodesk Account.

4. Under Product Updates, locate 3ds Max 2018 Update 1 and click Download.

5. Once your download is complete, simply follow the install instructions.

6. Huzzah! You’ve got 3ds Max 2018 Update 1. Next, go ahead and launch 3ds Max.

INSTALL MAX INTERACTIVE

1. Once 3ds Max is open, you’ll notice a new welcome screen.

2. This will open up a browser, where you’ll log in to your Autodesk Account. From here, click on All Products & Services (top left) and select the Downloads button on the 3ds Max tab.

3. Next, you’ll see a pop-up for all available 3ds Max downloads. Find 3ds Max Interactive and click Download now. Follow the download instructions and click the Install button.

4. From here, you’ll open up the 3ds Max Interactive installer. Follow the install instructions, enter your country, accept the License and Services agreement and click Next.

5. You’ll need a serial number for this next step, but luckily you’ve already got one! Use your 3ds Max 2018 serial number from your Autodesk Account and click Next.

6. Once you receive a confirmation that your serial number has been found and activated, click Finish to complete your install. Sit back, relax, stretch, hydrate – oh, wait, it’s done installing already?

7. You’ll get a notification once the installation has successfully completed, and now it’s time to launch 3ds Max Interactive.

8. It’s happening!

9. Decisions, decisions. Pick the appropriate template for the type of project you want to create.

10… the sky’s the limit! Happy creating.

READY TO GO FROM MAX TO VR?

On Monday, June 12th, we’ll be kicking off our journey from 3ds Max to VR. Join us for 10 days of short video power tutorials to get you up and running with VR content creation, and familiar with the tools and terminology.

Bonus: you’ll walk away with a solid understanding of the fluid workflow between 3ds Max’s powerful 3D tools and the new interactive toolset. Pretty sweet.

 

Source Credits to:
Design Visualization Team Autodesk Blogs

 

Next-Gen GPU Rendering

Watch the full GTC 2017 session here: http://on-demand.gputechconf.com/gtc/2017/video/s7463-blagovest-taskov-next-generation-gpu-rendering-high-end-production-features-on-gpu.mp4

Scan anything with Photogrammetry

As Allegorithmic deepens its roots in the production and delivery of preset content (textures, materials, etc.), we would like to share some knowledge we have gathered about using photogrammetry to scan the materials around us.

As a member of Allegorithmic Labs, I spend my time experimenting with crazy ideas, new gear and prototype software. Combine that with my background as a professional digital operator (here), and I could only find the subject of photogrammetry fascinating. But while the process of using expensive cameras and optics to capture your data is fairly well documented, it was unclear whether it could be done with a simple smartphone.

I decided to dig into the subject and give it a try, and because, well, it works, the goal of this blog post is to provide you with everything you need to set up a low-cost, DIY pipeline for high-quality material scanning using photogrammetry.

As you will see, there is no magic involved, only the right tools (Substance of course), so I encourage everyone to go for it and make their own scanned materials. Let’s scan the world!

SHOOTING

In the case of capturing on a mobile device, it’s important to choose a photo app that can save uncompressed photos and provides manual exposure controls. In our case, we used a Samsung Galaxy S6 and the FV-5 camera app.

On a photogrammetry shoot, the goal is to capture a picture as sharp as possible and without any noise. The lens’s aperture on a smartphone is often locked or not easily adjustable, so we need to find an exposure setting that will suit our needs. The exposure needs to be good enough to keep the ISO as low as possible (close to 100) and the shutter speed high enough to avoid motion blur. Also, you need to set the White Balance on manual to avoid color shift between images.
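As a rough sanity check on those settings, the standard exposure-value formula tells you where a given combination lands (a sketch; the f/1.9 aperture below is an assumed smartphone value, not from the article):

```python
import math

# Standard exposure-value formula: EV100 = log2(N^2 / t) - log2(S / 100),
# with aperture N, shutter time t in seconds and ISO S.
def ev100(aperture: float, shutter_s: float, iso: float) -> float:
    return math.log2(aperture**2 / shutter_s) - math.log2(iso / 100.0)

# e.g. a fixed f/1.9 lens, 1/250 s, ISO 100 -> about EV 9.8,
# roughly overcast-daylight territory.
print(ev100(1.9, 1 / 250, 100))
```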

In terms of light quality for the photogrammetry shoot, it’s best to wait for the lighting to be diffused such as on a cloudy day. Alternatively, you can use a diffuser disc such as the Photoflex LiteDisc.

The diffused light will minimize harsh shadows, which will produce a better color map. In our case, we took advantage of the overcast sky for the shoot.

For a tree, we need to take a series of photographs around the trunk. We move around the trunk 360 degrees, while taking a new photo at equidistant intervals.

Don’t forget to keep enough overlap between each photo, as it will help the algorithms provide a better result.

Intensive shooting can quickly drain the battery, so do not hesitate to pack an external battery for backup 😉

PHOTOGRAMMETRY

Back at the office, we will continue our work using dedicated photogrammetry software.

There are different software packages on the market, but for our example we have chosen PhotoScan Standard from Agisoft.

The typical process for reconstructing the data is as follows: import the images, work with the Point Cloud data, and finally reconstruct the Mesh. In this example we will be focusing on only one part of the tree. At the end of the process, we’ll have a High Poly Mesh of around 15M polygons and an 8K Color Texture. As you can see, data captured with a mobile phone can still provide good results for photogrammetry.

POST PROCESS

The High Poly Mesh is around 15M polygons and to fit into a standard workflow, we have to transfer the High Poly detail to a Low Poly Mesh. Here you can use the integrated bakers in Substance Designer or Substance Painter. You can also use your favorite 3D package for the Low Poly Mesh creation process.

In our case, we used Blender, for the Low Poly Mesh and the Normal / Color Baking. You can find a very nice tutorial from Darrin Lile here.

If you would like to work with a Height map, you can use Substance Designer to convert your Normal Texture into a Height Texture. In Substance Designer, simply connect your Normal Texture to the Normal to Height node to produce the height data. Finally, go to the Menu Bar, Tools/Preferences…/General/Cooker/Cooking size limit, type 8192, then set the Output Size to 13 (output sizes are powers of two, and 2^13 = 8192) to produce an 8K texture. And voilà, you have your Height Texture.

Now, we have our Low Poly Mesh, which can be used in any 3D application, but we have some holes and artefacts.

In this case, it’s possible to fix these issues using Substance Painter.

Substance Painter can import and export 8K maps. It’s possible to import our Color Texture in 8K, work on it in 4K and export the result in 8K as Substance Painter doesn’t upscale the textures. Substance Painter will actually recompute the texture to a lossless 8K size.

After importing the Mesh, Color, Normal and Height map, we simply add a Fill Layer with all textures. You can then add any additional channels that may be needed and set the resolution to 4K.

Next, we can use the Clone Tool to fix issues. By working on a new layer, you separate the clone data from the base photogrammetry.

Just don’t forget to switch the blending mode for all channels to Passthrough. With the Passthrough blending mode, the clone layer picks up the content of all channels below it.

And like other brushes in Substance Painter, the Clone Tool works across all channels in a single stroke: you can clone the Color as well as the Height and/or the Normal at the same time. It’s a huge time saver.

To remove as much of the lighting visible in the Color Texture as possible, we can use a filter we created in Substance Designer. This filter is a Work In Progress and will be available in a future release.

(Click here to download it)

This filter can remove the lighting based on a Color Reference and the Ambient Occlusion (computed with the High Poly Mesh). The end result is a Color Texture that is more versatile and neutral, and can be used in different lighting conditions.

FINISHING

Now, everything is fixed. We can start working on an augmented material, aka adding manual or procedural details to the scan.

Let’s start by baking all of the additional maps, like World Space Normal, Ambient Occlusion, Curvature, Position and Thickness. Substance Painter can bake this 15M polygon Mesh very quickly.

For example, we can add some moss to our tree. We found a Moss Material on Substance Share and imported it into Substance Painter.

We are off to a good start simply by adding this material to a Fill Layer and using the Moss Smart Mask. If we play with the Mask Builder parameters and the World Space Normal, it’s easy to put more moss on one side, as if it were on the north side of our tree. We can add more details with another Smart Mask, and finally use the “Organic Spread” Particle Brush for some final touches.

For another example, we can burn our tree.

Just add a new Fill Layer with a smoke material, add a Mask and use the “Burn” Particle Brush. If you choose a large Brush Size, you can get a nice result.

Finally you can export all Textures and Additional Maps in 8K and render the results in your favorite renderer. As you can see in this example, the 8K export from Substance Painter is lossless and is the same quality as the Color Texture from the Photogrammetric Software.

We’re now entering an era of scanning, and nothing should prevent anyone from producing and customizing their own scans. So I invite you all to go outside and take pictures of materials you like. The knowledge and tools are here today to help you turn them into production-ready textures.

In the coming weeks, we will also publish a second part to this tutorial covering how to convert a scan like this into a seamless PBR material using Substance Painter and Substance Designer.

Anyways, here you go, I hope you liked it! This blog is a first, little step for Allegorithmic and we will soon share more info and announcements. Stay tuned 🙂

Credits: Anthony Salvi From: allegorithmic

Smartphone as Texture Scanner

How to use Smartphone as Texture Scanner

A Creative Technologist at Allegorithmic shows us how to use a smartphone camera to capture a material. Using the new scan filters in Substance Designer 6, it’s possible to transform a smartphone into a material scanner.

First of all, it’s important to find the right balance between quality and cost. For the cost part, we will use a cardboard box, a stack of sheets of tracing paper and an LED light for our lighting setup. For the image capture, we want to use the best process possible, so for our tests we used an iPhone 6s and iPhone 7 as our camera with the Adobe Lightroom mobile app.

This app allows you to capture in RAW (uncompressed) format on iOS and Android. Adobe Lightroom also has some very nice features such as full manual shooting and High Dynamic Range modes. However, you can find other apps like ProCam on the App Store or Camera FV-5 for Android.

With Substance Designer 6, we have a new filter named Multi-angle to Normal that has 8 possible inputs. To use the filter, we have to produce 8 images at 45° around our material.

To understand the image capture process, we can imagine our material on a much larger scale. Let’s imagine our material is at the scale of a mountain. At this size, our light can represent the sun. If the sun turns around the mountain, we can see the shape of the shadows cast in black. These shapes represent the indirect information about the relief of the mountain itself. If we combine enough information, at least 4 images (using 8 for a better result), the algorithm can calculate the relief.
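That intuition is essentially classic photometric stereo. Purely as an illustration of the principle (not Allegorithmic’s actual filter), here is the per-pixel math: under Lambertian shading the observed intensity is the dot product of the light direction and the surface normal, so with several known light directions the normal falls out of a least-squares solve.

```python
import numpy as np

# Light directions for 4 of the 8 shots (assumed, roughly unit length).
L = np.array([
    [ 0.7,  0.0, 0.7],
    [ 0.0,  0.7, 0.7],
    [-0.7,  0.0, 0.7],
    [ 0.0, -0.7, 0.7],
])
I = np.array([0.8, 0.5, 0.2, 0.5])  # observed intensities at one pixel

# Lambertian model: I = L @ n, so recover n by least squares.
n = np.linalg.pinv(L) @ I
n /= np.linalg.norm(n)  # normalize; the magnitude carried the albedo
print(n)                # the estimated surface normal at this pixel
```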

The process is simple. We just need to turn the light 8 times at the same distance around our material.
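For reference, a tiny sketch of where those eight positions sit (the radius and altitude values are assumptions; the altitude choice is discussed further below):

```python
import math

# Eight light placements, 45 degrees apart around the sample.
radius_cm = 50.0     # distance from the sample centre (assumption)
altitude_deg = 35.0  # light height angle above the table (assumption)

for i in range(8):
    azimuth = math.radians(i * 45.0)
    alt = math.radians(altitude_deg)
    x = radius_cm * math.cos(alt) * math.cos(azimuth)
    y = radius_cm * math.cos(alt) * math.sin(azimuth)
    z = radius_cm * math.sin(alt)
    print(f"shot {i + 1}: light at ({x:+.1f}, {y:+.1f}, {z:.1f}) cm")
```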

Gear

To help us, we can draw a circle on our sheet with our 8 angles. You can download our template at this link.

To improve our chart, we can switch the color to black to reduce the lighting bounces from our LED light onto our material sample. Then we add some shapes (square, triangle, moon, star etc.) at the 8 angles, which are useful in post-process. These shapes are used by the photomerge process in Adobe Photoshop. They help Photoshop to produce a quick and accurate merging.

Finally, we draw a 10cm by 10cm square at the center to cut a hole for our opacity shoot.

Next, we will set up the scanbox. The box is as simple as possible, but it’s very important to retain the ability to capture the opacity of a material. To capture opacity, we will use a square hole (10cm x 10cm) at the center of the box and a stack of 6 sheets of tracing paper to diffuse the light through the material, plus a last sheet on top of the chart, as you can see in the image below:

Don’t forget a hole to put the LED light inside the box as well.
To finish our setup and to reduce the cost, we used a simple cardboard tube with a foam core plate to create a stand for our smartphone. To maximize the final frame, the stand size is calibrated to the size of the box. We added a black paper sheet to remove all potential color bounces coming from the cylinder. Finally, the smartphone is attached with 4 pieces of tape.

Here is the final setup with the scanbox and the stand. The stand is not as stable as a tripod and so it needs to be held in hand.

During the capture, it’s always a good idea to neutralize the color shift coming from the lighting. By nature, all lights have a color tint with some tints stronger than others. For example, a candle is red, a tungsten bulb is orange, the sky is blue. To neutralize this color and keep color consistency, a ColorChecker is required. Basically, it’s a reference, with a gray scale printed and calibrated.

The goal in post-production is to adjust the gray color captured and keep it as a pure gray. You can find more info about this product here and here.

The LED is designed by Manfrotto and produces a well-balanced daylight.

This is our toolbox:
– A cardboard box with a multi-angle chart
– A cardboard stand
– A LED light + tripod + string
– A ColorChecker
– A microfiber cloth

The photo shoot

Our goal will be to capture 2 materials: a leather and a complex fabric. We need to make sure that the only light visible on our material is coming from our LED light. If not, the shadows cast can be wrong and the post-processing in Substance Designer will produce an erroneous computation. So, close the windows or curtains to darken the room, eliminate as much ambient light as possible, and turn your LED light on. Next, we place our material on the box.

Be sure the material is not covering the little shapes (square, triangle, moon, star, or whatever you’re using). Then, clean your sample, as dust or hair will often fall onto our material. The most important thing is to not move the material sample during the shooting! If the material moved, the photo-merge cannot be done correctly.

A string is attached to the LED light. We use a simple knot on the string as a metering guide between our LED light and the chart points around the circle on the box.

The most important angle is the altitude. See the graph below:

If the angle is too low, the shadow casting could be too long and we stand to lose some information in these shadow areas. The result will be an erroneous normal map computation with a flat result in the darker areas.

The trick is to find an angle to preserve information and provide enough shadowing for a good result.
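A quick back-of-the-envelope sketch of the trade-off (the 2 mm bump height is an assumed figure):

```python
import math

# For a surface feature of height h, the cast shadow length is roughly
# h / tan(altitude): low lights stretch shadows and hide detail.
h_mm = 2.0  # height of a bump on the material (assumption)
for altitude_deg in (15, 30, 45, 60):
    shadow_mm = h_mm / math.tan(math.radians(altitude_deg))
    print(f"{altitude_deg:2d} deg -> shadow ~{shadow_mm:.1f} mm")
```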

Now it’s time to work on our camera. Start by cleaning your lens; our smartphones often aren’t really clean, and it takes 5 seconds with a microfiber cloth.

With Adobe Lightroom mobile you can use the PRO mode and manually set up your shot or use the HDR mode. Both deliver good results.

In the PRO mode, turn the flash off, activate the DNG format, set the ISO as low as possible (25 to 100) and set the White Balance to Daylight. To avoid motion blur in your image, keep a speed around 1/50 sec and adapt your ISO to get a good exposure.

You will find more tips and tutorials on using Adobe Lightroom for mobile here.

Another option is to use the HDR mode and activate the Save Uncompressed Original option. In this configuration, you get both the original DNG file and the computed “HDR” DNG.

It can be helpful to add the grid and the level on screen for framing the shot. You can also add a timer for 5 seconds to act as a remote trigger for capturing the image. This will provide a more stable result as manually touching the shutter button on the camera can inadvertently add a shake, which can produce a blurrier image.

If you have an Adobe Creative Cloud account you can import your images into your Adobe Lightroom Desktop library.

Here is a tutorial on importing content.

Once everything is set, it’s time to capture the material. Set the LED lighting power to maximum. Make sure you have enough battery power (smartphone and LED light) and that the LED light doesn’t become too hot. During the shoot, it’s important to keep the same framing as much as possible. However, don’t panic if your pictures are not perfectly aligned. It’s more important to keep all of the shapes (square, triangle, moon, star etc.) on the chart visible in the images.

Now we can shoot a reference with our ColorChecker under the same LED light.

After we take the 8 material shots and 1 color reference image, we can shoot three more pictures. One for the color, one for the new white balance and one for the opacity (for the fabric).

For the color, we need soft and diffuse lighting. In our case, we used the indirect light coming from a window and bounced in an aluminum paper sheet. It’s not perfect, but it’s a good base. Of course, we have more options for this lighting, but this was the simplest in our case.

Don’t forget to shoot a new ColorChecker to set the color correction in post production.

For opacity, open the curtains in the room and setup the LED light inside the box. With the LED light at a low power, we have enough backlight for capturing opacity.

For a better light diffusion, you can add white paper on the inside and have the light centered in the middle of the scanbox: a foam core plate with a hole works well.

Now we have our 8 angles, 1 color, 1 opacity and 2 ColorChecker images. Next, we import our images into Adobe Lightroom Desktop. With Adobe Creative Cloud activated, all images are on our hard drive.

In our case, we used the “HDR” dng files, but as we explained, the dng files from the PRO mode deliver a good result as well. For the ColorChecker images, we keep the regular dng file.

Let’s select the ColorChecker image for the LED light and use the White Balance tool.

Then, we set the Whites, Blacks, and Clarity parameters to 0, and finally copy and paste these values to the 8 angle images.

We can repeat this process for the ColorChecker image shot in daylight and apply the new values to our color and opacity images. Again, set Whites, Blacks and Clarity to 0.

For the opacity image, we can use the gradient tool in Lightroom to reduce the vignette effect.

For the last step, we exported all of the images as TIFF 16bit using the Adobe 1998 ICC profile.

Our phone stand was simple and cheap but not very efficient. However, this isn’t a problem. Using the photomerge feature in Adobe Photoshop, it’s possible to merge and align the images. Just load your pictures in photomerge and uncheck the Blend Images Together option.

After that, add a solid black layer at the bottom of the layer stack, crop your image to a square and resize it to 4096 pixels.

Finally, export your layer stack to new individual TIFF files with the remove layers option.

Creating the seamless material in Substance Designer

For the leather material, we created a new Substance using the Physically Based (Metallic/Roughness) template set to 4096×4096 in size. Then we linked our pictures to our project.

The first node we used was the Multi Crop. It’s useful to select which area of the image we want to target for our material. This crop can be critical when the tiling is difficult. But with the procedural approach, you can easily test different crops for a better tiling. After that, we used the Multi Color Equalizer to clean our images and finally a Multi-Angle to Normal node to generate the normal map.

We can add our color image and just copy paste our Multi Crop node and set the input Count to 1.

With the Color Equalizer node we can balance the light and dark values to produce a better color map for our material.

Now it’s time for the tiling part. With the new Smart Auto Tile node, you can quickly create a seamless material. Don’t hesitate to move, rotate and transform your pattern to get the best result possible.

For more information, you can find a tutorial here.

The new Color Match node is perfect for changing the color of our leather while retaining all the details. It gives you an endless variety of color options, with the benefit of mixing with the initially captured color data.

The Specular Level can be vital to achieving realistic results. With the Metallic/Roughness definition, when the Metallic is set to 0, the material is understood to be a dielectric and the reflectance value at the Fresnel zero angle or f0 is set to 4% reflective. This works for most common dielectric materials, but some dielectrics can have a different Index of Refraction or IOR. The Specular Level can be used to override the default 4% value used in the metallic/roughness definition. Here, we have a leather and we can use the Specular Level to set a custom level for the leather material.
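For the curious, the 4% default comes from the Fresnel reflectance at normal incidence for an IOR of 1.5; a quick check with the standard formula (a general optics aside, not Substance-specific):

```python
# Fresnel reflectance at normal incidence for a dielectric with IOR n:
# f0 = ((n - 1) / (n + 1))^2
def f0(ior: float) -> float:
    return ((ior - 1.0) / (ior + 1.0)) ** 2

print(f0(1.5))   # ~0.04 -> the default 4% used by metallic/roughness
print(f0(1.33))  # ~0.02 -> water, a dielectric with a lower IOR
```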

To drive this channel, just add a new Output node and set the usage and identifier to specularLevel. Then adjust the value by using a Uniform Color node in Grayscale Mode. A value of 60 is a good start for a leather.

For the Height map, we used the node Normal to Height HQ and set the quality to High.

Lastly, we worked on the Roughness and used the Height map to blend between two roughness values across the bumps.

The easiest part is the Metallic channel. Just add a Uniform Color node, set it to Grayscale, and set the value to 0.

After these few steps, finally, here is the complete graph.

Our leather sample was pretty matte, and after some tests we found good values for our leather, and voilà 🙂

Creating the fabric material

The process is quite similar, with the same Specular Level output set at 60, except we have the opacity channel.

Using a Color Equalizer and a Color to Mask node, we can have good control over the opacity mask for our material. We then just need to add an Output node with the Usage and Identifier set to opacity.

One caveat with the Smart Auto Tile is that there is not an opacity input. To work around this, we can plug the opacity in the Height input and use the Height output for our opacity channel.

As with the leather, we can use the Color Match node to easily transform our scanned material into a hybrid material with multiple color options.

But of course, we can use our color image as well 🙂

And at the end, the complete graph:

Conclusion

At the end of this process, we have 2 hybrid materials, scanned with our smartphone and ready to use in a PBR pipeline. All Substances and images are available at this link.

Don’t hesitate to play with material settings and explore the scanning process. Next time you need some material references, grab your smartphone, build your own scanbox and have fun with Substance Designer 6!

One more thing

In a previous blog post about photogrammetry, we showed a specific use case. Photogrammetry is really fun, but it means your material is dependent on your object; maybe you would like something more versatile. Fortunately, with Substance Designer 6, it’s easy to convert photogrammetry maps into a single tiling material. After this step, your material will be usable on any mesh.

The job was done in 3 simple steps:

  1. Crop a good area in the map exported from Substance Painter.
  2. Make it tile with the Smart Auto tile node.
  3. Balance the color with the Color Match node to erase/smooth the color variations.

(8k workflow)

And now, we can apply this texture on a simple cylinder and view our bark 🙂

You can find the Substance file and images here.

As you can see, you have multiple usages for these new scanning nodes in Substance Designer 6.

We hope you will find this one interesting and that it gives you more ideas on what you can achieve with Substance Designer 6!

Credits: Anthony Salvi From: allegorithmic

Procedural Wood Pixel Patterned

Procedural Wood Pixel Patterned for Archviz

Here is the entire graph of the Pixel Patterned substance. This wooden block assembly is used in architecture, often to delimit spaces in a house or simply for interior decoration. The material is aesthetically pleasing and gives a natural feel to the house.

First of all, we’ve defined the essential points of the substance, namely the choice of color, the visible wood veins and the number of blocks.

For the color, I used wood stain shades used in the industry, with the Gradient Map node to choose the hue. I also used the Glaze Selection parameter and varied the hue areas using the Noise Selection in order to have as many choices as possible.

For the wood veins, I have created several patterns that can be tweaked using the Vein Selection Slider.

For the top of the sticks, I mixed multiple grunge and noise generators with a warp and a spherical shape.

Substance Brick:

The slope of the brick is essential for this substance. Basically, it is a linear gradient along the X-axis from black to white, then from white to black. The problem is that when the apex is moved to one side, the gray values are crushed and no longer follow the linear gradient curve.

After several tests, we used the Pixel Processor to manage the slope. Besides being very fast, it allows us to keep the linear gradient to its maximum range when moving the X-axis with the slider.

Here is the view of the Pixel Processor to control the X-axis with a slider.
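As a sketch of the per-pixel function involved (Python standing in for the actual Pixel Processor graph; the apex parameter name is ours):

```python
# A tent profile whose apex position p can slide along X while both
# sides keep the full black-to-white range.
def brick_slope(x: float, p: float) -> float:
    """x: pixel position in [0, 1]; p: apex position in (0, 1)."""
    if x < p:
        return x / p              # rising edge, rescaled to hit 1 at p
    return (1.0 - x) / (1.0 - p)  # falling edge, rescaled from 1 to 0

for x in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"x={x:.1f}: {brick_slope(x, 0.4):.2f}")  # apex moved to 0.4
```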

Next, simply plug it into the Tile Generator or Tile Sampler and expose the desired number of bricks in X and Y.

For realistic substances like terracottas, the base color is key to the realism of the substance. To achieve this, we needed to assemble several noises at different scales, which makes it possible to obtain information with varied details.

Some of the results:

For the Bricks Bond Variations substance, every pattern was created in a subgraph. All subgraphs are composed as shown in the image (below). The information is then assembled in a different channel with the RGBA Merge node. We preferred to create them this way in order to make them reusable in other graphs.

Once created, we can find them in the brick substance. With the Grayscale Conversion node, we can retrieve information from each channel. This method also allows us to add new patterns in the future for all substances that use them.

In each pattern, we will find the pattern in R, the random color in G, the mask if there is one in B and the slope in A.
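In other words, the packing convention works like this (a numpy sketch with placeholder maps; the real patterns come from the subgraphs):

```python
import numpy as np

# Placeholder single-channel maps of the same size (assumption).
h, w = 256, 256
pattern      = np.random.rand(h, w).astype(np.float32)
random_color = np.random.rand(h, w).astype(np.float32)
mask         = np.ones((h, w), dtype=np.float32)
slope        = np.random.rand(h, w).astype(np.float32)

# RGBA Merge: pattern -> R, random color -> G, mask -> B, slope -> A.
packed = np.stack([pattern, random_color, mask, slope], axis=-1)

# A Grayscale Conversion downstream unpacks one channel, e.g. the slope:
slope_again = packed[..., 3]
```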

For the brick depth details, I used a simple technique with Safe Transform and masking. With a Slope Blur node I created the rocky aspect and the volume of the brick depth, and with a Safe Transform node I randomized the result a number of times.

I used the Blend node to merge them. To do so, I select the result of the pattern with the randomized colors as a mask input. For each blend I used a Histogram Shift node to change the choice of mask obtained. This saves me time by varying a single noise result.
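The trick relies on Histogram Shift wrapping grayscale values around, so one noise yields many masks. A rough mimic of that behaviour (our sketch, not the node’s exact implementation):

```python
import numpy as np

noise = np.random.rand(256, 256).astype(np.float32)  # a single noise map

def histogram_shift(img: np.ndarray, shift: float) -> np.ndarray:
    return (img + shift) % 1.0  # offset the values and wrap around 0..1

# Each shift produces a different-looking mask from the same noise:
mask_a = histogram_shift(noise, 0.25) > 0.5
mask_b = histogram_shift(noise, 0.50) > 0.5
```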

Here is the layer of masks:

Some different substance results:

Terrazzo Generator Substance:

The Terrazzo is commonly used in architecture. This material is easily recognizable due to its inserts. It is often composed of variable-sized inserts. To respond more easily to a wide choice of terrazzo, we have chosen to control them with layers of inserts. Thus we find in the graph the creation of the inserts.

The starting point for this substance is to be able to control all aspects of this material. I divided the inserts into 3 levels.

For each insert level, the user can handle density, size, color, color variation, intensity of normal / height, roughness, metallic.

In order to avoid too many parameters, we had to make some compromises, notably for color. Instead of exposing X colors for each insert level, we created a Color Share parameter that allows us to pick colors from the other insert levels with a slider.

The creation of this Terrazzo generator has been made based on the feedback of Benoit Campo from Paris Picture Club.

Parquet Substance:

For the Parquet substance, we worked so that we could reuse the filters for each type of wood. Thus we have in a subgraph the type of wood, a subgraph for the type of pattern and a final filter to split the wood material into boards.

For each type of wood, you can choose a type of finish (natural, varnish, etc.) in the details, which is super useful. Finishes save time in the choice of flooring.

It is also possible to change the pattern used for the layout of the floor:

Now you know more about how we designed this first Substance Source – Architecture Selection update together with Gaëtan Lassagne. Now, it’s your turn to play!

Credits: Damien Bousseau From: allegorithmic

 

V-Ray Denoiser Quick Tutorial

Take a look at how the V-Ray Denoiser works and what can be achieved with it. You’ll also learn how to use the V-Ray Denoiser as a standalone animation tool. You can also download the 3D model below to enjoy and try:


Credits to: Chaosgroup

Free V-Ray Denoiser 3D Scene: