Click on the top playlist button for more recommended tutorials…
If you need one-on-one sessions or more basic, advanced, or specific tutorials, please don’t hesitate to let us know. We have diverse experience and a competitive team to recommend, with tested and proven track records internationally. Learn from the right people who can truly help advance your professional career.
- Physically Based Materials Workflow
Check out this public session about the Physically Based Materials workflow and how it has transformed arch viz.
The Substance Days event featured masterclasses, keynotes and talks by some of the top Substance users from around the world. What impressed us most was the ability of these artists to create the impressive textures they do. If your current texture pipeline, like many, revolves around photographs, scans and Photoshop, your first exposure to Substance might seem daunting, as it really does require a completely different thought process. While much of the work is done in a node graph similar to many shader pipelines, building procedural materials requires you to look at the materials around you as puzzles that need to be solved. As a result, some of these node trees can get pretty intense.
The lineup of speakers this year was impressive to say the least, and we had a chance to ask Scott a few questions about his presentation this year. He provided insight into his team’s current pipeline and detailed the work they recently completed on the new NVIDIA headquarters.
Hit that Play button to watch, listen & learn from the experts in the industry…
Credits to Cgarchitect & Allegorithmic
Check out for more recommended tutorials below:
- Benchmark your render speed
Test how fast your machine can render using the V-Ray Benchmark app.
V-Ray Benchmark is a standalone application that includes a single GPU scene and a single CPU scene. Just download the app here and run the test. You can also check the benchmark results below, compare your computer specs and render speed with others, or share your results online.
You can also add notes to let others know what mods you’ve made, like water cooling and overclocking. Best of all, V-Ray Benchmark is free and does not require a V-Ray license.
Note: Results of V-Ray Benchmark are based on the included CPU and GPU scenes.
Feel free to share a screenshot of your results on our social media with #3dteamz and #VrayBenchmark.
Credits to Chaosgroup
- God Rays Effect
Take a look at how to create a volumetric effect known as “God Rays” using VRayEnvironmentFog. You can also download the 3D scene below to try it yourself:
Credits to: Chaosgroup
Free Cargo Ship 3D Scene:
- Understanding VrayLight Select
If you’re a 3D artist hungry to learn, this one is a must-watch: Understanding the Light Select Render Element in V-Ray for 3ds Max.
The Light Select Render Element represents the lighting contribution from one or more selected lights in the scene. Each Light Select channel can output selected lights’ raw, diffuse, or specular contributions to the illumination, or overall (normal) contribution. When multiple lights are selected, all the contributions from the selected lights are combined into a single render element. Multiple VRayLightSelect elements can be rendered for a single scene, and lights may be included in more than one VRayLightSelect element.
This element is similar to the Lighting Render Element. However, the Lighting element combines the effect of all lights in the scene, while the Light Select element allows a user-selected light or set of lights to be broken out, showing their individual effect(s) on the scene’s illumination. By using these render elements, specific lights in the resulting render can be adjusted (color, temperature, brightness, etc.) in a composite without the need for re-rendering.
For example, by generating a Light Select element for all of the backlights in a scene, an artist may adjust the backlighting of the rendered scene easily in the composite without affecting the rest of the scene’s illumination. Hit the Play button >
The following diagram shows the compositing formula to recreate all the light in a scene in its most basic form, but only when each light in the scene is accounted for in exactly one VRayLightSelect element for a particular mode. If a particular light is used in more than one VRayLightSelect element, this equation will result in brighter lighting than intended because that light will contribute lighting more than once.
- When using VRayLightSelect, a good practice is to have one VRayLightSelect element for each light source in your scene. This way the Lighting element can be recreated from them, and adjusted as needed while compositing without re-rendering.
- If the VRayLightSelect element mode is set to Direct illumination, the specular contribution is added to the render element. While this is perfect for simple compositing, a better workflow is for each light to be selected for two VRayLightSelect Render Elements, one with its mode set to Direct diffuse and another with its mode set to Direct specular. In this way, the specular and diffuse lighting can both be controlled independently at a composite level.
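The compositing formula above amounts to a per-pixel sum of the Light Select passes, and the diffuse/specular split simply gives you two such passes per light to grade independently. Here is a minimal plain-Python sketch of the idea on toy single-channel pixel lists (it is a conceptual illustration, not tied to any particular compositing package):

```python
def recombine_light_selects(elements):
    # Sum per-light LightSelect passes pixel-by-pixel to recreate the
    # full Lighting element (linear color values assumed). Each scene
    # light must appear in exactly one element, or its contribution is
    # counted twice and the comp renders brighter than intended.
    return [sum(px) for px in zip(*elements)]

def grade(element, gain):
    # Adjust one light's contribution in comp without re-rendering.
    return [v * gain for v in element]

# Toy single-channel "images" (flattened pixels) for two lights:
key  = [0.6, 0.6, 0.6, 0.6]
back = [0.2, 0.2, 0.2, 0.2]

lighting = recombine_light_selects([key, back])              # ~0.8 per pixel
warmer   = recombine_light_selects([key, grade(back, 1.5)])  # ~0.9 per pixel
```

Brightening only the back light changes the recombined image without touching the key light's contribution, which is exactly the re-render-free adjustment the Light Select workflow is for.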
Source Credits to: Chaosgroup
- Max to VR Interactive Workflow
So you’ve heard about 3ds Max Interactive and you want to get your hands on it, huh?
3ds Max is now combined with 3ds Max Interactive, a powerful VR engine that gives you the ability to go from Max to VR in just a few clicks. If you’re a current subscriber, you’ll notice a new Interactive menu when you open 3ds Max 2018.1: this will launch 3ds Max Interactive, our new 3D to VR creative workflow for design viz artists like you.
Ready to take the leap? Starting today, you can download both 3ds Max 2018 Update 1 and 3ds Max Interactive as two separate downloads from your desktop account, provided you’re a current 3ds Max, suite or collection subscriber.
Here’s how to get set up:
Are you more of a step-by-stepper? Here’s a how-to in a few steps:
UPDATE 3DS MAX TO 2018.1
1. Launch the Autodesk Desktop App.
2. Once in the Autodesk Desktop App, open the My Updates menu in the top left corner. Click on Autodesk 3ds Max 2018.1 Update, then click Update to start the download. 3ds Max 2018.1 will automatically install once the download is complete.
3. Don’t have the Autodesk Desktop App? No problem. Open up a browser and head on over to your Autodesk Account.
4. Under Product Updates, locate 3ds Max 2018 Update 1 and click Download.
5. Once your download is complete, simply follow the install instructions.
6. Huzzah! You’ve got 3ds Max 2018 Update 1. Next, go ahead and launch 3ds Max.
INSTALL MAX INTERACTIVE
1. Once 3ds Max is open, you’ll notice a new welcome screen.
2. This will open up a browser, where you’ll log in to your Autodesk Account. From here, click on All Products & Services (top left) and select the Downloads button on the 3ds Max tab.
3. Next, you’ll see a pop-up for all available 3ds Max downloads. Find 3ds Max Interactive and click Download now. Follow the download instructions and click the Install button.
4. From here, you’ll open up the 3ds Max Interactive installer. Follow the install instructions, enter your country, accept the License and Services agreement and click Next.
5. You’ll need a serial number for this next step, but luckily you’ve already got one! Use your 3ds Max 2018 serial number from your Autodesk Account and click Next.
6. Once you receive a confirmation that your serial number has been found and activated, click Finish to complete your install. Sit back, relax, stretch, hydrate – oh, wait, it’s done installing already?
7. You’ll get a notification once the installation has successfully completed, and now it’s time to launch 3ds Max Interactive.
8. It’s happening!
9. Decisions, decisions. Pick the appropriate template for the type of project you want to create.
10. …the sky’s the limit! Happy creating.
READY TO GO FROM MAX TO VR?
On Monday, June 12th, we’ll be kicking off our journey from 3ds Max to VR. Join us for 10 days of short video power tutorials to get you up and running with VR content creation, and familiar with the tools and terminology.
Bonus: you’ll walk away with a solid understanding of the fluid workflow between 3ds Max’s powerful 3D tools and the new interactive toolset. Pretty sweet.
Source Credits to:
Design Visualization Team @ Autodesk Blogs
- Next-Gen GPU Rendering
- Scan anything with Photogrammetry
- Smartphone as Texture Scanner
How to use Smartphone as Texture Scanner
Anthony Salvi, a Creative Technologist at Allegorithmic, will show us how to use a smartphone camera to capture a material. Using the new scan filters in Substance Designer 6, it’s possible to transform a smartphone into a material scanner.
First of all, it’s important to find the right balance between quality and cost. For the cost part, we will use a cardboard box, a stack of sheets of tracing paper and an LED light for our lighting setup. For the image capture, we want to use the best process possible, so for our tests we used an iPhone 6s and an iPhone 7 as our cameras with the Adobe Lightroom mobile app.
This app allows you to capture in RAW (uncompressed) format on iOS and Android. Adobe Lightroom also has some very nice features such as full manual shooting and High Dynamic Range modes. However, you can find other apps like ProCam on the App Store or Camerafv5 for Android.
With Substance Designer 6, we have a new filter named Multi-angle to Normal that has 8 possible inputs. To use the filter, we have to produce 8 images at 45° around our material.
To understand the image capture process, we can imagine our material at a much larger scale. Let’s imagine our material is at the scale of a mountain. At this size, our light can represent the sun. As the sun turns around the mountain, we can see the shapes of the shadows it casts. These shadows carry indirect information about the relief of the mountain itself. If we combine enough of this information, at least 4 images (8 for a better result), the algorithm can calculate the relief.
The process is simple. We just need to turn the light 8 times at the same distance around our material.
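The idea behind the Multi-angle to Normal filter, that shading under several known light directions constrains the surface orientation, is the classic photometric-stereo setup. Here is a toy per-pixel sketch of that solve for a Lambertian surface (a conceptual illustration, not Substance Designer’s actual algorithm):

```python
import math

def solve3(A, b):
    # Tiny Gauss-Jordan solver for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def normal_from_shots(lights, intensities):
    # Least-squares photometric stereo for one pixel, assuming a
    # Lambertian surface: intensity_i = albedo * dot(light_i, n).
    # Solve the normal equations A^T A g = A^T b for g = albedo * n.
    AtA = [[sum(l[r] * l[c] for l in lights) for c in range(3)]
           for r in range(3)]
    Atb = [sum(l[r] * I for l, I in zip(lights, intensities))
           for r in range(3)]
    g = solve3(AtA, Atb)
    albedo = math.sqrt(sum(v * v for v in g))
    return [v / albedo for v in g], albedo

# 8 lights spaced 45 degrees apart in azimuth, 45 degrees above the chart:
elev = math.radians(45)
lights = [[math.cos(math.radians(45 * k)) * math.cos(elev),
           math.sin(math.radians(45 * k)) * math.cos(elev),
           math.sin(elev)] for k in range(8)]

# Simulate one pixel of a slightly tilted surface, then recover it:
n = [0.2, -0.1, 1.0]
norm = math.sqrt(sum(v * v for v in n))
n = [v / norm for v in n]
shots = [0.8 * sum(l[i] * n[i] for i in range(3)) for l in lights]
recovered_n, recovered_albedo = normal_from_shots(lights, shots)
```

With 8 evenly spaced lights the system is well conditioned, which is why more angles beat the 4-image minimum in practice.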
To help us, we can draw a circle on our sheet with our 8 angles. You can download our template at this link.
To improve our chart, we can switch the color to black to reduce the lighting bounces from our LED light onto our material sample. Then we add some shapes (square, triangle, moon, star etc.) at the 8 angles, which are useful in post-process. These shapes are used by the photomerge process in Adobe Photoshop. They help Photoshop to produce a quick and accurate merging.
Finally, we draw a 10cm by 10cm square at the center to cut a hole for our opacity shoot.
Next, we will set up the scanbox. The box is as simple as possible, but it’s very important to retain the ability to capture the opacity of a material. To capture opacity, we will use a square hole (10cm x 10cm) at the center of the box and a stack of 6 sheets of tracing paper to diffuse the light through the material, plus a last sheet on top of the chart, as you can see in the images below:
Don’t forget a hole to put the LED light inside the box as well.
To finish our setup and to reduce the cost, we used a simple cardboard tube with a foam core plate to create a stand for our smartphone. To maximize the final frame, the stand size is calibrated to the size of the box. We added a black paper sheet to remove all potential color bounces coming from the cylinder. Finally, the smartphone is attached with 4 pieces of tape.
Here is the final setup with the scanbox and the stand. The stand is not as stable as a tripod and so it needs to be held in hand.
During the capture, it’s always a good idea to neutralize the color shift coming from the lighting. By nature, all lights have a color tint, some stronger than others. For example, a candle is red, a tungsten bulb is orange, the sky is blue. To neutralize this color and keep color consistency, a ColorChecker is required. Basically, it’s a reference card with a printed, calibrated gray scale.
The LED is designed by Manfrotto and produces a well-balanced daylight.
This is our toolbox:
– A cardboard box with a multi-angle chart
– A cardboard stand
– An LED light + tripod + string
– A ColorChecker
– A microfiber cloth
The photo shoot
Our goal will be to capture 2 materials: a leather and a complex fabric. We need to make sure that the only light visible on our material is coming from our LED light. If not, the cast shadows can be wrong and the post-processing in Substance Designer will produce an erroneous computation. So, close the windows or curtains to darken the room, eliminate as much ambient light as possible and turn your LED light on. Next, we place our material on the box.
Be sure the material is not covering the little shapes (square, triangle, moon, star, or whatever you’re using). Then, clean your sample, as dust or hair will often fall onto the material. The most important thing is to not move the material sample during the shoot! If the material moves, the photomerge cannot be done correctly.
A string is attached to the LED light. We use a simple knot on the string as a metering guide between our LED light and the chart points around the circle on the box.
The most important angle is the altitude. See the graph below:
If the angle is too low, the shadow casting could be too long and we stand to lose some information in these shadow areas. The result will be an erroneous normal map computation with a flat result in the darker areas.
The trick is to find an angle to preserve information and provide enough shadowing for a good result.
Now it’s time to work on our camera. Start to clean your lens. Often our smartphone isn’t really clean. It takes 5 seconds to clean the lens with a microfiber cloth.
With Adobe Lightroom mobile you can use the PRO mode and manually set up your shot or use the HDR mode. Both deliver good results.
In the PRO mode, turn the flash off, activate the DNG format, set the ISO as low as possible (25 to 100) and set the White Balance to Daylight. To avoid motion blur in your image, keep a speed around 1/50 sec and adapt your ISO to get a good exposure.
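The trade-off between shutter speed and ISO can be sanity-checked with the standard exposure-value relation. A small sketch follows; the fixed f/2.2 aperture is an assumption typical of phone lenses, not a value from the tutorial:

```python
import math

def exposure_value(f_number, shutter_s, iso):
    # EV relative to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).
    # Two settings with the same EV produce the same image brightness.
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# An assumed fixed-aperture phone lens (f/2.2) at the suggested 1/50 s:
base = exposure_value(2.2, 1 / 50, 100)
# Doubling the ISO compensates exactly for halving the shutter time:
faster = exposure_value(2.2, 1 / 100, 200)
```

This is why keeping the shutter around 1/50 s and adapting the ISO works: each stop of shutter speed you give up can be bought back with one stop of ISO, at the cost of a little more sensor noise.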
You will find more tutorials on using Adobe Lightroom for mobile here.
Another option is to use the HDR mode and activate the Save Uncompressed Original option. In this configuration, you get both the original DNG file and the computed “HDR” DNG.
It can be helpful to add the grid and the level on screen for framing the shot. You can also add a timer for 5 seconds to act as a remote trigger for capturing the image. This will provide a more stable result as manually touching the shutter button on the camera can inadvertently add a shake, which can produce a blurrier image.
If you have an Adobe Creative Cloud account you can import your images into your Adobe Lightroom Desktop library.
Here is a tutorial on importing content.
Once everything is set, it’s time to capture the material. Set the LED lighting power to maximum. Make sure you have enough battery power (smartphone and LED light) and that the LED light doesn’t become too hot. During the shoot, it’s important to keep the same framing as much as possible. However, don’t panic if your pictures are not perfectly aligned. It’s more important to keep all of the shapes (square, triangle, moon, star etc.) on the chart visible in the images.
Now we can shoot a reference with our ColorChecker under the same LED light.
After we take the 8 material shots and 1 color reference image, we can shoot three more pictures. One for the color, one for the new white balance and one for the opacity (for the fabric).
For the color, we need soft and diffuse lighting. In our case, we used the indirect light coming from a window and bounced in an aluminum paper sheet. It’s not perfect, but it’s a good base. Of course, we have more options for this lighting, but this was the simplest in our case.
Don’t forget to shoot a new ColorChecker to set the color correction in post production.
For opacity, open the curtains in the room and setup the LED light inside the box. With the LED light at a low power, we have enough backlight for capturing opacity.
For a better light diffusion, you can add white paper on the inside and have the light centered in the middle of the scanbox: a foam core plate with a hole works well.
Now we have our 8 angles, 1 color, 1 opacity and 2 ColorChecker images. Next, we import our images into Adobe Lightroom Desktop. With Adobe Creative Cloud activated, all images are synced to our hard drive.
In our case, we used the “HDR” dng files, but as we explained, the dng files from the PRO mode deliver a good result as well. For the ColorChecker images, we keep the regular dng file.
Let’s select the ColorChecker image for the LED light and use the White Balance tool.
Then, we have to set the parameters Whites, Blacks, and Clarity at 0 and finally copy and paste the values of the parameters to the 8 angle images.
We can repeat this process for the ColorChecker image in daylight and apply the new value on our color and opacity images. Also, set the parameters Whites, Blacks and Clarity at 0.
For the opacity image, we can use the gradient tool in Lightroom to reduce the vignette effect.
For the last step, we exported all of the images as TIFF 16bit using the Adobe 1998 ICC profile.
Our phone stand was simple and cheap but not very efficient. However, this isn’t a problem. Using the photomerge feature in Adobe Photoshop, it’s possible to merge and align the images. Just load your pictures in photomerge and uncheck the Blend Images Together option.
After that, add a solid black layer at the bottom of the layer stack, crop your image to a square and resize it to 4096 pixels.
Finally, export your layer stack as new individual TIFF files with the remove layers option.
Creating the seamless material in Substance Designer
For the leather material, we created a new Substance using the Physically Based (Metallic/Roughness) template set to 4096×4096 in size. Then we linked our pictures to our project.
The first node we used was the Multi Crop. It’s useful to select which area of the image we want to target for our material. This crop can be critical when the tiling is difficult. But with the procedural approach, you can easily test different crops for a better tiling. After that, we used the Multi Color Equalizer to clean our images and finally a Multi-Angle to Normal node to generate the normal map.
We can add our color image and just copy paste our Multi Crop node and set the input Count to 1.
With the Color Equalizer node we can balance the light and dark values to produce a better color map for our material.
Now it’s time for the tiling part. With the new Smart Auto Tile node, you can quickly create a seamless material. Don’t hesitate to move, rotate and transform your pattern to get the best result possible.
For more information, you can find a tutorial here.
The new Color Match node is perfect for changing the color of our leather while retaining all the details. It gives you an endless variety of colors options with the benefit of mixing with the initial color captured data.
The Specular Level can be vital to achieving realistic results. With the Metallic/Roughness definition, when the Metallic is set to 0, the material is understood to be a dielectric and the reflectance value at the Fresnel zero angle or f0 is set to 4% reflective. This works for most common dielectric materials, but some dielectrics can have a different Index of Refraction or IOR. The Specular Level can be used to override the default 4% value used in the metallic/roughness definition. Here, we have a leather and we can use the Specular Level to set a custom level for the leather material.
To drive this channel, just add a new Output node and set the usage and identifier to specularLevel. Then adjust the value by using a Uniform Color node in Grayscale Mode. A value of 60 is a good start for a leather.
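The f0 math behind the Specular Level channel is easy to check yourself. A small sketch, using the standard Fresnel normal-incidence formula and the common metallic/roughness convention in which a specular level of 0.5 maps to the default 4% reflectance (that mapping is assumed here, not stated in the tutorial):

```python
def f0_from_ior(ior):
    # Reflectance at normal incidence for a dielectric (Fresnel):
    # f0 = ((n - 1) / (n + 1))^2
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def specular_level(f0):
    # Assumed convention: specular level 0.5 <-> 4% f0, i.e. the
    # 0..1 slider spans 0..8% reflectance (f0 = level * 0.08).
    return f0 / 0.08

water = f0_from_ior(1.33)   # about 2% reflectance
glass = f0_from_ior(1.5)    # about 4% reflectance, specular level ~0.5
```

If the tutorial’s value of 60 is read on a 0–255 grayscale (an assumption), it corresponds to roughly 60/255 of the slider, i.e. under 2% reflectance, which fits a fairly matte dielectric like this leather.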
For the Height map, we used the node Normal to Height HQ and set the quality to High.
At last, we worked on the Roughness and used the Height map to drive 2 values between the bumps.
The easier part is for the Metallic channel. Just add a Uniform Color node, set on Grayscale and at 0.
After these few steps, finally, here is the complete graph.
Our leather sample was pretty matte, and after some tests we found good values for our leather, and voilà 🙂
Creating the fabric material
The process is quite similar, with the same Specular Level output set at 60, except that we also have an opacity channel.
Using a Color Equalizer and a Color to Mask node, we can have good control over the opacity mask for our material. We then just need to add an Output node with the Usage and Identifier set to opacity.
One caveat with the Smart Auto Tile is that there is not an opacity input. To work around this, we can plug the opacity in the Height input and use the Height output for our opacity channel.
As with the leather, we can use the Color Match node to easily transform our scanned material into a hybrid material with multiple color options.
But of course, we can use our color image as well 🙂
And at the end, the complete graph:
At the end of this process, we have 2 hybrid materials, scanned with our smartphone and ready to use in a PBR pipeline. All Substances and images are available at this link.
Don’t hesitate to play with material settings and explore the scanning process. Next time you need some material references, grab your smartphone, build your own scanbox and have fun with Substance Designer 6!
One more thing
In a previous blog post about photogrammetry, we showed a specific use case. Photogrammetry is really fun, but it means your material is dependent on your object. Maybe you would like something more versatile. Fortunately, with Substance Designer 6, it’s easy to convert photogrammetry maps into a single tiling material. After this step, your material will be usable on any mesh.
The job was done in 3 simple steps:
- Crop the desired area in our map exported from Substance Painter.
- Make it tile with the Smart Auto tile node.
- Balance the color with the Color Match node to erase/smooth the color variations.
And now, we can apply this texture on a simple cylinder and view our bark 🙂
You can find the Substance file and images here.
As you can see, you have multiple usages for these new scanning nodes in Substance Designer 6.
We hope you will find this one interesting and that it gives you more ideas on what you can achieve with Substance Designer 6!
Credits: Anthony Salvi From: allegorithmic
- Procedural Wood Pixel Patterned
Procedural Wood Pixel Patterned for Archviz
Here is the entire graph of the Pixel Patterned substance. This wooden block assembly is used in architecture, often to delimit spaces in a house or simply for interior decoration. The material is aesthetically pleasing and gives a natural feel to the house.
First of all, we’ve defined the essential points of the substance, namely the choice of color, the visible wood veins and the number of blocks.
For the color, I used wood stain shades used in the industry, with the Gradient Map node to choose the hue. I also used the Glaze Selection parameter and varied the hue areas using the Noise Selection in order to have as many choices as possible.
For the wood veins, I have created several patterns that can be tweaked using the Vein Selection Slider.
For the top of the sticks, I mixed multiple grunge and noise generators with a warp and a spherical shape.
The slope of the brick is essential for this substance. Basically, it is a linear gradient along the X-axis from black to white, then from white to black. The problem is that when the peak is shifted toward one side, the gray values get crushed and no longer follow a linear gradient curve.
After several tests, we used the Pixel Processor to manage the slope. Besides being very fast, it allows us to keep the linear gradient to its maximum range when moving the X-axis with the slider.
Here is the view of the Pixel Processor to control the X-axis with a slider.
Next, simply plug it into the Tile Generator or Tile Sampler and expose the desired number of bricks in X and Y.
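The slope trick can be expressed as a tiny per-pixel function, the kind of expression a Pixel Processor evaluates. This is a conceptual sketch of the range-preserving gradient, not the actual node graph:

```python
def slope(x, peak):
    # Per-pixel asymmetric triangle gradient: rises 0 -> 1 over
    # [0, peak], falls 1 -> 0 over [peak, 1]. Computing each pixel
    # directly keeps the full black-to-white range no matter where
    # the peak sits, unlike squashing a pre-made symmetric gradient.
    if x <= peak:
        return x / peak if peak > 0 else 1.0
    return (1.0 - x) / (1.0 - peak) if peak < 1 else 1.0

# One 32-pixel row with the peak pushed far to the left:
row = [slope(i / 31, peak=0.2) for i in range(32)]
```

Exposing `peak` as a slider gives the brick-angle control, and because every pixel is recomputed, the gradient stays linear at any angle.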
For realistic substances like terracottas, the base color is key to the realism of the substance. To achieve this, we needed to assemble several noises at different scales, which makes it possible to obtain information with varied details.
Some of the results :
For the Bricks Bond Variations substance, every pattern was created in a subgraph. All subgraphs are composed as shown in the image (below). The information is then assembled in a different channel with the RGBA Merge node. We preferred to create them this way in order to make them reusable in other graphs.
Once created, we can find them in the brick substance. With the Grayscale Conversion node, we can retrieve information from each channel. This method also allows us to add new patterns in the future for all substances that use them.
In each pattern, we will find the pattern in R, the random color in G, the mask if there is one in B and the slope in A.
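That channel-packing scheme is easy to picture in code. A toy sketch of the RGBA Merge / Grayscale Conversion round trip on flat pixel lists (an illustration of the packing idea, not the Substance nodes themselves):

```python
def rgba_merge(pattern, rand_color, mask, slope):
    # Pack four grayscale maps into one RGBA image, mirroring the
    # RGBA Merge node: pattern -> R, random color -> G, mask -> B,
    # slope -> A. Images here are flat lists of 0..1 floats.
    return list(zip(pattern, rand_color, mask, slope))

def channel(rgba, name):
    # Mirror the Grayscale Conversion step: pull one channel back out.
    idx = "RGBA".index(name)
    return [px[idx] for px in rgba]

packed = rgba_merge([1.0, 0.0], [0.3, 0.7], [1.0, 1.0], [0.5, 0.5])
```

Packing the four maps into one image is what makes the pattern subgraphs reusable: any brick graph can unpack exactly the channel it needs.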
For the brick depth details I used a simple technique with safe transform and masking. With a Slope Blur node I created the rock aspect and the volume of the brick depth, and with a safe transform I randomized the result x number of times.
I used the Blend node to merge them. To do so, I select the result of the pattern with the randomized colors as a mask input. For each blend I used a Histogram Shift node to change the choice of mask obtained. This saves me time by varying a single noise result.
Here is the layer of masks:
Some different substance results:
Terrazzo Generator Substance:
Terrazzo is commonly used in architecture. This material is easily recognizable due to its inserts, which are often of variable size. To cover a wide choice of terrazzo more easily, we have chosen to control the inserts with layers. Thus we find in the graph the creation of the inserts.
The starting point for this substance is to be able to control all aspects of this material. I divided the inserts into 3 levels.
For each insert level, the user can handle density, size, color, color variation, intensity of normal / height, roughness, metallic.
In order to avoid too many parameters, we had to make some compromises, notably for color. Instead of exposing X colors for each insert level, we created a Color Share parameter that allows us to pick colors from the other insert levels with a slider.
The creation of this Terrazzo generator has been made based on the feedback of Benoit Campo from Paris Picture Club.
For the Parquet substance, we worked so that we could reuse the filters for each type of wood. Thus we have in a subgraph the type of wood, a subgraph for the type of pattern and a final filter to split the wood material into boards.
For each type of wood, you can choose a type of finish (natural, varnish, etc.) in the details, which is super useful. Finishes save time in the choice of flooring.
It is also possible to change the pattern used for the layout of the floor:
Now you know more about how we designed this first Substance Source – Architecture Selection update together with Gaëtan Lassagne. Now it’s your turn to play!
Credits: Damien Bousseau From: allegorithmic
- V-Ray Denoiser Quick Tutorial
Take a look at how the V-Ray Denoiser works, and what can be achieved with it. You’ll also learn how to use the V-Ray Denoiser as a standalone animation tool. You can also download the 3d model below for you to enjoy and try:
Credits to: Chaosgroup
Free V-Ray Denoiser 3D Scene:
- Interactive Denoising with Nvidia AI
Interactive Denoising with Nvidia Artificial Intelligence
Good design is obvious: You know it when you see it.
What’s less apparent is the amount of trial and error behind the process. Designers, manufacturers and other creative types have to try multiple variations of an idea. Each time, they render an image, then examine, adjust, validate and try yet another variation.
The more time they have to iterate, the better the final outcome. Of course, time is money, and deadlines loom.
NVIDIA CEO and founder Jensen Huang showed at the GPU Technology Conference today how NVIDIA is advancing the iterative design process to accurately predict final renderings by applying artificial intelligence to ray tracing. (Ray tracing is a technique that uses complex math to realistically simulate how light interacts with surfaces in a specific space.)
The ray tracing process generates highly realistic imagery but is computationally intensive, and can leave a certain amount of noise in an image. Removing this noise while preserving sharp edges and texture detail is known in the industry as denoising. Using NVIDIA Iray, Huang showed how NVIDIA is the first to make high-quality denoising operate in real time by combining deep learning prediction algorithms with Pascal architecture-based NVIDIA Quadro GPUs.
It’s a complete gamechanger for graphics-intensive industries like entertainment, product design, manufacturing, architecture, engineering and many others.
The technique can be applied to ray-tracing systems of many kinds. NVIDIA is already integrating deep learning techniques into its own rendering products, starting with Iray.
How Iray Interactive Denoising Works
Existing algorithms for high-quality denoising consume seconds to minutes per frame, which makes them impractical for interactive applications.
By predicting final images from only partly finished results, Iray AI produces accurate, photorealistic models without having to wait for the final image to be rendered.
Designers can iterate on and complete final images 4x faster, for a far quicker understanding of a final scene or model. The cumulative time savings can significantly accelerate a business’s go-to-market plans.
To achieve this, NVIDIA researchers and engineers turned to a class of neural networks called autoencoders. Autoencoders are used for increasing image resolution, compressing video and many other image processing tasks.
Using the NVIDIA DGX-1 AI supercomputer, the team trained a neural network to translate a noisy image into a clean reference image. In less than 24 hours, the neural network was trained using 15,000 image pairs with varying amounts of noise from 3,000 different scenes. Once trained, the network takes a fraction of a second to clean up noise in almost any image — even those not represented in the original training set.
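The noisy-to-clean training idea can be made concrete with a deliberately tiny stand-in: a single linear layer trained on 1-D signals, instead of the convolutional autoencoder on images that NVIDIA describes. This is purely illustrative and shares nothing with their actual network beyond the training setup (pairs of noisy inputs and clean targets):

```python
import random

random.seed(0)
N = 8  # toy "image": a signal of 8 values

def sample_pair():
    # Clean target: a flat signal at a random level; noisy input:
    # the same signal plus simulated sensor noise.
    level = random.random()
    clean = [level] * N
    noisy = [c + random.gauss(0.0, 0.1) for c in clean]
    return noisy, clean

# Model: y = W x + b, trained by gradient descent on MSE to map
# noisy inputs to their clean references.
W = [[0.0] * N for _ in range(N)]
b = [0.0] * N
train = [sample_pair() for _ in range(60)]
lr = 0.2
for epoch in range(200):
    dW = [[0.0] * N for _ in range(N)]
    db = [0.0] * N
    for noisy, clean in train:
        pred = [sum(W[i][j] * noisy[j] for j in range(N)) + b[i]
                for i in range(N)]
        err = [p - c for p, c in zip(pred, clean)]
        for i in range(N):
            db[i] += err[i]
            for j in range(N):
                dW[i][j] += err[i] * noisy[j]
    m = len(train)
    for i in range(N):
        b[i] -= lr * db[i] / m
        for j in range(N):
            W[i][j] -= lr * dW[i][j] / m

# Evaluate on fresh data: the learned map should beat the raw input.
def mae(xs, cs):
    return sum(abs(x - c) for x, c in zip(xs, cs)) / len(xs)

test_pairs = [sample_pair() for _ in range(50)]
noisy_err = sum(mae(n_, c) for n_, c in test_pairs) / len(test_pairs)
denoised_err = sum(
    mae([sum(W[i][j] * n_[j] for j in range(N)) + b[i] for i in range(N)], c)
    for n_, c in test_pairs) / len(test_pairs)
```

Even this toy model learns to suppress noise it has never seen, which is the same generalization property that lets the trained Iray network clean up images outside its training set.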
With Iray, there’s no need to worry about how the deep learning functionality works. We’ve already trained the network and use GPU-accelerated inference on Iray output. Creatives just click a button and enjoy interactivity with the improved image quality with any Pascal or better GPU.
Iray deep learning functionality will be included with the Iray SDK we supply to software companies, and exposed in Iray plugin products we produce later this year. We also plan to add an AI mode to NVIDIA Mental Ray. We expect renderers of many kinds to adopt this technology. The basis of this technique will be published at the ACM SIGGRAPH 2017 computer graphics conference in July. Learn more here.
Knowledge Source Credits to :
Nvidia Artificial Intelligence
- Vray Resumable Rendering
What is resumable rendering?
In short, resumable rendering is the ability to have incomplete renders resume where they left off. The rendering could have stopped because of some outside circumstance, such as a power failure, or have been stopped based on the needs of the user.
In the next 3.5 service pack of V-Ray, we will be introducing resumable rendering as a new feature. This will first come out in V-Ray for 3ds Max, and then be released for the other platforms. To use it, simply turn on Resumable Rendering in the VFB settings in the render dialog, or pass the -resume=1 option to V-Ray Standalone. You will also have to set a time interval for incremental saves if you are using progressive rendering.
Please note that this feature is still under development and some of the functionality may change in the final release of the service pack.
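For V-Ray Standalone, resuming an interrupted render from the command line might look like the sketch below. The scene and output paths are placeholders; only the `-resume=1` flag comes from the description above, and exact flag names may differ in your V-Ray version:

```
# Resume an interrupted render; V-Ray picks up the sidecar
# .vrimg/.vrprog file saved next to the output image.
vray -sceneFile=myScene.vrscene -imgFile=output/beauty.exr -resume=1
```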
Two types of resumable rendering in V-Ray:
As you know, V-Ray has different ways to render. For resumable rendering, the difference is mainly between bucket rendering and progressive rendering.
With bucket rendering, the case is fairly simple. V-Ray writes the image as each bucket drops as part of a sidecar .vrimg file. When the rendering is resumed, V-Ray reads the partial .vrimg file and picks up on the next buckets that need to be rendered. The light cache is also stored in the .vrimg file so V-Ray doesn’t need to recompute it when resuming the render.
With progressive rendering, you need to set a time interval in the Resumable Rendering settings which tells V-Ray how often to save the state of the rendered image so that it can be resumed from that point forward. V-Ray will save a sidecar .vrprog file that has all the information that V-Ray needs to resume the progressive rendering. In addition to the contents of the progressive buffer, the light cache is also stored in the file so that it doesn’t have to be recomputed when resuming. After stopping and resuming the rendering, V-Ray will read the .vrprog file and pick up the rendering process where it left off.
A few things to consider:
Since this process relies on either a sidecar .vrimg or .vrprog file to be saved, it will save that file in the directory where you are saving the final image. Both files can be large as they contain a lot of data. This is especially true for the .vrprog as it has all the needed data for the whole image, not just the completed buckets.
When a rendering is resumed, some preparation still has to be redone before rendering starts, including scene prep and texture loading. The GI light cache is saved, but the irradiance map is not, and it has to be recalculated.
Rendering got stopped by outside forces:
There are many reasons that an outside situation could have caused a rendering to stop. The computer could have had a power failure or run out of RAM, or you could have been rendering on the Cloud, such as on Google Compute Engine, which offers Preemptible VMs. In this last case, the VMs are significantly cheaper, at 20% of the cost of standard VMs. The issue is that Preemptible machines can be recalled at very short notice. If this happens, a new VM can come back up and resume where the rendering left off.
Choosing to stop a render:
A user may choose to stop the rendering on purpose. This could be because the current rendering is taking too long and a higher-priority rendering needs to be done right away. In that case, you can easily stop the rendering without losing any progress when you resume it.
Another case could be set up as part of the render farm policy: all renderings are stopped after a certain amount of time to ensure that every shot gets rendered overnight. If you are using progressive rendering, a grainier version of the render will still be available to review, and it can be resumed if the rendering looks good enough to warrant the full quality.
Incremental animation rendering:
Depending on your render manager, a simple script could be set up so that the entire animation is rendered at, for example, 5 minutes a frame and then resumed to render for another 5 minutes per frame until the rendering has reached the desired quality. This means that no matter what, you will have a version of the animation that can be seen in its entirety and that will continue to incrementally get better with every pass it gets on the farm.
Keep in mind that your scene still needs to be loaded and prepped for every instance of the resumed rendering, which in itself can take some time. So it is recommended that you avoid very short rendering increments, such as 30 seconds.
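A render-manager script for this round-robin approach could be sketched as follows. The render commands themselves would be farm-specific and are not shown; the point is the scheduling logic, which gives every frame its first pass before any frame gets a second one:

```python
def pass_schedule(frames, max_passes):
    """Order jobs so every frame gets pass 1 before any frame gets pass 2.

    Each job would resume the frame's previous result (e.g. via -resume=1)
    and render for a fixed increment, so a complete, ever-improving
    animation exists after every full sweep of the frame range.
    """
    return [(frame, p) for p in range(1, max_passes + 1) for frame in frames]

# First sweep renders frames 1..3 once; the second sweep refines them in order.
jobs = pass_schedule([1, 2, 3], max_passes=2)
print(jobs)  # [(1, 1), (2, 1), (3, 1), (1, 2), (2, 2), (3, 2)]
```

Because each job pays the scene-load and prep cost described above, the per-pass time increment should be long enough that this overhead stays a small fraction of each job.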
Resumable rendering is an important feature that could drastically change your workflow and the choices you make while rendering. It could also greatly reduce the anxiety associated with long renders. We have only listed a few examples of use cases for resumable rendering, but we are eager to hear of other ideas and examples of where this feature could change the way you work.
Knowledge Source Credits to : Chaosgroup Labs
- Botelya ng Buhay
Check out some tricks and tools of the trade in this making of “Bottles of Life,” and see where the rat is hiding 🙂
Click the 3dteamz slider < Left – Right > and hit the Play button below:
from Vray Certified Professional: Farid Ghanbari
- What is Visual Composition
Composition! Everybody has heard of it, but what is it?
Imagine purchasing a book and opening it, only to find that the pages were out of order, the text is hard to read, and the story rambles in no particular direction. A badly composed image is exactly the same.
Visual composition is about arranging the elements in a scene in a pleasing and easy-to-read manner. Hit the Play button below to learn the very important rule of thirds in the jungle, which might help us improve our visuals.
Enjoy and don’t forget to take the most important notes every 3D Artist should know…
Credits to: Andrew Price
www.3dteamz.com – We don’t just share, We carefully pick the right knowledge from top professionals and experienced people in the industry that might help you succeed!
- Exterior Lighting Setup
One of the most anticipated tutorials is finally out. We admire his work and the way he delivers his renderings. He never fails to amaze us with the quality of the output he shares. We are thankful that he let us publish his first tutorial on Exterior Lighting Setup for everyone to learn from. Please scroll down for our Q&A, his final render, and his settings.
Featuring: Junangelo Ran
1. Can you tell us a little bit about yourself, like how long you have been working in this industry?
I’m Junangelo Ran. I graduated with a BSIT in 1998 and started working in this kind of field in 2000. First I worked as a CAD operator, then as a CAD, 3ds Max & Photoshop instructor, and now as an Interior Designer/Visualizer.
I’ve been doing rendering since 2004, starting with 3D Viz scanline, and by 2008 I had started using 3ds Max with V-Ray.
2. Can you share the process of the rendering you shared with us?
Yes, I am very happy to share what I have learned from Google and YouTube 🙂.
Set the gamma first
Vray Setting/Final Render
Vray Physical Camera & Vray Sun Setting
Vray Domelight setting for Environment as GI & Reflection
Scene (Camera & Sun location)
Raw Render & Render Elements
• Vray ExtraTex is for additional ambient occlusion
• Vray ZDepth is for blurring or some fog for the back plants and trees
• Vray Wirecolor is for quick selection in Photoshop
• Vray Reflection is for additional reflection
Post Process in Photoshop
3. What tips can you share with the community for working on this kind of project?
Do not stop searching and practicing. 🙂
We would like to thank him for sharing his knowledge with everyone. Please stay tuned for his next tutorial; it’s coming out very soon…
- Camera Simulator
This tool will help you understand and play with basic DSLR camera controls and see their outcome, without buying an expensive actual camera.
Personally, I tried this tool to match and experiment with V-Ray physical camera settings. You may like to try it yourself too. Enjoy simulating below:
-ISO refers to how sensitive the “film” will be to the incoming light when the picture is snapped. High ISO settings allow for faster shutter speeds in low light but introduce grain into the image. Low ISO settings produce the cleanest image but require lots of light. Generally, you will want to use the lowest ISO setting that your lighting will allow.
-Shutter speed is how long the shutter needs to be open, allowing light into the camera, to properly expose the image. Fast shutter speeds allow you to “freeze” the action in a photo, but require lots of light. Slower shutter speeds allow for shooting with less light but can cause motion blur in the image.
-Aperture, or f-stop, refers to how big the hole will be for the light to pass through when the shutter is open and the picture is snapped. Lower f numbers correspond with larger holes. The important thing to remember is this: the higher the f number, the more things in front of and behind the subject will be in focus, but the more light you will need. The lower the f number, the more things in front of and behind the subject will be out of focus, and the less light you will need.
-Moving this slider is the same as zooming in and out with your lens. A wide, zoomed out setting creates the greatest depth of field (more things are in focus) while zooming in creates a shallower depth-of-field (typically just the subject will be in focus).
-Use this slider to simulate how close or far you are in relation to the subject.
-The exposure modes of an SLR let you control one setting while the camera automatically adjusts the others. In Shutter Priority mode, you set the shutter speed while the camera sets the aperture/f-stop. In Aperture Priority mode, you set the aperture/f-stop while the camera sets the shutter speed. Manual mode is fully manual—you’re on your own! Refer to the camera’s light meter to help get the proper exposure. Although every real SLR camera has a “fully automatic” mode, there is not one here—what’s the fun in that?
-Lighting is the single biggest determinant of how your camera needs to be set. With only a few exceptions, you can never have too much light. Use this slider to experiment with different indoor and outdoor lighting conditions.
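The interplay of aperture, shutter speed, and ISO can be expressed numerically. The sketch below uses the standard exposure-value formula (it is not taken from the simulator itself) to show that one stop of aperture, one stop of shutter speed, and one doubling of ISO are interchangeable:

```python
from math import log2, sqrt, isclose

def exposure_value(f_number, shutter_s, iso=100):
    """Exposure value: settings that admit less light have a higher EV.

    Stopping down one f-stop, halving the shutter time, or halving the
    ISO each shifts the result by exactly one stop (one EV unit).
    """
    return log2(f_number ** 2 / shutter_s) - log2(iso / 100)

# f/2 at 1/50s, ISO 100 ...
a = exposure_value(2.0, 1 / 50)
# ... is the same exposure as one stop smaller aperture (f/2.8)
# compensated by doubling the ISO to 200:
b = exposure_value(2.0 * sqrt(2), 1 / 50, iso=200)
print(isclose(a, b))  # True
```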
- Photorealism Explained
How to achieve CG photorealism: some tips and tricks the guru has gathered over the last 11 years to make images more realistic. Check them out below:
Why photorealism is the most important skill you can focus on
The 4 Building Blocks of Photorealism
Simple tips and tricks to make more photorealistic renders
If you’re a CG artist hoping to work in the industry, photorealism is the most important skill you can focus on. It not only helps you to learn and understand how real life looks (a crucial step for creating cartoonish, exaggerated styles), but it is also highly in demand in Hollywood, gaming studios, and new industries.
If you can achieve photorealistic results, you’ll go far.
But as any artist will tell you, photorealism is hard!
So how do you achieve it?
In this 54-minute video you’ll discover all of the above. Click the Play button below.
Credits to: Andrew Price
- V-Ray RT GPU Shaders Breakdown
Be inspired by the images and shaders breakdown video tutorial below:
Basil, wood, and detail shaders breakdown. V-Ray RT GPU video tutorial.
Credits to: Dabarti