V3.1 is around the corner! – Juan, Head of Maxwell Render Technology
First of all, on behalf of the whole Maxwell team, I would like to welcome you to our new official blog. Requests for specific topic areas are very welcome, so please let us know! I am going to kick off with a sneak peek at what’s coming in V3.1.
Over the last couple of months the development team has been working in many areas. After a major release like v3.0 there is always a stability period in which part of the developers’ work involves preparing minor updates to make Maxwell as robust and stable as possible, while at the same time we focus our efforts on the development of new features. I am happy to announce that the first “point” release after 3.0 is almost here, and it will include cool new features that we are sure you will love.
Lights. Camera. Render.
One area we have worked on a lot is improving the workflow when using lights. One of the first things we say to people approaching Maxwell for the first time is that in Maxwell, just as in real life, light is emitted by real objects rather than by abstract point lights with zero volume. Whilst this is something that everybody quickly understands, many people are actually used to spot lights and other kinds of “incorrect” lights.
This happens especially to those who were rendering before the Physically Based Rendering (PBR) era started (incidentally, it’s been 10 years now since we embarked on our pioneering mission, when many people thought we were crazy and said that PBR was only useful for academic purposes!). So we find that people are usually very comfortable with the way native lights work in their platforms, and they would love to see Maxwell translate those native lights – including “non-physically based” lights such as spots – into Maxwell lights in a transparent and efficient way. Well, in 3.1 we have added exactly this, and the emitter workflow is now much more efficient and flexible.
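The core idea behind such a conversion – mapping an abstract zero-volume light onto a physical emitter – can be sketched in a few lines. This is only an illustrative sketch, not Maxwell’s actual conversion code: the emitter radius and the watts-based units are assumptions for the example.

```python
import math

def point_light_to_emitter(power_watts, radius_m=0.01):
    """Approximate an abstract point light as a small spherical emitter.

    Returns the surface emittance (W/m^2) that a physical emitter sphere
    of the given radius needs in order to radiate the same total power.
    """
    area = 4.0 * math.pi * radius_m ** 2  # surface area of the emitter sphere
    return power_watts / area

# A 100 W abstract point light mapped onto a 1 cm-radius emitter sphere:
emittance = point_light_to_emitter(100.0, radius_m=0.01)
```

The smaller the emitter, the higher the emittance needed to match the original light’s power, which is why converted lights still behave like near-point sources while remaining physically plausible.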
We have also implemented support for OpenVDB. This format for volumetric data, developed and maintained by DreamWorks, is becoming a standard in Film & VFX, and its usage can be expanded to other industries as well. OpenVDB is a voxel-based approach that allows you to run simulations with far fewer particles than regular simulations and then “displace” the resulting voxels to add detail. The resulting voxel detail is stored in OpenVDB format and sent to the renderer. Without OpenVDB, you would need billions of particles to get detailed displaced smoke over a large area, which is not too practical.
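The key property that makes a voxel approach practical is sparsity: only occupied cells are stored, so a huge, mostly empty volume stays cheap. As a rough conceptual sketch (this is not OpenVDB’s actual data structure, which uses a hierarchical tree, and the voxel size is an arbitrary assumption):

```python
def to_sparse_voxels(particles, voxel_size=0.5):
    """Bucket particle positions into a sparse voxel grid.

    Only occupied voxels get an entry, so memory scales with the
    occupied region rather than with the full bounding volume.
    """
    grid = {}
    for x, y, z in particles:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key] = grid.get(key, 0) + 1  # per-voxel particle count (density)
    return grid

particles = [(0.1, 0.2, 0.3), (0.2, 0.1, 0.4), (5.0, 5.0, 5.0)]
grid = to_sparse_voxels(particles)
# Only two voxels are stored, even though a dense grid covering the same
# bounding box would need 11 x 11 x 11 cells.
```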
New network system
Besides the official features included in this release, we will also release a technology preview of the new Maxwell job manager/network system. This new tool has been built by our network programming experts based on our experience over the years, following a new approach that brings a lot of advantages. Firstly, it is a much more robust and fail-safe system – it will recover itself from failures that could not be properly handled in the old system. Jobs can be monitored from a web browser, which means that you can use any computer, tablet or smartphone for that. Job descriptions are serializable, which means you will be able to copy and paste jobs, create jobs by copying settings from previous jobs, and so on. There is also an API that allows you to create scripts for tasks like adding jobs, or to integrate this system into third-party tools.
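To see why serializable job descriptions matter, consider this sketch of the pattern. The field names and schema here are purely hypothetical examples, not the real Maxwell job manager’s API; the point is that a job that round-trips through plain data can be stored, copied, and cloned with tweaked settings:

```python
import json
from dataclasses import dataclass, asdict, replace

@dataclass
class RenderJob:
    # Hypothetical fields for illustration; the real job manager
    # defines its own schema.
    scene: str
    sampling_level: int = 16
    width: int = 1920
    height: int = 1080

def serialize(job):
    """A serialized job can be copied, pasted, or submitted via an API."""
    return json.dumps(asdict(job))

base = RenderJob(scene="shot_010.mxs", sampling_level=20)
# Create a new job by copying the settings of a previous one:
clone = replace(base, scene="shot_011.mxs")
```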
We will provide access to this new network system as a separate download at the same time as the 3.1 release, and we intend for it to reach a sufficient level of maturity (for release) shortly afterwards, so it can be included in the official package and finally replace the old network system.
Following our plans to make the Maxwell camera model as similar as possible to a real camera, we have implemented support for white balance and custom film responses. We have also added support for UDIM textures to improve the connection with texturing applications such as Mari. Studio now includes a tool for creating 360º view animations in a few clicks. There are many more things included in 3.1; we will post a complete list of changes very soon. Oh, and I should not forget to mention that this is a FREE upgrade for all 3.0 customers!
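For readers unfamiliar with UDIM: it is a convention for splitting a model’s textures across numbered tiles in UV space, with tile 1001 covering UV (0–1, 0–1) and numbers increasing along U in rows of ten. A minimal sketch of the standard mapping from a UV coordinate to its tile number:

```python
def udim_tile(u, v):
    """Map a UV coordinate to its UDIM tile number (standard row layout:
    tiles 1001-1010 cover v in [0, 1), 1011-1020 cover v in [1, 2), etc.)."""
    return 1001 + int(u) + 10 * int(v)

udim_tile(0.5, 0.5)   # 1001, the first tile
udim_tile(1.2, 0.3)   # 1002, one tile over in U
udim_tile(0.3, 1.7)   # 1011, the first tile of the second row
```

This is why applications like Mari can paint one logical texture across many image files: the renderer just resolves each UV lookup to the right tile.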
With the 3.0 release we also created the new Early Builds portal, where we upload things that are not completely ready for production yet but can still be useful for many people. We have decided to upload v3.1 to that site so that all customers who want to can access the beta and give their feedback.
What lies ahead . . .
What will happen after 3.1? Obviously there are some things we can say and some we can’t… and that’s mostly because we don’t want to raise expectations and promise something at an early R&D stage that may fail, need a change of course, or simply take three times longer than we anticipated. And in the end we are not giants, so we do feel we should protect our roadmap.
Having said that, we will try to share with you as much as possible. One of the things we have been focusing on most (besides the 3.1 features) is skin rendering. Skin rendering, and the rendering of subsurface-scattering materials in general, is one of the biggest challenges in unbiased rendering. We are optimistic about our developments, and we hope we can soon give you a system that will allow you to render top-quality multilayered skin efficiently.
You have probably seen the video of our GPU prototype from the last SIGGRAPH. We are continuing to work on it and hope to give you more info soon.