A Compelling Example of AI in CAD: Autodesk’s Take on Generative Design

Artificial intelligence (AI), machine learning, deep neural networks (DNNs) — these topics and buzzwords have captured the lion’s share of technology press in the past few years. But for those who’ve seen more than a few hype-and-bust cycles, the headlines raise the question: Is this stuff real and impactful, or just another round of marketing exuberance that will disappoint in the long run? The answer in a big-picture context is clear: In the long term, artificial intelligence and machine learning will fundamentally transform not only technology, but likely society as a whole.
The trickier (and more relevant) question for many is: How and when will it do so, and in which markets and applications will it have the most impact? Certainly, professional computing applications are among those most clearly in the crosshairs of machine learning, and of those professional spaces, CAD presents some of the most compelling uses I’ve seen for the technology. We’re seeing both hardware and applications incorporate AI to speed up and improve the tools CAD professionals already rely on, while vendors such as Autodesk are planting new seeds — already starting to bear fruit — for transformational changes to traditional workflows, first and foremost in generative design.
Early — and Unexpected — AI Opportunities
Uses for machine learning (which, for practical purposes here, you can equate with AI) that most of us never envisioned have already evolved to aid an area that CAD workflows have long emphasized: 3D visualization. NVIDIA, in conjunction with its deployment and promotion of AI-accelerating graphics processing units (GPUs), is promoting two uses in particular: smart-sampled anti-aliasing and raytrace de-noising.
Those familiar with raytracing, the most common yet extremely compute-intensive technique for rendering photorealistic images, know that the image does not appear in full fidelity in a single pass. Instead, it converges over time as the engine fires rays into the scene and resolves the lighting contribution along each ray’s path. To accelerate the process, NVIDIA (and others) have incorporated a DNN into the raytracer to speed image “convergence” by decreasing the computational load in the latter stages of rendering. Once the partially rendered image congeals into something the network can recognize, the AI fills in the remaining rays/pixels, de-noising the image and wrapping up the time-consuming rendering process far faster than full per-ray processing would allow, and without temporal artifacts.
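To make the idea concrete, here is a minimal Python/NumPy sketch of progressive rendering with an early-stop denoiser. Everything in it is illustrative rather than NVIDIA’s actual pipeline: the “scene” is a smooth gradient, the per-pass noise stands in for Monte Carlo ray variance, and a plain box filter plays the role of the trained DNN.

```python
# Conceptual sketch only: a box filter stands in for the trained denoiser.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ground truth" radiance for a 64 x 64 frame: a smooth gradient.
truth = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))

def noisy_pass():
    """One progressive pass: true radiance plus per-ray Monte Carlo noise."""
    return truth + rng.normal(scale=0.3, size=truth.shape)

def accumulate(passes):
    """Average several noisy passes, as a progressive raytracer would."""
    return np.mean([noisy_pass() for _ in range(passes)], axis=0)

def denoise(img, radius=2):
    """Stand-in for the trained DNN denoiser: a plain box filter."""
    pad = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

early = accumulate(8)     # stop early: still visibly noisy
full = accumulate(256)    # near-converged reference

rmse = lambda img: np.sqrt(np.mean((img - truth) ** 2))
print(f"8 passes, raw:      RMSE {rmse(early):.4f}")
print(f"8 passes, denoised: RMSE {rmse(denoise(early)):.4f}")
print(f"256 passes, raw:    RMSE {rmse(full):.4f}")
```

On a smooth scene like this one, the eight-pass denoised frame lands close to the 256-pass reference; the value of a trained network over a naive filter is that it learns to preserve edges and detail that simple smoothing would smear.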
NVIDIA has also found a way to leverage a trained DNN to speed up the more common interactive 3D graphics we use for real-time modeling and viewing. Super-sampling, an established, high-quality technique for reducing the visual artifacts of aliasing in computer-generated imagery, is notoriously expensive in its consumption of GPU processing cycles: it essentially requires multiple passes through all or part of the 3D rasterization pipeline, with a different, shifted sample point for each pass.
Super-sampling would be much more appealing if the number of passes could be reduced without adversely affecting quality, trimming the performance hit. That’s the goal of NVIDIA’s Deep Learning Super Sampling (DLSS), which reduces the input sample count by using a GPU-accelerated DNN to analyze intermediate frames and gain some “knowledge” of the underlying geometry, allowing intelligent placement of fewer sample points. The end result is either the same quality at higher performance, or better quality at the same performance.
AI-enabled DLSS anti-aliasing aims to produce the same quality at higher performance. Image source: NVIDIA
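As a rough illustration of the tradeoff DLSS targets, the following Python/NumPy sketch compares brute-force four-pass super-sampling of a toy analytic scene against a two-pass render cleaned up by a simple reconstruction filter. The filter is only a stand-in for the trained network, and unlike the real DNN it knows nothing about the underlying geometry; the scene and all names here are invented for illustration.

```python
# Toy comparison: conventional SSAA vs. fewer samples plus a stand-in "network".
import numpy as np

SIZE = 64

def shade(px, py):
    """Toy shader: 1.0 inside a centered disk, 0.0 outside (hard, aliasing-prone edge)."""
    c = SIZE / 2
    return ((px - c) ** 2 + (py - c) ** 2 <= (SIZE / 3) ** 2).astype(float)

def render(offsets):
    """One rasterization pass per jittered sample offset, then average the passes."""
    ys, xs = np.mgrid[0:SIZE, 0:SIZE]
    return np.mean([shade(xs + dx, ys + dy) for dx, dy in offsets], axis=0)

def reconstruct(img):
    """Stand-in for the trained DNN: a 3 x 3 smoothing of the low-sample frame."""
    pad = np.pad(img, 1, mode="edge")
    return sum(pad[dy:dy + SIZE, dx:dx + SIZE]
               for dy in range(3) for dx in range(3)) / 9.0

# Dense 16-sample reference, conventional 4-pass SSAA, and 2 passes plus the "network".
reference = render([((i + 0.5) / 4, (j + 0.5) / 4) for i in range(4) for j in range(4)])
ssaa4 = render([(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)])
cheap = reconstruct(render([(0.25, 0.5), (0.75, 0.5)]))

err = lambda img: np.abs(img - reference).mean()
print(f"4-pass SSAA error vs. reference:        {err(ssaa4):.4f}")
print(f"2-pass + reconstruction error vs. ref.: {err(cheap):.4f}")
```

The point of the comparison is the shape of the pipeline, not the numbers: half the rasterization passes plus a cheap inference step can approach the quality of the brute-force approach, which is exactly the trade DLSS makes with a far more capable learned model.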
De-noising and intelligent super-sampling, accelerating raytracing and conventional 3D graphics respectively, both improve on functionality long ingrained in 3D CAD workflows. But compelling as they are on their own, they represent the more evolutionary uses of machine learning; the bigger impact will be felt from revolutionary applications of AI, like generative design.
Dreamcatcher as a Proving Ground for AI-Based Generative Design
If DNNs can learn engineering principles like stress and fluid dynamics, evaluate a set of given design constraints, and even judge style, why can’t they suggest an optimal design for a drone, a bike frame, or an aircraft’s landing gear? Well, they can. Dreamcatcher, launched as a research project in 2014, was Autodesk’s vehicle for exploring, testing, and hopefully proving an AI-driven tool to generate physical designs. Implementing what Autodesk calls a divergent design flow, Dreamcatcher builds on the traditional flow: define, create, explore (visualize and analyze), and fabricate. Where it departs from the norm is in who — or rather, what — performs or assists in those development stages, and the extent of that assistance.
The Dreamcatcher conceptual flow looks familiar — the difference lies in what’s doing the generation. Image source: Autodesk
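The core loop is easy to sketch, even if Dreamcatcher’s actual machinery is far more sophisticated. The hypothetical Python example below captures the divergent spirit: the human defines the constraints (a load, a stress limit, size bounds), and the computer generates, scores, and mutates candidate designs. Here the candidate is just a rectangular cantilever cross-section scored with textbook beam formulas rather than real finite-element analysis, and the search is simple random mutation; all constants and names are illustrative.

```python
# Hypothetical generative-design loop; not Dreamcatcher's actual algorithm.
import random

LOAD_N = 1_000.0        # tip load on a cantilever, newtons
LENGTH_M = 0.5          # beam length, meters
STRESS_LIMIT = 250e6    # allowable bending stress, Pa (roughly mild steel)
DENSITY = 7850.0        # kg/m^3

def evaluate(width, height):
    """Score a rectangular cross-section: its mass, or infinity if it fails."""
    # Max bending stress at the root: sigma = M*c/I with M = F*L, c = h/2,
    # I = w*h^3/12, which simplifies to sigma = 6*F*L / (w*h^2).
    stress = 6 * LOAD_N * LENGTH_M / (width * height ** 2)
    mass = DENSITY * width * height * LENGTH_M
    return mass if stress <= STRESS_LIMIT else float("inf")

def mutate(w, h, rng):
    """Perturb a candidate within simple size bounds (5 mm to 100 mm)."""
    w = min(max(w * rng.uniform(0.8, 1.2), 0.005), 0.1)
    h = min(max(h * rng.uniform(0.8, 1.2), 0.005), 0.1)
    return w, h

rng = random.Random(42)
population = [(rng.uniform(0.01, 0.1), rng.uniform(0.01, 0.1)) for _ in range(20)]

for generation in range(50):
    population.sort(key=lambda c: evaluate(*c))
    parents = population[:5]  # keep the fittest designs, discard the rest
    population = parents + [mutate(*rng.choice(parents), rng) for _ in range(15)]

w, h = min(population, key=lambda c: evaluate(*c))
print(f"best design: {w * 1000:.1f} x {h * 1000:.1f} mm, mass {evaluate(w, h):.2f} kg")
```

Swap the toy stress formula for real FEA, the rectangle for a free-form geometry representation, and random mutation for a smarter search, and you have the skeleton of the divergent flow described above: the designer states intent, and the machine does the generating and exploring.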