
Supercharging SketchUp Extensions with NVIDIA CUDA
As SketchUp developers, we are constantly pushing the boundaries of what is possible within the viewport. While the Ruby API is fantastic for geometry manipulation and workflow automation, there is a hard ceiling when it comes to raw computational throughput.
If your extension involves heavy simulation, complex rendering, or massive dataset processing, it may be time to look beyond the CPU. Integrating NVIDIA CUDA into your SketchUp extension pipeline can unlock performance gains of 10x to 100x for parallelizable tasks.
Why CUDA?
SketchUp relies heavily on the CPU for its core modeling operations. However, for tasks that require performing the same mathematical operation on millions of data points (like pixels in a render or vertices in a complex mesh simulation), the CPU becomes the bottleneck.
NVIDIA CUDA (Compute Unified Device Architecture) allows you to offload these heavy parallel tasks to the GPU. Modern NVIDIA RTX cards have thousands of CUDA cores designed specifically for this kind of "number crunching."
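To make that concrete, here is a minimal, self-contained CUDA sketch. The kernel name, buffer layout, and vertex count are invented for illustration and are unrelated to any SketchUp API; the point is simply that one GPU thread handles one vertex.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One thread per vertex: every thread runs the same arithmetic on its own point.
__global__ void distances(const double* xyz, double* dist, size_t count,
                          double rx, double ry, double rz) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < count) {
    const double dx = xyz[3 * i + 0] - rx;
    const double dy = xyz[3 * i + 1] - ry;
    const double dz = xyz[3 * i + 2] - rz;
    dist[i] = sqrt(dx * dx + dy * dy + dz * dz);
  }
}

int main() {
  const size_t count = 1000000;  // one million vertices, illustrative size
  double *xyz = nullptr, *dist = nullptr;
  cudaMallocManaged(&xyz, count * 3 * sizeof(double));   // unified memory:
  cudaMallocManaged(&dist, count * sizeof(double));      // visible to CPU and GPU
  for (size_t i = 0; i < count * 3; ++i) xyz[i] = (double)(i % 100);

  // Launch enough 256-thread blocks to cover every vertex.
  const unsigned blocks = (unsigned)((count + 255) / 256);
  distances<<<blocks, 256>>>(xyz, dist, count, 0.0, 0.0, 0.0);
  cudaDeviceSynchronize();

  std::printf("distance of first vertex from reference: %f\n", dist[0]);
  cudaFree(xyz);
  cudaFree(dist);
  return 0;
}
```

On the CPU this would be a million-iteration loop; on the GPU the same work is spread across thousands of cores at once.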
When to Use It (and When Not To)
CUDA is not a magic bullet. Moving data between the CPU (Host) and GPU (Device) incurs latency. You should only pay this "transport cost" if the calculation time saved is significant; the timing sketch after the table below shows one way to measure that trade-off.
| Good Candidates for CUDA | Stick to Standard Ruby/C++ |
| --- | --- |
| Ray Tracing / Rendering: Calculating light paths for millions of pixels. | Simple Geometry Tweaks: Rounding corners on a cube or scaling a few components. |
| Physics Simulations: Cloth draping, fluid dynamics, or structural stress analysis on complex meshes. | Attribute Management: Renaming layers, organizing scenes, or attaching metadata. |
| Point Cloud Processing: Analyzing or meshing millions of LIDAR points. | UI/UX Tasks: Creating menus, dialogs, or user input tools. |
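Before committing a task to the GPU, measure the round trip. The sketch below assumes nothing about your actual workload: it uses CUDA events to time the host-to-device copy, a placeholder kernel, and the copy back (the kernel, names, and data size are all illustrative). If the two copies dominate the kernel time, the task probably belongs on the CPU.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Trivial kernel: scale every value. Stands in for the "real" work.
__global__ void scale(float* data, size_t n, float factor) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) data[i] *= factor;
}

static float elapsed_ms(cudaEvent_t start, cudaEvent_t stop) {
  float ms = 0.0f;
  cudaEventElapsedTime(&ms, start, stop);
  return ms;
}

int main() {
  const size_t n = 10000000;                // ~10M values, illustrative size
  std::vector<float> host(n, 1.0f);
  float* device = nullptr;
  cudaMalloc((void**)&device, n * sizeof(float));

  cudaEvent_t t0, t1, t2, t3;
  cudaEventCreate(&t0); cudaEventCreate(&t1);
  cudaEventCreate(&t2); cudaEventCreate(&t3);

  cudaEventRecord(t0);
  cudaMemcpy(device, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);
  cudaEventRecord(t1);
  scale<<<(unsigned)((n + 255) / 256), 256>>>(device, n, 2.0f);
  cudaEventRecord(t2);
  cudaMemcpy(host.data(), device, n * sizeof(float), cudaMemcpyDeviceToHost);
  cudaEventRecord(t3);
  cudaEventSynchronize(t3);

  // If the copies dwarf the kernel time, the job is not worth offloading.
  std::printf("H->D %.2f ms | kernel %.2f ms | D->H %.2f ms\n",
              elapsed_ms(t0, t1), elapsed_ms(t1, t2), elapsed_ms(t2, t3));

  cudaFree(device);
  return 0;
}
```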
Stand Out in the Extension Warehouse
Beyond the raw performance gains, integrating CUDA offers a significant marketing advantage. The SketchUp Extension Warehouse features a dedicated NVIDIA CUDA filter, allowing users to specifically search for and identify high-performance, GPU-accelerated tools. By flagging your extension with this attribute during the submission process, you instantly differentiate your product from standard CPU-bound plugins. This visibility signals to power users—architects and designers working with massive datasets—that your extension is engineered for speed and stability, positioning it as a premium solution for their most demanding workflows.

The Architecture: Ruby, C++, and the Live C API
You cannot call CUDA directly from Ruby. To bridge this gap, you must build a Ruby C Extension. The most performant architecture for a CUDA-backed SketchUp extension looks like this:
1. The Ruby Layer: Handles the UI and user interaction.
2. The C++ Layer (Intermediate): Acts as the bridge. This is where you implement the SketchUp Live C API.
   Tip: Since SketchUp 2019.2, the C API allows read-only access to the active model. This is crucial for performance: instead of converting Ruby objects to C structures (slow), your C++ extension can read geometry data directly from memory using the C API.
3. The CUDA Layer: The C++ layer sends raw vertex/vector data to GPU memory. The CUDA kernel executes the heavy math and returns the results (e.g., new vertex positions or texture maps) to C++.
4. Write-Back: The C++ layer returns the processed data to Ruby, which then updates the SketchUp model (or draws to the viewport via view.draw for real-time feedback). A minimal end-to-end sketch of this pipeline follows the list.
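Here is a heavily simplified sketch of that pipeline in a single .cu file. The module and function names (CudaBridge.translate_points), the flat [x, y, z] array layout, and the omission of error handling are choices made for this illustration, not part of any SketchUp or NVIDIA API. You would compile it with nvcc against your Ruby headers and require the resulting library from the Ruby layer.

```cuda
// cuda_bridge.cu -- illustrative Ruby <-> C++ <-> CUDA bridge (names invented).
#include <ruby.h>
#include <cuda_runtime.h>
#include <vector>

// CUDA layer: one thread per vertex.
__global__ void translate_kernel(double* xyz, size_t count,
                                 double dx, double dy, double dz) {
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < count) {
    xyz[3 * i + 0] += dx;
    xyz[3 * i + 1] += dy;
    xyz[3 * i + 2] += dz;
  }
}

// C++ layer: unpack Ruby data, round-trip it through the GPU, pack results.
static VALUE translate_points(VALUE self, VALUE points,
                              VALUE rdx, VALUE rdy, VALUE rdz) {
  Check_Type(points, T_ARRAY);
  const size_t count = RARRAY_LEN(points);
  if (count == 0) return rb_ary_new();

  // Flatten the Ruby array of [x, y, z] triples into a contiguous buffer.
  std::vector<double> host(count * 3);
  for (size_t i = 0; i < count; ++i) {
    VALUE pt = rb_ary_entry(points, i);
    host[3 * i + 0] = NUM2DBL(rb_ary_entry(pt, 0));
    host[3 * i + 1] = NUM2DBL(rb_ary_entry(pt, 1));
    host[3 * i + 2] = NUM2DBL(rb_ary_entry(pt, 2));
  }

  // Host -> Device, kernel, Device -> Host.
  double* device = nullptr;
  const size_t bytes = host.size() * sizeof(double);
  cudaMalloc((void**)&device, bytes);
  cudaMemcpy(device, host.data(), bytes, cudaMemcpyHostToDevice);

  const int threads = 256;
  const int blocks = (int)((count + threads - 1) / threads);
  translate_kernel<<<blocks, threads>>>(device, count, NUM2DBL(rdx),
                                        NUM2DBL(rdy), NUM2DBL(rdz));
  cudaMemcpy(host.data(), device, bytes, cudaMemcpyDeviceToHost);
  cudaFree(device);

  // Write-back: hand the processed data to Ruby for the model update.
  VALUE result = rb_ary_new_capa(count);
  for (size_t i = 0; i < count; ++i) {
    rb_ary_push(result, rb_ary_new_from_args(3, DBL2NUM(host[3 * i + 0]),
                                                DBL2NUM(host[3 * i + 1]),
                                                DBL2NUM(host[3 * i + 2])));
  }
  return result;
}

// Entry point Ruby looks for when the compiled library is required.
extern "C" void Init_cuda_bridge(void) {
  VALUE mod = rb_define_module("CudaBridge");
  rb_define_module_function(mod, "translate_points",
                            RUBY_METHOD_FUNC(translate_points), 4);
}
```

From the Ruby layer the call would then look something like `new_points = CudaBridge.translate_points(points, 0.0, 0.0, 10.0)`, after which the Ruby code writes the results back to the model or draws them to the viewport.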
Getting Started
Ready to experiment? Here is your roadmap:
1. Set up the Environment
NVIDIA CUDA Toolkit: Download the latest toolkit to get the nvcc compiler and libraries.
SketchUp C SDK: You will need its headers to read model data efficiently from the C++ layer.
2. Learn the Bridge
Ruby C Extensions: If you haven't written a C extension for Ruby before, start here. It is the glue that holds everything together.
3. Review the Live C API
Understand how to read the active model from C++ without locking the UI.
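As a sketch of what that read path can look like, the function below walks the top-level faces of the active model and packs their vertex positions into a flat buffer ready for the GPU. It assumes SUApplicationGetActiveModel is available in your SDK version (check your SketchUpAPI headers), ignores nested groups and components, and omits error handling for brevity.

```cuda
#include <SketchUpAPI/sketchup.h>
#include <vector>

// Read vertex positions for every top-level face in the active model.
// Real code should check every returned SU_RESULT.
static std::vector<double> read_active_model_vertices() {
  std::vector<double> xyz;

  SUModelRef model = SU_INVALID;
  if (SUApplicationGetActiveModel(&model) != SU_ERROR_NONE) return xyz;

  SUEntitiesRef entities = SU_INVALID;
  SUModelGetEntities(model, &entities);

  size_t num_faces = 0;
  SUEntitiesGetNumFaces(entities, &num_faces);
  if (num_faces == 0) return xyz;

  std::vector<SUFaceRef> faces(num_faces);
  SUEntitiesGetFaces(entities, num_faces, faces.data(), &num_faces);

  for (size_t f = 0; f < num_faces; ++f) {
    size_t num_vertices = 0;
    SUFaceGetNumVertices(faces[f], &num_vertices);
    if (num_vertices == 0) continue;

    std::vector<SUVertexRef> vertices(num_vertices);
    SUFaceGetVertices(faces[f], num_vertices, vertices.data(), &num_vertices);

    for (size_t v = 0; v < num_vertices; ++v) {
      SUPoint3D p;
      SUVertexGetPosition(vertices[v], &p);
      xyz.push_back(p.x);
      xyz.push_back(p.y);
      xyz.push_back(p.z);
    }
  }
  // Do NOT release the model: SketchUp owns the live model in this mode.
  return xyz;
}
```

Because the live model is owned by SketchUp, treat every reference as read-only and let the application manage its lifetime.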

