Using the Jedox GPU Accelerator


Related links: Requirements of Jedox GPU Accelerator, Installation of Jedox GPU Accelerator, Jedox GPU Accelerator Advisor

The Jedox GPU Accelerator uses the computational power of NVIDIA Tesla™ GPUs to speed up OLAP calculations in the Jedox OLAP Server. This article describes how to use the Jedox GPU Accelerator.

Cube conversion

Administrators can decide for each cube individually whether it should make use of GPU acceleration. While any cube (except system cubes) can be converted to a GPU cube, conversion is especially worthwhile for cubes with the following properties:

  • High numeric data volumes: e.g. > 300k filled numeric cells
  • Large consolidations: compute-intensive calculations, such as high-level consolidations or consolidations on large target areas
  • Cubes with business rules on base-cell level (B: rules), like arithmetical (+,-, *,/) or conditional rules (if/then/else), or rules across different cubes (PALO.DATA)
  • Reports involving dimension filters (/dimension/dfilter)
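As an illustration, base-cell (B:) rules of the kinds listed above might look as follows. The cube, dimension, and element names here are hypothetical and purely illustrative; consult your own model for actual rule targets:

```
# Arithmetical rule on base cells only
['Variance'] = B: ['Actual'] - ['Budget']

# Conditional rule on base cells
['Bonus'] = B: IF(['Units'] > 1000, ['Units'] * 0.05, 0)

# Rule reading from another cube via PALO.DATA
['External'] = B: PALO.DATA("localhost/Demo", "Sales", !'Product', !'Month')
```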

To activate GPU acceleration for a specific cube, open the Modeler, select the cube, open the “Advanced” panel, and check the box for “Activate GPU acceleration”.

Note that speedups can only be expected when the majority of steps in the computation chain (e.g. multiplication step in a rule, cell transformation step in PALO.DATA, aggregation) provide enough input cells to fully utilize the GPU hardware, i.e., thousands of cells or more.  

Cube conversion to GPU memory

When not specified otherwise (see next section), each cube’s numerical data storage will physically reside in GPU memory after cube conversion, which requires enough available memory on the GPU. If multiple GPUs are available, the data storage is distributed among the devices. Note that GPU memory is also required by the GPU engine during calculations; as a rule of thumb, make sure that converted cubes do not use more than half of the available GPU memory. Use nvidia-smi to view GPU memory consumption:

cd C:\Program Files\NVIDIA Corporation\NVSMI

nvidia-smi.exe -l
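The half-of-GPU-memory rule of thumb can be turned into a quick back-of-envelope check. This is a sketch only: the 16 bytes per filled cell is an illustrative assumption (an 8-byte double value plus packed coordinates), not a documented storage size of the GPU engine:

```python
def cube_fits_rule_of_thumb(filled_numeric_cells: int,
                            gpu_memory_bytes: int,
                            bytes_per_cell: int = 16) -> bool:
    """Rough check: converted cube data should stay below half of GPU memory.

    bytes_per_cell is an assumed figure for illustration; the actual
    storage layout is internal to the GPU engine.
    """
    estimated_size = filled_numeric_cells * bytes_per_cell
    return estimated_size <= gpu_memory_bytes // 2

# Example: 300k filled cells easily fit on a 4 GiB GPU
print(cube_fits_rule_of_thumb(300_000, 4 * 1024**3))  # True
```

Compare the estimate against the free-memory figures reported by nvidia-smi before converting large cubes.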

Cube conversion to host memory

The numerical data storage of a cube can also reside in GPU format in conventional RAM, which allows for accelerating cubes that are larger than the available GPU memory. This option is enabled by adding the following parameter to palo.ini:

gpu-data-storage R

Note that a conversion of the cube to GPU format is still necessary and requires additional available RAM on the host system. Because page-locked (pinned) memory is used internally, system performance may degrade whenever the system runs out of available memory.


Writeback

Writing back to GPU-accelerated cubes is fully supported.


Rules

Rules are GPU-accelerated under specific circumstances that depend on the query and the rules involved.

Most rule functions are supported on GPU. To find out whether all rules are supported for a specific cube, administrators can use the GPU Accelerator Advisor from the Jedox Excel Add-in to list all rules along with GPU support information. Find details here: Jedox GPU Accelerator Advisor.

Dynamic engine selection

Each reading operation (e.g. aggregation, rule computation, dimension filter) is evaluated for its expected cost on the CPU and GPU engines, and the best-suited engine performs the actual computation. To switch off dynamic engine selection so that the GPU always performs any operation it supports, add “o” to the “engine-configuration” parameter in palo.ini:

engine-configuration o
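Taken together, a palo.ini tuned as described in this article might contain both options discussed above. This is a fragment, not a complete configuration, and the comment syntax is shown for illustration:

```
# Keep converted cube storage in host RAM instead of GPU memory
gpu-data-storage R

# Disable dynamic engine selection: GPU computes whatever it supports
engine-configuration o
```

Remember that `gpu-data-storage R` trades GPU memory pressure for host RAM usage, while `engine-configuration o` removes the cost-based choice between engines; both should be set deliberately rather than by default.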