Spotfire Ideas Portal
Status Future Consideration
Categories caching
Created by Guest
Created on Jan 30, 2024

Incorporate TIBCO PSG Cache Framework into product

The TIBCO PSG Cache Framework is an API-driven mechanism that simplifies the steps a developer follows to create single-table, multi-table, and incremental cache tables, requiring only a minimal set of input values.

For all cache types, a project profile provides default settings that identify the data source, cache schema and various other attributes.

If the cache framework were integrated into Studio, it is envisioned that a developer could right-click on a view and select the cache type (single, multi-table, or incremental). Based on the cache type, a pop-up may ask additional questions before generating the required resources.

For single and multi-table caches, the developer simply provides the cache view target path. Potentially, a radio button could indicate whether pre- and/or post-cache procedure "shells" should be generated.
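
As a rough illustration only, a generated post-cache shell might look like the following TDV SQL Script sketch; the procedure name and body shown here are hypothetical placeholders, not the framework's actual output.

      -- Hypothetical post-cache procedure shell (TDV SQL Script); all names are illustrative.
      -- The framework would generate the empty shell and the developer would fill in the body.
      PROCEDURE post_cache_orders()
      BEGIN
          DECLARE refresh_time TIMESTAMP;
          SET refresh_time = CURRENT_TIMESTAMP;
          -- Developer-supplied logic that runs after the cache refresh,
          -- e.g. rebuilding indexes on the cache table or writing refresh_time to an audit log.
      END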

For multi-table caches, consider the ability to create database partitioning based on a selected column and partition style. Depending on the database type, partition functions would need to be generated and executed to specify the partition scheme and functions.
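
On SQL Server, for example, this could be done with standard partition DDL; the sketch below is only an assumption of what the generated statements might look like, with all object names, boundary values, and the order_date partitioning column chosen purely for illustration.

      -- Hypothetical SQL Server partitioning for a multi-table cache, by a selected date column.
      CREATE PARTITION FUNCTION pf_cache_by_month (DATE)
          AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

      CREATE PARTITION SCHEME ps_cache_by_month
          AS PARTITION pf_cache_by_month ALL TO ([PRIMARY]);

      -- The generated cache table would then be created on the partition scheme,
      -- keyed by the column selected in the pop-up (here, order_date).
      CREATE TABLE dbo.orders_cache (
          cachekey   BIGINT NOT NULL,
          order_id   INT    NOT NULL,
          order_date DATE   NOT NULL
      ) ON ps_cache_by_month (order_date);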

For incremental caches, a pop-up would ask for the incremental key name and the primary key columns.

  • TDV could provide a drop-down list for both; multiple columns could be chosen for the primary key.

  • Incremental MERGE would be supported, which includes ADD/UPDATE.

  • All of the logic for the initial load and delta load scripts is generated (a filled-in example with hypothetical names follows after this list).

    • Provide the ability to turn on/off log activity in scripts.

    • For the initial load, use this pattern:

          INSERT INTO <target_cache_table>
          SELECT {option DISABLE_DATA_CACHE_IMMEDIATE} cachekey, S.*
          FROM <source_cache_view> S
          WHERE S."<incremental_key_name>" <= maxI;

    • For the delta load script, use these patterns:

          SELECT {option DISABLE_DATA_CACHE_IMMEDIATE} MAX(keyName)

          MERGE INTO <target_cache_table> AS c USING
              (SELECT {option DISABLE_DATA_CACHE_IMMEDIATE} s.*
               FROM <source_cache_view> s
               WHERE s."<incremental_key_name>" <= maxI
                 AND s."<incremental_key_name>" > minI) AS d
          ON (c."pk1" = d."pk1" AND c."pk2" = d."pk2")
          WHEN MATCHED THEN
              UPDATE SET c.<target_columns_list> = d.<source_columns_list>
          WHEN NOT MATCHED THEN
              INSERT ("cachekey", <column_list>) VALUES (cachekey, <column_list>);

  • Cache tables are generated and dropped as needed.
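
To make the two patterns above concrete, the sketch below fills them in for an assumed source view /shared/views/orders_v with incremental key last_modified_id, primary key order_id, and columns order_total and order_status. Every name here is hypothetical, and maxI/minI are assumed to come from the generated script's high-water-mark logic.

      -- Hypothetical initial load
      INSERT INTO orders_cache
      SELECT {option DISABLE_DATA_CACHE_IMMEDIATE} cachekey, S.*
      FROM /shared/views/orders_v S
      WHERE S."last_modified_id" <= maxI;

      -- Hypothetical delta load: capture the new high-water mark, then MERGE the changed rows
      SELECT {option DISABLE_DATA_CACHE_IMMEDIATE} MAX(last_modified_id)
      FROM /shared/views/orders_v;

      MERGE INTO orders_cache AS c USING
          (SELECT {option DISABLE_DATA_CACHE_IMMEDIATE} s.*
           FROM /shared/views/orders_v s
           WHERE s."last_modified_id" <= maxI
             AND s."last_modified_id" > minI) AS d
      ON (c."order_id" = d."order_id")
      WHEN MATCHED THEN
          UPDATE SET c.order_total = d.order_total, c.order_status = d.order_status
      WHEN NOT MATCHED THEN
          INSERT ("cachekey", order_id, order_total, order_status)
          VALUES (cachekey, d.order_id, d.order_total, d.order_status);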

For all cache types, and for SQL Server in particular, NVARCHAR cache columns are supported by examining the source lineage of each column: if the source table column is NVARCHAR, then the cache target column is NVARCHAR. This will be easy to do once TDV Catalog tables support native data types. A lineage function will need to be executed on the cache view tables to find source table columns of type NVARCHAR. It would not be acceptable to make all VARCHAR columns NVARCHAR; the mapping must be done selectively based on the source column.
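
As an illustration of the intended behavior (all table and column names are hypothetical), only the columns whose source lineage is NVARCHAR would be mapped to NVARCHAR in the generated cache table:

      -- Hypothetical SQL Server source table
      CREATE TABLE dbo.customer_src (
          customer_id   INT           NOT NULL,
          customer_name NVARCHAR(200) NULL,   -- Unicode in the source
          region_code   VARCHAR(10)   NULL    -- non-Unicode in the source
      );

      -- Generated cache target: column types follow the source lineage
      -- rather than a blanket VARCHAR-to-NVARCHAR conversion.
      CREATE TABLE dbo.customer_cache (
          cachekey      BIGINT        NOT NULL,
          customer_id   INT           NOT NULL,
          customer_name NVARCHAR(200) NULL,   -- NVARCHAR because the source column is NVARCHAR
          region_code   VARCHAR(10)   NULL    -- remains VARCHAR
      );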

