Multiple analyses are often built by users who work with common data sets, or with subsets of common data.
This results in multiple extracts of similar data from the same source.
More BI tools are now leveraging in-memory data.
This request is to enable a data set to be defined as resident in memory and made available for multiple web report developers to use.
For example, Orders: multiple users may need to build order analytics, but they should not have to extract similar data sets multiple times.
I believe that if an analysis is copied, the copy may keep the same table GUIDs and the data might be shared, but this is more of a hack than a solution.
How about an R/TERR (or Python) session that stays open across multiple data function calls? This would mean no data movement at all, just like what an R user can do interactively, but with the power of Spotfire added. Even caching large tables requires data movement for every call to a data function that uses the same data.
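To illustrate what such a long-lived session would buy us, here is a minimal plain-Python sketch (file and column names are made up): the table is read once, kept resident, and every later data-function-style call reuses it with no further data movement. Today each TERR/Python data function execution starts fresh, so this is the behaviour being requested, not something Spotfire currently provides.

```python
import pandas as pd

_CACHE = {}  # survives only as long as the session/interpreter stays open

def get_orders(path="orders.csv"):
    """Load the Orders table once; afterwards serve the in-memory copy."""
    if "orders" not in _CACHE:
        _CACHE["orders"] = pd.read_csv(path)  # the only data movement
    return _CACHE["orders"]

def orders_for(country):
    """Each call works on the resident table with no re-extraction."""
    orders = get_orders()
    return orders[orders["country"] == country]
```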
@Christian Turri, @Jarod, Indeed, I know about SBDF files that can be loaded and refreshed. I do not want to load the whole table.
I would like the ability to build multiple separate analyses that could each be a subset of a master 'controlled' data set. This could be hundreds of millions of rows and hundreds of columns. One user may want to select dates from the set where country = US; another wants a different subset. Each query is against common data that has already been extracted, but it needs the ability to be incrementally refreshed.
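As a rough illustration of that pattern (hypothetical file, table, and column names), a single controlled master extract kept in memory can serve several different subsets, and a refresh only needs to append the delta rather than re-extract everything:

```python
import pandas as pd

# One controlled master extract, loaded once (could be 100M+ rows in practice)
master = pd.read_parquet("orders_master.parquet")

# Analysis A: US orders for a chosen date
us_view = master[(master["country"] == "US") & (master["order_date"] == "2019-06-01")]

# Analysis B: a different subset of the same resident data
uk_view = master[master["country"] == "UK"]

# Incremental refresh: append only the new rows instead of re-extracting everything
delta = pd.read_parquet("orders_delta.parquet")
master = pd.concat([master, delta], ignore_index=True)
```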
Another vendor, Incorta, does this kind of data modelling for very fast results.
@Christian Turri, Though the provided link is helpful in preventing Spotfire from hitting the database multiple times, there would still be multiple copies of the data table in the Spotfire node manager's RAM. Avoiding that is what I believe Dave Williams (and I along with him) is requesting.
Our particular use case is to have two dashboards: one for internal users and one for external users. The external dashboard needs to be stripped of various columns/filters that we want to allow on the internal dashboard. We are working around this by using IronPython scripts that do this stripping when a user accesses the dashboard (a rough sketch follows at the end of this comment).
This would be most useful when the data table is large (> 100 GB for us); then we would only need to load the data table into Web Player RAM once and be able to build multiple dashboards off of it.
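A minimal IronPython sketch of that stripping workaround, assuming a table named "Orders" and hypothetical column names; it relies on the Contains/Remove overloads of the column collection that accept a column name:

```python
from Spotfire.Dxp.Data import DataTable

# Columns external users must not see (hypothetical names)
internal_only = ["Cost", "Margin", "CustomerEmail"]

table = Document.Data.Tables["Orders"]
for name in internal_only:
    if table.Columns.Contains(name):
        table.Columns.Remove(name)  # drop the column from the external-facing analysis
```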
I believe you can already do what you want by caching SBDF files. Have a look at this article:
https://community.tibco.com/wiki/caching-data-tibco-spotfire-automation-services
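For completeness, a small IronPython sketch of how a pre-exported SBDF cache file could be pulled into an analysis (path and table name are made up; this assumes the SbdfFileDataSource class and the Tables.Add overload that takes a data source):

```python
from Spotfire.Dxp.Data.Import import SbdfFileDataSource

# Point at the SBDF file that Automation Services keeps refreshed (hypothetical path)
source = SbdfFileDataSource(r"\\fileshare\spotfire-cache\orders.sbdf")

# Add it to the current analysis as a data table
Document.Data.Tables.Add("Orders (cached)", source)
```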