We have an application that lets users save their configuration in a database. When the analysis is opened, the user's configuration is loaded. Then the document properties are set and the data, which depends on those document properties, is loaded. Currently none of the available solutions allow us to do this. A Python script set to run at startup actually runs after the data has already been loaded, so the data has to be loaded again after the document properties are set, i.e. it is loaded twice.
This idea has been closed with a referral to existing capabilities. The initial state of an analysis (such as document properties, marking, filtering, and bookmarks) can be configured with configuration blocks, which can be set when opening or saving the analysis. There are several ways of applying configuration blocks, for example with Automation Services or with the API (using a C# extension or IronPython scripting).
You are welcome to report a new idea if there are issues that can't be resolved with the above capabilities.
You can add a configuration block when opening an analysis through Automation Services. That allows you to have an Automation Services job that uses a template to create analysis files with different configurations (document properties, filtering, marking, etc.) for different users. See https://community.tibco.com/wiki/create-configuration-block-tibco-spotfire.
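To illustrate, a configuration block is a plain-text list of commands that is applied when the analysis opens. The sketch below is only an example: the page, table, column, property, and bookmark names are hypothetical, and the exact command names and signatures (in particular for setting a document property) should be checked against the wiki linked above:

```
SetPage(pageTitle="Overview");
SetFilter(tableName="Experiments", columnName="Project", values={"ProjectA"});
ApplyBookmark(bookmarkName="DefaultView");
```

Because the block is applied before the visualizations are rendered, data that depends on these settings (e.g. on-demand data) is only loaded once, with the correct parameters.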
Configuration blocks look very good, but they are not saved in the analysis. If they could be applied automatically each time an analysis is opened, they would be great. One would also need a simple way to set them, not only through scripts.
Hi Tamara, have you explored configuration blocks? See https://community.tibco.com/wiki/create-configuration-block-tibco-spotfire.
A configuration block allows you to assign document properties and other parameters before the analysis is opened. You can also define default filtering and marking.
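For example, in the Web Player a configuration block can be passed as part of the URL used to open the analysis. The server name, library path, and block contents below are hypothetical, and the exact URL parameter name should be verified against the Web Player documentation for your Spotfire version:

```
http://myserver/SpotfireWeb/ViewAnalysis.aspx?file=/Analyses/MyAnalysis&configurationBlock=SetPage(pageTitle="Overview");
```

This makes it possible to hand different users different links to the same analysis file, each applying its own initial configuration.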
This would be very handy. We also have a use case here: users select experiments for which data is loaded from a database. What I did was to embed a table with a list of experiments, which is then used for on-demand loading. However, a solution using the capabilities mentioned above would be nicer.
As I understand it, configuration blocks go in that direction. However, they are (in my opinion) rather complex to use, and they cannot be stored permanently in an analysis file, which is annoying. So a way to easily create configuration blocks and store them permanently in an analysis would be a general solution that might also be useful in other situations.