Spotfire should include a Denodo connector. This would enable self-service access, push-down queries on big data, and official support.
Note: This idea was initially about Spotfire Information Services being shipped with a Denodo data source template. We are not planning to do this. Instead, a template is available here, and I hope it is fairly easy to add to your Spotfire environment.
I understand there is a community connector and we (Texas Instruments) are using it. My ask is that this connector transition from community support to a fully supported connector, and that it work with OAuth/OIDC SSO, not just Kerberos.
Finally! A custom Spotfire connector for Denodo is available for download here, and it is supported in 10.3 LTS and later Spotfire releases! Download it, read the license terms, deploy it in a test environment, try it out, and let me know what you think at tblomber@tibco.com. Native Denodo support is still in my backlog, and I will therefore keep this idea open.
The current option for connecting to Denodo from Spotfire is not user-friendly for business users, which limits Spotfire to project-based use cases rather than enterprise-wide use.
This is very useful for large datasets. I just added my vote.
Hi Julie! Yes, that's correct. For now please continue to use Information Services for Denodo access and continue to comment and vote on this idea for native connector support.
Thomas, correct me if I am wrong, but the data source template only allows you to create information links from Denodo, not data connections. Correct?
GSK needs this. Denodo is critical. We cannot use Spotfire without it.
Joining the demand for a Denodo data connector.
We are implementing a global Denodo solution and also have a global Spotfire deployment. It would benefit us greatly if a Denodo connector were provided natively in Spotfire.
Thomas, to answer your question directly ...
...Just wanted to say thanks for the great feedback. Please use Information Links for now. With a self-service Denodo "connector," what would be most useful: push-down queries, the self-service part, or something else? Thanks, Thomas
...speed and self-service are the two big points for us. More specifically, we want to speed up on-demand queries in a way that we can only do with a data connection (by modifying the SQL as needed for individual projects). Right now, we are struggling with the speed of information links. We are using all the tools available -- on-demand, scheduled updates, Automation Services -- but it's not enough.
Denodo is our data virtualization tool and how self-service users get to data. We can connect to Denodo now with information links, but this is slow and inefficient. We can also use ODBC connections, but we are building larger and larger projects with more and more users, and making sure that every user has ODBC set up is also slow and inefficient. A data connector would go a long way.
Look forward to seeing what comes of this. There needs to be a self-service Spotfire connector to Denodo.
Thanks for the list of feedback Dave. I have added links and created ideas for them and would be happy to schedule a call to discuss our plans for these.
1. Add custom connector data sources
Spotfire is limited to the predefined list of connections, with no easy administrative way to add additional sources even when a connector is available; adding a new connector type is not as easy as adding a data source template and information links.
Spotfire should support any ODBC DSN in-database
https://ideas.tibco.com/ideas/TS-I-6634
All connectors should be available in the web clients' fly out
https://ideas.tibco.com/ideas/TS-I-6859
2. Search
The connection-to-table mapping shows the connection name, not its GUID, and the name is not necessarily unique.
Source Properties on tables built from a connection do show the GUID, which makes reverse tracing harder: searching on a name is slow and may return duplicates or similar names, while searching on a GUID is fast and unique. And while a search may return a result whose path you can see, there is no easy way to navigate to that folder location with a right-click "Locate in tree," as there is with information links.
Search and locate connections based on guid
https://ideas.tibco.com/ideas/TS-I-6860
3. Named attributes (columns)
While an attribute can be renamed in a connection, the rename is not globally applied to all connections that may use that source.
Would it be possible to reuse data connections as well as data sources, or is there a reason not to reuse data connections, which hold the information about renamed columns?
4. Calculated attributes (columns)
There appears to be no mechanism to create calculated attributes that can be reused globally across other connections. With information links, creating a data model with compound attributes using %1, %2, %3, etc. from a source allows a column attribute to be published once and named once, remaining valid for any subsequent information links developed.
e.g. computed values for cleansed email addresses, date interval calculations
Store calculated columns in data connections
https://ideas.tibco.com/ideas/TS-I-6861
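For context, the %1/%2 mechanism described above is a simple positional substitution. A rough sketch of the idea in Python, purely for illustration (the expression and column names are made-up examples, not the information-links implementation):

```python
import re

def expand_template(expression: str, columns: list[str]) -> str:
    """Replace %1, %2, ... with the corresponding column names.

    Mimics the positional-placeholder style of information-link
    compound attributes; an illustration only, not Spotfire code.
    """
    def repl(m: re.Match) -> str:
        return columns[int(m.group(1)) - 1]
    return re.sub(r"%(\d+)", repl, expression)

# A cleansed e-mail address built once and reusable for any
# source that supplies the two columns (hypothetical names):
print(expand_template("LOWER(TRIM(%1)) || '@' || LOWER(%2)",
                      ["USER_NAME", "MAIL_DOMAIN"]))
# LOWER(TRIM(USER_NAME)) || '@' || LOWER(MAIL_DOMAIN)
```

The point is that the expression is authored once and the placeholders bind to whatever columns each information link supplies, which is exactly what data connections lack today.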
5. Global Filter Attributes
There appears to be no way to create and insert a global filter attribute into a configured query.
We use these to define once and use everywhere a filter on the sets of data we want to constrain.
e.g. CREATED date > sysdate-365 AND ROW_NUM < 1000001 (useful when debugging long-running queries, to load a data subset).
Static filters in in-database connectors (remove rows)
https://ideas.tibco.com/ideas/TS-I-6862
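Lacking a global filter attribute, we paste this kind of constraint by hand into every custom query today. A minimal sketch of what a reusable, define-once filter amounts to (Python only to illustrate; the predicate, table, and column names are our own examples, not a Spotfire feature):

```python
# Sketch: wrap a base query in a subquery and apply a "global" filter
# predicate once, instead of editing every custom data connection.
# The predicate mirrors the debugging example above (date window plus
# row cap); names are hypothetical.
DEBUG_FILTER = "CREATED > sysdate - 365 AND ROWNUM < 1000001"

def with_global_filter(base_sql: str, predicate: str = DEBUG_FILTER) -> str:
    """Return base_sql constrained by predicate via a wrapping subquery."""
    return f"SELECT * FROM ({base_sql}) sub WHERE {predicate}"

print(with_global_filter("SELECT id, created FROM orders"))
# SELECT * FROM (SELECT id, created FROM orders) sub WHERE CREATED > sysdate - 365 AND ROWNUM < 1000001
```

Defining the predicate once means a long-running query can be capped for debugging everywhere it is used, then released by changing a single definition.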
6. Mandatory and optional prompts
There seems to be no means to define a prompt as mandatory or optional, as there is with information links.
Mandatory and optional prompts
https://ideas.tibco.com/ideas/TS-I-6863
7. Conditioning
There appear to be no helpers built into the connection builder to aid development of the SQL or to easily add parameters, filters, groups, and pivoting.
Static filters in in-database connectors (remove rows)
https://ideas.tibco.com/ideas/TS-I-6862
Store calculated columns in data connections (groups)
https://ideas.tibco.com/ideas/TS-I-6861
Pivot / Un-Pivot with in-database connections
https://ideas.tibco.com/ideas/TS-I-6473
8. Security
All columns of a data source are exposed on the connection. It should be possible to expose only certain elements from a data source, so users can build connections governed by folder-level security. Data connections are missing the abstraction layer between the database and the presentation layer.
Expose only certain columns in shared data connections
https://ideas.tibco.com/ideas/TS-I-6869
9. Credentials
When a source is created and saved with stored credentials, the user ID is not shown.
The same target database may grant different access depending on the credentials.
Show user id when stored credentials in a data source is used
https://ideas.tibco.com/ideas/TS-I-6870
10. Administration of drivers
Each driver needs to be manually deployed on each web node server
Connector drivers should only need to be installed once in a Spotfire topology
https://ideas.tibco.com/ideas/TS-I-6873
11. Data routing
A data connection executes based on the location of the client/web server. If web servers are geographically distributed, then each connection to the source is independent and cannot be cached on the TSS. There is no mechanism to create a routing rule to a web server based on its source: e.g., if a server is in the EU with a data connection to the EU, performance is OK, but if the web server is in Asia and the data connection is sourced from the EU, performance can be bad.
Information links define a consistent user experience for query data selection.
Enable analysis resource routing based on data connection name
https://ideas.tibco.com/ideas/TS-I-6730
12. Caching (Tied to routing)
In cases of multiple web servers, how is a data connection cached between web servers?
Information links appear to allow caching for all users.
One shared server connector cache
https://ideas.tibco.com/ideas/TS-I-6874
Numerous issues with data connectors.
The one thing they have is slightly better prompting.
The variety across data connectors, information links, data sources, and ODBC is useful, but it is disconcerting that applying filters and searching on values differ so much between connections, sources, and ODBC links.
Has anyone ever built a custom connector to make such an addition?
https://community.tibco.com/wiki/create-custom-data-source-tibco-spotfire
I presume that if it were easy, TIBCO themselves would have done it and released it as a hotfix.
We are using Spotfire and Denodo heavily at GE Capital; a native Denodo connector for Spotfire would be a big win in terms of performance.
Denodo is our biggest connector for Spotfire at Asurion. We manage it by using local ODBC connections with specific DSN names that are given to each department. On the back end (the Spotfire server) we set up the same DSN names as system DSNs, which do allow use with the web player. It works, but requires more setup. It would be of great benefit to our company to have a native Denodo connector for Spotfire. The biggest complaints we receive from our internal teams now are no support for external data (through ODBC) and no "on demand" data (from the native connector). Thanks, Chris Lundeberg
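For anyone setting up the same workaround: Denodo's ODBC driver is based on the PostgreSQL ODBC driver, so a system DSN on a Linux web node looks roughly like the sketch below. The driver path, host, virtual database name, and SSL setting are examples to adapt; check your own Denodo installation and version (9996 is Denodo's default ODBC port).

```ini
; /etc/odbc.ini -- hedged sketch of a system DSN for Denodo VDP.
; All values are examples; adjust for your environment.
[DenodoFinance]
Description = Denodo VDP - finance virtual database
Driver      = /opt/denodo/lib/denodo-odbc/libodbc.so   ; example path
Servername  = denodo.example.com
Port        = 9996
Database    = finance
SSLmode     = require
```

Keeping the DSN name identical on each user's machine and on every web node is what lets the same analysis file work in both the installed client and the web player.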