## Request
Provide an input for optional connection parameters in the Apache Spark SQL connector (see attached .png).
## Background
The Apache Spark SQL connector is used to connect Databricks to Spotfire.
Connecting to the Databricks catalog and databases works as designed, but querying certain large data sets results in an error (see Error below).
Online research suggests the solution is to set the parameter `EnableQueryResultDownload="0"` in the driver configuration (see Reference below); however, there is no option to do this via the Spotfire connector pane.
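For context, the workaround outside Spotfire is to set the property directly in the Simba Spark ODBC driver configuration. A minimal sketch of a DSN entry in `odbc.ini` on Linux, assuming a Databricks SQL warehouse over HTTPS; the DSN name, host, warehouse path, and driver path are placeholders, and only `EnableQueryResultDownload` comes from the error reference above:

```ini
[Databricks-Spark]
; Placeholder driver path and connection details -- adjust to your environment.
Driver=/opt/simba/spark/lib/64/libsparkodbc_sb64.so
Host=example-workspace.cloud.databricks.com
Port=443
SSL=1
ThriftTransport=2
HTTPPath=/sql/1.0/warehouses/your-warehouse-id
AuthMech=3
; Disable fetching query results from cloud storage (the download step
; that fails in the error below), so results are returned inline instead.
EnableQueryResultDownload=0
```

On Windows the same property would be set on the DSN via the driver's Advanced Options or the registry rather than `odbc.ini`.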
## Error
```
Spotfire.Dxp.Framework.Library.LibraryException: Import failed ---> Spotfire.Dxp.Data.Exceptions.ImportException: An error occurred when executing a query in the external data source.
External error:
ERROR [HY000] [Simba][Hardy] (35) Error from server: error code: '0' error message: '[Simba][Hardy] (134) File df2ca4b4-fc10-4eab-adf3-aca527a9c2bb: A retriable error occurred while attempting to download a result file from the cloud store but the retry limit had been exceeded. Error Detail: File df2ca4b4-fc10-4eab-adf3-aca527a9c2bb: The result file URL had expired on 1708645311447 (Unix timestamp)'.
```
## Reference
Setting the [Simba driver configuration](https://community.fabric.microsoft.com/t5/Service/Error-on-PBI-Service-dataset-refresh/m-p/2633552).
**Implemented in:** 14.4
From Spotfire 14.4, the web client can be used to create and modify connections to Databricks and other Apache Spark SQL based data sources. The new web UI (also available in the installed client) allows selecting OAuth2 as the authentication method (in addition to username/password and Windows authentication), enabling SSO and secure, personalized login.
When needed, connections can be customized with custom connection properties for data-source- or driver-specific settings.
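For the scenario in the request above, the driver setting would presumably be entered as one such custom connection property. A sketch of the name/value pair, assuming the UI accepts arbitrary driver properties (exact field labels may differ):

```
Property name:  EnableQueryResultDownload
Property value: 0
```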
Note: Some connection authoring capabilities, such as configuring prompts, writing custom queries, and defining primary key columns, are only available when you use the installed client. For details, see the documentation.
Learn more about everything new in Spotfire 14.4.