Cloud Amplifier - Databricks - Unable to retrieve schemas
All my catalogs can be retrieved/validated just fine except for one that returns the "Cannot retrieve schemas" error or "Cannot validate table" error. I've attached screenshots below. 99% confident this is a Databricks issue but cannot pinpoint what is different about this catalog vs the others…
I am using a personal access token, I have admin access in Databricks, I did NOT create any of the catalogs but have access to all of them.
Answers
-
I'd look to make sure the personal access token has sufficient permissions to access the specific catalog. Although you have admin access, the specific catalog permissions might be different. Make sure that you have the necessary permissions to access the tables within the catalog. Specific tables may have different permissions than the catalog itself.
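One quick way to check this is to inspect the effective grants directly in a Databricks SQL editor. This is a hedged sketch: `my_catalog` and `my_schema` are placeholders for the failing catalog and one of its schemas.

```sql
-- Show which principals hold which privileges on the failing catalog
-- and on one of its schemas. Compare against a catalog that works.
SHOW GRANTS ON CATALOG my_catalog;
SHOW GRANTS ON SCHEMA my_catalog.my_schema;
```

If the failing catalog's grant list differs from the working ones, that difference is likely your culprit.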
Also check the configuration settings to make sure the URL, port, database name, and credentials are all configured properly.
-
My account has 'ALL PRIVILEGES' on each object in Databricks. It is safe to assume the URL, port, and database name are all configured correctly, since I am able to bring in data from other catalogs.
I do not see anything that would indicate my access token has a different scope or permissions than my account.
-
Hey @zaclingen_fwm,
Have you run a "GRANT ALL PRIVILEGES" command for your user since first seeing this error in Domo, and are you still seeing it afterward?
Two ideas here:
1. You could ensure that your user has the SELECT, READ VOLUME, and USE SCHEMA privileges on all of the schemas in that catalog. I'm not sure whether "GRANT ALL PRIVILEGES" will continue to apply to schemas/tables created after that command is run; you might need to run it again.
2. If running the "GRANT ALL PRIVILEGES" command didn't fix the problem with your existing access token, try creating a new access token, and then test the new one with a second Cloud Amplifier integration.
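For step 1, the re-grant might look like the following. This is only a sketch: the catalog, schema, and principal names are placeholders you would replace with your own.

```sql
-- Grant the catalog- and schema-level privileges the integration needs.
-- `my_catalog`, `my_schema`, and the user email are placeholders.
GRANT USE CATALOG ON CATALOG my_catalog TO `user@example.com`;
GRANT USE SCHEMA, SELECT, READ VOLUME
  ON SCHEMA my_catalog.my_schema TO `user@example.com`;
```

Note that schema-level grants have to be repeated (or re-run at the catalog level) for schemas created after the grant.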
-
I've had the same issue with Databricks and solved it by:
1. Ensuring my Databricks catalog, project, schema, and table names use only underscore characters, never dashes.
2. Separating Databricks tables and views into separate schemas, i.e. not mixing tables and views in the same schema.
Worth noting that the Databricks logs are very useful for troubleshooting: they record exactly where the connection request has issues.
I've also found that I have needed to break up large tables (say, over 200 GB) into smaller tables so that Domo does not get stuck caching before it checks for the next update and sees there are changes.
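Splitting by a partitioning column is one way to do that break-up. This is a hedged sketch of the idea, assuming a date column; the table and column names are placeholders, not anything from Domo or Databricks specifically.

```sql
-- Carve one year of a large table into its own, smaller table so each
-- piece stays well below the size where caching appeared to stall.
-- `my_catalog.my_schema.events` and `event_date` are placeholders.
CREATE TABLE my_catalog.my_schema.events_2023 AS
SELECT *
FROM my_catalog.my_schema.events
WHERE event_date >= '2023-01-01'
  AND event_date <  '2024-01-01';
```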