Hadoop/HDFS
The Hadoop/HDFS connector allows you to connect to your Hadoop-compatible file system.
Supported paths are:
HDFS-compatible file system path for on-prem deployments (example: hdfs://my.hdfs.namenode-hostname:8020/mydata/folder)
HDFS may be either a customer-managed HDFS deployment (HDFS 2.7 recommended) or one installed by Conduit in distributed mode
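For reference, a path like the example above breaks down into a scheme, a NameNode host, an RPC port, and a path within the file system. A minimal sketch using Python's standard library (the hostname and path are the illustrative values from the example, not real endpoints):

```python
from urllib.parse import urlparse

# Parse the example HDFS URI from above into its components.
uri = urlparse("hdfs://my.hdfs.namenode-hostname:8020/mydata/folder")

print(uri.scheme)    # scheme expected by the connector: "hdfs"
print(uri.hostname)  # NameNode host: "my.hdfs.namenode-hostname"
print(uri.port)      # NameNode RPC port: 8020
print(uri.path)      # path within the file system: "/mydata/folder"
```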
Features
Conduit makes it easy to connect your data to your favorite BI and data science tools, including Power BI. Your data becomes approachable and interactive in a matter of minutes, no matter where it's stored.
Data aggregation and JOINs
Access your data in real time. Conduit allows you to connect in DirectQuery mode instead of Power BI's standard import mode, which limits your data refreshes per day.
Advanced Parquet Store cache for fast performance, with configurable expiration and re-caching.
Pick only the specific columns needed for reporting to speed things up even more.
Built-in data governance and security controls. Flexible yet robust.
On this page:
- 1 Features
- 2 Prerequisites
- 3 Create Connector
- 3.1 Datasources
- 3.2 Authentication
- 3.3 Publish
- 3.4 Virtualization
- 3.5 Authorization
- 3.6 Advanced
- 3.7 Endpoints
Prerequisites
If you haven’t already done so, be sure to sign up for a Conduit account. Try the power and flexibility of Conduit firsthand with a free trial.
For your HDFS datasource, have the following handy:
relative path to a file or folder you would like to connect to in BI tools
Create Connector
Connectors can be created from the main dashboard. To create a new connector, click the "Add New Connector" button, then select the desired connector type to load the wizard for configuring the new connector.
There are a few basic steps to getting any connector up and running:
Define your datasource
Configure access
Select what data you want to make available via the connector
Configure virtualization and caching options
Datasources
Define your connector name and path.
Connector Name
Required
Will be used to identify published tables
Only lowercase letters, numbers and underscore symbols are allowed
Can be changed only before the connector is saved
Description
Optional field for notes about connector; visible in Conduit only
Can be changed at any point
Path
Required
Relative path to a file or folder you would like to connect to in BI tools
Can be changed only before the connector is saved
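Since the Path field is relative, the effective location is the datasource's base URI joined with this relative path. A hedged sketch of that composition (the helper name and the sample base URI and relative path are invented for illustration):

```python
import posixpath

def resolve_path(base_uri: str, relative_path: str) -> str:
    """Join a datasource base URI with a connector's relative path."""
    # Normalize slashes on both sides so the join produces exactly one "/".
    return posixpath.join(base_uri.rstrip("/"), relative_path.lstrip("/"))

# Hypothetical values for illustration only.
full = resolve_path("hdfs://my.hdfs.namenode-hostname:8020/mydata", "reports/2021")
print(full)  # hdfs://my.hdfs.namenode-hostname:8020/mydata/reports/2021
```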
Click Next button (blue right arrow) to go to the Authentication tab to continue configuring your connector.
To cancel connector creation, click Close button.
Authentication
Define how external BI users should be authorized by Conduit to access specific data and how Conduit connects to the datasource.
Select Authentication Method for external users connecting to Conduit:
Anonymous with Impersonation
Anyone with the connector link has read access to all tables/data published through the connector
BI users are not required to provide any form of credentials
Default option
Conduit Authentication with Impersonation
Allows Conduit Admins to configure data access only to users from specific Conduit Group(s)
BI users are required to provide credentials that are looked up by Conduit in its user database
Active Directory with Impersonation
Allows Conduit Admins to configure data access only for users from specific Active Directory Group(s) for a selected User Subscription. Access to the datasource is performed using Conduit's authentication credentials.
Click Next button to go to the next tab to continue configuring your connector.
To cancel connector creation, click Close button.
Publish
Select what data will be available to the BI users. Choose to publish one or more files and/or folders.
On the Publish tab, individual files and/or folders can be selected for publishing.
To explore the folder structure, click the black arrow(s) to expand datasource node(s)
Use Search to find specific fields you would like to select. Please note that search finds only items in expanded nodes.
Selecting several files in the same folder with the same schema and file type will result in one table with all the files appended
the closest parent folder name will be used for identification
Selecting an entire folder (or subfolder) indicates that the selection should be treated in "folder mode", so the source folder can be configured as a Dynamic or Hive-Compatible folder
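The append behavior for same-schema files can be pictured as follows; a simplified sketch using Python's csv module (the function name, file contents, and column names are invented for illustration):

```python
import csv
import io

def append_csv_files(files, has_header=True):
    """Append same-schema CSV files into one table (header + list of rows)."""
    header, rows = None, []
    for f in files:
        file_rows = list(csv.reader(f))
        if has_header:
            header = file_rows[0]      # identical schema assumed across files
            file_rows = file_rows[1:]  # skip the repeated header row
        rows.extend(file_rows)
    return header, rows

# Two files in the same folder with the same schema become one table:
part1 = io.StringIO("id,amount\n1,10\n2,20\n")
part2 = io.StringIO("id,amount\n3,30\n")
header, rows = append_csv_files([part1, part2])
print(header)     # ['id', 'amount']
print(len(rows))  # 3
```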
To continue configuring connector properties, click Next button.
To cancel connector creation, click Close button.
Virtualization
On Virtualization tab you can configure the following:
Enable Query Caching
When enabled, Conduit will store query results for all queries for the connector's datasets so that when the exact same query is called again, the query results will be returned from memory
Result sets exceeding one page of retrieved records (10,000 for Power BI) will not be cached, to avoid out-of-memory errors
Recommended to enable when expensive queries are expected and/or when underlying data is not expected to change often
Caching expiration is 24 hours by default, and can be customized for each connector's dataset as needed
Enable Connector Caching
When enabled, Conduit will create a temporary, secure Parquet store of all the connector's datasets for quick future access
Recommended to enable for large datasets and/or when expensive queries are expected
Selected tables for the connector will be cached in the Parquet store. All queries for this connector will be run against the Parquet store
Caching expiration is 24 hours by default, and can be customized for each connector's dataset as needed
When connector data is cached, query results for small and medium result sets will also be cached in memory to further enhance performance. The Query Cache expires with the data cache
A list of stored Parquet files and their expected expiration times is available on the Performance > Parquet Store page
The Conduit SQL Query engine is enabled by default; it is needed to parse all the SQL queries generated by the BI tools.
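The query-caching behavior described above can be approximated as a keyed cache with a time-to-live and a size guard. A simplified in-memory sketch (the 24-hour default and one-page limit come from the text above; the class and method names are invented):

```python
import time

class QueryCache:
    """In-memory query result cache with TTL, mimicking Query Caching."""

    def __init__(self, ttl_seconds=24 * 3600, max_rows=10_000):
        self.ttl = ttl_seconds
        self.max_rows = max_rows  # result sets larger than one page are not cached
        self._store = {}          # query text -> (expires_at, rows)

    def get(self, query):
        entry = self._store.get(query)
        if entry is None:
            return None
        expires_at, rows = entry
        if time.time() >= expires_at:  # expired: evict and report a miss
            del self._store[query]
            return None
        return rows

    def put(self, query, rows):
        if len(rows) > self.max_rows:  # OOM guard: skip oversized result sets
            return
        self._store[query] = (time.time() + self.ttl, rows)

cache = QueryCache(ttl_seconds=60)
cache.put("SELECT * FROM sales", [("a", 1), ("b", 2)])
print(cache.get("SELECT * FROM sales"))  # [('a', 1), ('b', 2)]
```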
Authorization
Configure access for a selected Authentication type.
If you selected "Conduit Authentication with Impersonation" or "Active Directory with Impersonation" on the Authentication tab, here you can configure which Conduit Group(s) or Active Directory Group(s) are granted access to the published table(s).
By default Authorization is not enabled, meaning all authenticated users will have access to all published tables for a given connector.
To enforce Authorization, click Enable Authorization
From the group list you can select which group(s) should be granted access to the connector
Access is granted on a table level.
If you need some group(s) to have access to certain fields from table A, and other group(s) to have access to another set of fields from the same table A, please create two connectors with pruned versions of table A, one for each permission case.
If Authorization is enabled but no groups are selected, the connector's tables will be accessible to no one.
Only Admins are allowed to view and modify Authorization tab.
Authentication type and Authorization configuration can be changed at any time. If permissions are revoked, the data will no longer be accessible to external user(s), and connectors to restricted tables will no longer appear in the connector list in BI tools.
Advanced
Fine-tune how your selections should be published.
For each table the following can be configured:
Alias
A user-friendly table name to be used to identify published tables by external users.
Optional. If not specified, the real file name or immediate parent folder name will be used for identification
File Type
File type of the file (or files if these are expected to be appended into one table)
If file type is CSV, TSV, PDV, CDV or SCDV, First Row Header option will be added, checked by default
Cache options
Cache now
Displayed when Connector Cache enabled on Virtualization tab; disabled by default
Conduit will initiate caching of the data source on connector save to avoid waiting for cache upon initial query
Auto refresh
Displayed when Connector Cache enabled on Virtualization tab; enabled by default
Conduit will re-cache connector in Parquet Store when existing data cache expires
Caching Expiration
Displayed when Cache Query or Connector Cache has been enabled on Virtualization tab
Default cache expiration time is 24 hours, can be customized for each connector’s dataset as needed
Connectors to large datasets would benefit from having less frequent caching
When the cache expires, it is re-created either automatically (if the Auto refresh option is enabled) or on the next query (if the Auto refresh option is disabled)
Folder options - available if an entire folder has been selected
Static
Conduit will build a static list of files at connector setup time and new files will be ignored at query time.
Dynamic Folder
Conduit will recursively traverse the folder structure and build a list of all files in this folder tree at query time, so new files flowing into the folder structure are always going to be included in the query.
Hive-Compatible Folder Layout
Conduit will read flat files from storage; if the files are grouped in folders with names in the "fieldName=fieldValue" format (for example date=12/09/2018, date=12/10/2018, etc.), Spark can read this "Hive-compatible" layout with predicate pushdown for queries involving these fields, providing large performance gains for filtering and aggregation queries.
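A Hive-compatible layout encodes field values directly in folder names, which is what allows the engine to skip whole folders via predicate pushdown. A minimal sketch of extracting those partition fields from a path (the function name, path, and field names are made up for illustration):

```python
def parse_partitions(path):
    """Extract fieldName=fieldValue pairs from a Hive-style folder path."""
    parts = {}
    for segment in path.strip("/").split("/"):
        if "=" in segment:  # only partition-style segments carry field values
            name, _, value = segment.partition("=")
            parts[name] = value
    return parts

# Hypothetical Hive-compatible folder path:
print(parse_partitions("/mydata/sales/date=2018-12-09/region=east/part-0001.csv"))
# {'date': '2018-12-09', 'region': 'east'}
```

A query filtering on `date` or `region` can then be answered by reading only the matching folders instead of scanning every file.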
Endpoints
This page contains the endpoints for the newly created connector that you can use to access the data from different applications:
JDBC/ODBC/Thrift Endpoint – to connect to dataset(s) defined on the connector from various BI and data science tools
Power BI Spark Connector – to connect to dataset(s) defined on the connector from Power BI
Tableau Spark Connector – to connect to dataset(s) defined on the connector from Tableau
Qlik Spark Thrift Connector – to connect to dataset(s) defined on the connector from Qlik Sense
REST Endpoint – to connect to dataset(s) defined on the connector via REST API