Cloudera JDBC Driver for Apache Hive Install Guide
Cloudera, the Cloudera logo, and any other product or service names or slogans contained in this
document, except as otherwise disclaimed, are trademarks of Cloudera and its suppliers or
licensors, and may not be copied, imitated or used, in whole or in part, without the prior written
permission of Cloudera or the applicable trademark holder.
Hadoop and the Hadoop elephant logo are trademarks of the Apache Software Foundation. All
other trademarks, registered trademarks, product names and company names or logos
mentioned in this document are the property of their respective owners. Reference to any
products, services, processes or other information, by trade name, trademark, manufacturer,
supplier or otherwise does not constitute or imply endorsement, sponsorship or
recommendation thereof by us.
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the
rights under copyright, no part of this document may be reproduced, stored in or introduced into
a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written
permission of Cloudera.
Cloudera may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Cloudera, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.
The information in this document is subject to change without notice. Cloudera shall not be liable
for any damages resulting from technical errors or omissions which may be present in this
document, or from use of this document.
Cloudera, Inc.
1001 Page Mill Road, Building 2
Palo Alto, CA 94304-1008
[email protected]
US: 1-888-789-1488
Intl: 1-650-843-0595
www.cloudera.com
Release Information
Version: 2.6.5
The Cloudera JDBC Driver for Apache Hive complies with the JDBC 4.0 and 4.1 data standards.
JDBC is one of the most established and widely supported APIs for connecting to and working with
databases. At the heart of the technology is the JDBC driver, which connects an application to the
database. For more information about JDBC, see Data Access Standards on the Simba
Technologies website: https://www.simba.com/resources/data-access-standards-glossary.
This guide is suitable for users who want to access data residing within Hive from their desktop
environment. Application developers might also find the information helpful. Refer to your
application for details on connecting via JDBC.
System Requirements
Each machine where you use the Cloudera JDBC Driver for Apache Hive must have Java Runtime
Environment (JRE) installed. The version of JRE that must be installed depends on the version of
the JDBC API you are using with the driver. The following table lists the required version of JRE for
each provided version of the JDBC API.
The driver is recommended for Apache Hive versions 0.11 through 3.1, and CDH versions 5.0
through 5.15. The driver also supports later minor versions of CDH 5.
The archive contains the driver supporting the JDBC API version indicated in the archive name, as
well as release notes and third-party license information. In addition, the required third-party
libraries and dependencies are packaged and shared in the driver JAR file in the archive.
To access a Hive data store using the Cloudera JDBC Driver for Apache Hive, you need to configure
the following:
l The list of driver library files (see "Referencing the JDBC Driver Libraries" on page 8)
l The Driver or DataSource class (see "Registering the Driver Class" on page 9)
l The connection URL for the driver (see "Building the Connection URL" on page 10)
Most JDBC applications provide a set of configuration options for adding a list of driver library
files. Use the provided options to include all the JAR files from the ZIP archive as part of the driver
configuration in the application. For more information, see the documentation for your JDBC
application.
You must include all the driver library files in the class path. This is the path that the Java Runtime
Environment searches for classes and other resource files. For more information, see "Setting the
Class Path" in the appropriate Java SE Documentation.
For Java SE 8:
l For Windows:
http://docs.oracle.com/javase/8/docs/technotes/tools/windows/classpath.html
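For example, on Windows with Java SE 8 you might include every driver JAR with a wildcard class
path entry such as the following (the install folder shown is a placeholder, not the actual archive
name):
set CLASSPATH=%CLASSPATH%;C:\Cloudera_HiveJDBC41\*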
The following is a list of the classes used to connect the Cloudera JDBC Driver for Apache Hive to
Hive data stores. The Driver classes extend java.sql.Driver, and the DataSource
classes extend javax.sql.DataSource and
javax.sql.ConnectionPoolDataSource.
To support JDBC 4.0, classes with the following fully-qualified class names (FQCNs) are available:
l com.cloudera.hive.jdbc4.HS1Driver
l com.cloudera.hive.jdbc4.HS2Driver
l com.cloudera.hive.jdbc4.HS1DataSource
l com.cloudera.hive.jdbc4.HS2DataSource
To support JDBC 4.1, classes with the following FQCNs are available:
l com.cloudera.hive.jdbc41.HS1Driver
l com.cloudera.hive.jdbc41.HS2Driver
l com.cloudera.hive.jdbc41.HS1DataSource
l com.cloudera.hive.jdbc41.HS2DataSource
The following sample code shows how to use the DriverManager to establish a connection for
JDBC 4.0:
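(The listing below is a minimal sketch rather than the original sample. It assumes the JDBC 4.0
HS2Driver class listed above, a Hive Server 2 instance on the local machine, and placeholder
schema and credentials.)
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DriverManagerExample
{
    private static final String DRIVER_CLASS = "com.cloudera.hive.jdbc4.HS2Driver";
    private static final String CONNECTION_URL =
        "jdbc:hive2://localhost:10000/default;AuthMech=3;UID=cloudera;PWD=cloudera";

    public static void main(String[] args) throws Exception
    {
        // Register the driver class, then ask the DriverManager for a connection.
        Class.forName(DRIVER_CLASS);
        Connection connection = DriverManager.getConnection(CONNECTION_URL);

        // Run a simple query to verify that the connection works.
        Statement statement = connection.createStatement();
        ResultSet resultSet = statement.executeQuery("SHOW TABLES");
        while (resultSet.next())
        {
            System.out.println(resultSet.getString(1));
        }
        connection.close();
    }
}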
The following sample code shows how to use the DataSource class to establish a connection:
private static Connection connectViaDS() throws Exception
{
    Connection connection = null;
    Class.forName(DRIVER_CLASS);
    HS2DataSource ds = new HS2DataSource(); // assumed JDBC 4.0 DataSource class
    ds.setURL(CONNECTION_URL);              // assumed URL constant defined elsewhere
    connection = ds.getConnection();
    return connection;
}
Note:
By default, the driver uses the schema named default and authenticates the connection using the
user name anonymous.
You can specify optional settings such as the name of the schema to use or any of the
connection properties supported by the driver. For a list of the properties available in the driver,
see "Driver Configuration Options" on page 93.
Note:
If you specify a property that is not supported by the driver, then the driver attempts to apply
the property as a Hive server-side property for the client session. For more information, see
"Configuring Server-Side Properties" on page 27.
The following is the format of a connection URL that specifies some optional settings:
jdbc:[Subprotocol]://[Host]:[Port]/[Schema];[Property1]=[Value];
[Property2]=[Value];...
For example, to connect to port 11000 on a Hive Server 2 instance installed on the local machine,
use a schema named default2, and authenticate the connection using a user name and password,
you would use the following connection URL:
jdbc:hive2://localhost:11000/default2;AuthMech=3;
UID=cloudera;PWD=cloudera
Note:
l If you specify a schema in the connection URL, you can still issue queries on other
schemas by explicitly specifying the schema in the query. To inspect your databases and
determine the appropriate schema to use, run the show databases command at the
Hive command prompt.
l If you set the transportMode property to http, then the port number specified in the
connection URL corresponds to the HTTP port rather than the TCP port. By default, Hive
servers use 10001 as the HTTP port number.
Configuring Authentication
The Cloudera JDBC Driver for Apache Hive supports the following authentication mechanisms:
l No Authentication
l Kerberos
l User Name
l User Name And Password
l Hadoop Delegation Token
You configure the authentication mechanism that the driver uses to connect to Hive by specifying
the relevant properties in the connection URL.
For information about selecting an appropriate authentication mechanism when using the
Cloudera JDBC Driver for Apache Hive, see "Authentication Mechanisms" on page 17.
For information about the properties you can use in the connection URL, see "Driver
Configuration Options" on page 93.
Note:
In addition to authentication, you can configure the driver to connect over SSL. For more
information, see "Configuring SSL" on page 25.
Using No Authentication
Note:
When connecting to a Hive server of type Hive Server 1, you must use No Authentication.
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
For example:
jdbc:hive2://localhost:10000;AuthMech=0;transportMode=binary;
Using Kerberos
Kerberos must be installed and configured before you can use this authentication mechanism. For
information about configuring and operating Kerberos on Windows, see "Configuring Kerberos
Authentication for Windows" on page 19. For other operating systems, see the MIT Kerberos
documentation: http://web.mit.edu/kerberos/krb5-latest/doc/.
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
Note:
If your Kerberos setup does not define a default realm or if the realm of your Hive server is
not the default, then set the KrbRealm property to the realm of the Hive server.
3. Set the KrbHostFQDN property to the fully qualified domain name of the Hive server host.
4. Optionally, specify how the driver obtains the Kerberos Subject by setting the
KrbAuthType property as follows:
l To configure the driver to automatically detect which method to use for obtaining
the Subject, set the KrbAuthType property to 0. Alternatively, do not set the
KrbAuthType property.
l Or, to create a LoginContext from a JAAS configuration and then use the Subject
associated with it, set the KrbAuthType property to 1.
l Or, to create a LoginContext from a Kerberos ticket cache and then use the Subject
associated with it, set the KrbAuthType property to 2.
For more detailed information about how the driver obtains Kerberos Subjects based on
these settings, see "KrbAuthType" on page 96.
For example, the following connection URL connects to a Hive server with Kerberos enabled, but
without SSL enabled:
jdbc:hive2://node1.example.com:10000;AuthMech=1;
KrbRealm=EXAMPLE.COM;KrbHostFQDN=hs2node1.example.com;
KrbServiceName=hive;KrbAuthType=2
In this example, Kerberos is enabled for JDBC connections, the Kerberos service principal name is
hive/[email protected], the host name for the data source is
node1.example.com, and the server is listening on port 10000 for JDBC connections.
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
Note:
This authentication mechanism is available only for Hive Server 2. Most default configurations of
Hive Server 2 require User Name authentication.
For example:
jdbc:hive2://node1.example.com:10000;AuthMech=2;
transportMode=sasl;UID=hs2
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
For example, the following connection URL connects to a Hive server with LDAP authentication
enabled, but without SSL or SASL enabled:
jdbc:hive2://node1.example.com:10001;AuthMech=3;
transportMode=http;httpPath=cliservice;UID=hs2;PWD=cloudera;
In this example, user name and password (LDAP) authentication is enabled for JDBC connections,
the LDAP user name is hs2, the password is cloudera, and the server is listening on port 10001 for
JDBC connections.
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
For example:
jdbc:hive2://node1.example.com:10000;AuthMech=6;
delegationToken=kP9PcyQ7prK2LwUMZMpFQ4R+5VE
If you are using a Hadoop delegation token for authentication, the token must be provided to the
driver in the form of a Base64 URL-safe encoded string. This token can be obtained from the driver
using the getDelegationToken() function, or by utilizing the Hadoop distribution .jar
files.
The code samples below demonstrate the use of the getDelegationToken() function. For
more information about this function, see "IHadoopConnection" on page 35.
The sample below shows how to obtain the token string with the driver using a Kerberos
connection:
import com.cloudera.hiveserver2.hivecommon.core.IHadoopConnection;
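A minimal sketch of how such a sample might continue, assuming a Kerberos-enabled Hive Server 2
(the realm, host names, and the owner and renewer values are placeholders):
// Establish a Kerberos-authenticated connection (AuthMech=1).
Connection kerberosConnection = DriverManager.getConnection(
    "jdbc:hive2://node1.example.com:10000;AuthMech=1;KrbRealm=EXAMPLE.COM;"
    + "KrbHostFQDN=node1.example.com;KrbServiceName=hive;");

// Unwrap the driver's connection class and request a delegation token.
IHadoopConnection hadoopConnection = (IHadoopConnection) kerberosConnection;
String tokenString = hadoopConnection.getDelegationToken("owner", "renewer");

// The encoded token string can now be passed to a second connection that uses
// AuthMech=6 and the delegationToken property.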
The sample below demonstrates how to obtain the encoded string form of the token if the
delegation is saved to the UserGroupInformation. This sample requires the hadoop-shims-
common-[hadoop version].jar, hadoop-common-[hadoop version].jar, and all
their dependencies.
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.hive.shims.Utils;
import org.apache.hive.service.auth.HiveAuthFactory;

public class TestHadoopDelegationTokenClass
{
    public static void main(String[] args) throws Exception
    {
        // Obtain the delegation token stored in the current UserGroupInformation.
        String delegationToken =
            Utils.getTokenStrForm(HiveAuthFactory.HS2_CLIENT_TOKEN);

        // Sketch: pass the token to the driver with AuthMech=6 (the host and
        // port are placeholders).
        String tokenConnectionString =
            "jdbc:hive2://localhost:10000/default;AuthMech=6;delegationToken="
            + delegationToken;
        Connection tokenConnection =
            DriverManager.getConnection(tokenConnectionString);
    }
}
Authentication Mechanisms
To connect to a Hive server, you must configure the Cloudera JDBC Driver for Apache Hive to use
the authentication mechanism that matches the access requirements of the server and provides
the necessary credentials. To determine the authentication settings that your Hive server
requires, check the server configuration and then refer to the corresponding section below.
Hive Server 1
Hive Server 1 does not support authentication. You must configure the driver to use No
Authentication (see "Using No Authentication" on page 12).
Hive Server 2
Most default configurations of Hive Server 2 require User Name authentication. If you are unable
to connect to your Hive server using User Name authentication, then verify the authentication
mechanism configured for your Hive server by examining the hive-site.xml file. Examine the
following properties to determine which authentication mechanism your server is set to use:
The following table lists the authentication mechanisms to configure for the driver based on the
settings in the hive-site.xml file.
Note:
For more information about authentication mechanisms, refer to the documentation for your
Hadoop / Hive distribution. See also "Running Hadoop in Secure Mode" in the Apache Hadoop
documentation: http://hadoop.apache.org/docs/r0.23.7/hadoop-project-dist/hadoop-
common/ClusterSetup.html#Running_Hadoop_in_Secure_Mode.
Using No Authentication
Using Kerberos
If no user name is specified in the driver configuration, then the driver defaults to using hive as
the user name.
When connecting to a Hive Server 2 instance and the server is configured to use the SASL-PLAIN
authentication mechanism with a user name and a password, you must configure your
connection to use User Name And Password authentication.
Note:
The 64-bit installer includes both 32-bit and 64-bit libraries. The 32-bit installer includes 32-
bit libraries only.
2. To run the installer, double-click the .msi file that you downloaded.
3. Follow the instructions in the installer to complete the installation process.
4. When the installation completes, click Finish.
You must set the KRB5CCNAME environment variable to your credential cache file.
If the authentication succeeds, then your ticket information appears in the MIT Kerberos Ticket
Manager.
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
For detailed information about these properties, see "Driver Configuration Options" on page 93.
To enable the driver to get Ticket Granting Tickets (TGTs) directly, make sure that the
KRB5CCNAME environment variable has not been set.
For example:
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="PathToTheKeyTab"
    principal="cloudera@CLOUDERA"
    doNotPrompt=true;
};
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
For detailed information about these properties, see "Driver Configuration Options" on
page 93.
If the client application obtains a Subject with a TGT, then that Subject can be used to
authenticate the connection to the server.
For example:
import java.security.PrivilegedAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Contains logic to be executed as a privileged action
public class AuthenticateDriverAction implements PrivilegedAction<Void>
{
    // The connection, which is established as a PrivilegedAction
    Connection con;

    // Define a string as the connection URL
    static String ConnectionURL = "jdbc:hive2://192.168.1.1:10000";

    /**
     * Logic executed in this method will have access to the
     * Subject that is used to "doAs". The driver will get
     * the Subject and use it for establishing a connection
     * with the server.
     */
    @Override
    public Void run()
    {
        try
        {
            // Establish a connection using the connection URL
            con = DriverManager.getConnection(ConnectionURL);
        }
        catch (SQLException e)
        {
            // Handle errors that are encountered during
            // interaction with the data store
            e.printStackTrace();
        }
        catch (Exception e)
        {
            // Handle other errors
            e.printStackTrace();
        }
        return null;
    }
}
2. Run the PrivilegedAction using the existing Subject, and then use the connection.
For example:
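A minimal sketch, assuming the application has already obtained a Kerberos-authenticated
javax.security.auth.Subject (for example, from a LoginContext) in a variable named subject:
// Run the privileged action as the authenticated Subject. The driver picks up
// the Subject from the inherited AccessControlContext when it connects.
Subject.doAs(subject, new AuthenticateDriverAction());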
l To enable Kerberos-layer debugging, add the following argument to the Java command
line that starts your application:
-Dsun.security.krb5.debug=true
l Or, add the following code to the source code of your application:
System.setProperty("sun.security.krb5.debug","true")
l Or, add the following code to the source code of your application:
System.setProperty("com.ibm.security.krb5.Krb5Debug","all");
System.setProperty("com.ibm.security.jgss.debug","all");
Important:
Consult your company’s policy to make sure that you are allowed to enable encryption
strengths in your environment that are greater than what the JVM allows by default.
If the issue is not resolved after you install the JCE policy files extension, then restart your
machine and try your connection again. If the issue persists even after you restart your machine,
then verify which directories the JVM is searching to find the JCE policy files extension. To print
out the search paths that your JVM currently uses to find the JCE policy files extension, modify
your Java source code to print the return value of the following call:
System.getProperty("java.ext.dirs")
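For example, a one-line way to print that value from your application (a sketch; place it anywhere
before the connection is attempted):
System.out.println(System.getProperty("java.ext.dirs"));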
Configuring SSL
Note:
In this documentation, "SSL" indicates both TLS (Transport Layer Security) and SSL (Secure
Sockets Layer). The driver supports industry-standard versions of TLS/SSL.
If you are connecting to a Hive server that has Secure Sockets Layer (SSL) enabled, you can
configure the driver to connect to an SSL-enabled socket. When connecting to a server over SSL,
the driver uses one-way authentication to verify the identity of the server.
One-way authentication requires a signed, trusted SSL certificate for verifying the identity of the
server. You can configure the driver to access a specific TrustStore or KeyStore that contains the
appropriate certificate. If you do not specify a TrustStore or KeyStore, then the driver uses the
default Java TrustStore named jssecacerts. If jssecacerts is not available, then the driver
uses cacerts instead.
You provide this information to the driver in the connection URL. For more information about the
syntax of the connection URL, see "Building the Connection URL" on page 10.
To configure SSL:
1. Set the SSL property to 1.
2. If you are not using one of the default Java TrustStores, then do one of the following:
l Create a TrustStore and configure the driver to use it:
a. Create a TrustStore containing your signed, trusted server certificate.
b. Set the SSLTrustStore property to the full path of the TrustStore.
c. Set the SSLTrustStorePwd property to the password for accessing the
TrustStore.
l Or, create a KeyStore and configure the driver to use it:
a. Create a KeyStore containing your signed, trusted server certificate.
b. Set the SSLKeyStore property to the full path of the KeyStore.
c. Set the SSLKeyStorePwd property to the password for accessing the
KeyStore.
3. Optionally, to allow the SSL certificate used by the server to be self-signed, set the
AllowSelfSignedCerts property to 1.
4. Optionally, to allow the common name of a CA-issued certificate to not match the host
name of the Hive server, set the CAIssuedCertsMismatch property to 1.
For example, the following connection URL connects to a data source using username and
password (LDAP) authentication, with SSL enabled:
jdbc:hive2://localhost:10000;AuthMech=3;SSL=1;
SSLKeyStore=C:\\Users\\bsmith\\Desktop\\keystore.jks;
SSLKeyStorePwd=clouderaSSL123;UID=hs2;PWD=cloudera123
Note:
For more information about the connection properties used in SSL connections, see "Driver
Configuration Options" on page 93.
For example, to set the mapreduce.job.queuename property to myQueue, you would use a
connection URL such as the following:
jdbc:hive2://localhost:18000/default2;AuthMech=3;
UID=cloudera;PWD=cloudera;mapreduce.job.queuename=myQueue
Note:
For a list of all Hadoop and Hive server-side properties that your implementation supports, run
the set -v command at the Hive CLI command line or Beeline. You can also execute the set
-v query after connecting using the driver.
Configuring Logging
To help troubleshoot issues, you can enable logging in the driver.
Important:
Only enable logging long enough to capture an issue. Logging decreases performance and can
consume a large quantity of disk space.
In the connection URL, set the LogLevel key to enable logging at the desired level of detail. The
following table lists the logging levels provided by the Cloudera JDBC Driver for Apache Hive, in
order from least verbose to most verbose.
2 Log error events that might allow the driver to continue running.
To enable logging:
1. Set the LogLevel property to the desired level of information to include in log files.
2. Set the LogPath property to the full path to the folder where you want to save log files. To
make sure that the connection URL is compatible with all JDBC applications, escape the
backslashes (\) in your file path by typing another backslash.
For example, the following connection URL enables logging level 3 and saves the log files in
the C:\temp folder:
jdbc:hive2://localhost:11000;LogLevel=3;LogPath=C:\\temp
3. To make sure that the new settings take effect, restart your JDBC application and reconnect
to the server.
The Cloudera JDBC Driver for Apache Hive produces the following log files in the location specified
in the LogPath property:
l A HiveJDBC_driver.log file that logs driver activity that is not specific to a
connection.
If the LogPath value is invalid, then the driver sends the logged information to the standard
output stream (System.out).
To disable logging:
1. Set the LogLevel property to 0.
2. To make sure that the new setting takes effect, restart your JDBC application and reconnect
to the server.
Features
More information is provided on the following features of the Cloudera JDBC Driver for Apache
Hive:
l "SQL Query versus HiveQL Query" on page 30
l "Data Types" on page 30
l "Catalog and Schema Support" on page 31
l "Write-back" on page 31
l "IHadoopStatement" on page 32
l "IHadoopConnection" on page 35
l "Security and Authentication" on page 38
l "Interfaces and Supported Methods" on page 38
Data Types
The Cloudera JDBC Driver for Apache Hive supports many common data formats, converting
between Hive, SQL, and Java data types.
The aggregate types (ARRAY, MAP, STRUCT, and UNIONTYPE) are not yet supported. Columns of
aggregate types are treated as VARCHAR columns in SQL and STRING columns in Java.
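For example, the following sketch (the table and column names are hypothetical) reads a MAP
column through the driver; because aggregate types surface as VARCHAR, the value is retrieved
with getString():
ResultSet resultSet = statement.executeQuery("SELECT prefs FROM example_table");
while (resultSet.next())
{
    // The MAP column arrives as text, for example {"clicks":3,"views":10}.
    String prefsAsText = resultSet.getString("prefs");
    System.out.println(prefsAsText);
}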
Write-back
The Cloudera JDBC Driver for Apache Hive supports translation for the following syntax when
connecting to a Hive Server 2 instance that is running Hive 0.14 or later:
l INSERT
l UPDATE
l DELETE
l CREATE
l DROP
If the statement contains non-standard SQL-92 syntax, then the driver is unable to translate the
statement to SQL and instead falls back to using HiveQL.
IHadoopStatement
IHadoopStatement is an interface implemented by the driver's statement class. It provides access
to methods that allow for asynchronous execution of queries and the retrieval of the Yarn ATS
GUID associated with the execution.
 *
 * @param sql An SQL statement to be sent to the database, typically a
 *            static SQL SELECT statement.
 *
 * @return A ResultSet object that DOES NOT contain the data produced by the
 *         given query; never null.
 *
 * @throws SQLException If a database access error occurs, or the given SQL
 *                      statement produces anything other than a single
 *                      <code>ResultSet</code> object.
 */
public ResultSet executeAsync(String sql) throws SQLException;

/**
 * Returns the Yarn ATS guid.
 *
 * @return String The yarn ATS guid from the operation if execution has
 *         started, else null.
 */
public String getYarnATSGuid();
}
The driver sends a request to the server for statement execution and returns immediately
after receiving a response from the server for the execute request without waiting for the
server to complete the execution.
The driver does not wait for the server to complete query execution unless the getMetaData()
or next() APIs are called.
Note that this feature does not work with prepared statements.
For example:
import com.cloudera.hiveserver2.hivecommon.core.IHadoopStatement;
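A minimal sketch of asynchronous execution, assuming a connection has already been established
and that a table named example_table exists:
// Unwrap the driver's statement class to reach executeAsync().
Statement statement = connection.createStatement();
IHadoopStatement hadoopStatement = (IHadoopStatement) statement;

// The call returns as soon as the server acknowledges the execute request.
ResultSet resultSet = hadoopStatement.executeAsync("SELECT * FROM example_table");

// Calling next() (or getMetaData()) blocks until the server finishes the query.
while (resultSet.next())
{
    System.out.println(resultSet.getString(1));
}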
Returns the Yarn ATS GUID associated with the current execution. Returns null if the Yarn
ATS GUID is not available.
For example:
public class TestYarnGUIDClass
{
    public static void main(String[] args) throws SQLException
    {
        // Create the connection object. CONNECTION_URL is assumed to be a valid
        // Hive Server 2 connection URL defined elsewhere in the sample.
        Connection connection = DriverManager.getConnection(CONNECTION_URL);
        Statement statement = connection.createStatement();

        // Execute a query.
        ResultSet resultSet = statement.executeQuery("select * from example_table");

        // Retrieve the Yarn ATS GUID associated with the execution.
        String guid = ((IHadoopStatement) statement).getYarnATSGuid();
    }
}
IHadoopConnection
IHadoopConnection is an interface implemented by the driver's connection class. It provides
access to methods that allow for the retrieval, deletion, and renewal of delegation tokens.
/**
 * Sends a cancel delegation token request to the server.
 *
 * @param tokenString The token to cancel.
 * @throws SQLException If an error occurs while sending the request.
 */
public void cancelDelegationToken(String tokenString) throws SQLException;

/**
 * Sends a get delegation token request to the server and returns the token as
 * an encoded string.
 *
 * @param owner The owner of the token.
 * @param renewer The renewer of the token.
 *
 * @return The token as an encoded string.
 * @throws SQLException If an error occurs while getting the token.
 */
public String getDelegationToken(String owner, String renewer) throws SQLException;

/**
 * Sends a renew delegation token request to the server.
 *
 * @param tokenString The token to renew.
 * @throws SQLException If an error occurs while sending the request.
 */
public void renewDelegationToken(String tokenString) throws SQLException;
}
l getDelegationToken()
The driver sends a request to the server to obtain a delegation token with the given owner
and renewer.
l cancelDelegationToken()
The driver sends a request to the server to cancel the provided delegation token.
l renewDelegationToken()
The driver sends a request to the server to renew the provided delegation token.
The following is a basic code sample that demonstrates how to use the above functions:
public class TestDelegationTokenClass
{
    public static void main(String[] args) throws SQLException
    {
        // Create the connection object with Kerberos authentication.
        Connection kerbConnection = DriverManager.getConnection(
            "jdbc:hive2://localhost:10000;AuthMech=1;KrbRealm=YourRealm;"
            + "KrbHostFQDN=sample.com;KrbServiceName=hive;");

        // Sketch of the remainder: obtain, renew, and cancel a delegation token.
        // The owner and renewer names are placeholders.
        String token = ((IHadoopConnection) kerbConnection)
            .getDelegationToken("owner", "renewer");
        ((IHadoopConnection) kerbConnection).renewDelegationToken(token);
        ((IHadoopConnection) kerbConnection).cancelDelegationToken(token);
    }
}
Note:
In this documentation, "SSL" indicates both TLS (Transport Layer Security) and SSL (Secure
Sockets Layer). The driver supports industry-standard versions of TLS/SSL.
The driver provides mechanisms that allow you to authenticate your connection using the
Kerberos protocol, your Hive user name only, or your Hive user name and password. You must
use the authentication mechanism that matches the security requirements of the Hive server. For
information about determining the appropriate authentication mechanism to use based on the
Hive server configuration, see "Authentication Mechanisms" on page 17. For detailed driver
configuration instructions, see "Configuring Authentication" on page 12.
Additionally, the driver supports SSL connections with one-way authentication. If the server has
an SSL-enabled socket, then you can configure the driver to connect to it.
It is recommended that you enable SSL whenever you connect to a server that is configured to
support it. SSL encryption protects data and credentials when they are transferred over the
network, and provides stronger security than authentication alone. For detailed configuration
instructions, see "Configuring SSL" on page 25.
The SSL version that the driver supports depends on the JVM version that you are using. For
information about the SSL versions that are supported by each version of Java, see "Diagnosing
TLS, SSL, and HTTPS" on the Java Platform Group Product Management Blog:
https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https.
Note:
The SSL version used for the connection is the highest version that is supported by both the
driver and the server, which is determined at connection time.
However, the driver does not support every method from these interfaces. For information about
whether a specific method is supported by the driver and which version of the JDBC API is the
earliest version that supports the method, refer to the following sections.
l Array
l Blob
l Clob
l Ref
l Savepoint
l SQLData
l SQLInput
l SQLOutput
l Struct
CallableStatement
The following table lists the methods that belong to the CallableStatement interface, and
describes whether each method is supported by the Cloudera JDBC Driver for Apache Hive and
which version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the CallableStatement interface, see the
Java API documentation:
http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/CallableStatement.html.
Connection
The following table lists the methods that belong to the Connection interface, and describes
whether each method is supported by the Cloudera JDBC Driver for Apache Hive and which
version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the Connection interface, see the Java API
documentation: http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/Connection.html.
Method | Supported Since JDBC Version | Supported by the Driver | Notes
Map<String,Class<?>> getTypeMap() | 3.0 | No |
CallableStatement prepareCall(String sql) | 3.0 | No |
CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency) | 3.0 | No |
CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency, int resultSetHoldability) | 3.0 | No |
PreparedStatement prepareStatement(String sql, int autoGeneratedKeys) | 3.0 | No |
PreparedStatement prepareStatement(String sql, int[] columnIndexes) | 3.0 | No |
PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency) | 3.0 | No |
PreparedStatement prepareStatement(String sql, int resultSetType, int resultSetConcurrency, int resultSetHoldability) | 3.0 | No |
PreparedStatement prepareStatement(String sql, String[] columnNames) | 3.0 | No |
DatabaseMetaData
The following table lists the methods that belong to the DatabaseMetaData interface, and
describes whether each method is supported by the Cloudera JDBC Driver for Apache Hive and
which version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the DatabaseMetaData interface, see the Java
API
documentation: http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/DatabaseMetaData.html.
DataSource
The following table lists the methods that belong to the DataSource interface, and describes
whether each method is supported by the Cloudera JDBC Driver for Apache Hive and which
version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the DataSource interface, see the Java API
documentation: http://docs.oracle.com/javase/1.5.0/docs/api/javax/sql/DataSource.html.
Driver
The following table lists the methods that belong to the Driver interface, and describes whether
each method is supported by the Cloudera JDBC Driver for Apache Hive and which version of the
JDBC API is the earliest version that supports the method.
For detailed information about each method in the Driver interface, see the Java API
documentation: http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/Driver.html.
ParameterMetaData
The following table lists the methods that belong to the ParameterMetaData interface, and
describes whether each method is supported by the Cloudera JDBC Driver for Apache Hive and
which version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the ParameterMetaData interface, see the
Java API documentation:
http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/ParameterMetaData.html.
PooledConnection
The following table lists the methods that belong to the PooledConnection interface, and
describes whether each method is supported by the Cloudera JDBC Driver for Apache Hive and
which version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the PooledConnection interface, see the Java
API documentation:
http://docs.oracle.com/javase/1.5.0/docs/api/javax/sql/PooledConnection.html.
PreparedStatement
The following table lists the methods that belong to the PreparedStatement interface, and
describes whether each method is supported by the Cloudera JDBC Driver for Apache Hive and
which version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the PreparedStatement interface, see the Java
API documentation:
http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/PreparedStatement.html.
ResultSet
The following table lists the methods that belong to the ResultSet interface, and describes
whether each method is supported by the Cloudera JDBC Driver for Apache Hive and which
version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the ResultSet interface, see the Java API
documentation: http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/ResultSet.html.
ResultSetMetaData
The following table lists the methods that belong to the ResultSetMetaData interface, and
describes whether each method is supported by the Cloudera JDBC Driver for Apache Hive and
which version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the ResultSetMetaData interface, see the
Java API documentation:
http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/ResultSetMetaData.html.
Statement
The following table lists the methods that belong to the Statement interface, and describes
whether each method is supported by the Cloudera JDBC Driver for Apache Hive and which
version of the JDBC API is the earliest version that supports the method.
For detailed information about each method in the Statement interface, see the Java API
documentation: http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/Statement.html.
Method | Supported Since JDBC Version | Supported by the Driver | Notes
int[] executeBatch() | 3.0 | No |
You can set configuration properties using the connection URL. For more information, see
"Building the Connection URL" on page 10.
AllowSelfSignedCerts
Default Value: 0 | Data Type: Integer | Required: No
Description
This property specifies whether the driver allows the server to use self-signed SSL certificates.
l 0: The driver does not allow self-signed certificates.
l 1: The driver allows self-signed certificates.
Important:
When this property is set to 1, SSL verification is disabled. The driver does not verify the
server certificate against the trust store, and does not verify if the server's host name
matches the common name in the server certificate.
AsyncExecPollInterval
Default Value: 10 | Data Type: Integer | Required: No
Description
The time in milliseconds between each poll for the asynchronous query execution status.
"Asynchronous" refers to the fact that the RPC call used to execute a query against Hive is
asynchronous. It does not mean that JDBC asynchronous operations are supported.
AuthMech
Default Value Data Type Required
Description
The authentication mechanism to use. Set the property to one of the following values:
l 0 for No Authentication.
l 1 for Kerberos.
l 2 for User Name.
l 3 for User Name And Password.
l 6 for Hadoop Delegation Token.
CAIssuedCertsMismatch
Default Value: 0 | Data Type: Integer | Required: No
Description
This property specifies whether the driver requires the name of the CA-issued SSL certificate to
match the host name of the Hive server.
l 0: The driver requires the names to match.
l 1: The driver allows the names to mismatch.
CatalogSchemaSwitch
Default Value: 0 | Data Type: Integer | Required: No
Description
This property specifies whether the driver treats Hive catalogs as schemas or as catalogs.
l 1: The driver treats Hive catalogs as schemas as a restriction for filtering.
l 0: Hive catalogs are treated as catalogs, and Hive schemas are treated as schemas.
DecimalColumnScale
Default Value: 10 | Data Type: Integer | Required: No
Description
The maximum number of digits to the right of the decimal point for numeric data types.
DefaultStringColumnLength
Default Value: 255 | Data Type: Integer | Required: No
Description
The maximum number of characters that can be contained in STRING columns. The range of
DefaultStringColumnLength is 0 to 32767.
By default, the columns metadata for Hive does not specify a maximum data length for STRING
columns.
DelegationToken
Default Value Data Type Required
Description
This token must be provided to the driver in the form of a Base64 URL-safe encoded string. It can
be obtained from the driver using the getDelegationToken() function, or by utilizing the
Hadoop distribution .jar files.
DelegationUID
Default Value: None | Data Type: String | Required: No
Description
Use this option to delegate all operations against Hive to a user that is different than the
authenticated user for the connection.
Note:
This option is applicable only when connecting to a Hive Server 2 instance that supports this
feature.
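For example, the following connection URL (a sketch; the host, Kerberos properties, and delegated
user are placeholders) authenticates with Kerberos and then delegates all operations to the user
hue:
jdbc:hive2://node1.example.com:10000;AuthMech=1;KrbRealm=EXAMPLE.COM;
KrbHostFQDN=node1.example.com;KrbServiceName=hive;DelegationUID=hue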
httpPath
Default Value Data Type Required
Description
The driver forms the HTTP address to connect to by appending the httpPath value to the host
and port specified in the connection URL. For example, to connect to the HTTP address
http://localhost:10002/cliservice, you would use the following connection URL:
jdbc:hive2://localhost:10002;AuthMech=3;transportMode=http;
httpPath=cliservice;UID=hs2;PWD=cloudera;
KrbAuthType
Default Value: 0 | Data Type: Integer | Required: No
Description
This property specifies how the driver obtains the Subject for Kerberos authentication.
l 0: The driver automatically detects which method to use for obtaining the Subject:
1. First, the driver tries to obtain the Subject from the current thread's inherited
AccessControlContext. If the AccessControlContext contains multiple Subjects, the
driver uses the most recent Subject.
2. If the first method does not work, then the driver checks the
java.security.auth.login.config system property for a JAAS
configuration. If a JAAS configuration is specified, the driver uses that information to
create a LoginContext and then uses the Subject associated with it.
3. If the second method does not work, then the driver checks the KRB5_CONFIG and
KRB5CCNAME system environment variables for a Kerberos ticket cache. The driver
uses the information from the cache to create a LoginContext and then uses the
Subject associated with it.
l 1: The driver checks the java.security.auth.login.config system property for a
JAAS configuration. If a JAAS configuration is specified, the driver uses that information to
create a LoginContext and then uses the Subject associated with it.
l 2: The driver checks the KRB5_CONFIG and KRB5CCNAME system environment variables for
a Kerberos ticket cache. The driver uses the information from the cache to create a
LoginContext and then uses the Subject associated with it.
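For example, to force the JAAS-based behavior (KrbAuthType=1), you might point the JVM at a
JAAS configuration file when starting your application; the file path below is a placeholder:
-Djava.security.auth.login.config=C:\Kerberos\jaas.conf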
KrbHostFQDN
Default Value Data Type Required
Description
KrbRealm
Default Value Data Type Required
Description
If your Kerberos configuration already defines the realm of the Hive Server 2 host as the default
realm, then you do not need to configure this property.
KrbServiceName
Default Value Data Type Required
Description
LogLevel
Default Value: 0 | Data Type: Integer | Required: No
Description
Use this property to enable or disable logging in the driver and to specify the amount of detail
included in log files.
Important:
Only enable logging long enough to capture an issue. Logging decreases performance and can
consume a large quantity of disk space.
When logging is enabled, the driver produces the following log files in the location specified in the
LogPath property:
l A HiveJDBC_driver.log file that logs driver activity that is not specific to a
connection.
If the LogPath value is invalid, then the driver sends the logged information to the standard
output stream (System.out).
LogPath
Default Value Data Type Required
Description
The full path to the folder where the driver saves log files when logging is enabled.
PreparedMetaLimitZero
Default Value: 0 | Data Type: Integer | Required: No
Description
PWD
Default Value Data Type Required
Description
The password corresponding to the user name that you provided using the property "UID" on
page 103.
Important:
If you set the AuthMech to 3, the default PWD value is not used and you must specify a
password.
RowsFetchedPerBlock
Default Value: 10000 | Data Type: Integer | Required: No
Description
Any positive 32-bit integer is a valid value, but testing has shown that performance gains are
marginal beyond the default value of 10000 rows.
SocketTimeout
Default Value: 0 | Data Type: Integer | Required: No
Description
The number of seconds that the TCP socket waits for a response from the server before raising an
error on the request.
When this property is set to 0, the connection does not time out.
SSL
Default Value: 0 | Data Type: Integer | Required: No
Description
This property specifies whether the driver communicates with the Hive server through an SSL-
enabled socket.
l 1: The driver connects to SSL-enabled sockets.
l 0: The driver does not connect to SSL-enabled sockets.
Note:
SSL is configured independently of authentication. When authentication and SSL are both
enabled, the driver performs the specified authentication method over an SSL connection.
SSLKeyStore
Default Value: None | Data Type: String | Required: No
Description
The full path of the Java KeyStore containing the server certificate for one-way SSL authentication.
Note:
The Cloudera JDBC Driver for Apache Hive accepts TrustStores and KeyStores for one-way SSL
authentication. See also the property "SSLTrustStore" on page 101.
SSLKeyStorePwd
Default Value Data Type Required
Description
The password for accessing the Java KeyStore that you specified using the property "SSLKeyStore"
on page 101.
SSLTrustStore
Default Value Data Type Required
Description
The full path of the Java TrustStore containing the server certificate for one-way SSL
authentication.
Note:
The Cloudera JDBC Driver for Apache Hive accepts TrustStores and KeyStores for one-way SSL
authentication. See also the property "SSLKeyStore" on page 101.
SSLTrustStorePwd
Default Value Data Type Required
Description
The password for accessing the Java TrustStore that you specified using the property
"SSLTrustStore" on page 101.
transportMode
Default Value: sasl | Data Type: String | Required: No
Description
l binary: The driver uses the Binary transport protocol.
When connecting to a Hive Server 1 instance, you must use this setting. If you use this
setting but do not specify the AuthMech property, then the driver uses AuthMech=0 by
default. This setting is valid only when the AuthMech property is set to 0 or 3.
l sasl: The driver uses the SASL transport protocol.
If you use this setting but do not specify the AuthMech property, then the driver uses
AuthMech=2 by default. This setting is valid only when the AuthMech property is set to
1, 2, or 3.
l http: The driver uses the HTTP transport protocol.
If you use this setting but do not specify the AuthMech property, then the driver uses
AuthMech=3 by default. This setting is valid only when the AuthMech property is set to
3.
If you set this property to http, then the port number in the connection URL corresponds
to the HTTP port rather than the TCP port, and you must specify the httpPath property.
For more information, see "httpPath" on page 96.
UID
Default Value Data Type Required
Description
The user name that you use to access the Hive server.
Important:
If you set the AuthMech to 3, the default UID value is not used and you must specify a user
name.
UseNativeQuery
Default Value: 0 | Data Type: Integer | Required: No
Description
This property specifies whether the driver transforms the queries emitted by applications.
l 1: The driver does not transform the queries emitted by applications, so the native query is
used.
l 0: The driver transforms the queries emitted by applications and converts them into an
equivalent form in HiveQL.
Note:
If the application is Hive-aware and already emits HiveQL, then enable this option to avoid the
extra overhead of query transformation.
zk
Default Value: None | Data Type: String | Required: No
Description
The connection string to one or more ZooKeeper quorums, written in the following format where
[ZK_IP] is the IP address, [ZK_Port] is the port number, and [ZK_Namespace] is the namespace:
[ZK_IP]:[ZK_Port]/[ZK_Namespace]
For example:
jdbc:hive2://zk=192.168.0.1:2181/hiveserver2
Use this option to enable the Dynamic Service Discovery feature, which allows you to connect to
Hive servers that are registered against a ZooKeeper service by connecting to the ZooKeeper
service.
You can specify multiple quorums in a comma-separated list. If connection to a quorum fails, the
driver will attempt to connect to the next quorum in the list.
Contact Us
If you are having difficulties using the driver, our Community Forum may have your solution. In
addition to providing user to user support, our forums are a great place to share your questions,
comments, and feature requests with us.
If you are a Subscription customer you may also use the Cloudera Support Portal to search the
Knowledge Base or file a Case.
Important:
To help us assist you, prior to contacting Cloudera Support please prepare a detailed summary
of the client and server environment including operating system version, patch level, and
configuration.
Third-Party Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Apache Hive, Apache, and Hive are trademarks or registered trademarks of The Apache Software
Foundation or its subsidiaries in Canada, United States and/or other countries.
Third-Party Licenses
The licenses for the third-party libraries that are included in this product are listed below.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH
THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
The following notice is included in compliance with the Apache License, Version 2.0 and is
applicable to all software licensed under the Apache License, Version 2.0.
Apache License
http://www.apache.org/licenses/
"License" shall mean the terms and conditions for use, reproduction, and distribution as
defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that
is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control,
are controlled by, or are under common control with that entity. For the purposes of this
definition, "control" means (i) the power, direct or indirect, to cause the direction or
management of such entity, whether by contract or otherwise, or (ii) ownership of fifty
percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by
this License.
"Source" form shall mean the preferred form for making modifications, including but not
limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation
of a Source form, including but not limited to compiled object code, generated
documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included in or
attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based
on (or derived from) the Work and for which the editorial revisions, annotations,
elaborations, or other modifications represent, as a whole, an original work of authorship.
For the purposes of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of, the Work and
Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the
Work and any modifications or additions to that Work or Derivative Works thereof, that is
intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by
an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the
purposes of this definition, "submitted" means any form of electronic, verbal, or written
communication sent to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and issue tracking
systems that are managed by, or on behalf of, the Licensor for the purpose of discussing
and improving the Work, but excluding communication that is conspicuously marked or
otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a
Contribution has been received by Licensor and subsequently incorporated within the
Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each
Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and such Derivative
Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each
Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge,
royalty-free, irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license
applies only to those patent claims licensable by such Contributor that are necessarily
infringed by their Contribution(s) alone or by combination of their Contribution(s) with the
Work to which such Contribution(s) was submitted. If You institute patent litigation against
any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a
Contribution incorporated within the Work constitutes direct or contributory patent
infringement, then any patent licenses granted to You under this License for that Work shall
terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works
thereof in any medium, with or without modifications, and in Source or Object form,
provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this
License; and
(b) You must cause any modified files to carry prominent notices stating that You
changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute,
all copyright, patent, trademark, and attribution notices from the Source form of
the Work, excluding those notices that do not pertain to any part of the
Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any
Derivative Works that You distribute must include a readable copy of the
attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the
following places: within a NOTICE text file distributed as part of the Derivative
Works; within the Source form or documentation, if provided along with the
Derivative Works; or, within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents of the NOTICE
file are for informational purposes only and do not modify the License. You may
add Your own attribution notices within Derivative Works that You distribute,
alongside or as an addendum to the NOTICE text from the Work, provided that
such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or distribution of
Your modifications, or for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with the conditions stated
in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution
intentionally submitted for inclusion in the Work by You to the Licensor shall be under the
terms and conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of any
separate license agreement you may have executed with Licensor regarding such
Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for reasonable and
customary use in describing the origin of the Work and reproducing the content of the
NOTICE file.
To apply the Apache License to your work, attach the following boilerplate notice, with the
fields enclosed by brackets "[]" replaced with your own identifying information. (Don't
include the brackets!) The text should be enclosed in the appropriate comment syntax for
the file format. We also recommend that a file or class name and description of purpose be
included on the same "printed page" as the copyright notice for easier identification within
third-party archives.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this
file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
This product includes software that is licensed under the Apache License, Version 2.0 (listed
below):
Apache Commons
Copyright © 2001-2015 The Apache Software Foundation
Apache Hive
Copyright © 2008-2015 The Apache Software Foundation
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in
compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is
distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
express or implied. See the License for the specific language governing permissions and limitations
under the License.