Package org.apache.spark.sql.jdbc
Class PostgresDialect
Object
  org.apache.spark.sql.jdbc.JdbcDialect
    org.apache.spark.sql.jdbc.PostgresDialect
- All Implemented Interfaces:
Serializable, org.apache.spark.internal.Logging, org.apache.spark.sql.catalyst.SQLConfHelper, NoLegacyJDBCError, scala.Equals, scala.Product
public class PostgresDialect
extends JdbcDialect
implements org.apache.spark.sql.catalyst.SQLConfHelper, NoLegacyJDBCError, scala.Product, Serializable
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.spark.internal.Logging:
org.apache.spark.internal.Logging.LogStringContext, org.apache.spark.internal.Logging.SparkShellLoggingFilter
-
Constructor Summary
Constructors:
PostgresDialect()
-
Method Summary
Modifier and Type / Method / Description
- abstract static R apply()
- void beforeFetch(Connection connection, scala.collection.immutable.Map<String, String> properties): Override connection specific properties to run before a select is made.
- boolean canHandle(String url): Check if this dialect instance can handle a certain jdbc url.
- Throwable classifyException(Throwable e, String condition, scala.collection.immutable.Map<String, String> messageParameters, String description, boolean isRuntime): Gets a dialect exception, classifies it and wraps it in an AnalysisException.
- scala.Option<String> compileExpression(Expression expr): Converts a V2 expression to a String representing a SQL expression.
- Object compileValue(Object value): Converts a value to a SQL expression.
- Date convertJavaDateToDate(Date d): Converts an instance of java.sql.Date to a custom java.sql.Date value.
- Timestamp convertJavaTimestampToTimestamp(Timestamp t): Converts a java.sql.Timestamp, clamping PostgreSQL "infinity" values to avoid overflow; java.sql timestamps have millisecond accuracy while Spark timestamps have microsecond accuracy.
- LocalDateTime convertJavaTimestampToTimestampNTZ(Timestamp t): Convert java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database.
- Timestamp convertTimestampNTZToJavaTimestamp(LocalDateTime ldt): Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.
- String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference, Map<String, String>> columnsProperties, Map<String, String> properties): Build a create index SQL statement.
- String dropIndex(String indexName, Identifier tableIdent): Build a drop index SQL statement.
- scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md): Get the custom datatype mapping for the given jdbc meta information.
- scala.Option<JdbcType> getJDBCType(DataType dt): Retrieve the jdbc / sql type for a given datatype.
- String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)
- String getTruncateQuery(String table, scala.Option<Object> cascade): The SQL query used to truncate a table.
- String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
- String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
- boolean indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options): Checks whether an index exists.
- scala.Option<Object> isCascadingTruncateTable(): Return Some[true] iff TRUNCATE TABLE causes cascading by default.
- boolean isObjectNotFoundException(SQLException e)
- boolean isSupportedFunction(String funcName): Returns whether the database supports a function.
- boolean isSyntaxErrorBestEffort(SQLException exception): Attempts to determine if the given SQLException is a SQL syntax error.
- String renameTable(Identifier oldTable, Identifier newTable): Rename an existing table.
- boolean supportsLimit(): Returns true if the dialect supports the LIMIT clause.
- boolean supportsOffset(): Returns true if the dialect supports the OFFSET clause.
- boolean supportsTableSample()
- static String toString()
- void updateExtraColumnMeta(Connection conn, ResultSetMetaData rsmd, int columnIdx, MetadataBuilder metadata): Get extra column metadata for the given column.

Methods inherited from class org.apache.spark.sql.jdbc.JdbcDialect
alterTable, classifyException, compileAggregate, createConnectionFactory, createSchema, createTable, dropSchema, dropTable, functions, getAddColumnQuery, getDayTimeIntervalAsMicros, getDeleteColumnQuery, getFullyQualifiedQuotedTableName, getJdbcSQLQueryBuilder, getLimitClause, getOffsetClause, getRenameColumnQuery, getSchemaCommentQuery, getSchemaQuery, getTableCommentQuery, getTableExistsQuery, getTruncateQuery, getYearMonthIntervalAsMonths, insertIntoTable, listIndexes, listSchemas, quoteIdentifier, removeSchemaCommentQuery, renameTable, schemasExists, supportsHint

Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface scala.Equals
canEqual, equals

Methods inherited from interface org.apache.spark.internal.Logging
initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, isTraceEnabled, log, logBasedOnLevel, logDebug, logDebug, logDebug, logDebug, logError, logError, logError, logError, logInfo, logInfo, logInfo, logInfo, logName, LogStringContext, logTrace, logTrace, logTrace, logTrace, logWarning, logWarning, logWarning, logWarning, org$apache$spark$internal$Logging$$log_, org$apache$spark$internal$Logging$$log__$eq, withLogContext

Methods inherited from interface scala.Product
productArity, productElement, productElementName, productElementNames, productIterator, productPrefix

Methods inherited from interface org.apache.spark.sql.catalyst.SQLConfHelper
conf, withSQLConf
-
Constructor Details
-
PostgresDialect
public PostgresDialect()
-
-
Method Details
-
apply
public abstract static R apply()
-
toString
public static String toString()
-
canHandle
public boolean canHandle(String url)
Description copied from class: JdbcDialect
Check if this dialect instance can handle a certain jdbc url.
- Specified by: canHandle in class JdbcDialect
- Parameters: url - the jdbc url.
- Returns: True if the dialect can be applied on the given jdbc url.
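For orientation, a minimal usage sketch (the connection URL is an assumption); JdbcDialects.get resolves the registered dialect for a URL:

  import org.apache.spark.sql.jdbc.JdbcDialects

  val url = "jdbc:postgresql://localhost:5432/mydb" // hypothetical URL
  val dialect = JdbcDialects.get(url)               // resolves to the Postgres dialect
  assert(dialect.canHandle(url))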
-
isSupportedFunction
public boolean isSupportedFunction(String funcName)
Description copied from class: JdbcDialect
Returns whether the database supports the given function.
- Overrides: isSupportedFunction in class JdbcDialect
- Parameters: funcName - Upper-cased function name
- Returns: True if the database supports the function.
-
isObjectNotFoundException
public boolean isObjectNotFoundException(SQLException e)
- Overrides: isObjectNotFoundException in class JdbcDialect
-
getCatalystType
public scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
Description copied from class: JdbcDialect
Get the custom datatype mapping for the given jdbc meta information.
Guidelines for mapping database defined timestamps to Spark SQL timestamps:
- TIMESTAMP WITHOUT TIME ZONE if preferTimestampNTZ -> TimestampNTZType
- TIMESTAMP WITHOUT TIME ZONE if !preferTimestampNTZ -> TimestampType(LTZ)
- TIMESTAMP WITH TIME ZONE -> TimestampType(LTZ)
- TIMESTAMP WITH LOCAL TIME ZONE -> TimestampType(LTZ)
- If the TIMESTAMP cannot be distinguished by sqlType and typeName, preferTimestampNTZ is respected for now, but we may need to add another option in the future if necessary.
- Overrides: getCatalystType in class JdbcDialect
- Parameters:
  sqlType - Refers to Types constants, or other constants defined by the target database, e.g. -101 is Oracle's TIMESTAMP WITH TIME ZONE type. This value is returned by ResultSetMetaData.getColumnType(int).
  typeName - The column type name used by the database (e.g. "BIGINT UNSIGNED"). This is sometimes used to determine the target data type when sqlType is not sufficient, if multiple database types are conflated into a single id. This value is returned by ResultSetMetaData.getColumnTypeName(int).
  size - The size of the type, e.g. the maximum precision for numeric types, length for character strings, etc. This value is returned by ResultSetMetaData.getPrecision(int).
  md - Result metadata associated with this type. This contains additional information from ResultSetMetaData or user specified options. Keys include:
    - isTimestampNTZ: Whether to read a TIMESTAMP WITHOUT TIME ZONE value as TimestampNTZType or not. This is configured by JDBCOptions.preferTimestampNTZ.
    - scale: The length of the fractional part, as returned by ResultSetMetaData.getScale(int).
- Returns: An Option with the actual DataType (subclasses of DataType) or None if the default type mapping should be used.
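As an illustration of the isTimestampNTZ guideline above, a hedged read sketch (the URL and table name are assumptions, and an active SparkSession is assumed in scope as spark):

  val df = spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/mydb")
    .option("dbtable", "public.events")
    .option("preferTimestampNTZ", "true") // TIMESTAMP WITHOUT TIME ZONE -> TimestampNTZType
    .load()
  df.printSchema() // plain "timestamp" columns surface as timestamp_ntz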
-
convertJavaTimestampToTimestampNTZ
public LocalDateTime convertJavaTimestampToTimestampNTZ(Timestamp t)
Description copied from class: JdbcDialect
Convert java.sql.Timestamp to a LocalDateTime representing the same wall-clock time as the value stored in a remote database. JDBC dialects should override this function to provide implementations that suit their JDBC drivers.
- Overrides: convertJavaTimestampToTimestampNTZ in class JdbcDialect
- Parameters: t - Timestamp returned from the JDBC driver's getTimestamp method.
- Returns: A LocalDateTime representing the same wall-clock time as the timestamp in the database.
-
convertTimestampNTZToJavaTimestamp
public Timestamp convertTimestampNTZToJavaTimestamp(LocalDateTime ldt)
Description copied from class: JdbcDialect
Converts a LocalDateTime representing a TimestampNTZ type to an instance of java.sql.Timestamp.
- Overrides: convertTimestampNTZToJavaTimestamp in class JdbcDialect
- Parameters: ldt - a LocalDateTime representing a TimestampNTZType value.
- Returns: A Java Timestamp representing this LocalDateTime.
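A minimal sketch of the wall-clock semantics involved (not the dialect's exact code): java.sql.Timestamp.valueOf preserves the LocalDateTime's fields, so the conversion round-trips without time-zone shifts.

  import java.sql.Timestamp
  import java.time.LocalDateTime

  val ldt = LocalDateTime.of(2024, 1, 15, 9, 30, 0)
  val ts = Timestamp.valueOf(ldt)   // same wall-clock fields
  assert(ts.toLocalDateTime == ldt) // round-trips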
-
getJDBCType
public scala.Option<JdbcType> getJDBCType(DataType dt)
Description copied from class: JdbcDialect
Retrieve the jdbc / sql type for a given datatype.
- Overrides: getJDBCType in class JdbcDialect
- Parameters: dt - The datatype (e.g. StringType)
- Returns: The new JdbcType if there is an override for this DataType
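A short illustrative check, assuming the public no-arg constructor documented above; Postgres maps Spark's StringType to TEXT rather than the generic default:

  import org.apache.spark.sql.jdbc.PostgresDialect
  import org.apache.spark.sql.types.StringType

  val pg = PostgresDialect()
  pg.getJDBCType(StringType).map(_.databaseTypeDefinition) // Some("TEXT")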
-
isCascadingTruncateTable
public scala.Option<Object> isCascadingTruncateTable()
Description copied from class: JdbcDialect
Return Some[true] iff TRUNCATE TABLE causes cascading by default.
Some[true]: TRUNCATE TABLE causes cascading.
Some[false]: TRUNCATE TABLE does not cause cascading.
None: The behavior of TRUNCATE TABLE is unknown (default).
- Overrides: isCascadingTruncateTable in class JdbcDialect
- Returns: (undocumented)
-
getTruncateQuery
public String getTruncateQuery(String table, scala.Option<Object> cascade)
The SQL query used to truncate a table. For Postgres, the default behaviour is to also truncate any descendant tables. As this is a (possibly unwanted) side effect, the Postgres dialect adds 'ONLY' to truncate only the table in question.
- Overrides: getTruncateQuery in class JdbcDialect
- Parameters:
  table - The table to truncate
  cascade - Whether or not to cascade the truncation. The default value is the value of isCascadingTruncateTable(). Cascading a truncation will truncate tables with a foreign key relationship to the target table. However, it will not truncate tables with an inheritance relationship to the target table, as the truncate query always includes "ONLY" to prevent this behaviour.
- Returns: The SQL query to use for truncating a table
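An illustrative sketch of the resulting SQL (the table name is hypothetical, and the public no-arg constructor documented above is assumed):

  val pg = org.apache.spark.sql.jdbc.PostgresDialect()
  pg.getTruncateQuery("public.events")             // TRUNCATE TABLE ONLY public.events
  pg.getTruncateQuery("public.events", Some(true)) // TRUNCATE TABLE ONLY public.events CASCADE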
-
beforeFetch
public void beforeFetch(Connection connection, scala.collection.immutable.Map<String, String> properties)
Description copied from class: JdbcDialect
Override connection specific properties to run before a select is made. This is in place to allow dialects that need special treatment to optimize behavior.
- Overrides: beforeFetch in class JdbcDialect
- Parameters:
  connection - The connection object
  properties - The connection properties. This is passed through from the relation.
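For Postgres this hook matters for large reads: with a non-zero fetchsize the dialect disables autocommit before the select so the driver can stream rows with a cursor instead of buffering the whole result set. A hedged usage sketch (the URL and table are assumptions; an active SparkSession as spark is assumed):

  val df = spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/mydb")
    .option("dbtable", "public.events")
    .option("fetchsize", "1000") // triggers the beforeFetch autocommit tweak
    .load()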
-
getUpdateColumnTypeQuery
public String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
- Overrides: getUpdateColumnTypeQuery in class JdbcDialect
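The generated statement follows Postgres's ALTER COLUMN ... TYPE form; an illustrative call (the table and column names are hypothetical, and the dialect may quote the column identifier):

  val pg = org.apache.spark.sql.jdbc.PostgresDialect()
  pg.getUpdateColumnTypeQuery("public.events", "id", "BIGINT")
  // roughly: ALTER TABLE public.events ALTER COLUMN "id" TYPE BIGINT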
-
getUpdateColumnNullabilityQuery
public String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
- Overrides: getUpdateColumnNullabilityQuery in class JdbcDialect
-
createIndex
public String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference, Map<String, String>> columnsProperties, Map<String, String> properties)
Description copied from class: JdbcDialect
Build a create index SQL statement.
- Overrides: createIndex in class JdbcDialect
- Parameters:
  indexName - the name of the index to be created
  tableIdent - the table on which the index is to be created
  columns - the columns on which the index is to be created
  columnsProperties - the properties of the columns on which the index is to be created
  properties - the properties of the index to be created
- Returns: the SQL statement to use for creating the index.
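In practice this is usually reached through a JDBC catalog rather than by constructing Identifier and NamedReference values by hand. A hedged sketch (the catalog name, URL, and table are assumptions, and a Spark version whose SQL grammar supports CREATE INDEX on JDBC V2 catalogs is assumed):

  spark.conf.set("spark.sql.catalog.pg",
    "org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog")
  spark.conf.set("spark.sql.catalog.pg.url", "jdbc:postgresql://localhost:5432/mydb")
  spark.sql("CREATE INDEX idx_events_ts ON TABLE pg.public.events (ts)")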
-
isSyntaxErrorBestEffort
public boolean isSyntaxErrorBestEffort(SQLException exception)
Description copied from class: JdbcDialect
Attempts to determine if the given SQLException is a SQL syntax error.
This check is best-effort: it may not detect all syntax errors across all JDBC dialects. However, if this method returns true, the exception is guaranteed to be a syntax error.
This is used to decide whether to wrap the exception in a more appropriate Spark exception.
- Overrides: isSyntaxErrorBestEffort in class JdbcDialect
- Parameters: exception - (undocumented)
- Returns: true if the exception is confidently identified as a syntax error; false otherwise.
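For Postgres such a check can key off SQLSTATE class 42 (syntax error or access rule violation); a minimal sketch of the idea (the exact condition used by the dialect is an assumption):

  import java.sql.SQLException

  def looksLikeSyntaxError(e: SQLException): Boolean =
    Option(e.getSQLState).exists(_.startsWith("42")) // e.g. 42601 syntax_error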
-
indexExists
public boolean indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
Description copied from class: JdbcDialect
Checks whether an index exists.
- Overrides: indexExists in class JdbcDialect
- Parameters:
  conn - (undocumented)
  indexName - the name of the index
  tableIdent - the table on which the index is to be checked
  options - JDBCOptions of the table
- Returns: true if the index with indexName exists in the table with tableName, false otherwise
-
dropIndex
public String dropIndex(String indexName, Identifier tableIdent)
Description copied from class: JdbcDialect
Build a drop index SQL statement.
- Overrides: dropIndex in class JdbcDialect
- Parameters:
  indexName - the name of the index to be dropped.
  tableIdent - the table on which the index is to be dropped.
- Returns: the SQL statement to use for dropping the index.
-
classifyException
public Throwable classifyException(Throwable e, String condition, scala.collection.immutable.Map<String, String> messageParameters, String description, boolean isRuntime)
Description copied from class: JdbcDialect
Gets a dialect exception, classifies it and wraps it in an AnalysisException.
- Specified by: classifyException in interface NoLegacyJDBCError
- Overrides: classifyException in class JdbcDialect
- Parameters:
  e - The dialect specific exception.
  condition - The error condition assigned in the case of an unclassified e
  messageParameters - The message parameters of errorClass
  description - The error description
  isRuntime - Whether the exception is a runtime exception or not.
- Returns: SparkThrowable + Throwable or its sub-class.
-
compileExpression
public scala.Option<String> compileExpression(Expression expr)
Description copied from class: JdbcDialect
Converts a V2 expression to a String representing a SQL expression.
- Overrides: compileExpression in class JdbcDialect
- Parameters: expr - The V2 expression to be converted.
- Returns: Converted value.
-
compileValue
public Object compileValue(Object value)
Description copied from class: JdbcDialect
Converts a value to a SQL expression.
- Overrides: compileValue in class JdbcDialect
- Parameters: value - The value to be converted.
- Returns: Converted value.
-
supportsLimit
public boolean supportsLimit()
Description copied from class: JdbcDialect
Returns true if the dialect supports the LIMIT clause.
Note: Some built-in dialects support the LIMIT clause with some trickery; see OracleDialect.OracleSQLQueryBuilder and MsSqlServerDialect.MsSqlServerSQLQueryBuilder.
- Overrides: supportsLimit in class JdbcDialect
- Returns: (undocumented)
-
supportsOffset
public boolean supportsOffset()
Description copied from class: JdbcDialect
Returns true if the dialect supports the OFFSET clause.
Note: Some built-in dialects support the OFFSET clause with some trickery; see OracleDialect.OracleSQLQueryBuilder and MySQLDialect.MySQLSQLQueryBuilder.
- Overrides: supportsOffset in class JdbcDialect
- Returns: (undocumented)
-
supportsTableSample
public boolean supportsTableSample()
- Overrides: supportsTableSample in class JdbcDialect
-
getTableSample
public String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)
- Overrides: getTableSample in class JdbcDialect
-
renameTable
public String renameTable(Identifier oldTable, Identifier newTable)
Description copied from class: JdbcDialect
Rename an existing table.
- Overrides: renameTable in class JdbcDialect
- Parameters:
  oldTable - The existing table.
  newTable - New name of the table.
- Returns: The SQL statement to use for renaming the table.
-
convertJavaTimestampToTimestamp
public Timestamp convertJavaTimestampToTimestamp(Timestamp t)
java.sql timestamps are measured with millisecond accuracy (from Long.MinValue milliseconds to Long.MaxValue milliseconds), while Spark timestamps are measured with microsecond accuracy. For the "infinity" values in PostgreSQL (represented by big constants), we need to clamp them to avoid overflow. If the value is not one of the infinity values, fall back to the default behavior.
- Overrides: convertJavaTimestampToTimestamp in class JdbcDialect
- Parameters: t - (undocumented)
- Returns: (undocumented)
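A sketch of the clamping idea. The sentinel millisecond values follow the Postgres JDBC driver's PGStatement infinity constants; the clamp targets below are illustrative assumptions, chosen so a later millisecond-to-microsecond conversion cannot overflow:

  import java.sql.Timestamp

  val PositiveInfinityMillis = 9223372036825200000L  // driver sentinel for 'infinity'
  val NegativeInfinityMillis = -9223372036832400000L // driver sentinel for '-infinity'

  def clamp(t: Timestamp): Timestamp = t.getTime match {
    case PositiveInfinityMillis => new Timestamp(Long.MaxValue / 1000) // clamp high
    case NegativeInfinityMillis => new Timestamp(Long.MinValue / 1000) // clamp low
    case _                      => t // not an infinity value: default behavior
  }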
-
convertJavaDateToDate
public Date convertJavaDateToDate(Date d)
Description copied from class: JdbcDialect
Converts an instance of java.sql.Date to a custom java.sql.Date value.
- Overrides: convertJavaDateToDate in class JdbcDialect
- Parameters: d - the date value returned from the JDBC ResultSet getDate method.
- Returns: the date value after conversion
-
updateExtraColumnMeta
public void updateExtraColumnMeta(Connection conn, ResultSetMetaData rsmd, int columnIdx, MetadataBuilder metadata)
Description copied from class: JdbcDialect
Get extra column metadata for the given column.
- Overrides: updateExtraColumnMeta in class JdbcDialect
- Parameters:
  conn - The connection currently being used.
  rsmd - The metadata of the current result set.
  columnIdx - The index of the column.
  metadata - The metadata builder to store the extra column information.
-