Class PostgresDialect

java.lang.Object
  org.apache.spark.sql.jdbc.PostgresDialect

public class PostgresDialect extends Object
  • Constructor Details

    • PostgresDialect

      public PostgresDialect()
  • Method Details

    • canHandle

      public static boolean canHandle(String url)
    • isSupportedFunction

      public static boolean isSupportedFunction(String funcName)
    • getCatalystType

      public static scala.Option<DataType> getCatalystType(int sqlType, String typeName, int size, MetadataBuilder md)
    • convertJavaTimestampToTimestampNTZ

      public static LocalDateTime convertJavaTimestampToTimestampNTZ(Timestamp t)
    • convertTimestampNTZToJavaTimestamp

      public static Timestamp convertTimestampNTZToJavaTimestamp(LocalDateTime ldt)
    • getJDBCType

      public static scala.Option<JdbcType> getJDBCType(DataType dt)
    • isCascadingTruncateTable

      public static scala.Option<Object> isCascadingTruncateTable()
    • getTruncateQuery

      public static String getTruncateQuery(String table, scala.Option<Object> cascade)
      The SQL query used to truncate a table. For Postgres, the default behaviour is to also truncate any descendant tables. As this is a (possibly unwanted) side effect, the Postgres dialect adds 'ONLY' to truncate only the table in question.
      Parameters:
      table - The table to truncate
      cascade - Whether or not to cascade the truncation. Default value is the value of isCascadingTruncateTable(). Cascading a truncation will truncate tables with a foreign key relationship to the target table. However, it will not truncate tables with an inheritance relationship to the target table, as the truncate query always includes "ONLY" to prevent this behaviour.
      Returns:
      The SQL query to use for truncating a table
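
      A minimal usage sketch, assuming the static entry points documented here are callable as shown; the table name my_table is illustrative, and the SQL in the comments is inferred from the description above rather than quoted from the implementation:

      import org.apache.spark.sql.jdbc.PostgresDialect

      // Default: truncate only the named table, leaving descendants intact.
      val sql = PostgresDialect.getTruncateQuery("my_table", None)
      // expected shape: TRUNCATE TABLE ONLY my_table

      // With cascade: also truncate tables that reference my_table via a
      // foreign key (inheritance children are still excluded by ONLY).
      val sqlCascade = PostgresDialect.getTruncateQuery("my_table", Some(true))
      // expected shape: TRUNCATE TABLE ONLY my_table CASCADE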
    • beforeFetch

      public static void beforeFetch(Connection connection, scala.collection.immutable.Map<String,String> properties)
    • getUpdateColumnTypeQuery

      public static String getUpdateColumnTypeQuery(String tableName, String columnName, String newDataType)
    • getUpdateColumnNullabilityQuery

      public static String getUpdateColumnNullabilityQuery(String tableName, String columnName, boolean isNullable)
    • createIndex

      public static String createIndex(String indexName, Identifier tableIdent, NamedReference[] columns, Map<NamedReference,Map<String,String>> columnsProperties, Map<String,String> properties)
    • indexExists

      public static boolean indexExists(Connection conn, String indexName, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
    • dropIndex

      public static String dropIndex(String indexName, Identifier tableIdent)
    • classifyException

      public static AnalysisException classifyException(String message, Throwable e)
    • supportsLimit

      public static boolean supportsLimit()
    • supportsOffset

      public static boolean supportsOffset()
    • supportsTableSample

      public static boolean supportsTableSample()
    • getTableSample

      public static String getTableSample(org.apache.spark.sql.execution.datasources.v2.TableSampleInfo sample)
    • renameTable

      public static String renameTable(Identifier oldTable, Identifier newTable)
    • convertJavaTimestampToTimestamp

      public static Timestamp convertJavaTimestampToTimestamp(Timestamp t)
      java.sql timestamps are measured with millisecond accuracy (from Long.MinValue milliseconds to Long.MaxValue milliseconds), while Spark timestamps are measured with microsecond accuracy. For the "infinity values" in PostgreSQL (represented by big constants), we need to clamp them to avoid overflow. If the value is not one of the infinity values, fall back to the default behavior.
      Parameters:
      t - the java.sql.Timestamp read from PostgreSQL, which may be one of the driver's "infinity" sentinel values
      Returns:
      the timestamp clamped into a representable range if t was an infinity value; otherwise t unchanged
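
      A minimal sketch of the clamping idea described above. The sentinel constants and clamp bounds below are illustrative assumptions, not the dialect's actual values:

      import java.sql.Timestamp

      // Hypothetical sentinels standing in for the "big constants" the
      // PostgreSQL driver uses to encode 'infinity' and '-infinity'.
      val PositiveInfinityMillis = 9223372036825200000L
      val NegativeInfinityMillis = -9223372036832400000L
      // Hypothetical clamp bounds that stay representable once the
      // millisecond value is scaled to microseconds.
      val MaxFiniteMillis = 253402300799999L  // 9999-12-31 23:59:59.999 UTC
      val MinFiniteMillis = -62135596800000L  // 0001-01-01 00:00:00 UTC

      def clampTimestamp(t: Timestamp): Timestamp = t.getTime match {
        case PositiveInfinityMillis => new Timestamp(MaxFiniteMillis)
        case NegativeInfinityMillis => new Timestamp(MinFiniteMillis)
        case _ => t // not an infinity value: fall back to default behavior
      }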
    • createConnectionFactory

      public static scala.Function1<Object,Connection> createConnectionFactory(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
    • quoteIdentifier

      public static String quoteIdentifier(String colName)
    • createTable

      public static void createTable(Statement statement, String tableName, String strSchema, org.apache.spark.sql.execution.datasources.jdbc.JdbcOptionsInWrite options)
    • getTableExistsQuery

      public static String getTableExistsQuery(String table)
    • getSchemaQuery

      public static String getSchemaQuery(String table)
    • getTruncateQuery$default$2

      public static scala.Option<Object> getTruncateQuery$default$2()
    • compileValue

      public static Object compileValue(Object value)
    • compileExpression

      public static scala.Option<String> compileExpression(Expression expr)
    • compileAggregate

      public static scala.Option<String> compileAggregate(AggregateFunc aggFunction)
    • functions

      public static scala.collection.Seq<scala.Tuple2<String,UnboundFunction>> functions()
    • createSchema

      public static void createSchema(Statement statement, String schema, String comment)
    • schemasExists

      public static boolean schemasExists(Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options, String schema)
    • listSchemas

      public static String[][] listSchemas(Connection conn, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
    • alterTable

      public static String[] alterTable(String tableName, scala.collection.Seq<TableChange> changes, int dbMajorVersion)
    • getAddColumnQuery

      public static String getAddColumnQuery(String tableName, String columnName, String dataType)
    • getRenameColumnQuery

      public static String getRenameColumnQuery(String tableName, String columnName, String newName, int dbMajorVersion)
    • getDeleteColumnQuery

      public static String getDeleteColumnQuery(String tableName, String columnName)
    • getTableCommentQuery

      public static String getTableCommentQuery(String table, String comment)
    • getSchemaCommentQuery

      public static String getSchemaCommentQuery(String schema, String comment)
    • removeSchemaCommentQuery

      public static String removeSchemaCommentQuery(String schema)
    • dropSchema

      public static String dropSchema(String schema, boolean cascade)
    • listIndexes

      public static TableIndex[] listIndexes(Connection conn, Identifier tableIdent, org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
    • getLimitClause

      public static String getLimitClause(Integer limit)
    • getOffsetClause

      public static String getOffsetClause(Integer offset)
    • getJdbcSQLQueryBuilder

      public static JdbcSQLQueryBuilder getJdbcSQLQueryBuilder(org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions options)
    • getFullyQualifiedQuotedTableName

      public static String getFullyQualifiedQuotedTableName(Identifier ident)
    • org$apache$spark$internal$Logging$$log_

      public static org.slf4j.Logger org$apache$spark$internal$Logging$$log_()
    • org$apache$spark$internal$Logging$$log__$eq

      public static void org$apache$spark$internal$Logging$$log__$eq(org.slf4j.Logger x$1)
    • conf

      public static org.apache.spark.sql.internal.SQLConf conf()