final class WorkerCapabilities extends GeneratedMessage with WorkerCapabilitiesOrBuilder
Capabilities used for query planning and running the worker during query execution.
Protobuf type org.apache.spark.udf.worker.WorkerCapabilities
- Annotations
- @Generated()
- Source
- WorkerCapabilities.java
- By Inheritance
- WorkerCapabilities
- WorkerCapabilitiesOrBuilder
- GeneratedMessage
- Serializable
- AbstractMessage
- Message
- MessageOrBuilder
- AbstractMessageLite
- MessageLite
- MessageLiteOrBuilder
- AnyRef
- Any
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @IntrinsicCandidate() @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(obj: AnyRef): Boolean
- Definition Classes
- WorkerCapabilities → AbstractMessage → Message → AnyRef → Any
- Annotations
- @Override()
- def findInitializationErrors(): List[String]
- Definition Classes
- AbstractMessage → MessageOrBuilder
- def getAllFields(): Map[FieldDescriptor, AnyRef]
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @IntrinsicCandidate() @native()
- def getDefaultInstanceForType(): WorkerCapabilities
- Definition Classes
- WorkerCapabilities → MessageOrBuilder → MessageLiteOrBuilder
- Annotations
- @Override()
- def getDescriptorForType(): Descriptor
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- def getField(field: FieldDescriptor): AnyRef
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- def getInitializationErrorString(): String
- Definition Classes
- AbstractMessage → MessageOrBuilder
- def getOneofFieldDescriptor(oneof: OneofDescriptor): FieldDescriptor
- Definition Classes
- GeneratedMessage → AbstractMessage → MessageOrBuilder
- def getParserForType(): Parser[WorkerCapabilities]
- Definition Classes
- WorkerCapabilities → GeneratedMessage → Message → MessageLite
- Annotations
- @Override()
- def getRepeatedField(field: FieldDescriptor, index: Int): AnyRef
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- def getRepeatedFieldCount(field: FieldDescriptor): Int
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- def getSerializedSize(): Int
- Definition Classes
- WorkerCapabilities → GeneratedMessage → AbstractMessage → MessageLite
- Annotations
- @Override()
- def getSupportedCommunicationPatterns(index: Int): UDFProtoCommunicationPattern
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- index
The index of the element to return.
- returns
The supportedCommunicationPatterns at the given index.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedCommunicationPatternsCount(): Int
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- returns
The count of supportedCommunicationPatterns.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedCommunicationPatternsList(): List[UDFProtoCommunicationPattern]
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- returns
A list containing the supportedCommunicationPatterns.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedCommunicationPatternsValue(index: Int): Int
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- index
The index of the value to return.
- returns
The enum numeric value on the wire of supportedCommunicationPatterns at the given index.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedCommunicationPatternsValueList(): List[Integer]
Which UDF protocol communication patterns the worker supports. This should list all supported patterns. The pattern used for a specific UDF will be communicated in the initial message of the UDF protocol. If an execution for an unsupported pattern is requested, the query will fail during query planning. (Required)
repeated .org.apache.spark.udf.worker.UDFProtoCommunicationPattern supported_communication_patterns = 2;
- returns
A list containing the enum numeric values on the wire for supportedCommunicationPatterns.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
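The accessors above come in pairs: the plain getters return decoded enum constants, while the `...Value` getters return the raw numeric values as encoded on the wire. A minimal sketch of that relationship, using a hypothetical stand-in enum (the concrete pattern names of the real generated `UDFProtoCommunicationPattern` are not shown on this page):

```java
import java.util.List;

public class CommunicationPatternsDemo {
    // Hypothetical stand-in for the generated UDFProtoCommunicationPattern enum;
    // the constant names here are illustrative, not the real ones.
    enum UDFProtoCommunicationPattern {
        PATTERN_UNSPECIFIED(0), PATTERN_A(1), PATTERN_B(2);

        private final int number;
        UDFProtoCommunicationPattern(int number) { this.number = number; }
        int getNumber() { return number; }

        // Mirrors protobuf's forNumber(): returns null for unknown wire values.
        static UDFProtoCommunicationPattern forNumber(int n) {
            for (UDFProtoCommunicationPattern p : values()) {
                if (p.number == n) return p;
            }
            return null;
        }
    }

    public static void main(String[] args) {
        // getSupportedCommunicationPatternsValueList() would expose raw numbers...
        List<Integer> wireValues = List.of(1, 2);
        // ...while getSupportedCommunicationPatternsList() exposes decoded constants.
        for (int v : wireValues) {
            System.out.println(v + " -> " + UDFProtoCommunicationPattern.forNumber(v));
        }
    }
}
```

The `...Value` accessors matter when a peer sends an enum number this build of the class does not know yet; the numeric form survives, while the decoded form cannot represent it.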
- def getSupportedDataFormats(index: Int): UDFWorkerDataFormat
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- index
The index of the element to return.
- returns
The supportedDataFormats at the given index.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedDataFormatsCount(): Int
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- returns
The count of supportedDataFormats.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedDataFormatsList(): List[UDFWorkerDataFormat]
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- returns
A list containing the supportedDataFormats.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedDataFormatsValue(index: Int): Int
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- index
The index of the value to return.
- returns
The enum numeric value on the wire of supportedDataFormats at the given index.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportedDataFormatsValueList(): List[Integer]
The data formats that the worker supports for UDF data input and output. Every worker MUST at least support ARROW. It is expected that for each UDF execution, the input format always matches the output format. If a worker supports multiple data formats, the engine will select the most suitable one for each UDF invocation. Which format was chosen is reported by the engine as part of the UDF protocol's init message. (Required)
repeated .org.apache.spark.udf.worker.UDFWorkerDataFormat supported_data_formats = 1;
- returns
A list containing the enum numeric values on the wire for supportedDataFormats.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
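The doc comment says the engine picks the "most suitable" format from the worker's advertised list, without specifying the policy. One plausible sketch is a preference-ordered scan with ARROW as the guaranteed fallback; the enum constants and the preference order here are assumptions for illustration only:

```java
import java.util.List;

public class DataFormatSelection {
    // Hypothetical stand-in for the generated UDFWorkerDataFormat enum; ARROW is
    // the only format the capability contract guarantees every worker supports.
    enum UDFWorkerDataFormat { ARROW, OTHER_FORMAT_HYPOTHETICAL }

    // Sketch of an engine-side choice: take the first format in an (assumed)
    // engine preference order that the worker advertises. The real policy is
    // engine-internal; only "most suitable" is specified.
    static UDFWorkerDataFormat select(List<UDFWorkerDataFormat> supported,
                                      List<UDFWorkerDataFormat> enginePreference) {
        for (UDFWorkerDataFormat f : enginePreference) {
            if (supported.contains(f)) return f;
        }
        // Every worker MUST support ARROW, so this fallback is always valid.
        return UDFWorkerDataFormat.ARROW;
    }

    public static void main(String[] args) {
        List<UDFWorkerDataFormat> supported = List.of(UDFWorkerDataFormat.ARROW);
        System.out.println(select(supported,
                List.of(UDFWorkerDataFormat.OTHER_FORMAT_HYPOTHETICAL,
                        UDFWorkerDataFormat.ARROW))); // -> ARROW
    }
}
```

Whatever the engine decides, the worker learns the outcome only from the init message of the UDF protocol, so it must be prepared to handle any format it advertised.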
- def getSupportsConcurrentUdfs(): Boolean
Whether multiple concurrent UDF connections are supported by this worker (for example via multi-threading). In the first implementation of the engine-side worker specification, this property will not be used. Usage of this property can be enabled in the future if the engine implements more advanced resource management (TBD). (Optional)
optional bool supports_concurrent_udfs = 3;
- returns
The supportsConcurrentUdfs.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getSupportsReuse(): Boolean
Whether compatible workers may be reused. If this is not supported, the worker is terminated after every single UDF invocation. (Optional)
optional bool supports_reuse = 4;
- returns
The supportsReuse.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def getUnknownFields(): UnknownFieldSet
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- def hasField(field: FieldDescriptor): Boolean
- Definition Classes
- GeneratedMessage → MessageOrBuilder
- def hasOneof(oneof: OneofDescriptor): Boolean
- Definition Classes
- GeneratedMessage → AbstractMessage → MessageOrBuilder
- def hasSupportsConcurrentUdfs(): Boolean
Whether multiple concurrent UDF connections are supported by this worker (for example via multi-threading). In the first implementation of the engine-side worker specification, this property will not be used. Usage of this property can be enabled in the future if the engine implements more advanced resource management (TBD). (Optional)
optional bool supports_concurrent_udfs = 3;
- returns
Whether the supportsConcurrentUdfs field is set.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
- def hasSupportsReuse(): Boolean
Whether compatible workers may be reused. If this is not supported, the worker is terminated after every single UDF invocation. (Optional)
optional bool supports_reuse = 4;
- returns
Whether the supportsReuse field is set.
- Definition Classes
- WorkerCapabilities → WorkerCapabilitiesOrBuilder
- Annotations
- @Override()
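Because both boolean fields are declared `optional` in proto3, the generated class tracks explicit presence: the `has...()` methods distinguish "never set" from "explicitly set to false", while the getters fall back to the proto3 default (`false`) when unset. A minimal pure-Java sketch of that contract, modeling the presence bit with a nullable `Boolean`:

```java
public class OptionalBoolDemo {
    // Sketch of proto3 "optional bool" presence semantics, as exposed by
    // hasSupportsReuse()/getSupportsReuse(). null models "field not set".
    private Boolean supportsReuse;

    boolean hasSupportsReuse() { return supportsReuse != null; }
    boolean getSupportsReuse() { return supportsReuse != null && supportsReuse; }
    void setSupportsReuse(boolean v) { supportsReuse = v; }

    public static void main(String[] args) {
        OptionalBoolDemo caps = new OptionalBoolDemo();
        // Unset: the getter returns the proto3 default (false); only the
        // has-method can tell "unset" apart from an explicit false.
        System.out.println(caps.hasSupportsReuse() + " " + caps.getSupportsReuse()); // false false
        caps.setSupportsReuse(false);
        System.out.println(caps.hasSupportsReuse() + " " + caps.getSupportsReuse()); // true false
    }
}
```

An engine should therefore check `hasSupportsReuse()` before trusting `getSupportsReuse()` if "capability not declared" is meant to be treated differently from "capability declared absent".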
- def hashCode(): Int
- Definition Classes
- WorkerCapabilities → AbstractMessage → Message → AnyRef → Any
- Annotations
- @Override()
- def internalGetFieldAccessorTable(): FieldAccessorTable
- Attributes
- protected[worker]
- Definition Classes
- WorkerCapabilities → GeneratedMessage
- Annotations
- @Override()
- def internalGetMapFieldReflection(fieldNumber: Int): MapFieldReflectionAccessor
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- final def isInitialized(): Boolean
- Definition Classes
- WorkerCapabilities → GeneratedMessage → AbstractMessage → MessageLiteOrBuilder
- Annotations
- @Override()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- def makeExtensionsImmutable(): Unit
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def newBuilderForType(parent: BuilderParent): Builder
- Attributes
- protected[worker]
- Definition Classes
- WorkerCapabilities → AbstractMessage
- Annotations
- @Override()
- def newBuilderForType(): Builder
- Definition Classes
- WorkerCapabilities → Message → MessageLite
- Annotations
- @Override()
- def newInstance(unused: UnusedPrivateParameter): AnyRef
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @IntrinsicCandidate() @native()
- def parseUnknownField(input: CodedInputStream, unknownFields: Builder, extensionRegistry: ExtensionRegistryLite, tag: Int): Boolean
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- Annotations
- @throws(classOf[java.io.IOException])
- def parseUnknownFieldProto3(input: CodedInputStream, unknownFields: Builder, extensionRegistry: ExtensionRegistryLite, tag: Int): Boolean
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- Annotations
- @throws(classOf[java.io.IOException])
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toBuilder(): Builder
- Definition Classes
- WorkerCapabilities → Message → MessageLite
- Annotations
- @Override()
- def toByteArray(): Array[Byte]
- Definition Classes
- AbstractMessageLite → MessageLite
- def toByteString(): ByteString
- Definition Classes
- AbstractMessageLite → MessageLite
- final def toString(): String
- Definition Classes
- AbstractMessage → Message → AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- def writeDelimitedTo(output: OutputStream): Unit
- Definition Classes
- AbstractMessageLite → MessageLite
- Annotations
- @throws(classOf[java.io.IOException])
- def writeReplace(): AnyRef
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- Annotations
- @throws(classOf[java.io.ObjectStreamException])
- def writeTo(output: CodedOutputStream): Unit
- Definition Classes
- WorkerCapabilities → GeneratedMessage → AbstractMessage → MessageLite
- Annotations
- @Override()
- def writeTo(output: OutputStream): Unit
- Definition Classes
- AbstractMessageLite → MessageLite
- Annotations
- @throws(classOf[java.io.IOException])
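The difference between `writeTo(OutputStream)` and `writeDelimitedTo(OutputStream)` is framing: the delimited variant prefixes each serialized message with its byte length as a base-128 varint, so multiple messages can share one stream and be split apart again. A sketch of just that framing over a raw payload, independent of the generated class:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class DelimitedFraming {
    // Reproduces the length-prefix framing used by writeDelimitedTo():
    // the payload length as a varint (7 data bits per byte, high bit set
    // while more bytes follow), then the payload itself.
    static void writeDelimited(OutputStream out, byte[] payload) throws IOException {
        int len = payload.length;
        while ((len & ~0x7F) != 0) {        // emit 7 bits at a time,
            out.write((len & 0x7F) | 0x80); // high bit = "more bytes follow"
            len >>>= 7;
        }
        out.write(len);
        out.write(payload);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeDelimited(out, new byte[300]);
        // 300 encodes as the two varint bytes 0xAC 0x02, then 300 payload bytes.
        System.out.println(out.size()); // 302
    }
}
```

On the reading side, the matching `parseDelimitedFrom` style of decoding reads the varint first and then exactly that many payload bytes, which is what makes back-to-back messages on a single stream unambiguous.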
Deprecated Value Members
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable]) @Deprecated
- Deprecated
(Since version 9)
- def internalGetMapField(fieldNumber: Int): MapField[_ <: AnyRef, _ <: AnyRef]
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- Annotations
- @Deprecated
- Deprecated
- def mergeFromAndMakeImmutableInternal(input: CodedInputStream, extensionRegistry: ExtensionRegistryLite): Unit
- Attributes
- protected[protobuf]
- Definition Classes
- GeneratedMessage
- Annotations
- @throws(classOf[com.google.protobuf.InvalidProtocolBufferException]) @Deprecated
- Deprecated