public class StructType extends DataType implements scala.collection.Seq<StructField>, scala.Product, scala.Serializable
A StructType object can be constructed by

StructType(fields: Seq[StructField])

For a StructType object, one or multiple StructFields can be extracted by name. If multiple StructFields are extracted, a StructType object will be returned. If a provided name does not have a matching field, it will be ignored. In the case of extracting a single StructField, null will be returned.
Example:
import org.apache.spark.sql._
val struct =
  StructType(
    StructField("a", IntegerType, true) ::
    StructField("b", LongType, false) ::
    StructField("c", BooleanType, false) :: Nil)
// Extract a single StructField.
val singleField = struct("b")
// singleField: StructField = StructField(b,LongType,false)
// This struct does not have a field called "d". null will be returned.
val nonExisting = struct("d")
// nonExisting: StructField = null
// Extract multiple StructFields. Field names are provided in a set.
// A StructType object will be returned.
val twoFields = struct(Set("b", "c"))
// twoFields: StructType =
// StructType(List(StructField(b,LongType,false), StructField(c,BooleanType,false)))
// Any names without matching fields will be ignored.
// For the case shown below, "d" will be ignored and
// it is treated as struct(Set("b", "c")).
val ignoreNonExisting = struct(Set("b", "c", "d"))
// ignoreNonExisting: StructType =
// StructType(List(StructField(b,LongType,false), StructField(c,BooleanType,false)))
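Because StructType also behaves as a Seq[StructField], individual fields can be retrieved by ordinal through apply(int fieldIndex), and fieldNames() collects the names in declaration order. A minimal sketch reusing the struct value defined above (the output comments show the expected values):

```scala
import org.apache.spark.sql._

val struct =
  StructType(
    StructField("a", IntegerType, true) ::
    StructField("b", LongType, false) ::
    StructField("c", BooleanType, false) :: Nil)

// apply(int fieldIndex) returns the StructField at that ordinal,
// because StructType is itself a Seq[StructField].
val firstField = struct(0)
// firstField: StructField = StructField(a,IntegerType,true)

// fieldNames() returns all field names, preserving declaration order.
val names = struct.fieldNames
// names: Array[String] = Array(a, b, c)
```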
A Row object is used as a value of the StructType.
Example:
import org.apache.spark.sql._
val innerStruct =
  StructType(
    StructField("f1", IntegerType, true) ::
    StructField("f2", LongType, false) ::
    StructField("f3", BooleanType, false) :: Nil)
val struct = StructType(
  StructField("a", innerStruct, true) :: Nil)
// Create a Row with the schema defined by struct
val row = Row(Row(1, 2, true))
// row: Row = [[1,2,true]]
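Values can be read back out of such a Row by ordinal. A minimal sketch, assuming the positional Row.apply(i: Int): Any accessor of the Scala Row API; the nested struct value comes back as another Row:

```scala
import org.apache.spark.sql._

// A Row whose single field holds a nested struct value.
val row = Row(Row(1, 2L, true))

// Field 0 of the outer Row is the inner struct, represented as a Row.
val inner = row(0).asInstanceOf[Row]

// Field 0 of the inner Row is the value of "f1".
val f1 = inner(0)
// f1: Any = 1
```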
Nested classes/interfaces inherited from interface scala.PartialFunction:
scala.PartialFunction.AndThen<A,B,C>, scala.PartialFunction.Lifted<A,B>, scala.PartialFunction.OrElse<A,B>, scala.PartialFunction.Unlifted<A,B>
Nested classes/interfaces inherited from interface scala.collection.SeqLike:
scala.collection.SeqLike.CombinationsItr, scala.collection.SeqLike.PermutationsItr
Nested classes/interfaces inherited from interface scala.collection.TraversableLike:
scala.collection.TraversableLike.WithFilter
Nested classes/interfaces inherited from interface scala.collection.TraversableOnce:
scala.collection.TraversableOnce.BufferedCanBuildFrom<A,Coll extends scala.collection.TraversableOnce<Object>>, scala.collection.TraversableOnce.FlattenOps<A>, scala.collection.TraversableOnce.ForceImplicitAmbiguity, scala.collection.TraversableOnce.MonadOps<A>, scala.collection.TraversableOnce.OnceCanBuildFrom<A>
Constructor and Description |
---|
StructType(StructField[] fields) |
Modifier and Type | Method and Description |
---|---|
StructField | apply(int fieldIndex) |
StructType | apply(scala.collection.immutable.Set<String> names) Returns a StructType containing StructFields of the given names, preserving the original order of fields. |
StructField | apply(String name) Extracts a StructField of the given name. |
StructType | asNullable() Returns the same data type but with all nullability fields set to true (StructField.nullable, ArrayType.containsNull, and MapType.valueContainsNull). |
void | buildFormattedString(String prefix, scala.collection.mutable.StringBuilder builder) |
int | defaultSize() The default size of a value of the StructType is the total of the default sizes of all field types. |
String[] | fieldNames() Returns all field names in an array. |
StructField[] | fields() |
scala.collection.Iterator<StructField> | iterator() |
org.json4s.JsonAST.JObject | jsonValue() |
int | length() |
StructType | merge(StructType that) Merges with another schema (StructType). |
void | printTreeString() |
String | simpleString() |
String | treeString() |
Methods inherited from class DataType:
equalsIgnoreCompatibleNullability, equalsIgnoreNullability, fromCaseClassString, fromJson, isPrimitive, json, prettyJson, sameType, typeName, unapply
Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface scala.PartialFunction:
andThen, applyOrElse, isDefinedAt, lift, orElse, runWith
Methods inherited from interface scala.Function1:
andThen$mcDD$sp, andThen$mcDF$sp, andThen$mcDI$sp, andThen$mcDJ$sp, andThen$mcFD$sp, andThen$mcFF$sp, andThen$mcFI$sp, andThen$mcFJ$sp, andThen$mcID$sp, andThen$mcIF$sp, andThen$mcII$sp, andThen$mcIJ$sp, andThen$mcJD$sp, andThen$mcJF$sp, andThen$mcJI$sp, andThen$mcJJ$sp, andThen$mcVD$sp, andThen$mcVF$sp, andThen$mcVI$sp, andThen$mcVJ$sp, andThen$mcZD$sp, andThen$mcZF$sp, andThen$mcZI$sp, andThen$mcZJ$sp, apply, apply$mcDD$sp, apply$mcDF$sp, apply$mcDI$sp, apply$mcDJ$sp, apply$mcFD$sp, apply$mcFF$sp, apply$mcFI$sp, apply$mcFJ$sp, apply$mcID$sp, apply$mcIF$sp, apply$mcII$sp, apply$mcIJ$sp, apply$mcJD$sp, apply$mcJF$sp, apply$mcJI$sp, apply$mcJJ$sp, apply$mcVD$sp, apply$mcVF$sp, apply$mcVI$sp, apply$mcVJ$sp, apply$mcZD$sp, apply$mcZF$sp, apply$mcZI$sp, apply$mcZJ$sp, compose, compose$mcDD$sp, compose$mcDF$sp, compose$mcDI$sp, compose$mcDJ$sp, compose$mcFD$sp, compose$mcFF$sp, compose$mcFI$sp, compose$mcFJ$sp, compose$mcID$sp, compose$mcIF$sp, compose$mcII$sp, compose$mcIJ$sp, compose$mcJD$sp, compose$mcJF$sp, compose$mcJI$sp, compose$mcJJ$sp, compose$mcVD$sp, compose$mcVF$sp, compose$mcVI$sp, compose$mcVJ$sp, compose$mcZD$sp, compose$mcZF$sp, compose$mcZI$sp, compose$mcZJ$sp, toString
Methods inherited from interface scala.collection.SeqLike:
$colon$plus, $plus$colon, combinations, contains, containsSlice, corresponds, diff, distinct, endsWith, indexOfSlice, indexOfSlice, indexWhere, indices, intersect, isEmpty, lastIndexOfSlice, lastIndexOfSlice, lastIndexWhere, lengthCompare, padTo, parCombiner, patch, permutations, reverse, reverseIterator, reverseMap, segmentLength, size, sortBy, sorted, sortWith, startsWith, thisCollection, toCollection, toSeq, toString, union, updated, view, view
Methods inherited from interface scala.collection.IterableLike:
canEqual, copyToArray, drop, dropRight, exists, find, foldRight, forall, foreach, grouped, head, reduceRight, sameElements, slice, sliding, sliding, take, takeRight, takeWhile, toIterable, toIterator, toStream, zip, zipAll, zipWithIndex
Methods inherited from interface scala.collection.TraversableLike:
$plus$plus, $plus$plus$colon, $plus$plus$colon, collect, dropWhile, filter, filterNot, flatMap, groupBy, hasDefiniteSize, headOption, init, inits, isTraversableAgain, last, lastOption, map, partition, repr, scan, scanLeft, scanRight, sliceWithKnownBound, sliceWithKnownDelta, span, splitAt, stringPrefix, tail, tails, to, toTraversable, withFilter
Methods inherited from interface scala.collection.TraversableOnce:
$colon$bslash, $div$colon, addString, addString, addString, aggregate, collectFirst, copyToArray, copyToArray, copyToBuffer, count, fold, foldLeft, max, maxBy, min, minBy, mkString, mkString, mkString, nonEmpty, product, reduce, reduceLeft, reduceLeftOption, reduceOption, reduceRightOption, reversed, sum, toArray, toBuffer, toIndexedSeq, toList, toMap, toSet, toVector
public StructType(StructField[] fields)
public StructField[] fields()
public String[] fieldNames()
public StructField apply(String name)
Extracts a StructField of the given name. If the StructType object does not have a field matching the given name, null will be returned.

public StructType apply(scala.collection.immutable.Set<String> names)
Returns a StructType containing StructFields of the given names, preserving the original order of fields. Names that do not have matching fields will be ignored.

public String treeString()
public void printTreeString()
public void buildFormattedString(String prefix, scala.collection.mutable.StringBuilder builder)
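As an illustrative sketch, treeString() renders the schema as an indented tree and printTreeString() writes the same text to standard output (the rendering shown in the comments is indicative and may vary slightly by version):

```scala
import org.apache.spark.sql._

val struct =
  StructType(
    StructField("a", IntegerType, true) ::
    StructField("b", LongType, false) :: Nil)

// printTreeString() prints the result of treeString().
println(struct.treeString)
// root
//  |-- a: integer (nullable = true)
//  |-- b: long (nullable = false)
```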
public StructField apply(int fieldIndex)
Specified by: apply in interface scala.collection.GenSeqLike<StructField,scala.collection.Seq<StructField>>
Specified by: apply in interface scala.collection.SeqLike<StructField,scala.collection.Seq<StructField>>
public int length()
Specified by: length in interface scala.collection.GenSeqLike<StructField,scala.collection.Seq<StructField>>
Specified by: length in interface scala.collection.SeqLike<StructField,scala.collection.Seq<StructField>>
public scala.collection.Iterator<StructField> iterator()
Specified by: iterator in interface scala.collection.GenIterableLike<StructField,scala.collection.Seq<StructField>>
Specified by: iterator in interface scala.collection.IterableLike<StructField,scala.collection.Seq<StructField>>
public int defaultSize()
Overrides: defaultSize in class DataType
public String simpleString()
Overrides: simpleString in class DataType
public StructType merge(StructType that)
Merges with another schema (StructType). For a struct field A from this and a struct field B from that:

1. If A and B have the same name and data type, they are merged to a field C with the same name and data type. C is nullable if and only if either A or B is nullable.
2. If A doesn't exist in that, it is included in the result schema.
3. If B doesn't exist in this, it is also included in the result schema.
4. Otherwise, this and that are considered conflicting schemas and an exception will be thrown.
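A minimal sketch of the merge rules (the field names and types here are illustrative; the trailing comment shows the expected shape of the result rather than exact REPL output):

```scala
import org.apache.spark.sql._

val left = StructType(
  StructField("a", IntegerType, false) ::
  StructField("b", LongType, false) :: Nil)

val right = StructType(
  StructField("b", LongType, true) ::
  StructField("c", BooleanType, true) :: Nil)

// "a" exists only in left and "c" only in right, so both are kept.
// "b" exists in both with the same type; the merged field is nullable
// because the right-hand "b" is nullable.
val merged = left.merge(right)
// merged contains:
//   StructField(a,IntegerType,false)
//   StructField(b,LongType,true)
//   StructField(c,BooleanType,true)
```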
public StructType asNullable()
Returns the same data type but with all nullability fields set to true (StructField.nullable, ArrayType.containsNull, and MapType.valueContainsNull).

Overrides: asNullable in class DataType
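A sketch of the effect (the ArrayType field is included only to illustrate that nested nullability flags such as containsNull are relaxed as well):

```scala
import org.apache.spark.sql._

val struct = StructType(
  StructField("a", IntegerType, false) ::
  StructField("b", ArrayType(LongType, false), false) :: Nil)

// Names and types are unchanged; every nullability flag becomes true,
// including the containsNull flag of the nested ArrayType.
val relaxed = struct.asNullable
// relaxed contains:
//   StructField(a,IntegerType,true)
//   StructField(b,ArrayType(LongType,true),true)
```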