Class DataframeAnalysisBase

java.lang.Object
  co.elastic.clients.elasticsearch.ml.DataframeAnalysisBase

- All Implemented Interfaces:
  JsonpSerializable
- Direct Known Subclasses:
  DataframeAnalysisClassification, DataframeAnalysisRegression

public abstract class DataframeAnalysisBase extends java.lang.Object implements JsonpSerializable
- See Also:
- API specification
-
Nested Class Summary

Nested Classes:
  protected static class DataframeAnalysisBase.AbstractBuilder<BuilderT extends DataframeAnalysisBase.AbstractBuilder<BuilderT>>
-
Constructor Summary

Constructors:
  protected DataframeAnalysisBase(DataframeAnalysisBase.AbstractBuilder<?> builder)
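Because this class is abstract, instances are created through the builder of a concrete subclass such as DataframeAnalysisRegression. A minimal sketch using the elasticsearch-java fluent builder, with the field names taken from this page (the exact builder method set may vary by client version, and the field values are illustrative only):

```java
import co.elastic.clients.elasticsearch.ml.DataframeAnalysisRegression;

public class RegressionAnalysisSketch {
    public static void main(String[] args) {
        // Only dependentVariable is required; the advanced options default
        // to values found during hyperparameter optimization.
        DataframeAnalysisRegression analysis = DataframeAnalysisRegression.of(r -> r
            .dependentVariable("price")   // required: the field to predict
            .eta(0.05)                    // optional shrinkage, between 0.001 and 1
            .maxTrees(500)                // optional cap, at most 2000
        );
        System.out.println(analysis.dependentVariable());
    }
}
```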
-
Method Summary

java.lang.Double alpha()
    Advanced configuration option.
java.lang.String dependentVariable()
    Required - Defines which field of the document is to be predicted.
java.lang.Double downsampleFactor()
    Advanced configuration option.
java.lang.Boolean earlyStoppingEnabled()
    Advanced configuration option.
java.lang.Double eta()
    Advanced configuration option.
java.lang.Double etaGrowthRatePerTree()
    Advanced configuration option.
java.lang.Double featureBagFraction()
    Advanced configuration option.
java.util.List<DataframeAnalysisFeatureProcessor> featureProcessors()
    Advanced configuration option.
java.lang.Double gamma()
    Advanced configuration option.
java.lang.Double lambda()
    Advanced configuration option.
java.lang.Integer maxOptimizationRoundsPerHyperparameter()
    Advanced configuration option.
java.lang.Integer maxTrees()
    Advanced configuration option.
java.lang.Integer numTopFeatureImportanceValues()
    Advanced configuration option.
java.lang.String predictionFieldName()
    Defines the name of the prediction field in the results.
java.lang.Double randomizeSeed()
    Defines the seed for the random generator that is used to pick training data.
void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
    Serialize this object to JSON.
protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
protected static <BuilderT extends DataframeAnalysisBase.AbstractBuilder<BuilderT>> void setupDataframeAnalysisBaseDeserializer(ObjectDeserializer<BuilderT> op)
java.lang.Integer softTreeDepthLimit()
    Advanced configuration option.
java.lang.Double softTreeDepthTolerance()
    Advanced configuration option.
java.lang.String toString()
java.lang.String trainingPercent()
    Defines what percentage of the eligible documents will be used for training.

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
-
Constructor Details
-
DataframeAnalysisBase

protected DataframeAnalysisBase(DataframeAnalysisBase.AbstractBuilder<?> builder)
-
-
Method Details
-
alpha
@Nullable public final java.lang.Double alpha()

Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

API name: alpha
-
dependentVariable
public final java.lang.String dependentVariable()

Required - Defines which field of the document is to be predicted. It must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the continuous target variable. For classification analysis, the data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field. For regression analysis, the data type of the field must be numeric.

API name: dependent_variable
-
downsampleFactor
@Nullable public final java.lang.Double downsampleFactor()

Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

API name: downsample_factor
-
earlyStoppingEnabled
@Nullable public final java.lang.Boolean earlyStoppingEnabled()

Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is unremarkable.

API name: early_stopping_enabled
-
eta
@Nullable public final java.lang.Double eta()

Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

API name: eta
-
etaGrowthRatePerTree
@Nullable public final java.lang.Double etaGrowthRatePerTree()

Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

API name: eta_growth_rate_per_tree
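The growth schedule described above is geometric: the effective shrinkage for the n-th tree is eta multiplied by the growth rate raised to n. A self-contained plain-Java illustration of that arithmetic (not client code, and not the actual ML implementation):

```java
public class EtaGrowth {
    // Effective shrinkage applied to tree `treeIndex` (0-based) when
    // eta_growth_rate_per_tree is set: eta grows geometrically per tree.
    static double etaForTree(double eta, double growthRate, int treeIndex) {
        return eta * Math.pow(growthRate, treeIndex);
    }

    public static void main(String[] args) {
        // With eta = 0.1 and a growth rate of 1.05, each additional
        // tree's shrinkage is 5% larger than the previous tree's.
        System.out.println(etaForTree(0.1, 1.05, 0)); // 0.1
        System.out.println(etaForTree(0.1, 1.05, 1)); // 0.105
    }
}
```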
-
featureBagFraction
@Nullable public final java.lang.Double featureBagFraction()

Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

API name: feature_bag_fraction
-
featureProcessors
public final java.util.List<DataframeAnalysisFeatureProcessor> featureProcessors()

Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields.

API name: feature_processors
-
gamma
@Nullable public final java.lang.Double gamma()

Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

API name: gamma
-
lambda
@Nullable public final java.lang.Double lambda()

Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

API name: lambda
-
maxOptimizationRoundsPerHyperparameter
@Nullable public final java.lang.Integer maxOptimizationRoundsPerHyperparameter()

Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

API name: max_optimization_rounds_per_hyperparameter
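The rule stated above is a simple product: the step budget is the number of undefined hyperparameters times the rounds per hyperparameter. A sketch of that arithmetic (illustrative only; the example numbers are assumptions, not defaults):

```java
public class OptimizationBudget {
    // Maximum Bayesian-optimization steps, per the rule above:
    // undefined hyperparameters * max_optimization_rounds_per_hyperparameter.
    static int maxSteps(int undefinedHyperparameters, int roundsPerHyperparameter) {
        return undefinedHyperparameters * roundsPerHyperparameter;
    }

    public static void main(String[] args) {
        // E.g. leaving 6 hyperparameters unset with 2 rounds each allows 12 steps.
        System.out.println(maxSteps(6, 2)); // 12
    }
}
```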
-
maxTrees
@Nullable public final java.lang.Integer maxTrees()

Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

API name: max_trees
-
numTopFeatureImportanceValues
@Nullable public final java.lang.Integer numTopFeatureImportanceValues()

Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, no feature importance calculation occurs.

API name: num_top_feature_importance_values
-
predictionFieldName
@Nullable public final java.lang.String predictionFieldName()

Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

API name: prediction_field_name
-
randomizeSeed
@Nullable public final java.lang.Double randomizeSeed()

Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

API name: randomize_seed
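The reproducibility guarantee described above works like any seeded pseudo-random generator: the same seed yields the same selection of training documents. A plain-Java analogy (not the actual ML implementation):

```java
import java.util.Arrays;
import java.util.Random;

public class SeededPick {
    // Mark each document as training or not; with a fixed seed the
    // selection is identical on every run.
    static boolean[] pickTraining(long seed, int docCount, double fraction) {
        Random rng = new Random(seed);
        boolean[] picked = new boolean[docCount];
        for (int i = 0; i < docCount; i++) {
            picked[i] = rng.nextDouble() < fraction;
        }
        return picked;
    }

    public static void main(String[] args) {
        boolean[] a = pickTraining(42L, 10, 0.8);
        boolean[] b = pickTraining(42L, 10, 0.8);
        System.out.println(Arrays.equals(a, b)); // true: same seed, same pick
    }
}
```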
-
softTreeDepthLimit
@Nullable public final java.lang.Integer softTreeDepthLimit()

Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

API name: soft_tree_depth_limit
-
softTreeDepthTolerance
@Nullable public final java.lang.Double softTreeDepthTolerance()

Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

API name: soft_tree_depth_tolerance
-
trainingPercent
@Nullable public final java.lang.String trainingPercent()

Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage.

API name: training_percent
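Since training_percent is a percentage of the eligible documents (after ignored documents are excluded), the training-set size follows directly. A small self-contained illustration of that calculation:

```java
public class TrainingSplit {
    // Number of documents used for training, given the eligible-document
    // count and training_percent; ignored documents must already be
    // excluded from eligibleDocs before this calculation.
    static long trainingDocs(long eligibleDocs, double trainingPercent) {
        return Math.round(eligibleDocs * trainingPercent / 100.0);
    }

    public static void main(String[] args) {
        System.out.println(trainingDocs(10_000, 80.0)); // 8000
    }
}
```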
-
serialize

public void serialize(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)

Serialize this object to JSON.

Specified by: serialize in interface JsonpSerializable
-
serializeInternal

protected void serializeInternal(jakarta.json.stream.JsonGenerator generator, JsonpMapper mapper)
-
toString
public java.lang.String toString()

Overrides: toString in class java.lang.Object
-
setupDataframeAnalysisBaseDeserializer
protected static <BuilderT extends DataframeAnalysisBase.AbstractBuilder<BuilderT>> void setupDataframeAnalysisBaseDeserializer(ObjectDeserializer<BuilderT> op)
-