Classification (classification)¶
Logistic Regression¶
- class Orange.classification.LogisticRegressionLearner(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='auto', max_iter=100, multi_class='auto', verbose=0, n_jobs=1, preprocessors=None)[source]¶
A wrapper for sklearn.linear_model._logistic.LogisticRegression. The following is its documentation:
Logistic Regression (aka logit, MaxEnt) classifier.
In the multiclass case, the training algorithm uses the one-vs-rest (OvR) scheme if the 'multi_class' option is set to 'ovr', and uses the cross-entropy loss if the 'multi_class' option is set to 'multinomial'. (Currently the 'multinomial' option is supported only by the 'lbfgs', 'sag', 'saga' and 'newton-cg' solvers.)
This class implements regularized logistic regression using the 'liblinear' library, 'newton-cg', 'sag', 'saga' and 'lbfgs' solvers. Note that regularization is applied by default. It can handle both dense and sparse input. Use C-ordered arrays or CSR matrices containing 64-bit floats for optimal performance; any other input format will be converted (and copied).
The 'newton-cg', 'sag', and 'lbfgs' solvers support only L2 regularization with primal formulation, or no regularization. The 'liblinear' solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. The Elastic-Net regularization is only supported by the 'saga' solver.
Read more in the User Guide.
- preprocessors = [HasClass(), Continuize(), RemoveNaNColumns(), SklImpute()]¶
A sequence of data preprocessors to apply on data prior to fitting the model
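A minimal usage sketch, following the learner/model calling convention used in the examples elsewhere on this page; the parameter value is illustrative only:
import Orange

iris = Orange.data.Table("iris")
logreg = Orange.classification.LogisticRegressionLearner(C=0.5)  # illustrative regularization strength
model = logreg(iris)
print(model(iris[:5]))        # predicted class indices
print(model(iris[:5], True))  # class probabilities, as in the Naive Bayes example below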
Random Forest¶
- class Orange.classification.RandomForestLearner(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, class_weight=None, preprocessors=None)[source]¶
A wrapper for sklearn.ensemble._forest.RandomForestClassifier. The following is its documentation:
A random forest classifier.
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the max_samples parameter if bootstrap=True (default), otherwise the whole dataset is used to build each tree.
Read more in the User Guide.
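A minimal usage sketch (parameter values are illustrative, not recommendations):
import Orange

iris = Orange.data.Table("iris")
forest = Orange.classification.RandomForestLearner(n_estimators=50, random_state=0)
model = forest(iris)
print(model(iris[:5]))  # predicted class indices for the first five instances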
Simple Random Forest¶
- class Orange.classification.SimpleRandomForestLearner(n_estimators=10, min_instances=2, max_depth=1024, max_majority=1.0, skip_prob='sqrt', seed=42)[source]¶
A random forest classifier, optimized for speed. Trees in the forest are constructed with SimpleTreeLearner classification trees.
- Parameters:
n_estimators (int, optional (default = 10)) -- Number of trees in the forest.
min_instances (int, optional (default = 2)) -- Minimal number of data instances in leaves. When growing the tree, new nodes are not introduced if they would result in leaves with fewer instances than min_instances. Instance count is weighted.
max_depth (int, optional (default = 1024)) -- Maximal depth of tree.
max_majority (float, optional (default = 1.0)) -- Maximal proportion of majority class. When this is exceeded, induction stops (only used for classification).
skip_prob (string or float, optional (default = "sqrt")) -- Each data attribute is skipped with probability skip_prob:
if a float, the attribute is skipped with this probability;
if "sqrt", then skip_prob = 1 - sqrt(n_features) / n_features;
if "log2", then skip_prob = 1 - log2(n_features) / n_features.
seed (int, optional (default = 42)) -- Random seed.
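A minimal usage sketch; skip_prob="sqrt" is the default and is spelled out here only to make the attribute-sampling choice explicit:
import Orange

iris = Orange.data.Table("iris")
srf = Orange.classification.SimpleRandomForestLearner(n_estimators=20, skip_prob="sqrt", seed=1)
model = srf(iris)
print(model(iris[:3]))  # predicted class indices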
Softmax Regression¶
- class Orange.classification.SoftmaxRegressionLearner(lambda_=1.0, preprocessors=None, **fmin_args)[source]¶
L2 regularized softmax regression classifier. Uses the L-BFGS algorithm to minimize the categorical cross entropy cost with L2 regularization. This model is suitable when dealing with a multi-class classification problem.
When using this learner you should:
choose a suitable regularization parameter lambda_,
consider using many logistic regression models (one for each value of the class variable) instead of softmax regression.
- Parameters:
lambda_ (float, optional (default=1.0)) -- Regularization parameter. It controls the trade-off between fitting the data and keeping parameters small. Higher values of lambda_ force parameters to be smaller.
preprocessors (list, optional) -- Preprocessors are applied to data before training or testing. Default preprocessors: [RemoveNaNClasses(), RemoveNaNColumns(), Impute(), Continuize(), Normalize()]:
remove columns with all values as NaN,
replace NaN values with suitable values,
continuize all discrete attributes,
transform the dataset so that the columns are on a similar scale.
fmin_args (dict, optional) -- Parameters for L-BFGS algorithm.
- preprocessors = [HasClass(), RemoveNaNColumns(), Impute(), Continuize(), Normalize()]¶
A sequence of data preprocessors to apply on data prior to fitting the model
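A short sketch of the lambda_ trade-off described above; the two values are arbitrary illustrations:
import Orange

iris = Orange.data.Table("iris")
weak = Orange.classification.SoftmaxRegressionLearner(lambda_=0.1)     # weaker regularization
strong = Orange.classification.SoftmaxRegressionLearner(lambda_=10.0)  # stronger regularization, smaller parameters
print(weak(iris)(iris[:3], True))
print(strong(iris)(iris[:3], True))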
k-Nearest Neighbors¶
- class Orange.classification.KNNLearner(n_neighbors=5, metric='euclidean', weights='uniform', algorithm='auto', metric_params=None, preprocessors=None)[source]¶
A wrapper for sklearn.neighbors._classification.KNeighborsClassifier. The following is its documentation:
Classifier implementing the k-nearest neighbors vote.
Read more in the User Guide.
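A minimal usage sketch (the n_neighbors and weights values are illustrative):
import Orange

iris = Orange.data.Table("iris")
knn = Orange.classification.KNNLearner(n_neighbors=10, weights="distance")
model = knn(iris)
print(model(iris[:5]))  # predicted class indices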
Naive Bayes¶
- class Orange.classification.NaiveBayesLearner(preprocessors=None)[source]¶
Naive Bayes classifier. Works only with discrete attributes. By default, continuous attributes are discretized.
- Parameters:
preprocessors (list, optional (default="[Orange.preprocess.Discretize]")) -- An ordered list of preprocessors applied to data before training or testing.
- preprocessors = [RemoveNaNColumns(), Discretize()]¶
A sequence of data preprocessors to apply on data prior to fitting the model
The following code loads the lenses dataset (four discrete attributes and a discrete class), constructs a naive Bayesian learner, uses it on the entire dataset to construct a classifier, and then applies the classifier to the first three data instances:
>>> import Orange
>>> lenses = Orange.data.Table('lenses')
>>> nb = Orange.classification.NaiveBayesLearner()
>>> classifier = nb(lenses)
>>> classifier(lenses[0:3], True)
array([[ 0.04358755, 0.82671726, 0.12969519],
       [ 0.17428279, 0.20342097, 0.62229625],
       [ 0.18633359, 0.79518516, 0.01848125]])
Support Vector Machines¶
- class Orange.classification.SVMLearner(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, max_iter=-1, preprocessors=None)[source]¶
A wrapper for sklearn.svm._classes.SVC. The following is its documentation:
C-Support Vector Classification.
The implementation is based on libsvm. The fit time scales at least quadratically with the number of samples and may be impractical beyond tens of thousands of samples. For large datasets consider using LinearSVC or SGDClassifier instead, possibly after a Nystroem transformer.
The multiclass support is handled according to a one-vs-one scheme.
For details on the precise mathematical formulation of the provided kernel functions and how gamma, coef0 and degree affect each other, see the corresponding section in the narrative documentation: svm_kernels.
Read more in the User Guide.
- preprocessors = [HasClass(), Continuize(), RemoveNaNColumns(), SklImpute(), AdaptiveNormalize(zero_based=<?>, norm_type=<?>, transform_class=<?>, normalize_datetime=<?>, center=<?>, scale=<?>)]¶
A sequence of data preprocessors to apply on data prior to fitting the model
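A minimal usage sketch of the wrapper used as an Orange learner (kernel and C values are illustrative):
import Orange

iris = Orange.data.Table("iris")
svm = Orange.classification.SVMLearner(kernel="rbf", C=10.0, gamma="auto")
model = svm(iris)
print(model(iris[:5]))  # predicted class indices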
Linear Support Vector Machines¶
- class Orange.classification.LinearSVMLearner(penalty='l2', loss='squared_hinge', dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=True, random_state=None, preprocessors=None)[source]¶
A wrapper for sklearn.svm._classes.LinearSVC. The following is its documentation:
Linear Support Vector Classification.
Similar to SVC with parameter kernel='linear', but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.
This class supports both dense and sparse input and the multiclass support is handled according to a one-vs-the-rest scheme.
Read more in the User Guide.
- preprocessors = [HasClass(), Continuize(), RemoveNaNColumns(), SklImpute(), AdaptiveNormalize(zero_based=<?>, norm_type=<?>, transform_class=<?>, normalize_datetime=<?>, center=<?>, scale=<?>)]¶
A sequence of data preprocessors to apply on data prior to fitting the model
Nu-Support Vector Machines¶
- class Orange.classification.NuSVMLearner(nu=0.5, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, max_iter=-1, preprocessors=None)[source]¶
A wrapper for sklearn.svm._classes.NuSVC. The following is its documentation:
Nu-Support Vector Classification.
Similar to SVC but uses a parameter to control the number of support vectors.
The implementation is based on libsvm.
Read more in the User Guide.
- preprocessors = [HasClass(), Continuize(), RemoveNaNColumns(), SklImpute(), AdaptiveNormalize(zero_based=<?>, norm_type=<?>, transform_class=<?>, normalize_datetime=<?>, center=<?>, scale=<?>)]¶
A sequence of data preprocessors to apply on data prior to fitting the model
Classification Tree¶
Orange includes three implementations of classification trees. TreeLearner is home-grown and properly handles multinomial and missing values. The one from scikit-learn, SklTreeLearner, is faster. Another home-grown learner, SimpleTreeLearner, is simpler and faster still.
The following code loads the iris dataset (four numeric attributes and a discrete class), constructs a decision tree learner, uses it on the entire dataset to construct a classifier, and then prints the tree:
>>> import Orange
>>> iris = Orange.data.Table('iris')
>>> tr = Orange.classification.TreeLearner()
>>> classifier = tr(iris)
>>> printed_tree = classifier.print_tree()
>>> for i in printed_tree.split('\n'):
...     print(i)
[50. 0. 0.] petal length ≤ 1.9
[ 0. 50. 50.] petal length > 1.9
    [ 0. 49. 5.] petal width ≤ 1.7
        [ 0. 47. 1.] petal length ≤ 4.9
        [0. 2. 4.] petal length > 4.9
            [0. 0. 3.] petal width ≤ 1.5
            [0. 2. 1.] petal width > 1.5
                [0. 2. 0.] sepal length ≤ 6.7
                [0. 0. 1.] sepal length > 6.7
    [ 0. 1. 45.] petal width > 1.7
- class Orange.classification.TreeLearner(*args, binarize=False, max_depth=None, min_samples_leaf=1, min_samples_split=2, sufficient_majority=0.95, preprocessors=None, **kwargs)[source]¶
Tree inducer with proper handling of nominal attributes and binarization.
The inducer can handle missing values of attributes and the target. For discrete attributes with more than two possible values, each value can get a separate branch (binarize=False, the default), or values can be grouped into two groups (binarize=True).
The tree growth can be limited by the required number of instances for internal nodes and for leaves, by the sufficient proportion of the majority class, and by the maximal depth of the tree.
If the tree is not binary, it can contain zero-branches.
- Parameters:
binarize (bool) -- if True, the inducer will find an optimal split of the values of discrete attributes into two subsets. If False (default), each value gets its own branch.
min_samples_leaf (float) -- the minimal number of data instances in a leaf
min_samples_split (float) -- the minimal number of data instances in a node that is split into subgroups
max_depth (int) -- the maximal depth of the tree
sufficient_majority (float) -- a majority at which the data is not split further
- Returns:
instance of OrangeTreeModel
Simple Tree¶
- class Orange.classification.SimpleTreeLearner(min_instances=2, max_depth=32, max_majority=0.95, skip_prob=0.0, bootstrap=False, seed=42)[source]¶
Classification or regression tree learner. Uses gain ratio for classification and mean square error for regression. This learner was developed to speed up random forest construction, but can also be used as a standalone tree learner.
- Parameters:
min_instances (int, optional (default = 2)) -- Minimal number of data instances in leaves. When growing the tree, new nodes are not introduced if they would result in leaves with fewer instances than min_instances. Instance count is weighted.
max_depth (int, optional (default = 32)) -- Maximal depth of tree.
max_majority (float, optional (default = 0.95)) -- Maximal proportion of majority class. When this is exceeded, induction stops (only used for classification).
skip_prob (string or float, optional (default = 0.0)) -- Each data attribute is skipped with probability skip_prob:
if a float, the attribute is skipped with this probability;
if "sqrt", then skip_prob = 1 - sqrt(n_features) / n_features;
if "log2", then skip_prob = 1 - log2(n_features) / n_features.
bootstrap (bool, optional (default = False)) -- If True, the tree is induced from a bootstrap sample of the data.
seed (int, optional (default = 42)) -- Random seed.
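A minimal usage sketch with arbitrary stopping parameters:
import Orange

iris = Orange.data.Table("iris")
tree = Orange.classification.SimpleTreeLearner(min_instances=5, max_depth=8)
model = tree(iris)
print(model(iris[:5]))  # predicted class indices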
Majority Classifier¶
- class Orange.classification.MajorityLearner(preprocessors=None)[source]¶
A majority classifier. Always returns the most frequent class from the training set, regardless of the attribute values of the test data instance. Returns the class value distribution if class probabilities are requested. Can be used as a baseline when comparing classifiers.
In the special case of a uniform class distribution within the training data, the class value is selected randomly. To produce consistent results on the same dataset, this value is selected based on the hash of the class vector.
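A short sketch of the baseline behaviour; the returned probabilities mirror the training-set class distribution for every instance:
import Orange

lenses = Orange.data.Table("lenses")
baseline = Orange.classification.MajorityLearner()
model = baseline(lenses)
print(model(lenses[:3]))        # the majority class, repeated
print(model(lenses[:3], True))  # training-set class distribution for each instance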
Neural Network¶
- class Orange.classification.NNClassificationLearner(hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, preprocessors=None)[source]¶
A wrapper for Orange.classification.neural_network.MLPClassifierWCallback. The following is its documentation:
Multi-layer Perceptron classifier.
This model optimizes the log-loss function using LBFGS or stochastic gradient descent.
New in version 0.18.
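A minimal usage sketch (layer sizes and iteration limit are illustrative, not tuned values):
import Orange

iris = Orange.data.Table("iris")
nn = Orange.classification.NNClassificationLearner(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0)
model = nn(iris)
print(model(iris[:5]))  # predicted class indices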
CN2 Rule Induction¶
Induction of rules works by finding a rule that covers some learning instances, removing these instances, and repeating this until all instances are covered. Rules are scored by heuristics such as impurity of class distribution of covered instances. The module includes common rule-learning algorithms, and allows for replacing rule search strategies, scoring and other components.
- class Orange.classification.rules.CN2Learner(preprocessors=None, base_rules=None)[source]¶
Classic CN2 inducer that constructs a list of ordered rules. To evaluate found hypotheses, the entropy measure is used. Returns a CN2Classifier if called with data.
References
"The CN2 Induction Algorithm", Peter Clark and Tim Niblett, Machine Learning Journal, 3 (4), pp261-283, (1989)
- class Orange.classification.rules.CN2UnorderedLearner(preprocessors=None, base_rules=None)[source]¶
Construct a set of unordered rules.
Rules are learnt for each class individually and scored by the relative frequency of the class corrected by the Laplace correction. After adding a rule, only the covered examples of that class are removed.
The code below loads the iris dataset (four continuous attributes and a discrete class) and fits the learner.
import Orange

data = Orange.data.Table("iris")
learner = Orange.classification.CN2UnorderedLearner()
# consider up to 10 solution streams at one time
learner.rule_finder.search_algorithm.beam_width = 10
# continuous value space is constrained to reduce computation time
learner.rule_finder.search_strategy.constrain_continuous = True
# found rules must cover at least 15 examples
learner.rule_finder.general_validator.min_covered_examples = 15
# found rules may combine at most 2 selectors (conditions)
learner.rule_finder.general_validator.max_rule_length = 2
classifier = learner(data)
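Continuing the example above, the induced rules can be inspected; this sketch assumes the fitted classifier stores them in a rule_list attribute:
# assumption: the fitted CN2 classifier keeps its induced rules in classifier.rule_list
for rule in classifier.rule_list:
    print(rule)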
References
"Rule Induction with CN2: Some Recent Improvements", Peter Clark and Robin Boswell, Machine Learning - Proceedings of the 5th European Conference (EWSL-91), pp151-163, 1991
- class Orange.classification.rules.CN2SDLearner(preprocessors=None, base_rules=None)[source]¶
Ordered CN2SD inducer that constructs a list of ordered rules. To evaluate found hypotheses, the weighted relative accuracy measure is used. Returns a CN2SDClassifier if called with data.
In this setting, ordered rule induction refers exclusively to finding the best rule conditions and assigning the majority class in the rule head (target class is set to None). To later predict instances, rules will be regarded as unordered.
Notes
A weighted covering algorithm is applied, in which subsequently induced rules also represent interesting and sufficiently large subgroups of the population. Covered positive examples are not deleted from the learning set, rather their weight is reduced.
The algorithm demonstrates how classification rule learning (predictive induction) can be adapted to subgroup discovery, a task at the intersection of predictive and descriptive induction.
References
"Subgroup Discovery with CN2-SD", Nada Lavrač et al., Journal of Machine Learning Research 5 (2004), 153-188, 2004
- class Orange.classification.rules.CN2SDUnorderedLearner(preprocessors=None, base_rules=None)[source]¶
Unordered CN2SD inducer that constructs a set of unordered rules. To evaluate found hypotheses, the weighted relative accuracy measure is used. Returns a CN2SDUnorderedClassifier if called with data.
Notes
A weighted covering algorithm is applied, in which subsequently induced rules also represent interesting and sufficiently large subgroups of the population. Covered positive examples are not deleted from the learning set, rather their weight is reduced.
The algorithm demonstrates how classification rule learning (predictive induction) can be adapted to subgroup discovery, a task at the intersection of predictive and descriptive induction.
References
"Subgroup Discovery with CN2-SD", Nada Lavrač et al., Journal of Machine Learning Research 5 (2004), 153-188, 2004
Calibration and threshold optimization¶
- class Orange.classification.calibration.ThresholdClassifier(base_model, threshold)[source]¶
A model that wraps a binary model and sets a different threshold.
The target class is the class with index 1. A data instance is classified as class 1 if the probability of this class equals or exceeds the threshold.
- base_model¶
base model
- Type:
Orange.classification.Model
- class Orange.classification.calibration.ThresholdLearner(base_learner, threshold_criterion=0)[source]¶
A learner that runs another learner and then finds the optimal threshold for CA or F1 on the training data.
- base_learner¶
base learner
- Type:
Learner
- class Orange.classification.calibration.CalibratedClassifier(base_model, calibrators)[source]¶
A model that wraps another model and recalibrates probabilities.
- base_model¶
base model
- Type:
Orange.classification.Model
- class Orange.classification.calibration.CalibratedLearner(base_learner, calibration_method=0)[source]¶
Probability calibration for learning algorithms.
This learner wraps another learner, so that after training, it predicts the probabilities on the training data and calibrates them using sigmoid or isotonic calibration. It then returns a CalibratedClassifier.
- base_learner¶
base learner
- Type:
Learner
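A hedged sketch of wrapping a base learner with both classes, using the default criterion and calibration method; heart_disease is assumed here to be a bundled binary-class dataset:
import Orange
from Orange.classification.calibration import ThresholdLearner, CalibratedLearner

data = Orange.data.Table("heart_disease")  # assumed bundled dataset with a binary class
base = Orange.classification.LogisticRegressionLearner()

# tune the decision threshold on the training data (default criterion)
thresholded = ThresholdLearner(base_learner=base)
print(thresholded(data)(data[:5]))

# recalibrate predicted probabilities (default calibration method)
calibrated = CalibratedLearner(base_learner=base)
print(calibrated(data)(data[:5], True))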
Gradient Boosted Trees¶
- class Orange.classification.gb.GBClassifier(loss='log_loss', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, presort='deprecated', validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0, preprocessors=None)[source]¶
A wrapper for sklearn.ensemble._gb.GradientBoostingClassifier. The following is its documentation:
Gradient Boosting for classification.
This algorithm builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage n_classes_ regression trees are fit on the negative gradient of the loss function, e.g. binary or multiclass log loss. Binary classification is a special case where only a single regression tree is induced.
sklearn.ensemble.HistGradientBoostingClassifier is a much faster variant of this algorithm for intermediate datasets (n_samples >= 10_000).
Read more in the User Guide.
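A minimal sketch, assuming the wrapper is used like the other learners in this module; parameter values are illustrative:
import Orange
from Orange.classification.gb import GBClassifier

iris = Orange.data.Table("iris")
gb = GBClassifier(n_estimators=50, learning_rate=0.1, max_depth=3)
model = gb(iris)
print(model(iris[:5]))  # predicted class indices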
- class Orange.classification.catgb.CatGBClassifier(iterations=None, learning_rate=None, depth=None, l2_leaf_reg=None, model_size_reg=None, rsm=None, loss_function=None, border_count=None, feature_border_type=None, per_float_feature_quantization=None, input_borders=None, output_borders=None, fold_permutation_block=None, od_pval=None, od_wait=None, od_type=None, nan_mode=None, counter_calc_method=None, leaf_estimation_iterations=None, leaf_estimation_method=None, thread_count=None, random_seed=None, use_best_model=None, verbose=False, logging_level=None, metric_period=None, ctr_leaf_count_limit=None, store_all_simple_ctr=None, max_ctr_complexity=None, has_time=None, allow_const_label=None, classes_count=None, class_weights=None, one_hot_max_size=None, random_strength=None, name=None, ignored_features=None, train_dir='/home/docs/.cache/Orange/3.36.1', custom_loss=None, custom_metric=None, eval_metric=None, bagging_temperature=None, save_snapshot=None, snapshot_file=None, snapshot_interval=None, fold_len_multiplier=None, used_ram_limit=None, gpu_ram_part=None, allow_writing_files=False, final_ctr_computation_mode=None, approx_on_full_history=None, boosting_type=None, simple_ctr=None, combinations_ctr=None, per_feature_ctr=None, task_type=None, device_config=None, devices=None, bootstrap_type=None, subsample=None, sampling_unit=None, dev_score_calc_obj_block_size=None, max_depth=None, n_estimators=None, num_boost_round=None, num_trees=None, colsample_bylevel=None, random_state=None, reg_lambda=None, objective=None, eta=None, max_bin=None, scale_pos_weight=None, gpu_cat_features_storage=None, data_partition=None, metadata=None, early_stopping_rounds=None, cat_features=None, grow_policy=None, min_data_in_leaf=None, min_child_samples=None, max_leaves=None, num_leaves=None, score_function=None, leaf_estimation_backtracking=None, ctr_history_unit=None, monotone_constraints=None, feature_weights=None, penalties_coefficient=None, first_feature_use_penalties=None, model_shrink_rate=None, model_shrink_mode=None, langevin=None, diffusion_temperature=None, posterior_sampling=None, boost_from_average=None, text_features=None, tokenizers=None, dictionaries=None, feature_calcers=None, text_processing=None, preprocessors=None)[source]¶
A wrapper for catboost.core.CatBoostClassifier. The following is its documentation:
Implementation of the scikit-learn API for CatBoost classification.
- class Orange.classification.xgb.XGBClassifier(max_depth=None, learning_rate=None, n_estimators=100, verbosity=None, objective='binary:logistic', booster=None, tree_method=None, n_jobs=None, gamma=None, min_child_weight=None, max_delta_step=None, subsample=None, colsample_bytree=None, colsample_bylevel=None, colsample_bynode=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, base_score=None, random_state=None, missing=nan, num_parallel_tree=None, monotone_constraints=None, interaction_constraints=None, importance_type='gain', gpu_id=None, validate_parameters=None, preprocessors=None)[source]¶
A wrapper for xgboost.sklearn.XGBClassifier. The following is its documentation:
Implementation of the scikit-learn API for XGBoost classification. See /python/sklearn_estimator for more information.
- class Orange.classification.xgb.XGBRFClassifier(max_depth=None, learning_rate=None, n_estimators=100, verbosity=None, objective='binary:logistic', booster=None, tree_method=None, n_jobs=None, gamma=None, min_child_weight=None, max_delta_step=None, subsample=None, colsample_bytree=None, colsample_bylevel=None, colsample_bynode=None, reg_alpha=None, reg_lambda=None, scale_pos_weight=None, base_score=None, random_state=None, missing=nan, num_parallel_tree=None, monotone_constraints=None, interaction_constraints=None, importance_type='gain', gpu_id=None, validate_parameters=None, preprocessors=None)[source]¶
A wrapper for xgboost.sklearn.XGBRFClassifier. The following is its documentation:
scikit-learn API for XGBoost random forest classification. See /python/sklearn_estimator for more information.
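A minimal sketch for the XGBoost wrappers; it assumes the optional xgboost package is installed, and the parameter values are illustrative:
import Orange
from Orange.classification.xgb import XGBClassifier, XGBRFClassifier

iris = Orange.data.Table("iris")
model = XGBClassifier(n_estimators=50, max_depth=4)(iris)      # boosted trees
rf_model = XGBRFClassifier(n_estimators=50)(iris)              # random forest variant
print(model(iris[:5]))
print(rf_model(iris[:5]))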