NAME
AI::MXNet::Metric - Evaluation Metric API.
DESCRIPTION
This module hosts all the evaluation metrics available for measuring the performance of a learned model.
Python Docs: http://mxnet.incubator.apache.org/api/python/metric/metric.html
get_config
Saves the configuration of the metric. The metric can be recreated
from this config with mx->metric->create(%{ $config }).
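For example (a minimal sketch; $acc stands for an already constructed metric, such as the Accuracy instance built in the example below):
my $config   = $acc->get_config;
my $restored = mx->metric->create(%{ $config });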
NAME
AI::MXNet::Accuracy - Computes accuracy classification score.
DESCRIPTION
The accuracy score is defined as
accuracy(y, y^) = (1/n) * sum(i=0..n-1) { y^(i) == y(i) }
Parameters:
axis (Int, default=1) – The axis that represents classes.
name (Str, default='accuracy') – Name of this metric instance for display.
pdl> use AI::MXNet qw(mx)
pdl> $predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
pdl> $labels = [mx->nd->array([[0, 1, 1]])]
pdl> $acc = mx->metric->Accuracy()
pdl> $acc->update($labels, $predicts)
pdl> use Data::Dumper
pdl> print Dumper([$acc->get])
$VAR1 = [
'accuracy',
'0.666666666666667'
];
NAME
AI::MXNet::TopKAccuracy - Computes top k predictions accuracy.
DESCRIPTION
TopKAccuracy differs from Accuracy in that it considers a prediction
to be correct as long as the ground truth label is among the top K predicted labels.
If top_k = 1, then TopKAccuracy is identical to Accuracy.
Parameters:
top_k (Int, default 1) – The number of top predictions to consider; a target counts as correct if it appears among the top k predicted labels.
name (Str, default 'top_k_accuracy') – Name of this metric instance for display.
use AI::MXNet qw(mx);
$top_k = 3;
$predicts = [mx->nd->array(
[[0.80342804, 0.5275223 , 0.11911147, 0.63968144, 0.09092526,
0.33222568, 0.42738095, 0.55438581, 0.62812652, 0.69739294],
[0.78994969, 0.13189035, 0.34277045, 0.20155961, 0.70732423,
0.03339926, 0.90925004, 0.40516066, 0.76043547, 0.47375838],
[0.28671892, 0.75129249, 0.09708994, 0.41235779, 0.28163896,
0.39027778, 0.87110921, 0.08124512, 0.55793117, 0.54753428],
[0.33220307, 0.97326881, 0.2862761 , 0.5082575 , 0.14795074,
0.19643398, 0.84082001, 0.0037532 , 0.78262101, 0.83347772],
[0.93790734, 0.97260166, 0.83282304, 0.06581761, 0.40379256,
0.37479349, 0.50750135, 0.97787696, 0.81899021, 0.18754124],
[0.69804812, 0.68261077, 0.99909815, 0.48263116, 0.73059268,
0.79518236, 0.26139168, 0.16107376, 0.69850315, 0.89950917],
[0.91515562, 0.31244902, 0.95412616, 0.7242641 , 0.02091039,
0.72554552, 0.58165923, 0.9545687 , 0.74233195, 0.19750339],
[0.94900651, 0.85836332, 0.44904621, 0.82365038, 0.99726878,
0.56413064, 0.5890016 , 0.42402702, 0.89548786, 0.44437266],
[0.57723744, 0.66019353, 0.30244304, 0.02295771, 0.83766937,
0.31953292, 0.37552193, 0.18172362, 0.83135182, 0.18487429],
[0.96968683, 0.69644561, 0.60566253, 0.49600661, 0.70888438,
0.26044186, 0.65267488, 0.62297362, 0.83609334, 0.3572364 ]]
)];
$labels = [mx->nd->array([2, 6, 9, 2, 3, 4, 7, 8, 9, 6])];
$acc = mx->metric->TopKAccuracy(top_k=>$top_k);
$acc->update($labels, $predicts);
use Data::Dumper;
print Dumper([$acc->get]);
$VAR1 = [
'top_k_accuracy_3',
'0.3'
];
NAME
AI::MXNet::F1 - Calculate the F1 score of a binary classification problem.
DESCRIPTION
The F1 score is the harmonic mean of precision and recall,
where the best value is 1.0 and the worst value is 0.0. The formula for the F1 score is:
F1 = 2 * (precision * recall) / (precision + recall)
The formulas for precision and recall are:
precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
Note:
This F1 score only supports binary classification.
Parameters:
name (Str, default 'f1') – Name of this metric instance for display.
average (Str, default 'macro') –
Strategy to be used for aggregating across mini-batches.
"macro": average the F1 scores for each batch. "micro": compute a single F1 score across all batches.
$predicts = [mx->nd->array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])];
$labels = [mx->nd->array([0., 1., 1.])];
$f1 = mx->metric->F1();
$f1->update($labels, $predicts);
print $f1->get;
f1 0.8
NAME
AI::MXNet::MCC - Computes the Matthews Correlation Coefficient of a binary classification problem.
DESCRIPTION
While slower to compute than F1, the MCC can give insight that F1 or Accuracy cannot.
For instance, if the network always predicts the same result,
then the MCC will immediately show this. The MCC is also symmetric with respect
to positive and negative categorization; however, there need to be both
positive and negative examples in the labels, or it will always return 0.
MCC of 0 is uncorrelated, 1 is completely correlated, and -1 is negatively correlated.
MCC = (TP * TN - FP * FN)/sqrt( (TP + FP)*( TP + FN )*( TN + FP )*( TN + FN ) )
where 0 terms in the denominator are replaced by 1.
This version of MCC only supports binary classification.
Parameters
----------
name : str, default 'mcc'
Name of this metric instance for display.
average : str, default 'macro'
Strategy to be used for aggregating across mini-batches.
"macro": average the MCC for each batch.
"micro": compute a single MCC across all batches.
Examples
--------
In this example, the network almost always predicts positive:
>>> $false_positives = 1000
>>> $false_negatives = 1
>>> $true_positives = 10000
>>> $true_negatives = 1
>>> $predicts = [mx->nd->array(
[
([.3, .7])x$false_positives,
([.7, .3])x$true_negatives,
([.7, .3])x$false_negatives,
([.3, .7])x$true_positives
]
)];
>>> $labels = [mx->nd->array(
[
(0)x($false_positives + $true_negatives),
(1)x($false_negatives + $true_positives)
]
)];
>>> $f1 = mx->metric->F1();
>>> $f1->update($labels, $predicts);
>>> $mcc = mx->metric->MCC()
>>> $mcc->update($labels, $predicts)
>>> print $f1->get();
f1 0.95233560306652054
>>> print $mcc->get();
mcc 0.01917751877733392
NAME
AI::MXNet::Perplexity - Calculate perplexity.
DESCRIPTION
Perplexity is a measurement of how well a probability distribution or model predicts a sample.
A low perplexity indicates the model is good at predicting the sample.
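Formally, perplexity is the exponential of the average negative log-probability assigned to the true labels:
perplexity = exp( -(1/N) * sum(i=1..N) log( pred(i, label(i)) ) )
where pred(i, label(i)) is the predicted probability of the true label of entry i, and entries whose label equals ignore_label are excluded from both the sum and the count N.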
Parameters
----------
ignore_label : int or undef
Index of an invalid label to ignore when counting.
Usually this should be -1. If undef, all entries are included.
axis : int (default -1)
The axis from prediction that was used to
compute softmax. By default uses the last
axis.
$predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])];
$labels = [mx->nd->array([0, 1, 1])];
$perp = mx->metric->Perplexity(ignore_label=>undef);
$perp->update($labels, $predicts);
print $perp->get();
Perplexity 1.77109762851559
NAME
AI::MXNet::MAE - Calculate Mean Absolute Error loss
DESCRIPTION
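Mean Absolute Error is the average absolute difference between predictions and labels:
mae = (1/n) * sum(i=1..n) |label(i) - pred(i)|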
>>> $predicts = [mx->nd->array([3, -0.5, 2, 7])->reshape([4,1])]
>>> $labels = [mx->nd->array([2.5, 0.0, 2, 8])->reshape([4,1])]
>>> $mean_absolute_error = mx->metric->MAE()
>>> $mean_absolute_error->update($labels, $predicts)
>>> print $mean_absolute_error->get()
('mae', 0.5)
NAME
AI::MXNet::MSE - Calculate Mean Squared Error loss
DESCRIPTION
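Mean Squared Error is the average squared difference between predictions and labels:
mse = (1/n) * sum(i=1..n) (label(i) - pred(i))^2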
>>> $predicts = [mx->nd->array([3, -0.5, 2, 7])->reshape([4,1])]
>>> $labels = [mx->nd->array([2.5, 0.0, 2, 8])->reshape([4,1])]
>>> $mean_squared_error = mx->metric->MSE()
>>> $mean_squared_error->update($labels, $predicts)
>>> print $mean_squared_error->get()
('mse', 0.375)
NAME
AI::MXNet::RMSE - Calculate Root Mean Squared Error loss
DESCRIPTION
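Root Mean Squared Error is the square root of the average squared difference between predictions and labels:
rmse = sqrt( (1/n) * sum(i=1..n) (label(i) - pred(i))^2 )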
>>> $predicts = [mx->nd->array([3, -0.5, 2, 7])->reshape([4,1])]
>>> $labels = [mx->nd->array([2.5, 0.0, 2, 8])->reshape([4,1])]
>>> $root_mean_squared_error = mx->metric->RMSE()
>>> $root_mean_squared_error->update($labels, $predicts)
>>> print $root_mean_squared_error->get()
('rmse', 0.612372457981)
NAME
AI::MXNet::CrossEntropy - Calculate Cross Entropy loss
DESCRIPTION
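Cross entropy is the average negative log of the probability the model assigns to the true class:
cross_entropy = -(1/n) * sum(i=1..n) log( pred(i, label(i)) )
where pred(i, label(i)) is the predicted probability of the true class of sample i.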
>>> $predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> $labels = [mx->nd->array([0, 1, 1])]
>>> $ce = mx->metric->CrossEntropy()
>>> $ce->update($labels, $predicts)
>>> print $ce->get()
('cross-entropy', 0.57159948348999023)
NAME
AI::MXNet::NegativeLogLikelihood - Computes the negative log-likelihood loss.
DESCRIPTION
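The negative log-likelihood is the average negative log-probability assigned to the true class; for probability predictions it is numerically the same quantity as the cross-entropy loss above:
nll = -(1/n) * sum(i=1..n) log( pred(i, label(i)) )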
>>> $predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> $labels = [mx->nd->array([0, 1, 1])]
>>> $nll_loss = mx->metric->NegativeLogLikelihood()
>>> $nll_loss->update($labels, $predicts)
>>> print $nll_loss->get()
('cross-entropy', 0.57159948348999023)
NAME
AI::MXNet::PearsonCorrelation - Computes Pearson correlation.
DESCRIPTION
Computes Pearson correlation.
Parameters
----------
name : str
Name of this metric instance for display.
Examples
--------
>>> $predicts = [mx->nd->array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
>>> $labels = [mx->nd->array([[1, 0], [0, 1], [0, 1]])]
>>> $pr = mx->metric->PearsonCorrelation()
>>> $pr->update($labels, $predicts)
>>> print $pr->get()
('pearson-correlation', '0.421637061887229')
NAME
AI::MXNet::Loss - Dummy metric for directly printing loss.
DESCRIPTION
Dummy metric for directly printing loss.
Parameters
----------
name : str
Name of this metric instance for display.
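For example (a minimal sketch; $loss_batch stands for a hypothetical NDArray of per-sample loss values produced by the network, and the label argument is assumed to be ignored by this metric):
my $loss_metric = mx->metric->Loss();
# feed the raw loss values in as the "predictions"; labels are assumed unused,
# so an empty array ref is passed as a placeholder
$loss_metric->update([], [$loss_batch]);
print $loss_metric->get;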
NAME
AI::MXNet::Confidence - Accuracy by confidence buckets.
DESCRIPTION
Accuracy by confidence buckets.
Parameters
----------
name : str
Name of this metric instance for display.
num_classes : Int
Number of classes.
confidence_thresholds : ArrayRef[Num]
Confidence buckets.
For example:
my $composite_metric = AI::MXNet::CompositeEvalMetric->new;
$composite_metric->add(mx->metric->create('acc'));
$composite_metric->add(
AI::MXNet::Confidence->new(
num_classes => 2,
confidence_thresholds => [ 0.5, 0.7, 0.8, 0.9 ],
)
);
NAME
AI::MXNet::CustomMetric - Custom evaluation metric that takes a sub ref.
DESCRIPTION
Custom evaluation metric that takes a sub ref.
Parameters
----------
eval_function : sub ref
Customized evaluation function.
name : str, optional
The name of the metric
allow_extra_outputs : bool
If true, the prediction outputs can have extra outputs.
This is useful in RNN, where the states are also produced
in outputs for forwarding.
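For example (a minimal sketch; the evaluation sub is assumed to receive each label/prediction pair converted to PDL objects, with the label first, and $labels/$predicts are array refs of NDArrays as in the examples above):
my $eval_mae = AI::MXNet::CustomMetric->new(
    name          => 'custom_mae',
    eval_function => sub {
        my ($label, $pred) = @_;   # assumed order: label, then prediction
        # mean absolute difference over the batch (shapes assumed to match)
        return ($label - $pred)->abs->avg;
    },
);
$eval_mae->update($labels, $predicts);
print $eval_mae->get;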
create
Create an evaluation metric.
Parameters
----------
metric : str or sub ref
The name of the metric, or a function
providing statistics given pred and label NDArrays.
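For example (a minimal sketch; 'acc' is the alias used in the Confidence example above, while the 'top_k_accuracy' alias and the trailing parameter hash are assumptions based on the TopKAccuracy section and the mx->metric->create(%{ $config }) form shown at the top of this document):
my $acc  = mx->metric->create('acc');
my $top5 = mx->metric->create('top_k_accuracy', top_k => 5);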