XCurve.Metrics.OpenAUC
Compute the OpenAUC score, defined as: $$ \text{OpenAUC}(h,r)=\frac{1}{N_kN_u}\sum_{i=1}^{N_k}\sum_{j=1}^{N_u}\mathbb{I}(h(x_i)=y_i)\cdot\mathbb{I}(r(x_j)>r(x_i)), $$ where $h: \mathcal{X} \to \mathcal{Y}_k$ is the close-set classifier and $r: \mathcal{X}\to\mathbb{R}$ is the open-set ranker; $(x_i, y_i)$ and $x_j$ are sampled from the close set and the open set, respectively. For more details, please refer to: OpenAUC: Towards AUC-Oriented Open-Set Recognition. Zitai Wang, Qianqian Xu, Zhiyong Yang, Yuan He, Xiaochun Cao and Qingming Huang, NeurIPS, 2022.
Example:
from XCurve.Metrics import OpenAUC
import numpy as np

n_close_samples, C, n_open_samples = 10, 5, 8
# open-set scores r(x): higher means more likely to be an unknown-class sample
open_set_pred_known = np.random.rand(n_close_samples)
open_set_pred_unknown = np.random.rand(n_open_samples)
# close-set predictions h(x) and ground-truth labels over C known classes
close_set_pred_class = np.random.randint(low=0, high=C, size=(n_close_samples,))
close_set_labels = np.random.randint(low=0, high=C, size=(n_close_samples,))

openauc = OpenAUC(open_set_pred_known, open_set_pred_unknown, close_set_pred_class, close_set_labels)
print(openauc)
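The definition above can also be computed directly with NumPy. The sketch below is an illustration of the formula, not the library's implementation: it vectorizes the double sum by building the pairwise indicator matrix $\mathbb{I}(r(x_j)>r(x_i))$ and masking it with the per-sample correctness indicator $\mathbb{I}(h(x_i)=y_i)$.

```python
import numpy as np

def open_auc_sketch(open_set_pred_known, open_set_pred_unknown,
                    close_set_pred_class, close_set_labels):
    # I(h(x_i) = y_i): 1.0 where the close-set prediction is correct
    correct = (close_set_pred_class == close_set_labels).astype(float)
    # I(r(x_j) > r(x_i)) for every (known i, unknown j) pair via broadcasting;
    # shape: (N_k, N_u)
    pairwise = (open_set_pred_unknown[None, :] >
                open_set_pred_known[:, None]).astype(float)
    # average over all N_k * N_u pairs, zeroing out misclassified known samples
    return float((correct[:, None] * pairwise).mean())
```

For example, with known-sample scores [0.1, 0.9] (predictions [0, 1], labels [0, 0]) and one unknown-sample score 0.5, only the first pair counts and the result is 0.5.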