When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>.
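
A minimal sketch of the tabular workflow, modelled on the package documentation: build an explainer from the training data and the fitted model with `lime()`, then fit a local model around a new observation with `explain()`. It assumes `MASS::lda` (listed in Suggests) is among the model types supported out of the box; `plot_features()` is used for the optional visualisation.

```r
library(lime)
library(MASS)  # for lda(); MASS is listed in Suggests

# Hold out one row to explain; train on the rest
iris_test  <- iris[1, 1:4]
iris_train <- iris[-1, 1:4]
iris_lab   <- iris[[5]][-1]

# Fit the "black box" classifier to be explained
model <- lda(iris_train, iris_lab)

# Create an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Fit a local model around the test point and perturbations of it
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Visualise the feature weights of the local explanation
plot_features(explanation)
```

The resulting `explanation` is a data frame with one row per explained feature, giving the feature weights of the local model for the predicted label.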
| Version: | 0.5.4 |
| Depends: | R (≥ 4.1) |
| Imports: | assertthat, ggplot2, glmnet, glue, gower, grDevices, lifecycle, Matrix, methods, Rcpp, rlang, stats, stringi, tools |
| LinkingTo: | Rcpp, RcppEigen |
| Suggests: | covr, h2o, htmlwidgets, keras, knitr, magick, MASS, mlr, ranger, rmarkdown, sessioninfo, shiny, shinythemes, testthat (≥ 3.0.0), text2vec, xgboost |
| Published: | 2025-12-11 |
| DOI: | 10.32614/CRAN.package.lime |
| Author: | Emil Hvitfeldt [aut, cre], Thomas Lin Pedersen [aut], Michaël Benesty [aut] |
| Maintainer: | Emil Hvitfeldt <emil.hvitfeldt at posit.co> |
| BugReports: | https://github.com/tidymodels/lime/issues |
| License: | MIT + file LICENSE |
| URL: | https://lime.data-imaginist.com, https://github.com/tidymodels/lime |
| NeedsCompilation: | yes |
| Materials: | README, NEWS |
| In views: | MachineLearning |
| CRAN checks: | lime results |