Bootstrapping is a popular resampling method: it draws samples with replacement from the observed data in order to assign measures of accuracy to sample estimates. Bootstrapping allows the estimation of the sampling distribution of nearly any statistic.
“A way of generating confidence intervals and the distribution of test statistics through sampling the observed data rather than through assuming a probability model for the underlying random variable. A basic bootstrap sample of a data set x1, x2, …, xn is a sample of size n with replacement, so that the bootstrap sample will be drawn from the original set of distinct values, but not generally in the same proportions as the original data set.”
— David Spiegelhalter, “The Art of Statistics, Learning from Data”
“a widely applicable and extremely powerful statistical tool that can be used to quantify the uncertainty associated with a given estimator or statistical learning method. As a simple example, the bootstrap can be used to estimate the standard errors of the coefficients from a linear regression fit.”
— James, Witten, Hastie & Tibshirani, “An Introduction to Statistical Learning”
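The regression example mentioned above can be sketched in code. The snippet below is an illustrative sketch, not taken from any of the quoted books: it generates synthetic data (assumed here as y = 2x + noise), fits a simple OLS slope, and estimates its standard error by resampling (x, y) pairs with replacement.

```python
import random
import statistics

random.seed(0)

# Synthetic data for illustration: y = 2x + Gaussian noise
n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]

def ols_slope(xs, ys):
    """Slope of a simple linear regression: cov(x, y) / var(x)."""
    mx = statistics.fmean(xs)
    my = statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

B = 1000  # number of bootstrap replications
slopes = []
for _ in range(B):
    # Resample observation pairs with replacement, keeping the sample size n
    idx = [random.randrange(n) for _ in range(n)]
    slopes.append(ols_slope([x[i] for i in idx], [y[i] for i in idx]))

# The standard deviation of the bootstrap slopes estimates the
# standard error of the slope coefficient
se_hat = statistics.stdev(slopes)
print(ols_slope(x, y), se_hat)
```

Note that each bootstrap sample resamples whole (x, y) pairs, not x and y separately, so the relationship between the variables is preserved in every replication.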
“In many cases where formulas for standard errors are hard to obtain manually, or where they are thought not to be very good approximations to the true sampling variation of an estimator, we can rely on a resampling method. The general idea is to treat the observed data as a population that we can draw samples from. The most common resampling method is the bootstrap. (There are several versions of the bootstrap, but the most general, and most easily applied, is called the nonparametric bootstrap.)”
— Jeffrey M. Wooldridge, “Introductory Econometrics, A Modern Approach”
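The nonparametric bootstrap described in the quotes above can be sketched in a few lines. This is a hedged illustration using made-up data and a hypothetical `bootstrap_ci` helper: it treats the observed sample as the population, draws many bootstrap samples of the same size with replacement, and reads a confidence interval for the mean off the percentiles of the resulting statistics.

```python
import random
import statistics

random.seed(1)

# Observed sample (illustrative data only)
data = [random.gauss(10, 2) for _ in range(50)]

def bootstrap_ci(sample, stat=statistics.fmean, B=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic.

    Each bootstrap sample is drawn from the observed data with
    replacement and has the same size as the original sample.
    """
    n = len(sample)
    stats = sorted(stat(random.choices(sample, k=n)) for _ in range(B))
    lo = stats[int((alpha / 2) * B)]
    hi = stats[int((1 - alpha / 2) * B) - 1]
    return lo, hi

lo, hi = bootstrap_ci(data)
print(lo, hi)
```

Because `random.choices` samples with replacement, each bootstrap sample contains some original values more than once and omits others, which is exactly the behaviour Spiegelhalter's definition describes.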