Bootstrap Methods and Their Application

The bootstrap method is defined and its applications illustrated.


Bootstrap methods and their application

Cambridge Series on Statistical and Probabilistic Mathematics

Editorial Board: R. Gill (Utrecht), B. D. Ripley (Oxford), S. Ross (Berkeley), M. Stein (Chicago), D. Williams (Bath)

This series of high quality upper-division textbooks and expository monographs covers all areas of stochastic applicable mathematics. The topics range from pure and applied statistics to probability theory, operations research, mathematical programming, and optimization. The books contain clear presentations of new developments in the field and also of the state of the art in classical methods. While emphasizing rigorous treatment of theoretical methods, the books contain important applications and discussions of new techniques made possible by advances in computational methods.

Bootstrap methods and their application

A. C. Davison
Professor of Statistics, Department of Mathematics, Swiss Federal Institute of Technology, Lausanne

D. V. Hinkley
Professor of Statistics, Department of Statistics and Applied Probability, University of California, Santa Barbara

Cambridge University Press

Published by the Press Syndicate of the University of Cambridge
The Pitt Building, Trumpington Street, Cambridge CB2 1RP, United Kingdom

Cambridge University Press
The Edinburgh Building, Cambridge CB2 2RU, United Kingdom
40 West 20th Street, New York, NY 10011-4211, USA
10 Stamford Road, Oakleigh, Melbourne 3166, Australia

© Cambridge University Press 1997

This book is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 1997
Printed in the United States of America
Typeset in TeX Monotype Times

A catalogue record for this book is available from the British Library

Library of Congress Cataloguing in Publication data
Davison, A. C. (Anthony Christopher)
Bootstrap methods and their application / A.C. Davison, D.V. Hinkley.
p. cm.
Includes bibliographical references and index.
ISBN 0 521 57391 2 (hb). ISBN 0 521 57471 4 (pb)
1. Bootstrap (Statistics) I. Hinkley, D. V. II. Title.
QA276.8.D38 1997 519.5'44 dc21 96-30064 CIP

ISBN 0 521 57391 2 hardback
ISBN 0 521 57471 4 paperback

Contents

Preface                                                        ix

1   Introduction                                                1

2   The Basic Bootstraps                                       11
    2.1   Introduction                                         11
    2.2   Parametric Simulation                                15
    2.3   Nonparametric Simulation                             22
    2.4   Simple Confidence Intervals                          27
    2.5   Reducing Error                                       31
    2.6   Statistical Issues                                   37
    2.7   Nonparametric Approximations for Variance and Bias   45
    2.8   Subsampling Methods                                  55
    2.9   Bibliographic Notes                                  59
    2.10  Problems                                             60
    2.11  Practicals                                           66

3   Further Ideas                                              70
    3.1   Introduction                                         70
    3.2   Several Samples                                      71
    3.3   Semiparametric Models                                77
    3.4   Smooth Estimates of F                                79
    3.5   Censoring                                            82
    3.6   Missing Data                                         88
    3.7   Finite Population Sampling                           92
    3.8   Hierarchical Data                                   100
    3.9   Bootstrapping the Bootstrap                         103
    3.10  Bootstrap Diagnostics                               113
    3.11  Choice of Estimator from the Data                   120
    3.12  Bibliographic Notes                                 123
    3.13  Problems                                            126
    3.14  Practicals                                          131

4   Tests                                                     136
    4.1   Introduction                                        136
    4.2   Resampling for Parametric Tests                     140
    4.3   Nonparametric Permutation Tests                     156
    4.4   Nonparametric Bootstrap Tests                       161
    4.5   Adjusted P-values                                   175
    4.6   Estimating Properties of Tests                      180
    4.7   Bibliographic Notes                                 183
    4.8   Problems                                            184
    4.9   Practicals                                          187

5   Confidence Intervals                                      191
    5.1   Introduction                                        191
    5.2   Basic Confidence Limit Methods                      193
    5.3   Percentile Methods                                  202
    5.4   Theoretical Comparison of Methods                   211
    5.5   Inversion of Significance Tests                     220
    5.6   Double Bootstrap Methods                            223
    5.7   Empirical Comparison of Bootstrap Methods           230
    5.8   Multiparameter Methods                              231
    5.9   Conditional Confidence Regions                      238
    5.10  Prediction                                          243
    5.11  Bibliographic Notes                                 246
    5.12  Problems                                            247
    5.13  Practicals                                          251

6   Linear Regression                                         256
    6.1   Introduction                                        256
    6.2   Least Squares Linear Regression                     257
    6.3   Multiple Linear Regression                          273
    6.4   Aggregate Prediction Error and Variable Selection   290
    6.5   Robust Regression                                   307
    6.6   Bibliographic Notes                                 315
    6.7   Problems                                            316
    6.8   Practicals                                          321

7   Further Topics in Regression                              326
    7.1   Introduction                                        326
    7.2   Generalized Linear Models                           327
    7.3   Survival Data                                       346
    7.4   Other Nonlinear Models                              353
    7.5   Misclassification Error                             358
    7.6   Nonparametric Regression                            362
    7.7   Bibliographic Notes                                 374
    7.8   Problems                                            376
    7.9   Practicals                                          378

8   Complex Dependence                                        385
    8.1   Introduction                                        385
    8.2   Time Series                                         385
    8.3   Point Processes                                     415
    8.4   Bibliographic Notes                                 426
    8.5   Problems                                            428
    8.6   Practicals                                          432

9   Improved Calculation                                      437
    9.1   Introduction                                        437
    9.2   Balanced Bootstraps                                 438
    9.3   Control Methods                                     446
    9.4   Importance Resampling                               450
    9.5   Saddlepoint Approximation                           466
    9.6   Bibliographic Notes                                 485
    9.7   Problems                                            487
    9.8   Practicals                                          494

10  Semiparametric Likelihood Inference                       499
    10.1  Likelihood                                          499
    10.2  Multinomial-Based Likelihoods                       500
    10.3  Bootstrap Likelihood                                507
    10.4  Likelihood Based on Confidence Sets                 509
    10.5  Bayesian Bootstraps                                 512
    10.6  Bibliographic Notes                                 514
    10.7  Problems                                            516
    10.8  Practicals                                          519

11  Computer Implementation                                   522
    11.1  Introduction                                        522
    11.2  Basic Bootstraps                                    525
    11.3  Further Ideas                                       531
    11.4  Tests                                               534
    11.5  Confidence Intervals                                536
    11.6  Linear Regression                                   537
    11.7  Further Topics in Regression                        540
    11.8  Time Series                                         543
    11.9  Improved Simulation                                 545
    11.10 Semiparametric Likelihoods                          549

Appendix A. Cumulant Calculations                             551
Bibliography                                                  555
Name Index                                                    568
Example index                                                 572
Subject index                                                 575

Preface

The publication in 1979 of Bradley Efron's first article on bootstrap methods was a major event in Statistics, at once synthesizing some of the earlier resampling ideas and establishing a new framework for simulation-based statistical analysis. The idea of replacing complicated and often inaccurate approximations to biases, variances, and other measures of uncertainty by computer simulations caught the imagination of both theoretical researchers and users of statistical methods. Theoreticians sharpened their pencils and set about establishing mathematical conditions under which the idea could work. Once they had overcome their initial skepticism, applied workers sat down at their terminals and began to amass empirical evidence that the bootstrap often did work better than traditional methods. The early trickle of papers quickly became a torrent, with new additions to the literature appearing every month, and it was hard to see when would be a good moment to try to chart the waters. Then the organizers of COMPSTAT '92 invited us to present a course on the topic, and shortly afterwards we began to write this book.

We decided to try to write a balanced account of resampling methods, to include basic aspects of the theory which underpinned the methods, and to show as many applications as we could in order to illustrate the full potential of the methods — warts and all. We quickly realized that in order for us and others to understand and use the bootstrap, we would need suitable software, and producing it led us further towards a practically oriented treatment. Our view was cemented by two further developments: the appearance of two excellent books, one by Peter Hall on the asymptotic theory and the other on basic methods by Bradley Efron and Robert Tibshirani; and the chance to give further courses that included practicals.

Our experience has been that hands-on computing is essential in coming to grips with resampling ideas, so we have included practicals in this book, as well as more theoretical problems. As the book expanded, we realized that a fully comprehensive treatment was beyond us, and that certain topics could be given only a cursory treatment because too little is known about them. So it is that the reader will find only brief accounts of bootstrap methods for hierarchical data, missing data problems, model selection, robust estimation, nonparametric regression, and complex data. But we do try to point the more ambitious reader in the right direction.

No project of this size is produced in a vacuum. The majority of work on the book was completed while we were at the University of Oxford, and we are very grateful to colleagues and students there, who have helped shape our work in various ways. The experience of trying to teach these methods in Oxford and elsewhere — at the Université de Toulouse I, Université de Neuchâtel, Università degli Studi di Padova, Queensland University of Technology, Universidade de São Paulo, and University of Umeå — has been vital, and we are grateful to participants in these courses for prompting us to think more deeply about the


material. Readers will be grateful to these people also, for unwittingly debugging some of the problems and practicals. We are also grateful to the organizers of COMPSTAT '92 and CLAPEM V for inviting us to give short courses on our work.

While writing this book we have asked many people for access to data, copies of their programs, papers or reprints; some have then been rewarded by our bombarding them with questions, to which the answers have invariably been courteous and informative. We cannot name all those who have helped in this way, but D. R. Brillinger, P. Hall, M. P. Jones, B. D. Ripley, H. O'R. Sternberg and G. A. Young have been especially generous. S. Hutchinson and B. D. Ripley have helped considerably with computing matters. We are grateful to the mostly anonymous reviewers who commented on an early draft of the book, and to R. Gatto and G. A. Young, who later read various parts in detail. At Cambridge University Press, A. Woollatt and D. Tranah have helped greatly in producing the final version, and their patience has been commendable.

We are particularly indebted to two people. V. Ventura read large portions of the book, and helped with various aspects of the computation. A. J. Canty has turned our version of the bootstrap library functions into reliable working code, checked the book for mistakes, and has made numerous suggestions that have improved it enormously. Both of them have contributed greatly — though of course we take responsibility for any errors that remain in the book. We hope that readers will tell us about them, and we will do our best to correct any future versions of the book; see its WWW page, at URL

    http://dmawww.epfl.ch/davison.mosaic/BMA/

The book could not have been completed without grants from the UK Engineering and Physical Sciences Research Council, which in addition to providing funding for equipment and research assistantships, supported the work of A. C. Davison through the award of an Advanced Research Fellowship. We also acknowledge support from the US National Science Foundation.

We must also mention the Friday evening sustenance provided at the Eagle and Child, the Lamb and Flag, and the Royal Oak. The projects of many authors have flourished in these amiable establishments.

Finally, we thank our families, friends and colleagues for their patience while this project absorbed our time and energy. Particular thanks are due to Claire Cullen Davison for keeping the Davison family going during the writing of this book.

A. C. Davison and D. V. Hinkley
Lausanne and Santa Barbara
May 1997

1 Introduction

The explicit recognition of uncertainty is central to the statistical sciences. Notions such as prior information, probability models, likelihood, standard errors and confidence limits are all intended to formalize uncertainty and thereby make allowance for it. In simple situations, the uncertainty of an estimate may be gauged by analytical calculation based on an assumed probability model for the available data. But in more complicated problems this approach can be tedious and difficult, and its results are potentially misleading if inappropriate assumptions or simplifications have been made.

For illustration, consider Table 1.1, which is taken from a larger tabulation (Table 7.4) of the numbers of AIDS reports in England and Wales from mid-1983 to the end of 1992. Reports are cross-classified by diagnosis period and length of reporting delay, in three-month intervals. A blank in the table corresponds to an unknown (as yet unreported) entry. The problem was to predict the state of the epidemic in 1991 and 1992, which depends heavily on the values missing at the bottom right of the table.

The data support the assumption that the reporting delay does not depend on the diagnosis period. In this case a simple model is that the number of reports in row j and column k of the table has a Poisson distribution with mean

    μ_jk = exp(α_j + β_k).

If all the cells of the table are regarded as independent, then the total number of unreported diagnoses in period j has a Poisson distribution with mean

    Σ_k μ_jk = exp(α_j) Σ_k exp(β_k),

where the sums are over columns with blanks in row j. The eventual total of as yet unreported diagnoses from period j can be estimated by replacing α_j and β_k by estimates derived from the incomplete table, and thence we obtain the predicted total for period j. Such predictions are shown by the solid line in


Diagnosis                Reporting delay interval (quarters):              Total reports
period                                                                    to end of 1992
Year  Quarter    0+     1     2     3     4     5     6   ···   ≥14

1988     1       31    80    16     9     3     2     8   ···     6            174
         2       26    99    27     9     8    11     3   ···     3            211
         3       31    95    35    13    18     4     6   ···     3            224
         4       36    77    20    26    11     3     8   ···     2            205
1989     1       32    92    32    10    12    19    12   ···     2            224
         2       15    92    14    27    22    21    12   ···     1            219
         3       34   104    29    31    18     8     6   ···                  253
         4       38   101    34    18     9    15     6                        233
1990     1       31   124    47    24    11    15     8                        281
         2       32   132    36    10     9     7     6                        245
         3       49   107    51    17    15     8     9                        260
         4       44   153    41    16    11     6     5                        285
1991     1       41   137    29    33     7    11     6                        271
         2       56   124    39    14    12     7    10                        263
         3       53   175    35    17    13    11                              306
         4       63   135    24    23    12                                    258
1992     1       71   161    48    25                                          310
         2       95   178    39                                                318
         3       76   181                                                      273
         4       67                                                            133

Figure 1.1, together with the observed total reports to the end of 1992. How good are these predictions? It would be tedious but possible to put pen to paper and estimate the prediction uncertainty through calculations based on the Poisson model. But in fact the data are much more variable than that model would suggest, and by failing to take this into account we would believe that the predictions are more accurate than they really are. Furthermore, a better approach would be to use a semiparametric model to smooth out the evident variability of the increase in diagnoses from quarter to quarter; the corresponding prediction is the dotted line in Figure 1.1. Analytical calculations for this model would be very unpleasant, and a more flexible line of attack is needed. While more than one approach is possible, the one that we shall develop based on computer simulation is both flexible and straightforward.
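The prediction recipe just described is easy to sketch in code. The book's practicals use the S language; the following Python sketch is purely illustrative, and the miniature table of counts is invented (it only mimics the staircase pattern of blanks at the bottom right of Table 1.1). The model is fitted by alternating maximum-likelihood updates for the α_j and β_k over the observed cells:

```python
import math

# Hypothetical miniature version of the delay table: rows are diagnosis
# periods, columns are reporting-delay intervals; None marks a cell that
# is still unreported.  These counts are invented for illustration.
table = [
    [30, 12, 5],
    [40, 15, None],
    [50, None, None],
]
J, K = len(table), len(table[0])

alpha = [0.0] * J
beta = [0.0] * K
for _ in range(200):  # alternate ML updates until the fit stabilizes
    for j in range(J):
        obs = sum(x for x in table[j] if x is not None)
        denom = sum(math.exp(beta[k]) for k in range(K) if table[j][k] is not None)
        alpha[j] = math.log(obs / denom)
    for k in range(K):
        obs = sum(table[j][k] for j in range(J) if table[j][k] is not None)
        denom = sum(math.exp(alpha[j]) for j in range(J) if table[j][k] is not None)
        beta[k] = math.log(obs / denom)

# Predicted number of as-yet-unreported diagnoses in period j:
# exp(alpha_j) times the sum of exp(beta_k) over the blank columns of row j.
unreported = [
    math.exp(alpha[j]) * sum(math.exp(beta[k]) for k in range(K) if table[j][k] is None)
    for j in range(J)
]
eventual = [sum(x for x in row if x is not None) + u
            for row, u in zip(table, unreported)]
```

The eventual totals for the later periods then drive a prediction like the solid line of Figure 1.1; the point of the chapter is that assessing the uncertainty of such predictions analytically is painful, which is where resampling comes in.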

Purpose of the Book

Our central goal is to describe how the computer can be harnessed to obtain reliable standard errors, confidence intervals, and other measures of uncertainty for a wide range of problems. The key idea is to resample from the original data — either directly or via a fitted model — to create replicate datasets, from

Table 1.1 Numbers of AIDS reports in England and Wales to the end of 1992 (De Angelis and Gilks, 1994), extracted from Table 7.4. A + indicates a reporting delay of less than one month.


Figure 1.1 Predicted quarterly diagnoses from a parametric model (solid) and a semiparametric model (dots) fitted to the AIDS data, together with the actual totals to the end of 1992 (+).

which the variability of the quantities of interest can be assessed without long-winded and error-prone analytical calculation. Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Another name for them is bootstrap methods, because to use the data to generate more data seems analogous to a trick used by the fictional Baron Munchausen, who when he found himself at the bottom of a lake got out by pulling himself up by his bootstraps. In the simplest nonparametric problems we do literally sample from the data, and a common initial reaction is that this is a fraud. In fact it is not. It turns out that a wide range of statistical problems can be tackled this way, liberating the investigator from the need to oversimplify complex problems. The approach can also be applied in simple problems, to check the adequacy of standard measures of uncertainty, to relax assumptions, and to give quick approximate solutions. An example of this is random sampling to estimate the permutation distribution of a nonparametric test statistic.

It is of course true that in many applications we can be fairly confident in a particular parametric model and the standard analysis based on that model. Even so, it can still be helpful to see what can be inferred without particular parametric model assumptions. This is in the spirit of robustness of validity of the statistical analysis performed. Nonparametric bootstrap analysis allows us to do this.
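The resampling idea can be stated in a few lines of code. Here is a minimal sketch of the simplest nonparametric bootstrap, applied to the air-conditioning data of Table 1.2; Python is used for illustration only, since the book's own practicals are written in the S language:

```python
import random
import statistics

# Failure times from Table 1.2; the statistic of interest is the mean.
y = [3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487]
n = len(y)
t = statistics.mean(y)               # observed value of T, about 108.083

random.seed(1)
R = 999                              # number of replicate datasets
t_star = []
for _ in range(R):
    resample = [random.choice(y) for _ in range(n)]   # n values drawn with replacement
    t_star.append(statistics.mean(resample))

bias = statistics.mean(t_star) - t   # bootstrap estimate of the bias of T
se = statistics.stdev(t_star)        # bootstrap standard error of T
```

Each replicate dataset is analysed exactly as the original one; the spread of the replicate statistics then estimates the sampling variability of T. Chapter 2 develops this idea properly.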


3  5  7  18  43  85  91  98  100  130  230  487

Despite its scope and usefulness, resampling must be carefully applied. Unless certain basic ideas are understood, it is all too easy to produce a solution to the wrong problem, or a bad solution to the right one. Bootstrap methods are intended to help avoid tedious calculations based on questionable assumptions, and this they do. But they cannot replace clear critical thought about the problem, appropriate design of the investigation and data analysis, and incisive presentation of conclusions.

In this book we describe how resampling methods can be used, and evaluate their performance, in a wide range of contexts. Our focus is on the methods and their practical application rather than on the underlying theory, accounts of which are available elsewhere. This book is intended to be useful to the many investigators who want to know how and when the methods can safely be applied, and how to tell when things have gone wrong. The mathematical level of the book reflects this: we have aimed for a clear account of the key ideas without an overload of technical detail.

Examples

Bootstrap methods can be applied both when there is a well-defined probability model for data and when there is not. In our initial development of the methods we shall make frequent use of two simple examples, one of each type, to illustrate the main points.

Example 1.1 (Air-conditioning data)  Table 1.2 gives n = 12 times between failures of air-conditioning equipment, for which we wish to estimate the underlying mean or its reciprocal, the failure rate. A simple model for this problem is that the times are sampled from an exponential distribution. The dotted line in the left panel of Figure 1.2 is the cumulative distribution function (CDF)

    F(y) = 0,                 y ≤ 0,
         = 1 − exp(−y/μ),     y > 0,

for the fitted exponential distribution with mean μ set equal to the sample average, ȳ = 108.083. The solid line on the same plot is the nonparametric equivalent, the empirical distribution function (EDF) for the data, which places equal probabilities n⁻¹ = 0.083 at each sample value. Comparison of the two curves suggests that the exponential model fits reasonably well. An alternative view of this is shown in the right panel of the figure, which is an exponential

Table 1.2 Service hours between failures of the air-conditioning equipment in a Boeing 720 jet aircraft (Proschan, 1963).
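The comparison between the EDF and the fitted exponential CDF described in Example 1.1 is easy to reproduce numerically. A minimal sketch (Python for illustration; the fitted mean ȳ = 108.083 comes straight from the data of Table 1.2):

```python
import math

# Ordered failure times from Table 1.2
y = sorted([3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487])
n = len(y)
mu_hat = sum(y) / n                    # sample average, 108.083

def edf(x):
    # Empirical distribution function: proportion of sample values <= x
    return sum(v <= x for v in y) / n

def exp_cdf(x):
    # CDF of the fitted exponential distribution, 1 - exp(-x/mu_hat)
    return 1.0 - math.exp(-x / mu_hat) if x > 0 else 0.0

# Largest discrepancy between the two curves at the data points
max_gap = max(abs(edf(v) - exp_cdf(v)) for v in y)
```

A small value of max_gap is informal evidence that the exponential model fits; Figure 1.2 makes the same comparison graphically.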


Figure 1.2 Summary displays for the air-conditioning data. The left panel shows the EDF for the data (solid), and the CDF of a fitted exponential distribution (dots). The right panel shows a plot of the ordered failure times against exponential quantiles, with the fitted exponential model shown as the dotted line.

Q-Q plot — a plot of the ordered data values y_(j) against the standard exponential quantiles

    −log{1 − j/(n + 1)},    j = 1, ..., n.

Although these plots suggest reasonable agreement with the exponential model, the sample is rather too small to have much confidence in this. In the data source the more general gamma model with mean μ and index κ is used; its density is

    f_{μ,κ}(y) = (1/Γ(κ)) (κ/μ)^κ y^{κ−1} exp(−κy/μ),    y > 0,   μ, κ > 0.    (1.1)

For our sample the estimated index is κ̂ = 0.71, which does not differ significantly (P = 0.29) from the value κ = 1 that corresponds to the exponential model. Our reason for mentioning this will become apparent in Chapter 2.

Basic properties of the estimator T = Ȳ for μ are easy to obtain theoretically under the exponential model. For example, it is easy to show that T is unbiased and has variance μ²/n. Approximate confidence intervals for μ can be calculated using these properties in conjunction with a normal approximation for the distribution of T, although this does not work very well: we can tell this because Ȳ/μ has an exact gamma distribution, which leads to exact confidence limits. Things are more complicated under the more general gamma model, because the index κ is only estimated, and so in a traditional approach we would use approximations — such as a normal approximation for the distribution of T, or a chi-squared approximation for the log likelihood ratio statistic.
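The parametric simulation idea can be previewed directly: draw replicate samples of size n from the fitted exponential distribution and watch the replicate values of T = Ȳ. A minimal sketch (Python for illustration; Section 2.2 develops the method properly):

```python
import random

# Fitted exponential model for the air-conditioning data
random.seed(1)
n, mu_hat = 12, 108.083
R = 2000                                  # number of simulated datasets

t_star = []
for _ in range(R):
    # expovariate takes the rate 1/mu, so each draw has mean mu_hat
    sample = [random.expovariate(1.0 / mu_hat) for _ in range(n)]
    t_star.append(sum(sample) / n)

m = sum(t_star) / R
sim_var = sum((v - m) ** 2 for v in t_star) / (R - 1)  # simulation variance of T
theory_var = mu_hat ** 2 / n              # exact variance of T under the model
```

With R this large the simulation variance should sit close to the theoretical value μ̂²/n ≈ 973.5, and the simulated values of T also approximate its whole distribution, not just its variance — which is what makes the approach useful when exact theory is unavailable.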


The parametric simulation methods of Section 2.2 can be used alongside these approximations, to diagnose problems with them, or to replace them entirely.

■

Example 1.2 (City population data)  Table 1.3 reports n = 49 data pairs, each corresponding to a city in the United States of America, the pair being the 1920 and 1930 populations of the city, which we denote by u and x. The data are plotted in Figure 1.3. Interest here is in the ratio of means, because this would enable us to estimate the total population of the USA in 1930 from the 1920 figure. If the cities form a random sample with (U, X) denoting the pair of population values for a randomly selected city, then the total 1930 population is the product of the total 1920 population and the ratio of expectations θ = E(X)/E(U). This ratio is the parameter of interest.

In this case there is no obvious parametric model for the joint distribution of (U, X), so it is natural to estimate θ by its empirical analog, T = X̄/Ū, the ratio of sample averages. We are then concerned with the uncertainty in T. If we had a plausible parametric model — for example, that the pair (U, X) has a bivariate lognormal distribution — then theoretical calculations like those in Example 1.1 would lead to bias and variance estimates for use in a normal approximation, which in turn would provide approximate confidence intervals for θ. Without such a model we must use nonparametric analysis. It is still possible to estimate the bias and variance of T, as we shall see, and this makes normal approximation still feasible, as well as more complex approaches to setting confidence intervals. ■

Example 1.1 is special in that an exact distribution is available for the statistic of interest and can be used to calculate confidence limits, at least under the exponential model. But for parametric models in general this will not be true. In Section 2.2 we shall show how to use parametric simulation to obtain approximate distributions, either by approximating moments for use in normal approximations, or — when these are inaccurate — directly. In Example 1.2 we make no assumptions about the form of the data distribution. But still, as we shall show in Section 2.3, simulation can be used to obtain properties of T, even to approximate its distribution. Much of Chapter 2 is devoted to this.
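The nonparametric analysis promised for Example 1.2 can also be previewed in code. This Python sketch (illustrative only; the book's practicals use S) resamples whole (u, x) pairs with replacement, here applied to a ten-pair subset of the city data as recovered from the first column of Table 1.3:

```python
import random

# Ten (u, x) pairs: 1920 and 1930 city populations, in thousands
pairs = [(138, 143), (93, 104), (61, 69), (179, 260), (48, 75),
         (37, 63), (29, 50), (23, 48), (30, 111), (2, 50)]

def ratio(ps):
    # T = xbar / ubar, the empirical analog of theta = E(X)/E(U)
    ubar = sum(p[0] for p in ps) / len(ps)
    xbar = sum(p[1] for p in ps) / len(ps)
    return xbar / ubar

t = ratio(pairs)                      # observed ratio, about 1.52

random.seed(1)
R = 999
t_star = [ratio([random.choice(pairs) for _ in pairs]) for _ in range(R)]

m = sum(t_star) / R
bias = m - t                          # bootstrap estimate of the bias of T
var = sum((v - m) ** 2 for v in t_star) / (R - 1)   # bootstrap variance of T
```

Resampling whole pairs keeps the dependence between u and x intact, which is essential for a ratio statistic; the bias and variance estimates then feed a normal approximation or the more refined confidence interval methods of Chapter 5.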

Layout of the Book

Chapter 2 describes the properties of resampling methods for use with single samples from parametric and nonparametric models, discusses practical matters such as the numbers of replicate datasets required, and outlines delta methods for variance approximation based on different forms of jackknife. It

1 • Introduction Table 13 Populations in thousands of n — 49 large US cities in 1920 (u) and in 1930 (x) (Cochran, 1977, p. 152).

u

X

u

X

u

X

138 93 61 179 48 37 29 23 30

143 104 69 260 75 63 50 48 111 50 52 53 79 57 317 93 58

76 381 387 78 60 507 50 77 64 40 136 243 256 94 36 45

80 464 459 106 57 634 64 89 77 60 139 291 288 85 46 53

67 120 172 66 46 121 44 64 56 40 116 87 43 43 161 36

67 115 183 86 65 113 58 63 142 64 130 105 61 50 232 54

2

38 46 71 25 298 74 50

Figure 1J Populations of 49 large United States cities (in 1000s) in 1920 and 1930.

c

o « 3 Q. O Q. O CO O)

1920 population

8

1 ■ Introduction

also contains a basic discussion of confidence intervals and of the ideas that underlie bootstrap methods.

Chapter 3 outlines how the basic ideas are extended to several samples, semiparametric and smooth models, simple cases where data have hierarchical structure or are sampled from a finite population, and to situations where data are incomplete because censored or missing. It goes on to discuss how the simulation output itself may be used to detect problems — so-called bootstrap diagnostics — and how it may be useful to bootstrap the bootstrap.

In Chapter 4 we review the basic principles of significance testing, and then describe Monte Carlo tests, including those using Markov chain simulation, and parametric bootstrap tests. This is followed by discussion of nonparametric permutation tests, and the more general methods of semi- and nonparametric bootstrap tests. A double bootstrap method is detailed for improved approximation of P-values.

Confidence intervals are the subject of Chapter 5. After outlining basic ideas, we describe how to construct simple confidence intervals based on simulations, and then go on to more complex methods, such as the studentized bootstrap, percentile methods, the double bootstrap and test inversion. The main methods are compared empirically in Section 5.7, then there are brief accounts of confidence regions for multivariate parameters, and of prediction intervals.

The three subsequent chapters deal with more complex problems. Chapter 6 describes how the basic resampling methods may be applied in linear regression problems, including tests for coefficients, prediction analysis, and variable selection. Chapter 7 deals with more complex regression situations: generalized linear models, other nonlinear models, semi- and nonparametric regression, survival analysis, and classification error. Chapter 8 details methods appropriate for time series, spatial data, and point processes.

Chapter 9 describes how variance reduction techniques such as balanced simulation, control variates, and importance sampling can be adapted to yield improved simulations, with the aim of reducing the amount of simulation needed for an answer of given accuracy. It also shows how saddlepoint methods can sometimes be used to avoid simulation entirely. Chapter 10 describes various semiparametric versions of the likelihood function, the ideas underlying which are closely related to resampling methods. It also briefly outlines a Bayesian version of the bootstrap.

Chapters 2-10 contain problems intended to reinforce the reader's understanding of both methods and theory, and in some cases problems develop topics that could not be included in the text. Some of these demand a knowledge of moments and cumulants, basic facts about which are sketched in the Appendix.

The book also contains practicals that apply resampling routines written in


the S language to sets of data. The practicals are intended to reinforce the ideas in each chapter, to supplement the more theoretical problems, and to give examples on which readers can base analyses of their own data.

It would be possible to give different sorts of course based on this book. One would be a "theoretical" course based on the problems and another an "applied" course based on the practicals; we prefer to blend the two.

Although a library of routines for use with the statistical package S-Plus is bundled with it, most of the book can be read without reference to particular software packages. Apart from the practicals, the exception to this is Chapter 11, which is a short introduction to the main resampling routines, arranged roughly in the order with which the corresponding ideas appear in earlier chapters. Readers intending to use the bundled routines will find it useful to work through the relevant sections of Chapter 11 before attempting the practicals.

Notation

Although we believe that our notation is largely standard, there are not enough letters in the English and Greek alphabets for us to be entirely consistent. Greek letters such as θ, β and ν generally denote parameters or other unknowns, while α is used for error rates in connection with significance tests and confidence sets. English letters X, Y, Z, and so forth are used for random variables, which take values x, y, z. Thus the estimator T has observed value t, which may be an estimate of the unknown parameter θ. The letter V is used for a variance estimate, and the letter p for a probability, except for regression models, where p is the number of covariates. Script letters are used to denote sets.

Probability, expectation, variance and covariance are denoted Pr(·), E(·), var(·) and cov(·, ·), while the joint cumulant of Y_1, Y_1Y_2 and Y_3 is denoted cum(Y_1, Y_1Y_2, Y_3). We use I{A} to denote the indicator random variable, which takes values one if the event A is true and zero otherwise. A related function is the Heaviside function

    H(u) = 0,  u < 0;    H(u) = 1,  u ≥ 0.

We use #{A} to denote the number of elements in the set A, and #{A_r} for the number of events A_r that occur in a sequence A_1, A_2, .... We use ≐ to mean "is approximately equal to", usually corresponding to asymptotic equivalence as sample sizes tend to infinity; ∼ to mean "is distributed as" or "is distributed according to"; ∼̇ to mean "is distributed approximately as"; iid∼ to mean "is a sample of independent identically distributed random variables from"; while ≡ has its usual meaning of "is equivalent to".


The data values in a sample of size n are typically denoted by y_1, ..., y_n, the observed values of the random variables Y_1, ..., Y_n; their average is ȳ = n⁻¹ Σ y_j.

We mostly reserve Z for random variables that are standard normal, at least approximately, and use Q for random variables with other (approximately) known distributions. As usual N(μ, σ²) represents the normal distribution with mean μ and variance σ², while z_α is often the α quantile of the standard normal distribution, whose cumulative distribution function is Φ(·).

The letter R is reserved for the number of replicate simulations. Simulated copies of a statistic T are denoted T*_r, r = 1, ..., R, whose ordered values are T*_(1) ≤ ··· ≤ T*_(R). Expectation, variance and probability calculated with respect to the simulation distribution are written Pr*(·), E*(·) and var*(·).

Where possible we avoid boldface type, and rely on the context to make it plain when we are dealing with vectors or matrices; a^T denotes the matrix transpose of a vector or matrix a.

We use PDF, CDF, and EDF as shorthand for "probability density function", "cumulative distribution function", and "empirical distribution function". The letters F and G are used for CDFs, and f and g are generally used for the corresponding PDFs. An exception to this is that f*_rj denotes the frequency with which y_j appears in the rth resample.

We use MLE as shorthand for "maximum likelihood estimate" or sometimes "maximum likelihood estimation".

The end of each example is marked ■, and the end of each algorithm is marked •.

2 The Basic Bootstraps

2.1 Introduction

In this chapter we discuss techniques which are applicable to a single, homogeneous sample of data, denoted by y_1, ..., y_n. The sample values are thought of as the outcomes of independent and identically distributed random variables Y_1, ..., Y_n whose probability density function (PDF) and cumulative distribution function (CDF) we shall denote by f and F, respectively. The sample is to be used to make inferences about a population characteristic, generically denoted by θ, using a statistic T whose value in the sample is t. We assume for the moment that the choice of T has been made and that it is an estimate for θ, which we take to be a scalar.

Our attention is focused on questions concerning the probability distribution of T. For example, what are its bias, its standard error, or its quantiles? What are likely values under a certain null hypothesis of interest? How do we calculate confidence limits for θ using T?

There are two situations to distinguish, the parametric and the nonparametric. When there is a particular mathematical model, with adjustable constants or parameters ψ that fully determine f, such a model is called parametric and statistical methods based on this model are parametric methods. In this case the parameter of interest θ is a component of or function of ψ. When no such mathematical model is used, the statistical analysis is nonparametric, and uses only the fact that the random variables Y_j are independent and identically distributed. Even if there is a plausible parametric model, a nonparametric analysis can still be useful to assess the robustness of conclusions drawn from a parametric analysis.

An important role is played in nonparametric analysis by the empirical distribution, which puts equal probabilities n^{-1} at each sample value y_j.
The corresponding estimate of F is the empirical distribution function (EDF) F̂, which is defined as the sample proportion

  F̂(y) = #{y_j ≤ y} / n,

where #{A} means the number of times the event A occurs. More formally,

  F̂(y) = n^{-1} ∑_{j=1}^n H(y − y_j),   (2.1)

where H(u) is the unit step function which jumps from 0 to 1 at u = 0. Notice that the values of the EDF are fixed (0, 1/n, 2/n, ..., 1), so the EDF is equivalent to its points of increase, the ordered values y_(1) ≤ ··· ≤ y_(n) of the data. An example of the EDF was shown in the left panel of Figure 1.2.

When there are repeat values in the sample, as would often occur with discrete data, the EDF assigns probabilities proportional to the sample frequencies at each distinct observed value y. The formal definition (2.1) still applies.

The EDF plays the role of fitted model when no mathematical form is assumed for F, analogous to a parametric CDF with parameters replaced by their estimates.
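To make definition (2.1) concrete, here is a minimal Python sketch of the EDF as the sample proportion #{y_j ≤ y}/n; the sample values are hypothetical, chosen only to show the behaviour at a tied value.

```python
import numpy as np

def edf(data):
    """Return the EDF F_hat of a sample: F_hat(y) = #{y_j <= y} / n."""
    data = np.sort(np.asarray(data, dtype=float))
    n = len(data)
    def F_hat(y):
        # searchsorted with side="right" counts how many y_j satisfy y_j <= y
        return np.searchsorted(data, y, side="right") / n
    return F_hat

y = [3.0, 1.0, 4.0, 1.0, 5.0]          # hypothetical sample with a tied value
F = edf(y)
print(F(0.0), F(1.0), F(3.5), F(5.0))  # 0.0 0.4 0.6 1.0
```

Evaluating F̂ at a data point includes that point, since H jumps at u = 0; hence the tied value gives F̂(1.0) = 2/5.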

2.1.1 Statistical functions

Many simple statistics can be thought of in terms of properties of the EDF. For example, the sample average ȳ = n^{-1} ∑ y_j is the mean of the EDF; see Example 2.1 below. More generally, the statistic of interest t will be a symmetric function of y_1, ..., y_n, meaning that t is unaffected by reordering the data. This implies that t depends only on the ordered values y_(1) ≤ ··· ≤ y_(n), or equivalently on the EDF F̂. Often this can be expressed simply as t = t(F̂), where t(·) is a statistical function, essentially just a mathematical expression of the algorithm for computing t from F̂.

Such a statistical function is of central importance in the nonparametric case because it also defines the parameter of interest θ through the "algorithm" θ = t(F). This corresponds to the qualitative idea that θ is a characteristic of the population described by F. Simple examples of such functions are the mean and variance of Y, which are respectively defined as

  t(F) = ∫ y dF(y),   t(F) = ∫ y² dF(y) − { ∫ y dF(y) }².   (2.2)

The same definition of θ applies in parametric problems, although then θ is more usually defined explicitly as one of the model parameters ψ.

The relationship between the estimate t and F̂ can usually be expressed as t = t(F̂), corresponding to the relation θ = t(F) between the characteristic of interest and the underlying distribution. The statistical function t(·) defines both the parameter and its estimate, but we shall use t(·) to represent the function, and t to represent the estimate of θ based on the observed data.

Example 2.1 (Average)  The sample average, ȳ, estimates the population mean

  μ = ∫ y dF(y).

To show that ȳ = t(F̂), we substitute F̂ for F in the defining function at (2.2) to obtain

  t(F̂) = ∫ y dF̂(y) = n^{-1} ∑_{j=1}^n y_j = ȳ,

because ∫ a(y) dH(y − x) = a(x) for any continuous function a(·). ■



Example 2.2 (City population data)  For the problem outlined in Example 1.2, the parameter of interest is the ratio of means θ = E(X)/E(U). In this case F is the bivariate CDF of Y = (U, X), and the bivariate EDF F̂ puts probability n^{-1} at each of the data pairs (u_j, x_j). The statistical function version of θ simply uses the definition of mean for both numerator and denominator, so that

  θ = t(F) = ∫ x dF(u, x) / ∫ u dF(u, x).

The corresponding estimate of θ is

  t = t(F̂) = ∫ x dF̂(u, x) / ∫ u dF̂(u, x) = x̄ / ū,

with x̄ = n^{-1} ∑ x_j and ū = n^{-1} ∑ u_j. ■

A quantity A_n is said to be O(n^d) if lim_{n→∞} n^{-d} A_n = a for some finite a, and o(n^d) if lim_{n→∞} n^{-d} A_n = 0.

It is quite straightforward to show that (2.1) implies convergence of F̂ to F as n → ∞ (Problem 2.1). Then if t(·) is continuous in an appropriate sense, the definition T = t(F̂) implies that T converges to θ as n → ∞, which is the property of consistency.

Not all estimates are exactly of the form t(F̂). For example, if t(F) = var(Y) then the usual unbiased sample variance is nt(F̂)/(n − 1). Also the sample median is not exactly F̂^{-1}(1/2). Such small discrepancies are fairly unimportant as far as applying the bootstrap techniques discussed in this book. In a very formal development we could write T = t_n(F̂) and require that t_n → t as n → ∞, possibly even that t_n − t = O(n^{-1}). But such formality would be excessive here, and we shall assume in general discussion that T = t(F̂). (One case that does

require special treatment is nonparametric density estimation, which we discuss in Example 5.13.)

The representation θ = t(F) defines the parameter and its estimator T in a robust way, without any assumption about F, other than that θ exists. This guarantees that T estimates the right thing, no matter what F is. Thus the sample average ȳ is the only statistic that is generally valid as an estimate of the population mean μ: only if Y is symmetrically distributed about μ will statistics such as trimmed averages also estimate μ. This property, which guarantees that the correct characteristic of the underlying distribution is estimated, whatever that distribution is, is sometimes called robustness of specification.

2.1.2 Objectives

Much of statistical theory is devoted to calculating approximate distributions for particular statistics T, on which to base inferences about their estimands θ. Suppose, for example, that we want to calculate a (1 − 2α) confidence interval for θ. It may be possible to show that T is approximately normal with mean θ + β and variance v; here β is the bias of T. If β and v are both known, then we can write

  Pr(T ≤ t | F) ≐ Φ{ (t − θ − β) / v^{1/2} },   (2.3)

where Φ(·) is the standard normal integral. If the α quantile of the standard normal distribution is z_α = Φ^{-1}(α), then an approximate (1 − 2α) confidence interval for θ has limits t − β − z_{1−α} v^{1/2} and t − β − z_α v^{1/2}. In practice the bias β = b(F) and variance v = v(F) are unknown functions of the underlying distribution F, and the bootstrap idea is to estimate them by substituting the fitted distribution F̂, giving

  b(F̂),   v(F̂).   (2.6)

Example 2.3 (Air-conditioning data)  For the data of Example 1.1, under the exponential model the bias and variance of T = Ȳ are b(F) = 0 and v(F) = μ²/n, and these are estimated by 0 and ȳ²/n. Since n = 12, ȳ = 108.083, and z_{0.025} = −1.96, a 95% confidence interval for μ based on the normal approximation (2.3) is ȳ ± 1.96 n^{-1/2} ȳ = (46.93, 169.24). ■

Estimates such as those in (2.6) are bootstrap estimates. Here they have been used in conjunction with a normal approximation, which sometimes will be adequate. However, the bootstrap approach of substituting estimates can be applied more ambitiously to improve upon the normal approximation and other first-order theoretical approximations. The elaboration of the bootstrap approach is the purpose of this book.
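The normal-approximation interval of Example 2.3 is easily checked numerically. The sketch below assumes the twelve air-conditioning failure times of Example 1.1 to be 3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487 hours, as in the standard version of that dataset.

```python
import numpy as np

# Failure times of Example 1.1 (assumed here); their average is 108.083
y = np.array([3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487], dtype=float)
n, ybar = len(y), y.mean()

# Under the fitted exponential model the bias is 0 and the variance is
# ybar^2 / n, so (2.3) gives the 95% interval ybar -/+ 1.96 ybar / sqrt(n).
half = 1.96 * ybar / np.sqrt(n)
print(round(ybar - half, 2), round(ybar + half, 2))   # 46.93 169.24
```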

2.2 Parametric Simulation

In the previous section we pointed out that theoretical properties of T might be hard to determine with sufficient accuracy. We now describe the sound practical alternative of repeated simulation of data sets from a fitted parametric model, and empirical calculation of relevant properties of T.

Suppose that we have a particular parametric model for the distribution of the data y_1, ..., y_n. We shall use F_ψ(y) and f_ψ(y) to denote the CDF and PDF respectively. When ψ is estimated by ψ̂, often but not invariably its maximum likelihood estimate, its substitution in the model gives the fitted model, with CDF F̂(y) = F_ψ̂(y), which can be used to calculate properties of T, sometimes exactly. We shall use Y* to denote the random variable distributed according to the fitted model F̂, and the superscript * will be used with E, var and so forth when these moments are calculated according to the fitted distribution. Occasionally it will also be useful to write ψ = ψ* to emphasize that this is the parameter value for the simulation model.

Example 2.4 (Air-conditioning data)  We have already calculated the mean and variance under the fitted exponential model for the estimator T = Ȳ of Example 1.1. Our sample estimate for the mean μ is t = ȳ, so here Y* is exponential with mean ȳ. In the notation just introduced, we have by theoretical calculation with this exponential distribution that

  E*(Ȳ*) = ȳ,   var*(Ȳ*) = ȳ²/n.

Note that the estimated bias of Ȳ is zero, being the difference between E*(Ȳ*) and the value μ̂ = ȳ for the mean of the fitted distribution. These moments were used to calculate an approximate normal confidence interval in Example 2.3. If, however, we wished to calculate the bias and variance of T = log Ȳ under the fitted model, that is E*(log Ȳ*) − log ȳ and var*(log Ȳ*), exact calculation is more difficult. The delta method of Section 2.7.1 would give approximate values −(2n)^{-1} and n^{-1}, but more accurate approximations can be obtained using simulated samples of Y*'s.

Similar results and comments would apply if instead we chose to use the more general gamma model (1.1) for this example. Then Y* would be a gamma random variable with mean ȳ and index κ̂. ■

2.2.1 Moment estimates

So now suppose that theoretical calculation with the fitted model is too complex. Approximations may not be available, or they may be untrustworthy, perhaps because the sample size is small. The alternative is to estimate the properties we require from simulated datasets. We write such a dataset as Y*_1, ..., Y*_n, where the Y*_j are independently sampled from the fitted distribution F̂. When the statistic of interest is calculated from a simulated dataset, we denote it by T*. From R repetitions of the data simulation we obtain T*_1, ..., T*_R. Properties of T − θ are then estimated from T*_1, ..., T*_R. For example, the estimator of the bias b(F) = E(T | F) − θ of T is

  B = b(F̂) = E(T | F̂) − t = E*(T*) − t,

and this in turn is estimated by

  B_R = R^{-1} ∑_{r=1}^R T*_r − t = T̄* − t.   (2.7)

Note that in the simulation t is the parameter value for the model, so that T* − t is the simulation analogue of T − θ. The corresponding estimator of the variance of T is

  V_R = (R − 1)^{-1} ∑_{r=1}^R (T*_r − T̄*)²,   (2.8)

with similar estimators for other moments. These empirical approximations are justified by the law of large numbers. For example, B_R converges to B, the exact value under the fitted model, as R

2.2 ■Parametric Simulation Figure 2.1 Empirical biases and variances of Y* for the air-conditioning data from four repetitions of parametric simulation. Each line shows how the estimated bias and variance for R ~ 10 initial simulations change when further simulations are successively added. Note how the variability decreases as the simulation size increases, and how the simulated values converge to the exact values under the fitted exponential model, given by the horizontal dotted lines.


increases. We usually drop the subscript R from B_R, V_R, and so forth unless we are explicitly discussing the effect of R. How to choose R will be illustrated in the examples that follow, and discussed in Section 2.5.2.

It is important to recognize that we are not estimating absolute properties of T, but rather of T relative to θ. Usually this involves the estimation error T − θ, but we should not ignore the possibility that T/θ (equivalently log T − log θ) or some other relevant measure of estimation error might be more appropriate, depending upon the context. Bootstrap simulation methods will apply to any such measure.

Example 2.5 (Air-conditioning data)  Consider Example 1.1 again. As we have seen, simulation is unnecessary in practice for this problem because the moments are easy to calculate theoretically, but the example is useful for illustration. Here the fitted model is an exponential distribution for the failure times, with mean estimated by the sample average ȳ = 108.083. All simulated failure times Y* are generated from this distribution.

Figure 2.1 shows the results from several simulations, four for each of eight values of R, in each of which the empirical biases and variances of T* = Ȳ* have been calculated according to (2.7) and (2.8). On both panels the "correct" values, namely zero and ȳ²/n = (108.083)²/12 = 973.5, are indicated by horizontal dotted lines. Evidently the larger is R, the closer is the simulation calculation to the right answer.

How large a value of R is needed? Figure 2.1 suggests that for some purposes R = 100 or 200 will be adequate, but that R = 10 will not be large enough. In this problem the accuracy of the empirical approximations is quite easy to determine from the fact that nȲ*/ȳ has a gamma distribution with index n. The simulation variances of B_R and V_R are

  t²/(nR),   (t⁴/n²) { 2/(R − 1) + 6/(nR) },

and we can use these to say how large R should be in order that the simulated values have a specified accuracy. For example, the coefficients of variation of V_R at R = 100 and 1000 are respectively 0.16 and 0.05. However, for a complicated problem where simulation was really necessary, such calculations could not be done, and general rules are needed to suggest how large R should be. These are discussed in Section 2.5.2. ■
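The calculations of Example 2.5 can be sketched in a few lines of Python. This is only an illustration of (2.7) and (2.8), not the authors' code; the failure times are again assumed to be those of Example 1.1, and a different random number stream will give slightly different values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Failure times of Example 1.1 (assumed here); their average is 108.083
y = np.array([3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487], dtype=float)
n, ybar = len(y), y.mean()

R = 1000
# R parametric resamples: each row is a sample of size n drawn from the
# fitted exponential model with mean ybar
t_star = rng.exponential(scale=ybar, size=(R, n)).mean(axis=1)

b_R = t_star.mean() - ybar   # bias estimate (2.7); exact value is 0
v_R = t_star.var(ddof=1)     # variance estimate (2.8); exact value ybar^2/n = 973.5
print(round(b_R, 2), round(v_R, 1))
```

With R = 1000 the bias estimate should lie within a few units of zero and the variance estimate within roughly ten per cent of 973.5, in line with the coefficients of variation given in Example 2.5.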

2.2.2 Distribution and quantile estimates

The simulation estimates of bias and variance will sometimes be of interest in their own right, but more usually would be used with normal approximations for T, particularly for large samples. For situations like those in Examples 1.1 and 1.2, however, the normal approximation is intrinsically inaccurate. This can be seen from a normal Q-Q plot of the simulated values t*_1, ..., t*_R, that is, a plot of the ordered values t*_(1) ≤ ··· ≤ t*_(R) against expected normal order statistics. It is the empirical distribution of these simulated values which can provide a more accurate distributional approximation, as we shall now see.

If as is often the case we are approximating the distribution of T − θ by that of T* − t, then cumulative probabilities are estimated simply by the empirical distribution function of the simulated values t*_r − t. More formally, if G(u) = Pr(T − θ ≤ u), then the simulation estimate of G(u) is

  Ĝ_R(u) = #{t*_r − t ≤ u} / R = R^{-1} ∑_{r=1}^R I{t*_r − t ≤ u},

where I{A} is the indicator of the event A, equal to 1 if A is true and 0 otherwise. As R increases, so this estimate will converge to Ĝ(u), the exact CDF of T* − t under sampling from the fitted model. Just as with the moment approximations discussed earlier, so the approximation Ĝ_R to G contains two sources of error, that between Ĝ and G due to data variability and that between Ĝ_R and Ĝ due to finite simulation.

We are often interested in quantiles of the distribution of T − θ, and these are approximated using ordered values of t* − t. The underlying result used here is that if X_1, ..., X_N are independently distributed with CDF K and if X_(j) denotes the jth ordered value, then

  E(X_(j)) ≈ K^{-1}( j / (N + 1) ).

This implies that a sensible estimate of K^{-1}(p) is X_((N+1)p), assuming that (N + 1)p is an integer. So we estimate the p quantile of T − θ by the (R + 1)p-th ordered value of t* − t, that is t*_((R+1)p) − t. We assume that R is chosen so that (R + 1)p is an integer.

The simulation approximation Ĝ_R and the corresponding quantiles are in principle better than results obtained by normal approximation, provided that R is large enough, because they avoid the supposition that the distribution of T* − t has a particular form.

Example 2.6 (Air-conditioning data)  The simulation experiments described in Example 2.5 can be used to study the simulation approximations to the distribution and quantiles of Ȳ − μ. First, Figure 2.2 shows normal Q-Q plots of t* values for R = 99 (top left panel) and R = 999 (top right panel). Clearly a normal approximation would not be accurate in the tails, and this is already fairly clear with R = 99. For reference, the lower half of Figure 2.2 shows corresponding Q-Q plots with exact gamma quantiles. The nonnormality of T* is also reasonably clear on histograms of t* values, shown in Figure 2.3, at least at the larger value R = 999. Corresponding density estimate plots provide smoother displays of the same information.

We look next at the estimated quantiles of Ȳ − μ. The p quantile is approximated by y̅*_((R+1)p) − ȳ for p = 0.05 and 0.95. The values of R are 19, 39, 99, 199, ..., 999, chosen to ensure that (R + 1)p is an integer throughout. Thus at R = 19 the 0.05 quantile is approximated by y̅*_(1) − ȳ and so forth. In order to display the magnitude of simulation error, we ran four independent simulations at R = 19, 39, 99, ..., 999. The results are plotted in Figure 2.4. Also shown by dotted lines are the exact quantiles under the model, which the simulations approach as R increases. There is large variability in the approximate quantiles for R less than 100, and it appears that 500 or more simulations are required to get accurate results.

The same simulations can be used in other ways.
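The quantile computation of Example 2.6 can itself be sketched in Python (again with the Example 1.1 failure times assumed); for p = 0.05 and R = 999, the estimates are the 50th and 950th ordered values of t* − t.

```python
import numpy as np

rng = np.random.default_rng(2)

y = np.array([3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487], dtype=float)
n, ybar = len(y), y.mean()

R, p = 999, 0.05                # (R + 1)p = 50 is an integer, as required
t_star = np.sort(rng.exponential(scale=ybar, size=(R, n)).mean(axis=1))

# The p and 1 - p quantiles of T - theta are estimated by the (R + 1)p-th
# and (R + 1)(1 - p)-th ordered values of t* - t.
lo = t_star[50 - 1] - ybar      # 50th ordered value (1-based indexing)
hi = t_star[950 - 1] - ybar     # 950th ordered value
print(round(lo, 1), round(hi, 1))
```

The estimates should be close to the exact quantiles of Ȳ* − ȳ under the fitted exponential model, which are roughly −46 and +56.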
F or example, we m ight w ant to know a b o u t log Y — log /i, in which case the em pirical properties o f logy* — lo g y are relevant. ■ T he illustration used here is very simple, but essentially the same m ethods can be used in arb itrarily com plicated param etric problems. F or example, distributions o f likelihood ratio statistics can be approxim ated when largesam ple approxim ations are inaccurate or fail entirely. In C hapters 4 and 5 respectively we show how param etric boo tstrap m ethods can be used to calculate significance tests an d confidence sets. It is som etim es useful to be able to look at the density o f T, for exam ple to see if it is m ultim odal, skewed, or otherw ise differs appreciably from norm ality. A rough idea o f the density g(u) o f U = T —6, say, can be had from a histogram o f the values o f t ' — t. A som ew hat b etter picture is offered by a kernel density

Figure 2.2 Normal (upper) and gamma (lower) Q-Q plots of t* values based on R = 99 (left) and R = 999 (right) simulations from the fitted exponential model for the air-conditioning data. The upper panels plot the ordered t* against quantiles of the standard normal, the lower panels against exact gamma quantiles.

estimate, defined by

  ĝ_h(u) = (Rh)^{-1} ∑_{r=1}^R w{ (u − (t*_r − t)) / h },

where w is a symmetric PDF with zero mean and h is a positive bandwidth that determines the smoothness of ĝ_h. The estimate ĝ_h is non-negative and has unit integral. It is insensitive to the choice of w(·), for which we use the standard normal density. The choice of h is more important. The key is to produce a smooth result, while not flattening out significant modes. If the choice of h is quite large, as it may be if R < 100, then one should rescale the density

Figure 2.3 Histograms of t* values based on R = 99 (left) and R = 999 (right) simulations from the fitted exponential model for the air-conditioning data.

Figure 2.4 Empirical quantiles (p = 0.05, 0.95) of T* − t under resampling from the fitted exponential model for the air-conditioning data. The horizontal dotted lines are the exact quantiles under the model.

estimate to make its mean and variance agree with the estimated mean b_R and variance v_R of T − θ; see Problem 3.8. As a general rule, good estimates of density require at least R = 1000: density estimation is usually harder than probability or quantile estimation.

Note that the same methods of estimating density, distribution function and quantiles can be applied to any transformation of T. We shall discuss this further in Section 2.5.
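A minimal sketch of the kernel density estimate ĝ_h defined above, using the standard normal density for w; the simulated values here are a synthetic stand-in for the t*_r − t, not actual bootstrap output.

```python
import numpy as np

def kernel_density(u_grid, u_star, h):
    """Gaussian-kernel estimate g_h(u) = (Rh)^{-1} sum_r w{(u - u_r*) / h}."""
    z = (u_grid[:, None] - u_star[None, :]) / h
    w = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)   # standard normal kernel w
    return w.mean(axis=1) / h

rng = np.random.default_rng(3)
u_star = rng.normal(size=500)          # stand-in for R = 500 values of t* - t
grid = np.linspace(-4.0, 4.0, 161)
g = kernel_density(grid, u_star, h=0.4)

# g_h is non-negative and its integral over a wide grid should be near one
integral = np.sum((g[1:] + g[:-1]) / 2 * np.diff(grid))
print(round(integral, 2))
```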


2.3 Nonparametric Simulation

Suppose that we have no parametric model, but that it is sensible to assume that Y_1, ..., Y_n are independent and identically distributed according to an unknown distribution function F. We use the EDF F̂ to estimate the unknown CDF F. We shall use F̂ just as we would a parametric model: theoretical calculation if possible, otherwise simulation of datasets and empirical calculation of required properties. In only very simple cases are exact theoretical calculations possible, but we shall see in Section 9.5 that good theoretical approximations can be obtained in many problems involving sample moments.

Example 2.7 (Average)  In the case of the average, exact moments under sampling from the EDF are easily found. For example,

  E*(Ȳ*) = E*(Y*) = n^{-1} ∑_{j=1}^n y_j = ȳ,

and similarly

  var*(Ȳ*) = n^{-1} var*(Y*) = n^{-1} E*{Y* − E*(Y*)}²
           = n^{-1} × n^{-1} ∑_{j=1}^n (y_j − ȳ)²
           = {(n − 1)/n} × {n(n − 1)}^{-1} ∑_{j=1}^n (y_j − ȳ)².
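The exact moments in Example 2.7 are easy to check numerically. The sketch below uses a small hypothetical sample, computes var*(Ȳ*) from the formula, and compares it with a brute-force Monte Carlo approximation obtained by resampling with replacement.

```python
import numpy as np

y = np.array([1.0, 4.0, 4.0, 10.0])     # hypothetical sample
n, ybar = len(y), y.mean()

# Exact moments of the resampled average under the EDF:
exact_mean = ybar
exact_var = np.sum((y - ybar) ** 2) / n ** 2

# Monte Carlo check: resample with replacement many times
rng = np.random.default_rng(4)
t_star = rng.choice(y, size=(200_000, n), replace=True).mean(axis=1)
print(round(exact_var, 4), round(t_star.mean(), 2), round(t_star.var(), 2))
```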



Apart from the factor (n − 1)/n, this is the usual result for the estimated variance of Ȳ. ■

Other simple statistics such as the sample variance and sample median are also easy to handle (Problems 2.3, 2.4).

To apply simulation with the EDF is very straightforward. Because the EDF puts equal probabilities on the original data values y_1, ..., y_n, each Y* is independently sampled at random from those data values. Therefore the simulated sample Y*_1, ..., Y*_n is a random sample taken with replacement from the data. This simplicity is special to the case of a homogeneous sample, but many extensions are straightforward. This resampling procedure is called the nonparametric bootstrap.

Example 2.8 (City population data)  Here we look at the ratio estimate for the problem described in Example 1.2. For convenience we consider a subset of the data in Table 1.3, comprising the first ten pairs. This is an application with no obvious parametric model, so nonparametric simulation makes good sense. Table 2.1 shows the data and the first simulated sample, which has been drawn by randomly selecting subscript j* from the set {1, ..., n} with equal probability and taking (u*_j, x*_j) = (u_{j*}, x_{j*}). In this sample j* = 1 never occurs

Table 2.1 The dataset of n = 10 pairs for ratio estimation, and one simulated sample. The values j* are chosen randomly with equal probability from {1, ..., 10} with replacement; the simulated pairs are (u*_j, x*_j) = (u_{j*}, x_{j*}).

  j    1    2    3    4    5    6    7    8    9   10
  u  138   93   61  179   48   37   29   23   30    2
  x  143  104   69  260   75   63   50   48  111   50

  j*   6    7    2    2    3    3   10    7    2    9
  u*  37   29   93   93   61   61    2   29   93   30
  x*  63   50  104  104   69   69   50   50  104  111

Table 2.2 Frequencies with which each original data pair appears in each of R = 9 nonparametric bootstrap samples for the data on US cities, together with the resulting ratios. The statistic for the original data is t = 1.520; the nine replicates are

  t*_1 = 1.466, t*_2 = 1.761, t*_3 = 1.951, t*_4 = 1.542, t*_5 = 1.371,
  t*_6 = 1.686, t*_7 = 1.378, t*_8 = 1.420, t*_9 = 1.660.

but j* = 2 occurs three times, so that the first data pair is never selected, the second is selected three times, and so forth. Table 2.2 shows the same simulated sample, plus eight more, expressed in terms of the frequencies of original data pairs. The ratio t* for each simulated sample is also recorded. After the R sets of calculations, the bias and variance estimates are calculated according to (2.7) and (2.8). The results are, for the R = 9 replicates shown,

  b = 1.582 − 1.520 = 0.062,   v = 0.03907.

A simple approximate distribution for T − θ is N(b, v). With the results so far, this is N(0.062, 0.0391), but this is unlikely to be accurate enough and a larger value of R should be used. In a simulation with R = 999 we obtained b = 1.5755 − 1.5203 = 0.0552 and v = 0.0601. The latter is appreciably bigger than the value 0.0325 given by the delta method variance estimate

  v_L = n^{-2} ∑_{j=1}^n (x_j − t u_j)² / ū².
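Example 2.8 can be reproduced in a few lines of Python. This is an illustrative sketch only: the ten data pairs are those of Table 2.1, but a different random number stream will give values of b and v slightly different from those quoted in the text, while t and v_L are reproduced exactly.

```python
import numpy as np

# The n = 10 city population pairs (u = 1920 count, x = 1930 count) of Table 2.1
u = np.array([138, 93, 61, 179, 48, 37, 29, 23, 30, 2], dtype=float)
x = np.array([143, 104, 69, 260, 75, 63, 50, 48, 111, 50], dtype=float)
n = len(u)
t = x.mean() / u.mean()                  # ratio estimate, t = 1.520

rng = np.random.default_rng(5)
R = 999
jstar = rng.integers(0, n, size=(R, n))  # resampled subscripts j*
t_star = x[jstar].mean(axis=1) / u[jstar].mean(axis=1)

b = t_star.mean() - t                    # bias estimate (2.7)
v = t_star.var(ddof=1)                   # variance estimate (2.8)

# delta method counterpart: v_L = n^{-2} sum_j (x_j - t u_j)^2 / ubar^2
v_L = np.sum((x - t * u) ** 2) / (n ** 2 * u.mean() ** 2)
print(round(t, 3), round(b, 3), round(v, 3), round(v_L, 4))
```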

Figure 2.5 City population data. Histograms of t* and z* under nonparametric resampling for the sample of size n = 10, R = 999 simulations. Note the skewness of both t* and z*.

This delta method estimate is based on an expansion that is explained in Section 2.7.2; see also Problem 2.9. The discrepancy between v and v_L is due partly to a few extreme values of t*, an issue we discuss in Section 2.3.2. The left panel of Figure 2.5 shows a histogram of t*, whose skewness is evident: use of a normal approximation here would be very inaccurate.

We can use the same simulations to estimate distributions of related statistics, such as transformed estimates or studentized estimates. The right panel of Figure 2.5 shows a histogram of studentized values z* = (t* − t)/v*_L^{1/2}, where v*_L is the delta method variance estimate based on a simulated sample. That is,

  v*_L = n^{-2} ∑_{j=1}^n (x*_j − t* u*_j)² / ū*².

The corresponding theoretical approximation for Z is the N(0, 1) distribution, which we would judge also inaccurate in view of the strong skewness in the histogram. We shall discuss the rationale for the use of z* in Section 2.4.

One natural question to ask here is what effect the small sample size has on the accuracy of normal approximations. This can be answered in part by plotting density estimates. The left panel of Figure 2.6 shows three estimated densities for T* − t with our sample of n = 10: a kernel density estimate based on our simulations, the N(b, v) approximation with moments computed from the same simulations, and the N(0, v_L) approximation. The right panel shows corresponding density approximations for the full data with n = 49; the empirical bias and variance of T are b = 0.00118 and v = 0.001290, and the


Figure 2.6 Density estimates for T* − t based on 999 nonparametric simulations for the city population data. The left panel is for the sample of size n = 10 in Table 2.1, and the right panel shows the corresponding estimates for the entire dataset of size n = 49. Each plot shows a kernel density estimate (solid), the N(b, v) approximation (dashes), with these moments computed from the same simulations, and the N(0, v_L) approximation (dots).

delta method variance approximation is v_L = 0.001166. At the larger sample size the normal approximations seem very accurate. ■
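The studentized values z* shown in Figure 2.5 can be computed by recomputing the delta method variance from each resample. A minimal sketch, again with the ten pairs of Table 2.1; in principle a resample could give v*_L = 0 (if all ten resampled pairs coincided), but this is vanishingly rare and is ignored here.

```python
import numpy as np

u = np.array([138, 93, 61, 179, 48, 37, 29, 23, 30, 2], dtype=float)
x = np.array([143, 104, 69, 260, 75, 63, 50, 48, 111, 50], dtype=float)
n = len(u)
t = x.mean() / u.mean()

rng = np.random.default_rng(6)
R = 999
jstar = rng.integers(0, n, size=(R, n))
ustar, xstar = u[jstar], x[jstar]

t_star = xstar.mean(axis=1) / ustar.mean(axis=1)
# v_L* recomputes the delta method variance from each simulated sample
resid = xstar - t_star[:, None] * ustar
v_L_star = (resid ** 2).sum(axis=1) / (n ** 2 * ustar.mean(axis=1) ** 2)

z_star = (t_star - t) / np.sqrt(v_L_star)
print(round(np.median(z_star), 2), round(z_star.min(), 1))
```

A histogram of z_star should show the strong left skewness visible in the right panel of Figure 2.5.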

2.3.1 Comparison with parametric methods

A natural question to ask is how well the nonparametric resampling methods might compare to parametric methods, when the latter are appropriate. Equally important is the question as to which parametric model would produce results like those for nonparametric resampling: this is another way of asking just what the nonparametric bootstrap does. Some insight into these questions can be gained by revisiting Example 1.1.

Example 2.9 (Air-conditioning data)  We now look at the results of applying nonparametric resampling to the air-conditioning data. One might naively expect to obtain results similar to those in Example 2.5, where exponential resampling was used, since we found in Example 1.1 that the data appear compatible with an exponential model.

Figure 2.7 is the nonparametric analogue of Figure 2.4, and shows quantiles of T* − t. It appears that R = 500 or so is needed to get reliable quantile estimates; R = 100 is enough for the corresponding plot for bias and variance. Under nonparametric resampling there is no reason why the quantiles should approach the theoretical quantiles under the exponential model, and it seems that they do not do so. This suggestion is confirmed by the Q-Q plots in Figure 2.8. The first panel compares the ordered values of t* from R = 999 nonparametric simulations with theoretical quantiles under the fitted exponential model, and the second panel compares the t* with theoretical quantiles


Figure 2.7 Empirical quantiles (p = 0.05, 0.95) of T* − t under nonparametric resampling from the air-conditioning data. The horizontal lines are the exact quantiles based on the fitted exponential model.


Figure 2.8 Q-Q plots of t* under nonparametric resampling from the air-conditioning data, first against theoretical quantiles under the fitted exponential model (left panel) and then against theoretical quantiles under the fitted gamma model (right panel).

under the best-fitting gamma model with index κ̂ = 0.71. The agreement in the second panel is strikingly good. On reflection this is natural, because the EDF is closer to the larger gamma model than to the exponential model. ■

2.3.2 Effects of discreteness

For intrinsically continuous data, a major difference between parametric and nonparametric resampling lies in the discreteness of the latter. Under nonparametric resampling, T* and related quantities will have discrete distributions, even though they may be approximating continuous distributions. This makes results somewhat "fuzzy" compared to their parametric counterparts.

Example 2.10 (Air-conditioning data)  For the nonparametric simulation discussed in the previous example, the right panels of Figure 2.9 show the scatter plots of sample standard deviation versus sample average for R = 99 and R = 999 simulated datasets. Corresponding plots for the exponential simulation are shown in the left panels. The qualitative feature to be read from any one of these plots is that data standard deviation is proportional to data average. The discreteness of the nonparametric model (the EDF) adds noise whose peculiar banded structure is evident at R = 999, although the qualitative structure is still apparent. ■

For a statistic that is symmetric in the data values, there are up to

_ f i n — 1\ _ (2n — 1)! \ n—1) n\(n — 1)!

possible values of t*, depending upon the smoothness of the statistical function t(·). Even for moderately small samples the support of the distribution of T* will often be fairly dense: values of m_n for n = 7 and 11 are 1716 and 352716 (Problem 2.5). It would therefore usually be harmless to think of there being a PDF for T*, and to approximate it, either using simulation results as in Figure 2.6 or theoretically (Section 9.5). There are exceptions, however, most notably when T is a sample quantile. The case of the sample median is discussed in Example 2.16; see also Problem 2.4 and Example 2.15.

For many practical applications of the simulation results, the effects of discreteness are likely to be fairly minimal. However, one possible problem is that outliers are more likely to occur in the simulation output. For example, in Example 2.8 there were three outliers in the simulation, and these inflated the estimate v* of the variance of T*. Such outliers should be evident on a normal Q-Q plot (or comparable relevant plot), and when found they should be omitted. More generally, a statistic that depends heavily on a few quantiles can be sensitive to the repeated values that occur under nonparametric sampling, and it can be useful to smooth the original data when dealing with such statistics; see Section 3.4.
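The count m_n is easy to check numerically; a quick sketch verifying the two values quoted above (math.comb does the work):

```python
from math import comb, factorial

def m(n):
    # Number of distinct resamples of size n from n distinct values,
    # ignoring order: m_n = binom(2n - 1, n - 1) = (2n - 1)!/{n! (n - 1)!}.
    return comb(2 * n - 1, n - 1)

m7, m11 = m(7), m(11)
# The factorial form agrees with the binomial coefficient:
same = m(11) == factorial(21) // (factorial(11) * factorial(10))
```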

2.4 Simple Confidence Intervals

The major application for distributions and quantiles of an estimator T is in the calculation of confidence limits. There are several ways of using bootstrap simulation results in this context, most of which will be explored in Chapter 5. Here we describe briefly two basic methods.


Figure 2.9 Scatter plots of sample standard deviation versus sample average for samples generated by parametric simulation from the fitted exponential model (left panels) and by nonparametric resampling (right panels). Top line is for R = 99 and bottom line is for R = 999.


The simplest approach is to use a normal approximation to the distribution of T. As outlined in Section 2.1.2, this means estimating the limits (2.4), which require only bootstrap estimates of bias and variance. As we have seen in previous sections, a normal approximation will not always suffice. Then if we use the bootstrap estimates of quantiles for T − θ as described in Section 2.2.2, an equitailed (1 − 2α) confidence interval will have limits

t − (t*_((R+1)(1−α)) − t),    t − (t*_((R+1)α) − t).    (2.10)
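In code, the limits (2.10) are just two order statistics of the simulated values; a sketch (basic_interval is our own name, and (R + 1)α is assumed to be an integer):

```python
def basic_interval(t, t_star, alpha):
    # Basic bootstrap (1 - 2*alpha) interval (2.10):
    #   ( 2t - t*_((R+1)(1-alpha)),  2t - t*_((R+1)alpha) ),
    # where t*_(k) is the k-th order statistic of the R simulated values.
    R = len(t_star)
    srt = sorted(t_star)
    k_lo = int(round((R + 1) * alpha))
    k_hi = int(round((R + 1) * (1 - alpha)))
    return 2 * t - srt[k_hi - 1], 2 * t - srt[k_lo - 1]

# Toy check: t = 500 and t* values 1, ..., 999, so the 95% limits use the
# 25th and 975th order statistics.
lo, hi = basic_interval(500, list(range(1, 1000)), 0.025)
```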

This is based on the probability implication that if a and b denote the α and (1 − α) quantiles of T − θ, so that Pr(a ≤ T − θ ≤ b) = 1 − 2α, then Pr(T − b ≤ θ ≤ T − a) = 1 − 2α; here a and b are estimated by t*_((R+1)α) − t and t*_((R+1)(1−α)) − t. Reliable estimation of these quantiles requires a fairly large simulation, say R = 1000 or more to be safe. But accuracy also depends upon the extent to which the distribution of T* − t agrees with that of T − θ. Complete agreement will occur if T − θ has a distribution not depending on any unknowns. This special property is enjoyed by quantities called pivots, which we discuss in more detail in Section 2.5.1. If, as is usually the case, the distribution of T − θ does depend on unknowns, then we can try alternative expressions contrasting T and θ, such as differences of transformed quantities, or studentized comparisons. For the latter, we define the studentized version of T − θ as

Z = (T − θ) / V^{1/2},    (2.11)

where V is an estimate of var(T | F): we give a fairly general form for V in Section 2.7.2. The idea is to mimic the Student-t statistic, which has this form, and which eliminates the unknown standard deviation when making inference about a normal mean. Throughout this book we shall use Z to denote a studentized statistic. Recall that the Student-t (1 − 2α) confidence interval for a normal mean μ has limits

ȳ − v^{1/2} t_{n−1}(1 − α),    ȳ − v^{1/2} t_{n−1}(α),

where v is the estimated variance of the mean and t_{n−1}(α), t_{n−1}(1 − α) are quantiles of the Student-t distribution with n − 1 degrees of freedom, the distribution of the pivot Z. More generally, when Z is defined by (2.11), the (1 − 2α) confidence interval limits for θ have the analogous form

t − v^{1/2} z_{1−α},    t − v^{1/2} z_α,

where z_p denotes the p quantile of Z. One simple approximation, which can often be justified for large sample size n, is to take Z as being N(0, 1). The result would be no different in practical terms from using a normal approximation for T − θ, and we know that this is often inadequate. It is more accurate to estimate the quantiles of Z from replicates of the studentized bootstrap statistic, Z* = (T* − t)/V*^{1/2}, where T* and V* are based on a simulated random sample Y_1*, ..., Y_n*. If the model is parametric, the Y_j* are generated from the fitted parametric distribution, and if the model is nonparametric, they are generated from the EDF F̂, as outlined in Section 2.3. In either case we use the (R + 1)α-th order statistic of the simulated values z_1*, ..., z_R*, namely z*_((R+1)α), to estimate z_α. Then the studentized bootstrap confidence interval for θ has limits

t − v^{1/2} z*_((R+1)(1−α)),    t − v^{1/2} z*_((R+1)α).    (2.12)
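The studentized limits (2.12) have the same shape, with order statistics of z* scaled by v^{1/2}; a sketch under the same integer-index assumption (studentized_interval is our own name):

```python
import math

def studentized_interval(t, v, z_star, alpha):
    # Studentized bootstrap (1 - 2*alpha) interval (2.12):
    #   ( t - v^(1/2) z*_((R+1)(1-alpha)),  t - v^(1/2) z*_((R+1)alpha) ).
    R = len(z_star)
    srt = sorted(z_star)
    z_lo = srt[int(round((R + 1) * alpha)) - 1]
    z_hi = srt[int(round((R + 1) * (1 - alpha))) - 1]
    return t - math.sqrt(v) * z_hi, t - math.sqrt(v) * z_lo

# Toy check: with t = 0, v = 1 and z* values 1, ..., 999, the 95% limits
# are minus the 975th and minus the 25th order statistics.
lo, hi = studentized_interval(0, 1, list(range(1, 1000)), 0.025)
```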


This studentized bootstrap method is most likely to be of use in nonparametric problems. One reason for this is that with parametric models we can sometimes find "exact" solutions (as with the exponential model for Example 1.1), and otherwise we have available methods based on the likelihood function. This does not necessarily rule out the use of parametric simulation, of course, for approximating the distribution of the quantity used as basis for the confidence interval.

Example 2.11 (Air-conditioning data) Under the exponential model for the data of Example 1.1, we have T = Ȳ, and since var(T | F_μ) = μ²/n, we would take V = Ȳ²/n. This gives

Z = (T − μ)/V^{1/2} = n^{1/2}(1 − μ/Ȳ),

which is an exact pivot because Q = Ȳ/μ has the gamma distribution with index n and unit mean. Simulation to construct confidence intervals is unnecessary because the quantiles of the gamma distribution are available from tables. Parametric simulation would be based on Q* = Ȳ*/t, where Ȳ* is the average of a random sample Y_1*, ..., Y_n* from the exponential distribution with mean t. Since Q* has the same distribution as Q, the only error incurred by simulation would be due to the randomness of the simulated quantiles. For example, the estimates of the 0.025 and 0.975 quantiles of Q based on R = 999 simulations are 0.504 and 1.608, compared to the exact values 0.517 and 1.640; these lead to estimated and exact 95% confidence intervals (67.2, 214.6) and (65.9, 209.2) respectively. We shall discuss these intervals more fully in Chapter 5. ■

Example 2.12 (City population data) For the sample of n = 10 pairs analysed in Example 2.8, our estimate of the ratio θ is t = x̄/ū = 1.52. The 0.025 and 0.975 quantiles of the 999 values of t* are 1.236 and 2.059, so the 95% basic bootstrap confidence interval (2.10) for θ is (0.981, 1.804).
To apply the studentized interval, we use the delta method approximation to the variance of T, which is (Problem 2.9)

v_L = n^{−2} Σ_{j=1}^{n} (x_j − t u_j)² / ū²,

and base confidence intervals for θ on (T − θ)/v_L^{1/2}, using simulated values of z* = (t* − t)/v_L*^{1/2}. The simulated values in the right panel of Figure 2.5 show that the density of the studentized bootstrap statistic Z* is not close to normal. The 0.025 and 0.975 quantiles of the 499 simulated z* values are −3.063 and 1.447, and since v_L = 0.0325, an approximate 95% equitailed confidence interval based on (2.12) is (1.260, 2.072). This is quite different from the interval above.

The usefulness of these confidence intervals will depend on how well F̂ estimates F and the extent to which the distributions of T − θ and of Z depend on F. We cannot judge the former, but we can check the latter using the methods outlined in Section 3.9.2; see Examples 3.20 and 9.11. ■
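The numbers in Example 2.12 can be reproduced directly; a sketch, assuming the n = 10 city-population pairs (u_j, x_j) of Example 2.8 and the z* quantiles quoted in the text:

```python
# City population pairs (u, x), assumed to be the data of Example 2.8.
u = [138, 93, 61, 179, 48, 37, 29, 23, 30, 2]
x = [143, 104, 69, 260, 75, 63, 50, 48, 111, 50]
n = len(u)

ubar = sum(u) / n
t = sum(x) / sum(u)                      # ratio estimate xbar/ubar = 1.52

# Delta-method variance approximation v_L (Problem 2.9).
vL = sum((xj - t * uj) ** 2 for uj, xj in zip(u, x)) / (n ** 2 * ubar ** 2)

# Studentized limits (2.12), using the reported 0.025 and 0.975 quantiles
# of the simulated z* values.
z_025, z_975 = -3.063, 1.447
lo = t - vL ** 0.5 * z_975
hi = t - vL ** 0.5 * z_025
```

Any tiny discrepancy from the quoted interval (1.260, 2.072) is rounding in the reported quantiles, not a different formula.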

2.5 Reducing Error

The error in resampling methods is generally a combination of statistical error and simulation error. The first of these is due to the difference between F̂ and F, and the magnitude of the resulting error will depend upon the choice of T. The simulation error is wholly due to use of empirical estimates of properties under sampling from F̂, rather than exact properties. Figure 2.7 illustrates these two sources of error in quantile estimation. The decreasing simulation error shows as reduced scatter of the quantile estimates for increased R. Statistical error due to an inappropriate model for T is reflected by the difference between the simulated nonparametric quantiles for large R and the dotted lines that indicate the quantiles under the exponential model. The further statistical error due to the difference between F̂ and F cannot be illustrated, because we do not know the true model underlying the data. However, other samples of the same size from that model would yield different estimates of the true quantiles, quite apart from the variability of the quantile estimates obtained from each specific dataset by simulation.

2.5.1 Statistical error

The basic bootstrap idea is to approximate a quantity c(F) — such as var(T | F) — by the estimate c(F̂), where F̂ is either a parametric or a nonparametric estimate of F based on the data. The statistical error is then the difference between c(F̂) and c(F), and as far as possible we wish to minimize this or remove it entirely. This is sometimes possible by careful choice of c(·). For example, in Example 1.1 with the exponential model, we have seen that working with T/θ removes statistical error completely.

For both confidence interval and significance test calculation, we usually have a choice as to what T is and how to use it. Significance testing raises special issues, because we then have to deal with a null hypothesis sampling distribution, so here it is best to focus on confidence interval calculation. For simplicity we also assume that the estimate T is decided upon. Then the quantity c(F) will be a quantile or a moment of some quantity Q = q(F̂, F) derived from T, such as h(T) − h(θ) or (T − θ)/V^{1/2} where V is an estimated variance, or something more complicated such as a likelihood ratio. The statistical problem is to choose among these possible quantities so that the resulting Q is as nearly pivotal as possible, that is it has (at least approximately) the same distribution under sampling from both F and F̂.


Provided that Q is a monotone function of θ, it will be straightforward to obtain confidence limits. For example, if Q = h(T) − h(θ) with h(t) increasing in t, and if a_α is an approximate lower α quantile of h(T) − h(θ), then

1 − α = Pr{h(T) − h(θ) ≥ a_α} = Pr[θ ≤ h^{−1}{h(T) − a_α}],    (2.13)

where h^{−1}(·) is the inverse transformation. So h^{−1}{h(T) − a_α} is an upper (1 − α) confidence limit for θ.

Parametric problems

In parametric problems F = F_ψ and F̂ = F_ψ̂ have the same form, differing only in parameter values. The notion of a pivot is quite simple here, meaning constant behaviour under all values of the model parameters. More formally, we define a pivot as a function Q = q(T, θ) whose distribution does not depend on the value of ψ: for all q, Pr{q(T, θ) ≤ q | ψ} is independent of ψ. (In general Q may also depend on other statistics, as when Q is the studentized form of T.) One can check, sometimes theoretically and always empirically, whether or not a particular quantity Q is exactly or nearly pivotal, by examining its behaviour under the model form with varying parameter values. For example, in the context of Example 1.1 we could simultaneously examine properties of T − θ, log T − log θ and the studentized version of the former, by simulation under several exponential models close to the fitted model. This might result in plots of variance or selected quantiles versus parameter values, from which we could diagnose the nonpivotal behaviour of T − θ and the pivotal behaviour of log T − log θ.

A special role for a transformation h(T) arises because sometimes it is relatively easy to choose h(·) so that the variance of h(T) is approximately or exactly independent of θ, and this stability of variance is the primary feature of stability of distribution. Suppose that T has variance v(θ). Then provided the function h(·) is well behaved at θ, Taylor series expansion as described in Section 2.7.1 leads to

var{h(T)} ≈ {ḣ(θ)}² v(θ),

where ḣ(θ) is the first derivative dh(θ)/dθ. This in turn implies that the variance is made approximately constant (equal to 1) if

h(t) = ∫^t du / {v(u)}^{1/2}.    (2.14)

This is known as the variance-stabilizing transformation. Any constant multiple of h(T) will be equally effective: often in one-sample problems where v(θ) = n^{−1}σ²(θ), equation (2.14) would be applied with σ(u) in place of {v(u)}^{1/2}, in which case h(·) is independent of n and var{h(T)} ≈ n^{−1}.

For a problem where v(θ) varies strongly with θ, use of this transformation

2.5 ■Reducing Error

Figure 2.10 Log-log plot of estimated variance of Ȳ against θ for the air-conditioning data with an exponential model. The plot suggests strongly that var(Ȳ | θ) ∝ θ².


in conjunction with (2.13) will typically give more accurate confidence limits than would be obtained using direct approximations of quantiles for T − θ. If such use of the transformation is appropriate, it will sometimes be clear from theoretical considerations, as in the exponential case. Otherwise the transformation would have to be identified from a scatter plot of simulation-estimated variance of T versus θ for a range of values of θ.

Example 2.13 (Air-conditioning data) Figure 2.10 shows a log-log plot of the empirical variances of t* = ȳ* based on R = 50 simulations for each of a range of values of θ. That is, for each value of θ we generate R values t*_r corresponding to samples y_1*, ..., y_n* from the exponential distribution with mean θ, and then plot log{(R − 1)^{−1} Σ_r (t*_r − t̄*)²} against log θ. The linearity and slope of the plot confirm that var(T | F) ∝ θ², where θ = E(T | F). ■

Nonparametric problems

In nonparametric problems the situation is more complicated. It is now unlikely (but not strictly impossible) that any quantity can be exactly pivotal. Also we cannot simulate data from a distribution with the same form as F, because that form is unknown. However, we can simulate data from distributions near to and similar to F̂, and this may be enough since F̂ is near F. A rough idea of what is possible can be had from Example 2.10. In the right-hand panels of Figure 2.9 we plotted sample standard deviation versus sample average for a series of nonparametrically resampled datasets. If the EDFs of those datasets are thought of as models near both F̂ and F, then although the pattern is obscured by the banding, the plots suggest that the true model has standard deviation proportional to its mean — which is indeed the case for the most likely true model. There are conceptual difficulties with this argument, but there is little question that the implication drawn is correct, namely that log Ȳ will have approximately the same variance under sampling from both F and F̂. A more thorough discussion of these ideas for nonparametric problems will be given in Section 3.9.2.

A major focus of research on resampling methods has been the reduction of statistical error. This is reflected particularly in the development of accurate confidence limit methods, which are described in Chapter 5. In general it is best to remove as much of the statistical error as possible in the choice of procedure. However, it is possible to reduce statistical error by a bootstrap technique described in Section 3.9.1.
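The diagnostic of Example 2.13 is easy to reproduce by simulation; a rough sketch (the seed, the grid of θ values and R = 400 are our choices), in which the fitted log-log slope should come out close to 2:

```python
import math
import random
import statistics

random.seed(7)
n, R = 12, 400
thetas = [40, 60, 90, 135, 200]

log_theta, log_var = [], []
for theta in thetas:
    # R simulated sample averages from an exponential model with mean theta.
    t_star = [statistics.fmean(random.expovariate(1 / theta) for _ in range(n))
              for _ in range(R)]
    log_theta.append(math.log(theta))
    log_var.append(math.log(statistics.variance(t_star)))

# Least-squares slope of log(variance) on log(theta); a slope of 2 is the
# signature of var(T) proportional to theta^2.
mx, my = statistics.fmean(log_theta), statistics.fmean(log_var)
slope = (sum((a - mx) * (b - my) for a, b in zip(log_theta, log_var))
         / sum((a - mx) ** 2 for a in log_theta))
```

A slope near 2 points to the log transformation as approximately variance-stabilizing, as in the discussion of (2.14).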

2.5.2 Simulation error

Simulation error arises when Monte Carlo simulations are performed and properties of statistics are approximated by their empirical properties in these simulations. For example, we approximate the estimate B = E*(T* | F̂) − t of the bias β = E(T) − θ by the average

B_R = R^{−1} Σ_{r=1}^{R} (T*_r − t) = T̄* − t,

using the independent replications T_1*, ..., T_R*, each based on a random sample from our data EDF F̂. The Monte Carlo variability in R^{−1} Σ T*_r can only be removed entirely by an infinite simulation, which seems both impossible and unnecessary in practice. The practical question is, how large does R need to be to achieve reasonable accuracy, relative to the statistical accuracy of the quantity (bias, variance, etc.) being approximated by simulation? While it is not possible to give a completely general and firm answer, we can get a fairly good sense of what is required by considering the bias, variance and quantile estimates in simple cases. This we now do. Suppose that we have a sample y_1, ..., y_n from the N(μ, σ²) distribution

Pr{|G_{F̂,n}(q) − G_{F,∞}(q)| > ε} → 0 as n → ∞, for each ε > 0.

2.6 Statistical Issues

The first condition ensures that there is a limit for G_{F̂,n} to converge to, and would be needed even in the happy situation where F̂ equalled F for every n ≥ n′, for some n′. Now as n increases, F̂ changes, so the second and third conditions are needed to ensure that G_{F̂,n} approaches G_{F,∞} along every possible sequence of F̂s. If any one of these conditions fails, the bootstrap can fail.

Example 2.15 (Sample maximum) Suppose that Y_1, ..., Y_n is a random sample from the uniform distribution on (0, θ). Then the maximum likelihood estimate of θ is the largest sample value, T = Y_(n), where Y_(1) ≤ ··· ≤ Y_(n) are the sample order statistics. Consider nonparametric resampling. The limiting distribution of Q = n(θ − T)/θ is standard exponential, and this suggests that we take our standardized quantity to be Q* = n(t − T*)/t, where t is the observed value of T, and T* is the maximum of a bootstrap sample of size n taken from y_1, ..., y_n. As n → ∞, however,

Pr(Q* = 0 | F̂) = Pr(T* = t | F̂) = 1 − (1 − n^{−1})^n → 1 − e^{−1},

and consequently the limiting distribution of Q* cannot be standard exponential. The problem here is that the second condition fails: the distributional convergence is not uniform on useful neighbourhoods of F. Any fixed order statistic Y_(k) suffers from the same difficulty, but a statistic like a sample quantile, where we would take k = pn for some fixed 0 < p < 1, does not. ■

Asymptotic accuracy

(Here and below we say X_n = O_p(n^d) when n^{−d}X_n is bounded in probability as n → ∞, and X_n = o_p(n^d) when Pr(n^{−d}|X_n| > ε) → 0 as n → ∞, for any ε > 0.)

Consistency is a weak property, for example guaranteeing only that the true probability coverage of a nominal (1 − 2α) confidence interval is 1 − 2α + o_p(1). Standard normal approximation methods are consistent in this sense. Once consistency is established, meaning that the resampling method is "valid", we need to know whether the method is "good" relative to other possible methods. This involves looking at the rate of convergence to nominal properties. For example, does the coverage of the confidence interval deviate from (1 − 2α) by O_p(n^{−1/2}) or by O_p(n^{−1})? Some insight into this can be obtained by expansion methods, as we now outline. More detailed calculations are made in Section 5.4.

Suppose that the problem is one where the limiting distribution of Q is standard normal, and where an Edgeworth expansion applies. Then the distribution of Q can be written in the form

Pr(Q ≤ q | F) = Φ(q) + n^{−1/2} a(q) φ(q) + O(n^{−1}),    (2.22)

where Φ(·) and φ(·) are the CDF and PDF of the standard normal distribution, and a(·) is an even quadratic polynomial. For a wide range of problems it can be shown that the corresponding approximation for the bootstrap version of Q is

Pr(Q* ≤ q | F̂) = Φ(q) + n^{−1/2} â(q) φ(q) + O_p(n^{−1}),    (2.23)


where â(·) is obtained by replacing unknowns in a(·) by estimates. Now typically â(q) = a(q) + O_p(n^{−1/2}), so

Pr(Q* ≤ q | F̂) − Pr(Q ≤ q | F) = O_p(n^{−1}).    (2.24)

Thus the estimated distribution for Q differs from the true distribution by a term that is O_p(n^{−1}), provided that Q is constructed in such a way that it is asymptotically pivotal. A similar argument will typically hold when Q has a different limiting distribution, provided it does not depend on unknowns.

Suppose that we choose not to standardize Q, so that its limiting distribution is normal with variance v. An Edgeworth expansion still applies, now with form

Pr(Q ≤ q | F) = Φ(q/v^{1/2}) + n^{−1/2} a′(q/v^{1/2}) φ(q/v^{1/2}) + O(n^{−1}),    (2.25)

where a′(·) is a quadratic polynomial that is different from a(·). The corresponding expansion for Q* is

Pr(Q* ≤ q | F̂) = Φ(q/v̂^{1/2}) + n^{−1/2} â′(q/v̂^{1/2}) φ(q/v̂^{1/2}) + O_p(n^{−1}).    (2.26)

Typically v̂ = v + O_p(n^{−1/2}), which would imply that

Pr(Q* ≤ q | F̂) − Pr(Q ≤ q | F) = O_p(n^{−1/2}),    (2.27)

because the leading terms on the right-hand sides of (2.25) and (2.26) are different. The difference between (2.24) and (2.27) explains our insistence on working with approximate pivots whenever possible: use of a pivot will mean that a bootstrap distribution function is an order of magnitude closer to its target. It also gives a cogent theoretical motivation for using the bootstrap to set confidence intervals, as we now outline.

We can obtain the α quantile of the distribution of Q by inverting (2.22), giving the Cornish-Fisher expansion

q_α = z_α + n^{−1/2} a″(z_α) + O(n^{−1}),

where z_α is the α quantile of the standard normal distribution, and a″(·) is a further polynomial. The corresponding bootstrap quantile has the property that q*_α − q_α = O_p(n^{−1}). For simplicity take Q = (T − θ)/V^{1/2}, where V estimates the variance of T. Then an exact one-sided confidence interval for θ based on Q would be I_α = [T − V^{1/2} q_α, ∞), and this contains the true θ with probability α. The corresponding bootstrap interval is I*_α = [T − V^{1/2} q*_α, ∞), where q*_α is the α quantile of the distribution of Q* — which would often be estimated by simulation, as we have seen. Since q*_α − q_α = O_p(n^{−1}), we have

Pr(θ ∈ I_α) = α,    Pr(θ ∈ I*_α) = α + O(n^{−1}),


so that the actual probability that I*_α contains θ differs from the nominal probability by only O(n^{−1}). In contrast, intervals based on inverting (2.25) will contain θ with probability α + O(n^{−1/2}). This interval is in principle no more accurate than using the interval [T − V^{1/2} z_α, ∞) obtained by assuming that the distribution of Q is standard normal. Thus one-sided confidence intervals based on quantiles of Q* have an asymptotic advantage over the use of a normal approximation. Similar comments apply to two-sided intervals. The practical usefulness of such results will depend on the numerical value of the difference (2.24) at the values of q of interest, and it will always be wise to try to decrease this statistical error, as outlined in Section 2.5.1.

The results above based on Edgeworth expansions apply to many common statistics: smooth functions of sample moments, such as means, variances, and higher moments, eigenvalues and eigenvectors of covariance matrices; smooth functions of solutions to smooth estimating equations, such as most maximum likelihood estimators, estimators in linear and generalized linear models, and some robust estimators; and to many statistics calculated from time series.

2.6.2 Rough statistics: unsmooth and unstable

What typically validates the bootstrap is the existence of an Edgeworth expansion for the statistic of interest, as would be the case when that statistic is a differentiable function of sample moments. Some statistics, such as sample quantiles, depend on the sample in an unsmooth or unstable way such that standard expansion theory does not apply. Often the nonparametric resampling method will still be valid, in the sense that it is consistent, but for finite samples it may not work very well. Part of the reason for this is that the set of possible values for T* may be very small, and very vulnerable to unusual data points. A case in point is that of sample quantiles, the most familiar of which — the sample median — is discussed in the next example. Example 2.15 gives a case where naive resampling fails completely.

Example 2.16 (Sample median) Suppose that the sample size is odd, n = 2m + 1, so that the sample median is y_(m+1). In large samples the median is approximately normally distributed about the population median, but standard nonparametric methods of variance estimation (jackknife and delta method) do not work here (Example 2.19, Problem 2.17). Nonparametric resampling does work to some extent, provided the sample size is quite large and the data are not too dirty. Crucially, bootstrap confidence limits work quite well.

Note first that the bootstrap statistic Y* is concentrated on the sample values y_(k), which makes the estimated distribution of the median very discrete and very vulnerable to unusual observations. Problem 2.4 shows that the exact


Table 2.4 Theoretical, empirical and mean bootstrap estimates of variance (×10^{−2}) of the sample median, based on 10000 datasets of sizes n = 11, 21. The effective degrees of freedom of the bootstrap variances uses a chi-squared approximation to their distribution.

                      Normal           t_3            Cauchy
  n                  11     21      11     21       11        21
  Theoretical      14.3    7.5    16.8    8.8     22.4      11.7
  Empirical        13.9    7.3    19.1    9.5     38.3      14.6
  Mean bootstrap   17.2    8.8    25.9   11.4    14000      22.8
  Effective df      4.3    5.4     3.2    4.9      0.002      0.5

distribution of Y* is

Pr(Y* = y_(k)) = Σ_{j=0}^{m} binom(n, j) p_{k−1}^j (1 − p_{k−1})^{n−j} − Σ_{j=0}^{m} binom(n, j) p_k^j (1 − p_k)^{n−j},    (2.28)

for k = 1, ..., n, where p_k = k/n; simulation is not needed in this case. The moments of this bootstrap distribution, including its mean and variance, converge to the correct values as n increases. However, the convergence can be very slow. To illustrate this, Table 2.4 compares the average bootstrap variance with the empirical variance of the median for data samples of sizes n = 11 and 21 from the standard normal distribution, the Student-t distribution with three degrees of freedom, and the Cauchy distribution; also shown are the theoretical variance approximations, which are incalculable when the true distribution F is unknown. We see that the bootstrap variance can be very poor for n = 11 when distributions are long-tailed. The value 1.4 × 10^4 for average bootstrap variance with Cauchy data is not a mistake: the bootstrap variance exceeds 100 for about 1% of datasets: for some samples the bootstrap variance is huge. The situation stabilizes when n reaches 40 or more.

The gross discreteness of Y* could also affect the simple confidence limit method described in Section 2.4. But provided the inequalities used to justify (2.10) are taken to be ≤ and ≥ rather than < and >, the method works well. For example, for Cauchy samples of size n = 11 the coverage of the 90% basic bootstrap confidence interval (2.10) is 90.8% in 1000 samples; see Problem 2.4. We suggest adopting the same practice for all problems where t* is supported on a small number of values. ■

The statistic T will certainly behave wildly under resampling when t(F) does not exist, as happens for the mean when F is a Cauchy distribution. Quite naturally over repeated samples the bootstrap will produce silly and useless results in such cases. There are two points to make here. First, if data are taken from a real population, then such mathematical difficulties cannot arise.
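Formula (2.28) can be evaluated directly, and the result behaves as a probability distribution should; a sketch for n = 11 (so m = 5), with median_pmf our own name:

```python
from math import comb

def median_pmf(n):
    # Exact resampling distribution (2.28) of the sample median for odd
    # n = 2m + 1: returns Pr(Y* = y_(k)) for k = 1, ..., n, with p_k = k/n.
    m = (n - 1) // 2
    def upper_tail(p):
        # Pr(at most m of the n draws fall at or below EDF level p),
        # i.e. the probability that the resample median exceeds that point.
        return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
                   for j in range(m + 1))
    return [upper_tail((k - 1) / n) - upper_tail(k / n) for k in range(1, n + 1)]

pmf = median_pmf(11)
```

The probabilities sum to one, are symmetric in k, and put visible mass on only a handful of order statistics, which is the discreteness discussed above.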
Secondly, the standard approaches to data analysis include careful screening of data for outliers, nonnormality, and so forth, which leads either to deletion of disruptive data elements or to sensible and reliable choices of estimators T. In short, the mathematical pathology of nonexistence is unlikely to be a practical problem.

2.6.3 Conditional properties

Resampling calculations are based on the observed data, and in that sense resampling methods are conditional on the data. This is especially so in the nonparametric case, where nothing but data is used. Because of this, the question is sometimes asked: "Are resampling methods therefore conditional in the inferential sense?" The short answer is: "No, at least not in any useful way — unless the relevant conditioning can be made explicit."

Conditional inference arises in parametric inference when the sufficient statistic includes an ancillary statistic A whose distribution is free of parameters. Then we argue that inferences about parameters (e.g. confidence intervals) should be based on sampling distributions conditional on the observed value of A; this brings inference more into line with Bayesian inference. Two examples are the configuration of residuals in location models, and the values of explanatory variables in regression models. The first cannot be accommodated in nonparametric bootstrap analysis because the effect depends upon the unknown F. The second can be accommodated (Chapter 6) because the effect does not depend upon the stochastic part of the model.

It is certainly true that the bootstrap distribution of T* will reflect ancillary features of the data, as in the case of the sample median (Example 2.16), but the reflection is pale to the point of uselessness. There are situations where it is possible explicitly to condition the resampling so as to provide conditional inference. Largely these situations are those where there is an experimental ancillary statistic, as in regression. One other situation is discussed in Example 5.17.

2.6.4 When might the bootstrap fail?

Incomplete data

So far we have assumed that F is the distribution of interest and that the sample y_1, ..., y_n drawn from F has nothing removed before we see it. This might be important in several ways, not least in guaranteeing statistical consistency of our estimator T. But in some applications the observation that we get may not always be y itself. For example, with survival data the ys might be censored, meaning that we may only learn that y was greater than some cut-off c because observation of the subject ceased before the event which determines y. Or, with multiple measurements on a series of patients it may be that for some patients certain measurements could not be made because the patient did not consent, or the doctor forgot.


Under certain circumstances the resampling methods we have described will work, but in general it would be unwise to assume this without careful thought. Alternative methods will be described in Section 3.6.

Dependent data

In general the nonparametric resampling method that we have described will not work for dependent data. This can be illustrated quite easily in the case where the data form one realization of a correlated time series. For example, consider the sample average ȳ and suppose that the data come from a stationary series {Y_j} whose marginal variance is σ² = var(Y_j) and whose autocorrelations are ρ_h = corr(Y_j, Y_{j+h}) for h = 1, 2, .... In Example 2.7 we showed that the nonparametric bootstrap estimate of the variance of Ȳ is approximately s²/n, and for large n this will approach n^{−1}σ².

A simple illustration is Example 2.20, where t is determined by the estimating function c(y, θ) = x − θu. For some purposes it is useful to go beyond the first derivative term in the expansion of t(F̂) and obtain the quadratic approximation

t(F̂) ≐ t(F) + ∫ L_t(y; F) dF̂(y) + ½ ∫∫ Q_t(y, z; F) dF̂(y) dF̂(z),    (2.41)

where the second derivative Q_t(y, z; F) is defined by

Q_t(y, z; F) = ∂²/∂ε₁∂ε₂ t{(1 − ε₁ − ε₂)F + ε₁H_y + ε₂H_z} |_{ε₁=ε₂=0}.

This derivative satisfies ∫ Q_t(x, y; F) dF(x) = ∫ Q_t(x, y; F) dF(y) = 0, but in general ∫ Q_t(x, x; F) dF(x) ≠ 0. The values q_jk = Q_t(y_j, y_k; F̂) are empirical second derivatives of t(·) analogous to the empirical influence values l_j. In principle (2.41) will be more accurate than (2.35).
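The influence values defined above can be approximated numerically by differentiating t along the contamination path. The fragment below is an added illustration, ours rather than the book's, using the biased sample variance, for which the exact values l_j = (y_j − ȳ)² − t are known (Problem 2.15):

```python
def t_weighted(y, p):
    # t(p): the statistic evaluated with probabilities p on the data values;
    # here t is the (biased) variance sum p_j (y_j - mu)^2 with mu = sum p_j y_j.
    mu = sum(pj * yj for pj, yj in zip(p, y))
    return sum(pj * (yj - mu) ** 2 for pj, yj in zip(p, y))

def influence(y, j, eps=1e-6):
    # Numerical derivative of t{(1 - eps) p_hat + eps 1_j} at eps = 0.
    n = len(y)
    p0 = [1.0 / n] * n
    p1 = [(1 - eps) * pk for pk in p0]
    p1[j] += eps
    return (t_weighted(y, p1) - t_weighted(y, p0)) / eps

y = [1.0, 4.0, 2.0, 7.0, 3.0]
n = len(y)
ybar = sum(y) / n
t = sum((v - ybar) ** 2 for v in y) / n

l_num = [influence(y, j) for j in range(n)]
l_exact = [(v - ybar) ** 2 - t for v in y]   # known form for the variance
print(max(abs(a - b) for a, b in zip(l_num, l_exact)))  # close to zero
```

The same finite-difference idea with two perturbations gives the empirical second derivatives q_jk.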

2.7.3 Jackknife estimates

Another approach to approximating the influence function, but only at the sample values y_1, ..., y_n themselves, is the jackknife. Here l_j is approximated by

l_jack,j = (n − 1)(t − t_{−j}),    (2.42)

where t_{−j} is the estimate calculated with y_j omitted from the data. In effect this corresponds to numerical approximation (2.37) using ε = −(n − 1)^{−1}; see Problem 2.18.

2.7 • Nonparametric Bias and Variance

51

The jackknife approximations to the bias and variance of T are

b_jack = −(1/n) Σ_j l_jack,j,    v_jack = {n(n − 1)}^{−1} Σ_j (l_jack,j − l̄_jack)²,    (2.43)

where l̄_jack = n^{−1} Σ_j l_jack,j. It is reasonably straightforward to apply (2.33) with F̂_{−j} and F̂ in place of G and F, respectively, to show that l_jack,j ≐ l_j; see Problem 2.15.

Example 2.21 (Average)  For the sample average t = ȳ the case-deletion values are t_{−j} = (nȳ − y_j)/(n − 1), and so l_jack,j = y_j − ȳ. This is the same as the empirical influence function because t is linear. The variance approximation in (2.43) reduces to {n(n − 1)}^{−1} Σ (y_j − ȳ)² because b_jack = 0; the denominator n − 1 in the formula for v_jack was chosen to ensure that this happens. ■

One application of (2.43) is to show that in large samples the jackknife bias approximation gives

b_jack ≐ E*(T*) − t ≐ ½ n^{−2} Σ_{j=1}^n q_jj;

see Problem 2.15. So far we have seen two ways to approximate the bias and variance of T using approximations to the influence function, namely the nonparametric delta method and the jackknife method. One can generalize the basic approximation by using alternative numerical derivatives in these two methods.
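As an added illustration, not part of the original text, the jackknife quantities (2.42) and (2.43) can be computed by brute force; for the sample average this reproduces Example 2.21, with b_jack = 0 and v_jack = {n(n − 1)}^{−1} Σ (y_j − ȳ)²:

```python
def jackknife(y, t):
    # Jackknife influence values (2.42) and the bias/variance estimates (2.43).
    n = len(y)
    tfull = t(y)
    l = [(n - 1) * (tfull - t(y[:j] + y[j + 1:])) for j in range(n)]
    lbar = sum(l) / n
    b = -lbar                                          # b_jack
    v = sum((lj - lbar) ** 2 for lj in l) / (n * (n - 1))  # v_jack
    return l, b, v

mean = lambda y: sum(y) / len(y)
y = [2.0, 5.0, 1.0, 8.0, 4.0]
l, b, v = jackknife(y, mean)

ybar = mean(y)
print(b)  # 0 for a linear statistic
print(v - sum((x - ybar) ** 2 for x in y) / (len(y) * (len(y) - 1)))  # ~0
```

Any statistic computable from a list of cases can be passed in place of `mean`; only n extra evaluations of t are needed.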

2.7.4 Empirical influence values via regression

The approximation (2.35) can also be applied to the bootstrap estimate T*. If the EDF of the bootstrap sample is denoted by F̂*, then the analogue of (2.35) is

t(F̂*) ≐ t(F̂) + n^{−1} Σ_{j=1}^n L_t(y_j*; F̂),

or in simpler notation

t*_L = t + n^{−1} Σ_{j=1}^n f_j* l_j,    (2.44)

say, where f_j* is the number of times that y_j* equals y_j, for j = 1, ..., n. The linear approximation (2.44) will be used several times in future chapters. Under the nonparametric bootstrap the joint distribution of the f_j* is multinomial (Problem 2.19). It is easy to see that var*(T*_L) = n^{−2} Σ l_j² = v_L, showing


Figure 2.12  Plots of the linear approximation t*_L against t* for the ratio applied to the city population data, with n = 10 (left panel) and n = 49 (right panel).

that the bootstrap estimate of variance should be similar to the nonparametric delta method approximation.

Example 2.22 (City population data)  The right panels of Figure 2.11 show how 999 resampled values of t* depend on n^{−1} f_j* for four values of j, for the data with n = 10. The lines with slope l_j summarize fairly well how t* depends on f_j*, but the correspondence is not ideal. A different way to see this is to plot t* against the corresponding t*_L. Figure 2.12 shows this for 499 replicates. The line shows where the values for an exactly linear statistic would fall. The linear approximation is poor for n = 10, but it is more accurate for the full dataset, where n = 49. In Section 3.10 we outline how such plots may be used to find a suitable scale on which to set confidence limits. ■

Expression (2.44) suggests a way to approximate the l_j s using the results of a bootstrap simulation. Suppose that we have simulated R samples from F̂ as described in Section 2.3. Define f_rj* to be the frequency with which the data value y_j occurs in the rth bootstrap sample. Then (2.44) implies that

t_r* ≐ t + n^{−1} Σ_{j=1}^n f_rj* l_j,    r = 1, ..., R.

This can be viewed as a linear regression equation for "responses" t_r* with "covariate values" n^{−1} f_rj* and "coefficients" l_j. We should, however, adjust for the facts that E*(T*) ≠ t in general, that Σ_j l_j = 0, and that Σ_j f_rj* = n. For the first of these we add a general intercept term, or equivalently replace t with T̄*.

For the second two we drop the term n^{−1} f_rn* l_n, resulting in the regression equation

t_r* = t̄* + Σ_{j=1}^{n−1} n^{−1} f_rj* l_j + error,    r = 1, ..., R.    (2.45)

So the vector l̂ = (l̂_1, ..., l̂_{n−1})ᵀ of approximate values of the l_j is obtained with the least-squares regression formula

l̂ = (F*ᵀF*)^{−1} F*ᵀ d*,    (2.46)

where F* is the R × (n − 1) matrix with (r, j) element n^{−1} f_rj*, and the rth element of the R × 1 vector d* is t_r* − t̄*. In fact (2.45) is related to an alternative, orthogonal expansion of T in which the "remainder" term is uncorrelated with the "linear" piece.

The several different versions of influence produce different estimates of var(T). In general v_L is an underestimate, whereas use of the jackknife values or the regression estimates of the ls will typically produce an overestimate. We illustrate this in Section 2.7.5.

Example 2.23 (City population data)  For the previous example of the ratio estimator, Table 2.5 gives regression estimates of empirical influence values, obtained from R = 1000 samples. The exact estimate v_L for var(T) is 0.036, compared to the value 0.043 obtained from the regression estimates. The bootstrap variance is 0.042. For n = 49 the corresponding values are 0.00119, 0.00125 and 0.00125. Our experience is that R must be in the hundreds to give a good regression approximation to the empirical influence values. ■
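The regression idea behind (2.46) can be sketched in a few lines of code. This is our illustration, not the book's: it uses the sample average, an exactly linear statistic, so the recovered values should equal l_j = y_j − ȳ; for simplicity it regresses on all n frequency columns plus an intercept and centres the minimum-norm solution, instead of dropping the nth column as in (2.46):

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.array([3.0, 1.0, 4.0, 1.5, 9.0, 2.6])
n, R = len(y), 500

idx = rng.integers(0, n, size=(R, n))     # R bootstrap resamples (indices)
f = np.stack([(idx == j).sum(axis=1) for j in range(n)], axis=1)  # frequencies
tstar = (f @ y) / n                       # resampled averages

# Least squares of t* on the scaled frequencies n^{-1} f*_{rj} plus an
# intercept; centring the minimum-norm coefficients imposes sum_j lhat_j = 0.
X = np.column_stack([np.ones(R), f / n])
beta = np.linalg.lstsq(X, tstar, rcond=None)[0]
lhat = beta[1:] - beta[1:].mean()

print(np.max(np.abs(lhat - (y - y.mean()))))  # ~0 for a linear statistic
```

For nonlinear statistics the recovered values are only approximations, and as noted above R should be at least in the hundreds.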

2.7.5 Variance estimates

In previous sections we have outlined the merits of studentized quantities

Z = (T − θ)/V^{1/2},    (2.47)

where V = v(F̂) is an estimate of var(T | F). One general way to obtain a value for V is to set

v = (M − 1)^{−1} Σ_{m=1}^M (t_m* − t̄*)²,

where t_1*, ..., t_M* are calculated by bootstrap sampling from F̂. Typically we would take M in the range 50–200. Note that resampling is needed to produce a standard error for the original value t of T.

Now suppose that we wish to estimate the quantiles of Z, using empirical quantiles of bootstrap simulations

z_r* = (t_r* − t)/v_r*^{1/2},    r = 1, ..., R.    (2.48)

Since M bootstrap samples from F̂ were needed to obtain v, M bootstrap samples from F̂_r* are needed to produce v_r*. Thus with R = 999 and M = 50, we would require R(M + 1) = 50 949 samples in all, which seems prohibitively large for many applications. This suggests that we should replace v^{1/2} with a standard error that involves no resampling, as follows.

When a linear approximation (2.44) applies, we have seen that var(T* | F̂) can be estimated by v_L = n^{−2} Σ l_j², where the l_j = L_t(y_j; F̂) are the empirical influence values for t based on the EDF F̂ of y_1, ..., y_n. The corresponding variance estimate for var(T* | F̂_r*) is v_Lr* = n^{−2} Σ_j L_t²(y_rj*; F̂_r*), based on the empirical influence values for t* at the EDF F̂_r* of y_r1*, ..., y_rn*. Although this requires no further simulation, the L_t(y_rj*; F̂_r*) must be calculated for each of the R samples. If an analytical expression is known for the empirical influence values, it will typically be straightforward to calculate the v_Lr*. If not, numerical differentiation can be used, though this is more time-consuming. If neither of these is feasible, we can use the further approximation

v_Lr* ≐ n^{−2} Σ_{j=1}^n f_rj* { l_j − n^{−1} Σ_{k=1}^n f_rk* l_k }²,    (2.49)

which is exact for a linear statistic. In effect this uses the usual formula, with l_j replaced by L_t(y_rj*; F̂) − n^{−1} Σ_k L_t(y_rk*; F̂) in the rth resample. However, the right-hand side of (2.49) can badly underestimate v_Lr* if the statistic is not close to linear. An improved approximation is outlined in Problem 2.20.

Example 2.24 (City population data)  Figure 2.13 compares the variance approximations for n = 10. The top left panel shows v*, with M = 50, plotted against the values

v_Lr* = n^{−2} Σ_j L_t²(y_rj*; F̂_r*)

for R = 200 bootstrap samples. The top right panel shows the values of the approximate variance on the right of (2.49), also plotted against v_L*. The lower panels show Q-Q plots of the corresponding z* values, with (t* − t)/v_L*^{1/2} on the horizontal axis. Plainly v_L* underestimates v*, though not so severely as to have a big effect on the studentized bootstrap statistic. But the right of (2.49) underestimates v_L* to an extent that greatly changes the distribution of the corresponding studentized bootstrap statistics.

2.8 ■Subsampling Methods

55

Figure 2.13  Variance approximations for the city population data, n = 10. The top panels compare the bootstrap variance v*, calculated with M = 50, and the right of (2.49) with v_L* for R = 200 samples. The bottom panels compare the corresponding studentized bootstrap statistics.


The right-hand panels of the corresponding plots for the full data show more nearly linear relationships, so it appears that (2.49) is a better approximation at sample size n = 49. In practice the sample size cannot be increased, and it is necessary to seek a transformation of t to attain approximate linearity. The transformation outlined in Example 3.25 greatly increases the accuracy of (2.49), even with n = 10. ■

2.8 Subsampling Methods

Before and after the development of nonparametric bootstrap methods, other methods based on subsamples were developed to deal with special problems.


We briefly review three such methods here. The first two are in principle superior to resampling for certain applications, although their competitive merits in practice are largely untested. The third method provides an alternative to the nonparametric delta method for variance approximation.

2.8.1 Jackknife methods

In Section 2.7.3 we mentioned briefly the jackknife method in connection with estimating the variance of T, using the values of t obtained when each case is deleted in turn. Generalized versions of the jackknife have also been proposed for estimating the distribution of T − θ, as alternatives to the bootstrap. For this to work, the jackknife must be generalized to multiple case deletion. For example, suppose that we delete d observations rather than one, there being N = (n choose d) ways of doing this; this is the same thing as taking all subsets of size n − d. The full set of group-deletion estimates is t_1†, ..., t_N†, say. The empirical distribution of t_i† − t will approximate the distribution of T − θ only if we renormalize to remove the discrepancy in sample sizes, n − d versus n. So if T − θ = O_p(n^{−a}), we take the empirical distribution of

z_i† = (n − d)^a (t_i† − t)    (2.50)

as the delete-d jackknife approximation to the distribution of Z = n^a(T − θ). In practice we would not use all N subsamples of size n − d, but rather R random subsamples, just as with ordinary resampling. In principle this method will apply much more generally than bootstrap resampling. But to work in practice it is necessary to know a and to choose d so that n − d → ∞ and d/n → 1 as n increases. Therefore the method will work only in rather special circumstances. Note that if n − d is small relative to n, then the method is not very different from a generalized bootstrap that takes samples of size n − d rather than n.

Example 2.25 (Sample maximum)  We referred earlier to the failure of the bootstrap when applied to the largest order statistic t = y_(n), which estimates the upper limit θ of a distribution on [0, θ]. The jackknife method applies here with a = 1, as n(θ − T) is approximately exponential with mean θ for uniformly distributed ys. However, empirical evidence suggests that the jackknife method requires a very large sample size in order to give good results. For example, if we take samples of n = 100 uniform variables, for values of d in the range 80–95 the distribution of (n − d)(t − T†) is close to exponential, but the mean is wrong by a factor that can vary from 0.6 to 2. ■
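Example 2.25 is simple to replicate; the simulation below is an added sketch (the values of n, d and the number of subsamples are arbitrary choices), using (2.50) with a = 1:

```python
import random

random.seed(11)
n, d = 100, 90                 # subsamples of size n - d = 10
y = sorted(random.random() for _ in range(n))   # uniform; upper limit theta = 1
t = y[-1]                      # sample maximum

R = 2000
zdagger = []
for _ in range(R):
    sub = random.sample(y, n - d)              # random subsample of size n - d
    zdagger.append((n - d) * (t - max(sub)))   # (2.50) with a = 1, sign flipped

mean_z = sum(zdagger) / R
print(mean_z)   # roughly exponential; mean near theta = 1, but possibly off
```

The histogram of `zdagger` looks exponential, but its mean can miss θ = 1 by a substantial factor, in line with the example.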


2.8.2 All-subsamples method

A different type of subsampling consists of taking all N = 2^n − 1 non-empty subsets of the data. This can be applied to a limited type of problem, including M-estimation where a mean μ is estimated by the solution t to the estimating equation Σ c(y_j − t) = 0. If the ordered estimates from subsets are denoted by t_(1)†, ..., t_(N)†, then remarkably μ is equally likely to be in any of the N + 1 intervals

(−∞, t_(1)†), (t_(1)†, t_(2)†), ..., (t_(N)†, ∞).

Hence confidence intervals for μ can be determined. In practice one would take a random selection of R such subsets, and attach equal probability (R + 1)^{−1} to the R + 1 intervals defined by the R t† values. It is unclear how efficient this method is, and to what extent it can be generalized to other estimation problems.
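A rough implementation of the randomized version is sketched below (our addition, with invented data); taking c(u) = u makes each subset estimate the subset average:

```python
import random

random.seed(5)
y = [3.2, 1.7, 4.5, 2.2, 5.1, 0.8, 3.9, 2.6, 4.4, 1.2]

# Draw R subsets uniformly from the 2^n - 1 non-empty subsets of the data.
R = 999
tdag = []
while len(tdag) < R:
    sub = [v for v in y if random.random() < 0.5]
    if sub:                                  # reject the empty subset
        tdag.append(sum(sub) / len(sub))     # solves sum c(y_j - t) = 0, c(u) = u
tdag.sort()

# The R + 1 intervals defined by the ordered subset estimates each carry
# probability (R + 1)^{-1}; the central 95% interval spans the middle 950.
lo, hi = tdag[24], tdag[974]
print(lo, hi)
```

Other choices of c (e.g. bounded, Huber-type functions) change only the root-finding step for each subset.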

2.8.3 Half-sampling methods

The jackknife method for estimating var(T) can be extended to deal with estimates based on many samples, but in one special circumstance there is another, simpler subsampling method. Originally this was proposed for sample-survey data consisting of stratified samples of size 2. To fix ideas, suppose that we have samples of size 2 from each of m strata, and that we estimate the population mean μ by the weighted average t = Σ_{i=1}^m w_i ȳ_i, where these weights reflect stratum sizes. The usual estimate for var(T) is v = Σ w_i² s_i², with s_i² the sample variance for the ith stratum. The half-sampling method is designed to reproduce this variance estimate using only subsample values of t, just as the jackknife does. Then the method can be applied to more complex problems.

In the present context there are N = 2^m half-samples formed by taking one element from each stratum sample. If t† denotes the estimator calculated on such a half-sample, then clearly t† − t equals ½ Σ_i w_i (y_{i1} − y_{i2}) c_i†, where c_i† = ±1 according to which of y_{i1} and y_{i2} is in the half-sample. Direct calculation shows that for a random half-sample E(T† − T)² = ½ v, so that an unbiased estimate of var(T) is obtained by doubling the average of (t† − t)² over all N half-samples; this doubled average equals the usual estimate given earlier. But it is unnecessary to use all N half-samples. If, say, we use R half-samples, then we require that


R^{−1} Σ_{r=1}^R (t_r† − t)² = ½ v.

From the earlier representation for t_r† − t we see that this implies that

¼ { Σ_{i=1}^m w_i² (y_{i1} − y_{i2})² + Σ_{i=1}^m Σ_{j≠i} w_i w_j (y_{i1} − y_{i2})(y_{j1} − y_{j2}) R^{−1} Σ_{r=1}^R c_{ri}† c_{rj}† }

equals

¼ Σ_{i=1}^m w_i² (y_{i1} − y_{i2})².

For this to hold for all data values we must have Σ_{r=1}^R c_{ri}† c_{rj}† = 0 for all i ≠ j. This is a standard problem arising in factorial design, and is solved by what are known as Plackett-Burman designs. If the rth half-sample coefficients c_{ri}† form the rth row of the R × m matrix C†, and if every observation occurs in exactly ½R half-samples, then C†ᵀC† = R I_{m×m}. In general the ith column of C† can be expressed as (c_{1i}, ..., c_{R−1,i}, −1)ᵀ with the first R − 1 elements obtained by i − 1 cyclic shifts of c_{11}, ..., c_{R−1,1}. For example, one solution for m = 7 with R = 8 is

C† = ( +1 +1 +1 −1 −1 +1 −1
       −1 +1 +1 +1 −1 −1 +1
       +1 −1 +1 +1 +1 −1 −1
       −1 +1 −1 +1 +1 +1 −1
       −1 −1 +1 −1 +1 +1 +1
       +1 −1 −1 +1 −1 +1 +1
       +1 +1 −1 −1 +1 −1 +1
       −1 −1 −1 −1 −1 −1 −1 )

This solution requires that R be the first multiple of 4 greater than or equal to m. The half-sample designs for m = 4, 5, 6, 7 are the first m columns of this C† matrix. In practice it would be common to double the half-sampling design by adding its complement −C†, which adds further balance.

It is fairly clear that the half-sampling method extends to stratum sample sizes k larger than 2. The basic idea can be seen clearly for linear statistics of the form

t = μ + Σ_{i=1}^m k^{−1} Σ_{j=1}^k a_{ij} = μ + Σ_{i=1}^m ā_i,

say. Suppose that in the rth subsample we take one observation from each stratum, as specified by the zero-one indicators c_{rij}†. Then

t_r† − t = Σ_{i=1}^m Σ_{j=1}^k c_{rij}† (a_{ij} − ā_i),

which is a linear regression model without error in which the a_{ij} − ā_i are coefficients and the c_{rij}† are covariate values to be determined. If the a_{ij} − ā_i


can be calculated, then the usual estimate of var(T) can be calculated. The choice of c†-values corresponds to selection of a fractional factorial design, with only main effects to be calculated, and this is solved by a Plackett-Burman design. Once the subsampling design is obtained, the estimate of var(T) is a formula in the subsample values t_r†. The same formula works for any statistic that is approximately linear. The same principles apply for unequal stratum sizes, although then the solution is more complicated and makes use of orthogonal arrays.
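The cyclic construction just described is easy to verify in code. The sketch below is our addition; the generator row is one standard Plackett-Burman choice for m = 7, R = 8, and the checks confirm the balance conditions Σ_r c_ri† c_rj† = 0 for i ≠ j and C†ᵀC† = R I:

```python
# Build the m = 7, R = 8 half-sampling design from a cyclic generator row
# and check its balance properties.
gen = [+1, +1, +1, -1, -1, +1, -1]   # rows 1..7 are cyclic shifts of this
m, R = 7, 8

C = [gen[-i:] + gen[:-i] for i in range(m)]   # seven cyclic right-shifts
C.append([-1] * m)                            # final row of -1s

# Orthogonality: sum_r c_ri c_rj = 0 for i != j, and = R on the diagonal.
for i in range(m):
    for j in range(m):
        s = sum(C[r][i] * C[r][j] for r in range(R))
        assert s == (R if i == j else 0)

# Every observation occurs in exactly R/2 of the half-samples.
assert all(sum(1 for r in range(R) if C[r][i] == +1) == R // 2
           for i in range(m))
print("Plackett-Burman balance conditions hold")
```

Designs for other m can be built the same way from a generator row whose cyclic autocorrelations are all −1.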

2.9 Bibliographic Notes

There are two key aspects to the methods described in this chapter. The first is that in order for statistical inference to proceed, an unknown distribution F must be replaced by an estimate. In a parametric model, the estimate is a parametric distribution F_ψ̂, whereas in a nonparametric situation the estimate is the empirical distribution function or some modification of it (Section 3.3). Although the use of the EDF to estimate F may seem novel at first sight, it is a natural development of replacing F by a parametric estimate. We have seen that in essence the EDF will produce results similar to those for the "nearest" parametric model.

The second aspect is the use of simulation to estimate quantities of interest. The widespread availability of fast cheap computers has made this a practical alternative to analytical calculation in many problems, because computer time is increasingly plentiful relative to the number of hours in a researcher's day. Theoretical approximations based on large samples can be time-consuming to obtain for each new problem, and there may be doubt about their reliability in small samples. Contrariwise, simulations are tailored to the problem at hand and a large enough simulation makes the numerical error negligible relative to the statistical error due to the inescapable uncertainty about F.

Monte Carlo methods of inference had already been used for many years when Efron (1979) made the connection to standard methods of parametric inference, drew the attention of statisticians to their potential for nonparametric inference, and originated the term "bootstrap". This work and subsequent developments such as his 1982 monograph made strong connections with the jackknife, which had been introduced by Quenouille (1949) and Tukey (1958), and with other subsampling methods (Hartigan, 1969, 1971, 1975; McCarthy, 1969).

Miller (1974) gives a good review of jackknife methods; see also Gray and Schucany (1972). Young and Daniels (1990) discuss the bias in the nonparametric bootstrap introduced by using the empirical distribution function in place of the true distribution. Hall (1988a, 1992a) strongly advocates the use of the studentized bootstrap


statistic for confidence intervals and significance tests, and makes the connection to Edgeworth expansions for smooth statistics. The empirical choice of scale for resampling calculations is discussed by Chapman and Hinkley (1986) and Tibshirani (1988). Hall (1986) analyses the effect of discreteness on confidence intervals. Efron (1987) discusses the numbers of simulations needed for bias and quantile estimation, while Diaconis and Holmes (1994) describe how simulation can be avoided completely by complete enumeration of bootstrap samples; see also the bibliographic notes for Chapter 9.

Bickel and Freedman (1981) were among the first to discuss the conditions under which the bootstrap is consistent. Their work was followed by Bretagnolle (1983) and others, and there is a growing theoretical literature on modifications to ensure that the bootstrap is consistent for different classes of awkward statistics. The main modifications are smoothing of the data (Section 3.4), which can improve matters for nonsmooth statistics such as quantiles (De Angelis and Young, 1992), subsampling (Politis and Romano, 1994b), and reweighting (Barbe and Bertail, 1995). Hall (1992a) is a key reference to Edgeworth expansion theory for the bootstrap, while Mammen (1992) describes simulations intended to help show when the bootstrap works, and gives theoretical results for various situations. Shao and Tu (1995) give an extensive theoretical overview of the bootstrap and jackknife.

Athreya (1987) has shown that the bootstrap can fail for long-tailed distributions. Some other examples of failure are discussed by Bickel, Götze and van Zwet (1996). The use of linear approximations and influence functions in the context of robust statistical inference is discussed by Hampel et al. (1986). Fernholtz (1983) describes the expansion theory that underlies the use of these approximation methods. An alternative and orthogonal expansion, similar to that used in Section 2.7.4, is discussed by Efron and Stein (1981) and Efron (1982). Tail-specific approximations are described by Hesterberg (1995a).

The use of multiple-deletion jackknife methods is discussed by Hinkley (1977), Shao and Wu (1989), Wu (1990), and Politis and Romano (1994b), the last with numerous theoretical examples. The method based on all non-empty subsamples is due to Hartigan (1969), and is nicely put into context in Chapter 9 of Efron (1982). Half-sample methods for survey sampling were developed by McCarthy (1969) and extended by Wu (1991). The relevant factorial designs for half-sampling were developed by Plackett and Burman (1946).

2.10 Problems

1  Let F̂ denote the EDF (2.1). Show that E{F̂(y)} = F(y) and that var{F̂(y)} = F(y){1 − F(y)}/n. Hence deduce that provided 0 < F(y) < 1, F̂(y) has a limiting


normal distribution for large n, and that Pr(|F̂(y) − F(y)| < ε) → 1 as n → ∞ for any positive ε. (In fact the much stronger property sup_{−∞<y<∞} |F̂(y) − F(y)| → 0 holds with probability one.)
(Section 2.1)

2

Suppose that Y_1, ..., Y_n are independent exponential with mean μ; their average is Ȳ = n^{−1} Σ Y_j.
(a) Show that Ȳ has the gamma density (1.1) with κ = n, so its mean and variance are μ and μ²/n.
(b) Show that log Ȳ is approximately normal with mean log μ and variance n^{−1}.
(c) Compare the normal approximations for Ȳ and for log Ȳ in calculating 95% confidence intervals for μ. Use the exact confidence interval based on (a) as the baseline for the comparison, which can be illustrated with the data of Example 1.1.
(Sections 2.1, 2.5.1)

3

Under nonparametric simulation from a random sample y_1, ..., y_n in which T = n^{−1} Σ (Y_j − Ȳ)² takes value t, show that

E*(T*) = (n − 1)t/n,    var*(T*) = (n − 1)² [m_4/n + (3 − n)t²/{n(n − 1)}] / n²,

where m_4 = n^{−1} Σ (y_j − ȳ)⁴.
(Section 2.3; Appendix A)

4

Let t be the median of a random sample of size n = 2m + 1 with ordered values y_(1) < ⋯ < y_(n); t = y_(m+1).
(a) Show that T* > y_(k) if and only if fewer than m + 1 of the Y_j* are less than or equal to y_(k).
(b) Hence show that

Pr*(T* ≤ y_(k)) = Σ_{j=m+1}^{n} (n choose j) (k/n)^j (1 − k/n)^{n−j}.

This specifies the exact resampling density (2.28) of the sample median. (The result can be used to prove that the bootstrap estimate of var(T) is consistent as n → ∞.)
(c) Use the resampling distribution to show that for n = 11, Pr*(T* ≤ y_(3)) = Pr*(T* ≥ y_(9)) = 0.051, and apply (2.10) to deduce that the basic bootstrap 90% confidence interval for the population median θ is (2y_(6) − y_(9), 2y_(6) − y_(3)).
(d) Examine the coverage of the confidence interval in (c) for samples from normal and Cauchy distributions.
(Sections 2.3, 2.4; Efron, 1979, 1982)
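The probabilities in part (c) follow directly from the binomial expression in (b); the short check below is our addition:

```python
from math import comb

def pr_median_le(n, k):
    # Pr*(T* <= y_(k)) for the resampled median of a sample of size
    # n = 2m + 1: at least m + 1 of the n draws must be <= y_(k).
    m = (n - 1) // 2
    p = k / n
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j)
               for j in range(m + 1, n + 1))

n = 11
print(round(pr_median_le(n, 3), 3))       # 0.051
print(round(1 - pr_median_le(n, 8), 3))   # Pr*(T* >= y_(9)), also 0.051
```

The two tail probabilities agree by the symmetry k ↔ n − k of the binomial calculation.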

5  Consider nonparametric simulation of Ȳ* based on distinct linearly independent observations y_1, ..., y_n.
(a) Show that there are m_n = (2n − 1 choose n − 1) ways that n − 1 red balls can be put in a line with n white balls. Explain the connection to the number of distinct values taken by Ȳ*.
(b) Suppose that the value ȳ* taken by Ȳ* is n^{−1} Σ f_j* y_j, where f_j* can be one of 0, 1, ..., n subject to Σ_j f_j* = n. Find Pr*(Ȳ* = ȳ), and deduce that the most likely value of Ȳ* is ȳ, with probability p_n = n!/n^n.
(c) Use Stirling's approximation, i.e. n! ∼ (2π)^{1/2} e^{−n} n^{n+1/2} as n → ∞, to find approximate formulae for m_n and p_n.
(d) For the correlation coefficient T calculated from distinct pairs (u_1, x_1), ..., (u_n, x_n),


show that T* is indeterminate with probability n^{1−n}. What is the probability that |T*| = 1? Discuss the implications of this when n < 10.
(Section 2.3; Hall, 1992a, Appendix I)

6  Suppose that Y_1, ..., Y_n are independently distributed with a two-parameter density f_{κ,μ}(y). What simulation experiment would you perform to check whether or not Q = q(Y_1, ..., Y_n; θ) is a pivot? If f is the gamma density (1.1), let μ̂ be the MLE of μ, let

ℓ_p(μ) = max_κ Σ_{j=1}^n log f_{κ,μ}(y_j)

be the profile log likelihood for μ, and let Q = 2{ℓ_p(μ̂) − ℓ_p(μ)}. In theory Q should be approximately a χ₁² variable for large n. Use simulation to examine whether or not Q is approximately pivotal for n = 10 when κ is in the range (0.5, 2).
(Section 2.5.1)

7

The bootstrap normal approximation for T − θ is N(b_R, v_R), so that the p quantile a_p of T − θ can be approximated by ã_p = b_R + z_p v_R^{1/2}. Show that the simulation variance of this estimate is

var(ã_p) ≐ (v/R) { 1 + z_p κ₃/v^{3/2} + ¼ z_p² (2 + κ₄/v²) },

where κ₃ and κ₄ are the third and fourth cumulants of T* under bootstrap resampling. If T is asymptotically normal, κ₃/v^{3/2} = O(n^{−1/2}) and κ₄/v² = O(n^{−1}). Compare this variance to that of the bootstrap quantile estimate t*_((R+1)p) − t in the special case T = Ȳ.
(Sections 2.2.1, 2.5.2; Appendix A)

8

Suppose that the estimator T has expectation equal to θ(1 + γ), so that the bias is θγ. The bias factor γ can be estimated by C = E*(T*)/T − 1. Show that in the case of the variance estimate T = n^{−1} Σ (Y_j − Ȳ)², C is exactly equal to γ. If C were approximated from R resamples, what would be the simulation variance of the approximation?
(Section 2.5)

9  Suppose that the random variables U = (U_1, ..., U_m)ᵀ have means ζ_1, ..., ζ_m and covariances cov(U_k, U_l) = n^{−1} ω_{kl}(ζ), and that T_1 = g_1(U), ..., T_q = g_q(U). Show that

E(T_i) ≐ g_i(ζ) + ½ n^{−1} Σ_{k,l} ω_{kl}(ζ) ∂²g_i(ζ)/(∂ζ_k ∂ζ_l),

cov(T_i, T_j) ≐ n^{−1} Σ_{k,l} ω_{kl}(ζ) {∂g_i(ζ)/∂ζ_k} {∂g_j(ζ)/∂ζ_l}.

How are these estimated in practice? Show that

v = n^{−2} ū^{−2} Σ_{j=1}^n (x_j − t u_j)²

is a variance estimate for t = x̄/ū, based on independent pairs (u_1, x_1), ..., (u_n, x_n).
(Section 2.7.1)

10

(a) Show that the influence function for a linear statistic t(F) = ∫ a(x) dF(x) is a(y) − t(F). Hence obtain the influence functions for a sample moment μ_r = ∫ x^r dF(x), for the variance μ₂(F) − {μ₁(F)}², and for the correlation coefficient (Example 2.18).
(b) Show that the influence function for {t(F) − θ}/v(F)^{1/2} evaluated at θ = t(F) is v(F)^{−1/2} L_t(y; F). Hence obtain the empirical influence values l̃_j for the studentized quantity {t(F̂) − t(F)}/v_L(F̂)^{1/2}, and show that they have the properties Σ l̃_j = 0 and n^{−2} Σ l̃_j² = 1.
(Section 2.7.2; Hinkley and Wei, 1984)

11

The pairs (U_1, X_1), ..., (U_n, X_n) are independent bivariate normal with correlation θ. Use the influence function of Example 2.18 to show that the sample correlation T has approximate variance n^{−1}(1 − θ²)². Then apply the delta method to show that ½ log{(1 + T)/(1 − T)}, called Fisher's z-transform, has approximate variance n^{−1}.
(Section 2.7.1; Appendix A)

12

Suppose that a parameter θ = t(F) is determined implicitly through the estimating equation

∫ u(y; θ) dF(y) = 0.

(a) Write the estimating equation as

∫ u{y; t(F)} dF(y) = 0,

replace F by (1 − ε)F + εH_y, and differentiate with respect to ε to show that the influence function for t(·) is

L_t(y; F) = u{y; t(F)} / [ −∫ u̇{x; t(F)} dF(x) ],    u̇(x; θ) = ∂u(x; θ)/∂θ.

Hence show that with θ̂ = t(F̂) the jth empirical influence value is

l_j = u(y_j; θ̂) / { −n^{−1} Σ_{k=1}^n u̇(y_k; θ̂) }.

(b) Let ψ̂ be the maximum likelihood estimator of the (possibly vector) parameter of a regular parametric model f_ψ(y) based on a random sample y_1, ..., y_n. Show that the jth empirical influence value for ψ̂ at y_j may be written as n Î^{−1} S_j, where

Î = −Σ_{j=1}^n ∂² log f_ψ̂(y_j) / (∂ψ ∂ψᵀ),    S_j = ∂ log f_ψ̂(y_j) / ∂ψ.

Hence show that the nonparametric delta method variance estimate for ψ̂ is the so-called sandwich estimator

v_L = Î^{−1} ( Σ_{j=1}^n S_j S_jᵀ ) Î^{−1}.

Compare this to the usual parametric approximation when y_1, ..., y_n is a random sample from the exponential distribution with mean ψ.
(Section 2.7.2; Royall, 1986)
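For the exponential comparison in (b), the score and observed information at ψ̂ = ȳ have simple closed forms, and the sandwich estimate reduces to n^{−2} Σ (y_j − ȳ)². The following worked sketch, with invented data, checks this:

```python
# Sandwich (nonparametric delta method) variance for the MLE of the mean
# of an exponential sample, versus the parametric value ybar^2 / n.
y = [0.43, 1.92, 0.31, 2.75, 0.87, 1.10, 0.22, 3.40, 0.65, 1.58]
n = len(y)
psi = sum(y) / n                          # MLE of the mean

# log f(y; psi) = -log(psi) - y/psi, so the per-observation score and
# the observed information are:
S = [(v - psi) / psi**2 for v in y]                          # scores at MLE
I = sum(2 * v / psi**3 - 1 / psi**2 for v in y)              # equals n / ybar^2

v_sandwich = (1 / I) * sum(s * s for s in S) * (1 / I)       # = sum (y-ybar)^2 / n^2
v_param = psi**2 / n                      # usual parametric approximation

print(v_sandwich, v_param)
```

The two variance estimates agree when the exponential model holds, but the sandwich form remains valid under model misspecification.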

13

The α trimmed average is defined by

t(F) = (1 − 2α)^{−1} ∫_α^{1−α} F^{−1}(p) dp,

computed at the EDF F̂. Express t(F̂) in terms of order statistics, assuming that nα is an integer. How would you extend this to deal with non-integer values of nα? Suppose that F is a distribution symmetric about its mean μ. By rewriting t(F) as

(1 − 2α)^{−1} ∫_{q_α(F)}^{q_{1−α}(F)} u dF(u),

where q_α(F) is the α quantile of F, use the result of Example 2.19 to show that the influence function of t(F) is

L_t(y; F) = {q_α(F) − μ}(1 − 2α)^{−1},    y < q_α(F),
          = (y − μ)(1 − 2α)^{−1},        q_α(F) ≤ y ≤ q_{1−α}(F),
          = {q_{1−α}(F) − μ}(1 − 2α)^{−1},    y > q_{1−α}(F).

14  Let the p × p covariance matrix Ω = var(Y) have eigenvalues λ₁ > ⋯ > λ_p and corresponding orthogonal eigenvectors e_j, where e_jᵀe_j = 1. Let F_ε = (1 − ε)F + εH_y. Show that the influence function for Ω is

L_Ω(y; F) = (y − μ)(y − μ)ᵀ − Ω,

and by considering the identities

Ω(F_ε) e_j(F_ε) = λ_j(F_ε) e_j(F_ε),    e_j(F_ε)ᵀ e_j(F_ε) = 1,

or otherwise, show that the influence function for λ_j is {e_jᵀ(y − μ)}² − λ_j.
(Section 2.7.2)

15

Consider the biased sample variance t = n^{−1} Σ (y_j − ȳ)².
(a) Show that the empirical influence values and second derivatives are

l_j = (y_j − ȳ)² − t,    q_jk = −2(y_j − ȳ)(y_k − ȳ).

(b) Show that the exact case-deletion values of t are

t_{−j} = n{ t − (y_j − ȳ)²/(n − 1) } / (n − 1).

Compare these with the result of the general approximation

t − t_{−j} ≐ (n − 1)^{−1} l_j − ½(n − 1)^{−2} q_jj,

which is obtained from (2.41) by substituting F̂_{−j} for F̂ and F̂ for F.
(c) Calculate jackknife estimates of the bias and variance of T. Are these sensible estimates?
(Section 2.7.3; Appendix A)

16  The empirical influence values l_j can also be defined in terms of distributions supported on the data values. Suppose that the support of F is restricted to y_1, ..., y_n, with probabilities p = (p_1, ..., p_n) on those values. For such distributions t(F) can be re-expressed as t(p).
(a) Show that

l_j = (∂/∂ε) t{(1 − ε)p̂ + ε1_j} |_{ε=0},

where p̂ = (1/n, ..., 1/n) and 1_j is the vector with 1 in the jth position and 0 elsewhere. Hence or otherwise show that

l_j = ṫ_j(p̂) − n^{−1} Σ_{k=1}^n ṫ_k(p̂),

where ṫ_j(p) = ∂t(p)/∂p_j.
(b) Apply this result to derive the empirical influence values l_j = (x_j − t u_j)/ū for the estimate t = Σ p_j x_j / Σ p_j u_j of the ratio of two means.
(c) The empirical second derivatives q_ij can be defined similarly. Show that

q_ij = ∂²/∂ε₁∂ε₂ t{(1 − ε₁ − ε₂)p̂ + ε₁1_i + ε₂1_j} |_{ε₁=ε₂=0}.

Hence deduce that

(a) ε → 0, (b) ε = −(n − 1)^{−1}, (c) ε = (n + 1)^{−1}, which respectively give the infinitesimal jackknife, the ordinary jackknife, and the positive jackknife.


Show that in (b) and (c) the squared distance (dF̂ − dF̂_ε)ᵀ(dF̂ − dF̂_ε) from F̂ to F̂_ε = (1 − ε)F̂ + εH_{y_j} is of order O(n^{−2}), but that if F̂* is generated by bootstrap sampling,

E*{ (dF̂* − dF̂)ᵀ(dF̂* − dF̂) } = O(n^{−1}).

Hence discuss the results you would expect from the butcher knife, which uses ε = n^{−1/2}. How would you calculate it?
(Section 2.7.3; Efron, 1982; Hesterberg, 1995a)

19

The cumulant generating function of a multinomial random variable with denominator n and probability vector (π_1,...,π_n) is

K(ξ) = n log{Σ_{j=1}^n π_j exp(ξ_j)},

where ξ = (ξ_1,...,ξ_n)^T.
(a) Show that with π_j = n^{-1}, the first four cumulants of the f_j* are

E*(f_j*) = 1,    cov*(f_i*, f_j*) = δ_ij − n^{-1},
cum*(f_i*, f_j*, f_k*) = n^{-2}{n²δ_ijk − n(δ_ij + δ_ik + δ_jk) + 2},

…

C_j > y_j when d_j = 1. This suggests the following algorithm.

Table 3.4 Remission times (weeks) for two groups of patients with acute myelogenous leukaemia (AML), one receiving maintenance chemotherapy (Group 1) and the other not (Miller, 1981, p. 49). + indicates right-censoring.


Figure 3.3 Product-limit survivor function estimates for two groups of patients with AML, one receiving maintenance chemotherapy (solid) and the other not (dots). The left panel shows estimates for the time to remission, and the right panel shows the estimates for the time to censoring. In the left panel, + indicates times of censored observations; in the right panel + indicates times of uncensored observations.


Algorithm 3.1 (Conditional bootstrap for censored data)
For r = 1,...,R:
1  generate Y_1^{0*},...,Y_n^{0*} independently from F̂⁰;
2  for j = 1,...,n, make simulated censoring variables by setting C_j* = y_j if d_j = 0, and if d_j = 1, generating C_j* from {Ĝ(y) − Ĝ(y_j)}/{1 − Ĝ(y_j)}, which is the estimated distribution of C_j conditional on C_j > y_j; then
3  set Y_j* = min(Y_j^{0*}, C_j*), for j = 1,...,n.
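A minimal sketch of Algorithm 3.1 in Python (the function names are ours; the product-limit estimates F̂⁰ and Ĝ are represented by their atoms, with any leftover mass placed on +∞ to stand in for the notional value beyond the largest observation):

```python
import numpy as np

rng = np.random.default_rng(1)

def km_atoms(times, events):
    """Product-limit estimate returned as atoms (t_k, mass_k); mass not
    assigned at the observed event times is piled on +inf, standing in
    for the notional value beyond the largest observation."""
    order = np.argsort(times)
    t, d = np.asarray(times, float)[order], np.asarray(events)[order]
    n = len(t)
    surv, at, ap = 1.0, [], []
    for j in range(n):
        if d[j] == 1:
            new = surv * (1.0 - 1.0 / (n - j))   # n - j at risk at t[j]
            at.append(t[j]); ap.append(surv - new)
            surv = new
    at.append(np.inf); ap.append(surv)
    return np.array(at), np.array(ap)

def draw(at, ap, size=None, lower=-np.inf):
    """Sample from the atoms, optionally conditioned on exceeding lower."""
    keep = at > lower
    tot = ap[keep].sum()
    if tot <= 0:                       # no mass left: notional +inf value
        return np.inf if size is None else np.full(size, np.inf)
    return rng.choice(at[keep], size=size, p=ap[keep] / tot)

def conditional_bootstrap(y, d, R=499):
    y, d = np.asarray(y, float), np.asarray(d)
    f_at, f_ap = km_atoms(y, d)        # F0-hat: failure-time distribution
    g_at, g_ap = km_atoms(y, 1 - d)    # G-hat: censoring distribution
    out = []
    for _ in range(R):
        y0 = draw(f_at, f_ap, size=len(y))                  # step 1
        c = np.where(d == 0, y,                             # step 2
                     [draw(g_at, g_ap, lower=yj) for yj in y])
        out.append((np.minimum(y0, c), (y0 <= c).astype(int)))  # step 3
    return out
```

This is only a sketch: resampled values can be +∞ when the notional tail atom is drawn, which a fuller implementation would handle as described in the text.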


If the largest observation is censored, it is given a notional failure time to the right of the observed value, and conversely if the largest observation is uncensored, it is given a notional censoring time to the right of the observed value. This ensures that the observation can appear in bootstrap resamples.

Both the above sampling plans can accommodate more complicated patterns of censoring, provided it is uninformative. For example, it might be decided at the start of a reliability experiment on independent and identical components that if they have not already failed, items will be censored at fixed times c_1,...,c_n. In this situation an appropriate resampling plan is to generate failure times Y_j^{0*} from F̂⁰, and then to take Y_j* = min(Y_j^{0*}, c_j), for j = 1,...,n. This amounts to having separate censoring distributions for each item, with the jth putting mass one at c_j. Or in a medical study the jth individual might be subject to random censoring up to a time c_j⁰, corresponding to a fixed calendar date for the end of the study. In this situation, Y_j = min(Y_j⁰, C_j, c_j⁰), with the indicator D_j equalling zero, one, or two according to whether C_j, Y_j⁰, or c_j⁰ was observed. Then an appropriate conditional sampling plan would generate Y_j^{0*} and C_j* as in the conditional plan above, but take Y_j* = min(Y_j^{0*}, C_j*, c_j⁰) and make D_j* accordingly.

Weird bootstrap

The sampling plans outlined above mimic how the data are thought to arise, by generating individual failure and censoring times. When interest is focused on the survival or hazard functions, a third and quite different approach uses direct simulation from the Nelson–Aalen estimate (3.12) of the cumulative hazard. The idea is to treat the numbers of failures at each observed failure time as independent binomial variables with denominators equal to the numbers of individuals at risk, and means equal to the numbers that actually failed. Thus when y_1 < ··· < y_n, we take the simulated number to fail at time y_j, N_j*, to be binomial with denominator n − j + 1 and probability of failure d_j/(n − j + 1). A simulated Nelson–Aalen estimate is then

Â⁰*(y) = Σ_{j=1}^n I(y_j ≤ y) N_j*/(n − j + 1),    (3.14)
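In code, the simulation behind (3.14) amounts to a few lines (a sketch; `weird_bootstrap_na` is our name and the illustrative data below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def weird_bootstrap_na(times, events, R=499):
    """Weird bootstrap: simulate Nelson-Aalen estimates by drawing the
    number of failures at the jth ordered time as binomial with
    denominator n - j + 1 (the number at risk) and success probability
    d_j / (n - j + 1)."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    d = np.asarray(events)[order]
    n = len(t)
    at_risk = n - np.arange(n)                               # n - j + 1
    nstar = rng.binomial(at_risk, d / at_risk, size=(R, n))  # N_j*
    return t, np.cumsum(nstar / at_risk, axis=1)             # A0*(y) at each y_j
```

The rows of the returned array give R simulated cumulative-hazard paths, whose spread estimates the uncertainty of the original Nelson–Aalen estimate; at censored times d_j = 0, so no hazard is ever added there.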

which can be used to estimate the uncertainty of the original estimate Â⁰(y). In this weird bootstrap the failures at different times are unrelated, the number at risk does not depend on previous failures, there are no individuals whose simulated failure times underlie Â⁰*(y), and no explicit assumption is made about the censoring mechanism. Indeed, under this scheme the censored individuals are held fixed, but the number of failures is a sum of binomial variables (Problem 3.10). The simulated survivor function corresponding to (3.14) is obtained by substituting N_j*/(n − j + 1)

into (3.13) in place of dÂ⁰(y_j).

Example 3.10 (AML data)  Figure 3.3 suggests that the censoring distributions for both groups of data in Table 3.4 are similar, but that the survival distributions themselves are not. To compare the resampling schemes described above, we consider estimates of two parameters, the probability of remission beyond 20 weeks and the median survival time, both for Group 1. These estimates are 1 − F̂⁰(20) = 0.71 and inf{t : F̂⁰(t) ≥ ½} = 31.

Table 3.5 compares results from 499 simulations using the ordinary, conditional, and weird bootstraps. For the survival probabilities, the ordinary and conditional bootstraps give similar results, and both standard errors are similar to that from Greenwood's formula; the weird bootstrap probabilities are significantly higher and are less variable. The schemes give infinite estimates

Table 3.5 Results for 499 replicates of censored data bootstraps of Group 1 of the AML data: average (standard deviation) for estimated probability of remission beyond 20 weeks, average (standard deviation) for estimated median survival time, and the number of resamples in which case 3 occurs 0, 1, 2 and 3 or more times.

                                            Frequency of case 3
              Probability    Median         0      1      2     >=3
Cases         0.72 (0.14)    32.5 (8.5)   180    182     95     42
Conditional   0.72 (0.14)    32.8 (8.5)    75    351     71      3
Weird         0.73 (0.12)    33.3 (7.2)     0    499      0      0

Figure 3.4 Comparison of distributions of differences in median survival times for censored data bootstraps applied to the AML data. The dotted line is the line x = y.

of the median 21, 19, and 2 times respectively. The weird bootstrap results for the median are less variable than the others. The last columns of the table show the numbers of samples in which the smallest censored observation appears 0, 1, 2, and 3 or more times. Under the conditional scheme the observation appears more often than under the ordinary bootstrap, and under the weird bootstrap it occurs once in each resample.

Figure 3.4 compares the distributions of the difference of median survival times between the two groups, under the three schemes. Results for the conditional and ordinary bootstraps are similar, but the weird bootstrap again gives results that are less variable than the others. This set of data gives an extreme test of methods for censored data, because quantiles of the product-limit estimate are very discrete. The weird bootstrap also gave results less variable than the other schemes for a larger set of data. In general it seems that case resampling and conditional resampling give quite similar and reliable results, both differing from the weird bootstrap. ■


3.6 Missing Data

The expression "missing data" relates to datasets of a standard form for which some entries are missing or incomplete. This happens in a variety of different ways. For example, censored data as described in Section 3.5 are incomplete when the censoring value c is reported instead of y⁰. Or in a factorial experiment a few factor combinations may not have been used. In such cases estimates and inferences would take a simple form if the dataset were "complete". But because part of the standard form is missing, we have two problems: how to estimate the quantities of interest, and how to make inferences about them. We have already discussed ways of dealing with censored data. Now we examine situations where each response has several components, some of which are missing for some cases. Suppose, then, that the fictional or potential complete data are y⁰s and that corresponding observed data are ys, with some components taking the value NA to represent "not available".

Parametric problems

For parametric problems the situation is relatively straightforward, at least in principle. First, in defining estimators there is a general framework within which complete-data MLE methods can be applied using the iterative EM algorithm, which essentially works by estimating missing values. Formulae exist for computing approximate standard errors of estimators, but simulation will often be required to obtain accurate answers. One extra component that must be specified is the mechanism which takes complete data y⁰ into observed data y, i.e. f(y | y⁰). The methodology is simplest when data are missing at random. The corresponding Bayesian methodology is also relatively straightforward in principle, and numerous general algorithms exist for using complete-data forms of posterior distribution. Such algorithms, although they involve simulation, are somewhat removed from the general context of bootstrap methods and will not be discussed here.

Nonparametric problems

Nonparametric analysis is somewhat more complicated, in part because of the difficulty of defining appropriate estimators. The following artificial example illustrates some of the key ideas.

Example 3.11 (Mean with missing data)  Suppose that responses y_j⁰ had been obtained from n randomly chosen individuals, but that m randomly selected values were then lost. So the observed data are

y_1,...,y_n = y_1⁰,...,y⁰_{n−m}, NA,...,NA.

The EM or expectation maximization algorithm is widely used in incomplete data problems.


To estimate the population mean μ we should of course use the average response ȳ = (n − m)^{-1} Σ y_j, whose variance we would estimate by

v = (n − m)^{-2} Σ_{j=1}^{n−m} (y_j − ȳ)².

But think of this as a prototype missing data problem, to which resampling methods are to be applied. Consider the following two approaches:
1  First estimate μ by t = ȳ, the average of the non-missing data. Then (a) simulate samples y_1*,...,y_n* by sampling with replacement from the n observations y_1,...,y_{n−m}, NA,...,NA; then (b) calculate t* as the average of non-missing values.
2  First estimate the missing values y⁰_{n−m+1},...,y⁰_n by ŷ_j⁰ = ȳ for j = n − m + 1,...,n and estimate μ as the mean of y_1⁰,...,y⁰_{n−m}, ŷ⁰_{n−m+1},...,ŷ⁰_n. Then (a) sample with replacement from y_1⁰,...,y⁰_{n−m}, ŷ⁰_{n−m+1},...,ŷ⁰_n to get y_1^{*0},...,y_n^{*0}; (b) duplicate the data-loss procedure by replacing a randomly chosen m of the y_j^{*0} with NA; finally (c) duplicate the data estimation of μ to get t*.

In the first approach, we choose the form of t to take account of the missing data. Then in the resampling we get a random number of missing values, M* say, whose mean is m. The effect of this is to make the variance of T* somewhat larger than the variance of T: specifically

Assuming that we discard all resamples with M* = n (all data missing), the bootstrap variance will overestimate var(T) by a factor which ranges from 15% for n = 10, m = 5 to 4% for n = 30, m = 15.

In the second approach, the first step was to fix the data so that the complete-data estimation formula μ̂ = n^{-1} Σ_{j=1}^n y_j⁰ for t could be used. Then we attempted to simulate data according to the two steps in the original data-generation process. Unfortunately the EDF of y_1⁰,...,y⁰_{n−m}, ŷ⁰_{n−m+1},...,ŷ⁰_n is an underdispersed estimate of the true CDF F. Even though the estimate t is not affected in this particularly simple problem, the bootstrap distribution certainly is. This is illustrated by the bootstrap variance

Both approaches can be repaired. In the first, we can stratify the sampling with complete and incomplete data as strata. In the second approach, we can add variability to the estimates of missing values. This device, called multiple


imputation, replaces the single estimate ŷ_j⁰ = ȳ by the set ȳ + e_1,...,ȳ + e_{n−m}, where e_k = y_k − ȳ for k = 1,...,n − m. Where the estimate ŷ_j⁰ was previously given weight 1, the n − m imputed values for the jth case are now given equal weights (n − m)^{-1}. The implication is that F̂ is modified to equal n^{-1} on each complete-data value, and n^{-1} × (n − m)^{-1} on the m(n − m) values ȳ + e_k. In this simple case ȳ + e_k = y_k, so F̂ reduces to the EDF of the non-missing data y_1,...,y_{n−m}, as a consequence of which t(F̂) = ȳ and the bootstrap distribution of T* is correct. ■

This example suggests two lessons. First, if the complete-data estimator can be modified to work for incomplete data, then resampling cases will work reasonably well provided the proportion of missing data is small: stratified resampling would reduce variation in the amount of missingness. Secondly, the complete-data estimator and full simulation of data observation (including the data-loss step) cannot be based on single imputation estimation of missing values, but may work if we use multiple imputation appropriately.

One further point concerns the data-loss mechanism, which in the example we assumed to be completely random. If data loss is dependent upon the response value y, then resampling cases should still be valid: this is somewhat similar to the censored-data problem. But the other approach via multiple imputation will become complicated because of the difficulty of defining appropriate multiple imputations.

Example 3.12 (Bivariate missing data)  A more realistic example concerns the estimation of bivariate correlation when some cases are incomplete. Suppose that Y is bivariate with components U and X. The parameter of interest is θ = corr(U, X).
A random sample of n cases is taken, such that m cases have x missing, but no cases have both u and x missing or just u missing. If it is safe to assume that X has a linear regression on U, then we can use fitted regression to make single imputations of missing values. That is, we estimate each missing x_j by x̂_j = x̄ + b(u_j − ū), where x̄, ū and b are the averages and the slope of linear regression of x on u from the n − m complete pairs. It is easy to see that it would be wrong to substitute these single imputations in the usual formula for sample correlation. The result would be biased away from zero if b ≠ 0. Only if we can modify the sample correlation formula to remove this effect will it be sensible to use simple resampling of cases. The other strategy is to begin with multiple imputation to obtain a suitable bivariate F̂, next estimate θ with the usual sample correlation t(F̂), and then resample appropriately. Multiple imputation uses the regression residuals from


Figure 3.5 Scatter plot of bivariate sample and multiple imputation values. Left panel shows observed pairs (o) and cases where only u is observed (•). Right panel shows observed pairs (o) and multiple imputation values (+). Dotted line is imputation regression line obtained from observed pairs.


complete pairs, e_j = x_j − x̂_j = x_j − {x̄ + b(u_j − ū)}, for j = 1,...,n − m. Then each missing x_j is x̂_j plus a randomly selected e_k. Our estimate F̂ is the bivariate distribution which puts weight n^{-1} on each complete pair, and weight n^{-1} × (n − m)^{-1} on each of the n − m multiple imputations for each incomplete case. There are two strong, implicit assumptions being made here. First, as throughout our discussion, it is assumed that values are missing at random. Secondly, homogeneity of conditional variances is being assumed, so that pooling of residuals makes sense.

As an illustration, the left panel of Figure 3.5 shows a scatter plot for a sample of n = 20 where m = 5 cases have x components missing. Complete cases appear as open circles, and incomplete cases as filled circles, for which only the u components are observed. In the right panel, the dotted line is the imputation line which gives x̂_j for j = 16,...,20, and the multiple imputation values are plotted with symbol +. The multiple imputation EDF will put probability 1/20 on each open circle, and probability 1/300 on each +.

The results in Table 3.6 illustrate the effectiveness of the multiple imputation EDF. The table shows simulation averages and standard deviations for estimates of correlation θ and σ_x² = var(X) using the standard complete-data forms of the estimators, when half of the x values are missing in a sample of size n = 20 from the bivariate normal distribution. In this problem there would be little gain from using incomplete cases, but in more complex situations there might be so few complete cases that multiple imputation would be highly effective or even essential.

Table 3.6 Average (standard deviation) of estimators for variance σ_x² and correlation θ from bivariate normal data (u, x) with sample size n = 20 and m = 10 x values missing at random. True values σ_x² = 1 and θ = 0.7. Results from 1000 simulated datasets.

                         Observed data estimates
       Full data     Complete case only   Single imputation   Multiple imputation
σ_x²   1.00 (0.33)   1.01 (0.49)          0.79 (0.44)         0.96 (0.46)
θ      0.69 (0.13)   0.68 (0.20)          0.79 (0.18)         0.70 (0.19)

Having set up an appropriate multiple imputation EDF F̂, resampling proceeds in an obvious way, first creating a full set of n pairs by random sampling from F̂, and then selecting m cases randomly without replacement for which the x values are "lost". The first stage is equivalent to random sampling with replacement from n − m copies of the complete data plus all m × (n − m) possible multiple imputation values. ■
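A sketch of this two-stage resampling for Example 3.12 (function names are ours; for brevity t* is computed from the surviving complete pairs rather than by re-imputing within each resample, so this is an illustration of the pool construction rather than the full scheme):

```python
import numpy as np

rng = np.random.default_rng(3)

def mi_pool(u, x, nc):
    """Cases equivalent to the multiple-imputation EDF: each complete
    pair repeated nc = n - m times, plus all nc imputations (fitted
    value + each complete-pair residual) for every incomplete case, so
    uniform sampling gives weight 1/n to each complete pair and
    1/{n(n-m)} to each imputation."""
    uc, xc = u[:nc], x[:nc]
    b = np.cov(uc, xc, bias=True)[0, 1] / np.var(uc)   # regression slope
    resid = xc - xc.mean() - b * (uc - uc.mean())
    pool_u = [np.repeat(uc, nc)]
    pool_x = [np.repeat(xc, nc)]
    for uj in u[nc:]:                                  # cases with x missing
        pool_u.append(np.full(nc, uj))
        pool_x.append(xc.mean() + b * (uj - uc.mean()) + resid)
    return np.concatenate(pool_u), np.concatenate(pool_x)

def mi_bootstrap_corr(u, x, m, R=999):
    n = len(u)
    pu, px = mi_pool(np.asarray(u, float), np.asarray(x, float), n - m)
    tstar = np.empty(R)
    for r in range(R):
        idx = rng.integers(0, len(pu), size=n)         # n pairs from the MI EDF
        lost = rng.choice(n, size=m, replace=False)    # data-loss step
        keep = np.setdiff1d(np.arange(n), lost)
        # simplification: correlation of the surviving pairs only
        tstar[r] = np.corrcoef(pu[idx][keep], px[idx][keep])[0, 1]
    return tstar
```

The pool has (n − m)² + m(n − m) entries, matching the weights given in the text.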

3.7 Finite Population Sampling

Basics

The simplest form of finite population sampling is when a sample is taken randomly without replacement from a population 𝒴 with values 𝓎_1,...,𝓎_N, with N > n known. The statistic t(y_1,...,y_n) is used to estimate the corresponding population quantity θ = t(𝓎_1,...,𝓎_N). The data are one of the (N choose n) possible samples Y_1,...,Y_n from the population, and the without-replacement sampling means that the Y_j are exchangeable but not independent; the sampling fraction is defined to be f = n/N. … The quantity c = (n − 1)^{-1} Σ (y_j − ȳ)² is an unbiased estimate of γ, and the usual standard error for ȳ under without-replacement sampling is obtained from the second line of (3.15) by replacing γ with c. Normal approximation to the distribution of Ȳ then gives approximate (1 − 2α) confidence limits ȳ + (1 − f)^{1/2} c^{1/2} n^{-1/2} z_α for θ, where z_α is the α


quantile of the standard normal distribution. Such confidence intervals are a factor (1 − f)^{1/2} shorter than for sampling with replacement.

The lack of independence affects possible resampling plans, as is seen by applying the ordinary bootstrap to Ȳ. Suppose that Y_1*,...,Y_n* is a random sample taken with replacement from y_1,...,y_n. Their average Ȳ* has variance var*(Ȳ*) = n^{-2} Σ (y_j − ȳ)², and this has expected value n^{-2}(n − 1)γ over possible samples y_1,...,y_n. This only matches the second line of (3.15) if f = n^{-1}. Thus for the larger values of f generally met in practice, ordinary bootstrap standard errors for ȳ are too large and the confidence intervals for θ are systematically too wide. ■

Modified sample size

The key difficulty with the ordinary bootstrap is that it involves with-replacement samples of size n and so does not capture the effect of the sampling fraction, which is to shrink the variance of an estimator. One way to deal with this is to take resamples of size n′, resampling with or without replacement. The value of n′ is chosen so that the estimator variance is matched, at least approximately. For with-replacement resamples the average Ȳ′ of Y_1*,...,Y_{n′}* has variance var*(Ȳ′) = (n − 1)c/(n′n), which is only an unbiased estimate of (1 − f)γ/n when n′ = (n − 1)/(1 − f); this usually exceeds n. For without-replacement resampling, a similar argument implies that we should take n′ = fn. One obvious difficulty with this is that if f …

… (3.16)

For our data t_rat = 156.8 and v_rat = 10.85². The regression estimate is based on the straight-line regression x = β₀ + β₁u, fit to the data (u_1, x_1),...,(u_n, x_n), using least squares estimates β̂₀ and β̂₁. The regression estimate of θ and its estimated variance are

t_reg = β̂₀ + β̂₁ū_N,    v_reg = (1 − f){n(n − 2)}^{-1} Σ_{j=1}^n (x_j − β̂₀ − β̂₁u_j)²,    (3.17)

where ū_N is the population average of u;

for our data t_reg = 138.3 and v_reg = 8.32². Table 3.7 contains 95% confidence intervals for θ based on normal approximations to t_rat and t_reg, and on the studentized bootstrap applied to (3.16) and (3.17). Normal approximations to the distributions of t_rat and t_reg are poor, and intervals based on them are considerably shorter than the other intervals. The population and superpopulation bootstraps give rather similar intervals. The sampling fraction is f = 10/49, so the estimate of the distribution of T using modified sample size and without-replacement resampling uses


Table 3.7 City population data: 95% confidence limits for the mean population per city in 1930 based on the ratio and regression estimates, using normal approximation and various resampling methods with R = 999.

                           Ratio               Regression
Scheme                     Lower    Upper      Lower    Upper
Normal                     137.8    174.7      123.7    152.0
Modified size, n′ = 2       58.9    298.6        —        —
Modified size, n′ = 11     111.9    196.2        …      258.2
Mirror-match, m = 2        115.6    196.0      112.8    258.7
Population                 118.9    193.3      116.1    240.7
Superpopulation            120.3    195.9      114.0    255.4

Table 3.8 City population data. Empirical coverages (%) and average and standard deviation of length of 90% confidence intervals based on the ratio estimate of the 1930 total, based on 1000 samples of size 10 from the population of size 49. The nominal lower, upper and overall coverages are 5, 95 and 90.

                           Coverage                    Length
Scheme                     Lower   Upper   Overall     Average    SD
Normal                       7       89      82           23      8.2
Modified size, n′ = 2        1       98      98          151    142
Modified size, n′ = 11       2       91      89           34     19
Mirror-match, m = 2          3       91      88           33     19
Population                   2       91      89           36     21
Superpopulation              1       92      91           41     24

samples of size n′ = 2. Not surprisingly, without-replacement resamples of size n′ = 2 from 10 observations give a very poor idea of what happens when samples of size 10 are taken without replacement from 49 observations, and the corresponding confidence interval is very wide. Studentized bootstrap confidence limits cannot be based on t_reg, because with n′ = 2 we have v*_reg = 0. For with-replacement resampling, we take (n − 1)/(1 − f) = n′ = 11, giving intervals quite close to those for the mirror-match, population and superpopulation bootstraps.

Figure 3.6 shows why the upper endpoints of the ratio and regression confidence intervals differ so much. The variance estimate v*_reg is unstable because of resamples in which case 4 does not appear and case 9 appears just once or not at all; then z* takes large negative values. The right panel of the figure explains this; the regression slope changes markedly when case 4 is deleted. Exclusion of case 9 further reduces the regression sum of squares and hence v*_reg. The ratio estimate is much less sensitive to case 4. If we insisted on using t_reg, one solution would be to exclude from the simulation samples in which case 4 does not appear. Then the 0.025 and 0.975 quantiles of z*_reg using the population bootstrap are −1.30 and 3.06, and the corresponding confidence interval is [112.9, 149.1].
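The two resampling schemes most used above can be sketched as follows (function names are ours; the population bootstrap assumes N/n is an integer, as in the text):

```python
import numpy as np

rng = np.random.default_rng(4)

def population_bootstrap_mean(y, N, R=999):
    """Population bootstrap for the mean: rebuild a notional population
    by repeating the sample N/n times, then resample n values without
    replacement; returns (t*, v*) for studentized intervals."""
    y = np.asarray(y, float)
    n = len(y)
    f = n / N
    pop = np.tile(y, N // n)                 # notional population of size N
    tstar, vstar = np.empty(R), np.empty(R)
    for r in range(R):
        ys = rng.choice(pop, size=n, replace=False)
        tstar[r] = ys.mean()
        vstar[r] = (1 - f) * ys.var(ddof=1) / n
    return tstar, vstar

def modified_size_bootstrap_mean(y, N, R=999):
    """With-replacement resampling of size n' = (n - 1)/(1 - f), chosen
    so that var*(Ybar*) is an unbiased estimate of (1 - f) * gamma / n."""
    y = np.asarray(y, float)
    n = len(y)
    nprime = int(round((n - 1) / (1 - n / N)))
    return np.array([rng.choice(y, size=nprime, replace=True).mean()
                     for _ in range(R)])
```

With n = 10 and N = 49 the modified size rounds to n′ = 11, as in the example.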



Figure 3.6 Population bootstrap results for regression estimator based on city data with n = 10. The left panel shows values of z*_reg and v*^{1/2} for resamples in which case 4 appears at least once (dots), and in which case 4 does not appear and case 9 appears zero times (0), once (1), or more times (+); the dotted line shows …. The right panel shows the sample and the regression lines fitted to the data with case 4 (dashes) and without it (dots); the vertical line shows the value ū at which θ is estimated.


To compare the performances of the various methods in setting confidence intervals, we conducted a numerical experiment in which 1000 samples of size n = 10 were taken without replacement from the population of size N = 49. For each sample we calculated 90% confidence intervals [L, U] for θ using R = 999 bootstrap samples. Table 3.8 contains the empirical values of Pr(θ < L), Pr(θ < U), and Pr(L < θ < U). The normal intervals are short and their coverages are much too small, while the modified intervals with n′ = 2 have the opposite problem. Coverages for the modified sample size with n′ = 11 and for the population and superpopulation bootstrap are close to their nominal levels, though their endpoints seem to be slightly too far left. The 80% and 95% intervals and those for the regression estimator have similar properties. In line with other studies in the literature, we conclude that the population and superpopulation bootstraps are the best of those considered here. ■

Stratified sampling

In most applications the population is divided into k strata, the ith of which contains N_i individuals from which a sample of size n_i is taken without replacement, independent of other strata. The ith sampling fraction is f_i = n_i/N_i and the proportion of the population in the ith stratum is w_i = N_i/N, where N = N_1 + ··· + N_k. The estimate of θ and its standard error are found by combining quantities from each stratum.

Two different setups can be envisaged for mathematical discussion. In the first, the "small-k" case, there is a small number of large strata: the asymptotic regime takes k fixed and n_i, N_i → ∞ with f_i → π_i, where 0 < π_i < 1.


Apart from there being k strata, the same ideas and results will apply as above, with the chosen resampling scheme applied separately in each stratum. The second setup, the "large-k" case, is where there are many small strata; in mathematical terms we suppose that k → ∞ but that N_i and n_i are bounded. This situation is more complicated, because biases from each stratum can combine in such a way that a bootstrap fails completely.

Example 3.16 (Average)

Suppose that the population 𝒴 comprises k strata, and that the jth item in the ith stratum is labelled 𝓎_ij; the average for that stratum is θ_i = N_i^{-1} Σ_j 𝓎_ij. Then the population average is θ = Σ_i w_i θ_i, which is estimated by T = Σ_i w_i Ȳ_i, where Ȳ_i is the average of the sample Y_{i1},...,Y_{in_i} from the ith stratum. The variance of T is

var(T) = Σ_{i=1}^k w_i²(1 − f_i) × (1/n_i) × (N_i − 1)^{-1} Σ_{j=1}^{N_i} (𝓎_ij − θ_i)²,    (3.18)

an unbiased estimate of which is

v = Σ_{i=1}^k w_i²(1 − f_i) × (1/n_i) × (n_i − 1)^{-1} Σ_{j=1}^{n_i} (Y_ij − Ȳ_i)².    (3.19)

Suppose for sake of simplicity that each N_i/n_i is an integer, and that the population bootstrap is applied to each stratum independently. Then the variance of the bootstrap version of T is

var*(T*) = Σ_{i=1}^k w_i²(1 − f_i) × N_i/{n_i²(N_i − 1)} × Σ_{j=1}^{n_i} (Y_ij − Ȳ_i)²,    (3.20)

■ F or setting confidence intervals using the studentized b o o tstrap the key issue is n o t the perform ance o f bias and variance estim ates, b u t the extent to which the distrib u tio n o f the resam pled q uantity Z* = (T* — t ) / V ’ll2m atches th at o f Z = ( T —6 ) / V 1/2. D etailed calculations show th a t when the population and superpopulation b o o tstrap s are used, Z an d Z* have the same limiting distribution u n d er b o th asym ptotic regimes, an d th a t under the fixed-/c setup the approxim ation is b etter th a n th a t using the other resam pling plans. Example 3.17 (Stratified ratio) F or em pirical com parison o f the m ore prom is­ ing o f these finite populatio n resam pling schemes w ith stratified data, we gen­ erated a pop u latio n w ith N pairs (u,x) divided into strata o f sizes N i , . . . , N k

99

3.7 ■Finite Population Sampling Table 3.9 Empirical coverages (%) of nominal 90% confidence intervals using the ratio estimate for a population average, based on 1000 stratified samples from populations with k strata of size N, from each of which a sample of size n = N/'i was taken without replacement. The nominal lower (L), upper (U) and overall (O) coverages are 5, 95 and 90.

N o rm al M odified size M irro r-m atch P o p u latio n S u p erp o p u latio n

k = 20, N = 18

k = 5, N = 72

k = 3 , N = 18

L

U

O

L

U

O

L

U

O

5 6 9 6 3

93 94 92 95 97

88 89 83 89 95

4 4 8 5 2

94 94 90 95 98

90 90 82 90 96

7 6 6 6 3

93 96 94 95

86 90 88 89 96

98

according to the ordered values o f u. The aim was to form 90% confidence intervals for k

N,

e = r l E E x'> .=i j=\

where x,j is the value o f x for the jth elem ent o f stratu m i. We took independent sam ples (uy,Xy) o f sizes n, w ithout replacem ent from the ith stratum , an d used these to form the ratio estim ate o f 9 and its estim ated variance, given by k

k

t = V WjU, X ti, i= 1

i

V = Y Wi ( 1 ~ f i ) i= 1

n,

X — (---- 7T ^ l } j —1

~~ t t o j ) 2’

where E / ' = 1 X ij

E jW

_

1

Ni

.....

these extend (3.16) to stratified sampling. We used b o o tstrap resam ples with R = 199 to com pute studentized b oo tstrap confidence intervals for 9 based on 1000 different sam ples from sim ulated datasets. Table 3.9 shows the em pirical coverages o f these confidence intervals in three situations, a “large-/c” case with k = 20, Nj = 18 and n, = 6, a “small-fc” case with k = 5, Ni = 72 and n, = 24, and a “small-fc” case w ith k = 3, Ni = 18 and n, = 6. The m odified sam pling m ethod used sam pling w ith replacem ent, giving sam ples o f size n' = 7 when n = 6 an d size ri = 34 w hen n = 24, while the corresponding values o f m for the m irror-m atch m ethod were 3 and 8. T h roughout / i = jIn all three cases the coverages for norm al, population and m odified sample size intervals are close to nom inal, while the m irror-m atch m ethod does poorly. T he superp o p u latio n m ethod also does poorly, perhaps because it was applied to separate stra ta ra th e r th an used to construct a new p o pulation to be stratified a t each replicate. Sim ilar results were obtained for nom inal 80% and 95% confidence limits. O verall the population b o o tstrap and m odified sample

3 ■Further Ideas

100

size m ethods d o best in this lim ited com parison, an d coverage is n o t im proved by using the m ore com plicated m irror-m atch m ethod. ■

3.8 Hierarchical Data In some studies the v ariatio n in responses m ay be hierarchical or m ulti­ level, as happens in repeated-m easures experim ents and the classical split-plot experim ent. D epending u p o n the n atu re o f the p aram eter being estim ated, it m ay be im p o rtan t to take careful account o f the two (or m ore) sources o f variation w hen setting up a resam pling scheme. In principle there should be no difficulty w ith p aram etric resam pling: having fitted the m odel param eters, resam ple d a ta will be generated according to a com pletely defined model. N onparam etric resam pling is n o t straightforw ard: certainly it will n o t m ake sense to use simple n o n p aram etric resam pling, which treats all observations as independent. H ere we discuss some o f the basic points ab o u t nonparam etric resam pling in a relatively simple context. Perhaps the m ost basic problem involving hierarchical variation can be form ulated as follows. F o r each o f a groups we o b tain b responses y tj such th a t y i} = X; +

Zij,

i = 1 , . . . , a, j = l , . . . , b ,

(3.21)

where the x,s are random ly sam pled from Fx an d independently the z^s are random ly sam pled from Fz, w ith E (Z ) = 0 to force uniqueness o f the model. T hus there is hom ogeneity o f variation in Z betw een groups, and the structure is additive. T he feature o f this m odel th a t com plicates resam pling is the correlation betw een observations w ithin a group, var(Yjy) = c* + a\,

cov(y,; , Yik) = a 2x,

j f k.

(3.22)

For data having this nested structure, one might be interested in parameters of F_x or F_z or some combination of both. For example, when testing for presence of variation in X the usual statistic of interest is the ratio of between-group and within-group sums of squares.

How should one resample nonparametrically for such a data structure? There are two simple strategies, for both of which the first stage is to randomly sample groups with replacement. At the second stage we randomly sample within the groups selected at the first stage, either without replacement (Strategy 1) or with replacement (Strategy 2). Note that Strategy 1 keeps selected groups intact.

To see which strategy is likely to work better, we look at the second moments of the resampled data y*_{ij} to see how well they match (3.22). Consider selecting y*_{i1}, …, y*_{ib}. At the first stage we select a random integer I* from {1, 2, …, a}. At the second stage, we select random integers from {1, 2, …, b}, either without replacement (Strategy 1) or with replacement (Strategy 2): the sampling without replacement is equivalent to keeping the I*th group intact. Under both strategies E*(Y*_{ij} | I* = i′) = ȳ_{i′}, and

E*(Y*_{ij} Y*_{ik} | I* = i′) = {b(b−1)}^{−1} Σ_{l≠m} y_{i′l} y_{i′m},   Strategy 1,
                               b^{−2} Σ_{l,m} y_{i′l} y_{i′m},        Strategy 2.

Therefore E*(Y*_{ij}) = ȳ,

var*(Y*_{ij}) = SS_B/a + SS_W/(ab),   (3.23)

and

cov*(Y*_{ij}, Y*_{ik}) = SS_B/a − SS_W/{ab(b−1)},   Strategy 1,
                         SS_B/a,                     Strategy 2,   (3.24)

where ȳ = a^{−1} Σ_i ȳ_i, SS_B = Σ_{i=1}^{a} (ȳ_i − ȳ)^2 and SS_W = Σ_{i=1}^{a} Σ_{j=1}^{b} (y_{ij} − ȳ_i)^2. To see how well the resampling variation mimics (3.22), we calculate expectations of (3.23) and (3.24), using

E(SS_B) = (a − 1)(σ_x^2 + b^{−1}σ_z^2),   E(SS_W) = a(b − 1)σ_z^2.

This gives

E{var*(Y*_{ij})} = (1 − a^{−1})(σ_x^2 + b^{−1}σ_z^2) + (1 − b^{−1})σ_z^2

and

E{cov*(Y*_{ij}, Y*_{ik})} = (1 − a^{−1})σ_x^2 − (ab)^{−1}σ_z^2,      Strategy 1,
                            (1 − a^{−1})(σ_x^2 + b^{−1}σ_z^2),      Strategy 2.

On balance, therefore, Strategy 1 more closely mimics the variation properties of the data, and so is the preferable strategy. Resampling should work well so long as a is moderately large, say at least 10, just as resampling homogeneous data works well if n is moderately large. Of course both strategies would work well if both a and b were very large, but this is rarely the case. An application of these results is given in Example 6.9.

The preceding discussion would apply to balanced data structures, but not to more complex situations, for which a more general approach is required. A direct, model-based approach would involve resampling from suitable estimates of the two (or more) data distributions, generalizing the resampling from F̂ in Chapter 2. Here we outline how this might work for the data structure (3.21).
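The two-stage resampling strategies just compared can be sketched in code. This is a minimal illustration in Python with function names of our own choosing, not code from the book:

```python
import random

def resample_hierarchical(groups, strategy):
    """One bootstrap resample of a balanced two-level dataset.

    groups: list of a groups, each a list of b responses.
    Strategy 1: sample groups with replacement and keep each
        selected group intact (second-stage sampling without
        replacement of all b values).
    Strategy 2: sample groups with replacement, then sample b
        values with replacement within each selected group.
    """
    b = len(groups[0])
    out = []
    for _ in range(len(groups)):
        g = random.choice(groups)   # first stage: pick a group at random
        if strategy == 1:
            out.append(list(g))     # keep the selected group intact
        else:
            out.append([random.choice(g) for _ in range(b)])
    return out
```

Because Strategy 1 leaves each selected group intact, the within-group dependence of the data carries over to the resamples, which is why its second moments come closer to (3.22).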


Estimates of the two CDFs F_x and F_z can be formed by first estimating the xs and zs, and then using their EDFs. A naive version of this, which parallels standard linear model theory, is to define

x̂_i = ȳ_i,   ẑ_{ij} = y_{ij} − ȳ_i.   (3.25)

The resulting way to obtain a resampled dataset is to

1  choose x*_1, …, x*_a by randomly sampling with replacement from x̂_1, …, x̂_a; then

2  choose z*_{11}, …, z*_{ab} by randomly sampling ab times with replacement from ẑ_{11}, …, ẑ_{ab}; and finally

3  set y*_{ij} = x*_i + z*_{ij},   i = 1, …, a,  j = 1, …, b.
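Steps 1–3 with the naive estimates (3.25) can be sketched as follows (illustrative Python; the function name is ours):

```python
import random

def naive_hierarchical_resample(y):
    """Steps 1-3 with xhat_i = mean of group i and
    zhat_ij = y_ij - xhat_i, pooling all residuals."""
    a, b = len(y), len(y[0])
    xhat = [sum(row) / b for row in y]                        # group means
    zhat = [yij - xhat[i] for i in range(a) for yij in y[i]]  # pooled residuals
    xstar = [random.choice(xhat) for _ in range(a)]           # step 1
    zstar = [random.choice(zhat) for _ in range(a * b)]       # step 2
    # step 3: y*_ij = x*_i + z*_ij
    return [[xstar[i] + zstar[i * b + j] for j in range(b)] for i in range(a)]
```

Pooling the residuals in step 2 is what reproduces the unsatisfactory second moments of Strategy 2; drawing each resampled group's z*s from a single group's residuals mimics Strategy 1 instead.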

Straightforward calculations (Problem 3.17) show that this approach has the same second-moment properties as Strategy 2 earlier, shown in (3.23) and (3.24), which are not satisfactory. Somewhat predictably, Strategy 1 is mimicked by choosing z*_{i1}, …, z*_{ib} randomly with replacement from a single group of residuals ẑ_{k1}, …, ẑ_{kb} (either a randomly selected group, or the group corresponding to x*_i; see Problem 3.17). What has gone wrong here is that the estimates x̂_i in (3.25) have excess variation.

… in part to avoid negative estimates v̂(θ). Provided that a suitable smoothing method is used, inclusion of t*_1 and t*_R in the set for which the v* are estimated implies that all the transformed values h(t*) can be calculated. The transformed estimator h(T) should have approximately unit variance. Any of the common smoothers can be used to obtain v̂(·), and simple integration algorithms can be used for the integral (3.40).

If the nested bootstrap is used only to obtain the variances v*_r for R_1 of the t*, the total number of bootstrap samples required is R + MR_1. Values of R_1 and M in the ranges 50–100 and 25–50 will usually be adequate, so if R = 1000 the overall number of bootstrap samples required will be 2250–6000. If variance estimates for all the t* are available, for example nonparametric delta method estimates, then the delta method shows that approximate standard errors for the h(t*_r) will be v*_r^{1/2}/v̂(t*_r)^{1/2}; a plot of these against t* will provide a check on the adequacy of the transformation. The same procedure can be applied with second-level resampling done from smoothed frequencies, as in Example 3.22.

Example 3.23 (City population data) For the city population data of Example 2.8 the parameter of interest is the ratio θ, which is estimated by t = x̄/ū. Figure 3.7 shows that the variance of T depends strongly on θ.
We used the procedure outlined above to estimate a transformation based on R = 999 bootstrap samples, with R_1 = 50 and M = 25. The transformation is shown in the left panel of Figure 3.11; the right panel shows the standard errors v*^{1/2}/v̂(t*)^{1/2} of the h(t*). The transformation has been largely successful in stabilizing the variance. In this case the variances v_{L,r} based on the linear approximation are readily calculated, and the transformation could have been estimated from them rather than from the nested bootstrap. ■
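The recipe used in Example 3.23 (pair each replicate t*_r with a second-level variance estimate v*_r, smooth the pairs to get v̂(·), then integrate v̂(u)^{−1/2} as in (3.40)) can be sketched as follows. The running-mean smoother and trapezoid rule here are crude stand-ins for whichever smoother and quadrature one prefers:

```python
def variance_stabilize(tstar, vstar, window=5):
    """Empirical variance-stabilizing transformation.

    tstar: bootstrap replicates t*_r; vstar: nested-bootstrap
    variance estimates for those replicates.  Returns the sorted
    t values and h(t) = integral of vhat(u)**-0.5 (trapezoid rule).
    """
    pairs = sorted(zip(tstar, vstar))
    ts = [p[0] for p in pairs]
    vhat = []
    for i in range(len(pairs)):  # running-mean smoother for v(t)
        lo, hi = max(0, i - window), min(len(pairs), i + window + 1)
        vhat.append(sum(p[1] for p in pairs[lo:hi]) / (hi - lo))
    h = [0.0]
    for i in range(1, len(ts)):  # integrate vhat(u)**-0.5
        g0, g1 = vhat[i - 1] ** -0.5, vhat[i] ** -0.5
        h.append(h[-1] + 0.5 * (g0 + g1) * (ts[i] - ts[i - 1]))
    return ts, h
```

As a sanity check, when v(·) is constant the returned h is linear: nothing needs stabilizing.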

3.10 Bootstrap Diagnostics

3.10.1 Jackknife-after-bootstrap

Sensitivity analysis is important in understanding the implications of a statistical calculation. A conclusion that depended heavily on just a few observations would usually be regarded as more tentative than one supported by all the data. When a parametric model is fitted, difficulties can be detected by a wide range of diagnostics, careful scrutiny of which is part of a parametric bootstrap analysis, as of any parametric modelling. But if a nonparametric bootstrap is used, the EDF F̂ is in effect the model, and there is no baseline against which


to compare outliers, for example. In this situation we must focus on the effect of individual observations on bootstrap calculations, to answer questions such as "would the confidence interval differ greatly if this point were removed?" or "what happens to the significance level when this observation is deleted?"

Nonparametric case

Once a nonparametric resampling calculation has been performed, a basic question is how it would have been different if an observation, y_j say, had been absent from the original data. For example, it might be wise to check whether or not a suspicious case has affected the quantiles used in a confidence interval calculation. The obvious way to assess this is to do a further simulation from the remaining observations, but this can be avoided, because a resample in which y_j does not appear can be thought of as a random sample from the data with y_j excluded. Expressed formally, if J* is sampled uniformly from {1, …, n}, then the conditional distribution of J* given that J* ≠ j is the same as the distribution of I*, where I* is sampled uniformly from {1, …, j−1, j+1, …, n}. The probability that y_j is not included in a bootstrap sample is (1 − n^{−1})^n ≈ e^{−1}, so the number of simulations R_{−j} that do not include y_j is roughly equal to Re^{−1} = 0.368R. So we can measure the effect of y_j on the calculations by comparing the full simulation with the subset of t*_1, …, t*_R obtained from bootstrap samples where y_j does not occur. In terms of the frequencies f*_{rj} which count the number of times y_j appears in the rth simulation, we simply restrict attention to replicates with f*_{rj} = 0. For example, the effect of y_j on the bias estimate B can be

Figure 3.11 Variance stabilization for the city population ratio. The left panel shows the empirical transformation h(·), and the right panel shows the standard errors v*^{1/2}/{v̂(t*)}^{1/2} of the h(t*), with a smooth curve.

Table 3.10 Measurements on the head breadth and length of the first two adult sons in 25 families (Frets, 1921).

          First son        Second son
Family    Len    Brea      Len    Brea
  1       191    155       179    145
  2       195    149       201    152
  3       181    148       185    149
  4       183    153       188    149
  5       176    144       171    142
  6       208    157       192    152
  7       189    150       190    149
  8       197    159       189    152
  9       188    152       197    159
 10       192    150       187    151
 11       179    158       186    148
 12       183    147       174    147
 13       174    150       185    152
 14       190    159       195    157
 15       188    151       187    158
 16       163    137       161    130
 17       195    155       183    158
 18       186    153       173    148
 19       181    145       182    146
 20       175    140       165    137
 21       192    154       185    152
 22       174    143       178    147
 23       176    139       176    143
 24       197    167       200    158
 25       190    163       187    150

measured by the scaled difference

n(B_{−j} − B) = n { R_{−j}^{−1} Σ_{r: f*_{rj}=0} (t*_r − t_{−j}) − R^{−1} Σ_{r=1}^{R} (t*_r − t) },   (3.41)

where B_{−j} is the bias estimate from the resamples in which y_j does not appear, and t_{−j} is the value of t when y_j is excluded from the original data. Such calculations are applications of the jackknife method described in Section 2.7.3, so the technique applied to bootstrap results is called the jackknife-after-bootstrap. The scaling factor n in (3.41) is not essential.

A useful diagnostic is the plot of jackknife-after-bootstrap measures such as (3.41) against empirical influence values, possibly standardized. For this purpose any of the approximations to empirical influence values described in Section 2.7 can be used. The next example illustrates a related plot that shows how the distribution of t* − t changes when each observation is excluded.

Example 3.24 (Frets' heads) Table 3.10 contains data on the head breadth and length of the first two adult sons in 25 families. The correlations among the log measurements are given below the diagonal in Table 3.11. The values above the diagonal are the partial correlations. For example, the value 0.13 in the second row is the correlation between the log head breadth of the first son, b_1, and the log head length of the second son, l_2, after allowing for the other variables. In effect, this is the correlation between the residuals from separate regressions of b_1 and l_2 on the other two variables. The correlations are all large, but four of the partial correlations are small, which suggests the simple interpretation that each of the four pairs of measurements for first and second sons is independent conditionally on the values of the other two measurements.

Table 3.11 Correlations (below diagonal) and partial correlations (above diagonal) for log measurements on the head breadth and length of the first two adult sons in 25 families.

                            First son          Second son
                            Length   Breadth   Length   Breadth
First son     Length          ·       0.43      0.21     0.17
              Breadth       0.75        ·       0.13     0.22
Second son    Length        0.72      0.70        ·      0.64
              Breadth       0.72      0.72      0.85       ·

We focus on the partial correlation t = 0.13 between log b_1 and log l_2. The top panel of Figure 3.12 shows a jackknife-after-bootstrap plot for t, based on 999 bootstrap samples. The points at the left-hand end show the empirical 0.05, 0.1, 0.16, 0.5, 0.84, 0.9 and 0.95 quantiles of the values of t* − t̄*_{−2} for the 368 bootstrap samples in which case 2 was not selected; t̄*_{−2} is the average of t* for those samples. The dotted lines are the corresponding quantiles for all 999 values of t* − t. The distribution is clearly much more peaked when case 2 is left out. The panel also contains the corresponding quantiles when other cases are excluded. The horizontal axis shows the empirical influence values for t: clearly putting more weight on case 2 sharply decreases the value of t.

The lower left panel of the figure shows that case 2 lies somewhat away from the rest, and the plot of residuals for the regressions of log b_1 and log l_2 on (log l_1, log b_2) in the lower right panel accounts for the jackknife-after-bootstrap results. Case 2 seems outlying relative to the others: deleting it will clearly increase t substantially. The overall average and standard deviation of the t* are 0.14 and 0.23, changing to 0.34 and 0.17 when case 2 is excluded. The evidence against zero partial correlation depends heavily on case 2. ■

Another version of the diagnostic plot uses case-deletion averages of the t*, i.e. t̄*_{−j} = R_{−j}^{−1} Σ_{r: f*_{rj}=0} t*_r, instead of the empirical influence values. This more clearly reveals how the quantity of interest varies with parameter values.

Parametric case

In the parametric case different calculations are needed, because random samples from a case-deletion model are not simply an unweighted subset of the original bootstrap samples.
Nevertheless, those original bootstrap samples can still be used if we make use of the following identity relating expectations under two different parameter values:

E{ h(Y) | ψ′ } = E{ h(Y) f(Y | ψ′)/f(Y | ψ) | ψ }.   (3.42)

Suppose that the full-data estimate (e.g. maximum likelihood estimate) of the model parameter is ψ̂, and that when case j is deleted the corresponding estimate is ψ̂_{−j}. The idea is to use (3.42) with ψ̂ and ψ̂_{−j} in place of ψ and ψ′


Figure 3.12 Jackknife-after-bootstrap analysis for the partial correlation between log b_1 and log l_2 for Frets' heads data. The top panel shows 0.05, 0.1, 0.16, 0.5, 0.84, 0.9 and 0.95 empirical quantiles of t* − t̄*_{−j} when each of the cases is dropped from the bootstrap calculation in turn. The lower panels show scatter plots of the raw values of log b_1 and log l_2, and of their residuals when regressed on the other two variables.


respectively. For example,

E( T* − t_{−j} | ψ̂_{−j} ) = E{ (T* − t_{−j}) f(Y* | ψ̂_{−j})/f(Y* | ψ̂) | ψ̂ }.

Therefore the parametric analogue of (3.41) is

n(B_{−j} − B) = n [ R^{−1} Σ_{r=1}^{R} (t*_r − t_{−j}) f(y*_r | ψ̂_{−j})/f(y*_r | ψ̂) − R^{−1} Σ_{r=1}^{R} (t*_r − t) ],

where the samples y*_r are drawn from the full-data fitted model, that is with parameter value ψ̂. Similar weighted calculations apply to other features of the


distribution of T* − t; see Problem 3.20. Other applications of the importance reweighting identity (3.42) will be discussed in Chapter 9.
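In the nonparametric case, by contrast, the whole jackknife-after-bootstrap computation reduces to subsetting an existing simulation by the frequencies f*_{rj}; a minimal sketch (Python, function name ours):

```python
def cases_excluded(tstar, freqs, j):
    """Replicates t*_r from resamples in which case j never appears.

    tstar: the R bootstrap replicates; freqs: R frequency vectors,
    freqs[r][j] counting how often case j occurs in resample r.
    The returned subset behaves like a bootstrap simulation from
    the data with case j deleted, so no further resampling is needed.
    """
    return [t for t, f in zip(tstar, freqs) if f[j] == 0]
```

Comparing quantiles of this subset with quantiles of the full set of t*_r gives displays like the top panel of Figure 3.12; roughly 0.368R replicates survive the subsetting.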

3.10.2 Linearity

Statistical analysis is simplified when the statistic of interest T is close to linear. In this case the variance approximation v_L will be an accurate estimate of the bootstrap variance var(T* | F̂), and saddlepoint methods (Section 9.5) can be applied to obtain accurate estimates of the distribution of T* without recourse to simulation. A linear statistic is not necessarily close to normally distributed, as Example 2.3 illustrates. Nor does linearity guarantee that T is directly related to a pivot and therefore useful in finding confidence intervals. On the other hand, experience from other areas in statistics suggests that these three properties will often occur together.

This suggests that we aim to find a transformation h(·) such that h(T) is well described by the linear approximation that corresponds to (2.35) or (3.1). For simplicity we focus on the single-sample case here. The shape of h(·) would be revealed by a plot of h(t*) against t*, but of course this is not available because h(·) is unknown. However, using Taylor approximation and (2.44) we do have

h(t*) ≈ h(t*_L) = h(t) + ḣ(t) n^{−1} Σ_{j=1}^{n} f*_j l_j = h(t) + ḣ(t)(t*_L − t),

which shows that t*_L = c + d h(t*_L) with appropriate definitions of the constants c and d. Therefore a plot of the values of t*_L = t + n^{−1} Σ_j f*_j l_j against the t* will look roughly like h(·), apart from a location and scale shift. We can now estimate h(·) from this plot, either by fitting a particular parametric form, or by nonparametric curve estimation.

Example 3.25 (City population data) The top left panel of Figure 3.13 shows t*_L plotted against t* for 499 bootstrap replicates of the ratio t = x̄/ū for the data in Table 2.1. The plot is highly nonlinear, and the logarithmic transformation, or one even more extreme, seems appropriate.
Note that the plot has a shape similar to that for the empirical variance-stabilizing transformation in Figure 3.11. For a parametric transformation, we try a Box–Cox transformation, h(t) = (t^λ − 1)/λ, with the value of λ estimated by maximizing the log likelihood for the regression of the h(t*_r) on the t*_{L,r}. This strongly suggests that we use λ = −2, for which the fitted curve is shown as the solid line on the plot. This is close to the result for a smoothing spline, shown as the dotted line. The top right panel shows the linear approximation for h(t*), i.e. h(t) + ḣ(t) n^{−1} Σ_{j=1}^{n} f*_j l_j, plotted against h(t*). This plot is close to the line with unit gradient, and confirms the results of the analysis of transformations.

Here ḣ(t) denotes dh(t)/dt.


Figure 3.13 Linearity transformation for the ratio applied to the city population data. The top left panel shows linear approximations t*_L plotted against bootstrap replicates t*, with the estimated parametric transformation (solid) and a transformation estimated by a smoothing spline (dots). The top right panel shows the same plot on the transformed scale. The lower left panel shows the plot for the studentized bootstrap statistic. The lower right panel shows a normal Q-Q plot of the studentized bootstrap statistic for the transformed values h(t*).


The lower panels show related plots for the studentized bootstrap statistics on the original scale and on the new scale,

z* = (t* − t)/v*_L^{1/2},   z*_h = {h(t*) − h(t)}/{ḣ(t) v*_L^{1/2}},

where v*_L = n^{−2} Σ_j f*_j l_j^2. The left panel shows that, like t*, z* is far from linear. The lower right panel shows that the distribution of z*_h is fairly close to standard normal, though there are some outlying values. The distribution of z* is far from normal, as shown by the right panel of Figure 2.5. It seems that, here, the transformation that gives approximate linearity of t* also makes the corresponding studentized bootstrap statistic roughly normal. The transformation based on the smoothing spline would give similar results. ■
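The linear approximations t*_L that drive these diagnostics come directly from the resampling frequency vectors f*_r and the empirical influence values l_j; a sketch (illustrative Python, names ours):

```python
def linear_approximations(t, lvals, freqs):
    """t*_L = t + n**-1 * sum_j f*_rj * l_j for each resample r.

    t: the original statistic; lvals: empirical influence values
    l_1,...,l_n; freqs: list of frequency vectors f*_r.
    """
    n = len(lvals)
    return [t + sum(f * l for f, l in zip(fr, lvals)) / n for fr in freqs]
```

Plotting these values against the replicates t*_r, as in the top left panel of Figure 3.13, reveals the shape of a linearizing transformation h(·) up to location and scale.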

3.11 Choice of Estimator from the Data

In some applications we may want to choose an estimator or other procedure after looking at the data, especially if there is considerable prior uncertainty about the nature of random variation or of the form of relationship among variables. The simplest example with homogeneous data involves the choice of estimator for a population mean μ, when empirical evidence suggests that the underlying distribution F has long, non-normal tails.

Suppose that T(1), …, T(K) can all be considered potentially suitable estimators for μ, and for the moment assume that all are unbiased, which means that the underlying data distribution is symmetric. Then one natural criterion for choice among these estimators is variance or, since their exact variances will be unknown, estimated variance. So if the estimated variance of T(i) is V(i), a natural procedure is to select as estimate for a given dataset that t(i) whose estimated variance is smallest. This defines the adaptive estimator T by

T = T(i)   if   V(i) = min_{1≤k≤K} V(k).

There are two byproducts of this double bootstrap procedure. One is information on how well-determined the choice of estimator is, if this is of interest, simply by examining the relative frequency with which each estimator is chosen. Secondly, the bias of v(i) can be approximated: on the log scale the bias is estimated by R^{−1} Σ_r log v*_r − log v, where v*_r is the smallest of the v*_r(i)s in the rth bootstrap sample.

Example 3.26 (Gravity data) Suppose that the data in Table 3.1 were only available as a combined sample of n = 81 measurements. The different dispersions of the ingredient series make the combined sample very non-normal, so that the simple average is a poor estimator of the underlying mean μ. One possible approach is to consider the trimmed average estimates

t(k) = (n − 2k)^{−1} Σ_{j=k+1}^{n−k} y_{(j)},

which are averages after dropping the k smallest and k largest order statistics y_{(1)} ≤ … ≤ y_{(n)}. The usual average and sample median correspond respectively to k = 0 and k = ½(n − 1). The left panel of Figure 3.14 plots the trimmed averages against k. The mild downward trend in the plot suggests slight asymmetry of the data distribution.

Our aim is to use the bootstrap to choose among the trimmed averages. The trimmed averages will all be unbiased if the underlying data distribution is symmetric, and estimator variance will then be a sensible criterion on which to base the choice. The bootstrap procedure must build in the assumed symmetry,


and this can be done (cf. Example 3.4) by simulating samples from a symmetrized version of F̂ such as

F̂_sym(y) = ½{ F̂(y) + 1 − F̂(2μ̂ − y − 0) },

which is simply the EDF of y_1, …, y_n, μ̂ − (y_1 − μ̂), …, μ̂ − (y_n − μ̂), with μ̂ an estimate of μ which for this purpose we take to be the sample median. The centre panel of Figure 3.14 shows bootstrap estimates of variance for eleven trimmed averages based on R = 1000 samples drawn from F̂_sym. We conclude from this that k = 36 is best, but that there is little to choose among trimmed averages with k = 24, …, 40. A similar conclusion emerges if we sample from F̂, although the bootstrap variances are noticeably higher for k ≥ 24.

If symmetry of the underlying distribution were in doubt, then we should take the biases of the estimators into account. One natural criterion then would be mean squared error. In this case our bootstrap samples would be drawn from F̂, and we would select among the trimmed averages on the basis of the bootstrap mean squared error

mse(i) = R^{−1} Σ_{r=1}^{R} { t*_r(i) − ȳ }².

Note that mean squared error is measured relative to the mean ȳ of the bootstrap population. The right panel of Figure 3.14 shows the bootstrap mean squared errors for our trimmed averages, and we see that the estimated biases do have an effect: now a value of k nearer 20 would appear to be best. Under the symmetric bootstrap, when the mean of F̂_sym is the sample median because we symmetrized about this point, bootstrap mean squared error equals bootstrap variance.

To focus the rest of the discussion, we shall assume symmetry and therefore choose t to be the trimmed average with k = 36. The value of t is 78.33, and the minimum bootstrap variance based on 1000 simulations is 0.321. We now use the double bootstrap procedure to estimate the variance for t, and to determine appropriate quantiles for t. First we generate R = 1000

Figure 3.14 Trimmed averages and their estimated variances and mean squared errors for the pooled gravity data, based on R = 1000 bootstrap samples, using the ordinary bootstrap (•) and the symmetric bootstrap (○).


samples y*_1, …, y*_81 from F̂_sym. To each of these samples we then apply the original symmetric bootstrap procedure, generating M = 100 samples of size n = 81 from the symmetrized EDF of y*_1, …, y*_81, choosing t* to be that one of the 11 trimmed averages with smallest value of v*(i). The variance v of t*_1, …, t*_R equals 0.356, which is 10% larger than the original minimum variance. If we use this variance with a normal approximation to calculate a 95% confidence interval centred on t, the interval is [77.16, 79.50]. This is very similar to the intervals obtained in Example 3.2.

The frequencies with which the different trimming proportions are chosen are:

k          12   16   20   24   28    32    36   40
Frequency   1   25   54   96   109   131   498   86

Thus when symmetry of the underlying distribution is assumed, a fairly heavy degree of trimming seems desirable for these data, and the value k = 36 actually chosen seems reasonably well-determined. ■

The general features of this discussion are as follows. We have a set of estimators T(a) = t(a, F̂) for a ∈ A, and for each estimator we have an estimated value C(a, F̂) for a criterion C(a, F) = E{c(T(a), θ) | F} such as variance or mean squared error. The adaptive estimator is T = t(â, F̂), where â = a(F̂) minimizes C(a, F̂) with respect to a. We want to know about the distribution of T, including for example its bias and variance. The distribution of T − θ = t(F̂) − t(F) under sampling from F will be approximated by evaluating it under sampling from F̂. That is, it will be approximated by the distribution of

T* − t = t(F̂*) − t(F̂) = t(â*, F̂*) − t(â, F̂)

under sampling from F̂. Here F̂* is the analogue of F̂ based on y*_1, …, y*_n: if F̂ is the EDF of the data, then F̂* is the EDF of y*_1, …, y*_n sampled from F̂. Whether or not the allowance for selection bias is numerically important will depend upon the density of a values and the variability of C(a, F̂).
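The adaptive choice among trimmed averages can be sketched as below. For brevity the sketch resamples from the ordinary EDF rather than from the symmetrized version F̂_sym used in Example 3.26, and the helper names are ours:

```python
import random
from statistics import variance

def trimmed_mean(y, k):
    """Average after dropping the k smallest and k largest values."""
    ys = sorted(y)
    kept = ys[k:len(ys) - k]
    return sum(kept) / len(kept)

def choose_trim(y, ks, R=200, seed=0):
    """Adaptive estimator: pick the trimming level whose bootstrap
    variance estimate is smallest, i.e. T = T(i) with V(i) minimal."""
    rng = random.Random(seed)
    samples = [[rng.choice(y) for _ in y] for _ in range(R)]
    best_k, best_v = None, float("inf")
    for k in ks:
        v = variance([trimmed_mean(s, k) for s in samples])
        if v < best_v:
            best_k, best_v = k, v
    return best_k, best_v
```

A double bootstrap wraps this whole selection inside a second level of resampling, which is the role it plays in Example 3.26.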

3.12 Bibliographic Notes

The extension of bootstrap methods to several unrelated samples has been used by several authors, including Hayes, Perl and Efron (1989) for a special contrast-estimation problem in particle physics; the application is discussed also in Efron (1992) and in Practical 3.4. A general theoretical account of estimation in semiparametric models is given in the book by Bickel et al. (1993). The majority of applications of semiparametric models are in regression; see the references for Chapters 6 and 7.


Efron (1979, 1982) suggested and studied empirically the use of smooth versions of the EDF, but the first systematic investigation of smoothed bootstraps was by Silverman and Young (1987). They studied the circumstances in which smoothing is beneficial for statistics for which there is a linear approximation. Hall, DiCiccio and Romano (1989) show that when the quantity of interest depends on a local property of the underlying CDF, as do quantiles, smoothing can give worthwhile theoretical reductions in the size of the mean squared error. Similar ideas apply to more complex situations such as L1 regression (De Angelis, Hall and Young, 1993); see however the discussion in Section 6.5. De Angelis and Young (1992) give a useful review of bootstrap smoothing, and discuss the empirical choice of how much smoothing to apply. See also Wang (1995). Romano (1988) describes a problem, estimation of the mode of a density, where the estimator is undefined unless the EDF is smoothed; see also Silverman (1981). In a spatial data problem, Kendall and Kendall (1980) used a form of bootstrap that jitters the observed data, in order to keep the rough configuration of points constant over the simulations; this amounts to sampling without replacement when applying the smoothed bootstrap. Young (1990) concludes that although this approach can outperform the unsmoothed bootstrap, it does not perform so well as the smoothed bootstrap described in Section 3.4.

General discussions of survival data can be found in the books by Cox and Oakes (1984) and Kalbfleisch and Prentice (1980), while Fleming and Harrington (1991) and Andersen et al. (1993) give more mathematical accounts. The product-limit estimator was derived by Kaplan and Meier (1958); it and its variants are widely used in practice.
Efron (1981a) proposed the first bootstrap methods for survival data, and discussed the relation between traditional and bootstrap standard errors for the product-limit estimator. Akritas (1986) compared variance estimates for the median survival time from Efron's sampling scheme and a different approach of Reid (1981), and concluded that Efron's scheme is superior. The conditional method outlined in Section 3.5 was suggested by Hjort (1985), and subsequently studied by Kim (1990), who concluded that it estimates the conditional variance of the product-limit estimator somewhat better than does resampling cases. Doss and Gill (1992) and Burr and Doss (1993) give weak convergence results leading to confidence bands for quantiles of the survival time distribution. The asymptotic behaviour of parametric and nonparametric bootstrap schemes for censored data is described by Hjort (1992), while Andersen et al. (1993) discuss theoretical aspects of the weird bootstrap.

The general approach to missing-data problems via the EM algorithm is discussed by Dempster, Laird and Rubin (1977). Bayesian methods using multiple imputation and data augmentation are described by Tanner and Wong (1987)


and Tanner (1996). A detailed treatment of multiple imputation techniques for missing-data problems, with special emphasis on survey data, is given by Rubin (1987). The principal reference for resampling in missing-data problems is Efron (1994), together with the useful, cautionary discussion by D. B. Rubin. The account in Section 3.6 puts more emphasis on careful choice of estimators.

Cochran (1977) is a standard reference on finite population sampling. Variance estimation by balanced subsampling methods was discussed in this context as early as McCarthy (1969), but the first attempt to apply the bootstrap directly was by Gross (1980), who describes what we have termed the "population bootstrap", but restricted to cases where N/n is an integer. This approach was subsequently developed by Bickel and Freedman (1984), while Chao and Lo (1994) also make a case for it. Booth, Butler and Hall (1994) describe the construction of studentized bootstrap confidence limits in this context. Presnell and Booth (1994) give a critical discussion of the earlier literature and describe the superpopulation bootstrap. The use of modified sample sizes was proposed by McCarthy and Snowden (1985) and the mirror-match method by Sitter (1992). A different approach based on rescaling was introduced by Rao and Wu (1988). A comprehensive theoretical discussion of the jackknife and bootstrap in sample surveys is given in Chapter 6 of Shao and Tu (1995), with later developments described by Presnell and Booth (1994) and Booth, Butler and Hall (1994), on which the account in Section 3.7 is largely based.

Little has been written about resampling hierarchical data, although two relevant references are given in the bibliographic notes for Chapter 7. Related methods for bootstrapping empirical Bayes estimates in hierarchical Bayes models are described by Laird and Louis (1987).
N onparam etric estim ation o f the C D F for a ran d o m effect is discussed by L aird (1978). B ootstrapping the b o o tstrap is described by C hapm an and H inkley (1986), an d was applied to estim ation o f variance-stabilizing transform ations by Tibshirani (1988). T heoretical aspects o f adjustm ent o f boo tstrap calculations were developed by H all an d M artin (1988). See also the bibliographic notes for C hapters 4 and 5. M ilan and W h ittak er (1995) give a param etric boo tstrap analysis o f the d a ta in Table 3.10, and discuss the difficulties th at can arise when resam pling in problem s with a singular value decom position. Efron (1992) introduced the jackknife-after-bootstrap, and described a vari­ ety o f ingenious uses for related calculations. D ifferent graphical diagnostics for b o o tstrap reliability are developed in an asym ptotic fram ew ork by Beran (1997). The linearity plot o f Section 3.10.2 is due to C ook and W eisberg (1994). Theoretical aspects o f the em pirical choice o f estim ator are discussed by Leger and R om an o (1990a,b) and Leger, Politis and R om ano (1992). Efron (1992) gives an exam ple o f choice o f level o f trim m ing o f a robust estim ator, w ithout double bootstrapping. Some o f the general issues, w ith examples, are discussed by Faraw ay (1992).


3.13 Problems

1

In a two-sample problem, with data y_ij, j = 1,...,n_i, i = 1, 2, giving sample averages ȳ_i and variances v_i, describe models for which it would be appropriate to resample the following quantities:
(a) e_ij = y_ij − ȳ_i,
(b) e_ij = (y_ij − ȳ_i)/(1 + n_i⁻¹)^{1/2},
(c) e_ij = (y_ij − ȳ_i)/{v_i(1 + n_i⁻¹)}^{1/2},
(d) e_ij = ±(y_ij − ȳ_i)/{v_i(1 + n_i⁻¹)}^{1/2}, where the signs are allocated with equal probabilities,
(e) e_ij = y_ij/ȳ_i.
In each case say how a simulated dataset would be constructed. What difficulties, if any, would arise from replacing ȳ_i and v_i by more robust estimates of location and scale?
(Sections 3.2, 3.3)

2
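As a concrete illustration (not from the text), two of the schemes above might be implemented as follows; the data values and function names are invented for the sketch, and the kernel of each scheme is: pool suitably standardized residuals across samples, then rebuild each sample from its own fitted location (and scale).

```python
import numpy as np

def resample_scheme_a(y1, y2, rng):
    """Scheme (a): pool the centred residuals e_ij = y_ij - ybar_i and rebuild
    each sample as its own mean plus residuals resampled with replacement
    (appropriate when the two error distributions share shape and scale)."""
    e = np.concatenate([y1 - y1.mean(), y2 - y2.mean()])
    return (y1.mean() + rng.choice(e, size=len(y1)),
            y2.mean() + rng.choice(e, size=len(y2)))

def resample_scheme_c(y1, y2, rng):
    """Scheme (c): studentize by {v_i (1 + 1/n_i)}^(1/2) before pooling, then
    rescale by each sample's own factor (allows different variances)."""
    ys = [np.asarray(y1, float), np.asarray(y2, float)]
    scales = [np.sqrt(y.var(ddof=1) * (1 + 1 / len(y))) for y in ys]
    e = np.concatenate([(y - y.mean()) / s for y, s in zip(ys, scales)])
    return tuple(y.mean() + s * rng.choice(e, size=len(y))
                 for y, s in zip(ys, scales))

rng = np.random.default_rng(1)
y1 = np.array([3.1, 2.7, 3.5, 2.9, 3.3])   # made-up two-sample data
y2 = np.array([5.0, 4.2, 4.8, 5.4])
a1, a2 = resample_scheme_a(y1, y2, rng)
c1, c2 = resample_scheme_c(y1, y2, rng)
print(a1.shape, c1.shape)
```

Replacing the means and variances here by robust analogues (median, MAD) only changes the centring and scaling lines, which is one way to explore the last part of the problem.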

A slightly simplified version of the weighted mean of k samples, as used in Example 3.2, is defined by

T = ( Σ_{i=1}^k w_i ȳ_i ) / ( Σ_{i=1}^k w_i ),

where w_i = n_i/σ̂_i², with ȳ_i = n_i⁻¹ Σ_j y_ij and σ̂_i² = n_i⁻¹ Σ_j (y_ij − ȳ_i)² estimates of the mean μ_i and variance σ_i² of the ith distribution. Show that the influence functions for T are

L_{t,i}(y; F) = (w_i / Σ_j w_j) [ y − μ_i − (μ_i − θ){(y − μ_i)² − σ_i²}/σ_i² ],

where w_i = n_i/σ_i². Deduce that the first-order approximation under the constraint μ_1 = ··· = μ_k for the variance of T is v_L = 1/Σ w_i, with empirical analogue v̂_L = 1/Σ ŵ_i. Compare this to the corresponding formula based on the unconstrained empirical influence values.
(Section 3.2.1)

3

Suppose that Y is bivariate with polar representation (X, ω), so that Yᵀ = (X cos ω, X sin ω). If it is known that ω has a uniform distribution on [0, 2π), independent of X, what would be an appropriate resampling algorithm based on the random sample y_1,...,y_n? (Section 3.3)

4

Spherical data y_1,...,y_n are points on the sphere of unit radius. Suppose that it is assumed that these data come from a distribution that is symmetric about the unknown mean direction μ. In light of the symmetry assumption, what would be an appropriate resampling algorithm for simulating data y*_1,...,y*_n? (Section 3.3; Ducharme et al., 1985)

5

Two independent random samples y_11,...,y_1n_1 and y_21,...,y_2n_2 of positive data are obtained, and the ratio of sample means t = ȳ_2/ȳ_1 is used to estimate the corresponding population ratio θ = μ_2/μ_1.
(a) Show that the influence functions for t are

L_{t,1}(y_1; F) = −(y_1 − μ_1)θ/μ_1,  L_{t,2}(y_2; F) = (y_2 − μ_2)/μ_1.

...

The observed value t is then equal to u(π̄), where π̄ = (n⁻¹,...,n⁻¹) with n = Σ_i n_i. Show that

l_j = (d/dε) u{(1 − ε)π̄ + ε 1_j} |_{ε=0},

where 1_j is the vector with 1 in the (N_{i−1} + j)th position, with N_0 = 0 and N_i = n_1 + ··· + n_i, and zeroes elsewhere. One consequence of this is that v_L = n⁻² Σ_j l_j². Apply these calculations to the ratio t = ȳ_2/ȳ_1.
(Section 3.2.1)

8

If x_1,...,x_n is a random sample from some distribution G with density g, suppose that this density is estimated by

ĝ_h(x) = (nh)⁻¹ Σ_{j=1}^n w{ (x − x_j)/h },

where w is a symmetric PDF with mean zero and variance τ².
(a) Show that this density estimate has mean x̄ and variance n⁻¹ Σ (x_j − x̄)² + h²τ².
(b) Show that the random variable X = x_J + hε has PDF ĝ_h, where J is uniformly distributed on {1,...,n} and ε has PDF w. Hence describe an algorithm for bootstrap simulation from a smoothed version of the EDF.
(c) Show that the rescaled density

(nhb)⁻¹ Σ_{j=1}^n w{ (x − a − b x_j)/(hb) }

will have the same first two moments as the EDF if a = (1 − b)x̄ and b = {1 + nh²τ²/Σ(x_j − x̄)²}^{−1/2}. What algorithm simulates from this smoothed EDF?
(d) Discuss the special problems that arise from using ĝ_h(x) when the range of x is [0, ∞) rather than (−∞, ∞).
(e) Extend the algorithms in (b) and (c) to multivariate x.
(Section 3.4; Silverman and Young, 1987; Wand and Jones, 1995)
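The algorithms of parts (b) and (c) can be sketched in a few lines; this is an illustrative implementation (not from the text) with a standard normal kernel, so τ² = 1. The shrinkage constants a and b are exactly those of part (c), so the rescaled version reproduces the first two sample moments.

```python
import numpy as np

def smoothed_bootstrap(x, h, size, rng, rescale=False):
    """Part (b): draw x* = x_J + h*eps with J uniform on {1,...,n} and eps
    from the kernel w (here standard normal, tau^2 = 1).  With rescale=True,
    apply part (c): a + b*x_J + h*b*eps, with a = (1-b)*xbar and
    b = {1 + n h^2 / sum (x_j - xbar)^2}^(-1/2), so the first two moments
    match those of the EDF exactly."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    j = rng.integers(0, n, size=size)        # uniform index J
    eps = rng.standard_normal(size)          # kernel noise, variance 1
    if not rescale:
        return x[j] + h * eps
    b = (1 + n * h**2 / ((x - xbar) ** 2).sum()) ** -0.5
    a = (1 - b) * xbar
    return a + b * x[j] + h * b * eps

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.0, size=200)            # fake positive data
draws = smoothed_bootstrap(x, h=0.3, size=100_000, rng=rng, rescale=True)
print(draws.mean() - x.mean(), draws.var() - ((x - x.mean())**2).mean())
```

Note that with positive data like these, smoothed draws can be negative, which is the issue raised in part (d).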

9

Consider resampling cases from censored data (y_1, d_1),...,(y_n, d_n), where y_1 < ··· < y_n. Let f*_j denote the number of times that (y_j, d_j) occurs in an ordinary bootstrap sample, and let S*_j = f*_1 + ··· + f*_j.
(a) Show that when there is no censoring, the product-limit estimate puts mass n⁻¹ on each observed failure y_i ...

... show that E*(Ȳ*) = ȳ and E_M{var*(Ȳ* | M)} = ··· (1 − f) ··· c.
(Section 3.7; Presnell and Booth, 1994)

14

Suppose we wish to perform mirror-match resampling with k independent without-replacement samples of size m, but that k = {n(1 − m/n)}/{m(1 − f)} is not an integer. Let K* be the random variable such that

Pr(K* = k′) = 1 − Pr(K* = k′ + 1) = k′(1 + k′ − k)/k,

where k′ = [k] is the integer part of k. Show that if the mirror-match algorithm is applied for an average Ȳ* with this distribution for K*, then var*(Ȳ*) = (1 − m/n)c/(mk). Show also that under mirror-match resampling with the simplifying assumption that randomization is not required because k is an integer,

E*(C*) = c{1 − ···},

where C* is the sample variance of the Ȳ*_j. What implications are there for variance estimation for more complex statistics?
(Section 3.7; Sitter, 1992)

15

Suppose that n is a large even integer and that N = 5n/2, and that instead of applying the population bootstrap we choose a population 𝒴* from which to resample according to

𝒴* = y_1,...,y_n, y_1,...,y_n, y_1,...,y_n   with probability 1/2,
𝒴* = y_1,...,y_n, y_1,...,y_n               with probability 1/2.

(Here #{A} denotes the number of elements in the set A.)

Having selected 𝒴*, ...

... Since under H_0 the chain starts in equilibrium,

Pr(Y* = y* | H_0) = Pr(Z_N = y*) = f_0(y*).

That is, if H_0 is true, then the R replicates and the data y are all sampled from f_0, as we require. Moreover, the R replicates of y* are jointly exchangeable with the data under H_0. To see this, we have first that

f(y, y*_1,...,y*_R | H_0) = f_0(y) Σ_x Pr(Z_0 = x | Z_N = y) Π_{r=1}^R Pr(Z_N = y*_r | Z_0 = x),

using the independence of the replicate simulations from x. But by the definition of the first part of the simulation, where (4.12) applies,

f_0(y) Pr(Z_0 = x | Z_N = y) = f_0(x) Pr(Z_N = y | Z_0 = x),

and so

f(y, y*_1,...,y*_R | H_0) = Σ_x f_0(x) { Pr(Z_N = y | Z_0 = x) Π_{r=1}^R Pr(Z_N = y*_r | Z_0 = x) },

which is a symmetric function of y, y*_1,...,y*_R, as required. Given that the data vector and simulated data vectors are exchangeable under H_0, the associated test statistic values (t, t*_1,...,t*_R) are also exchangeable outcomes under H_0. Therefore (4.11) applies for the P-value calculation.

To complete the description of the method, it remains to define the transition probability matrix Q so that the chain is irreducible with equilibrium distribution f_0(y). There are several ways to do this, all of which use ratios f_0(v)/f_0(u). For example, the Metropolis algorithm starts with a carrier Markov chain on state space ℬ having any symmetric one-step forward transition probability matrix M, and defines the one-step forward transition from state u in the desired Markov chain as follows:

• given we are in state u, select state v with probability m_uv;
• accept the transition to v with probability min{1, f_0(v)/f_0(u)}, otherwise reject it and stay in state u.

It is easy to check that the induced Markov chain has transition probabilities

q_uv = min{1, f_0(v)/f_0(u)} m_uv,   u ≠ v,

and

q_uu = m_uu + Σ_{v≠u} max{0, 1 − f_0(v)/f_0(u)} m_uv,

and from this it follows that f_0 is indeed the equilibrium distribution of the Markov chain, as required. In applications it is not necessary to calculate the probabilities m_uv explicitly, although the symmetry and irreducibility of the carrier chain must be checked. If the matrix M is not symmetric, then the acceptance probability in the Metropolis algorithm must be modified to min[1, f_0(v)m_vu/{f_0(u)m_uv}].

The crucial feature of the Markov chain method is that f_0 itself is not needed, only ratios f_0(v)/f_0(u) being involved. This means that for conditional tests, where f_0 is the conditional density for Y given S = s, only ratios of the unconditional null density for Y are needed:

f_0(v)/f_0(u) = Pr(Y = v | S = s, H_0) / Pr(Y = u | S = s, H_0) = Pr(Y = v | H_0) / Pr(Y = u | H_0).

This greatly simplifies many applications.

The realizations of the Markov chain are symmetrically tied to the artificial starting value x, and this induces a symmetric correlation among (t, t*_1,...,t*_R).


This correlation depends upon the particular construction of Q, and reduces to zero at a rate which depends upon Q as N increases. While the correlation does not affect the validity of the P-value calculation, it does affect the power of the test: the higher the correlation, the lower the power.

Example 4.3 (Logistic regression) We return to the problem of Example 4.1, which provides a very simple if artificial illustration. The data y are a binary sequence of length n with s ones, and calculations are to be conditional on Σ_j Y_j = s. Recall that direct Monte Carlo simulation is possible, since all (n choose s) possible data sequences are equally likely under the null hypothesis of constant probability of a unit response. One simple Markov chain has one-step transitions which select a pair of subscripts i, j at random, and switch y_i and y_j. Clearly the chain is irreducible, since one can progress from any one binary sequence with s ones to any other. All ratios of null probabilities f_0(v)/f_0(u) are equal to one, since all binary sequences with s ones are equally probable. Therefore if we run the Metropolis algorithm, all switches are accepted. But note that this Markov chain, while simple to implement, is inefficient and will require a large number of steps to induce approximate independence of the t*'s. The most effective Markov chain would have one-step transitions which are random permutations, and for this only one step would be required. ■

Example 4.4 (AML data) For data such as those in Example 3.9, consider testing the null hypothesis of proportional hazard functions. Denote the failure times by z_1 < z_2 < ··· < z_n, assuming no ties for the moment, and define r_ij to be the number in group i who were at risk just prior to z_j. Further, let y_j be 0 or 1 according as the failure at z_j is in group 1 or 2, and denote the hazard function at time z for group i by h_i(z). Then

Pr(Y_j = 1) = r_2j h_2(z_j) / { r_1j h_1(z_j) + r_2j h_2(z_j) } = θ_j / (a_j + θ_j),

where a_j = r_1j/r_2j and θ_j = h_2(z_j)/h_1(z_j) for j = 1,...,n. The null hypothesis of proportional hazards implies the hypothesis H_0 : θ_1 = ··· = θ_n. For the data of Example 3.9, where n = 18, the values of y and a are given in Table 4.2; one tie has been randomly split. Note that censored data contribute only to the r's: the times are not used.

Of course the Y_j's are not independent, because a_j depends upon the outcomes of Y_1,...,Y_{j−1}. However, for the purposes of illustration here we shall pretend that the a_j's are fixed, as well as the survival times and censoring times. That is, we shall treat the Y_j's as independent Bernoulli variables with probabilities as given above. Under this pretence the conditional likelihood for

[Table 4.2: Ingredients of the conditional test for proportional hazards — the failure times z_j, numbers at risk r_1j and r_2j, ratios a_j = r_1j/r_2j (with a_18 = ∞), and group indicators y_j. Failure times as in Table 3.4; at time z = 23 the failure in group 2 is taken to occur first.]

is simply

Π_{j=1}^{18} θ_j^{y_j} a_j^{1−y_j} / (a_j + θ_j).

Note that because a_18 = ∞, Y_18 must be 0 whatever the value of θ_18, and so this final response is uninformative. We therefore drop y_18 from the analysis. Having done this, we see that under H_0 the sufficient statistic for the common hazard ratio θ is S = Σ Y_j, whose observed value is s = 11.

Whatever the test statistic T, the exact conditional P-value (4.4) must be approximated. Direct simulation appears impossible, but a simple Markov chain simulation is possible. First, the state space of the chain is ℬ = {x = (x_1,...,x_n) : Σ x_j = s}, that is, all permutations of y_1,...,y_n. For any two vectors x and x̃ in the state space, the ratio of null conditional joint probabilities is

p(x | s, θ) / p(x̃ | s, θ) = Π_j a_j^{x̃_j − x_j}.

We take the carrier Markov chain to have one-step transitions which are random permutations: this guarantees fast movement over the state space. A step which moves from x to x̃ is then accepted with probability

min{ 1, Π_j a_j^{x_j − x̃_j} }.

By symmetry the reverse chain is defined in exactly the same way.

The test statistic must be chosen to match the particular alternative hypothesis thought relevant. Here we suppose that the alternative is a monotone ratio of hazards, for which T = Σ_j Y_j log(z_j) seems to be a reasonable choice. The Markov chain simulation is applied with N = 100 steps back to give the initial state x and 100 steps forward to state y*, the latter repeated R = 99 times. Of the resulting t* values, 48 are less than or equal to the observed value t = 17.75, so the P-value is (1 + 48)/(1 + 99) = 0.49. Thus there appears to be no evidence against the proportional hazards model. The average acceptance probability in the Metropolis algorithm is approximately 0.7, and results for N = 10 and N = 1000 appear indistinguishable from those for N = 100. This indicates unusually fast convergence for applications of the Markov chain method. ■
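The parallel method with a pair-switch Metropolis carrier chain can be sketched as follows. This is an illustrative implementation, not the authors' code: the data, weights and statistic are invented, and with all a_j equal it reduces to the equally-likely-sequences situation of Example 4.3.

```python
import numpy as np

def metropolis_step(x, a, rng):
    """One pair-switch Metropolis step for a binary vector x with fixed sum.
    The null probability of x is proportional to prod_j a_j^(1 - x_j) (the
    common theta cancels), so a proposed switch x -> x~ is accepted with
    probability min{1, prod_j a_j^(x_j - x~_j)}."""
    i, j = rng.choice(len(x), size=2, replace=False)
    if x[i] == x[j]:
        return x                                  # switch changes nothing
    ratio = (a[i] / a[j]) ** (x[i] - x[j])        # only slots i, j contribute
    if rng.random() < min(1.0, ratio):
        x = x.copy()
        x[i], x[j] = x[j], x[i]
    return x

def mcmc_pvalue(y, a, stat, N=100, R=99, seed=0):
    """Parallel method: N steps back from the data to a starting state x
    (same kernel, by reversibility), then R independent forward runs of N
    steps, each yielding one replicate y*."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    for _ in range(N):
        x = metropolis_step(x, a, rng)
    t = stat(y)
    t_star = []
    for _ in range(R):
        z = x.copy()
        for _ in range(N):
            z = metropolis_step(z, a, rng)
        t_star.append(stat(z))
    return (1 + sum(ts >= t for ts in t_star)) / (R + 1)

y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])      # fake binary responses
a = np.ones(len(y))                                # equal weights: permutation null
z_times = np.linspace(1.0, 2.0, len(y))            # fake "failure times"
p = mcmc_pvalue(y, a, stat=lambda x: (x * np.log(z_times)).sum())
print(p)
```

Only ratios of null probabilities appear, which is the crucial feature of the Markov chain method noted above.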

The use of R conditionally independent realizations of the Markov chain is sometimes referred to as the parallel method. In contrast is the series method, where only one realization is used. Since the successive states of the chain are dependent, a randomization device is needed to induce exchangeability. For details see Problem 4.2.

4.2.3 Parametric bootstrap tests

In many problems of course the distribution of T under H_0 will depend upon nuisance parameters which cannot be conditioned away, so that the Monte Carlo test method does not apply exactly. Then the natural approach is to fit the null model F̂_0 and use (4.5) to compute the P-value, i.e. p = Pr(T ≥ t | F̂_0). For example, for the parametric model where we are testing H_0 : ψ = ψ_0 with λ a nuisance parameter, F̂_0 would be the CDF of f(y | ψ_0, λ̂_0) with λ̂_0 the maximum likelihood estimator (MLE) of the nuisance parameter when ψ is fixed equal to ψ_0. Calculation of the P-value by (4.5) is referred to as a bootstrap test.

If (4.5) cannot be computed exactly, or if there is no satisfactory approximation (normal or otherwise), then we proceed by simulation. That is, R independent replicate samples y*_1,...,y*_R are drawn from F̂_0, and for the rth such sample the test statistic value t*_r is calculated. Then the significance probability (4.5) will be approximated by

p_boot = { 1 + #{t*_r ≥ t} } / (R + 1).    (4.13)

Ordinarily one would use a simple proportion here, but we have chosen to make the definition match that for the Monte Carlo test in (4.11).

Example 4.5 (Separate families test) Suppose that we wish to choose between the alternative model forms f_0(y | η) and f_1(y | ζ) for the PDF of the random sample y_1,...,y_n. In some circumstances it may make sense to take one model, say f_0, as a null hypothesis, and to test this against the other model as alternative hypothesis. In the notation of Section 4.1, the nuisance parameter is λ = (η, ζ) and ψ is the binary indicator of model, with null value ψ_0 = 0 and alternative value ψ_A = 1. The likelihood ratio statistic (4.7) is equivalent to the more convenient form

T = n⁻¹ log{ L_1(ζ̂)/L_0(η̂) } = n⁻¹ Σ_j log{ f_1(y_j | ζ̂) / f_0(y_j | η̂) },    (4.14)

where η̂ and ζ̂ are the MLEs and L_0 and L_1 the likelihoods under f_0 and f_1 respectively. If the two families are strictly separate, then the chi-squared approximation (4.8) does not apply. There is a normal approximation for the


null distribution of T, but this is often quite unreliable except for very large n. The parametric bootstrap provides a more reliable and simple option.

The parametric bootstrap works as follows. We generate R samples of size n by random sampling from the fitted null model f_0(y | η̂). For each sample we calculate estimates η̂* and ζ̂* by maximizing the simulated log likelihoods

ℓ*_0(η) = Σ_j log f_0(y*_j | η),   ℓ*_1(ζ) = Σ_j log f_1(y*_j | ζ),

and compute the simulated log likelihood ratio statistic t*. Then we calculate p using (4.13).

As a particular illustration, consider the failure-time data in Table 1.2. Two plausible models for this type of data are gamma and lognormal, that is

f_0(y | η) = (κ/μ)^κ y^{κ−1} exp(−κy/μ) / Γ(κ),   y > 0,

f_1(y | ζ) = (2πβ²)^{−1/2} y⁻¹ exp{ −(log y − α)²/(2β²) },   y > 0.

For these data the MLEs of the gamma mean and index are μ̂ = ȳ = 108.083 and κ̂ = 0.707, the latter being the solution to

log(κ) − h(κ) = log(ȳ) − mean(log y),

with h(κ) = d log Γ(κ)/dκ, the digamma function; here mean(log y) and s²_{log y} denote the average and sample variance of the log y_j. The MLEs of the mean and variance of the normal distribution for log Y are α̂ = mean(log y) = 3.829 and β̂² = (n − 1)s²_{log y}/n = 2.339. The test statistic (4.14) is

t = −κ̂ log(κ̂/ȳ) − κ̂α̂ + κ̂ + log Γ(κ̂) − ½ log(2πβ̂²) − ···,

whose value for the data is t = −0.465. The left panel of Figure 4.2 shows a histogram of R = 999 values of t* under sampling from the fitted gamma model: of these, 619 are greater than t and so p = 0.62. Note that the histogram has a fairly non-normal shape in this case, suggesting that a normal approximation will not be very accurate. This is true also for the (rather complicated) studentized version Z of T: the right panel of Figure 4.2 shows the normal plot of bootstrap values z*. The observed value of z is 0.4954, for which the bootstrap P-value is 0.34, somewhat smaller than that computed for t, but not changing the conclusion that there is no evidence to change from a gamma to a lognormal model for these data. There are good general reasons to studentize test statistics; see Section 4.4.1.

It should perhaps be mentioned that significance tests of this kind are not always helpful in distinguishing between models, in the sense that we could find evidence against either both or neither of them. This is especially true with small samples such as we have here. In this case the reverse test shows no evidence against the lognormal model. ■
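The gamma-versus-lognormal bootstrap test can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the data are simulated, the digamma function is approximated numerically, and the per-observation log likelihood ratio is written out from the two fitted densities.

```python
import numpy as np
from math import lgamma

def digamma(k, h=1e-6):
    # numerical derivative of log-gamma; adequate for this sketch
    return (lgamma(k + h) - lgamma(k - h)) / (2 * h)

def gamma_index_mle(y):
    """Solve log(kappa) - digamma(kappa) = log(ybar) - mean(log y) by
    bisection on a log grid (the left side is decreasing in kappa)."""
    c = np.log(y.mean()) - np.log(y).mean()
    lo, hi = 1e-3, 1e3
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if np.log(mid) - digamma(mid) > c:
            lo = mid          # kappa still too small
        else:
            hi = mid
    return np.sqrt(lo * hi)

def loglik_ratio(y):
    """Per-observation log likelihood ratio, lognormal (f1) vs gamma (f0),
    both fitted by maximum likelihood."""
    mu, kap = y.mean(), gamma_index_mle(y)
    alpha, beta2 = np.log(y).mean(), np.log(y).var()   # lognormal MLEs
    l1 = -0.5 * np.log(2 * np.pi * beta2) - alpha - 0.5
    l0 = kap * np.log(kap / mu) - lgamma(kap) + (kap - 1) * alpha - kap
    return l1 - l0

def parametric_bootstrap_p(y, R=99, seed=0):
    """Draw R samples from the fitted null (gamma) model; compare t* with t."""
    rng = np.random.default_rng(seed)
    t = loglik_ratio(y)
    kap, mu = gamma_index_mle(y), y.mean()
    t_star = [loglik_ratio(rng.gamma(kap, mu / kap, size=len(y)))
              for _ in range(R)]
    return (1 + sum(ts >= t for ts in t_star)) / (R + 1)

rng = np.random.default_rng(42)
y = rng.gamma(0.7, 150.0, size=24)   # fake failure-time-like data
p = parametric_bootstrap_p(y, R=99)
print(p)
```

Since the fake data are generated from the gamma (null) family, p should typically be unremarkable, as in the worked example.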

[Figure 4.2: Null hypothesis resampling for failure data. Left panel shows histogram of t* under gamma sampling. Right panel shows normal plot of z*; R = 999 and gamma parameters μ̂ = 108.0833, κ̂ = 0.7065; dotted line is theoretical N(0,1) approximation.]

4.2.4 Graphical tests

Graphical methods are popular in model checking: examples include normal and half-normal plots of residuals in regression, plots of Cook distance in regression, plots of nonparametric hazard function estimates, and plots of intensity functions in spatial analysis (Section 8.3). In many cases the nominal shape of the plot is a straight line, which aids the detection of deviation from a null model. Whatever the situation, informed interpretation of the plot requires some notion of its probable variation under the model being checked, unless the sample size is so large that deviation is obvious (cf. the plot of resampling results in Figure 4.2). The simplest and most common approach is to superimpose a "probable envelope", to which the original data plot is compared. This probable envelope is obtained by Monte Carlo or parametric resampling methods.

Graphical tests are not usually appropriate when a single specific alternative model is of interest. Rather they are used to suggest alternative models, depending upon the manner in which such a plot deviates from its null expected behaviour, or to find suspect data. (Indeed graphical tests are not tests in the usual sense, because there is usually no simple notion of "rejectable" behaviour: we comment more fully on this below.)

Suppose that the graph plots T(a) versus a for a ∈ 𝒜, a bounded set. The observed plot is {t(a) : a ∈ 𝒜}. For example, in a normal plot 𝒜 is a set of normal quantiles and the values of t(a) are the ordered values of a sample, possibly studentized. The idea of the plot is to compare t(a) with the probable behaviour of T(a) for all a ∈ 𝒜 when H_0 is true.

Example 4.6 (Normal plot) Consider the data in Table 3.1, and suppose in

[Figure 4.3: Normal plot of n = 13 studentized values for final sample in Table 3.1.]

... if t* ≥ t occurred 50 times in the first 100 samples, then it is reasonably certain that p will exceed 0.25, say, for much larger R, so there is little point in simulating further. On the other hand, if we observed t* ≥ t only five times, then it would be worth sampling further to determine the level of significance more accurately.

One effect of not computing p exactly is to weaken the power of the test, essentially because the critical region of a fixed-level test has been randomly displaced. The effect can be quantified approximately as follows. Consider testing at level α, which is to say reject H_0 if p ≤ α. If the integer k is chosen equal to (R + 1)α, then the test rejects H_0 when t*_{(R+1−k)} < t. For the alternative hypothesis H_A, the power of the test is

π_R(α, H_A) = Pr(reject H_0 | H_A) = Pr( T*_{(R+1−k)} < T | H_A ).

To evaluate this probability, suppose for simplicity that T has a continuous distribution, with PDF g_0(t) and CDF G_0(t) under H_0, and density g_A(t) under H_A. Then from the standard result for the PDF of an order statistic we have π_R(α, H_A) =

π_R(α, H_A) = ∫∫_{x<t} R (R−1 choose k−1) G_0(x)^{R−k} {1 − G_0(x)}^{k−1} g_0(x) g_A(t) dx dt.

After a change of variable and some rearrangement of the integral, this becomes

π_R(α, H_A) = ∫_0^1 π_∞(u, H_A) h_R(u; α) du,    (4.18)

where π_∞(u, H_A) is the power of the test using the exact P-value, and h_R(u; α) is the beta density on [0, 1] with indices (R + 1)α and (R + 1)(1 − α).

The next part of the calculation relies on π_∞(α, H_A) being a concave function of α, as is usually the case. Then a lower bound for π_∞(u, H_A) is one which equals u π_∞(α, H_A)/α for u ≤ α and π_∞(α, H_A) for u > α. It follows by applying (4.18) to this lower bound, and some manipulation, that

π_∞(α, H_A) − π_R(α, H_A) ≤ π_∞(α, H_A) α⁻¹ ∫_0^1 |u − α| h_R(u; α) du.

... against H_1 is p = (48 + 1)/(999 + 1) = 0.049. Application of the permutation test gave the same result. It is worth stressing again that because the resampling method is wholly computational, any sensible test statistic is as easy to use as any other. So here, if outliers were present, it would be just as easy, and perhaps more sensible, to choose t to be the difference of trimmed means.
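A two-sample bootstrap test of this kind can be sketched as follows. This is an illustrative implementation with simulated data, not the example's own code; it imposes H_0 by translating each sample to the pooled mean while keeping its own spread, which is only one of the possible null models discussed here.

```python
import numpy as np

def bootstrap_test_equal_means(y1, y2, R=999, seed=0):
    """Nonparametric bootstrap test of equal means.  H0 is imposed by
    translating each sample to the pooled mean (its own spread is kept;
    a fully pooled version would also equate the shapes).  The statistic
    is the difference of sample means."""
    rng = np.random.default_rng(seed)
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    mu = np.concatenate([y1, y2]).mean()
    null1, null2 = y1 - y1.mean() + mu, y2 - y2.mean() + mu
    t = y2.mean() - y1.mean()
    t_star = [rng.choice(null2, size=len(y2)).mean()
              - rng.choice(null1, size=len(y1)).mean() for _ in range(R)]
    return (1 + sum(ts >= t for ts in t_star)) / (R + 1)

rng = np.random.default_rng(7)
y1 = rng.normal(0.0, 1.0, 30)    # fake samples with a genuine mean shift
y2 = rng.normal(1.5, 1.0, 30)
p = bootstrap_test_equal_means(y1, y2, R=499)
print(p)
```

Replacing `.mean()` in the statistic by a trimmed mean is a one-line change, which is the point made above about the freedom to choose any sensible statistic.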

4.4 Nonparametric Bootstrap Tests

The question is: do we gain or lose anything by assuming that the two distributions have the same shape? ■

The particular null fitted model used in the previous example was suggested in part by the permutation test, and is clearly not the only possibility. Indeed, a more reasonable null model in the context would be one which allowed different variances for the two populations sampled: an analogous model is used in Example 4.14 below. So in general there can be many candidates for null model in the nonparametric case, each corresponding to different restrictions imposed in addition to H_0. One must judge which is most appropriate on the basis of what makes sense in the practical context.

Semiparametric null models

If data are described by a semiparametric model, so that some features of underlying distributions are described by parameters, then it may be relatively easy to specify a null model. The following example illustrates this.

Example 4.14 (Comparison of several means) For the gravity data in Example 3.2, one point that we might check before proceeding with an aggregate estimation is that the underlying means for all eight series are in fact the same. One plausible model for the data, as mentioned in Section 3.2, is

y_ij = μ_i + σ_i ε_ij,   j = 1,...,n_i,   i = 1,...,8,

where the ε_ij come from a single distribution G. The null hypothesis to be tested is H_0 : μ_1 = ··· = μ_8, with general alternative. For this an appropriate test statistic is given by

t = Σ_{i=1}^8 w_i (ȳ_i − μ̂_0)²,   w_i = n_i/s_i²,

with μ̂_0 = Σ w_i ȳ_i / Σ w_i the null estimate of the common mean; here ȳ_i and s_i² are the average and sample variance for the ith series. The null distribution of T would be approximately χ²_7 were it not for the effect of small sample sizes. So a bootstrap approach is sensible.

The null model fit includes μ̂_0 and the estimated variances

v̂_0i = (n_i − 1)s_i²/n_i + (ȳ_i − μ̂_0)².

The null model studentized residuals

e_ij = (y_ij − μ̂_0) / { v̂_0i − (Σ w_i)⁻¹ }^{1/2},

when plotted against normal quantiles, suggest mild non-normality. So, to be safe, we apply a nonparametric bootstrap. Datasets are simulated under the null model

y*_ij = μ̂_0 + v̂_0i^{1/2} ε*_ij,

Table 4.3:

  i    ȳ_i     s_i²    v̂_0i     w_i
  1   66.4   370.6   474.4   0.022
  2   89.9   233.9   339.9   0.047
  3   77.3   248.3   222.3   0.036
  4   81.4    68.8    67.8   0.116
  5   75.3    13.4    23.1   0.599
  6   78.9    34.1    31.1   0.323
  7   77.5    22.4    21.9   0.579
  8   80.4    11.3    13.5   1.155

with the ε*_ij randomly sampled from the pooled residuals {e_ij, i = 1,...,8, j = 1,...,n_i}. For each such simulated dataset we calculate sample averages and variances, then weights, the pooled mean, and finally t*. Table 4.3 contains a summary of the null model fit, from which we calculate μ̂_0 = 78.6 and t = 21.275. A set of R = 999 bootstrap samples gave the histogram of t* values in the left panel of Figure 4.10. Only 29 values exceed t = 21.275, so p = 0.030. The right panel of the figure plots ordered t* values against quantiles of the χ²_7 approximation, which is off by a factor of about 1.24 and gives the distorted P-value 0.0034. A normal-error parametric bootstrap gives results very similar to the nonparametric bootstrap. ■

Table 4.3: Summary statistics for the eight samples in the gravity data, plus ingredients for the significance test. The weighted mean is μ̂_0 = 78.6.

Figure 4.10: Resampling results for comparison of the means of the eight series of gravity data. Left panel: histogram of R = 999 values of t* under nonparametric resampling from the null model with pooled studentized residuals; the unshaded area to the right of the observed value t = 21.275 gives p = 0.029. Right panel: ordered t* values versus χ²_7 quantiles; the dotted line is the theoretical approximation.
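The Example 4.14 recipe can be sketched as follows. This is an illustrative implementation with simulated series, not the authors' code; for simplicity it drops the small (Σ w_i)⁻¹ correction when studentizing the pooled residuals.

```python
import numpy as np

def pooled_residual_test(samples, R=999, seed=0):
    """Null-model resampling for equality of several means: fit the weighted
    common mean mu0, form null variances vhat0i = (n_i-1)s_i^2/n_i +
    (ybar_i - mu0)^2, pool the scaled residuals, then simulate
    y*_ij = mu0 + vhat0i^(1/2) eps*_ij and recompute t each time."""
    rng = np.random.default_rng(seed)
    samples = [np.asarray(s, float) for s in samples]

    def statistic(samps):
        w = np.array([len(s) / s.var(ddof=1) for s in samps])
        ybar = np.array([s.mean() for s in samps])
        mu0 = (w * ybar).sum() / w.sum()
        return (w * (ybar - mu0) ** 2).sum(), mu0

    t, mu0 = statistic(samples)
    v0 = [(len(s) - 1) * s.var(ddof=1) / len(s) + (s.mean() - mu0) ** 2
          for s in samples]
    pooled = np.concatenate([(s - mu0) / np.sqrt(v)
                             for s, v in zip(samples, v0)])
    t_star = []
    for _ in range(R):
        sim = [mu0 + np.sqrt(v) * rng.choice(pooled, size=len(s))
               for s, v in zip(samples, v0)]
        t_star.append(statistic(sim)[0])   # refit weights and mean each time
    return (1 + sum(ts >= t for ts in t_star)) / (R + 1)

rng = np.random.default_rng(3)
series = [rng.normal(80, 5, n) for n in (8, 11, 9, 13)]  # fake gravity-like data
p = pooled_residual_test(series, R=199)
print(p)
```

Refitting the weights and pooled mean inside the loop, as here, mirrors the text's instruction to recompute averages, variances, weights and μ̂_0 for each simulated dataset.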


Example 4.15 (Ratio test) Suppose that, as in Example 1.2, each observation y is a pair (u, x), and that we are interested in the ratio of means θ = E(X)/E(U). In particular suppose that we wish to test the null hypothesis H_0 : θ = θ_0. This problem could arise in a variety of contexts, and the context would help to determine the relevant null model. For example, we might have a paired-comparison experiment where the multiplicative effect θ is to be tested. Here θ_0 would be 1, and the marginal distributions of U and X should be the same under H_0. One natural null model F̂_0 would then be the symmetrized EDF, i.e. the EDF of the expanded data (u_1, x_1),...,(u_n, x_n),(x_1, u_1),...,(x_n, u_n). ■

Fully nonparametric null models

In those few situations where the context of the problem does not help identify a suitable semiparametric null model, it is in principle possible to form a wholly nonparametric null model F̂_0. Here we look at one general way to do this.

Suppose the test involves k distributions F_1,...,F_k for which the null hypothesis imposes a constraint, H_0 : τ(F_1,...,F_k) = 0. Then we can obtain a null model by nonparametric maximum likelihood, or a similar method, by adding the constraint to the usual derivation of the EDFs as MLEs. To be specific, suppose that we force the estimates of F_1,...,F_k to be supported on the corresponding sample values, as the EDFs are. Then the estimate for F_i will attach probabilities p_i = (p_i1,...,p_in_i) to sample values y_i1,...,y_in_i; the unconstrained EDF F̂_i corresponds to p̂_i = n_i⁻¹(1,...,1). Now measure the discrepancy between a possible F_i and the EDF F̂_i by d(p_i, p̂_i), say, such that the EDF probabilities p̂_i minimize this when no constraints other than Σ_j p_ij = 1 are imposed. Then a nonparametric null model is given by the probabilities which minimize the aggregate discrepancy subject to τ(F_1,...,F_k) = 0. That is, the null model minimizes the Lagrange expression

Σ_{i=1}^k d(p_i, p̂_i) + λ t(p_1,...,p_k),    (4.22)

where t(p_1,...,p_k) is a re-expression of the original constraint function τ(F_1,...,F_k). We denote the solutions of this constrained minimization problem by p̂_i⁰, i = 1,...,k. The choice of discrepancy function d(·,·) that corresponds to maximum likelihood estimation is the aggregate information distance

Σ_{i=1}^k Σ_{j=1}^{n_i} p̂_ij log(p̂_ij/p_ij),    (4.23)

and a useful alternative is the reverse information distance Σ_{i=1}^k Σ_{j=1}^{n_i} p_ij log(p_ij/p̂_ij).

... p = Pr*( X̄* − Ȳ* ≥ x̄ − ȳ ). This can be rewritten as

p = Pr{ m⁻¹G_m − n⁻¹G_n ≥ (m + n)(u − 1)/(mu + n) },    (4.35)

where u = x̄/ȳ, and G_m and G_n are independent gamma random variables with indices m and n respectively and unit scale parameters. The bootstrap P-value (4.35) does not have a uniform distribution under the null hypothesis, so P = p does not correspond to error rate p. This is fully corrected using the adjustment (4.34). To see this, write (4.35) as p = h(u), so that p_0(F̂*) equals Pr**(T** ≥ T* | F̂*_0) = h(U*), where U* = X̄*/Ȳ*. Since h(·) is decreasing, it follows that

p_adj = Pr*{ h(U*) ≤ h(u) | x, y } = Pr*( U* ≥ u | x, y ) = Pr( F_{2m,2n} ≥ u ),


which is the P-value of the exact test. Therefore p_adj ...

... versus expected uniform order statistics: the straight line corresponds to the theoretical chi-squared approximation for T. The bootstrap P-value turns out to be quite non-uniform. A double bootstrap calculation with R = M = 999 gives p_adj ...

... p*_r → u_r such that the u_r are a random sample from the uniform distribution on [0, 1]. In this case there is no need to adjust the bootstrap P-value, so p_adj = p. Under this assumption (M + 1)p*_r is almost a Binom(M, u_r) random variable, so that equation (4.36) can be approximated by

p_adj = { 1 + Σ_{r=1}^R X_r } / (R + 1),

where X_r = I{ Binom(M, u_r) ≤ (M + 1)p }. We can now calculate the simulation mean and variance of p_adj by using the fact that

E(X_r^k | u_r) = Pr{ Binom(M, u_r) ≤ (M + 1)p }   for k = 1, 2.

First we have that for all r

E(X_r^k) = Σ_{j=0}^{[(M+1)p]} ∫_0^1 (M choose j) u^j (1 − u)^{M−j} du = { [(M + 1)p] + 1 } / (M + 1),

where [z] is the integer part of z. Since p_adj is proportional to the average of independent X_r's, it follows that

E(p_adj) = ( 1 + R{ [(M + 1)p] + 1 }/(M + 1) ) / (R + 1),

180

4 ■ Tests

which tends to the correct answ er p as R, M —>00, and , . . R [( M + 1)p](M + l - [ ( M + l)p]) var(padj) = A simple aggregate m easure o f sim ulation erro r is the m ean squared error relative to p, M S E ( p 3di) =

[(M + l)p]{M + l - [ ( M + l)p]} R ( M + l )2

N um erical evaluations o f this result suggest th a t M = 249 would be a safe choice. If 0.01 < p < 0.10 then M = 99 would be satisfactory, while M = 49 would be adequate for larger p. N ote th a t two assum ptions were m ade in the calculation, b o th o f which are harm less. First, we assum ed th a t p was independent o f the t ', w hereas in fact it w ould likely be calculated from the sam e values. Secondly, o u r m ain interest is in cases where P-values are not exactly uniform ly distributed. Problem 4.12 suggests a m ore flexible calculation, from which very sim ilar conclusions emerge.
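The mean squared error formula above makes the choice of M easy to explore numerically. A minimal sketch (the function name is ours), tabulating the root-MSE as a fraction of p for R = 999 outer samples:

```python
from math import floor, sqrt

def mse_padj(p, M, R):
    """MSE of the simulated adjusted P-value:
    [(M+1)p]{M + 1 - [(M+1)p]} / {R (M+1)^2}, with [z] the integer part."""
    q = floor((M + 1) * p)
    return q * (M + 1 - q) / (R * (M + 1) ** 2)

# Root-MSE relative to p, for R = 999
for M in (49, 99, 249):
    row = "   ".join(f"p={p}: {sqrt(mse_padj(p, M, 999)) / p:.2f}"
                     for p in (0.01, 0.05, 0.10))
    print(f"M = {M:3d}:   {row}")
```

Note that for M = 49 and p = 0.01 the integer part [(M+1)p] is zero and the approximation collapses, so larger M is needed for small p, in line with the recommendations above.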

4.6 Estimating Properties of Tests

A statistical test involves two steps, collection of data and application of a particular test statistic to those data. Both steps involve choice, and resampling methods can have a role to play in such choices by providing estimates of test power.

Estimation of power

As regards collection of data, in simple problems of the kind under discussion in this chapter, the statistical contribution lies in recommendation of sample sizes via considerations of test power. If it is proposed to use test statistic T, and if the particular alternative H_A to the null hypothesis H_0 is of primary interest, then the power of the test is

π(p, H_A) = Pr(T > t_p | H_A),

where t_p is defined by Pr(T > t_p | H_0) = p. In the simplified language of testing theory, if we fix p and decide to reject H_0 when t > t_p, then π(p, H_A) is the chance of rejection when H_A is true. An alternative specification is in terms of E(P | H_A), the expected P-value. In many problems hypotheses are expressed in terms of parameters, and then power can be evaluated for arbitrary parameter values to give a power function.

What is of interest to us here is the use of resampling to assess the power of a test, either as an aid to determination of appropriate sample sizes for a particular test, or as a way to choose from a set of possible tests.


Suppose, then, that a pilot set of data y_1, ..., y_n is in hand, and that the model description is semiparametric (Section 3.3). The pilot data can be used to estimate the nonparametric component of the model, and to this can be added arbitrary values of the parametric component. This provides a family of alternative hypothesis models from which to simulate data and test statistic values. From these simulations we obtain approximations of test power, provided we have critical values t_p for the test statistic. This last condition will not always be met, but in many problems there will at least be a simple approximation, for example N(0, 1) if we are using a studentized statistic. For many nonparametric tests, such as those based on ranks, critical values are distribution-free, and so are available. The following example illustrates this idea.

Example 4.23 (Maize height data) The EDFs plotted in the left panel of Figure 4.14 are for heights of maize plants growing in two adjacent rows, and differing only in a pollen sterility factor. The two samples can be modelled approximately by a semiparametric model with an unspecified baseline distribution F and one median-shift parameter δ. For analysis of such data it is proposed to test H_0 : δ = 0 using the Wilcoxon test. Whether or not there are enough data can be assessed by estimating the power of this test, which does depend upon F.

Denote the observations in sample i by y_ij, j = 1, ..., n_i. The underlying distributions are assumed to have the forms F(y) and F(y − δ), where δ is estimated by the difference in sample medians, δ̂. To estimate F we subtract δ̂ from the second sample to give ỹ_2j = y_2j − δ̂. Then F̂ is the pooled EDF of the y_1j s and ỹ_2j s. For these data n_1 = n_2 = 12 and δ̂ = −4.5. The right panel of Figure 4.14 plots EDFs of the y_1j s and ỹ_2j s.

The next step is to simulate data for selected values of δ and selected sample sizes N_1 and N_2 as follows. For group 1, sample data y*_11, ..., y*_1N_1 from F̂(y), i.e. randomly with replacement from

y_11, ..., y_1n_1, ỹ_21, ..., ỹ_2n_2,

and for group 2, sample data y*_21, ..., y*_2N_2 from F̂(y − δ), i.e. randomly with replacement from

y_11 + δ, ..., y_1n_1 + δ, ỹ_21 + δ, ..., ỹ_2n_2 + δ.

Then calculate test statistic t*. With R repetitions of this, the power of the test at level p is the proportion of times that t* > t_p, where t_p is the critical value of the Wilcoxon test for specified N_1 and N_2. In this particular case, the simulations show that the Wilcoxon test at level p = 0.01 has power 0.26 for δ = δ̂ and the observed sample sizes.

Figure 4.14 Power comparison for maize height data (Hand et al., 1994, p. 130). Left panel: EDFs of plant height for two groups. Right panel: EDFs for group 1 (unadjusted) and group 2 (adjusted by estimated median-shift δ̂ = −4.5).

Additional calculations show that both sample sizes need to be increased from 12 to at least 33 to have power 0.8 for δ = δ̂. ■

If the proposed test uses the pivot method of Section 4.4.1, then calculations of sample size can be done more simply. For example, for a scalar θ consider a two-sided test of H_0 : θ = θ_0 with level 2α based on the pivot Z. The power function can be written

π(2α, θ) = 1 − Pr{ z_{α,N} + (θ_0 − θ)/V_N^{1/2} ≤ Z_N ≤ z_{1−α,N} + (θ_0 − θ)/V_N^{1/2} },    (4.39)

where the subscript N indicates sample size. A rough approximation to this power function can be obtained as follows. First simulate R samples of size N from F̂, and use these to approximate the quantiles z_{α,N} and z_{1−α,N}. Next set v_N^{1/2} = n^{1/2} v_n^{1/2} / N^{1/2}, where v_n is the variance estimate calculated from the pilot data. Finally, approximate the probability (4.39) using the same R bootstrap samples.

Sequential tests

Similar sorts of calculations can be done for sequential tests, where one important criterion is terminal sample size. In this context simulation can also be used to assess the likely eventual sample size, given data y_1, ..., y_n at an interim stage of a test, with a specified protocol for termination. This can be done by simulating data continuation y*_{n+1}, y*_{n+2}, ... up to termination, by sampling from fitted models or EDFs, as appropriate. From repetitions of this simulation one obtains an approximate distribution for terminal sample size N.
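The power estimation of Example 4.23 can be sketched in a few lines. This is an illustration only: the pilot data below are invented, the function name is ours, and the Wilcoxon critical value uses the normal approximation to the rank-sum statistic (with midranks, since resampling from an EDF produces ties):

```python
import random
from statistics import NormalDist, median

def wilcoxon_power(pilot1, pilot2, theta, n1, n2, alpha=0.05, R=500, seed=1):
    """Estimate the power of the two-sided Wilcoxon rank-sum test at level alpha
    for median shift theta, resampling from the pooled pilot EDF."""
    rng = random.Random(seed)
    delta = median(pilot2) - median(pilot1)            # estimated median shift
    base = list(pilot1) + [y - delta for y in pilot2]  # pooled estimate of F
    mean_w = n2 * (n1 + n2 + 1) / 2                    # null mean of sample-2 rank sum
    sd_w = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    reject = 0
    for _ in range(R):
        s1 = rng.choices(base, k=n1)
        s2 = [y + theta for y in rng.choices(base, k=n2)]
        pooled = sorted((v, g) for g, sample in enumerate((s1, s2)) for v in sample)
        w, j = 0.0, 0
        while j < len(pooled):
            k = j
            while k < len(pooled) and pooled[k][0] == pooled[j][0]:
                k += 1
            midrank = (j + 1 + k) / 2                  # average of ranks j+1 .. k
            w += midrank * sum(1 for i in range(j, k) if pooled[i][1] == 1)
            j = k
        if abs(w - mean_w) / sd_w > crit:
            reject += 1
    return reject / R

power = wilcoxon_power([8, 10, 9, 12, 11, 7, 10, 13],
                       [12, 15, 13, 16, 14, 11, 15, 17],
                       theta=5.0, n1=15, n2=15, alpha=0.05, R=300)
print(round(power, 2))
```

With a large shift the estimated power approaches one, while at θ = 0 it stays near the nominal level, which is the basic check that the simulation is behaving sensibly.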


4.7 Bibliographic Notes

The standard theory of significance tests is described in Chapters 3-5 and 9 of Cox and Hinkley (1974). For detailed treatment of the mathematical theory see Lehmann (1986). In recent years much work has been done on obtaining improved distributional approximations for likelihood-based statistics, and most of this is covered by Barndorff-Nielsen and Cox (1994).

Randomization and permutation tests have long histories. R. A. Fisher (1935) introduced randomization tests as a device for explaining and justifying significance tests, both in simple cases and for complicated experimental designs: the randomization used in selecting a design can be used as the basis for inference, without appeal to specific error models. For a recent account see Manly (1991). A general discussion of how to apply randomization in complex problems is given by Welch (1990). Permutation tests, which are superficially similar to randomization tests, are specifically nonparametric tests designed to condition out the unknown sampling distribution. The theory was developed by Pitman (1937a,b,c), and is summarized by Lehmann (1986). More recently Romano (1989, 1990) has examined properties of permutation tests and their relation to bootstrap tests for a variety of problems.

Monte Carlo tests were first suggested by Barnard (1963) and are particularly popular in spatial statistics, as described by Ripley (1977, 1981, 1987) and Besag and Diggle (1977). Graphical tests for regression diagnostics are described by Atkinson (1985), and Ripley (1981) applies them to model-checking in spatial statistics. Markov chain Monte Carlo methods for conditional tests were introduced by Besag and Clifford (1989); applications to contingency table analysis are given by Forster, McDonald and Smith (1996) and Smith, Forster and McDonald (1996), who give additional references. Gilks et al. (1996) is a good general reference on Markov chain Monte Carlo methods, including design of simulation. The effect of simulation size R on power for Monte Carlo tests (with independent simulations) has been considered by Marriott (1979), Jockel (1986) and by Hall and Titterington (1989); the discussion in Section 4.2.5 follows Jockel. Sequential calculation of P-values is described by Besag and Clifford (1991) and Jennison (1992).

The use of tilted EDFs was introduced by Efron (1981b), and has subsequently had a strong impact on confidence interval methods; see Chapters 5 and 10. Double bootstrap adjustment of P-values is discussed by Beran (1988), Loh (1987), Hinkley and Shi (1989), and Hall and Martin (1988). Applications are described by Newton and Geyer (1994). Geyer (1995) discusses tests for inequality-constrained hypotheses, which sheds light on possible inconsistency


of bootstrap tests and suggests remedies. For references to discussions of improved simulation methods, see Chapter 9. A variety of methods and applications for resampling in multiple testing are covered in the books by Noreen (1989) and Westfall and Young (1993). Various aspects of resampling in the choice of test are covered in papers by Collings and Hamilton (1988), Hamilton and Collings (1991), and Samawi (1994). A general theoretical treatment of power estimation is given by Beran (1986). The brief discussion of adaptive tests in Section 4.4.2 is based on Donegani (1991), who refers to previous work on the topic.

4.8 Problems

1

For the dispersion test of Example 4.2, y_1, ..., y_n are hypothetically sampled from a Poisson distribution. In the Monte Carlo test we simulate samples from the conditional distribution of Y_1, ..., Y_n given ∑ Y_j = s, using a Markov chain on states (u_1, ..., u_n) whose one-step transitions involve only adding and subtracting 1 from two randomly selected u's. (Note that zero counts must not be reduced.) Such an algorithm might be slow. Suggest a faster alternative. (Section 4.2)

2

Suppose that X_1, ..., X_n are continuous and have the same marginal CDF F, although they are not independent. Let I be a random integer between 1 and n. Show that rank(X_I) has a uniform distribution on {1, 2, ..., n}. Explain how to apply this result to obtain an exact Monte Carlo test using one realization of a suitable Markov chain. (Section 4.2.2; Besag and Clifford, 1989)

3

Suppose that we have an m × m contingency table with entries y_ij which are counts.
(a) Consider the null hypothesis of row-column independence. Show that the sufficient statistic s_0 under this hypothesis is the set of row and column marginal totals. To assess the significance of the likelihood ratio test statistic conditional on these totals, a Markov chain Monte Carlo simulation is used. Develop a Metropolis-type algorithm using one-step transitions which modify the contents of a randomly selected tetrad y_ik, y_il, y_jk, y_jl, where i ≠ j, k ≠ l.
(b) Now consider the null hypothesis of quasi-symmetry, which implies that in the loglinear model for mean cell counts, log E(Y_ij) = μ + α_i + β_j + γ_ij, the interaction parameters satisfy γ_ij = γ_ji for all i, j. Show that the sufficient statistic s_0 under this hypothesis is the set of totals y_ij + y_ji, i ≠ j, together with the row and column totals and the diagonal entries. Again a conditional test is to be applied. Develop a Metropolis-type algorithm for Markov chain Monte Carlo simulation using one-step transitions which involve pairs of symmetrically placed tetrads. (Section 4.2.2; Smith et al., 1996)

4

Suppose that a one-sided bootstrap test at level α is to be applied with R simulated samples. Then the null hypothesis will be rejected if and only if the number of t*'s exceeding t is no greater than k = (R + 1)α − 1. If k_r is the number of t*'s exceeding t in the first r simulations, for what values of k_r would it be unnecessary to continue simulation? (Section 4.2.5; Jennison, 1992)

5

(a) Consider the following rule for choosing the number of simulations in a Monte Carlo test. Choose k, and generate simulations t*_1, t*_2, ... until the first l for which k of the t* exceed the observed value t; then declare P-value p = (k + 1)/(l + 1). Let the random variables corresponding to l and p be L and P. Show that

Pr{P ≤ (k + 1)/(l + 1)} = Pr(L ≥ l) = k/l,    l = k, k + 1, ...,

and deduce that L has infinite mean. Show that P has the distribution of a U(0, 1) random variable rounded to the nearest achievable significance level 1, k/(k + 1), k/(k + 2), ..., and deduce that the test is exact.
(b) Consider instead stopping immediately if k of the t* exceed t at any l < R, and anyway stopping when l = R, at which point m values exceed t. Show that this rule gives achievable significance levels

p = (k + 1)/(l + 1) if sampling stops at some l < R, and p = (m + 1)/(R + 1), m ≤ k, otherwise.

Show that E(L) = k + k ∑_{l=k+1}^{R} l^{−1}, and evaluate this with k = 49 and 9 for R = 999. (Section 4.2.5; Besag and Clifford, 1991)
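Under the uniform approximation, the expected stopping time E(L) = k + k ∑_{l=k+1}^{R} l^{−1} of part (b) is quick to evaluate (a sketch):

```python
def expected_stop(k, R):
    # E(L) = k + k * sum_{l=k+1}^{R} 1/l under the uniform P-value approximation
    return k + k * sum(1.0 / l for l in range(k + 1, R + 1))

for k in (49, 9):
    print(f"k = {k:2d}, R = 999:  E(L) = {expected_stop(k, 999):.1f}")
```

Both values are far below R = 999, which is the attraction of the sequential rule.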

6

Suppose that n subjects are allocated randomly to each of two treatments, A and B. In fact each subject falls in one of two relevant groups, such as gender, and the treatment allocation frequencies differ between groups. The response y_ij for the jth subject in the ith group is modelled as y_ij = γ_i + τ_{k(i,j)} + ε_ij, where τ_A and τ_B are treatment effects and k(i, j) is A or B according to which treatment was allocated to the subject. Our interest is in testing H_0 : τ_A = τ_B with alternative that τ_A < τ_B, and the test statistic chosen is

T = ∑_{i,j: k(i,j)=B} Y_ij − ∑_{i,j: k(i,j)=A} Y_ij.

(a) Describe how to calculate a permutation P-value for the observed value t using the method described above Example 4.12.
(b) A different calculation of the P-value is possible which conditions on the observed covariates, i.e. on the treatment allocation frequencies in the two groups. The idea is to first eliminate the group effects by reducing the data to differences d_ij = y_ij − y_{i,j+1}, and then to note that the joint probability of these differences under H_0 is constant under permutations of data within groups. That is, the minimal sufficient statistic s_0 under H_0 is the set of differences Y_{i(j)} − Y_{i(j+1)}, where Y_{i(1)} ≤ Y_{i(2)} ≤ ... are the ordered values within the ith group. Show carefully how to calculate the P-value for t conditional on s_0.
(c) Apply the unconditional and conditional permutation tests to the following data:

         Group 1       Group 2
    A    3   5   4     4   1
    B    0             2   1

(Sections 4.3, 6.3.2; Welch and Fahey, 1994)

7

A randomized matched-pair experiment to compare two treatments produces paired responses from which the paired differences d_j = y_1j − y_2j are calculated for j = 1, ..., n. The null hypothesis H_0 of no treatment difference implies that the d_j s are sampled from a distribution that is symmetric with mean zero, whereas the alternative hypothesis implies a positive mean difference. For any test statistic t, such as d̄, the exact randomization P-value Pr(T* ≥ t | H_0) is calculated under the null resampling model

d*_j = S_j d_j,    j = 1, ..., n,

where the S_j are independent and equally likely to be +1 and −1. What would be the corresponding nonparametric bootstrap sampling model F̂_0? Would the resulting bootstrap P-value differ much from the randomization P-value?

See Practical 4.4 to apply the randomization and bootstrap tests to the following data, which are differences of measurements in eighths of an inch on cross- and self-fertilized plants grown in the same pot (taken from R. A. Fisher's famous discussion of Darwin's experiment):

49  −67  8  16  6  23  28  41  14  29  56  24  75  60  −48

(Sections 4.3, 4.4; Fisher, 1935, Table 3)
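For these data the randomization P-value of Problem 4.7 can be computed exactly, since complete enumeration of the 2^15 sign vectors is feasible (a Python sketch, rather than the S code of the Practicals):

```python
from itertools import product

d = [49, -67, 8, 16, 6, 23, 28, 41, 14, 29, 56, 24, 75, 60, -48]
t_obs = sum(d)  # using the sum is equivalent to using dbar here

exceed = sum(1 for signs in product((1, -1), repeat=len(d))
             if sum(s * x for s, x in zip(signs, d)) >= t_obs)
p = exceed / 2 ** len(d)  # exact one-sided randomization P-value
print(p)
```

The resulting one-sided P-value is small enough to cast doubt on the null hypothesis of no treatment difference, in agreement with Fisher's analysis.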

8

For the two-sample problem of Example 4.16, consider fitting the null model by maximum likelihood. Show that the solution probabilities are given by

p_{1j,0} = 1 / { n_1 (α + λ y_1j) },    p_{2j,0} = 1 / { n_2 (β − λ y_2j) },

where α, β and λ are the solutions to the equations ∑ p_{1j,0} = 1, ∑ p_{2j,0} = 1, and ∑ y_1j p_{1j,0} = ∑ y_2j p_{2j,0}. Under what conditions does this solution not exist, or give negative probabilities? Compare this null model with the one used in Example 4.16.

9

For the ratio-testing problem of Example 4.15, obtain the nonparametric MLE of the joint distribution of (U, X). That is, if p_j is the probability attached to the data pair (u_j, x_j), maximize ∏ p_j subject to ∑ p_j (x_j − θ_0 u_j) = 0. Verify that the resulting distribution is the EDF of (U, X) when θ_0 = x̄/ū. Hence develop a numerical algorithm for calculating the p_j s for general θ_0.
Now choose probabilities p_1, ..., p_n to minimize the distance d(p, q) = ∑ p_j log p_j − ∑ p_j log q_j with q = (1/n, ..., 1/n), subject to ∑ (x_j − θ_0 u_j) p_j = 0. Show that the solution is the exponential tilted EDF

p_j ∝ exp{ η (x_j − θ_0 u_j) }.

Verify that for small values of θ_0 − x̄/ū these p_j s are approximately the same as those obtained by the MLE method. (Section 4.4; Efron, 1981b)

10

Suppose that we wish to test the reduced-rank model H_0 : g(θ) = 0, where g(·) is a p_1-dimensional reduction of p-dimensional θ. For the studentized pivot method we take Q = {g(T) − g(θ)}^T V_g^{−1} {g(T) − g(θ)}, with data test value q_0 = g(t)^T v_g^{−1} g(t), where v_g estimates var{g(T)}. Use the nonparametric delta method to show that v_g = ġ(t) v_L ġ(t)^T, where ġ(θ) = ∂g(θ)/∂θ^T. Show how the method can be applied to test equality of p means given p independent samples, assuming equal population variances. (Section 4.4.1)

11

In a parametric situation, suppose that an exact test is available with test statistic U, that S is sufficient under the null hypothesis, but that a parametric bootstrap test is carried out using T rather than U. Will the adjusted P-value p_adj always produce the exact test? (Section 4.5)

12

In calculating the mean squared error for the simulation approximation to the adjusted P-value, it might be more reasonable to assume that P-values u_r follow a Beta distribution with parameters a and b which are close to, but not equal to, one. Show that in this case

E(X_r^k) = ∑_{j=0}^{[(M+1)p]−1} Γ(M+1) Γ(a+j) Γ(b+M−j) Γ(a+b) / { Γ(j+1) Γ(M−j+1) Γ(a+b+M) Γ(a) Γ(b) },

where X_r = I{Binom(M, u_r) < (M + 1)p}. Use this result to investigate numerically the choice of M. (Section 4.5)

13

For the matched-pair experiment of Problem 4.7, suppose that we choose between the two test statistics t_1 = d̄ and t_2 = (n − 2m)^{−1} ∑_{j=m+1}^{n−m} d_(j), for some m in the range 2, ..., [n/4], on the basis of their estimated variances v_1 and v_2, where

v_1 = n^{−2} ∑_{j=1}^{n} (d_j − t_1)^2,

v_2 = { ∑_{j=m+1}^{n−m} (d_(j) − t_2)^2 + m (d_(m+1) − t_2)^2 + m (d_(n−m) − t_2)^2 } / { n (n − 2m) }.

Give a detailed description of the adaptive test as outlined in Section 4.4.2. To apply it to the data of Problem 4.7 with m = 2, see Practical 4.4. (Section 4.4.2; Donegani, 1991)

14

Suppose that we want critical values for a size α one-sided test of H_0 : θ = θ_0 versus H_A : θ > θ_0. The ideal value is the 1 − α quantile t_{0,1−α} of the distribution of T under H_0, and this is estimated by the solution t̂_{0,1−α} to Pr*(T* > t_0 | F̂_0) = α. Typically t̂_{0,1−α} is biased. Consider an adjusted critical value t̂_{0,1−α−γ}. Obtain the double bootstrap algorithm for choosing γ, and compare the resulting test to use of the adjusted P-value (4.34). (Sections 4.5, 3.9.1; Beran, 1988)

4.9 Practicals

1

The data in dataframe dogs are from a pharmacological experiment. The two variables are cardiac oxygen consumption (MVO) and left ventricular pressure (LVP). Data for the n = 7 dogs are

MVO   78   92  116   90  106   78   99
LVP   32   33   45   30   38   24   44

Apply a bootstrap test for the hypothesis of zero correlation between MVO and LVP. Use R = 499 simulations. (Sections 4.3, 4.4)

2

For the permutation test outlined in Example 4.12, use the function ami.fun. (Figure: simulated values t* and log t*.)

as we shall see in Section 5.4. However, variance approximations such as v_L can be somewhat unstable for small n, as in the previous example with n = 12. Experience suggests that the method is most effective when θ is essentially a location parameter, which is approximately induced by the variance-stabilizing transformation (2.14). However, this requires knowing the variance function v(θ) = var(T | F), which is never available in the nonparametric case. A suitable transformation may sometimes be suggested by analogy with a parametric problem, as in the previous example. Then equations (5.10) and (5.11) will apply without change. Otherwise, a transformation can be obtained empirically using the technique described in Section 3.9.2, using either nested bootstrap estimates v* or delta method estimates v*_L with which to estimate values of the variance function v(θ). Equation (5.10) will then apply with the estimated transformation ĥ(·) in place of h(·). For the studentized bootstrap interval (5.11), if the transformation is determined empirically by (3.40), then studentized values of the transformed estimates h(t*_r) are

z*_r = v(t*_r)^{1/2} { h(t*_r) − h(t) } / v*_r^{1/2}.

On the original scale the (1 − 2α) studentized interval has endpoints

h^{−1}[ h(t) − v^{1/2} v(t)^{−1/2} z*_{((R+1)(1−α))} ],    h^{−1}[ h(t) − v^{1/2} v(t)^{−1/2} z*_{((R+1)α)} ].

For the percentile method the corresponding limit on the transformed scale is h(t)*_{((R+1)α)}, whose transformation back to the θ scale is t*_{((R+1)α)}.

The adjusted percentile method rests on the assumption that on some transformed scale φ = h(θ) the estimator is normally distributed, U = φ̂ − φ ~ N(−w σ(φ), σ(φ)^2), and the constant a is

a = E*{ ℓ̇*(θ̂)^3 } / [ 6 var*{ ℓ̇*(θ̂) }^{3/2} ],    (5.23)

where ℓ* is the log likelihood of a set of data simulated from the fitted model. More generally a is one-sixth the standardized skewness of the linear approximation to T.

One potential problem with the BCa method is that if α̃ in (5.21) is much closer to 0 or 1 than α, then (R + 1)α̃ could be less than 1 or greater than R, so that even with interpolation the relevant quantile cannot be calculated. If this happens, and if R cannot be increased, then it would be appropriate to quote the extreme value of t* and the implied value of α. For example, if (R + 1)α̃ > R, then the upper confidence limit t*_{(R)} would be given with implied right-tail error α_2 equal to one minus the solution to α̃ = R/(R + 1).

Example 5.5 (Air-conditioning data, continued) Returning to the problem of Example 5.4 and the exponential bootstrap results for R = 999, we find that the number of ȳ* values below ȳ = 108.083 is 535, so by (5.22)

w = Φ^{−1}(535/1000) = 0.0878.

The log likelihood for the exponential mean μ is ℓ(μ) = −n log μ − μ^{−1} ∑ y_j, whose derivative is

ℓ̇(μ) = n(ȳ − μ)/μ^2.

The second and third moments of ℓ̇(μ) are nμ^{−2} and 2nμ^{−3}, so by (5.23)

a = (1/3) n^{−1/2} = 0.0962.


Table 5.1 Calculation of adjusted percentile bootstrap confidence limits for μ with the data of Example 1.1, under the parametric exponential model with R = 999; a = 0.0962, w = 0.0878.

α        z̃_α = w + z_α    α̃ = Φ{w + z̃_α/(1 − a z̃_α)}    r = (R+1)α̃    t*_(r)
0.025    −1.872           0.067                           67.00        65.26
0.975     2.048           0.996                          995.83       199.41
0.050    −1.557           0.103                          102.71        71.19
0.950     1.733           0.985                          984.89       182.42

The calculation of the adjusted percentile limits (5.21) is illustrated in Table 5.1. The values of r = (R + 1)α̃ are not integers, so we have applied the interpolation formula (5.8). Had we tried to calculate a 99% interval, we should have had to calculate the 999.88th ordered value of t*, which does not exist. The implied right-tail error for t*_{(999)} is the value α_2 which solves

999/1000 = Φ( 0.0878 + (0.0878 + z_{1−α_2}) / { 1 − 0.0962 (0.0878 + z_{1−α_2}) } ),

namely α_2 = 0.0125. ■
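The arithmetic of Table 5.1 is easily reproduced. A minimal sketch (the helper name is ours):

```python
from statistics import NormalDist

N = NormalDist()

def adjusted_level(alpha, w, a):
    """BCa adjustment (5.21): alpha~ = Phi( w + (w + z_alpha)/(1 - a(w + z_alpha)) )."""
    z = w + N.inv_cdf(alpha)
    return N.cdf(w + z / (1 - a * z))

w, a, R = 0.0878, 0.0962, 999  # Example 5.5, parametric exponential model
for alpha in (0.025, 0.975, 0.050, 0.950):
    at = adjusted_level(alpha, w, a)
    print(f"alpha = {alpha:5.3f}   alpha~ = {at:.3f}   r = (R+1)*alpha~ = {(R + 1) * at:7.2f}")
```

The same function with a = 0.0938 and w = 0.0728 reproduces the nonparametric calculations of Table 5.4.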



Parametric case with nuisance parameters

When θ is one of several unknown parameters, the previous development applies to a derived distribution called the least-favourable family. As usual we denote the nuisance parameters by λ and write ψ = (θ, λ), with ℓ(θ, λ) the log likelihood function for ψ based on all the data.

... the adjusted percentile limit is of the form nt/c_{d,α̃}. Finally, for the studentized bootstrap upper α confidence limit (5.7), we first calculate the variance approximation v = 2n^{−1}t^2 from the expected Fisher information matrix, and then the confidence limit is nt/c_{d,1−α}. The coverage of this limit is exactly α. Table 5.3 shows numerical values of coverages for the four methods in the case k = 10 and m = 2, where d = ½n = 10. The results show quite dramatically first how bad the basic and percentile methods can be if used without careful thought, and secondly how well the studentized and adjusted percentile methods can do in a moderately difficult situation. Of course use of a logarithmic transformation would improve the basic bootstrap method, which would then give correct answers. ■

(Here ȳ_i denotes the average of y_{i1}, ..., y_{im}.)

Table 5.3 Exact coverages (%) of confidence limits for normal variance based on maximum likelihood estimator for 10 samples each of size two.

Nominal    Basic    Studentized    Percentile    BCa
 1.0        0.8       1.0            0.0          1.0
 2.5        2.5       2.5            0.0          2.5
 5.0        4.8       5.0            0.0          5.0
95.0       35.0      95.0            1.6         91.5
97.5       36.7      97.5            4.4        100.0
99.0       38.3      99.0            6.9        100.0

Nonparametric case: single sample

The adjusted percentile method for the nonparametric case is developed by applying the method for the parametric case with no nuisance parameters to a specially constructed nonparametric exponential family with support on the data values, the least-favourable family derived from the multinomial distribution for frequencies of the data values under nonparametric resampling. Specifically, if l_j denotes the empirical influence value for t at y_j, then the resampling model for an individual Y* is the exponential tilted distribution

Pr(Y* = y_j) = p_j = exp(η l_j) / ∑_{k=1}^{n} exp(η l_k).    (5.26)

The parameter of interest θ is a monotone function of η, with inverse η(θ), say. The MLE of η is η̂ = η(t) = 0, which corresponds to the EDF F̂ being the nonparametric MLE of the sampling distribution F. The bias correction factor w is calculated as before from (5.22), but using nonparametric bootstrap simulation to obtain values of t*. The skewness correction a is given by the empirical analogue of (5.23), where now η̇(θ) is the first derivative dη(θ)/dθ.

When the moments needed in (5.23) are evaluated at θ = t, or equivalently at η = 0, two simplifications occur. First we have E*(L*) = 0, and secondly the multiplier η̇(t) cancels when (5.23) is applied. The result is that

a = (1/6) ∑_{j=1}^{n} l_j^3 / ( ∑_{j=1}^{n} l_j^2 )^{3/2},    (5.27)

which is the direct analogue of (5.25).

Example 5.8 (Air-conditioning data, continued) The nonparametric version of the calculations in the preceding example involves the same formula (5.21), but now with a = 0.0938 and w = 0.0728. The former constant is calculated from (5.27) with l_j = y_j − ȳ. The confidence limit calculations are shown in Table 5.4 for 90% and 95% intervals. ■
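As a check, the constant a of Example 5.8 follows in a few lines from (5.27). The twelve failure times of Example 1.1 are reproduced here; their average agrees with the value ȳ = 108.083 quoted in Example 5.5:

```python
data = [3, 5, 7, 18, 43, 85, 91, 98, 100, 130, 230, 487]  # air-conditioning data
n = len(data)
ybar = sum(data) / n                      # 108.083
l = [y - ybar for y in data]              # empirical influence values for the mean
a = sum(x ** 3 for x in l) / (6 * sum(x ** 2 for x in l) ** 1.5)
print(f"ybar = {ybar:.3f}, a = {a:.4f}")  # a = 0.0938, as in the text
```

Compare the parametric value a = (1/3) n^{−1/2} = 0.0962 from Example 5.5: the two corrections are close but not identical.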

Table 5.4 Calculation of adjusted percentile bootstrap confidence limits for μ in Example 1.1 using nonparametric bootstrap with R = 999; a = 0.0938, w = 0.0728.

α        z̃_α = w + z_α    α̃ = Φ{w + z̃_α/(1 − a z̃_α)}    r = (R+1)α̃    t*_(r)
0.025    −1.8872          0.0629                          62.93        55.33
0.975     2.0327          0.9951                         995.12       243.50
0.050    −1.5721          0.0973                          97.26        61.50
0.950     1.7176          0.9830                         983.01      202.08

If t is a function of sample moments, say t = t(s̄), where s̄_i = n^{−1} ∑_{j=1}^{n} s_i(y_j) for i = 1, ..., k, then (5.26) is a one-dimensional reduction of a k-dimensional exponential family for s̄_1(Y*), ..., s̄_k(Y*). By equation (2.38) the influence values l_j for t are given simply by l_j = ṫ^T { s(y_j) − s̄ } with ṫ = ∂t/∂s̄.

The method as described will apply as given to any single-sample problem, and to most regression problems (Chapters 6 and 7), but not exactly to problems where statistics are based on several independent samples, including stratified samples.

Nonparametric case: several samples

In the parametric case the BCa method as described applies quite generally through the unifying likelihood function. In the nonparametric case, however, there are predictable changes in the BCa method. The background approximation methods are described in Section 3.2.1, which defines an estimator in terms of the EDFs of k samples, t = t(F̂_1, ..., F̂_k). The empirical influence values l_ij for j = 1, ..., n_i and i = 1, ..., k and the variance approximation v_L are defined in (3.2) and (3.3). If we return to the origin and development of the BCa method, we see that the definition of bias correction w in (5.22) will remain the same. The skewness correction a will again be one-sixth the estimated standardized skewness of the linear approximation to t, which here is

a = (1/6) ∑_i n_i^{−3} ∑_j l_ij^3 / ( ∑_i n_i^{−2} ∑_j l_ij^2 )^{3/2}.    (5.28)

This can be verified as an application of the parametric method by constructing the least-favourable joint family of k distributions from the k multinomial distributions on the data values in the k samples. Note that (5.28) can be expressed in the same form as (5.27) by defining l̃_ij = n l_ij / n_i, where n = ∑ n_i, so that

v_L = n^{−2} ∑_i ∑_j l̃_ij^2,    a = (1/6) ∑_i ∑_j l̃_ij^3 / ( ∑_i ∑_j l̃_ij^2 )^{3/2};    (5.29)


see Problem 3.7. This can be helpful in writing an all-purpose algorithm for the BCa method; see also the discussion of the ABC method in the next section. An example is given at the end of the next section.
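Equation (5.29) translates directly into such an all-purpose routine: rescale each sample's influence values by n/n_i, pool them, and apply the single-sample formulas. A sketch (the function name is ours):

```python
def bca_constants(influence):
    """Variance approximation v_L and skewness constant a from per-sample
    empirical influence values, via (5.29): l~_ij = n l_ij / n_i."""
    n = sum(len(group) for group in influence)
    lt = [n * l / len(group) for group in influence for l in group]
    s2 = sum(x * x for x in lt)
    s3 = sum(x ** 3 for x in lt)
    return s2 / n ** 2, s3 / (6 * s2 ** 1.5)
```

With a single sample the rescaling is the identity and the function reduces to (5.27); with several samples it agrees with the direct form v_L = ∑_i n_i^{−2} ∑_j l_ij^2.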

5.4 Theoretical Comparison of Methods

The studentized bootstrap and adjusted percentile methods for calculating confidence limits are inherently more accurate than the basic bootstrap and percentile methods. This is quite clear from empirical evidence. Here we look briefly at the theoretical side of the story for statistics which are approximately normal. Some aspects of the theory were discussed in Section 2.6.1. For simplicity we shall restrict most of the detailed discussion to the single-sample case, but the results generalize without much difficulty.

5.4.1 Second-order accuracy

To assess the accuracies of the various bootstrap confidence limits we calculate coverage probabilities up to the n^{−1/2} terms in series approximations, these based on corresponding approximations for the CDFs of U = (T − θ)/v^{1/2} and Z = (T − θ)/V^{1/2}. Here v is var(T) or any approximation which agrees to first order with v_L, the variance of the linear approximation to T. Similarly V is assumed to agree to first order with V_L. For example, in the scalar parametric case where T is the maximum likelihood estimator, v is the inverse of the expected Fisher information. In all of the equations in this section equality is correct to order n^{−1/2}, i.e. ignoring errors of order n^{−1}. The relevant approximations for the CDFs are the one-term Cornish-Fisher approximations

Pr(U ≤ u) = G(θ + v^{1/2} u), ...

... H_0 : θ = θ_0. There are several possible resampling schemes that could be used here, including those described in Section 3.5 but modified to fix the constant hazard ratio θ_0. Here we use the simpler conditional model of Example 4.4, which holds fixed the survival and censoring times. Then for any fixed θ_0 the simulated values y*_1, ..., y*_n are generated by



Figure 5.3 Bootstrap P-values p(θ0) for testing constant hazard ratio θ0, with R = 199 at each point. Solid curve is spline fit on logistic scale. Dotted lines interpolate solutions to p(θ0) = 0.05, 0.95, which are endpoints of 90% confidence interval.


where the numbers at risk just prior to z_j are given by

r*_{1j} = max{ 0, r_{11} − Σ_{k=1}^{j−1} (1 − y*_k) − c_{1j} },    r*_{2j} = max{ 0, r_{21} − Σ_{k=1}^{j−1} y*_k − c_{2j} },

with c_{ij} the number of censoring times in group i before z_j. For the AML data we simulated R = 199 samples in this way, and calculated the corresponding values t*(θ0) for a grid of 21 values of θ0 in the range 0.5 ≤ θ0 ≤ 10. For each θ0 we computed the one-sided P-value

p(θ0) = #{ t*(θ0) > t(θ0) } / 200,

then on the logit scale we fitted a spline curve (in log θ0), and interpolated the solutions to p(θ0) = α, 1 − α to determine the endpoints of the (1 − 2α) confidence interval for θ. Figure 5.3 illustrates this procedure for α = 0.05, which gives the 90% confidence interval [1.07, 6.16]; the 95% interval is [0.86, 7.71] and the point estimate is 2.52. Thus there is mild evidence that θ > 1. A more efficient approach would be to use R = 99 for the initial grid to determine rough values of the confidence limits, near which further simulation with R = 999 would provide accurate interpolation of the confidence limits. Yet more efficient algorithms are possible. ■

In a more systematic development of the method, we must allow for a nuisance parameter λ, say, which also governs the data distribution but is not constrained by H_0. Then both R_α(θ) and C_{1−α}(Y_1, ..., Y_n) must depend upon λ to make the inversion method work exactly. Under the bootstrap approach λ is replaced by an estimate.



Suppose, for example, that we want a lower 1 − α confidence limit, which is obtained via the critical region for testing H_0 : θ = θ_0 versus the alternative hypothesis H_A : θ > θ_0. Define ψ = (θ, λ). If the test statistic is T(θ_0), then the size α critical region has the form

R_α(θ_0) = { (y_1, ..., y_n) : Pr{ T(θ_0) ≥ t(θ_0) | ψ = (θ_0, λ) } ≤ α },

and the exact lower confidence limit is the value u_α = u_α(y, λ) such that

Pr{ T(u_α) ≥ t(u_α) | ψ = (u_α, λ) } = α.

We replace λ by an estimate s, say, to obtain the lower 1 − α bootstrap confidence limit û_α = u_α(y, s). The solution is found by solving for u the equation

Pr*{ T*(u) ≥ t(u) | ψ = (u, s) } = α,

where T*(u) follows the distribution under ψ = (u, s). This requires application of an interpolation method such as the one illustrated in the previous example. The simplest test statistic is the point estimate T of θ, and then T(θ_0) = T. The method will tend to be more accurate if the test statistic is the studentized estimate. That is, if var(T) = σ²(θ, λ), then we take Z = (T − θ_0)/σ(θ_0, s); for further details see Problem 5.11. The same remark would apply to score statistics, such as that in the previous example, where studentization would involve the observed or expected Fisher information. Note that for the particular alternative hypothesis used to derive an upper limit, it would be standard practice to define the P-value as Pr{ T(θ_0) ≤ t(θ_0) | F_0 }, for example if T(θ_0) were an estimator for θ or its studentized form. Equivalently one can retain the general definition and solve p(θ_0) = 1 − α for an upper limit. In principle these methods can be applied to both parametric and semiparametric problems, but not to completely nonparametric problems.
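As a minimal sketch of this inversion in a case with no nuisance parameter (so the estimate s plays no role), the following finds a lower limit for an exponential mean by simulating the point estimate under ψ = u on a grid of values and interpolating; the sample size, grid and R are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=20)
t = y.mean()
alpha, R = 0.05, 2000

def pvalue(u):
    """Estimate Pr*{T*(u) >= t} when samples are drawn from the
    fitted model with theta = u (here exponential with mean u)."""
    tstar = rng.exponential(scale=u, size=(R, len(y))).mean(axis=1)
    return np.mean(tstar >= t)

# p(u) increases with u, so the lower 1 - alpha limit solves p(u) = alpha
grid = np.linspace(0.5 * t, 1.5 * t, 41)
p = np.maximum.accumulate([pvalue(u) for u in grid])  # smooth out MC noise
lower = np.interp(alpha, p, grid)
print(lower)
```

For this model the exact lower limit is 2nȳ/c_{2n,1−α}, which the simulated inversion should approximate.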

5.6 Double Bootstrap Methods

Whether the basic or percentile bootstrap method is used to calculate confidence intervals, there is a possibly non-negligible difference between the nominal 1 − α coverage and the actual probability coverage of the interval in repeated sampling, even if R is very large. The difference represents a bias in the method, and as indicated in Section 3.9 the bootstrap can be used to estimate and correct for such a bias. That is, by bootstrapping a bootstrap confidence interval method it can be made more accurate. This is analogous to the bootstrap adjustment for bootstrap P-values described in Section 4.5. One straightforward application of this idea is to the normal-approximation confidence interval (5.4), which produces the studentized bootstrap interval;



see Problem 5.12. A more ambitious application is bootstrap adjustment of the basic bootstrap confidence limit, which we develop here.

First we recall the full notations for the quantities involved in the basic bootstrap confidence interval method. The "ideal" upper 1 − α confidence limit is t(F̂) − a_α(F), where

Pr{ T − θ ≤ a_α(F) | F } = Pr{ t(F̂) − t(F) ≤ a_α(F) | F } = α.

What is calculated, ignoring simulation error, is the confidence limit t(F̂) − a_α(F̂). The bias in the method arises from the fact that a_α(F̂) ≠ a_α(F) in general, so that

Pr{ t(F) ≤ t(F̂) − a_α(F̂) | F } ≠ 1 − α.    (5.52)

We could try to eliminate the bias by adding a correction to a_α(F̂), but a more successful approach is to adjust the subscript α. That is, we replace a_α(F̂) by a_{q(α)}(F̂) and estimate what the adjusted value q(α) should be. This is in the same spirit as the BCa method. Ideally we want q(α) to satisfy

Pr{ t(F) ≤ t(F̂) − a_{q(α)}(F̂) | F } = 1 − α.    (5.53)

The solution q(α) will depend upon F, i.e. q(α) = q(α, F). Because F is unknown, we estimate q(α) by q̂(α) = q(α, F̂). This means that we obtain q̂(α) by solving the bootstrap version of (5.53), namely

Pr*{ t(F̂) ≤ t(F̂*) − a_{q̂(α)}(F̂*) | F̂ } = 1 − α.    (5.54)

This looks intimidating, but from the definition of a_α(F) we see that (5.54) can be rewritten as

Pr*{ Pr**( T** ≤ 2T* − t | F̂* ) ≥ q̂(α) | F̂ } = 1 − α.    (5.55)

The same method of adjustment can be applied to any bootstrap confidence limit method, including the percentile method (Problem 5.13) and the studentized bootstrap method (Problem 5.14). To verify that the nested bootstrap reduces the order of coverage error made by the original bootstrap confidence limit, we can apply the general discussion of Section 3.9.1. In general we find that coverage 1 − α + O(n^{−a}) is corrected to 1 − α + O(n^{−a−1/2}) for one-sided confidence limits, whether a = 1/2 or 1. However, for equi-tailed confidence intervals coverage 1 − 2α + O(n^{−1}) is corrected to 1 − 2α + O(n^{−2}); see Problem 5.15.

Before discussing how to solve equation (5.55) using simulated samples, we look at a simple illustrative example where the solution can be found theoretically.

Example 5.12 (Exponential mean) Consider the parametric problem of exponential data with unknown mean μ. The data estimate for μ is t = ȳ, F̂ is



the fitted exponential CDF with mean ȳ, and F̂* is the fitted exponential CDF with mean ȳ* — the mean of a parametric bootstrap sample y*_1, ..., y*_n drawn from F̂. A result that we use repeatedly is that if X_1, ..., X_n are independent exponential with mean μ, then 2nX̄/μ has the χ²_{2n} distribution.

The basic bootstrap upper 1 − α confidence limit for μ is 2ȳ − ȳc_{2n,α}/(2n), where Pr(χ²_{2n} ≤ c_{2n,α}) = α. To evaluate the left-hand side of (5.55), for the inner probability we have

Pr**( Ȳ** ≤ 2ȳ* − ȳ | F̂* ) = Pr{ χ²_{2n} ≤ 2n(2 − ȳ/ȳ*) },

which exceeds q̂ if and only if 2n(2 − ȳ/ȳ*) ≥ c_{2n,q̂}. Therefore the outer probability on the left-hand side of (5.55) is

Pr*{ 2n(2 − ȳ/Ȳ*) ≥ c_{2n,q̂} | F̂ } = Pr{ χ²_{2n} ≥ 2n / (2 − c_{2n,q̂}/(2n)) },    (5.56)

with q̂ = q̂(α). Setting the probability on the right-hand side of (5.56) equal to 1 − α, we deduce that

2n / (2 − c_{2n,q̂(α)}/(2n)) = c_{2n,α}.

Using q̂(α) in place of α in the basic bootstrap confidence limit gives the adjusted upper 1 − α confidence limit 2nȳ/c_{2n,α}, which has exact coverage 1 − α. So in this case the double bootstrap adjustment is perfect. Figure 5.4 shows the actual coverages of nominal 1 − α bootstrap upper confidence limits when n = 10. There are quite large discrepancies for both basic and percentile methods, which are completely removed using the double bootstrap adjustment; see Problem 5.13. ■

In general, and especially for nonparametric problems, the calculations in (5.55) cannot be done exactly and simulation or approximation methods must be used. A basic simulation algorithm is as follows. Suppose that we draw R samples from F̂, and denote the model fitted to the rth sample by F̂*_r — the EDF for one-sample nonparametric problems. Define

u*_r = Pr**( T** ≤ 2t*_r − t | F̂*_r ).

This will be approximated by drawing M samples from F̂*_r, calculating the estimator values t**_{rm} for m = 1, ..., M and computing the estimate (here I{A} is the zero-one indicator function of the event A)

u*_{M,r} = M^{−1} Σ_{m=1}^{M} I{ t**_{rm} ≤ 2t*_r − t }.



Figure 5.4 Actual coverages of percentile (dotted line) and basic bootstrap (dashed line) upper confidence limits for exponential mean when n = 10. Solid line is attained by nested bootstrap confidence limits.


Then the Monte Carlo version of (5.55) is

R^{−1} Σ_{r=1}^{R} I{ u*_{M,r} ≥ q̂(α) } = 1 − α,

which is to say that q̂(α) is the α quantile of the u*_{M,r}. The simplest way to obtain ...

From (5.58) and (5.59) we see that if h ∝ n^{−1/5}, then as n increases

Z = ε + { f(y) }^{−1/2} κ^{−1/2} { f''(y) − ½κ },    Z* = ε*,



Figure 5.5 Studentized quantities for density estimation. The left panels show values of Z when h = n^{−1/5} for 500 standard normal samples of sizes n and 500 bootstrap values for one sample at each n. The right panels show the corresponding values when h = n^{−1/3}.


where both ε and ε* are N(0,1). This means that quantiles of Z cannot be well approximated by quantiles of Z*, no matter how large is n. The same thing happens for the untransformed density estimate. There are several ways in which we can try to overcome this problem. One of the simplest is to change h to be of order n^{−1/3}, when calculations similar to those above show that Z = ε and Z* = ε*. Figure 5.5 illustrates the effect. Here we estimate the density at y = 0 for samples from the N(0,1) distribution, with w(·) the standard normal density. The first two panels show box plots of 500 values of z and z* when h = n^{−1/5}, which is near-optimal for estimation in this case, for several values of n; the values of z* are obtained by resampling from one dataset. The last two panels correspond to h = n^{−1/3}. The figure confirms the key points of the theory sketched above: that Z is biased away from zero when h = n^{−1/5}, but not when h = n^{−1/3}; and that the distributions of Z and Z* are quite stable and similar when h = n^{−1/3}. Under resampling from F̂, the studentized bootstrap applied to { f̂(y;h) }^{1/2} should be consistent if h ∝ n^{−1/3}. From a practical point of view this means considerable undersmoothing in the density estimate, relative to standard practice for estimation. A bias in Z of order n^{−1/3} or worse will remain, and this suggests a possibly useful role for the double bootstrap.

For a numerical example of nested bootstrapping in this context we revisit Example 4.18, where we discussed the use of a kernel density estimate in estimating species abundance. The estimated PDF is

f̂(y; h) = (nh)^{−1} Σ_j φ{ (y − y_j)/h },

where φ(·) is the standard normal density, and the value of interest is f̂(0; h), which is used to estimate f(0). In light of the previous discussion, we base

Figure 5.6 Adjusted bootstrap procedure for variance-stabilized density estimate t = { f̂(0; 0.5) }^{1/2} for the tuna data. The left panel shows the EDF of 1000 values of t* − t. The right panel shows a plot of the ordered u*_{M,r} against quantiles r/(R + 1) of the U(0,1) distribution. The dashed line shows how the quantiles of the u* are used to obtain improved confidence limits, by using the right panel to read off the estimated coverage q̂(α) corresponding to the required nominal coverage α, and then using the left panel to read off the q̂(α) quantile of t* − t.


confidence intervals on the variance-stabilized estimate t = { f̂(0; h) }^{1/2}. We also use a value of h considerably smaller than the value (roughly 1.5) used to estimate f in Example 4.18. The right panel of Figure 5.6 shows the quantiles of the u*_{M,r} obtained when the double bootstrap bias adjustment is applied with R = 1000 and M = 250, for the estimate with bandwidth h = 0.5. If T* − t were an exact pivot, the distribution of the u* would lie along the dotted line, and nominal and estimated coverage would be equal. The distribution is close to uniform, confirming our decision to use a variance-stabilized statistic. The dashed line shows how the distribution of the u* is used to remove the bias in coverage levels. For an upper confidence limit with nominal level 1 − α = 0.9, so that α = 0.1, the estimated level is q̂(0.1) = 0.088. The 0.088 quantile of the values of t*_r − t is t*_(88) − t = −0.091, while the 0.10 quantile is t*_(100) − t = −0.085. The corresponding upper 10% confidence limits for { f(0) }^{1/2} are t − (t*_(88) − t) = 0.356 − (−0.091) = 0.447 and t − (t*_(100) − t) = 0.356 − (−0.085) = 0.441. For this value of α the adjustment has only a small effect.

Table 5.7 compares the 95% limits for f(0) for different methods, using bandwidth h = 0.5, for which f̂(0; 0.5) = 0.127. The longer upper tail for the double bootstrap interval is a result of adjusting the nominal α = 0.025 to q̂(0.025) = 0.004; at the upper tail we obtain q̂(0.975) = 0.980. The lower tail of the interval agrees well with the other second-order correct methods. For larger values of h the density estimates are higher and the confidence intervals narrower.



         Basic   Basic†  Student  Student†  Percentile  BCa     Double
Upper    0.204   0.240   0.273    0.266     0.218       0.240   0.301
Lower    0.036   0.060   0.055    0.058     0.048       0.058   0.058

In Example 9.14 we describe how saddlepoint methods can greatly reduce the time taken to perform the double bootstrap in this problem. It might be possible to avoid the difficulties caused by the bias of the kernel estimate by using a clever resampling scheme, but it would be more complicated than the direct approach described above. ■
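The nested simulation described in this section can be sketched as follows for the basic bootstrap upper limit for a mean, with nonparametric resampling; R and M are kept small for illustration, and the code is a sketch rather than the implementation used for the tuna data:

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.exponential(scale=2.0, size=25)
t, n = y.mean(), len(y)
R, M, alpha = 200, 50, 0.05

tstar = np.empty(R)
u = np.empty(R)
for r in range(R):
    ystar = rng.choice(y, size=n, replace=True)   # sample from F-hat
    tstar[r] = ystar.mean()
    # Inner level: estimate u_r = Pr**(T** <= 2 t*_r - t | F-hat*_r)
    tss = rng.choice(ystar, size=(M, n), replace=True).mean(axis=1)
    u[r] = np.mean(tss <= 2 * tstar[r] - t)

# q-hat(alpha) is the alpha quantile of the u values, as in (5.55);
# use it in place of alpha in the basic bootstrap upper limit
q = np.quantile(u, alpha)
upper_basic = 2 * t - np.quantile(tstar, alpha)
upper_adjusted = 2 * t - np.quantile(tstar, q)
print(upper_basic, upper_adjusted)
```

For right-skewed data such as these, q̂(α) typically falls below α, so the adjusted upper limit is pushed outwards, as in the exponential example.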

5.7 Empirical Comparison of Bootstrap Methods

The several bootstrap confidence limit methods can be compared theoretically on the basis of first- and second-order accuracy, as in Section 5.4, but this really gives only suggestions as to which methods we would expect to be good. The theory needs to be bolstered by numerical comparisons. One rather extreme comparison was described in Example 5.7. In this section we consider one moderately complicated application, estimation of a ratio of means, and assess through simulation the performances of the main bootstrap confidence limit methods. The conclusions appear to agree qualitatively with the results of other simulation studies involving applications of similar complexity: references to some of these are given in the bibliographic notes at the end of the chapter.

The application here is similar to that in Example 5.10, and concerns the ratio of means for data from two different gamma distributions. The first sample of size n1 is drawn from a gamma distribution with mean μ1 = 100 and index 0.7, while the second independent sample of size n2 is drawn from the gamma distribution with mean μ2 = 50 and index 1. The parameter θ = μ1/μ2, whose value is 2, is estimated by the ratio of sample means t = ȳ1/ȳ2. For particular choices of sample sizes we simulated 10000 datasets and to each applied several of the nonparametric bootstrap confidence limit methods discussed earlier, always with R = 999. We did not include the double bootstrap method. As a control we added the exact parametric method when the gamma indexes are known: this turns out not to be a strong control, but it does provide a check on simulation validity. The results quoted here are for two cases, n1 = n2 = 10 and n1 = n2 = 25. In each case we assess the left- and right-tail error rates of confidence intervals, and their lengths.
Table 5.8 shows the empirical error rates for both cases, as percentages, for nominal rates between 1% and 10%: simulation standard errors are rates

Table 5.7 Upper and lower endpoints of 95% confidence limits for f(0) for the tuna data, with bandwidth h = 0.5; † indicates use of square-root transformation.


Table 5.8 Empirical error rates (%) for nonparametric bootstrap confidence limits in ratio estimation: rates for sample sizes n1 = n2 = 10 are given above those for sample sizes n1 = n2 = 25. R = 999 for all bootstrap methods. 10000 datasets generated from gamma distributions.

                             Nominal error rate
                      Lower limit              Upper limit
Method              1    2.5     5    10     10     5   2.5     1

Exact             1.0    2.8   5.5  10.5    9.8   4.8   2.6   1.0
                  1.0    2.3   4.8   9.9   10.2   4.9   2.5   1.1
Normal approx.    0.1    0.5   1.7   6.3   20.6  15.7  12.5   9.6
                  0.1    0.5   2.1   6.4   16.3  11.5   8.2   5.5
Basic             0.0    0.0   0.2   1.8   24.4  21.0  18.6  16.4
                  0.0    0.1   0.4   3.0   19.2  15.0  12.5  10.3
Basic, log        2.6    4.9   8.1  12.9   13.1   7.5   4.8   2.5
                  1.6    3.2   6.0  11.4   11.5   6.3   3.3   1.7
Studentized       0.6    2.1   4.6   9.9   11.9   6.7   4.0   2.0
                  0.8    2.3   4.6   9.9   10.9   5.9   3.0   1.4
Studentized, log  1.1    2.8   5.6  10.7   11.6   6.3   3.5   1.7
                  1.1    2.5   5.0  10.1   10.8   5.7   2.9   1.3
Percentile        1.8    3.6   6.5  11.6   14.6   8.9   5.9   3.3
                  1.2    2.6   5.1  10.1   12.6   7.1   4.2   2.1
BCa               1.9    4.0   6.9  12.3   14.0   8.3   5.3   3.0
                  1.4    3.0   5.6  10.9   11.8   6.8   3.8   1.9
ABC               1.9    4.2   7.4  12.7   14.6   8.7   5.5   3.1
                  1.3    3.0   5.7  11.0   12.1   6.8   3.7   1.9

divided by 100. The normal approximation method uses the delta method variance approximation. The results suggest that the studentized method gives the best results, provided the log scale is used. Otherwise, the studentized method and the percentile, BCa and ABC methods are comparable but only really satisfactory at the larger sample sizes.

Figure 5.7 shows box plots of the lengths of 1000 confidence intervals for both sample sizes. The most pronounced feature for n1 = n2 = 10 is the long — sometimes very long — lengths for the two studentized methods, which helps to account for their good error rates. This feature is far less prominent at the larger sample sizes. It is noticeable that the normal, percentile, BCa and ABC intervals are short compared to the exact ones, and that taking logs improves the basic intervals. Similar comments apply when n1 = n2 = 25, but with less force.
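A scaled-down sketch of such an experiment, comparing upper-tail error rates of the normal-approximation and basic bootstrap limits in the gamma ratio setting above; the numbers of datasets and resamples are far smaller than in the study reported here, so the output is only indicative:

```python
import numpy as np

rng = np.random.default_rng(3)
n1 = n2 = 10
nsim, R, alpha = 200, 199, 0.05
theta = 100.0 / 50.0
miss_basic = miss_norm = 0

for _ in range(nsim):
    y1 = rng.gamma(shape=0.7, scale=100 / 0.7, size=n1)  # mean 100
    y2 = rng.gamma(shape=1.0, scale=50.0, size=n2)       # mean 50
    t = y1.mean() / y2.mean()
    # Case resampling within each sample
    t1 = rng.choice(y1, size=(R, n1), replace=True).mean(axis=1)
    t2 = rng.choice(y2, size=(R, n2), replace=True).mean(axis=1)
    tstar = t1 / t2
    upper_basic = 2 * t - np.quantile(tstar, alpha)
    # Delta-method variance for the normal-approximation limit
    v = t**2 * (y1.var(ddof=1) / (n1 * y1.mean()**2)
                + y2.var(ddof=1) / (n2 * y2.mean()**2))
    upper_norm = t + 1.645 * np.sqrt(v)
    miss_basic += theta > upper_basic
    miss_norm += theta > upper_norm

print(miss_basic / nsim, miss_norm / nsim)
```

Even at this scale, both one-sided error rates should come out well above the nominal 5%, in line with the n1 = n2 = 10 column of Table 5.8.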

5.8 Multiparameter Methods

When we want a confidence region for a vector parameter, the question of shape arises. Typically a rectangular region formed from intervals for each component parameter will not have high enough coverage probability, although a Bonferroni argument can be used to give a conservative confidence coefficient,



Figure 5.7 Box plots of confidence interval lengths for the first 1000 simulated samples in the numerical experiment with gamma data. Upper panel: n1 = n2 = 10; lower panel: n1 = n2 = 25.

as follows. Suppose that θ has d components, and that the confidence region C_α is rectangular, with interval C_{α,i} = (θ_{L,i}, θ_{U,i}) for the ith component θ_i. Then

Pr(θ ∉ C_α) = Pr( ∪_i { θ_i ∉ C_{α,i} } ) ≤ Σ_i Pr(θ_i ∉ C_{α,i}) = Σ_i α_i,

say. If we take α_i = α/d then the region C_α has coverage at least equal to 1 − α. For certain applications this could be useful, in part because of its simplicity. But there are two potential disadvantages. First, the region could be very conservative — the true coverage could be considerably more than the nominal 1 − α. Secondly, the rectangular shape could be quite at odds with plausible likelihood contours. This is especially true if the estimates for parameter components are quite highly correlated, when also the Bonferroni method is more conservative.

One simple possibility for a joint bootstrap confidence region when T is approximately normal is to base it on the quadratic form

Q = (T − θ)^T V^{−1} (T − θ),    (5.60)

where V is the estimated variance matrix of T. Note that Q is the multivariate extension of the square of the studentized statistic of Section 5.2. If Q had exact p quantiles a_p, say, then a 1 − α confidence set for θ would be

{ θ : (T − θ)^T V^{−1} (T − θ) ≤ a_{1−α} }.    (5.61)



The elliptical shape of this set is correct if the distribution of T has elliptical contours, as the multivariate normal distribution does. So if T is approximately multivariate normal, then the shape will be approximately correct. Moreover, Q will be approximately distributed as a χ²_d variable. But as in the scalar case such distributional approximations will often be unreliable, so it makes sense to approximate the distribution of Q, and in particular the required quantile a_{1−α}, by resampling. The method then becomes completely analogous to the studentized bootstrap method for scalar parameters. The bootstrap analogue of Q will be

Q* = (T* − t)^T V*^{−1} (T* − t),

which will be calculated for each of R simulated samples. If we denote the ordered bootstrap values by q*_(1) ≤ ...

... c_{2n,1−α}/(2n). Verify that the bootstrap adjustment of this limit gives the exact upper 1 − α limit 2nȳ/c_{2n,α}. (Section 5.6; Beran, 1987; Hinkley and Shi, 1989)

14

Show how to make a bootstrap adjustment of the studentized bootstrap confidence limit method for a scalar parameter. (Section 5.6)

c_{2n,α} is the α quantile of the χ²_{2n} distribution.


15

For an equi-tailed (1 − 2α) confidence interval, the ideal endpoints are t + β with values of β solving (3.31) with

h(F̂, F; β) = I{ t(F̂) − t(F) ≤ β } − α,    h(F̂, F; β) = I{ t(F̂) − t(F) ≤ β } − (1 − α).

Suppose that the bootstrap solutions are denoted by β̂_α and β̂_{1−α}, and that in the language of Section 3.9.1 the adjustments b(F̂, γ) are β̂_{α+γ1} and β̂_{1−α+γ2}. Show how to estimate γ1 and γ2, and verify that these adjustments modify coverage 1 − 2α + O(n^{−1}) to 1 − 2α + O(n^{−2}). (Sections 3.9.1, 5.6; Hall and Martin, 1988) 16

Suppose that D is an approximate ancillary statistic and that we want to estimate the conditional probability G(u | d) = Pr(T − θ ≤ u | D = d) using R simulated values (t*_r, d*_r). One smooth estimate is the kernel estimate

Ĝ(u | d) = Σ_{r=1}^{R} I{ t*_r − t ≤ u } w{ h^{−1}(d*_r − d) } / Σ_{r=1}^{R} w{ h^{−1}(d*_r − d) },

where w(·) is a density symmetric about zero and h is an adjustable bandwidth. Investigate the bias and variance of this estimate in the case where (T, D) is approximately bivariate normal and w(·) = φ(·), the standard normal density.



Bootstrap methods

Corresponding results can be obtained for bootstrap resampling methods. The bootstrap estimate of aggregate prediction error (6.51) becomes

Δ̂_B(M) = n^{−1} RSS_M + R^{−1} Σ_{r=1}^{R} { n^{−1} Σ_{j=1}^{n} ( y_j − x^T_{Mj} β̂*_{M,r} )² − n^{−1} RSS*_{M,r} },    (6.59)

where the second term on the right-hand side is an estimate of the expected excess error defined in (6.42). The resampling scheme can be either case resampling or error resampling, with x*_{Mj,r} = x_{Mj} for the latter. It turns out that minimization of Δ̂_B(M) behaves much like minimization of the leave-one-out cross-validation estimate, and does not lead to a consistent choice of true model as n → ∞. However, there is a modification of Δ̂_B(M), analogous to that made for the cross-validation procedure, which does produce a consistent model selection procedure. The modification is to make simulated datasets be of size n − m rather than n, such that m/n → 1 and n − m → ∞ as n → ∞. Also, we replace the estimate (6.59) by the simpler bootstrap estimate

Δ̂_B(M) = R^{−1} Σ_{r=1}^{R} n^{−1} Σ_{j=1}^{n} ( y_j − x^T_{Mj} β̂*_{M,r} )²,    (6.60)

which is a generalization of (6.49). (The previous doubts about this simple estimate are less relevant for small n − m.) If case resampling is used, then n − m cases are randomly selected from the full set of n. If model-based resampling is used, the model being M with assumed homoscedasticity of errors, then X*_M is a random selection of n − m rows from X_M and the n − m errors ε* are randomly sampled from the n mean-corrected modified residuals r_{Mj} − r̄_M for model M. Bearing in mind the general advice that the number of simulated datasets should be at least R = 100 for estimating second moments, we should use at least that many here. The same R bootstrap resamples are used for each model M, as with the cross-validation procedure.

One major practical difficulty that is shared by the consistent cross-validation and bootstrap procedures is that fitting all candidate models to small subsets of data is not always possible. What empirical evidence there is concerning good choices for m/n suggests that this ratio should be about 2/3. If so, then in many applications some of the R subsets will have singular designs X*_M for big models, unless subsets are balanced by appropriate stratification on covariates in the resampling procedure.

Example 6.12 (Nuclear power stations) In Examples 6.8 and 6.10 our analyses focused on a linear regression model that includes six of the p = 10 covariates available. Three of these covariates — date, log(cap) and NE — are highly



Figure 6.12 Aggregate prediction error estimates for sequence of models fitted to nuclear power stations data; see text. Leave-one-out cross-validation (solid line), bootstrap with R = 100 resamples of size 32 (dashed line) and 16 (dotted line).


significant, all others having P-values of 0.1 or more. Here we consider the selection of variables to include in the model. The total number of possible models, 2^10 = 1024, is prohibitively large, and for the purposes of illustration we consider only the particular sequence of models in which variables enter in the order date, log(cap), NE, CT, log(N), PT, T1, T2, PR, BW: the first three are the highly significant variables.

Figure 6.12 plots the leave-one-out cross-validation estimates and the bootstrap estimates (6.60) with R = 100 of aggregate prediction error for the models with 0, 1, ..., 10 covariates. The two estimates are very close, and both are minimized when six covariates are included (the six used in Examples 6.8 and 6.10). Selection of five or six covariates, rather than fewer, is quite clearcut. These results bear out the rough rule-of-thumb that variables are selected by cross-validation if they are significant at roughly the 0.1 level.

As the previous discussion would suggest, use of corresponding cross-validation and bootstrap estimates from training sets of size 20 or less is precluded because for training sets of such sizes the models with more than five covariates are frequently unidentifiable. That is, the unbalanced nature of the covariates, coupled with the binary nature of some of them, frequently leads to singular resample designs. Figure 6.12 includes bootstrap estimates for models with up to five covariates and training set of size 16: these results were obtained by omitting many singular resamples. These rather fragmentary results confirm that the model should include at least five covariates.

A useful lesson from this is that there is a practical obstacle to what in theory is a preferred variable selection procedure. One way to try to overcome


(Figure 6.13 panels: cv with resample sizes 10, 20, 30; leave-one-out cv; bootstrap with resample sizes 10, 20, 30, 50.)
this difficulty is to stratify on the binary covariates, but is difficult to implement and does not work well here. ■

Example 6.13 (Simulation exercise) In order to assess the variable selection procedures without the complication of singular resample designs, we consider a small simulation exercise in which procedures are applied to ten datasets simulated from a given model. There are p = 5 independent covariates, whose values are sampled from the uniform distribution on [0, 1], and responses y are generated by adding N(0,1) variates to the means μ = x^T β. The cases we examine have sample size n = 50, and β3 = β4 = β5 = 0, so the true model includes an intercept and two covariate terms. To simplify calculations only six models are fitted, by successively adding x1, ..., x5 to an initial model with constant intercept. All resampling calculations are done with R = 100 samples. The number of datasets is admittedly small, but sufficient to make rough comparisons of performance.

The main results concern models with β1 = β2 = 2, which means that the two non-zero coefficients are about four standard errors away from zero. Each panel of Figure 6.13 shows, for the ten datasets, one variable selection criterion plotted against the number of covariates included in the model. Evidently the clearest indications of the true model occur when training set size is 10 or 20. Larger training sets give flat profiles for the criterion, and more frequent selection of overfitted models. These indications match the evidence from more extensive simulations, which suggest that if training set size n − m is about n/3 then the probability of correct model selection is 0.9 or higher, compared to 0.7 or less for leave-one-out cross-validation.

Further results were obtained with β1 = 2 and β2 = 0.5, the latter equal to one standard error away from zero.
In this situation underfitting — failure to

Figure 6.13 Cross-validation and bootstrap estimates of aggregate prediction error for sequence of six models fitted to ten datasets of size n = 50 with p = 5 covariates. The true model includes only two covariates.



include x2 in the selected model — occurred quite frequently even when using training sets of size 20. This degradation of variable selection procedures when coefficients are smaller than two standard errors is reputed to be typical.



The theory used to justify the consistent cross-validation and boo tstrap procedures m ay depend heavily on the assum ptions th at the dim ension o f the true m odel is small com pared to the num ber o f cases, and th a t the non-zero regression coefficients are all large relative to their stan d ard errors. It is possible th a t leave-one-out cross-validation m ay w ork well in certain situations where m odel dim ension is com parable to num ber o f cases. This w ould be im p o rtan t, in light o f the very clear difficulties o f using small training sets w ith typical applications, such as Exam ple 6.12. Evidently fu rther work, b o th theoretical an d em pirical, is necessary to find broadly applicable variable selection m ethods.

6.5 Robust Regression

The use of least squares regression estimates is preferred when errors are near-normal in distribution and homoscedastic. However, the estimates are very sensitive to outliers, that is cases which deviate strongly from the general relationship. Also, if errors have a long-tailed distribution (possibly due to heteroscedasticity), then least squares estimation is not an efficient method. Any regression analysis should therefore include appropriate inspection of diagnostics based on residuals to detect outliers, and to determine if a normal assumption for errors is reasonable. If the occurrence of outliers does not cause a change in the regression model, then they will likely be omitted from the fitting of that model. Depending on the general pattern of residuals for the remaining cases, we may feel confident in fitting by least squares, or we may choose to use a more robust method to be safe. Essentially the resampling methods that we have discussed previously in this chapter can be adapted quite easily for use with many robust regression methods. In this section we briefly review some of the main points.

Perhaps the most important point is that gross outliers should be removed before final regression analysis, including resampling, is undertaken. There are two reasons for this. The first is that methods of fitting that are resistant to outliers are usually not very efficient, and may behave badly under resampling. The second reason is that outliers can be disruptive to resampling analysis of methods such as least squares that are not resistant to outliers. For model-based resampling, the error distribution will be contaminated and in the resampling the outliers can then occur at any x values. For case resampling, outlying cases will occur with variable frequency and make the bootstrap estimates of coefficients too variable; see Example 6.4.
The effects can be diagnosed from the jackknife-after-bootstrap plots of Section 3.10.1 or similarly informative diagnostic plots, but such plots can fail to show the occurrence of multiple outliers. For datasets with possibly multiple outliers, diagnosis is aided by initial use of a fitting method that is highly resistant to the effects of outliers. One preferred resistant method is least trimmed squares, which minimizes

Σ_{j=1}^{m} e²_{(j)}(β),   (6.61)

the sum of the m smallest squares of deviations e_j(β) = y_j − x_j^T β. Usually m is taken to be [n/2] + 1, where [·] denotes integer part. Residuals from the least trimmed squares fit should clearly identify outliers. The fit itself is not very efficient, and should best be thought of as an initial step in a more efficient analysis. (It should be noted that in some implementations of least trimmed squares, local minima of (6.61) may be found far away from the global minimum.)

Table 6.11 Survival data (Efron, 1988).

Dose (rads)   Survival %
117.5         44.000   55.000
235.0         16.000   13.000
470.0          4.000    1.960   6.120
705.0          0.500    0.320
940.0          0.110    0.015   0.019
1410           0.700    0.006

Example 6.14 (Survival proportions) The data in Table 6.11 and the left panel of Figure 6.14 are survival percentages for rats at a succession of doses of radiation, with two or three replicates at each dose. The theoretical relationship between survival rate and dose is exponential, so linear regression applies to

x = dose,

y = log(survival percentage).

The right panel of Figure 6.14 plots these variables. There is a clear outlier, case 13, at x = 1410. The least squares estimate of slope is −59 × 10⁻⁴ using all the data, changing to −78 × 10⁻⁴ with standard error 5.4 × 10⁻⁴ when case 13 is omitted. The least trimmed squares estimate of slope is −69 × 10⁻⁴.

From the scatter plot it appears that heteroscedasticity may be present, so we resample cases. The effect of the outlier on the resample least squares estimates is illustrated in Figure 6.15, which plots R = 200 bootstrap least squares slopes β̂₁* against the corresponding values of Σ(x_j* − x̄*)², differentiated by the frequency with which case 13 appears in the resample. There are three distinct groups of bootstrapped slopes, with the lowest corresponding to resamples in which case 13 does not occur and the highest to samples where it occurs twice or more. A jackknife-after-bootstrap plot would clearly reveal the effect of case 13. The resampling standard error of β̂₁* is 15.3 × 10⁻⁴, but only 7.6 × 10⁻⁴ for

Figure 6.14 Scatter plots of survival data.

Figure 6.15 Bootstrap estimates of slope and design sum-of-squares Σ(x_j* − x̄*)² (×10⁵), differentiated by frequency of case 13 (appears zero, one or more times), for case resampling with R = 200 from survival data.

samples without case 13. The corresponding resampling standard errors of the least trimmed squares slope are 20.5 × 10⁻⁴ and 18.0 × 10⁻⁴, showing both the resistance and inefficiency of the least trimmed squares method. ■
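The calculations in this example are easy to reproduce from Table 6.11. The sketch below (our own Python/numpy code, not from the book) computes the least squares slopes with and without case 13, a simple random-start concentration version of least trimmed squares, and the case-resampling bootstrap standard error of the slope.

```python
import numpy as np

rng = np.random.default_rng(7)

# Table 6.11: dose (rads) and percentage survival, 14 cases; case 13 is (1410, 0.700)
dose = np.array([117.5, 117.5, 235.0, 235.0, 470.0, 470.0, 470.0,
                 705.0, 705.0, 940.0, 940.0, 940.0, 1410.0, 1410.0])
surv = np.array([44.0, 55.0, 16.0, 13.0, 4.0, 1.96, 6.12,
                 0.5, 0.32, 0.11, 0.015, 0.019, 0.700, 0.006])
y = np.log(surv)                      # y = log(survival percentage)
X = np.column_stack([np.ones_like(dose), dose])

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def lts(X, y, m, nstarts=200):
    """Least trimmed squares by random elemental starts plus concentration steps."""
    n = len(y)
    best, best_val = None, np.inf
    for _ in range(nstarts):
        sub = rng.choice(n, size=X.shape[1], replace=False)
        try:
            b = np.linalg.solve(X[sub], y[sub])
        except np.linalg.LinAlgError:
            continue                              # singular elemental subset
        for _ in range(10):                       # concentration steps
            keep = np.argsort((y - X @ b) ** 2)[:m]
            b = ols(X[keep], y[keep])
        val = np.sort((y - X @ b) ** 2)[:m].sum()
        if val < best_val:
            best, best_val = b, val
    return best

slope_all = ols(X, y)[1]                                         # about -59e-4
slope_no13 = ols(np.delete(X, 12, axis=0), np.delete(y, 12))[1]  # about -78e-4
slope_lts = lts(X, y, m=len(y) // 2 + 1)[1]                      # m = [n/2] + 1

# case resampling: bootstrap standard error of the least squares slope
R = 200
boot = np.empty(R)
for r in range(R):
    idx = rng.integers(0, len(y), len(y))
    boot[r] = ols(X[idx], y[idx])[1]
se_boot = boot.std(ddof=1)
```

The random restarts guard against the local minima of (6.61) that the text warns about; production implementations use more refined search strategies.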

Example 6.15 (Salinity data) The data in Table 6.12 are n = 28 observations on the salinity of water in Pamlico Sound, North Carolina. The response in the second column is the bi-weekly average of salinity. The next three columns contain values of the covariates, respectively a lagged value of salinity, a trend indicator, and the river discharge. We consider a linear regression model with these three covariates.

Table 6.12 Salinity data (Ruppert and Carroll, 1980).

Case   Salinity   Lagged salinity   Trend indicator   River discharge
       sal        lag               trend             dis
 1     7.6        8.2               4                 23.01
 2     7.7        7.6               5                 22.87
 3     4.3        4.6               0                 26.42
 4     5.9        4.3               1                 24.87
 5     5.0        5.9               2                 29.90
 6     6.5        5.0               3                 24.20
 7     8.3        6.5               4                 23.22
 8     8.2        8.3               5                 22.86
 9     13.2       10.1              0                 22.27
10     12.6       13.2              1                 23.83
11     10.4       12.6              2                 25.14
12     10.8       10.4              3                 22.43
13     13.1       10.8              4                 21.79
14     12.3       13.1              5                 22.38
15     10.4       13.3              0                 23.93
16     10.5       10.4              1                 33.44
17     7.7        10.5              2                 24.86
18     9.5        7.7               3                 22.69
19     12.0       10.0              0                 21.79
20     12.6       12.0              1                 22.04
21     13.6       12.1              4                 21.03
22     14.1       13.6              5                 21.01
23     13.5       15.0              0                 25.87
24     11.5       13.5              1                 26.29
25     12.0       11.5              2                 22.93
26     13.0       12.0              3                 21.31
27     14.1       13.0              4                 20.77
28     15.1       14.1              5                 21.39

The initial least squares analysis gives coefficients 0.78, −0.03 and −0.30, with intercept 9.70. The usual standard error for the trend coefficient is 0.16, so this coefficient would be judged not nearly significant. However, this fit is suspect, as can be seen not from the Q-Q plot of modified residuals but from the plot of cross-validation residuals versus leverages, where case 16 stands out as an outlier — due apparently to its unusual value of dis. The outlier is much more easily detected using the least trimmed squares fit, which has the quite different coefficient values 0.61, −0.15 and −0.86 with intercept 24.72: the residual of case 16 from this fit has standardized value 6.9. Figure 6.16 shows normal Q-Q plots of standardized residuals from the least squares (left panel) and least trimmed squares fits (right panel); for the latter the scale factor is taken to be the median absolute residual divided by 0.6745, the value appropriate for estimating the standard deviation of normal errors.

Application of standard algorithms for least trimmed squares with default settings can give very different, incorrect solutions.


Figure 6.16 Salinity data: standardized residuals from least squares (left) and least trimmed squares (right) fits using all cases.


There is some question as to whether the outlier is really aberrant, or simply reflects the need for a quadratic term in dis. ■

Robust methods

We suppose now that outliers have been isolated by diagnostic plots and set aside from further analysis. The problem now is whether or not that analysis should use least squares estimation: if there is evidence of a long-tailed error distribution, then we should downweight large deviations y_j − x_j^T β by using a robust method. Two main options for this are now described.

One approach is to minimize not sums of squared deviations but sums of absolute values of deviations, Σ |y_j − x_j^T β|, so giving less weight to those cases with the largest errors. This is the L₁ method, which generalizes — and has efficiency comparable to — the sample median estimate of a population mean. There is no simple expression for the approximate variance of L₁ estimators.

More efficient is M-estimation, which is analogous to maximum likelihood estimation. Here the coefficient estimates β̂ for a multiple linear regression solve the estimating equation

Σ_{j=1}^{n} x_j ψ{(y_j − x_j^T β)/s} = 0,   (6.62)

where ψ(z) is a bounded replacement for z, and s is either the solution to a simultaneous estimating equation, or is fixed in advance. We choose the latter, taking s to be the median absolute deviation (divided by 0.6745) from the least trimmed squares regression fit. The solution to (6.62) is obtained by iterative weighted least squares, for which least trimmed squares estimates are good starting values.
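The iterative weighted least squares solution of (6.62) can be sketched as follows (Python/numpy; the toy data are ours, and for simplicity the fixed scale s is taken from the least squares residuals rather than from a least trimmed squares fit as the text recommends):

```python
import numpy as np

def huber_psi(z, c=1.345):
    # Huber's winsorizing function psi(z) = z min(1, c/|z|)
    return np.clip(z, -c, c)

def m_estimate(X, y, s, c=1.345, niter=50):
    """Solve sum_j x_j psi{(y_j - x_j^T beta)/s} = 0, as in (6.62), by
    iterative weighted least squares with the scale s fixed in advance."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # least squares start
    for _ in range(niter):
        e = (y - X @ beta) / s
        w = np.where(e == 0, 1.0, huber_psi(e, c) / e)   # psi(e)/e as case weights
        XW = X * w[:, None]
        beta = np.linalg.solve(X.T @ XW, XW.T @ y)       # weighted least squares
    return beta

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(30)
y[-1] += 5.0                                         # one gross outlier

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta_ls
s = np.median(np.abs(e - np.median(e))) / 0.6745     # crude fixed scale for the sketch
beta_m = m_estimate(X, y, s)
```

The outlier pulls the least squares slope well away from its true value of 2, while the M-estimate, which bounds the outlier's influence through ψ, stays close to it.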

With a careful choice of ψ(·), M-estimates should have smaller standard errors than least squares estimates for long-tailed distributions of random errors ε, yet have comparable standard errors should those errors be homoscedastic normal. One standard choice is ψ(z) = z min(1, c/|z|), Huber's winsorizing function, for which the coefficient estimates have approximate efficiency 95% relative to least squares estimates for homoscedastic normal errors when c = 1.345.

For large sample sizes M-estimates β̂ are approximately normal in distribution, with approximate variance

var(β̂) ≈ σ² E{ψ²(ε/σ)} / [E{ψ′(ε/σ)}]² (X^T X)⁻¹.

… is equivalent to β̂/u*^{1/2} > β̂/v^{1/2}. This confirms that the P-value of the permutation test is unaffected by studentizing. (Section 6.2.5)


For least squares regression, model-based resampling gives a bootstrap estimator β̂* which satisfies

β̂* = β̂ + (X^T X)⁻¹ Σ_{j=1}^{n} x_j ε_j*,

where the ε_j* are randomly sampled modified residuals. An alternative proposal is to bypass the resampling model for data and to define directly

β̂** = β̂ + (X^T X)⁻¹ Σ_{j=1}^{n} u_j*,

where the u_j* are randomly sampled from the vectors u_j = x_j(y_j − x_j^T β̂), j = 1, ..., n. Show that under this proposal β̂** has mean β̂ and variance equal to the robust variance estimate (6.26). Examine, theoretically or through numerical examples, to what extent the skewness of β̂** matches the skewness of β̂*. (Section 6.3.1; Hu and Zidek, 1995)

For the linear regression model y = Xβ + ε, the improved version of the robust estimate of variance for the least squares estimates β̂ is

v_rob = (X^T X)⁻¹ X^T diag(r₁², ..., r_n²) X (X^T X)⁻¹,

where r_j is the jth modified residual. If the errors have equal variances, then the usual variance estimate v = s²(X^T X)⁻¹ would be appropriate and v_rob could be quite inefficient. To quantify this, examine the case where the random errors ε_j are independent N(0, σ²). Show first that E(r_j²) = σ². Hence show that the efficiency of the ith diagonal element of v_rob relative to the ith diagonal element of v, as measured by the ratio of their variances, is

b_ii² / {(n − p) g_i^T Q g_i},

where b_ii is the ith diagonal element of (X^T X)⁻¹, g_i^T = (d_{i1}², ..., d_{in}²) with D = (X^T X)⁻¹ X^T, and Q has elements (δ_jk − h_jk)²/{(1 − h_j)(1 − h_k)}, where δ_jk − h_jk is the (j, k)th element of I − H, H being the hat matrix with h_jj = h_j. Calculate this relative efficiency for a numerical example. (Sections 6.2.4, 6.2.6, 6.3.1; Hinkley and Wang, 1991)

The statistical function β(F) for M-estimation is defined by the estimating equation

∫ x ψ{(y − x^T β(F)) / a(F)} dF(x, y) = 0,

where a(F) is typically a robust scale parameter. Assume that the model contains an intercept, so that the covariate vector x includes the dummy variable 1. Use the technique of Problem 2.12 to show that the influence function for β(F) is

L_β(x, y) = {∫ x x^T ψ′(ε) dF(x, y)}⁻¹ σ x ψ(ε),

where ε = (y − x^T β)/σ and ψ′(u) denotes dψ(u)/du; it is assumed that xψ(ε) has mean zero. If the distribution of the covariate vector is taken to be the EDF of x₁, ..., x_n, show that

L_β(x, y) = nσ k⁻¹ (X^T X)⁻¹ x ψ(ε),

where X is the usual covariate matrix and k = E{ψ′(ε)}. Use the empirical version of this to verify the variance approximation

v_L = n s² (X^T X)⁻¹ Σ_j ψ²(e_j/s) / {Σ_j ψ′(e_j/s)}²,

where e_j = y_j − x_j^T β̂ and s is the estimated scale parameter. (Section 6.5)

Given raw residuals e₁, ..., e_n, define independent random variables ε_j* by (6.21). Show that the first three moments of ε_j* are 0, e_j², and e_j³.
(a) Let e₁, ..., e_n be raw residuals from the fit of a linear model y = Xβ + ε, and define bootstrap data by y* = Xβ̂ + ε*, where the elements of ε* are generated according to the wild bootstrap. Show that the bootstrap least squares estimates β̂* take at most 2ⁿ values, and that

E*(β̂*) = β̂,   var*(β̂*) = v_wild = (X^T X)⁻¹ X^T W X (X^T X)⁻¹,

where W = diag(e₁², ..., e_n²).
(b) Show that when all the errors have equal variances and the design is balanced, so that h_j = p/n, v_wild is negatively biased as an estimate of var(β̂).
(c) Show that for the simple linear regression model (6.1) the expected value of var*(β̂₁*) is

σ²(n − 1 − m₄/m₂²) / (n² m₂),

where m_r = n⁻¹ Σ(x_j − x̄)^r. Hence show that if the x_j are uniformly spaced and the errors have equal variances, the wild bootstrap variance estimate is too small by a factor of about 1 − 14/(5n).
(d) Show that if the e_j are replaced by r_j, the difficulties in (b) and (c) do not arise. (Sections 6.2.4, 6.2.6, 6.3.2)

Suppose that responses y₁, ..., y_n with n = 2m correspond to m independent samples of size two, where the ith sample comes from a population with mean μ_i and these means are of primary interest; the m population variances may differ. Use appropriate dummy variables x_i to express the responses in the linear model y = Xβ + ε, where β_i = μ_i. With parameters estimated by least squares, consider estimating the standard error of β̂_i by case resampling.
(a) Show that the probability of getting a simulated sample in which all the parameters are estimable is


(b) Consider constrained case resampling in which each of the m samples must be represented at least once. Show that the probability that there are r resample cases from the ith sample is …

… ŷ_{+,α}, ŷ_{+,1−α}, where ŷ_{+,p} satisfies …

λ = exp{β₀ − β₁ log(x − 5 + e^{β₄})},   κ = exp(β₂ − β₃ log x).   (7.16)

7.3 Survival Data

Table 7.6 Failure times (hours) from an accelerated life test on PET film in SF₆ gas insulated transformers (Hirose, 1993). > indicates right-censoring.

Voltage (kV)   Failure times (hours)
5      7131   8482   8559   8762   9026   9034   9104   >9104.25   >9104.25   >9104.25
7      50.25   87.75   87.76   87.77   92.90   92.91   95.96   108.30   108.30   117.90   123.90   124.30   129.70   135.60   135.60
10     15.17   19.87   20.18   21.50   21.88   22.23   23.02   23.90   28.17   29.70
15     2.40   2.42   3.17   3.75   4.65   4.95   6.23   6.68   7.30

This parametrization is chosen so that the range of each parameter is unbounded; note that x₀ = 5 − e^{β₄}. The upper panels of Figure 7.7 show the fit of this model when the parameters are estimated by maximizing the log likelihood ℓ. The left panel shows Q-Q plots for each of the voltages, and the right panel shows the fitted mean failure time and estimated threshold x̂₀. The fit seems broadly adequate.

We simulate replicate datasets by generating observations from the Weibull model obtained by substituting the MLEs into (7.16). In order to apply our assumed censoring mechanism, we sort the observations simulated with x = 5 to get y*(1) < ... < y*(10), say, and then set y*(8), y*(9), and y*(10) equal to y*(7) + 0.25. We give these three observations censoring indicators d* = 0, so that they are treated as censored, treat all the other observations as uncensored, and fit the Weibull model to the resulting data.

For sake of illustration, suppose that interest focuses on the mean failure time θ when x = 4.9. To facilitate this we reparametrize the model to have parameters θ and β = (β₁, ..., β₄), where θ = 10⁻³ λ Γ(1 + 1/κ) with x = 4.9; here Γ(v) is the Gamma function ∫₀^∞ u^{v−1} e^{−u} du. The lower left panel of Figure 7.7 shows the profile log likelihood for θ, that is,

ℓ_prof(θ) = max_β ℓ(θ, β);

in the figure we renormalize the log likelihood to have maximum zero. Under the standard large-sample likelihood asymptotics outlined in Section 5.2.1, the approximate distribution of the likelihood ratio statistic W(θ) = 2{ℓ_prof(θ̂) − ℓ_prof(θ)} is χ²₁, so a 1 − α confidence set for the true θ is the set of θ such that

ℓ_prof(θ) ≥ ℓ_prof(θ̂) − ½ c_{1,1−α},

where c_{ν,p} denotes the p quantile of the χ²_ν distribution.
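The chi-squared confidence set, and its check by parametric simulation, can be illustrated in miniature with a model simple enough to fit in one line: an exponential sample rather than the book's censored Weibull model (this stand-in is our own).

```python
import numpy as np

rng = np.random.default_rng(42)

def lik_ratio(y, theta):
    """W(theta) = 2{l(thetahat) - l(theta)} for an exponential sample
    with mean theta; here thetahat is the sample mean."""
    n, th = len(y), y.mean()
    def loglik(t):
        return -n * np.log(t) - y.sum() / t
    return 2.0 * (loglik(th) - loglik(theta))

y = rng.exponential(scale=10.0, size=20)
theta_hat = y.mean()

# parametric simulation from the fitted model: the 0.95 quantile of the
# simulated likelihood ratio statistics replaces the chi-squared value 3.84
R = 999
w_star = np.array([lik_ratio(rng.exponential(scale=theta_hat, size=20), theta_hat)
                   for _ in range(R)])
calib = np.quantile(w_star, 0.95)
```

The calibrated cutoff `calib` typically exceeds 3.84 slightly for small n, widening the confidence set in just the way seen for the PET data below.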


where θ̂ is the overall MLE. For these data θ̂ = 24.85 and the 95% confidence interval is [19.75, 35.53]; the confidence set contains values of θ for which ℓ_prof(θ) exceeds the dotted line in the bottom left panel of Figure 7.7.

The use of the chi-squared quantile to set the confidence interval presupposes that the sample is large enough for the likelihood asymptotics to apply, and this can be checked by the parametric simulation outlined above. The lower right panel of the figure is a Q-Q plot of likelihood ratio statistics w*(θ̂) = 2{ℓ*_prof(θ̂*) − ℓ*_prof(θ̂)} based on 999 sets of data simulated from the fitted model. The distribution of the w*(θ̂) is close to chi-squared, but with mean 1.12, and their 0.95 quantile is w*_{(950)} = 4.09, to be compared with c_{1,0.95} = 3.84. This gives as bootstrap calibrated 95% confidence interval the set of θ such that ℓ_prof(θ) ≥ ℓ_prof(θ̂) − ½ × 4.09, that is [19.62, 36.12], which is slightly wider than the standard interval.

Figure 7.7 PET reliability data analysis. Top left panel: Q-Q plot of log failure times against quantiles of log Weibull distribution, with fitted model given by dotted lines, and censored data by o. Top right panel: fitted mean failure time as a function of voltage x; the dotted line shows the estimated voltage x̂₀ below which failure is impossible. Lower left panel: normalized profile log likelihood for mean failure time θ at x = 4.9; the dotted line shows the 95% confidence interval for θ using the asymptotic chi-squared distribution, and the dashed line shows the 95% confidence interval using bootstrap calibration of the likelihood ratio statistic. Lower right panel: chi-squared Q-Q plot for simulated likelihood ratio statistic, with dotted line showing its large-sample distribution.

Table 7.7 Comparison of estimated biases and standard errors of maximum likelihood estimates for the PET reliability data, using standard first-order likelihood theory, parametric bootstrap simulation, and model-based nonparametric resampling.

Parameter   MLE      Likelihood       Parametric        Nonparametric
                     Bias     SE      Bias      SE      Bias      SE
β0          6.346    0        0.117    0.007    0.117    0.001    0.112
β1          1.958    0        0.082    0.007    0.082    0.006    0.080
β2          4.383    0        0.850    0.127    0.874    0.109    0.871
β3          1.235    0        0.388    0.022    0.393    0.022    0.393
x0          4.758    0        0.029   −0.004    0.030   −0.002    0.028

ℓ̈ is the matrix of second derivatives of ℓ with respect to θ and β.

Table 7.7 compares the bias estimates and standard errors for the model parameters using the parametric bootstrap described above and standard first-order likelihood theory, under which the estimated biases are zero, and the variance estimates are obtained as the diagonal elements of the inverse observed information matrix (−ℓ̈)⁻¹ evaluated at the MLEs. The estimated biases are small but significantly different from zero. The largest differences between the standard theory and the bootstrap results are for β₂ and β₃, for which the biases are of order 2-3%. The threshold parameter x₀ is well determined; the standard 95% confidence interval based on its asymptotic normal distribution is [4.701, 4.815], whereas the normal interval with estimated bias and variance is [4.703, 4.820].

A model-based nonparametric bootstrap may be performed by using residuals e = (y/λ̂)^κ̂, three of which are censored, then resampling errors ε* from their product-limit estimate, and then making uncensored bootstrap observations λ̂ ε*^{1/κ̂}. The observations with x = 5 are then modified as outlined above, and the model refitted to the resulting data. The product-limit estimate for the residuals is very close to the survivor function of the standard exponential distribution, so we expect this to give results similar to the parametric simulation, and this is what we see in Table 7.7.

For censoring at a pre-determined time c, the simulation algorithms would work as described above, except that values of y* greater than c would be replaced by c and the corresponding censoring indicators d* set equal to zero. The number of censored observations in each simulated dataset would then be random; see Practical 7.3.
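For a fixed censoring time c the scheme just described can be sketched with an exponential model (our own toy stand-in for the Weibull fit; for an exponential mean the censored-data MLE is total time on test divided by the number of failures):

```python
import numpy as np

rng = np.random.default_rng(3)

def censor(y0, c):
    # replace values greater than c by c, and set their censoring indicators to zero
    return np.minimum(y0, c), (y0 <= c).astype(int)

def exp_mle(t, d):
    # exponential mean from right-censored data: total time on test / number of failures
    return t.sum() / d.sum()

n, c = 40, 15.0
t, d = censor(rng.exponential(scale=10.0, size=n), c)
theta_hat = exp_mle(t, d)

# parametric resampling with censoring at the same pre-determined time c:
# the number of censored observations is now random from sample to sample
R = 500
theta_star = np.empty(R)
ncens = np.empty(R, dtype=int)
for r in range(R):
    ts, ds = censor(rng.exponential(scale=theta_hat, size=n), c)
    theta_star[r] = exp_mle(ts, ds)
    ncens[r] = n - ds.sum()

bias, se = theta_star.mean() - theta_hat, theta_star.std(ddof=1)
```

The varying values in `ncens` show the random censored count mentioned in the text, while `bias` and `se` are the bootstrap analogues of the entries in Table 7.7.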
Plots show that the simulated MLEs are close to normally distributed: in this case standard likelihood theory works well enough to give good confidence intervals for the parameters. The benefit of parametric simulation is that the bootstrap estimates give empirical evidence that the standard theory can be trusted, while providing alternative methods for calculating measures of uncertainty if the standard theory is unreliable. It is typical of first-order likelihood methods that the variability of likelihood quantities is underestimated, although here the effect is small enough to be unimportant. ■

Proportional hazards model

If it can be assumed that the explanatory variables act multiplicatively on the hazard function, an elegant and powerful approach to survival data analysis is possible. Under the usual form of proportional hazards model the hazard function for an individual with covariates x is dΛ(y) = exp(x^T β) dΛ°(y), where dΛ°(y) is the 'baseline' hazard function that would apply to an individual with a fixed value of x, often x = 0. The corresponding cumulative hazard and survivor functions are

Λ(y) = ∫₀^y exp(x^T β) dΛ°(u),   1 − F(y; β, x) = {1 − F°(y)}^{exp(x^T β)},

where 1 − F°(y) is the baseline survivor function for the hazard dΛ°(y). The regression parameters β are usually estimated by maximizing the partial likelihood, which is the product over cases with d_j = 1 of terms

exp(x_j^T β) / Σ_{k=1}^{n} H(y_k − y_j) exp(x_k^T β),   (7.17)

where H(u) equals zero if u < 0 and equals one otherwise. Since (7.17) is unaltered by recentring the x_j, we shall assume below that Σ x_j = 0; the baseline hazard then corresponds to the average covariate value x̄ = 0. In terms of the estimated regression parameters the baseline cumulative hazard function is estimated by the Breslow estimator

Λ̂°(y) = Σ_{j: y_j ≤ y} d_j / Σ_{k: y_k ≥ y_j} exp(x_k^T β̂);

then
2  set Y_j* = min(Y_j^{0*}, C_j*), with D_j* = 1 if Y_j* = Y_j^{0*} and zero otherwise.

The next example illustrates the use of these algorithms.
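The Breslow estimator can be coded directly; the sketch below (our own numpy implementation) simulates data with a standard exponential baseline hazard, for which Λ°(t) = t, and evaluates the estimator at the true β as a check.

```python
import numpy as np

def breslow(y, d, x, beta):
    """Baseline cumulative hazard: sum over failure times y_j <= t of
    d_j / sum_{k: y_k >= y_j} exp(x_k beta)."""
    order = np.argsort(y)
    y, d, risk = y[order], d[order], np.exp(x[order] * beta)
    denom = np.cumsum(risk[::-1])[::-1]      # risk-set sums of exp(x beta)
    return y[d == 1], np.cumsum(1.0 / denom[d == 1])

rng = np.random.default_rng(5)
n, beta = 200, 0.7
x = rng.standard_normal(n)
y0 = rng.exponential(scale=np.exp(-beta * x))   # hazard exp(beta x), baseline hazard 1
cens = rng.exponential(scale=2.0, size=n)       # random right-censoring
y, d = np.minimum(y0, cens), (y0 <= cens).astype(int)

times, cumhaz = breslow(y, d, x, beta)          # should track the line Lambda0(t) = t
```

For vector covariates the scalar product `x * beta` would be replaced by a matrix product, but the risk-set bookkeeping is unchanged.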


Example 7.6 (Melanoma data) To illustrate these ideas, we consider data on the survival of patients with malignant melanoma, whose tumours were removed by operation at the Department of Plastic Surgery, University Hospital of Odense, Denmark. Operations took place from 1962 to 1977, and patients were followed to the end of 1977. Each tumour was completely removed, together with about 2.5 cm of the skin around it. The following variables were available for 205 patients: time in days since the operation, possibly censored; status at the end of the study (alive, dead from melanoma, dead from other causes); sex; age; year of operation; tumour thickness in mm; and an indicator of whether or not the tumour was ulcerated. Ulceration and tumour thickness are important prognostic variables: to have a thick or ulcerated tumour substantially increases the chance of death from melanoma, and we shall investigate how they affect survival. We assume that censoring occurs at random.

We fit a proportional hazards model under the assumption that the baseline hazards are different for the ulcerated group of 90 individuals and the non-ulcerated group, but that there is a common effect of tumour thickness. For a flexible assessment of how thickness affects the hazard function, we fit a natural spline with four degrees of freedom; its knots are placed at the empirical 0.25, 0.5 and 0.75 quantiles of the tumour thicknesses. Thus our model is that the survivor functions for the ulcerated and non-ulcerated groups are

1 − F₁(y; β, x) = {1 − F₁°(y)}^{exp(x^T β)},   1 − F₂(y; β, x) = {1 − F₂°(y)}^{exp(x^T β)},

where x has dimension four and corresponds to the spline, β is common to the groups, but the baseline survivor functions 1 − F₁°(y) and 1 − F₂°(y) may differ. For illustration we take the fitted censoring distribution to be the product-limit estimate obtained by setting censoring indicators d' = 1 − d and fitting a model with no covariates, so Ĝ is just the product-limit estimate of the censoring time distribution.

The left panel of Figure 7.8 shows the estimated survivor functions 1 − F̂₁°(y) and 1 − F̂₂°(y); there is a strong effect of ulceration. The right panel shows how the linear predictor x^T β̂ depends on tumour thickness: from 0-3 mm the effect on the baseline hazard changes from about exp(−1) = 0.37 to about exp(0.6) = 1.8, followed by a slight dip and a gradual upward increase to a risk of about exp(1.2) = 3.3 for a tumour 15 mm thick. Thus the hazard increases by a factor of about 10, but most of the increase takes place from 0-3 mm. However, there are too few individuals with tumours more than 10 mm thick for reliable inferences at the right of the panel.

The top left panel of Figure 7.9 shows the original fitted linear predictor, together with 19 replicates obtained by resampling cases, stratified by ulceration. The lighter solid lines in the panel below are pointwise 95% confidence limits, based on R = 999 replicates of this sampling scheme. In effect these are percentile method confidence limits for the linear predictor at each thickness.

Figure 7.8 Fit of a proportional hazards model for ulcer histology and survival of patients with malignant melanoma (Andersen et al., 1993, pp. 709-714). Left panel: estimated baseline survivor functions for cases with ulcerated (dots) and non-ulcerated (solid) tumours. Right panel: fitted linear predictor x^T β̂ for risk as a function of tumour thickness. The lower rug is for non-ulcerated patients, and the upper rug for ulcerated patients.

The sharp increase in risk for small thicknesses is clearly a genuine effect, while beyond 3 mm the confidence interval for the linear predictor is roughly [0, 1], with thickness having little or no effect. Results from model-based resampling using the fitted model and applying Algorithm 7.3, and from conditional resampling using Algorithm 7.2, are also shown; they are very similar to the results from resampling cases. In view of the discussion in Section 3.5, we did not apply the weird bootstrap.

The right panels of Figure 7.9 show how the estimated 0.2 quantile of the survival distribution,

ŷ₀.₂ = min{y : F̂₁(y; β̂, x) ≥ 0.2},

depends on tumour thickness. There is an initial sharp decrease from 3000 days to about 750 days as tumour thickness increases from 0-3 mm, but the estimate is roughly constant from then on. The individual estimates are highly variable, but the degree of uncertainty mirrors roughly that in the left panels. Once again results for the three resampling schemes are very similar.

Unlike the previous example, where resampling and standard likelihood methods led to similar conclusions, this example shows the usefulness of resampling when standard approaches would be difficult or impossible to apply. ■

7.4 Other Nonlinear Models

A nonlinear regression model with independent additive errors is of form

y_j = μ(x_j, β) + ε_j,   j = 1, ..., n,   (7.20)


Figure 7.9 Bootstrap results for melanoma data analysis. Top left: fitted linear predictor (heavy solid) and 19 replicates from case resampling (solid); the rug shows observed thicknesses. Top right: estimated 0.2 quantile of survivor distribution as a function of tumour thickness, for an individual with an ulcerated tumour (heavy solid), and 19 replicates for case resampling (solid); the rug shows observed thicknesses. Bottom left: pointwise 95% percentile confidence limits for linear predictor, from case (solid), model-based (dots), and conditional (dashes) resampling. Bottom right: pointwise 95% percentile confidence limits for 0.20 quantile of survivor distribution, from case (solid), model-based (dots), and conditional (dashes) resampling, R = 999.

o g TD ok> Q.

(0 -p

This defines an iteration that starts at β† obtained using a linear regression least squares fit, and at the final iteration β† = β̂. At that stage the left-hand side of (7.21) is simply the residual e_j = y_j − μ(x_j, β̂). Approximate leverage values and other diagnostics are obtained from the linear approximation, that is using the definitions in previous sections but with the u_j evaluated at β† = β̂ as the values of explanatory variable vectors. This use of the linear approximation can give misleading results, depending upon the "intrinsic curvature" of the regression surface. In particular, the residuals will no longer have zero expectation in general, and standardized residuals r_j will no longer have constant variance under homoscedasticity of true errors.

The usual normal approximation for the distribution of β̂ is also based on the linear approximation. For the approximate variance, (6.24) applies with X replaced by U = (u₁, ..., u_n)^T evaluated at β̂. So with s² equal to the residual mean square, we have

β̂ − β ≈ N(0, s²(U^T U)⁻¹).   (7.22)

The accuracy of this approximation will depend upon two types of curvature effects, called parameter effects and intrinsic effects. The first of these is specific to the parametrization used in expressing μ(x, ·), and can be reduced by careful choice of parametrization. Of course resampling methods will be the more useful the larger are the curvature effects, and the worse the normal approximation.

Resampling methods apply here just as with linear regression, either simulating data from the fitted model with resampled modified residuals or by resampling cases. For the first of these it will generally be necessary to make a mean adjustment to whatever residuals are being used as the error population. It would also be generally advisable to correct the raw residuals for bias due to nonlinearity: we do not show how to do this here.

Example 7.7 (Calcium uptake data) The data plotted in Figure 7.10 show the calcium uptake of cells, y, as a function of time x after being suspended in a solution of radioactive calcium. Also shown is the fitted curve

μ(x, β) = β₀{1 − exp(−β₁x)}.

The least squares estimates are β̂₀ = 4.31 and β̂₁ = 0.209, and the estimate of σ is 0.55 with 25 degrees of freedom. The standard errors for β̂₀ and β̂₁ based on (7.22) are 0.30 and 0.039.
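The iteration described above is easy to code directly; below is a sketch for the model μ(x, β) = β₀{1 − exp(−β₁x)} with synthetic data (the design points, true values and error standard deviation are our own choices, since the calcium data themselves are not listed in the text).

```python
import numpy as np

rng = np.random.default_rng(2)

def mu(x, b):
    return b[0] * (1.0 - np.exp(-b[1] * x))

def grad(x, b):
    # rows u_j = d mu / d beta, the working covariates of the linearization
    e = np.exp(-b[1] * x)
    return np.column_stack([1.0 - e, b[0] * x * e])

def gauss_newton(x, y, b, niter=100):
    # each step regresses the current residuals on the local linearization
    for _ in range(niter):
        step, *_ = np.linalg.lstsq(grad(x, b), y - mu(x, b), rcond=None)
        b = b + step
    return b

x = np.repeat([0.45, 1.3, 2.4, 4.0, 6.1, 8.05, 11.15, 13.15, 15.0], 3)
y = mu(x, [4.3, 0.2]) + 0.5 * rng.standard_normal(x.size)

beta_hat = gauss_newton(x, y, np.array([4.0, 0.1]))
U = grad(x, beta_hat)
s2 = np.sum((y - mu(x, beta_hat)) ** 2) / (x.size - 2)
se = np.sqrt(s2 * np.diag(np.linalg.inv(U.T @ U)))   # standard errors via (7.22)
```

At convergence U plays the role of the design matrix, so either model-based resampling of modified residuals or case resampling can be wrapped around this fit exactly as for linear regression.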

7 • Further Topics in Regression


Figure 7.10 Calcium uptake data and fitted curve (left panel), with raw residuals (right panel) (Rawlings, 1988, p. 403).


Table 7.8 Estimates, bootstrap biases, and theoretical and bootstrap standard errors for the calcium uptake data.

          Estimate   Bootstrap bias   Theoretical SE   Bootstrap SE
    β̂₀    4.31       0.028            0.30             0.38
    β̂₁    0.209      0.004            0.039            0.040

The right panel of Figure 7.10 shows that homogeneity of variance is slightly questionable here, so we resample cases by stratified sampling. Estimated biases and standard errors for β̂₀ and β̂₁ based on 999 bootstrap replicates are given in Table 7.8. The main point to notice is the appreciable difference between theoretical and bootstrap standard errors for β̂₀. Figure 7.11 illustrates the results. Note the non-elliptical pattern of variation and the non-normality: the z-statistics are also quite non-normal. In this case the bootstrap should give better results for confidence intervals than normal approximations, especially for β̂₀. The bottom right panel suggests that the parameter estimates are closer to normal on logarithmic scales. Results for model-based resampling assuming homoscedastic errors are fairly similar, although the standard error for β̂₀ is then 0.32. The effects of nonlinearity are negligible in this case: for example, the maximum absolute bias of residuals is about 0.012. ■

7.6 Nonparametric Regression

A simple kernel estimate of μ(x) is the locally weighted average

    μ̂(x) = Σ_j y_j w{(x − x_j)/b} / Σ_j w{(x − x_j)/b},    (7.24)
with w(·) a symmetric density function and b an adjustable "bandwidth" constant that determines how widely the averaging is done. This estimate is similar in many ways to the kernel density estimate discussed in Example 5.13, and as there the choice of b depends upon a trade-off between bias and variability of the estimate: small b gives small bias and large variance, whereas large b has the opposite effects. Ideally b would vary with x, to reflect large changes in the derivative of μ(x) and heteroscedasticity, both evident in Figure 7.14. Modifications to the estimate (7.24) are needed at the ends of the x range, to avoid the inherent bias when there is little or no data on one side of x.

In many ways more satisfactory are the local regression methods, where a local linear or quadratic curve is fitted using weights w{(x − x_j)/b} as above, and then μ̂(x) is taken to be the fitted value at x. Implementations of this idea include the lowess method, which also incorporates trimming of outliers. Again the choice of b is critical.

A different approach is to define a curve in terms of basis functions, such as powers of x which define polynomials. The fitted model is then a linear combination of basis functions, with coefficients determined by least squares regression. Which basis to use depends on the application, but polynomials are


generally bad because fitted values become increasingly variable as x moves toward the ends of its data range: polynomial extrapolation is notoriously poor. One popular choice for basis functions is cubic splines, with which μ(x) is modelled by a series of cubic polynomials joined at "knot" values of x, such that the curve has continuous second derivatives everywhere. The least squares cubic spline fit minimizes the penalized least squares criterion for fitting μ(x),

    Σ_j {y_j − μ(x_j)}² + λ ∫ {μ″(x)}² dx;

weighted sums of squares can be used if necessary. In most software implementations the spline fit can be determined either by specifying the degrees of freedom of the fitted curve, or by applying cross-validation (Section 6.4.1). A spline fit will generally be biased, unless the underlying curve is in fact a cubic. That such bias is nearly always present for nonparametric curve fits can create difficulties. The other general feature that makes interpretation difficult is the occurrence of spurious bumps and bends in the curve estimates, as we shall see in Example 7.10.

Resampling methods

Two types of applications of nonparametric curves are use in checking a parametric curve, and use in setting confidence limits for μ(x) or prediction limits for Y = μ(x) + ε at some values of x. The first type is quite straightforward, because data would be simulated from the fitted parametric model: Example 7.11 illustrates this. Here we look briefly at confidence limits and prediction limits, where the nonparametric curve is the only "model". The basic difficulty for resampling here is similar to that with density estimation, illustrated in Example 5.13, namely bias. Suppose that we want to calculate a confidence interval for μ(x) at one or more values of x.
Case resampling cannot be used with standard recommendations for nonparametric regression, because the resampling bias of μ̂*(x) will be smaller than that of μ̂(x). This could probably be corrected, as with density estimation, by using a larger bandwidth or equivalent tuning constant. But simpler, at least in principle, is to apply the idea of model-based resampling discussed in Chapter 6.

The naive extension of model-based resampling would generate responses y_j* = μ̂(x_j) + ε_j*, where μ̂(x_j) is the fitted value from some nonparametric regression method, and ε_j* is sampled from appropriately modified versions of the residuals y_j − μ̂(x_j). Unfortunately the inherent bias of most nonparametric regression methods distorts both the fitted values and the residuals, and thence biases the resampling scheme. One recommended strategy is to use as simulation model a curve that is oversmoothed relative to the usual estimate. For definiteness, suppose that we are using a kernel method or a local smoothing method with tuning constant b, and that we use cross-validation


to determine the best value of b. Then for the simulation model we use the corresponding curve with, say, 2b as the tuning constant. To try to eliminate bias from the simulation errors ε_j*, we use residuals from an undersmoothed curve, say with tuning constant b/2. As with linear regression, it is appropriate to use modified residuals, where leverage is taken into account as in (6.9). This is possible for most nonparametric regression methods, since they are linear. Detailed asymptotic theory shows that something along these lines is necessary to make resampling work, but there is no clear guidance as to precise relative values for the tuning constants.

Example 7.10 (Motorcycle impact data) The response y here is acceleration measured x milliseconds after impact in an accident simulation experiment. The full data were shown in Figure 7.14, but for computational reasons we eliminate replicates for the present analysis, which leaves n = 94 cases with distinct x values. The solid line in the top left panel of Figure 7.15 shows a cubic spline fit for the data of Figure 7.14, chosen by cross-validation and having approximately 12 degrees of freedom. The top right panel of the figure gives the plot of modified residuals against x for this fit. Note the heteroscedasticity, which broadly corresponds to the three strata separated by the vertical dotted lines. The estimated variances for these strata are approximately 4, 600 and 140. Reciprocals of these were used as weights for the spline fit in the left panel. Bias in these residuals is evident at times 10-15 ms, where the residuals are first mostly negative and then positive because the curve does not follow the data closely enough.
There is a rough correspondence between kernel smoothing and spline smoothing, and this, together with the previous discussion, suggests that for model-based resampling we use y_j* = μ̆(x_j) + ε_j*, where μ̆ is the spline fit obtained by doubling the cross-validation choice of λ. This fit is the dotted line in the top left panel of Figure 7.15. The random errors ε_j* are sampled from the modified residuals for another spline fit in which λ is half the cross-validation value. The lower right panel of the figure displays these residuals, which show less bias than those for the original fit, though perhaps a smaller bandwidth would be better still. The sampling is stratified, to reflect the very strong heteroscedasticity.

We simulated R = 999 datasets in this way, and to each fitted the spline curve μ̂*(x), with the bandwidth chosen by cross-validation each time. We then calculated 90% confidence intervals at six values of x, using the basic bootstrap method modified to equate the distributions of μ̂*(x) − μ̆(x) and μ̂(x) − μ(x). For example, at x = 20 the estimates μ̂ and μ̆ are respectively −110.8 and −106.2, and the 950th ordered value of μ̂* is −87.2, so that the upper confidence limit is −110.8 − {−87.2 − (−106.2)} = −129.8. The resulting confidence intervals are shown in the bottom left panel of Figure 7.15, together with the original fit.
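The oversmooth/undersmooth scheme just described can be sketched compactly. The code below is illustrative only: it substitutes a simple Gaussian kernel smoother of the form (7.24) for the spline fit, with the bandwidth b as the tuning constant, and it omits the leverage adjustment and stratification that the text recommends.

```python
import math, random

def smooth(x, y, x0, b):
    # Gaussian kernel smoother, as in (7.24)
    w = [math.exp(-0.5 * ((x0 - xj) / b) ** 2) for xj in x]
    return sum(wj * yj for wj, yj in zip(w, y)) / sum(w)

def resample_series(x, y, b, rng):
    """One model-based resample: oversmoothed fitted values (tuning 2b)
    plus residuals from an undersmoothed fit (tuning b/2),
    mean-adjusted and resampled with replacement."""
    fit_over = [smooth(x, y, xj, 2 * b) for xj in x]
    fit_under = [smooth(x, y, xj, b / 2) for xj in x]
    res = [yj - fj for yj, fj in zip(y, fit_under)]
    rbar = sum(res) / len(res)
    res = [r - rbar for r in res]          # mean adjustment
    return [fj + rng.choice(res) for fj in fit_over]

rng = random.Random(1)
x = [0.1 * k for k in range(40)]
y = [math.sin(xj) + 0.1 * rng.gauss(0, 1) for xj in x]
ystar = resample_series(x, y, b=0.3, rng=rng)
print(len(ystar))
```

Refitting the smoother to each such resampled series, with the tuning constant rechosen every time, gives the replicates μ̂*(x) used for the confidence limits.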



Show that Y has unconditional mean and variance (7.15) and express π and φ in terms of a and b. Express a and b in terms of π and φ, and hence explain how to generate data with mean and variance (7.15) by generating π from a beta distribution, and then, conditional on the probabilities, generating binomial variables with probabilities π and denominators m. How should your algorithm be amended to generate beta-binomial data with variance function Π(1 − Π)?
(Example 7.3)

6

For generalized linear models the analogue of the case-deletion result in Problem 6.2 is

    β̂_{−j} ≈ β̂ − (XᵀWX)⁻¹ x_j w_j^{1/2} r_{Pj} / (1 − h_j),

where r_{Pj} is the jth standardized Pearson residual and h_j the leverage.
(a) Use this to show that when the jth case is deleted the predicted value for y_j is


(b) Use (a) to give an approximation for the leave-one-out cross-validation estimate of prediction error for a binary logistic regression with cost (7.23).
(Sections 6.4.1, 7.2.2)

7.9 Practicals

1

Dataframe remission contains data from Freeman (1987) concerning a measure of cancer activity, the LI values, for 27 cancer patients, of whom 9 went into remission. Remission is indicated by the binary variable r = 1. Consider testing the hypothesis that the LI values do not affect the probability of remission. First, fit a binary logistic model to the data, plot them, and perform a permutation test:

attach(remission)
plot(LI+0.03*rnorm(27), r, pch=1, xlab="LI, jittered", xlim=c(0,2.5))
rem.glm

… and so it arises with constant but unknown rate λ′ per person-year. If D′ is also rare, it will be reasonable to suppose that the number of cases of D′ at y has a Poisson distribution with mean λ′μ(y). Hence the conditional probability of a case of D at y, given that there is a case of D or D′ at y, is π(y) = λ(y)/{λ′ + λ(y)}. If the disease locations are indicated by y_j, and d_j is zero or one according as the case at y_j has D′ or D, the likelihood is

    ∏_j π(y_j)^{d_j} {1 − π(y_j)}^{1−d_j}.

If a suitable form for λ(y) is assumed we can obtain the likelihood ratio or perhaps another statistic T to test the hypothesis that π(y) is constant. This is a test of proportional hazards for D and D′, but unlike in Example 4.4 the alternative is specified, at least weakly. When λ(y) = λ₀ an approximation to the null distribution of T can be obtained by permuting the labels on cases at different locations. That is, we perform R random reallocations of the labels D and D′ to the y_j, recompute T for each such reallocation, and see whether the observed value of t is extreme relative to the simulated values t_1*, ..., t_R*. ■

Example 8.12 (Brambles) The upper left panel of Figure 8.17 shows the locations of 103 newly emergent and 97 one-year-old bramble canes in a 4.5 m square plot. It seems plausible that these two types of event are related, but how should this be tested? Events of both types are clustered, so a Poisson null hypothesis is not appropriate, nor is it reasonable to permute the labels attached to events, as in the previous example. Let us denote the locations of the two types of event by y_1, ..., y_n and y′_1, ..., y′_{n′}. Suppose that a statistic T = t(y_1, ..., y_n, y′_1, ..., y′_{n′}) is available that tests for association between the event types.
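The label-reallocation test described above has a generic form, which a short sketch can make concrete; the data and statistic below are illustrative stand-ins, not taken from the text.

```python
import random

def permutation_pvalue(values, labels, stat, R, rng):
    """Monte Carlo permutation test: reallocate the labels at random R times
    and compare the observed statistic with the simulated values t*_1,...,t*_R."""
    t_obs = stat(values, labels)
    count = 0
    for _ in range(R):
        perm = labels[:]
        rng.shuffle(perm)
        if stat(values, perm) >= t_obs:
            count += 1
    return (1 + count) / (R + 1)

def mean_diff(values, labels):
    # absolute difference of group means, a simple two-group statistic
    a = [v for v, l in zip(values, labels) if l == 1]
    b = [v for v, l in zip(values, labels) if l == 0]
    return abs(sum(a) / len(a) - sum(b) / len(b))

rng = random.Random(0)
vals = [0.1, 0.2, 0.3, 2.1, 2.2, 2.3]   # the labels clearly separate the values
labs = [0, 0, 0, 1, 1, 1]
pval = permutation_pvalue(vals, labs, mean_diff, R=199, rng=rng)
print(pval)
```

The (1 + count)/(R + 1) form counts the observed arrangement among the reallocations, as is usual for Monte Carlo tests.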
If the extent of the observation region were infinite, we might construct a null distribution for T by applying random translations to the events of one type. Thus we would generate values T* = t(y_1 + U*, ..., y_n + U*, y′_1, ..., y′_{n′}), where U* is a randomly chosen location in the plane. This sampling scheme has the desirable property of fixing the

8.3 • Point Processes


Figure 8.17 Brambles data. Top left: positions of newly emergent (+) and one-year (•) bramble canes in a 4.5 m square plot. Top right: random toroidal shift of the newly emergent canes, with the original edges shown by dotted lines. Bottom left: original dependence function Z₁₂ (solid) and 20 replicates (dots) under the null hypothesis of no association between newly emergent and one-year canes. Bottom right: original dependence function and pointwise (dashes) and overall (dots) 95% null confidence sets. The data used here are the upper left quarter of those displayed on p. 113 of Diggle (1983).


relative locations of each type of event, but it cannot be applied directly to the data in Figure 8.17 because the resampled patterns will not overlap by the same amount as the original.

[·] denotes integer part.

We overcome this by random toroidal shifts, where we imagine that the pattern is wrapped on a torus, the random translation is applied, and the translated pattern is then unwrapped. Thus for points in the unit square we would generate U* = (U₁*, U₂*) at random in the unit square, and then map the event at y_j = (y_{1j}, y_{2j}) to

    y_j* = (y_{1j} + U₁* − [y_{1j} + U₁*], y_{2j} + U₂* − [y_{2j} + U₂*]).

The upper right panel of Figure 8.17 shows how such a shift uncouples the two types of events.
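In code, the toroidal shift is one wrap per coordinate, since y − [y] is the `%` operation; a minimal sketch for points already scaled to the unit square:

```python
import random

def toroidal_shift(points, rng):
    """Apply one random toroidal shift to points in the unit square:
    translate by U* = (u1, u2), then wrap each coordinate back into [0, 1)."""
    u1, u2 = rng.random(), rng.random()
    return [((y1 + u1) % 1.0, (y2 + u2) % 1.0) for y1, y2 in points]

rng = random.Random(7)
pts = [(rng.random(), rng.random()) for _ in range(5)]
shifted = toroidal_shift(pts, rng)
print(shifted)
```

The shift preserves the relative (toroidal) locations within the pattern while uncoupling it from any second, unshifted pattern.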


8 • Complex Dependence

We can construct a test through an extension of the K-function to events of two types, that is the function

    K₁₂(t) = λ₂⁻¹ E(#{type 2 events within distance t of an arbitrary type 1 event}),

where λ₂ is the overall intensity of type 2 events. Suppose that there are n₁, n₂ events of types 1 and 2 in an observation region A of area |A|, that u_{ij} is the distance from the ith type 1 event to the jth type 2 event, that w_i(u) is the proportion of the circumference of the circle that is centred at the ith type 1 event and has radius u that lies within A, and let I(·) denote the indicator function. Then the sample version of this bivariate K-function is

    K̂₁₂(t) = (n₁n₂)⁻¹ |A| Σ_{i=1}^{n₁} Σ_{j=1}^{n₂} w_i⁻¹(u_{ij}) I(u_{ij} ≤ t).

Although it is possible to base an overall statistic on K̂₁₂(t), for example taking T = ∫ Z₁₂(t)² dt, where Z₁₂(t) = {K̂₁₂(t)/π}^{1/2} − t, a graphical test is usually more informative. The lower left panel of Figure 8.17 shows results from 20 random toroidal shifts of the data. The original value of Z₁₂(t) seems to show much stronger local association than do the simulations. This is confirmed by the lower right panel, which shows 95% pointwise and overall confidence bands for Z₁₂(t) based on R = 999 shifts. There is clear evidence that the point patterns are not independent: as the original data suggest, new canes emerge close to those from the previous year. ■
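A bare-bones version of the sample bivariate K-function is easy to write down; for brevity this sketch sets the edge-correction weights w_i(u_{ij}) to 1, whereas the estimator above weights each pair by the arc proportion inside A.

```python
import math

def k12_hat(type1, type2, t, area):
    """Sample bivariate K-function without edge correction:
    |A| (n1 n2)^-1 * #{pairs (i, j) with distance u_ij <= t}."""
    n1, n2 = len(type1), len(type2)
    count = sum(1 for (a1, a2) in type1 for (b1, b2) in type2
                if math.hypot(a1 - b1, a2 - b2) <= t)
    return area * count / (n1 * n2)

# One type 1 event; one of the two type 2 events lies within distance 0.2
t1 = [(0.0, 0.0)]
t2 = [(0.1, 0.0), (0.9, 0.0)]
val = k12_hat(t1, t2, t=0.2, area=1.0)
print(val)  # 0.5
```

Evaluating this under repeated toroidal shifts of one pattern gives the simulated curves used for the graphical test.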

8.3.4 Tiles

Little is known about resampling spatial processes when there is no parametric model. One nonparametric approach that has been investigated starts from a partition of the observation region ℛ into n disjoint tiles 𝒜₁, ..., 𝒜ₙ of equal size and shape. If we abuse notation by identifying each tile with the pattern it contains, we can write the original value of the statistic as T = t(𝒜₁, ..., 𝒜ₙ). The idea is to create a resampled pattern by taking a random sample of tiles 𝒜₁*, ..., 𝒜ₙ* from 𝒜₁, ..., 𝒜ₙ, with corresponding bootstrap statistic T* = t(𝒜₁*, ..., 𝒜ₙ*). The hope is that if dependence is relatively short-range, taking large tiles will preserve enough dependence to make the properties of T* close to those of T. If this is to work, the size of the tile must be chosen to trade off preserving dependence, which requires a few large tiles, and getting a good estimate of the distribution of T, which requires many tiles. This idea is analogous to block resampling in time series, and is capable of similar variations. For example, rather than choosing the 𝒜_j* independently from the fixed tiles 𝒜_j, we may resample moving tiles by setting

Figure 8.18 Tile resampling for the caveolae data. The left panel shows the original data, with nine tiles sampled at random using toroidal wrapping. The right panel shows the resampled point pattern.
𝒜_j* = U_j + 𝒜_j, where U_j is a random vector chosen so that 𝒜_j* lies wholly within ℛ; we can avoid bias due to undersampling near the boundaries of ℛ by toroidal wrapping. As in all problems involving spatial data, edge effects are likely to play a critical role.

Example 8.13 (Caveolae) Figure 8.18 illustrates tile resampling for the data of Example 8.9. The left panel shows the original caveolae data, with the dotted lines showing nine square tiles taken using the moving scheme with toroidal wrapping. The right panel shows the resampled pattern obtained when the tiles are laid side-by-side. For example, the centre top and middle right tiles were respectively taken from the top left and bottom right of the original data. Along the tile edges, events seem to lie closer together than in the left panel; this is analogous to the whitening that occurs in blockwise resampling of time series. No analogue of the post-blackened bootstrap springs to mind, however.

For a numerical evaluation of tile resampling, we experimented with estimating the variance θ of the number of events in an observation region ℛ of side 200 units, using data generated from three random processes. In each case we generated 8800 events in a square of side 4000, then estimated θ from 2000 squares of side 200 taken at random. For each of 100 random squares of side 200 we calculated the empirical mean squared error for estimation of θ using bootstraps of size R, for both fixed and moving tiles. Data were generated from a spatial Poisson process (θ = 23.4), from the Strauss process that gave the results in the bottom right panel of Figure 8.14 (θ = 17.5), and from a sequential spatial inhibition process, which places points sequentially at random but not within 15 units of an existing event (θ = 15.6).
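A toy sketch of fixed-tile resampling makes the scheme concrete; working on the unit square rather than the data's 500-unit region, and sampling whole tiles with replacement, are simplifications of ours.

```python
import random

def tile_resample(points, k, rng):
    """Fixed-tile resample of a point pattern in the unit square:
    partition into k x k equal tiles, sample k*k tiles with replacement,
    and lay the sampled tile contents back on the grid (translated)."""
    s = 1.0 / k
    tiles = {}
    for (y1, y2) in points:
        # index of the tile containing the point, with offsets inside the tile
        key = (min(int(y1 / s), k - 1), min(int(y2 / s), k - 1))
        tiles.setdefault(key, []).append((y1 - key[0] * s, y2 - key[1] * s))
    keys = [(i, j) for i in range(k) for j in range(k)]
    new = []
    for (i, j) in keys:                       # destination tile
        src = keys[rng.randrange(len(keys))]  # source tile, sampled at random
        for (d1, d2) in tiles.get(src, []):
            new.append((i * s + d1, j * s + d2))
    return new

rng = random.Random(3)
pts = [(rng.random(), rng.random()) for _ in range(100)]
pattern = tile_resample(pts, k=3, rng=rng)
print(len(pattern))
```

Moving tiles would instead translate each sampled tile by a random vector, with toroidal wrapping at the region boundary.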



Table 8.5 Mean squared errors for estimation of the variance of the number of events in a square of side 200, based on bootstrapping fixed and moving tiles. Data were generated from a Poisson process, a Strauss process with parameters chosen to match the data in Figure 8.14, and from a sequential spatial inhibition process with radius 15. In each case the mean number of events is 22. For n ≤ 64 we took R = 200, for n = 100, 144 we took R = 400, and for n ≥ 196 we took R = m.

    n                  4      16     36     64     100    144    196    256
    Poisson  theory    224.2  77.9   47.3   36.3   31.2   28.4   26.7   25.6
             fixed     255.2  66.1   40.2   31.7   27.6   27.6   25.5   27.8
             moving    92.2   39.7   35.8   31.6   33.0   30.8   27.4   27.0
    Strauss  fixed     129.1  49.1   27.9   19.2   16.4   19.3   20.8   21.9
             moving    53.2   26.4   19.0   17.4   15.9   18.9   18.7   17.9
    SSI      fixed     123.8  37.7   14.8   13.5   17.9   25.1   34.6   42.4
             moving    36.5   12.9   11.2   15.6   18.3   21.2   28.6   35.4

Table 8.5 shows the results. For the Poisson process the fixed tile results broadly agree with theoretical calculations (Problem 8.9), and the moving tile results accord with general theory, which predicts that mean squared errors for moving tiles should be lower than for fixed tiles. Here the mean squared error decreases to 22 as n → ∞. The fitted Strauss process inhibits pairs of points closer together than 12 units. The mean squared error is minimized when n = 100, corresponding to tiles of side 20; the average estimated variances from the 100 replicates are then 19.0 and 18.2. The mean squared errors for moving tiles are rather lower, but their pattern is similar. The sequential spatial inhibition results are similar to those for the Strauss process, but with a sharper rise in mean squared error for larger n. In this setting theory gives the form of the optimal n for a process with sufficiently short-range dependence.

If the caveolae data were generated by a Strauss process, results from Table 8.5 would suggest that we take n = 100 × 500/200 = 250, so there would be about 16 tiles along each side of ℛ. With R = 200 and fixed and moving tiles this gives variance estimates of 101.6 and 100.4, both considerably smaller than the variance for Poisson data, which would be 138. ■

8.4 Bibliographic Notes

There are many books on time series. Brockwell and Davis (1991) is a recent book aimed at a fairly mathematical readership, while Brockwell and Davis (1996) and Diggle (1990) are more suitable for the less theoretically inclined. Tong (1990) discusses nonlinear time series, while Beran (1994) covers long-memory processes. Bloomfield (1976), Brillinger (1981), Priestley (1981), and Percival and Walden (1993) are introductions to spectral analysis of time series.


Model-based resampling for time series was discussed by Freedman (1984), Freedman and Peters (1984a,b), Swanepoel and van Wyk (1986) and Efron and Tibshirani (1986), among others. Li and Maddala (1996) survey much of the related time domain literature, which has a somewhat theoretical emphasis; their account stresses econometric applications. For a more applied account of parametric bootstrapping in time series, see Tsay (1992). Bootstrap prediction in time series is discussed by Kabaila (1993b), while the bootstrapping of state-space models is described by Stoffer and Wall (1991). The use of model-based resampling for order selection in autoregressive processes is discussed by Chen et al. (1993).

Block resampling for time series was introduced by Carlstein (1986). In an important paper, Künsch (1989) discussed overlapping blocks in time series, although in spatial data the proposal of block resampling in Hall (1985) predates both. Liu and Singh (1992a) also discuss the properties of block resampling schemes. Politis and Romano (1994a) introduced the stationary bootstrap, and in a series of papers (Politis and Romano, 1993, 1994b) have discussed theoretical aspects of more general block resampling schemes. See also Bühlmann and Künsch (1995) and Lahiri (1995). The method for block length choice outlined in Section 8.2.3 is due to Hall, Horowitz and Jing (1995); see also Hall and Horowitz (1993). Bootstrap tests for unit roots in autoregressive models are discussed by Ferretti and Romo (1996). Hall and Jing (1996) describe a block resampling approach in which the construction of new series is replaced by Richardson extrapolation.

Bose (1988) showed that model-based resampling for autoregressive processes has good asymptotic higher-order properties for a wide class of statistics.
Lahiri (1991) and Götze and Künsch (1996) show that the same is true for block resampling, but Davison and Hall (1993) point out that unfortunately, and unlike when the data are independent, this depends crucially on the variance estimate used. Forms of phase scrambling have been suggested independently by several authors (Nordgaard, 1990; Theiler et al., 1992), and Braun and Kulperger (1995, 1997) have studied its properties. Hartigan (1990) describes a method for variance estimation in Gaussian series that involves similar ideas but needs no randomization; see Problem 8.5. Frequency domain resampling has been discussed by Franke and Härdle (1992), who make a strong analogy with bootstrap methods for nonparametric regression. It has been further studied by Janas (1993) and Dahlhaus and Janas (1996), on which our account is based.

Our discussion of the Rio Negro data is based on Brillinger (1988, 1989), which should be consulted for statistical details, while Sternberg (1987, 1995) gives accounts of the data and background to the problem.

Models based on point processes have a long history and varied provenance.


Daley and Vere-Jones (1988) and Karr (1991) provide careful accounts of their mathematical basis, while Cox and Isham (1980) give a more concise treatment. Cox and Lewis (1966) is a standard account of statistical methods for series of events, i.e. point processes in the line. Spatial point processes and their statistical analysis are described by Diggle (1983), Ripley (1981, 1988), and Cressie (1991). Spatial epidemiology has recently received attention from various points of view (Muirhead and Darby, 1989; Bithell and Stone, 1989; Diggle, 1993; Lawson, 1993). Example 8.11 is based on Diggle and Rowlingson (1994).

Owing to the impossibility of exact inference, a number of statistical procedures based on randomization or simulation originated in spatial data analysis. Examples include graphical tests, which were used extensively by Ripley (1977), and various approaches to parametric inference based on Markov chain Monte Carlo methods (Ripley, 1988, Chapters 4, 5). However, nonparametric bootstrap methods for spatial data have received little attention. One exception is Hall (1985), a pioneering work on the theory that underlies block resampling in coverage processes, a particular type of spatial data. Further discussion of resampling these processes is given by Hall (1988b) and Garcia-Soidan and Hall (1997). Possolo (1986) discusses subsampling methods for estimating the parameters of a random field. Other applications include Hall and Keenan (1989), who use the bootstrap to set confidence "gloves" for the outlines of hands, and Journel (1994), who uses parametric bootstrapping to account for estimation uncertainty in an application of kriging. Young (1986) describes bootstrap approaches to testing in some geometrical problems.
Cowling, Hall and Phillips (1996) describe the resampling methods for inhomogeneous Poisson processes that form the basis of Example 8.10, as well as outlining the related theory. Ventura, Davison and Boniface (1997) describe a different analysis of the neurophysiological data used in that example. Diggle, Lange and Benes (1991) describe an application of the bootstrap to a point process problem in neuroanatomy.

8.5 Problems

1

Suppose that y_1, ..., y_n is an observed time series, and let z_{i,l} denote the block of length l starting at y_i, where we set y_i = y_{1+(i−1 mod n)} and y_0 = y_n. Also let I_1, I_2, ... be a stream of random numbers uniform on the integers 1, ..., n and let L_1, L_2, ... be a stream of random numbers having the geometric distribution Pr(L = l) = p(1 − p)^{l−1}, l = 1, 2, .... The algorithm to generate a single stationary bootstrap replicate is

Algorithm 8.2 (Stationary bootstrap)

• Set Y* = z_{I_1,L_1}, and set i = 1.
• While length(Y*) < n, {increment i; replace Y* with (Y*, z_{I_i,L_i})}.




• Set Y* = (Y_1*, ..., Y_n*), its first n elements.

(a) Show that the algorithm above is equivalent to

Algorithm 8.3

• Set Y_1* = y_{I_1}.
• For i = 2, ..., n: with probability p let Y_i* = y_{I_i}, and with probability 1 − p let Y_i* = y_{j+1}, where y_j = Y_{i−1}*.
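The second form of the algorithm lends itself to a direct sketch (ours, in Python rather than the book's S-Plus), one step per observation with geometric block lengths arising implicitly:

```python
import random

def stationary_bootstrap(y, p, rng):
    """Generate one stationary-bootstrap series in the Algorithm 8.3 form:
    start at a random index; at each step, with probability p jump to a
    new random index, otherwise continue with the next observation,
    wrapping around the end of the series."""
    n = len(y)
    j = rng.randrange(n)
    out = [y[j]]
    for _ in range(n - 1):
        if rng.random() < p:
            j = rng.randrange(n)   # start a new block
        else:
            j = (j + 1) % n        # continue the current block
        out.append(y[j])
    return out

rng = random.Random(42)
series = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
rep = stationary_bootstrap(series, p=0.3, rng=rng)
print(rep)
```

Each run of consecutive observations has geometric length with mean 1/p, which is what makes the resampled series stationary.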

(b) Define the empirical circular autocovariance

    c_k = n⁻¹ Σ_{i=1}^{n} (y_i − ȳ)(y_{1+(i+k−1 mod n)} − ȳ),    k = 0, ..., n.

Show that conditional on y_1, ..., y_n,

    E*(Y_i*) = ȳ,    cov*(Y_i*, Y_{i+j}*) = (1 − p)^j c_j,

and deduce that Y* is second-order stationary.
(c) Show that if y_1, ..., y_n are all distinct, Y* is a first-order Markov chain. Under what circumstances is it a kth-order Markov chain?
(Section 8.2.3; Politis and Romano, 1994a)

2

Let Y_1, ..., Y_n be a stationary time series with covariances γ_j = cov(Y_1, Y_{j+1}). Show that

    n var(Ȳ) = γ_0 + 2 Σ_{j=1}^{n−1} (1 − j/n) γ_j,

and that this approaches ζ = γ_0 + 2 Σ_{j=1}^{∞} γ_j if Σ j|γ_j| is finite. Show that under the stationary bootstrap, conditional on the data,

    n var*(Ȳ*) = c_0 + 2 Σ_{j=1}^{n−1} (1 − j/n)(1 − p)^j c_j,

where c_0, c_1, ... are the empirical circular autocovariances defined in Problem 8.1.
(Section 8.2.3; Politis and Romano, 1994a)

3

(a) Using the setup described on pages 405-408, show that Σ(S_j − S̄)² has mean v_{ii} − b⁻¹v_{ij} and variance

    v_{iijj} + 2v_{ij}v_{ij} − 2b⁻¹(v_{iijk} + 2v_{ij}v_{ik}) + b⁻²(v_{ijkl} + 2v_{ij}v_{kl}),

where v_{ij} = cov(S_i, S_j), v_{ijk} = cum(S_i, S_j, S_k) and so forth are the joint cumulants of the S_j, and summation is understood over each index.
(b) For an m-dependent normal process, show that provided l > m,

    v_{ij} = l⁻¹c_0^{(l)} if i = j,    l⁻²c_1^{(l)} if |i − j| = 1,    0 otherwise,

and show that l⁻¹c_0^{(l)} → ζ and that c_1^{(l)} converges as l → ∞. Hence establish (8.13) and (8.14).
(Section 8.2.3; Appendix A; Hall, Horowitz and Jing, 1995)

4

Establish (8.16) and (8.17). Show that under phase scrambling,

    n⁻¹ Σ_j Y_j* = ȳ,    cov*(Y_j*, Y_{j+m}*) = n⁻¹ Σ_i (y_i − ȳ)(y_{i+m} − ȳ),

where j + m is interpreted mod n, and that all odd joint moments of the Y_j* are zero. This last result implies that the resampled series have a highly symmetric joint distribution. When the original data have an asymmetric marginal distribution, the following procedure has been proposed:

• let x_j = Φ⁻¹{r_j/(n + 1)}, where r_j is the rank of y_j among the original series y_0, ..., y_{n−1};
• apply Algorithm 8.1 to x_0, ..., x_{n−1}, giving X_0*, ..., X_{n−1}*;
• then set Y_j* = y_{(r_j*)}, where r_j* is the rank of X_j* among X_0*, ..., X_{n−1}*.

Discuss critically this idea (see also Practical 8.3).
(Section 8.2.4; Theiler et al., 1992; Braun and Kulperger, 1995, 1997)

5

(a) Let I_1, ..., I_m be independent exponential random variables with means μ_j, and consider the statistic T = Σ_{j=1}^{m} a_j I_j, where the a_j are unknown. Show that V = ½ Σ a_j² I_j² is an unbiased estimate of var(T) = Σ_j a_j² μ_j². Now let C = (c_0, ..., c_m) be an (m + 1) × (m + 1) orthogonal matrix with columns c_j, where c_0 is a vector of ones; the ith element of c_j is c_{ji}. That is, for some constant b,

    c_iᵀc_j = 0, i ≠ j,    c_jᵀc_j = b, j = 1, ..., m.

Show that for a suitable choice of b, V is equal to

    {2(m + 1)}⁻¹ Σ_{i=1}^{m+1} (T_i − T)²,

where for i = 1, ..., m + 1, T_i = Σ_j a_j(1 + c_{ji}) I_j.
(b) Now suppose that Y_0, ..., Y_{n−1} is a time series of length n = 2m + 1, with empirical Fourier transform Ỹ_0, ..., Ỹ_{n−1} and periodogram ordinates I_k = |Ỹ_k|²/n, for k = 0, ..., m. For each i = 1, ..., m + 1, let the perturbed periodogram ordinates be

    Ỹ_0^i = Ỹ_0,    Ỹ_k^i = (1 + c_{ki})^{1/2} Ỹ_k,    Ỹ_{n−k}^i = (1 + c_{ki})^{1/2} Ỹ_{n−k},    k = 1, ..., m,

from which the ith replacement time series is obtained by the inverse Fourier transform. Let T be the value of a statistic calculated from the original series. Explain how the corresponding resample values T^1, ..., T^{m+1} may be used to obtain an approximately unbiased estimate of the variance of T, and say for what types of statistics you think this is likely to work.
(Section 8.2.4; Hartigan, 1990)

6

In the context of periodogram resampling, consider a ratio statistic

T = Σ_{k=1}^m a(ω_k)I(ω_k) / Σ_{k=1}^m I(ω_k)
  = {∫ a(ω)g(ω) dω (1 + n^{-1/2}X_a)} / {∫ g(ω) dω (1 + n^{-1/2}X_1)},

say. Use (8.18) to show that X_a and X_1 have means zero and that

var(X_a) = 2π I_{aagg} I_{ag}^{-2} + κ_4,
cov(X_1, X_a) = 2π I_{agg} I_{ag}^{-1} I_g^{-1} + κ_4,
var(X_1) = 2π I_{gg} I_g^{-2} + κ_4,


8.5 ■Problems

where I_{aagg} = ∫ a²(ω)g²(ω) dω, and so forth. Hence show that to first order the mean and variance of T do not involve κ_4, and deduce that periodogram resampling may be applied to ratio statistics. Use simulation to see how well periodogram resampling performs in estimating the distribution of a suitable version of the sample estimate of the lag j autocorrelation,

ρ_j = ∫_{−π}^{π} e^{−ijω} g(ω) dω / ∫_{−π}^{π} g(ω) dω.

(Section 8.2.5; Janas, 1993; Dahlhaus and Janas, 1996)

7

Let y_1, ..., y_n denote the times of events in an inhomogeneous Poisson process of intensity λ(y), observed for 0 ≤ y ≤ 1, and let

λ̂(y; h) = h^{−1} Σ_{j=1}^n w{(y − y_j)/h}

denote a kernel estimate of λ(y), based on a kernel w(·) that is a PDF. Explain why the following two algorithms for generating bootstrap data from the estimated intensity are (almost) equivalent.

Algorithm 8.4 (Inhomogeneous Poisson process 1)
•  Let N have a Poisson distribution with mean Λ̂ = ∫_0^1 λ̂(u; h) du.
•  For j = 1, ..., N, independently take U*_j from the U(0, 1) distribution, and then set Y*_j = F̂^{−1}(U*_j), where F̂(y) = Λ̂^{−1} ∫_0^y λ̂(u; h) du.

Algorithm 8.5 (Inhomogeneous Poisson process 2)
•  Let N have a Poisson distribution with mean Λ̂ = ∫_0^1 λ̂(u; h) du.
•  For j = 1, ..., N, independently generate I*_j at random from the integers {1, ..., n} and let ε*_j be a random variable with PDF w(·). Set Y*_j = y_{I*_j} + hε*_j.
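Algorithm 8.5 is easily sketched in code. The following Python fragment is an illustration only, not part of the book: the kernel w(·) is taken to be standard normal, and the mean Λ̂ = ∫_0^1 λ̂(u; h) du is approximated by n, which holds up to edge effects because the kernel estimate integrates to roughly one per observed event.

```python
import math
import random

def rpoisson(mean, rng):
    # Knuth's method: multiply uniforms until the product
    # falls below exp(-mean); the count is Poisson(mean).
    limit = math.exp(-mean)
    count, prod = 0, rng.random()
    while prod > limit:
        count += 1
        prod *= rng.random()
    return count

def resample_poisson_process(times, h, rng):
    # Algorithm 8.5: N ~ Poisson(Lambda-hat), then each resampled
    # event time is y_I + h * eps, with eps drawn from the kernel.
    n = len(times)
    n_star = rpoisson(n, rng)          # Lambda-hat ~= n (assumption)
    return [rng.choice(times) + h * rng.gauss(0.0, 1.0)
            for _ in range(n_star)]

rng = random.Random(1)
events = [0.05, 0.12, 0.4, 0.41, 0.77, 0.9]
new_events = resample_poisson_process(events, h=0.05, rng=rng)
```

The same skeleton gives Algorithm 8.4 if the smoothing step is replaced by inversion of the estimated cumulative intensity.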

(Section 8.3.2)

8

Consider an inhomogeneous Poisson process of intensity λ(y) = Nμ(y), where μ(y) is fixed and smooth, observed for 0 ≤ y ≤ 1. A kernel intensity estimate based on events at y_1, ..., y_n is

λ̂(y; h) = h^{−1} Σ_{i=1}^n w{(y − y_i)/h},

where w(·) is the PDF of a symmetric random variable with mean zero and variance one; let K = ∫ w²(u) du.
(a) Show that as N → ∞ and h → 0 in such a way that Nh → ∞,

E{λ̂(y; h)} = λ(y) + ½h²λ″(y),   var{λ̂(y; h)} = Kh^{−1}λ(y);

you may need the facts that the number of events n has a Poisson distribution with mean Λ = ∫_0^1 λ(u) du, and that conditional on there being n observed events, their


times are independent random variables with PDF λ(u)/Λ. Hence show that the asymptotic mean squared error of λ̂(y; h) is minimized when h ∝ N^{−1/5}. Use the delta method to show that the approximate mean and variance of λ̂^{1/2}(y; h) are

λ^{1/2}(y) + ¼λ^{−1/2}(y){h²λ″(y) − ½Kh^{−1}},   and   ¼Kh^{−1}.

(b) Now suppose that resamples are formed by taking n observations at random from y_1, ..., y_n. Show that the bootstrapped intensity estimate

λ̂*(y; h) = h^{−1} Σ_{j=1}^n w{(y − y*_j)/h}

has mean E*{λ̂*(y; h)} = λ̂(y; h), and that the same is true when there are n′ resampled events, provided that E*(n′) = n. For a third resampling scheme, let n′ have a Poisson distribution with mean n, and generate n′ events independently from the density λ̂(y; h)/∫_0^1 λ̂(u; h) du. Show that under this scheme

E*{λ̂*(y; h)} = ∫ w(u) λ̂(y − hu; h) du.

(c) By comparing the asymptotic distributions of

Z(y; h) = {λ̂^{1/2}(y; h) − λ^{1/2}(y)} / {¼Kh^{−1}}^{1/2},   Z*(y; h) = {λ̂*^{1/2}(y; h) − λ̂^{1/2}(y; h)} / {¼Kh^{−1}}^{1/2},

find conditions under which the quantiles of Z* can estimate those of Z.
(Section 8.3.2; Example 5.13; Cowling, Hall and Phillips, 1996)

9   Consider resampling tiles when the observation region 𝓡 is a square, the data are generated by a stationary planar Poisson process of intensity λ, and the quantity of interest is θ = var(Y), where Y is the number of events in 𝓡. Suppose that 𝓡 is split into n fixed tiles of equal size and shape, which are then resampled according to the usual bootstrap. Show that the bootstrap estimate of θ is t = Σ(y_j − ȳ)², where y_j is the number of events in the jth tile. Use the fact that var(T) = (n − 1)²{κ_4/n + 2κ_2²/(n − 1)}, where κ_r is the rth cumulant of the Y_j, to show that the mean squared error of T is (μ/n){μ + (n − 1)(2nμ + n − 1)}, where μ = λ|𝓡|/n. Sketch this when μ > 1, μ = 1, and μ < 1, and explain in qualitative terms its behaviour when μ > 1. Extend the discussion to moving tiles.
(Section 8.3)

8.6 Practicals

1   Dataframe lynx contains the Canadian lynx data, to the logarithm of which we fit the autoregressive model that minimizes AIC:

ts.plot(log(lynx))
lynx.ar …

… z)? The heavy solid line in the right panel shows the “true” survivor function of Z* estimated from 50 000 ordinary bootstrap simulations. The lighter solid line is the importance


9 ■Improved Calculation

resampling estimate

R^{−1} Σ_{r=1}^R w_r I{z*_r ≥ z}

with R = 99, and the dotted line is the estimate based on 99 ordinary bootstrap samples from the null distribution. The importance resampling estimate follows the “true” survivor function accurately close to z_0 but does poorly for negative z*. The usual estimate does best near z* = 0 but poorly in the tail region of interest; the estimated significance probability is p̂ = 0. While the usual estimate decreases by R^{−1} at each z*, the weighted estimate decreases by much smaller jumps close to z; the raw importance sampling tail probability estimate is p̂_{H,raw} = 0.015, which is very close to the true value. The weighted survivor function estimate has large jumps in its left tail, where the estimate is unreliable.

In 50 repetitions of this experiment the ordinary and raw importance resampling tail probability estimates had variances 2.09 × 10^{−4} and 2.63 × 10^{−5}. For a tail probability of 0.015 this efficiency gain of about 8 is smaller than would be predicted from Table 9.4, the reason being that the distribution of z* is rather skewed and the normal approximation to it is poor. ■

In general there are several ways to obtain tilted distributions. We can use exponential tilting with exact empirical influence values, if these are readily available. Or we can estimate the influence values by regression using R_0 initial ordinary bootstrap resamples, as described in Section 2.7.4. Another way of using an initial set of bootstrap samples is to derive weighted smooth distributions as in (3.39): illustrations of this are given later in Examples 9.9 and 9.11.
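As an illustrative sketch of the exponential tilting calculation (Python; not the book's code — the influence values and the target shift δ below are invented for the demonstration), the tilted probabilities p_j ∝ exp(λl_j) are found by solving Σ_j p_j l_j = δ for the multiplier λ, here by bisection:

```python
import math

def tilted_probs(l, lam):
    # p_j proportional to exp(lambda * l_j)
    w = [math.exp(lam * x) for x in l]
    s = sum(w)
    return [x / s for x in w]

def solve_tilt(l, delta, lo=-50.0, hi=50.0, tol=1e-10):
    # The tilted mean sum_j p_j l_j is increasing in lambda,
    # so bisection on a wide bracket finds the root.
    def tilted_mean(lam):
        p = tilted_probs(l, lam)
        return sum(pj * x for pj, x in zip(p, l))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

l = [-2.0, -1.0, 0.5, 1.0, 1.5]      # invented influence values
lam = solve_tilt(l, delta=0.8)
p = tilted_probs(l, lam)
```

The resulting probabilities p_j then drive the tilted resampling, with a positive λ pushing mass towards observations with large influence values.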

9.4.2 Improved estimators

Ratio and regression estimators

One simple modification of the raw importance sampling estimate is based on the fact that the average weight R^{−1} Σ w(Y*_r) from any particular simulation will not equal its theoretical value of E*{w(Y*)} = 1. This suggests that the weights w(Y*_r) be normalized, so that (9.15) is replaced by the importance resampling ratio estimate

μ̂_{H,rat} = Σ_{r=1}^R m(Y*_r) w(Y*_r) / Σ_{r=1}^R w(Y*_r).   (9.22)

To some extent this controls the effect of very large fluctuations in the weights. In practice it is better to treat the weight as a control variate or covariate. Since our aim in choosing H is to concentrate sampling where m(·) is largest, the values of m(Y*_r)w(Y*_r) and w(Y*_r) should be correlated. If so, and if


9.4 ■Importance Resampling

the average weight differs from its expected value of one under simulation from H, then the estimate μ̂_{H,raw} probably differs from its expected value μ. This motivates the covariance adjustment made in the importance resampling regression estimate

μ̂_{H,reg} = μ̂_{H,raw} − b(w̄* − 1),   (9.23)

where w̄* = R^{−1} Σ_r w(Y*_r), and b is the slope of the linear regression of the m(Y*_r)w(Y*_r) on the w(Y*_r). The estimator μ̂_{H,reg} is the predicted value for m(Y*)w(Y*) at the point w(Y*) = 1. The adjustments made to μ̂_{H,raw} in both μ̂_{H,rat} and μ̂_{H,reg} may induce bias, but such biases will be of order R^{−1} and will usually be negligible relative to simulation standard errors. Calculations outlined in Problem 9.12 indicate that for large R the regression estimator should outperform the raw and ratio estimators, but the improvement depends on the problem, and in practice the raw estimator of a tail probability or quantile is usually the best.

Defensive mixtures

A second improvement aims to prevent the weight w(y*) from varying wildly. Suppose that H is a mixture of distributions, πH_1 + (1 − π)H_2, where 0 < π < 1. The distributions H_1 and H_2 are chosen so that the corresponding probabilities are not both small simultaneously. Then the weights

dG(y*) / {π dH_1(y*) + (1 − π) dH_2(y*)}

will vary less, because even if dH_1(y*) is very small, dH_2(y*) will keep the denominator away from zero, and vice versa. This choice of H is known as a defensive mixture distribution, and it should do particularly well if many estimates, with different m(y*), are to be calculated. The mixture is applied by stratified sampling, that is by generating exactly πR observations from H_1 and the rest from H_2, and using μ̂_{H,reg} as usual. The components of the mixture H should be chosen to ensure that the relevant range of values of t* is well covered, but beyond this the detailed choice is not critical. For example, if we are interested in quantiles of T* for probabilities between α and 1 − α, then it would be sensible to target H_1 at the α quantile and H_2 at the 1 − α quantile, most simply by the exponential tilting method described earlier.
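The raw, ratio and regression estimators are simple to compute once the weights are in hand. The following Python sketch is purely illustrative and not the book's code: here G is standard normal, H is normal with mean 2, m(y) = I{y > 2}, and the weights are the exact density ratio dG/dH, so the quantity estimated is μ = Pr(Y > 2) ≈ 0.0228 under G.

```python
import math
import random

def importance_estimates(m_vals, w_vals):
    # Raw, ratio, and regression importance resampling estimates.
    R = len(w_vals)
    mw = [m * w for m, w in zip(m_vals, w_vals)]
    raw = sum(mw) / R
    rat = sum(mw) / sum(w_vals)
    wbar = sum(w_vals) / R
    # slope of the least-squares regression of m*w on w
    b = (sum((w - wbar) * (x - raw) for w, x in zip(w_vals, mw)) /
         sum((w - wbar) ** 2 for w in w_vals))
    reg = raw - b * (wbar - 1.0)
    return raw, rat, reg

rng = random.Random(42)
ys = [rng.gauss(2.0, 1.0) for _ in range(20000)]   # simulate from H
m_vals = [1.0 if y > 2.0 else 0.0 for y in ys]
# w = dG/dH for G = N(0,1), H = N(2,1): phi(y)/phi(y-2) = exp(2 - 2y)
w_vals = [math.exp(2.0 - 2.0 * y) for y in ys]
raw, rat, reg = importance_estimates(m_vals, w_vals)
```

All three estimates should sit near the true tail probability Φ(−2) ≈ 0.0228, with far smaller variance than ordinary sampling from G would give.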
As a further precaution we might add a third component to the mixture, such as G, to ensure stable performance in the middle of the distribution. In general the mixture could have many components, but careful choice of two or three will usually be adequate. Always the application of the mixture should be by stratified sampling, to reduce variation.

Example 9.9 (Gravity data)  To illustrate the above ideas, we again consider the hypothesis testing problem of Example 9.8. The left panel of Figure 9.6


shows 20 replicate estimates of the null survivor function of z*, using ordinary bootstrap resampling with R = 299. The right panel shows 20 estimates of the survivor function using the regression estimate μ̂_{H,reg} after simulations with a defensive mixture distribution. This mixture has three components, which are G (the two EDFs) and two pairs of exponential tilted distributions targeted at the 0.025 and 0.975 quantiles of Z*. From our earlier discussion these distributions are given by (9.21) with λ = ±2/v_L^{1/2}; we shall denote the first pair of distributions by probabilities p_{1j} and p_{2j}, and the second by probabilities q_{1j} and q_{2j}. The first component G was used for R_1 = 99 samples, the second component (the ps) for R_2 = 100 and the third component (the qs) for R_3 = 100; the mixture proportions were therefore π_j = R_j/(R_1 + R_2 + R_3) for j = 1, 2, 3. The importance resampling weights were

w(y*) = Π_j n_1^{−f*_{1j}} n_2^{−f*_{2j}} / {π_1 Π_j n_1^{−f*_{1j}} n_2^{−f*_{2j}} + π_2 Π_j p_{1j}^{f*_{1j}} p_{2j}^{f*_{2j}} + π_3 Π_j q_{1j}^{f*_{1j}} q_{2j}^{f*_{2j}}},

where as before f*_{1j} and f*_{2j} respectively count how many times y_{1j} and y_{2j} appear in the resample. For convenience we estimated the CDF of Z* at the sample values z*. The regression estimate at z* is obtained by setting m(y*) = I{z(y*) ≤ z*} and calculating (9.23); this appears to involve 299 regressions for each CDF estimate, but Problem 9.13 shows how in fact just one matrix calculation is needed. The importance resampling estimate of the CDF is about as variable as the ordinary estimate over most of the distribution, but much less variable well into the tails. For a more systematic comparison, we calculated the ratio of the mean

Figure 9.6 Importance resampling to test for a location difference between series 7 and 8 of the gravity data. In each panel the heavy solid line is the survivor function Pr*(Z* ≥ z*) estimated from 50 000 ordinary bootstrap resamples, and the vertical dotted lines show z_0. The left panel shows the estimates for 20 ordinary bootstraps of size 299. The right panel shows 20 importance resampling estimates using 299 samples with a regression estimate following resampling from a defensive mixture distribution with three components. See text for details.


Table 9.5 Efficiency gains (ratios of mean squared errors) for estimating a tail probability, a bias, a variance and two quantiles for the gravity data, using importance resampling estimators together with defensive mixture distributions, compared to ordinary resampling. The mixtures have R1 ordinary bootstrap samples mixed with R2 samples exponentially tilted to the 0.025 quantile of z*, and with R3 samples exponentially tilted to the 0.975 quantile of z*. See text for details.

Mixture             Estimate      Estimand
R1    R2    R3                    Pr*(Z* > z0)   E*(Z*)   var*(Z*)   z0.05   z0.025
 —     —    299     Raw               11.2        0.04      0.03      0.07    0.05
                    Ratio              3.5        0.06      0.05      0.06    0.04
                    Regression        12.4        0.18      0.07      0.06     —
 99   100   100     Raw                3.8        0.73      1.5       1.3     2.5
                    Ratio              3.4        0.79      1.5       0.93    1.3
                    Regression         4.0        0.93      1.6       0.87    1.2
 19   140   140     Raw                3.9        0.34      1.2       0.96    2.6
                    Ratio              2.3        0.43      0.82      0.48    1.1
                    Regression         4.3        0.69      1.3       0.44    1.3
squared error from ordinary resampling to that when using defensive mixture distributions to estimate the tail probability Pr*(Z* > z_0) with z_0 = 1.77, two quantiles, and the bias E*(Z*) and the variance var*(Z*) for sampling from the two series. The mixture distributions have the same three components as before, but with different values for the numbers of samples R_1, R_2 and R_3 from each. Table 9.5 gives the results for three resampling mixtures with a total of R = 299 resamples in each case. The mean squared errors were estimated from 100 replicate bootstraps, with “true” values obtained from a single bootstrap of size 50 000. The main contribution to the mean squared error is from variance rather than bias. The first resampling distribution is not a mixture, but simply the exponential tilt to the 0.975 quantile. This gives the best estimates of the tail probability, with efficiencies for raw and regression estimates in line with Example 9.8, but it gives very poor estimates of the other quantities. For the other two mixtures the regression estimates are best for estimating the mean and variance, while the raw estimates are best for the quantiles and not really worse for the tail probability. Both mixtures are about the same for tail quantiles, while the first mixture is better for the moments. In this case the efficiency gains for tail probabilities and quantiles predicted by Table 9.4 are unrealistic, for two reasons. First, the table compares 299 ordinary simulations with just 100 tilted to each tail of the first mixture distribution, so we would expect the variance for a tail quantity based on the mixture to be larger by a factor of about three; this is just what we see when the first distribution is compared to the second. Secondly, the distribution of Z* is quite skewed, which considerably reduces the efficiency out as far as the 0.95 quantile.

We conclude that the regression estimate is best for estimating central


quantities, that the raw estimate is best for quantiles, that results for estimating quantiles are insensitive to the precise mixture used, and that theoretical gains may not be realized in practice unless a single tail quantity is to be estimated. This is in line with other studies.
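The stabilizing effect of a defensive mixture is easy to see numerically. In the Python sketch below (an illustration, not from the book; the target, the components and the mixing proportions are invented), G is standard normal and H is an equal mixture of normals centred at −2, 0 and 2, so the weight w(y) = dG/dH can never exceed the reciprocal of the mixing proportion of the central component:

```python
import math

def normal_pdf(y, mean, sd=1.0):
    z = (y - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def defensive_weight(y, pis, component_means):
    # w(y) = dG/dH with G = N(0,1) and H a normal mixture: the mixture
    # denominator keeps w(y) bounded wherever any component has mass.
    num = normal_pdf(y, 0.0)
    den = sum(pi * normal_pdf(y, mu)
              for pi, mu in zip(pis, component_means))
    return num / den

# mixture targeting both tails plus the centre (invented settings)
pis = [1.0 / 3.0] * 3
means = [-2.0, 0.0, 2.0]
w_centre = defensive_weight(0.0, pis, means)
w_tail = defensive_weight(4.0, pis, means)
```

With proportion 1/3 on the central component, w(y) ≤ 3 for every y, whereas with the two tilted components alone the weight would grow without bound in the centre of the distribution.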

9.4.3 Balanced importance resampling

Importance resampling works best for the extreme quantiles corresponding to small tail probabilities, but is less effective in the centre of a distribution. Balanced resampling, on the other hand, works best in the centre of a distribution. Balanced importance resampling aims to get the best of both worlds by combining the two, as follows. Suppose that we wish to generate R balanced resamples in which y_j has overall probability p_j of occurring. To do this exactly in general is impossible for finite nR, but we can do so approximately by applying the following simple algorithm; a more efficient algorithm is described in Problem 9.14.

Algorithm 9.2 (Balanced importance resampling)
•  Choose R_1 ≈ nRp_1, ..., R_n ≈ nRp_n, such that R_1 + ··· + R_n = nR.
•  Concatenate R_1 copies of y_1 with R_2 copies of y_2 with ... with R_n copies of y_n, to form 𝒲.
•  Permute the nR elements of 𝒲 at random to form 𝒲*, and read off the R balanced importance resamples as sets of n successive elements of 𝒲*.

A simple way to choose the R_j is to set R′_j = 1 + [n(R − 1)p_j], j = 1, ..., n, where [·] denotes integer part, and to set R_j = R′_j + 1 for the d = nR − (R′_1 + ··· + R′_n) values of j with the largest values of nRp_j − R′_j; we set R_j = R′_j for the rest. This ensures that all the observations are represented in the bootstrap simulation.

Provided that R is large relative to n, individual samples will be approximately independent, and hence the weight associated with a sample having frequencies (f*_1, ..., f*_n) is approximately

Π_{j=1}^n (np_j)^{−f*_j};

this does not take account of the fact that sampling is without replacement.

Figure 9.7 shows the theoretical large-sample efficiencies of balanced resampling, importance resampling, and balanced importance resampling for estimating the quantiles of a normal statistic. Ordinary balance gives maximum efficiency of 2.76 at the centre of the distribution, while importance


Figure 9.7 Asymptotic efficiencies of balanced importance resampling (solid), importance resampling (large dashes), and balanced resampling (small dashes) for estimating the quantiles of a normal statistic. The dotted horizontal line is at relative efficiency one.


resampling works well in the lower tail but badly in the centre and upper tail of the distribution. Balanced importance resampling dominates both.

Example 9.10 (Returns data)  In order to assess how well these ideas might work in practice, we again consider setting studentized bootstrap confidence intervals for the slope in the returns example. We performed an experiment like that of Example 9.7, but with the R = 999 bootstrap samples generated by balanced resampling, importance resampling, and balanced importance resampling. Table 9.6 shows the mean squared error for the ordinary bootstrap divided by the mean squared errors of the quantile estimates for these methods, using 50 replicate simulations from each scheme. This slightly different “efficiency” takes into account any bias from using the improved methods of simulation, though in fact the contribution to mean squared error from bias is small. The “true” quantiles are estimated from an ordinary bootstrap of size 100 000. The first two lines of the table show the efficiency gains due to using the control method when the linear approximation is used as a control variate, with empirical influence values calculated exactly and estimated by regression from the same bootstrap simulation. The results differ little. The next two rows show the gains due to balanced sampling, both without and with the control


Method                  Distribution    Quantile (%)
                                         1     2.5    5     10     50     90     95    97.5    99
Control (exact)              —          1.7   2.7   2.8    4.0   11.2    5.5    2.4   2.6    1.4
Control (approx)             —          1.4   2.8   3.2    4.1   11.8    5.1    2.2   2.6    1.3
Balance                      —          1.0   1.2   1.5    1.4    3.1    2.9    1.7   1.4    0.6
Balance with control         —          1.4   1.8   3.0    2.8    4.4    4.7    2.5   2.2    1.5
Importance                  H1          7.8   3.7   3.6    1.8    0.4    3.5    2.3   3.1    5.5
Importance                  H2          4.6   2.9   3.5    1.1    0.1    2.6    3.1   4.3    5.2
Importance                  H3          3.6   3.7   2.0    1.7    0.5    2.4    2.2   2.6    3.6
Importance                  H4          4.3   2.6   2.5    1.8    0.9    1.6    1.6   2.2    2.3
Importance                  H5          2.6   2.1   0.7    0.3    0.4    0.5    0.6   1.6    2.1
Balanced importance         H1          5.0   5.7   4.1    1.9    0.5    2.6    2.2   6.3    4.5
Balanced importance         H2          4.2   3.4   2.4    1.8    0.2    2.0    3.6   4.2    3.9
Balanced importance         H3          5.2   4.2   3.8    1.8    0.9    3.0    2.4   4.0    4.0
Balanced importance         H4          4.3   3.3   3.4    2.2    2.1    2.7    3.7   3.3    4.3
Balanced importance         H5          3.2   2.8   1.0    0.4    0.9    0.9    1.4   2.1    2.1

method, which gives a worthwhile improvement in performance, except in the tail. The next five lines show the gains due to different versions of importance resampling, in each case using a defensive mixture distribution and the raw quantile estimate. In practice it is unusual to perform a bootstrap simulation with the aim of setting a single confidence interval, and the choice of importance sampling distribution H must balance various potentially conflicting requirements. Our choices were designed to reflect this. We first suppose that the empirical influence values l_j for t are known and can be used for exponential tilting of the linear approximation t*_L to t*. The first defensive mixture, H_1, uses 499 simulations from a distribution tilted to the α quantile of t*_L and 500 simulations from a distribution tilted to the 1 − α quantile of t*_L, for α = 0.05. The second mixture is like this but with α = 0.025. The third, fourth and fifth distributions are the sort that might be used in practice with a complicated statistic. We first performed an ordinary bootstrap of size R_0, which we used to estimate first the empirical influence values l_j by regression and then the tilt values η for the 0.05 and 0.95 quantiles. We then performed a further bootstrap of size (R − R_0)/2 using each set of tilted probabilities, giving a total of R simulations from three different distributions, one centred and two tilted in opposite directions. We took R_0 = 199 and R_0 = 499, giving H_3 and H_4. For H_5 we took R_0 = 499, but estimated the tilted distributions by frequency smoothing (Section 3.9.2) with bandwidth

Table 9.6 Efficiencies for estimation of quantiles of studentized slope for returns data, relative to ordinary bootstrap resampling.


ε = 0.5v^{1/2} at the 0.05 and 0.95 quantiles of t*, where v^{1/2} is the standard error of t estimated from the ordinary bootstrap.

Balance generally improves importance resampling, which is not sensitive to the mixture distribution used. The effect of estimating the empirical influence values is not marked, while frequency smoothing does not perform so well as exponential tilting. Importance resampling estimates of the central quantiles are poor, even when the simulation is balanced. Overall, any of schemes H_1–H_4 leads to appreciably more accurate estimates of the quantiles usually of interest. ■

9.4.4 Bootstrap recycling

In Section 3.9 we introduced the idea of bootstrapping the bootstrap, both for making bias adjustments to bootstrap calculations and for studying the variation of properties of statistics. Further applications of the idea were described in Chapters 4 and 5. In both parametric and nonparametric applications we need to simulate samples from a series of distributions, themselves obtained from simulations in the nonparametric case. Recycling methods replace many sets of simulated samples by one set of samples and many sets of weights, and have the potential to reduce the computational effort greatly. This is particularly valuable when the statistic of interest is expensive to calculate, for example when it involves a difficult optimization, or when each bootstrap sample is costly to generate, as when using Markov chain Monte Carlo methods (Section 4.2.2). The basic idea is repeated use of the importance sampling identity (9.14), as follows. Suppose that we are trying to calculate μ_k = E{m(Y) | G_k} for a series of distributions G_1, ..., G_K. The naive Monte Carlo approach is to calculate each value μ_k = E{m(Y) | G_k} independently, simulating R samples y_1, ..., y_R from G_k and calculating μ̂_k = R^{−1} Σ m(y_r). But for any distribution H whose support includes that of G_k we have

E{m(Y) | G_k} = ∫ m(y) dG_k(y) = ∫ m(y) {dG_k(y)/dH(y)} dH(y) = E{ m(Y) dG_k(Y)/dH(Y) | H }.

We can therefore estimate all K values using one set of samples y_1, ..., y_N simulated from H, with estimates

μ̂_k = N^{−1} Σ_{r=1}^N m(y_r) dG_k(y_r)/dH(y_r),   k = 1, ..., K.   (9.24)

In some contexts we may choose N to be much larger than the value R we might use for a single simulation, but less than KR. It is important to choose H carefully, and to take account of the fact that the estimates are correlated.
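A toy illustration of (9.24) in Python (not the book's example — the support, the distribution H, and the two target distributions G_1 and G_2 are all invented): a single set of draws from H is reweighted by dG_k/dH to estimate E{m(Y) | G_k} for both targets at once.

```python
import random

def recycle_estimates(ys, m, h_prob, g_probs_list):
    # One sample set from H, many sets of weights dG_k/dH, as in (9.24).
    N = len(ys)
    return [sum(m(y) * g_probs[y] / h_prob[y] for y in ys) / N
            for g_probs in g_probs_list]

rng = random.Random(7)
support = [1, 2, 3, 4, 5, 6]
h_prob = {y: 1.0 / 6.0 for y in support}    # H uniform on the support
ys = [rng.choice(support) for _ in range(60000)]

# two target distributions G_1, G_2 (invented tilts of H)
g1 = {1: 0.1, 2: 0.1, 3: 0.2, 4: 0.2, 5: 0.2, 6: 0.2}
g2 = {1: 0.3, 2: 0.3, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.1}
m = float  # m(y) = y, so E{m(Y) | G_k} is the mean of G_k
est1, est2 = recycle_estimates(ys, m, h_prob, [g1, g2])
```

Here m(y) = y, so the two estimates approximate the means 3.9 and 2.7 of G_1 and G_2, without ever sampling from either target.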


Both N and the choice of H depend upon the use being made of the estimates and the form of m(·).

Example 9.11 (City population data)  Consider again estimating the bias and variance functions for the ratio θ = t(F) of the city population data with n = 10. In Example 3.22 we estimated b(F) = E(T | F) − t(F) and v(F) = var(T | F) for a range of values of θ = t(F), using a first-level bootstrap to calculate values of t* for 999 bootstrap samples F*, and then doing a second-level bootstrap to estimate b(F*) and v(F*) for each of those samples. Here the second level of resampling is avoided by using importance re-weighting. At the same time, we retain the smoothing introduced in Example 3.22. Rather than take each G_k to be one of the bootstrap EDFs F*, we obtain a smooth curve by using smooth distributions F*_θ with probabilities p_j(θ) as defined by (3.39). Recall that the parameter value of F*_θ is t(F*_θ) = θ*, say, which will differ slightly from θ. For H we take F̂, the EDF of the original data, on the grounds that it has the correct support and covers the range of values for y* well; it is not necessarily a good choice. Then we have weights

dG_k(y*_r)/dH(y*_r) = dF*_θ(y*_r)/dH(y*_r) = Π_{j=1}^n {p_j(θ)/n^{−1}}^{f*_{rj}} = w*_r(θ),

say, where as usual f*_{rj} is the frequency with which y_j occurs in the rth bootstrap sample. We should emphasize that the samples y* drawn from H here replace second-level bootstrap samples.

Consider the bias estimate. The weighted sum R^{−1} Σ_r (t*_r − θ*) w*_r(θ) is an unbiased estimate of the bias E**(T** | F*_θ) − θ*, and we can plot this estimate to see how the bias varies as a function of θ* or θ. However, the weighted sum can behave badly if a few of the w*_r(θ) are very large, and it is better to use the ratio and regression estimates (9.22) and (9.23). The top left panel of Figure 9.8 shows raw, ratio, and regression estimates of the bias, based on a single set of R = 999 simulations, with the curve obtained from the double bootstrap calculation used in Figure 3.7. For example, the ratio estimate of bias for a particular value of θ is Σ_r (t*_r − θ*)w*_r(θ) / Σ_r w*_r(θ), and this is plotted as a function of θ*. The raw and ratio estimates are rather poor, but the regression estimate agrees fairly well with the double bootstrap curve. The panel also shows the estimated bias from a defensive mixture with 499 ordinary samples mixed with 250 samples tilted to each of the 0.025 and 0.975 quantiles; this is the best estimate of those we consider. The panels below show 20 replicates of these estimated biases. These confirm the impression from the panel above: with ordinary resampling the regression estimator is best, but it is better to use the mixture distribution. The top right panel shows the corresponding estimates for the standard



…, we can obtain a saddlepoint approximation to (9.31) by applying (9.28) and (9.30) with u = (2l* − t)² and p_j = … Including programming, it took about ten minutes to calculate 3000 values of (9.31) by saddlepoint approximation; direct simulation with 250 samples at the second level took about four hours on the same workstation. ■

Estimating functions

One simple extension of the basic approximations is to statistics determined by monotonic estimating functions. Suppose that the value of a scalar bootstrap statistic T* based on sampling from y_1, ..., y_n is the solution to the estimating equation

U*(t) = Σ_{j=1}^n a(t; y_j) f*_j = 0,   (9.32)

where for each y the function a(θ; y) is decreasing in θ. Then T* ≤ t if and only if U*(t) ≤ 0. Hence Pr*(T* ≤ t) may be estimated by Ĝ_s(0) applied with cumulant-generating function (9.30) in which a_j = a(t; y_j). A saddlepoint approximation to the density of T* is

g_s(t) = |K̇(ξ̂)| {2πK″(ξ̂)}^{−1/2} exp{K(ξ̂)},   (9.33)

where K̇(ξ) = ∂K(ξ; t)/∂t, and ξ̂ solves the equation K′(ξ̂) = 0. The first term on the right in (9.33) corresponds to the Jacobian for the transformation from the density of U* to that of T*.
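The accuracy of such density approximations can be checked on a case with a known answer. The Python sketch below (an illustration, not from the book) applies the basic saddlepoint density (2πK″(ξ̂))^{−1/2} exp{K(ξ̂) − ξ̂u} to the sum of n unit exponential variables, whose CGF is K(ξ) = −n log(1 − ξ); the approximation reproduces the gamma density up to a constant Stirling-type relative error.

```python
import math

def saddlepoint_density(u, n):
    # CGF of a sum of n unit exponentials: K(xi) = -n*log(1 - xi).
    # Saddlepoint equation: K'(xi) = n/(1 - xi) = u => xi_hat = 1 - n/u.
    xi = 1.0 - n / u
    K = -n * math.log(1.0 - xi)
    K2 = n / (1.0 - xi) ** 2          # K''(xi_hat)
    return math.exp(K - xi * u) / math.sqrt(2.0 * math.pi * K2)

def gamma_density(u, n):
    # exact density of the sum: gamma(n, 1)
    return u ** (n - 1) * math.exp(-u) / math.factorial(n - 1)

n = 5
approx = saddlepoint_density(5.0, n)
exact = gamma_density(5.0, n)
ratio = approx / exact
```

For n = 5 the relative error is about 1.7% at every u — the familiar Stirling correction n!/{(2πn)^{1/2} n^n e^{−n}} — which is why renormalizing the saddlepoint density often makes it exact in such cases.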


9.5 ■Saddlepoint Approximation

Example 9.15 (Maize data)  Problem 4.7 contains data from a paired comparison experiment performed by Darwin on the growth of maize plants. The data are reduced to 15 differences y_1, ..., y_n between the heights (in eighths of an inch) of cross-fertilized and self-fertilized plants. When two large negative values are excluded, the differences have average ȳ = 33 and look close to normal, but when those values are included the average drops to 20.9. When data may have been contaminated by outliers, robust M-estimates are useful. If we assume that Y = θ + σε, where the distribution of ε is symmetric about zero but may have long tails, an estimate of location θ can be found by solving the equation

Σ_{j=1}^n ψ{(y_j − θ̂)/σ} = 0,   (9.34)

where ψ(ε) is designed to downweight large values of ε. A common choice is the Huber estimate determined by

ψ(ε) = ε, |ε| ≤ c;   ψ(ε) = c sign(ε), |ε| > c.   (9.35)

With c = ∞ this gives ψ(ε) = ε and leads to the normal-theory estimate θ̂ = ȳ, but a smaller choice of c will give better behaviour when there are outliers. With c = 1.345 and σ fixed at the median absolute deviation s of the data, we obtain θ̂ = 26.45. How variable is this? We can get some idea by looking at replicates of θ̂ based on bootstrap samples y*_1, ..., y*_n. A bootstrap value θ̂* solves

Σ_{j=1}^n ψ{(y*_j − θ̂*)/s} = 0,

so the saddlepoint approximation to the PDF of bootstrap values is obtained starting from (9.32) with a(t; y_j) = ψ{(y_j − t)/s}. The left panel of Figure 9.10 compares the saddlepoint approximation with the empirical distribution of θ̂*, and with the approximate PDF of the bootstrapped average. The saddlepoint approximation to θ̂* seems quite accurate, while the PDF of the average is wider and shifted to the left.

The assumption of symmetry underlies the use of the estimator θ̂, because the parameter θ must be the same for all possible choices of c. The discussion in Section 3.3 and Example 3.26 implies that our resampling scheme should take this into account by enlarging the resampling set to y_1, ..., y_n, θ̃ − (y_1 − θ̃), ..., θ̃ − (y_n − θ̃), for some very robust estimate θ̃ of θ; we take θ̃ to be the median. The cumulant-generating function required when taking samples



of size n from this set is

K(ξ) = n log[ (2n)^{−1} Σ_{j=1}^n ( exp{ξ a(t; y_j)} + exp{ξ a(t; 2θ̃ − y_j)} ) ].

The right panel of Figure 9.10 compares saddlepoint and Monte Carlo approximations to the PDF of θ̂* under this symmetrized resampling scheme; the PDF of the average is shown also. All are symmetric about θ̃. One difficulty here is that we might prefer to approximate the PDF of θ̂* when s is replaced by its bootstrap version s*, and this cannot be done in the current framework. More fundamentally, the distribution of interest will often be for a quantity such as a studentized form of θ̂* derived from θ̂*, s*, and perhaps other statistics, necessitating the more sophisticated approximations outlined in Section 9.5.3. ■
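The estimating-function setup of the example can be sketched in Python (an illustration only — the maize differences are not reproduced here, so a small invented sample with two outliers is used; c = 1.345 and the scale s is the median absolute deviation, as in the example):

```python
import statistics

def psi(e, c=1.345):
    # Huber psi: linear in the middle, clipped at +/- c.
    return max(-c, min(e, c))

def huber_location(y, c=1.345, tol=1e-10):
    # Solve sum_j psi((y_j - t)/s) = 0 by bisection; the left side is
    # decreasing in t, so the sample range brackets the root.
    med = statistics.median(y)
    s = statistics.median(abs(x - med) for x in y)  # MAD scale
    def u(t):
        return sum(psi((x - t) / s, c) for x in y)
    lo, hi = min(y), max(y)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if u(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y = [12, 18, 21, 24, 25, 27, 30, 33, -67, -48]  # invented, two outliers
t_hat = huber_location(y)
```

Because ψ is bounded, the two outliers are clipped at ±c and the location estimate stays near the centre of the clean observations rather than being dragged down towards the sample mean.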

9.5.2 Conditional approximation

There are numerous ways to extend the discussion above. One of the most straightforward is to situations where U is a q × 1 vector which is a linear function of independent variables W₁,...,Wₙ with cumulant-generating functions K_j(ξ), j = 1,...,n. That is, U = AᵀW, where A is an n × q matrix with rows a_jᵀ. The joint cumulant-generating function of U is

K(ξ) = log E exp(ξᵀAᵀW) = Σ_{j=1}^n K_j(a_jᵀξ),

Figure 9.10 Comparison of the saddlepoint approximation to the PDF of a robust M-estimate applied to the maize data (solid), with results from a bootstrap simulation with R = 50000. The heavy curve is the saddlepoint approximation to the PDF of the average. The left panel shows results from resampling the data, and the right shows results from a symmetrized bootstrap.


9.5 · Saddlepoint Approximation

and the saddlepoint approximation to the density of U at u is

g_s(u) = (2π)^{−q/2} |K″(ξ̂)|^{−1/2} exp{K(ξ̂) − ξ̂ᵀu},   (9.36)

where ξ̂ satisfies the q × 1 system of equations ∂K(ξ)/∂ξ = u, and K″(ξ) = ∂²K(ξ)/∂ξ∂ξᵀ is the q × q matrix of second derivatives of K; |·| denotes determinant. Now suppose that U is partitioned into U₁ and U₂, that is, Uᵀ = (U₁ᵀ, U₂ᵀ), where U₁ and U₂ have dimension q₁ × 1 and (q − q₁) × 1 respectively. Note that U₂ = A₂ᵀW, where A₂ consists of the last q − q₁ columns of A. The cumulant-generating function of U₂ is simply K(0, ξ₂), where ξᵀ = (ξ₁ᵀ, ξ₂ᵀ), and the saddlepoint approximation to its density at u₂ is

(2π)^{−(q−q₁)/2} |K″₂₂(0, ξ̂₂₀)|^{−1/2} exp{K(0, ξ̂₂₀) − ξ̂₂₀ᵀu₂},   (9.37)

where ξ̂₂₀ satisfies the (q − q₁) × 1 system of equations ∂K(0, ξ₂)/∂ξ₂ = u₂, and K″₂₂ is the (q − q₁) × (q − q₁) corner of K″ corresponding to U₂. Division of (9.36) by (9.37) gives a double saddlepoint approximation to the conditional density of U₁ at u₁ given that U₂ = u₂. When U₁ is scalar, i.e. q₁ = 1, the approximate conditional CDF is again (9.28), but with

w = sign(ξ̂₁) ( 2 [ {K(0, ξ̂₂₀) − ξ̂₂₀ᵀu₂} − {K(ξ̂) − ξ̂ᵀu} ] )^{1/2},   u = ξ̂₁ { |K″(ξ̂)| / |K″₂₂(0, ξ̂₂₀)| }^{1/2}.

Example 9.16 (City population data) A simple bootstrap application is to obtain the distribution of the ratio T* in bootstrap sampling from the city population data pairs with n = 10. In order to avoid conflicts of notation we set y_j = (z_j, x_j), so that T* is the solution to the equation Σ(x_j − t z_j)f_j* = 0. For this we take the W_j to be independent Poisson random variables with equal means μ, so K_j(ξ) = μ(e^ξ − 1). We set

U = ( Σ_j (x_j − t z_j)W_j, Σ_j W_j )ᵀ,   u = (0, n)ᵀ,   a_j = (x_j − t z_j, 1)ᵀ.

Now T* ≤ t if and only if Σ_j (x_j − t z_j)W_j ≤ 0, where W_j is the number of times (z_j, x_j) is included in the sample. But the relation between the Poisson and multinomial distributions (Problem 9.19) implies that the joint conditional distribution of (W₁,...,Wₙ) given that Σ W_j = n is the same as that of the multinomial frequency vector (f₁*,...,fₙ*) in ordinary bootstrap sampling from a sample of size n. Thus the probability that Σ_j(x_j − t z_j)W_j ≤ 0 given that Σ W_j = n is just the probability that T* ≤ t in ordinary bootstrap sampling from the data pairs.
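A small numerical sketch of this double-saddlepoint calculation follows. It exploits the fact that with equal Poisson means the two-dimensional saddlepoint system collapses to the one-dimensional tilting equation Σ_j c_j exp(ξ₁c_j) = 0 with c_j = x_j − t z_j, after which ξ̂₂ = log(n/Σ_j e^{ξ̂₁c_j}). The pairs in the test are invented for illustration; they are not the city population data.

```python
import math

def ratio_cdf(z, x, t):
    """Double-saddlepoint approximation to Pr*(T* <= t) for the ratio
    T* = sum x_j f_j* / sum z_j f_j*, using Poisson variables W_j
    conditioned on sum W_j = n, as in the text."""
    n = len(z)
    c = [xj - t*zj for xj, zj in zip(x, z)]     # a_j = (c_j, 1)
    xi1 = 0.0                                   # 1-d Newton for sum c_j e^{xi1 c_j} = 0
    for _ in range(200):
        e = [math.exp(xi1*cj) for cj in c]
        g = sum(cj*ej for cj, ej in zip(c, e))
        if abs(g) < 1e-12:
            break
        xi1 -= g/sum(cj*cj*ej for cj, ej in zip(c, e))
    if abs(xi1) < 1e-8:                         # t at the centre of the distribution
        return 0.5
    e = [math.exp(xi1*cj) for cj in c]
    S = sum(e)                                  # <= n at the saddlepoint
    xi2 = math.log(n/S)
    # w and v for the conditional CDF formula (9.28); here K(xi_hat) = n and
    # the numerator saddlepoint is exactly xi20 = 0, K''_22(0, 0) = n.
    w = math.copysign(math.sqrt(2*n*xi2), xi1)
    v = xi1*math.sqrt(n*sum(cj*cj*ej for cj, ej in zip(c, e))/S)
    Phi = 0.5*(1 + math.erf(w/math.sqrt(2)))
    phi = math.exp(-0.5*w*w)/math.sqrt(2*math.pi)
    return Phi + phi*(1/w - 1/v)
```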


          Unconditional       Conditional        Without replacement
   a      S'point   Sim'n     S'point   Sim'n     S'point   Sim'n
 0.001     1.150    1.149      1.216    1.215      1.070    1.070
 0.005     1.191    1.192      1.236    1.237      1.092    1.092
 0.01      1.214    1.215      1.248    1.247      1.104    1.103
 0.025     1.251    1.252      1.273    1.269      1.122    1.122
 0.05      1.286    1.286      1.301    1.291      1.139    1.138
 0.1       1.329    1.329      1.340    1.337      1.158    1.158
 0.9       1.834    1.833      1.679    1.679      1.348    1.348
 0.95      1.967    1.967      1.732    1.736      1.392    1.392
 0.975     2.107    2.104      1.777    1.777      1.436    1.435
 0.99      2.303    2.296      1.829    1.833      1.493    1.495
 0.995     2.461    2.445      1.865    1.863      1.537    1.540
 0.999     2.857    2.802      1.938    1.936      1.636    1.635

In this situation it is of course more direct to use the estimating function method with a(t; y_j) = x_j − t z_j and the simpler approximations (9.28) and (9.33). Then the Jacobian term in (9.33) is |Σ z_j exp{ξ(x_j − t z_j)} / Σ exp{ξ(x_j − t z_j)}|.

Another application is to conditional distributions for T*. Suppose that the population pairs are related by x_j = z_jθ + z_j^{1/2}ε_j, where the ε_j are a random sample from a distribution with mean zero. Then conditional on the z_j, the ratio Σ x_j / Σ z_j has variance proportional to (Σ z_j)⁻¹. In some circumstances we might want to obtain an approximation to the conditional distribution of T* given that Σ z_j* = Σ z_j. In this case we can use the approach outlined in the previous paragraph, but with two conditioning variables: we take the W_j to be independent Poisson variables with equal means, and set

U* = ( Σ_j (x_j − t z_j)W_j, Σ_j z_j W_j, Σ_j W_j )ᵀ,   u = ( 0, Σ_j z_j, n )ᵀ,   a_j = ( x_j − t z_j, z_j, 1 )ᵀ.

A third application is to approximating the distribution of the ratio when a sample of size m = 10 is taken without replacement from the n = 49 data pairs. Again T* ≤ t is equivalent to the event Σ_j (x_j − t z_j)W_j ≤ 0, but now W_j indicates that (z_j, x_j) is included in the m cities chosen; we want to impose the condition Σ W_j = m. We take the W_j to be binary variables with equal success probabilities 0 < π < 1, giving K_j(ξ) = log(1 − π + πe^ξ), with π any value. We then apply the double saddlepoint approximation with

U = ( Σ_j (x_j − t z_j)W_j, Σ_j W_j )ᵀ,   u = (0, m)ᵀ,   a_j = (x_j − t z_j, 1)ᵀ.

Table 9.8 compares the quantiles of these saddlepoint distributions with

Table 9.8 Comparison of saddlepoint and simulation quantile approximations for the ratio when sampling from the city population data. The statistics are the ratio Σ x_j / Σ z_j with n = 10, the ratio conditional on Σ z_j* = 640 with n = 10, and the ratio in samples of size 10 taken without replacement from the full data. The simulation results are based on 100000 bootstrap samples, with logistic regression used to estimate the simulated conditional probabilities, from which the quantiles were obtained by a spline fit.


Monte Carlo approximations based on 100000 samples. The general agreement is excellent in each case. ■

A further application is to permutation distributions.

Example 9.17 (Correlation coefficient) In Example 4.9 we applied a permutation test to the sample correlation t between variables x and z based on pairs (x₁, z₁),...,(xₙ, zₙ). For this statistic and test, the event T ≥ t is equivalent to Σ_j x_j z_{ℓ(j)} ≥ Σ x_j z_j, where ℓ(·) is a permutation of the integers 1,...,n. An alternative formulation is as follows. Let W_{ij}, i, j = 1,...,n, denote independent binary variables with equal success probabilities 0 < π < 1, for any π. Then consider the distribution of U₁ = Σ_{i,j} x_i z_j W_{ij} conditional on U₂ = (Σ_j W_{1j},...,Σ_j W_{nj}, Σ_i W_{i1},...,Σ_i W_{i,n−1})ᵀ = u₂, where u₂ is a vector of ones of length 2n − 1. Notice that the condition Σ_i W_{in} = 1 is entailed by the other conditions and so is redundant. Each value of x_i and each value of z_j appears precisely once in the sum U₁, with equal probabilities, and hence the conditional distribution of U₁ given U₂ = u₂ is equivalent to the permutation distribution of T. Here m = n², q = 2n, and q₁ = 1. Our limited numerical experience suggests that in this example the saddlepoint approximation can be inaccurate if the large number of constraints results in a conditional distribution on only a few values. ■

9.5.3 Marginal approximation

The approximate distribution and density functions described so far are useful in contexts such as testing hypotheses, but they are harder to apply to such problems as setting studentized bootstrap confidence intervals. Although (9.26) and (9.28) can be extended to some types of complicated statistics, we merely outline the results.

Approximate cumulant-generating function

The simplest approach is direct approximation to the cumulant-generating function of the bootstrap statistic of interest, T*. The key idea is to replace the cumulant-generating function K(ξ) by the first four terms of its expansion in powers of ξ,

K_C(ξ) = ξκ₁ + ½ξ²κ₂ + (1/6)ξ³κ₃ + (1/24)ξ⁴κ₄,

where the κ_r approximate the cumulants of T*. To obtain the influence values needed for the studentized quantity W = {t(F̂) − t(F)}/v(F̂)^{1/2}, we substitute F + ε₁H(y − y₁) + ε₂H(y − y₂) + ε₃H(y − y₃) for F, differentiate with respect to ε₁, ε₂, and ε₃, and set ε₁ = ε₂ = ε₃ = 0. The empirical influence values for W at F̂ are then obtained by replacing F with F̂. In terms of the influence values for t and v the result of this calculation is

L_w(y₁) = v^{−1/2} L_t(y₁),

Q_w(y₁, y₂) = v^{−1/2} Q_t(y₁, y₂) − ½ v^{−3/2} L_t(y₁)L_v(y₂)[2],

C_w(y₁, y₂, y₃) = v^{−1/2} C_t(y₁, y₂, y₃) − ½ v^{−3/2} {Q_t(y₁, y₂)L_v(y₃) + Q_v(y₁, y₂)L_t(y₃)}[3] + ¾ v^{−5/2} L_t(y₁)L_v(y₂)L_v(y₃)[3],


where [k] after a term indicates that it should be summed over the permutations of its y's that give the k distinct quantities in the sum. Thus for example

L_t(y₁)L_v(y₂)L_v(y₃)[3] = L_t(y₁)L_v(y₂)L_v(y₃) + L_t(y₂)L_v(y₁)L_v(y₃) + L_t(y₃)L_v(y₁)L_v(y₂).

The influence values for z involve linear, quadratic, and cubic influence values for t, and linear and quadratic influence values for v, the latter given by

L_v(y₁) = L_t(y₁)² − ∫ L_t(x)² dF(x) + 2 ∫ L_t(x)Q_t(x, y₁) dF(x),

Q_v(y₁, y₂) = L_t(y₁)Q_t(y₁, y₂)[2] − L_t(y₁)L_t(y₂) − ∫ {Q_t(x, y₁) + Q_t(x, y₂)}L_t(x) dF(x) + ∫ {Q_t(x, y₂)Q_t(x, y₁) + L_t(x)C_t(x, y₁, y₂)} dF(x).

The simplest example is the average t(F) = ∫ x dF(x) = ȳ of a sample of values y₁,...,yₙ from F. Then L_t(y_i) = y_i − ȳ, Q_t(y_i, y_j) = C_t(y_i, y_j, y_k) = 0, the expressions above simplify greatly, and the required influence quantities are

l_i = L_w(y_i; F̂) = v^{−1/2}(y_i − ȳ),

q_{ij} = Q_w(y_i, y_j; F̂) = −½ v^{−3/2}(y_i − ȳ){(y_j − ȳ)² − v}[2],

c_{ijk} = C_w(y_i, y_j, y_k; F̂) = (3/2) v^{−3/2}(y_i − ȳ)(y_j − ȳ)(y_k − ȳ) + ¾ v^{−5/2}(y_i − ȳ){(y_j − ȳ)² − v}{(y_k − ȳ)² − v}[3],

where v = n⁻¹ Σ(y_i − ȳ)². The influence quantities for Z are obtained from those for W by multiplication by n^{1/2}. A numerical illustration of the use of the corresponding approximate cumulant-generating function K_C(ξ) is given in Example 9.19. ■

Integration approach

Another approach involves extending the estimating function approximation to the multivariate case, and then approximating the marginal distribution of the statistic of interest. To see how, suppose that the quantity T of interest is a scalar, and that T and S = (S₁,...,S_{q−1})ᵀ are determined by a q × 1 estimating function

U(t, s) = Σ_{j=1}^n a(t, s₁,...,s_{q−1}; Y_j).

Then the bootstrap quantities T* and S* are the solutions of the equations

U*(t, s) = Σ_{j=1}^n a_j(t, s)f_j* = 0,   (9.40)


where a_j(t, s) = a(t, s; y_j) and the frequencies (f₁*,...,fₙ*) have a multinomial distribution with denominator n and mean vector n(p₁,...,pₙ); typically p_j = n⁻¹. We assume that there is a unique solution (t*, s*) to (9.40) for each possible set of f_j*, and seek saddlepoint approximations to the marginal PDF and CDF of T*. For fixed t and s, the cumulant-generating function of U* is

K(ξ; t, s) = n log Σ_{j=1}^n p_j exp{ξᵀ a_j(t, s)},   (9.41)

and the joint density of the U* at u is given by (9.36). The Jacobian needed to obtain the joint density of T* and S* from that of U* is hard to obtain exactly, but can be approximated by

J(t, s; ξ) = | Σ_{j=1}^n p_j′(t, s) ( ∂a_j(t, s)/∂t, ∂a_j(t, s)/∂sᵀ ) |,

where

p_j′(t, s) = p_j exp{ξᵀ a_j(t, s)} / Σ_{k=1}^n p_k exp{ξᵀ a_k(t, s)};

as usual for r × 1 and c × 1 vectors a and s with components a_i and s_j, we write ∂a/∂sᵀ for the r × c array whose (i, j) element is ∂a_i/∂s_j. The Jacobian J(t, s; ξ) reduces to the Jacobian term in (9.33) when s is not present. Thus the saddlepoint approximation to the density of (T*, S*) at (t, s) is

J(t, s; ξ̂)(2π)^{−q/2} |K″(ξ̂; t, s)|^{−1/2} exp K(ξ̂; t, s),   (9.42)

where ξ̂ = ξ̂(t, s) is the solution to the q × 1 system of equations ∂K/∂ξ = 0. Let us write Λ(t, s) = −K{ξ̂(t, s); t, s}. We require the marginal density and distribution functions of T* at t. In principle they can be obtained by integration of (9.42) numerically with respect to s, but this is time-consuming when s is a vector. An alternative approach is analytical approximation using Laplace's method, which replaces the most important part of the integrand, the rightmost term in (9.42), by a normal integral, suitably centred and scaled. Provided that the matrix ∂²Λ(t, s)/∂s∂sᵀ is positive definite, the resulting approximate marginal density of T* at t is

J(t, ŝ; ξ̂)(2π)^{−1/2} |K″(ξ̂; t, ŝ)|^{−1/2} |∂²Λ(t, ŝ)/∂s∂sᵀ|^{−1/2} exp K(ξ̂; t, ŝ),   (9.43)

where ξ̂ = ξ̂(t) and ŝ = ŝ(t) are functions of t that solve simultaneously the


q × 1 and (q − 1) × 1 systems of equations

∂K(ξ; t, s)/∂ξ = n Σ_{j=1}^n p_j′(t, s) a_j(t, s) = 0,   ∂K(ξ; t, s)/∂s = n Σ_{j=1}^n p_j′(t, s) {∂a_j(t, s)ᵀ/∂s} ξ = 0.   (9.44)

These can be solved using packaged routines, with starting values given by noting that when t equals its sample value t₀, say, s equals its sample value and ξ = 0. The second derivatives of Λ needed to calculate (9.43) may be expressed as

∂²Λ(t, s)/∂s∂sᵀ = (∂²K/∂s∂ξᵀ)(∂²K/∂ξ∂ξᵀ)⁻¹(∂²K/∂ξ∂sᵀ) − ∂²K/∂s∂sᵀ,   (9.45)

where at the solutions to (9.44) the matrices in (9.45) are given by

∂²K(ξ; t, s)/∂ξ∂ξᵀ = n Σ_{j=1}^n p_j′(t, s) a_j(t, s) a_j(t, s)ᵀ,   (9.46)

together with corresponding expressions (9.47) and (9.48) for ∂²K(ξ; t, s)/∂ξ∂sᵀ and ∂²K(ξ; t, s)/∂s∂sᵀ,

with s_c and s_d the cth and dth components of s. The marginal CDF approximation for T* at t is (9.28), with

w = sign(t − t₀){2Λ(t, ŝ)}^{1/2},   (9.49)

u = {dΛ(t, ŝ)/dt} |K″(ξ̂; t, ŝ)|^{1/2} |∂²Λ(t, ŝ)/∂s∂sᵀ|^{1/2} / J(t, ŝ; ξ̂),   (9.50)

evaluated at s = ŝ, ξ = ξ̂; the only additional quantity needed here is

dΛ(t, ŝ)/dt = −n Σ_{j=1}^n p_j′(t, ŝ) ξ̂ᵀ ∂a_j(t, ŝ)/∂t.   (9.51)

Approximate quantiles of T* can be obtained in the way described just before Example 9.13. The expressions above look forbidding, but their implementation is relatively straightforward. The key point to note is that they depend only on the quantities a_j(t, s), their first derivatives with respect to t, and their first two derivatives with respect to s. Once these have been programmed, they can be input to a generic routine to perform the saddlepoint approximations. Difficulties that sometimes arise with numerical overflow due to large exponents can usually be circumvented by rescaling data to zero mean and unit variance, which has no


effect on location- and scale-invariant quantities such as studentized statistics. Remember, however, our initial comments in Section 9.1: the investment of time and effort needed to program these approximations is unlikely to be worthwhile unless they are to be used repeatedly.

Example 9.19 (Maize data) To illustrate these ideas we consider the bootstrap variance and studentized average for the maize data. Both these statistics are location-invariant, so without loss of generality we replace y_j with y_j − ȳ and henceforth assume that ȳ = 0. With this simplification the statistics of interest are

V* = n⁻¹ Σ_{j=1}^n (Y_j* − Ȳ*)²,   Z* = (n − 1)^{1/2} Ȳ*/V*^{1/2},

where Ȳ* = n⁻¹ Σ Y_j*. A little algebra shows that

n⁻¹ Σ Y_j*² = V*{1 + Z*²/(n − 1)},   n⁻¹ Σ Y_j* = Z* V*^{1/2}(n − 1)^{−1/2},

so to apply the integration approach we take p_j = n⁻¹ and

a_j(z, v) = ( y_j − z v^{1/2}(n − 1)^{−1/2}, y_j² − v{1 + z²/(n − 1)} )ᵀ,

from which the 2 × 1 matrices of derivatives

∂a_j(z, v)/∂z,   ∂a_j(z, v)/∂v,   ∂²a_j(z, v)/∂z²,   ∂²a_j(z, v)/∂v²

needed to calculate (9.43)-(9.51) are readily obtained. To find the marginal distribution of Z*, we apply (9.43)-(9.51) with t = z and s = v. For a given value of z, the three equations in (9.44) are easily solved numerically. The upper panels of Figure 9.11 compare the saddlepoint distribution and density approximations for Z* with a large simulation. The analytical quantiles are very close to the simulated ones, and although the saddlepoint density seems to have integral greater than one it captures well the skewness of the distribution. For V* we take t = v and s = z, but the lower left panel of Figure 9.11 shows that the resulting PDF approximation fails to capture the bimodality of the density. This arises because V* is deflated for resamples in which neither of the two smallest observations — which are somewhat separated from the rest — appear. The contours of −Λ(z, v) in the lower right panel reveal a potential problem with these methods. For z = −3.5, the Laplace approximation used to obtain (9.43) amounts to replacing the integral of exp{−Λ(z, v)} along the dashed vertical line by a normal approximation centred at A and with precision given by the second derivative of Λ(z, v) at A along the line. But Λ(−3.5, v) is bimodal for v > 0, and the Laplace approximation does not account for the second peak at B. As it turns out, this doesn't matter because the peak at B is so much



lower than at A that it adds little to the integral, but clearly (9.43) would be catastrophically bad if the peaks at A and B were comparable. This behaviour occurs because there is no guarantee that Λ(z, v) is a convex function of v and z. If the difficulty is thought to have arisen, numerical integration of (9.42) can be used to find the marginal density of Z*, but the problem is not easily diagnosed except by checking that (9.45) is positive definite at any solution to (9.44) and by checking that different initial values of ξ and s lead to the same solution for a given value of t. This may increase the computational burden to an extent that direct simulation is more efficient. Fortunately this difficulty is much rarer in larger samples. The quantities needed for the approximate cumulant-generating function

Figure 9.11 Saddlepoint approximations for the bootstrap variance V* and studentized average Z* for the maize data. Top left: approximations to quantiles of Z* by integration saddlepoint (solid) and simulation using 50000 bootstrap samples (every 20th order statistic is shown). Top right: density approximations for Z* by integration saddlepoint (heavy solid), approximate cumulant-generating function (solid), and simulation using 50000 bootstrap samples. Bottom left: corresponding approximations for V*. Bottom right: contours of −Λ(z, v), with local maxima along the dashed line z = −3.5 at A and at B.


approach to obtaining the distribution of n^{1/2}(n − 1)^{−1/2}Z* were given in Example 9.18. The approximate cumulants for Z* are κ_{C,1} = 0.13, κ_{C,2} = 1.08, κ_{C,3} = 0.51 and κ_{C,4} = 0.50, with κ_{C,2} = 0.89 and κ_{C,4} = −0.28 when the terms involving the q_{ij} are dropped. With or without these terms, the cumulants are some way from the values 0.17, 1.34, 1.05, and 1.55 estimated from 50000 simulations. The upper right panel of Figure 9.11 shows the PDF approximation based on the modified cumulant-generating function; in this case K_C(ξ) is convex. The modified PDF matches the centre of the distribution more closely than the integration PDF, but is poor in the upper tail. For V*, we have

l_i = (y_i − ȳ)² − v,   q_{ij} = −2(y_i − ȳ)(y_j − ȳ),   c_{ijk} = 0,

so the approximate cumulants are κ_{C,1} = 1241, with ratios κ_{C,3}/κ_{C,1} = 0.013 and κ_{C,4}/κ_{C,1} = −0.0015; the corresponding simulated values are 1243, 0.18, 0.018, and 0.0010. Neither saddlepoint approximation captures the bimodality of the simulations, though the integration method is the better of the two. In this case the resulting density from the approximate cumulant-generating function method is clearly too close to normal.



Example 9.20 (Robust M-estimate) For a second example of marginal approximation, we suppose that θ̂ and σ̂ are M-estimates found from a random sample y₁,...,yₙ by simultaneous solution of the equations

Σ_{j=1}^n ψ{(y_j − θ)/σ} = 0,   Σ_{j=1}^n [χ{(y_j − θ)/σ} − γ] = 0.

The choice ψ(ε) = ε, χ(ε) = ε², γ = 1 gives the non-robust estimates θ̂ = ȳ and σ̂² = n⁻¹ Σ(y_j − ȳ)².

For θ > 0 and some non-negative integer k we wish to estimate

μ = ∫ m(y)g(y) dy = ∫₀^∞ y^k θe^{−θy} dy

by simulating from the density h(y) = βe^{−βy}, y > 0, β > 0. Give w(y) and show that E{m(Y)w(Y)} = μ for any β and θ, but that var(μ̂_rat) is only finite when 0 < β < 2θ. Calculate var{m(Y)w(Y)}, cov{m(Y)w(Y), w(Y)}, and var{w(Y)}. Plot the asymptotic efficiencies var(μ̂_raw)/var(μ̂_rat) and var(μ̂_raw)/var(μ̂_reg) as functions of β for θ = 2 and k = 0, 1, 2, 3. Discuss your findings.
(Section 9.4.2; Hesterberg, 1995b)

13

Suppose that an application of importance resampling to a statistic T* has resulted in estimates t₁* ≤ ⋯ ≤ t_R* and associated weights w₁*,...,w_R*, and that the importance re-weighting regression estimate of the CDF of T* is required. Let A be the R × R matrix whose (r, s) element is w_r* I(t_r* ≤ t_s*) and B be the R × 2 matrix whose rth row is (1, w_r*). Show that the regression estimate of the CDF at t₁*,...,t_R* equals (1, 1)(BᵀB)⁻¹BᵀA.
(Section 9.4.2)
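The matrix identity in this problem can be checked numerically. The sketch below (with invented t* and w* values) computes (1, 1)(BᵀB)⁻¹BᵀA directly, using the fact that the row vector (1, 1)(BᵀB)⁻¹Bᵀ has entries of the form α + βw_s.

```python
def cdf_regression_estimate(t, w):
    """Importance re-weighting regression estimate of the CDF of T* at
    the ordered points t_1 <= ... <= t_R, i.e. (1,1)(B'B)^{-1}B'A."""
    R = len(t)
    # B'B is 2x2: [[R, sum w], [sum w, sum w^2]]; invert it by hand.
    sw = sum(w)
    sww = sum(wi*wi for wi in w)
    det = R*sww - sw*sw
    inv = ((sww/det, -sw/det), (-sw/det, R/det))
    # (1,1)(B'B)^{-1}B' has sth entry a0 + a1*w_s
    a0 = inv[0][0] + inv[1][0]
    a1 = inv[0][1] + inv[1][1]
    coef = [a0 + a1*wi for wi in w]
    # estimate at t_r: sum_s coef_s * w_s * I(t_s <= t_r)
    return [sum(coef[s]*w[s] for s in range(R) if t[s] <= t[r])
            for r in range(R)]
```

At the largest t* the regression of w_s I(t_s* ≤ t*) on (1, w_s) fits exactly, so the estimate there equals one; this gives a simple sanity check.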

14

(a) Let I_k = (I_{k1},...,I_{kn}), k = 1,...,nR, denote independent identically distributed multinomial random variables with denominator 1 and probability vector p = (p₁,...,pₙ). Show that S_{nR} = Σ_{k=1}^{nR} I_k has a multinomial distribution with denominator nR and probability vector p, and that the conditional distribution of I_{nR} given that S_{nR} = q is multinomial with denominator 1 and mean vector (nR)⁻¹q, where q = (R₁,...,Rₙ) is a fixed vector. Show also that

Pr(I₁ = i₁,...,I_{nR} = i_{nR} | S_{nR} = q)

equals

g(i_{nR} | S_{nR} = q) Π_{j=1}^{nR−1} g(i_{nR−j} | S_{nR−j} = q − i_{nR−j+1} − ⋯ − i_{nR}),

where g(·) is the probability mass function of its argument.
(b) Use (a) to justify the following algorithm:

Algorithm 9.4 (Balanced importance resampling)

Initialize by setting values of R₁,...,Rₙ such that R_j ≈ nRp_j and Σ_j R_j = nR.
For m = nR,...,1:
(a) Generate u from the U(0, 1) distribution.
(b) Find the j such that Σ_{i=1}^{j−1} R_i < um ≤ Σ_{i=1}^{j} R_i.
(c) Set I_m = j and decrease R_j to R_j − 1.
Return the sets {I₁,...,Iₙ}, {I_{n+1},...,I_{2n}}, ..., {I_{n(R−1)+1},...,I_{nR}} as the indices of the R bootstrap samples of size n. ·

(Section 9.4.3; Booth, Hall and Wood, 1993)
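A direct sketch of Algorithm 9.4, assuming for simplicity that the quantities nRp_j are integers so that the totals balance exactly:

```python
import random
from collections import Counter

def balanced_importance_resample(p, R, rng=None):
    """Algorithm 9.4 sketch: generate R bootstrap samples of size
    n = len(p) in which index j appears n*R*p_j times in total."""
    rng = rng or random.Random()
    n = len(p)
    counts = [round(n*R*pj) for pj in p]        # R_1, ..., R_n
    assert sum(counts) == n*R, "n*R*p_j must be integers summing to n*R"
    out = []
    for m in range(n*R, 0, -1):                 # remaining counts sum to m
        target = rng.random()*m                 # u*m in [0, m)
        cum = 0.0
        for j, Rj in enumerate(counts):         # find j with cumulative
            cum += Rj                           # count just exceeding u*m
            if target < cum:
                break
        counts[j] -= 1
        out.append(j)
    # split the index stream into R samples of size n
    return [out[r*n:(r + 1)*n] for r in range(R)]
```

By construction each index j occurs exactly R_j times over all R samples, which is what "balanced" means here.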

15

For the bootstrap recycling estimate of bias described in Example 9.12, consider the case T = Ȳ with the parametric model Y ~ N(θ, 1). Show that if H is taken to be the N(ȳ, a) distribution, then the simulation variance of the recycling estimate of C is approximately

(1/nR) {a²/(2a − 1)}^{1/2} [ 1 + n(n − 1)/{(2a − 3)^{3/2}RN} + 11a²/{8(a − 1)^{5/2}N} ],

provided a > 3/2. Compare this to the simulation variance when ordinary double bootstrap methods are used.
What are the implications for nonparametric double bootstrap calculations? Investigate the use of defensive mixtures for H in this problem.
(Section 9.4.4; Ventura, 1997)

16

Consider exponential tilting for a statistic whose linear approximation is

T_L* = t + Σ_{s=1}^S n_s⁻¹ Σ_{i=1}^{n_s} f_{si}* l_{si},

where the (f_{s1}*,...,f_{sn_s}*), s = 1,...,S, are independent sets of multinomial frequencies.
(a) Show that the cumulant-generating function of T_L* is

K(ξ) = ξt + Σ_{s=1}^S n_s log { n_s⁻¹ Σ_{i=1}^{n_s} exp(ξ l_{si}/n_s) }.

Hence show that choosing ξ̂ to give K′(ξ̂) = t₀ is equivalent to exponential tilting of T_L* to have mean t₀, and verify the tilting calculations in Example 9.8.
(b) Explain how to modify (9.26) and (9.28) to give the approximate PDF and CDF of T_L*.
(c) How can stratification be accommodated in the conditional approximations of Section 9.5.2?
(Section 9.5)

17

In a matched pair design, two treatments are allocated at random to each of n pairs of experimental units, with differences d_j and average difference d̄ = n⁻¹ Σ d_j. If there is no real effect, all 2ⁿ sequences ±d₁,...,±dₙ are equally likely, and so are the values D* = n⁻¹ Σ δ_j d_j, where the δ_j take values ±1 with probability ½. The one-sided significance level for testing the null hypothesis of no effect is Pr*(D* ≥ d̄).
(a) Show that the cumulant-generating function of D* is

K(ξ) = Σ_{j=1}^n log cosh(ξ d_j/n),

and find the saddlepoint equation and the quantities needed for saddlepoint approximation to the observed significance level. Explain how this may be fitted into the framework of a conditional saddlepoint approximation.
(b) See Practical 9.5.
(Section 9.5.1; Daniels, 1958; Davison and Hinkley, 1988)

18  For the testing problem of Problem 4.9, use saddlepoint methods to develop an approximation to the exact bootstrap P-value based on the exponential tilted EDF. Apply this to the city population data with n = 10.
(Section 9.5.1)
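A sketch of the significance-level calculation asked for in part (a) of the matched-pair problem, again via the tail formula (9.28); the differences used in the test are illustrative, not data from the book.

```python
import math

def pair_test_pvalue(d):
    """Saddlepoint approximation to Pr*(D* >= dbar) for the matched-pair
    randomization distribution, with K(xi) = sum_j log cosh(xi d_j / n)."""
    n = len(d)
    dbar = sum(d)/n
    def K(x):   return sum(math.log(math.cosh(x*dj/n)) for dj in d)
    def dK(x):  return sum((dj/n)*math.tanh(x*dj/n) for dj in d)
    def d2K(x): return sum((dj/n)**2/math.cosh(x*dj/n)**2 for dj in d)
    x = 0.0                         # Newton solve of K'(xi) = dbar
    for _ in range(100):
        g = dK(x) - dbar
        if abs(g) < 1e-12:
            break
        x -= g/d2K(x)
    if abs(x) < 1e-8:               # dbar = 0: significance level one half
        return 0.5
    w = math.copysign(math.sqrt(2*(x*dbar - K(x))), x)
    u = x*math.sqrt(d2K(x))
    Phi = 0.5*(1 + math.erf(w/math.sqrt(2)))
    phi = math.exp(-0.5*w*w)/math.sqrt(2*math.pi)
    return 1 - Phi - phi*(1/w - 1/u)
```

The saddlepoint equation has a solution only when |d̄| < n⁻¹ Σ|d_j|, i.e. when the observed average is not at the extreme of the randomization distribution.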

9.7 · Problems

19

(a) If W₁,...,Wₙ are independent Poisson variables with means μ₁,...,μₙ, show that their joint distribution conditional on Σ W_j = m is multinomial with probability vector π = (μ₁,...,μₙ)/Σ_j μ_j and denominator m. Hence justify the first saddlepoint approximation in Example 9.16.
(b) Suppose that T* is the solution to an estimating equation of form (9.32), but that f_j* = 0 or 1 and Σ f_j* = m < n; T* is then a jackknife value of the original statistic with n − m observations deleted. Explain how to obtain a saddlepoint approximation to the PDF of T*. How can this PDF be used to estimate var*(T*)? Do you think the estimate will be good when m = n − 1?
(Section 9.5.2; Booth and Butler, 1990)

20

20

(a) Show that the bootstrap correlation coefficient t ’ based on data pairs ( x j , Zj), j = 1, . . . , n , may be expressed as the solution to the estimating equation (9.40) with Xj-Si

/

Zj

Oj ( t , s ) =

( Xj

\

- s2 Si)2

53

- s2j2 - s4 ( Xj - Si ) ( Zj - S2) - t{s3s4)1/2 J (Zj

V

where s T = (s1,s 2 ,s 3,s 4), and show that the Jacobian J ( t , s ; £ ) = n5(s 3s4)1/2. Obtain the quantities needed for the marginal saddlepoint approximation (9.43) to the density o f T*. (b) W hat further quantities would be needed for saddlepoint approximation to the marginal density o f the studentized form o f T ‘ ? (Section 9.5.3; D avison, Hinkley and Worton, 1995; DiCiccio, Martin and Young, 1994) 21

Let T₁* be a statistic calculated from a bootstrap sample in which y_j appears with frequency f_j* (j = 1,...,n), and suppose that the linear approximation to T₁* is T_L* = t + n⁻¹ Σ f_j* l_j, where l₁ ≤ l₂ ≤ ⋯ ≤ lₙ. The statistic T₂* antithetic to T₁* is calculated from the bootstrap sample in which y_j appears with frequency f*_{n+1−j}.
(a) Show that if T₁* and T₂* are antithetic,

var{½(T₁* + T₂*)} = (2n)⁻¹ ( n⁻¹ Σ_{j=1}^n l_j² + n⁻¹ Σ_{j=1}^n l_j l_{n+1−j} ),

and that this is roughly x²/(2n) as n → ∞, where

x² = ∫₀¹ η_p² dp + ∫₀¹ η_p η_{1−p} dp

and η_p is the pth quantile of the distribution of L_t(Y; F).
(b) Show that if T₂* is independent of T₁* the corresponding variance is (2n)⁻¹ ∫₀¹ η_p² dp, and deduce that when T is the sample average and F is the exponential distribution the large-sample performance gain of antithetic resampling is 6/(12 − π²) ≈ 2.8.
(c) What happens if F is symmetric? Explain qualitatively why.
(Hall, 1989a)

22

Suppose that resampling from a sample of size n is used to estimate a quantity z(n) with expansion

z(n) = z₀ + n^{−a}z₁ + n^{−2a}z₂ + ⋯,   (9.55)

where z₀, z₁, z₂ are unknown but a is known; often a = ½. Suppose that we resample from the EDF F̂, but with sample sizes n₀, n₁, where 1 < n₀ < n₁ < n, instead of the usual n, giving simulation estimates z*(n₀), z*(n₁) of z(n₀), z(n₁).
(a) Show that z*(n) can be estimated by

z̃*(n) = z*(n₀) + {(n₀^{−a} − n^{−a})/(n₀^{−a} − n₁^{−a})} {z*(n₁) − z*(n₀)}.

(b) Now suppose that an estimate of z*(n_j) based on R_j simulations has variance approximately b/R_j and that the computational effort required to obtain it is c n_j R_j, for some constants b and c. Given n₀ and n₁, discuss the choice of R₀ and R₁ to minimize the variance of z̃*(n) for a given total computational effort.
(c) Outline how knowledge of the limit z₀ in (9.55) can be used to improve z̃*(n). How would you proceed if a were unknown? Do you think it wise to extrapolate from just two values n₀ and n₁?
(Bickel and Yahav, 1988)
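A quick check of part (a): when the expansion (9.55) terminates after the n^{−a} term, the two-point rule recovers z(n) exactly. The numerical values below are illustrative only.

```python
def extrapolate(z_n0, z_n1, n0, n1, n, a=0.5):
    """Two-point extrapolation of z(n) = z0 + n^{-a} z1 + ... from
    estimates z_n0, z_n1 at sample sizes n0 < n1 < n."""
    ratio = (n0**-a - n**-a)/(n0**-a - n1**-a)
    return z_n0 + ratio*(z_n1 - z_n0)
```

In practice z_n0 and z_n1 would be Monte Carlo estimates, so the extrapolated value inherits their simulation variability, which motivates part (b).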

9.8 Practicals

1  For ordinary bootstrap sampling, balanced resampling, and balanced resampling within strata:

y