Academic Performance Rating Scale


School Psychology Review, Volume 20, No. 2, 1991, pp. 284-300

TEACHER RATINGS OF ACADEMIC SKILLS: THE DEVELOPMENT OF THE ACADEMIC PERFORMANCE RATING SCALE

George J. DuPaul
University of Massachusetts Medical Center

Mark D. Rapport
University of Hawaii at Manoa

Lucy M. Perriello
University of Massachusetts Medical Center

Abstract: This study investigated the normative and psychometric properties of a recently developed teacher checklist, the Academic Performance Rating Scale (APRS), in a large sample of urban elementary school children. This instrument was developed to assess teacher judgments of academic performance to identify the presence of academic skills deficits in students with disruptive behavior disorders and to continuously monitor changes in these skills associated with treatment. A principal components analysis was conducted wherein a three-factor solution was found for the APRS. All subscales were found to be internally consistent, to possess adequate test-retest reliability, and to share variance with criterion measures of children's academic achievement, weekly classroom academic performance, and behavior. The total APRS score and all three subscales also were found to discriminate between children with and without classroom behavior problems according to teacher ratings.

The academic performance and adjustment of school-aged children have come under scrutiny over the past decade due to concerns about increasing rates of failure and poor standardized test scores (Children's Defense Fund, 1988; National Commission on Excellence in Education, 1983). Reports indicate that relatively large percentages of children (i.e., 20-30%) experience academic difficulties during their elementary school years (Glidewell & Swallow, 1969; Rubin & Balow, 1978), and these rates are even higher among students with disruptive behavior disorders (Cantwell & Satterfield, 1978; Kazdin, 1986). Further, the results of available longitudinal studies suggest that youngsters with disruptive behavior

disorders and concurrent academic performance difficulties are at higher risk for poor long-term outcome (e.g., Weiss & Hechtman, 1986). These findings have direct implications for the assessment of the classroom functioning of students with behavior disorders. Specifically, it has become increasingly important to screen for possible academic skills deficits in this population and monitor changes in academic performance associated with therapeutic interventions. Frequently, traditional measures of academic achievement (e.g., standardized psychoeducational batteries) are used as integral parts of the diagnostic process and for long-term assessment of academic success. Several

This project was supported in part by BRSG Grant S07 RR05712 awarded to the first author by the Biomedical Research Support Grant Program, Division of Research Resources, National Institutes of Health. A portion of these results was presented at the annual convention of the National Association of School Psychologists, April, 1990, in San Francisco, CA. The authors extend their appreciation to Craig Edelbrock and three anonymous reviewers for their helpful comments on an earlier draft of this article and to Russ Barkley, Terri Shelton, Kenneth Fletcher, Gary Stoner, and the teachers and principals of the Worcester, MA, Public Schools for their invaluable contributions to this study. Address all correspondence to George J. DuPaul, Department of Psychiatry, University of Massachusetts Medical Center, 55 Lake Avenue North, Worcester, MA 01655.

factors limit the usefulness of norm-referenced achievement tests for these purposes, such as (a) a failure to sample the curriculum in use adequately, (b) the use of a limited number of items to sample various skills, (c) the use of response formats that do not require the student to perform the behavior (e.g., writing) of interest, (d) an insensitivity to small changes in student performance, and (e) limited contribution to decisions about the "teachability" (e.g., ability to succeed in a regular education classroom) of students (Gerber & Semmel, 1984).

Given the limitations of traditional achievement tests, more direct measurement methods have been utilized to screen for academic skills deficits and monitor intervention effects (Shapiro, 1989; Shapiro & Kratochwill, 1988). Several methods are available to achieve these purposes, including curriculum-based measurement (Shinn, 1989), direct observations of classroom behavior (Shapiro & Kratochwill, 1988), and calculation of product completion and accuracy rates (Rapport, DuPaul, Stoner, & Jones, 1986). These behavioral assessment techniques involve direct sampling of academic behavior and have demonstrated sensitivity to the presence of skills deficits and to treatment-induced change in such performance (Shapiro, 1989).

In addition to these direct assessment methods, teacher judgments of students' achievement have been found to be quite accurate in identifying children in need of academic support services (Gresham, Reschly, & Carey, 1987; Hoge, 1983). For example, Gresham and colleagues (1987) collected brief ratings from teachers regarding the academic status of a large sample of schoolchildren. These ratings were highly accurate in classifying students as learning disabled or non-handicapped and were significantly correlated with student performance on two norm-referenced aptitude and achievement tests. In fact, teacher judgments were as accurate in discriminating between these two groups as the combination of the standardized tests.

Although teacher judgments may be subject to inherent biases (e.g., confirming previous classification decisions), they possess several advantages for both screening and identification purposes. Teachers are able to observe student performance on a more comprehensive sample of academic content than could be included on a standardized achievement test. Thus their judgments provide a more representative sample of the domain of interest in academic assessment (Gresham et al., 1987). Such judgments also provide unique data regarding programmatic interventions (Marston, 1989; Shapiro, 1989). Finally, obtaining teacher input about a student's academic performance can provide social validity data in support of classification and treatment-monitoring decisions. At the present time, however, teachers typically are not asked for this information in a systematic fashion, and when available, such input is considered to be highly suspect data (Gresham et al., 1987).

Teacher rating scales are important components of a multimodal assessment battery used in the evaluation of the diagnostic status and effects of treatment on children with disruptive behavior disorders (Barkley, 1988; Rapport, 1987). Given that functioning in a variety of behavioral domains (e.g., following rules, academic achievement) across divergent settings is often affected in children with such disorders, it is important to include information from multiple sources across home and school environments. Unfortunately, most of the available teacher rating scales specifically target the frequency of problem behaviors, with few, if any, items related directly to academic performance. Thus, the dearth of items targeting teacher judgments of academic performance is a major disadvantage of these measures when screening for skills deficits or monitoring of academic progress is a focus of the assessment.

To address the exclusivity of the focus on problem behaviors by most teacher questionnaires, a small number of rating scales have been developed in recent years that include items related to academic acquisition and classroom performance variables. Among these are the Children's

Behavior Rating Scale (Neeper & Lahey, 1986), Classroom Adjustment Rating Scale (Lorion, Cowen, & Caldwell, 1975), Health Resources Inventory (Gesten, 1976), the Social Skills Rating System (Gresham & Elliott, 1990), the Teacher-Child Rating Scale (Hightower et al., 1986), and the Walker-McConnell Scale of Social Competence and School Adjustment (Walker & McConnell, 1988). These scales have been developed primarily as screening and problem identification instruments, and all have demonstrated reliability and validity for these purposes. Although all of these questionnaires are psychometrically sound, each scale possesses one or more of the following characteristics that limit its utility for both screening and progress monitoring of academic skills deficits. These factors include (a) items worded at too general a level (e.g., "Produces work of acceptable quality given her/his skills level") to allow targeting of academic completion and accuracy rates across subject areas, (b) a failure to establish validity with respect to criterion-based measures of academic success, and (c) requirements for completion (e.g., large number of items) that detract from their appeal as instruments that may be used repeatedly or on a weekly basis for brief periods.

The need for a brief rating scale that could be used to identify the presence of academic skills deficits in students with disruptive behavior disorders and to monitor continuously changes in those skills associated with treatment was instrumental in the development of the Academic Performance Rating Scale (APRS). The APRS was designed to obtain teacher perceptions of specific aspects (e.g., completion and accuracy of work in various subject areas) of a student's academic achievement in the context of a multimodal evaluation paradigm which would include more direct assessment techniques (e.g., curriculum-based measurement, behavioral observations).
Before investigating the usefulness of this measure for the above purposes, its psychometric properties and technical adequacy must be established. Thus, this study describes the initial development of

the APRS and reports on its basic psychometric properties with respect to factor structure, internal consistency, test-retest reliability, and criterion-related validity. In addition, normative data by gender across elementary school grade levels were collected.

METHOD

Subjects

Subjects were children enrolled in the first through sixth grades from 45 public schools in Worcester, Massachusetts. This system is an urban, lower middle-class school district with a 28.5% minority (African-American, Asian-American, and Hispanic) population. Complete teacher ratings were obtained for 493 children (251 boys and 242 girls), and these were included in factor analytic and normative data analyses. Children ranged in age from 6 to 12 years (M = 8.9; SD = 1.8). A two-factor index of socioeconomic status (Hollingshead, 1975) was obtained with the relative percentages of subjects in each class as follows: I (upper), 12.3%; II (upper middle), 7.1%; III (middle), 45.5%; IV (lower middle), 26.3%; and V (lower), 8.8%.

A subsample of 50 children, 22 girls and 28 boys, was randomly selected from the above sample to participate in a study of the validity of the APRS. Children at all grade levels participated, with the relative distribution of subjects across grades as follows: first, 19%; second, 16%; third, 17%; fourth, 17%; fifth, 13.5%; and sixth, 17.5%. The relative distribution of subjects across socioeconomic strata was equivalent to that obtained in the original sample.

Measures

The primary classroom teacher of each participant completed two brief measures: the APRS and the Attention-deficit Hyperactivity Disorder (ADHD) Rating Scale (DuPaul, in press). In addition, teachers of the children participating in the validity study completed the Abbreviated Conners Teacher Rating Scale

(ACTRS; Goyette, Conners, & Ulrich, 1978).

APRS. The APRS is a 19-item scale that was developed to reflect teachers' perceptions of children's academic performance and abilities in classroom settings (see Appendix A). Thirty items were initially generated based on suggestions provided by several classroom teachers, school psychologists, and clinical child psychologists. Of the original 30 items, 19 were retained based on feedback from a separate group of classroom teachers, principals, and school and child psychologists regarding item content validity, clarity, and importance. The final version included items directed towards work performance in various subject areas (e.g., "Estimate the percentage of written math work completed relative to classmates"), academic success (e.g., "What is the quality of this child's reading skills?"), behavioral control in academic situations (e.g., "How often does the child begin written work prior to understanding the directions?"), and attention to assignments (e.g., "How often is the child able to pay attention without you prompting him/her?"). Two additional items were included to assess the frequency of staring episodes and social withdrawal. Although the latter are only tangentially related to the aforementioned constructs, they were included because "overfocused" attention (Kinsbourne & Swanson, 1979) and reduced social responding (Whalen, Henker, & Granger, 1989) are emergent symptoms associated with psychostimulant treatment. Teachers answered each item using a 1 (never or poor) to 5 (very often or excellent) Likert scale format. Seven APRS items (i.e., nos. 12, 13, and 15-19) were reverse-keyed in scoring so that a higher total score corresponded with a positive academic status.

ADHD Rating Scale. The ADHD Rating Scale consists of 14 items directly adapted from the ADHD symptom list in the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R; American Psychiatric Association, 1987).
Teachers indicated the frequency of each symptom on a 1 (not at all) to 4 (very much) Likert scale, with higher scores indicative of greater ADHD-related behavior. This scale has been found to have adequate internal consistency and test-retest reliability, and to correlate with criterion measures of classroom performance (DuPaul, in press).

ACTRS. The ACTRS (or Hyperactivity Index) is a 10-item rating scale designed to assess teacher perceptions of psychopathology (e.g., hyperactivity, poor conduct, inattention) and is a widely used index for identifying children at risk for ADHD and other disruptive behavior disorders. It has adequate psychometric properties and is highly sensitive to the effects of psychopharmacological interventions (Barkley, 1988; Rapport, in press).

Observational measures. Children participating in the validity study were observed unobtrusively in their regular classrooms by a research assistant who was blind to obtained teacher rating scale scores. Observations were conducted during a time when each child was completing independent seatwork (e.g., math worksheet, phonics workbook). Observations were conducted for 20 min with on-task behavior recorded for 60 consecutive intervals. Each interval was divided into 15 s of observation followed by 5 s for recording. A child's behavior was recorded as on- or off-task in the same manner as employed by Rapport and colleagues (1982). A child was considered off-task if (s)he exhibited visual nonattention to written work or the teacher for more than 2 consecutive seconds within each 15 s observation interval, unless the child was engaged in another task-appropriate behavior (e.g., sharpening a pencil). The observer was situated in a part of the classroom that avoided direct eye contact with the target child, but at a distance that allowed easy determination of on-task behavior. This measure was included as a partial index of academic engaged time, which has been shown to be significantly related to academic achievement (Rosenshine, 1981).
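The APRS scoring scheme described above (19 items on a 1-5 scale, with items 12, 13, and 15-19 reverse-keyed so that higher totals indicate more positive academic status) can be sketched as follows. This is a minimal illustration; the function name and sample ratings are ours, not part of the published scale.

```python
# Items reverse-keyed on the APRS, per the scale description (1-based numbering).
REVERSE_KEYED = {12, 13, 15, 16, 17, 18, 19}

def score_aprs(ratings):
    """ratings: dict mapping item number (1-19) to a 1-5 Likert rating."""
    total = 0
    for item, value in ratings.items():
        if not 1 <= value <= 5:
            raise ValueError(f"item {item}: rating {value} outside 1-5")
        # Reverse-keying on a 1-5 scale: 1<->5, 2<->4, 3 unchanged.
        total += (6 - value) if item in REVERSE_KEYED else value
    return total

# A child rated 5 on every item scores 5 on the 12 positively keyed items
# and 1 (after reversal) on the 7 reverse-keyed items.
print(score_aprs({i: 5 for i in range(1, 20)}))  # 12*5 + 7*1 = 67
```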


Academic efficiency score. Academic seatwork was assigned by each child's classroom teacher at a level consistent with the teacher's perceptions of the child's ability level, with the stipulation that the assignment be gradeable in terms of percentage completed and percentage accurate. Assignments were graded after the observation period by the research assistant and teacher, the latter of whom served as the reliability observer for academic measures. An academic efficiency score (AES) was calculated in a manner identical to that employed by Rapport and colleagues (1986), whereby the number of items completed correctly by the child was divided by the number of items assigned to the class and multiplied by 100. This statistic represents the mean weekly percentage of academic assignments completed correctly relative to classmates and was used as the classroom-based criterion measure of academic performance.
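The AES reduces to a single ratio; a minimal sketch (the function name and numbers are hypothetical, not the study's data):

```python
def academic_efficiency_score(items_correct, items_assigned_to_class):
    """AES as described by Rapport et al. (1986): percentage of the
    class assignment that the child completed correctly."""
    return 100.0 * items_correct / items_assigned_to_class

# Hypothetical seatwork: 20 problems assigned to the class,
# 14 completed correctly by the target child.
print(academic_efficiency_score(14, 20))  # 70.0
```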

Published norm-referenced achievement test scores. The results of school-based norm-referenced achievement tests (i.e., Comprehensive Test of Basic Skills; CTB/McGraw-Hill, 1982) were obtained from the school records of each student in the validity sample. These tests are administered routinely on a group basis in the fall or spring of each school year. National percentile scores from the most recent administration (i.e., within the past year) of this test were recorded for Mathematics, Reading, and Language Arts.

Procedure

Regular education teachers from 300 classrooms for grades 1 through 6 were asked to complete the APRS and ADHD rating scales with regard to the performance of two children in their class. Teachers from elementary schools in all parts of the city of Worcester participated (i.e., a return rate of 93.5%), resulting in a sample that included children from all socioeconomic strata. Teachers were instructed by one of the authors on which students to assess (i.e., one boy and girl randomly selected from the class roster), to complete APRS ratings according to each child's academic performance during the previous week, and that responses on the ADHD scale were to reflect the child's usual behavior over the year. Teacher ratings for the large sample (N = 487) were obtained within a 1-month period in the early spring, to ensure familiarity with the student's behavior. A subsample of 50 children was selected randomly from the larger sample and parent consent for participation in the validity study was procured. Teacher ratings for this subsample were obtained within a 3-month period in the late winter and early spring. Teacher ratings on the APRS were obtained for a randomly selected half of the sample participating in the validity study (n = 25) on a second occasion, 2 weeks after the original administration of this scale, to assess test-retest reliability. Ratings reflected children's academic performance over the previous week. The research assistant completed the behavioral observations and collected AES data on 3 separate days (i.e., a total of 60 min of observation) during the same week that APRS, ADHD, and ACTRS ratings were completed. Means (across the 3 observation days) for percentage on-task and AES scores were used in the data analyses.

Interobserver reliability. The research assistant was trained by the first author to an interobserver reliability of 90% or greater prior to conducting live observations, using videotapes of children completing independent work. Reliability coefficients for on-task percentage were calculated by dividing agreements by agreements plus disagreements and multiplying by 100%. Interobserver reliability also was assessed weekly throughout the data collection phase of the study using videotapes of 10 individual children (who were participants in the validity study) completing academic work during one of the observation sessions. Interobserver reliability was consistently above 80% with a mean of 90% for all children. A mean Kappa coefficient (Cohen, 1960) of .74 was obtained for all observations to indicate reliability beyond chance levels. Following each observation period, the teacher and assistant independently calculated the amount of work completed by the student relative to classmates and the percentage of items completed correctly. Interrater reliability for these measures was consistently above 96% with a mean reliability of 99%.
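The two agreement indices used for the observational data, percent agreement (agreements divided by agreements plus disagreements, times 100) and Cohen's (1960) kappa, can be sketched as follows. The interval records below are invented for illustration.

```python
def percent_agreement(obs1, obs2):
    """Interval-by-interval percent agreement between two observers."""
    agree = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agree / len(obs1)

def cohens_kappa(obs1, obs2):
    """Cohen's (1960) kappa: agreement corrected for chance."""
    n = len(obs1)
    po = sum(a == b for a, b in zip(obs1, obs2)) / n  # observed agreement
    categories = set(obs1) | set(obs2)
    # Chance agreement from each observer's marginal category rates.
    pe = sum((obs1.count(c) / n) * (obs2.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical 60-interval records (15 s observe / 5 s record, as above).
rater1 = ["on"] * 50 + ["off"] * 10
rater2 = ["on"] * 48 + ["off"] * 12
print(round(percent_agreement(rater1, rater2), 1))  # 96.7
print(round(cohens_kappa(rater1, rater2), 2))
```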

RESULTS

Several analyses will be presented to explicate the psychometric properties of the APRS. First, the factor structure of this instrument was determined to aid in the construction of subscales. Second, the internal consistency and stability of APRS scores were examined. Next, gender and grade comparisons were conducted to identify the effects these variables may have on APRS ratings as well as to provide normative data. Finally, the concurrent validity of the APRS was evaluated by calculating correlation coefficients between rating scale scores and the criterion measures.

Factor Structure of the APRS

The APRS was factor analyzed using a principal components analysis followed by a normalized varimax rotation with iterations (Bernstein, 1988). As shown in Table 1, three components with eigenvalues greater than unity were extracted, accounting for approximately 68% of the variance: Academic Success (7 items), Impulse Control (3 items), and Academic Productivity (12 items). The factor structure replicated across halved random subsamples (i.e., n = 242 and 246, respectively). Congruence coefficients (Harman, 1976) between similar components ranged from .84 to .98 with a mean of .92, indicating a high degree of similarity in factor structure across subsamples. Items with loadings of .60 or greater on a specific component were retained to keep the number of complex items (i.e., those with significant loadings on more than one factor) to a minimum. In subsequent analyses, factor (subscale) scores were calculated in an unweighted fashion with complex items included on more than one subscale (e.g., items 3-6 included on both the Academic Success and Academic Productivity subscales).

Given that the APRS was designed to evaluate the unitary construct of academic performance, it was expected that the derived factors would be highly correlated. This hypothesis was confirmed as the intercorrelations among Academic Success and Impulse Control, Academic Success and Academic Productivity, and Impulse Control and Academic Productivity were .69, .88, and .63, respectively. Despite the high degree of overlap between the Academic Success and Productivity components (i.e., items reflecting accuracy and consistency of work correlated with both), examination of the factor loadings revealed some important differences (see Table 1). Specifically, the Academic Success factor appears related to classroom performance outcomes, such as the quality of a child's academic achievement, ability to learn material quickly, and recall skills. Alternatively, the Academic Productivity factor is associated with behaviors that are important in the process of achieving classroom success, including completion of work, following instructions accurately, and ability to work independently in a timely fashion.

Internal Consistency and Reliability of the APRS

Coefficient alphas were calculated to determine the internal consistency of the APRS and its subscales. The results of these analyses demonstrated adequate internal consistencies for the Total APRS (.96), as well as for the Academic Success (.94) and Academic Productivity (.94) subscales. The internal consistency of the Impulse Control subscale was weaker (.72). Subsequently, the total sample was randomly subdivided (i.e., n = 242 and 246, respectively) into two independent subsamples. Coefficient alphas were calculated for all APRS scores within each subsample with results nearly identical to those obtained above.

Test-retest reliability data were obtained for a subsample of 26 children

TABLE 1
Factor Structure of the Academic Performance Rating Scale

Scale items, with loadings on the Academic Success, Impulse Control, and Academic Productivity components (individual loadings are illegible in this copy): 1. Math work completed; 2. Language Arts completed; 3. Math work accuracy; 4. Language Arts accuracy; 5. Consistency of accuracy of work; 6. Follows group instructions; 7. Follows small-group instructions; 8. Learns material quickly; 9. Neatness of handwriting; 10. Quality of reading; 11. Quality of speaking; 12. Careless work completion; 13. Time to complete work; 14. Attention without prompts; 15. Requires assistance; 16. Begins work carelessly; 17. Recall difficulties; 18. Stares excessively; 19. Social withdrawal.

Note: Underlined values indicate items included in the factor named in the column head.

(with both genders and all grades represented) across a 2-week interval as described previously. The reliability coefficients were uniformly high for the Total APRS Score (.95), and Academic Success (.91), Impulse Control (.88), and Academic Productivity (.93) subscales. Since rating scale scores can sometimes "improve" simply as a function of repeated administrations (Barkley, 1988), the two mean scores for each scale were compared using separate t-tests for correlated measures. Scores for each APRS scale were found to be equivalent across administrations with t-test results as follows: Total APRS Score (t(24) = 1.24, N.S.), Academic Success (t(24) = 1.31, N.S.), Academic Productivity (t(24) = 1.32, N.S.), and Impulse Control (t(24) = .15, N.S.).
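Coefficient alpha, the internal-consistency index reported above, can be computed from a respondents-by-items score matrix as k/(k-1) times one minus the ratio of summed item variances to total-score variance. A minimal sketch with invented ratings (not the study's data):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(item_scores):
    """item_scores: list of per-respondent lists of item ratings."""
    k = len(item_scores[0])
    items = list(zip(*item_scores))  # transpose to per-item columns
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 1-5 ratings for six children on a three-item subscale.
ratings = [
    [5, 4, 5],
    [3, 3, 4],
    [2, 2, 2],
    [4, 4, 5],
    [1, 2, 1],
    [3, 3, 3],
]
print(round(cronbach_alpha(ratings), 2))  # 0.96
```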

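The extraction procedure reported above (principal components of the item correlation matrix, retention of components with eigenvalues greater than unity, then varimax rotation) can be sketched as follows. The varimax routine is a standard textbook implementation, and the data are random placeholders rather than APRS ratings; with the actual data, X would be the 493 x 19 matrix of item scores.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Standard varimax rotation of an (items x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion (column-variance of squared loadings).
        tmp = rotated ** 3 - rotated * (rotated ** 2).sum(axis=0) / p
        u, s, vt = np.linalg.svd(loadings.T @ tmp)
        rotation = u @ vt
        new_var = s.sum()
        if new_var < var * (1 + tol):
            break
        var = new_var
    return loadings @ rotation

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 19))           # placeholder ratings, not APRS data
R = np.corrcoef(X, rowvar=False)         # 19 x 19 item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
keep = eigvals > 1.0                     # eigenvalue-greater-than-unity rule
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
rotated = varimax(loadings)
print(rotated.shape)                     # (19, number of retained components)
```

Because varimax is an orthogonal rotation, each item's communality (row sum of squared loadings) is unchanged by the rotation, which is a useful sanity check on the implementation.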

Gender and Grade Comparisons

Teacher ratings on the APRS were broken down by gender and grade level to (a) assess the effects of these variables on APRS ratings and (b) provide normative comparison data. The means and standard deviations across grade levels for APRS total and subscale scores are presented for girls and boys in Table 2. A 2 (Gender) x 6 (Grade) multivariate analysis of variance (MANOVA) was conducted employing APRS scores as the dependent variables. Significant multivariate effects were obtained for the main effect of Gender (Wilk's Lambda = .95; F(4, 472) = 6.20, p < .001) and the interaction between Gender and Grade (Wilk's Lambda = .93; F(20, 1566) = 1.61, p < .05). Separate 2 x 6 univariate analyses of

TABLE 2
Means and Standard Deviations for the APRS by Grade and Gender

                      Total Score     Academic Success   Impulse Control   Academic Productivity
Grade 1 (n = 82)
  Girls (n = 40)      67.02 (16.27)   23.92 (7.37)        9.76 (2.49)      44.68 (10.91)
  Boys (n = 42)       71.95 (16.09)   26.86 (6.18)       10.67 (2.82)      46.48 (11.24)
Grade 2 (n = 91)
  Girls (n = 46)      72.56 (12.33)   26.61 (5.55)       10.15 (2.70)      47.85 ( 7.82)
  Boys (n = 45)       67.84 (14.86)   25.24 (6.15)        9.56 (2.72)      44.30 (10.76)
Grade 3 (n = 92)
  Girls (n = 43)      72.10 (14.43)   25.07 (6.07)       10.86 (2.65)      47.88 ( 9.35)
  Boys (n = 49)       68.49 (16.96)   25.26 (6.53)        9.27 (2.67)      45.61 (11.89)
Grade 4 (n = 79)
  Girls (n = 38)      67.79 (18.69)   24.08 (7.56)       10.36 (2.91)      44.26
  Boys (n = 41)       69.77 (15.83)   25.35 (6.50)        9.83 (2.77)      45.71
Grade 5 (n = 79)
  Girls (n = 44)      73.02 (14.10)   26.11 (6.01)       10.76 (2.34)      48.36
  Boys (n = 35)       63.68 (18.04)   23.14 (7.31)        8.69 (2.82)      42.40 (12.47)
Grade 6 (n = 70)
  Girls (n = 31)      74.10 (14.45)   26.59 (6.26)       10.79 (2.25)      48.77 ( 9.13)
  Boys (n = 39)       65.24 (12.39)   23.75 (5.90)        9.05 (2.35)      43.59 ( 8.19)

Note: Standard deviations are in parentheses.

variance (ANOVAs) were conducted subsequently for each of the APRS scores to determine the source of obtained multivariate effects. A main effect for Gender was obtained for the APRS Total score (F(1, 476) = 6.37, p < .05), Impulse Control (F(1, 475) = 16.79, p < .001), and Academic Productivity (F(1, 475) = 6.95, p < .05) subscale scores. For each of these scores, girls obtained higher ratings than boys, indicating greater teacher-rated academic productivity and behavioral functioning among girls. No main effect for Gender was obtained on Academic Success subscale scores. Finally, a significant interaction between Gender and Grade was obtained for the APRS Total score (F(5, 476) = 2.68, p < .05), Academic Success (F(5, 475) = 2.63, p < .05), and Impulse Control (F(5, 475) = 3.59, p < .01) subscale scores. All other main and interaction effects were nonsignificant. Simple effects tests were conducted

to elucidate Gender effects within each Grade level for those variables where a significant interaction was obtained. Relatively similar results were obtained across APRS scores. Gender effects were found only within grades 5 (F(1, 475) = 7.02, p < .01) and 6 (F(1, 475) = 6.61, p < .05) for the APRS total score. Alternatively, gender differences on the Academic Success subscale were obtained solely within grades 1 (F(1, 475) = 4.24, p < .05) and 5 (F(1, 475) = 4.14, p < .05). These results indicate that girls in the first and fifth grades were rated as more academically competent than boys. Significant differences between boys and girls in Impulse Control scores were also found within grades 3 (F(1, 475) = 8.73, p < .01), 5 (F(1, 475) = 12.24, p < .001), and 6 (F(1, 475) = 8.06, p < .01) with girls judged to exhibit greater behavioral control in these three grades. All other simple effects tests were nonsignificant.
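A simple-effects contrast of girls versus boys within a single grade is a one-way, two-group F-test (the square of the corresponding t statistic). A minimal sketch; the scores and function name are invented for illustration and are not the study's data:

```python
from statistics import mean, variance

def two_group_f(a, b):
    """One-way F statistic for two independent groups (df = 1, na + nb - 2)."""
    na, nb = len(a), len(b)
    grand = mean(a + b)
    # Between-groups mean square (df_between = 2 - 1 = 1).
    ms_between = na * (mean(a) - grand) ** 2 + nb * (mean(b) - grand) ** 2
    # Within-groups (error) mean square, pooled across the two groups.
    ms_within = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return ms_between / ms_within

# Hypothetical APRS-like total scores for girls and boys in one grade.
girls = [74, 70, 77, 69, 80, 73, 75]
boys = [66, 61, 70, 64, 59, 68, 63]
print(round(two_group_f(girls, boys), 2))
```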

TABLE 3
Correlations Between APRS Scores and Criterion Measures

Measures              Total Score   Academic Success   Impulse Control   Academic Productivity
ACTRS(a)              -.60***(b)    -.43***            -.49***           -.64***
ADHD Ratings          -.72***       -.59***            -.61***           -.72***
On Task Percentage     .29*          .22                .24               .31*
AES(c)                 .53***        .26                .41**             .57***
CTBS Math              .48***        .62***             .28               .39**
CTBS Reading           .53***        .62***             .34*              .44**
CTBS Language          .53***        .61***             .41**             .45**

(a) Abbreviated Conners Teacher Rating Scale.
(b) Correlations are based on N = 50 with degrees of freedom
(c) Academic Efficiency Score.
**p