2006 — 2012 |
Hill, Heather; Schilling, Stephen |
Measuring Mathematical Knowledge For Teaching Using Computerized Adaptive Testing @ University of Michigan Ann Arbor
This proposal draws on efforts, begun in 2000 by researchers at the University of Michigan, to develop measures of elementary and middle school mathematical knowledge for teaching; this work comprises the bulk of the NSF-funded Learning Mathematics for Teaching (LMT) project. In contrast to conventional assessments of mathematical knowledge (e.g., the SAT or Woodcock-Johnson assessment), these measures investigate the special mathematical knowledge teachers use to work in classrooms with students. One key outcome of this work, from the point of view of the NSF Math and Science Partnerships (MSPs), is the flexibility afforded by a library of more than 300 items ranging across the content areas of number concepts, operations, patterns, functions and algebra, and geometry, allowing project directors and evaluators to tailor assessment instruments to their specific needs and to the content and expected effects of professional development efforts.
This proposal seeks to fully exploit the extensive item library developed as part of the LMT project, and the psychometric information gathered for it using item response theory (IRT), through the creation of a web-based computerized adaptive testing (CAT) assessment. Computerized adaptive testing dynamically assesses subject performance in a particular domain by sequentially selecting items from the library in order to maximize the precision of measurement. After a subject has responded to a selected item, her/his scale score is updated, and a new item is chosen to match the updated scale score estimate. This process iterates until a specified level of precision is reached.
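The CAT loop described above (pick the most informative unused item, record the response, update the ability estimate, stop once precision is adequate) can be sketched in a few dozen lines. Everything below is illustrative only: the 2PL response model, the item parameters, the grid-based maximum-likelihood estimator, and the stopping rule are assumptions for the sketch, not the LMT system's actual implementation.

```python
import math
import random

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response
    for ability theta, discrimination a, difficulty b (assumed model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses):
    """Maximum-likelihood theta over a coarse bounded grid (simple, robust)."""
    grid = [g / 10.0 for g in range(-40, 41)]  # -4.0 .. 4.0
    def loglik(t):
        ll = 0.0
        for (a, b), u in responses:
            p = p_correct(t, a, b)
            ll += math.log(p) if u else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

def cat_session(bank, answer, se_target=0.40, max_items=20):
    """Administer items adaptively until the standard error of the
    ability estimate drops below se_target (or the pool runs out)."""
    remaining = list(bank)
    responses = []
    theta = 0.0
    while remaining and len(responses) < max_items:
        # choose the unused item with maximum information at current theta
        item = max(remaining, key=lambda ab: item_information(theta, *ab))
        remaining.remove(item)
        responses.append((item, answer(item)))  # 1 = correct, 0 = incorrect
        theta = estimate_theta(responses)
        info = sum(item_information(theta, a, b) for (a, b), _ in responses)
        if info > 0 and 1.0 / math.sqrt(info) < se_target:
            break
    return theta, len(responses)

# Simulated subject with true ability 1.0 on a hypothetical 17-item pool.
random.seed(0)
bank = [(1.2, b / 4.0) for b in range(-8, 9)]
answer = lambda item: int(random.random() < p_correct(1.0, *item))
theta_hat, n_used = cat_session(bank, answer)
```

A production system would use a calibrated item bank, a more refined estimator (e.g., EAP), and exposure-control rules, but the select-respond-update-stop cycle is the same.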
The additional precision, and the reduction in testing time and effort, afforded by computerized adaptive measures of teacher knowledge should enhance the ability of MSP project directors and evaluators to judge the efficacy of professional development aimed at improving teachers' content knowledge for teaching, and to estimate the effects of curriculum materials designed to improve teachers' knowledge of mathematics and students. These CAT measures will be accessible to users with limited technical expertise, comparable across a wide variety of programs and approaches to professional development, and built on current, technically up-to-date approaches to CAT assessment.
2006 — 2011 |
Hill, Heather; Ball, Deborah (co-PI); Phelps, Geoffrey (co-PI); Schilling, Stephen; Blunk, Merrie |
Measuring Mathematical Knowledge For Teaching: Evaluation Module Development and Dissemination @ University of Michigan Ann Arbor
Intellectual Merit:
This proposal draws on existing efforts to develop measures of elementary and middle school mathematical knowledge for teaching. Different from conventional assessments of mathematical knowledge (e.g., the SAT or Woodcock-Johnson assessment), these measures investigate the special mathematical knowledge teachers use to work in classrooms with students. For instance, these measures assess not only whether teachers know that 1 3/4 ÷ 1/2 = 3 1/2, but also whether they can explain what this equation means, can construct a concrete representation or word problem which corresponds to it, and can understand and generalize from an unusual solution method for this problem. This proposal responds to a number of challenges that have arisen while disseminating these measures to researchers.
Currently, the Learning Mathematics for Teaching project (LMT) disseminates pre-piloted forms and technical information for evaluating teacher learning in K-8 number and operations, pre-algebra and algebra, and geometry. Teacher learning opportunities, however, frequently focus on more narrowly defined topic areas, for instance rational number concepts and operations. Yet items capturing teacher knowledge of rational number make up only one component of LMT's number concepts and operations measures. Because the current measures include other topics as well, they are unlikely to be appropriately sensitive to the content and effects of such professional development.
In response, this proposal argues for the construction of evaluation modules designed to be sensitive to, and allow research and evaluation in, the following areas: rational numbers; proportional reasoning; geometry; and data analysis and probability. In all topic areas but the last, for which LMT will create an entirely new set of items, the project will review and supplement existing items, create a form composed of items that capture key teacher competencies in these areas, pilot this form, and then compose technical materials that support the modules' use. These technical materials will include two parallel forms and pre-equated raw-score-to-standardized-score conversion tables to facilitate straightforward scoring.
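A pre-equated raw-score-to-standardized-score conversion table amounts to a simple lookup at scoring time: sum the correct responses, then read off the IRT-based scale score. The table below is invented for illustration; actual values would come from the project's equating work.

```python
# Hypothetical pre-equated conversion table for a 10-item module:
# raw score -> standardized (IRT-based) scale score. Invented numbers.
CONVERSION_FORM_A = {
    0: -2.6, 1: -1.9, 2: -1.4, 3: -0.9, 4: -0.5,
    5: -0.1, 6: 0.3, 7: 0.8, 8: 1.3, 9: 1.9, 10: 2.7,
}

def standardized_score(item_responses, table=CONVERSION_FORM_A):
    """Sum correct responses (1/0) and look up the pre-equated scale score."""
    raw = sum(item_responses)
    return table[raw]

# A teacher answering 7 of 10 items correctly maps to 0.8 on this scale.
score = standardized_score([1] * 7 + [0] * 3)
```

This is exactly what makes such tables friendly to evaluators without psychometric software: no estimation is done locally, only a lookup against a table the developers have already calibrated.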
This proposal also describes plans to improve the quality of dissemination training by exploring the nature of the problems that arise when outside evaluators use these measures, and describes work to improve measures of teachers' knowledge of student learning around content.
This proposal grows from a well-established measures development project. At every step in this project's history, the provision of high-quality measures has coincided with, and in some ways depended upon, progress in building theory about content knowledge for teaching mathematics and exploring new areas in measurement design and scaling. This will be the case in the work proposed here as well.
Broader Impact
This proposal enhances the infrastructure for research on teacher learning. These measures can be used to evaluate teachers' learning in pre-service teacher education programs; to make comparisons between preparation in traditional teacher education programs and alternative routes to certification; to judge the efficacy of professional development aimed at improving teachers' content knowledge for teaching; and to estimate the effects of curriculum materials designed to improve teachers' knowledge of mathematics and students. These measures are accessible to users with limited technical expertise, comparable across programs and approaches to professional development, and of high technical quality.
2012 — 2016 |
Atchade, Yves; Henson, Robert; Schilling, Stephen |
Faster Mixing Markov Chain Monte Carlo For Multidimensional Irt and Cognitive Diagnosis Models @ University of Michigan Ann Arbor
Fitting complex item-response theory (IRT) models has become increasingly important in educational assessment, psychological assessment, and patient-reported health assessment. The use of Markov chain Monte Carlo (MCMC) methods for fitting these models has surged in recent years, particularly for multidimensional IRT and cognitive diagnosis models, where standard methods such as maximum likelihood estimation are difficult to apply. Unfortunately, slow mixing of the Markov chains is often a severe problem, limiting the ability of researchers to draw valid inferences from the MCMC output. This project will develop fast-mixing MCMC algorithms for fitting complex IRT models, including multidimensional IRT models and cognitive diagnosis/diagnostic classification models. Specifically, the improved algorithms will include rescaling and re-centering approaches such as Meng and van Dyk's conditional and marginal augmentation, methods using gradient information, and adaptive MCMC methods. The project will extend the analytical reach of advanced MCMC methods in the psychometric literature, focus the attention of the psychometric community on MCMC mixing, and provide the basis for effective MCMC implementation in complex models.
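As one concrete illustration of the adaptive-MCMC idea mentioned above (a toy sketch, not the project's algorithms), the sampler below is a random-walk Metropolis chain for a single ability parameter under a 2PL likelihood, with its proposal scale tuned Robbins-Monro style toward a target acceptance rate. The item data and tuning constants are invented for the example.

```python
import math
import random

def log_post(theta, responses):
    """2PL log-likelihood plus a standard-normal prior on ability theta."""
    lp = -0.5 * theta * theta
    for (a, b), u in responses:
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        lp += math.log(p) if u else math.log(1.0 - p)
    return lp

def adaptive_metropolis(responses, n_iter=5000, target=0.44):
    """Random-walk Metropolis whose proposal scale adapts toward a
    target acceptance rate (~0.44 is the classic 1-D rule of thumb)."""
    theta, log_scale, draws = 0.0, 0.0, []
    cur = log_post(theta, responses)
    for i in range(1, n_iter + 1):
        prop = theta + math.exp(log_scale) * random.gauss(0.0, 1.0)
        new = log_post(prop, responses)
        accepted = random.random() < math.exp(min(0.0, new - cur))
        if accepted:
            theta, cur = prop, new
        # Robbins-Monro step: shrink adaptation as i grows so the
        # proposal scale settles down and the chain stabilizes.
        log_scale += (i ** -0.6) * ((1.0 if accepted else 0.0) - target)
        draws.append(theta)
    return draws

# Hypothetical response pattern: correct on easier items, wrong on harder.
random.seed(1)
data = [((1.0, -0.5), 1), ((1.5, 0.0), 1), ((0.8, 0.5), 0), ((1.2, 1.0), 0)]
draws = adaptive_metropolis(data)
posterior_mean = sum(draws[1000:]) / 4000.0
```

For a one-dimensional posterior, adaptation mostly buys convenience; the mixing pathologies the project targets arise in high-dimensional multidimensional-IRT posteriors, where re-centering and re-scaling of the augmented data become essential.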
Efficient and accurate fitting of IRT models is of profound importance. The high-stakes nature of most educational assessments demands valid estimation procedures. Therefore, slow mixing, leading to either slow or non-convergence of MCMC estimation procedures, can profoundly affect inference, leading to incorrect ranking of individuals. The methods developed by this project will address a critical problem in applications of MCMC in the psychometric literature. The project also will develop free, open-source software with which to implement the new methods.
2016 — 2019 |
McGinn, Daniel; Hill, Heather; Herlihy, Corinne; Schilling, Stephen; Blunk, Merrie |
The Mathematical Knowledge For Teaching Measures: Refreshing the Item Pool
This project proposes an assessment study that focuses on improving existing measures of teachers' Mathematical Knowledge for Teaching (MKT). The research team will update existing measures, adding new items and aligning the instrument to new standards in school mathematics. In addition, the team will update the delivery system for the assessment to Qualtrics, a more flexible online system.
The research team will build an updated measure of teachers' Mathematical Knowledge for Teaching (MKT). Project researchers will conduct item writing camps, develop new items, cognitively pilot and revise items, and factor analyze items. The researchers will also determine item constructs and calibrate items (and constructs) through an innovative application of Item Response Theory (IRT) employing a variant of the standard 2-parameter IRT model. Finally, the team will oversee the transition of the Teacher Knowledge Assessment System to the Qualtrics data collection environment to allow for more flexible item specification.
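For reference, the standard 2-parameter IRT model that the abstract's calibration approach builds a variant of (the variant itself is not specified) expresses the probability of a correct response as a logistic function of the gap between ability and item difficulty:

```python
import math

def two_pl(theta, a, b):
    """Standard 2PL item response function:
    P(correct) = 1 / (1 + exp(-a * (theta - b))),
    where a is item discrimination, b is item difficulty, and
    theta is the teacher's ability on the MKT scale."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a;
# a controls how steeply the curve rises around that point.
midpoint = two_pl(0.5, 1.3, 0.5)
```

Calibration fits the a and b parameters for each item from piloted response data, which is what lets new items be placed on the same scale as the existing MKT item pool.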
The Discovery Research PreK-12 program (DRK-12) seeks to significantly enhance the learning and teaching of science, technology, engineering and mathematics (STEM) by preK-12 students and teachers, through research and development of innovative resources, models and tools (RMTs). Projects in the DRK-12 program build on fundamental research in STEM education and prior research and development efforts that provide theoretical and empirical justification for proposed projects.