How Do You Do Diagnostic (Formative) Assessment? – A Number Concept Profile
John Gough – [email protected]

Contextual Preamble
This discussion is based on decades of reflection and extensive publishing on the topic of diagnostic assessment in mathematics education. The suggested readings, at the end, indicate the range of my work on this topic.
I am writing about my experiences of Primary (elementary) school mathematics in Australia, augmented by my decades of university teaching when I trained and supervised student-teachers (at Deakin University, and other tertiary institutions).
My references to the Australian school mathematics curriculum are not intended to be prescriptive or limiting. The Australian focus is simply ONE example of many broadly similar versions of school mathematics in the developed and developing world.

Your D.I.Y. "pre-test": a self-assessment of personal assessment practices
What is your classroom assessment like?
Post-testing whatever topic you taught?
Pre-testing an intended topic to screen for students' entry-levels of existing knowledge?
Testing half-year or end-of-year achievement?
Rubric-structured criteria negotiated between teacher and class?
Student self-assessment or peer-assessment?
One-to-one teacher-student interviews?
Using open-ended questions, or rich assessment tasks?
Open-ended project activities?
Discussing with students what they already know, or have learned about a topic?
Or some combination of these?
Or do you assess in a different way?

Brainstorming and KWL – Are These Adequate Pre-"tests"?
Many teachers start a lesson or a new topic by using a "brainstorm". This is an open-ended, teacher-moderated class discussion that explores what students (some students) already know about a topic, and their interests in the topic. Often the teacher uses a display board to jot down, perhaps in a loosely organised way, ideas and examples suggested by the students. Later, as the lesson or topic progresses, these jotted notes can remain on display, perhaps with further annotations (ticks or checkmarks, say, indicating that this has now been covered), serving as a sketch map of the concepts and skills being reviewed and extended.
But, however popular and common it may be, a "brainstorm" is NOT an effective informal alternative to a formal pre-test. Nor is this whole-class discussion diagnostic in the way that a one-to-one interview can be. Students who speak confidently, and possibly correctly, about a topic are unlikely to be typical. Other students who speak incorrectly or irrelevantly reveal what they do NOT know, have misunderstood, or have only partly learned. Silent students disclose NOTHING. An absence of evidence is ambiguous and ominous!
The results of a "brainstorm" – such as the jotted notes on a whiteboard, or the dot-point listings in something called a "KWL" chart (the letters stand for column headings: what students already KNOW, what they say they might WANT to investigate, and what they suggest will show what has later been LEARNED) – may seem to indicate starting points for follow-on instruction and activities that build on the notes from the "brainstorm".
But did the "brainstorm" reveal only the advanced ideas of the confident vocal high-achievers?
Will a follow-on lesson benefit others who were incorrect or irrelevant?
How can the teacher know that follow-on lessons will connect, positively or negatively, with the silent students?
There are better ways of assessing at the starting point of a lesson or topic.
Traditional classroom assessment focuses on a topic or year-level
When I was a Primary school student at the end of the 1950s, and a Secondary student early in the 1960s, assessment was essentially end-of-term examination of current achievement within a particular year-level. That was before the idea of catering for individual differences radically altered teachers' expectations. (These days we talk about teaching a "differentiated curriculum", but the basic idea is the same.)
Traditionally, when teachers talked about "assessment" what they usually had in mind was end-of-topic or end-of-year testing. Less formal assessment also included small weekly tests of multiplication tables, number facts and mental arithmetic. But the marks from these quizzes (as they are known in North America) usually did not contribute to the final percentage score reported for each student. Hence these weekly quizzes were no more than well-intentioned attempts to "giddy-up" students, encouraging them to memorise and practise rapid multiplication table-facts recall and related mental arithmetic.
Moreover, the curriculum that would be taught, and then assessed, was whatever was judged to be, or officially prescribed to be, the curriculum for that year-level. Education authorities knew best what ought to be taught, for example, to students about 8 or 9 years old in Year 3. In Victoria, in the late 1950s, the curriculum was enshrined in the official Victorian Arithmetic textbooks for Grades III to VIII. (Making allowances for pre-metric measurement and money, these old textbooks were, and arguably still are, surprisingly effective, at least in my opinion. But perhaps that is another story …)
Currently, in Australia, we still see this kind of approach, assessing only one year's worth of curriculum at a time, underlying NAPLAN assessment (National Assessment Program – Literacy and Numeracy) at specified year-levels. For example, Year 3 students are not asked to attempt Year 4, 5 or 6 questions in the Year 3 NAPLAN test. Nor are Year 3 students asked Year 1 or 2 questions.
But why not? We might learn something if we tried!
Assessing across year-levels
What is current practice? In Victoria, for example, for more than 10 years, teachers in Junior Primary have been required to start the school year using the one-to-one interactive Early Years Numeracy Interview (EYNI).
Crucially, EYNI contains questions that are graded in difficulty, ranging from the very first year of Primary up to Year 3 – a span of four school-years. Regardless of the individual student's year-placement, each student is given the open-ended opportunity to attempt to answer questions as far as possible through the graded sequences of questions that cover mathematics curriculum topics such as Number Concepts and Numeration, Numerical Calculation, Measurement, and Spatial Knowledge and Geometry.
Importantly, because EYNI is an interview, it is conducted by one teacher face-to-face with one student. Because it is presented as an interview, for students who have not yet learned to read, or who are struggling with reading, the questions are asked orally, often using physical materials or diagrams to show what is being asked, and students can answer orally.
Theoretically there is no reason these Early Years students should not be given an opportunity to attempt questions for higher year-levels. But the assessment approach of the EYNI has not yet, as far as I know, been extended to Middle Years or higher year-levels in any official or comprehensive way.
Once teachers have completed the EYNI in the first days of a school year, teachers can draw directly on the results of this entry-level screening to individualise what is then taught to the students in a diagnostic needs-based way. Naturally, because of other practical constraints, some teachers are not able to use this assessment information in individualised, or small-group ways, catering for specific needs. However, many teachers successfully do this.
Other states may assess in this way, but in my experience assessment is usually limited to exploring a topic within one year-level.
Remembering KeyMath
The remarkable testing interview KeyMath (Connolly, Nachtman and Pritchett, 1971) was a pioneering American publication, with American norms, based (of course) on the American curriculum (including non-metric units of measurement, and American coins and notes). KeyMath broke the elementary (Primary) mathematics curriculum into sub-curricula, such as Measurement, Numbers and Numeration, Logical Reasoning, and Arithmetic Operations (both Mental Arithmetic, and pencil-and-paper calculations).
Each sub-curriculum was assessed using a sequence of questions, graded in difficulty and normed for year-levels, starting at Kindergarten (the first year of elementary schooling in America – for students aged about 5 years old) and proceeding up to about Year 8 (for students aged about 13 or 14 years). These KeyMath questions were administered, one-to-one, by the teacher-interviewer progressively showing, and where appropriate reading aloud, each successive question, and recording the individual student-interviewee's answer. This continued through the questions of the sub-curriculum until the student either could not answer, or incorrectly answered, three successive questions (or some similar number of questions). No further questions were asked in that sub-curriculum as, according to the Assessor's Manual for KeyMath, these three successive omissions, refusals or errors indicated the upper limit or ceiling the student could manage within that sub-curriculum.
The central aim of the KeyMath interview was to establish, for each individual student assessed, the detailed ceiling scores in each sub-curriculum. This might, for example, reveal that a particular student in an ordinary Year 4 class was working successfully up to a year- or grade-level of 5 in Numbers and Numeration, and at a grade-level of, perhaps, 4 in Addition, and 3 in Subtraction, but was struggling at lower grade-levels with Multiplication and Division. The detailed exploration of these sub-curricula, across year-levels, provided information for the classroom teacher that indicated WHAT the student already KNEW, and WHERE fresh instruction needed to begin. This was clearly constructive and diagnostic.
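As a purely illustrative sketch (this is not taken from the KeyMath Assessor's Manual, and the three-error threshold is only the approximate figure mentioned above), the stopping rule might be expressed in code roughly as follows, assuming a student's responses to one graded sub-curriculum have been recorded, in order, as correct or incorrect:

    def find_ceiling(responses, max_errors=3):
        # responses: a list of True/False values, one per question,
        # in order of increasing difficulty.
        consecutive_errors = 0
        ceiling = 0  # number of the last question answered correctly
        for index, correct in enumerate(responses, start=1):
            if correct:
                consecutive_errors = 0
                ceiling = index
            else:
                consecutive_errors += 1
                if consecutive_errors >= max_errors:
                    break  # the student has reached their ceiling
        return ceiling

    # Example: correct on the first five questions, one slip, one recovery,
    # then three successive errors - the ceiling is question 7.
    print(find_ceiling([True] * 5 + [False, True, False, False, False]))

In the interview itself, of course, the teacher simply stops asking questions in that sub-curriculum at this point and records the level reached.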
Importantly, unlike most traditional methods of assessing school mathematics, KeyMath used the classic assessment or measurement method of intelligence tests – administering a sequence of graded questions or tasks until a test-subject could no longer answer correctly.
Interestingly, KeyMath was reissued, with substantial revisions, as recently as 2007.
Unfortunately, perhaps, it remains resolutely American. It is also expensive to purchase, and it needs a lot of time. That is inevitable with one-to-one interviews spanning a large and complex school curriculum.
But I will argue that anything less than such an approach is inadequate as a way of diagnostically assessing what EACH student already KNOWS about a topic, or sub-curriculum, and hence identifying the STARTING POINT for each individual student for further learning of this topic.
Good teaching, quite simply, is not easy or quick!

Understanding formative assessment and alternatives
Using assessment to shape (that is, to form, or to inform) follow-on teaching and learning activities is known, technically, as "formative assessment". The word or word-stem "form-" indicates the idea of this approach to assessment.
Formative assessment is actually a way or method of USING assessment information. It is not (!) a particular KIND of assessment. By contrast, not using assessment information to determine follow-on instruction and learning experiences, but instead only using the assessment information to summarise and report student achievement, is, of course, technically, "summative assessment". Here the word-stem "summ-" similarly indicates the idea of this approach to assessment.
The terms "formative" and "summative" are very familiar to teachers. However, in my experience, these terms are not always well understood.
For example, as noted, some teachers think that formative assessment is a particular type of assessment. However it is possible to USE almost any kind of assessment in formative ways, matching follow-on classroom experiences to the collective or individual evidence obtained by assessment.
Evidence-based teaching is the logical follow-on to evidence-based assessment and reporting, surely.
Similarly, some teachers think that formative assessment occurs midstream, DURING a lesson, or a sequence of lessons. They claim they are distinguishing this in-the-midst assessment from something they often call "diagnostic assessment", which is supposed to occur BEFORE a lesson or sequence of lessons begins.
In my view, whether assessment occurs before, or during classroom learning activities (or even at the end), if the results of the assessment are used to decide what happens next, or later, this is using the assessment information in a "formative" way.
Correspondingly, the process of using evidence from assessment to shape the subsequent instruction and classroom activities is "diagnosis". I see no reason to make any distinction about WHEN assessment occurs and is used to guide subsequent teaching. For me, formative assessment IS diagnostic assessment. What matters is HOW assessment is used!

The importance of formative assessment
Formative assessment has been shown to be the most constructive way of using assessment information. Paul Black and Dylan Wiliam – distinctively, with one "l" – are international leaders in researching the constructive value of formative assessment. For example, their pioneering studies, outlining ten years of research, are Black and Wiliam (1998a and b). Many other papers have followed, including Wiliam (2005).
Interestingly, Wiliam encourages the use of assessment questions and tasks that are open-ended, or "rich", or that probe students' understanding, such as:
Which of these two fractions is larger: 3/7 or 3/11?
A simple answer of "this one is larger" is not accepted. The student is asked to explain why, or how he or she knows this to be the case – showing evidence of mathematical thinking. Such evidence or explanation (similar to the traditional instruction by teachers to "show all reasoning") is inherently diagnostic!
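For example, one acceptable chain of reasoning might run: both fractions have the same numerator, 3, but sevenths are larger parts of a whole than elevenths, so 3/7 must be larger than 3/11. Equivalently, rewriting both over a common denominator, 3/7 = 33/77 and 3/11 = 21/77, and 33/77 is clearly the larger.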
This emphasis on the open-endedness of "rich" or "good" questions has been similarly stressed by many Australian mathematics educators, including Peter Sullivan, Doug and Barbara Clarke, Phillip Clarkson, Sandra Dole, and David Clarke. The topic of "RATs", or Rich Assessment Tasks, may have been overshadowed by open-ended and "good" questions, but the underlying ideas are equivalent, across decades of research and discussion.
Wiliam remarks,
"in some senses this is a 'trick question' … but as a teacher, I think it is very important for me to know if my students think that three-elevenths is larger than three-sevenths. The fact that this item is seen as a 'trick question' shows how deeply ingrained into our practice the summative function of assessment is" (2005, p. 23).
Another of Wiliam's questions is:
Which of the following statements is true:
1. AB is longer than CD
2. AB is shorter than CD
3. AB and CD are the same length.

Wiliam comments, "viewed in terms of formal tests and examinations, this may be an unfair item, but in terms of a teacher's need to establish secure foundations for future learning, I would argue that this is entirely appropriate" (2005, p. 23).
He adds, "Rich questions [like these] provide teachers not just with evidence about what their students can do, but also what the teacher needs to do next, in order to broaden or deepen understanding" (2005, p. 23). (Incidentally, I do not see why the latter question is "unfair" or the former is a "trick".)
Formative assessment – analysing individual levels of achievement and linking this with the developing sequence of the curriculum – "diagnoses" what students need as follow-on instruction. Hence, as noted, in my opinion, there is no practical difference between "formative" and "diagnostic" assessment.
Moreover, because an individual's achievement varies for different topics (or sub-curricula) within a curriculum, and levels of achievement in a particular topic vary across a classroom of students – with ups and downs like mountain peaks and valleys – I like to use the term "diagnostic profile" to label a way of (formatively) assessing students across the year-levels of a developing topic sequence. (KeyMath spoke of "profiles" as part of the end-product of a completed student assessment.)
Contrast this with the more traditional, possibly more familiar, way of assessing students at year-level X with questions set only for year-level X. If students at a particular year-level are only given questions presumed to be suitable for that year-level (specifically, for "average" students experiencing "average" instruction and challenge), those more able or higher-achieving students who COULD go further have no opportunity to show this. Similarly, at the other end of the achievement spectrum, students who are struggling with the supposedly "average" curriculum for their chronological year-level will only show that they are struggling to achieve any correct answers, or to maintain self-esteem, at their year-level.
What can we do about this?
The solution is amazingly easy: make sure that a test includes questions with a wide range of levels of difficulty, and build on the results.
What I refer to as a "diagnostic profile" is a topic-specific test (or graded, or open-ended, worksheet or set of tasks) that assesses separate sub-curricula in the Mathematics curriculum, from the early and easy questions, onwards and upwards, to the later and harder questions.
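To make the image of peaks and valleys concrete, a student's results from such a profile could be recorded as nothing more elaborate than a mapping from each sub-topic to the highest level reached. The sketch below is purely illustrative; the sub-topics and levels are invented, echoing the Year 4 example given earlier:

    # Hypothetical ceilings, by sub-topic, for one Year 4 student
    # (invented values, purely for illustration).
    profile = {
        "Numbers and Numeration": 5,
        "Addition": 4,
        "Subtraction": 3,
        "Multiplication": 2,
        "Division": 2,
    }

    # A crude text picture of the peaks and valleys.
    for topic, level in profile.items():
        print(f"{topic:24} {'#' * level}  (about year-level {level})")

Reading across such a record shows at a glance where follow-on teaching for that student should begin in each sub-topic.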
One way of making a diagnostic profile
How can we create such graded questions or challenges?
Consider the Australian Curriculum: Mathematics (2009) as a current example of a practical summary of the school mathematics curriculum. Typically for so-called scope-and-sequence charts, it is organised in topic-sections (known as "strands"), with a structure of year-levels.
The danger with such a structure is that some teachers may think that apparently separate topics really are separate. In practice, for example, any curriculum statements about the sub-curriculum of Measurement (part of the Content Strands) naturally connect with other curriculum statements from Number. Moreover, the diagrams used to illustrate curriculum statements about Measurement inherently draw on separate curriculum statements about Geometry and Spatial Thinking. Similarly, problem solving, a major theme within the Proficiency Strands, applies throughout all the Content Strands. Also, the apparent separation between year-levels is misleading, because there is no evidence that any particular curriculum statement belongs in any one year-level. The classroom reality is that, regardless of the ages of students, and the kind of teaching they receive, any year-placement will include some students who have already mastered virtually everything they will be taught during that year, and, at the other end of the achievement spectrum, other students who are still struggling with what they were taught in their previous year-placement, or earlier.
That is, the Australian Curriculum looks as though it consists of separate or discrete elements. But the elements are intricately woven together. And the appearance of year-level steps leading upwards through the Content Strands and Proficiency Strands does not reflect the classroom reality of the diversity of students' actual achievement levels.
To illustrate my discussion of formative assessment and diagnostic profiles, I will choose "Number Concepts" as a possible topic or focus for making a diagnostic profile. To create the diagnostic profile, one possible first step is to consult the Australian Curriculum: Mathematics – or any similar graduated curriculum outline – and compile all the curriculum statements about the topic. The second step is to translate each curriculum statement into a question or task.
The first curriculum statement about Number Concepts in the Foundation year is:
Establish understanding of the language and processes of counting by naming numbers in sequences, initially to and from 20, moving from any starting point (ACMNA001).

Clearly this is actually a statement about whole numbers (positive integers, or the set of Natural numbers) expressed as spoken words – that is, their English-language oral number-names (technically, "numerals") – and their numerical sequence. Implicitly it is also, or could be, about the written number-names, or about the written symbols for numerals. Logically, and perhaps biologically (genetically), whole numbers are the conceptual beginning. Fractions (and decimals) come later, as do negative numbers, and irrationals, transcendentals, then complex numbers, transfinite numbers, and more …
Most of the curriculum statements concerned with Number Concepts will be found in the Content Strand of Number and Algebra within the Australian Curriculum, although some might appear in the Content Strand of Measurement. The Proficiency Strands of Understanding, Fluency, Problem Solving and Reasoning also suggest (less specifically) related content. In the Foundation year, Understanding includes connecting names, numerals and quantities; Fluency includes readily counting numbers in sequences, and continuing patterns; Problem Solving includes using familiar counting sequences to solve unfamiliar problems, and discussing the reasonableness of the answer; and Reasoning includes explaining comparisons of quantities, and creating patterns – all of these concerned, implicitly, with Number Concepts.
Because the Australian Curriculum is available as a digital document (either as a web-site or a downloaded PDF file), apart from ordinary reading, it is also easy to use FIND options to search for "number", "fraction", "decimal", and other key-words that are relevant to the focus-topic (a minimal search sketch is given after the examples below). By Year 7, for example, some of the Number Concept curriculum statements in the Number and Algebra strand are:
Investigate index notation and represent whole numbers as products of powers of prime numbers (ACMNA149).
Investigate and use square roots of perfect square numbers (ACMNA150).
Compare fractions using equivalence. Locate and represent positive and negative fractions and mixed numbers on a number line (ACMNA152).
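The key-word search mentioned above can even be partly automated. The following is a minimal sketch only, assuming the relevant curriculum text has been saved locally as a plain-text file with the hypothetical name "ac_mathematics.txt":

    keywords = ("number", "fraction", "decimal")

    # Print every line of the saved curriculum text that mentions
    # one of the key-words for the focus-topic.
    with open("ac_mathematics.txt", encoding="utf-8") as source:
        for line in source:
            if any(word in line.lower() for word in keywords):
                print(line.strip())

The matching statements can then be copied into a working list, ready to be translated into questions or tasks.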
Once we have compiled all the curriculum statements for our focus-topic, we then translate each one into a question, or task. For example, the first curriculum statement might translate into tasks such as:
Write the number-word that comes BEFORE thirteen.
Write the number-word that is TEN more than forty.
Write the numeral that is 5 more than 28.
Similarly the last curriculum statement (about comparing fractions) might translate into a task such as:
On a given number-line, mark and label the positions of the fractions three-sevenths, two and four-fifths, and negative five-eighths.
On a number-line marked with 0, 1 and 2, show the positions of ¾ , 4/7, and 9/5.
Do something like this for all the curriculum statements, and the diagnostic profile is then ready to use.
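As a quick check on the second of these translated tasks, one worked placement might run: ¾ = 0.75, so it sits three-quarters of the way from 0 to 1; 4/7 is a little more than one half (about 0.57), so it sits just past the midpoint between 0 and 1; and 9/5 = 1.8, so it sits four-fifths of the way from 1 to 2.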
Where do the year-levels in the Australian Curriculum come from?
It is important to emphasise that the year-levels "prescribed" by the Australian Curriculum – or ANY curriculum outline – are NOT based on the results of classroom research that might show that a particular curriculum statement that seems clearly suitable for, say, Year 5 is indeed too difficult for Year 4 students, and too easy for Year 6 students. Instead, the year-levels have been decided by the negotiated collective professional opinions (!) of the author or committee of authors (experienced mathematics teachers and mathematics education teacher-trainers and researchers) who created the Australian Curriculum: Mathematics.
The reality is that each curriculum statement is collectively believed to be appropriate for "average" students, in a particular year-level, who have experienced "average" teaching in that year and in previous years. However, outstanding teachers will help more of their students to work effectively at higher year-levels, while students who are being taught by less able teachers might not achieve at the expected "average" level. Similarly, highly able, or strongly interested students are often working several year-levels above their chronological year-placement, while less able students may lag behind the expected "average".
In short, the year-levels of the Australian Curriculum are only informally (not experimentally or statistically) normative: that is, they are indicative rather than prescriptive.
This is another reason to assess across year-levels. Instead of accepting official year-level suggestions as some kind of constraining "gospel" or officially received wisdom, actively identify as accurately as possible what each individual student can, and cannot do. Then use classroom activities and instruction that build on the evidence-based foundations of what each student already knows. Closely connecting teaching, curriculum content, and individual student knowledge is, of course, the essence of good teaching.
Using our own curriculum knowledge to make a diagnostic profile
Because teachers in Australia are expected to be familiar with and use the Australian Curriculum, it is sensible to search the Australian Curriculum to compile a detailed outline of any planned focus-topic. But can we be certain the Australian Curriculum contains everything we would expect to appear in a mathematics curriculum for Primary and Secondary students? There may be gaps, or oversights. For example, the Victorian Essential Learning Standards, surprisingly, left out almost all of the sub-curriculum of Money that had been included in the preceding Curriculum and Standards Frameworks (Gough 2010). Mathematics is such a large curriculum that mistakes can happen.
For this reason it may be a helpful check, and possibly quicker and equally effective, for teachers to use their own curriculum knowledge to identify the early and easy aspects of a particular topic, and the harder aspects that are usually taught later. What, then, is a possible sequence for Number Concepts? Counting whole-numbers, ordering, place-value, fractions, decimals, …
The following set of tasks is designed to illustrate such a Do-It-Yourself approach to making a Diagnostic Profile that explores how much a student knows about Number Concepts.


A Draft Diagnostic Profile of Number Concepts.
Task: For each of these number names or descriptions, write an example, and briefly explain the example:
1. Counting number
2. Zero
3. Odd number
4. Product or rectangular number
5. Prime number
6. Square number
7. Fraction
8. Fraction bigger than 2
9. Fraction equivalent to three-sevenths
10. Decimal
11. Decimal smaller than one-hundredth
12. Base number raised to a power
13. Square root
(Each numbered item has a blank "Example" column alongside it for the student's written example and brief explanation.)


Of course this is an extremely simplistic set of "questions", or tasks or stimulus-starters.
The tasks would be more effective if, for example, the technical terms used in some of the questions were briefly explained, and a simple worked example was included. For instance, the last question would be more diagnostically informative if it included a brief but constructive description, such as:
"the square root of a particular number is another number, that, when we square that number (that is, we multiply that other number by itself), the result is the particular number we started with. For example, 3 is the square root of 9, because when we square 3 (that is, we multiply 3 by itself), the result is 9; that is, 3x3=9".
Similarly, the word "square", used earlier in the sequence of questions, would be briefly explained, and illustrated, before stating a task such as, "Give a different square number".
By providing brief descriptions and worked examples as reminders for students who have encountered the ideas before, or as one-off mini-lessons for students who are new to these ideas, we give every student the best opportunity to answer as many questions as possible.
Do you assess in this way, in essence, asking students, "How far can you go with this topic?"
If you don't, you are not properly assessing for the spectrum of individual differences among your students, or for a properly differentiated curriculum.
Can we do better?
References and Further Reading
Australian Curriculum, Assessment and Reporting Authority [ACARA] (2009, and later). Australian Curriculum: Mathematics: http://www.australiancurriculum.edu.au/Mathematics/Rationale, last accessed 24 August 2015.
Black, P. J. & Wiliam, D. (1998a). "Assessment and Classroom Learning". Assessment in Education: Principles Policy and Practice, vol. 5, no. 1, pp.7–73.
Black, P. J. & Wiliam, D. (1998b). "Inside the Black Box: Raising Standards Through Classroom Assessment", Phi Delta Kappan, vol 80, no. 2, pp. 139–148.
Clarke, D. M. (2000). "The Early Numeracy Research Project [ENRP]: Some Insights From an Exciting First Year". In Department of Education, Employment and Training [DEET]. (Eds.), High Expectations: Outstanding Achievement (Proceedings of the Early Years of Schooling P-4 Conference, CD-ROM). Melbourne: DEET.
Clarke, D., Clarke, B., Beesey, C., Stephens, M., Sullivan, P. (1996). "Developing and Using Rich Assessment Tasks With the CSF". In H. Forgasz, T. Jones, G. Leder, J. Lynch, K. Macguire & C. Pearn (editors), Mathematics Making Connections [Thirty-third Annual Conference] Mathematical Association of Victoria, Brunswick, 1996, pp 287-294.
Dole, S. (1996). "Searching for Classroom RATs (Rich Assessment Tasks)". In P.C. Clarkson (Ed.) Technology in Mathematics Education, Mathematics Education Research Group of Australasia, Melbourne, pp. 162-169.
Clarke, D., Gervasoni, A., & Sullivan, P. (2000). "The Early Numeracy Research Project [ENRP]: Understanding, Assessing and Developing Young Children's Mathematical Strategies": Paper presented to the 2000 Conference of the Australian Association for Research in Education, Sydney, December 2000: in http://www.aare.edu.au/00pap/cla00024.htm: last accessed 30 March 2012.
Clarke, D. M., Sullivan, P., Cheeseman, J., & Clarke, B. A. (2000). "The Early Numeracy Research Project [ENRP]; Developing a Framework for Describing Early Numeracy Learning". In J. Bana & A. Chapman (Eds.), Mathematics Education Beyond 2000 (Proceedings of the 23rd annual conference of the Mathematics Education Research Group of Australasia, pp. 180-187). Fremantle, Western Australia: MERGA.
Connolly, A.J. (Ed.) KeyMath-3 Diagnostic Assessment: Manual Forms A and B. Pearson, Minneapolis, 2007.
Connolly, A.J., Nachtman, W., & Pritchett, E.M. (1971). KeyMath: Diagnostic Arithmetic Test. American Guidance Service, Circle Pines.
Early Years Numeracy Interview (EYNI): Department of Education and Early Childhood http://www.education.vic.gov.au/studentlearning/teachingresources/maths/interview/moi.htm: last accessed 26 March 2012: part of a web-site on Assessment.
Ellerton, N.F., & Clements, M.A. ("Ken") (1995). "Challenging the Effectiveness of Pencil-and-Paper Tests in Mathematics". In L. Velardi & J. Wakefield (eds.), Celebrating Mathematics Learning (pp 268–276), Mathematical Association of Victoria, Brunswick.
Gough, J. (1999a). "Perimeter Versus Area—Fixing False Assumptions", Prime Number, vol. 14, no. 3, pp 19-22.
Gough, J. (1999b). Diagnostic Mathematical Profiles, Deakin University Press; University of New South Wales Press, Sydney.
Gough, J. (1999c). "Assessing Time — How to Profile (or Benchmark?) a Topic". In Tynan, D., Scott, N., Stacey, K., Asp, G., Dowsey, J., Hollingsworth, H., & McCrae, B. (eds.) Mathematics: Across the Ages, Mathematical Association of Victoria, Brunswick, 190-196.
Gough. J. (2000). "Chance: A Diagnostic (Primary) Mathematics Profile". In J. Wakefield (ed.) Mathematics: Shaping the Future, Mathematical Association of Victoria, Brunswick, pp. 106-117.
Gough, J. (2001a). "Weighty Matters and Dense Arguments: CSF Versus Real Experience", Prime Number, vol. 16, no. 2, pp. 10-14.
Gough, J. (2001b). "Time: A Diagnostic (Primary) Mathematics Profile", Prime Number, vol. 16, no. 1, pp. 19-25.
Gough, J. (2001c). "A Diagnostic 'Profile' for Chance (Probability)", Prime Number, vol. 16, no. 3, pp 19-27.
Gough, J. (2002). "Money Matters: Everyday Numeracy and Profiling for a Pocketful of Learning". In C. Vale, J. Roumeliotis, & J. Horwood (Eds) Valuing Mathematics in Society, Mathematical Association of Victoria, Brunswick, 2002, pp. 221-231.
Gough, J. (2004a). "Editorial: Familiarity Breeds … What?" Vinculum, vol. 41, no. 3, p. 2.
Gough, J. (2004b). "Fixing Misconceptions: Length, Area and Volume". Prime Number, vol. 19, no. 3, pp. 8–14.
Gough, J. (2004c). "Seeing Through the CSF: Profiling Space". In B. Tadich, P. Sullivan, S. Tobias, & Brew, C. (Eds.) Towards Excellence in Mathematics. Mathematical Association of Victoria, Brunswick, pp. 156–165.
Gough, J. (2005). "Angles: Exploring Definitions, Conventions, Uses and Structure in Curriculum and Psychology". In J. Mousley (Ed.) Mathematics — Celebrating Achievement: 100. Mathematical Association of Victoria, Brunswick, CD-ROM e-book.
Gough, J. (2006a). "Mass, Maths, Physics, VELS and Other Weighty Matters". In J. Ocean, D. Siemon, J. Virgona, M. Breed, & C. Walta (eds.) Mathematics — The Way Forward. Mathematical Association of Victoria [MAV], Brunswick, 2006.
Gough, J. (2006b). "The Multiplication Curriculum: a Short Outline", Prime Number, vol. 21, no. 4, pp. 21-27.
Gough, J. (2010). "Money and VELS", Prime Number, vol. 25, no. 1, pp. 6-8.
Gough, J. (2015). "Pre-Test – Great? … but How? … and Why?", MT [Mathematics Teaching] no. 247, July 2015, pp. 33-38.
Schleiger, H.E. & Gough, J. (1998). Diagnostic Mathematical Profiles, Deakin University Press, Geelong.
Sullivan, P. (1992). "Open-ended Questions, Mathematical Investigations and the Role of the Teacher". In M. Horne & M. Supple (Eds.) Mathematics: Meeting the Challenge. Mathematical Association of Victoria [MAV], Brunswick, 1992, pp. 98–103.
Sullivan, P. (1995). "Content-specific Open-ended questions: A Problem Solving Approach to Teaching and Learning Mathematics". In J. Wakefield & L. Velardi (Eds.) Celebrating Mathematics Learning. Mathematical Association of Victoria [MAV], Brunswick, 1995 pp. 176–180.
Sullivan, P., & Clarke, D. (1988). "Improving the Quality of Learning by Asking 'Good' Questions", South-east Asian Journal of Science and Mathematics Education, vol. xi, pp. 14-18.
Sullivan, P., & Clarke, D. (1991a). Communication in the Classroom: The Importance of Good Questioning. Deakin University, Geelong.
Sullivan, P., & Clarke, D. (1991b). "Catering to All Abilities Through 'good' Questions". Arithmetic Teacher, vol. 39, no. 2, pp. 14–18.
Sullivan, P., & Lilburn. P. (2004). Open-Ended Maths Activities; Using 'Good' Questions to Enhance Learning, Oxford University Press, Melbourne: First edition 1997.
Sullivan, P., Warren, E., & White, P. (2000). "Students' Responses to Open-ended Mathematical Tasks". Mathematics Education Research Journal, vol. 21, no. 1, pp. 2–17.
Test Review: A. J. Connolly, KeyMath-3 Diagnostic Assessment: Manual Forms A and B (Minneapolis, MN: Pearson, 2007). Journal of Psychoeducational Assessment, vol. 29, February 2011, pp. 94-97.
Wiliam, D. (2005). "Keeping Learning on Track: Formative Assessment and the Regulation of Learning". In M. Coupland, J. Anderson & T. Spencer (Eds.) Making Mathematics Vital: Proceedings of the Twentieth Biennial Conference of the Australian Association of Mathematics Teachers, Australian Association of Mathematics Teachers [AAMT], Adelaide, pp. 20-34.
