What, when and how to assess in math class
For many teachers, and even more students, assessment is a nightmare. This stems, among other reasons, from the idea that equates assessment with ‘grading’. Assessing is much more than testing and grading: it should be a reflective process that promotes learning and provides satisfaction to both the student and the teacher.
That is why it is vital for teachers to talk, share concerns, strategies and tools, discuss how to assess, when to assess and, above all, why we should assess. In this series of articles that we begin today, we will address all these questions, share reflections and classroom experiences, and explore real-life examples from children to approach a topic as complex as assessment from different points of view.
What do we mean by assessment?
When talking about assessment, its two main purposes are often mixed up:
- Formative assessment and self-assessment: assessment is a means to regulate learning, identify achievements, difficulties, errors and find ways to progress. This information may come to the student in the form of feedback from the teacher or a peer. In both cases, we speak of formative assessment. At the same time, we also expect the learner to reflect on their own learning process and develop the ability to self-regulate; this is self-assessment.
- Graded assessment: assessment is a means of testing what has been learned. This is the assessment that we refer to when we put grades on the newsletter. Grades are also used to rank (e.g. for university entrance) or to certify that a level has been acquired (e.g. in official language exams).
Although for most students, teachers and society in general the main concern when assessing is grading, a truly meaningful assessment process must contemplate both purposes and, above all, promote the first one. Think of your phone’s GPS: it not only tells you where you are and where you want to go, but it also gives you step-by-step directions. Even if you don’t listen to it and end up getting lost, it is able to adapt and recalculate your journey. Assessment should be like a GPS: it should tell students where they are and where they need to go, but it should also show them the way and guide them along it.
This is explained by Neus Sanmartí, professor at the Universidad Autónoma de Barcelona and a leading assessment expert for the Department of Education, in her highly recommended book, Evaluar es aprender (2020) (Assessing is Learning):
In the development of the competency-based curriculum, assessment has a regulatory function in the whole learning process, since it must allow pedagogical strategies to be prioritized and adapted to students’ characteristics, and their progress to be verified as they advance in their learning. It should allow teachers to confirm the degree of students’ achievement of basic competencies and to adjust, if necessary, teaching processes. For students, assessment also becomes an essential element for learning, since students who can see their progress and know how to regulate themselves are more prepared to advance in their learning. Therefore, strategies should be sought to share the assessment process with students and make them participants and protagonists of their own learning process, and to share with the rest of the teaching staff and with students’ families the coherence of the assessment criteria applied across areas of learning, projects and other school activities.
Jo Boaler, one of the world’s most influential mathematics education experts and a professor at Stanford University, describes it this way (2020):
One problem in the United States is that many teachers use summative assessment formatively, that is they give students a score or grade when students are still learning the material, rather than at the end of a course. In Assessment for Learning, students become knowledgeable about what they know, what they need to know and ways to close the gap between the two places.
AFL helps teachers make their instruction more effective and students learn as much as possible. Teachers using AFL spend less time telling students what their successes are and more time empowering them to take charge of their own learning pathways.
With this idea in mind, and with a desire to simplify, we understand assessment as consisting of three phases, as described by Fernández and Morales (2022). The difficulty lies in answering the questions raised in each phase:
What do we assess in mathematics?
Traditionally, math teachers focused on content acquisition: Do you know how to add? Do you know what an isosceles triangle is? Since the early 2000s, however, the majority of experts in mathematics education, and an increasing number of institutions, have been committed to learning (and, therefore, to assessment) that also considers, in addition to content, the processes of mathematical competence.
Niss and Højgaard (2019), the architects of the PISA theoretical framework in mathematics, explain it this way:
Of course, when we focus on competence, we do not intend to discount the importance or role of mathematical subject matter, including facts, results, and methods, in the development and possession of mathematical competence and knowledge. This would be absurd, just as it would be absurd to dismiss vocabulary, spelling, grammar and syntax as significant elements of linguistic competence. However, just as it would be absurd to reduce linguistic competence to mere knowledge of vocabulary, spelling, grammar and syntax, it would be absurd to reduce the ability to exercise and enact mathematics to the enumeration of mathematical concepts, terms, theorems, rules and procedures to be known. What is important is not only what you know, but how you know it, and what you can do with what you know.
What are these processes? Depending on where you are, different approaches to mathematical competencies are found, but most are based on the ideas developed by the National Council of Teachers of Mathematics (NCTM), the association of mathematics teachers in the United States and the research that stems from it. At Innovamat, looking beyond possible adaptations for specific countries, we are committed to a global framework that takes into account the following four processes, closely aligned with what is proposed by the NCTM or the PISA framework experts (Niss and Højgaard, 2019):
Problem solving: A problem is something that requires a strategy to solve. This process details the phases we must follow to solve problems. It is a core process, one that fosters a genuinely competency-based environment.
Reasoning and proof: This process details the skills involved in analyzing a situation in order to formulate and test conjectures, make reasoned deductions and, above all, debate any statement we make in the mathematics classroom.
Connections: This process includes all the abstract relationships we find or establish among ideas and concepts. There are two clear types of connections:
- Connections within math itself.
- Connections between math and real life.
Communication and representation: This process details the skills related to transmitting mathematical information, whether sending or receiving it. There are up to five ways of communicating or representing a concept.
These are the four fundamental processes. Skills or sub-competencies can be broken down and specified within each process, as some official curricula do. These skills, shown in the infographics above, are useful for better understanding the processes, but in our classroom experience they are too detailed to assess one by one: they add a level of complexity to the assessment that is difficult to manage in a classroom with more than 25 students. Therefore, we believe it is better to focus our efforts and the limited time we have available on assessing these four general processes in depth, mastered in relation to specific content that must also be assessed.
What we can assess with respect to problem solving, for example, is whether the student is able to be thorough and find all the solutions in an activity involving additive thinking in the 1–20 range. It makes little sense to separate the processes from the content, because content is almost always a necessary condition for exercising a process. The opposite, however, is possible, as was in fact the case in more traditional approaches: we can assess the degree of acquisition of a piece of content independently of the processes involved. A good example of this is the progress reports generated by our app, with details on the acquisition of content, alerts on each student’s shortcomings and recommendations to adapt the teaching task accordingly.
Finally, there is a third component in the learning of mathematics (and, in fact, of any subject) that has been addressed by authors such as Boaler (2020) or Johnson (2022) and that must also be taken into account: socioemotional skills. Attitudes such as initiative, perseverance, acceptance of mistakes or the ability to cooperate, among others, are essential in mathematical activity; they must be worked on in the classroom and they must be assessed. In fact, a socioemotional problem can create basic impediments that make it impossible to learn content or processes. For this reason, we must be very attentive to this third layer and keep in mind that, sometimes, learning difficulties are not conceptual or competency-based, but emotional in nature.
The most important thing of all, in short, is that we are clear about what we want to assess in our students and that we act coherently. In the end, the decisions we make, no matter how well founded, will always be biased by our vision. Let’s take the pressure off: assessing is essentially a subjective task, which depends on both the observer and the observed. Not even highly standardized tests such as university entrance exams are objective: the choice of which questions appear on the test is subjective; the scoring and weighting of each question is subjective; even the grading is subjective, because it depends on each examiner and the moment at which they grade (anyone who has faced a pile of ungraded exams knows that the first one won’t be graded in the same way as the last one). Be careful, though! Admitting this subjectivity should relieve teachers who suffer because they fear they are not being sufficiently objective (it is impossible!), but it in no way implies that we can assess at random, nor does it detract from the value of finding evidence to support our impressions. Quite the contrary: it is teachers, the experts in the field, who can choose how best to collect evidence at any given moment and decide what is best for each student to promote their progress.
When do we assess?
Always. Every time we resolve a doubt, every time we guide a student, every time we ask a question, etc., we are assessing. We have to take advantage of the fact that we spend the entire year with these students to continuously collect evidence of various kinds. Ideally, we would listen, observe and collect evidence from students as they perform each and every task, every day of the year and throughout every session. Obviously, this is impossible in a classroom with more than 25 students.
Think of a student in your class this year. Got one? You are probably much more comfortable determining whether that student knows how to add, multiply or solve quadratic equations, for example, than whether they are competent in the Reasoning and Proof process. Why? Because as teachers we are much more fluent in addition, multiplication or quadratic equations than in Reasoning and Proof. That is why at Innovamat we have always advocated that the first step is teacher training. Only when we are able to think about processes in an integrated and natural way will we be able to assess them with ease and comfort on a day-to-day basis. In fact, we don’t have to wait for a specific assessment session to collect evidence; it must be a continuous attitude. Sometimes the evidence will be physical and can be kept (from answers in the logbook or notebook to written tests, if applicable), and sometimes it will be ephemeral (from an oral intervention or a gesture to a performance with manipulative material).
In our experience, working in this way reduces students’ anxiety about tests, because not everything comes down to a single moment. It also reduces the pressure we feel as teachers when the assessment period arrives. We know, however, that some teachers feel that continuous assessment is not fair, because it does not look at the same thing for all students at the same time. Well, first of all, we must remember the GPS metaphor: we want an assessment that accompanies and guides us throughout the journey, not just a locator that tells us where we are and nothing else. Only in this way can we measure each student’s progress. And, secondly, continuous evidence collection, if well planned, supports a variety of formats and gives all students many opportunities to show their progress in content, processes and socioemotional skills (all three layers are important!). In fact, the evidence can also include a traditional written test, of course. Ruiz (2021) explains that traditional tests, because they are based on retrieval, can help learning; if properly designed, they can be a very good learning tool. However, the author also warns that if these tests weigh too heavily in students’ academic grades, the anxiety they provoke can overshadow their teaching benefits. As is often the case, the key lies in finding a balance and not settling for written tests alone.
From there, each teacher must try, experiment and find what works best for them and their students. Any ideas? Obviously, the best option is to have two teachers in the classroom at the same time, so that one can focus on assessment. We understand, however, that this unfortunately is not possible in most schools, so we have to explore other options. If it is the first time we are assessing processes, for example, a prudent approach might be to choose one of the four processes each week and aim to collect a couple of pieces of evidence from each student based on what happens in the classroom. Here, the selection of the task matters, because a good choice can facilitate this collection. In choosing the sequence of activities included in the Innovamat curriculum, for example, in addition to the curricular content, we have tried to promote moments that help us observe the development of the processes; in fact, the teaching guides are full of training tips that address this. All of this serves to provide timely feedback to the student, to adjust our teaching, and to support our grading decisions, which can be complemented with a specific test, the Innovamat App reports and the correction of logbooks or notebooks. Next year, once we have improved at assessing processes, we could try to assess all four at once. And, if the number of students is too large, we could focus each week on collecting evidence on one third of the class; after three weeks we would have covered everyone and could start over.
What tools do we suggest for collecting evidence?
It seems clear that collecting evidence is the cornerstone of an assessment that is useful for everyone involved. Depending on the school year, some tools are more suitable than others. Innovamat will not give you a magic recipe with universal assessment rubrics because, at the moment, they do not exist. What we do offer is extensive teacher training and a living, growing ecosystem of tools flexible enough for each of you to adapt to your own reality. Some of these tools are specific tasks to assess content and processes within each year; the practice app, where content is consolidated; indications in the teaching guide about which evaluative observations can be made at each moment of the session; logbooks with an answer key that is also full of evaluative indications; etc.
As we said at the beginning, this is the first of a series of articles with reflections on assessment. We will explore several of these tools based on concrete activities and real student responses and productions, assessed by us. Will you join us?