Technology and Algebra in Secondary Mathematics Teacher Preparation Programs

Session presented at the Research in Undergraduate Mathematics Education annual conference, February 2014, Denver, CO.

Abstract: The Conference Board of the Mathematical Sciences has recently advocated for incorporating technology in secondary mathematics classrooms. Colleges and universities across the United States are incorporating technology to varying degrees into their mathematics teacher preparation programs. This study examines preservice secondary mathematics teachers’ (PSTs’) opportunities to expand their knowledge of algebra through the use of technology and to learn how to incorporate technology when teaching algebra in mathematics classrooms. We explore the research question: What opportunities do secondary mathematics teacher preparation programs provide for PSTs to encounter technologies in learning algebra and learning to teach algebra? We examine data collected from a pilot study of three Midwestern teacher education programs conducted by the Preparing to Teach Algebra (PTA) project. Our data suggest that not all secondary mathematics teacher preparation programs integrate experiences with technology across mathematics courses, and that mathematics courses may provide PSTs few experiences with technology beyond strictly computational uses.

In this session, we presented results from an analysis of opportunities to use technology to support algebra teaching and learning in secondary teacher preparation programs. The data were collected during the pilot phase of the Preparing to Teach Algebra project.

Presentation of Findings

We found that instructors responded in interesting ways regarding their use or non-use of technology in support of algebra teaching and learning:

  • Practical Concerns:
    • Not useful in certain courses:
      • …It just doesn’t strike me as really helpful… [University A, Structure of Algebra]
      • It’s abstract for a reason. [University C, Abstract Algebra]
    • Issues of access:
      • …we don’t have money in our department to buy them. So, we don’t have those and our students… need to know those things. [University A, Secondary Math Methods]
    • Not enough time or support:
      • I do not have time to work all the [PowerPoint] slides… [University B, Linear Algebra]
      • …I think …we should have a nice computer-simulated programs that make you see the difference [between convergence and uniform convergence of functions]. … For me, I see it in my head….But I can’t see how. I really can’t see how. [University B, Analysis]
  • Impeding Learning:
    • Blocks Students from Developing Memory
      • … because of the calculator and all these technologies [people] don’t … develop their memory.  But then you are asking them to develop their memory on something that is harder than adding or subtracting, you know? [University B, Analysis]
    • Computational Use Blocks Concept Development
      • But I also want them to know the concepts involved so sometimes … I make a point to tell them that they shouldn’t use technology… [University A, Linear Algebra]
      • [A]t a college level we’re now quite concerned because … we have students who can’t multiply…because they have always had a calculator, you know.  There are students who can’t tell you what the graph of y = x looks like.  …[T]o be able to think about what y = x and y = x² looks like — they can’t do without a machine.  …So, we are actually moving to not using technology. [University A, Secondary Math Methods]
  • Enhancing Learning:
    • Making the Abstract More Tangible
      • [Technological tools] can bring some of these more abstract things to make them more tangible for students. [University C, Middle School Math Methods]
    • Allowing Different Perspectives
      • I think it …gives them a way to see the problem from a different perspective…understand it from a learner’s perspective and …to think about how to instruct students in multiple ways… [University B, Secondary Math Methods 1 and 2]
    • Conceptualizing Mathematics
      • All of these tools represent ways to represent and conceptualize mathematical ideas that go beyond the symbolic. They’re important tools to really develop a conceptual understanding of mathematics. Moreover, it’s critical that our students are prepared to use these same tools … to foster the same sorts of understandings. [University B, Secondary Math Methods 3 and 4]
  • Whether or not to use technology is complicated:
    • Which courses could use technology?
      • In this course none. …In other courses that I teach I do use technology… I know that that is kind of counter-intuitive because textbooks always have technology stuff in there and some textbooks are even focused on technology. To me that is not what this [course] is about and the more technology you have in a course like this the less that there is for algebra. [University C, Differential Equations]
    • What are instructional consequences of technology use?
      • … there are times where instructionally it may be not the best thing to always use technology and so making that kind of judicious choice is something we talk about as well. [University A, Secondary Math Methods]

We also found some examples of activities to support preservice teachers in deciding whether, when, and how to use technology:

  • Affordances of technology, e.g.:
    • engagement
    • enhancement of some concept development
  • Constraints of technology, e.g.:
    • instructor’s/instructional time
    • impediment of some concept development
  • … you don’t just use a tool or technology just because it’s going to be fun; but you really have to think about – What does this particular tool or technology afford me in terms of students’ understanding the content? …sometimes when we’ve used technology it didn’t really offer us any more than if we had just drawn [on] a piece of paper…. [University B, Secondary Math Methods 1 and 2]

We shared two examples of the use of technology to support algebra teaching and learning, one from a mathematics course and the other from a mathematics methods course:



This study comes from the Preparing to Teach Algebra project, a collaborative project between groups at Michigan State (PI: Sharon Senk) and Purdue (co-PIs: Yukiko Maeda and Jill Newton) Universities. This research is supported by the National Science Foundation grant DRL-1109256.

Stehr, E. M., & Guzman, L. (2014, February). Technology and algebra in secondary mathematics teacher preparation programs. Paper presented at the Seventeenth Annual Conference on Research in Undergraduate Mathematics Education (RUME), Denver, CO.


Role of Technology in Student Learning: Two Perspectives


Over the past several decades, technology has been promoted to the educational community as a panacea. K-12 teachers have been promised a quick (but meaningful) fix to everything from behavioral issues to test scores. Availability of technology was not initially equitable, which led to discussion of the digital divide (Reich, Murnane, & Willett, 2012). That is, some schools had access to technology and others did not. This first digital divide has been bridged with federal and private funding that allowed the purchase of classroom technology for schools across the United States. Attewell and Gates (2001) described a second digital divide that has emerged: recognition of disparity in how technology is used has grown as more schools have overcome issues of access. That is, access to technology itself has spread, but access to effective classroom use of that technology has faltered. Teachers must be able to use technology intentionally and meaningfully in their classrooms to support learning.

Educational psychology is a relatively new field that has developed over the last century. Despite its youth as a field of research, educational psychology has progressed rapidly based on scientific research, first in laboratories and now in classrooms. Teachers have always had ways of understanding their students’ educational needs. Awareness of theories of learning can refine that understanding by offering teachers alternative ways of thinking about student learning, which can in turn give them more freedom in their own classroom decision-making. Theories of learning can also help teachers and researchers be more thoughtful about using technology to support learning in the classroom.

I have chosen two articles that consider the use of technology to support classroom learning. Kolovou, van den Heuvel-Panhuizen, and Köller (2013) proposed use of online games as homework to support students’ development of informal reasoning about covarying quantities. Rivera (2007) proposed use of a TI-89 calculator to support students’ development of solution strategies for polynomial inequalities. Despite the difference in targeted grade levels and mathematical topics, these two articles together bring to light important issues relevant to mathematics education research on technology. The two papers have strong similarities as well as strong differences: both studies centered on students’ use of technology to support their work on a series of mathematical tasks, with regular whole-class discussions of their strategies.

In this paper, my purpose is to answer the following research questions: What assumptions about learning and knowing mathematics are held by mathematics education researchers who study classroom technology, and how do those assumptions compare to historical theories of learning? How are these assumptions reflected in the researchers’ stances toward technology? I first summarize each article briefly; these summaries provide context for the analysis that follows. In the analysis, I examine the researchers’ perspectives on mathematical knowledge and student learning, first through selected theories of learning and then in comparison to each other. Finally, I describe and compare the researchers’ stances toward technology, methods of analysis, and presentation of data.

Article Summaries

Online games – covarying quantity word problems.  Kolovou et al. (2013) described a quasi-experimental study they conducted with 236 Grade 6 students from ten schools across the city of Utrecht in the Netherlands. The study investigated the effect on students’ informal algebraic sense-making resulting from playing a researcher-designed online game at home. Within the larger domain of algebraic sense-making, the researchers focused on students’ abilities to solve word problems that involved covarying quantities. Kolovou et al. did not explicitly identify a particular theory of learning or theoretical lens in this article, but did explicitly define their central terms of information and communication technology (especially, online games), algebra (especially, early algebra), and homework.

All students took a pre- and post-test of six contextual number problems intended to assess students’ early algebra knowledge. Students from five of the schools also participated in a six-week intervention, spending as much time as they wanted in the online environment at home, working on several problems, and then discussing the problem solutions in class each week.

Kolovou et al. gathered quantitative data that included pre- and post-test scores on the early algebra test, general mathematical ability scores based on the End Grade 5 Cito-LOVS Mathematics test (a Dutch Grade 5 standardized mathematics exam), and data gathered from students’ use of the online game. The researchers described results of the study through a statistical analysis of the data and found a significant positive effect for students who logged in to the game during the intervention.

Handheld graphing calculators – polynomial inequalities.  Rivera (2007) described a classroom teaching experiment, in which he taught methods of solving polynomial inequalities to 30 juniors and seniors through 21 classroom sessions in a United States high school precalculus class. Rivera did not explicitly address why he chose this high school, teacher, or class. Rivera did explicitly address and develop his theoretical perspective centered on the ideas of instrumental schemes, instrumentalization, and instrumentation.

Although a full description of Rivera’s (2007) theory of learning is outside the scope of this paper, I attempt to give an informative (and somewhat informal) summary. In Rivera’s theory of learning, students construct their mathematical knowledge based on an interactive triadic relationship among the learners themselves, the social environment, and the mediation of the tools that are used. The teacher influences the students’ learning through orchestration of mathematical tasks, tool choices, and whole-class discussions and reflections. Instrumentalization is a phase in which students attach to a tool; that is, they develop mastery of features of a tool in the context of a mathematical task, and thus master the task in the context of the tool. Instrumentation is a phase in which students detach from the physical tool. Students internalize solution schemes by engaging in their own practice with a partner, and then demonstrating and reflecting on practice in whole-class discussion. The influence of the choice of tool can be seen in students’ gestures, actions, and language: the theory holds that the tool and social interaction influence the way a student thinks about the mathematics in the tasks, so that students in different communities of practice, or using different tools, will exhibit different ways of thinking about the mathematics of the tasks. Full detachment occurs when students have used their understanding of the mathematical features to develop practice that does not rely on the tool.

In Rivera’s (2007) study, the focus was on developing a general solution strategy for polynomial inequalities. Students used TI-89 calculators in each session, working first on tasks in pairs and then sharing and discussing the tasks and strategies in whole-class discussion. Rivera and the resident teacher met regularly to reflect and to strategize how to orchestrate students’ development of TI-89 practices, understanding of polynomial inequalities, and, eventually, general solution strategies.

Rivera (2007) gathered qualitative data that included field notes taken by Rivera and the resident teacher, collected student materials (homework, worksheets, etc.), and notes from discussions he had with the resident teacher directly following each classroom session. He described his results through ethnographic narrative.


Mathematical Knowledge

Kolovou et al. (2013) did not explicitly define mathematical knowledge, but conjectures can be made based on the words the authors used to describe their goals for students and the evidence they gathered. When discussing their research purpose and questions, Kolovou et al. used words such as informal algebraic reasoning, thinking, and sense-making. The authors used “complex competence” to characterize algebraic reasoning (p. 512).

The examples chosen by Kolovou et al. to describe functional relationships in early algebra give evidence of what a student who understood functional relationships would be able to do. For example, students would describe how quantities correspond (Blanton & Kaput, 2004), discern and generalize numerical patterns (Blanton & Kaput, 2005; Beatty, 2010), and represent functional relationships in more than one way (Beatty, 2010). Finally, the researchers measured students’ algebraic knowledge of functional relationships through their answers to six word problems that involved covariation of quantities.

Unfortunately, although the researchers shared the questions, they did not share how the questions were graded, except that answers were “coded according to the correctness of the answer” (p. 529), which might mean all-or-nothing grading or that partial credit was given. Often a lack of evidence is itself evidence, and I feel confident in conjecturing that the authors believed a correct answer was evidence of algebraic knowledge while an incorrect answer was evidence of lack of knowledge. This conjecture is supported by the researchers’ acknowledgement of statistical differences in score changes across the individual test items. The researchers acknowledged the differences and concluded that, even though the treatment may have affected the items differently, “it is often very difficult to understand why individual items function differently from others” (p. 529). This response implies that the items had been constructed to test the same type of knowledge and that the researchers did not look more deeply at differences in student responses across items.

Based on this evidence, even though the researchers used reasoning, thinking, and sense-making to describe their goals for the students, I would argue that Kolovou et al. (2013) treated knowledge as evidenced only by correct answers rather than by correct thinking or reasoning.

Kolovou et al.’s (2013) description of knowledge does not seem congruent with their measurement of knowledge. Based on the measurement, I see some similarity between the researchers’ view of knowledge and Thorndike’s (1922) description of rote knowledge, in which an answer is correct because others generally agree it is correct. I would argue, however, that the researchers’ description of knowledge more closely resembles Thorndike’s proposed knowledge that is verified inductively, because Kolovou et al. described algebraic knowledge as reasoning and thinking, and also used as an intervention a computer game in which students interacted with concrete covarying quantities.

Thorndike’s (1922) and Wertheimer’s (1945) descriptions of mathematical knowledge are quite different, but not mutually exclusive. Wertheimer proposed that mathematical knowledge involves understanding relationships, especially the structure that forces a particular interaction. Kolovou et al.’s view of knowledge may resemble this view as well. My evidence is the structure of the computer game: the researchers created it so that learners could adjust carefully chosen settings to change the relationships between covarying quantities, with particular invariants. They argued that students would acquire the knowledge to reason informally about covarying quantities through interaction with this game, which allowed students to explore the rules of this type of relationship.

Rivera (2007) explicitly described knowledge as instrumental schemes, quoting Piaget’s (1970) definition of a scheme as “whatever is repeatable and generalizable in an action” (p. 42), but adding instrumental because Rivera argued that schemes are not independent of symbols and tools. In some ways schemes act similarly to Thorndike’s (1922) bonds, insofar as schemes can connect to become more complex structures, as bonds can. Setting aside their origins and creation (i.e., learning), which is discussed in the next section, both bonds and schemes can be seen to some extent as molecules of knowledge. Both can change depending on the consequences of the response or scheme, but both are difficult to change once they have been formed. A large difference, I believe, is that a bond refers to the link between a situational stimulus and the resulting response, whereas a scheme is closer to the response itself; a scheme is not tied to a stimulus as tightly as a response that is bonded to a stimulus. Looking past the structure of knowledge, I would say that Rivera, like Thorndike, Wertheimer (1945), and Brownell (1945), would argue that knowledge should be more than rote learning. Like Brownell and Wertheimer, Rivera argued for students to understand the complex structure behind the mathematics in order to make and test conjectures and then make generalizations in developing their solution strategies for polynomial inequalities.

Student Learning

As described above, Rivera (2007) explicitly defined his theory of learning as constructivist, based on the individual, the social environment, and the mediation of tools. He described the process in more detail through the phases of instrumentalization and instrumentation.

Kolovou et al. (2013) did not explicitly identify a particular theory of learning, but they argued that students’ interaction with a computer game that supported exploration of covariational relationships, along with class discussions, would result in learning informal reasoning about those relationships. This view is not entirely dissimilar from Rivera’s (2007), in which students’ interaction with a graphing calculator and mathematical tasks that supported exploration of polynomial inequalities, along with class discussions, would result in formal reasoning and solution strategies for those polynomial inequalities. I find it interesting that the authors of both articles decided that class discussion about strategies and tasks was an important part of the learning.

Although Kolovou et al. (2013) did not explicitly identify with a particular theory of learning, the method by which the researchers expected students to learn provides some evidence. One rationale Kolovou et al. had for using a computer game was that it would provide an active and engaging learning environment with continuous and immediate feedback. The researchers tracked time spent in the online game and compared it with the change in test item scores. Because of the desire for an engaging environment with feedback, and because the researchers tracked the amount of practice that students had, I argue that their theory of learning closely resembles Thorndike’s (1922): students would receive immediate feedback telling them whether they were responding correctly or incorrectly, and students who practiced longer would have stronger S-R bonds to support their reasoning on the test.

Rivera’s (2007) theory of learning is quite different from the perspectives of Thorndike (1922), Brownell (1945), Wertheimer (1945), Greeno (1987), or Briars and Larkin (1984). I saw some similarities, however, between Rivera’s and Thorndike’s theories of learning. Practice in the form of drill was not part of Rivera’s research, but Thorndike also recommended practice in the form of work on problems that would make students think. Also, I would argue that part of the social component of Rivera’s theory of learning could be seen as providing students frequent (if not immediate) feedback. A pair of students working together would provide natural feedback in their discussions, and they would also receive feedback daily by presenting work in the whole-class discussion. Reflection was an important part of Rivera’s process, whether individual, in pairs, or as a class, and Rivera stated that it was included as a necessary part of students’ learning and development from the instrumentalization phase to the instrumentation phase. Rivera intended this reflection to be a continuous form of feedback to help students develop their schemes, noting especially that “individuals affirmed their internalized instrumental actions twice before they fully owned them” (p. 298). In some ways, this affirmation seems similar to the need in Thorndike’s theory for continual feedback to strengthen S-R bonds, except that Rivera is speaking of schemes. On the other hand, there is also a qualitative difference between Rivera’s and Thorndike’s reasons for why feedback is necessary: the socially mediated construction of knowledge on the one hand and the strengthening of S-R bonds on the other are different explanations of why feedback is necessary.



Comparison: Technology

The researchers’ stances toward technology are both similar and different. Based on descriptions of technology in Kolovou et al. (2013), the researchers argued, at least implicitly, that the type of technology use depends on the type of technology. For example, they quoted Li and Ma (2010), differentiating between types of technology that could as easily be types of technology use: e.g., exploratory environments and tools. Rivera (2007), on the other hand, clearly described a calculator as being used as a tool or exploratory environment. That is, for Kolovou et al. the type of technology determines its use and possible impact, while for Rivera the way technology is used and supported determines its possible impact.

Both Kolovou et al. (2013) and Rivera (2007) used technology in similar ways, providing tasks to shape students’ exploration of the mathematics with the technology and then following up on that exploration with a class discussion. Kolovou et al. did not provide many details about the goals or structure of the class discussions, only that they lasted 15 minutes. Rivera did not provide the length of discussions, but described them as opportunities for students to share work and strategies and to discuss advantages and disadvantages of their methods. Rivera shared in depth his thinking about how to use technology and how to support students’ mathematical development as they learned to use technology in support of their tasks; this thoughtful dimension is missing from Kolovou et al.’s written document. The lack of description does not mean Kolovou et al. did not think deeply about these issues, just that they were not the focus of the paper. In fact, Kolovou et al. mentioned briefly their development of a particular type of game for students to use and described in detail the features of the game they believed would be important.

Rivera’s (2007) focus on a learning theory gave him a focusing device to think about these deeper issues that Kolovou et al. (2013) may not have had available. That is, because of Rivera’s focus on instrumental schemes, he thought deeply about what student learning would look like through this lens and how that learning could be supported through choice of tasks, use of technology, and structure of social interaction.

Despite the difference in focus, I believe that both Rivera’s (2007) and Kolovou et al.’s (2013) stances toward learning most resemble Thorndike’s (1922). The reasons that Rivera gives for providing problems, practice, and feedback are different from Thorndike’s; they view learning and knowledge through different lenses. I find that interesting in a nice way: it makes sense to me that different frameworks for learning could result in similar teaching styles. On the other hand, because the reasons for particular choices are different, the similarity does not hold at deeper levels. Thorndike’s, Kolovou et al.’s, and Rivera’s feedback are all quite different in nature, and similarly Thorndike’s authentic problems, Kolovou et al.’s mathematical tasks, and Rivera’s mathematical tasks are all designed differently. One important way the differences show up is in the researchers’ chosen evidence of learning. Thorndike and Kolovou et al. most closely resemble each other in that both regard a correct answer, when the student previously answered incorrectly, as an indication of learning. Rivera would not agree; rather, he looked to students’ solution strategies and interaction with the TI-89 as evidence of student learning. The growing abilities of students to use the TI-89 effectively, to communicate mathematically, and to make conjectures and generalizations about polynomial inequalities were evidence of student learning to Rivera.


Attewell, P., & Gates, L. (2001). Comment: The first and second digital divides. Sociology of Education, 74(3), 252–259.

Beatty, R. (2011). Pattern rules, patterns, and graphs: Analyzing Grade 6 students’ learning of linear functions through the processes of webbing, situated abstractions, and convergent conceptual change (Doctoral dissertation).

Blanton, M., & Kaput, J. (2004). Elementary grades students’ capacity for functional thinking. In Proceedings of the 28th Conference of the International Group for the Psychology of Mathematics Education (Vol. 2, pp. 135-142).

Blanton, M. L., & Kaput, J. J. (2005). Characterizing a classroom practice that promotes algebraic reasoning. Journal for Research in Mathematics Education, 36(5), 412–446.

Briars, D. J., & Larkin, J. H. (1984). An integrated model of skill in solving elementary word problems. Cognition and Instruction, 1, 245–296.

Brownell, W. A. (1945). When is arithmetic meaningful? The Journal of Educational Research, 38, 481–498.

Greeno, J. G. (1987). Instructional representations based on research about understanding. In A. H. Schoenfeld (Ed.), Cognitive science and mathematics education (pp. 61–88). Hillsdale, NJ: Erlbaum.

Kolovou, A., van den Heuvel-Panhuizen, M., & Köller, O. (2013). An intervention including an online game to improve Grade 6 students’ performance in early algebra. Journal for Research in Mathematics Education, 44(3), 510–549.

Piaget, J. (1970). Genetic epistemology. New York: W.W. Norton.

Reich, J., Murnane, R., & Willett, J. (2012). The state of wiki usage in U.S. K-12 schools: Leveraging Web 2.0 data warehouses to assess quality and equity in online learning environments. Educational Researcher, 41(1), 7–15. doi:10.3102/0013189X11427083

Rivera, F. D. (2007). Accounting for students’ schemes in the development of a graphical process for solving polynomial inequalities in instrumented activity. Educational Studies in Mathematics, 65(3), 281-307.

Thorndike, E. L. (1922). The psychology of arithmetic. New York: Macmillan.

Wertheimer, M. (1945). Productive thinking. New York: Harper.

Supporting spatial measurement tasks with technology

Session presented at the Michigan Council of Teachers of Mathematics annual conference in Traverse City, MI.


In this hands-on session, we introduced several sample tasks that used computer applets to strengthen understanding of length and area concepts. Many students struggle with length and area when using only pen and paper or physical manipulatives. In this session, we provided materials to support their learning, available for use during and after the session.

We asked participants to use their devices to work through one sample lesson (Area of Rectangular Regions), including Launch / Explore / Summarize components. We gave participants two sample lessons that they could use in their classrooms, along with applets designed and created by the Strengthening Tomorrow’s Education in Measurement project.

Stehr, E. M., Gönülateş, F., & Siebers, K. (2013, August). Supporting spatial measurement tasks with technology. Paper presented at the Michigan Council of Teachers of Mathematics Conference, Traverse City, MI.

Virtual Manipulatives and Dynamic Representations in Length and Area Measurement

Session presented at the Math in Action annual conference at Grand Valley State University in Allendale, MI. We explored measurement concepts using virtual manipulatives and dynamic simulations to encourage reasoning and sense-making that can support robust understanding of measurement.


We created three tasks for participants to explore during the session, each considering area: one as a paper and pencil task, one using physical manipulatives, and the third using an online tool from NCTM Illuminations.


Pencil and Paper Task – We asked participants to determine the area of parallelograms and to mark those with the same area.

Physical Manipulatives Task – We asked participants to create a parallelogram with the same base and height as a given rectangle. We asked: How do the areas of the original shape and your new shape compare? Explain.

Virtual Manipulatives Task – We sent participants to the NCTM Illuminations page and asked them to find as many parallelograms as they could, all with an area of 88 square units. We asked them to discuss patterns they saw.

We ended by asking: What could the learning goal for these three tasks be? How do the three tasks support learning in different ways? How does each block learning in different ways?

Possible benefits of simulations:

Stehr, E. M., & Siebers, K. (2013, February). Fractions as lengths. Paper presented at the Math in Action Conference, Grand Valley State University, Allendale, MI.

In the beginning, it is always dark. (Neverending Story)

I taught mathematics at the university level for several years and spent quite a bit of time learning neat stuff that I could use in my classes.  Some of it failed spectacularly but some was useful.  It can be overwhelming to find the right tool to do the thing that needs to be done – something that is easy to use, easy to learn, and that actually supports the students’ learning in a different way.  That is, something that provides affordances that aren’t granted by paper, physical objects or discussion.

It is difficult to find the time to be really thoughtful about choice of technology and its implementation without neglecting the day-to-day work that teaching requires.  When I taught, I spent too much time learning how to use technologies that I then only used once or that didn’t support learning in the way I had expected.

When I taught, I created PowerPoints that the students could print out and write on.  I printed the notes to Windows Journal (and later OneNote) and then used them in class to write on and work out examples. I used classroom response systems: first eInstruction Crickets, and then students’ own cell phones with PollEverywhere. I used interactive website and presentation creators, such as SoftChalk and Articulate Engage, along with screen capture software, such as Camtasia and Jing, to create websites for my students and to provide them with online activities and videos as resources and study aids.

I used mathematics software such as Mathematica (and Wolfram Demonstrations Project and TI-84 SmartView). I created online homework using Desire2Learn (D2L) quizzing tools, videos of homework solutions, dynamic interactions with Wolfram Demonstrations, animations with TI-84 SmartView, and even made cardstock manipulatives that students could cut out and use at home.

Looking back, I realize that my efforts often supported development of procedural fluency without also attending to conceptual understanding. My view of technology is that it is a tool that should be used thoughtfully, to support learning that may be difficult (or impossible) without that particular tool.