Working Group Session # 9

Larry Feldman, Indiana University of Pennsylvania (IUP), lmfeldmn@grove.iup.edu
One of the concepts I was very happy to see in the Mathematical Education of Teachers document is one that will require a major change in K-12 statistics teaching. In the section on the preparation of elementary teachers, the document describes statistics as a three-step paradigm: producing data, analyzing the data, and interpreting the results. Similar statements are made for the middle school and high school levels.
The first part (data production) is severely lacking in K-16 classrooms and in teacher preparation. An analogy is to compare statistics to a traditional murder mystery. In the first part, the mystery is set up: we learn about the setting and why many characters dislike the person who is going to be killed. The second part has the murder and much of the action. The final section has the unraveling of the clues that leads to the murderer being found.
A statistics study follows a similar model. In the first part, we set up the problem and the “characters”. We create the interest in solving a puzzle. This involves developing a question, developing methods for collecting data, and refining those methods. In the second part, the “action” takes place: we go out and collect the data, display the data, and compute statistical measures. In the last part, we “solve the mystery” and answer the original question, interpret the results, describe problems and errors with the study, and present the findings.
Most textbooks skip the first section and do a very poor job with the third part. They show the “action” of the middle part (graphing and computing measures) without the “character development” of a real problem. “Solving the mystery” (interpretation) is not taken seriously. It is difficult to spend a great deal of time interpreting data when there is no realistic context. Just as we would never ask students to only read the middle part of a novel, we should not ask students to only learn the middle part of statistics.
Last semester, I taught a course in statistics for elementary and middle school teachers. Most of the students were undergraduate elementary education majors who were concentrating in mathematics. The rest were graduate students in elementary and middle school mathematics education. These students put together the best set of statistics studies I have ever seen from this course. Their studies included a comparison of sorority vs. fraternity advertisements in the school newspaper; the number of times college students and sixth graders could skip rope in 30 seconds; a comparison of the time it took to solve two different types of jigsaw puzzles; and four others. None of the studies were ready for publication in a professional journal, but all of them at least made a serious attempt at going through the full story.
Unfortunately, today we cannot count on most textbooks to help us much with this three-step process. I am always amazed at how much time and thought it takes to do the first step, but it is time well spent. Participants in SEQuaL workshops do a statistical study with a group of their peers. Each participant is then required to report the following spring on activities they have done with their students. Many of them have had their K-12 students perform statistical studies and/or had students enter the K-12 Pennsylvania Statistics Poster Competition (www.villanova.edu/PA_posters/). (As a side note, it is not always necessary to have students do a full-blown statistical study to get at the first part of the story.)
Where is the time for all this? Isn’t this just one more add-on to an already cluttered curriculum? There is a great deal of other mathematics that students can learn by doing a statistical study. The Data-Driven or Real Data approach can help to integrate traditional mathematics concepts with the use of data collected by students. I am including three examples of the “real data” / data-driven approach (a first grade study on pets, discovering pi, and spaghetti triangles). Statistical studies can be used to supplement and/or replace units on number operations (whole numbers, fractions, percents, ratios, etc.), on geometry, on algebra (equations of lines and other functions, etc.), and on virtually all other K-12 mathematics topics. Our Data-Driven / Real Data workshops are planned (subject to funding approval) for Edinboro, Manheim Township (near Lancaster), and IUP. Participants will be given many ideas for integrating real data into the teaching of the mathematics topics they have always taught. As another reason for taking the time for statistical studies: today’s students will see hundreds of statistical studies in their lifetimes, and it will be difficult for them to really understand what these are about without having actually done one themselves.
Another response to the time issue is a comic strip I remember from when I was young. Bazooka Joe was pushing his bicycle to school. When asked what he was doing, he said that he was so late for school that he didn’t have time to get on his bicycle. The analogy to teaching is that sometimes we are in such a hurry to get things done quickly that we don’t feel we have the time for our students to “get on the bicycle”. Seeing numbers used in a big-picture problem like a statistical study may help some K-12 students “get on the bike.”
I would be glad to share with any readers any of the following activities. Some I have developed and some are adapted from SEQuaL instructors and other sources. Please contact me if you would like copies of handouts.
I believe that activities should be in alignment with the American Statistical Association’s Principles For Teaching Statistics, which are as follows.
“The focus in teaching statistics should be to foster student belief about the positive use of statistics and probability in making choices and decisions. In particular:
 Activities for students should be active, not passive, asking questions about something in the students' environment and finding quantitative ways to answer the question.
 The emphasis in all work with statistics should be on the analysis and the communication of this analysis rather than a focus on a single correct answer.
 Different approaches and solutions for a problem should be discussed and evaluated with opportunities provided for student reflection.
 Real data and handson experience in working with data should be used whenever possible.
 Exploration and experimentation with simple counting and graphing techniques should precede formal algorithms and formulas.
 Good examples of probability paradoxes and statistics should be used to build intuition rather than to deceive.
 Student projects should be an integral part of any work in statistics.
 Statistics should be a vehicle to make connections within mathematics and to form interdisciplinary links for students.
 Technology should be used to facilitate analysis and interpretations.
 A variety of approaches should be used for student assessment: reports, projects, journals, and student-generated tests, as well as traditional methods.”
From the American Statistical Association’s Guidelines for the Teaching of Statistics K-12
One of Indiana University of Pennsylvania (IUP)’s largest teacher education grants has been housed in the Mathematics Department since 1992, with funding continuing at least through 2002. The Statistics Education through Quantitative Literacy (SEQuaL) project has provided professional development for over 800 K-12 Pennsylvania teachers through workshops, conferences, and the operation of the Center for Statistics Education in PA (CSEPA). Through the years, the program has grown both in size and scope. K-12 workshops are now offered at regional sites throughout Pennsylvania. Two additional workshops, the Multi-Disciplinary SEQuaL and the Data-Driven Approach to Teaching Middle School and Secondary Math, have built upon the original workshop to link statistics with other subject areas and with algebra and geometry, respectively. A workshop for 11th and 12th grade teachers (AP or non-AP) began in 2001. A new “real data” workshop is proposed to expand the Data-Driven approach to grades K-10.
The idea for SEQuaL was born in the minds of a group of IUP professors in 1990, led by Jack Shepler. Statistics and probability were given greatly increased emphasis in the 1989 NCTM mathematics standards for grades K-12. At that time, the American Statistical Association offered to assist universities in presenting a Quantitative Literacy (QL) workshop for secondary mathematics teachers, one that had been developed through previous NSF grants in conjunction with NCTM. Larry Feldman and Ann Massey, with John Uccellini, created an elementary grades workshop, since no national model had been implemented at that time. Funded by an Eisenhower Professional Development Grant from the Pennsylvania Department of Education (PDE), the first K-12 SEQuaL workshop in Pennsylvania, and one of the first in the country, was launched in the summer of 1992 at IUP as a joint effort with the American Statistical Association.
By 1995, word of the program had spread throughout the state. With the encouragement of Linda Benedetto, Higher Education Coordinator of the Eisenhower program at PDE, the K-12 SEQuaL program was offered at four locations across Pennsylvania. It continues to be offered at varying regional sites across the state.
The K-12 SEQuaL program has proven to be an exciting, standards-based approach to teaching statistical techniques in the K-12 classroom. Through stimulating and practical activities, participants in the concurrent K-6 and 7-12 workshops explore real data and focus on classifying, graphing, sampling, probability and simulation, with the help of ASA statisticians. They experience first-hand the value of QL and gain confidence in their ability to incorporate it into their curriculum.
In 1996, teams of teachers representing various subject areas developed cross-curriculum statistics projects for their schools at the first Multi-Disciplinary SEQuaL Workshop offered at IUP. Each team was composed of at least one mathematics teacher who had previously taken a SEQuaL workshop and at least one teacher from the same school who taught a subject other than mathematics. The mathematics teachers were taught inferential statistics (led by Fred Morgan and Jack Shepler), while the non-mathematics teachers were taught content appropriate for teachers who will not teach statistics but who will be working with data collection and analysis (led by Larry Feldman and Ann Massey). Each team of teachers created and reported on a multidisciplinary QL unit done with their students.
The Data-Driven Approach to the Teaching of Middle School and Secondary Math began in 1999 and continues today. It enables teachers to replace lessons from the standard pre-algebra, algebra, and geometry curriculum with lessons that use data collection and analysis to teach the same content. Data-Driven Mathematics, a series developed by the ASA with an NSF grant, was the main text for the first year of this workshop. Pat Hopfensperger and Henry Kranendonk, two of the authors of the Data-Driven Mathematics series, were part of the workshop team for 1999 and 2000.
The next grant proposal was submitted in 2001 by the Center for Statistics Education in Pennsylvania (CSEPA), which has been the coordinating body for these workshops. It proposes to create “real data” workshops, which will expand the Data-Driven concept into the elementary grades for the first time.
SEQuaL has been at the forefront of many innovations in the teaching of probability and statistics since its beginnings in 1990. Significant milestones in the growth of SEQuaL are:
1992: Created the first K-6 quantitative literacy workshop; a K-6 quantitative literacy workshop and a 7-12 QL workshop offered concurrently at IUP; a model for sustained professional development consisting of a pre-session in the spring, a week-long session in the summer, a post-session in the fall, and a final session in the following spring.
1993: An extensive mentoring process began whereby K-12 teachers move from participants, to teaching associates, to SEQuaL faculty; Mathematics Academic Alliance in Quantitative Literacy meetings, a joint effort with the local educational service agency; integration of multiple QL assessment techniques for K-12; Quantitative Literature newsletter issued 3 times per year.
1995-2001: K-6 and 7-12 SEQuaL workshops offered across Pennsylvania.
1995: Quantitative Literacy in Future Years (QuaLIFY) conference.
1996-1999: Multi-Disciplinary QL Workshop for teams of teachers (grades 6-12) at IUP; SEQuaL Facilitator's Guide developed.
1998: Sequel to SEQuaL Conference to assist teachers in leading their own QL workshops.
1999-2002: A Data-Driven Approach to the Teaching of Middle School and Secondary Math workshops across Pennsylvania.
2001-2002: 11th and 12th grade statistics workshops for both AP courses and 11th grade statistics courses.
2002?: "Real Data", expanding the Data-Driven approach to grades K-10.
2002: An elementary grades "real data" conference.
The Center for Statistics Education in PA at IUP coordinates the SEQuaL workshops, produces the newsletter Quantitative Literature, houses an extensive library of teacher-made lesson plans, and maintains a web site (www.ma.iup.edu/projects/SEQual/index.html). Please contact the Center if you would like to be put on the mailing list for the newsletter.
Data-Driven Mathematics (published by Pearson Learning Co.): The Data-Driven approach to teaching mathematics uses statistical techniques to support the teaching of traditional middle grades and secondary mathematics topics, such as pre-algebra, algebra, geometry, and advanced mathematics. This series of 11 books has replacement units for parts of grades 6-12 mathematics. They are an outstanding resource. An example adapted from the Data-Driven books is the Age Guessing Activity. Students guess the ages of famous people; they then plot each actual age on the x-axis and the corresponding guessed age on the y-axis. The points are then compared to the line y = x and to the regions y > x and y < x. Other extensions are included. The traditional topic of the line y = x is taught using real data.
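The comparison to y = x can be made concrete with a few lines of code. This is an illustrative sketch with made-up (actual, guessed) pairs, not data from the activity itself: points above the line are overestimates and points below are underestimates.

```python
# Hypothetical (actual age, guessed age) pairs for the Age Guessing Activity.
pairs = [(55, 52), (38, 40), (61, 61), (30, 35), (50, 48)]

# With actual age on the x-axis and the guess on the y-axis, a point lies
# above the line y = x exactly when the guess is too high.
over = sum(1 for actual, guess in pairs if guess > actual)
under = sum(1 for actual, guess in pairs if guess < actual)
exact = sum(1 for actual, guess in pairs if guess == actual)
print(over, under, exact)  # 2 overestimates, 2 underestimates, 1 exact
```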
K-5 example of Data-Driven, developed at IUP
Teachers can learn how to motivate students to collect data about a topic that is of interest to them, such as pets. First graders would get a blue Unifix cube for each dog, a red one for each cat, and a yellow one for other pets. (Unifix cubes are small plastic cubes that join together.) One first grade class might have a total of 17 blue cubes for dogs, 15 red cubes for cats, and 9 yellow cubes for other pets. The children interpret these Unifix cube graphs: which is most, which is least, that there are almost twice as many dogs as other pets, and so on. The children then make a stack of 10 blues and another stack of 7 blues to reinforce the place value concept that 17 means one group of 10 plus 7 more. They make a stack of 10 reds and another of 5 reds for the cats, along with a stack of 9 yellows. They then combine the stacks for dogs, cats, and other pets and create as many stacks of 10 as possible. When they have 4 stacks of 10 and 1 more, they learn that this is 41. This can be related to the standard symbolic addition process of 17 + 15 + 9 = 41.
A follow-up activity is to distribute these 41 cubes equally to the students in the class; suppose there are 18 students. Each child gets 2 cubes, with 5 cubes left over. The students learn several topics that are challenging for first graders, such as place value; representing numbers in written, oral, and physical form; creating and interpreting graphs; two-digit addition; the concept of fair-share division; and the concept of the mean. All of these concepts are taught in a manner that is fun, meaningful, and challenging to first graders, and that is in alignment with the Pennsylvania Academic Standards for Mathematics.
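The arithmetic behind the cube activities can be summarized in a short sketch (the counts are those from the hypothetical class above):

```python
# Cube counts from the hypothetical first grade pet survey.
dogs, cats, other = 17, 15, 9
total = dogs + cats + other                # two-digit addition: 17 + 15 + 9
students = 18

# Fair-share division: each child gets the same number of cubes,
# with some left over -- the physical version of quotient and remainder.
each, left_over = divmod(total, students)
print(total, each, left_over)              # 41 cubes: 2 each, 5 left over

# The mean is the idealized fair share, allowing fractional cubes.
mean = total / students
```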
Middle Grades example of Data-Driven, developed at IUP
Mathematics teachers are very familiar with linear equations such as y = 3x + 70. However, many of them are not familiar with how to use real-life data to drive the teaching of such an equation. One activity that SEQuaL staff have successfully taught is to have an entire class measure their heights and the lengths of their forearms in centimeters. Each person's data is plotted on a scatter plot, with the x-value being the forearm length and the y-value being the height. Every student uses a relatively simple technique called the “median fit line” to find an equation of a line that summarizes this data. Technology is used to compute the least squares line later in the developmental sequence, but the median fit line gives an intuitive, hands-on introduction. Anthropologists actually use equations very close to y = 3x + 70 for this relationship, and students discover a very similar equation using the real data collected from the class. Students are able to answer questions such as, "If an anthropologist finds a forearm that is 34 cm long, approximately how tall was the person, according to our line?” When measurement is done in inches, the line is approximately y = 3x + 28, and the mathematical concept of parallel lines is developed by comparing these two lines. Thus, teachers are learning techniques for collecting data, for creating lines from scatter plots, and for using technology, and they are relating these techniques to the standard curriculum.
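As a sketch of the idea, one common version of the median fit (median-median) line splits the data into three groups by x-value, takes its slope from the medians of the two outer groups, and averages the intercepts suggested by all three groups. The helper below is illustrative, with made-up class data, not SEQuaL's own materials:

```python
import statistics

def median_point(pairs):
    # The point whose coordinates are the medians of the x's and the y's.
    return (statistics.median(x for x, _ in pairs),
            statistics.median(y for _, y in pairs))

def median_fit_line(pairs):
    # Sort by x and split into three roughly equal groups.
    data = sorted(pairs)
    n = len(data)
    third = n // 3
    left, middle, right = data[:third], data[third:n - third], data[-third:]
    (x1, y1), (x3, y3) = median_point(left), median_point(right)
    slope = (y3 - y1) / (x3 - x1)
    # Average the intercepts implied by the three group median points.
    intercepts = [my - slope * mx
                  for mx, my in map(median_point, (left, middle, right))]
    return slope, sum(intercepts) / 3

# Hypothetical (forearm cm, height cm) data roughly following y = 3x + 70.
arms = [(22, 137), (24, 141), (25, 146), (26, 148), (27, 152),
        (28, 153), (29, 158), (30, 159), (31, 163)]
slope, intercept = median_fit_line(arms)
print(round(slope, 2), round(intercept, 1))  # close to 3 and 70
```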
Secondary teachers and/or 11th/12th grade workshop example, developed at IUP
This example involves using spaghetti to discover the Triangle Inequality Theorem, which states that the sum of the lengths of any two sides of a triangle must be greater than the length of the third side. While geometry teachers are very familiar with this theorem, many are not aware of a hands-on activity to illustrate it. In this activity, students are given pieces of uncooked spaghetti and asked to guess whether a triangle can be formed when a piece of spaghetti is randomly cut into three pieces. For example, if there are two tiny pieces and one large one, it is impossible to make a triangle. On the other hand, if all three pieces are approximately the same size, a triangle can be formed. After a great deal of data collection, first using students’ haphazardly created cuts and then using random numbers, students can develop the theoretical probability, 1/4, of getting a triangle by this procedure. This important geometry theorem can be discovered by students using real data, integrating analytic and coordinate geometry and algebra.
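The 1/4 probability can be checked with a quick simulation, in the spirit of the random-numbers stage of the activity. The sketch below breaks a unit "stick" at two uniformly random points and applies the Triangle Inequality:

```python
import random

def forms_triangle(a, b, c):
    # Triangle Inequality: the sum of any two sides must exceed the third.
    return a + b > c and a + c > b and b + c > a

def triangle_probability(trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Cut a unit piece of spaghetti at two uniformly random points.
        x, y = sorted((rng.random(), rng.random()))
        if forms_triangle(x, y - x, 1 - y):
            hits += 1
    return hits / trials

print(triangle_probability())  # close to the theoretical 0.25
```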
Readers who would like more information about the Center for Statistics Education in PA may contact Larry Feldman, Project Director, at the following address. We are especially interested in colleagues who may be interested in collaborating with us on workshops in other states.
Center for Statistics Education in PA
Stright Hall, Room 211
210 South Tenth Street
Indiana, PA 15705-1087
724-357-6239
Email: lmfeldmn@iup.edu
Mick Norton, College of Charleston, nortonr@cofc.edu
SMFT 511 Introduction to Probability and Statistics is a three-hour course offered in the M. Ed. Program in Science and Mathematics at the University of Charleston, SC, the graduate school arm of the College of Charleston. The prefix SMFT stands for Science and Mathematics for Teachers, and the course is an elective in the multidisciplinary M. Ed. program. All SMFT courses in the program share a characterizing combination of features and style that are the result of years of discussion and collaboration between science, mathematics, and education faculty. I feel obliged here to describe a truly remarkable collaborative effort.
Specifically, the program is the cooperative effort of two schools and seven departments:
School of Education
School of Sciences and Mathematics
A former president of the College of Charleston, Dr. Harry Lightsey, charged the deans of the two schools with developing a master's degree program in science and mathematics for teachers. Beginning in the fall of 1992, a committee with representatives of the seven departments began meeting. The goal was to design from scratch a multidisciplinary program all departments could share in and be proud of. It took five years of spirited discussion, debate, patience, and listening to work out the details of the degree proposal. I was privileged to be involved in chairing this group of great educators, fine people all, who had visions of helping K-12 teachers improve the teaching of science and mathematics. For the record, I can name names. Faculty who invested much time and effort on the committee include Hope Florence, Hugh Haynsworth, Betsy Martin, Marty Nabors, Bob Nusbaum, Sara Powell, Sue Prazak, Denice Smith, Meta Van Sickle, Fred Watts, and Jeff Wragg. Supportive deans were Gordon Jones and Nancy Sorenson.
The M. Ed. program that resulted began offering courses in the fall of 1998. Students in the program are required to take a mix of science, mathematics and education courses. Most of these are courses that were designed to mix science or math content with pedagogy – the SMFT courses. All SMFT courses share a characterizing combination of features:
Content is aimed at a particular teacher constituency, e.g., probability and statistics for elementary and middle school teachers
Courses are taught by faculty in the discipline, faculty who want to work with teachers
Courses involve appropriate software/equipment
Courses employ a hands-on philosophy
A K-12 master teacher is involved in course development
Courses are consistent with state and professional body standards (e.g., NCTM)
Courses involve grade level activities
Courses involve beyond-grade-level content necessary for the teacher to see beyond the classroom
While most of these features are self-explanatory, several merit additional comment. The role of the K-12 master teacher is to show the faculty member who designs and teaches an SMFT course the grade level textbooks, curriculum, and materials used in the classroom, and to discuss classroom activities. Master teachers are valuable resources because of their knowledge of the curriculum and success in the classroom. Their insights and guidance have helped scientists and mathematicians design SMFT courses that are on target in terms of both content and pedagogy.
Another point of comment relates to a mechanism used to help ensure that a course involves grade level teaching techniques and activities as well as subject area content. If the mathematician or scientist who is to teach the course does not have a history of working in the schools, he or she is expected to sit in on some teaching methods classes. The School of Education offers both math methods and science methods courses. While some scientists and mathematicians were apprehensive about this idea before the visits, afterward they indicated that the time spent was helpful in designing the course.
There also have been opportunities for faculty team teaching and for additional involvement of the master teachers. A science educator from the School of Education has team-taught an SMFT course with a scientist on several occasions, and there are good examples of team teaching between a chemist and a physicist. Additionally, some of the master teachers have become involved in course instruction. For more information about the M. Ed. Program in Science and Mathematics and its other courses, see www.cofc.edu/~medsm/.
First I must give thanks to Judy Blitch, the math specialist at Buist Academy, a local K-8 school. Judy was a great resource. Out of our meetings came a number of things, not the least of which was my becoming thoroughly acquainted with the book that helped set the tone for the course, Middle Grades Mathematics – An Interactive Approach (Prentice Hall, 1998), the book used in our middle schools. Its probability and statistics content reflects South Carolina standards and the NCTM Curriculum and Evaluation Standards. While the SMFT course has activities geared for elementary school teachers as well as middle school teachers, one of the philosophies of the program is that teachers be able to understand the subject beyond grade level. And so this book, peppered with probability and statistics throughout, helped me decide how the course should unfold.
As I examined activities, exercises, and explanations in that book, a pattern emerged all too often. The book’s topic of the day would be appropriate and consistent with standards. However, as a mathematician and statistician, I would think, “That’s not the question that should be asked; this is,” or “That isn’t the way to come at the concept; this is,” or “Here are some supplementary questions or issues that get at the kind of statistical thinking that should be prompted by this topic or these data.” When one of these topics related to our topic of the day, we would examine what the book did and then look beyond the book.
The book topics also gave guidance toward answering the question, “What do teachers need to know about probability and statistics, beyond grade level, that will help them teach grade level content?” This will be discussed in the part of this article that outlines the basic syllabus for the course.
Finally, while I was comfortable that the course would be consistent with the various standards for probability and statistics, there was the issue of how to give teachers experience in making judgments under conditions of uncertainty (see the recommendation on p. 23 of The Mathematical Education of Teachers). The typical first undergraduate course in statistics does this with hypothesis testing and confidence intervals. Particularly for the elementary teachers in the course, developing the machinery for t-tests, say, would be an inappropriate use of course time. I was looking for an efficient way to bring to the course real-world examples of statistical decision making and its uncertainties.
As there seemed to be no single book that could serve as the book for the course, multiple materials were used. Required book purchases were:
Elementary Statistics, 7th edition, Mario F. Triola, Addison Wesley, 1997
Chapter 1 of A Quick Course in Statistical Process Control, draft version, Mick Norton, Prentice Hall.
Additionally, many classes were devoted to articles or problems from:
Mathematics Teaching in the Middle School, (NCTM)
Arithmetic Teacher, (NCTM)
Data-Driven Mathematics, Probability Through Data, P. Hopfensperger, H. Kranendonk, R. Scheaffer, Dale Seymour Publications, 1999
Data-Driven Mathematics, Mathematics in a World of Data, J. Burrill, et al., Dale Seymour Publications, 1999
It is to be emphasized that only portions of the text by Triola were used, none of them dealing with inference. This text was chosen to be a source of homework problems for selected standard topics in descriptive statistics and probability, to provide some attention-grabber examples and exercises, and because it is readable.
The first chapter of A Quick Course in Statistical Process Control, which the students purchased from Kinko’s Copies, was my solution to bringing to the course real-world examples of statistical decision making and its uncertainties. The chapter introduces descriptive statistics through examples from industry and manufacturing, presents the view that variation has common cause and assignable cause sources, and introduces the Empirical Rule and the Central Limit Theorem, the first two big-hit statistical tools introduced in the course. The Empirical Rule and the Central Limit Theorem form the basis for why X-bar charts work and for why manufacturing industries use these charts to detect when problems may be present in their production processes. And so, teachers are introduced to fundamental statistical measures, learn to apply two important statistical tools, and see real-life uses of statistics beyond the classroom.
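To illustrate the connection (a sketch of my own, not taken from the SPC text), three-sigma limits for an X-bar chart can be computed directly from a run of subgroup means; by the Central Limit Theorem and the Empirical Rule, nearly all means from a stable process should fall between them:

```python
import statistics

def xbar_limits(subgroup_means):
    # Center line plus/minus three standard deviations of the subgroup
    # means (SPC practice often estimates sigma from subgroup ranges;
    # this simplified version works from the means alone).
    center = statistics.mean(subgroup_means)
    sigma = statistics.stdev(subgroup_means)
    return center - 3 * sigma, center, center + 3 * sigma

# Hypothetical subgroup means from a stable production process.
means = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 9.7, 10.1, 9.9]
lcl, center, ucl = xbar_limits(means)
print(round(lcl, 2), round(center, 2), round(ucl, 2))
```

A point outside (lcl, ucl) is the signal of a potential assignable cause problem.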
Articles in Mathematics Teaching in the Middle School and Arithmetic Teacher were supplementary sources of classroom activities, particularly probability games. Too many elementary and middle school teachers have not had a formal course in probability and statistics. Since these teachers are to guide their own students in developing probability intuition, it is important that the teachers themselves spend time honing their own probability intuition. Games are a natural vehicle for this purpose for several reasons. Games are fun. Games lend themselves to repetition, i.e., simulation. Repetition naturally introduces the Law of Large Numbers. And teachers need to understand the Law of Large Numbers because it is the justification for comparing experimental probabilities to theoretical probabilities. The Law of Large Numbers is the third big-hit concept in the course.
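The Law of Large Numbers is easy to demonstrate by simulation. In the sketch below (illustrative, not taken from the course materials), the experimental probability of heads settles toward the theoretical 0.5 as the number of flips grows:

```python
import random

def experimental_probability(flips, seed=0):
    # Estimate P(heads) from repeated simulated flips of a fair coin.
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(flips))
    return heads / flips

# More repetitions pull the experimental probability toward 0.5.
for n in (10, 100, 10_000, 1_000_000):
    print(n, experimental_probability(n))
```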
The DataDriven Mathematics materials provided some great examples of natural questions or natural successions of questions suggested from data. Probing for answers to these questions formed the basis for introducing major concepts such as association and conditional probability. Also in a natural way, many of these questions gave teachers experience in using everyday English phrasing that is precise in the mathematical sense.
Teachers were required to have and use a calculator with basic statistical features, and were told that the calculators they used in their classrooms would do. No particular brand was required. On occasion, students were given assignments requiring the use of the statistical software package Minitab. Also, some of our game simulations required the use of random number tables generated for particular games. Many of the teachers had access to and were quite comfortable using Excel. Learning how Excel could be used to generate random number tables made it possible for teachers to generate their own tables.
The course begins by illustrating some advantages of statistical thinking, with examples of how not knowing the level of common cause variation can lead to rewarding or penalizing people for results that are beyond their control. A favorite example of a well-meaning but inappropriate comparison of today’s number to yesterday’s is the Senate panel that wanted to spend $100 million to combat a plague of crime in rural America. Typical of the comparative data purportedly demonstrating the existence of a plague were the percentage increases in murder counts from 1989 to 1990 in Montana, which meets the definition of rural based on population density, and Los Angeles, which doesn’t.
Murders in:      1989   1990   % increase
  Montana          23     30      30%
  Los Angeles     877    983      12%
The seven-murder increase in Montana represented a whopping 30% increase, whereas the 106-murder increase in Los Angeles was just a 12% increase. In working with small numbers, small changes can translate into large percentage increases. It would not be uncommon for a region that averages, say, 1 murder per year to have a year with one murder followed by a year with two murders. This 100% increase would not be unusual and would not indicate that a super-plague must exist. Analogously, in a region that averages 23 murders/year, 30 murders in a single year is not unusual in terms of probability. The control chart below plots thirty years of simulated murder counts for a region that averages 23 murders/year.
Based on the rough rule of thumb that statisticians don’t begin to consider an event unusual unless its probability is .05 or less, the chart indicates that 30 or more murders in a year is not unusual. This chart paves the way for introducing the general idea of a control chart as a helpful tool, common cause and assignable cause variation, and upper and lower control limits. A similar chart, which examines simulated murder counts in Los Angeles at an average rate of 877 murders/year, shows that 983 is well above the upper control limit, the signal of a potential assignable cause problem. Control limits, a.k.a. three-sigma limits, pave the way for introducing the Empirical Rule (a.k.a. the 68%-95%-99.7% Rule).
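The simulation behind such a chart can be sketched in a few lines. One modeling assumption of this sketch (my own, not stated in the original): yearly murder counts follow a Poisson distribution with the long-run average as its mean.

```python
import math
import random

def poisson_sample(mean, rng):
    # Knuth's algorithm for drawing one Poisson-distributed count.
    limit = math.exp(-mean)
    k, product = 0, rng.random()
    while product > limit:
        k += 1
        product *= rng.random()
    return k

def fraction_at_least(mean, threshold, years=100_000, seed=2):
    # Fraction of simulated years with at least `threshold` murders.
    rng = random.Random(seed)
    return sum(poisson_sample(mean, rng) >= threshold
               for _ in range(years)) / years

# For a region averaging 23 murders/year, a 30-murder year is not rare:
# the estimated probability sits above the 0.05 "unusual" cutoff.
print(round(fraction_at_least(23, 30), 3))
```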
Teachers readily identify with the quality movement goals common in industry (e.g., manufacturing a quality product, making the world a little bit better) and with the associated statistical principles (e.g., not rewarding or punishing people based on data without first measuring the amount of variation present in the system when there is nothing wrong).
There are quincunx activities that can be done in class with teachers to reinforce these principles and to motivate why normal distributions are important. Other topics motivated by industrial examples in this part of the course include measures of location and variation, using calculators, populations and samples, discrete and continuous variables, parameters and statistics, normal distributions, the Empirical Rule, z-scores, random samples, applying the Empirical Rule to sample means, the Central Limit Theorem, and the X-bar chart.
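The quincunx activities themselves are not detailed in the text; as a rough stand-in, a simulated Galton board (a hypothetical 10-row board, each ball bouncing right at each peg with probability 1/2) shows why the mound-shaped, approximately normal pile of balls appears:

```python
import random
from collections import Counter

def quincunx_drop(rows, rng):
    """Drop one ball: at each peg it bounces right with probability 1/2.
    The final bin is the number of rightward bounces, a Binomial(rows, 1/2) count."""
    return sum(rng.random() < 0.5 for _ in range(rows))

rng = random.Random(0)
bins = Counter(quincunx_drop(10, rng) for _ in range(5000))

# Text histogram: the pile mounds up in the middle, approximating a normal curve
for b in range(11):
    print(f"bin {b:2d}: {'#' * (bins[b] // 25)}")
```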
Not all probability distributions are normal. For one homework exercise, teachers were assigned to toss a coin until a head occurred and to record the number of tosses needed. Each teacher did 20 replications. The next day, we pooled the data from the nine teachers and examined a histogram.
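The expected pattern can be previewed with a simulation (the seed is arbitrary; the replication count matches the nine teachers doing 20 replications each):

```python
import random
from collections import Counter

def tosses_until_head(rng):
    """Toss a fair coin until a head appears; return the number of tosses."""
    n = 1
    while rng.random() < 0.5:   # tails with probability 1/2
        n += 1
    return n

rng = random.Random(42)
data = [tosses_until_head(rng) for _ in range(180)]   # 9 teachers x 20 replications
counts = Counter(data)

# Roughly half end on toss 1, roughly half of the rest on toss 2, and so on
for k in sorted(counts):
    print(f"{k} tosses: {counts[k]:3d}  {'#' * counts[k]}")
```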
The histogram showed that roughly half of the 180 replications ended on the very first toss, roughly half of the remainder ended on the second toss, and so on. This graphic was used to introduce topics such as class marks, class boundaries, interval width, cumulative frequency, and so on. Teachers then learned how to use Minitab to create histograms and to prescribe interval widths, class marks, titles, and so on. Teachers also used Minitab to produce various comparative graphics based on data from the middle school text and other sources. One comparative boxplot that teachers seemed to relate to was based on data in Triola's text, attributed to Mathematics Teacher.
“Where is Jessica Tandy represented?” and “What could explain the age difference the boxplots were showing?” were questions that arose.
One variation of a homework assignment on distribution shape was to toss a paperclip at a spot on the floor, while standing five feet from the spot, and measure the distance from the spot to the paperclip. Teachers needed to decide on the procedure for obtaining measurements and were to replicate the toss 25 times. Finally, they were told not to be surprised when their histogram turned out to be skewed to the right. A natural follow-up, once the histograms were in hand, was to discuss why the shape could have been anticipated.
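Why a right skew could be anticipated can be illustrated with a simple model (an assumption, not stated in the text): if the landing point scatters around the spot with independent normal errors in each direction, the distance to the spot follows a right-skewed (Rayleigh) distribution. Pooling many simulated tosses makes the skew visible:

```python
import math
import random

def toss_distance(spread, rng):
    """Distance from the target spot, assuming the landing point scatters
    with independent normal errors in x and y (an illustrative model).
    The resulting distance is right-skewed (Rayleigh)."""
    x = rng.gauss(0, spread)
    y = rng.gauss(0, spread)
    return math.hypot(x, y)

rng = random.Random(7)
distances = [toss_distance(6.0, rng) for _ in range(2000)]  # many pooled tosses, inches

mean = sum(distances) / len(distances)
median = sorted(distances)[len(distances) // 2]
print(f"mean {mean:.2f} exceeds median {median:.2f}: the long right tail pulls the mean up")
```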
Other topics in this part of the course included relationships that would be expected to occur with various distribution shapes (e.g., mode < median < mean), stem-and-leaf plots, percentiles/deciles/quartiles, and outliers. Also discussed were some variables children could measure.
The second time I taught the course (summer of 2001), there was a point when we were comparing different ways to measure unusualness. Everyone knew about outliers, z-scores, and percentiles. At the time, it seemed marginally appropriate to introduce Chebyshev's Inequality as a means of reinforcing the idea of tying the unusualness of a value to how many standard deviations from the mean it is. This turned out to be very worthwhile for an unanticipated reason. Teachers had to be able to interpret what “at most (1/k^2) × 100% of the data can fall at least k standard deviations from the mean” means. Comfort in using and interpreting at least and at most correctly in a sentence, especially both in the same sentence, required a new level of mathematical focus and precision for some of them. And so Chebyshev's Inequality was a worthwhile addition.
I also illustrated the use of Chebyshev's Inequality on data from a real-life consulting job I'd had. A company makes a product subject to radiation leakage (product and units withheld). What can be said about the probability that a manufactured unit will produce a reading of at least 35, if 100 measured items gave the following readings?
Measurement   1    2    3    4    5    6    7    8    9   10   11   12
Frequency     3   13   20   22   11   19    8    2    1    0    0    1
One can estimate the population mean and standard deviation from the sample data. The needed facts are a sample mean of 4.39 and a sample standard deviation of about 1.92, so a reading of 35 lies k = (35 − 4.39)/1.92 ≈ 16 standard deviations above the mean. By Chebyshev's Inequality, at most (1/16^2) × 100% ≈ 0.39% of the data can fall that far from the mean. So one can expect at most 0.39% of the units to have a reading of 35 or more.
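The arithmetic can be checked directly from the frequency table (the sample standard deviation, with divisor n − 1, is used here; the text does not say which form was used):

```python
import math

# Frequency table from the consulting example: readings 1..12 on 100 units
freq = {1: 3, 2: 13, 3: 20, 4: 22, 5: 11, 6: 19, 7: 8, 8: 2, 9: 1, 12: 1}
n = sum(freq.values())                      # 100 units

mean = sum(x * f for x, f in freq.items()) / n
var = sum(f * (x - mean) ** 2 for x, f in freq.items()) / (n - 1)
sd = math.sqrt(var)

k = (35 - mean) / sd                        # how many SDs from the mean to 35
bound = 1 / k ** 2                          # Chebyshev: at most 1/k^2 of the data

print(f"mean = {mean:.2f}, sd = {sd:.2f}, k = {k:.1f}")
print(f"At most {100 * bound:.2f}% of units can read 35 or more")
```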
In the course we treat some of the standard topics middle school teachers need to be particularly comfortable with – the Fundamental Counting Principle, permutations and combinations (by hand and using calculators) and whether order matters, and relating nCk to Pascal's Triangle. Teachers enjoyed the birthday problem, which leads naturally to comparing what elementary and middle school teachers call experimental and theoretical probabilities. The basis for making such comparisons is the Law of Large Numbers. An honest phrasing of this law that is appropriate for elementary and middle school students is:
Law of Large Numbers: When an experiment is repeated many times under identical conditions, the experimental probability of a particular outcome tends to approach that outcome’s theoretical probability of occurrence.
An icebreaker activity is to have the teachers pass around the room a paper cup with a die in it. There should be instructions for both how, and how many times, the cup should be shaken with a hand covering the top before viewing the die face. Getting a 4 or 5 on the die is viewed as a success and anything else as a failure. On a record sheet that accompanies the die, each teacher writes what occurs (a success or a failure) on each of his or her 10 trials, in order. Later, the instructor can show the resultant graph of how the experimental probability of success changed from one trial to the next as the cup proceeded on its journey. The graph should present a compelling picture of the tendency of the experimental probability to approach 1/3. Of course, a fair die has a known theoretical probability, 1/3, of producing a success. In tossing a thumbtack, if success means landing point up and failure means landing point down, the theoretical probability of success is unknown. But it can be estimated by the experimental probability of success after many trials.
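A quick simulation of the cup-passing activity (trial count and seed are illustrative) shows the running experimental probability settling toward 1/3:

```python
import random

rng = random.Random(3)
successes = 0
running = []                      # experimental probability after each trial
for trial in range(1, 201):       # e.g., 20 teachers x 10 trials each
    if rng.randrange(6) < 2:      # faces 4 or 5: success, probability 2/6 = 1/3
        successes += 1
    running.append(successes / trial)

print(f"after  10 trials: {running[9]:.3f}")
print(f"after 200 trials: {running[-1]:.3f}  (theoretical 1/3 = 0.333)")
```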
There are many good probability games with simple rules that two children (or teachers) can play to improve their probability intuition. The first games explored in the course are unfair games whose rules, on the surface, suggest the game is fair, i.e., that the probability Player A wins is 1/2. To many probability questions, including “Is this game fair?”, it is easy to give an impulsive, obvious, but incorrect answer. Games allow students to commit their intuition, and repeated plays allow them to see whether their intuition is in line with or, unhappily, contradicted by experimental probability. Also, as experience is gained with repeated plays of a game, students learn to explore the structure of how the game produces winning outcomes. Once an unfair game is well understood, a natural follow-up is to have students explore whether the rules can be changed to make the game fair.
The course uses a variety of dice games from Norton, “Determining Probabilities by Examining Underlying Structure”, Mathematics Teaching in the Middle School, 2001, 7, 78-82. In one game, A and B each toss a fair die. Player A wins if the die faces differ by 0, 1, or 2; Player B wins if they differ by 3, 4, or 5. This game actually favors A, but can be made fair in a number of ways (e.g., A wins if the difference is 1 or 2 and B wins if the difference is 0, 3, 4, or 5). Two other games from the article are also examined: one is tied to the maximum of the two die faces and the other to a horse race involving two plastic horses and a die.
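Because both dice are fair, the first game can be settled by enumerating the 36 equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes of two fair dice, reduced to |difference|
diffs = [abs(a - b) for a, b in product(range(1, 7), repeat=2)]

p_A = Fraction(sum(d in (0, 1, 2) for d in diffs), 36)
print(f"P(A wins) = {p_A}")                       # 2/3: the game favors A

# The fair variant suggested in the text: A wins on a difference of 1 or 2
p_A_fair = Fraction(sum(d in (1, 2) for d in diffs), 36)
print(f"P(A wins, fair variant) = {p_A_fair}")    # 1/2
```

The counts behind the enumeration (difference 0 occurs 6 ways, 1 occurs 10 ways, 2 occurs 8 ways) give A 24 of the 36 outcomes under the original rules.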
Another game appropriate for elementary or middle school students or teachers is “Who Rolls an Even Number First?” Students are divided into pairs. Each pair decides which one will be Player A and which will be Player B. This designation does not change over the course of the activity. Rules are explained to all pairs. Player A goes first by rolling the die in a cup. If A rolls an even number, A wins the game. If not, the cup is passed to Player B. If B rolls an even number, B wins the game. If not, the cup passes back to A, who tries to win by rolling an even number, and so on. When a game is over, the next game is begun with A going first. Students are asked if they think the game is fair. The typical answer is yes. Yet the experimental probability of a win by Player A will tend to approach 2/3 in all pairs. In the long run, half of all games should end on the very first toss, with A the winner. B should win half of the other half of all games, or 1/4 of all games, on the next toss. A should win half of the remaining one-fourth of all games on the third toss, and so on. This is usually enough of an observation to make it clear that the player who goes first has an edge. Also, some teachers usually pick up on the pattern for what needs to be added in order to find the probability that A wins: P(A wins) = 1/2 + 1/8 + 1/32 + … = 2/3.
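Both the repeated plays and the series pattern can be checked in a few lines (game count and seed are arbitrary):

```python
import random

def play_game(rng):
    """Players alternate rolling a fair die, A first; the first even number wins.
    Returns 'A' or 'B'."""
    player = 'A'
    while True:
        if rng.randrange(1, 7) % 2 == 0:   # even face, probability 1/2
            return player
        player = 'B' if player == 'A' else 'A'

rng = random.Random(11)
games = 3000
a_wins = sum(play_game(rng) == 'A' for _ in range(games))
print(f"experimental P(A wins) = {a_wins / games:.3f}")

# The pattern the teachers notice, as a partial sum of the geometric series
series = sum((1 / 2) * (1 / 4) ** i for i in range(20))   # 1/2 + 1/8 + 1/32 + ...
print(f"series total = {series:.4f}  (approaches 2/3)")
```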
Law of Large Numbers activities can work even for kindergarteners and first graders. This activity requires two cups or plastic glasses and marbles of two colors, say green and yellow. Tell the children they will each get to close their eyes and draw a marble out of a glass. There will be a pretend big prize for anyone who draws a yellow marble. The children can decide what the pretend prize is to be. With appropriate fanfare, the teacher places one yellow and two green marbles in one glass, and two yellow and many green marbles in the other. The exact number of green marbles in the second glass is not important; it matters only that the second glass is loaded with green marbles. Students are asked which glass they would rather draw from. First graders and kindergarteners will prefer the second glass: there are two chances to win.
Drawing Marbles from a Glass – Which glass do you prefer?

Glass I:   1 yellow,  2 green
Glass II:  2 yellow, 28 green
Beginning with Glass I, each child in turn draws a marble, observes the color, and returns the marble to the glass. With each draw, a tally mark for the color drawn from Glass I is added to the record on the board. The process is then repeated with Glass II, the favored glass. The comparative tallies for Glass II will tell the story. The children are too young to know fractions or to appreciate the statement of the Law of Large Numbers. But its consequences will be experienced as they witness the repeated draws and the preferred glass gets trounced by Glass I.
A game that particularly challenges the teachers' intuition is the Monty Hall three-door problem. Teachers are divided into three-member groups. One group member plays the role of game show host Monty Hall. Another acts as the contestant. The third records and announces game results. The group is to imagine three doors numbered 1, 2, and 3. Behind one of the doors is a big prize; a goat is behind each of the other two doors. Group members who are not the contestant are given a table of random 1s, 2s, and 3s generated by Excel. The table identifies which door hides the big prize in each game. Monty asks the contestant, “Would you like to pick door number 1, door number 2, or door number 3?” The contestant chooses a door. Monty, who knows which door hides the prize, announces the number of one of the two other doors, says he is opening it, and then states (truthfully) that there is a goat behind it. Monty then gives the contestant a choice. The contestant can stick with the door originally chosen and win whatever is behind it, or switch to the other closed door and win whatever is behind it. The initial issue posed to the teachers is whether the stick strategy or the switch strategy gives the player a better chance of winning. When asked the probability of winning with the stick strategy, teachers invariably and correctly say 1/3. When asked the probability of winning with the switch strategy, teachers usually say 1/2, reasoning that the prize is behind one of the two closed doors. To test their intuition about the chances of winning with the switch strategy, the contestant in each group always switches doors. In each game, after the contestant gives the instruction to switch, the data recorder informs the contestant of the game's outcome. Suppose, for example, the contestant's original guess was correct. The data recorder would announce, “You picked a winner and switched to a loser,” and would record the outcome WL.
As each group plays more games, the experimental probability that the contestant wins the prize will tend to approach 2/3, and WL will occur in roughly one third of the games. Teachers also notice that the only other outcome that ever gets recorded is LW. In addition to their experience with the game, sometimes the prompting question “Is LL possible?” is needed to help them recognize that an incorrect first guess and winning the prize are one and the same.
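A simulation of the always-switch strategy hinges on that same observation: switching wins exactly when the first guess is wrong. A short sketch (game count and seed are arbitrary):

```python
import random

def switch_wins(rng):
    """One game in which the contestant always switches.
    The host opens a goat door that is neither the guess nor the prize,
    so switching wins exactly when the original guess was wrong."""
    prize = rng.randrange(3)
    guess = rng.randrange(3)
    return guess != prize

rng = random.Random(5)
games = 3000
wins = sum(switch_wins(rng) for _ in range(games))
print(f"experimental P(win | switch) = {wins / games:.3f}  (approaches 2/3)")
```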
Another activity that allows teachers to witness the power of simulation is Gambler’s Ruin. Each teacher is told: “You have $4 million but owe $8 million to the mob, due tomorrow. One way to get an additional $4 million overnight is to go to Las Vegas and gamble. Of course, games favor the House. You choose a game for which the probability a player wins is 1/3. If you win, the House pays you an amount equal to your bet. If you lose, the House keeps your bet.
You could make a single bet of $4 million, but recognize that the probability of achieving your goal (having $8 million) is only 1/3. It troubles you that two thirds of the people who would bet this way would lose immediately and have no chance to come back. You wonder if making smaller bets, say of $1 million each, might be better. That way, even if you lose on your first bet, you have a chance to come back and achieve your goal. So you decide to make only $1 million bets and play until either you have $8 million or you have lost all your money. Is this strategy better than making one big bet?”
Teachers are given or generate tables of random numbers that can be used to tell whether a player wins or loses each bet. The table is designed so that p(+1) = 1/3 and p(−1) = 2/3. A row of one of the tables looks like this:
1  −1  1  −1  1  −1  −1  1  −1  −1  1  −1  −1  −1  …
As a homework assignment, teachers use the tables to simulate how a large number of people employing this strategy would fare. Using the leftmost number in the row shown above as the first outcome in a series of bets, the bettor's fortune (in millions of dollars) would have been:
5 4 5 4 5 4 3 4 3 2 3 2 1 0
This player made 14 bets and was wiped out. As teachers see how the betting strategy unfolds for many individuals, it becomes clear that hardly any player achieves the goal using this strategy. By betting smaller amounts, the player must make more bets in a game that favors the House, which is exactly what the House wants.
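A simulation makes the point starkly (player count and seed are illustrative; the classical gambler's ruin formula gives this strategy a success probability of 1/17, about 0.059, versus 1/3 for the single big bet):

```python
import random

def small_bet_strategy(rng, start=4, goal=8, p_win=1/3):
    """Bet $1 million at a time until reaching the goal or going broke.
    Returns True if the goal is reached."""
    money = start
    while 0 < money < goal:
        money += 1 if rng.random() < p_win else -1
    return money == goal

rng = random.Random(9)
players = 5000
reached = sum(small_bet_strategy(rng) for _ in range(players))
print(f"fraction reaching $8M with $1M bets: {reached / players:.3f}")
print("a single $4M bet succeeds with probability 1/3, about 0.333")
```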
At the conclusion of this part of the course, teachers design Law of Large Numbers activities for use in their own classes.
This is the point in the course when we turn to data-driven questions that can set up one or more lesson plans. The sources used for the particular questions posed are
Data-Driven Mathematics: Probability Through Data, P. Hopfensperger, H. Kranendonk, R. Scheaffer, Dale Seymour Publications, 1999, and
Data-Driven Mathematics: Mathematics in a World of Data, J. Burrill et al., Dale Seymour Publications, 1999. The topics treated are association, conditional probability, and independence.
A data-driven question that sets up conditional probability is the following.
On election day, exit polls are conducted by the media to determine who voted and why voters chose a particular candidate. One such poll was conducted in 1996 by Voter News Service in Wisconsin.
Voted for    Clinton   Dole   Totals
Men              255    269      524
Women            382    238      620
Totals           637    507     1144
What percent of the voters voted for Clinton?
Of the female voters, what percent voted for Clinton?
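The two questions correspond to an unconditional and a conditional proportion, which is easy to make concrete:

```python
# Two-way table from the 1996 Wisconsin exit poll
table = {
    ("Men", "Clinton"): 255, ("Men", "Dole"): 269,
    ("Women", "Clinton"): 382, ("Women", "Dole"): 238,
}
total = sum(table.values())                       # 1144 voters

# P(Clinton): divide by all voters
p_clinton = (table[("Men", "Clinton")] + table[("Women", "Clinton")]) / total

# P(Clinton | Woman): restrict to the female voters first
women = table[("Women", "Clinton")] + table[("Women", "Dole")]
p_clinton_given_woman = table[("Women", "Clinton")] / women

print(f"P(Clinton) = {p_clinton:.3f}")                       # 637/1144 ~ 0.557
print(f"P(Clinton | Woman) = {p_clinton_given_woman:.3f}")   # 382/620 ~ 0.616
```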
Questions like these pave the way for conditional probability and for comfort in working with two-way tables. They also make it clear that how a question is worded makes a difference. Beyond the mathematics, just as was discussed earlier regarding using and interpreting at least and at most correctly in a sentence, these questions benefit teachers by requiring them to speak and write at a high level of mathematical focus and precision.
The data-driven mathematics sources listed contain many data sets, as well as student-generated data activities, that can break the ice on a number of topics, and the reader is encouraged to examine them.
Burrill, J., Clifford, M., Errthum, E., Kranendonk, H., Mastromatteo, M., O'Connor, V., Data-Driven Mathematics: Mathematics in a World of Data, Dale Seymour Publications, 1999.
Conference Board of the Mathematical Sciences, The Mathematical Education of Teachers, American Mathematical Society, Providence, RI, in cooperation with the Mathematical Association of America, Washington, DC, 2001.
Hopfensperger, P., Kranendonk, H., Scheaffer, R., Data-Driven Mathematics: Probability Through Data, Dale Seymour Publications, 1999.
Middle Grades Mathematics: An Interactive Approach, Prentice Hall, 1998.
Norton, M., A Quick Course in Statistical Process Control, draft version, Prentice Hall.
Norton, M., “Determining Probabilities by Examining Underlying Structure”, Mathematics Teaching in the Middle School, 2001, 7, 78-82.
Triola, M., Elementary Statistics, 7th edition, Addison Wesley, 1997.