Thursday, December 16, 2010

Uses and Misuses of Confidence Intervals in a Psychometrics Context

I have previously shared several posts written by and with Greg Pope, Analytics and Psychometrics Manager for Questionmark. What I really like about Greg is his ability to communicate statistics and psychometrics in a manner that all of us can understand. For example, today's post is about interpreting test scores, but he also applies the same thought process to polling data that we see every night on the news or in the daily newspaper.

I think it is extremely important that we take great care when explaining test scores to examinees and parents, so I hope you will take a few minutes to read the following post by Greg:

I have always been a fan of confidence intervals. Some people are fans of sports teams; for me, it’s confidence intervals! I find them really useful in assessment reporting contexts, all the way from item and test analysis psychometrics to participant reports.

Many of us get exposure to the practical use of confidence intervals via the media, when survey results are quoted. For example: “Of the 1,000 people surveyed, 55% said they will vote for John Doe. The margin of error for the survey was plus or minus 5%, 95 times out of 100.” This is saying that the “observed” percentage of people who say they will vote for Mr. Doe is 55% and there is a 95% chance that the “true” percentage of people who will vote for John Doe is somewhere between 50-60%.

Sample size is a big factor in the margin of error: generally, the larger the sample the smaller the margin of error, as we get closer to representing the population. (We can’t survey all 307,006,550 people in the US, now can we!) So if the sample were 10,000 instead of 1,000, we would expect the margin of error to be smaller than plus or minus 5%.
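The arithmetic behind these headline numbers is easy to check. Here is a minimal sketch in Python (the function name and sample figures are my own, for illustration) of the standard 95% margin of error for a sampled proportion:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# 55% support observed in a sample of 1,000
print(round(margin_of_error(0.55, 1000), 3))   # about 0.031, i.e. +/- 3 points
# A tenfold larger sample shrinks the margin by a factor of sqrt(10)
print(round(margin_of_error(0.55, 10000), 3))  # about 0.01, i.e. +/- 1 point
```

Notice that quadrupling the sample size only halves the margin of error, which is one reason polls rarely sample far beyond a few thousand people.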

These concepts are relevant in an assessment context as well. You may remember my previous post on Classical Test Theory and reliability in which I explained that an observed test score (the score a participant achieves on an assessment) is composed of a true score and error. In other words, the observed score that a participant achieves is not 100% accurate; there is always error in the measurement. What this means practically is that if a participant achieves 50% on an exam their true score could actually be somewhere between say 44% and 56%.
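In Classical Test Theory this uncertainty is usually quantified with the standard error of measurement (SEM), computed from the score standard deviation and the test's reliability. The sketch below is illustrative only; the standard deviation of 10 and reliability of 0.64 are numbers I chose so that one SEM either side of a 50% score reproduces the 44%-56% band above:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: score SD times sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def score_band(observed, sd, reliability, z=1.0):
    """Band of +/- z SEMs around an observed score (z=1 covers roughly 68%)."""
    half = z * sem(sd, reliability)
    return observed - half, observed + half

# Illustrative values: score SD of 10 points, test reliability of 0.64
print(score_band(50, 10, 0.64))  # (44.0, 56.0), one SEM either side
```

A wider, more conservative band (e.g. z=1.96 for 95% confidence) may be more appropriate for high-stakes reporting.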

This notion that observed scores are not absolute has implications for verifying what participants know and can do. For example, a participant who achieves 50% on a crane certification exam (on which the pass score is 50%) would pass the exam and be able to hop into a crane, moving stuff up and down and around. However, achieving a score right on the borderline means this person may not, in fact, know enough to pass the exam if he or she were to take it again and then be certified on crane operation. His/her supervisor might not feel very confident about letting this person operate that crane!

To deal with the inherent uncertainty around observed scores, some organizations factor this margin of error in when setting the cut score…but this is another fun topic that I touched on in another post. I believe a best practice is to incorporate a confidence interval into the reporting of scores for participants in order to recognize that the score is not an “absolute truth” and is an estimate of what a person knows and can do. A simple example of a participant report I created to demonstrate this shows a diamond that encapsulates the participant score; the vertical height of the diamond represents the confidence interval around the participant’s score.

In some of my previous posts I talked about how sample size affects the robustness of item level statistics like p-values and item-total correlation coefficients and provided graphics showing the confidence interval ranges for the statistics based on sample sizes. I believe confidence intervals are also very useful in this psychometric context of evaluating the performance of items and tests. For example, often when we see a p-value for a question of 0.600 we incorrectly accept this as the “truth” that 60% of participants got the question right. In actual fact, this p-value of 0.600 is an observation and the “true” p-value could actually be between 0.500 and 0.700, a big difference when we are carefully choosing questions to shape our assessment!
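Because an item p-value is just the proportion of participants answering correctly, the same normal-approximation interval used for poll results applies. In this sketch the sample size of 90 is my own illustrative choice, roughly the size at which an observed p-value of 0.600 carries a 95% interval of about 0.500 to 0.700:

```python
import math

def p_value_ci(p, n, z=1.96):
    """Approximate 95% confidence interval for an item p-value (difficulty)
    observed on a sample of n participants."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = p_value_ci(0.600, 90)
print(round(lo, 3), round(hi, 3))  # 0.499 0.701
```

With a few hundred participants the interval tightens considerably, which is why sample size matters so much when item statistics drive decisions.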

With the holiday season fast approaching, perhaps Santa has a confidence interval in his sack for you and your organization to apply to your assessment results reporting and analysis!

Related posts:
Standard Setting: An Introduction
Should I include really easy or really hard questions on my assessments?
How the sample of participants being tested affects item analysis information

Tuesday, December 14, 2010

The Super Book of Web Tools for Educators

Free Technology for Teachers has created a new ebook entitled, "The Super Book of Web Tools for Educators."

This publication will introduce you to more than six dozen web tools for K-12 teachers. Additionally, you will find sections devoted to using Skype with students, ESL/ELL, social media for educators, teaching online, using technology in alternative education settings, and blogging in elementary schools.

So if you are an educator who has an interest in technology, but just don't know how to get started, try reading The Super Book of Web Tools for Educators!

This publication was created by the following authors: George Couros, Patrick Larkin, Kelly Tenkely, Adam Bellow, Silvia Tolisano, Steven Anderson, Cory Plough, Beth Still, Larry Ferlazzo, Lee Kolbert, and Richard Byrne. Visit each of the authors' sites if you would like to know more about educational technology or if you would like to comment on this new ebook.

Friday, December 10, 2010

Five Tips to Prevent Your Student From Cheating

I found an interesting article written by Aisha Sultan in stltoday.com.

We spend so much time talking about how teachers or schools should be preventing cheating, but this article is really written for parents. It's my hope that parents and educators will work together to end this discouraging trend.

As the article states: 
Cheating among students is rampant. Nine out of 10 middle schoolers admit to copying someone else's homework and 74 percent of high school students admit to cheating on an exam. Technology makes it even easier, with homework assignments sent via mass e-mail and test answers showing up as text messages.

Educator and author Dr. Michael Hartnett shares five useful tips on how to make sure your child is not a chronic cheater:

1. Check your child's homework every night. ... A good sign that a teenager is cheating is the absence of substantive work.

2. Create a device-free zone of at least an hour a day for studying. ...Yes, students can multitask, but can they unitask with the intense concentration that is often required to do an assignment well? An hour a day by themselves, without connections to cyberspace or to their friends, is an hour of studying and learning free of cheating.

3. Give your teenagers practice tests the day before an exam. ...Know what they are studying and ... if their materials are sparse and generated from websites, then you know they are either cheating or performing poorly.

4. Talk to your teenagers honestly and realistically about cheating. ...Acknowledge that cheating is prevalent, and understand that you are asking for your teenagers to be exceptional instead of conforming to a pervasive cheating culture.

5. Avoid clichés. ... I wouldn't try "Cheaters never prosper." The truth is they do prosper...
Dr. Michael Hartnett has been a high school English teacher, college professor, and SAT instructor/tutor for more than 20 years. He is the author of The Great SAT Swindle.

(Please click HERE to read the story in its entirety and to better understand the author's rationale behind his major points.)

Wednesday, December 8, 2010

Explore the World in 3D: Google Earth 6 Released


The newest version of Google Earth includes a host of new features, but the biggest addition is 3D views.

According to the site, "With Google Earth 6, you can explore the streets in 3D like never before. Fly from outer space down to the streets with the new Street View and easily navigate your way around. Switch to ground-level view to see the same location in 3D.

Now you can see 3D trees in locations all over the world. Google has also made it easier for you to know when historical imagery is available in the location you are viewing. Download the latest version to start exploring the new features and watch the videos below to learn more."


I have to admit that Google Earth was always fun to play around with, but 3D fun is even better!

Monday, December 6, 2010

BeFunky: Photo Editing Made Free, Fun, and Simple

I wanted to share a fun photo editing site that I have enjoyed playing around with the last couple of days. BeFunky is a free site that offers 190 easy-to-use photo effects in 30 different categories, each only one click away (a premium version gives you more options).

BeFunky also offers auto photo editing with a single click, but if you prefer, you have the option of basic photo editing that allows you to adjust for contrast, brightness, hue, exposure, saturation, and colors.  You can also add frames, goodies, shapes, text, and speech bubbles to your photos.

Upload your photos from various sources (i.e. Facebook, Flickr, Picasa, MySpace, Bebo, your webcam, the web, and your PC) and then download to various sources as well. You can also try the BeFunky iPhone application and take your photo editing with you.

Take a look at the following examples and then try BeFunky:

Wishing you a very Merry Christmas! J.T.


Rudy Darrow






Have fun experimenting with your photos!


Thursday, December 2, 2010

News Map: Finding the World's News by Location

As you all know, I like reading the news from around the world. I have previously shared posts about Newseum (post) and Mapeas (post), but I would like to add another news source that helps you search for news by location. News Map simply combines Google Maps and Yahoo News: select a region from the tabbed menu (or zoom and pan with the direction arrows), then click on a country to see a list of current news stories. For some larger countries you can further refine your search by state, province, or city (there are over 3,000 large cities in the database).


Tuesday, November 30, 2010

The Sigmoid Curve, Personal Learning, and the "Business" of Education

I have been thinking the last couple of weeks about my own personal learning habits, the CareerTech Testing Center, and on the "business" of education.

Over the last few days I began to think of Charles Handy and his Sigmoid Curve. This S-shaped curve can be used to describe the life-cycle of products, organizations, and even relationships. The curve symbolizes the fact that nearly all of life’s endeavours start slowly, dip and falter through an experimental stage before rising to a pinnacle of success, after which there is inevitable decline.


To avoid such decline, decisions have to be made about further improvement at the point where success is still growing and before the entity/individual starts to experience this plateau. For many, this is a difficult thing to do. After all, if you have just survived the difficult and trying times, who wants to begin new sacrifices and give additional effort? Shouldn't there be plenty of time to relax and bask in your success?

I think good leaders and good learners know that this is the time to think about the next phase of change; resting on laurels will invariably lead to decline.

On its own, the S-shaped curve is kind of depressing, and not particularly helpful. Is my life and/or my organization on the downward curve? If so, what's left? Do we just put our heads in the sand and wait for the inevitable end?

Actually, the power of the Sigmoid Curve comes when you begin to add a second curve to the original curve (see below). Charles Handy suggests that constant growth and development is achievable if we start a new initiative before the first one begins to decline (point X).

Paradoxically, this means making changes when the first curve is nearing its peak and the venture is flourishing. This is when an organization has the time, resources and the energy needed to see a new curve through initial explorations and floundering. Although there will inevitably be more motivation to change at “Y” when the first curve is in decline, at that point it takes enormous effort to move to where one ought to be on the second curve.

How do you make the second curve happen?

Knowing when to start a new curve is one thing; getting it started at the right time is quite another. Whether the second curve is a new product, a way of operating, a strategy or a culture, it will require fresh ideas and inspiration, and it will always be different from the first.

Handy believes that in an organization, the people who lead the second curve will also have to be different. Not only do original leaders need to keep the first curve going while the second takes off, they will also find it difficult to abandon their first curve while it is doing so well: there is a strong temptation to recapture past glories. What this means is that for a time there will probably be tension, confusion, chaos, backstabbing, anarchy (the area between X and Y) while new ideas and new people coexist with the old until the handover between first and second curves is complete. I recommend that everyone be as open as possible and just communicate. Develop feelings of mutual trust.

As Handy puts it, the paradox of success is that what got you where you are won’t keep you where you are.

When do we begin the second curve?

The key question becomes, “where are we on the first curve, and when do we need to start the second?” Handy suggests we will only know this for sure when we look back; without hindsight, it is best to proceed by guess and assumption.

What this means in practice is that we must constantly engage in second-curve thinking. We need to stay sceptical, curious and inventive, challenging the assumptions underlying our current curve and developing alternatives. We need to ask questions like, “if we did not exist, would we reinvent ourselves and, if so, what would we look like?”. It is this kind of thinking that gives birth to second curves.

I think this is a different way of thinking for many of us and in reality, we should celebrate different points of view and different personalities. Our culture should be one that creates an attitude of messing with success.

In summary, the message of the Sigmoid Curve is that we need the foresight to start making changes even when it is not yet obvious that change is necessary, and the courage to switch from one curve to the next when the time has come.

In these trying economic times, the management philosopher believes we can survive -- even prosper -- in the tough new downsized world as long as we understand the forces that are shaping it.

Now I ask you to consider where you are on your personal Sigmoid Curve for learning, and where your organization (school/agency/business) is on its curve.

Most importantly, remember to:
  • Courageously ask the tough questions.
  • Cultivate an attitude of messing with success.
  • Creatively experiment with new combinations of old ideas.
  • Celebrate different points of view and personalities.
  • Encourage and celebrate new ideas.
  • Remember that contraries are your friends and not your enemies.
Adapted from Handy, C. (1994) The Empty Raincoat, London: Random House

Sunday, November 21, 2010

Effectively Communicating the Measurement of Constructs to Stakeholders



I wanted to share the following article written by Greg Pope, Analytics and Psychometrics Manager for Questionmark, and Kerry Eades, Assessment Specialist for the Oklahoma Department of Career and Technology Education. Both authors share an interest in test security and many other topics related to online assessment.

Greg Pope
There are many mentions on websites, blogs, YouTube, etc. about people (employees, students, educators, school administrators, etc.) cheating on tests. Cheating has always been an issue, but the last decade of increased certifications and high-stakes testing seems to have brought about a significant increase in cheating. As a result, some pundits now believe we should redefine cheating and that texting for help, accessing the Web, or using any Web 2.0 resources should be allowed during testing. The basic idea is that a student should no longer be required to learn “facts” that can be easily located on the internet and that instruction should shift to only teaching and testing conceptual content.

Kerry Eades
There are many reasons for testing (educational, professional certification and licensure, legislative, psychological, etc.), and the pressures that stakeholders feel to succeed at all costs by “teaching to the test” or by condoning any form of cheating are obviously immense. Those of us in the testing industry should, to the best of our ability, educate stakeholders on the purpose of tests and on the development and measurement of constructs. Better informed stakeholders would lessen the “need” and “excuses” for cheating and improve the testing environment for all concerned. A key element of this is promoting an understanding of how to match the testing environment to the nature of an assessment: it is appropriate to allow “open book” assessments in some cases but certainly not all. We must keep in mind that education, in general, builds upon itself over time, and for that reason, constructs must be assessed in a valid, reliable and appropriate manner.

Tests are usually developed to make a point-in-time decision about the knowledge, ability, or skills of an individual based upon a set of predetermined standards/objectives/measures. The “value” of any test is not only this “point-in-time” reference, but what it entails for the future. Although examinees may have passed an assessment they may still have areas of relative weakness that should be remediated in order for them to maximize their full potential as students or employees. Instructors should also observe how all their students are performing on tests in order to identify their own instructional weaknesses. For example, does the curriculum match up with the specified standards and the high level of thinking in those standards? This information can also be aggregated and analyzed at the local, district, or state level to determine program strengths or weaknesses. In order to use scores in a valid way to make decisions about students or programs, we must begin by clearly defining and measuring the psychological/educational constructs or traits that a test purports to measure.

Measuring a construct is certainly complex, but what it boils down to is ensuring that the construct is being measured in a valid way and then reporting/communicating that process to stakeholders. For example, suppose the construct we are trying to measure is “Surgery Procedure”: if the candidate passes the test, we expect that the person can recall this information from memory where and when needed. It wouldn’t be valid to let the participant look up where the liver is located on the Internet during the assessment, because they would not be able to use the Internet while they are halfway through a surgical procedure.

Another example would be “Crane Operation” knowledge and skills. If this is the construct being measured and it is expected that candidates who pass the test can operate a crane properly, when and where they need to, then allowing them to tweet or text during their crane certification exam would not be a valid thing to do (it would invalidate the test scores) because they would not be able to do this in real life.

However, if the assessment is a low stakes quiz that is measuring the construct, “Tourist Hot Spots of Arkansas,” and the purpose of the quiz is to help people remember some good tourist places in Arkansas, then an “open book” or an “open source” format where the examinee can search the internet or use Web 2.0 resources is fine.

Effectively communicating the purpose of an assessment and the constructs being measured by it is essential for reducing the instances of cheating. This important communication can also help prevent cheating from being “redefined” to the detriment of test security.

For more information on assessment security issues and best practices, check out the Questionmark White Paper: “Delivering Assessments Safely and Securely.”

(Click here to read the post on Questionmark's blog.)

Thursday, November 11, 2010

Thoughts on Using Prezi as a Teaching Tool

Do you feel that slides limit your ability to develop and explain ideas? Then maybe it's time you try Prezi.

Prezi is a free, web-based application which allows you to design non-linear presentations online. There is a free version (and a fee-based version), it's simple to use, and you'll never need to create individual slides again.

Prezi allows you to zoom in and out and add graphics and videos to your presentation. It’s difficult to explain how to use it, so I recommend that you look at the example presentations and play with the system yourself. Prezi is different, so it does take some getting used to, but after some practice you will find that the user interface is quite intuitive. You can also find plenty of help (e.g. videos, forums, Twitter, and blogs) to guide you as you create your presentations.

Watch the Prezi below (or HERE) to get a perspective on how to use Prezi as a teaching tool:


Tuesday, November 9, 2010

DonorsChoose.org: An Online Charity That Connects You to Classrooms in Need


A teacher spends, on average, $40.00 per month on classroom essentials.

DonorsChoose.org is a site that can help teachers alleviate the burden of paying for school items out of their own pockets or get funding for a new project. Not only can you submit a project request, but you can also choose a project to donate to. Whether you have $100.00 to donate or just $1.00, no gift is too small.

DonorsChoose.org makes it easy for anyone to help students in need. DonorsChoose.org grew out of a Bronx high school where teachers experienced first-hand the scarcity of learning materials in our public schools.

Charles Best, then a social studies teacher, sensed that many people would like to help distressed public schools, but were frustrated by a lack of influence over their donations. He created DonorsChoose.org in 2000 so that individuals could connect directly with classrooms in need.

According to the site:
Here's how it works: public school teachers from every corner of America post classroom project requests on DonorsChoose.org. Requests range from pencils for a poetry writing unit, to violins for a school recital, to microscope slides for a biology class.

Then, you can browse project requests and give any amount to the one that inspires you. Once a project reaches its funding goal, we deliver the materials to the school.

You'll get photos of your project taking place, a thank-you letter from the teacher, and a cost report showing how each dollar was spent. If you give over $100, you'll also receive hand-written thank-you letters from the students.

At DonorsChoose.org, you can give as little as $1 and get the same level of choice, transparency, and feedback that is traditionally reserved for someone who gives millions. We call it citizen philanthropy.
Take a look at DonorsChoose.org and see if you would like to fund a classroom project or submit a project proposal if you need help with funding.

What a great way to help a classroom during the holidays!

Friday, November 5, 2010

The Ultimate Twitter Guidebook For Teachers

Go ahead and admit it.  You always wanted to try Twitter, but you've never felt comfortable using new technology and you really couldn't determine how Twitter could be used in the classroom anyway. I mean who has the time during the school year to learn new technology?

The answer is quite simple... yes, it's easy to learn, and you do have the time, because I've found the resource that you've been waiting for! The Ultimate Twitter Guidebook For Teachers (by EDUdemic) is a list of 100 tips, apps, and resources that are separated into the following categories:
  • Resources for Learning Twitter
  • Twitter for Educators
  • Resources for Making the Most of Twitter
  • Suggestions for Twitter Use in the Classroom
  • Apps and Twitterers to Use with Students
  • Apps to Make Twitter Work for the Educator
  • App Resources
  • Tweets to Follow
  • Fun Twitter Experiments
The upcoming holiday season would also be a great opportunity to try Twitter and begin a personal learning network. I hope you will follow the CareerTech Testing Center on Twitter @CareerTechTest

Contact me if you begin a new Twitter account (or even if you have an existing account) because I would like to be a part of your learning network!

Wednesday, November 3, 2010

Diagnostic Tests That Measure Conceptual Understanding

I just read a very interesting post from Questionmark's founder John Kleeman. I think it's a great way to measure conceptual understanding and to identify misconceptions. I hope you will read John's post below and use the information to improve your assessments.
I’ve just read a thought provoking article on diagnostic tests written by Simon Bates and Ross Galloway from the University of Edinburgh Physics Education Research Group and published by the UK Physical Sciences Centre (see the article at pages 10-20 here).

The authors are particularly concerned with diagnostic tests that measure conceptual understanding and identify misconceptions. So rather than testing for facts or knowledge or particular skills, their interest in diagnostic assessments is primarily around whether students understand some key concepts in the Physical Sciences. If students don’t understand them, the instructors need to correct this in their teaching and feedback.

The article gives examples of use of diagnostic tests and also gives some good and detailed guidance on how to construct them, including which statistics to look at for good results. They recommend (as proposed by other authors in the Physics Education Research literature) a p-value or difficulty index of 0.3 to 0.9, a discrimination index of 0.3 or better or a point biserial correlation of 0.2 or better, and a reliability index of 0.7 or better.
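As a sketch of how those screening rules might be applied in practice (the thresholds are the ones quoted above, but the function names and data are my own illustration, not code from the article), here is a small Python routine that computes an item's p-value and point-biserial correlation and checks them against the cut-offs:

```python
def item_stats(item_scores, total_scores):
    """p-value (difficulty) and point-biserial correlation for one
    dichotomous (0/1-scored) item against participants' total scores."""
    n = len(item_scores)
    p = sum(item_scores) / n                      # proportion correct
    mean_t = sum(total_scores) / n
    cov = sum((i - p) * (t - mean_t)
              for i, t in zip(item_scores, total_scores)) / n
    var_i = p * (1 - p)
    var_t = sum((t - mean_t) ** 2 for t in total_scores) / n
    r_pb = cov / (var_i * var_t) ** 0.5 if var_i and var_t else 0.0
    return p, r_pb

def keep_item(p, r_pb):
    """Screening rule quoted above: 0.3 <= p <= 0.9, point biserial >= 0.2."""
    return 0.3 <= p <= 0.9 and r_pb >= 0.2

# Illustrative data: 1/0 scores on one item, and each participant's test total
item = [1, 1, 0, 1, 0, 1, 1, 0]
totals = [9, 8, 4, 7, 5, 9, 6, 3]
p, r = item_stats(item, totals)
print(round(p, 2), round(r, 2), keep_item(p, r))  # 0.62 0.87 True
```

The same loop over all items on a form would flag any question falling outside the recommended ranges for review.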

They also explain how to write questions that test why people don’t understand something as well as what they don’t understand. And they give the example below (from the Lawson Classroom Test of Scientific Thinking) as something they have used in their teaching. Here is a what-why question, which asks for a fact and also asks why that fact is the case.


Bates and Galloway report that the first, “what” part of the question is answered just as well by students coming into university as those who have completed their first year at university, but that there is significantly better performance in the “why” part by those who’ve been at university for a year.

Getting to the root of learner misconceptions is a key challenge for all of us in learning and assessment, and I recommend this article as a good read.

Monday, November 1, 2010

Are You Looking for Ways to Reduce Software Costs? Try alternativeTo

AlternativeTo is a new approach to finding good software for your computer or your mobile phone. Whether you're looking for an alternative to expensive commercial software or just trying to get a feel for what's out there, AlternativeTo is a handy service for finding software alternatives. Tell them the application you want to replace and they'll give you a list of great alternatives.

More specifically, alternativeTo allows you to sort programs according to operating system and user rankings, and it provides notes about "questionable" software (ones that you should probably avoid). As always, I would hope that you would thoroughly research any software prior to using it anyway.


If you want an alternative to alternativeTo, try OpenSourceAlternative.

Thursday, October 28, 2010

The 10 Blogs I Read First

I wanted to share my favorite blogs with you. Although the list of blogs that I subscribe to seems endless at times, I find that these 10 are the ones that I visit most frequently. There are a lot of other great blogs out there as well, and many truly resourceful and intelligent people from around the world who create blogs, but for one reason or another, I seem to return to the following sites as resources.

I hope you will take the time to visit them as well because they offer great resources with topics ranging from educational technology to assessments. (I alphabetized the list so they appear in no order of preference.)

Dangerously ! Irrelevant - Scott McLeod, J.D., Ph.D., an Associate Professor in the Educational Administration program at Iowa State University, discusses technology, leadership, and the future of schools.

Free Technology for Teachers - this Richard Byrne creation provides free resources and lesson plans for teaching with technology. Richard is also a Google Certified Teacher and he offers many tutorials on Google products that you can download for free.

GO2WEB20 - This is a great index of a plethora of web applications.

The Edublogger - offers tips, tricks, ideas and help with using web 2.0 technologies and edublogs. This blog is created by Sue Waters, and I would also recommend taking a look at her personal blog (catch her talking about elearning, Web 2.0 and technology while helping others).

Instructify - This blog is a creation of the University of North Carolina at Chapel Hill School of Education. LEARN NC's goal is to find the most innovative and successful practices in K-12 education and make them available to all teachers and students.

Jane's E-Learning Pick of the Day - just like the title states, Jane Hart features a daily item of interest for learning and/or working (and sometimes more than one). Jane is also founder of the Centre for Learning & Performance Technologies, which is another great site to check out. See Jane's Top 100 Tools for Learning 2010 List.

Larry Ferlazzo's Websites of the Day - Don't be fooled by his statement that this is for "Teaching ELL, ESL, & EFL" because there is much, much, more! Be sure you check out Larry's "Best of" series.

Mashable - this is a top source for news in social and digital media, technology and web culture with more than 30 million monthly pageviews. They report breaking web news, provide analysis of trends, review new websites and services, and offer social media resources and guides.

Never Ending Search - Joyce Valenza is a teacher-librarian at Springfield Township High School, a technology writer, and the maker of a great blog. This blog is found on the School Library Journal site, but don't let that fool you into thinking that Joyce only reports on issues that pertain to school libraries.

Questionmark - Although Questionmark’s mission is to provide (sell) the testing and assessment software and support services, their blog provides some excellent resources on testing theory and practices. I especially enjoy the posts made by Greg Pope, Analytics and Psychometrics Manager, and Eric Shepherd, CEO of Questionmark (His personal blog). They also offer free white papers that are excellent resources for assessment.

Remember, the best way to follow a number of different sites/blogs is to subscribe in an RSS (Really Simple Syndication) Reader. RSS notifies readers of any new content created by sites and you don't have to worry about having your email inbox stuffed full of blog posts. RSS allows you to be notified of the updates and then to read the posts or go to the sites when you have time.

Are there any sites that you can recommend to me?

Tuesday, October 26, 2010

"If Children Have Interest, Education Happens"

Education scientist Sugata Mitra tackles one of the greatest problems of education -- the best teachers and schools don't exist where they're needed most. In a series of real-life experiments from New Delhi to South Africa to Italy, he gave kids self-supervised access to the web and saw results that could revolutionize how we think about teaching.

These "Hole in the Wall" experiments have shown that, in the absence of supervision or formal teaching, children can teach themselves and each other, if they're motivated by curiosity.

I’ve watched this TED talk by Sugata Mitra a couple of times now and it makes me think, in a way, of the Suzuki Method, which is the educational philosophy that strives to create "high ability" in its students through a nurturing environment. "The 'nurture' involved in the movement is modeled on a concept of early childhood education that focuses on factors which Shinichi Suzuki observed in native language acquisition, such as immersion, encouragement, small steps, and an unforced timetable for learning material based on each person's developmental readiness to imitate examples, internalize principles, and contribute novel ideas."

Mitra truly drives home the idea that we MUST get technology into the hands of our students, our children, at the earliest age possible. I like his idea that students should use technology in small groups in order to reinforce learning. The interaction among students is a terrific way to move knowledge from short-term to long-term memory.

I sincerely HOPE you will take the time to watch this video and give thought to how he used technology to educate children. I think it will change the way you look at technology and possibly the way that you teach.  Mitra has certainly changed my thoughts.

As Mitra stated, "If children have interest, EDUCATION HAPPENS!"

Thursday, October 21, 2010

Top 100 Tools for Learning 2010

Jane's E-Learning Pick of the Day has announced her 2010 list of "The Top 100 Tools for Learning." The list was compiled from the Top 10 Tools for Learning that 545 people shared during 2010.

The top 10 learning tools in Jane's list are:
  1. Twitter
  2. YouTube
  3. Google Docs
  4. Delicious
  5. Slideshare
  6. Skype
  7. Google Reader
  8. WordPress
  9. Facebook
  10. Moodle
Most of these are familiar to all of us, but you can always find interesting learning tools in the rest of the list (links are provided in the list to each learning tool).

Jane identified four key trends for the 2010 list:
  1. The increasing consumerization of IT
  2. Learning, working and personal tools are merging
  3. Social tools predominate
  4. Personal (informal) learning is under the control of the learner
As Jane states:
I think these trends are making a significant impact on how we define learning, how learning is supported and "managed".

In a recent article, Top Tools for Learning: Emerging Trends, I looked at these four trends in more detail and asked what this means for the future of workplace learning and also the Learning and Development profession, which I then address in a second article, The New Era of Workplace Learning.
The Winners & Losers 2010 page shows the tools that have gone up and down the list or fallen off it completely or are new entrants this year. So for instance here you can find out:

Which was the highest ranked new tool this year?
Which tool climbed the most places on the list this year?
Which tool descended the most on the list this year?
Which was the highest ranking tool on the 2009 list that lost its place this year?

The Best in Breed 2010 page displays the tools list in tools categories, so for instance you can find the top blogging tools or top wiki tools or top screencasting tools.

A further page provides a list of all the tools that have appeared in the Top 100 Tools list between 2007 and 2010, which makes for interesting reading and analysis of trends. For example, which of the 172 tools listed have consistently appeared on each of the four years' lists? You can find out HERE.
Which of these tools do you currently use? Which new tool (or tools) would you like to try?
 
I strongly recommend that you take a look at this list and at Jane's blog (Jane Hart is a Social Business Consultant and founder of the Centre for Learning & Performance Technologies)!

Tuesday, October 19, 2010

Why Online Assessments? Here are 10 Reasons for Making the Change from Paper and Pencil-Based Exams

The CareerTech Testing Center has been creating skills standards and assessments for over thirty years, but approximately a decade ago, we decided to make the switch to online assessments. So why did we make the switch? Here are our TOP 10 Reasons.
1. Reduction in Costs - Development and delivery of online exams is more efficient and cost effective when compared to the distribution of paper and pencil-based exams, especially when distribution occurs over a large geographical area. Paper and pencil tests no longer need to be distributed, and there is no need for costly scanners or for the costs associated with distributing the test results (packaging and postage). Another advantage is the reduced cost associated with storing the enormous volume of test data.
2. Reduces Logistical Challenges – Conducting paper and pencil exams is logistically complicated as you try to arrange test times around location and staffing issues. There are plenty of things that could go wrong, most importantly test security.
3. Improves Test Security and Reduces Cheating – An article in EducationNext states, “One such study asked 3rd, 6th, 8th, and 10th grade teachers in North Carolina to report how frequently they had witnessed certain inappropriate practices. Of those polled, 35 percent said they had engaged personally in such practices or were aware of others’ unethical actions.” The article also mentions, “In California, 36 percent of teachers thought it appropriate to practice with current test forms.” According to Caveon’s website, “according to surveys in U.S. News and World Report, 80% of "high-achieving" high school students admit to cheating.”

Sadly enough, there are HUGE opportunities for cheating to occur. An online test minimizes the risks associated with cheating. It is much more difficult to lose physical control of the test without a coordinated effort among several people. Also, computers do not know the candidates, so scores cannot be skewed by personal familiarity or bias.

Online testing also allows you to randomize the order of test questions within an assessment. This allows you to give “multiple forms” of a test without the arduous task of creating numerous paper and pencil versions of the same test. This is just one more example of how online tests can minimize cheating.
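To make the randomization idea concrete, here is a minimal sketch in Python (the question IDs and the seed-by-examinee scheme are hypothetical illustrations, not any particular testing product's implementation). Seeding the shuffle with the examinee's ID means each candidate sees a different order, yet the same form can be reproduced later for review or re-grading:

```python
import random

def randomized_form(question_ids, examinee_id):
    """Return a per-examinee ordering of the same question pool.

    Seeding the generator with the examinee ID makes the order
    reproducible while still differing between neighboring candidates.
    """
    rng = random.Random(examinee_id)  # hypothetical seeding scheme
    form = list(question_ids)         # copy so the master pool is untouched
    rng.shuffle(form)
    return form

pool = ["Q1", "Q2", "Q3", "Q4", "Q5"]
print(randomized_form(pool, "student-001"))
print(randomized_form(pool, "student-002"))
```

Every examinee still answers the same questions, so the forms remain statistically comparable; only the presentation order changes.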

4. Increases the Flexibility of Test “Timing” – Tests are usually administered in group settings, but online tests allow you to vary the “timing” of tests. What this means is that online testing allows for more individualized, self-paced learning. If some students are achieving at a faster rate, they can be administered a test and, upon receiving a passing score, proceed to their next educational objective.

5. Allows for the Development of Item Banks – Creating questions and organizing them into assessments -- tests, quizzes, exams, and surveys -- gives the test developer several options for delivery, analysis, and revision.

6. Time Savings - Since online tests are automatically scored, staff is relieved of the burdensome chore of scoring exams (as well as the tasks of receiving the tests and shipping the results). Students and instructors also receive immediate feedback on results. There will be no “lag” time between test administration, scoring, and remediation. What a great way to enhance learning!

7. Opportunities for Multimedia Uses - Online tests can be embedded with graphics and multimedia (i.e. Adobe Flash animations and videos). This can create a more interesting, interactive, and challenging assessment for examinees. There are testing software options that provide auto-sensing and auto-sizing for flexible delivery options and some even allow for translations into different languages.

8. Analyze Feedback from Test Administrations – Online tests provide many more ways to analyze testing data. Testing software can provide each examinee with a coaching (scoring) report that provides not only the overall result, but scores can be broken down by duty area. Results should then be analyzed according to individual students (relative strengths and weaknesses), individual instructors, and for the overall program. In other words, did an instructor adequately cover the standards? Did the curriculum align to the standards? Did the program meet requirements at the local or state level?

9. Item Analysis – Online testing gives you the ability to apply classical test theory to your assessments through item analysis. This involves the use of statistics that can provide useful information for improving the quality and accuracy of individual multiple-choice or true/false items (questions). Some of these statistics are: item difficulty (p-value), item discrimination (point-biserial correlation), reliability coefficient, item-total statistics, and distractor evaluation (see Instructional Assessment Resources).

10. Accessibility – Much of today’s technology allows you to meet accessibility needs by providing the examinee with text-sizing and contrast controls. For example, delivery software can render HTML that is optimized to work with assistive technologies such as screen readers, and the aforementioned text-sizing and contrast controls within assessments can aid participants with low/partial vision. There has also been improvement in the navigation of assessments via keyboards and/or alternate devices to accommodate participants who are unable to use a mouse.
Now the question you must ask yourself is…”If I’m embracing technology in the classroom, then shouldn’t I embrace technology within my assessments?”

I hope you will contact us at the CareerTech Testing Center if you have any questions about assessments or if you would like to discuss how we might assist you with your testing needs.

I would also like to mention that I read a guest post on the Teacher Reboot Camp blog by Shankar Ganesh that gave me the idea for this post. I wanted to expound upon his original ideas and provide our outlook on why you should make the change from paper and pencil-based assessments.

Monday, October 18, 2010

Mapeas...Placing World News on a World Map


Those of you who follow this blog on a regular basis know that I'm a news and current events junkie. I previously made a post about Newseum, a site that displays daily newspaper front pages in their original, unedited form from around the world. Just double click and the page gets larger; on some papers you can read the entire front page if you click in the right place. (Some front pages may contain material that is objectionable to some visitors. Viewer discretion is advised.)

I would like to introduce you to a new news site called Mapeas. This is a service that allows you to search for news briefs and videos from around the world. Think of it as a way to combine the teaching of current events with geography. Mapeas places world news, i.e., business, entertainment, general news, science, and sports, on a world map. Click any circle on the map to zoom in on a location and select a video news story. The news reports are provided by AFP, ABC, the Associated Press, Fox, and NBC. Another interesting feature of Mapeas is that you can display the map in alternative formats such as terrain, satellite, and hybrid.
 
Take a look and I think you will enjoy it, especially if you like news stories as much as I do.  J.T. 

Wednesday, October 13, 2010

Are You a Change Agent for Education?

I was sent a link to an article entitled "Change Agent" yesterday that was published in Education Week.

The article, which profiles Will Richardson, provides some thought-provoking ideas on how education should change. I can't say that I agree with everything Richardson is saying, but I do think he has made some valid arguments. I agree that problem-solving skills should be taught at a higher level and that more inquiry-based learning should be used. Educators should also be more willing to embrace technology, share their resources and best practices, and establish an online presence as a "model" for our students.

I disagree with the author's belief that "facts" are somewhat irrelevant if a student can access the needed information in a couple of seconds on a smart phone. Richardson's example of “What was the third ship that Columbus sailed?” as being irrelevant is flawed in my opinion. The purpose of teaching history is to learn the trials and tribulations of individuals, groups, or nations. In order to understand their successes and failures, a student must understand the historical setting in which certain events took place. The goal of history isn't just learning "facts"; it is learning from the critical thinking skills that these historical figures possessed. It's really the same thing that the author is proposing that we do in the future as educators. Possibly, the facts aren't being taught or assessed in what I would consider the correct manner, but that is an entirely different topic.

I would also disagree with the author on his feelings toward testing. Assessments can be used not only as measures of knowledge but also as teaching tools. As I have stated numerous times on this blog, just looking at a test "score" is never in the best interest of a student, and more should be done with the information that you receive from testing. A test is a "point-in-time" measure of a student's ability, and many factors help comprise that score. The "score" is simply one factor of many, and it must be interpreted with caution.

There are also certain knowledge facts that an individual should possess prior to any type of certification. Would you want a crane operator or your surgeon accessing their smart phone while conducting business? I certainly don't think that would be in anyone's best interest.

I believe education builds upon itself, and many facts must be taught as a foundation for lifelong learning to occur. Critical thinking skills and inquiry-based instruction are absolute musts for education! As is the infusion of technology! We need to improve the efficiency of our instructional methods, do more with what we have, and continue to raise expectations for everyone involved in education.

The article, written by Anthony Rebora, states:
Will Richardson, a former teacher-turned-tech expert, says schools need to revolutionize teaching and learning to keep pace with societal changes.
Will Richardson was a high school English and journalism teacher in New Jersey for nearly 20 years. During the early part of this decade, he began experimenting with the use of interactive Web tools in the classroom and was soon transfixed by their potential for increasing students’ engagement and exposing them to new resources and outlets for expression. His experiences led him to write Blogs, Wikis, Podcasts, and Other Powerful Web Tools for Classrooms (Corwin). Now in its third edition, the book has sold more than 60,000 copies and become one of the most influential books available on integrating Web 2.0 technology in the K-12 classroom.
Richardson is now an educational-technology consultant and co-founder of Powerful Learning Practice, a professional development provider devoted to fostering online community for teachers. Both in his speaking engagements and on his blog, Weblogg-ed, Richardson argues that schools need to transform their models of teaching and learning to reflect broad changes in information technology and new intellectual demands and opportunities presented by global online networks.
Click HERE to read the entire article and let us know your thoughts. It's a great read!!

Thanks to Claire for sending the great article! J.T.

Monday, October 11, 2010

The Nation's Best Kept Secret...Career and Technology Education



Despite its enormous impact on the nation’s labor market, its direct tie to economic stability and recovery, and its proven success addressing the nation’s high school dropout issue, career technical education (CTE) remains the nation’s best-kept educational secret.

The Association for Career Technical Education (ACTE) has taken action to share that secret by asking friends of CTE to join ACTE as it launches a campaign to promote CTE. ACTE applied for a grant from the Pepsi Refresh Project, which is giving monetary awards to deserving individuals and organizations who want to make a positive impact on their local communities. In this project, voting determines which ideas get funded. If ACTE wins, the proceeds will go directly to an image campaign aimed at benefiting all those in the CTE community. Everyone can vote daily throughout the entire month of October by visiting http://www.refresheverything.com/cteimage AND texting 103403 to Pepsi (73774). For more information about the Pepsi Refresh project, visit http://www.refreshcte.com/.

CTE friends, we encourage you to get behind this effort and show your support. We urge you to vote twice daily by text and online. Pass this message on to your colleagues and friends!

Please vote on a daily basis and turn this situation from the best kept secret to:
AMERICA'S BEST KNOWN FACT!!!

Wednesday, October 6, 2010

Are You Too Attached to Technology?

I just came across a recent (6/6/2010) New York Times article entitled “Your Brain on Computers: Attached to Technology and Paying a Price” written by Matt Richtel.

The article looks into our ability to focus when multitasking, and the basic conclusion is that we really can't focus intensely on one thing when we have other things going on simultaneously, e.g., doing homework while listening to music, texting your friends, and chatting on Facebook.

The article, which I recommend that you read, details the lives of a family from Oklahoma and their struggles between an "online" life and having time for family.

The article also includes two "tests" that determine your level of attention and your ability to focus.

The first assessment is called “Test Your Focus” and you have to determine whether the red rectangles, which are mixed in with blue rectangles, have rotated. The second assessment is called “Test How Fast You Juggle” and you are presented with a number and a letter and have to determine whether there are vowels or consonants and even or odd numbers. (The juggling is information, not objects). Both of these tests sound simple, but they aren't (at least not for me). They do a good job of demonstrating that our brain really does try to focus acutely on a single task.

Would the article and the tests make good examples for your students?  J.T.

Tuesday, October 5, 2010

goo.gl


Are you thinking that I can't spell "Google?"

Actually, goo.gl is the URL Shortening Web Tool from Google. Like bit.ly and tinyurl.com, it allows teachers and students to take really long URLs and make them smaller and therefore more manageable. Google takes the concept one step further and adds analytics to the idea: you can see how many people are using a goo.gl link and when, as well as keep a stored history of all your goo.gl links.

I hope you like this new tool from Google. Give it a try and let us know what you think!  J.T.

Friday, October 1, 2010

School Cheaters Often Have Personality Disorders, Study Finds

I found a very interesting article written by Janet Steffenhagen in the Vancouver Sun (September 9, 2010). So why do some students cheat and others don't? Have you ever thought that the cheaters in your classroom are suffering from one or more of three personality disorders known as the "dark triad": psychopathy, Machiavellianism (manipulativeness) and narcissism?

The article states,
"Students who cheat in school often have personality disorders that make them manipulative, callous, arrogant and difficult to handle, according to a University of B.C. study.

The study, which examined the behaviour of university students over 10 years, concluded that high schools and post-secondary institutions have to find creative ways of discouraging cheaters because many aren't afraid of punishment, are amoral and have a strong sense of entitlement.

"They aren't the ones who are in prisons, at least not yet," said lead researcher psychology Prof. Delroy Paulhus. "They haven't committed serial murders, but they're operating with the same kind of behavioural patterns. They are talented people who are taking advantage of wherever they are -- be it the stock market or be it a competitive school.""

Click HERE to read the entire article.
So what are some creative ways that you discourage cheating? I hope you will share your ideas and also your thoughts on this interesting article!  J.T.

Wednesday, September 29, 2010

The Learning Network - Teaching and Learning with the New York Times

I wanted to let you know about The Learning Network blog which provides teaching and learning materials and ideas based on New York Times content. They provide not only lesson plans that span across subject areas and levels, but also News Quizzes and Opinion questions. You can also learn the Word of the Day, try out Test Yourself questions, and complete a Fill-In or read their Poetry Pairings.

According to their site, their mission is to "offer rich and imaginative materials for teaching and learning using New York Times content."

The blog also states:
Every weekday we offer new educational resources based on the articles, photographs, videos, illustrations, podcasts and graphics published in The New York Times – all for free.

We invite parents, teachers and students who are 13 and older to use our ideas and tools. We hope that through posting your comments you’ll become part of an ongoing conversation about teaching and learning.

For the 2010-’11 school year, here are 11 great ways to use our blog.

Throughout the year, we offer the following regular features:
  • Lesson Plans — Daily lesson plans based on New York Times content.
  • Student Opinion — News-related questions that invite response from students age 13 and older.
  • Word of the Day — Vocabulary words in the context of recent Times articles.
  • Test Yourself — Questions based on Times content that aim to strengthen literacy and numeracy skills.
  • 6 Q’s About the News — An activity in which students answer basic questions (Who, What, Where, When, Why and How) about an article.
  • News Quiz — Interactive daily news quizzes on current top stories.
  • On This Day in History — Listings of historical events and more for each day of the year.
  • Student Crossword — Topical puzzles geared toward teens.
  • Fill-Ins — Times articles from which word and phrases have been dropped. Fill in the blanks with your own words, or choose from a scrambled list of the words that were removed.
  • Poetry Pairings — A weekly collaboration with the Poetry Foundation in which we feature a work from its American Life in Poetry project alongside content from The Times that somehow echoes, extends or challenges the poem’s themes.
We hope teachers will use our blog to get and exchange ideas, parents to share how news stories have resonated at home, and students to express themselves on everything from politics to popular culture.
Just like on this blog, you can join the conversation by commenting on any post. We'd love to hear what you think!  J.T.

Tuesday, September 21, 2010

How Should We Measure an Organization’s Level of Psychometric Expertise?

Greg Pope, the Analytics and Psychometrics Manager for Questionmark, made a very interesting post to their blog yesterday. I think the post really makes you reflect on where your organization is currently operating from a psychometric point of view, but it also brings up the question of where your organization is headed. Considering the level of your organization, are your assessments psychometrically sound? Are you staffed at the appropriate level?

Greg's post also brings about a myriad of other questions that I need to think about, but I hope you will take a few minutes to read this very interesting post and make a response to this blog or to Questionmark's blog (the link is at the end of the post).

Here is Greg's post:
A colleague recently asked for my opinion on an organization’s level of knowledge, experience, and sophistication applying psychometrics to their assessment program. I came to realize that it was difficult to summarize in words, which got me thinking why. I concluded that it was because there currently is not a common language to describe how advanced an organization is regarding the psychometric expertise they have and the rigour they apply to their assessment program. I thought maybe if there were such a common vocabulary, it would make conversations like the one I had a whole lot easier.

I thought it might be fun (and perhaps helpful) to come up with a proposed first cut of a shared vocabulary around the levels of psychometric expertise. I wanted to keep it simple, yet effective in allowing people to quickly and easily communicate about where an organization would fall in terms of their level of psychometric sophistication. I thought it might make sense to break it out by areas (I thought of seven) and assign points according to the expertise/rigour an organization contains/applies. Not all areas are always led by psychometricians directly, but usually psychometricians play a role.

Areas

1. Item and test level psychometric analysis
a. Classical Test Theory (CTT) and/or Item Response Theory (IRT)
b. Pre hoc analysis (beta testing analysis)
c. Ad hoc analysis (actual assessment)
d. Post hoc analysis (regular reviews over time)

2. Psychometric analysis of bias and dimensionality
a. Factor analysis or principal component analysis to evaluate dimensionality
b. Differential Item Functioning (DIF) analysis to ensure that items are performing similarly across groups (e.g., gender, race, age, etc.)

3. Form assembly processes
a. Blueprinting
b. Expert review of forms or item banks
c. Fixed forms, computerized adaptive testing (CAT), automated test assembly

4. Equivalence of scores and performance standards
a. Standard setting
b. Test equating
c. Scaling scores

5. Test security
a. Test security plan in place
b. Regular security audits are conducted
c. Statistical analyses are conducted regularly (e.g., collusion and plagiarism detection analysis)

6. Validity studies
a. Validity studies conducted on new assessment programs and ongoing programs
b. Industry experts review and provide input on study design and finding
c. Improvements are made to the program if required as a result of studies

7. Reporting
a. Provide information clearly and meaningfully to all stakeholders (e.g., students, parents, instructors, etc.)
b. High quality supporting documentation designed for non-experts (interpretation guides)
c. Frequently reviewed by assessment industry experts and improved as required

Expertise/rigour points
0. None: Not rigorous, no expertise whatsoever within the organization
1. Some: Some rigour, marginal expertise within the organization
2. Full: Highly rigorous, organization has a large amount of experience

So an organization that has decades of expertise in each area would be at the top level of 14 (7 areas x 2 for expertise/rigour in each area = 14). An elementary school doing simple formative assessment would probably be at the lowest level (7 areas x 0 expertise/rigour = 0). I have provided some examples of how organizations might fall into various ranges in the illustration below.
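Greg's point system is simple enough to express directly. Here is a minimal sketch in Python (the abbreviated area names and the example ratings are hypothetical illustrations, not an assessment of any real organization):

```python
# The seven areas from Greg's proposed vocabulary, abbreviated.
AREAS = [
    "Item and test level analysis",
    "Bias and dimensionality",
    "Form assembly",
    "Score equivalence and standards",
    "Test security",
    "Validity studies",
    "Reporting",
]

def psychometric_score(ratings):
    """Sum per-area ratings (0 = none, 1 = some, 2 = full); maximum is 14."""
    assert len(ratings) == len(AREAS), "one rating per area"
    assert all(r in (0, 1, 2) for r in ratings), "ratings are 0, 1, or 2"
    return sum(ratings)

# A hypothetical mid-sized certification program: full expertise in item
# analysis and reporting, some rigour in the other five areas.
example = [2, 1, 1, 1, 1, 1, 2]
print(psychometric_score(example), "/", len(AREAS) * 2)  # prints: 9 / 14
```

As Greg notes, the right target on this 0-14 scale depends on the stakes of the program, not on reaching 14 for its own sake.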


There are obviously lots of caveats and considerations here. One thing to keep in mind is that not all organizations need to have full expertise in all areas. For example, an elementary school that administers formative tests to facilitate learning doesn’t need to have 20 psychometricians working for them doing DIF analysis and equipercentile test equating. Their organization being low on the scale is expected. Another consideration is expense: To achieve the highest level requires a major investment (and maintaining an army of psychometricians isn’t cheap!). Therefore, one would expect an organization that is conducting high stakes testing where people’s lives or futures are at stake based on assessment scores to be at the highest level. It’s also important to remember that some areas are more basic than others and are a starting place. For example, it would be pretty rare for an organization to have a great deal of expertise in the psychometric analysis of bias and dimensionality but no expertise in item and test analysis.

I would love to get feedback on this idea and start a dialog. Does this seem roughly on target? Would it be useful? Is something similar out there that is better that I don’t know about? Or am I just plain out to lunch? Please feel free to comment directly on the Questionmark blog.

On a related note, Questionmark CEO Eric Shepherd has given considerable thought to the concept of an “Assessment Maturity Model,” which focuses on a broader assessment context. Interested readers should check out: http://www.assessmentmaturitymodel.org/
 