Available Tracks for Continuing Education Workshops
A. Full Track
The full track comprises three sections and a concluding exercise.
1. Section 1: Theory and Practice - An Introduction
A good test - of any kind - is based on several fundamental principles. This section will present the basic concepts of testing. We will discuss the question of what makes a good test (and what makes a bad one). We will try to clarify the less obvious advantages, as well as the limitations, of testing. How do we know our test is fair? How do we know it is testing what we want to test? We will define the professional standards that will guide us in the process of test development.
2. Section 2: Cornerstones of Writing Questions and Tests
This section is made up of three sub-sections:
2.1 Writing closed-ended test items
In this section, we will examine several types of closed-ended test items, focusing on multiple-choice (“American”) questions. Questions of this type allow us to test a lot of knowledge in a short period of time. Yet many lecturers avoid questions of this type because writing them demands a great deal of effort and because examinees can guess the answer. In this section, we will take an in-depth look at methods and principles of writing multiple-choice questions. How can we reduce the probability that examinees will guess the answer correctly? How can we test higher-order thinking skills using multiple-choice questions? We will also talk about the psychological impact that a multiple-choice test has on examinees, and we will ask ourselves what we can learn from these tests about the examinees’ knowledge. We will discuss the importance of choosing the proper phrasing for questions and the possible answers that are offered, and we will learn how to edit, revise, and improve existing questions.
2.2 Writing open-ended test items and checking them
Tests made up of open-ended items are the most common kind of tests given in schools and universities. Open-ended questions seem easy to write and ostensibly give each examinee a good way to demonstrate their knowledge and abilities. Yet when we begin to score the answers, the task turns out to be very complicated. How can we compare examinees’ answers? How can we counteract the influence of various factors that can interfere with fair scoring? In this section, we will be introduced to methods for effectively scoring essays, and we will learn how to write questions that allow examinees to present their knowledge in the best possible way and allow us to give them the most appropriate score.
2.3 Developing a complete test
The process used to develop a test determines, to a large extent, its quality. In this section, we will learn about the tools that help us ensure we are building a test professionally and efficiently. We will learn how to decide which type of question - open-ended or multiple-choice, or perhaps a performance task - is most appropriate for a particular test. We will formulate an effective process for writing and critically reviewing questions. We will try to determine the best way to review a test and to use it to assess the material students have learned. How can we make sure that a test given at a later date (“moed bet”) is not easier than the test given on the original date (“moed aleph”)? This section will address theoretical and technical issues pertaining to the development of all types of tests.
3. Section 3: Additional Issues
This section is made up of three sub-sections:
3.1 Evaluating the quality of a test using statistical data
No background in statistics is required.
A central branch of test theory deals with the development of statistical tools for evaluating a test - before and after it is administered - and for improving it. Using these tools, we can obtain valuable information about examinees’ capabilities. How do we measure how difficult a test question is or how difficult the test is as a whole? How do we know that the test questions differentiate between weak examinees and strong examinees? This section is designed for anyone who develops tests for large numbers of examinees and anyone who might be interested in integrating statistical tools into the process of developing a test.
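Two of the statistics mentioned here - an item's difficulty and how well it differentiates between weak and strong examinees - can be illustrated with a small sketch. The function names and the data below are invented for illustration; this is a minimal example of two classic indices, not the specific tools taught in the workshop.

```python
# Illustrative item statistics: difficulty (proportion of examinees who
# answered correctly) and a simple discrimination index (difficulty among
# the strongest examinees minus difficulty among the weakest).
# All names and data here are hypothetical.

def item_difficulty(responses):
    """Proportion of correct (1) responses; higher means an easier item."""
    return sum(responses) / len(responses)

def discrimination_index(responses, total_scores, fraction=0.27):
    """Difference in item difficulty between top and bottom scoring groups.

    `responses` are 0/1 marks on one item; `total_scores` are the same
    examinees' overall test scores; `fraction` sets the size of each
    comparison group (27% is a common convention).
    """
    order = sorted(range(len(responses)), key=lambda i: total_scores[i])
    k = max(1, int(len(responses) * fraction))
    bottom = [responses[i] for i in order[:k]]   # weakest examinees
    top = [responses[i] for i in order[-k:]]     # strongest examinees
    return item_difficulty(top) - item_difficulty(bottom)

# Invented data: ten examinees' marks on one item and their total scores.
item = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
totals = [55, 40, 70, 65, 35, 80, 75, 30, 60, 85]

print(item_difficulty(item))                  # 0.7
print(discrimination_index(item, totals))     # 1.0
```

An item answered correctly by strong examinees but not by weak ones (a high discrimination index) is doing its job; an item everyone gets right, or that weak examinees get right as often as strong ones, tells us little.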
3.2 Adapting the test for examinees with disabilities
Many examinees have physical or psychological disabilities or learning disabilities, and there is concern that tests may not be fair to them. In this section, we will learn how to adapt tests so that examinees with disabilities will be able to demonstrate their abilities and knowledge in the best way possible, without adversely affecting the evaluation of those abilities and knowledge.
3.3 Assessing writing ability
Theoretical or academic writing is a skill whose importance cannot be overstated. This skill is essential for carrying out assignments, taking tests, writing papers and articles, etc. Examinees who have difficulty with academic writing will have difficulty demonstrating their knowledge and abilities in writing, even if they in fact have the knowledge and skills being tested. In this section, we will learn how to evaluate theoretical writing skills in a way that is consistent, fair, and transparent to students, while at the same time disregarding common biases that can interfere with scoring.
4. Concluding Exercise
In the concluding exercise, participants will critique and improve test questions using the principles presented and discussed in the workshop. Participants may download a model solution that discusses the questions in the exercise. To make the exercise relevant to the participants’ subject areas, two versions will be created: one containing test questions pertaining to the humanities and social sciences, and one containing test questions pertaining to the natural and exact sciences.
B. Track for Developing Closed- and Open-Ended Questions (without the section on “Additional Issues”)
This track comprises the first two sections of the workshop and the concluding exercise.
C. Track for Developing Closed-Ended Questions (without the section on “Additional Issues”)
This track comprises the first two sections of the workshop - without sub-section 2.2, which deals with open-ended questions - and a concluding exercise that deals only with closed-ended questions. This track is recommended for people who develop tests made up entirely of multiple-choice questions.
Participants in each of the tracks may join a 90-minute synchronous Zoom session, where they will discuss the concluding exercise. Senior NITE test developers will facilitate these sessions. The synchronous sessions are designed for groups of three participants at most.
For consultations, general information, and coordination of an online workshop: Sephi Pumpian, 02-675-0646; email@example.com.