EDUC 415 Final Quiz: Educational Assessment of Students

Covers the textbook material from Module 1: Week 1 through Module 8: Week 8.

  1. A strength of teacher-made tests is that they can assess
  2. In the development of assessment blueprints, the number of learning objectives in each topical area helps determine the weighting given to the particular area in the assessment.
  3. Which of the following is NOT part of a learning unit assessment plan?
  4. For the following, select the type of technology use illustrated by the question below. A college professor posts a video on her online course, then has students submit a one-paragraph response and take a quiz on the material, also on the online course site.
  5. In general, which of the following assessment alternatives would yield the most valid results for purposes of formative assessment of students?
  6. Why do you need to plan the weight each assessment contributes to a report card grade?
  7. In general, using the results of a single assessment for purposes of grading a student is less valid than using multiple assessment results because any single assessment
  8. Which of the diagnostic approaches is implied when a teacher says, “I’m interested in finding out why my students confuse the rotation and revolution of the planets”?
  9. For what kind of subject matter is the prerequisite knowledge and skills approach most appropriate?
  10. The table below shows the results (fraction of items correct) of a diagnostic test for five primary school students. Referring to that table, according to the mastery approach, what is Brad’s weak area?
  11. What can you learn from carefully facilitated class discussions?
  12. A teacher says, “Ariel is able to add three-digit numbers without carrying and is now ready to learn how to add two-digit numbers with carrying.” Which diagnostic approach is implied?
  13. For which diagnostic approach would student think-alouds be most useful?
  14. The table below shows the results (fraction of items correct) of a diagnostic test for five primary school students. Referring to that table, according to the content profile of strengths and weaknesses diagnostic approach, which student has a weakness in addition?
  15. Which of the following comments is an example of cognitive feedback?
  16. Which of the following is an example of evaluative feedback?
  17. Providing feedback in online learning environments is very different from providing feedback in face-to-face classrooms.
  18. Which of the following are useful tools for supporting peer feedback?
  19. What is a common problem teachers face when giving feedback to struggling students?
  20. A middle school social studies class did individual reports on the contributions of different explorers to European expansion into the New World. Which of these scenarios uses timing most effectively?
  21. Why are describing work and making suggestions for improvement usually the most powerful feedback?
  22. The primary value of true-false items is to assess a student’s ability to judge the correctness of verbal propositions.
  23. In classroom assessment, it is acceptable if a few test items are focused on minor points of content.
  24. In classroom assessment, it is of very little importance if students use their knowledge of flaws in the items to answer them correctly.
  25. You are writing pairs of “true” and “false” items. For one item, you can only write a “true” item. You can’t figure out a way to write a good “false” item except to insert a “not” before the verb. What should you do?
  26. The most discriminating true-false items tend to be those
  27. It is acceptable if a student can answer a test item correctly by using his or her partial knowledge of the learning objective.
  28. When crafting completion items for elementary students, it is best to provide clues by making the length of the blanks proportional to the length of the omitted word.
  29. Items written in the greater-same-less or similar format tend to be self-explanatory and do not require directions to students.
  30. Which of the two methods of scoring tabular items provides a more reliable result?
  31. Options in a multiple-choice test function better when placed in the middle of the stem rather than at the end of the stem.
  32. Distractors in experiment-interpretation items should be based on the typical misconceptions students have about why an experimental result came about.
  33. A good assessment task should help you to distinguish clearly between less knowledgeable and knowledgeable students.
  34. The difficulty of a multiple-choice item can be increased by using distractors which are very similar to the correct option.
  35. In organizing a set of masterlist items, the set of items is put before the masterlist of options.
  36. It does not make any difference whether the interpretive materials, the masterlist of options, and the set of items appear on the same page or on different pages of the test.
  37. Why are problem-solving and critical-thinking abilities often assessed together?
  38. You can assess a student’s ability to induce and to judge induction by using either the response-choice or the constructed-response item format.
  39. Students should be assessed by requiring them to reproduce what was taught in class.
  40. What is an open-response assessment task?
  41. Which of the following kind of concept is NOT based on a definition?
  42. The general approach to problem-solving strategies is almost always more powerful than approaches based on specific domains.
  43. Which of the following is the best way to assess the concept “bicycle”?
  44. For instructional purposes, it is best if teachers use
  45. What is the effect of rater drift on essay scores?
  46. Other things being equal, an extended response essay is more likely to be reliably scored than a restricted response essay.
  47. When you construct an analytic scoring rubric for essay-type items, you should specify
  48. Which of the following essay-scoring factors is LEAST likely to lower the validity of social studies test scores?
  49. For a meaningful feedback to students on their essays, the teacher should
  50. The best way to use writing standards and rubrics in the classroom to assess students is to use them primarily as summative evaluation tools.
  51. Authentic performance tasks align better to learning objectives than do paper-and-pencil tests.
  52. A teacher knows that Linda always performs at the top of her class. On one unit’s summative assessment, Linda’s performance was not very good, but the teacher gave her a high grade, reasoning that Linda just had an “off day.” Which type of error did the teacher most likely commit?
  53. Before you decide what content should be included in students’ portfolios, you should clearly identify the
  54. Which of the following rating errors is likely to be made by a new teacher who does NOT know the students very well?
  55. Which of the following is a characteristic of authentic assessment?
  56. Individual student projects should assess
  57. What is the difference between a performance assessment and a classroom learning activity?
  58. For each entry in a best-works portfolio, a teacher should require the student to write an explanation that relates the entry to the learning objective it shows achievement on.
  59. Refer to the Item Analysis Information Chart above. Which of the following is the poorest-functioning distractor?
  60. An advantage to preparing an answer key before you administer the test to students is that you may identify mistakes in your test items.
  61. If your students differ greatly in their test-wiseness skills, the validity of their scores on your classroom assessment results is likely to be lower than if they all had equal test-wiseness skills.
  62. What is the best way to arrange items on a test?
  63. You can use simple statistical analyses to identify the major types of errors students make in answering essay items.
  64. Why should you group test items according to the type of item (multiple choice, true/false, etc.)?
  65. A distractor is not functioning properly if no one in the upper group selected it.
  66. Which of the following grading approaches is most compatible with standards-based approaches to teaching?
  67. Which of the following grading approaches is most compatible with the theory that intelligence is fixed?
  68. A multiple marking system reports students’ progress in different curriculum areas but NOT in noncognitive areas.
  69. Formative assessment results from quizzes, homework, and classroom activities should play a major role in determining a student’s marking period grade.
  70. For the following, select the type of grading illustrated by the question below. A teacher compares a student’s achievement during the marking period with the learning ability she thinks the student has. The grades are awarded according to how well the student has lived up to his capacity.
  71. In a marking period there are 200 possible points to earn. A teacher allowed a maximum of 20 points for homework and quizzes, 80 points for the midterm, and 100 points for the end-of-unit exam. At the end of the marking period, the teacher summed up the scores for each student. On the basis of the final sum, the teacher awarded grades to the students. What type of criterion-referenced grading is the teacher using?
  72. A teacher used the following assessment tasks while teaching a unit. Choose the correct weight for the assessment tasks based on the description below. An in-depth essay question covering 1 of the 8 unit learning objectives and marked with a well-defined scoring rubric vs. A test comprised of short-answer and completion items covering 7 of the 8 unit learning objectives
  73. Grading on the curve is compatible with self-referencing at the formative evaluation stages.
  74. Standardized achievement tests provide useful information that may be part of a study of whether a local district needs to change its curriculum.
  75. At the secondary school level, use of a standardized achievement test should be guided, among other things, by the consideration of continuous educational growth over the grades.
  76. Small deviations from the instructions for administering a standardized achievement test are acceptable provided that they make test administration more practical than was suggested by the publisher.
  77. Reporting a school’s standardized test results to the school board without describing the other characteristics of the school and its students is an unethical assessment practice.
  78. A state has its own mandated accountability test. Your school district wants to use an additional standardized test. Which of the following is a recommendation by the Brookhart and Nitko textbook as the school district deliberates what test to adopt?
  79. For which of the following purposes can a teacher use standardized achievement tests?
  80. Why is it considered unethical to give students practice on the same test items that they are going to be administered later?
  81. States are said to use a standards-referenced framework when they align state test items with state standards and set the performance levels that qualify as “basic,” “proficient,” and so on. Why is this NOT simply an example of “criterion-referencing”?
  82. Which of the following describes a derived score?
  83. Which of the following types of scores is BEST to use to explain a student’s strengths or weaknesses across different curriculum areas to parents?
  84. Which of the following would be the LEAST appropriate classroom use of standardized test results?
  85. Which of the following is a necessary condition for a highly valid criterion-referenced interpretation of test scores?
  86. The norm data reported in a test manual cannot have a normal distribution unless students’ raw scores naturally have a normal distribution.
  87. A student who obtained a percentile rank of 50 in grade three social studies and obtained the same percentile rank in grade four social studies has not shown any educational growth.
  88. Where would you look to see if there was a published test that measures a specific area of interest to your school?
  89. Local teachers’ judgments about how well the results of a standardized test could be used for improving instruction in the district have no place in decisions about selecting the test.
  90. The most important materials one should obtain from a publisher in order to review an assessment instrument properly are the technical manual and the specimen sets.
  91. A school district would like to review its existing testing program with an eye toward adopting a new standardized test. Representatives from the following groups are available to help in this process:
  92. Your school has identified a test they want to use. Where can they find information about the cost of the test and whether they can receive test results electronically?
  93. When asked to join his peers in extra-curricular sports, Robertson told his teacher that he prefers indoor games to outdoor games.
  94. If a student’s socioeconomic level and family group remain stable over a long time period, the student’s aptitude scores are likely to be very similar over this time interval.
  95. What is the purpose of publishing separate male and female norms in interest inventories?
  96. On the average, scores from an unreliable aptitude test will be lower than scores from a reliable aptitude test.
  97. Aptitude tests measure a person’s future performance.
  98. How are the activities listed on an empirically-keyed interest inventory item identified?
  99. Select the concept illustrated in each statement below. At the end of the marking period, the teacher’s assessment showed that Jennifer could correctly spell eighty percent of the words she taught the class.
  100. John says he wants to be a doctor. What kind of interest is this?

Files Included - Liberty University
  1. EDUC 415 Quiz Final Educational Assessment