Analyzing-Evaluating-Creating: Assessing Computational Thinking and Problem Solving in Visual Programming Domains
Basic Concepts
Developing a novel test, ACE, to assess computational thinking skills at higher cognitive levels.
Summary
This content discusses the development and evaluation of a new test, ACE, designed to assess computational thinking (CT) skills in students. The test targets the higher cognitive levels of Analyzing, Evaluating, and Creating, and comprises 21 multiple-choice items based on tasks from the Hour of Code: Maze Challenge (HoCMaze). A study with 371 students in grades 3-7 confirmed the reliability and validity of ACE through psychometric analysis frameworks.
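The psychometric analysis mentioned above typically includes an internal-consistency estimate such as Cronbach's alpha. The following is a minimal sketch, not the study's actual pipeline: it assumes a students-by-items matrix of 0/1 item scores, and the demo array is invented for illustration rather than taken from ACE data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for a students x items score matrix.

    scores: 2D array, rows = students, columns = items (e.g. 0/1 per test item).
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 students answering 4 items (not real ACE data)
demo = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 0, 0],
])
print(round(cronbach_alpha(demo), 3))
```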
Structure:
- Introduction to Computational Thinking in Education
- Existing CT Assessments Overview
- Development of ACE Test
- Study Design and Data Collection Process
- Results Analysis: Internal Structure, Reliability, Correlation with HoCMaze Scores
- Limitations of the Study
- Conclusion and Future Directions
Statistics
Recent works have proposed tests for assessing computational thinking skills across various concepts.
ACE comprises a diverse set of 21 multiple-choice items, seven for each of the three cognitive levels (Analyzing, Evaluating, and Creating).
A study was conducted with 371 students in grades 3-7 from 10 schools to evaluate the psychometric properties of ACE.
Quotes
"Computational thinking involves solving problems, designing systems, and understanding human behavior." - [1]
"CT is being increasingly integrated into K-8 curricula worldwide." - [4]
"ACE contains items that require synthesizing new problem instances to verify the correctness of a proposed solution."
Deeper Questions
How can assessments like ACE be further validated beyond student performance?
To further validate assessments like ACE, researchers can consider incorporating expert feedback and convergent validity measures. Expert feedback involves having experienced educators or professionals in the field review the test items for accuracy, relevance, and alignment with learning objectives. Their input can provide valuable insights into the quality of the assessment.
Convergent validity refers to comparing the results of one assessment with those of another established assessment that measures similar constructs. By demonstrating a strong correlation between ACE scores and scores from other validated CT assessments, researchers can strengthen the evidence supporting the reliability and validity of ACE.
Additionally, conducting longitudinal studies to track students' progress over time and analyzing how well ACE predicts future academic success in CT-related subjects could provide further validation for this assessment tool.
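As a concrete illustration of the convergent-validity idea above, the sketch below correlates two per-student score vectors using Pearson's r and Spearman's rho. The ace_scores and hoc_maze_scores arrays are invented placeholders rather than data from the study, and scipy is assumed to be available.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student totals (placeholders, not data from the study)
ace_scores = np.array([14, 9, 18, 11, 20, 7, 16, 13])
hoc_maze_scores = np.array([10, 6, 12, 8, 13, 5, 11, 9])

# Pearson r treats scores as interval-scaled; Spearman rho only assumes ranks
pearson_r, pearson_p = stats.pearsonr(ace_scores, hoc_maze_scores)
spearman_rho, spearman_p = stats.spearmanr(ace_scores, hoc_maze_scores)

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.3f})")
```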
What are potential drawbacks of relying solely on multi-choice tests for measuring computational creativity?
While multiple-choice tests offer practicality and scalability in assessing computational thinking skills, they have limitations when it comes to measuring computational creativity. Some potential drawbacks include:
Limited Scope: Multiple-choice questions may not capture the full range of creative problem-solving abilities required in real-world scenarios. Creativity often involves thinking beyond conventional solutions, which is hard to assess through predetermined answer choices.
Guessing: Students might guess correct answers without truly understanding or applying creative problem-solving strategies, inflating their scores without reflecting genuine computational creativity (a small numerical sketch follows this list).
Subjectivity: Assessing creativity is inherently subjective, as it involves evaluating the originality, novelty, and effectiveness of solutions, qualities that are difficult to quantify objectively in multiple-choice formats.
Restrictive Format: Multiple-choice questions typically have a single correct answer, limiting opportunities for students to showcase diverse approaches or innovative solutions to problems.
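To make the guessing concern concrete, classical test theory offers a standard correction-for-guessing formula, corrected = R - W/(k - 1), where R is the number of right answers, W the number of wrong answers, and k the number of options per item. The sketch below assumes four options per item purely for illustration; the actual number of options in ACE items is not stated here.

```python
def corrected_score(right: float, wrong: float, options_per_item: int = 4) -> float:
    """Classical correction for guessing: penalize each wrong answer by 1/(k - 1)."""
    return right - wrong / (options_per_item - 1)

# A student blind-guessing all 21 items with 4 options each is expected to get
# about 21 * 0.25 = 5.25 right; the correction pulls that back to roughly zero.
expected_right = 21 * 0.25
expected_wrong = 21 - expected_right
print(corrected_score(expected_right, expected_wrong))  # -> 0.0
```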
How can real-world scenarios be better incorporated into assessments like ACE to enhance practical application?
Integrating real-world scenarios into assessments like ACE can provide contextually relevant challenges that mirror authentic problem-solving situations encountered in professional settings.
Here are some strategies to incorporate real-world scenarios effectively:
Case Studies: Present students with case studies related to industry-specific problems where they must apply computational thinking concepts to devise solutions.
Project-Based Tasks: Design tasks that simulate real projects requiring students to analyze data sets, develop algorithms, or create software applications based on specific requirements.
Simulations: Use interactive simulations or virtual environments where students navigate complex systems by applying programming logic and problem-solving skills.
Collaborative Challenges: Create collaborative challenges where students work together on interdisciplinary projects involving coding tasks integrated with other subject areas such as science or engineering.
Industry Partnerships: Collaborate with industry partners who can provide authentic problems faced by professionals working in tech-related fields for students to solve using computational thinking principles.
By incorporating these elements into assessments like ACE, students will gain hands-on experience applying their computational thinking skills within realistic contexts, preparing them more effectively for future career opportunities in technology-driven industries.