LDTC 620
Next Generation Design: Emerging Technology, Gamification, and AI in Learning Design
Technology rubrics such as Western University’s Rubric for E-Learning Tool Evaluation provide critical evaluation frameworks for technology selection in instructional design (Anstey & Watson, 2018). However, these rubrics can be somewhat complex and may be better suited to larger organizations. For this assignment, I developed a comprehensive but easy-to-use rubric called TOOLCASE, designed for SMBs, influencers, and smaller organizations or individual entities. This portfolio artifact showcases my critical thinking skills and my understanding of key aspects of selecting relevant educational technology, such as ease of use and accessibility (ISTE, 2024).
I created a rubric called TOOLCASE that is designed for small and medium-sized businesses (SMBs) and influencers rather than large institutions. This rubric is inspired by my work experience as a web developer, content marketing consultant, and solopreneur. Rubrics such as Western University’s Rubric for E-Learning Tool Evaluation are certainly comprehensive, but they might be a bit cumbersome for a small business or web entrepreneur (Anstey & Watson, 2018). The TOOLCASE rubric is designed to be easy to use and quick to complete for busy small entities.
I came up with the acronym TOOLCASE by starting with “TOOL” and then finding a word that would fit, one that would also incorporate the all-important metric of cost! GPT was used sparingly to help me come up with categories the few times I needed inspiration. The end result is a rubric that could potentially be memorized and used to evaluate tools found online quickly and mentally, without having to write everything down. TOOLCASE stands for:
T – Technology
O – Outcomes
O – Originality
L – Longevity
C – Cost
A – Accessibility
S – Scalability
E – Ease of Use
I also used GPT to help develop the scoring ranges used when the completed rubric is totaled.
TOOLCASE uses a simple Likert scale with the same ratings for each dimension. The ratings are worded in a friendly, informal manner as follows:
1 – Nope
2 – Iffy
3 – Maybe
4 – Likely
5 – Yes
While this is a scale, the ratings are still somewhat subjective. I provide some example rationale below, but I did not put strict descriptors in the actual spreadsheet, in order to keep it flexible for individual use cases. Here are the criteria and example rationale, which can be tailored for specific needs:
Technology refers to the product itself and any infrastructure required to support it. It also refers to whether the technology actually works as promised. Much of the time, SMBs will use cloud-based software and will not need to set up additional infrastructure. On occasion, an SMB may decide to evaluate a tool that requires a web host and some technical knowledge, such as installing a free learning management system on WordPress rather than using a commercial solution. Additionally, some apps work only on desktop or only on smartphones. This question can also cover any potential concerns about technical support and system administration needs.
Question: Are we able to implement and support the technology?
1 – Nope: Reasons for this might include a lack of technical expertise or the wrong platform. Perhaps the technology is terrible.
2 – Iffy: Perhaps the setup is doable, but it will be tough to implement. Or, the technology does not work well.
3 – Maybe: Not sure; perhaps more info is needed. Or, the tech performance is average.
4 – Likely: This seems doable. Technology performance looks good.
5 – Yes: Yes, this is doable, and the technology works well.
Outcomes is short for learning outcomes. Does the technology improve the course and help learners achieve the desired learning outcomes? It may do so directly or indirectly. For example, a content creation tool can support learning outcomes indirectly by facilitating the development of course materials, while gamified software can support learning directly through fun and engaging technology.
Question: Does the tool support learning outcomes effectively?
1 – Nope: If I’m being honest, I’m buying this for myself, not for my courses.
2 – Iffy: Maybe I can use this for my courses, but it’s not really that great for course needs.
3 – Maybe: This tool may help, but I’m not sure. Or, it can help, but other tools might be better.
4 – Likely: I think this tool can help make courses better.
5 – Yes: Absolutely, this will help improve the course(s) and subsequently the learning outcomes.
Originality asks whether we already have a tool that does the same thing. Is it necessary to have this tool, or can we do what we need with existing technology? For example, if we already have a tool that generates AI images, we do not need another one unless the new tool offers different functionality (such as inpainting). Sometimes a backup tool might be desired (in case the existing tool has issues), and this can be noted in the analysis.
Question: Does the tool introduce new features or methods that enhance the learning experience?
1 – Nope: I already have solid, reliable tools that do this.
2 – Iffy: I have a tool that can do this, but this new tool might be a good backup.
3 – Maybe: There is some duplication but some additional features that may tip the scale.
4 – Likely: There is some duplication but enough additional features that make it worthwhile.
5 – Yes: The tool brings entirely new functionality.
Longevity is really important when we are dealing with tech startups, especially in the AI world. This question becomes critical if the tool being implemented is an infrastructure tool that will contain key content or communications for the course(s) in question. LMS tools, CDN repositories, and community tools all need to remain reliable and functional. On the other hand, the company behind a content creation tool can go out of business without impacting the student experience. While it might be inconvenient to replace a content creation tool, it won’t be catastrophic.
Question: Is the company likely to remain operational long-term?
1 – Nope: There are red flags that cannot be ignored (such as reports of major outages that are not addressed or acknowledged).
2 – Iffy: The difference between “iffy” and “nope” depends on risk tolerance. A company that is a bootstrapped startup with one founder is high risk, but may be “iffy” instead of a “nope” if the founder has excellent credentials and a previous positive track record. A good team could push it into “maybe.”
3 – Maybe: This is a start-up but has a good team and backing.
4 – Likely: Company is either established or has strong VC funding.
5 – Yes: This is an established or well-known company that seems committed to the product. (A large company does not always guarantee longevity; sometimes large companies drop popular products that do not meet their profit thresholds.)
Cost is relative and depends on the size of the organization and its cash flow. A sole proprietor might find $100 to be a stretch, whereas this may be a drop in the bucket for a larger company. Cost includes not just the initial product purchase but also subscriptions, maintenance, and in-house support.
Question: Can we afford the tool?
1 – Nope: Out of our price range. (For some, this might be enough to mark the tool as a “no.”)
2 – Iffy: Possibly, but it’s a stretch.
3 – Maybe: It’s doable but may not be worth it.
4 – Likely: It seems like a good deal.
5 – Yes: Yes, this is a deal we cannot pass up!
Accessibility ensures that students can access course content despite potential limitations such as vision or hearing loss. Accessibility in its broadest definition can also include language access and whether users can access the technology at all (e.g., do they need a desktop computer, or will a phone do?).
Question: Is the tool accessible? Does it support accessibility?
1 – Nope: This tool is inaccessible for reasons such as high barrier to access or lack of options.
2 – Iffy: This tool or its output might be inaccessible for some students.
3 – Maybe: The tool may meet accessibility needs.
4 – Likely: The tool has strong accessibility features.
5 – Yes: The tool or its output is fully accessible.
Scalability refers to whether the tool can grow as learning needs grow. Will the LMS be able to support more students if the business becomes more successful? Will the organization have the ability to support the technology? Does the tool have the option to add more team members as the business grows? Scalability can also relate to longevity, as a startup may not be able to scale up services the way a company like Microsoft can.
Question: Can the tool grow with organizational needs?
1 – Nope: The tool has no options for growth.
2 – Iffy: The tool is not yet mature; it promises room for growth, but that is uncertain.
3 – Maybe: The potential is there but we can’t fully bank on it.
4 – Likely: The tool appears to have the infrastructure and features we need to grow.
5 – Yes: The tool has the infrastructure and features we need to grow.
Ease of use can actually be relative. If the people who will use the tool are instructional designers who are well-versed in video editing, then a complex video editing tool can score high on this metric even if the rest of the team won’t find it easy. For students, however, we definitely want the tool to be intuitive and easy to use, even for non-technical people.
Question: Is the tool intuitive and easy to use for all stakeholders?
1 – Nope: The tool has a very confusing, terrible interface.
2 – Iffy: The tool is advanced or difficult to learn but maybe has some merit.
3 – Maybe: The interface may be challenging for some.
4 – Likely: The tool will likely be easy to use for the intended users.
5 – Yes: The tool is very easy to use.
Once the rubric has been filled out, the numbers can be added up for a total score. This presumes that all categories carry equal weight for a particular organization. Sometimes, certain metrics, such as cost, might be more important and can be considered first. Note: GPT helped create the total score ratings.
Total Score:
36-40: Excellent – The tool does well in all categories. Highly recommended for implementation.
28-35: Good – The tool generally meets expectations with minor areas for improvement. Consider for implementation.
20-27: Average – The tool meets some expectations but falls short in a few key areas. Consider for implementation but review alternatives.
12-19: Below Average – The tool does not meet many expectations. Further evaluation or reconsideration needed.
8-11: Poor – The tool fails to meet expectations across most categories. Not recommended for implementation.
Example Calculation
Here is an example calculation from GPT, with rubric evaluations as follows:
Technology: 4
Outcomes: 3
Originality: 5
Longevity: 4
Cost: 2
Accessibility: 3
Scalability: 4
Ease of Use: 3
Total Score Calculation:
4 + 3 + 5 + 4 + 2 + 3 + 4 + 3 = 28
Interpretation:
A total score of 28 falls within the “Good” category, indicating the tool generally meets expectations but could be improved in certain areas.
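As an optional illustration, here is a minimal Python sketch, not part of the original spreadsheet, that automates the same tally: it sums the eight ratings and maps the total to the score bands above. The function and variable names (toolcase_total, CRITERIA, BANDS) are hypothetical and chosen only for this example.

# Minimal illustrative sketch (hypothetical names): sum the eight TOOLCASE
# ratings and map the total to the score bands defined above.

CRITERIA = ["Technology", "Outcomes", "Originality", "Longevity",
            "Cost", "Accessibility", "Scalability", "Ease of Use"]

# (lowest total in the band, label), highest band first
BANDS = [(36, "Excellent"), (28, "Good"), (20, "Average"),
         (12, "Below Average"), (8, "Poor")]

def toolcase_total(ratings):
    """Return (total score, band label) for one tool evaluation."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError("Missing ratings for: " + ", ".join(missing))
    if any(not 1 <= ratings[c] <= 5 for c in CRITERIA):
        raise ValueError("Each rating must be on the 1-5 Likert scale.")
    total = sum(ratings[c] for c in CRITERIA)
    label = next(name for floor, name in BANDS if total >= floor)
    return total, label

# The example evaluation from the text totals 28, which falls in "Good".
example = {"Technology": 4, "Outcomes": 3, "Originality": 5, "Longevity": 4,
           "Cost": 2, "Accessibility": 3, "Scalability": 4, "Ease of Use": 3}
print(toolcase_total(example))  # (28, 'Good')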
Anstey, L. M., & Watson, G. P. L. (2018). Rubric for eLearning tool evaluation. Centre for Teaching and Learning, Western University. https://teaching.uwo.ca/pdf/elearning/Rubric-for-eLearning-Tool-Evaluation.pdf
ISTE. (2024). ISTE Standards: For Students. Retrieved September 28, 2025, from https://iste.org/standards/students
OpenAI. (2025). GPT-4o [Large language model]. https://www.openai.com/chatgpt
AI Usage: AI was used to help brainstorm categories to fit the TOOLCASE acronym, develop the total score ranges, and generate the example calculation.