
Experiential Learning

To really improve business skills, employees must do. Without practice, a new skill will never truly be learned. Experiential learning activities can help bridge the gap between cursory informational training and training that actually sticks!

25 Jan

5 Tips for Keeping Virtual Learners Engaged

Posted by HRDQ-U Webinars in Experiential Learning, Learning, Virtual Learning

By Keith Keating

“Teachers affect eternity; no one can tell where their influence stops.” – Henry Adams

In early 2020, much of the world shifted from in-person to virtual overnight; some struggled more than others. A silent agreement seemed to exist that we were in the situation together and needed to stabilize. But as time passed, stability was no longer sufficient. To thrive in this new environment, we had to relearn much of what we knew and how we approached our work. Learning was not immune to the change and seemingly accelerated in its evolution. The classroom and desk, iconic images representing education, were suddenly and without warning replaced with headsets and laptops. The stable environments where we learned, and the time and space allotted to our growth and development, dissipated, but our need to learn continued. As learning became a byproduct of the new virtual world, the feedback and messaging became “virtual learning does not work.” The challenge, however, is not with virtual learning; it is with those of us who design, develop, and deliver it. We are the ones who determine whether or not virtual learning is successful. And it can be successful.

One of the biggest reasons virtual environments fail is a lack of engagement. In the virtual environment, we tend to drop the activities we did in person, possibly due to time restrictions or a lack of online design experience. But here is the reality: engagement and interactivity are the most important aspects of a successful virtual learning environment. We cannot deliver asynchronously to our learners, especially now. When we are together in a live virtual environment, the content and the opportunity to learn need to be synchronous. The session needs to be engaging and interactive, and it needs to have energy. Our learners are surrounded by distractions, probably more now than ever before. There is a lot going on in their lives, and we are fighting for their attention. But attention isn't the only reason engagement is important in a virtual setting; the other reason is the social component. In the virtual learning environment, we need to leverage the opportunity to create social connections for our students. Our session might be their only time that day to connect with someone else. They may be alone most of the time; some may not have friends they can see or family around. For some of us, a virtual learning environment is our only connection. Take advantage of that, and create opportunities to build those social connections.

Icebreakers and Energizers

The tone and success of your session are set from the very beginning. Establishing a connection, building trust, and creating an engaging environment at the opening of the session can set us up for success and prime learners to be involved. It's equally important to keep that energy level consistent throughout your session. Icebreakers are a great tool at the beginning of a session, and energizers are a great tool throughout it, especially after breaks or lunch, to pull learners back in and get them excited and energized to move forward. Icebreakers and energizers can be used interchangeably; the only real difference is that we call the activity at the beginning of the session an "icebreaker" because it breaks the ice and gets learners connected, while energizers are the activities we use throughout the session to keep learners engaged, give their brains a rest from content, and keep the energy level high. Here are some of my favorites:

5 Icebreaker & Energizer Ideas

The Object of Me

Give learners one or two minutes to quickly find an item, wherever they are, that best represents them. Pick someone by name to share their object, then pick someone else to guess what that object says about them. Continue until everyone has shared. This is a great activity for learning more about your colleagues or fellow learners. Here's an example of how everyone can participate regardless of where they're located. Recently, someone asked me to participate while I wasn't at home; I was in a new environment and had just finished breakfast. I didn't know what could possibly represent me. Then I looked down at my plate and saw a little piece of avocado left, and I realized I could use that. It represents me: it's healthy, it's organic, and it has a little bit of good fat.

Remember Me

Find an image online, perhaps one that is intricate and filled with many details. Show the image to the learners for 15 to 20 seconds and ask everyone to try to memorize as much of it as they can. Then take the photo away and let everybody share what they saw. It's a simple, fun activity because each person always sees something a little different, and it helps us learn a bit more about the way our learners think.

Optical Illusion

Find a fun optical illusion picture and ask learners to describe what they see. But remember – there are no wrong answers in optical illusion photos. If that is what they are seeing, that is their reality.

Virtual Rock, Paper, Scissors

Another fun one I have been using, with both adults and children, is Rock, Paper, Scissors. Take a few minutes, get everybody on camera, and run a Rock, Paper, Scissors tournament. It's a really fun, old-fashioned way to get people engaged. If you have a large group, you can use breakout rooms for preliminary rounds, then bring the winners back for a final tournament in the main room. Those who are still "playing" keep their cameras on, and those who are not turn theirs off, which makes it easy to see who is playing.

Game Play

Games can be a great way to keep learners engaged and give them a mental break from content to reenergize. Here are some options:

  • Search online for Trivia games
  • Play 2 Truths and a Lie
  • Play charades on camera
  • Pick a popular song and find a version on YouTube that is played backwards and have learners guess

Best Practices

Ask yourself this question: when you are in a virtual setting, how long does it take before you start to zone out? Maybe you pick up your phone, maybe you switch screens; how many minutes pass before that happens? For me, it's about five minutes. That's my attention span when I'm not being engaged or my attention isn't required. Whatever your answer is, it should be the baseline for how often we build in interactions. If we are getting bored at five or seven minutes, most likely so are our participants. Practicing empathy in the virtual environment is critical for success.

3 to 5 minutes. That is the best practice: create an interaction every 3 to 5 minutes. It does not need to be a complicated activity; it can be a simple call for attention back to the screen. For this to work, though, level-set with your students at the very beginning of the session and let them know it's going to be interactive. For example, I use a rules-of-the-road slide that covers my expectations for conduct in the session. I clearly say that the class requires participation and that learners may be called upon. By letting them know upfront, I set the stage so that I can call on them, preferably by name, to keep them engaged. And that's not done in a rude or negative way; using their name is another tip to make them feel recognized and validated.

Take into consideration the number of people in your session. The larger the group, the more challenging activities become, as more time will be required. If a session has more than 20 people, consider using breakout rooms for longer energizers or icebreakers. A group of more than 20 needs to be designed for much differently than a group of 5, 10, or 15 people. For example, open-microphone discussions become more challenging the more people you have, and even chat can be harder to manage with more than 20 people. So be deliberate and intentional when designing, and consider the size of your audience.

In addition to icebreakers and energizers, we can use nonverbal communication to create engagement opportunities in the virtual classroom. It could be as simple as asking a question and having students type the answer in chat or click the agree/disagree or thumbs up/thumbs down button. Nonverbal communication tools are available regardless of the platform being used and are a very powerful way for students to continue engaging with us. When we talk about engagement every 3 to 5 minutes, it does not have to be a full-blown activity that requires you to stop teaching; it could be as simple as clicking the agree or disagree button. The idea is to create some sort of engagement or interaction that draws learners back into the virtual environment so that we can get their attention.

Remember to consider your learners in everything you do. The most important thing we need in this world right now is empathy, not just for our learners but for every human being. Designing and delivering with your learners in mind will help keep them engaged and lead to successful virtual learning experiences.

This blog post comes from the webinar Optimizing Virtual Learning.

25 Jun

Turning Negative Results into Positive Change

Posted by HRDQ-U Webinars in Experiential Learning, Human Resource Training, Supervisory Skills

By Patti P. Phillips, Ph.D.

Chief learning officers often must evaluate their key learning programs, collecting several types of data—reaction, learning, application, impact, intangibles, and maybe even return on investment. What if the evaluation produces disappointing results? Suppose application and impact were less than desired, and the ROI calculation was negative. This prospect causes some learning executives to steer clear of this level of accountability altogether.

For some CLOs, negative results are the ultimate fear. Immediately, they begin to think, “Will this reflect unfavorably on me? On the program? On the function? Will budgets disappear? Will support diminish?” These are all appropriate questions, but most of these fears are unfounded. In fact, negative results reveal the potential to improve programs. Here are 11 ways to address negative results and use them to facilitate positive transformations:

Recognize the Power of a Negative Study

When the study results are negative, there is always an abundance of data indicating what went wrong. Was it an adverse reaction? Was there a lack of learning? Was there a failure to implement or apply what was learned? Did major barriers prevent success? Or was there a misalignment in the beginning? These are legitimate questions about lack of success, and the answers are always obtained in a comprehensive evaluation study.

Look for Red Flags

Indications of problems often appear in the first stages—after reaction and learning data have been collected. Many signals can provide insight into the program's success or lack of it, such as participants perceiving that the program is not relevant to their jobs. Perhaps they would not recommend it to others or do not intend to use it on the job. These responses can indicate a lack of utilization, which usually translates into negative results. Uncovering this information requires analyzing data beyond overall satisfaction with the program, the instructor, and the learning environment. While important, these ratings may not reveal the value of the content and its potential use. Also, if an evaluation study is conducted on a program as it is being implemented, low reaction and learning ratings may signal the need for adjustments before any additional evaluation is conducted.

Lower Outcome Expectations

When there is a signal that the study may be negative, or it appears that there could be a danger of less-than-desired success, the expectations of the outcome should be lowered. The “under-promise and over-deliver” approach is best applied here. Containing your enthusiasm for the results early in the process is important. This is not to suggest that a gloom-and-doom approach throughout the study is appropriate, but that expectations should be managed and kept on the low side.

Look for Data Everywhere

Evaluators are challenged to uncover all the data connected to the program—both positive and negative. To that end, it is critical to look everywhere for data that shows value (or the lack of it). This thorough approach will ensure that nothing is left undiscovered—the fear harbored by many individuals when facing negative results.

Never Alter the Standards

When the results are less than desired, it is tempting to lower the standards—to change the assumptions about collecting, processing, analyzing and reporting the data. This is not a time to change the standards. Changing the standards to make the data more positive renders the study virtually worthless. Without standards, there is no credibility.

Remain Objective Throughout

Ideally, the evaluator should be completely objective or independent of the program. This objectivity provides an arms-length evaluation of its success. It is important not only to enter the project from an objective standpoint but also to remain objective throughout the process. Never become an advocate for or against it. This helps alleviate the concern that the results may be biased.

Prepare the Team for the Bad News

As red flags arise and expectations are lowered, it appears that a less-than-desired outcome will be realized. It is best to prepare the team for this bad news early in the process. Part of the preparation is to make sure that they don’t reveal or discuss the outcome of the program with others. Even when early results are positive, it is best to keep the data confidential until all are collected. Also, when it appears that the results are going to be negative, an early meeting will help develop a strategy to deal with the outcome. This preparation may address how the data will be communicated, the actions needed to improve the program, and, of course, explanations as to what caused the lack of success.

Consider Different Scenarios

Standards connected with the ROI methodology are conservative for a reason: the conservative approach adds credibility and, consequently, buy-in for the data and the results. However, it may sometimes be helpful to examine what the result would be if the conservative standards were not used. Other scenarios may actually show positive results. In this case, the standards are not changed, but the presentation shows how different the data would be if other assumptions were made. This approach allows the audience to see just how conservative the standards are. For example, on the cost side, including all costs sometimes drives a project to a negative ROI. If other assumptions could be made about the costs, the value would change, and a different ROI calculation might result. On the benefit side, a lack of data from a particular group sometimes drives a study into negative territory because of the "no data, no improvement" standard. However, another assumption could be made about the missing data to calculate an alternative ROI. It is important to offer these other scenarios to educate the audience about the value of what is obtained and to underscore the conservative approach. It should be clear that the standards are not changed and that comparisons with other studies are based on the standards in the original calculation.
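As a concrete sketch of this scenario approach (the dollar figures and the function below are illustrative assumptions, not from the article), the official conservative calculation can be shown side by side with an alternative-assumption calculation, without changing the standards themselves:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """ROI (%) = (net program benefits / fully loaded program costs) x 100."""
    return (benefits - costs) / costs * 100

# Conservative standard: all costs fully loaded, and the group that
# returned no data counted as zero improvement ("no data, no improvement").
conservative = roi_percent(benefits=90_000, costs=100_000)

# Alternative scenario: an explicit, clearly labeled estimate for the
# missing group. The standard itself is unchanged; this is shown only
# so the audience can see how conservative the official number is.
alternative = roi_percent(benefits=90_000 + 25_000, costs=100_000)

print(f"Conservative ROI: {conservative:.0f}%")  # negative under the standards
print(f"Alternative ROI:  {alternative:.0f}%")   # positive under the relaxed assumption
```

The point of presenting both numbers is exactly what the paragraph above describes: the reported result stays conservative, while the comparison educates the audience about the assumptions behind it.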

Find Out What Went Wrong

With disappointing results, the first question usually asked is, “What went wrong?” It is important to uncover the reasons for the lack of success. As the process unfolds, there is often an abundance of data to indicate what went wrong. The follow-up evaluation will contain specific questions about impediments and inhibitors. In addition, asking for suggestions for improvements often underscores how things could be changed to make a difference. Even when collecting enablers and enhancers, there may be clues as to what could be changed to make it much better. In most situations, there is little doubt as to what went wrong and what can be changed. In worst-case scenarios, if the program cannot be modified or enhanced to add value, it may mean that it should be discontinued.

Adjust the Story Line

When communicating data, negative results indicate that the storyline needs to change. Instead of saying, “Let’s celebrate—we’ve got great results for this program,” the story reads, “Now we have data that show how to make this program more successful.” The audience must understand that the lack of success may have existed previously, but no data were available to know what needed to be changed. Now, the data exist. In an odd sort of way, this becomes a positive spin on less-than-positive data.

Drive Improvement

Evaluation data are virtually useless unless used to improve processes. In a negative study, there are usually many items that could be changed to make it more successful. It is important that a commitment is secured to make needed adjustments so that the program will be successful in the future. Until those actions are approved and implemented, the work is not complete. In worst-case scenarios, if the program cannot be changed to add value, it should be terminated, and the important lessons should be communicated to others. This last step underscores that the comprehensive evaluation is used for process improvement and not for performance evaluation of the staff.

Negative study results do not have to be bad news. They contain data that can be used not only to explain what happened but also to adapt and improve in the future. It is important to consider the potential of a negative study and adjust expectations and strategies throughout the process so that the negative results are not a surprise. The worst-case situation is for negative data to surprise the key sponsor at the time of presentation.

Join our upcoming HRDQ-U webinar titled “What Caused It?: Connecting Programs to Results” on July 22, 2020 at 2pm ET/11am PT.

2 Jan

The CEO’s Perception of the Learning Investment

Posted by HRDQ-U Webinars in Experiential Learning, Human Resource Training, Supervisory Skills

By Patti P. Phillips, Ph.D., and Jack J. Phillips, Ph.D.

Have you ever asked top executives or a chief financial officer about the value they would like to see from talent development? How many discussions have you had about the value of learning with the C-Suite?

We have had many of those conversations routinely over the past 25 years, and we know clearly what they need. Their responses have been documented quite well, dating back to a major study that we conducted with ATD nearly a decade ago. That study, involving Fortune 500 CEOs, indicated that 96 percent of executives wanted to see a business connection to learning, making it their No. 1 desired data category; yet, at that time, only 8 percent of them had that type of data. Further, 74 percent of the executives wanted to see the ROI from learning investments, their No. 2 measure, but only 4 percent said they had it. Meanwhile, the No. 1 measure actually provided to executives by L&D was reaction data, which only 28 percent wanted to see.

This study, first published in our book with ATD, Measuring for Success, was a wake-up call for many CLOs and others involved in talent development.1 Collectively, they said “we must do better.” The good news is that was 10 years ago. We are well on the way.

More recent data from the Business Intelligence Council of Chief Learning Officer magazine show that improvements are happening. When asked how the learning organization shows its contribution to the broader enterprise, 36 percent of respondents said they use business data, and 22 percent said they use ROI. Another 49.6 percent said they planned to implement ROI in the future. In total, 71.2 percent of respondents said they were either using ROI or planning to implement it. We think that is a little ambitious, although it came from 335 CLOs.

Fast forward to 2017, and we noticed a major benchmarking report from Training magazine. This report examined the organizations that are "Hall of Famers" in the magazine's awards system: the organizations consistently at the top of its Top 125 list of best learning organizations. These "Hall of Famers" are very important for benchmarking because others want to know what makes them so successful. The opening statement of the report reads: "Ultimately, the success of any program is based on whether it improves business results." (Training Top 10 Hall of Fame, May 2017)

These top learning organizations advise that you must connect learning to the business to capture executive attention. This benchmarking report is generated every year. In the following year, 2018, this report contained three best practice case studies: onboarding, an ROI calculation on a follow-up basis, and an ROI forecast. You can see that we are making progress to meet the request from top executives.

What can you do if you are not showing the business value of learning? You can take five very important steps:

  1. Be proactive. Don’t wait for the request to show business value. Start delivering business value on a major program now. Take charge and drive the evaluation initiative. Keep ROI on your agenda, not your executive’s agenda.
  2. Be selective on which programs you evaluate at the business impact and ROI levels. Use ROI for programs that are very expensive, strategic, important to organizations, and yes, those that attract executive attention. That will usually be about 10-20 percent of the programs each year at the impact level and approximately 5-10 percent at the ROI level.
  3. Change the thinking across the complete learning cycle. Start with "why" for your programs: connect them to business measures at the beginning. Then make sure you have the right solution. Next, expect success, with very specific objectives through to impact, and share them with the team. With this approach, you are designing for the results you need. With the business data clearly defined at the beginning, you will have the desired results at the end.
  4. Share the joy. Make sure that the entire team is involved in designing, developing, and implementing learning and development to deliver impact. Designers, developers, facilitators, participants, and managers of participants are critical to achieving impact success. Each stakeholder has a role, not just the evaluator. This approach makes a world of difference.
  5. Think about all the benefits. While business data will convince executives to continue to fund your programs, connecting to the business will help you build partnerships with business leaders, obtain needed support to make programs more effective, and secure the commitment you need to be successful.

Collectively, the team can make a difference.

Reference

  1. Phillips, Jack J., and Patti P. Phillips. Measuring for Success. Alexandria, VA: ASTD Press, 2009.
23 Dec

Why Should You Measure the ROI of Your Program?

Posted by HRDQ-U Webinars in Experiential Learning, Human Resource Training, Performance Management, Supervisory Skills


By Patti P. Phillips, Ph.D. and Jack J. Phillips, Ph.D.

There are five reasons why you should consider measuring the impact and ROI of your programs. However, there are situations where maybe you shouldn’t measure the ROI—but we will come back to that. First, let’s review what is involved when measuring ROI.

Essentially, an ROI evaluation requires evaluating the success of a learning program on five levels of outcomes. The first level is the reaction to the program in terms of relevance, importance, and intent to use. The second level is learning the knowledge and skills required to make the program successful. The third level, application, is tracking how the content has been used and how much success participants have had with its use, along with any barriers and enablers.

The fourth level is impact—the connection of the program to key business measures. At this outcome level, steps must be taken to isolate the effects of the program on those measures and then convert the business measure to money and calculate the ROI. The fifth level, ROI, is the comparison of the monetary benefits of the program to the costs of the program.
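The level-five calculation itself is simple arithmetic. As a minimal sketch (the function names and dollar figures are illustrative assumptions, not from the article), the standard ROI formula compares net monetary benefits to fully loaded program costs, alongside the related benefit-cost ratio:

```python
def benefit_cost_ratio(benefits: float, costs: float) -> float:
    """BCR: total monetary benefits divided by fully loaded program costs."""
    return benefits / costs

def roi_percent(benefits: float, costs: float) -> float:
    """ROI (%): net benefits (benefits minus costs) over costs, times 100."""
    return (benefits - costs) / costs * 100

# Illustrative figures: a program costing $80,000 whose isolated,
# money-converted business impact is $120,000.
print(benefit_cost_ratio(120_000, 80_000))  # → 1.5
print(roi_percent(120_000, 80_000))         # → 50.0
```

Note that the hard work happens before this arithmetic: isolating the program's effect on the business measures and converting them to money, as described above.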

Now that you know what’s involved in an ROI study, let’s discuss why you should measure the ROI of your programs:

  1. To make the programs better. This has always been our preferred reason for pursuing impact and ROI analysis. In the past, most evaluation efforts have been aimed at level 1 (reaction) and level 2 (learning). However, the executives who provide the support and funds would prefer to see the impact of the program, and impact won't develop unless there is application. The challenge is to improve results at levels 3 and 4 (application and impact). An ROI evaluation will indicate whether a program is successful. If it's not, the evaluation will show what should be done to make the program better. If it is working, the analysis will give insight into how to make the program even more successful.
  2. To satisfy key stakeholders. Programs have many stakeholders, but none is more important than those who sponsor, support, or provide the budget for programs. These individuals want to see the business connections. Even in governments and nonprofits, alignment to business measures is still something stakeholders want to see. Dozens of studies have shown this is needed, and it is not debatable. Just ask your top executives.
  3. To please your team. When we first used this methodology many years ago, we were involved in managing learning and development teams. We wanted to know that our work was connected to the business. There is a sense of satisfaction in knowing that you are making a difference, and that the difference is not just having people attend programs and give good ratings. It is the realization that participants are using what they have learned on the job and are having an impact in their work and in the organization. That's a great feeling.
  4. To maintain or enhance your budget. This is the No. 1 reason we see ROI implemented now. Globally, economies are in a state of uncertainty. When that happens, organizations must be lean and agile to sustain whatever comes next. To do so, budgets are trimmed, eliminating anything perceived as a cost or otherwise not absolutely necessary. Unfortunately, this almost always affects the L&D budget. One of the best ways to convince an executive that your program is not a cost to be cut but an investment is to determine the return on investment, using the same calculation a chief financial officer would use.
  5. To change the perception of the L&D function. Sometimes learning and talent development programs are considered a "necessary evil" or merely compulsory. Many executives agree with this comment: "We know we have to fund training for compliance and to teach people how to do their jobs, but beyond that, we don't like to provide courses unless there's extra money." This is not a positive statement. We have hundreds of studies showing that the highest ROIs come from soft skills programs, such as team building and leadership development, and now is the time to invest heavily in these areas. But it's difficult to invest more unless executives see the value. Routinely showing the value and the connection to the business for major programs is the best way to change the perception of L&D from a necessary process to a business driver. This will lead to more support, better partnerships, and yes, a seat at the table.

So there you have it, five reasons to measure ROI. But, as we stated in the beginning, maybe you shouldn’t measure your program. This process is not intended for every program. Only programs that are important to the organization, expensive, strategic, or those that attract the interest of top executives should be evaluated at this level.

Our benchmarking studies show that this should not be more than 5 to 10 percent of programs each year. Consider the advantages of and reasons to evaluate a program, and then decide—is it worth it? For more information, email Info@roiinstitute.net.

20 Jul

Does Gamification Actually Work? Yes, and Here’s Why

Posted by HRDQ-U Webinars in Experiential Learning, Human Resource Training, Leadership, Team Building Exercises

By Sharon Boller

I’m a big Twitter fan; I use it to curate content and pay attention to people whose opinions I care about and the trends I’m interested in. I believe in hashtags (#GBL, #gamification, #UX, #ATD, #DevLearn) as they help me easily curate content and monitor those trends.

Last week, on the #ATD "back channel" (the conversation created when people diligently use hashtags to share information on a particular topic, person, or event), the topic of learning myths was hot and heavy, largely due to Clark Quinn's newly released (and excellent) book on the same topic.

Though gamification is not in Clark’s book, gamification came up on the #ATD back channel as part of a discussion on learning myths… and people were suddenly questioning whether it is a myth: “it doesn’t really work.”

Whoa.

As someone who has been immersed in learning games and the gamification of learning – whose products and custom solutions have earned Brandon Hall awards for the results they produce, and who wrote a book on how to design effective learning games – I was understandably concerned by this line of conversation.

A Gamification Myth

In that Twitter conversation, it emerged that some of the “myth” comments about gamification stemmed from a single study that one astute Twitterer, Shannon Tipton, called “as full of holes as Swiss cheese.” The authors used their proprietary platform to conduct a controlled, randomized study comparing two lessons, but provided almost no information about the lessons themselves beyond this: 1) both were supposed to teach students how to divide fractions, and 2) one was called the “basketball divide fractions” lesson while the other was simply called “divide fractions.”

The study suggests (but does not directly say) that one lesson was gamified in some fashion, but we aren’t told how. The other lesson was not gamified. The results overwhelmingly favored the non-gamified lesson: kids learned faster and performed better on subsequent tests that required them to divide fractions. The gamified approach resulted in lower test scores, even though kids chose to spend longer in the lesson. The authors indicate this extra time was 100% voluntary; the assumption they seem to want readers to make is that students simply found one lesson more fun than the other. From that single study, which omits tons of relevant information, some folks conclude, “gamification of instruction doesn’t work.”

What’s Missing From this Study?

Let’s think about what we don’t know here:

  • The quality of the game/gamification design: How many game elements did they use? Were they used optimally? Was the gamified solution so complex that students expended too much brain power figuring out rules and too little learning fractions? Or did it motivate the wrong behavior? (e.g., rewarding progress rather than mastery: learners earn points simply by completing a problem, and getting the problem correct is not a criterion for earning a reward. Learners could also have been rewarded for speed, which could push them to guess rather than make a real effort to solve for the right answer.)
  • The equivalency of the instructional design across the two solutions: Were both lessons, in fact, teaching identical principles and using some of the same techniques? Did each have solid instructional design? (Example: did they both employ worked examples, which reduce cognitive load and increase learning efficiency?)

What the study illustrated was that the “basketball divide fractions” lesson was less effective at helping students perform well on a fractions test than the lesson without those gamified elements. If I were a game designer on that lesson, I would want to go back and figure out what I’d done wrong in gamifying it. I would not be ready to assume that gamification itself was the problem.

How to Effectively Use Games & Gamification

Bad gamification or game design yields bad results, just as bad instructional design does. That doesn’t mean gamification or games don’t work. So how do you use games and gamification effectively? Here are some principles that will help you maximize efficacy:

1. Keep game complexity simple, particularly when you are using a game to support relatively short lessons.
Do not over-complicate a learning game or gamified lesson with lots of game mechanics (aka rules) or game elements – things such as rewards, scoring, chance, strategy, resources, cooperation, competition, aesthetics, theme, and story. Piling these on increases the cognitive load on your learners and makes it harder, not easier, to learn. (See cognitive load theory, first identified by John Sweller in 1988.)

2. Reward players for performance, not completion.
If you are going to award points within the game, those points need to come from demonstrating knowledge or skill, not just progress. (See Karl Kapp’s book on the Gamification of Learning and Instruction. There’s an entire chapter on research studies and another on how to use rewards effectively – and what not to do.)
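To make the distinction concrete, here is a minimal, hypothetical scoring rule in Python. The function name and point values are illustrative, not from any particular platform; the point is that completion alone earns nothing, and retries decay the reward so fast guessing isn’t favored over careful work:

```python
def score_attempt(correct: bool, attempts: int, base_points: int = 100) -> int:
    """Award points only for demonstrated mastery, never for mere completion.

    Points decay with each retry so that guessing quickly is not
    rewarded over working carefully; a small floor still credits
    eventual mastery.
    """
    if not correct:
        return 0  # progress without mastery earns nothing
    return max(base_points - 25 * (attempts - 1), 10)

score_attempt(correct=True, attempts=1)   # 100 – first-try mastery
score_attempt(correct=True, attempts=3)   # 50  – mastery after retries
score_attempt(correct=False, attempts=1)  # 0   – completion alone
```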

3. Be cautious with leaderboards.
Leaderboards can be fun, but be sure you focus on more than who is on top. Consider letting people see more about themselves rather than just what other players are doing: rank relative to others, improvements over time, personal bests, and so on. We also like being part of a team, so consider team-based comparisons (e.g., by location or role) instead of just head-to-head individual comparisons.
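Both ideas – personal trajectory and team-based comparison – are easy to sketch. The data and function names below are hypothetical, just to show the shape of the design:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical score log: (player, team, score) per game session.
sessions = [
    ("ana", "east", 60), ("ana", "east", 85),
    ("ben", "west", 90), ("ben", "west", 70),
    ("cho", "east", 75),
]

def personal_best(player: str) -> int:
    """Show learners their own trajectory, not just their rank."""
    return max(s for p, _, s in sessions if p == player)

def team_leaderboard() -> list:
    """Compare team averages (e.g., by location or role)
    instead of head-to-head individual rankings."""
    teams = defaultdict(list)
    for _, team, score in sessions:
        teams[team].append(score)
    return sorted(((mean(v), k) for k, v in teams.items()), reverse=True)
```

Surfacing `personal_best` alongside a team view keeps learners engaged with their own improvement even when they are nowhere near the top of the individual rankings.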

4. As much as possible, align the game element choices you use to the learner’s actual job context.
In other words, avoid competition in a game if the job requires cooperation/collaboration. If you want to incorporate “chance” into a game (and this is a handy game element for balancing out gameplay), make sure you use it appropriately. This means matching the types of chance a player encounters in the game with the way chance occurs in the real world.

For example, if you are teaching project management, a great use of chance is to have it come up as a factor that requires the player to consider alternate strategies: “One of your teammates just called in sick. Identify two other viable strategies for meeting today’s deadline.” Or: “Your client just called to say she’s going to miss her deadline. She asks, ‘How can we still hit the ultimate milestone? I don’t want the project to slip overall.’ What do you do?” (Play to Learn, the book I co-authored with Karl Kapp, outlines how to combine learning design and game design to maximize the impact and efficacy of games.)

5. Make the in-game goal align with the learning goal in a reasonable way that “makes sense” for the learners who will play your game or complete your gamified lesson.
Again, mirror job context. If the real-world scenario is to achieve certain quality ratings for a medical facility, for example, mirror that goal within the game. This reduces cognitive load on your learners as they don’t have to distinguish what’s true in the game from what’s true in their job context. This blog post on playtesting features a game where we worked hard (via multiple iterations) to get our game goal and rewards just right to maximize learning.

6. Stop thinking you have to make the game super “fun.”
Fun doesn’t really matter if the focus is on optimizing learning outcomes. What matters is relevance and a game design that reinforces and enhances the instructional design. A game only needs to be “fun enough” to keep your player/learner involved. You also need to rethink what “fun” is. Kevin Werbach of the Wharton School does a great job of breaking down what people actually find fun, and it includes things such as strategizing, problem-solving, and collaborating – all skills required in many jobs.

Additional Resources
Want more guidelines on effectively using games or gamification in instruction? Check out these resources:

  • Implementation tips for gamification from Karl Kapp: http://karlkapp.com/implementing-gamification-consider-these-tips/
  • Design tips for digital learning games from me!: https://www.slideshare.net/SharonBoller/digital-learning-game-design-lessons-from-the-trenches-30225814?qid=a34842bb-1864-4a3c-a66b-28918fa0ff11&v=&b=&from_search=1
  • Infographic on the efficacy of game-based learning: http://www.theknowledgeguru.com/game-based-learning-infographic/

HRDQ-U and Sharon Boller recorded a FREE webinar you can watch here!


HRDQ-U
827 Lincoln Ave, Suite B-10
West Chester, PA 19380
Phone: 800-633-4533
Email: info@HRDQU.com

© Copyright 2020 by HRDQ-U. All Rights Reserved.



Home About Us Upcoming Webinars On-Demand Webinars Blog Contact Us