We are keen supporters of Restart a Heart Day, which Ambulance Victoria promotes every year on October 16th. With COVID-19 impacting community events and opportunities to spread this message this year, Ambulance Victoria is adopting the concept of “Shocktober”. Even better! 31 days instead of just one. So, for a bit of design fun for us – and for anyone interested in the 3 simple steps to save a life – here is our microlearning take on how to #restartaheart. Feel free to share the link – the more people who know what to do, the better! And remember – if you are part of a group with an AED, make sure it is registered.
‘If the only tool you have is a hammer, everything looks like a nail’
(Maslow’s law of the instrument).
It is easy to say a hammer is not always the best tool for a job, yet people and organisations often try to use the same method to fix everything… often without considering other options.
Structured training courses and online modules are not the only means of organisational learning and strategic problem solving.
If we invest time in identifying root causes or requirements and then thoughtfully choose a solution (which may or may not include structured training, technology or tons of information as part of the answer), we can create a solution that relates strongly to the outcomes we want to see, in the most cost- and time-efficient way.
We also know that individuals learn in many ways, from many sources. We use on-the-job experience, practice, self-study, research, our social networks, YouTube videos and asking Google. Ideally, our learning solutions would take advantage of the way people are already learning, integrating directly into their day-to-day activities and work rather than being a stand-alone add-on that takes longer to access.
In the public safety sector we use large amounts of structured learning, both face-to-face and online. This is likely due to the number of skills and the amount of knowledge required for base-level proficiency, and to the level of compliance needed to meet safety requirements, legislation and national competencies.
Yet we are starting to see instances of technology and training being implemented in a targeted and thoughtful way to address specific or strategic problems. Some recent examples:
- A fire service regional office has been developing videos that members can access via a QR code on a piece of equipment, to revise how to operate it.
- A State Emergency Service corporate section has developed instructional videos supporting use of software for specific tasks that members need to complete.
- An ambulance service has changed its equipment supplier to eliminate the root cause of a user error, and used an online memo to update everyone, while another uses a closed social media platform to collaborate and improve understanding of the latest clinical pathway updates.
The best thing about many of the great ideas we hear about is that they do not need to be costly and they do not rely on expensive or prescriptive systems. People are finding ways to make it work with existing or free systems. These ideas also reduce the amount of time individuals must spend accessing the knowledge or skills development they need, can be applied across different areas or organisations, and can tie learning to measurable KPIs.
This approach to learning and training is considered by many to be part of creating a learning ecosystem. There is a lot of research and discussion about this concept at the moment, usually also linking to the 70:20:10 model and to online learning systems. If you want to do some further reading, we recommend you check out:
As technology transforms literacy practices, it also challenges what it means to have language, literacy and numeracy (LLN) needs. Digital literacy is becoming an essential skill for success in life and must also be considered.
The definition of digital literacy is still evolving; however, it is generally agreed that it is about having the skills you need to live, learn and work in a society where communication and access to information increasingly occur through digital technologies (WSU, 2019).
Those with poor digital literacy skills may struggle to access employment, be unable to progress in work roles, have poor uptake of educational opportunities and even be unable to complete daily living tasks.
Data shows that although Australia ranks quite highly at a global level with regard to technology use and skills, as many as 69% of Australians have a skill level in the bottom half of the possible range, with 25% unable to use a computer or complete basic testing using technology. Research also shows that being ‘digitally disadvantaged’ is not solely an issue of socio-economics, but can affect individuals from all sectors of society. There may be a number of reasons why individuals do not or cannot access technology and develop their digital literacy skills.
Government policy and changes to VET over the last few years have attempted to address LLN needs, including digital literacy skills. There is also literature and multiple frameworks to help us build a picture of what a digitally literate adult in Australia could look like to support trainers and assessors in this endeavour.
Building a picture of what a digitally literate person looks like
A review of the Australian Core Skills Framework (ACSF) recognised the key role of digital technology, with digital literacy and technology embedded in the documentation across the different levels and the three domains of communication (personal/community, workplace/employment and education/training).
However, having this content embedded within other skills makes it difficult and time consuming for a trainer/assessor to explicitly consider and review an individual’s digital literacy skill levels or give a descriptive picture of what a digitally literate person looks like, unlike other core skills identified. Additionally, it focuses heavily on use of digital technology and doesn’t fully address the wide variety and complexity of skills that allow individuals to function effectively in digital environments.
Moving beyond what is present in the ACSF to include further elements of digital capability, such as those developed by Jisc (2018), presents a more holistic picture of what a digitally literate person looks like and supports meaningful integration of digital literacy tasks into training and assessment.
These six elements present the concept of proficiency in information and communication technology as a core element of being digitally literate, whilst other skills overlap and build on this capability. Importantly it highlights that overarching it all is our digital identity and wellbeing.
When a summary of indicators and performance features relating to digital technologies and literacies is placed into a similar format to the ACSF core skills performance grids, you can see the increasing complexity of digital literacy skill examples in each core skill by performance level. By also including the support considerations from the performance variables, you can see the level of independence people should have. Building on this, overlaying the six elements of digital capability developed by Jisc further categorises the types of skills being used at different levels and in different areas.
Note: this method places digital literacy skills within the context of English as a language. It is quite possible for an individual to have much higher skills when working in a language other than English. These skills may then be more easily transferable to new contexts within Australia, so long as adequate language support is provided.
Here is an example page focusing on the core skill of Learning. A copy of the full matrix is available to download from here: Snapshot: Digitally Literate Learner
This process of overlaying Jisc’s elements with the ACSF digital literacy examples highlights the emphasis currently placed on ICT proficiency (functional skills) in the ACSF. Of the 89 examples sourced from within the ACSF, more than half related to functional skills in using information and communication technology. About a third related to critical literacy skills, mainly focusing on information literacy and data literacy. There were isolated examples for creative production, participation and development. There was no evidence of examples relating to digital identity and wellbeing (self-actualisation).
The ACSF does not fully address the wide variety and complexity of skills identified that support individuals to function effectively in digital environments. Are we still in a state of training infancy with regard to digital learning skills? While critical literacy skills are important for digital literacy, there is a much wider range of more contemporary uses for digital technologies currently occurring in workplace, education and community settings.
With consideration of the fact that the ACSF includes these domains of communication in the framework, there is opportunity to widen the scope of digital literacy skills and tasks identified within the framework. This may further encourage trainers/educators to consider the wider elements for digital literacy in learners. In particular, adding examples relating to creating digital content and skills relating to digital identity and safety would be of great benefit to learners.
This is an area for further investigation and work!
This month’s AITD issue of Training and Development Magazine focuses on learning transfer – something just as important as, if not more important than, learning design and training delivery!
The first article is entitled The Push and Pull of Knowledge and Training, by David King from Tribal Habits (a knowledge sharing and training platform for organisations).
Why did this article grab our attention?
His research noted the importance of learners pulling learning as we push it towards them. What makes this happen? Relevance, context and accountability are three factors noted as making a difference to successful learning transfer – all three are very valuable points.
Let’s focus on the relevance aspect for now.
When David spoke of relevance, he mentioned his research into online learning modules: a higher percentage of learners found the learning relevant and applicable on the job when the modules were developed by people who understood their organisation and customised the content to their environment.
Did you discover what you expected in this module?
- Internal subject matter experts/unique content/high relevance = 86% of learners
- External development/generic content/low relevance = 67% of learners
Were you successful in implementing on-the-job?
- Internal/unique content/high relevance = 96% of learners
- External/generic/low relevance = 56% of learners
David noted that training developed externally from an organisation could suffer from assumptions and generic examples, reducing relevance to learners. YES! This is why we love what we do.
Off the shelf is never the best fit. Working closely with an external provider to create something for your organisation will always be better – planning meetings and discussions with developers help get a better fit. More than this, we believe that no one can understand the Public Safety Sector better than people who work in it – who know the relevance and who have the lived examples. You don’t need to explain every tiny detail to us; we already get it!
David says “when someone is speaking your language, you tend to turn up the volume”. So tune in all Public Safety learners, trainers and organisations: We speak your language. We can jargon with the best of you, relate with most of you and look forward to working with lots of you!
Here is an issue many of us battle with sometimes: being passionate people about our areas of expertise when training and assessing, we get the urge to share everything and help people new to a topic learn as much as possible. Many of us have to constantly remind ourselves to be less “nice” and let them get on with learning!
If this is you, you are not alone in this ongoing challenge! If you are a training developer, this is also an issue often faced as you work with Subject Matter Experts (and let’s admit it, the Public Safety Sector is full of technical content and SMEs are everywhere!)
If you ask a Subject Matter Expert what information a newbie needs to know to do a certain job, often the answer is WAY more than you have the training time to share and WAY more than a new person’s brain can cope with. Experienced people often forget how it feels to be learning something new and how overwhelming it can be.
With the increase in online learning (where it is oh so easy to include even more information for learners), the risk of being told to include the “nice” increases dramatically. Please note, for those of us also working with nationally recognised training, units of competency and the amount of required knowledge and skills criteria are a whole different post of their own!
In developing product, our job is to remember, and to remind others, what the research shows: just because you include it or train it doesn’t mean they learn it! So how do we pull out the “need to know” from the “nice to know” when developing training?
Stop including everything that people believe is important:
- Identify who is the “boss” of all training and get their backing about the whole issue. You can be more confident excluding information if you know that they understand what you are trying to achieve.
- Remember that an SME opinion alone is not enough reason to include information.
- Be careful how you phrase questions – asking if something should be included will often result in the answer YES. More justification is needed here!
- Remind everyone involved that KNOWING something doesn’t always change how you DO something. This is a key part of the difference between “need to know” and “nice to know” – particularly when developing training for people new to the job.
Ask the right questions:
We need to sort through all the information and make sure the answer to each of these questions, which focus on DOING the job, is a YES:
- Is this information something that would put people in danger if they didn’t know it?
- Is this information something that you can’t do the job without knowing?
- Is this information something that would be used frequently to do the job safely?
Negotiate and find alternatives:
- Reverse the process – start with the skills and then fill in any knowledge gaps learners need to do the job. Do a training trial run without the extra information. Does it affect learner outcomes? Only put back in what makes a difference.
- Incorporate glossaries, case studies and further-information links that learners can access in their own time, without overtaking the “need to know” training focus. Make it clear these are “nice”, so they don’t look or feel like a must-do.
- If there is insistence that something be included, present the information in a simplified/reduced format. Have the SME or a trainer explain the concept to you in 15 seconds or less, in a way you can understand it. (This may take them several attempts, but it really pulls out the key points!)
- Consider a staged approach to the training – the bare essentials with a follow up day/session at a later time to cover more advanced information and contingencies after learners have a chance to get the basics sorted.
This post will likely be most relatable to anyone who has invested time and effort in training a great person for a role, only to have them leave (sometimes unexpectedly!). Research indicates that in many organisations close to half of roles cannot easily be filled by another internal applicant… and that about 50% of organisations don’t have any succession planning in place at all.
Those of us who have considered succession planning in training think about those who hold key training positions, including our own roles. No one is irreplaceable, after all!
Strategies used in our sector for succession planning commonly include:
- Having “deputy” positions so that there is a back up person learning the ropes
- Offering professional development opportunities to those people we identify as potential future role holders
- Cross training several people in several roles/areas of expertise so there is always a pool of knowledge and skills (Think SME groups or training teams)
- Scanning our current teams and identifying potential holes coming up – and formally starting some succession planning and changeover so it’s a gradual change
In an ideal world these are all great; however, I bet many of us have (regardless of best efforts) experienced holes (or sometimes chasms!) opening up in front of us anyway. Research suggests this happens more often than we think, for a number of reasons including:
- Having people identified as “future potential” who in actual fact don’t have the time or interest to take up these roles (did we ever actually discuss our plan with them?)
- The person we identified and have prepared as the successor also leaves! Despite all best efforts to identify good successors, new opportunities or personal decisions can cause the next-in-line to leave the organisation. Do we also have a retention issue?
- Using people we thought were great or ready to fill a hole, only to find out that their skill levels are stretched too far – maybe they are really better for us as just that awesome training delivery person, not the course organiser!
- Sometimes we start preparing a successor based on old or current skill set needs – only to find that when they are ready to move into a role, we actually should have focused on what future skills are needed to move things forward… so the person who has been earmarked is no longer the best solution!
SO WHAT ELSE CAN WE DO?
- Don’t give up on the idea of succession planning, even if it has failed before!
- Wait to develop a succession plan until after you have checked your organisation/region/unit strategic plan. Think about the future WHAT when deciding on potential future WHO
- Make sure everyone knows about any succession plans and opportunities – and that if they are a targeted person they are on board with the plan. Some people may even step forward and identify themselves as wanting to progress in roles
- Ensure that those who are the next generation are getting the professional development opportunities they need to fill future roles – they may need some individual plans to target their particular needs (and don’t forget to focus on what the roles/needs will look like in the future not just now)
- Don’t be afraid to regularly update or change your planning – circumstances and people in an organisation change over time. Move with things so your plan has greater chance of success
- Ensure that current role holders don’t block or make the next generation feel like there is no point/movement happening. They need to have responsibilities and chances to practice skills. They should feel like their chance in the role will come sooner than 20 years from now – even if it is just sometimes/for some things or when covering leave. Some organisations or units use fixed terms for roles to encourage more frequent changes, fresh thinking and chances for more people to be able to fill different roles, thus building resilience and a better appreciation of how every part helps support the whole
- Consider the worst case/deep hole/‘black swan’ scenario before it happens. What will the plan be if you suddenly lose a person in a role and the hole that is left is not filled by someone prepped to take it over? Consider it business continuity planning – put some contingencies in place, even if it’s just enough to keep “business as usual” on track until someone can be found
As trainers we know that at the end of a session there should be time allocated for summing things up. This usually involves:
- Restating learning outcomes
- Providing (and hopefully also asking for) feedback
- Linking to what happens next
Depending on how strapped for time you end up being this may be 5-15 minutes of the session (although I have seen this done in under 60 seconds when time has escaped some trainers!)
When we invest so much time and effort into creating an engaging and effective session at the introduction and in the middle, why don’t we invest just as much at the end? Is there no further learning gain to be had here?
How about tweaking the summary of learning at the end, based on the following Information Processing Approach to thinking and learning:
The ability for a learner to share what the session has covered (as opposed to the trainer reading off a list of outcomes) shows how learners have understood information. You can help this happen at 3 levels:
“We covered basic anatomy of the respiratory system”
Recall is a prerequisite to understanding, but it doesn’t really demonstrate it – it is just a memory. If you ask learners to share what you have covered in the session, you will likely get a list of topics or outcomes. Not higher-order thinking, but at least if they share it instead of you, it makes them think back and strengthens links to what was covered.
Verdict: Better than nothing, good option for when you left just 60 seconds to sum things up.
“The airway includes the nose, mouth, trachea, lungs and some muscles. It all works together to take in oxygen and get rid of CO2”
Asking learners to summarise activities in their own words moves beyond recall and demonstrates comprehension. This is great for you as a trainer – you can check and make sure everyone is on the right page. There are a bunch of different yet quick activities you can use to end a session that will achieve this and all of them can guarantee participation of every learner. Tweak your session and try including something like:
- Pass the ball – Have everyone stand up and throw a ball (or whatever) around the group. Each person summarises one thing learnt during the session. If done successfully they can sit down. No repeating ideas! Increase the challenge level by asking them to share things in order from the start of the session.
- Rally robin – In small groups, everyone takes turns verbally summarising what was learnt during the session. Alternatively, take the team rally robin approach – ask each group to take turns summarising something learnt, in order. They can talk among themselves to come up with the answer.
- Writing relay race – Have teams line up with one pen and piece of paper. Each person runs to the paper and adds their summary point before racing back to hand the pen over to the next person. No repeats count. Compare team summary lists at the end to see who has summarised the session best – either the most points or the most accurate.
Verdict: Takes more time but worth it, as it uses higher-level thinking skills. A good way to end a session with high energy. Can include writing or be purely verbal, depending on the group and available time.
Symbolising means representing experience, usually in non-verbal ways. It requires learners to really think about and interpret what was covered. Activities you can tweak your session to include as a summary include:
- Team mind map – Have everyone work in groups. Grab a texta and summarise all the big ideas covered during the session using just pictures. People can add to other people’s drawings to show more detail as they recall it. If you want to break it down a bit more, pair it with a recall summary – once the key parts of the session have been stated, divide up the topics so each group focuses on a different part.
- Hashtag generator – Admittedly it uses words, not pictures, but having to come up with a word or phrase that shares the concept is still symbolisation, in a more modern social media form… and the results can be quite hilarious! Depending on your organisation’s social media set-up and policies, groups can take a photo and caption it to visually represent a key concept covered at training. You can then also add a common tag for the group/course.
Verdict: Definitely not a 60-second strategy, but it does produce some hard-copy results you can review and evaluate later. High engagement, with potential for some laughs and fun. Plan your session timing accordingly and check out any policies that may affect what you can and can’t do.
In the public safety sector we frequently use standardised, unit of competency based training and assessment activities. At the big-picture level across the agencies there is a lot of sameness – yet there are multiple layers of difference that need to be considered.
Units of competency include a range statement to account for differences in workplaces and organisations, but further to this as trainers, assessors and learning designers we are expected to customise things further to suit our group of learners. This doesn’t just mean at an individual level, but for those of us helping to create the “standard” course it also means looking at a bigger picture of organisational learner characteristics, or even characteristics across multiple organisations.
Let’s look at some considerations for each of these.
Individual learner characteristics
When faced with a group of individuals that should have their learning needs catered to, it is possible to determine individual learner characteristics such as:
- Previous experience in work, life and training
- Age, gender, cultural requirements
- Location and access to resources
- Language, Literacy, Numeracy and technology levels
- Other identified needs requiring adjustment to learning
- Learning styles
Depending on how well we complete this process, we can then adapt our training style and focus to best meet the needs of this group of individuals, while still working within the requirements of the course and the organisation.
Organisational learner characteristics
What about when you are designing training or assessment at a higher and wider level?
If you are developing material to meet an organisation full of learners, how do you consider and address characteristics then? Is there a generic profile of what a learner or learners are typically like to help guide your design?
Luckily annual reports, workforce planning, research and literature, as well as surveys can help build a picture of learner trends across an organisation. The information may require some digging and extrapolating, but you should be able to find statistics or anecdotal evidence that will give you organisation wide learner profiling including:
- Average length of experience with the organisation
- Average age ranges and gender
- Location and access to resources (including computers and internet connection type)
- Previous education, as well as language, literacy, numeracy levels (For some emergency services this is tested as part of the employment process. For other organisations, especially those heavily involved with volunteers, there has been some great generic research on education and foundation skill levels across communities, which could be applied to organisations that are a reflection of the communities they serve)
- Previous implementation and success of learning initiatives dependent on delivery method and learning approaches
By considering these factors, it is possible to design something that will hopefully meet the requirements of learners across the organisation.
Sector wide learner characteristics
One level higher again – if you are trying to engage multiple organisations or agencies in training together or encouraging use of common resources (because often the content is the same in many ways!) how do you consider characteristics and needs of learners across different organisations? This is something we should have been thinking about for a few years now, especially with the push for multi-agency training programs gaining momentum in several states.
To a certain extent, the learner needs identified at an organisational level will provide the key information. It will need to be compared across the different organisations involved, especially in areas where there are differing minimum levels of skills and LLN needs.
Consideration should be given to the differing language and slightly unique processes used by different organisations – always looking for commonality and helping to ensure learning is understandable and meaningful for all learners. In particular, look at:
- Scenario choice – some emergency situations and events are more generic and inclusive than other emergencies
- Graphics choice – especially anything that denotes uniform, ranks or positions during an emergency
- Audio choice – especially applicable to online learning! This includes any music and choice of characters for voice overs. This may also include clips from radio chatter
- Word choice – Jargon, jargon, more jargon, acronyms, lots more acronyms, some mnemonics… this is all especially hard when different organisations call the same thing different things!
We were recently asked to explain to a client the benefits of shifting to digital learning. Realising that this is a trend that is increasingly hard to buck while staying comparable with other organisations in the Public Safety sector, they wanted some good reasons to promote to members and encourage them to embrace the digital option.
Benefits to highlight
- Personalising learning – the ability to access desired content at the time and location that meets the needs of individual learners, and to move through the content at their own pace, focusing on what is most relevant to them.
- More opportunities – learners can access content previously not available in their area, or use content from other organisations/locations around the world that meets their learning needs. Heavens above, some is even free!
- Engagement of new generations – for the learners who have grown up with technology and internet connection, digital learning and strategies such as game based learning and interactive activities online increases engagement and motivation. It may also appeal to other generations who have also embraced this type of interaction.
- Tracking progress – for trainers, assessors and organisations, digital learning makes it possible to see how learners progress through materials and where their strengths and needs lie by tracking completions and responses. For learners, things like Experience API (TinCan) mean learning in a broader sense can also be recorded and recognised by their organisation.
- Opportunities for social and collaborative learning across the organisation – helping to break down the silos of organisational structure and geographical location, digital learning can include social media platforms (such as the really great Facebook Workplace platform recently implemented by Ambulance Victoria).
- Preparing learners in our organisations for larger digital trends – opportunities for learners to gain confidence and skills in navigating digital devices and interfaces will support them with other tasks that are increasingly digital. This is easily seen as more pictures of Incident Management Teams, control centres, response vehicles/equipment and the work of AFAC and groups such as the Bushfire CRC circulate online and in the media. An interesting article on some of this is available from The Australian if you want a quick read. A Day in Your Life 2020 is an older but still eye-opening video (although maybe still slightly ahead of the general times!). There are lots of other videos that show technology that will be changing our sector, such as this look at Firefighting in the Future.
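For the curious, the Experience API (Tin Can) mentioned above records learning as simple “actor – verb – object” statements sent to a Learning Record Store. Here is a minimal sketch of what one of those statements looks like; the learner name, email address and activity URLs are made-up examples, not real data or a real organisation’s system:

```python
import json

# A minimal xAPI statement: "Sam Learner completed Restart a Heart microlearning".
# All identifiers below are hypothetical illustrations.
statement = {
    "actor": {
        "name": "Sam Learner",                       # hypothetical learner
        "mbox": "mailto:sam.learner@example.org",    # identified by email URI
    },
    "verb": {
        # Verbs are identified by URIs; this one is from the ADL verb registry
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        # The activity the learner interacted with (hypothetical URL)
        "id": "http://example.org/courses/restart-a-heart",
        "definition": {
            "name": {"en-US": "Restart a Heart microlearning"},
            "type": "http://adlnet.gov/expapi/activities/course",
        },
    },
}

# A Learning Record Store (LRS) would receive this as JSON over HTTP
payload = json.dumps(statement, indent=2)
print(payload)
```

Because the verb and activity are just URIs, the same structure can record learning from videos, simulations or on-the-job practice, not only course completions – which is why xAPI supports recognising learning “in a broader sense”.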
We hear you say BUT! What about…
These benefits above don’t eliminate the challenges that digital learning also presents to our organisations. Lots of members will be quick to highlight these when you start talking about a digital shift. Change is rarely easy, cheap, well implemented or the solution to all problems…
- Issues of cost and infrastructure are common, but organisations are slowly overcoming these, as has happened with other changes through history in the sector.
- Challenges of interoperability, security and authenticity are being increasingly addressed by developers.
The challenge you should be talking about
If we are going to focus on and talk about any of the challenges, let’s work out ways to address the risk of organisations developing “more of the same”. There is minimal benefit in layering technology on top of the same old way we have done things. Let’s face the challenge of developing new paradigms and models, and finding ways to shift people’s training and assessment practices at the same time we shift to digital learning.
It wouldn’t be the training industry without the introduction of new buzzwords every year! Here is a new one if you haven’t heard it yet:
LXD: Learner Experience Design
What is it?
LXD is about applying principles of User Experience Design (often referred to as UX Design) to the learning process.
UX is a term that has been used for a while to describe an approach to design where diverse fields such as psychology, service design and graphic design are brought together to understand a user, their context and what they want to achieve. This puts the user at the centre of design in order to create the best overall experience – making things work for the user, and not the other way around. I always thought this picture helped explain the potential mismatch of not doing UX well.
So LXD takes things one step further and applies the same ideas to the learning environment. We need to ask ourselves “how do learners best reach the desired learning outcomes?” And this is not just about online learning – it should be applied to every context that is being used including face to face training and any social learning.
How do I use it?
As is often the case with new buzzwords, you may already be doing this (and now you know what it is currently being called). You are using LXD if you are:
- Considering the target learner group for any training: background experience and previous training, demographics, preferences, motivations, needs
- Putting yourself in the learner’s shoes and experiencing the training from their perspective
- Mapping out progression of learners through different training and through this particular training – seeing how the parts and the whole fit together and support each other
- Including some learners, some subject matter experts and other trainers in the design process. Too often we end up designing in a vacuum and fill in any gaps with what we assume to be right or think will work.
- Testing training ideas by observing learners complete the training to see what sticks, then refining or changing things based on the learner’s experiences of completing the training. Real user testing is crucial!
- When creating online learning or print materials, considering not just content but also visual design, using brain-supporting strategies such as colour, grouping, white space, images and learner interactions
- Allowing learners freedom to determine what they complete or what they skip, based on their own assessment of their abilities… (as long as they can do it, does it matter how they learnt it?)
- Focusing on what creates behaviour change and skills development rather than content (I love this diagram below! It really explains what needs to be included in training and what we can live without/put elsewhere). Click on the picture to go to a great presentation by Julie Dirksen from Usable Learning that explains it more.