Evaluating informal learning – some more thoughts
September 24, 2013
Yesterday I started talking about how we might evaluate informal learning, and I suggested there were a number of steps or processes we would need to implement if we seriously wanted to get meaningful metrics out of the informal learning occurring within our workforce. I suggested the following list of steps, which might make up the process of evaluation:
- A skills outline for each role type within the organisation;
- Assessment of each staff member against that skills outline (there is a range of options here, but I think there has to be self-assessment plus manager assessment at the very least);
- A system for capturing staff informal learning activities;
- Regular (6-12 month) updates of staff skill assessments;
- Data capture of changes in skill levels across the organisation;
- A method of mapping skill changes to competencies; and
- A methodology for converting skill changes to organisational metrics and ROI.
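To make the list a little more concrete, here is a minimal sketch of what the first two steps (a skills outline per role type, plus combined self and manager assessment) might look like as data. This is only an illustration of the idea, not a real system, and every role name, skill and score below is invented:

```python
from dataclasses import dataclass

# Hypothetical skill outline for a role type (names and levels invented).
@dataclass
class SkillOutline:
    role_type: str
    expected_levels: dict  # skill name -> expected level (say, 1-5)

# Hypothetical record of one staff member's assessment against that outline.
@dataclass
class StaffAssessment:
    staff_id: str
    role_type: str
    self_scores: dict     # skill name -> self-assessed level
    manager_scores: dict  # skill name -> manager-assessed level

    def combined(self):
        # Simplest possible combination: average of self and manager scores.
        return {skill: (self.self_scores[skill] + self.manager_scores[skill]) / 2
                for skill in self.self_scores}

outline = SkillOutline("Sales Officer",
                       {"communication": 4, "closing sales": 3})
assessment = StaffAssessment("emp-001", "Sales Officer",
                             self_scores={"communication": 3, "closing sales": 2},
                             manager_scores={"communication": 4, "closing sales": 2})

print(assessment.combined())  # {'communication': 3.5, 'closing sales': 2.0}
```

Averaging self and manager scores is just one (crude) choice; an organisation might equally weight the manager's view more heavily, or keep the two scores separate.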
Today I want to look at some of the options and ideas around the items on this list. There are a number of key issues here: firstly, establishing a baseline skill level; secondly, measuring changes from that baseline; and finally, taking the data relating to those changes, at an individual or organisational level, and translating it into something meaningful. As I said yesterday, there are a number of ways in which that initial baseline data can be captured. The more I think about it, though, the clearer it becomes that there is a definite need to have robust skill groups determined for each role type within an organisation. These skill groups are entirely separate from any particular position description; they are tied to role types and levels rather than to specific staff or positions. The level of granularity required will have to be determined by each organisation, but my thinking is that for most organisations there would be general role types to which skill groups could be attached. The skills contained in these groups would also have some similarity through the hierarchy of the organisation: everyone in the organisation needs to be able to communicate, but the level of skill expected may well be different.
Once we have the skill groups and have assessed staff members against their relevant group (through whatever method we choose), we then need a method or process for regularly assessing fluctuations in each individual's skills within their skill group. Again, it is up to each organisation how robust they make these assessments. At the lowest end would sit, I think, a 6-12 month self-assessment by staff of where their skills now sit. At the highest end would be some form of regular, controlled, formalised testing process which provided solid evidence of changes in key skills. If we were thinking about, say, desktop applications such as Word, staff members could be required at set intervals to undergo a standardised external test of their skills and knowledge, which should give solid, meaningful data on changes to skill levels.
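That regular re-assessment step is, at its core, just a comparison of two assessment rounds. A minimal sketch, with all scores invented for illustration, might look like this:

```python
# Comparing a baseline assessment round against a follow-up round
# taken 6-12 months later, to capture changes in skill levels.
# All skill names and scores below are invented examples.

def skill_changes(baseline, followup):
    """Return the change per skill between two assessment rounds."""
    return {skill: followup[skill] - baseline[skill] for skill in baseline}

baseline = {"Word": 2, "communication": 3, "closing sales": 2}
followup = {"Word": 4, "communication": 3, "closing sales": 3}

print(skill_changes(baseline, followup))
# {'Word': 2, 'communication': 0, 'closing sales': 1}
```

The interesting design questions sit around this, not inside it: how trustworthy each round's numbers are (self-assessment versus standardised external testing), and how the deltas are then rolled up across the organisation.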
At some point, however, we are going to have to decide what this data means, at an individual level, an organisational level, or both. It is here, I think, that the real challenge lies. Consider the following example: an organisation has been capturing data on staff skill levels across the organisation, and has also been providing staff with access to, and encouragement to learn through, informal methods. After 12 months it sees an average rise in skill levels across the organisation of 5%, and over the same period a rise in sales of 3%. Can we make a connection between the two? Is there any correlation between the increases?
At this point, I think you would be hard pressed to make much out of it. It may even be difficult to make the connection between the informal learning and the rise in skill levels. Why? Simply because unless all the staff have been doing is engaging in informal learning, with no formal activities, no coaching and mentoring, supervision and so on, we are going to need a process for separating out the gains made from formal learning and the gains made from informal learning. We might be tempted to fall back on something like 70:20:10 and say, well, 70% of learning is informal, so 70% of the skills increase is due to informal learning, namely a 3.5% increase. I think that I, and a lot of other people, would find an argument like that somewhat less than satisfying. It could just as easily be the case that the 10% formal learning component produced the entire 5% rise in skills, and that informal learning did not contribute to the skills increase at all.
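Part of what makes the naive attribution argument so tempting is how easy the arithmetic is. Using the figures from the example above, the whole "calculation" is three multiplications, and the 70:20:10 split is precisely the unsupported assumption being criticised:

```python
# The naive 70:20:10 attribution the post warns against.
# The 5% figure comes from the worked example; the split is the
# unsupported assumption.
total_rise = 5.0  # % rise in average skill levels over 12 months

naive_informal = total_rise * 70 / 100  # "due to" informal learning
naive_social = total_rise * 20 / 100    # "due to" social learning
naive_formal = total_rise * 10 / 100    # "due to" formal learning

print(naive_informal)  # 3.5

# Nothing in the data supports this split: the same observations are
# equally consistent with the formal component producing the entire rise.
```

70:20:10 describes a rough proportion of learning *activity*, not a measured proportion of *effect*, which is why dividing an outcome by those ratios proves nothing.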
Even if we can make a strong case that informal learning contributed to the skills increase, we still have to make the case that the skills increase contributed to the sales increase, and unless we can show a link between the learning, the skills and the increase in sales, that is going to be difficult. Here, I think, the key is the level of granularity we apply to the data we are collecting. If we can show that members of the sales department accessed a range of informal learning resources all related to closing sales, that their skill level at closing sales went up (by whatever means of assessment we are using), and that the overall sales figures for that department then increased, we might have a strong enough case to suggest that the informal learning the staff did had a correlative effect on the sales figures.
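A sketch of what that granular check could look like in practice: correlating each salesperson's use of informal resources on closing sales with the change in their closing-sales skill score. Every number below is invented, and even a strong correlation here is only suggestive, not proof of causation:

```python
# Per-salesperson check: does accessing informal learning resources on
# closing sales track with the change in closing-sales skill scores?
# All figures are invented for illustration.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

resources_accessed = [1, 3, 5, 2, 8]      # informal resources used, per person
skill_change = [0.0, 0.5, 1.0, 0.5, 1.5]  # change in closing-sales score

r = pearson(resources_accessed, skill_change)
print(round(r, 2))  # close to 1.0 suggests the two move together
```

The same comparison could then be repeated between the skill changes and the department's sales figures, giving the chain of links (learning, skills, sales) the post argues is needed before any claim about effect can be made.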