NCVER’s Government-funded Student Data: What does it tell us?

So for those of you who aren’t aware, NCVER recently released its government-funded student data for 2016, and I think it contains some interesting findings.  Firstly though, what is the overall picture the data presents us with?  The big thing that should jump out for anyone looking at this data is that 7.8% of the Australian population aged 15 to 64 years participated in the government-funded VET system in Australia in 2016.  That is about 1.3 million students, a 3.3% increase from the previous year.  This shows the enormous part that funded training plays in the VET landscape in Australia and the importance it has in allowing people to undertake post-secondary education.  Without this funding, a significant proportion of that 7.8% of the population would not have been able to access the training they needed to improve their workforce participation options.

Interestingly, while there was an increase in students there was also a decrease in subject enrolments, primarily because there was a significant (nearly 300%) increase in the number of people undertaking funded skill sets as opposed to full qualifications.  This points to a growing industry trend, one which must be acknowledged and properly dealt with by all of the various funding bodies involved in the sector: increasing demand for focused skill sets to meet the needs of an industry or a particular employer.  This trend is rising rapidly, not just in VET but across organisational learning and development and post-secondary education in general.  Organisations and students are looking for short, focused courses containing a small number of units to fill skills and knowledge shortfalls and to be more competitive in rapidly changing markets.

Interestingly, 52.2% of funded students were enrolled at TAFE or other government providers, with only 40.8% enrolled at what would generally be defined as private providers.  The balance of enrolments were through community education and other providers.  This represents an increase for TAFE of 14.8% in terms of students, with private and community providers both dipping by a little over 7%.  I find this interesting (and yes, I know these are last year’s numbers and things can change) because there has been significant media coverage of the downturn in student numbers enrolled in TAFEs.  What this seems to suggest, at least to me, is that if TAFE is clearly improving its position in the funded training market, then it must be losing substantially in the more competitive fee-for-service markets, including income-contingent loans, which as we all know are not funded training.  To be fair, the non-TAFE sector has for a long time (even before VFH) traditionally done better in the fee-for-service space for various reasons.  It will be interesting to see what the total VET activity data says this year, when we can get a picture of all enrolments to compare against the funded enrolment data.

Every age demographic, with the exclusion of 15 to 19-year-olds, increased in terms of student numbers, as did females, Indigenous people and people with disabilities, which is a win, as these groups are often the ones most in need of financial assistance in order to undertake training.  The community services training package was the largest contributor to student numbers at 18.5%, which, given the number of staff that will be needed in this sector in the coming years, is probably a good thing.  The most popular fields of education were engineering and education; however, information technology and natural and physical sciences had very significant drop-offs at 14.6% and 16.4% respectively.

Overall, the real impact of this report is that it shows the enormous value that funded training contributes to this country.

Anyway that’s just my opinion.

P.S.  As some of you know, I will be moving on from my current role at the end of this week, to take on a more traditional, less VET-centric organisational Learning and Development role.  I will still be quite strongly connected to the sector, just in a different way than I currently am.  It is also probably the case (and I can’t promise this) that I will take a break from posting for a couple of weeks as I get up and running in the new role.

 

Paul


The business of vocational education – introductory thoughts

This is the start of an occasional series of posts I am going to be writing on the delivery and management of vocational education.

I first dipped my toes into the world of training way back in the late ’80s, doing internal sales and compliance training for a financial services company (if you think there are issues in the financial sector now, you should have seen it then; it was like the wild west, which in the long run is why I got out).  For a very long time since then I have held the belief that training and education is a business, or at the very least should be treated as a business.  Whether this is from the perspective of organisational L&D or from that of public or private education in schools, universities or the VET sector, we do the delivery of education and the outcomes it brings a serious disservice if we don’t think of it as a business.

Now I know there will be people out there who will jump at this statement and say things like: education is a social or public good; it is this kind of business thinking that has got us to where we are today.  To those people I say you are right and you are wrong.  Learning and education are a social and public good, there is no question about that; however, there is nothing incompatible between holding that position and thinking that we can improve the way in which we deliver that social and public good by applying business models and principles to it.

When I first started in this industry, and whenever I have been involved in projects or roles around organisational learning in particular, the concept of return on investment (ROI) has always been thrown around.  Now, this and other posts on this subject will not be about how to apply ROI to training (there are plenty of books, institutes and consultants out there who will do this), because how this is done isn’t what interests me; what interests me about ROI is what lies underneath it.

Essentially, all the concept of ROI is saying to us is that if we invest a sum of money in the education of people, be those people staff of our organisation, school students or members of the public, there should be some kind of return for the money that is invested.  That return may vary from something very tangible, like an increase in sales revenue, to something more intangible, like a populace that is better able to think critically about issues and information.  However it seems that we should expect some return; the money we spend on all forms of education should not simply disappear into a black hole and return nothing to whoever is spending it.
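To make the underlying arithmetic concrete, here is a minimal sketch of the common ROI formulation (net benefits divided by costs).  The figures are invented purely for illustration.

def training_roi(total_benefits: float, total_costs: float) -> float:
    """Return ROI as a percentage: (benefits - costs) / costs * 100."""
    return (total_benefits - total_costs) / total_costs * 100

# e.g. a program costing $50,000 credited with $65,000 of additional revenue
print(training_roi(65_000, 50_000))  # 30.0, i.e. a 30% return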

This also points me to the idea that the delivery of education and educational services will work better if we think about them more like a business.  I am often reminded of a conversation I had more than a decade ago with a then school principal who is now in charge of one of the state’s largest public schools.  We were talking about the stress on teachers, how schools were run, administration and a range of other topics, and he said to me: “I had always been amazed by the number of people in the education world who didn’t treat what they did as a job or recognise that essentially what they were part of at a school was a business.  Even when I was first starting teaching I treated it like any other job I had had; I spent 8 hours at school.  I didn’t leave just after the kids left, I stayed there until five and I tried not to take work home, and I have done that all the way through my career.”  Now, does this mean that he didn’t think of his various roles in education as vocations or careers, but rather like working in retail or at a supermarket?  Of course not.  He was and still is a deeply committed educational professional who has over the years created great outcomes for his students and other staff.

I have often suggested that things like non-attendance time and term-based delivery, for example, are detrimental to the ability of providers who deliver under such antiquated conditions to meet the ever more demanding needs of their students, who increasingly want their educational institutions to be available at times that suit them.  With more and more competition across the entire educational market, only business models which meet consumer needs are going to survive.

Clearly, though, some of the business models which have been adopted, particularly in the private sector, have been badly flawed and have led to damage to the sector, to students, and to the businesses themselves.  Again, the underlying reasons why these models have not worked are, as with the previous example, mostly to do with not meeting student needs and demands, as well as failing to provide a reasonable, or in some cases any, return on investment.

There is also a significant argument to be made that one of the reasons various models have failed, or not been as successful as they could have been, is that their focus has not been centred on what should be integral parts of any VET sector business.  This also seems to have led, at least in some cases, to questionable and possibly unethical behaviour.

What needs to be central to the delivery of vocational education, and to any ethical, useful business model for the sector, is the outcome for students.  It should be noted that by this I don’t just mean a piece of paper; I mean a worthwhile educational experience which leads to a worthwhile outcome.  What a worthwhile outcome might be is something I will explore later in this series, primarily because there may be valuable and worthwhile outcomes for students which do not directly connect to employment or workforce participation but which still create a return on investment.

It seems likely that if these ideas of educational outcomes for students and return on investment for money spent on training, regardless of whether this money comes from government funding, individuals paying for their own training or other sources, are the core components of a delivery and management model, we should be able to develop ethical business models which meet the needs of students and other stakeholders while still being profitable and cost-effective.  It is this idea of what ethical, cost-effective, well-managed business models look like which will be the subject of the continuing posts in this series.

 

 

Measuring outcomes in Vocational Education – Are we doing enough?

With the deadline for Total VET Activity reporting fast approaching, my mind has turned to how we evaluate outcomes in Vocational Education in Australia, or at least to the data we collect nationally which seeks to provide us with information on the Vocational Education industry in this country.  Now, we all know and love the Learner Questionnaire and the Employer Questionnaire, but what are we really capturing here, and is it giving us any genuinely useful information?  Let’s be honest: when we look at the questionnaires, even the employer one, they are really nothing more than simple level-one satisfaction (smile) sheets, and anyone who has been in the L&D industry for even a few weeks knows how resoundingly useless smile sheets are if you want to convince anyone about anything to do with training.  Now I know these are only one source of data, but if I had used that sort of data to convince an executive team as to why they should invest in a learning program I would have been laughed out of the boardroom.

If we look at the well-known Kirkpatrick model of evaluation with its four levels

  1. Reaction.
  2. Learning.
  3. Behavior.
  4. Results.

realistically, all we are doing here is evaluating reaction.  We are not really looking at anything that comes close to the other three levels, at least not on any consistent, regular basis.  Now of course, someone could suggest that levels 2-4, and even level 5 (ROI) if we want to include that, are something that should be done in the organisations that are sending their employees to be trained.  That is a fair point, and I would suggest that most organisations with L&D units are already doing this.  However, what about all of the people who are undertaking training not at the behest of their employer?  They may be unemployed, casual, or wanting to change careers, and we seem not to be collecting and analysing any higher-level data on these people.  We also seem not to be asking those employers who are collecting robust data on the learning of their employees for their data either.  Now, admittedly, in Queensland (and this may be the case in other states as well) there is some collection being done of employment outcomes for students undertaking funded programs, but if we are trying to make a better system, don’t we need some real data to allow us to make decisions about what is working and what isn’t?  (If we are collecting this data and someone knows about it and how we are doing it, please let me know.)  I come back again to the point that if I were to try to justify an organisational L&D budget on the kinds of data being collected on training, I would be told to go away and come up with some real figures.

So what should we be capturing?  If we look at the Kirkpatrick model (I will talk about some other options later), we should at least be capturing whether or not the student actually learnt something: were the learning objectives met, and how much change has there been in their skills, knowledge or attitude?  Now, of course, to do this properly we need to collect data before and after training.  It could be suggested that, by virtue of the person going from not having a qualification to having a qualification (becoming competent), there is an increase in knowledge, but is this actually the case?  Has there actually been a change in the person’s skills, knowledge or attitude?  With the data we are collecting at the moment we just don’t know; we are making assumptions, and without good data those assumptions could very well be wrong.
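As a minimal sketch of what that before-and-after comparison could look like, assuming we held a numeric assessment score for each learner at both points (the names and scores below are hypothetical):

pre_scores = {"alice": 45, "ben": 60, "carla": 52}
post_scores = {"alice": 78, "ben": 71, "carla": 80}

# per-learner change in skills/knowledge, and the average gain across the group
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
average_gain = sum(gains.values()) / len(gains)

print(gains)         # {'alice': 33, 'ben': 11, 'carla': 28}
print(average_gain)  # 24.0 - evidence of learning beyond "competent/not yet competent"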

What about whether or not we are seeing any changes in behaviour?  Are the learners putting what they ‘learnt’ into action in the workplace?  Are their behaviours changing as a result of the training that was delivered to them?  This is not something that can be answered with a smile sheet at the end of training, or an employer questionnaire with general questions about perceived improvement.  This is data that needs to be collected once the learner has been in the workplace for a while, once they have had time to adjust and integrate their new learning into their work environment.

Then of course there is the question of what the result was.  Did this training improve anything organisationally, at a state level, at a national level?  Was it good for the organisation, the country, the learners, the bottom line, whatever?  We need to figure out what areas we are seeking to see improvements in and then track whether or not we are seeing those improvements.  Again, this is a step up in complexity in terms of the types of data needed and how we might collect it.

Finally, of course, there is the question of return on investment.  It lets us begin to answer the question: have the billions of dollars the government has handed out in VET FEE-HELP and other programs, and the money spent by business and organisations on top of that, been worth it?  Have we got the return on that money that we expected, or did it just swirl away into a black hole where it didn’t improve anything we would consider worth measuring?

Is all of this easy to do?  No.  Am I suggesting that the reporting requirements on RTOs change so that they are required to capture a whole range of additional data over a timeframe long after the student has finished their course?  No.  Am I suggesting that perhaps we need to do a yearly survey, akin to the kind done by ATD (formerly ASTD) on the state of learning, and include employers, organisations, training providers and learners and try to ask them some serious questions?  Maybe.  Am I suggesting we need to do more with this so-called big data thing that has been talked about so much?  Maybe.  I don’t have a definitive answer; I just think we need to be doing more if we really want to capture the true outcomes from our Vocational Education system.

 

My favourite L&D Books

I thought as a start to the year I would begin building a list of my favourite Learning and Development books.  These are all books that I have read and would highly recommend.  If you have any other suggestions then feel free to let me know.

The Business of Learning – David Vance

The Success Case Method – Robert O. Brinkerhoff

High Impact Learning – Robert O. Brinkerhoff

Ten Steps to a Learning Organization – Kline and Saunders

Building the Learning Organization – Marquardt

The Fifth Discipline – Senge

The Training Measurement Book – Bersin

The Six Disciplines of Breakthrough Learning – Wick, Pollock and Jefferson

How to Measure Training Results – Phillips and Stone

 

So, as I said, if you have any other books that you like, let me know.  I intend to build this list over the year.

 

2014 ATD (ASTD) State of the Industry Report

Well, for those of us fascinated by L&D statistics and the meaning and implications behind them, ATD (formerly ASTD) have just released their annual State of the Industry report for 2014.  So what does it have to say, and what implications can we draw from it?

So what did it cost?

Firstly, we see that spending on training by organisations has gone up, not by much, around 1%, but still it has gone up, to an average of $1,208 per employee.  The interesting thing about this number is that it is much higher for smaller organisations (fewer than 500 staff) at $1,888 and much lower for large organisations (over 10,000 staff) at $838 per employee.  Much of this can be put down to larger organisations being able to take advantage of economies of scale when it comes to development, maintenance and delivery costs of training, and having the same dollar spend spread over a larger group of employees.

We also see that learning hours used sit at about 31.5 hours per employee across the board, which is relatively the same as the last few years.  An interesting wrinkle to this average is that medium-sized companies (500-9,999 employees) only come in at about 27 learning hours used per employee, and while this might be interesting to investigate further, it may simply have more to do with the relative size of the data samples than any actual trend.  Again, we also saw that direct expenditure on learning as a percentage of revenue remained relatively stable at around 1.2%.  The vast majority of this spend is, as it has been for many years, made up of the internal costs to organisations of delivering training, remaining again in the mid-60% range, with external services (27%) and tuition reimbursement (10%) making up the balance.

 

So what did we deliver and how?

The three content areas that made up more than 34% of all the training delivered were;

  1. Mandatory and Compliance Training
  2. Managerial and Supervisory
  3. Profession or Industry specific

with the bottom 3 areas being;

  1. Executive Development
  2. Interpersonal Skills and
  3. Basic Skills

As far as delivery methods for training go, the winner and continuing champion by a long margin is of course Instructor-Led Classroom-Based.  Yes folks, yet again face-to-face classroom-based training got the gong for being the most frequently used delivery method, at 54.6%.  Not a bad effort for the old-timer, in my opinion.  To be fair to the up-and-coming, much-lauded new world of learning delivery, self-paced online learning came in second with 17.9%, and that most important, game-changing learning and development technology, mobile or m-learning, came in with a massive 1.7%.  All right, I apologise for being a little facetious there, but I think what these numbers show quite simply is that, for all of the rhetoric, claims about mobile learning being the most important development in L&D ever are simply not stacking up, at the moment at least.  Even when we throw all of the technology-based delivery methods together they still only account for about 38%, with the balance being taken up by options like self-paced print-based learning (which by itself, and I find this incredibly interesting, accounts for 4.75% of delivery, nearly three times higher than mobile learning).

So what does all this mean?  Well, I think for the most part we as an industry should be happy with the results.  We are seeing consistency in spend and in the kinds of training being delivered.  There seem to be no great surprises (well, except for those who tout m-learning as the next big thing; OK, I will stop now), and it seems to be much what you would expect from a stable, mature industry that knows what its goals are.


Rapid Skill Acquisition and Instant Evaluation – The Evaluation of Interfaced Learning

Ask yourself this: if we think about just-in-time learning, utilising YouTube or videos to impart skills to staff, or even just staff reading a policy or procedure through an online portal, what result do we want from this?  How can we tell if this skill acquisition through some form of interfaced learning has been successful?  How can we evaluate skills or knowledge which, as I have discussed in another post, may be disacquired as rapidly as they were acquired?  The more I think about evaluation, the more I feel that our traditional models aren’t designed to cope with a world in which just-in-time, rapid skill acquisition is becoming more prevalent.

If I think about the example I have used previously, of me acquiring the skill to use a mitre box to cut ceiling trim and install it, I think it shows where my thinking sits on this.  So the question is: how do we evaluate whether or not the ‘learning’ was successful?  Well, what was my (or to be truthful, my wife’s) success criterion?  It was quite simply: install the ceiling trim.  If that was the success criterion, then it would seem that I was successful.  I know I was successful because my wife was happy with the result.  The real question here for me is how long that evaluation process took; in reality, it was almost instantaneous.  I completed the task and then got my wife to come in and evaluate it as soon as I was finished.

Now, you might say that is a very simple example and that evaluating the success or failure of a learning program at an organisational level is much more complex than that, and while I think that is at least to some extent correct, I also think there are a range of learning interventions where trying to do something other than what I have outlined above simply overcomplicates the matter.

Certainly there are tasks, skills and knowledge that we want our staff to have learnt and integrated, so that they can perform them independently and without additional learning when they need to.  However, there are a range of tasks within any organisation where what we want from the learning process is that, when the person needs to undertake the task, they simply access the relevant information, perform the task and then move on.  This is the Interfaced Learning process that I have described elsewhere, where we don’t actually expect that the staff member will retain skills or knowledge for any longer than is necessary for them to complete the task (Acquire, Utilise, Disacquire).  In these cases I would suggest that the best time, and really the only time, we should be evaluating the effectiveness of the learning is at the time; that is, directly after they have completed the task in question.

Sure we can accumulate all of this data, from all of the Interfaced Learning activities across the organisation and then begin to analyse what worked better, what was successful and what wasn’t, but the success of the actual instances themselves is something that should be determined as soon as possible after the completion of the task in question.

Chasing Butterflies – Evaluating the organizational impact of informal learning

Turning informal learning into measurable business outcomes

We all learn things all of the time.  It is part of being human; we pick up a snippet of information during a conversation that we remember later, which makes whatever we are doing easier or quicker.  We watch a video on the internet to show us how to do something we haven’t done before or can’t quite remember how to do.  We read, we interact with people and things and we learn informally every single day.

Informal learning and ideas like 70:20:10 have been on the minds of Learning and Development departments and organisations for a while now.  Everyone knows that staff learn while they are at work, and that while a proportion of that learning comes to them through formal means, another proportion comes to them informally, through talking and interacting with others, through reading and watching videos, posting on forums and attending MOOCs, and this learning can happen both inside and outside of working hours.

What does all of this informal learning mean for organisations, though?  What kinds of impacts does it have on the overall performance of the organisation or the individuals who make it up?  Does it increase productivity and efficiency, does it increase the competency of staff, and does it improve the bottom line?  Anecdotal evidence suggests that it does, and if you believe the pundits and evangelists for the value of informal learning, it is far more valuable to an organisation than all of the formal learning that happens, and it is the thing organisations should be investing in and helping their staff to access in order to maximise their ability to operate in their chosen markets.  Is this really the case, however, and is it the case across the board?  Does informal learning have a real and significant business impact, does it make a business and its staff better at what they do, and does it provide individuals and the organisation with identifiable increases in competency, efficiency and overall effectiveness?

This post will not actually argue the case either way for informal learning; it will not pit informal against formal learning, and it won’t make an argument for an investment strategy based around informal learning.  What this post will seek to do, however, is;

  1. Look at what organisations need to think about when they think about the value of informal learning,
  2. Look at the information an organisation needs if it seriously wants to determine the value of informal learning, and
  3. Look at how to capture that information and what methodologies to use to make sense of it.

This post is also only concerned with learning that has some organisational impact, where the skills and knowledge gained by the learner are translatable in some way to their current or future role within the business.  If a person wants to go off in their own time and, as a colleague of mine likes to call it, study underwater basket weaving, then that is fantastic, but it is only of interest to an organisation if some formal link to an improvement in day-to-day work can be seen.

 

 

What is formal and informal learning?

 

Formal learning is any course or program designed using industry-recognised disciplines and methodologies, with a formal structure and specific, well-defined learning objectives, which may lead to formal outcomes or qualifications for the participant and which is delivered through a form of student-teacher relationship.  That is, formal learning is what we are all used to as learning: it is the learning we did at school, at university and in the courses and programs we attend at work.  It is structured learning, with learning outcomes defined not by the participant but in the training itself.

So then, what is informal learning?  One could say it is simply everything else, every other way in which we manage to learn new skills, and be fairly accurate.  Informal learning is where there is no set learning outcome, at least not one set by anyone other than the participant.  It is unstructured, not designed using robust principles of instructional design, and often not even seen as learning by the participant.  It is, as I have said before, the day-to-day conversations, the videos, the books, the jobs that we do; they all form part of our informal learning.

 

 

Evaluating informal learning

 

If we look at the definition of informal learning as opposed to that of formal learning, the problem in terms of evaluating its impact can be clearly seen.  There are no outcomes that can be looked at to see whether the participant has successfully achieved them, it is not delivered in traditional methods and formats, there is by its very nature no assessment as part of it, and in a lot of cases we may not even be sure when or where the learning itself actually occurred.  So it is no wonder we struggle, not just with the concept but with the practicality, of how to evaluate the informal learning that staff do during the course of their days and what, if any, personal or organisational impact that learning may have.

It seems to me that there are two types of informal learning which are important to organisations: learning that simply increases, builds or improves a skill, and learning that provides some ‘formal’ recognition pathway for the learner.  It is important to note that it is formal recognition that matters here: recognition where there is some ‘qualification’-style outcome as a result of the learning, where the participant is assessed in some way prior to being deemed competent in the skill or knowledge.  I am not talking about badges or other such methods of capturing the results of informal learning, be they peer-reviewed or not, because from an organisational point of view these methods are (at least currently) virtually meaningless.

Why meaningless?  I have often recounted a story of being asked, as part of an investigation into an incident involving a member of staff, “How did you know this person was competent?”  Now, if my answer had been “well, he has a badge for it”, I think I would have gotten a much different reaction to saying, as I did, “Well, they have completed all of the assessment tasks, including a third-party observation, necessary for us to be satisfied that they were competent under the rules of evidence set out in the legislation pertaining to the operations of Registered Training Organisations in Australia.”

This is not to say that the person with the badge was not competent, or that the person with the assessments never made a mistake or forgot what they had learnt.  It is to say, however, that it seems more likely that we can be confident the second person was actually competent than we can be of the first.

Is this to say, then, that if informal learning does not lead to some kind of qualification it is of no use, or of much less use than what we would normally see as formal learning?  No.  Not all informal learning is going to lead a staff member to a qualification; some of it is not related to or captured by the range of qualifications available, and some simply adds to the skill set they already have, making them better at their role but not providing them with a new skill.  This suggests that any process of evaluation we may seek to apply to informal learning must be capable of dealing both with learning that will lead to a formal outcome and with learning that will not.

To this end, there seem to be a number of things we need to know in order to be able to begin to evaluate informal learning, and I am indebted to Saul Carliner for some of his thinking around this.  In order to evaluate effectively the impact of informal learning, both at an individual participant level and an organisational level, we will need the following information (a rough sketch of what a record capturing it might look like follows the list);

 

  1. A baseline – what is the staff members current skill level,
  2. What they have learned,
  3. How they have learned it,
  4. New skill level,
  5. Is there a competency attachable to the learning,
  6. Effect of learning on organisational metrics – reduction in customer complaints, fewer injuries etc., and maybe
  7. Return on Investment?
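As a rough sketch only, here is what a single record capturing those seven items might look like in code; the field names and the numeric skill levels are my own assumptions, not any standard.

from dataclasses import dataclass, field

@dataclass
class InformalLearningRecord:
    staff_id: str
    skill: str
    baseline_level: int                   # 1. current skill level before the learning
    what_was_learned: str                 # 2. description of the learning itself
    how_it_was_learned: str               # 3. video, conversation, forum, MOOC, etc.
    new_level: int                        # 4. skill level on reassessment
    mapped_competency: str | None = None  # 5. a formal competency, if one applies (Python 3.10+ union syntax)
    org_metrics: dict = field(default_factory=dict)  # 6. e.g. complaints, injuries
    roi_estimate: float | None = None     # 7. if it can sensibly be calculated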

Interestingly, this list looks very much like what we need to know about any learning process in an organisation.  This of course then leads to the question: if this is what we need to know to evaluate the effectiveness of organisational learning, why are we treating the outcomes of informal learning differently to how we would treat formal learning?

Now, some of it may not be relevant; we may not be able to, or may not want to, capture the standard ‘smile’ sheet satisfaction-style data that we collect from formal training, and yes, the nature of the learning (pull not push, driven by the individual, just in time, etc.) makes the process of learning different.  However, when we look at it from an organisational point of view, are we not looking for the same thing as with formal learning?  We are looking for an increase in the skill level of the staff member, such that the increase in skill will have an effect on the relevant workforce metrics that relate to their role in the organisation.  If we aren’t looking to improve the skills of our staff and the organisation as a whole, why are we investing in informal learning systems in the workplace, and why have they become so important?

So how do we achieve this?  The list below outlines what we need to have in order to really make this idea work (a toy example of the first two items follows it);

  1. Skills Outline for each role type within the organisation,
  2. Assessment of Staff member against skills outline – there are a range of options here, but I think there has to be at least self-assessment + manager assessment at the very least,
  3. System for capturing staff informal learning activities,
  4. Regular (6-12 Month) updates of Staff skill assessments,
  5. Data capture of changes in skills levels across the organisation,
  6. Method of mapping skills changes to competencies, and
  7. Methodology of converting skills changes to organisational metrics and ROI.
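As a toy sketch, and assuming an invented five-point rating scale, items 1 and 2 might look something like this, with skill groups tied to role types rather than to individual positions, and an assessment that averages self and manager ratings:

# skill groups attached to role types, not to specific staff or positions
SKILL_GROUPS = {
    "retail_assistant": {"communication": 2, "stock_management": 2, "sales": 3},
    "store_manager": {"communication": 4, "stock_management": 4, "sales": 4,
                      "financial_management": 3},
}

def assess(role: str, self_scores: dict, manager_scores: dict) -> dict:
    """Combine self and manager ratings for each skill in the role's skill group."""
    return {skill: (self_scores[skill] + manager_scores[skill]) / 2
            for skill in SKILL_GROUPS[role]}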

 

 

Creating a baseline

 

If we start with the idea of a baseline, we might be able to sort out some structure and processes around it.  So, where might a baseline come from?

  • Position Description,
  • Performance and Professional development plans,
  • Self-Assessment,
  • Formalised Assessment, or
  • Job skills analysis plus a rating system.

But how can we do it without it being onerous on everyone involved?  If we use position descriptions as our starting point, we have the problem that there is not enough detail, or that they are not skills-based, or that we don’t assess the person against them in any formal way that gives us real data to work with in the first place.  They could be coupled with self-assessment and direct manager assessment to give a fuller picture of the skill set and levels of an individual staff member.  PPD plans can be seen in the same light; in order to make them more useful in terms of presenting us with a baseline we need to capture more granular detail about the role and the staff member’s skills relating to that role.

To really make this process as robust as possible there is a definite need for skill groups to be determined for each role type within an organisation.  These skill groups are entirely separate from any particular position description and are tied to role types and levels rather than to specific staff or positions.  The level of granularity required will have to be determined by each organisation.  My thinking, however, is that for most organisations there would be general role types to which skill groups could be attached.  The skills contained in these skill groups would also have some similarity through the hierarchy of the organisation; everyone in the organisation needs to be able to communicate, but the level of skill expected may well be different.

 

Once we have the skill groups established, we then have to come up with a way of assessing the current level of competency staff have in relation to the skill group which applies to their role.

Again a number of ways of doing this have been suggested;

  • Self-assessment
  • Peer assessment
  • Manager assessment
  • Formal evaluation and testing

The problem associated with most of these suggestions is that they either don’t really provide us with strong evidence of skill levels or competency, or they are, or are likely to become, onerous and time-consuming.  Self-assessment is probably not going to provide us with the robust kind of evaluation that we require, particularly if we are looking at this assessment process as providing us with a baseline from which to build evidence towards a qualification.

Peer assessment and manager assessment may also fall into the same trap, and in addition they may, depending on the number of assessments and the number of people managed, become onerous and time-consuming.  While formal assessment processes certainly could meet the criteria in terms of robustness, there is again, depending on the kind of assessment and how it is delivered, the real problem of it becoming expensive and time-consuming for everyone concerned.

This is of course less of a problem with new hires into the organisation than it is with those staff currently employed.  With new hires, a system which captures their current skill levels with respect to their specific skill group can be quite easily developed and maintained, providing baseline data on that group, because the process for capturing the data can be made part of the recruitment and induction process.  Current staff are going to be the group for which capturing this data will be most difficult, as any process is going to be additional to their current workloads.  One way would be to make the Performance and Professional Development review process more robust and include a form of assessment against the skill groups as part of it, in addition to staff and manager perceived assessments of skill levels.

All of this data begins to present us with other problems as well, however.  How and where do we capture it so that it is accessible and usable, both in terms of individual staff progress and advancement in the relevant skill areas, and at an organisational level, where roll-ups of this data could and would be invaluable in determining trends and needs across the entire organisation?  How this will be achieved will of course be a decision that individual organisations make according to their current systems and needs.  It is vital, however, that this and the other information that is going to be generated through this process is captured, because it provides a wealth of information not just for the organisation but for the individuals themselves.  It allows the organisation to see the value and the benefit of its learning investments, be they formal or informal, and allows individuals to track their own progress towards a variety of goals.

 

 

What they learnt and how they learnt it

 

Having the baseline data available gives organisations a place to start when looking at the evaluation of learning, be it formal or informal.  However, one of the problems with capturing and evaluating the effects and impact of informal learning is quite simply identifying when and how it happened.  Unlike traditional formal learning, there are no classes, no events, no calendar of activities to point to, to show when a staff member undertook training on a particular subject.  There are no learning outcomes, no standard content to point to, to establish what a participant may have actually learnt.  This has presented, and does present, a significant problem for evaluating the impact and effects of informal learning: if we don’t know what, when or how someone learnt something, then how can we determine where that knowledge came from?  Unfortunately, from the perspective of this work there is little to say here except that organisations will need to find a way to capture this data which is meaningful to them and suits their needs and individual situations.  One method would be to establish a database of ‘learning activities’ which staff could update as needed, or which could capture data from a range of sources, using technology solutions, around a staff member’s interactions with informal learning opportunities.  There is no single answer here and no single best way of achieving this.  There will even be differences of opinion about the level of granularity that is sufficient to show that a participant has in fact had an informal learning experience.  Again, as with the baseline data, what is important here is that data of some description that is meaningful to the organisation and the individual is captured.

 

 

Changes in skills and knowledge

 

We have captured the baseline skills data across individuals in the organisation and we have also captured when and how they are learning; what is needed now is a method or process for regularly assessing fluctuations in the skills within an individual’s skill group.  Again, it is going to be up to an individual organisation how robust they make these assessments.  At the lowest end would sit, I think, a 6-12 month self-assessment by staff of where their skills now sit.  At the highest end would be some form of regular, controlled, formalised testing process which provided solid evidence of changes in key skills.  If we were thinking about, say, desktop applications such as Word, staff members could at set time intervals be required to undergo a standardised external test of their skills and knowledge, which should give solid, meaningful data on changes to skill levels.  The other point of note here is that there needs to be a correlation between the kind of assessment done at this stage and the initial baseline assessment.  While some variety in methods of assessment will still provide meaningful data, this cannot be said of all methods.  For example, if the initial assessment had been a rigorous formal evaluation, carried out through validated assessment tools and administered professionally, but at the point of reassessment a simple self-assessment tool was utilised, the results may not be as useful as they could be.  My thinking here would be simply that the same type of assessment, even the same assessment, should be used to detect changes in an individual’s skills and knowledge; otherwise we are simply not comparing the same things.
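As a small sketch of that reassessment step, assuming the same (invented) five-point instrument is used at baseline and at the 6-12 month review:

baseline = {"word_processing": 3, "closing_sales": 2, "communication": 4}
review = {"word_processing": 4, "closing_sales": 4, "communication": 4}

# change per skill between the two assessments, using the same instrument both times
changes = {skill: review[skill] - baseline[skill] for skill in baseline}
improved = [skill for skill, delta in changes.items() if delta > 0]

print(changes)   # {'word_processing': 1, 'closing_sales': 2, 'communication': 0}
print(improved)  # the skills where learning, formal or informal, may have had an effect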

 

 

Metrics and measurements

Once this data has been collected, we are at some point going to have to decide what it means, on an individual level, an organisational level, or both.  It is here, I think, that the real challenge may lie.

Consider the following example: an organisation has been capturing data on its staff skill levels across the business, and it has also been providing staff with access to, and encouragement to utilise and learn through, informal methods.  After 12 months it sees an average rise in skill levels across the organisation of 5% and, over the same period, a rise in sales of 3%.  Can we make a connection between the two?  Is there any correlation between the increases?

At least at this point, I think you would be hard pressed to make much out of it.  It may even be difficult to make the correlation between the informal learning and the rise in skill levels.  Why?  Well, simply because unless all staff have been doing is engaging in informal learning, with no formal activities, coaching and mentoring, supervision and so on, we are going to need to come up with a process for separating out the gains made from formal learning and the gains made from informal learning.  At this point we might be tempted to fall back on something like 70:20:10 and say something like: well, 70% of learning is informal, so 70% of the skills increase is due to informal learning, namely a 3.5% increase.  I think that I, and a lot of other people, would find an argument like that somewhat less than satisfying.  It could be the case that it was the 10% formal learning component that produced the entire 5% rise in skill and that informal learning did not contribute to the skills increase at all.

Even if we can make a strong case that informal learning contributed to the skills increase, we then still have to make the case that the skills increase contributed to the sales increase, and unless we can show a link between the learning, the skills and the increase in sales, that is going to be difficult.  Here I think it comes down to the level of granularity we apply to the data we are collecting.  If we can show that members of the sales department accessed a range of informal learning resources all related to closing sales, that their skill at closing sales went up (by whatever means of assessment we are using), and that the overall sales figures for that sales department then increased, then I think we might have a strong enough case to suggest that the informal learning the staff did had a correlative effect on the sales figures.  For me, however, this is still quite a big question mark.  It is hard enough to do real, robust return on investment calculations with traditional formal learning; with the complexities surrounding informal learning it is much more difficult.  Does this mean it is impossible?  No, it simply means that organisations need to really figure out what they are looking for in terms of success and what kinds of metrics and measurements will provide them with evidence of that success.  If an increase in sales is an indicator of success and sales increase, there has been a success; the problem comes with trying to determine what caused it.
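To illustrate the granularity point, here is a toy sketch comparing skill gains and sales changes team by team rather than organisation-wide.  The figures are invented, and a real analysis would need far more data points before any relationship could be taken seriously.

from statistics import correlation  # available in Python 3.10+

teams = {
    # team: (average skill gain %, sales change %)
    "sales_east": (8.0, 6.5),
    "sales_west": (1.0, 0.5),
    "sales_north": (5.5, 4.0),
}

skill_gains = [gain for gain, _ in teams.values()]
sales_changes = [sales for _, sales in teams.values()]

# a value near 1.0 suggests the two move together; it still says nothing about cause
print(correlation(skill_gains, sales_changes))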

One approach which might allow us to home in on what the catalyst for the changes, and therefore the success, actually was would be to apply a process such as Brinkerhoff’s ‘success case method’ for evaluating training.  If we formally ask those staff who met the criteria for success what they did, or what they think was the reason they were successful, and we formally ask those who were not as successful what they did and what they think was the reason, we can then compare the sets of answers.  This sort of process may give us some solid insights into whether there were specific things which were the root cause of the success.  The successful staff may tell us that they think the formal training was responsible for about 50% of the success, because it gave them the skills they needed, but that the other 50% came from the fact that their manager was supportive of them utilising the new skills they had learnt.  On the other hand, staff might tell us that one member of the team found a really good YouTube video that had a lot of really helpful ideas and skills in it and shared it with the entire group, and the successful staff were the ones who watched and applied the information in the video.  This sort of information would give us data from which we could begin to look at what the significant drivers of the successes were.
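In that spirit, a minimal sketch of the comparison step: tally the factors each group credits and look for the ones that separate them.  The response categories are invented for illustration.

from collections import Counter

successful = ["formal training", "manager support", "shared YouTube video",
              "formal training", "shared YouTube video"]
less_successful = ["no time to practise", "manager support", "no time to practise"]

print(Counter(successful))       # factors the successful group credits
print(Counter(less_successful))  # factors the less successful group cites
# Factors common in one list but rare in the other are candidate drivers of success.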

 

 

The competency connection

 

The other part of the puzzle around the evaluation of informal learning, for me, is the link to formal qualifications and measurements of competency.  Now, for non-Australian readers and readers not familiar with the Australian competency-based vocational education and training system, some of this might seem a bit foreign to start with, but essentially it is all part of the same puzzle.  One thing that I think it is necessary to be clear on here is that I am talking about formally recognised types of qualifications.  There is a lot of talk about things like badges (peer-reviewed or otherwise) and other types of ‘endorsements’ of people’s skills; these are not what I am talking about here, and from a talent management, recruitment and learning perspective I see little value in badges and related concepts.  What I am talking about are formally recognised kinds of qualification or certification where there is a robust, standardised and formalised assessment attached to the awarding of the qualification itself.

One of the values I see coming from informal learning is the ability to use the information collected about the kind and type of learning undertaken, and the results this learning has produced, as evidence that staff members have met the requirements for certain formal qualifications.

Take for example the following;

A staff member who is in a retail role and has a desire to move into store management begins to take some online courses/MOOCs, both in their own time and, with the support of their manager, at times when they are able to during their working hours.  The manager also allows them to undertake some stretch tasks around stock management and ordering, and financial management.  The staff member also becomes actively involved in a number of online and face-to-face discussion groups with other retail managers and staff around increasing store sales, better stock management, and staffing and HR issues in the retail environment.  As a result of these activities the staff member’s own personal sales increase, they receive numerous positive compliments from staff, and the manager is comfortable letting them handle some of the stock ordering and closing off of the store financials at the end of the day.  The manager and the staff member then decide that, as a result, the staff member should look at undertaking a formal qualification around retail or frontline management, and they settle on the Certificate IV in Frontline Management (an Australian Vocational Education qualification).  If good records of the learning activities and achievements have been kept by both the manager and the staff member, they may find that significant parts of the qualification’s requirements, if not all of them (depending on the length of time and the amount of experience the staff member has), have already been met, and that the staff member may be able to undergo a process, in Australia referred to as recognition of prior learning, to show evidence of their competency and be awarded the qualification by the appropriate authority.

This, for me, is one of the great values that lies untapped in the informal learning process: taking learning that may already have quite a significant outcome for both the business and the individual in terms of quality of work, productivity, effectiveness and so on, and adding to it the additional value of giving the staff member access to formal qualification outcomes.  This will be of particular advantage where the staff member in question does not have formal qualifications, either in general or in the particular area they are working in.  It is a definite value-added outcome for the concept of informal learning and one that I think really needs to be explored further.

 

 

Conclusions and final thoughts

 

Informal learning is an incredibly valuable tool for both organisations and individuals.  It provides individuals with the learning that they need, when they need it, in environments which may be very conducive to their learning.  It provides organisations with an avenue to reinforce and build on the skills and knowledge that staff obtain through formal training activities, and to provide them with additional skills through alternative methods of learning.  We need, however, to understand the value, both in terms of what that value is and what it means for organisations.  Anecdotal rules of thumb about the value of informal learning and what it provides are not useful tools, particularly when considering investment strategies around Learning and Development in organisations.  We need to treat informal learning like any other part of the learning business and make sure that we know what our success criteria are, what the real value is, and what return on investment we are getting from informal learning.  We also need to embrace the concept that informal learning can be a pathway for staff to formal qualification outcomes, and that this can only enhance the value that is gained, both individually and by the organisation, from its investment in informal learning.

 
