VET REGULATION – DRAFT RTO STANDARDS
My response to the draft revised Standards for Registered Training Organisations (RTO Standards) is below.
Response to Draft Standards for RTOs
29 January 2023
This submission addresses what I perceive to be the critical issues with the language of the draft Standards. The survey required that comments be submitted by section, so this response addresses two sections: 1.2 Assessment and 5. Governance.
1.2 Assessment
Currently, the glossary definition of validation, and the intent of that definition, do not carry through to the Standards unambiguously. The result is a lack of clarity and duplication.
Glossary definition
Validation is the review of assessment systems designed to:
• ensure that the assessment tools are consistent with the training product and the requirements of these Standards AND
• ensure consistent outcomes are achieved through assessment practices and judgements.
This definition of ‘validation’ requires that it:
(a) happens at two points in the workflow of an assessment system (before the assessment tools are used and after assessment has been conducted), and
(b) scrutinises two aspects of assessment (assessment tools and assessment outcomes).
Comments on the current wording
1.2.1. The assessment system and practices are consistent with the training product.
There is a strong possibility that an RTO will automatically be non-compliant with this Standard because it is not compliant with one or more of the other Standards in the Assessment section. Is there a need for this duplication?
1.2.2. Assessment is conducted in a way that is consistent with the following principles of assessment (etc)
Principles of Assessment apply to assessment tools, that is, to the products and materials that assessors will use. They must be valid, reliable, flexible and fair in their design, just as set out in the draft.
The issue here is that assessment tools are NOT ‘conducted’. They are ‘designed and developed’. Assessment tools must be consistent with the principles of assessment BEFORE they go into use.
1.2.3. Assessors make individual assessment judgements that are justified based on the following rules
of evidence (etc)
Yes, the rules of evidence apply to the individual assessor’s judgement. They are a check on the judgement and integrity of the assessor. The language of points (a) to (c), ‘assuring the assessor’, skews this meaning. Rules of evidence are not about ‘assuring the assessor’. They ask:
(a) did the assessor collect enough appropriate evidence?
(b) is the evidence the candidate’s work?
(c) is the evidence current?
Practical questions are answered here by a review of the completed assessment, or at audit:
• Did the candidate complete all the tasks or are there blanks?
• Did the assessor judge/mark the work?
• Did the assessor collect all tasks or are some missing?
• Did the assessor accept ‘old’ or ‘irrelevant’ evidence?
• Did the assessor check it was the candidate’s work?
1.2.4 Pre-validation of assessment tools occurs before use to ensure they are fit-for-purpose.
This standard requires that an RTO validate the assessment tools before use. Excellent. It would be much clearer if the language matched the glossary definition ‘to ensure they are consistent with the training product.’
Pre-validation – This is as real a construct as ‘pre-death’.
Fit-for-purpose – What does this generalisation mean? Who decides on the actual purpose and what is being validated? What evidence must be available to show compliance?
Requirements of these Standards – This is a generalisation that is likely to cause confusion as it is open to interpretation. If it is included, then there is a danger that an RTO will automatically be non-compliant with 1.2.4 because it is not compliant with 1.2.5 or 1.2.6.
1.2.5. Validation of assessment tools and practices is undertaken for each training product on scope to ensure assessment tools and practices are fit-for-purpose and consistent with the principles of assessment and rules of evidence.
Currently, this is a mouthful. Of course, the assessment tools and practices of every training product on scope must be validated, so just say so.
1.2.7. Validation outcomes are documented, used to inform revisions to the assessment system, and are not solely determined by those who have delivered or designed the training or assessment.
Design comes before delivery; therefore, the phrase should be ‘designed or delivered’.
Possible order and wording
• It would be useful if the workflow of assessment was reflected in this section.
• Summative assessment is not a learning situation. Once the assessment commences, the person becomes the candidate.
1.2.1 Validation of assessment tools occurs before use to ensure they are consistent with the training product and the principles of assessment.
1.2.2. Assessment tools are consistent with the following principles of assessment: (etc)
1.2.3. Assessors make individual assessment judgements that are justified based on the following rules of evidence:
(a) sufficiency – enough appropriate evidence is collected to enable a judgement of competency
(b) authenticity – the assessor is assured that the evidence presented is the candidate’s work
(c) currency – evidence submitted reflects the current skills, knowledge, and competencies of the candidate.
1.2.4 Assessment tools and practices of every training product on scope must be validated.
5. Governance
5.1.1. Management is accountable for leading a culture:
a) of quality training and assessment and continuous improvement,
b) of integrity, transparency, and fairness,
c) of inclusion, safety and wellbeing for staff and learners, and
d) free from discrimination and harassment.
Although the intent is laudable, the language here is problematic. It is difficult to see how it can be audited effectively. It is also likely to be a legal minefield.
Culture is a massive word that encapsulates the attitudes and behaviours of the people in the organisation. Can a ‘culture’ be non-compliant? Culture is also dynamic. If there were evidence of a transgression and the RTO addressed that transgression, would the culture become acceptable at that point?
(a) The whole document is about ensuring the quality of training and assessment. Therefore, non-compliance in another part of the Standards would mean automatic non-compliance here.
(b) ‘Integrity, transparency, and fairness’ are subjective concepts, so there is plenty of room for interpretation. It is difficult to see how this could be audited effectively.
(c) ‘Inclusion, safety and wellbeing for staff and learners’ and (d) ‘free from discrimination and harassment’ are such broad statements that they are open to interpretation. Does one complaint against the RTO, by anyone, render it non-compliant?
5.1.5. Systems are in place for ensuring high managerial agents and executive officers are fit and proper persons to oversee the operations of the RTO.
In many cases, the people responsible for operational compliance with the Standards are employees of the organisation. There is a power differential here. It is a bit of a reach to expect employees of an RTO to have access to, or monitor, the backgrounds of their employers.
If this sentence remains, then ‘processes’ is a more accurate word. This issue does not need a ‘system’, which is defined in the Glossary of these draft Standards (under ‘Assessment system’) as ‘a coordinated set of documented policies and procedures’.
Possible order and wording
5.1.1. Management is accountable for ensuring continuous compliance with the requirements of these Standards.
5.1.2 Management is accountable for ensuring organisational practices are inclusive and safe, and take into consideration the wellbeing of staff and learners.
5.1.3 Management is accountable for ensuring that organisational practices are free from discrimination and harassment of staff and learners.
5.1.4 Management is accountable for ensuring the RTO meets regulatory requirements related to high managerial agents and executive officers of the organisation.