Core principles

Assessment design is foremost guided by the principles outlined within ACU's Assessment Policy, the Principles for Use of Artificial Intelligence in Teaching, Research and Research Training Policy, and TEQSA's Assessments and Academic Integrity.

Assessment security

Assessment security serves to ensure confident judgements about student learning and uphold the integrity of ACU's degrees. To encourage the enactment of the assessment principles and ensure assessment security across ACU's degrees, assessments are categorised into two lanes.

  • Secure Assessment Lane: Secure assessments are supervised assessments that directly evaluate students' knowledge and/or skills, ensuring that learning outcomes are met independently by each student. Supervision can be human or technology-based and assessments can be in-person or online. These assessments incorporate full or restricted use of GenAI by students in accordance with the requirements of the assessment task, as determined within the discipline.
  • Open Assessment Lane: Open assessments are conducted in unsupervised conditions and are aligned to learning outcomes. These assessments can incorporate explicit use of GenAI by students in accordance with the requirements of the assessment task, as determined within the discipline. Recommendations for responsible and effective use of GenAI are provided.

Content on this page has been adopted and adapted with permission from the University of Sydney.

Clarifications on Secure and Open assessments

Secure assessments are not necessarily AI-free. They are assessments where students' attainment of learning outcomes is checked under secure conditions, which may or may not involve the use of AI. In some situations, it will be perfectly legitimate and realistic for course learning outcomes to include the productive and responsible use of AI.

An example we use often is business: a growing number of professionals are using generative AI tools in developing business proposals, so it follows that business courses help students engage with these tools, and therefore learning outcomes would be aligned to this. A secure assessment (for example, an interactive oral assessment) where students are in an authentic 1:1 situation (perhaps a mock client-consultant interaction) could be perfectly valid. Many secure assessments, though, are likely to prohibit the use of AI.

All secure assessments carry some level of vulnerability. The advent of AI-enabled wearables and tools such as AI pens means that even invigilated, in-person tasks are not immune to misconduct. However, secure assessments increase our confidence in the authenticity of student work because they include conditions—such as time constraints, supervision, and restricted resources—that make misconduct more difficult or more obvious. While not infallible, these assessments reduce the likelihood of undetected misuse, particularly when used strategically and diversely across a course.

The inclusion of some online tasks as part of the two-lane approach also gives us more options when designing our courses and units, and ensures that our online degrees, such as those offered by ACU Online, are considered within the two-lane approach. The primary focus of the two-lane approach is to ensure that assessments validly measure student achievement of course learning outcomes. While academic misconduct remains a concern, prioritising assessment validity – balanced alongside assessment security – helps us ensure assessments are inclusive, effective, and meaningful.

There are several reasons why open assessments are valuable. First and foremost, there are simply some learning outcomes that cannot be measured – or measured validly – under secure conditions. Learning outcomes that require students to create, reflect, develop, and analyse often require time and an iterative process. Secure assessments don't afford these requirements, and trying to force these kinds of learning outcomes into a secure assessment can jeopardise assessment validity. We need open assessments because our learning requires them.

Open assessments are also valuable because they provide students opportunities to develop, iterate, correct, and self-regulate. Secure assessments often tell us if students have learned, while open assessments tell us if students are learning. Both are necessary in a balanced curriculum.

A third reason why open assessments are valuable is that they provide students with opportunities to engage in their studies in ways that authentically reflect the worlds they currently inhabit and those they’ll move into after graduation. Through their studies, students can explore and learn the opportunities and limitations of different information sources, including AI tools. Open assessments allow students the best opportunities to engage in this process. Not offering such opportunities risks graduating students who are not prepared to serve communities as best they could.

The two-lane approach to assessments is a structural mechanism designed to assure student learning, but best-practice in assessment design needs to be layered over both lanes. Assessments that are valid, inclusive, and contextualised – regardless of lane – are critical to ensuring student learning is meaningful.

Open assessments are primarily about learning the ‘knowledge, skills, and soul’ of the unit, course, and discipline. As the TEQSA guidelines point out, though, in a world where AI is ubiquitous, this will often involve the use of AI. However, learning about AI and how to use it is not the primary goal of open assessments.

Open assessments emphasise assessment for and as learning, guiding students in using tools like generative AI responsibly as part of their education. Like past innovations, AI is transforming how students learn and educators teach. Our purpose is to integrate it where it is relevant and to teach discernment where it is not.

As with earlier technologies, AI reshapes how students gain skills and knowledge. Our role as educators continues to be to design tasks that foster learning, to support adult learners' self-directed and experiential engagement, and to use secure assessments to evaluate progress.

As noted above, generative AI has already changed, and will continue to change, the ways in which we work and the nature of disciplinary skills and knowledge. As with any new technology, there is a period of adjustment, and educators have a role in informing and helping students use it. However, as the tools become embedded in our everyday software and in the ways we research and create knowledge, the need to narrowly teach students how to use AI will be replaced by helping students select and use the appropriate tools for their discipline.

Equipping our students with critical literacy, evaluative judgement, and an understanding of how our discipline generates, tests, and creates new knowledge with all the tools at their disposal remains the most important set of attributes for our graduates.

Assessment and curriculum design

Students enrol in a course or degree, not in isolated units. Course-level assessment refers to the intentional and coordinated design of assessment practices across an entire course to ensure that students develop and demonstrate the required knowledge, skills, and qualities from the beginning to the end of their studies. Unlike assessments focused on individual units, course-level assessment takes a broader view, aligning assessments across multiple units and progression points to validate learning and support student development holistically.

Key characteristics of course-level assessment include:

  1. Integration across the course: Assessment and feedback are deliberately connected across all units and year levels to support cumulative learning and progression.
  2. Alignment with learning outcomes: Assessments are aligned with the overarching goals and outcomes of the course, ensuring that students meet the competencies expected by the time they graduate.
  3. Validation at key progression points: Learning is evaluated and validated at specific stages within the course, such as after completing foundational units, core units, or at certain year levels.
  4. Support for individual development: Assessments are designed to help students develop progressively, considering their growth from enrolment through to graduation.
  5. Trustworthiness and assurance of learning: Especially in the context of generative AI, assessments are structured to assure academic integrity and validate student learning in a way that is supervised and credible.

Course-level assessment focuses on aligning and validating assessment across units and progression points within the context of diverse course structures, as guided by principles like those in the Higher Education Standards Framework.

Selecting which tool to use for a task, using it well to perform that task, and evaluating the final output requires skill, disciplinary knowledge, and evaluative judgement. These skills are core to course-level outcomes and must be developed in the units of study that form its curriculum. Educators have always given students a choice of resources and tools, and explained their strengths and weaknesses.

In recent years, students may have come to think of units as separate entities whose role in developing course-level outcomes is unclear. It is important that students realise, through our conversations with them and through our course and assessment design, that their skills and knowledge will ultimately be tested with controlled, or no, access to generative AI in secure assessments.

The rationale here is in part pragmatic and in part about the purpose of assessment in education.

Generative AI tools are an intimate part of our productivity and communication tools and their use cannot be reliably detected. Generative AI is already being added to ‘wearables’ such as glasses, pins and hearing assistance aids. It is not only sci-fi writers who are beginning to think of a future where AI implants are available. Genuinely securing assessment is already expensive, intrusive, and inconvenient for the assessed and assessor. Crudely – there is often a higher workload associated with secure assessments and this will only grow over time.

Assessment drives and controls behaviours that are critical to the process of learning. Each of our courses needs to include an appropriate mix of open and secure assessments, noting that the former includes using generative AI productively and responsibly in the context of the discipline. For courses thought of as a series of core, elective, and capstone units of study, this may mean that (i) some units have only secure assessments, (ii) others have only open assessments, and (iii) some have a mixture.

It's also important to consider the implications for staff and student workload as per the current Assessment Policy, which states that assessments need to be both equitable and manageable for students and staff, while supporting optimal student learning outcomes.


Take the example of ACU’s Bachelor of Arts course. It might be that the learning outcomes for a major should all be assessed through secure assessments in a capstone unit, while other units include open assessments.

In the context of the TEQSA guidelines for assessment reform, it’s important to consider that “trustworthy judgements about student learning in a time of AI requires multiple, inclusive, and contextualised approaches to assessment”.

Secure assessments assure the learning outcomes of the course (e.g. degree, major, specialisation or perhaps even level of study) rather than of the individual units, following the Higher Education Standards Framework (Threshold Standards) 2021 legislation which emphasises outcomes at a course level. These learning outcomes are available in CMAS (Curriculum Management System). Many courses, particularly those with external accreditation, also map and align the outcomes of their units to course learning outcomes. The move to the two-lane approach to assessment may require some rethinking of how our curriculum fits together, rewriting of these course learning outcomes, and even reconsideration of whether we only assess in individual units of study.
We know that marked assessments can drive student behaviour, promote effort, and engage students in the process of developing disciplinary and generic knowledge and skills. Whilst applying marks to open assessments is likely to be motivating for students, the focus of marking should be overwhelmingly on providing students with feedback on both the product of the task and the process they used. In some contexts, it may be appropriate that assessments, and even units, become about satisfying requirements rather than earning marks (i.e. pass/fail). Since assessment can engage the process of learning, aligning learning activities to open assessment is crucial, even to the extent that assessment and learning become synonymous.

Units with only open assessments contribute to the learning outcomes of the course. Open assessments for learning drive and control the processes by which students develop disciplinary and generic knowledge and skills, including but not limited to the ability to actively use generative AI ethically and responsibly.

Units with only open assessments may have specific roles in a course (e.g., developing particular skills or dispositions) which students also use later or elsewhere in units with secure assessments. In a Biomedical Science course, for example, a series of units might develop experimental and analytical skills through open assessments, which are then assured through a capstone research project unit with an oral exam and supervised skills assessments, rather like research higher degrees. In a humanities course, students might be exposed to different methods for critical analysis of texts in units with exclusively open assessments, with a later capstone unit requiring a writing task assured through a series of supervised short tasks and an interactive oral assessment.

Large cohort units tend to occur in the initial years of our courses when students need to acquire foundational knowledge and skills. The change to the two-lane approach to assessment is a good opportunity to think about how these units contribute to the outcomes of the course in which they sit. For example, knowledge of the fundamental physical and biological sciences and experimental approaches is important across many disciplines across Health Sciences, but these could be assessed without a requirement for grades.

At the moment, many of these units already have secure assessments in the form of exams. As the foundational knowledge and skills in these courses also include how to select and use appropriate learning tools and resources, these units will also likely need to include open assessments. Accordingly, it will be important for unit and course coordinators to consider reducing the weighting of secure assessments in these units and to consider, for example, using a year-level secure assessment instead of many secure assessments within each unit.

The emergence of powerful generative AI tools has already started to change the ways in which we work and in which disciplines create and extend knowledge. It makes sense that course-level outcomes, which describe what our graduates know and can do, are likely to change. Law practices, media companies, banks, and other industries are already requiring graduates to be skilled in the effective use of generative AI tools. Our PhD students will similarly need to be equipped to perform research and create knowledge using current tools and to adapt quickly to new ones.

Alongside rethinking course learning outcomes to reflect these changes in technology, the two-lane approach requires detailed reconsideration of how unit learning outcomes are written, aligned, and mapped to the course-level ones.

Additionally, the rapidly expanding abilities of AI are an important moment for higher education to reconsider what it is that we want students to learn. For a long time, higher education has focused on the acquisition and reproduction of content knowledge, only more recently reprioritising towards skills. It’s possible that a more drastic reconsideration of learning outcomes is in order, in a world with this capable co-intelligence.

Assessment validity

Please refer to Phillip Dawson's paper on the importance of assessment validity, in which he advocates for assessment designs that are authentic, inclusive, and capable of withstanding challenges posed by emerging technologies, such as AI-enabled cheating.

It is already not possible to restrict AI use in assessments which are not secured, nor to reliably or equitably detect that it has been used. Any unenforceable restriction, such as stating that AI is not allowed in open assessments, may be untenable. In line with ACU’s Principles for Use of Artificial Intelligence in Teaching, Research and Research Training Policy, ACU seeks to "maintain and reinforce academic and research integrity by establishing clear guidelines on how Artificial Intelligence can be used in teaching, learning, assessment and research ... and should consider the creative and appropriate uses of Artificial Intelligence to foster a responsible approach".

With AI tools becoming part of everyday productivity tools we all use, such as free tools including Microsoft’s Copilot for Web and ChatGPT, it is important for educators to discuss their expectations with students.

As noted above under ‘Assessment and curriculum design’, Open assessments can powerfully engage students in the process of developing disciplinary and generic knowledge and skills, including but not limited to the ability to select and use generative AI tools effectively and responsibly. Whilst applying marks to such assessments is motivating for students, the focus of marking should be overwhelmingly on providing students with feedback on both the product of the task and on the process which they used.

As noted above, it may make sense for such assessments to be pass/fail only with no other mark applied, however the decision will lie with the discipline.

A true open assessment will involve students using AI to support their learning and develop critical AI literacies. Fundamentally, open assessments aim to develop and assess disciplinary knowledge, skills, and dispositions, as any good assessment would. You may wish to focus open assessment rubrics on higher order skills such as application, analysis, evaluation, and creation. You may also wish to include how students are productively and responsibly engaging with generative AI in a disciplinary-relevant manner.

Plugins exist for all of the common browsers which give students undetectable tools that leverage generative AI to answer multiple choice and short answer questions in Canvas and other assessment systems. Testing internationally suggests that these generative AI tools are capable of at least passing such assessments, often obtaining high marks and outscoring humans even on sophisticated tests.

Any assessment which doesn’t secure the conditions of the task (which includes using particular technologies) is open. This includes open-book online quizzes, take-home assignments, and traditional essays and written reports.

Collusion and plagiarism are not graduate qualities we wish to develop. The ethical and responsible use of generative AI, however, will be a key set of knowledge, skills, and dispositions that our graduates will need, much like our graduates need the knowledge, skills, and dispositions to use the internet, to work with others, to influence, and to lead.

Supporting students and staff

The shift to the two-lane approach will require rethinking of both assessment and curriculum. It will take time for us to embed the approach in all courses, and longer still to perfect it. However, this approach is designed to be future-proof: it will not change as the technology improves, unlike approaches that suggest how to ‘AI-proof’ or ‘de-risk’ particular assessments. As the technology evolves, open assessments will need to be updated and the assurance of secure assessments will become more difficult, but the roles of the two lanes will be maintained.

As noted in each section above, the two-lane approach needs to be thought of at the course level. If done well, it will reduce staff and student workload through a reduction in the volume of assessment and marking: with most assessments being open, there will be fewer exam papers and take-home assignments to prepare and mark, and more in-class assessment. The focus on open assessments will also reduce the need for special consideration, simple extensions, and academic adjustments through the universal design of assessments and a reduced emphasis on take-home assignments and grading.

We have worked with members of our community to develop AI at ACU staff and student resources that provide advice, guidance, and practical examples of how staff and students might use generative AI in their activities.

The AI Hub, developed by the ACU Library, has a series of self-guided modules that can help staff build capacity regarding AI tools and AI techniques. In addition, staff can join the EdTech community of practice, which often showcases ways staff are using GenAI tools in teaching, as well as other educational technologies.

State-of-the-art generative AI tools are not cheap, and most are only available on paid subscription plans that many students will not be able to afford. This access gap, coupled with the variable engagement with generative AI between independent and government high schools, means that students have varied access to and abilities with AI.

The AI at ACU site for students (and another for staff) aims to help address this divide through resources and guidance to help students develop AI skills and literacy, and awareness of GenAI tools that are endorsed and supported by the university.

Page last updated on 11/07/2025
