In accordance with the instructions of the rector and dean, the syllabus of every course offered by the Faculty must include a separate section entitled "Rules for the use of artificial intelligence."
This section must specify:
- the AI usage requirements formulated on the basis of the framework below,
- the documentation and disclosure obligations related to the framework, and
- a link to the directives of the rector and dean.
Priority area: Rethinking the assessment system for courses that rely heavily on unsupervised work
A complete prohibition on the use of artificial intelligence can be enforced with sufficient certainty only in controlled environments (e.g. closed-book exams, oral exams, supervised computer lab tests).
In unsupervised environments, compliance with the ban is difficult to verify, so the ban alone cannot be considered a sufficient quality assurance guarantee for such tasks.
If a significant portion of a course's final grade (as a guiding principle, approximately 30% or more) is based on assignments completed in an unsupervised, asynchronous manner, and the instructor wishes to apply the "Prohibited" (Red) label to them, the assessment system must be reviewed in consultation with the programme coordinator when planning the semester.
Review options:
The instructor must consider modifying the assessment to ensure its validity:
Change of environment: Moving the assessment to a controlled space (e.g. oral exam, written exam in class).
Methodological adaptation: Modifying the task so that the use of AI is permitted (Yellow or Green label), supplemented by the documentation and process monitoring requirements detailed in these regulations.
Exception procedure and approval:
If the instructor considers it professionally justified to retain an unsupervised assessment under the "Prohibited" (Red) label, and the modifications specified above cannot be implemented, the course is subject to approval. The programme coordinator will inform instructors about the process.
Principles of transforming assessment
It is the professional responsibility of the instructor to adapt the course requirements so that the grade credibly reflects the student's actual knowledge and the achievement of the course learning outcomes. In view of the possibilities offered by generative AI, this requires particular attention for tasks performed in an unsupervised, asynchronous environment (e.g. online tests completed at home, papers submitted remotely).
Process-based assessment: In order to preserve the integrity and quality of learning, it is recommended that instructors transform their assessment methods to take evidence of the learning process into account, thereby reducing the exclusive weight of the final product. We recommend that the prescribed documentation requirement not be treated as a mere administrative appendix but form part of the student's grade (e.g. through weighted assessment), thereby recognising the work the student invests in the transparent and reflective use of tools.
Further regulations regarding instructors
Transparency: In order to create transparency and mutual trust, instructors are required to indicate in the course material or during the lecture if they have relied significantly on generative AI in creating the educational content (e.g. slides, notes, exam questions).
Data protection: Instructors may not upload students' intellectual property (assignments, exam papers) or personal data to public AI tools that do not have a data processing agreement with the University.
Tool fairness: In order to ensure equal opportunities, it is strictly prohibited to make the use of AI tools that are only available through a paid subscription mandatory. All mandatory tasks must be achievable at a high level using free versions or licensed tools provided by the University.
Inclusive approach: When planning assessments and assignments, instructors should be mindful of student diversity (e.g. linguistic background, disability, socio-economic status). Making the use of AI mandatory must not disadvantage students with special needs.
The use of AI as an assistive technology for students with special needs is expressly supported and encouraged, provided that it does not compromise the fundamental learning outcomes of the course.
Use of AI in assessment and feedback: Generative AI can complement, but not replace, the pedagogical relationship between instructor and student. Although AI may be used to generate preliminary, formative feedback, summative assessment and decision-making (grades), as well as final, personalised feedback, must always be provided by the instructor.
High-risk application: Annex III of the European Union's Artificial Intelligence Act classifies as high-risk those AI systems used to evaluate the learning outcomes of natural persons (including steering the learning process), to assess the appropriate level of education a person will receive, and to monitor and detect prohibited student behaviour during tests and examinations. Accordingly, the Faculty strictly prohibits the use of such systems without substantive human review to determine student grades or to make decisions that have a legal effect on students' academic progress (e.g. access, classification, advancement).
Right to change and legal certainty: The AI usage rules specified for the course at the beginning of the semester may be modified during the semester in the event of changes in the technological environment. However, any restrictions imposed during the semester shall not apply retroactively to work that has already been started or submitted. Students who have acted in good faith based on the previous rules shall not be disadvantaged.
Rules of procedure in cases of misuse
Standards of proof: Alerts from AI detection software (e.g. Turnitin AI detection, GPTZero) cannot serve on their own as evidence of student misuse. This prohibition reflects the current state of the research: the reliability of these tools is limited, they frequently produce false positives, and they have been shown to exhibit systematic bias against the writing of non-native speakers.
Conclusive evidence: In all cases, a finding of misuse must be based on conclusive evidence from multiple sources. This includes, in particular:
The absence or manipulation of mandatory declarations and documentation;
Lack of logical or substantive connection between documented prompts and the final result;
Fundamental, irreconcilable inconsistencies between the student's performance during oral validation and the content or language of the submitted written work.
Scalability and spot checks: In order to protect the integrity of assessment, instructors are entitled to conduct oral spot checks and to audit the process documentation of a random sample of submitted work. Such spot checks and audits are standard elements of pedagogical assessment and quality assurance, so we recommend incorporating them into the course as part of the learning process. Their purpose is to verify that the student possesses the knowledge and competencies reflected in the submitted work.
Pedagogical feedback and professional correction: As part of their autonomy and commitment to quality education, instructors are entitled to provide professional feedback without resorting to formal verification procedures if the style, quality or coherence of the submitted work raises suspicions of inappropriate use of AI tools or inadequate student performance. In such cases, while maintaining a partnership with the student, the instructor may initiate:
discussing the professional shortcomings of the work (e.g. excessive generalisation, lack of sources, stylistic inconsistencies);
revising or supplementing the assignment in order to reinforce the student's own voice and ideas;
drawing the student's attention to the importance of conscious and ethical use of tools.
The aim of this pedagogical intervention is not to punish, but to correct the learning process and develop academic culture.
Encouraging transparency: In order to support the learning process, the Faculty distinguishes between pedagogical errors and ethical misconduct.
Pedagogical error: If a student violates the rules for using AI in a course (e.g. relies excessively on the tool, or the work does not meet the expected level of independence) but honestly indicates this in their declaration and documentation, the action is not considered an ethical violation or academic fraud. In such a case, the instructor evaluates the work according to professional criteria and, given the lack of independent performance, may grade it as unsatisfactory.
Ethical misconduct: If the student uses a prohibited tool and conceals this in the declaration, or makes a false declaration, their actions constitute a violation of academic integrity and will result in disciplinary proceedings in accordance with the rector's instructions.
In the event of a violation of these regulations, the disciplinary and ethical procedures specified in the rector's instructions shall apply.