The Faculty considers compliance with the rules set out in the Dean's directive to be not only a matter of control, but also a social and cultural issue. Maintaining the quality of education and the value of degrees is a shared responsibility of the university community, based on integrity. The following principles provide guidance in this regard:
(1) Responsibility and academic integrity: All members of the Faculty bear sole ethical, legal and professional responsibility for the content of the intellectual products they publish, regardless of the tools used in their production. Since AI tools possess neither understanding, intent nor moral accountability, it is the author's responsibility to verify the factual accuracy of generated content, statements and references. The communication of false information, the uncritical acceptance of distortions, or the unchecked use of sources (whether the result of intentional or negligent use of the technology) constitutes a violation of academic integrity.
(2) Decision-making responsibility and intellectual work: The use of generative AI must not lead to the abandonment of human decision-making, critical thinking and intellectual autonomy. The purpose of these tools is to expand the learning, teaching and research capacities of the Faculty's members, not to replace human intellectual effort and cognitive processes. In all cases, the user must retain control over the work process and the final result.
(3) Informed use and competence: The professional and responsible use of AI tools requires basic AI literacy. It is the shared responsibility of the Faculty's members to inform themselves about the operating principles, limitations and possible biases of the technologies they use, thereby ensuring the appropriate, safe and ethical use of the tools. To this end, the Faculty provides additional support and development opportunities and encourages knowledge sharing and the collection of good practices.
(4) Human-centredness: The integration of AI may only support, never replace, human interactions within the university community. Technology cannot substitute for the professional discourse between teachers and students or among colleagues, for personal mentoring, or for community knowledge sharing, which form the basis of university life.
(5) Cultural and linguistic sensitivity: It is the shared task and responsibility of the Faculty's members to pay critical attention to filtering out cultural, linguistic and other biases when using AI tools. Users must check that AI output does not contain unintended biases incompatible with the educational or research objectives of the task or with the cultural background of the target group. The Faculty supports the culturally conscious use of technology, built on local knowledge bases and taking into account the specificities of the international educational environment and cultural diversity.
(6) Environmental sustainability: The Faculty is committed to responsible resource management and environmental awareness. Given the significant ecological footprint of the technology, the regulations encourage Faculty members to use AI purposefully and sparingly, weighing in each case whether an energy-intensive model is genuinely necessary, particularly for trivial tasks.
(7) Data protection and research ethics: It is prohibited to upload personal data, sensitive research data or participant responses to public AI tools that do not have a data processing agreement with the University. Exceptions to this rule may only be made if the research participants have given their express, informed consent, or if the data has been completely anonymised and is processed in a closed system that guarantees, either technologically or contractually, that the input data will not be used by the service provider for model training.