Global practices in regulating and implementing generative artificial intelligence in higher education
This article analyzes international recommendations on the application of generative artificial intelligence (AI) in higher education over the past five years. It identifies key trends, ethical challenges, and strategies for integrating AI technologies into academic practice. The study focuses on the tension between AI’s innovative potential (e.g., personalized learning and the automation of routine tasks) and the associated risks (e.g., academic dishonesty and digital inequality). Regulatory initiatives, such as the EU AI Act, China’s AI standards, and developers’ ethical declarations, are examined alongside successful implementation practices, including MIT’s adaptive learning platforms and SberUniversity’s AI-driven digital assistants. The key findings emphasize the need to balance technological progress with ethical norms, including mandatory labeling of AI-generated content and the promotion of AI literacy, as well as the importance of global standards to overcome legal fragmentation and bridge the digital divide. For universities, the study recommends phased AI integration, investment in infrastructure, and staff training programs. The study contributes to shaping strategies for adapting higher education to the era of generative AI and highlights the role of universities as drivers of responsible technology adoption. The findings are relevant for university administrators, policymakers, and EdTech developers.