The rapid diffusion of Artificial Intelligence (AI), and particularly generative and predictive systems, into the tourism sector has intensified long-standing debates regarding ethics, sustainability, trust, and governance in digital transformation. While AI-driven applications promise increased efficiency, personalisation, and innovation, they simultaneously introduce complex socio-technical risks related to data exploitation, algorithmic opacity, environmental impact, and institutional accountability. This paper argues that the future viability of AI-enabled tourism depends not on technological advancement alone, but on the systematic integration of human-centred ethics, sustainable design principles, and robust regulatory readiness.

Positioned at the intersection of digital ethics, sustainable tourism studies, and technology governance, the paper advances a conceptual framework for understanding AI not merely as a functional tool, but as a normative force reshaping power relations, responsibility structures, and value systems within tourism ecosystems. It challenges the dominant techno-optimistic narratives that frame AI as an inherently progressive driver of sustainable tourism and instead problematises the conditions under which AI may either support or undermine long-term social and environmental goals.

Central to the paper is the notion of human-centred AI, understood as a design and governance paradigm that prioritises human agency, dignity, and well-being throughout the lifecycle of AI systems. In the tourism context, this entails recognising travellers, workers, and local communities not as passive data sources, but as moral stakeholders whose interests must be actively safeguarded.
The paper critically examines how algorithmic decision-making—ranging from automated customer profiling and dynamic pricing to content generation and sentiment analysis—can subtly reshape tourist behaviour, labour conditions, and destination sustainability, often without explicit consent or transparency.
A key analytical contribution of the paper lies in its exploration of trust as a mediating concept between AI adoption and sustainable tourism outcomes. Trust is conceptualised not as a subjective attitude alone, but as an emergent property of institutional practices, governance mechanisms, and socio-technical design choices. The paper argues that trust in AI-enabled tourism systems is fragile and contingent, shaped by perceptions of fairness, explainability, accountability, and ethical intent. Where such conditions are absent, AI risks fuelling scepticism and resistance and inflicting reputational harm, ultimately undermining both market performance and social legitimacy.

The paper further situates AI governance within the evolving European and global regulatory landscape, with particular emphasis on data protection regimes and emerging AI-specific legislation. Rather than treating regulation as an external constraint on innovation, the analysis reframes regulatory readiness as a strategic and ethical capability. From this perspective, compliance with data protection, transparency, and accountability requirements is not merely a legal obligation, but a foundational element of responsible digital transformation. The paper argues that tourism organisations that embed regulatory awareness and ethical foresight into their AI strategies are better positioned to achieve resilient and socially sustainable growth.

Sustainability constitutes a second major axis of analysis. The paper critically evaluates the assumption that AI adoption inherently promotes sustainable tourism outcomes, such as reduced environmental impact or improved resource efficiency. While AI can indeed support energy optimisation, demand forecasting, and impact measurement, the paper highlights the often-overlooked environmental and systemic costs associated with data-intensive infrastructures, including increased energy consumption, technological lock-in, and rebound effects.
This dual perspective underscores the importance of moving beyond symbolic commitments to sustainability toward evidence-based, measurable, and transparent practices.

Methodologically, the paper adopts a conceptual and normative analytical approach, synthesising insights from ethical theory, sustainability research, and technology governance. Rather than focusing on isolated empirical cases, it constructs a multi-level analytical framework that maps the relationships between AI system design, organisational governance, regulatory environments, and sustainability outcomes. This framework is intended to support future empirical studies by providing a coherent structure for examining ethical risks, governance gaps, and trust dynamics across diverse tourism contexts.

The academic contribution of the paper is threefold. First, it advances the theoretical integration of human-centred AI principles into tourism research, a domain where ethical considerations have often been secondary to operational efficiency and market performance. Second, it enriches the sustainability discourse by critically interrogating the environmental and social implications of AI-driven tourism, thereby challenging reductive narratives of “smart” or “green” digitalisation. Third, it contributes to the emerging literature on AI governance by demonstrating how regulatory readiness and ethical design can function as enabling conditions for long-term innovation rather than as barriers.

For the broader academic community, the paper offers a timely and interdisciplinary perspective on one of the most pressing challenges of contemporary digital transformation: how to align technological capability with human values and planetary limits. It speaks to scholars in tourism studies, digital ethics, information systems, and public policy by providing a shared conceptual vocabulary for analysing AI-enabled services within complex socio-economic systems.
Moreover, it invites critical reflection on the role of academic research itself in shaping responsible innovation practices, urging scholars to move beyond descriptive analysis toward normative engagement.

In practical terms, the paper’s insights are relevant for policymakers, destination managers, and tourism organisations seeking to navigate the uncertainties of AI adoption responsibly. By articulating the conditions under which AI can support trust, legitimacy, and sustainability, the paper contributes to the development of governance models that balance innovation with accountability. These models emphasise participatory design, ethical impact assessment, and continuous oversight as integral components of AI-enabled tourism systems.

In conclusion, this paper contends that the promise of AI in tourism can only be realised through a deliberate commitment to human-centred governance and sustainable practice. In an era characterised by generative systems and algorithmic decision-making, tourism stands at a critical juncture where choices made today will shape not only competitive advantage, but also social trust and environmental integrity. By foregrounding ethics, trust, and regulatory readiness, this study provides a theoretically rigorous and socially relevant contribution to the international discourse on sustainable digital transformation.