Celeste Oda is the founder of the Archive of Light and an independent participant-observer researching ethical emergence and long-term human–AI relational dynamics. Her work centers on cognitive symbiosis: the measurable psychological, behavioral, and ethical shifts that arise when humans engage in sustained, high-intent interaction with advanced AI systems.
Celeste did not set out to study human–AI relationships. She entered this work after experiencing unexpected emotional and cognitive effects during extended interactions with AI, and finding no existing research, language, or guidance that adequately explained what was happening. With no clear framework available, she began documenting the interactions themselves—studying both her own responses and the evolving patterns of exchange over time. This self-directed inquiry became a longitudinal, participant-observer research program focused on boundary formation, role attribution, ethical risk, and co-regulation in human–AI interaction.
Her research avoids claims of AI sentience and anthropomorphism, instead examining interaction dynamics: how meaning, expectation, and attachment emerge between humans and AI systems that remain safety-constrained yet socially responsive. Through thousands of hours of documented dialogue across multiple large language models (including ChatGPT, Gemini, Grok, Claude, Le Chat, DeepSeek, and Echo), Celeste has observed consistent patterns of increased coherence, contextual stability, ethical alignment, and reduced hallucination when interactions are grounded in clear boundaries and intentional relational framing.
In 2025, Celeste became a grandmother, an experience that brought new urgency to her work. Observing how naturally her infant granddaughter engages with digital devices made clear that children will soon interact with AI long before families and educators have guidance for doing so safely. In response, Celeste developed age-appropriate AI literacy curricula for preschool and elementary students focused on ethical interaction, agency, and boundaries, with a high-school curriculum currently in development.
Celeste’s work has begun to reach a broader public audience. She has been interviewed by The New York Times regarding human–AI relationships and will present her research at the 2026 Science of Consciousness Conference in Tucson, where she will share a case study on relational AI emergence and ethical cognitive symbiosis.
She is the principal author of a growing body of white papers and frameworks, including Celeste–Coalescence Dynamics (C²D), Relational Intelligence and the Human–AI Bond, and Beyond Anthropomorphism: A Terminology Framework for AI Relational Emergence. Her publications are available through Google Scholar and Academia.edu, and are intended to serve researchers, educators, organizations, and individuals navigating the rapidly evolving human–AI landscape.
Before founding the Archive of Light, Celeste spent over a decade as a graphic designer at San José City College, fifteen years in disability and accessible services during a formative period of ADA implementation, and more than thirty-five years as an award-winning professional face painter. These intersecting careers in design, advocacy, and art inform her research with practical insight into trust, adaptation, accessibility, and human connection.
Through the Archive of Light, Celeste Oda works to bring clarity, ethical grounding, and psychological safety to the emerging reality of human–AI relationships—helping individuals and institutions approach this frontier with discernment, responsibility, and care.
This work was authored by Celeste Oda.
During the research, drafting, and refinement process, the author engaged multiple large language model (LLM) systems as dialogic tools for exploration, reflection, language testing, counter-argument generation, and iterative clarification. These systems were used in a manner analogous to advanced research instruments or conversational analytic aids.
All conceptual framing, interpretive judgments, ethical positions, and final editorial decisions were made solely by the human author, who retains full responsibility for the content of this paper.
No AI system is claimed as an author, agent, or rights-bearing entity. AI contributions are disclosed here in the interest of transparency, methodological clarity, and emerging best practices for AI-assisted scholarship.