Exploring AI agents and narrative in interactive worlds
Dynamically generating narrative & visuals
Early Prototype / Proof-of-Concept

This prototype explores how generative AI can make visual novels truly dynamic and responsive. Instead of pre-written scripts and static images, the engine aims to generate story beats, character dialogue, and scene visuals on the fly in response to player interaction.
Currently, it integrates a language model (DeepSeek V3 via the SambaNova API) for narrative and dialogue, and an image-generation model (Gemini via its API) for background visuals, reacting in real time to user input. This is an early-stage experiment focused on core mechanics, demonstrating a potential future for interactive storytelling; more features and capabilities are planned for later versions.
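As a rough sketch of the narrative side, the engine's core step can be thought of as: assemble a chat prompt from the current story state plus the player's latest input, then send it to a chat-completions endpoint. The endpoint URL, model id, and state fields below are illustrative assumptions, not the engine's actual code.

```javascript
// Pure helper: turn story state + player input into a chat message list.
function buildNarrativePrompt(state, playerInput) {
  return [
    {
      role: "system",
      content:
        "You are the narrator of a visual novel. Continue the story in 2-3 " +
        "sentences, then offer the player a choice.",
    },
    { role: "assistant", content: state.lastBeat }, // previous story beat
    { role: "user", content: playerInput },         // what the player just did
  ];
}

// Network call, kept separate from prompt building so it can be swapped
// per provider. URL and model id are placeholders.
async function generateBeat(state, playerInput, apiKey) {
  const res = await fetch("https://api.example.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "some-chat-model", // placeholder model id
      messages: buildNarrativePrompt(state, playerInput),
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Separating prompt construction from the network call keeps the prompt logic testable and makes it easy to point the same engine at a different provider.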
Platform: Browser (HTML/CSS/JS Proof-of-Concept)
AI Integration: Connects to external LLM and Image Generation APIs.
Source: Open Source (MIT License) - View on GitHub
A browser prototype for AI agent simulation
Prototype / Experiment

AI World is an early browser-based prototype exploring AI-driven agents within interactive environments. It demonstrates how AI models can perceive a simple world, communicate spatially, and follow basic instructions.
This experiment lays the groundwork for more complex AI interactions and future AI-powered game experiences. It supports the "Road to Free Open AGI" mission by openly sharing the concepts and code behind AI agent simulation.
While built with web technologies (HTML, CSS, JS), the core ideas—AI perception, action, and communication in a shared space—are transferable to full game engines where the ultimate vision for immersive AI games lies.
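To make the perception idea concrete, here is a minimal sketch assuming a simple tile-based world: an agent "perceives" by collecting nearby entities into a plain-text observation that a language model can then reason about. The function and data shapes are illustrative, not taken from the AI World codebase.

```javascript
// world: a list of entities with grid positions; agent: one of those entities.
// Returns a natural-language observation of everything within `radius` tiles.
function perceive(world, agent, radius) {
  const seen = world.entities
    .filter((e) => e.id !== agent.id)
    .filter(
      (e) =>
        Math.abs(e.x - agent.x) <= radius &&
        Math.abs(e.y - agent.y) <= radius
    )
    .map((e) => `${e.name} at (${e.x}, ${e.y})`);
  return seen.length
    ? `You are at (${agent.x}, ${agent.y}). You can see: ${seen.join("; ")}.`
    : `You are at (${agent.x}, ${agent.y}). Nothing is nearby.`;
}
```

For example, an agent at (2, 2) with radius 2 would see a tree at (3, 2) but not a rock at (9, 9). Because the observation is plain text, the same loop works whether the "brain" behind it is a small local model or a hosted LLM.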
Platform: Browser (HTML/CSS/JS)
AI: Connects to various language models via a shared API interface.
Source: Open Source (MIT License)
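A "shared API interface" like the one mentioned above can be sketched as a single `chat()` entry point that maps a provider name to its endpoint and request shape. The provider entries and names below are placeholders assuming OpenAI-compatible chat-completions endpoints, not the prototype's actual configuration.

```javascript
// Placeholder provider registry; real entries would hold actual endpoints.
const PROVIDERS = {
  providerA: { url: "https://api.provider-a.example/v1/chat/completions", model: "model-a" },
  providerB: { url: "https://api.provider-b.example/v1/chat/completions", model: "model-b" },
};

// Pure helper: build the HTTP request for a given provider, so the rest of
// the code never needs provider-specific branches.
function buildRequest(provider, messages, apiKey) {
  const cfg = PROVIDERS[provider];
  if (!cfg) throw new Error(`Unknown provider: ${provider}`);
  return {
    url: cfg.url,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model: cfg.model, messages }),
    },
  };
}

// The one entry point the rest of the app calls, regardless of provider.
async function chat(provider, messages, apiKey) {
  const { url, options } = buildRequest(provider, messages, apiKey);
  const res = await fetch(url, options);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Swapping models then means adding one registry entry rather than touching agent logic, which is what makes "connects to various language models" cheap to support.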
Future games will aim for deeper immersion by leveraging dedicated game engines (such as Godot, Unity, or Unreal). These experiences will feature more complex AI behaviors, richer environments, and more engaging player interactions, building on lessons learned from prototypes like AI World and the AI VN Engine.
Further developing AI-driven narrative systems that can maintain long-term coherence, character consistency, and adapt plotlines based on player choices, moving beyond simple reactive dialogue towards truly emergent storytelling.
Developing more sophisticated AI agent architectures that incorporate robust memory, planning capabilities, and the ability to learn or adapt within the game world is a key long-term goal, applicable to both agent simulations and interactive narratives.
Core AI integration code, agent logic patterns, and communication/narrative modules developed during this process will continue to be released as open source, empowering the community to build their own AI-driven experiences.