Odyssey AI: Less Odysseus, More Dubious


Is Odyssey just another AI toy or something else?

On 28th May 2025, a tech start-up by the name of Odyssey revealed an experimental ‘interactive video experience’ AI project that they call… Odyssey. I had never heard of these guys before. The California-based company’s profile is basically non-existent, their social media presence is six months old at the time of writing, and their Crunchbase profile states that they provide “Hollywood-grade visual AI”. They secured Series A funding on 13th November 2024 from six investors, including Air Street Capital and Google Ventures. The founder, Jeff Hawke, previously worked at Wayve, a London-based company focused on using AI for autonomous driving.

A scene in Odyssey.

I’m neither an AI hater nor an AI lover. Frankly, the whole ecosystem is hysterical; it seemingly has to be, to drive the copious investment rounds these companies need to feed their ever-hungry power consumption. The response is equally tedious. The AI bros are convinced, against all evidence and reason, that AGI is right around the corner, while the online left are having some kind of collective aneurysm and becoming more militantly anti-AI by the day. For myself, I’m just thoroughly unconvinced by the majority of the public-facing applications, even though I understand the underlying logic of outputting a succession of AI-driven equivalents to the Rubik’s Cube with which to drum up attention, which in turn you use to drive investment. The real applications for this emerging tech are going on behind closed doors – sifting for patterns in vast sets of data to deliver real benefits such as accelerating biomedical drug discovery, process automation, and maintenance optimisation. But you can’t get up on a stage and show the world how a publication pipeline is X% faster because you’re no longer relying on a human to sit there for 8 hours a day scrolling down a list of arrows and occasionally clicking a button to move a manuscript from the peer review stage to an editing agency in India… That’s just not sexy enough.

Having spent some time immersed in the AI and tech chatter of the Internet, played around with various image-generation tools, and found myself mid-way through arguing the case that AI will always suck at writing, I felt compelled to try? Play? Use? Navigate? Odyssey’s new real-time ‘interactive video’ research project.

An odyssey around the block

The first thing you’ll note is that episodes are extremely short – just over a couple of minutes long – before you’re kicked back to the launcher page. It’s entirely keyboard-driven: you move forwards and backwards with W and S, and left and right with A and D, respectively. It’s basically early FPS movement controls. Movement is not particularly smooth, though they promise to improve this over time. So far so good.

I started in a field looking at a grassy knoll. The first thing I did was spin in a circle. Unexpectedly, there was object permanence. This is good. Other similar experiments, such as the AI-generated Minecraft, or the average AI-generated video, demonstrate significant problems in this vein. Not having a concept of a world in which anything is more than a series of pixels, and not really knowing how to interpret those pixels, the algorithms tend to shift and warp everything frame by frame based on their best estimation of whatever they’re being told by whatever dataset they’re using. So in the Minecraft experiment, a block of earth could shift into a fencepost and then a sheep if you took a few steps forwards. Early video experiments had Jason and the Argonauts battling skeletons… and then turning into an oil slick, reforming a few feet away, and performing some kind of spasmodic dance routine. Technically these algorithms don’t even have a baseline concept of the world in the first place, but that’s a rabbit hole I’m not going down here.
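The failure mode above can be sketched as a toy numeric analogy – emphatically not Odyssey’s (or anyone’s) actual architecture, just an illustration of why re-estimating a scene from the previous frame alone accumulates drift, while rendering from a persistent world state cannot:

```python
import random

random.seed(0)

# Toy "world": a list of numbers standing in for object identities in a scene.
world = [1.0, 2.0, 3.0, 4.0]

def stateless_step(frame):
    """Re-estimate the scene from the previous frame alone. Each step adds a
    little estimation error, and errors compound: the block of earth slowly
    becomes a fencepost, then a sheep."""
    return [x + random.gauss(0, 0.1) for x in frame]

def stateful_render(world):
    """Render from a persistent world model: every frame is re-derived from
    the same ground truth, so objects cannot drift."""
    return list(world)

frame = stateful_render(world)
for _ in range(100):
    frame = stateless_step(frame)

drift = max(abs(a - b) for a, b in zip(frame, world))
print(f"accumulated drift after 100 stateless frames: {drift:.2f}")
```

The random errors behave like a random walk, so the drift grows with the number of frames rather than averaging out – which is roughly why object permanence is hard for purely frame-to-frame generators and easy for anything with a fixed scene representation.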

Regardless, the fact that I was able to orient myself in space at all is a welcome indication of progress. The problem with having an algorithm trying to regenerate the same space frame by frame, and inevitably failing, is that you don’t just lose a sense of location. You lose the idea that what you’re looking at or interacting with has any meaning. The brazen lack of intentionality in these shifting images betrays the underlying mechanistic indifference to the output. This intentionality is why AI-generated content cannot be art. It does not have the keystone necessary to produce art, even bad art. It can only ever produce ‘content’. Odyssey’s apparent progress in creating a fixed sense of location may not answer the AI bros’ lamentable attempts at philosophically justifying algorithmic pixel-mashing as art, but it does, potentially, at least create a foundation for interesting output further into the future. Author-side intentionality aside, you can envision a death-of-the-author-style, audience-projected connection with algorithmically created fixed locations.

Whether as a digital artefact of the underlying data-output modelling or as a piece of tonal set dressing, Odyssey’s project is decidedly crunchy. The pixelation is heavy and slides into an impenetrable Gaussian mess whenever the model runs into anything it’s not sure what to do with, which is frequent. The gaming world discovered CRT monitor aesthetics and scan lines a decade ago, and ever since, indie devs have been slapping on excessive noise filters to cover up bad texture work the way Warhol pissed on canvas. Here the grain is even heavier, to the point of eye strain in some cases, such as the snowy wooded scene, where the black noise on the white ground makes it look like your monitor is dying. It generally seems to have more trouble with outdoor areas than indoor – probably because there’s more to render.

The spaces themselves are very small. The collision is, naturally, a bit awkward, despite the landing page copy insisting that the “physics are — well — real.” In what sense? I can’t remember the last time I had this much trouble navigating basic narrow gaps, such as one interior where you move between a row of chairs and a bar, which becomes an unexpected obstacle course. The project also has a strange habit of teleporting the user around; I believe this is usually just a matter of hitting a colour-smeared boundary wall. The engine doesn’t seem to know what to do, so it just ports the user back into the known space. At other times I found the same thing would happen just walking down a hallway or across a field. Whether this is the result of hitting an invisible boundary marker or just the project glitching out is uncertain.

One instance looked like someone trying to redraw a derelict bit of Southbank from memory. Another reminded me of the arcades around Liverpool Street. Interestingly, I followed a long tunnel that was also a skatepark, with grind rails, platforms, and small ramps running up the walls. I followed it down into a transition and it spat me back out onto the bit that looked like someone’s memory of Southbank. So I’m not sure whether what I was looking at was an AI stitching scenes together from a bunch of data, or whether any of this actually exists – just slightly warped by having Odyssey try to recreate it from data in real time. You do end up with the sense that you’re wandering around a sort of liminal, glitchy Google Street View. Which is fairly cool, to be fair. Compounding this, you can switch between channels, each dedicated to its own scene, which perhaps lends itself to the idea that these are actual places somewhere? Skimming the landing page, I can’t seem to find a straight answer, but that might just be me being dumb.

A scene in Odyssey.

Realism for realism’s sake?

I can’t help wondering what the point of all of this is. It’s cool that it exists, but I’m struggling to see a practical application that wouldn’t just be a more expensive and less functional version of technology we already have. It’s like walking around in a photo-mapped Blender model, only slower and infinitely more constricted. Sure, you don’t have to block out a tonne of low-poly roads and UV-map them, but you also have a tonne of room for experimentation, control, and expansion with the tech we already have, versus a seemingly very limited generated environment, experimental or otherwise. If all you’re looking to do is wander around digital environments, there’s a tonne of ways to do that. If you’re sick of Google Earth, Death Stranding is possibly the most famous alternative. Indie gaming has basically created the ‘walking simulator’ genre and has produced a number of interesting titles, many with very high-res textures that effectively mimic photorealism, while others focus on tone and aesthetic, such as NaissanceE, Babbdi, or Tenement. There are also various small indie projects, like the wave-function-collapse city from a few years back, if you just want to roam around procedurally generated geometry and chill out.

“Over time, we believe everything that is video today — entertainment, ads, education, training, travel, and more — will evolve into interactive video, all powered by Odyssey,” enthuses the landing page. Perhaps, but what’s the advantage over the 2D versions of these things? And doesn’t this just sound very similar to the things Meta was doing with The Metaverse?

I can’t see why, if you were so inclined, you couldn’t just create a procedurally generated map with something like Houdini, with high-resolution textures, and use that. In theory you could even use an AI to generate the textures. It seems like you’d get more of the same thing for less. What exactly is the purpose of this particular form of ‘interactive video’? There is, of course, always outside, but who the hell wants to go there? And, as someone who walks a lot, there’s a reasonable limit to the novelty therein. The Odyssey landing page claims that this AI-powered interactive video “will ultimately unlock models to generate unprecedented realism,” but so what? Film? Again, we come back to the question of whether there is a practical benefit to this method of asset creation over what already exists – or whether it’s likely to keep up with developments in that space.

AI-generating digital toys

The landing page makes a lot of noise amidst some technical jargon about a ‘new medium of entertainment’. Is it? Or is it just a less efficient way to do something that’s already here? I can’t see the lasting appeal of jerkily meandering around the dirt path directly outside some kind of pub in the park, colliding with a wall and getting stuck while the world turns into tie-dye. If we want liminal glitch spaces, we’ve got them by the dozen. It seems like it would be reasonably easy to replicate everything here with Unity/Unreal, Blender, and a couple of weekends, and the result would be a smoother experience with less eye strain. Maybe I am missing the point, and I get that this is an ‘experiment’, so I’m giving it leeway. But I am also not convinced that this is a new product doing a better job of solving an existing problem.

A scene in Odyssey AI.

I’ll be happy to be disproven, and I’m interested to see how the Odyssey project develops, but the cynical side of me wonders if this isn’t just another company bootstrapping AI onto the side of something because AI is the hot new thing that everyone and their grandmother is trying to extract money from – usually at exorbitant cost. I get the formula:

  1. Find a way to do what you already do but with some kind of neural net.
  2. Slap on some overly enthusiastic copy and find a VC investor.
  3. Hype until you can’t hype any more.
  4. Shut down when the project bottoms out because it wasn’t really going anywhere to begin with.
  5. Retire on a yacht.

You too can be a bleeding edge AI tech company!

To be fair to them, there’s no evidence that Odyssey are intending to do this. Wandering through various places with a 3D backpack camera is a fair chunk of work and capital to put towards a 3D space. But that’s the economy we live in. And maybe making random toys for people to futz about with for an afternoon is a great way to make money. After all, if you ask the average person on the street, their concept of AI amounts to little more than a slightly more interesting chatbot, or a way to Ghiblify photos of their breakfast. Their concept of robotics inevitably begins and ends at recreating bipedal body plans. The reality, of course, is that AI has far more practical applications than replacing digital call centres with a text output plugged into someone’s cloud data API or whatever. Bipedal robots are strictly a marketing gimmick. The automated robotics of the future will look nothing like a human, because trying to replicate bipedal locomotion is the most inefficient and least cost-effective process you could possibly throw money at for the vast majority of tasks. Yes, the Boston Dynamics guys have a dancing robot. Nobody trying to solve industrial problems cares about the YouTube video of the dancing robot.

Not everything needs AI. For the moment, this just looks like another toy with a short shelf life. It will take some work for me to be convinced that Odyssey has a use case. May I be proven wrong.