[CES2025] AI, Cinematic, Spatial and XR: The Next Level of Creativity

A complete roundup of the entertainment tech, media, and content sessions held at CES 2025.

"The combination of XR/metaverse and AI is transforming content creation, distribution, and user experience, and voice interfaces and spatially aware AI will become major pillars of the 'next generation of computing.' However, we must not lose sight of the ethical and communal aspects."

Monday, January 6, 1:00 PM–1:40 PM, ARIA, Level 1, Joshua 9

Panel Composition

Moderator (Speaker 1)


Charlie Fink

  • Forbes tech columnist and college professor
  • Co-hosts the “This Week in XR” podcast with Ted Schilowitz
  • Announced during this session that the podcast is being renamed “The AI XR Podcast”

Panelists:

Speaker 2: Aaron Luber

  • Director of Partnerships at Google
  • Works on AI hardware and XR

Speaker 3: Layla Mesidehi

  • Technical Program Manager in Microsoft’s Industrial AI Acceleration Studio

Speaker 4: Ted Schilowitz

  • Active as a “futurist”
  • Also known as co-host of the “This Week in XR” podcast

Speaker 5: Katie Hahn

  • SVP of Post Production at “CS Studios”
  • Works on VFX, special effects, and film/cinematic projects

Speaker 6: Rebecca Barkin

  • CEO & co-founder of Lamina1
  • Co-founder: Neal Stephenson (the SF author famous for coining the term “metaverse”)

(Note: the “Sphere” venue and the “metaverse” come up frequently throughout the session.)

Key Points

1. AI’s Impact on the Entertainment Industry

  • AI will drive explosive growth in “creative” fields such as XR, games, and film
  • For example, AI can largely automate “repetitive tasks” in video editing, VFX, and visual effects, freeing creators to focus on core ideas and storytelling
  • Production cost and infrastructure issues:
  • Large incumbent studios still hold the advantage in distribution and marketing muscle
  • But the combination of AI and blockchain could let independent creators reinvent funding, production, and distribution (crowdfunding, tokenization, Web3, etc.)

2. New Financing and Distribution Models via Blockchain & AI

  • Rebecca Barkin (Lamina1):
  • “Building a decentralized ecosystem where creators raise funds directly and share revenue with fans, without being locked into platforms or major studios”
  • The Pressman Film example (raising investment via tokens, crypto, and stablecoins while staying compliant with regulations)

3. Personal AI Agents and Ethical Concerns

  • Conversational, persona-driven chatbots such as “Character AI” have seen explosive user growth
  • Cases of intense “intimacy” effects (including an incident involving a teenager) have raised ethics and mental-health concerns
  • Gist of the panel: “AI technology is neutral, but how it is used, regulated, and ethically guided is what matters.”
  • Even large companies, under competitive pressure, tend to “ship first” before safety mechanisms are fully tested

4. The Metaverse and XR: Revisiting the “M-word”

  • After the 2021–22 “metaverse” boom, attention shifted rapidly to AI with the arrival of ChatGPT and generative AI
  • The “metaverse” itself hasn’t disappeared; the pace has slowed because hardware, networks, and AI aren’t yet mature
  • “AI’s ability to synthesize spaces, environments, and content in real time” is expected to be the key catalyst for the metaverse’s next phase

5. The Rise of XR and Voice-Based Interfaces

  • An era is arriving in which voice serves as a “new operating system (interface)” (see the sketch below)
  • Examples: Google Gemini, Project Astra, Ray-Ban Meta; voice input plus visual output as the next-generation UX
  • Earbuds, microphones, and speakers plus AR glasses → hands-free, conversational computing may become the norm
  • Visual and spatial understanding:
  • Technology that lets AI recognize the user’s physical space and position and place or composite information and digital objects appropriately (AR occlusion, spatial mapping, etc.) is advancing quickly
  • This also plays a big role in hyper-realistic, immersive experiences (e.g., the Sphere venue, theme-park-style XR)
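To make the “voice as interface” idea concrete, here is a minimal sketch of a hands-free assistant loop. It assumes Google’s google-generativeai Python SDK; transcribe(), capture_frame(), and speak() are hypothetical stand-ins for device speech and camera I/O, not part of any product shown in the session.

```python
# Minimal sketch: voice in, multimodal reasoning, voice out.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

def transcribe() -> str:
    # Hypothetical stand-in: real devices would use a speech-to-text engine.
    return input("You (voice): ")

def capture_frame() -> Image.Image:
    # Hypothetical stand-in: real glasses would return the camera frame.
    return Image.new("RGB", (640, 480))

def speak(text: str) -> None:
    # Hypothetical stand-in: real devices would route this to a TTS engine.
    print("Assistant:", text)

def assistant_loop() -> None:
    while True:
        utterance = transcribe()     # voice input
        frame = capture_frame()      # visual context: what the user sees
        # One multimodal request: text plus image in, text out.
        response = model.generate_content([utterance, frame])
        speak(response.text)         # voice output, hands-free
```

The point of the sketch is the shape of the loop: a single multimodal call replaces separate speech, vision, and dialogue pipelines, which is what makes “conversational computing” on glasses plausible.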

6. Communal Experiences vs. Solo Immersion

  • All panelists stressed that humans ultimately prefer experiences enjoyed together
  • Even as highly personalized AI agents and XR environments proliferate, “shared experiences” such as cinemas, concerts, and venues like the Sphere remain popular
  • Hence the diagnosis: “XR that means wearing a headset alone at home has its limits; ‘shared XR’ that immerses many people at once is the next big opportunity”

7. Conclusion and Outlook

  • The combination of AI and XR has already entered the practical stage and is spreading everywhere, from content production (film, games, VFX) to user interfaces (voice assistants, AR glasses)
  • Interest in the metaverse appears to have cooled suddenly, but in reality the industry is working through infrastructure and technical problems; generative AI in particular enables “fast, efficient 3D space creation,” which should benefit metaverse adoption
  • Combined with blockchain (crypto funding, NFTs, decentralized platforms), more diverse creation, investment, and distribution models could bring major change to the incumbent, studio-centric entertainment ecosystem
  • Voice interfaces are emerging as the next operating system, and “AI that understands the space around me” is likely to blend into everyday life
  • Still, panelists strongly argued that “human factors” such as ethics, security, privacy, and the desire for sharing and connection (community) must not be overlooked

A recording of “AI, Cinematic, Spatial and XR: The Next Level of Creativity,” dated January 29, 2025. The discussion runs approximately 41 minutes and features the speakers listed below exploring the intersection of AI, XR, cinematic experiences, and the future of creativity.


Panel Composition

Moderator (Speaker 1):

Charlie Fink

  • Forbes tech columnist, college professor
  • Co-host with Ted Schilowitz of the (formerly named) “This Week in XR” podcast.
  • Announced during the session that they are rebranding the podcast to “The AI XR Podcast.”

Speakers/Panelists

Speaker 2: Aaron Luber (Google)

  • Director of Partnerships at Google
  • Works on AI hardware and XR projects

Speaker 3: Layla Mesidehi (Microsoft)

  • Technical Program Manager in the Industrial AI Acceleration Studio

Speaker 4: Ted Schilowitz

  • Longtime futurist
  • Co-host of the “This Week in XR” podcast with Charlie Fink

Speaker 5: Katie Hahn (CS Studios)

  • SVP of Post Production, overseeing VFX and cinematic work

Speaker 6: Rebecca Barkin (Lamina1)

  • CEO & Co-founder alongside author Neal Stephenson (who coined the term “metaverse”)

The panel also references the new “Sphere” venue in Las Vegas and frequently revisits how AI and XR intersect with film, entertainment, and immersive experiences.


Key Discussion Points

1. Quick “True/False” Lightning Round on AI & XR

  • Will AI surpass the internet in impact?
  • Some say it’s essentially part of the internet and thus hard to compare. Others believe its impact could be even more dramatic.
  • Will AI cause large-scale job losses?
  • Likely a reallocation or redistribution of jobs, but also true that certain roles will vanish.
  • Will a synthetic AI performer win an Oscar?
  • Possibly, though the Academy might create a new category. Current rules pose challenges.
  • By CES 2030, will everyone wear XR glasses?
  • While “everyone” may be too strong, usage will certainly increase. Form factor still uncertain.
  • Are personal AI agents (like in the movie “Her”) the next big development?
  • Most panelists say yes. AI assistants/agents are already showing real product-market fit.
  • Within five years, will users simply “prompt” entire feature-length movies into existence?
  • Technically feasible in some form, but high-quality storytelling still needs human creativity. Likely more workable for short-form or lower-budget content.

2. AI’s Influence on Entertainment

  • Transformative power in cinema, gaming, and XR:
  • AI can automate repetitive creative tasks (e.g., basic VFX, editing, rendering), freeing creators to focus on core storytelling and artistry.
  • Production cost and distribution:
  • Major studios still wield distribution leverage.
  • However, AI plus blockchain could allow independent creators to fund and distribute content in new ways (crowdfunding, tokenization, Web3).

3. Combining Blockchain & AI for New Financing/Distribution Models

  • Rebecca Barkin (Lamina1)
  • Envisions decentralized, creator-owned platforms.
  • Example: Pressman Film using crypto and stablecoins to raise funds under regulatory compliance.
  • The idea is to bypass exclusive dependence on big studios or streaming platforms (a minimal sketch of the revenue-share mechanics follows this list).
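As an illustration of the revenue-share mechanics the panel alludes to, here is a minimal, hypothetical sketch of pro-rata distribution to token holders. Real offerings (Reg D/Reg CF raises, stablecoin rails) add regulatory, custody, and on-chain layers that this deliberately ignores.

```python
# Hypothetical sketch: pro-rata revenue sharing for a tokenized film raise.
from dataclasses import dataclass, field

@dataclass
class TokenizedProject:
    holdings: dict = field(default_factory=dict)  # backer -> token count

    def invest(self, backer: str, tokens: int) -> None:
        self.holdings[backer] = self.holdings.get(backer, 0) + tokens

    def distribute(self, revenue: float) -> dict:
        """Split revenue in proportion to token holdings."""
        total = sum(self.holdings.values())
        if total == 0:
            return {}
        return {b: revenue * t / total for b, t in self.holdings.items()}

project = TokenizedProject()
project.invest("fan_a", 600)   # holds 60% of tokens
project.invest("fan_b", 400)   # holds 40% of tokens
print(project.distribute(2_000_000.0))
# -> {'fan_a': 1200000.0, 'fan_b': 800000.0}
```

The design point is only that ownership and payout logic are transparent and programmatic; everything else about a compliant raise lives outside this toy model.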

4. Personal AI Agents and Ethical Concerns

  • Character-driven chatbots (e.g., Character AI) have skyrocketed in popularity.
  • Incidents highlight the psychological impact and ethical dilemmas of ultra-personal AI.
  • Key takeaway: Technology is neutral but must be guided by human ethics, safe use, and transparent algorithms.

5. “Metaverse” & XR: Revisited

  • Hype about “the metaverse” soared in 2021–22, then quickly shifted to generative AI after ChatGPT.
  • Still, the concept hasn’t disappeared; it’s undergoing a slower rollout.
  • AI for real-time world-building can unlock the “infinite content” vision of the metaverse.

6. XR and Voice Interfaces

  • Voice as a ‘new operating system’ for computing:
  • Google’s Gemini, Meta’s Ray-Ban glasses, etc., showcase natural voice interactions with AI.
  • Glasses tethered to a smartphone can handle real-world sensor data, enabling hands-free XR experiences.
  • Spatially Aware AI
  • True immersion hinges on AI’s ability to perceive real-world geometry, lighting, occlusion, etc.
  • Venue-scale XR (like the Sphere) benefits from accurate spatial computing (see the occlusion sketch below).
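To show what AR occlusion means in practice, here is a minimal numpy sketch (my own illustration, not anything demonstrated in the session): a virtual pixel is drawn only where it is closer to the viewer than the real-world surface reported by the depth sensor.

```python
# Minimal sketch of depth-based AR occlusion.
import numpy as np

def composite(camera: np.ndarray, virtual: np.ndarray,
              real_depth: np.ndarray, virtual_depth: np.ndarray) -> np.ndarray:
    """camera/virtual: HxWx3 RGB images; *_depth: HxW distances in meters."""
    visible = virtual_depth < real_depth   # virtual content is in front
    out = camera.copy()
    out[visible] = virtual[visible]        # occluded pixels keep the camera feed
    return out

# Toy scene: a virtual object 2.0 m away, partly behind a 1.5 m wall.
h, w = 4, 4
camera = np.zeros((h, w, 3), dtype=np.uint8)       # camera feed (black)
virtual = np.full((h, w, 3), 255, dtype=np.uint8)  # virtual layer (white)
real_depth = np.full((h, w), 3.0)
real_depth[:, :2] = 1.5                            # wall on the left half
virtual_depth = np.full((h, w), 2.0)

frame = composite(camera, virtual, real_depth, virtual_depth)
# Left half stays camera-only (the wall occludes); right half shows the object.
```

Getting real_depth reliably from sensors (and handling lighting and soft edges) is exactly the spatial-mapping work the panel describes as advancing quickly.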

7. Communal vs. Individual Experiences

  • Despite personalization, humans value shared experiences like cinemas, concerts, and collective XR.
  • High-end or “premium” XR experiences (sphere-like venues, theme parks, location-based VR) emphasize communal immersion.
  • For mass adoption, the panel emphasizes accessibility, ease-of-use, and the fundamental human desire to gather and share.

8. Conclusion & Outlook

  1. AI + XR
  • Rapidly converging to reshape entertainment (from big-budget studios to indie creators)
  • AI speeds up 3D asset creation and real-time rendering, helping build persistent virtual worlds.
  2. Blockchain
  • Opens new avenues for project financing, crowd ownership, and equitable monetization (NFTs, tokens, etc.).
  3. Voice & Spatial Computing
  • Emerging as the next wave of user interfaces: hands-free, context-aware, and integrated with the environment.
  4. Ethical & Communal Focus
  • Growing concern about how these technologies impact mental health, authenticity, privacy, and social unity.
  • People still crave communal, shared experiences, balancing the hyper-personalized and isolated use of AI/VR.


AI is accelerating every aspect of production, distribution, and user engagement. XR and metaverse visions remain vital but are evolving at a measured pace, partially thanks to the sudden explosion of generative AI. The rise of voice- and context-based interfaces could define the next “operating system” layer for pervasive computing. Meanwhile, ethics, privacy, and the enduring human desire for shared experiences must not be overlooked.


AI, Cinematic, Spatial and XR: The Next Level of Creativity
Visionaries are created after a lifetime dedicated to the joy of discovery. Today, we welcome cutting-edge creators from among the technology industry's best.

SPEAKERS

Speaker 5, Speaker 7, Speaker 2, Speaker 4, Speaker 3, Speaker 6, Speaker 1

Speaker 1 00:00

This is Charlie Fink. I am a Forbes tech columnist, I am a college professor, and I also co-host a podcast with this man here, Ted Schilowitz. It's called This Week in XR, and I will announce today that we are changing the name of our podcast. After almost a million downloads in four years, we are changing the name of the podcast to The AI XR Podcast. So thank you very much. So before we get started, I'm going to ask the panelists to introduce themselves. I've actually worked with everybody here as a panelist before, so this is going to be easy, and it's sort of like a reunion every year.

Speaker 2 00:43

Hi, my name is Aaron Luber. I'm Director of Partnerships at Google, working on a lot of our AI hardware and XR efforts.

Speaker 3 00:53

Hi there. I'm Layla Mesidehi. I am a technical program manager over at Microsoft in the Industrial AI Acceleration Studio.

Speaker 4 01:02

I'm Ted. I'm Charlie's partner in crime. I've been a futurist for a long time, and lately the moniker has obviously changed. People are now calling me "technology impresario," which, I don't know if "impresario" makes sense or not, but I kind of like it at this sort of arc of my career.

01:19

Circus. Kind

01:21

of P.T. Barnum-ish.

Speaker 5 01:24

I'm Katie Hahn. I'm the SVP of post production over at CS Studios, which is, you know, the big one, if you've all seen it.

Speaker 6 01:35

I'm Rebecca Barkin, and I'm CEO and co-founder of Lamina1, alongside my renowned co-founder, Neal Stephenson.

Speaker 1 01:44

All right, I've heard of it. Let's start off with just a little fun true-false game. So we'll just go down the

Speaker 4 01:52

line. See, Charlie's good at this, different than everything else you can see today.

Speaker 1 01:57

So let's start with this one. AI will be more impactful than the internet. Maybe

02:08

I'm leaning yes. It's

Speaker 4 02:10

a trick question, because it is the internet. It's right. That was, that was my

02:15

answer to it is the internet, so not

Speaker 6 02:17

although it can run without the Internet. So I might go, yeah,

Speaker 2 02:20

There you go. Define what impact you mean, but yeah,

Speaker 1 02:24

AI is going to result in astronomical job losses. True or false?

02:33

There will be an evolution of job loss

Speaker 7 02:38

Next, I'd say redistribution of jobs. That's right, exactly true and false. False. I mean,

Speaker 1 02:48

true, yeah, it is true. So, I mean, we don't like to say it out loud. Let's keep going. Of course, a synthetic AI performer will win an Academy Award.

Speaker 4 03:01

Awesome. This. Probably true.

03:06

False. The Academy won't allow it

03:08

true, but they will create a new category.

Speaker 1 03:11

There we go, that was the best answer. All right, true or false: we will all be wearing XR glasses at CES 2030.

03:21

Yeah, we already are true. Likely, yes,

03:26

I don't think it'll be glasses.

Speaker 6 03:29

Oh, man, I'm gonna go with false, with the operative word being all, okay,

Speaker 1 03:35

Let's see. Oh, here's one. I personally think that personal AI agents are the whole game. That's what everybody is playing for. That is the Super Bowl of this competition. That's my personal opinion. So, true or false: personal AI agents, like in Her, will soon become... Yeah, true.

Speaker 3 03:57

I think this idea Absolutely, absolutely, we'll

Speaker 1 04:01

have to unpack that one in a few minutes. Okay, last one: in the next five years, consumers will prompt movies into existence.

Speaker 2 04:14

Maybe, I don't know, true and true, maybe false capability wise, true. But will they do it? I don't know.

04:23

Depends how you define a movie. I'd say it's possible.

04:25

I think we're all sort of partially true.

04:29

Home movies, maybe

04:31

true, but mediocre. Yeah,

Speaker 1 04:33

That's the one I would... without a human in the loop, it's hard to imagine. But on the other hand, a lot of the things that we're experiencing every day now were once hard to imagine. So let's go to entertainment. Well, I'll make this a two-part question. How do we think AI is affecting the entertainment industry writ large, and then specifically the XR immersive entertainment industry, which many of us

Speaker 4 05:08

I'll take that to start. I think with every page that we turn, in terms of our compute literacy, use case and value, we know how this story works. We have learned over the last 120 years how technology affects the creative arts, and the most populous creative arts: movies, television, video games, etc. This stage of what we do with compute, remove the AI nomenclature from it, just talk about advanced compute and everything that it can bring to the table across all forms of creativity and creation. We are entering into an age of astronomical growth and new opportunities and new probabilities around what computers will do with creative tools and creative spirit. That's the umbrella that we all live under.

Speaker 6 06:00

That's a starting point. I think what's interesting is, like, the entertainment business used to really hold this kind of moat around distribution and creation. And even as the internet came along and democratized distribution, the cost of making creative was still really high; over the last 20 years it went up 20%. So the advent of AI is helping people concept a lot more efficiently. It's helping people get through some of those early, cost-inefficient stages, and so I think it's helping to take down that final remaining moat and stranglehold that the traditional entertainment business had on economics.

Speaker 1 06:41

Yeah, but distribution is still, of course, the big media companies. Still very hard without them, no?

Speaker 5 06:50

I don't agree with that at all. I think one of the most important things was that it actually took down the barrier to distribution. And the means of distribution being in the hands of anybody has actually brought forward some incredible creatives that we wouldn't have had, including right at the top levels of Hollywood, like Issa Rae, for example, who wouldn't be where she is today if it weren't for the fact that she had the ability to distribute her own content. Well, this

Speaker 1 07:15

is an interesting topic, because something that has been discussed a lot recently, people may have been reading about it, is the switch in social media that's taken place over the past couple of years, driven by TikTok and Shorts: content recommended not by what social media has done for the past 15 years, content recommended by your friends and acquaintances, but content recommended by an algorithm. And the thing about it is that it's much stickier, and people like it worse.

Speaker 3 07:49

I mean, it's personalized. The algorithm is, like, catering to you, right? It gets

Speaker 1 07:55

better. It learns what you like. So in that sense, that's AI taking over the viewing experience.

Speaker 2 08:01

Yeah, I like the YouTube analogy, and I think the missing piece, the key piece, is still kind of the monetization and attribution. These are key pieces that entertainment requires for there to be enough of a groundswell. I think, Ted, your umbrella is correct, but that's still something that's kind of in development and needs to be really figured out before there is the ability for a YouTube-type-of-moment platform to come in,

Speaker 5 08:38

but that's where blockchain is going to come in, and I'm sure you can speak to that. Yeah. Thanks.

Speaker 6 08:44

Nice hand-off. Yeah. I mean, really, what we're trying to do with Lamina1 is build a truly decentralized, creator-owned platform, so owned by the people who are actually populating the content on it, but built on blockchain, so you have transparency and co-creation, transparency of the capability to actually distribute equity in these projects. You can co-create directly with fans, because, to your point about social and what we've seen with, like, TikTok and some of these short-form content pieces, the creator economy is killing the traditional studio, which, to be fair, I don't even think is great. Like, I don't even know if it's good. But I think what it says is, like, we do need a platform where the funding is creative and the funding of content is not driven by tech platforms, who really operate under a very different type of incentivization and monetization model than those who we want driving our stories.

Speaker 1 09:43

Can we elaborate on that a little bit? Because it's not platforms. I mean, it is a different platform. So how would that work?

Speaker 6 09:52

Well, so we have, at Lamina1, partners that we're working with, no names, but they are working on revolutionizing the financing model around independent creative projects. So if you're in that, like, five to $15 million sweet spot for funding, it's really, really difficult to get financing. And some of these projects, they take stablecoins, they're crypto based, and they leverage traditional GP and LP networks. They can do regulation-compliant Reg D and Reg CF crowdfunding projects. So it's changing the financing model as well. And I think our thesis was that with distribution, you have a capability, but it's really difficult to get to the size of a network that traditional studios have, where, like, the monetization, the upside, is big enough that creators will take that risk. We felt blockchain could do that: if you had a global network that was that big, and the validators were part of the monetization of the economics, you could get a sizable network to be able to rival traditional distribution models.

Speaker 1 10:54

So we're talking about Republic as the platform, as opposed to GoFundMe or Kickstarter or one of the other crowdfunding sites. Instead of getting goods, you get tokens, right? Which, if you are investing, presumably represent a share of profits or ownership, yeah.

Speaker 6 11:16

Republic, actually, you can do traditional financing as well. So, like, people can invest in a traditional way. They can use stablecoins, they can use crypto. Eventually, we would love for them to be able to take something like LL1 and be able to accept that as a form of investment. They don't today, but, you know, we're new, we're working on it. But what you're doing is, like, you might have seen Pressman Film. They're an iconic film company; they have, like, American Psycho and Wall Street, an incredible catalog of film. They did a recent raise, raised $2 million in, like, less than 90 days, I think, to be able to fund creative development of a new slate. And those who are contributing to that investment will own a piece and be a part of that community of future productions and whatnot. So I think we're really looking at a revolution in the way that we're financing creative projects. And I think that's really, really important for all of us who have worked in AR and VR and XR. You know that a lot of your projects were funded, I was at Magic Leap for a long time, projects were funded by the platform to push certain feature development, and they're abandoned as soon as that's deprecated. And so you have this huge shelf of content that can't be accessed, isn't owned by them, isn't monetized in a way that reflects their values. We're looking for a change.

Speaker 1 12:36

Let's switch to AI agents, a topic that I brought up earlier. So we have this platform. I don't know how many people here have used Character AI, but it's rather astonishing and very powerful, the way that it imitates another person, as we know. There's a famous news story about a young teenager who committed suicide because he was so overcome with a character created using Character AI, now part of Google, directly or indirectly. And you've got Meta, which just released its version of Character AI, and I guess it was taken down right away because there were some tactical issues with it, obviously. But obviously that's a direction that they, like Character AI, are really committed to. So instead of, like, the anonymous person I deal with when I'm using OpenAI for voice, this is like a real personality that almost has an agenda.

Speaker 2 13:50

So I think there are a number of things that are happening. The Character and Google relationship was more of a rehire: the founders, the researchers, came back to Google. Character is completely separate. I think that, no doubt, what we saw in 2024, and kudos to Character and others, is real product-market fit with the way people interact and engage with large language models and characters and chatbots, whatever you want to call them. And there's a lot of usage, a lot, a lot of usage. It's product-market fit, no doubt. Of all the things that are being developed in AI right now, that is clearly one that you point at. And I'll just talk about Gemini for a second. We released Gemini 2.0 at the end of December, and these are major changes. If you look at just what's happened in 24 months, 12 months, major advances have happened to large language models, from reasoning to cost to compute, to all of the things, right? I think in one year, you have at least 20 companies, organizations, that are building GPT-4-like models, right, in one year. And so the Gemini 2.0 side, and the evolution of what's happening across the industry: we kind of think about Gemini 1.0, when we released it, as that ability to really understand information, and Gemini 2.0 as finally a moment for it to be really helpful for a large segment. And so the agentic side of this era, I think, is very true, because of a lot of what the capabilities now are in terms of multimodality. And so what had been very much multiple models for doing video or doing text or doing audio or whatever, these things are now coming together into one model that can have multimodal input and multimodal output, and that gives you the capability. We announced a project called Project Mariner, which essentially allows you to look at any website and not just ask the large language model for information; the large language model can understand what you're looking at and take action on it. So if you're on a travel website, it can know immediately what this is, something about your travel, and then go ahead and start planning my travel.

Speaker 1 16:12

The agent suddenly penetrates almost every part of your life when you consume entertainment, the way you relate to others, the way you spend your leisure time, the way you consume games and other content. I mean, it really has, you

Speaker 2 16:27

know, some profound implications. Yeah. So my point is, and I'll just hand off: you have one absolute product-market fit that's happening in the space. We've seen that, and the capabilities of the large language models are now coming to a point where you can do some really extraordinary things from a helpful perspective, for

16:45

what they can do. Yeah, I

Speaker 3 16:46

was just going to add to that, on the agentic model. I think beyond just looking at a piece of content, looking at a website and pulling information out, it starts to be able to analyze the content that you're looking at. Just an example of an agentic AI framework recently developed: it took four or five personas, and it was basically in the construction space, not entertainment related, but for a daily report function it managed to deliver within minutes what took hours,

Speaker 4 17:20

I think an important thing to reflect on in this conversation, and you called it out pretty robustly in terms of what happened with Character AI and some other things: we are in an age of unintended consequences right now that is unparalleled in any other era of what we've done with humanity and compute, and the acceleration and the competition that all this has created of "get it out there, and we'll see what happens." It's very hard to put up a gate when your competition is driving you toward "don't worry about the gates," right? I mean, Google has had an interesting kind of arc with that, trying, with their best ethics, to hold this thing at bay, and watching others kind of run over it, and then having to reorient their ethics. And constantly you have these unintended consequences. So I don't necessarily have an answer for this. It's just worth reflecting on what is going on with our human species and how we relate to these powers of compute. We haven't

Speaker 1 18:26

fully processed the effects of social media and mobile phones,

Speaker 5 18:30

But Ted, you're doing what a lot of people do in this space, which is you overestimate technology and underestimate humanity. We attribute far too much to the power of technology, but technology doesn't exist without humans. And I think it's an important thing for us all to remember here, when we start talking about what could happen: technology doesn't exist without users, and no technology is either good or bad, ethical or unethical. It's all about how we choose to use it. And from our perspective, a lot of what we're seeing is that the technology world is developing all these very personalized, individualized experiences, but what we're actually seeing is that people really want communal experiences. That's the reason why theater never died when televisions came into people's homes. And we're seeing so much development in the XR space and the AI space of these very individual, personalized experiences. In the forecasting side of my career, I've always subscribed to this ABC of future adoption, right, which is it has to be accessible, better, and communal, and we're forgetting that communal element. Again, with my Sphere hat on, that's why we are so successful at the Sphere, as opposed to other immersive and XR experiences that have had limited adoption: because it's the only one that is communal. It is an immersive experience that is actually communal, that people share, and that's why people still love going to the movies, and the Sphere is better than going to the movies. It is accessible, because you don't have to put glasses on; you don't have to do any of those things. But it's also a communal experience that everybody can share. And I think we forget about that when we talk about technology; we forget about the human experience. We're really seeing people coming back to these really basic human connective experiences. And the more we go down the road of having these very personalized, very isolating technologies, I don't believe that humans themselves are going to necessarily want all of that. There's

Speaker 6 20:42

also, just quickly, like, there are options outside of

20:49

Google. And I think this is, like, we forget,

Speaker 6 20:52

there's, I mean, it's really like, if you're worried about the speed and the pace and the lack of, like, understanding and transparency around weighting and how answers are coming about, there are programs like banister AI, which is fully decentralized and fully open source, and it publishes all of the information you could ever want about how it's making decisions about what it tells you: like, where did it get the answers from, why did it give you that answer, so that it still inspires critical thinking, which I think is, like, one of the things that's scariest about just relying on incumbents.

Speaker 1 21:23

I'm really glad that you brought up this idea of the communal, the shared, because I do think that's important, and it leads me to the next topic area, which is the metaverse, right? The M-word, something we haven't said at CES a whole lot the past couple of years. Are we going to get kicked out for that? I know. But it still obviously is a thing. It's still growing. It's still real, and I think AI will have a huge impact on its ability to scale and attract more user attention, more time. Yeah,

Speaker 2 22:00

Rebecca and I were talking about it. I think that, again, just look at, like, everything that's happened in 24 months. We were up here two years ago, and you told me about code-red moments at Google, and it feels, you know, hot and heavy in the moment, but, like, so much happened so fast. That was right after ChatGPT; in a short period of time, things have evolved. And so there have been projects that we're working on, projects across the industry, absolutely, right? There's so much happening. Phenomenal. Text-to-procedural mapping; I saw a text-to-CAD, somebody built a text-to-CAD model there. These things are moving very, very quickly. It's big in the gaming space, already happened in the gaming space. And I think, as someone who worked at the beginning and middle and still-working, whatever you want to call it, stages of the metaverse, the hardest part was getting the thing created, the development, right? So much work has to go into it. If you're talking about this instant experience, that it can be anything and everything, a world that doesn't stop: how do you create that? How does a human create that? And so AI coming to be helpful, to create that in real time and do the work, from a cost perspective, from a time perspective, from all the things that are required to do this: that's happening, and we are seeing those steps. And again, I go back two years, and so what I think we might be talking about as

Speaker 5 23:32

Well, the metaverse concept was like the original AI concept: it was ahead of its time. It couldn't be done. And when everyone started talking about it, I mean, where it was in the hype cycle two years ago was at a point where it was conceptually amazing, but reality-wise, physics-wise, compute-wise, not possible, and everyone was going to forget about it very quickly. And now where we are in the hype cycle is very interesting, because everyone's kind of scurrying away and waiting for compute to catch up, just like they did in the 80s and 90s with AI. I mean,

Speaker 4 24:08

I think the ultimate provability is: there are certain things that need a word and certain things that just don't need a word. So Neal would be the first to tell you, like, it came from a fictional story. It's a piece of nomenclature that kind of made sense inside inner circles, but it didn't need to live on a layer that we already had plenty of words and nomenclature for, which is how we use, you know, compute in all these different ways. So I think that's kind of what it isn't. Of course we have it, and of course it happens all the time, just in various ways. You pointed out communal activity and different artistic identity, but it just didn't require a new piece of nomenclature to latch onto. I'll tell people, we're like, why do we need to say this? I mean, I think

24:57

obviously, defending Neal, because,

Speaker 6 24:59

you know, that word was necessary at the time, because, I think, to your point, Ted, there just wasn't a way of talking to people about how much time they would be investing in persistent games and e-com. At the time, it just wasn't that familiar. And so coming up with this sort of marketable gimmick of a name for how we're going to progress was really important. We backed away from it a little bit because it just has so much baggage. So when we were trying to fundraise around metaverse and crypto, which, years ago, was a nightmare, we kind of just were like, look, the metaverse is all around you: as long as you continue to invest in digital assets, digital identity, in playing video games, in e-com, that is the progressive, sort of, dissolution of the barrier between physical and digital identities. And that's really what we're working towards: an identity that can fluidly move between those different spaces.

Speaker 3 26:18

Allow me to be part of the experience. Allow me to create my experience, and then it learns about me and delivers the experience that it thinks I want. I think

Speaker 1 26:28

that's a great point. I think it makes it more social, more personal. Just to digress for one second, or rather explain something that came up regarding the metaverse, AI and XR, which is the more interesting topic: how we all started talking about the metaverse. Now, I wrote a book in 2017 called Charlie Fink's Metaverse, and "metaverse" was used, and I'm not promoting it; it's a tech book that's eight years old, so don't go back. But, you know, "metaverse," in that sense, was everything in tech connected to one larger vision of how we will live in the future. So that's how "metaverse" was used there. But then came 2021, right, sort of at the end-ish of the pandemic. And if you remember the beginning of the pandemic, we did not yet have the Quest 2, right? VR really was not ready for prime time, but we in the immersive industry did not get to choose the timing of the pandemic. But VR came out, and it turns out VR is really great when you want to be together and you can't be. And so that led Mark Zuckerberg, who is one of the most influential and wealthy people in the world, to decide to change the name of his company to Meta, to reflect the potential of his company to do that. So that's why we all started talking about the metaverse, and the minute that OpenAI came out with ChatGPT, 13 months later, we stopped. So that, you know, says the professor, is what happened. We were all there anyway. So let's move on to AI. So Rony Abovitz, the third member of our podcast team, said pretty much in November of 2022 that AI is what will make XR that useful, everyday thing; without it having context and some awareness of who you are, where you are, what you're doing, what you usually do, it can't get there. But then we have... so what does that mean?

Speaker 4 28:52

I'll start the conversation. So what I always look at around this terminology of XR, which is kind of the global referral for all these different parts, the collective of technology that we call immersive or all-encompassing virtual reality, is that it's a stair step: from the most pedestrian, most mainstream, most collectively used, moving up in various forms of exotic, more hard to achieve and hard to monetize. But it's a stair step, right, that continues to move from things that we do on our mobile phones that have an XR component now, an AR/MR component, to various mainstream headsets that are selling in the 20, 30 million sort of range, to more exotic location-based entertainments, the things that enter into, like, the theme parks of the future, the most exotic use cases. And the Sphere is kind of a great example of: now take it and use that as an XR platform, visualized as a group sitting inside a gigantic virtual reality experience, right, which can be done with the right sort of software approach to it. So I think it's interesting to get everybody's comments on this: we're on this stair-step trajectory, and people need to figure out, as they choose what business trajectory they want, the big corporations, the startups, where do they want to be on the staircase? Do they want to be taking a shot at the biggest pool with the most competition? You talk about something like YouTube, 500 hours a minute, and the double-edged sword of: yes, we can create lots of content with AI, but we'll create way more content than anybody can ever consume and monetize, things like that. Two: where do you sit on the exotic scale? If I want this to be something like Sphere, which is now not a one-off, there will be a couple of them, but still a very small number of these things that can only monetize in a certain way, you know where you fit in that. So I think that's the XR continuum, and it's important. Well, you

Speaker 5 30:46

did speak about Sphere, so I'll take this one. Look, from our perspective, obviously XR is something people are interested in because it's better than what you have at home on your flat screen. People are looking for more immersive experiences; they have been since the beginning of entertainment. People are always looking to be more and more immersed in an experience and taken away from the doldrums of their own lives. So we're all looking for experiences in entertainment that are more and more immersive. And I've also said, and I believe, that the future of entertainment is that we will essentially define it not by television, games, things like that, but by passive or active experiences. So if we think about defining entertainment as passive or active experiences, I think there is room in that new world for everything from small headset-based, phone-based things to, like you say, more exotic, premium experiences like Sphere. And in the same way as the kinds of technologies that we have now in our homes do range, they will continue to range outside of the home as well. So there is definitely room there. And on how AI is going to impact the XR ecosystem: you're right, because there is a place for that smaller-scale, less premium experience, there is a lot of room for that to be built and enabled by AI. And then where there is the more premium stuff, like we see in the theatrical experience as well, the more handmade or auteur-driven, really unusual, high production value experiences are also valued by audiences.

Speaker 4 32:31

What it will do, and this refers to your thing, is it will give new forms of breakout hit, just like YouTube gave new forms of things that were budgeted completely differently, had a completely different reason to exist on the planet, and became that one-in-a-billion shot that's kind of, holy cow. Yeah.

Speaker 2 32:52

Again, as I said when we were just talking about metaverse capability, I want to bring it to use cases, right? Before business, you want to talk about consumer use cases. And I think that in all the years I've been working on this, since the Cardboard days, I worked on the very earliest stages, when that was a 20% project in 2018, and at the time, you know, we always thought, like, wow, if you could put this in a wearable kind of experience, that's going to be our holy grail. There are so many ways to think about this, but from a use case perspective, personally, from working in this for so long: AI is providing unique types of experiences to XR that I think are really starting to show this beautiful narrative, whether it's a VR headset, an AR headset, AR glasses. We just announced Android XR, completely integrated with Gemini. We have Gemini built in for people to experience things like Google Maps, Google Search, to have a multimodal experience, so you can see and understand and talk to a Gemini model that will give you information back. Ray-Ban Meta has done a great job in getting their product out: glasses, and having the ability to talk to a large language model. Those are use cases. Project Astra is one that we've been developing for a while; we're very excited about that, bringing that to a wearable form

Speaker 1 34:23

factor. Glasses are a great platform for sensors.

Speaker 2 34:27

Yeah, yeah. And it's providing the ability to finally do these things. And so, yeah, I think that, like, personally, I love it. I'm using it, I do these things. And for me, as someone who's been in XR for a long time, I'm like, these are use cases that are very helpful, useful to me. Come back to the synthetic side of what we're talking about, and just what this could be for other things. It's something we've been working on for a while, and we're starting to see this horizon really finally start to crisply

Speaker 1 34:53

come together. Do you think that voice could be the new operating system? Voice

Speaker 2 34:57

is massive. Voice is huge. And we demonstrate this with Android XR, the UI element. I think the major thing that we've announced is that with this deep integration with Gemini, it's really all built into the UI and the user experience interaction. You're using voice, you're talking to something, you're talking to a model that's going to give you a better way to get around than using hand controllers, your hands, or whatever that is. You're seeing those things. But voice, I mean, you can literally do a NotebookLM audio overview, podcast stuff. Voice, I personally, if

35:27

you haven't tried NotebookLM,

Speaker 5 35:29

it is mind-blowing. Hey, as somebody with a funny accent, voice has always been an issue for me, and I think that accessibility is a big key to adoption of these things. We loved the voice thing when it came out, and there were a whole lot of memes on the internet about Scottish people trying to use Siri.

Speaker 3 35:51

Nobody could use Siri. I agree with that, because Siri doesn't understand me, and she's British on my phone. But voice, natural language: I do think, I mean, the models are getting smarter. They can pick up any accent; you can speak in any language. But on voice interaction, taking it back to content for a second, we talked about, like, the premium versus maybe the less premium content. I think one of the things, no matter whether you're a studio or you're a small creator, is the ability to focus your attention on the tentpole, on the creative that you really might not have had a budget for, might not have had time for, by enabling AI to just take the grunt work, and I don't mean that in a disrespectful way, but take on a lot of the basic content creation around you, and then allowing you to focus on the way to,

Speaker 6 36:44

I think one of the simple sort of frameworks that we use is, like, you know: AI is really about sort of decision making, and XR can be the layer of experience, right, user experience and interaction with that world. And then blockchain adds to that transparency and security, so transactions and identity. And if you put those all together, and you look at the way that the Saudi market, for example, is investing in large-scale, mixed-use, retail, immersive environments, as well as Macau, there's a lot of investment there; there's investment in Africa, in North Africa, in these large-scale immersive cities, almost gaming cities, a 10-square-mile-radius immersive gaming city. I mean, it's wild. But there needs to be sort of a through line that runs through all of that, where you take your identity and your assets and all these sorts of things with you, and then AI adds the personalization aspect, right? So, like, that's the contextual awareness that allows me to make this reactive and make it personalized to you. And so you'll see all these different barriers of entry, based on ease of use, based on how frequently you engage, how much information you give it, and that'll sort of give you your different levels of engagement and quality of experience

38:05

and premium

Speaker 4 38:08

about what you guys were talking about: you're all talking about personalization, and the audio, the mouth and ear, have kind of entered the age of their first maturity, right, their first kind of almost mainstream touch points with people that are tech-focused. But the third leg of that stool: we've got the mouth and the ear working, so we've got these two. The holy grail, maybe, and this is the question for everybody, is that third component, the visual component. You're provoking,

Speaker 1 38:35

Yeah, I know what he wants me to say; a debate that we've been having on our podcast all year. Okay, so here's the thing: you have to have your smart glasses, your XR smart glasses that have all the great sensors on them, tethered to your smartphone, right? But this is a pretty good display. I mean, we watch movies,

39:01

I'm sorry, as a VFX professional,

39:07

for me,

Speaker 4 39:10

worldwide, success. Profile, yes, yes. This form factor,

Speaker 1 39:16

my question is, and it sounds like I've been doing XR for 40 years, but let me just say it finally: if I'm carrying this around, what would the display in my glasses be for?

Speaker 4 39:28

It's a really good question. I don't want to answer it. I want to hear. I

Speaker 3 39:33

mean, I agree. First of all, carrying a phone around, you're holding it; you're not necessarily present in the moment, to the experience that you're having. So having a different display output and using the phone as the compute side of things, I think that makes it very present. And when we talk about communal, I mean, the voice part

Speaker 1 39:52

I was thinking, and then when you needed to see something, you could see it. But meanwhile, you'd be, like, in the movie Her, they'd be sorting your voicemail: skip, skip, skip, skip, you know, and you go through your voicemail like that, and that's all voice. I mean, it's a pretty powerful scene, because it's just a completely different way of thinking about your relationship with your computer. Everybody on the train in that scene is mumbling to themselves; that's the really interesting thing about it. Sorry,

Speaker 5 40:22

So in the conversation about visual, you're saying the next thing, beyond voice and hearing, is visual. I subscribe to what Fei-Fei Li has been working on, which is that spatially aware AI is actually the next frontier. And as a visual effects professional, for me, that's very important. And I think it's the one barrier when you're always looking to give audiences a more immersive experience, a more real feeling. Again, spatially aware compute, I think, is where that's going to really be the holy grail for those real premium visual effects, cinematic, immersive, experiential projects.

Speaker 6 41:05

If you can combine contextual awareness with real world occlusion, like the level of immersion is really outstanding. That's

Speaker 1 41:12

a great question. Thanks for listening, everybody. Enjoy the rest of CES.
