Hi. So, all right, I probably do have a little bit of headset face still, because I spent quite a bit of time in Venues through this day. And when there was a good crowd of people there, it did sort of feel like Connect. We've obviously got some work to do on managing the population dynamics as we get people through, keeping them from being loners in some places and over-conglomerated in others. But I think it's actually a pretty good sign.

So, you know, after Connect last year I was given an award for lifetime achievement in VR. And I gave a kind of grumpy acceptance speech saying that I was really not satisfied with the pace of progress in VR. And generally speaking, I'm still not. Most of what I talked about last year at Connect is still unresolved and still relevant. But shipping Quest 2 in the middle of a pandemic is something that I'm really proud of. We didn't think that it was a slam dunk even at the very beginning. For a long time we had thought that shipping headsets on a two-year cadence was probably about the right thing, where maybe you alternate between PC and mobile or something like that. But with Quest shipping in the middle of the year, going ahead and planning on shipping a Quest 2 in a year and a half instead of two years was already pretty compressed and tight. You know, there are all these gates that you go through, where you get your engineering samples and you get ready for production, and you're supposed to take all of these steps where you've got opportunities to fix things, and it was going to be tight. So when we got told to not come into the office, to just stay home, I really thought that it was over, that we had blown it. And it's a different thing when you aim for March and you slip a little bit, but when you aim for the holiday season and you miss it, it's a real, real problem. So the fact that we are ready and it is going to go out the door is really pretty great. My current thinking is that the electrical engineers and hardware engineers in general on this must be channeling a little bit of Scotty from Star Trek, where they maybe hold a little bit back so that under pressure they can pull off a miracle. Which is definitely not the way software developers work, but it seems to have really paid off for us this time.

Someone had commented that the rebranding here, with Oculus Connect becoming Facebook Connect and the organization rebranding again to Facebook Reality Labs, does kind of feel like the end of the beginning, and they might be right. Now, personally I'm very fond of the Oculus brand, and as far as I know Facebook is also, and there are no signs of it going away. But in the larger organization, VR is less than half of what the organization is. More than half of it is doing, you know, "researchy things"; Michael Abrash talks about some of this stuff being on a 10-year time frame. And that's worthy of being called labs in some ways. But I am personally putting resources on products that are going to ship in the near term rather than technology research things on 10-year windows, so I've always focused on the VR side of things. But Facebook Reality Labs is reasonably representative of the larger organization.

Now this could be kind of insensitive, but, you know, the global lockdown and pandemic should have been sort of the global coming of age for virtual reality, where this was the opportunity to defy distance, defy reality and all of that. But we're only sort of accidentally benefiting from this. Not only were we sold out most of the time, we couldn't just produce the units that people wanted to buy. And that is not an easy thing to rapidly change; we couldn't just say: hey, the demand outstripped our expectations, let's ramp up a whole bunch more. It's a long process, and it's been unfortunate that most of the time here we've been sold out. But worse, all of our social experiences were basically killed or deprecated. You know, we had Rooms, Spaces, co-watching, and all those are gone, and Venues has been in maintenance mode for this entire time. We made this huge bet on Horizon and we've had all these people working on it, and you're finally seeing some of the fruits of that with Venues 2.0 now. But basically we weren't ready. We had all this effort going into it, and we had let the previous products more or less rot or go away. And I had made a pitch: well, can we just resurrect Rooms for this time, for the pandemic here? We could spin it back up; there were people that were enjoying it, we were getting to the point where it was a good experience, but it was still 3-DoF optimized, it wasn't set up for 6-DoF. We could have run it, but nobody wanted to stop the scheduled things, everything that was already planned for this time, to go work on something like that. So frankly I'm kind of embarrassed about our social story here, but thankfully the slack's been picked up by a lot of third parties. And I frankly envy the learnings that they're getting out of all this. We see the numbers, and we see lots of time spent in these, and there are lessons that you just don't learn even with a big, well-resourced team that's been told to go build something great. It's a different world when you've got thousands of real users going through it versus just your internal testers. We are getting close to the point where we're going to learn those things, but a lot of it's going to be relearning things that other people already have.

Unfortunately, location-based VR has probably taken a terminal hit from this. It's going to be a long time before people feel comfortable going someplace and putting on a shared public headset. And that's, you know, too bad. There have been a lot of companies that have tried innovative things, and some of the experiences were really pretty magical, but I think it's just going to be a tough business case there. But on the other side, exercise as a primary application of VR is really winning. At the beginning we thought that this seemed a little unlikely; sweating in the headset just seemed like it was going to be a real problem. We had a lot of worries about the fogging that we had on some of the earlier headsets, but that turns out not to be as much of an issue on the standalones, where they pump out a fair amount of heat and keep the lenses warm. But it seems like people are okay with making a sweaty mess in their own personal headset. And I mean, I use it practically every day; VR is part of my personal exercise regimen. I kind of trade off, where I'll do my Expert+ Beat Saber stuff one day, and then the next day I'll put on arm weights and work out with those as extra exercise. So some of the fun stuff is we are adding global, system-wide tracking of some of the movement, so we may need to stick in some extra thing there that you can click when you have extra added weights or resistance, but that's been, you know, a real positive thing.

Now everyone's video conferencing, and the limitations of that are grating on everyone as you have to do it sometimes multiple times a day. Video conferencing, on the one hand, has a lot of positives to it, but at the low-level, speeds-and-feeds level, the latencies are just really not that great. I mean, even cell phone latencies are not good. So many people, young people, just never experienced hardline wired telephone conversations and don't understand how bad cell phones are. And of course video conferencing is even worse. The last time I timed some of these things, an AT&T cellular call was 700 milliseconds of latency. And the last time I timed it in Horizon it was a little bit worse, like 770 milliseconds; it may have improved a little bit since then, and we do have sort of a mandate to go lean on this a little bit. But I want people to not make small, weak goals. None of this "well, we want to be better than a cell phone"; we should go really hardcore and say we want 50 milliseconds of latency for conversations, something better than any other kind of electronic communications medium that we've got right now. That would require going in and writing custom firmware for the noise cancellation and the way the audio codecs work on the headset, doing really top-notch networking, having things set up properly all through everything. We have layers of abstraction on the Oculus platform; now we have a shared microphone service, which adds this extra, unnecessary layer of indirection and more latency and jitter and things that we have to worry about scheduling around and backing off. But it's all fixable. Maybe we don't get down to 50 milliseconds, but we can cut this in half; we could make really significant improvements. And this is a tangible benefit. In all the times that I was talking with people this morning in Venues, we many times ran into that same thing: multiple people in the room, "no, you talk", "no, you go", the whole thing. It's not instant like you're right there, where you get to do the social things that people evolved over hundreds of thousands of years to do well. We talk about presence in VR, and you do kind of get that sense that you are there with another person, you hear the audio, but lag like that grates. And it is fixable, and it's something that we can make a difference on. And that could make a real difference in a lot of these meetings. When you have meetings, just the way you do your video conferencing, it's not as interactive as it could be. The flow of information is not what it could be: somebody's talking, everybody else is listening. There's not as much of the give and take as you would hope for in a good in-person meeting. So I think that our technologies have ways to improve that.

The spatialization in Venues, again, is pretty good. I was happy hearing people even up on the balconies or down below; I was able to locate people. But we can get better. There was a good video put out recently by the FRL people, their audio team, about making audio that was often indistinguishable from reality, where you could have somebody talking, and with your eyes closed you'd have to guess whether this was computer audio projected through the headphones or somebody actually speaking in the room, and you could get very close to coin-flip levels. I've been pushing for us to go ahead and do that with our headsets, where you set the headsets up and you've got the microphones and you've got the headphones, and that's a wonderful case where we know what the hardware is on both sides, we can accurately map all these transfer functions, and you can make it really sound like they're there. An interesting side effect of that is you lose the ability to have an actual volume control on people, because there's no greater or lower volume if you're doing that. There's only the right volume. If a person is four feet away over on that side talking at this volume, there is a correct volume to be coming out of the headphones, and you just go ahead and do the right thing. And that approach is sort of: let's match reality. In a lot of ways this is like the revolution that happened in rendering, when you went from "we just tweak everything, give the artists a lot of knobs" to physically based rendering, where you're talking about energy-conserving light reflection from surfaces. We should be like that for our audio too: physically based audio, where it just is what it is, and you give up some degree of control there, but it feels right. And then, also in that way of matching the world, one of the things that still makes me smile about our systems is matching reality with the virtual view. I've never liked the fact that we still have this big light leak around the nose. Some of the early Gear VR headsets blocked out 100% of the light, and were in some ways a better experience there, but all of our standalones have had this reasonably sizable gap around the nose, so you can look out and see your hands a little bit there. And it just fills me with a good warm glow when you can take the controller, look through one eye, hold it there, and see it flow seamlessly from reality up through into the lens, and the whole thing fits together. And it's not there just naturally. Early on, of course, you've always got things that look offset and broken, but people go in and drive through all the different parts of the stack, and it's pretty complicated with the tracking service, the application, the compositor, and distortion, all these things, but you have to get everything right, and then reality just seamlessly flows into the simulation.
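
As an aside for the technically inclined, here is roughly what "there is only the right volume" means in code form. This is a toy sketch of distance-based gain under a simple inverse-distance model; the reference distance, the model, and the function names are my own assumptions for illustration, not the actual FRL audio pipeline.

```python
import math

# Toy "physically based" gain: loudness falls off with distance from the
# talker, the way it would in a real room, instead of being a user-adjustable
# volume knob. REFERENCE_DISTANCE_M is an arbitrary assumption.
REFERENCE_DISTANCE_M = 1.0

def gain_for_distance(distance_m):
    """Inverse-distance attenuation relative to the reference distance."""
    return REFERENCE_DISTANCE_M / max(distance_m, REFERENCE_DISTANCE_M)

def gain_db(distance_m):
    """Same attenuation expressed in decibels."""
    return 20.0 * math.log10(gain_for_distance(distance_m))

# A talker about four feet (1.2 m) away comes out a fixed, "correct" amount
# quieter than one at the reference distance; there is no separate volume
# setting for the listener to turn up or down.
print(round(gain_db(1.2), 1))   # about -1.6 dB
print(round(gain_db(4.0), 1))   # about -12.0 dB
```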

Sometimes I talk about how it would be interesting to take the lighting in a room and synchronize it with our screen flashes. We argue about how much it means when you're at 60, 72, 90, 120 Hz or something; it would be fun to take that and say, all right, here's our headset, let's also synchronize the lighting in the room. No external ambient lighting, just high-frequency LEDs pulsed exactly like that, and then map it the same way, map the exact room in virtual reality, and then be able to move the headset off and kind of squint off the side and have it be a completely seamless situation. So I think that matching reality is an interesting approach. And eventually, when we have people in there with sort of the Codec Avatars level of things, that'll be a whole other level. But that's not coming real soon for our mobile systems. Also on the video conferencing side of things: video conferencing is generally done in a way where you want it to work everywhere, so you've got every possible client endpoint going over the web, different WebRTC stacks, native applications and so on. And it means that, again, it's not pushed as hard as it possibly can be, as opposed to something like Oculus Link, where we are really pushing hard to minimize the latency, sending fractions of the screen over, decoding it, pipelining it in all these different ways. Certainly we can apply that type of latency work to conferencing in virtual reality, and maybe something like that moves over to Portal in some way to improve the latency over there, applying some kind of Oculus Link-level grinding on the low-level latencies to more general video conferencing systems.

So the original Quest turned out to be more right than we really expected, and the biggest problem was that we really didn't make enough of them. But Quest 2 is better, faster, cheaper... and we're making a ton more of them. And it's really rare to be able to honestly say things like that, where you improve on all the axes. Usually you wind up with a tripod of different constraints and you get to pick two; one leg's got to give in some way. But this is very close to a pure win. And, you know, by my normal methods here I'll point out every last little thing where it's not a pure win. But on net this is great.

So the biggest thing is that every Quest app should look better on Quest 2. It's a higher resolution screen, it's a faster processor so things should be smoother, and we automatically adjust the resolution up so things wind up looking better. The actual resolution is 3664 by 1920, but it's a full RGB stripe which, compared to Quest with the PenTile OLED screen, means it is a little over twice the number of subpixels, or, conversely, a little under twice the number on Go, which always had a few more subpixels than Quest did. And what this means for content is that on our current systems, Go and Quest, if you looked at something like Netflix, where you would have a screen that was fairly large and occupied a good fraction of the field of view, you could have a 1280 by 720 screen, and it would be great in the center and it would be aliasing a little bit out at the edges. On systems where we were able to turn on supersampling, like in the browser or Fandango now, you could have a 1280x720 screen that looked perfect pretty much all the way out. So on Quest 2 that means we can basically bump those numbers to a 1080 screen. You can either have a non-supersampled screen that stretches kind of IMAX style, or, if you're supersampling, it can turn into something that's just home theater style. Or you can take something that's a little lower resolution, like a 720 screen, and it now works out as something at monitor distance. Like a real person's monitor, rather than these gigantic screens that we've typically used. And this means that you can have multiple 1080 screens in a big setup. I mean, I have triple screens here set up for my desktop work; you can set up triple 1080 screens or something, they're bigger now in VR, but this is getting to the point where you can start doing real work with it. It might have some advantages over laptops in some situations.
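
For the curious, the subpixel arithmetic behind those "a little over twice" and "a little under twice" claims works out roughly like this. The per-eye panel resolutions for Quest and Go are the publicly quoted figures as I recall them, not numbers from this talk, and PenTile is counted at roughly two subpixels per pixel.

```python
# Subpixel counts, both eyes combined.

# Quest 2: 3664 x 1920 total, full RGB stripe -> 3 subpixels per pixel.
quest2 = 3664 * 1920 * 3

# Quest: assumed 1440 x 1600 per eye, PenTile OLED -> ~2 subpixels per pixel.
quest1 = 1440 * 1600 * 2 * 2

# Go: assumed 2560 x 1440 single RGB-stripe LCD shared across both eyes.
go = 2560 * 1440 * 3

print(round(quest2 / quest1, 2))  # ~2.29 -> a little over twice Quest
print(round(quest2 / go, 2))      # ~1.91 -> a little under twice Go
print(go > quest1)                # True -> Go had a few more subpixels than Quest
```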

Now, the newest feature that we've got, that we've never tried on any of our systems before, is this notchy interaxial displacement adjustment. Quest had a fully smooth adjustment where you could analog-slide it to exactly where you wanted; Quest had two independent screens, and the screens moved with the lenses, the entire thing moved. With Quest 2 it is a single LCD screen, and the lenses move on top of it, and they only move to three separate positions: we've got kind of a standard, a narrow, and a wide. And in wide, because it is just a single screen, when you move them all the way out you do wind up sacrificing some field of view; the field of view pulls in a little bit where you've just got black at the edges. So it's not ideal. People that have wide fields of view will notice that lack of field of view in Quest 2 relative to Quest. But it's probably, again, on net the right solution. It's better than Rift S or Go, which had no adjustment at all, and it has allowed us to get this much better screen.

Now, in general, the LCD versus OLED trades: we've popped back and forth between these on all of our different headsets, and most of you know the trade-offs here. The big ones are that with OLED you get a little bit less latency because the pixels change essentially instantly, you have a little bit purer colors, and you have, at least in theory, pure blacks. But because we had to do mura correction to get rid of some of the speckling and inconsistencies on the displays, we wind up not quite getting that whole win, and we wind up getting black smear and sometimes a two-frame rise time at the low end of the display range. So in general I've thought that, like on Go and Rift S, the LCDs have netted out a little bit better, even at comparable resolutions. Now that we're at twice the subpixel resolution, I think this is a really clear win. And it is clearly the best display that we've ever had.

But the one drawback that still does matter is that the latency hurts you a little bit. You command the LCDs to switch, but it takes them a few milliseconds to actually finish changing, and then we blast the backlight behind it. We still do have a little bit of an advance here relative to the previous ones. Our previous displays had a single backlight: we command it to flash and it would do a one millisecond burst over the entire thing. Now we have that split into two pieces, so the left eye and the right eye get their own separate bursts. That gives us a little bit more cushion to pull things in, because we don't have to wait until it's scanned all the way across and the very last pixel has had time to transition before we blast the screen. We can scan out half the screen, wait a while, and while the second half of the screen is scanning out, blast the first eye, then blast the second eye. So this is a very limited version of the rolling shutters that we would have on Gear VR or on Quest, where the display was continuously scanning out.
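
To make that cushion concrete, here is a back-of-the-envelope timing sketch. The scanout and settle times below are made-up illustrative numbers, not measured hardware values; the point is just that splitting the flash lets the first eye be lit before the whole panel has finished.

```python
# Hypothetical numbers for illustration only (not actual Quest 2 timings).
FRAME_MS = 11.1     # frame period at 90 Hz
SCANOUT_MS = 7.0    # time to scan the whole panel out
SETTLE_MS = 3.0     # time for an LCD pixel to finish switching

# Single global flash: wait for the last pixel to scan out and settle.
single_flash = SCANOUT_MS + SETTLE_MS            # 10.0 ms after scanout starts

# Split flash: the first eye's half only needs half the scanout plus settle,
# while the second half of the panel is still being scanned out.
first_eye_flash = SCANOUT_MS / 2 + SETTLE_MS     # 6.5 ms
second_eye_flash = SCANOUT_MS + SETTLE_MS        # 10.0 ms

print(single_flash, first_eye_flash, second_eye_flash)
```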

And there we've always had arguments about the relative merits of rolling shutters versus global shutters. Somebody did point out that, with the two-eye flashing, at one part in Home where we're scrolling the content sideways at a constant rate, because there is this slight delay between the left eye and the right eye for the same content being rendered, it felt a little bit like the panel was moving in and out in distance, because your eyes pick up a little bit of disparity from the same image arriving at different times. Now, we always had that on the full rolling shutter systems, but it is still something that makes a little bit of a difference.

So right now, at the same frame rate, there's a little bit more latency on Quest 2 than there is on Quest 1, because we have to wait a little while for the LCD to settle before we flash the backlight. We can usually claw that back by running at a higher rate. Now, this system was designed to run at 90 frames per second. We ran into some last-minute problems, so it's shipping at 72 by default, but we've got a little experimental option that you can turn on. What we missed for a while was that Guardian now runs as a whole separate process. Our compositor takes input images from multiple different clients, where the pop-up dialogues, the actual game screen UI, all of these can come from different places. But that winds up causing a little bit of a problem now that we have both 72 Hz and 90 Hz and, possibly, 60 Hz for some media cases. Guardian rendering at a different rate than the app winds up making the whole thing feel pretty jittery. So we're shipping it initially at 72. We'll sort this out and get the darn Guardian to dynamically adjust to the right frame rates. But when you run at 90, then our latency is pretty much a wash.

We do actually have the possibility of running this display at 120 frames per second. This is, again, one of those cases where the display engineers say, well, we designed this for 90, it's certified for 90, but somebody just went in and said: "hey, it kind of runs okay at 120". And just like with many of our previous cases like this, we get into all these arguments: all right, it's not exactly certified for it; if you have a really cold headset, like if you left it out in your car overnight in a cold climate and you bring it in and put it on, even at the normal rates it's going to wind up having some ghosting and some problems, and it would have much more at 120, until it warms up, because LCD switching times are temperature dependent.

And realistically, there are not many mobile applications that could run VR at 120 Hz well. But there are some, like our shell application; you could be in the browser or something like that and watch it at 120 Hz. And for some people, there is still a slight difference between 90 and 120 Hz. And I'd love to see really competitive games, like Beat Saber or something, being able to run at 120 frames per second. Going by the way things have gone historically it probably won't happen, but I hold out some hope. You know, I'll go with my Star Trek metaphor again, it's like:

[In James T. Kirk voice] - Scotty, give me warp 10! The ship can take it.

You know, we can hold together on this. That would carve a little bit more latency off still, and it would be a little bit more stable.

We also have another new tool for this that I was super excited about: the idea of dynamically firing off the retraces, instead of always waiting for exactly 60, 72, 90, 120 or whatever. On PCs you have a lot of monitors that just let you run your frame when you want to, and that is great for smoothness. Instead of getting into any of these lurchy steps where you just barely miss the frame rate you're targeting, you just run it out when you want. And it is an almost pure win on a PC monitor, but the difference is that that's a full persistence display, where the backlight is essentially always on, or maybe it's PWMing at some incredibly high rate for dimming. In virtual reality we just have this one blast of the backlight. And I had hoped that it would be okay to just say, okay, we missed 72, it's actually only 70 frames per second. But our early experiments show that varying this much dynamically leads to a visible flickering. I'm not convinced that this is the case yet, though. This is one of those things where we need to get back into our laboratories, put on some really high-end sensing equipment, and run all of this, because there are a lot of things that could be going wrong with it, because we have different timing for the backlight flash versus the scan out. It doesn't look good right now, but I have some hope that we just haven't done things exactly right and that we may be able to get some more out of this. And that would be great. Somebody also made a clever suggestion for a fallback plan where, if the killer is the flashing of the backlight, it's possible that we might leave the backlight at an exact cadence, but delay the scanning out until we're really ready. That would eat up a bunch of our cushion, and the flash would still be coming out at exactly the same time, but if we started the scan out late it might leave a hint of a ghost, which would still be better than an entire frame of judder. So we have some fallbacks that we might be able to pull in, one way or another.

But on the topic of extremely high performance systems, I'd like to make a bit of a pitch to game developers to consider architecting the games where it really, really matters. We have these things like Audica and Beat Saber and Synth Riders, competitive things where people really care about the difference between 72 and 90 and 120. And it's great to see in the PC space people doing these tests showing that elite competitive gamers can tell the difference, and that it makes a meaningful difference in objective tests, even between a 240 Hz and a 360 Hz monitor. So I'd like to think that there's some headroom for doing kind of exotic things here. The way games are structured right now is: you start your game frame, you ask the VR system what's my predicted display time, and it tells you that whatever you do now is going to wind up showing on the screen usually 48 milliseconds or so in the future. It has to go through the game simulation, the rendering, the GPU needs to draw it, the compositor needs to put it together; it's this long line of things. So there's this 40-something milliseconds of delay. Now, you can carve that down: if you don't have the extra latency mode on you can pull one frame out of it, you can phase-align different things, and you can nudge this in different ways, but it's still a substantial double-digit number of milliseconds. And those inputs wind up being extrapolated: if you're pulling your arm out and the trigger gets pulled here, it'll return saying okay, trigger went down, but it's predicting a little bit further ahead, this extra 40 or 50 milliseconds. And that's just the way the games are set up right now: you get the input, you do your simulation, and then you do your rendering, and it goes through the whole pipeline. But it's possible to change this so that, instead of saying I am simulating what you're going to do 40 milliseconds from now, you could be simulating at an arbitrarily high rate. There's lots of stuff where, if you're doing simple things, like some of these things that are almost on rails and you're just pointing at things, that could be done at a significantly higher frame rate. You could do it at 200, 300, whatever frames per second; we deliver 1000 unique IMU samples per second, and it would be possible to do some __very precise__ positioning with that. But you would have to structure your game so that what gets rendered is not just what's been most recently simulated. You'd have to do a little bit more decoupling. It's an interesting thing that I'd like to see somebody take a stab at, and then do a rigorous, objective A/B comparison. You know, set somebody up: here is conventional frame-synchronous rendering, and here is the 500 Hz super-sampled stuff, and see if it actually makes a difference. It might.
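
As a sketch of the kind of restructuring that would take: this is my own toy illustration of the decoupling idea, not an Oculus API, and poll_controller_sample() and render_frame() are hypothetical stand-ins. The input/simulation loop runs at a much higher rate than the display, and the render loop just grabs whatever state is freshest when it's time to build a frame.

```python
import threading
import time

# Toy decoupled simulation: a high-rate sim loop integrates the latest
# controller/IMU samples, and the render loop, at display rate, reads the
# freshest state instead of simulating exactly once per rendered frame.

latest_state = {"trigger": False, "pose": (0.0, 0.0, 0.0)}
state_lock = threading.Lock()

def poll_controller_sample():
    # Hypothetical stand-in for reading one of the ~1000 Hz IMU samples.
    return {"trigger": False, "pose": (0.0, 0.0, 0.0)}

def render_frame(state):
    # Hypothetical stand-in for building a frame from the latest sim state.
    pass

def simulation_loop(rate_hz=500):
    period = 1.0 / rate_hz
    while True:
        sample = poll_controller_sample()
        with state_lock:
            latest_state["trigger"] = sample["trigger"]
            latest_state["pose"] = sample["pose"]
        time.sleep(period)

def render_loop(rate_hz=90, frames=10):
    period = 1.0 / rate_hz
    for _ in range(frames):
        with state_lock:
            snapshot = dict(latest_state)  # freshest simulated state
        render_frame(snapshot)
        time.sleep(period)

threading.Thread(target=simulation_loop, daemon=True).start()
render_loop()
```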

Now, Link is supposed to be the convergence, to replace PC headsets. Quest 2 is supposed to basically be the future of the Rift line as well as the future of the Quest line. And on day 1, when you plug it in, it's still not going to be quite as good as a PC headset. We're still doing the compression, there's some extra latency, there are these things that we go through where it's pretty darn good, but it's not as good in every respect. Now, we have the potential for actually making it better, where we can run at these higher frame rates. While 120 Hz might not be anything that a mobile system can do, there are PC systems, PC games, where that might be more reasonable, and we have a much higher resolution screen. So even if we do wind up doing video compression, we may be able to get higher quality images in some ways, and we may be able to do various image amplification things. So I have reasonable hopes that it can get better. Last year I laid out some of the opportunities that we have for really taking advantage of the full USB 3 bandwidth on the Link, where we could have much better latency and quality than what we're doing today, but it would be a very different system. So I still think that there are some useful things that we can pull together with that.

There are going to be easy things, just turning up rates for everything on Quest 2: it's got better video codecs, we can do 8K video, the latencies probably aren't significantly better, but we can throw a lot more bitrate at it. It was almost silly shipping a USB 3 cable initially and making a big deal out of it, when we couldn't push more than 150 megabits and USB 2 cables worked just fine, with just a couple milliseconds more latency in the actual wire transit time. But we've got more opportunity to throw more through it, though still nowhere near the full bandwidth with conventional encoders. And there are other, more dedicated things that we can do when we get the time for it.

It's possible that the G-Sync type things, with variable frame rates, should also help with Link, if it turns out that we are mistaken with our current approach and the flickering is not absolutely inherent in doing that on a low persistence display. Being able to run a dynamic frame rate all the time would be a positive thing for that.

And if you're going to plug a cable in from the headset to a PC, there are things that we should be able to do going the other way. Like, we should be able to make recording and casting and things like that work more reliably when you've got a wired connection to a PC. There are lots of places, like Wi-Fi hell zones at conferences, where you just can't expect to do anything wirelessly, whether it's casting or streaming or anything.

There is still another thing that we are not quite as good at: the Rift S has five cameras, so there's an extra camera, and there are some controller poses that will remain problematic, that still just won't be as good. But we do have other tracking improvements coming along. The cameras that we have on Quest 2 are basically the same as on Quest, and that has some implications for tracking and resolution and various other things as well.

We still haven't announced a full wireless connection system for Link, and we have these interminable arguments internally about this, about quality bars. I keep saying that I love the fact that I have existence proofs: whenever we argue about this I can say, right this very minute someone is using a wireless VR streaming system and getting value from it. It is not as good as being wired, it is not as good as we might hope, it might not meet your personal minimum quality bar, but it is clearly meeting some people's minimum quality bar and delivering value to them, because they keep coming back and doing it. So I continue to beat that drum: we should have some kind of an Air Link.

And then it gets even more controversial when we say, well, if you've got an Air Link where you can talk to an arbitrary IP address, then what about cloud VR gaming? That just turns the knob even further: okay, obviously it's even worse, obviously more people are going to find it unacceptable, and it will be a terrible experience for more people. But still, I am quite confident that for some people, in some situations, it's going to be quite valuable.

So I think one of the big shocks for people with the Quest 2 reveal, or leak, was that we're using the Qualcomm XR2 chipset. For the first time we are using a state-of-the-art chip. It was the right thing for us to do on Go and Quest to use more trailing-edge chips; we did not need the extra grief and hardship of working with something that hadn't been fully debugged, worked out, and been through multiple other vendors. But our hardware team has been maturing in multiple ways, all of our software is maturing, and using the state-of-the-art chip on this has delivered some real benefits.

It is a bigger boost on the GPU than the CPU, which is good, because we've got this higher resolution screen that can run at faster frame rates. GPUs generally deliver value by scaling wider: you just get more shader units, and you can generate more pixels, and it winds up being a nice thing, where it still takes more power but you can keep it at a lower clock rate and derive more value from it. But the CPUs, unfortunately: if you look at benchmarks, we could run them at fully __twice__ the speed that we're running them at for the way we're shipping Quest 2, but that would mean it takes four times the power. When you get performance by cranking up the clock frequency, you wind up in this quadratic power and thermal regime, which winds up being really painful.
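
A rough sketch of why that scaling is so punishing, using the usual dynamic-power model where power goes with frequency times voltage squared. The specific voltage bump needed to hold a doubled clock is my own illustrative assumption, not a measured XR2 number.

```python
# Dynamic power model: P ~ C * V^2 * f, with switched capacitance C constant.

def relative_power(freq_scale, voltage_scale):
    """Power relative to baseline for a given frequency and voltage scaling."""
    return freq_scale * voltage_scale ** 2

# Doubling frequency at the same voltage would only double power...
print(relative_power(2.0, 1.0))             # 2.0
# ...but if the higher clock needs roughly 1.4x the voltage to stay stable,
# power roughly quadruples, which is the regime described above.
print(round(relative_power(2.0, 1.41), 1))  # ~4.0
```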

And this is only going to get worse for us. The power and thermal limits on mobile: I mean, we went through this really hard on Gear VR, where we had all of these thermal problems, and things were overheating and shutting down, and it was one of our big complaints; a large fraction of users would wind up using it until it overheated. We got away from that on Quest, especially, where we had active cooling; we had a fan going there that could basically cool the entire thing running at pretty much peak clock rates. But with Quest 2 we're at a point where we have a fan, and it adds a lot to the cooling capability, but we are still not close to being able to run everything on the chip flat out. So we have to carefully balance out the different things here.

And it's interesting: a few years ago I made the comment that I was trying to get people used to the notion that mobile may never catch up with where we were on the PC at the time, maybe a 1080 Ti or something, a high-end PC system which is many, many times faster than the mobile chips. And we're still getting faster, but there's this hazard that a lot of people think Moore's law is basically nearing the end, though people have been saying that for a long time. I mentioned this to Jim Keller, who was one of the really senior team leaders for a lot of important teams at AMD, Apple, Intel, and Tesla, and I said, you know, I'm worried that mobile may never get to where we are on PC, that we're going to have to start learning to live within some of these limits. And he was basically: no, we've got this, there is a lot more to come, Moore's law is far from dead. So I'll be really happy for that to be the case; I've only got kind of middle confidence on this prediction. And as we've gone from the [Snapdragon] 805 on the original Note 4 to where we are now with the XR2, that is a lot of performance that's come up from there. That was a dual-core system that, interestingly, was not running at a much lower clock rate than what we're running at now, and could in fact run faster than what we're clocking the base rates at here, but the GPU now is a whole lot faster. And instead of dual core we've got 8 cores; obviously we keep most of them for our system software, but it is still a really significant increase in power. So don't give up on Moore's law yet, things are still working pretty well.

But on the other hand, you still do have these PC supercomputers, where Nvidia just announced their BFGPU, the RTX 3090, which is just an astounding system. In a big PC you can be drawing 500 watts of power as you're running this, and heck, you can stick two of them in and put an NVLink between them. So there will always be things that you can do on these big systems that can't be done on the self-contained mobile systems. Having Link, and having the ability to continue to take advantage of those amazing things on the PC, is a really great thing. Again, this is what I always wanted from the beginning: a self-contained system that could plug into a PC and take advantage of all that power.

So, one of the things that we introduced on Go and Quest that didn't work out quite as well as I had hoped was fixed foveated rendering. The idea being that we know our optics are clear in the center and have more problems at the outside, where you can't see as much clarity, so we should just render fewer pixels out there. And Qualcomm did a good job with this extension that took advantage of the fact that all the rendering is divided into bins on the Qualcomm chips, and we could assign bins to be cut to, you know, half or quarter resolution. And it seemed like a really powerful thing, where you could set this up and render half the pixels and still cover the screen. But it turned out that when you go that far it really didn't look very good, and it only wound up giving 15% to 20% more performance. There are a number of reasons why it wound up like this. And unfortunately a lot of developers just kind of turned it on, cranked it up to the maximum, and said I want all the benefit I can get. There are a lot of applications that I think made kind of a poor choice there, where it really doesn't look that great. You see things, especially anything like a sign, anything with text, where as soon as it goes about halfway out to the edge of the field of view, all of a sudden it's this ugly, pixely mess.

And there are a few reasons for this. One is that there were some subtle things with the way the blocks were aligned; they weren't set up as symmetrically as they could be, so sometimes you had lumps protruding in a little bit further than they otherwise would have. And even though we say, all right, it's half the pixels, instead of what you'd expect from that, a nice, even bilinear stretch up, what happened was the pixels were doubled. It would render at the lower resolution, then it would double the pixels as it scanned them out to the texture, which meant that instead of a smooth interpolation between two neighboring pixels you had no interpolation for one pixel and then interpolation for the next one. That's why everything looks blockier and notchier than you'd like. Now, we have some fixes for this, some new approaches that let us get the proper bilinear filtering, and it doesn't waste the bandwidth: instead of writing out the doubled size it just writes out the normal size, and in the compositor we're able to do the interpolation there. So it saves us some bandwidth as well as avoiding the pixel rendering. And there are also some really twitchy, geeky-level things where, instead of doing these teeny tiny little bins, we're able to pack a bunch of the bins together and get them rendered a little bit more efficiently. So I'm hoping that fixed foveated rendering winds up being both higher quality and a little bit more of a performance win going forward.
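
A tiny 1D illustration of the difference (my own toy example, not the actual compositor code): upscaling a half-resolution row of pixel values by 2x, comparing the old "write every pixel twice" behavior with a proper interpolation between neighbors.

```python
low_res = [0, 10, 20, 30]  # half-resolution pixel values along one row

# Old scheme: each low-res pixel is simply doubled when written out.
doubled = []
for v in low_res:
    doubled += [v, v]
# -> [0, 0, 10, 10, 20, 20, 30, 30]  (flat steps: blocky and notchy)

# Proper filtering: interpolate halfway between neighboring pixels.
interpolated = []
for i, v in enumerate(low_res):
    interpolated.append(v)
    if i + 1 < len(low_res):
        interpolated.append((v + low_res[i + 1]) / 2)
# -> [0, 5, 10, 15, 20, 25, 30]  (a smooth ramp, no repeated flat spots)

print(doubled)
print(interpolated)
```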

But one of the things that we're doing that mitigates some of this is that we're moving to an automatic performance management system. Instead of having the developers set the clock rates and then the fixed foveated rendering, we have for a while now been doing dynamic clock rate management, where we monitor the frame rate and clock it up as your frame rate is starting to dip. But what I had suggested last year, and that we've got implemented now, is that you can ask for the fixed foveated rendering to also be dynamic. So this means that you start off at whatever your minimum clock rate is, the clock rates on the GPU go all the way up, and only when the GPU is maxed out do we start bringing in the fixed foveated rendering from the minimum level. And I'm really happy with this. It means that most applications, most of the time, can avoid the fixed foveated rendering, but when they get into some really overcommitted, oversubscribed scene, it'll come in just as much as it needs to and then start to go away when it's no longer necessary.
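
Here's a minimal sketch of that escalation policy as described above. This is my own pseudo-implementation, not the actual runtime; the level tables and the PerfManager name are made up for illustration.

```python
GPU_CLOCK_LEVELS = [0, 1, 2, 3]   # hypothetical discrete GPU clock levels
FFR_LEVELS = [0, 1, 2, 3]         # 0 = foveation off, 3 = most aggressive

class PerfManager:
    def __init__(self):
        self.clock = 0   # start at the minimum clock level
        self.ffr = 0     # start with foveation off

    def update(self, frame_over_budget):
        """Call once per frame with whether the GPU missed its frame budget."""
        if frame_over_budget:
            if self.clock < GPU_CLOCK_LEVELS[-1]:
                self.clock += 1      # prefer raising clocks first
            elif self.ffr < FFR_LEVELS[-1]:
                self.ffr += 1        # only then trade image quality for speed
        else:
            if self.ffr > 0:
                self.ffr -= 1        # back foveation off as soon as possible
            elif self.clock > 0:
                self.clock -= 1      # then relax clocks to save power

# Example: a run of heavy frames escalates clocks before foveation kicks in.
pm = PerfManager()
for over_budget in [True, True, True, True, False, False]:
    pm.update(over_budget)
    print(pm.clock, pm.ffr)
```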

And we've got some more things like that we could conceivably take advantage of. Fixed foveated rendering is one way to deal with rendering fewer pixels. Another, more direct way is to scale the entire buffer down. And last year I had been suggesting that, with the state of fixed foveated rendering then, in many cases developers that needed 5% or 10% might have been better off just scaling the entire screen down. Hopefully with these fixes FFR is a little bit better. But you don't want to push it all the way to where it's looking bad; at that point you really would be better off scaling the screen down. So maybe we can start taking advantage of that, and then maybe we can also start taking advantage of slowly ramping the frame rates down, where again it's better to drop even to 60 frames per second and maintain one frame per refresh than to have it be a juddering mess.

But all of these do still make me nervous, because many of the things that we had to do in the original Gear VR days to make it possible were to stop doing all of the performance management things that Samsung was automatically doing on their phones. Now, they were much more obsessive about battery life than about maintaining, you know, one-to-one frames. So it's not that it can't work out well, but there's a hazard here: the more control we take, the more decisions we start making. Developers can always do a better job, but most of the time they have a million other things to worry about; they're worried about the gameplay rather than whether they should be tweaking things on a frame-by-frame basis. So I think it's going to net out to be a good thing having us take over more and more of the control there. But I always do recall that there's a part of the system called the MPD, part of the power management system, that somebody mocked as Make Poor Decisions. And there's always that hazard at the system level of thinking that you know better and making decisions that wind up hurting the app. So we have to always be a little bit vigilant about that.

On the power budgeting side, with the CPU clock speeds, one thing that's especially unfortunate: the XR2 has not only big cores and little cores, like we've had for a while; one of the big cores is a __prime__ core, which has a little bit higher clocking capability and more cache, and it should be a case of: it burns more power, but it can run faster. And for a lot of game systems that could be a really great thing. Usually you have your game thread or your render thread that you know is the critical path, the long pole, and that's what always causes frames to miss, and it seemed like it might be a good idea to pin that to the prime core. But it turns out, when we did a bunch of measurements, the prime core just uses more power, even at the same clock rates. I guess the bigger cache means something, and there may be other architectural issues, but it's really unfortunate: we wind up clocking the prime core down a little lower than we clock the other gold cores. If we just had more thermal margin, then we could run these things significantly faster and unlock other performance capabilities.

And, you know, in some ways I regret that I see us becoming almost more like Samsung as the company matures and ages. When I started on Gear VR I remember doing systraces and going: what is all this garbage that's taking up our CPUs? What are all these processes running that are not part of the application that's running right now? And we try our hardest to keep all of our system stuff off of the cores that are reserved for the games, but some things creep on, and it's not free; the other cores still do have parasitic bandwidth losses, and we have more and more processes and services that we're spinning up for different things. And it creeps into everything. The whole independent-processes thing is a leaky abstraction, and we have to continue being wary about this. But as the company grows, everybody wants to have their own team, their own application, their own part of the system there. And they run independently, and they wind up tripping over each other in a bunch of different cases.

I made one pitch (I knew it wasn't going to go anywhere) making the point that all of these services we have going on could just be one monolithic service. We could throw it all together: combine VR shell, VR runtime, Guardian, Horizon, all of these different things, and put it into one process. All the teams would hate it, because they'd be stepping on each other's toes, but it would save us memory and resources, and we wouldn't have things like the frame rate correspondence issues we're having with Guardian. But this is trying to push back the tide; it's not going to happen. I'll fight the good fight as much as I can there, but this is going to be a kind of continuous problem.

Now, we got a lot of benefit, on Quest especially, from using the DSP on the 835. We have several new toys on the XR2 to deal with: we have some little computer vision accelerators, and we have the tensor accelerators for some of the neural network work, and we're going to get good value out of this. And it's a good thing that we don't have too many people really twisting our arms for external access to these, because we wouldn't have the bandwidth to deal with it. You know, if I was writing some from-scratch custom engine for VR I would want to get my hands on them, but they're not coming to user space for you any time soon, unfortunately. But it does let us do more inside the power budget that we reserve for ourselves.

Then there's the idea of custom silicon, like what can you do there. For the most part the GPUs are doing what VR mostly needs, and some of these things with the computer vision and tensor accelerators are getting us some good value. But there is always the case that you can specialize something a little bit more and take more advantage of it. The lead times on this are really long, though. Qualcomm was asking us "hey, what should we put in these things" really early on. I mean, way back in the Gear VR days, when they were planning the XR2, they were coming and asking what kind of custom stuff we would like to see in there. And I was kind of like, heck, that's years in the future, I don't know what our computer vision stuff's going to look like; most of the neural network acceleration stuff wasn't even really on my radar at that time. So they had to have a lot of foresight to be looking ahead to get those things in and landed, and I'm happy to be taking advantage of them right now.

So on the temperature limiting side, I am having some of these little battles internally that are flashbacks to Samsung, where with Gear VR we had all these issues about temperature shutdown. With mobile chips there are really two different thermal limits. There's a thermal limit where the very low level stuff shuts the chip down because something is about to burn in a non-recoverable way, and it just shuts it off. But far, far before that is the system designer's decision about how hot we want to allow the chip to run. And you make that decision based on things like: what's going to be the case temperature in different places, do you care about the average or the very worst case, what's the hottest spot on the case. And there can be legitimate differences of opinion about where the right place to draw that line is. There can be a temperature where you can say, all right, if you held somebody down and held this against their skin and they couldn't move for 30 seconds, this is going to leave a mark on them. But I would say in many cases that if you've got something that's really hot, you just move your hand. And there might be an opportunity to let some areas of it get a little bit hotter than others.

And there are lots of options on cooling, where you have the choice between a completely passive solution, like Go, which was very nicely engineered: an aluminum front plate, heat spreaders behind it. It was possible to overheat Go, but for the applications that it had it worked pretty well. With Quest 2 we've got a fan inside there, and it can spin up and move quite a bit of air when it needs to. There was one build where the fan was broken and it was on all the time, and I thought something was wrong with the audio circuit, because it sounded like there was a whole lot of static coming through the audio system; it was just making this buzzing all the time. And there's the trade-off with fans where you can be small and fast or big and slow, and heavy goes with the slow. It's kind of the helicopter versus jet engine way of moving air. But there are limits to what we can do there, although we could thermal engineer a lot harder than we do right now. In the end my kind of turbo Ferrari style of engineering is probably not the right endpoint for consumer head-mounted displays. We don't need melted pistons and broken input shafts here.

But I do try to tug a little bit here, where there is a tendency toward perhaps over-conservatism in some ways, and we can take a few more steps towards a little bit better performance. Now, in some non-obvious ways, with Quest 2 we really are optics limited now. Instead of being able to look at the screen and say, well, clearly I can see screen door effect, I can see the individual pixels: for applications that do everything right (there is a right way to make peak quality content, a supersampled layer with properly sized content, sRGB and all the right things set up), over all but a very small part of the screen we are rendering pixels that wind up not being directly perceivable. The obvious thing is the quality of the optics, where we've always had the Fresnel rings around the outside that cast our god rays and wind up kind of chopping up things at the edges. The optics designers have to make tough calls as they trade off fidelity in the center against the edges; we've got a flat screen and curved optics, and it's just hard to hold proper focus all the way across to the edges. Trade-offs are made. But there are some other things that are not as obvious. One is chromatic aberration correction. With chromatic aberration, you put white on the screen and you wind up with red, green and blue being stretched apart from each other. We do a good job with the correction now, you can still see a little bit of a fringe from it, but if you look at the uncorrected screen, like if you popped the lenses off of the head-mounted display and looked at a little letter 'a' near the edge of the screen, the chromatic spread is so great that it's not one letter blurred with offset color bands: you have a red 'a', a green 'a' and a blue 'a' that are completely separate from each other, they're spread that far apart. And because the color filters on the LCD are not perfectly chromatically pure, the corrections that we do mean that, while we're able to mostly smoosh everything back together, there's still a little bit of a smear between all of those regions that winds up acting as a little bit of defocus. This is also why subpixel rendering just doesn't work in VR, like the font rendering techniques that people do on desktops to get ClearType-style independent red, green and blue rendering. It's pointless to do in VR because we do not resolve chromatic aberration down to the subpixel level. You can go ahead and say, well, I'm going to render something different for red and blue, but it's going to be moved around based on how you've got the headset on your head, how wide your eyes are, what you've got set up. This is something that you can try in your own headset: if you look at a screen, look over in one corner at how much fringe is there, and then just adjust the headset a little bit, that fringe will move a macroscopic amount, potentially a pixel or two, depending on where your eyes were. Of course, if you don't have the perfect IPD for whatever it's set up for, that becomes more of a problem.
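
For reference, here's a toy sketch of how chromatic aberration correction is typically done in a VR distortion pass: the red, green and blue channels are sampled at slightly different radial positions so the lens's dispersion spreads them back on top of each other. This is my own illustration, not the shipping compositor, and the polynomial coefficients are made-up numbers.

```python
# Per-channel radial distortion: sample red slightly inward and blue slightly
# outward relative to green, so the lens's dispersion re-aligns them.

def radial_scale(r2, k1, k2):
    """Simple polynomial distortion factor as a function of squared radius."""
    return 1.0 + k1 * r2 + k2 * r2 * r2

def chroma_sample_coords(x, y):
    """Given a lens-centered coordinate, return per-channel texture coords."""
    r2 = x * x + y * y
    green = radial_scale(r2, k1=0.22, k2=0.24)          # base distortion
    red = green * radial_scale(r2, k1=-0.006, k2=0.0)   # pull red in a touch
    blue = green * radial_scale(r2, k1=0.014, k2=0.0)   # push blue out a touch
    return (x * red, y * red), (x * green, y * green), (x * blue, y * blue)

# Near the center the three channels land almost on top of each other;
# near the edge of the lens they sample visibly different spots.
print(chroma_sample_coords(0.1, 0.0))
print(chroma_sample_coords(0.9, 0.0))
```

An eye-position-aware version of this would just make those coefficients, or the lens-center offset, a function of where the eye currently sits relative to the lens, which is the idea discussed next.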

Now, this is possibly something that eye tracking could help with. In general I'm not as bullish on eye tracking for foveated rendering as most people are. It's exciting, and you've seen the demos with the ray tracing stuff where you look over here and you can throw away 95% of your pixels, but for a bunch of reasons this is probably not going to work out as well for our conventional headsets, with conventional rendering and the way we can do foveation. But interestingly, this might be a possible win where, even if you don't have super fast reactions, if we're able to tell where your eye is relative to the lenses, we still can't correct focus, but we could theoretically have unique chromatic aberration corrections for each eye position. And that could let us claw back some of this, to the point that maybe we could do subpixel rendering, and in that small sweet spot where the optics are still good enough, we could claw a little bit more resolution out of it.

Now, there's another interesting techie bit with Quest 2. I was so proud of the hack of using the display processing unit, the DPU, to do chromatic aberration correction on Go, and then Quest. It was this wonderful way that a part of the hardware we really weren't using for anything could take over a job that otherwise took up a significant chunk of the GPU. And we got a little bit blindsided by one of the internal details on the XR2: while for almost everything it's just a superset and better, there's one tiny little thing in the display processor where somebody must have decided that nobody uses all of these channels, and some of the channels got a little bit downgraded, so our old scheme of using multiple windows and stretching the red and the blue separately from the green no longer worked directly.

But one of the engineers made a fairly heroic fix, late in the game, that used yet another feature that we weren't using, of display write back to memory, that allowed us to go ahead and composite things together and still get our display CAC. It's wasting even more main memory bandwidth, to the point that I'm really wincing about it. But it turns out these chips really have more main memory bandwidth than most of what the GPU and CPU use, so it's still kind of working out okay for us.

So if we are optics limited, where do we go from here? There are different optical designs that future headsets could have: more complex optics, doublets, and so on. Many of the "ancient age" head-mounted displays from before Rift, back when they still had terrible displays, had crazy optical paths; some of them had literally a dozen lenses, designed by microscope designers or something. They could give you a nearly flat focal field and a practically square image, they didn't even need distortion correction, and they needed very minimal chromatic aberration correction. You can get a lot of that if you're willing to do exotic optical trains like that. But in those cases the lenses were glass, they were heavy, and one of the things I worry about consumer-wise is drop testing. If we were doing this in a normal auditorium I would ask everybody to raise their hands: how many people have knocked a headset off a table, heard it clatter to the ground, and wondered, did I just knock something out of alignment? In our current designs the most hazardous thing is the cameras. The lens side, being a single lens, is in pretty good shape, but on Quest, cameras getting knocked out of line was one of our real concerns with drop tests, and we've got various things with dynamic calibration going on for that. If we had an exotic multi-lens optical system, that might be a problem. There are other exotic optics, like pancake lenses and multi-bounce polarized systems, that can offer more robustness and possibly higher quality, at the cost of possibly having other issues like ghosting.

If we don't wind up getting better optics, and we still wind up with something similar to our current Fresnel systems, then it might turn out that displays are better served by going to high dynamic range: doing things like localized dimming, rescaling everything, and having brighter areas. Even if we didn't get more fidelity, that could still make better experiences and have some good benefits.
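As a rough illustration of what "localized dimming and rescaling everything" means, here's a toy zone-based sketch. The zone count and the max-based heuristic are assumptions for illustration; a real display pipeline would also have to deal with zone blending, halos, and temporal behavior.

```python
# Toy sketch of zone-based local dimming (not any shipping display pipeline):
# pick a backlight level per zone from the brightest pixel in that zone,
# then rescale the LCD pixel values so perceived brightness is preserved.
import numpy as np

def local_dimming(luma, zones=(8, 8), floor=0.02):
    """luma: HxW array in [0, 1], H and W divisible by the zone counts.
    Returns (backlight level per zone, compensated LCD values)."""
    h, w = luma.shape
    zh, zw = h // zones[0], w // zones[1]
    backlight = np.zeros(zones)
    lcd = luma.copy()
    for zy in range(zones[0]):
        for zx in range(zones[1]):
            block = luma[zy*zh:(zy+1)*zh, zx*zw:(zx+1)*zw]
            level = max(block.max(), floor)        # dim the LED for dark zones
            backlight[zy, zx] = level
            lcd[zy*zh:(zy+1)*zh, zx*zw:(zx+1)*zw] = block / level  # compensate LCD
    return backlight, lcd
```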

Another thing that's not obvious, but winds up limiting our current resolution and fidelity, is that the tracking cameras are still basically the same low resolution that we had on Quest. It's surprising that these low-resolution cameras track as reliably as they do. They're very sub-pixel accurate, pretty deeply sub-pixel, but it still winds up that if you take something presented at peak quality, the best time warp layers, everything supersampled, and move it up so it's a couple feet away from you in VR, then as you're reading it carefully you will notice that everything is slightly jittering around. That's because you're at the limits of our tracking precision. You can work around this in various ways: if it's a giant screen on a billboard far away, it's not a problem, because rotation is super precise; the IMUs are very, very good. It's the translation we derive from the optical cameras that has a limit to it. That has design implications: people want things they interact with at arm's length, but that has problems both for focus, without varifocal, and because it sits at the limit of tracking. Of course, most people just rendering things into the world aren't at the absolute limit of quality, so it mostly works out right now. But we are absolutely at the edge of what we want to do there, and whatever the next headset is, we've got to get higher resolution on the tracking cameras.
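Some rough, assumed numbers make it clearer why a close-up panel wobbles while a distant billboard doesn't. None of these figures are measured specs; they just show how sub-pixel tracking error turns into visible jitter at arm's length.

```python
# Back-of-the-envelope numbers for why close-up layers jitter
# (all figures are assumptions for illustration, not measured specs).
import math

cam_pixels      = 640          # assumed horizontal resolution of a tracking camera
cam_fov_deg     = 100.0        # assumed horizontal field of view
subpixel_frac   = 0.1          # assumed feature localization accuracy, in pixels
feature_dist_m  = 2.0          # assumed distance to tracked features in the room
panel_dist_m    = 0.5          # virtual panel held at roughly arm's length

angle_per_pixel = math.radians(cam_fov_deg / cam_pixels)
angle_error     = subpixel_frac * angle_per_pixel
position_error  = angle_error * feature_dist_m      # head position uncertainty (m)

panel_angle_err = position_error / panel_dist_m     # apparent wobble of the panel (rad)
display_px_deg  = 20.0                              # assumed headset pixels per degree
wobble_px       = math.degrees(panel_angle_err) * display_px_deg

print(f"~{position_error*1000:.2f} mm head jitter -> ~{wobble_px:.1f} display px wobble")
# With these assumptions: roughly half a millimeter of jitter, about a pixel of wobble.
```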

Ergonomics-wise, we are a little bit lighter, a little bit smaller. It's still not quite where Go was, but better than Quest was. The great thing is we have these accessory head straps: the hard, rigid strap, and the battery-counterweighted one, which a lot of people have wanted for a long time. It's more weight on your head, but the counterweight winds up unloading it off of your face, so that should hopefully be, for most people, the most comfortable headset we've ever had. But there's still so much room to go there. I still want to see ultra-lightweight headsets; you still don't want to be in VR for hours at a time right now, and as we start going to productivity applications you really need to sort that out. And as you go ultralight there may be synergistic benefits: if you make something that fits more like glasses, where you put on nose pieces, you can locate the lenses much more precisely relative to the eyes, and that can help with the optics-limited resolution situation. There are very strong opinions about what can work, wired versus wireless. I still tend to think there might be useful things in wired setups; I keep pointing out that billions of people have used wired headphones and gotten value from them. Obviously a thin little headphone wire does not carry all the data we might want from a completely separate compute puck of some kind, but we did look at this in the latter days of Gear VR, a kind of two-part plug-in instead of drop-in, and there may still be some useful things there.

On converging with Go: there are still some things for which Go is the best headset. If you just want to look at an immersive video, you set something up, put it on, and you are just there; magically everything works. Right now, you put on a Quest and you usually have to acknowledge Guardian. We still fail too often, where you have to reset Guardian, you may have to acknowledge things inside your area, and we're moving toward things like selectable users. There are all these steps before everything just appears in front of you. And it's still kind of a glitchy mess: you put on the headset, it usually blinks in some scene of where you were, then Guardian comes up in some way, you acknowledge something, maybe it flashes up a bit of shell before showing a separate application. These problems are hard to track down when you might have literally four or five separate teams at the company that have to coordinate to make it all happen. So again I kind of beat the drum: if we integrated more of this, sorting these things out and making them perfect would be a lot easier.

So the pitch is that eventually, putting on the headset should be as seamless as answering a phone call, because eventually you might literally be answering calls in VR. If we get to where we want to be with communication, you'll be using VR to communicate with people: you want to be able to be paged, put on the headset, and just immediately be there. Every second counts. Whether it takes 10 or 15 seconds to get back to where you were really matters for the experience. But having things converged now, on our VR platforms, is an enormous relief. It's hard to overstate how much internal drama this has been over the years. My vision for VR was always a universal device: we should be able to play games, browse the web, do productivity things, connect to a PC, to cloud services, all of it. It's virtual; we can do anything; it should be universal. But most of the other founders were really about "we want this high-end awesome gaming system," and that caused enormous tension through the years. And it's kind of ironic that we wound up with a low-powered, gaming-focused device, which wasn't really what anybody was aiming for at the beginning. But it's doing well for us, and we're clawing our way back toward a universal platform in various ways.

And it's great to have a team that's really all pulling in the same direction now. We've got people who have shipped a few headsets now; we know what we're doing. I'm always pushing to go faster, I'm never satisfied with all this, there's so much more that we can do, but at least the derivative is in the right direction now. We are making progress.

Now, there's a place for high-end headsets. Some people are disappointed that we're shipping this super cheap $299 headset that's amazing, because they want a thousand-dollar headset that has every feature, kitchen sink thrown in. I think there is a place for that, but I always caution that there's a hazard: even when money is no object, these things have costs. Money does not fix our thermal problems, it does not make cameras weigh nothing; you can't just throw everything in, even if you're willing to pay whatever. In many ways, Quest 2 can be the best headset money can buy right now. There are some things we could do better, but not as many as you might think: there aren't many screens that would be better than what we've got, and not that many better sensors in some ways. Still, within the same product line I think there's the possibility of having a low end and a high end, as long as they're the same line, the same software, not something that's really competing with it. I would love to see super expensive stuff; that's the kind of thing I would buy. But I think we're doing the right thing by concentrating on broadening the market. It's great that we have fifteen-hundred-dollar BFGPUs available on the PC, but that's only possible because there are $99 video cards that enable an ecosystem. Getting the more inexpensive systems out in VR is critical, and eventually we can have our super high-end boutique things, and that'll be great and wonderful.

Now, the controllers are one of the things that is a real anchor on the cost. You buy a high-end gaming controller, like a high-end Xbox pro controller, and it can be 80 to 90 dollars. Our controllers aren't quite that expensive, but they are a significant chunk of the bill of materials when you're looking at something like Quest 2. So of course we are thinking about how we could make something that fills exactly the Go niche, things used for media and location-based experiences, and whether we can make something like that work seamlessly without the controllers. You can see the steps we've been taking toward this with hand tracking and voice control. It's not there yet, but we can see a path to it, where all of these other applications are really valuable and functional without the controllers. When you get a laptop, you don't get an Xbox controller shipped with it, because there are lots of things people want to do that don't need that kind of controller. And I think that's a really valid direction.

There are also the exotic things, like possible brain-computer interfaces, the non-invasive stuff. Even if we could read just one bit from a brain sensor, I think that could be really magical. If you could be looking around and essentially have a brain click, eye tracking possibly combined with that could be a magical thing: you're just looking at things and things happen as you wish them. The latency on the non-invasive brain stuff is not what we'd like for precision control, but it might be possible to get there, and that would be pretty neat.

So our current controllers use what we call our constellation tracking system, with the little LEDs on the controller, and that winds up driving a lot of our decisions on the systems. We use the same cameras for the controller tracking as for the global headset tracking, and we alternate frames with different exposure settings for each; then we have to have yet another exposure setting for the hand tracking, so we have to juggle between all these different things. And unfortunately, the controllers are carefully calibrated, so it's hard to make third-party controllers in different forms for it. There are lots of different possibilities. Valve does Lighthouse tracking, which is an external system sending signals out that the controllers receive, and that has some advantages, like being able to track behind your back. There's the possibility of making cheaper controllers, where instead of the active LEDs, which cost more than you might expect on the controller bill of materials, you could have completely passive things with distinctive shapes; instead of tracking exact dots, it would be more like the hand tracking. And anything you do with a known shape is going to be far easier than hand tracking. Tracking hands is hard, with all the different positions and poses hands get into; it's a tough problem. If instead you can apply all of those resources and say, here is the exact CAD model of the shape we're going to track, that can be much, much better. And you could still have an IMU in it at the cost of a Go controller. So there are possibilities there for much less expensive controllers.
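As an illustration of why a known rigid shape is so much easier than a hand, here's a hedged OpenCV sketch: once you have a known 3D model of feature points and their detected 2D positions in a camera image, the pose falls out of a standard perspective-n-point solve. The model points, camera intrinsics, and detections below are all made up for the example; they are not any real controller's geometry.

```python
# Hedged sketch: pose of a known rigid marker shape from one camera view,
# using OpenCV's PnP solver. A known CAD shape turns tracking into a
# well-posed solve, which is far easier than estimating an articulated hand.
import numpy as np
import cv2

# 3D positions of identifiable features on the controller, in its own frame (meters).
model_points = np.array([
    [ 0.00,  0.00,  0.00],
    [ 0.03,  0.00,  0.01],
    [-0.03,  0.00,  0.01],
    [ 0.00,  0.04,  0.02],
    [ 0.00, -0.04,  0.02],
], dtype=np.float64)

# Where those features were detected in the tracking camera image (pixels, made up).
image_points = np.array([
    [321.5, 240.2], [355.0, 238.9], [288.1, 241.0], [320.7, 198.4], [322.3, 281.7],
], dtype=np.float64)

# Simple pinhole intrinsics for a VGA camera (placeholder focal length).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
if ok:
    print("controller position in camera frame (m):", tvec.ravel())
```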

And then there are possibilities for more expensive controllers that track themselves. You could basically put cameras and tracking inside the controllers, and then they don't care where they are relative to the headset, which brings a lot of flexibility. So maybe you have a controller-free SKU, and the controllers are more expensive, but they never lose tracking from being behind your back for too long; different possibilities like that.

The grip fit on our controllers is something I do think there's room for improvement on. They feel really good in your hands; they feel like console game controllers. Tons of ergonomic work goes into this, and you're comfortable holding them for a long time. But the difference is, you do not sling your game controller around [left-to-right slicing gesture] the way you do when you're playing Beat Saber for half an hour. Eventually you wind up making the claw and hooking your finger over the top of the controller, and that's just not ergonomic. I've tried putting friction tape around the controllers, and then it's a huge mess when you have to change batteries. But it seems like there probably are better designs. I say we should look toward sports equipment and tactical combat equipment, things where performance really, really matters; you do not shape the grip of a gun like a bar of soap that's going to squirt out of your hands when you squeeze it while sweaty. Interestingly, I was told there are non-trivial industrial design and cost issues with how you set up molds to make better grips like that, and that may constrain us, but I think we will have some improvements going forward.

I wish it was easier to make real third-party controllers for this. I'd love to have something... I'd pay significant money for some custom renaissance-festival fantasy design, carved out of brass, with sharkskin grips; make something really cool that's waiting for me on my heavyweight Beat Saber days: open up a velvet case and pull out two awesome controllers for VR. That'd be cool. But it's just not really possible right now. You have to make these holders around the controllers, which limits what you can do. There are still interesting things being done, but one day we'll get this to the point where that kind of thing works a lot better.

The haptic situation is interesting. I wasn't a huge believer in haptics before, but we had a build where haptics were broken, and playing Beat Saber I thought: wow, I miss that, it really does make a difference. And we just have the most trivial, buzzy motor thing right now. I think it would be interesting to have something a lot stronger, where for punching you could basically pull back a spring and then let it go for a real hard smack into your palm. I think that could be good for representing collisions with physical systems in a way you don't get from gentle buzzing. And maybe we should put haptics into the headset. Some people theorize that a little buzzing on the head can actually help with simulator sickness, but it could also be another feedback channel for you-just-got-popped-in-the-face in the boxing game, or for sticking your head into a block, or into the wall in any other game. So there might be some useful stuff there.

Tracked keyboards and mice: we've got a direct partnership to do a specifically branded one, but it'll also kind of work with a lot of others, and that'll get better as we get more experience with all of this. There are a lot of trends pushing together here: hands, body, keyboard, environment, intrusion detection. It's all about the headset learning to understand the world around it. We have lots of teams working on this, and it's some of that big-deal, long-term technology about machine perception and understanding the world, and figuring out how we can use it inside VR. There's a lot more yet to come there.

I do worry sometimes that we're in the position of pushing technology into products rather than products pulling in the things they need. There's a real hazard in the fact that we are power limited; we care about every watt going different places. We could ship some of this deep machine perception stuff that really does not carry its weight in experience value for the user, and that is something we have to worry about. And that can be a tough fight when somebody has had a big team researching something for a long time, and now it's time to go into production, and they're told: well, maybe that's really not justified, and we'd rather spend that power budget on our game applications or something. It's just another one of the internal things we have to deal with.

I won't spend much time on media this year; I've spent like half the time talking about it in previous years. It hasn't been our push on Quest. It pulls back in a little more with Quest 2, as we try to subsume the Go market, but we know exactly what to do; we just haven't done it all right yet. There's not a single application that has absolutely nailed everything, that has color space, resolution, encoding, tempo, all of those things done perfectly, but we know how to do it.

During Quest 2 development, I got some of the final footage from NextVR, before they got swallowed up into the belly of Apple, and their very last camera rig produced some absolutely breathtakingly good-looking footage. On the Quest 2 screen, with things encoded with 8K encoders, sliced up in whatever way best gets our frame rate and resolution, pushing pixels around, it really does look amazing. But then you go back to what we actually stream; we have limits on what we can stream. I keep coming back to this: in most cases, and for many people, we could be streaming ten times what we do right now. So many of our streams are 10 megabits, while we have lots of people on connections where, if you set up, say, a QUIC connection or multiple TCP connections, you can pull 100 megabits down. You can do some amazing things with that. But almost all of our content is still down at 10 to 15 megabits.

So one of the big things I'm looking forward to is kind of "the great re-encoding," where we take all of the content we've got and redo it with exactly the right codec parameters, the right resolutions, the right bit rates, and make it all available. Sure, we can still have 5-megabit crummy versions, although immersive video is really not even worth watching at those low bit rates; we should just stop and say: you need to get a better connection. But we should stream all the way up, 60 or 70 megabits if it's available. It really doesn't hurt us to have that there; we can scan ahead and make it possible. And it's better than what people think from what they see right now. That's been really frustrating.
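Here's a sketch of what one pass of such a re-encode might look like, assuming an ffmpeg/libx265 toolchain; the resolutions and bit rates are illustrative rungs of a ladder, not the actual pipeline's settings.

```python
# Hedged sketch of one rung of a re-encode ladder (assumes ffmpeg with libx265
# installed; the resolutions and bit rates are illustrative, not real settings).
import subprocess

def encode_rung(src, dst, width, height, bitrate_mbps):
    """Re-encode an immersive video at a specific resolution and bit rate."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale={width}:{height}",
        "-c:v", "libx265",
        "-b:v", f"{bitrate_mbps}M",
        "-maxrate", f"{int(bitrate_mbps * 1.5)}M",
        "-bufsize", f"{bitrate_mbps * 2}M",
        "-movflags", "+faststart",   # keep the index up front for streaming
        "-c:a", "copy",
        dst,
    ]
    subprocess.run(cmd, check=True)

# Example ladder: keep a modest rung for bad connections, but go all the way up
# to the 60-70 megabit range for people whose networks can take it.
for w, h, mbps in [(2880, 2880, 15), (4096, 4096, 35), (5760, 5760, 65)]:
    encode_rung("master.mp4", f"stream_{mbps}mbit.mp4", w, h, mbps)
```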

Another thing: our browsing experience is still terrible. You could be forgiven for thinking we only have a hundred videos when you go into Oculus TV, scroll through the pages, and maybe click "see all" in a few places. There was an internal post where somebody said we have this many hundred videos above our quality bar, and I said no, this can't be right, I know there are far more than that! Eventually they were found; there was a mistake in the query. It turns out we have 7,000 videos above that quality bar. We need to make them discoverable, we need to give them the best presentation, and get them out where people can see them.

The Quill Theater was a surprise hit. Quill, designed on the Rift, is an extremely expensive rendering system; it's all about thousands and thousands of strokes from the artist. The speed artists get from it comes from working in that sketchy way, and it's hard to make that run at full performance on Quest. The theater team did an interesting thing where they have this HD button: normally content runs in a cut-down way that generally holds frame rate, but if you want a closer look at something, you can press the HD button and it will usually start chugging and missing frame rate, but it gives you a crisper view. It turned out that this sense of a magical fantasy world sketched up by artists, with good audio, simple animation, and the ability to peer around, like looking through a dollhouse, has some real magic to it, and it seems to resonate with people more than a lot of even highly produced immersive video content. So that was a great data point.

On the conventional media side of things, FandangoNOW is one of our best-quality presentations. We finally have essentially all the movies available. This is the thing I point to where it's a 720p screen, but it's presented with supersampling and the right color space; it's doing basically everything right, and the screen looks really darn good. Of course, on Quest 2 it looks even better, because there's no supersampling falloff out at the edges; every pixel matters, and it's all within the sweet spot. I want us to go to streaming 1080p, especially for 3D movies, where we have to cut the frame in half, so each eye really only gets half the width, 960 by 1080, which is even less than what you could show on Quest with supersampling, and Quest 2 could use even more. But it is getting to be a legitimately good movie-viewing experience: a little more comfortable, much higher quality, everything is there, and we're supposed to be getting essentially all of the 3D movies. We haven't gotten them all rolled in yet, but I'm really looking forward to that; it was a promise from day one of VR: of course you want to watch a 3D movie! Unfortunately, the sign-up flow and everything is still pretty horrible; you have to go make a new account and register in the web browser before you can get back in, but once you do, it's a good high-quality experience.

So... boy... I am almost out of time, and gee, I've got a lot more to go through.

But we've got our push to be a general-purpose computing system, and all the things we're doing in the shell environment for it, like adding free resize. Right now we throw the panels up in front of you, but we're moving toward a multitasking system, where we've got a multi-tab browser, multiple panels popping up, and you can start using it for legitimate work. Like, I sat down trying to do some of my AI research work entirely in headset, and I critically need a PDF viewer; that's something that needs to show up in our browser before it can take over that work for me. But you can set up some pretty good working areas; the higher resolution matters, and then there's the freedom to nudge things around. Some people want things lower versus higher, or stretched out differently, and we still need the ability to go into portrait mode instead of landscape mode. But we're getting some real value there, and it also works cooperatively with our Android applications, like Fandango and the others.

And that's still one of the things that absolutely kills me: I think we need more Android applications, and we do not have a sorted-out strategy. I've got a long spiel about this that I'm not going to have time for, but we have all these existence proofs and examples; Microsoft tried really, really hard to move all apps to a brand new system and it just didn't work out, and I don't think it's going to work out for us. I think we need to support Android apps there in a broader sense. We have progressive web apps as the backstop for everything, but on mobile platforms progressive web apps generally lose out to native applications, and we care even more about performance in VR than mobile systems do, so I think we __need__ a solution there, and we haven't sorted it out. We've been in this situation where we have React VR running for our internal applications, but we haven't been willing to really productize that and put it out. We have ways of doing direct low-level access to panel apps, like browsers written in a different native way, and we have Android apps we're projecting for some of our settings and other bits of UI.

And I am getting to our time limit here.

So, I know this is not ideal, but after this I'm going to go into the Horizon beta. I know not everybody has it available, and we don't have large seating capacity there, but there's going to be a "Connect With Carmack" world, and I think they're going to be able to let in 20 people at a time. I'm going to go in there, and I'm basically going to go until my headset batteries die. Whoever can get in, make your way up; I'm happy to answer questions; it will be very much like the hallways after a Connect talk. And I'm hoping that next year we can have this sorted out to the point where we can have the Venues presentation, have everybody there, have people making their way to the front, and migrate through all of this as a seamless experience. So we've got a north star that we should be shooting toward.

And I think I'm about done here.

---------------------------------------

INT. AUDITORIUM IN VENUES

Plain gray and black walls. A neon hallway at the far back.

The same low-polygon green plant-like things positioned in one corner of the room and on both sides of the Facebook Connect circle logo in the background.

Twenty or so users scattered on the perimeter of what appears to be a cylinder-shaped stage.

Carmack's avatar floating above in the center.

-------------------------------------

USER 12

That was a fantastic...

J. CARMACK

(points hand)

All right, yeah, right there. Yeah, let's try to do raised hands for this. Again, here's where we're getting killed by the latency I was talking about; it's not quite that fully live, interactive sense, but we have a path toward that. I'll try to point at people. Raise hands or something.

J. CARMACK

(points hand again)

Did you have something? Yeah, right there.

USER 11

No, I don't have anything; I'll close this.

J. CARMACK

All right, sly_dog

SLY_DOG

Um, does it have a wi-fi stick, uh, antenna on the Quest 2?

J. CARMACK

No, there are no external antennas; it's all internal. Do you work in a situation where you've got a really marginal signal?

SLY_DOG

No, no. I'm just thinking of reducing the latency, you know. I use 5 GHz now with Virtual Desktop and it works great, and I'm just thinking that maybe it could work even better.

J. CARMACK

Actually, the network frequency has almost nothing to do with the latency. On almost all the wi-fi frequencies it takes microseconds to send a packet through. It's all about congestion and the higher levels, kind of the operating system and application levels, that add all the latency in. So even 2.4 GHz could be super low latency if everything was set up right and it wasn't having any problems.
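The arithmetic behind that claim is simple. The link rates below are assumptions, but across any realistic Wi-Fi rate the on-air serialization time is a tiny fraction of a millisecond, so the frequency band isn't where the latency comes from; congestion, retries, and software layers dominate.

```python
# Quick arithmetic behind "it's microseconds to send a packet through":
# the PHY rates are assumptions, but the conclusion holds across a wide range.
packet_bytes = 1500
for name, phy_rate_mbps in [("2.4 GHz (~72 Mbps)", 72), ("5 GHz (~866 Mbps)", 866)]:
    # bits divided by megabits-per-second gives microseconds on the air
    serialization_us = packet_bytes * 8 / phy_rate_mbps
    print(f"{name}: ~{serialization_us:.0f} us per 1500-byte packet")
# Prints roughly 167 us and 14 us; both are far below the milliseconds that
# queueing and the software stack add on top.
```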

J. CARMACK

(points at a user)

USER 10

I'd love to hear anything more you'd like to say about the tech, you know, like the tracking software and stuff that can be used for the AR glasses... [incomprehensible]

J. CARMACK

So, internally, the AR teams are almost completely separate from the VR teams. The way this has gone, it almost feels like a coup in some ways: there was Facebook Reality Labs, basically doing the AR side of things, and for a long time they literally weren't allowed to talk to the VR people. The worry was that they would get excited about working on cool production stuff instead of the long-term AR stuff. We talk more now, but the technology stacks are surprisingly distinct. They have their own computer vision work; we are moving toward combining them, because they had some good stuff and we had some good stuff in Quest for tracking, and we are working on getting those together. But the actual sensors are going to be very different. [buzzing mic sound starts] Quest and Quest 2 basically have four 640x480, relatively low-res, VGA-ish monochrome cameras, and we want to move to color cameras. I wish we had been able to move to color cameras on Quest 2...

[Buzzing continues]

J. CARMACK

(gesturing at the crowd)

Somebody's got the microphone problem going on. Yeah, possibly over there.

J. CARMACK

...because I knew we were right at the limit of our position tracking, where you can tell. If I held up a sheet of paper, rendered properly at this distance, I could tell there was a little bit of jittering going on. And obviously for the pass-through video, something like that would be much, much better with high-res color, because pass-through basically gives you a view of what the tracking system gets to look at, except it's got four views instead of the one merged view you see. So it sees more of the world, but it's that grainy, low-res, low-quality view. And it's pretty amazing that it's able to do sub-millimeter tracking based on that. It looks at features; the way these tracking systems work, they identify little things that seem to be stable in the world. And usually it's kind of...

[screen goes to black for 10 seconds]

INT. AUDITORIUM IN VENUES - CONTINUOUS

J. CARMACK

...if you only tracked what pixel location a corner is at, it would be very steppy. So instead they wind up resolving these down to something like a tenth of a pixel. When you look at the pass-through cameras and see how grainy and messy all of this is, it's again kind of amazing that it works as well as it does, that it can resolve things from this mess of pixels down to a tenth of a pixel. But the easiest thing to do is just give us higher resolution cameras. We have all these megapixel cameras on phones, but for a long time a lot of computer vision people were kind of afraid of rolling shutter and color cameras for this work. The cameras we have on Quest are global shutter cameras, which take a snapshot of the whole scene at once, while phone cameras are rolling shutter; they're continuously scanning out a view, and you see this when you've got moving things, like videos of propellers that wind up all distorted, or cars going by with a sheared look. That makes the math harder for figuring out how you do things. Another subtle thing is the sub-pixel tracking you can do with grayscale images: the way color cameras work, it's basically the same sensor, but with red, green, and blue filters on top of it. So instead of a gray pixel there's a pixel that only sees red, and the next red pixel is a couple of units over, in a Bayer pattern, so they're not right next to each other. That makes resolving some of these sub-pixel things a little bit harder. In the simplest terms, these things make life harder, and life is just easier if you say: give me global shutter cameras that happen to be lower resolution. But we have some of the best computer vision people in the world working at the company, and I have every confidence that if we give them a hard problem here, they can sort it out and solve it. High-res color cameras would give us super precise positioning, for things like looking at close monitors and getting a great view there, and they would give us high quality for the pass-through. If we had our pass-through at the same quality as those peak-quality immersive videos? It would look really good. It's still not exactly reality, but that connects to what I was saying about looking underneath the headset and being able to match the controllers with reality; it would be great to have super quality pass-through and literally match the world to it, to be able to lift up the headset, put it down, and have it look almost the same. We could still tell we're looking at a screen, but the closer we can get to that, the better; that's one of the grand goals.
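For a sense of how features get resolved "down to a tenth of a pixel," here's a hedged sketch using OpenCV's corner refinement as a stand-in for whatever the real tracker does: detect corners at integer pixel positions, then refine each one from the local image gradients to a fractional-pixel location. The input filename is a placeholder.

```python
# Hedged illustration of sub-pixel feature localization (a stand-in for the
# real tracker): find corners at integer pixels, then refine to fractions
# of a pixel using the gradient structure in a small window around each one.
import numpy as np
import cv2

gray = cv2.imread("tracking_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Coarse detection: integer-pixel corner candidates.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Refinement: iterate on each corner until the update is below 0.01 px or 30 iterations.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(gray, np.float32(corners), winSize=(5, 5),
                           zeroZone=(-1, -1), criteria=criteria)

print("first corner refined to:", refined[0].ravel())  # fractional pixel coordinates
```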

INSOMNIA_DOODLES

(unmutes mic)

First of all, thank you. I really appreciate your honesty at the beginning of your talk, you being open about your opinions on where VR is at the moment. So thank you for that. My question is actually related to the Quest microphone bug. I was wondering if that was addressed in the Quest 2?

J. CARMACK

So we believe... people are kind of freaking out right now with this going on, but it's almost certainly a software problem. I don't think it's an issue that's going to plague Quests; we just need to go in and sort it out.

This all comes back to one of my themes through a lot of this: we have so many stacked layers of abstraction that it really gets in the way. There are layers people don't even know exist. I wish our headsets had something where the analog-to-digital conversion just streamed straight into a buffer in memory. But the microphones and the speakers are actually connected to a completely separate little piece of hardware, which in some cases runs proprietary algorithms for noise canceling and things like that before the audio even gets to the Linux kernel side of Android. Then it goes through the kernel, then into the Android HAL, then into an Oculus-level microphone abstraction, so that every application doesn't wind up competing for the microphone. We had the problem where our shell system wants to use the microphone for voice control, but then an application wants to use it, and Android normally prohibits multiple clients at the same time, so we've got this extra service layered on top. It just makes tracking these things down hard: why do we have 770 milliseconds of voice latency through all of this, why do we have a microphone problem going on. It can be hard to trace it through all of these layers. It's even worse because some of this stuff is written in Java, some in C++, some in C over here, and they have different build systems: there's one build system for the Linux kernel, another for the Android system, and yet another for our applications. This is the kind of thing where I love software, but some days I hate the software development process, because of all of this. I have this fantasy, almost, of saying: couldn't we have just written this with one cohesive team, all in C++, in the same build system? I've kind of bemoaned that to Boz at times, and he basically said: I don't think we could hire enough good C++ programmers to do that.

I don't really agree with him. I think there still are enough people; there are some great C++ programmers in the game development industry, and some of the best people I work with came from game dev. But the Facebook world, the Silicon Valley world, a lot of that is more like, let's do lots of things in JavaScript, you know, let's do lots of things with kind of the...

...the big company and the startup company; there's a Silicon Valley way of doing things that is not the game dev way of doing things. And it has advantages; it spews out billions of dollars in many ways. But it's not the same kind of thing as game devs obsessing over VTune or various PIX tuning tools and making sure every cycle counts, really getting a shader down, tweaking something in a shader. We're not worried about tweaking variables in shaders; we're worried about too many processes running, in the current case. And it does make me despair a little bit, but, you know, I think this is great. This is kind of working; this is kind of what we want VR to be. It's been a seven-year journey, and it's not where I wish it was, but it's not all wasted, and we are making progress. This is something we didn't have even a year ago, and this is just an early beta; it's going to get a lot better.

USER 9

Could you give a brief description of what you want it to be in maybe 5 to 10 years?

J. CARMACK

I really don't like looking that far ahead. There are places I think we could be today if we had just prioritized a little bit better. I wish we had the Google Play store, and every Android app you use on your phone showed up better in VR than it does on your phone, where you can bring up everything you might want to do, all of your applications, with a keyboard and everything you might want to plug in input-wise; keyboard and mouse as good as a PC, remote desktop streaming, all the things we do in VR, but with everybody doing everything right. Like, I'm looking here at the little neon hallway going back, and I'm thinking: something's not sRGB-correct there, because those edges aren't anti-aliasing as well as they should. There are little things that should get sorted out. If everybody did everything right, which of course will never happen, but which is possible to do now, then maybe two or three years from now everybody is doing everything right, the media streaming is all perfect and maxes out everybody's connection, everything looks glorious, everything happens instantly, it's snappy, there are no long delays for loading. So I try not to make giant visionary pitches about where we could be with arbitrary amounts of funding and resources and time, because there are hundreds of things we could do right now. You could just make a list and say: fix this, fix this, make this better. I think we've got a path to it. For the last couple of years it's felt like all the ingredients are on the table; we just need to stir them into the pot right, and we can get there. It's close.

If we go to big visionary things, I do wish we had the form factor to the point where... everybody talks about glasses-level AR. I'd be thrilled to get glasses-level VR. AR is strictly a superset of, and harder than, VR; everything you might want to do in AR, you could do the VR version of, and it's just easier. You've got easier display technology, you have more control over the presentation, the timing, and all that. So, yes, I would love to see the fantasy vision everybody pitches for AR glasses, where you just put them on and see all this magical stuff, which is not the way any known displays work right now. But that's not completely crazy to ask for in VR. You can make pretty thin pancake lenses, you could tile waveguides and black out the back so you don't have to worry about the light level relative to the environment. Something like that could be pretty awesome: nail everything right, but make something that is 60 or 70 grams, that you can put on and wear all day long, and do all of your work like that. The visions of a productivity office in AR? I don't care about the AR side of that; just give that to me in virtual reality. While the pass-through stuff is kind of neat, and for some people it's important to be able to see their environment, I think the whole point of VR is that we can make a better environment than what's sitting outside your glasses. You can have the perfect environment. So, in most cases, I think that really since Quest we've had the critical things: controller and hand tracking, headset tracking, and the basic rendering stuff. The things that are still maybes: do we need eye tracking, do we need body tracking, do we need varifocal. Those are maybes. I could be completely thrilled with a system built from the technologies we've got now, cranked up a few notches, with everything done perfectly right, and it would be amazing.

Now, some of these things may turn out to be better than I expect. I've heard people make really passionate descriptions of how important body tracking is to them, how much having their full body presented in VR means to them. And similarly, some people have said that even if eye tracking doesn't give us a huge foveated rendering win, it may make computer interfaces feel magical; just the sense of looking around and, maybe tied with a little bit of brain sensing, glancing at things and having them do what you want. That might be a magical thing. So I'm open-minded about some of these things that I don't think are critical turning out to be... amazing? But I think we have everything we need. We need more software to go with it, we need the magic software, but no miracles need to happen for VR to get where I want it to be.

J. CARMACK

(glances at silent audience)

USER 8

I was worried about the camera placement on the Quest 2. It looks like it's a little bit different from the original Quest, and I'm sorry if you already answered this in your live stream, but I was just wondering, does that affect tracking volume? Because I know that sometimes going behind your head can be a little bit tricky on the first Quest.

J. CARMACK

Yeah... So, it's pretty close to the same. There are some subtle differences, but it's not radically changed. And this is one of those really tough trade-offs. One of the things we did have a lot of discussion on early was the exact orientations of all the cameras, and we've got all these internal renderings of the tracking volume. We always know there are problems; you can't do kind of unicorn-head things and have it track. So it's a trade-off. I thought it was important to keep some stereo overlap in the front, because at the beginning I thought we might be able to do hands, which was not broadly agreed upon; there were a lot of people who thought Quest was too underpowered to even do VR, let alone add hand tracking on top of it. But I'm happy I insisted on keeping that coverage, and it turns out it also helps with the stereo overlap for the pass-through, kind of mapping the world for that. But it would be a little better if we had those cameras pushed off to the side; you'd get a little more coverage there. And since I play so much Beat Saber, I'm a little more sensitive to it now; I'm not sure, if I went back in time two or three years, whether I'd rather have more camera angles out there. Because, yeah, sometimes you lose tracking. We coast as long as we can; when it doesn't have a view of the controller, it's just dead reckoning off the IMUs, but...

[screen goes to black for 3 seconds]

INT. AUDITORIUM IN VENUES - CONTINUOUS

J. CARMACK

...seconds, and then by that point it says, I really don't have a good sense of where I should be, and gives up. But we've been pushing that number further and further each time; as we calibrate our sensors better and do better dynamic monitoring of them, we can let it coast longer. What I think is going to get us further is extending beyond just controller tracking to hand tracking as well. Right now we can't run them at the same time, because they take alternate frames and slightly different exposures, but we can probably compromise a little more there, where maybe the hand tracking and the environment tracking can use the same exposure settings. At that point, if we're tracking the hands, the arms, and the body, then even when we've lost the controllers [puts hands behind the avatar's back], there are only so many places the controllers can be when you know where your shoulder is. So I think we've got a good path to really solving the controller problem just by integrating our hand tracking, and that's something that could come along later on Quest 2 as a software upgrade. So...
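Here's a toy sketch of what "dead reckoning off the IMUs" means, with made-up conventions: rotate the accelerometer reading into the world frame, remove gravity, and integrate twice. Sensor bias and noise make the position error grow roughly with the square of the coasting time, which is why it can only coast for a short while before giving up.

```python
# Toy dead-reckoning sketch (illustrative only): while a controller is out of
# camera view, integrate IMU samples to coast. Orientation is assumed to come
# from the gyro elsewhere; here we just double-integrate world-frame acceleration.
import numpy as np

# What the accelerometer reports at rest, expressed in the world frame (+Y up).
GRAVITY_READING = np.array([0.0, 9.81, 0.0])

def coast(position, velocity, imu_samples, dt=1.0 / 1000.0):
    """imu_samples: iterable of (rotation_matrix, accel_in_body_frame) pairs,
    sampled every dt seconds. Returns the propagated position and velocity."""
    for R, accel_body in imu_samples:
        # Rotate into the world frame and remove the gravity component.
        accel_world = R @ accel_body - GRAVITY_READING
        velocity = velocity + accel_world * dt
        position = position + velocity * dt   # error grows roughly as t^2
    return position, velocity
```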

J. CARMACK

Okay, hold on everybody, I'm going to go... the camera people are leaving here... I'm gonna go shut down here. I will be back. I get automatically let in, so this should work okay, but give me a minute.

USER 7

Man, listening to him talk... I can listen to John describe the velcro head strap for about an hour.

[incomprehensible]

J. CARMACK

(Enters through the imperfectly anti-aliased neon hallway)

All right, so..

USER 7

Traditionally, at all the different Facebook Connects, one of my favorite things, other than standing in the hall listening to you chat, which is probably the coolest thing, a close runner-up is watching you play other people's content. That's a really amazing session every year, because it brings a lot of improv, content advice, and well-rounded things that are interesting to talk about.

J. CARMACK

Yeah, I could imagine technically some way to pull that off in a virtual conference, but it would not be easy; there would be a lot of things to work out. I could see it happening if I did it basically as a live stream, and I played and narrated through all of it, but I'd have to work something out for people, because a good chunk of it is talking to the people who made the thing, so there would have to be some extra piece. In the ideal world it would be some kind of avatar, with my shoulders going through the experience, but we are not close to having that sort of thing sorted out generally. It's going to be a hard fight to get remote views; it's going to be really valuable when we can get that, but anything where we try to intrude other things into arbitrary games is tough. We can pop stuff up, like panels, really easily, and potentially render more with a little bit of trouble, but fully integrating in different ways is hard. It's easy to imagine and ask for that type of stuff, but games tend to be their own separate worlds, and there's a limit to what we can do integrating on top of them. But I could imagine simple party chats, where I'm in a party chat going through it, live streaming while I'm talking to somebody else. Coordinating, getting people in queues and lined up, would be hard to run seamlessly. But that's one of those things we should keep in the back of our minds.

BOTH J. CARMACK AND USER 7

[both talking, incomprehensible]

USER 7

I'm sorry, go ahead.

J. CARMACK

Here we are with our latency problem. There you go.

USER 7

Yeah, so one of the things that hearing you speak got me thinking and wondering: do you have the opportunity in your role, and if not, I think it would be of great benefit to the community as well as your organization, how much do you get to interject into the creative process and play with the developers, you know, [incomprehensible], that are building some of the interactive content and stuff? Do you get to interject in some of the early creative brainstorming?

J. CARMACK

Yeah, so right now I'm... I'm super happy that I'm here this year, because it wasn't clear that I was going to be. I am mostly spending my time on artificial general intelligence research; I'm only part-time consulting at Oculus now. It's working out really well. We weren't sure how it was going to go, but so far it's going great and everybody seems happy with the situation. But it does mean I have far less time than I used to, and I spend about half the time I put in just digging out through mail and meetings and things like that. I'm lucky if I get to check in code once a month on something now, which bums me out a fair amount. But I'm still all about maximizing the value I can contribute with a limited amount of time, and that does tend to be more of these other things...

But the content side, honestly, has been a little bit weird through the whole time at Oculus. The way our organization is set up, there's a bit of fiefdoms, and content is this separate org. A couple of times I've made the point: look, we have some game industry veterans, and I'm here, and we're available to help more on some of these things. And I've been upset sometimes when high-profile marquee titles have come out with things that I look at and just go: there are mistakes here, we should have done this better. So I wish we did a better job on that, but honestly I have not been that involved, especially in the early stages when things could be easily fixed. Often I'll get a build when it's just about to go master, and I'm like, ah, there's something wrong here that needs to be fixed! Like, I'm testing the new Beat Saber stuff right now, which is great, and they do a great job, but even there I was like: no, don't make the panels transparent, that's a bad thing! I hope you can fix this before it goes out.

But in general, outside of things like Minecraft, which I basically drove through myself, and things like Netflix, which I did literally write, I am not as involved as you would think, and not as much as probably would have been good for the company. I was more than ready to spend more time going through other things. Honestly, I've spent more time with the Start people and the app review stuff at Connect than with the professional developers I should be more involved with. It's been a little weirdly standoffish, and that's something that probably could have been better.

USER 7

Those sessions are just absolutely great, I mean, I see it...

J. CARMACK

We'll see what we can do next year. At this point I'm almost hoping we go all virtual next year and do it right, you know, and nail it; that's what we should be doing. But maybe we do go in person and have the app reviews and everything there...

CHOCOLATEYSM

My dad is a senior and he's not very tech savvy, so I was really excited about the social friends party system, and it seems to be getting better and better, but it's not really...

J. CARMACK

.. but it's nothing to write home about, yeah.

CHOCOLATEYSM

Yeah, how fast do you see that developing, so that at some point I can really just have him pop up right next to me, without him having to push barely any buttons?

J. CARMACK

I... I wish that was a north-star view on it, but our social stuff has been... like I was saying at the beginning, it is embarrassing how badly we, as part of Facebook, have handled social in VR. Whether it's our four different avatar systems, or all of our different social parties and tune-in features, all these independent little things, they're all pretty bad. Because that is the vision you want: you want to be able to give this to an elderly person and know that they can just put it on and get walked through meeting their grandkids or something. There are things we can do; this can be great. Even what we're doing here, if only it were absolutely zero steps. I talk about that with things like casting, where casting is still this multi-step process, and it used to be almost ridiculous: start this here, accept this here, acknowledge this, and then it starts going. I'm like: no. Zero steps. If somebody opens up the Oculus app on their phone, it should be showing what's happening in their headset. Not "go find the little icon, and start a cast, and accept," because we have to deal with people who, well, most of the world is not super tech savvy like Oculus Connect attendees or Facebook employees. So I totally feel that; you're absolutely right, and I'm not sure I have the best news to give on this. Honestly, I was worried that Horizon might turn out to be yet another disaster, because, just given our history, every social thing we've done has been a disaster. But Horizon is looking kind of good. I hope it's not just the effect of low expectations, but this is kind of working; there are all these things we need to take a wrench to and make better, but maybe we've done it right this time. Maybe the fifth time is the charm.

USER 6

Exactly on the Horizon topic: I absolutely loved the way you talked about Spaces and the opportunity, and there was one feature in Spaces that I believe is notably absent in Horizon: the ability to use Messenger directly from the environment. I was very lucky to have early access to Horizon and I built a number of worlds, and it's a lovely experience, but it's really decoupled from everything else. There's no way of bringing in any content; I cannot show a screenshot of a document, I cannot call anyone on Messenger. For what you know and what you can share, is there some way of connecting Horizon to the external world in the future?

J. CARMACK

So, I have good news for you on this. Where we're moving is toward allowing all of the system utility things to come up in any application. I'm not going to risk doing it right now, but on current builds I'm supposed to be able to hit my home button and bring up the UI inside an application, and they announced that we're going to have a Messenger experience inside that. So in theory that should just magically start working at some point: you can be in any application, bring it up, and use Messenger while the application is still running. Spaces did a lot of things like tossing people photos, but it's worth noting that Spaces brought Rift PCs to their knees. I think they tried compiling it on Quest at one point, and it ran at two frames per second. When you bring in every dependency in the world and throw it all in, hey, it magically works on your desktop PC, but it was not close to working on mobile. And I was worried Horizon was going to turn out like that, because at the start everybody wanted to work on the Rift, because you get happy fun results fast, unlike mobile, where you may struggle for a long time. But again, Quest turned out better than a lot of us expected; it was more right than we thought it was going to be. And I think even the people on the Horizon team got to the point of saying: yeah, maybe this is our platform; let's make the right cuts, let's build around the lower performance specification, and work there. But my pitch about the universal environment, the virtual machine that does everything: yes, I want to be able to pull up whatever, I want to pull up Twitch live streaming or something right here, and eventually you'd like to be able to show it to other people. We're working toward that in the shell environment; that was always my pitch from the very beginning, the shared social substrate. We didn't get that, but we're slowly tacking back to it. So many of the things I wanted years ago, and laid the groundwork for, we pivoted away from, but we're slowly coming back toward them. At that point, we do want a setup where you can bring something up and everybody in the room sees what you see. But we can't do that generally across all applications; we just don't have the right interfaces, and we don't know everybody who might be in the application. There are ways we could conceivably have structured the software stack for that, but right now Unity apps are basically their own world in a lot of ways; we can put stuff on top of them, but we have little introspection inside them. The basic idea, though, of being able to pull up Messenger and send a voice message to somebody: we really should be getting that, and I think it's probably going to be rolling in, I'm not sure of the exact time frames, but over the coming year.

USER 5

I know that this is Connect, but you mentioned your work with AGI and I was curious if you would talk for maybe a minute or two about what you've been excited about in that space.

J. CARMACK

I actually did have a little bit about that in my notes; I had a whole, probably, hour or more of talking that I did not get to there. Where.. I try to keep a pretty hard firewall between my personal AI work and the Facebook VR work. Let me tell you, being involved in a 2 billion dollar lawsuit incentivizes you a little bit to try to make better boundaries for things. But it does provide an interesting new lens for me to look at some VR things through.

My take on AGI is that it should be an embodied-being sort of thing. It's not just, you pass it data and you get a result from it. It's got to be a real-time, continuous thing. But there are all these interesting questions about how learning goes on, and how time is discretized, and a lot of things wind up working with, say, discrete time steps. But, like when I'm playing Beat Saber, I'm looking at things where.. why does it work that you play a slower song, learn the patterns, and then slowly speed up? If you had a convolutional network or a transformer or something that just takes these multiple steps back in time, that does not work directly. There is clearly some kind of continuously variable timing going on, that lets you learn at a slower speed, build connections, and then slowly, incrementally, move up to faster speeds.

And then another thing that I'm excited about: eye tracking, which, again, the thing everyone wants it for is to make rendering go 10 times faster, which it's just not going to do. But one thing that excites me is learning more about exactly where people look. They've done all these psychological studies, with expensive, separate eye trackers, that watch what people look at, watch how they move around. Some people look at AI work and say: well, people have a foveated region, but if we just throw fast enough computers at it, you can just look at the whole frame image, however many millions of pixels... And I don't think that's actually going to work. I think it is a critical part. The temporal tracking of our eye around things is critical to the way we learn and understand the world; we do not just look at a million pixels. It's the tiny little foveated region and how we track it around the world. And looking at the eye tracking data for that is going to be interesting. So being able to say: ok, I did a task, where exactly were my eyes looking? And there's a lot of work that says, when you look at a face, there's this little bounce that you do between the eyes, nose, mouth, to identify people. It's not just that you glance over there and all those pixels determine the face. You do this little temporal dance of tracking around. So there are things that I'm excited to learn about there.

And then there's the whole reinforcement learning side of things. When I'm trying to push on Beat Saber, and I want to learn, I want to get faster, you put it on instafail so that you get the feedback right then, when you've missed something, rather than going on and getting one sparse piece of feedback at the end -- it's very hard to, kind of, back-propagate that to the things where it actually mattered. The best way to learn something is to get instant feedback. So, you turn on the instant fail there, and it's like "damn it, missed it again, go back; damn it, missed it again." And eventually that's the fastest way to, kind of, progress through it.
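[Transcriber's sketch, not part of the talk: rough arithmetic for the "tiny foveated region" point above. The pixels-per-degree, fovea-size, and panel-resolution numbers are illustrative assumptions, not figures from the talk.]

```python
# Back-of-the-envelope: what fraction of a headset frame a ~2 degree fovea covers.
import math

pixels_per_degree = 20        # assumed average angular resolution of the display
fovea_radius_deg = 2.0        # the human fovea is on the order of 1-2 degrees
panel_pixels = 1832 * 1920    # assumed per-eye panel, Quest-2-like

fovea_pixels = math.pi * (fovea_radius_deg * pixels_per_degree) ** 2
print(f"foveal patch: ~{fovea_pixels:,.0f} px")
print(f"full frame:    {panel_pixels:,} px")
print(f"fraction:      {fovea_pixels / panel_pixels:.2%}")  # on the order of 0.1-0.2%
```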

So, I take some interesting.. Well, pretty much everything that I do is a matter of looking at things through the lens of all the skills, techniques, and knowledge that I've accumulated through the different things. I mean, there are things I learned in aerospace, about physically building things, that affect how I think about VR headsets, and all the gaming stuff, and it's all great and wonderful, and I wind up being this hugely optimistic person. Because practically every day, something adds a little bit to my body of knowledge that makes me more powerful in accomplishing the things that I want to.

MICHAEL

Sorry, I'm Michael. So, what about field of view? What do you think about that... a wider field of view?

J. CARMACK

(audible intake of air)

So, there's.. There are definitely a lot of trade-offs involved with this, and, for most people, Quest 2 does lose a little bit of field of view. A little bit more eye stand-off, and in the wider views you have some trim, where it's off the edge of the screen. But, you know, we had one version -- it was actually a custom-modified Gear VR -- that had like an enormous 120-something field of view, with giant lenses, and it was pretty awesome. It was something that, when we were still deciding whether we would maybe do another Gear VR, I was kind of pitching for. And it was awesome for things like 360 and 180 videos, because the videos -- they're already there, we have the pixels, they're sitting in memory, we can present them as whatever warped field you want.

It's a very different question for rendering synthetic environments. Because the way all GPUs work to render conventional rasterized graphics, you render to a flat plane. And if you just take out your graph paper and you draw.. 90 degrees is nice and simple, you just draw the little right angle there and you can see that, all right, whatever resolution you're rendering is stretched across these two lines. But then you say: all right, I want 120. So that's more like drawing, you know, a one-by-two there, and, all of a sudden.. ok, going to 120, you have to have four times as many pixels along that line. And, of course, you say: well, that's in one dimension; I can't render 16 times the pixels to cover all of this. So you wind up crunching it down. So ok, now I've got a wide view, but now I've got much lower quality in the center.

And it's also harder for the lenses. The cool prototype we had used a doublet lens, which was more expensive, more fragile probably, all those trade-offs there. So it does feel like we're still, probably, at the sweet spot for media presentation, although it might be worth going a little bigger. Certainly there's a possibility of just going slightly bigger, you know, you can nudge up a little bit, 100 degrees instead of 95. I mean, you already kind of make that trade-off when you're rendering right now, where, if you really push the headset into your eye and look up towards the corner in just the right way, you can see the edge of the rendering. It's like "oh, you could render 5 degrees more", but it would cost you a lot of performance just to get rid of that tiny little edge. So they're hard trade-offs, especially on mobile, and I feel pretty good about where we're at.
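[Transcriber's sketch, not part of the talk: a minimal version of the graph-paper argument above, assuming a single flat projection plane per eye. The sec-squared term is where "four times as many pixels per degree at the edge of a 120 degree view" (and roughly 16x at the corner, in two dimensions) comes from.]

```python
# With rasterization onto a flat plane, the half-width of the image plane grows
# as tan(fov/2), and uniformly spaced pixels on that plane spend sec^2(theta)
# pixels per degree at off-axis angle theta, relative to the center.
import math

def plane_half_width(fov_deg):
    return math.tan(math.radians(fov_deg / 2))

def edge_cost_vs_center(fov_deg):
    half = math.radians(fov_deg / 2)
    return 1.0 / math.cos(half) ** 2   # d(tan x)/dx = sec^2(x)

for fov in (90, 100, 120, 140):
    print(f"{fov:3d} deg: half-width {plane_half_width(fov):4.2f}, "
          f"edge spends {edge_cost_vs_center(fov):4.1f}x the pixels per degree")
```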

J. CARMACK

(A bumblebee sound in the background. As in Flight of the Bumblebee, not Transformers.)

Somebody's got the buzzing microphone problem. Ok, Brandon, next.

BRANDON

So, there's only 20 of us here right now. But is this being recorded, so that everybody else can watch it?

J. CARMACK

Nope.

And that's another thing that we don't have right now. Like, we should, obviously, have this. Now, I probably put the Horizon people on the spot with all of this. I probably put a bunch of people on the spot this Connect, because everybody else was like: ok, it's all pre-recorded, everybody was making all of the pre-recordings last week. But I was like, you know, that's kind of.. you can't have a virtual conference if you're just dumping videos out. So I insisted on doing my talk live. And it's kind of funny, because at first I was like, look, I'll just Facebook Live this, we'll work it all out. But they.. they over-professionalized everything, and I had a three-person camera crew in my house for this. Which probably would have been better, you know, without that. And then I was thinking, well, can I just jump into Horizon afterwards.. And I'm really happy that they rose to the occasion here. They got the special stuff set up. Normally you're limited to like 8 people in a world, so they bumped this up to 20 in a simple thing, they set up a little stage and the gateway for me. I was willing to maybe just wing it; like, I was hanging out in Venues for quite a bit earlier on, and it's clear that we need to.. it feels like a ghost town most of the time. There are hundreds or a couple thousand people in there, but, because it's sharded up, you can't really get to people, and find people, and you'd kind of walk in and out of areas.. Like, ok, one person in there, one person there, ok, we've got a full house here with 8 people, I can hang around and chat for a while... And I would have been willing to just kind of do that for whatever came afterwards, but I'm glad they kind of went the extra mile and got this set up..

INSOMNIA_DOODLES

Um .. John, I'm actually, I'm live streaming at the moment. Is that an issue? [indistinguishable]

J. CARMACK

No problem with me, but I don't know if anybody on the comms team cares.

USER 4

I'm live streaming too on my Oculus. And I hope this is good... [indistinguishable]

INSOMNIA_DOODLES

..I don't want to step on any toes.

J. CARMACK

No worries for me.

J. CARMACK

So. But the issue is that really.. and the interesting thing is, the Venues that we're heading towards now is very close to what we originally wanted to do with Venues. Like, the very first prototype that we had made for Venues, we called it Stadium, and it was basically we had the...

[Sudden silence. John's avatar fixed in place.]

USER 3

Oh no...

J. CARMACK

... had a blue lobby area; it was just this great big thing. We pulled out.. there were some fountains and things in there we just got rid of.. And you could just put 50 people in. Eventually we got like over 100 people working in there. And it was just this big area, you walked around, and you talked to people. I thought: this is kind of what I want. Like a Connect sort of thing. And again -- this happens over and over with a lot of Oculus stuff -- people think that, like, well, this is too... it's not professionalized enough. So it got very designed, into the pods and the different stuff you could do there.. And there was some good stuff that we learned from that. But it was not as good as what we've got here. I mean, I would go into Venues, and I would talk with a couple people around me, and maybe the people in the row ahead would crane their heads back and talk.. but it was hard to navigate around the different areas... What I wanted was basically this. I could just walk in and go, people could get around, and we could have a conversation. So I'm happy that it's coming back like this. But the limit of only getting 8 people, or something, in on Venues is really tough. It destroys a lot of what we want from that. But even if we stick with that, we could probably make it a lot better, where you could see kind of a visualization of the neighboring pods off to the side. You know, I wish they'd just put in a couple bits of communication.. even if all I could see was people jostling around, so you could see if there's one or two or eight people over there. And then, if you could basically just point and hop over to the adjacent areas..

And then the other big thing in the design for the original Stadium stuff was that we had a giant screen up there that everybody could look at, but the world was big enough that you could walk to the back, far enough away that you're not bothering people. Because... in the couple of sessions where I was having real conversations with people.. I mean, maybe there was somebody there who actually wanted to watch the video that was streaming, and I'm making all this noise, talking with everybody turned around. But, you know, what I really wanted was.. it's great that you can have the upstairs there.. but you should have more passages that go to kind of a back room. Where you still have all the people, but you can just kind of walk back there and not bother the people that are actually watching the concert. Because it's one thing at Connect -- maybe it's reasonable to have a "Carmack override" to take over a room here -- but if you're at a concert, or a movie, even worse, and you say, oh, there are my best friends, let's just walk off to the back area and talk there. But it feels like maybe we are finally on the right track. We're not there yet, but this feels.. pretty good for some things.

USER 2

I was wondering, on the front of a potential wireless Link solution, if you think there'd be value in making some kind of external Link accessory that plugs into your PC and could talk directly to the Oculus Quest?

J. CARMACK

So yeah, the way the internal politics and drama on a lot of this have gone is that.. we absolutely could make a specialized dongle that uses frequency bands that aren't heavily used, there's good firmware stuff that we can do to improve things, and, you know, hardware people want to make hardware. So, depending on where the program gets driven from, if it comes from hardware people, they're going to design a solution that has you plugging in custom hardware. But from the software side, many of us were like: "look, it works! We have an existence proof, people are doing this on regular Wi-Fi." And then it becomes a question of quality bars, where people say: "yes, but when I tried it, it was terrible, it was a garbage experience, this will poison the well, people will be sick of this, and they'll never want to buy something like this later.. even if we make a better one." And I've never really bought that argument, because there's always this spectrum. You know, the streaming solutions.. like, I did this where we have a demo of running stuff on cloud computers. And I started in one room of my house that was kind of at the limit of Wi-Fi, and I put it on and started saying: well, this is terrible. This is not good, it's jittery all over, I'm gonna get sick. But then I walked down the hall to my office, where my router is, and now it's like: oh, this is surprisingly good.

And also, one of my little gripes: in that exact scenario, I tried to get up and literally walk down the hall to my office, but, of course, once you walk outside your Guardian, it shows passthrough (!). I'm like, ok, great, I'm walking down the hall, but then, after a little while, it goes all the way to black, where it's telling you "oh, go back to your play area" or whatever. I really want that fixed, because that was a totally legitimate thing that I was doing there -- walking from one known Guardian area to another one in my office.

But yeah, there's..

there is very...

Hardware projects inside Oculus are big deals, and expensive. There are lots of scrappy little companies out there that can whip things together a lot faster and more cost-effectively than we can. We have to do our headsets; that's just.. every time we've worked with another company, we've felt that we need to take ownership of it to get it all done the way we want, to our standards. But there's certainly the possibility of something like a Wi-Fi dongle.. we could take somebody else's Wi-Fi dongle. And there are legitimately valuable things we can do in the firmware, there are things within the Wi-Fi spec. There's ...

[screen goes to black for 2 seconds]

INT. AUDITORIUM IN VENUES - CONTINUOUS

J. CARMACK

... long you'll wait for retries, how often you'll broadcast or check for different things. And these make real differences, and there's some really good work that was done internally to check what we can do on the headset, which affects everything, and then what we can do on the other side. And there are valuable things we can do in both cases.

So.. my ideal case is we offer the whole spectrum. You have Link for top quality, where you run the wire up to the headset -- and we have lots of headroom for making that better than it is now; take full advantage of this awesome high-spec USB 3 cable that we made. So we can make that much better. Then you can offer Wi-Fi to a dedicated dongle, where you plug it in and it makes the pairing perfect, we do cool stuff to make it just a magical experience, where your PC always shows up there, and it does the best possible job for wireless, and if you're in the same room it's going to be great. But then we should still support just Wi-Fi through your normal access point. There are going to be more problems; I mean, if the rest of your house is streaming movies and playing games, you're going to have more trouble, and it's going to bother you more in VR than it will for video. You know, Netflix can buffer 15 minutes ahead in a stream, while in VR we'd really like it to be 50 milliseconds or less of total end-to-end latency. But still, lots of people will say: "it's the middle of the night, nobody's on the Wi-Fi spectrum, this works just fine." Or they can say: "well, I really need to get this done in this situation and I can tolerate some disruption."

And then even after that, if somebody says: "well hey, I don't have a gaming PC, but my friend does. He opened a tunnel in his router and gave me his IP address." You should be able to go ahead and connect to somebody else's PC or a cloud service. Probably most of the time, for most people, it won't be a spectacular experience. But even if it's 10% of the people -- I mean, there are niches in the VR market; people buy foot trackers and stuff right now, which is just a tiny, tiny fragment of the market. And if only a tiny, tiny fragment gets value out of even a cloud solution, I'm all for it. I don't care if I think it's valuable, as long as one of our customers thinks it's valuable. And they show up; their preferences are revealed by their actions. People are doing these things, so they must find it valuable, and we shouldn't say it's of no value.
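[Transcriber's sketch, not part of the talk: an illustrative latency-budget for the "Netflix can buffer, VR can't" point. Every stage number below is an assumption made up for illustration; only the roughly-50-ms end-to-end goal comes from the talk.]

```python
# Why a Wi-Fi hiccup that buffered video hides is a visible glitch in streamed VR.
target_ms = 50.0                      # rough end-to-end goal mentioned above

budget = {                            # hypothetical split of that budget
    "render on PC":        11.0,
    "encode video":         5.0,
    "Wi-Fi transmit":       8.0,
    "decode on headset":    5.0,
    "reproject + scanout": 11.0,
}

used = sum(budget.values())
print(f"nominal path: {used:.0f} ms of a {target_ms:.0f} ms budget "
      f"({target_ms - used:.0f} ms of slack)")

wifi_stall_ms = 60.0                  # assumed contention/retry stall
print(f"a {wifi_stall_ms:.0f} ms Wi-Fi stall overruns the budget by "
      f"{used + wifi_stall_ms - target_ms:.0f} ms; "
      "buffered video absorbs that, a head-tracked frame cannot")
```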

All right, sly_dog, again.

SLY_DOG

Um.. I have to ask you: what do you think about Simon's, aka @DrBeef's, ports of some of your awesome games, like Quake and DOOM and stuff like that? I assume you've played these, right?

J. CARMACK

So. It's an interesting story. When the Quake port came out, we were still in development on Quest, and I downloaded the source code, played it, and, ok, this is kind of cool; you know, the frame rate wasn't everywhere it should be.. But it got to the point where I made it 6DoF myself for Quest, internally. Where I went into the source code, found where I needed to set things up.. you know, didn't get the weapons in exactly the right position and various things.. but I made it 6DoF for myself, and I actually had my lawyer send a formal request to ZeniMax. It's like: "hey, do you have a problem if John contributes to this open-source project?" Now, really, I don't have to ask to do that. It's open source, I can do anything I want, I am not prevented from doing that. But I just didn't want ZeniMax to be mad about something. Because I wouldn't want the innocent developer to get caught in the crossfire, if they were going to do something, you know, peevish about it. And they blew me off. They didn't respond. So I didn't follow up on it...

But I think that.. and it's a shame, because I think it could be.. you know, right now it's a hobby project. But the path is clear for what it would take to turn that into something that belongs on the Store as an official application. Evidently, especially from seeing the reactions today with the Nine Inch Nails stuff, they still hate me. So I don't have much confidence that that's ever gonna happen. Which is a damn shame. Because it's neat looking at it now, running on SideQuest and whatever, but it could be really good. And I'm sad that we can't do that. I would give quite a bit to turn that into a real product. I would love to see my old titles there, actually available. And they really are at the right level of performance for doing something like this. You know, I looked at the code, and it made me really wince, where I'm like: why isn't this perfectly solid on frame rate?! And it's like, oh, this is five generations of game mods on top of the original Quake code, and there are layers of abstraction over different 3D rendering systems. And, ok, I'd have to go in with a big axe and cut a whole bunch of stuff.. but this should run at 100.. this would be one of those games that we could run at 120 Hz if we went ahead and nailed everything right. And that would be a glorious thing. But I'm afraid it's not gonna happen.

J. CARMACK

(looking at audience)

Anybody else?

USER 1

When you link your device... When you link your device to the computer, how much is the video card itself involved? I mean, does it matter a lot, the type of video card? Like, I hav..

J. CARMACK

Yes.

USER 1

...e a minimal 970 GTX. Does it really matter if it's a lowe..

J. CARMACK

Ok, yes. It matters a lot. It actually matters even more than on the regular Rift, because, in addition to rendering all of the actual imagery, it has to do video compression on top of that. So a 900-series card is probably not a good idea.
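[Transcriber's sketch, not part of the talk: rough per-frame arithmetic behind "it also has to do video compression". The encode-time figure is an assumption; the refresh rates are just typical headset modes.]

```python
# At headset refresh rates, rendering and hardware video encoding have to share
# the same frame window, so a weaker GPU gets squeezed from both sides.
for hz in (72, 90, 120):
    frame_ms = 1000.0 / hz
    assumed_encode_ms = 4.0           # illustrative hardware-encoder cost
    print(f"{hz:3d} Hz: {frame_ms:5.2f} ms per frame, "
          f"~{frame_ms - assumed_encode_ms:5.2f} ms left to render "
          f"if encode takes {assumed_encode_ms:.0f} ms")
```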

And like so many things, another thing that I've lobbied for getting rid of is the whole annoying "your system doesn't meet the minimum Rift spec" warning or whatever. And it was funny, because AMD got me an absolute top-of-the-line, 64-core, 128-thread system. It's an engineering sample, though. So I start up the Rift stuff on it, and it says, you know, hundred-twenty-eight gigs -- all this does not meet the minimum Rift requirements. Because it wasn't on the whitelist. So we just need to get rid of that. That comes back to the theme I've mentioned so many times, about this professionalism, and minimum bar, and all the worries that we can't allow somebody to have a bad experience. Or that we have to, in some ways, shame people that want to have an experience that's not what we consider the baseline of how we want to be represented. I've never agreed with it. And I think that's going to go away, so that's a good thing.

J. CARMACK

(looking at audience, nobody raises hand)

All right, well, if everybody else is done, you might want to clear out and see if there's a line of people out there trying to get in and trade places if anybody else wants to ask questions.

MULTIPLE USERS AT THE SAME TIME

Thank you. Thank you so much. Thank you very much. You're awesome bro, later.

J. CARMACK

Thanks. Everybody had a good time.

INSOMNIA_DOODLES

Can I get a picture with you?

J. CARMACK

Yeah, sure.

J. CARMACK

(flying all over the stage, before getting off)

Can I pop off? I might be stuck on this..

[everybody tries to take a selfie]

USER 0

I'm really glad we got to participate in a hallway talk of sorts, because I would have never been able to go to a physical convention. So it was... just a delight to be able to hear you talk ... [incomprehensible]

J. CARMACK

Yeah, it is a shame that we are limited like this to 20 people. I mean, what I would love to see is, ok, there are thousands of people involved in all of this; if it was set up with some kind of crowd control, where people could hang back, but when they've got a question they could make their way forward.. you know, some equivalent of letting somebody be a doorman and manage things, but letting people elbow their way forward a little bit. Because some people just want to listen, but right now I have to kind of encourage people, since I know there might be people that want to talk. So we have to make these trade-offs. But, again, I see a path. I think we can get where we want to go from this. And I think we're maybe finally heading in the right direction.