
XD Immersive Interview: Kharis O’Connell, Meta

Paul: Why don’t you just start out by telling us a little bit about yourself?

Kharis: My name’s Kharis O’Connell, I am the senior director of product and design here at Meta. I’ve been here for just over a year now. Prior to working at Meta, I came to the Bay Area from Vancouver, where I was heading up product for a company called Archiact, Canada’s largest VR/AR company.

Before that, I ran a start-up called HUMAN with a couple of other co-founders. It was a product design company that specialized in wearable technologies in VR and AR. We did a bunch of different products. For example, we designed a line for Mio, who did fitness watches. We worked on some software for Finning and Caterpillar, who used IoT technologies; we worked on a mixed-reality platform for them as well.

Before that, I guess I was in corporate land for a long time. I was at Nokia in Berlin, and I headed up the maps design there for about four years. Before that, I was at Samsung for a year, working on their flagship smartphone, which I think was called the H1 or the M1; it was before the iPhone. And then prior to that, I worked as a system designer for Apple for a few years.

So, I’ve had quite a lot of experience in companies of various sizes. I’ve always been in the UX/product area of design. Obviously, the last four years have been primarily about moving towards VR and AR as it’s grown and grown, not just in terms of popular interest but in design, taking an interest in, “What are the design patterns of the future?”

Paul: That kind of leads to my next question: could you tell us a little bit about your upcoming talk at XD Immersive?

Kharis: I’m going to be talking about designing for spatial reality. Some people call it mixed reality or augmented reality; I just call it spatial reality because spatial design is the real core difference. I was a traditional designer once, if you will, working on websites and mobile applications. In my experience, one of the big challenges over the last few years has been getting designers into working on AR without any best practices or referenceable directions. It was a bit of a “wild west.” I’m trying to share the learnings I’ve had over the last two or three years working in this area, on what works and what does not.

We’re continually trying to push the boundaries of what people are willing to accept before they get into an uncomfortable area where they start to feel anxious or nervous. And I think something that’s really profound with VR and AR in particular is that it’s very visceral. When you’re showing somebody a website and asking an opinion, or a mobile application and asking what they think, or making them run through a couple of the screens, you’re always in control, as a designer, of that journey. What’s interesting in spatial design is you don’t know how the user is going to encounter things, which means you have almost no control over it--no absolute control. So this is a really big design challenge, but I think it’s super exciting so I’m hoping to speak about this and encourage designers who are sitting on the fence a little bit. Maybe they’ve worked in mobile and web for a while, and they’ve seen VR and AR, but they still think that it’s kind of crazy and that it’s just gaming-related design. I’ve been trying to get them to understand that you can apply a lot of the principles of design thinking to AR and get really good results.

Paul: Yeah, I would say that’s the purpose of the XD Immersive conference, as well. We’re trying to bring user experience and product designers into the world of VR and AR and give them some frameworks, tools -- some orientation into the world of VR/AR/voice, and I would also add in artificial intelligence and Internet of Things. Your company is a leading company in this space as well. So, there’s your interest that you followed, but then also you’re with one of the market leaders. Can you tell us a little bit about their current state and kind of how your work is evolving with Meta?

Kharis: It’s funny, Meta kind of popped up on my radar back in 2014. I had a friend who was studying at the University of Toronto under Steve Mann. Steve Mann is very, very famous; the godfather of wearable technology. So, my friend was there in 2014, and he came to me in Vancouver and said, “Hey, you’re working in augmented reality, you’re interested in this. I have this headset from this company called Meta. Do you want to check it out?” It was a Meta 1, and I tried it, and frankly I wasn’t impressed by it at the time. I thought it was pretty terrible, but it made me think: “Well, AR is still a long way off, but at least they’re trying. At least they’re trying to do something.”

Fast-forward to 2016, and Meta contacted me when I was running HUMAN, the start-up I mentioned. They’d heard about the work we were doing in interaction design for augmented reality. We got into a conversation and they invited me to come visit them. I came and visited Meta, stayed there for a week, worked with them, and then finally decided to join the company. Really, the mission of Meta was what attracted me. Take other companies--Microsoft, for example, is one people know, and the HoloLens is a great device. I think that Microsoft did a great job, especially when they released it. I actually saw the HoloLens in 2015, and it was kind of, wow, it was just like, “Holy crap, this is like nothing else!” But it still had that kind of very Microsoft flavor to it. Like, “Oh, it’s Windows on your face now.”

What interested me with Meta, and I think what makes Meta special, is that it is vehemently opposed to transplanting those desktop metaphors to your face. I think that’s something that is really hard to do. It’s so incredibly hard to take that position - but at the same time Meta didn’t have any legacy to protect. So, I think that Meta’s been in a great position to try a lot of things without worrying about losing customers along the way or having developers complaining about how it works.

We launched the Meta 2 headset… It turns out it’s pretty hard to build AR headsets! It was delayed, it came out officially in September of last year, and we had a tremendous response. But the hardware is not really our core proposition. Meta is trying to address “What does the future of computing actually look like?” It’s not about, “Look at this headset,” it’s not about, “Doesn’t it look cool?” It’s about “Can we change the way we work, and can we change the way we have that relationship with the technology around us?”

I think that’s what keeps Meta, in many ways, ahead of the pack. We tend to focus on productivity-focused aspects of AR. If you take, for example, Magic Leap. Those guys heavily focused on entertainment and bringing the fantastical, almost steampunk kind of flavor into the world, whereas we’re looking at how can we make people’s lives simpler; how can we make things calmer; how can we make their lives easier?

Paul: I’ve listened to a lot of Steve Jobs talks from the ’70s, and I was around back then, when he was inventing his first computers into the ’80s. Something that he hammered away on all the time was that the computer is an extension of human capabilities. It seems to me that, at some point in the future, most people in developed countries will use some kind of augmentation to enhance their experience. I can’t imagine the majority of us not using that. It seems that we’re at an early part of that curve right now, but at some point isn’t that the dominant experience that people are going to be having day-to-day?

Kharis: Yeah, I absolutely believe so, and I think we’re trying to do everything to make that happen. You raise an interesting point. One of the things that we’re railing against -- and I think even on a personal level I’ve railed against it in talks before I was at Meta -- is this dominance of the rectangle. As human beings, we are surrounded by rectangles. These rectangles vie for our attention. All of them are beeping and flashing, and we’ve seen what the addiction of staring at these screens has done. It’s made the smartphone the biggest computing platform in history.

But at the same time, we design our lives around technology now. So, wearables were supposed to be something to augment our capabilities and get out of the way. Instead, they’ve become an addiction and we’re just stuck staring at these screens. We all know this. There’s even a name for this, I think it’s called “Blue Face”; the emanating glow of the rectangular screen on your face.

And if you actually pull yourself out of it and look, you have to laugh. It’s kind of insane. It’s like, I’m sure in the future people will look back and think “Humans were completely insane. Technology had taken over their lives and they were slaves to the form factor, they were slaves to how you had to interact with it. There were strict rules and regimes.”

Augmentation is what I mean by calm. We’re not looking at this in terms of, “Great, now we can fill your worldview with endless things, and just fill up the entire world with things.” Instead, it’s “Maybe we can just stop showing screens and maybe we could only show things when they make sense.” So, you’ve got context, which is a big part of it. That’s something you only get a slight touch of when you have these devices. For example, saying “Hey Siri, find me a restaurant.” Of course it knows where the device is, so it will look in your local area, and there is a bit of context around this.

I think AR takes it a step further, where you can now just ask for things. You can use voice input, you can use different input types, but it only appears at the time you need it. We’re not trying to fill the world with virtual advertisements, that’s for sure.

Paul: Got it. If I were to look at our profession, let’s say product design or user experience design, and do a scatter plot 20 years ago of activities, there’d be a lot of print dots, web dots, etc. I think if I did one today, a lot of the print would be gone, the vast majority would be some kind of mobile or web design, and then there’d be some virtual reality/augmented reality at the front part of that curve. In the future, obviously there’s going to be a lot more VR/AR/mixed reality/voice/AI.

I’ve been through a lot of these technology revolutions, and there’s always this talent gap that happens when businesses suddenly realize, “We’re at a competitive disadvantage if we don’t have mobile phones, if we don’t have laptops, if we don’t have internet.” This is coming, but it seems like this time we can clearly see it. What do you think the stages are that we need to go through to get that workforce of designers ready today to start doing something in these immersive technologies? What’s the path forward?

Kharis: That’s a good question. I think my experience tells me that you have to be willing to be wrong a lot. If you look at the last ten years of UX design, it’s become increasingly homogenized or standardized; there are applications now, whether it be Sketch, or InVision, or Marvel, or all of these things that, for a designer, allow a guided path to a known result. It’s very hard now to completely screw things up. I mean, people still do, but generally it’s like that risk has somewhat dissipated. We’re not in the GeoCities era of the web anymore, where it’s just chaos and blinking GIFs and nobody knows what the hell is going on. Everybody understands structures and the frameworks on the web, and they understand what a carousel is, what a dropdown is, what a header is; all of this stuff.

When you go into AR, the first thing that designers are faced with is the Z-axis. They’ve been used to this flatness; there’s nothing to it, there’s no depth to it. It’s about laying out on a singular plane, and then when you click something it automatically moves to the next plane; you’re stepping through or stepping back. The first thing is this Z-axis issue that we have with design. And I’ve seen this when I’ve spoken with designers. I did some guest lecturing at Emily Carr University in Vancouver to the advanced design group there, and even with just the application of a Z-axis everything starts to fly out the window. You’re like, “Wait a minute. How does this work? How does all this old, Donald Norman-type stuff apply to this new world? Isn’t it just crazy game land? You’re just making games?”

I think the first thing needed is a mindset shift for designers. You need to understand you’re not making games. We don’t sit and try to game-ify things. If a task is laborious or it’s something like, “I have to fill in an Excel spreadsheet,” you can’t game-ify it. There’s a base level of fun. What we’re trying to do is make it logical and sensible, and make that task flow as easy as possible. So, the first thing is a mindset shift and understanding that a lot of the books you have, or had, don’t apply anymore. I think students have been really nervous about that. They’ve just studied for four years, and they’re like, “What the hell? I just spent four years studying design, and now you’re telling me to rip the rulebook up?”

The second thing is the actual tools you use. I mentioned Marvel and InVision--it’s incredibly fast to make prototypes for mobile and web now. Any designer worth their salt can make an application look real in a matter of a few hours, and be testing it with users.

It’s hard to prototype in AR, but it is getting easier with tools like Torch3D. The big challenge for designers is this steep learning curve, because suddenly you’re faced with tool sets you may not understand. You’re faced with dimensions that you didn’t really think about before, and there are no books to read. It’s not like, “Oh yeah, I’ll just whip out my Best Practice book, and I’ll have a read tonight about it.” I think that makes it daunting. But honestly, for me, that is actually what attracted me in the first place, because the future was unwritten and it’s a medium that’s incredibly exciting to work with. I think that designers should jump at this.

Paul: I agree, and, in fact, from a business perspective, the user experience folks are needed now. I’m reading about a lot of use cases in companies where they’ve spent a lot of money to develop a piece of augmented reality technology, and then the people in the field won’t use it because it’s more cumbersome than what they were doing before. It’s great technology and it’s super-advanced, but it gets an organ-rejection response because it doesn’t have that UX magic to it. They’re not designing outward from actual use, but rather forward from the technology.

Kharis: Yeah, and I think that shows. We have a lot of third parties we’ve worked with, and it’s so abundantly clear which companies had designers and which companies didn’t. Because a lot of the early-stage AR and VR companies tend to be gaming companies. They don’t hire designers, per se, or UX or interaction designers or anything like this. They have game developers and they use game mechanics.

I think you raise a valid point, and we see it as well, that people are keen to jump into AR, but every experience is completely different from the next experience, which means you continually have to relearn everything all the time. You have a kind of chaos in AR and VR because no one is trying to build a framework of interaction. Everybody is building as if you’re building your own standalone game. I think when you apply that to a real-world situation, then it often doesn’t work. It’s fun for a few minutes, and maybe people demo it and go, “Oh yeah, that’s kind of cool.” But then they’ll say, “But I won’t use it, because it will slow me down, it will annoy me after a while with its quirks and behaviors.” I think that’s something that is going to take time for the industry to mature.

Paul: Well, that’s the whole purpose of the XD Immersive conference: to get designers in there and start getting involved. Anything else you want to tell our readers before we sign off?

Kharis: Just that we’re going to be showing, in the next few months, some glimpses into some of this thinking around frameworks. We’re aiming for the lowest learning curve possible. We want people to be able to go from one experience to another and implicitly understand its behaviors and its interactions. We’re excited to share this. We’re going to share a lot of it at the XD Immersive conference. So, I’m super excited for the future. I think it’s going to be great.