UX STRAT

For People Who Guide Design

XD Immersive Interview: Andrew McHugh, Samsung

Paul: Can you tell us a little bit about yourself? Your current job role, how you got into this job role? Your company? Just a brief biography.

Andrew: Yeah, so a long time ago in undergrad I studied philosophy, a humanities perspective on technology, math, and physics. A little bit during undergrad and for a little after, I ran this curiosity conference called “The What If…? Conference”, cofounded with an at-the-time professor. Basically, it was this event where we brought all these different kinds of people together to be actively curious. We ran that for a few years, and it was kind of in that experience that I started getting more familiar with what design means, and with what UX design means. It was very much immersive reality design, since it was a conference and an embodied experience, but it was also very different than my day-to-day now. My cofounder and I also wrote and published a children’s book, The Book of What If...?: Questions and Activities for Curious Minds, based on the same structure of the conference: ask a question, get curious, learn some, then do something with what you learned.

I eventually left the conference and started designing as a UX freelancer and at an agency. That kind of held me over for a second before I got into Carnegie Mellon for grad school, studying human-computer interaction. After I graduated, I started here at Samsung Research America, exploring AR and VR design and what it means to exist in this new immersive reality.

Paul: Did you get started with AR and VR at Carnegie Mellon, or after you got to Samsung?

Andrew: So, I got started somewhere between undergrad and CMU. My first project…I had bought Google Cardboard and been playing around with it for a sec, but I was also very interested in sensory substitution and augmentation. So, I messed with the Google Cardboard, I bought these little screens that I could put in, each attached to its own camera, making the device into a somewhat augmented reality thing. Since each screen was attached to individual cameras, you could effectively take your eyes out of your head, similar to that one character in Pan's Labyrinth that has their eyes in each of their hands. That was my first different-reality design project that I took on.

Paul: And that was at CMU, or before CMU?

Andrew: Before CMU. I think it was while I was still at my start-up. But yeah, it was just like a side project I was working on in the garage. And then when I was looking at which grad school I should go to, it was very much between something in AR and VR, or something in AI. So, basically I was trying to figure out what are some major technologies that are about to be used by millions of people that could use a good, humane designer. I ended up focusing on AR and VR.

Paul: Did Carnegie Mellon have courses on that, or did you mostly do self-taught?

Andrew: I think Carnegie Mellon has a really great program to just understand what it means to make digital products. Most of the VR work I did there was either taking what I learned in the classroom and adapting it to VR, or there were a handful of times when we were given a project with some flexibility that either I, or my teammates and I, could go in a VR direction. One example of that was this class called “Research Topics in Game Design.” Three other teammates and I created this research experiment that looked at haptics and morality in VR. Our guiding question was: if I can feel the people and things in a virtual environment, does that change my moral decisions?

Our example was looking at--there’s this famous problem in philosophy called The Trolley Problem, where you have this out-of-control trolley. It’s about to kill five people, but you happen to be standing next to a switch. If you pull that switch, then you can change the trolley’s course to kill just one person. Then there’s a whole bunch of variations on that, one of which is where, instead of pulling the switch to kill that one person, you push someone in front of the trolley and it stops the trolley, but you’re still just killing one person. It’s usually the case that people are okay with pulling a lever in verbal or written thought experiments, but they’re less okay with pushing someone. And so in our VR experiment, we were also looking at pushing one person to save a bunch of others, but for a portion of the participants, when they reached out to push, they would actually feel the person they were about to push to their death.

Paul: Alright, cool. And so Samsung, how did they get interested in you for this particular role? How did they know about what you were doing? Or was that just part of your résumé? Or did they reach out to you, or…?

Andrew: For me actually getting into the role, there was an alumna here from the same program I was in, and she posted in a CMU job email list. I saw that posting and applied.

For why Samsung was interested in me, I think it was because of my self-designed internship. My grad program had a summer off to go do an internship. Instead of a traditional internship at a company, I went back home and designed my own. Over about three months, I looked at various UX problems we might experience in VR. What can be applied successfully from web and mobile design? How should we move around virtual environments if we can’t walk everywhere? How should I be able to explore 3D objects if I only have a 3DOF headset and can’t walk around the object? For each UX problem, I did a one- to two-week design sprint with a bit of research, sketches, a prototype in A-Frame, and a little user research.

When Samsung was looking through all of my work, I already had a small body of VR work – between my internship and a couple of CMU projects – that showed how I explored and prototyped VR UX problems. I think those projects, and the process I showed around each problem, helped me get this position.

Paul: Day-to-day, what’s your general role there now, with respect to VR and AR?

Andrew: Day-to-day changes all the time, but generally I look at different AR and VR interactions. Most of my work in the last eight months has been on the AR side. I’m looking at how we interact with virtual content when it’s still in the same world that I’m occupying. How do we show information while not distracting or endangering the user? Samsung is also an interesting place because we’re looking at a bunch of different technologies all the time. We don’t always have a particular hardware spec to match our software design to. It’s more like, if we have this hardware, then here are some design considerations. If we have this different hardware, here are some different design considerations.

For example, the Magic Leap and HoloLens represent high-end AR. You have all these different ways to interact with content and the spatial tracking is pretty great. On the low end, there are a couple of startups I’ve seen that are basically like an AR version of Google Cardboard – maybe no spatial mapping and input is really simple. My job is to think between all of those and help set Samsung on a positive path moving forward.

Paul: So, it sounds like you’re generating guidelines for VR and AR, for products and services. You’re not tied to a specific product, you’re rather generating guidelines for design?

Andrew: Correct. That’s where I focus.

Paul: Cool. So, what are you going to be talking about at XD IMMERSIVE?

Andrew: I will be talking a little bit about my personal journey going from web and mobile into the immersive reality space – some different things I learned along the way – and also showcasing some errors I made, to help other people avoid the same mistakes.

Paul: Without giving away too much, part of this conference, part of our goal is really to show designers and product managers what it’s going to take for them to get involved in VR and AR, and you’ve made that journey already. Can you give us some general ideas about maybe changes in perspective, changes in framework, changes in the way you do your work in general… Like if I wanted to start on a new project in AR/VR, what kind of personal changes do you think that might entail?

Andrew: Maybe I’ll give three notes. The first one is that using depth meaningfully in interfaces is new, and there are a whole bunch of different considerations you have to take with depth. So, in some ways you can use depth to convey meaning: just like we might have different type sizes, you might have things at different distances that reflect some state of your application. But at the same time, you don’t want to provide interfaces that are confusing for your users, so you don’t want things at a whole bunch of different depths; you still want to work with some level of consistency.

My second major note is that our displays are additive, meaning I can turn on some light that has some color all the way up to white, but I cannot project black. In VR, this isn’t really a problem since we hide all other light from the user. But in immersive AR, I can only project more light into your eyes, not less. That means I can’t make drop shadows in the same way, because they’ll come across as either invisible or punch through interfaces further away from you. That’s a very new challenge: in 2D design we might have all these shadows, and text is usually black – we use black frequently in our designs. But transitioning into AR, you can’t use black at all. It’s very much a learning experience, updating and changing how you do visual design.

And then my last note is I think one of the interesting things about AR and VR is that while there might be a lot of things that are changing, there’s also a number of high-level principles that are meant to stay exactly the same. So, if you look at Dieter Rams’ Ten Principles of Good Design, or John Maeda’s Laws of Simplicity, or Amber Case’s Principles of Calm Technology, all of these are going to be the same in V/AR, as in web and mobile, as in service design, and so on. I think that’s also an interesting thing that I’ve learned or reflected on.

Paul: Another question that I have is, okay, I have a certain proficiency level in user experience, I’ve been working in it for actually a very long time. I am an expert at certain parts of user experience design--journey mapping and user research, and what have you. To get to that same level of competency in VR and AR would take me some time. If I’m at “point T,” to get to “point T” as a virtual reality or augmented reality designer probably is going to take some amount of time, and it probably varies for different people. But I’m just kind of wondering, somebody that has three to five years of experience in user experience, what kind of timeframe--I mean, some people would study all the time, some people can only do a few projects and they’re going to learn slowly. I’m just trying to get a window of if I want to be competent at X time, how long is it probably going to take me to get to that same degree of competency. Does that make sense?

Andrew: Yeah.

Paul: There’s no fixed number, but I’m just wondering what are your thoughts around time-to-competency switching or morphing from UX to VR/AR--and they might be different for either VR or AR, as well, so I don’t know.

Andrew: I think that’s an excellent question. To give a general answer, I think maybe six months of working day-to-day in this space, you would become pretty competent. It’ll take a sec to learn to think in three dimensions, to get through all the “gotchas”, and to become minimally proficient with the new tools.

I remember when I was starting, a lot of my personal work was on Google Cardboard. And then the Oculus DK2 and Rift eventually came out, Vive came out, and then working with those kinds of devices, I think I learned additional things because I’m not just sitting in one spot looking around, but I can start to move. That changes things. So, I think as the hardware is still fluctuating, there’s always going to be some amount of re-learning that you’re doing.

And then maybe an additional comment is it very much depends on if you’re working alone or if you’re working with a team. Because if you’re working alone, then you need to do a lot more work to learn some game engine, like Unity, or a web framework like A-Frame. And then you probably also need to learn how to get into 3D modeling. Those are things that I am still working on to this day. I’ve been able to get away with not learning the tools as deeply because I’ve been able to work with teammates who are stronger in Unity and 3D modeling.

Paul: So, it sounds like if you’re a UX designer focusing on VR/AR, you also have to learn how to work with other kinds of people than you normally have been working with.

Andrew: Yeah, exactly. I think it would be similar to if you were doing some project with machine learning in it, you could either spend some time learning a lot of machine learning principles by yourself, or you can partner with an engineer who might already be familiar with a lot of the concepts, and both of those would have different learning times.

Paul: I guess my feeling is UX is still growing, and I would say supply and demand is still a little bit in favor of UX professionals – there seems to be plenty of work now for UX professionals. At the same time, when projects hit that are going to require those new skills, it’s going to take some time to build that skilled workforce. So, I’m expecting there to be kind of a skills gap for some years as these things take over, and that’s part of what this conference is meant to address.

Andrew: Yeah, actually I think that skills gap might already exist. A lot of the VR industry is very focused on pulling people from game design and development. They have more experience with 3D principles overall. I think one of the downsides of that though, is that the training that goes into being a good game designer is sometimes overlapping, but not entirely overlapping, with what it means to be a good UX or product designer. So yeah, I think that the industry very much still needs us, and needs us to become more competent so that we can enter into those conversations and push everything forward in a meaningful way.

Paul: Well, without holding your feet to the fire, do you have an opinion about which is going to have the most impact on day-to-day businesses and industries, VR or AR, in the next five years?

Andrew: I’m very uncertain. I think it depends on at least a number of unknowns: creation tools, content, hardware availability, and form factor. Creation tools for immersive design are limited. Unity is great, but it doesn’t enable me to express myself as easily as I can in Sketch or Framer. I don’t know, maybe that’s a personal problem. But, it seems to me that if we can make better tools, we can democratize the creation process, allowing for more creators and thereby more content. When I talk to people about V/AR, there are times where they’re like “oh, it’d be great to have X thing”, but they have no easy way to bring that idea to fruition.

On the hardware side, if more large tech companies produce cheaper devices with many capabilities, then that also helps grow the industry. It’s very much contingent on both what hardware and software is available. And then also, especially on the AR side, it’s weird to put on these different magic glasses and still have a conversation with someone. So, form factor matters a lot to people, especially if we expect them to wear these devices every day.

My short answer is: I just don’t know. I know it’s going to be great in the long term. I don’t know the path there.

Paul: Speaking about the path, any thoughts on what the next few years hold for us in terms of, just in general, the user experience field, the VR and AR fields, maybe mixing artificial intelligence, internet of things… Any thoughts about how the user experience field is going to evolve over the next few years because of these new technologies?

Andrew: I think the biggest shift is that we are, as UX designers, very comfortable with working inside some magic black rectangle… I think the biggest change in both AR and VR is that rectangle isn’t really a thing anymore. Like, maybe in your virtual content you have a panel and you can still adopt some of the same layout principles as before. But, that panel now exists in a wider world. And especially in AR, when all of these interfaces still have to compete with the rest of reality, and have to figure out smart ways to interact with reality, I think these are very new concepts that we’re still figuring out.

Paul: Before we sign off, anything else you want to tell people about your talk, or in general about what we’ve been talking about?

Andrew: I will add that I forgot to mention in my bio that I also wrote a children’s book. I think that would be nice to keep in the bio.