Afshin Mehin is the founder of Card79, a creative studio based in San Francisco that focuses on tackling complex and future-facing projects ranging from brain-computer interfaces to autonomous vehicles.
What led you to start Card79?
Card79 is a product design and innovation agency that focuses on helping productize “new to the world” technology coming out of some of the most interesting companies out there. The agency is a byproduct of my personal journey. As a young designer, I was enamored with the future. I was sketching flying cars and things back in grade five, which led me to pursue an education in both engineering and design. I worked at MIT’s Media Lab on future interfaces, and then with design studios like IDEO on human-centered design. So I’ve always been interested in the tension between futuristic technology and truly beautiful human experiences—that’s the cornerstone of Card79. Our work has spanned a lot of different sectors. We’ve worked a lot in neurotechnology, robotics, autonomous vehicles, and AI—we’re always looking for what’s coming next, and we get excited when something feels like it has potentially large human consequences and could be steered to enable more positive human experiences.
What challenges are you solving for startups that work with Card79?
We are product designers at heart. So that means we’re both physical hardware designers and digital product designers, working on things like apps or user experiences embedded within a specific device. The spirit behind our work is that of a toolmaker. We’re trying to find a way to create value through function as well as the innate aesthetic experience of engaging with a tool that’s helping you do something. It has to create some level of satisfaction, delight, or meaning for people to really hit home. We’ve always found that you can create things that are super useful, but if they’re too complicated, unpleasant to use, or don’t have that human touch to them, then they don’t catch on in society. So that’s where we come in. Historically, we’ve been called artists for industry in the sense that we create artifacts that are useful and serve a function. We’re builders.
If you go down to the nuts and bolts of our workflow, it’s basically understanding the requirements of a product and its reasons for being. So we’ll work with stakeholders within companies to try to unearth the most powerful user opportunity for their nascent technology to create value from both a business and user standpoint. That’s how we enable the technology to do what it does best. The ability to identify the product at a very deep level is the foundation for the product design work, which then moves into everything from understanding the user context at a deeper level by going through “day in the life” journeys or just contextual research that puts us in the shoes of the people we’re designing for. We need to be immersed in the technology we’re unearthing.
The work we do is heavily technical so we often sit beside engineers for long periods of time to try to unpack the requirements in a way where we can internalize what’s a real requirement and what’s not. Through that work, we’re able to start understanding how to create product archetypes. When we’re dealing with a technology that’s new to the world, we usually end up creating a new archetype. From there, you go into model making, prototyping, color, material, finish definition, and so on. If it’s close to the body, such as in our neurotech work, then we’re thinking a lot about how to optimize for comfort by doing a lot of ergonomic testing. Then we move into prototyping, which may involve a lot of different tools—laser cutters, sewing machines, 3D printers, and CNC machines.
As the definition of how this product will be used in the world becomes clearer, we start talking about production volumes. That takes you down either the mass-production route or the low-volume production route, both of which have their own challenges, but at the end of the day, our goal is to produce products that people will use. Sometimes, though, if we’re doing work with early-stage companies, it’s just about getting them that first artifact that helps solidify a vision of what their future could look like. Other times we’ll carry it all the way through to production. The same goes for when we’re designing digital products and need to collaborate with software engineering teams. There’s obviously a lot more fluidity when it comes to designing digital products, and there are generally faster design cycles.
You’ve done a lot of neurotech work, perhaps most famously designing the original Neuralink. Are there any unique aspects of designing products meant to interface with the brain?
The world of neurotech started when we were doing a lot of work on wearables. At this point, we’ve designed a wearable for almost every part of the body – from the head to the toes. We’ve designed smart flip-flops, smart yoga pants, wrist-worn wearables, and smart sunglasses. When we started doing that, we got a general sense of the challenges around designing anything that’s on the body. So when we moved into designing devices specifically for the head, we really dove into getting a better understanding of the anatomy of the head and the challenges that emerge when you make something that’s comfortable to wear for this new archetype.
A great example of that is when we were working on the smart sunglasses, which had a computer and a display built into them. We started to change the weight distribution of a pair of sunglasses and add volume to parts that typically wouldn’t have changed in the past. In doing that, we were better able to understand the physical constraints and geometry of the head. One thing we learned was that as the center of gravity moves further forward, the more likely it is that the device is going to start sliding down a person’s nose. We had to synthesize multiple learnings like that into a single form factor.
In doing that, we were able to gain a strong understanding of the variation in the anatomy of people’s heads. That naturally fed into our neurotech work. But what was different was that for the neurotech work, instead of just putting stuff on the head, we were putting stuff into the head. When we worked with Neuralink, we were working with neuroscientists and discussing the actual surgical process that would be involved to make that first-generation device work. That gave us a lot of grounding, and a lot of those same principles carry over. The challenge—just like you’d expect for any sensor suite—is to make sure you’re getting good sensor readings. Often that requires working closely with engineers to understand the way that the physical product design is impacting the technology and the signals you’re picking up.
Aesthetically, the head is arguably the most expressive part of our body. That’s why we need to approach these projects not just through the lens of functionality, but also through fashion, self-expression, and how users want to be perceived.
Card79 just embarked on an internal robotics project—what are the details?
I’m extremely nervous about what we’re going into right now. For the past decade, we’ve had all this data gathered about us with the goal of presenting us with more targeted ads. And now we’re about to have robots with AI-enabled hardware as part of our lives following us everywhere we go. It feels a bit scary.
We’ve seen some amazing demos bringing these humanoid, AI-enabled robots to the forefront—robots able to do more and more with less and less training. It feels like there’s an explosion about to happen in the robotics space, and very few people are talking about it through the lens of human-robot interaction. How will it be a pleasant and useful experience to live in a world cohabiting with these digitally-enabled moving appliances? To probe at what more human-centered, AI-enabled robots would look like, our studio has started a new initiative called CoEvolution, a set of case studies looking at what we’d want the world to look like with robots and AI. For example, one case study we’ve looked at was how robots could take care of elderly people in the future. In the last few years, we’ve seen a huge drop in caregivers because it’s not an appealing profession and is underpaid. In the US, we’re also seeing a huge number of baby boomers beginning to move into their golden years. So there are real needs in that space, and we were wondering: what would a user experience and a robot-human interaction look like in that world? How could robots be deployed in a way that would maintain elderly people’s independence, dignity, and social connectedness, as opposed to further isolating them socially or making them dependent on the technology? Our proposed designs don’t always need to nail it, but at least they need to spark a conversation.
We’ve done other case studies where we’ve looked at what a factory looks like when there are far fewer human workers there. How can their work day and work experiences be enriched? If we’re going through a shift where robots take on more work and there are fewer people in the factory, is it possible to make a robot-enabled factory that feels more like a spa and acts as a space for wellness and focus? A third case study we did was looking at city infrastructure. Our general observation is that we’re not tearing down our cities to build smart cities anytime soon, so what would be the ideal type of robot that could exist within older run-down apartment buildings? How can those robots add some charm for the people who live there in an otherwise dreary environment? Ultimately, with these case studies, we’re trying to imagine what properties we want to endow these robots with to make our lives more meaningful.
What are the outcomes or applications for these case studies?
Part of it is scratching an itch we’ve all had in the studio. We’re passionate about new ideas, and we’ll take on new projects of our own accord if it’s a topic that we’re truly excited about. Right now, we’re excited about human-centered design and making delightful robotic experiences for people. These foresight projects give us the ability to sharpen our internal capabilities and build out new tools and frameworks to constantly keep us ahead of the curve.