Building Agents and Selling Ideas

Nathan Adler, Co-founder of Handle

In this episode of Now or Never, we spoke to Nathan Adler, an experienced and honest founder, who we're sure wouldn't mind us saying has spent more time talking to AI than people in the last 2 years.

Nathan brought a wealth of knowledge across hot topics like AI, as well as hard-learnt personal lessons from founding a startup in this space.

Subscribe to UntilNow: Now or Never on Spotify to stay updated with new podcast releases, and follow us on LinkedIn to join our community of startups and founders.

Timestamps

00:00 - Intro

03:17 - Handle today

07:09 - Perspectives on AI

10:36 - The current state of AI

16:15 - The inner workings of an AI company

21:11 - Advice for founders – start selling

23:27 - From first customer onwards

27:32 - Understanding and updating positioning

30:48 - What’s next for Handle?

Podcast speakers
Nathan Adler
Co-Founder
Kaga Bryan
Head of Brand
Francesco de Chirico
Head of Design
Transcription

[00:00] Intro

Nathan: 
Experience on how to execute only comes with trying to put something in front of someone else and learning the hard way that someone else doesn't care about what you've done.

Kaga:
Hello and welcome to Now or Never, a podcast designed to let new founders learn directly from proven founders. It's brought to you by UntilNow. We're a design-led innovation studio that works with startups and scaleups to build breakthrough brands and digital products.

In this episode we talk to Nathan Adler, co-founder of Handle. Handle is focused on building AI agents for back-of-house operations that are easy to set up and, importantly, actually work. Their focus is currently on retail and ecommerce, MCP workflows for digital teams, as well as product interviews and feedback, but like most founder stories, that wasn't always the case. Nathan sat down with me, Kaga, as well as Francesco, to give us an update on the world of AI from a seasoned insider, and to share some lessons from Handle. We hope you enjoy the episode.

So we are here today with Nathan from Handle. G'day Nathan.

Nathan:
Hello gents, how's it going? Nice to be here.

Francesco: What's been your journey to the point you are on now? How did you become a founder?

Nathan: Yeah, it's interesting.

Francesco: Was it intentional?

Nathan:
No, well, sort of. I was always into the idea of startups and being an early employee at a startup. I'd seen, been close to, and been involved with a number of well-known Sydney startups; my CV isn't a secret. SafetyCulture was my first, what I'd call, tech scale-up product job, where I was a product manager, and after that I went to Airtasker and then to Dovetail, so I got a variety of experience in different product environments. But it was before that, back at university as a student, that my founder journey really felt like it began. What did I study? Engineering, but not computer science. Maybe one of my regrets from university, actually, is only later learning how useful computer science would be. I studied robotics, so there was a little bit of software in there.

I did some hardware products. In fact, I had a bit of a side hustle for a while straight out of university building wave-measuring smart buoys for surfers, but that's a story for another time. I got to experience firsthand how hard it is to build something. It's not easy. It takes a lot of patience and a lot of trial and error. And then I was part of these tech companies that had made it; they'd found product-market fit. I joined all of those at the point where they were already succeeding. I can't call myself someone who joined a startup as employee number five and watched it become a Series B company or something like that. But at least it gave me a passion and a curiosity, and some foundations as to what it takes to build something from scratch.

Still, there's nothing like doing it yourself. You think you might know what it's like, that you can apply these principles, but when you're in it from ground zero, properly, for the first time, it's pretty daunting. So yeah, you always feel like you're learning 100% of what you need to know in the process.

[03:17] Handle today

Kaga:
We have worked together a little bit in the past, but I think the place for us to start is: can you tell us what Handle is today? And perhaps then we'll talk a little bit about what Handle was at the beginning.

Nathan:
Absolutely. Handle is the best AI agent platform for e-commerce. We help scaling e-commerce businesses run their operations with AI agents that supercharge their team members, so they can really scale their business without having to take on a whole lot more complexity. That means plugging into much of the e-comm suite, whether it be Shopify and similar front-end tools, or warehousing and inventory tools, and allowing them to automate their business in ways that weren't possible before.

Kaga:
Right, right. So when we originally worked together at the beginning, on the brand story and the brand identity, Handle was not that, right? Can you tell us a little bit about what Handle was originally, back in the days of Cupcake, and why and how it transitioned to what it is now?

Nathan:
Yeah, Cupcake was a funny name. Handle was a thesis, and it's still the same thesis today; it just now has a go-to-market strategy on top of it. The thesis back in the day was that AI needed to be more than just a solo problem solver; it needed to be a really good interpersonal team coordinator. Implied in that is that it's not very useful for AI to just do tasks for people in a work context. It needs to collaborate with people on how those tasks are performed, because the nuance of the context behind why they're asking a certain question, or how they want something done, is seldom defined upfront in a document or instruction set that an AI is going to read and follow. And whilst an engineer might be comfortable defining all those rules upfront in code or otherwise, the average human being, let alone one that's busy running an e-commerce business, doesn't have time to build really complex workflows upfront with AI. So we believed that AI needed to coordinate with potentially multiple people in a business context to get things done.

Handle, formerly known as Cupcake, was the framework that allowed businesses to build AI assistants that could coordinate with people. The first use case we built was an executive assistant for meeting scheduling. We have the same product today; we just use it for different things. Back then it was coordinating between people and tools, tools being calendars, for example, to book meetings. Now it's coordinating between back-office warehouse operations team members, or salespeople, or customer support reps, and their e-commerce technology stack, if that makes sense.

Kaga:
It does. So the product itself has evolved, but the core of what it was doing still remains, right?

Nathan:
It definitely remains. There have been a number of shifts in the technology landscape since we originally built it that mean the product we've built today wasn't possible when we started, the major one being AI's proficiency at writing software and integrating with tools. Previously, the best you could hope for was to use it as a knowledge base: hold a conversation, generate language. But once reliable generative code and tool use came along, we transitioned from a world in which AI was no better than holding conversations between people to one where it could actually perform complex tasks reliably, and that evolved Handle into the product it is today.

[07:09] Perspectives on AI

Francesco: What's changed since you started this journey with Handle, in terms of your outside-in perspective on AI? What's changed in your understanding, your perception, your opinion?

Nathan: I mean, so much. For context, when we started building, I think we were still on GPT-3.5 or something like that; we weren't even up to GPT-4. And back then AI was a really cool paradigm, a new technology and opportunity, something exciting. We could start to see the beginnings of how AI could be used to help people coordinate meaningful work.

And there have been two and a half years since then. During that time, we've learnt a lot about what AI isn't good at, even today, let alone two and a half years ago. But in terms of a more long-term view on AI: we didn't really know what it would turn into when we started. Halfway through this journey, I'd say we actually had quite a pessimistic view on AI. There was a period when language models and other forms of AI were plateauing in performance, and we thought we might be stuck with a reality in which AI could only achieve so much in helping us deliver human-quality, meaningful outcomes. But now, from what we've been able to make happen with AI, our perspective is that the technology is no longer the limiting factor, and we as product builders have a lot of catching up to do to make AI achieve its potential, because its potential is far more than ChatGPT. What a lot of people know as AI today is ChatGPT, a few extra bells and whistles, maybe some products starting to use chatbots, but really, 98% of workers today don't use much more than ChatGPT for office work.

I guess what's been really underscored for me is that AI has all of the toolset to be really effective at helping people do meaningful work and offload the tasks they don't want to do. And what's exciting is that we now also have a really diverse range of technology companies competing. Back in the day it was just OpenAI and maybe Anthropic; Google was laughed at in year one for its positioning on AI. Now the maturity is there, which means we've got rid of a lot of what you'd call provider risk from AI: being limited to one company's vision or one company's internal processes. That's never again going to be a problem for product builders to solve. AI as a commodity, something first felt when DeepSeek from China came about, is now the status quo. It means people using AI have the luxury of building on top of whatever brilliant frontier model is out there at the moment.

[10:36] The current state of AI

Kaga: So can I ask you, there's a lot to unpack there, but you said that back then you noticed what AI was good at. And we say "back then" like it was ten years ago; it was barely two years ago, right? But what was it good at, and what wasn't it so good at? Given how deep you are in this subject now, your subject-matter expertise, and where we are today with the confluence of all the different engines, what would you say are the three things AI is exceptionally good at today, and some things it's actually still pretty shit at?

Nathan:
The things that it's good at today are easy. Number one, writing code. It is fantastic at engineering and building things off known patterns. I don't need to say any more about that now, but I can later if we need.

Number two, knowledge: its ability to recall information from its training base and provide a basic general-knowledge baseline on whatever topic you ask a model about. It can probably get you 80% of the way to the best answer you could hope for, just off its foundational layer, with no extra tools available.

And then the third thing I think AI is good at, which is a little more nuanced, is thinking logically through problems. Now, this is an evolving one, achieved more recently thanks to developments in models that only came out in the last 12 months or so: being able to break down a problem step by step, think rigorously, and not get tired of exploring a range of possibilities or applying consistent parameters. As humans, we're lazy. We like to shortcut problem solving, and I mean, that's what makes us really efficient at work. When we're given a task, especially one we've done before, we're thinking: what's the minimum amount of stuff I can do to get this task done? AI doesn't have that problem. It doesn't get tired. It can be really methodical in its approach, which is great when you equip it with the right tools. It means you can get really high-quality output where the alternative, giving it to people, would be efficient but lazy output. And we know it in our day-to-day jobs: efficient, lazy output is often good enough 90% of the time, but 10% of the time you're like, I wish I, or this other person, had thought about that.

Kaga:
Let's explore another angle. So, recapping: it's code, baseline knowledge, and the ability to be indefatigable in its research. Given the progress now, what's the flip side, the reverse of that? Where are you finding yourself headbutting against it, where it's still got a bit of work to do?

Nathan:
I don't think this is going to be particularly controversial or novel, but maybe there might be some listeners who are hearing this for the first time.
Firstly, AI is absolutely a people pleaser, and it remains so. What that means is it will parrot back what you say to it and say: yes, I agree with you; yes, you're the best; you're absolutely right; I'm gonna do exactly what you say. It's never gonna challenge you, or it's rarely gonna challenge you, unless you touch on certain topics that the model providers have declared off limits and built stronger biases against. But for the most part, if you're trying to get things done, AI is not gonna be the thing that corrects you. It's not gonna be the one that says: hey, this is a dumb idea, try something else. And that matters as people are discovering how to use AI. The AI can pull you in the wrong direction, because a bias or leaning towards the way you initially asked the question sends you, and the AI, down some track that is not the best solution.

That's really important because it leads to number two: AI is not good at having an intuitive understanding of which questions to ask to clarify how people want work done. AI will jump to the first possible solution that matches the user's request, without necessarily thinking about the potential factors that aren't obvious from the initial request, what the assumed knowledge is, or which assumptions would be wrong to apply here. Compare AI to even a mid-level desk worker in a collaborative environment. If someone asks them, can you go do this for me, they'll know to ask clarifying questions. They'll know to seek out the unknown, and they'll especially do that when they've never performed that task before. They'll be much more diligent the first time around about clarifying, especially if it's a senior stakeholder asking: what's the intent of this question? What direction should I take to solve it? How much effort should I spend on this task? What kind of output, what does success look like here? And whilst there are a lot of "write your prompt better" tools, they don't solve the underlying fact that AI doesn't have an intuitive understanding of how good human work is done. Which is interesting: really good at writing software, but not good at what you'd call soft skills. Is there a third one? I might just pause on those two; I think that's pretty meaningful.
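A quick aside for technical listeners reading along: one common, if partial, workaround for that second gap is to explicitly instruct the model to surface assumptions and ask before acting. Below is a minimal illustrative sketch of that pattern; `call_model` is a hypothetical stand-in for whatever chat-completion API you use, not a real library call, and none of this is Handle's actual implementation.

```python
# Illustrative sketch: nudging a model to clarify intent before acting,
# rather than jumping to the first plausible solution.
# `call_model` is a hypothetical stand-in for any chat-completion API.

CLARIFY_FIRST = """Before performing a task:
1. List the assumptions hidden in the request.
2. If any assumption materially changes the output, ask one short
   clarifying question instead of answering.
3. Only proceed once intent, effort level, and success criteria are clear.
Do not agree with the user just to please them; flag weak ideas plainly."""

def handle_request(call_model, user_request: str) -> str:
    # A system prompt won't fully fix sycophancy, but it shifts the
    # default behaviour from "answer immediately" to "check intent first".
    return call_model(system=CLARIFY_FIRST, user=user_request)
```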

Kaga:
They're good, they are. They're sycophantic; that always-pleasing behaviour is something I've come up against. I think we all have: when we use it, it seems to always want to people-please. And I think the second point you've got there, around it not having that instinct to clarify, or even a little reluctance to get things wrong, is also quite problematic, right?

[16:15] The inner workings of an AI company

Kaga:
Switching back to your business and where you're at now. You were going from an agentic sort of model that was solving a one-person organizational challenge, one to many, basically working as a kind of butler or support or agent for individuals. This has changed now and you're going more intra-organization, across layers. Is that correct?

Nathan:
I mean, it's through all layers, whether it be organizations with other organizations, or teams within organizations, or individuals within a team with respect to each other. The future we believe in is one where people have their own personal assistants as well as team assistants, with shared knowledge, private knowledge, and almost the natural human version of role-based access control. That's a weird, technical way of saying it: knowing who I'm talking to about what, and what information and context is appropriate to share with that person.
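For technical listeners, here's a minimal sketch of what that kind of role-scoped context could look like in code. This is purely illustrative under my own assumptions, not Handle's implementation; every name in it is invented for the example.

```python
# Illustrative sketch: an assistant whose memory is filtered by role,
# so each person only ever sees context appropriate to them.

from dataclasses import dataclass, field

@dataclass
class ContextItem:
    text: str
    visibility: set[str]  # roles allowed to see this item

@dataclass
class Assistant:
    name: str
    memory: list[ContextItem] = field(default_factory=list)

    def remember(self, text: str, visibility: set[str]) -> None:
        self.memory.append(ContextItem(text, visibility))

    def context_for(self, role: str) -> list[str]:
        # Filter memory down to what this role may see, before any of
        # it reaches a model prompt.
        return [m.text for m in self.memory if role in m.visibility]

team_bot = Assistant("warehouse-assistant")
team_bot.remember("Carrier cutoff is 3pm on weekdays", {"ops", "support"})
team_bot.remember("Q3 margin target is 22%", {"founder"})

print(team_bot.context_for("support"))  # only the carrier cutoff
```

The design point is that the filtering happens before anything reaches the model, so an assistant can't leak context it was never handed.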

Kaga:
Yeah, so as a founder scaling up into that decision matrix around what do we do, where do we go next, how do you prioritize the builds? How does that come about?

Nathan:
Well, at the start, when you don't have a customer pattern to go off, and customers don't even know what this product category is, you have to do a lot on your own. We basically tried to model human behavior, first as a set of more rigid rules, then as fuzzy guidelines we could provide to AI. Then a lot of testing and feedback: did this feel right? Is this what I would expect of someone I hired? How would I give feedback to this person; what would I say they should improve on? So that's how it started. Now we're at a point where our customers are pulling us in the direction we need to go, because we can never test in the wild to the degree that they test Handle.

And so now it's much easier. We have real-time experiments with hundreds, if not thousands, of AI assistants in the wild serving people's real questions. Where those handlers let our customers down, they tell us, and we try to understand and unpack what went wrong. At times it feels like being a psychologist: what information was missing here? Why did this AI jump ahead or go in the wrong direction? And how can we learn from that to apply improvements to our product? The goal being: I can ask my AI assistant to do a thing, and it behaves exactly as a professional human team member would for me, first time around.

Kaga:
And those fuzzy guidelines that you're working through now, are they starting to become less fuzzy, more clear? And are they organization-dependent? Like, if you have one client that says, this is our expectation of how a mid-level manager would respond to things, is that something you pull into a bespoke instance just for them, or is it something that informs a standard across the product?

Nathan:
It's all about learning, so it's all bespoke, more like the former of those two scenarios. Our first use case started off by trying to encode exactly what we thought the best executive assistant in the world would do. What are all the things they'd consider? What are all the actions they'd take? We immediately ran into the fact that everyone has a different way of doing things; that was lesson number one. At the start of the product, we had a lot of trouble with this, because how do you solve for an environment where everyone's definition of what good looks like is different? And not just the definition of good: the process the AI needs to use has to be contextualized to the environment it's in.

So we've focused more lately on the learning capabilities of AI. You have a starting point, which to be honest is based a lot on our own experience as well as really good foundational language models. But then you give it the framework, the ability to learn and adapt to what the user wants. Going back to what I said before: if you hired a really good team member, on day one they're not going to just apply what they learnt in their last job at a completely different company and do things that way. They'll ask: hey, I have this experience, but I know you might want things done differently, can you teach me how you want them? Hey, I've tried this for the first time, this is what I've currently got, can you give me feedback on it, and what can I learn for next time? That framework is the most important thing driving our product today, and one I think we'll be working on for the next five, ten years at least. It's not gonna be perfect very quickly, but it's getting a lot better pretty quickly.
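Again for technical listeners, here's a minimal illustrative sketch of that try, get feedback, learn-for-next-time loop. It's a toy under my own assumptions, not Handle's framework; `call_model` is again a hypothetical stand-in for a chat-completion API.

```python
# Illustrative sketch: an agent that folds accumulated user feedback
# into every later attempt, so it adapts to how this team works.

class LearningAgent:
    def __init__(self, call_model, base_instructions: str):
        self.call_model = call_model
        self.base = base_instructions
        self.lessons: list[str] = []  # corrections gathered over time

    def attempt(self, task: str) -> str:
        # Prior feedback is injected so the agent does things the way
        # this company wants, not the way its "last job" did.
        prompt = self.base
        if self.lessons:
            prompt += "\nLessons from previous feedback:\n" + "\n".join(
                f"- {lesson}" for lesson in self.lessons
            )
        return self.call_model(system=prompt, user=task)

    def give_feedback(self, lesson: str) -> None:
        self.lessons.append(lesson)

# Usage (hypothetical):
# agent = LearningAgent(call_model, "You are an e-commerce ops assistant.")
# draft = agent.attempt("Draft the weekly stock reorder email")
# agent.give_feedback("Always cc the warehouse lead and use bullet points")
```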

[21:11] Advice for founders – start selling

Francesco:
I think we were talking about this last time we met; you were telling me a similar thing. You were telling me how hard it is to actually get product-market fit, get the overall idea off the ground, and validate your direction. If you were going to give some guidance to other founders who are in the same phase, since it's so fresh for you, what would you say? How would you suggest getting through that phase?

Nathan:
Probably the only suggestion I have is: try to start selling something. There's a lot of first-principles advice people can give you about how to come up with a good startup idea and what types of ideas are successful, but at the end of the day, one, there are so many things to build in the world, and it's very hard to get a lot of them off the ground. What has to intersect is your passion and commitment for trying something: it might be an idea, an area, a vision, a mission you're really passionate about. But two, experience on how to execute only comes with trying to put something in front of someone else and learning the hard way that someone else doesn't care about what you've done. Until you experience that firsthand, it's really hard to internalize the question: what am I going to do to build something that people care about? So that's really my only advice: just get started, and the sooner the better. And I think the most important thing I said there was actually to try to sell something, because when you're starting, it's easy to delay the point at which you start selling. You'll be like: oh, the product's not ready yet, or I'm still tweaking some features here, or I'm working to figure out our stealth launch and how that's gonna go. But none of that really matters that much until you've actually tested something with someone. And not in a "hey, my friend thinks this idea sounds kinda cool" way, but actually: here's a person I wasn't friends with before, I've presented this thing to them, and they want to buy it. That moment is incredible, amazing, and it's very hard to get to. The sooner you realize how far away you are from that moment, the better. Then you start to unpack what's required to get there.

Francesco:
And how do you go from there? This is very interesting.

[23:27] From first customer onwards

Francesco:
So once you've got the first customer who is paying to use the tool you're building, how do you go from there? What was your experience? I mean, I know everyone is a little bit different on this, I guess, but...

Nathan:
Things start to move a lot faster after the first customer, because the question you immediately have is: how many other customers are out there like this one? I can start talking to those other customers about how this customer is using the product. I can start to learn from this first customer: what are the real problems they're solving with our product? What are the reasons they bought it? Because chances are, the reason you thought your first customer was going to buy your product is not actually the reason they bought it. So from customer number one there's instantly a bunch of clarity around what matters, and instantly comparables you can apply: who else is in the market, and how do we need to message them? But then the question of approaching product-market fit soon becomes: how can we inspire more people with this useful thing we've made? At least customer number one has found it useful, and hopefully there are two, three, four, five customers who have. If you've got a small handful of customers finding it useful, chances are you're onto something. At that point the problem completely shifts away from how do we build something useful, to how do we talk about this useful thing in a way that brings us more people who will find it useful.

How do you talk to a customer about the product, the features, the offering, all that stuff? I'd have to say this is the problem we're working on at the moment. It has been two and a half years and I feel like we're at this problem right now, which is kind of embarrassing to admit, but it also makes sense. Our team are technologists, product people. Building a startup, you get an appreciation of how interesting and difficult the other functional roles in building a product are. You might be really good at the engineering side of it but not know how to talk to people about it. I do believe that, regardless of your background, you'll eventually learn how to do this. Some people must be naturally good at talking in ways that customers understand immediately; we weren't. So we're using our first three, four, five customers to help us build case studies, craft our messaging, and identify what's going to resonate with people. We also started going to a lot of events where our kinds of customers would be, talking to them about it and getting feedback. Suddenly we could identify our target audience, and even just know that we were speaking with the right people and getting the right feedback. That feedback and collaboration on positioning, from customers and potential customers, is really helpful in solving the problem. But it's actually our focus over the next few months to crack it properly, because what I would love is to be able to say "here is Handle" to the world, and have exactly the right people see that and understand how valuable it is for them. With a completely new product paradigm, which Handle and a lot of AI products are, it's not something where you can say: oh, this is like X. There isn't a mature market of one-for-ones to compare to that we can easily position ourselves against.

[27:32] Understanding and updating positioning

Kaga:
I think that's always a hallmark of a really good, valuable product, right? That you can essentially create a category of your own. If you can't sit there and go "we are like X", you're not pinning yourself up against anything as a basis of comparison. The challenge you've articulated there is that you have to say: we are the solution for the problem. A problem that you, dear customer, may not even have been able to articulate yourself, or the one you talk about around the water cooler all day long. That's a hard problem.

How many times have you revised your positioning, or who you are, even just internally, over the last two and a half years?

Nathan:
A dozen or more. I would say most of those actually haven't seen the light of day in terms of the language we put on our website. Most of them have been small, internal updates to the way we talk to our customers about things. But it's never-ending, and I don't think we'll ever settle on it, frankly, especially not in the next 12 months. We'll be constantly refining and iterating.

Kaga:
Obviously you're getting closer and closer to something that resonates as a singular message, but do you also think there's a double-edged sword there? Does that cause you any concern? Like, I've heard things about product-led growth being steered by a larger client and altering the journey of the product itself.

Nathan:
I think the thing that messaging or positioning the product does steer is your audience, your market, who you are going after. And I think that's one of the most important lessons a founder has to learn. Most products built today can serve more purposes than the actual target market benefits from, and the trap is to look at all of those different purposes and be like: wow, it can do so many things, why don't we say our product can do everything? The huge problem with that is you immediately start to lose clarity, amongst the people who could really benefit from your product, about what your product is for and why it's gonna benefit them. So the thing we've had to understand, and I'll get to product-led growth and those first few major clients in a moment, is intentionally narrowing our target audience, and I think all founders should do this, to the people who are gonna resonate most, and ensuring that the problem you're solving and the learnings you're taking from your first few customers build that audience really clearly in your mind. If you can't figure out who that audience is after your first few customers, you're probably not forming a strong enough thesis around your target audience.

Then there's the cliche I've heard: hey, if you build too much for one customer, you're going to get stuck just delivering enterprise features. I think that feels like a problem of 15, 20 years ago. It doesn't feel as much of a problem today, because AI has shifted the velocity with which people can build product. The speed with which you can build product is no longer the bottleneck to launching product into market. It is now your speed of thought and feedback, and your accuracy of execution on the right strategy. Which means building a whole bunch of custom enterprise features is pretty cheap these days, most of the time. It won't necessarily win you more customers, but it might teach you that those extra enterprise features weren't valuable for customer number two. More important is to just get those first few customers so you can accelerate your learnings and get the next 10 and 100 customers after that.

Francesco:
Very interesting.

[30:48] What’s next for Handle?

Francesco: What's next for Handle?

Nathan:
Scale, I guess. We feel like we've got some really exciting technology that we've recently put into beta for a few of our customers, which tries to make AI more capable at solving problems quicker than has been done before in terms of e-commerce integrations. But ultimately I think the main thing for us is still to find more of our audience in the e-commerce space, and potentially other areas as well, but focusing on e-commerce, and to figure out the strategy and the messaging that's gonna resonate the most with them.

Kaga:
Awesome. I think that's probably a great place to tie it all up in a bow.

Nathan:
Awesome.

Kaga: So thanks so much for coming in, Nathan.

Nathan:
No worries.

Francesco:
Yeah. Thanks for coming. Thank you.

Kaga:
We hope you enjoyed the episode. Thank you for listening. Follow us on Linkedin and Spotify to stay updated with new podcast releases and we’ll see you again soon.
