Nathan Clark - Ganymede - Part 4

Data Integration & Automation in the Life Sciences | Building a Technical Team & Go-To-Market Motion | Creating Efficient Data Infrastructures | Automating Complex Workflows

Find us on your favorite platform:
Apple Podcasts | Spotify | YouTube

Show Notes

Part 4 of 4. 

My guest for this week’s episode is Nathan Clark, Founder and CEO of Ganymede. Ganymede is the modern cloud data platform for the life sciences and manufacturing. Their Lab-as-Code technology allows you to quickly integrate and harmonize lab instruments and app data, automate analysis, visualize all your data in dashboards built over a powerful data lake, and ultimately speed up your operations to accelerate science or production.

Prior to founding Ganymede, Nathan was Product Manager for several of Benchling's data products, including the Insights BI tool and Machine Learning team. Before Benchling, Nathan worked at Affirm as a Senior Product Manager and was also a Trader at Goldman Sachs.

Join us this week and hear about:

  • Nathan’s experience with data integration and automation in the life sciences and building a product for bioinformatics engineers and data scientists 
  • His experience building the technical team and creating a go-to-market motion
  • His vision for Ganymede in the years to come
  • The challenges biotech companies face managing and analyzing large amounts of data 
  • The importance of streamlining processes through automation and data integration

Nathan’s extensive background in machine learning and data systems across financial and lab technology and knowledge of their applications in the life sciences offers unique insights for founders to benefit from. Please enjoy my conversation with Nathan Clark.


About the Guest

Nathan Clark

Nathan Clark is the Founder and CEO of Ganymede, the modern cloud data platform for the life sciences and manufacturing. Their Lab-as-Code technology allows you to quickly integrate and harmonize lab instruments and app data, automate analysis, visualize all your data in dashboards built over a powerful data lake, and ultimately speed up your operations to accelerate science or production. Prior to founding Ganymede, Nathan was Product Manager for several of Benchling's data products, including the Insights BI tool and Machine Learning team. Before Benchling, Nathan worked at Affirm as a Senior Product Manager and was also a Trader at Goldman Sachs.

Transcript



Intro - 00:00:01:

 

Welcome to The Biotech Startups Podcast by Excedr. Join us as we speak with first-time founders, serial entrepreneurs, and experienced investors about the challenges and triumphs of running a biotech startup from pre-seed to IPO with your host, Jon Chee. In our last episode, we spoke with Nathan Clark about his time at Benchling as a product manager handling business intelligence and data analysis products, highlighting the critical need for a more efficient and structured approach to lab data management. Nathan also discussed the pivotal moment that sparked the founding of Ganymede, reinventing data infrastructure within the scientific community and the importance of sharing and organizing lab data. If you missed it, be sure to go back and give part three a listen. In part four, we talk with Nathan about data integration and automation in the life sciences and his insights from founding Ganymede, detailing his experience building the technical team, his go-to-market strategy, and the vision for Ganymede in the years to come. Nathan also discusses the challenges biotech companies face managing and analyzing large amounts of data and the importance of streamlining processes through automation and data integration.

 

 

Nathan - 00:01:28:

 

So in the early days, when we started working on that, it was just me and Benson, my co-founder. I met Benson actually at Affirm. He was on the data science side there and built out a lot of their data science and analysis team. So Benson's amazing. And what always struck me with him also tied into our thesis: there are a lot of people in biotechs that are these bioinformatics engineers. What if we could give them tools to touch the rest of their wet lab data too, in a similar manner that feels familiar? Because the problem with a lot of this integration, as you can imagine, is you can't really have all-the-way-out-of-the-box connectors for things. You can have parsers and you can have API connectors to different ELNs and LIMS. And now we have all of them. But still, there's a ton of work to do because everyone's process is different. The instruments also, every single instrument is configured slightly differently. I always tell people, especially for the big lab instruments, they're like a house. You'll have the same blueprints, but every house has something. It's so big and it's so custom. Everything has something slightly different. And there are a lot of ways you can use it. Like, what is the data output of a house? Okay, well, what aspect of it? What context are you talking in? But so a lot of what we were hoping was, let's get these data scientists able to manipulate their wet lab data and give them the infrastructure so they can focus on the logic. And Benson also, I like to say he's a wizard with a Jupyter Notebook. He's an amazing, incredible data scientist and programmer in general. So although actually with the balances, we ended up just building that as more of a general-purpose traditional code stack, the way that we built it and built out a little container runtime.
For the general product, again, we always had the idea of: make it a data science environment, a low-code environment, where you can go in and focus on the business logic of the connectivity, because we will have the infrastructure layer. We will have the connectors and parsers of the connectivity. And when people think connectors and connectivity, they think about parsers. They're like, oh, I want my lab data to be online. But that's actually only half the battle. The other half is, okay, what do you do with that? Because you just got this huge dump of uncontextualized lab data. What does it mean?

 

 

Jon - 00:03:36:

 

Yeah.

 

 

Nathan - 00:03:37:

 

I see this all the time. Bioreactors are a common culprit here that I like to use as the example, because with the bioreactor, you run it and sure, okay, here's a million seconds of bioreactor run data. And it's super wide too. You have the temperature, you have all these sensors, you also have pump pressures and things like this. Okay, what of it? You have this data, now what are you going to do with it? And the answer is, they're trying to run a DOE and they're trying to say, okay, these different samples are at these different settings, and I'm going to take time point data every five hours. To actually get that out of the raw data is analysis. And so there's the analysis layer, and also the necessary customizations to the connectors, because those lab instruments will all emit data slightly differently and your template needs to be customized. That was our vision for the developer platform. When we looked at a lot of existing attempts in the market to have connectors and connect lab instruments, what we saw was always failing was the customization. Because they were trying to be out of the box, and because they said, oh, customization is hard, complexity is high here, let's make it no code, let's make it just work. Which means all the actual fixing of it goes into their software engineering team. If you think that complexity can be managed in no code or out of the box, your product has to handle the complexity. So what we said is, okay, that's a very inefficient way to do it, because you're having all these software engineers spend all this time writing connectors and things like this.
I think our fundamental insight looking at these companies, and looking at how clients approached it too as we started gaining more clients, was going further and further and saying, okay, the thing to be solved here is not having connectors, it's writing and adapting connectors, because there's so much work to be done every time there. So we really wanted to build out the perfect developer platform. Take this data science environment we were imagining, and let everyone just code up the logic of their connectors and the transformation logic in a Jupyter Notebook environment. Just open up the code anytime from any web browser, change it, test it, click save, and it's done and it works. That development lifecycle can then finally be fast enough to actually let people connect these systems in cases where they wouldn't be able to before. And I think that turned out to be a great paradigm. In practice, I think we got fewer of the data scientists than we were hoping, because the data scientists really are still focused on omics data and bioinformatics. And I think that work, the things that they do, are more one-off analyses in a lot of ways. You can productize it, but they're not in the business of making clean software engineering abstractions. That's what you really need for automating wet lab data. So in practice, actually, all our clients are software engineers, or we also have built out a huge implementation team, we call them scientific software engineers, that do a lot of coding in the Ganymede product layer. So we're a two-tier business. We have the core platform, which is a connector builder and maintainer and transformation data layer to say, okay, how do you actually connect to these different systems? And then we have the product layer that either clients can use or our scientific software engineers can use to actually write the logic of saying, okay, well, this is an FCS file, I parse this this way.
And then I do this analysis and I put it over here in the connected ELN or LIMS. And that's been a great paradigm. Because what it means in practice is that we are way more efficient and way faster at building integrations, and maintaining them too, because we anticipated, kind of cynically, how complex and fragile they are. And we said, okay, here's something that I know will break every week. I mean, yes, you would try to reduce that breakage rate, but what really matters is, can you fix it fast enough that it's okay? So you end up still saying, okay, a scientist just ran something and this instrument changed in a way that was totally unanticipated. Can I just take half an hour and go quickly fix it and adapt it, so the data still flows? That's really the magic. And so I think that's what helped us grow quickly, saying: we can go to almost any client and just say we will integrate your instruments pretty cheaply. It doesn't even matter whether we've done the integration before or not. Now we have integrated many of the common instruments out there, but we'll just kind of eat the cost of developing a new integration if needed, because it'll already have to be customized anyway. So we can kind of break the speed of light on that.

 

 

Jon - 00:07:32:

 

Very cool. Very cool. Like, God, I'm like, I wish I had this when I was in the lab. So, I mean, this sounds incredible. And it seems like you're starting to get some product market fit a little bit here. And the customers are like, yes, this sounds great. Can you talk a little bit about the go-to-market motion with this? Like, you know, the kind of like two-tiered Ganymede product set?

 

 

Nathan - 00:07:56:

 

I think that's exactly where, back to what we were talking about, the go-to-market motion in something like this, where you're building a category-defining thing, is less about showing the relative features or anything like that, and more just telling the clients: what is this, even? And what problems does it solve and map to? And then once you cross that gap and you explain it, then it's amazing. And from a funnel perspective, the hard part for us is the top of the funnel, getting people to understand what Ganymede is, and also understand it well enough that they can champion it internally. Because Ganymede is not a cheap product. It's a pretty expansive thing. And it is a fundamental piece of client data infrastructure. So they need to be able to get people aligned internally on adopting this. But then once they understand it and they can see what's possible with it, then they say, oh gosh, I need this, and things move really well. And so I think that go-to-market motion is very, very heavy on consultative sales, tons of explanatory docs. And then it always has to go towards a really custom demo. Every deal that succeeds is going to be centered around a custom demo that shows them exactly... You cannot allow for any level of imagination, because it's a new thing. It's not easy for people to anticipate what it can do. You have to show them every aspect of their workflow, build it into the product, and develop those really custom demos. And that's what will succeed. So I think the go-to-market in that sense is very explanatory and very example-based. And a really consultative sale. We have a pretty big and experienced sales team for what we are. And a really well-paid one, because they also need to know a lot about software and a lot about biology and be really, really multi-skilled.

 

 

Jon - 00:09:32:

 

That makes a lot of sense. Honestly, it's kind of like, you don't know what you don't know, right? These biotech companies, it doesn't even occur to them that something like this could exist. So it makes a lot of sense that you've got to do the brand awareness, the product awareness, and get them starting to dream about this. And for folks who hear this and are like, that sounds great, my first question is: what is the archetype of a biotech company that stands to benefit the most from the Ganymede platform? Is it big pharma? Is it a company that's still in an incubator? Where's the affinity strongest?

 

 

Nathan - 00:10:13:

 

I think the short answer is the bigger, the better. Though in practice, I think once you reach the 200 or 300 person size, it's interesting drawing a contrast to Benchling, because I think Benchling and how it works, and how an ELN works, that happens one or two generations in company size earlier. You can do everything in a Word document until you're like five, ten people, and you're like, okay, let's start getting actually a real system here. But with Ganymede, I think it comes much later, because you have to have that substrate of having enough data volume, having some sort of structured system, where you're like, oh, now I'm doing a ton of data entry into the system. I wish I didn't have to do that anymore. I think that's where, once you reach the 200 or 300 person stage and you also start thinking about IND filings and things like that, then the data consciousness develops. And either, at the least, you're data conscious, or, in the really, really strong clients for us, they have a data strategy. We're a huge proponent of data strategy, of saying: you as a biology company, you are a data company. Yes, you have a wet lab. But your business is to run experiments and then show the data around that. So how do you store and manage that? How do you store the context? And we sometimes have to evangelize that. Many clients, they're very smart, and so they naturally gain that consciousness themselves and start investing in a data strategy. And I think that's where Ganymede comes in really well, because that's that moment. With larger pharma, everyone already has some sort of data cloud. And so there's still a lot of value you can provide around the instrument connectors and around analysis and things like that. And that's pretty essential.
But I think especially when you're a 200 or 300 person company, the idea that you can avoid having to hire the software engineering team you thought you were about to hire, or the idea that if you have a software engineering team, they don't have to spend all their time building out infra and waiting six months to get an initiative going. It can just work, and they can start building business logic on top. I think that's where it's really magical. And Ganymede can be the complete substrate and help people avoid all this infrastructure work. So then it's a very, very full stack thing. So I think that's really, I guess, the strongest resonance in a lot of ways. Nothing beats scale though, I'll say, commercially; the more data, the better. So bigger is always better, but there's definitely an inflection point there.

 

 

Jon - 00:12:16:

 

Yeah, and you kind of already alluded to it, but the super-scaled companies already have something in place, and it's kind of this thing where you've got to get the hygiene done, set up early. I would imagine someone who builds on the Ganymede platform early, and then starts building all the data hygiene, data analysis, all these things on Ganymede, you just get way more multiples of return on that. Versus companies who are already at scale: they have done it a certain way, and you're not necessarily getting the maximum impact you'd have gotten if you had built it from scratch. And I'll give you an example. We brought up Salesforce. We use HubSpot. And so we were able to build our CRM on HubSpot's CRM platform, versus I know tons of companies who've built on Salesforce. Salesforce is fantastic, but it's kind of this thing where once you're on that platform, there's a certain way of doing things. But we had the opportunity to start anew on this new platform, which is a marked difference for us, at least. It's like building on the HubSpot CRM and all the new ways that they're approaching CRMs, versus Salesforce. So, a very circuitous way of saying: it seems to me that growing on the Ganymede platform probably provides a multiplier effect on the return on investment for choosing to use Ganymede.

 

 

Nathan - 00:13:47:

 

I think that's exactly right. And I like to think that's our paradigm, because it's very informed by this business automation side. When people have data strategies, they oftentimes initiate their data strategy by saying, I need to get all of my data out, at least into a clean format, and then start building a data lake. Because that's how a lot of the modern data stack works in other industries. But one of the insights that we've gained over time is that just having the data lake, back to that point about the raw data, is not that useful on its own. In biology it's really what you do with it, because that forces you to have the right context and structure on the data. And so with bigger companies, when we go into these big pharmas, one of the things that's happening in the industry right now is that a lot of the circa 2010, 2015 digital transformations that kicked off to move things into the cloud have not provided as much value as people thought. Because they were very reliant on the idea that if you just get the data lake and you just have the data, that data is your biggest asset. Which is true in a sense, but it's not an asset the way that people think. It's an asset in terms of its context. And they were thinking of it as an asset as in, I'm going to be able to do AI on this data or something and derive insights from that. And those efforts have mostly failed. I mean, I say this as we're in the heyday of LLMs, but those efforts also will not go that far. I can provide a little crystal ball here, having product managed a lot of work in this space. There is a lot of work to be done on the machine learning side, but it comes down to having a really crisp understanding of the idea and the problem to be solved, and having the data there. And that's one of the reasons a company like Benchling is so well set up for these kinds of things, because they have the data and they have the structure of the context.
People just chucking LLMs at these data lakes is not going to go that far in this space. And it hasn't. And that's the problem is that people have these data lakes. We like to use the term data swamp because it's like, oh yeah, you have a data lake. How much do people use this? What did they get out of it? And it doesn't provide value. So very long kind of digression here. But when we come into big companies, it's oftentimes because that's been an issue. And now they're more focused on, okay, how can I more incrementally still get data going here? I got my data lake, but I realized actually the problem is I need to get the data into my ELN or LIMS. I want to now automate stuff with this data. And they're realizing, okay, this is not like an AI thing. It's a data mapping and analysis automation kind of thing. And that's where we come in really well. So all the time we work with big pharmas, but they have a pretty good data infrastructure and have been really thoughtful about it. But then they realize to take the next step, it's totally different. You have exactly that bioreactor data example. They have this really rich, really nice, clean bioreactor data set, and no one can do anything with it. And so then it turns out, let's actually automate the data entry into your LIMS or your batch record. And that's the second hard part to say, okay, now let's encode the analysis and also have the connector and the API to this system. So that's valuable on its own. But as you said, the best is when we can get in early and establish that precedent from the get-go that it's not just about having the data. It's about connecting it between systems and moving it automatically because getting to the point where you can actually automate the full process also is the most mature data layer. 
The data and information required to drive the automation is also going to be the best, most robust, and most contextualized data set when it comes time to analyze it or drive AI on top of it later. This data needs to be connected to your ELN or LIMS, because in that Benchling example, Benchling knows all the structure. If you want to do data science and analysis or machine learning on your raw instrument data, you have to attach it. And so I think that's where, yeah, we can, from the get-go, get people in this mindset of: okay, I'm providing short-term value through optimization, and that will get me to the point where I can build up to the larger thing, like an AI system, and have it be contextualized enough. And also have maintained institutional buy-in along the way. I don't have my CIO breathing down my neck because I haven't been able to actually show any value or analysis coming out of the initiative. No, indeed, I've been saving scientists hundreds of thousands of hours along the way.

 

 

Jon - 00:17:50:

 

Yeah. And it's kind of like the saying, I don't know who to attribute the quote to: you first form the tool, and then the tool starts shaping you, right? And it's kind of this thing where you're like, what can Ganymede do for me right now? And then your company starts to revolve around that a little bit. And the practices, the way you run assays, just the way you run your business starts to change. But then you start patting yourself on the back maybe a year or two later, like, thank God we were doing it like this, because now we can actually do something with this data. Versus exactly what you're describing, it's just trash-in, trash-out. Yes, you have a ton of data. It's a massive file, but you can't really do anything with it, because it's a swamp.

 

 

Nathan - 00:18:37:

 

Yeah. It's a product management thing. And going all the way back to that notion, it's like the product management payment issue. Yes, you can find a user need. Yes, it would be nice if we could develop a machine learning model on this data set. But if you try to make the big bang leap to that too quickly, you're taking on a lot of risk and uncertainty around whether it'll actually work, and whether it'll be valuable enough when you actually get there, or it'll turn into a data swamp. So I think it's the same thing. You want to find ways that you can provide really quick incremental value that eases you in and bridges your path to that higher order thing. And if you can't find that, maybe that's a sign that you shouldn't do it, even. And that's exactly the paradigm that we try to push with people: if you're developing a data strategy and you're trying to build a digital transformation in your company, you have to have quantifiable value on the order of months. And that's a tall order for people. But if you don't find a path to get there, you bring on so much risk around whether the thing will ever be valuable, whether you can maintain institutional momentum and support throughout the process. So it's the same thing. I think, yeah, there's always some risk. You always have some risk around whether people will pay you, or whether you'll deliver value in your organization. But the more incremental you can make it and the more you can build up to it, the better suited you'll be.

 

 

Jon - 00:19:47:

 

Absolutely. Well, this is really exciting and really cool to hear, honestly. So if we're looking out, let's say one year, two years from now. What's in store for Ganymede?

 

 

Nathan - 00:19:58:

 

It's a good question. One or two years is a long time, but I would say things have been moving quickly. And so a lot of what we're focused on at the moment, like I mentioned, are things like asset utilization and some of the data products and things that we can build on top of this data. I think we've gotten the platform to a really good point, where the core integration system, being able to connect lab instruments, connect to applications and any other data sources, and move the data around and analyze it, is in a pretty good spot. And so part of what we're doing is continuing to add enterprise scale there. We're going to be rolling out our GxP offering more publicly very shortly, and have already been working on that, testing and hardening it quite a lot. The other direction, though, is starting to leverage: okay, we have all this data now, and we also are the only system that talks to all of your seven different systems at once. It's not just the ELN or LIMS or batch record. We also have the inventory system connected. Maybe, in a larger organization, you have a RAM, a regulated asset management system, that tracks the instrument maintenance status. Maybe you have the power draw data from the smart plugs. We have a partnership with Elemental Machines that helps a lot with that and all the environmental data there. We have SAP, all these different things. And what can you do with that data collectively that's completely unique? Things like asset utilization are actually a really interesting example of that, because in theory, yes, predictive maintenance is an analysis to be done. But the problem is having the data. How do you actually get the data set? How do you say, okay, I can see the instrument's failing because the power draw is ramping up, because something's getting really frictional and so the motor is having to drive it harder? It's a data integration problem.
Same with a lot of things like analyzing sample throughput for high throughput screening, or looking at a lot of human QC check kind of things. We do a lot of work automating what are currently manual QC checks that are a side thing to some automated method that people are running. And so all of these things just require multiple data sources and multiple perspectives. So that's what we're really leaning into, what we're calling data products: these really powerful out-of-the-box solutions that we can offer with Ganymede. It's kind of, what you give is what you get, in that the more things that you plug in and integrate, the more powerful they get. And so it's really satisfying to work on this with our clients and say, hey, this dashboard you already love, because we just integrated this other, unrelated thing over here, by the way, now it's got this entire new area. Now, for every single experiment that you're doing, you are already looking at sample throughput; now here's also the temperature in every room, and oh, this room is too hot, and that's why the plates are evaporating, and so the QC metrics are slightly worse. It's so satisfying to just mash more and more together with these data sets. So that's what I would say is a lot of our product and business side frontier at the moment.

 

 

Jon - 00:22:42:

 

That's so cool. Honestly, whenever I talk to my parents, they're not scientists or wet lab scientists, but they see the big headlines of drug approved, yada, yada, the big blockbuster article or press that gets released. And they imagine that the work that gets it there is sci-fi future. Whereas things like this, where the room is too hot, they wouldn't even dream to think that that is an issue lab scientists face all the time, when you're hitting your head against the wall. It's like, why the hell is this happening? I have no idea. Absolutely no idea. It almost feels barbaric: for our equipment, we have to get a field service engineer, and they're there for days, also hitting their head against the wall. They're just like, what exactly? Is it the room? Is it the humidity in here? Did a laser just poop out? What's going on? And it's trial and error and trial and error. And so it's really cool to see that we're going to be able to get to a much more granular understanding of the environment, what's going on underneath the hood, such that it's less head bashing and more of just: got it, cool it down in here, it's too hot, and then keep it moving. Versus being on site for a week, just staring at the thing and going, all right, what are we testing next? What do we do?

 

 

Nathan - 00:24:12:

 

And I feel like in the moment, when you stand there staring at the HVAC system, it's hard to feel the magnitude of it. But then when you look at it, this is a week of delay on the product getting to market. Hundreds of patient years that it's costing; while we're standing there staring at it, millions of dollars from a revenue standpoint. It's an important thing. Yeah, reducing these debugging cycles, I think, is one of the hardest things to explain to people. And it's hard to measure, because it's something going wrong that you're mitigating. Same with compliance and human error issues. But that is actually, I think, the largest thing in the end. Especially if you're doing process development too, one bad batch oftentimes sets you back weeks with bioprocessing.

 

 

Jon - 00:24:53:

 

Yeah. And just one other example of us seeing this. This was years ago, but we were working with one of the largest diagnostic companies. And it's not even a drug development thing, right? With diagnostics, you're testing patient specimens. And so there were parallel lines of equipment that we had set up, and there's this variability: same equipment, same model, same config, everything. But there are these tiny little variabilities, if we just move the thing a little bit, into a different room, yeah, yeah, yeah. We just didn't know. We ultimately solved the issue and got it going, but it just took so long. And when people are having to take invasive patient specimens, you can't go back and ask for another lumbar puncture, right? That is incredibly invasive. And so you have it stored in the freezer. And again, we're staring at the HVAC, we're staring at the liquid handler, and we're just like: why is it not pipetting? Why is there so much variability? What is going on? So at least from our perspective, we see it all the time, especially when trying to do things in parallel and high throughput. So it's really awesome to see that this is the next evolution of Ganymede. It's kind of like surprise and delight. You're just like, well, here's the start, and then, surprise and delight, we've got way more functionality here, so you can just continue to run your assays much more efficiently and troubleshoot and debug in way less time. So I'm really excited hearing about that.

 

 

Nathan - 00:26:33:

 

Yeah. And I think that is the seed of being able to do more with your data. You've automated things, you have systems connected and data flowing, and now, yeah, there's the surprise and delight. It's not the grand vision that everyone might have of AI, like, I'm going to ask GPT my question about the science. It's something a little more humble to start: oh, I figured out why the freezer keeps losing its temperature, or something like that. But that's the seed of, okay, look, something is coming out of the data at a higher level that you didn't anticipate. And that's my hope: once you get enough of the data online, we really truly will be able to think more creatively, actually start to drive insights, and actually automate the true intent of the biology process. But you have to walk before you run. And so that's why our mantra is: get the workflow automated, and then we can move to the next level. And I think that's starting to bear out pretty nicely.

 

 

Jon - 00:27:22:

 

Absolutely. And it completely resonates with me. It's not that sexy, right? Just getting your answer out of GPT, very sexy. But it's these nuts-and-bolts, brass-tacks kinds of things where I personally see a lot of the value. And I'm not a computer scientist, so when I say AI/ML, I feel kind of fraudulent, like I'm talking about it with only a surface-level understanding. But it's funny, because my wife and I were just at the Ramp office in San Francisco yesterday. We're close partners with them. And from our perspective at Excedr, the amount of time saved, and I promise I'm not sponsored by Ramp or anything, but we buy a lot of equipment, and there's a lot of transaction volume going through Excedr. In the past, it was a lot of manual work to process POs, invoices, all of that stuff. It seems very simple, but Ramp's ability to automate, to pull all the data out of our purchase orders, invoices, and payables and reconcile all of it, saves so much time, so we can start thinking more strategically about finance, versus having a whole department processing invoices, processing POs, and manually reconciling things. So again, it doesn't sound that sexy, handling your invoices and purchase orders and bills, but it frees you up to dream and do more creative work. Anyway, that's just a tiny little example of how it impacts us on our end. But yeah, Nathan, thank you so much for your time. This has been one where I've learned a lot. So thank you for teaching me and for being so generous with your time. Two traditional closing questions for the podcast. First, would you like to give any shout-outs to anyone who supported you along the way?

 

 

Nathan - 00:29:18:

 

Oh, totally. I mean, so many people. But definitely my dad is a big one, and my whole family, who came up early on in the conversation, I think, for good reason. But also, certainly, all the people that I've worked with along the way are absolutely awesome people. People at Goldman, people at Affirm, especially Nikki, who was my manager on the product side during a lot of my time at Affirm. I think that was a particularly formative time for me. And also the awesome people at Benchling, like Sanjay, who was my manager on the product side there, and everyone along the way. So yeah, I feel like there are too many to really pin down into clean shout-outs. But I think I've been really lucky to have some really awesome managers in every role that I've worked in. That's so important. They gave me the flexibility and the ability to go work on a bunch of different areas and pick things up, which has let me grow and learn as much as I have.

 

 

Jon - 00:30:14:

 

Totally. And I totally agree. I also feel super lucky to have those mentors in my life, too. I can look back and see a bunch of inflection points where they really changed the trajectory and direction of my journey. So that totally resonates with me. And our last closing question: if you could give any advice to your 21-year-old self, what would it be?

 

 

Nathan - 00:30:38:

 

That's a big one. But I would honestly say, I shouldn't have waited so long. I did say it's nice to build a career and get some momentum there. But taking the idea that there's no secret to anything to a further extreme, there was nothing stopping me then from doing what I'm doing now. And it would have been nice, because if I had just jumped in and said, I'm going to automate all of biology or manufacturing, and then build out the data layer for it using this new cloud technology, AWS or something like that, I could have done that right then. Now, I had to learn everything that I learned along the way, vocations and data, and the perspective from Benchling, to get to this point. But at the same time... I don't know. I would have been a little bit more entrepreneurial and taken more risk. And I've taken a ton of risk, I think, in my career so far. But I think you can never take too much. So I definitely would have said, don't wait too long on things, and don't really adhere to the career track. Because I do think it's the kind of thing where you're on the career track until eventually it's like, okay, here you are. What do you want next? I've been very lucky in my career so far, but it's been a lot of cycles. I've seen, okay, I got the thing. All right, what's next? And then you have to chart almost a new path. And you could have charted that path without getting the thing, too, oftentimes.

 

 

Jon - 00:31:52:

 

Absolutely. And yeah, I think the same for me. Early on, I was very much on a track; at one point in time, I was almost about to go to law school. But if I could give advice to my 21-year-old self, it would be the same thing: you're ready for this. You're going to learn. Irrespective of where you go, you're going to be learning a lot. So why not find the most impact that you can have? And as you said during this conversation, you can have the most impact when you're blazing this path. So I would give the same advice to myself, too. Well, Nathan, thank you so much for your time. This was a lot of fun. I could go on for hours with you. It's nice to speak to someone who bridges such interesting worlds, coming from a very quantitative finance background into the life sciences. So it was really fun to have this conversation with you. And thanks again. I had a great time.

 

 

Nathan - 00:32:53:

 

Yeah. Thank you. My pleasure. Great chatting.

 

 

Outro - 00:33:00:

 

That's all for this episode of The Biotech Startups podcast. We hope you enjoyed our four-part series with Nathan Clark. Be sure to tune in to our next series, where we chat with Shekhar Mitra, President and Founder of InnoPreneur, a strategic advisory firm that enables development of innovation capabilities, ideation, and organizational development for Fortune 500 corporations and new ventures. Prior to InnoPreneur, Shekhar spent 29 years at Procter & Gamble, where he worked his way up from Staff Scientist to Senior Vice President of Global Innovation and Chief of Innovation, becoming a part of Procter & Gamble's top leadership team and a member of the CEO's Global Leadership Council. Shekhar's time at P&G paints an entrepreneurial roadmap for success for those looking to learn, grow, and innovate within large corporations. After retiring from P&G, Shekhar spent several years as a board member and strategic advisor to several Fortune 500 companies, new ventures, and private equity companies, developing transformational new ideas, business strategies, and organizational capabilities to drive growth. With over 50 patents awarded in different fields, Shekhar is an expert in creating and developing game-changing technology platforms and formulating disruptive innovation strategies with an exceptional track record, and his extensive background in R&D offers unique insights founders can learn from. The Biotech Startups Podcast is produced by Excedr. Don't want to miss an episode? Search for The Biotech Startups Podcast wherever you get your podcasts and click subscribe. Excedr provides research labs with equipment leases on founder-friendly terms to support paths to exceptional outcomes. To learn more, visit our website, www.excedr.com. On behalf of the team here at Excedr, thanks for listening. The Biotech Startups podcast provides general insights into the life science sector through the experiences of its guests.
The use of information on this podcast or materials linked from the podcast is at the user's own risk. The views expressed by the participants are their own and are not the views of Excedr or sponsors. No reference to any product, service or company in the podcast is an endorsement by Excedr or its guests.