Nathan Clark - Ganymede - Part 3

Parallels Between Benchling & Salesforce | Getting Customers to Fully Buy-In | Managing the “Writhing Snake of Data” | Ganymede Achieves Escape Velocity Growth

Find us on your favorite platform:
Apple Podcasts | Spotify | YouTube

Show Notes

Part 3 of 4. 

My guest for this week’s episode is Nathan Clark, Founder and CEO of Ganymede. Ganymede is the modern cloud data platform for the life sciences and manufacturing. Their Lab-as-Code technology allows you to quickly integrate and harmonize lab instruments and app data, automate analysis, visualize all your data in dashboards built over a powerful data lake, and ultimately speed up your operations to accelerate science or production.

Prior to founding Ganymede, Nathan was Product Manager for several of Benchling's data products, including the Insights BI tool and Machine Learning team. Before Benchling, Nathan worked at Affirm as a Senior Product Manager and was also a Trader at Goldman Sachs.

Join us this week and hear about:

  • Parallels between Benchling and Salesforce
  • How solving problems his customers didn’t even know were there encouraged them to fully adopt his products
  • The importance of managing the writhing snake of data for scientists
  • Ganymede’s journey to achieving escape velocity type growth
  • And much more!

Nathan’s extensive background in machine learning and data systems across financial and lab technology and knowledge of their applications in the life sciences offers unique insights for founders to benefit from. Please enjoy my conversation with Nathan Clark.


About the Guest

Nathan Clark

Nathan Clark is the Founder and CEO of Ganymede, the modern cloud data platform for the life sciences and manufacturing. Their Lab-as-Code technology allows you to quickly integrate and harmonize lab instruments and app data, automate analysis, visualize all your data in dashboards built over a powerful data lake, and ultimately speed up your operations to accelerate science or production. Prior to founding Ganymede, Nathan was Product Manager for several of Benchling's data products, including the Insights BI tool and Machine Learning team. Before Benchling, Nathan worked at Affirm as a Senior Product Manager and was also a Trader at Goldman Sachs.

Transcript


Intro - 00:00:01:

 

Welcome to The Biotech Startups Podcast by Excedr. Join us as we speak with first-time founders, serial entrepreneurs, and experienced investors about the challenges and triumphs of running a biotech startup from pre-seed to IPO with your host, Jon Chee. In our last episode, we spoke with Nathan Clark about his passion for project-based work and the mechanics behind financial products, his move from Goldman Sachs to a technology-focused role at Affirm, and the eventual decision to pursue a career in the life sciences, joining Benchling as a product manager for several data products. If you missed it, be sure to go back and give Part 2 a listen. In Part 3, we talk with Nathan about his time at Benchling as a product manager, handling business intelligence and data analysis products, highlighting the critical need for a more efficient and structured approach to lab data management. Nathan also talks about the pivotal moment that sparked the founding of Ganymede, the common practice of reinventing data infrastructure within the scientific community, and the importance of sharing and organizing data to break down knowledge silos and accelerate scientific progress.

 

 

Nathan - 00:01:24:

 

So at Benchling I was recruited to be the product manager originally for their Insights product, the business intelligence tool built into Benchling, which was a great fit because I had spent a ton of time doing SQL and data analysis. And when I was interviewed, they gave me the SQL grilling of my life, pretty heavy-duty stuff. But it was great. I spent a bunch of time working on features there, building out dashboards for clients and doing a lot of just showing people what you can do with the power of data. And then Benchling also spun up a machine learning effort. So I helped spearhead that a lot and build out some of the formative stuff there. A lot of my work there was around, what can you do with the data? Especially the operational data. I think that's one of the most interesting things. And what I realized is pretty cool about Benchling's slice of the world is that when you talk about people programming or doing data work in biology, it's all bioinformatics. It's all omics data science kind of stuff. And so there are so many startups that come up around that where they say, oh, you know, I'm going to be the new platform for doing your omics data analysis at the visualization layer, or at the workflow pipeline layer. And that's a really important piece, but that's also a more established area of the industry. And I had always been immersed in business automation as well. I didn't go over that too much, but I'm obsessive about Excel and VBA macros. I learned to program in Excel and VBA. And so for the longest time, going back to high school, I've had these visions around finding ways to automate data movement between different Excel sheets and things like that. 
So that's also part of the data interest, but I think what I realized was really cool is there's this operational data, this kind of wet lab data. What's important about a thing like Benchling, or an ELN or a LIMS in general, or other systems of record that look like that, like electronic batch records in a manufacturing context, is the data structure: saying, okay, here's the actual data model of what you're doing. Sure, everyone processes samples through instruments and then gets results out. But the results are almost a sideshow. What matters is that you say, okay, this sample is part of this program, and that program is part of this area. And this sample is stored in this plate, which is stored in this rack, which is stored in this room. And this sample was touched by this person, who's on this team. This whole web of relationships is a really different type of data from what I think a lot of people usually think about. And so what was really striking to me as I learned about it, and what was some of the original ideation in the end, was I realized, oh, you know, Benchling is Salesforce. Benchling is very CRM-like in terms of what it is. Salesforce is really an engine. It has a front end, but it's an engine to help you build out the different relational models of the data you care about in your sales process. And sales, similarly to wet lab work, yes, everyone has a general notion of what sales is, but sales is always hyper contextual to the company it's in. So the things you're going to care about are always different. So you really need to almost build your own database. And it's similar to the whole arc of how Salesforce has gone in the modern data world, where people talk about the modern data stack, things like Snowflake and Databricks and Fivetran ETL moving data around. 
There was an opportunity to do something like that for this wet lab data too. It's really unique, hyper relational data where the schema of the data really matters, in a way that I think no one in biology was really thinking about or trained towards, because there's so much mental energy going to omics data. So the idea was bringing the modern data stack to wet lab data, whether it's instruments or apps like Benchling. And what also inspired me is that it's not just business operations; wet lab biology is so complex, it's like Salesforce times ten. Anything that had these really heavy-duty operational components, customer service or construction too, seemed like there was an opportunity to get in there and say, let's bring a better development environment to this, so that you can actually move this data around in bulk, but also analyze it. Not just ETL it, but really bring in the analysis. Same with moving data around in science: you probably want to weave in some of the scientific analysis. Anytime you have really structured data, you don't want to just get it and move it around. To even know what you're going to do with that data, to know, okay, why do I care about what room the sample was in? It's because I'm trying to figure out if the room is too hot and it's melting my samples. That's an analysis. And so a lot of what inspired me was thinking, because this data type is so heavy-duty and relational and operational, you almost need to go beyond ETL. You want to treat every single app you have, every single file you have, even people sending you emails, as a database, and just be able to apply ETL tools to it and write scripts and transformations and let the data flow. 
So it came from a lot of very abstract data architecture stuff, and frankly I think it was way too abstract in the early days as I thought through this. But then once we started talking with clients and thinking about this more and doing a lot of our own independent research, what we found really quickly was that when we talked to people about the idea of the modern data stack in, especially, a biology or manufacturing context, everyone said, oh, that's amazing. And you know what I want to do with that? I want to take my lab instrument data and put it into my LIMS or my batch record or my ELN. And eventually people started saying, oh yeah, I'm willing to pay for this. Please come do it. And so we said, okay, well, it sounds like there's a business here.
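[Editor's note: to make the "beyond ETL" idea Nathan describes concrete, here is a minimal Python sketch. The sample fields, room IDs, and temperature threshold are all illustrative assumptions, not Ganymede's actual schema; the point is only that joining operational metadata with another data source turns "is the room melting my samples?" into an analysis step inside the pipeline.]

```python
# Treat each source (sample records, a building-management export)
# as a table, join on shared keys, and weave the analysis into the
# ETL step itself. All names and values here are made up.

samples = [
    {"sample_id": "S1", "room": "R101", "plate": "P1"},
    {"sample_id": "S2", "room": "R202", "plate": "P1"},
]

# e.g. parsed from a facilities system export (assumed format)
room_temps_c = {"R101": 21.5, "R202": 31.0}

MAX_SAFE_TEMP_C = 25.0  # assumed storage limit, for illustration

def flag_overheated(samples, room_temps_c, limit=MAX_SAFE_TEMP_C):
    """Join sample metadata with room temperatures and flag samples
    stored in rooms hotter than the limit."""
    flagged = []
    for s in samples:
        temp = room_temps_c.get(s["room"])
        if temp is not None and temp > limit:
            flagged.append({**s, "room_temp_c": temp})
    return flagged

overheated = flag_overheated(samples, room_temps_c)
```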

 

 

Jon - 00:07:06:

 

Very interesting. I'm curious, honestly. The discipline of product management has been around in software for a really long time. You talk about speaking to clients, or potential clients, and that being the lab itself. How is that different? When I was in the lab, I didn't know what I didn't know. I was like, I've always had a physical lab notebook, you know, stuff like that. What was that actual experience of trying to suss out what to work on first? What is actually the real pain here? And Benchling is pretty expansive. It's pretty expansive now; I'm sure when you joined, the pieces were still coming together. Can you talk about that product management experience of working with the lab to figure out what are the critical problems that we need to solve soon?

 

 

Nathan - 00:07:52:

 

Yeah, I think one of the interesting contrasts, actually, so Affirm is a super high volume business and they process billions of dollars, millions of users. I remember there were some statistics I had there because if you just look at a user count and then compare it to the US population, it's like a large percentage.

 

 

Jon - 00:08:09:

 

It's massive.

 

 

Nathan - 00:08:10:

 

Yeah, so it's crazy. And it's interesting, because a lot of the product management work there, the research, is still a little bit more qualitative, but it's scaled up: you can do surveys and you can observe metrics and behavior. And then when you roll out a feature, definitely, you're A/B testing, you incrementally release it. So you do a 1, 5, 10, 50% release and see how the metrics are performing. It's very empirical. And then with something like Benchling, absolutely not. And it can't be. You're talking to labs, and you're talking about how does this software help you do your work faster so that you can get your drug to market faster. I think what I found was that there's just no substitute for spending all the time you can with scientists, talking with them, talking with their managers, really trying to understand how they thought about things too. Like, you want to accelerate your bench science and you want to move faster, but what does that actually mean to you? What is the pain point? What is the problem here? And then you have a million different answers. And so an unusual amount of time is spent just synthesizing across all those things and trying to suss out, okay, here are the pain points. The other thing that was nice, that counterbalances a little bit, is that with Affirm, people are just taking out loans and paying them off. So the behavior of the users is pretty simple and categorizable. With something like Benchling, it's super open-ended. They can do a lot of different things. And so you get much richer revealed behavior in terms of what the customers are doing. When I was on Insights, we'd always see these fascinating things that customers were building with dashboards. 
You could learn a lot about what they were trying to do just through that revealed behavior. So that was a really, really strong learning vector for me: spending time seeing what customers built, trying to digest that, and then talking to them about it and saying, okay, why did you do it this way? What are you trying to achieve with this? Or educating them up to the point where they could understand what they could build, and then seeing what they did afterwards. So that was more like the qualitative experimentation, let's say, of going and digesting the artifacts that they created.

 

 

Jon - 00:10:16:

 

That's really, really fascinating. And it's kind of that thing where you create the tool and then you see how people are using it. And sometimes you're like, you guys are using it in a really weird way, but there's something here. Let's pull on this thread and try to understand the rationale behind this. I have a broader question. Benchling was a novel product and concept to the scientific community broadly. What was it like to try and get adoption of such a novel product? Generally, was there a bit of, "I don't know, what we've done in the past has always worked"? Or was it more, "no, no, no, we need this," and it was practically pulled out of your hands? What was the reception, and how did you get people to adopt?

 

 

Nathan - 00:11:03:

 

One thing that was nice, that helped, was that the core pieces of Benchling each make sense in isolation. There's precedent for an electronic lab notebook; everyone uses lab notebooks in some form. And so that formed the initial, "I need to go get an electronic lab notebook," and then they start evaluating things. And Benchling, because of its capabilities, yes, it's an electronic lab notebook, but it's also so much more. It's this much more expansive system, very LIMS-like, very much a database and a broad-based system. And so that let Benchling show customers things that they weren't even anticipating. And that forged a way towards customers starting to ask for those things. Eventually customers start realizing, okay, I don't just want an electronic lab notebook. I also want structured data. I want the registry. I want all these things, because I need to get my data under control. And so it reframes things in a nice way, especially with the bigger customers that care about data. They would say, okay, well, I also care about data. I need the data structured. Oh, this electronic lab notebook, Benchling, is the only one that actually has this almost Salesforce-like level of customizability. Because you're not really solving for having a document repository; that could be SharePoint or Microsoft Word. What you're solving for with an ELN is a way to make sense of your data. And so you can go beyond that and say, okay, why not have the ELN also hold all this structured data? And then it's perfect for the customers, because that solves the real problem behind the scenes. Or sometimes they explicitly go out looking for something like that. 
So to answer your question a little more squarely: if you have something new, then it's even more incumbent on you to show people what problem it'll solve in a pretty tangible way. And then they say, oh, that makes sense, because that's what I was really looking for. Yes, I've been told I need an ELN, but my job is to manage the data. I'm a data person at this biotech. So, oh, this is actually the only thing that'll come anywhere close to that. I would say with Ganymede, it's even more that way, because with Ganymede there's not even any real precedent for what we are, and we have a billion different names for what we are. But you could say we're an instrument data integration and connectivity platform: connecting lab instruments, connecting applications like ELNs and LIMS. No one knows what to call that. They usually aren't going out and searching for it by default. Though some companies are; they start to get it. You can see the industry starting to get acclimated to it, and we're defining the category. But I think demos are so important. And demos where you spend a lot of time learning with the customers: okay, what are your problems? What do you care about? One thing that we're working on a ton right now is asset utilization, trying to say, okay, how much are you actually using these instruments? What's the reliability? Let's look at errors coming off of them. That's a data integration problem ultimately, because there are like seven different systems that tell you about the instrument. And so we have to spend hours learning with the customer about this. And then we make the demo that hits the sweet spot. And we're like, here's the golden graph. We took these seven different systems, put them together, and here indeed is a graph of your calibration deviation versus the power draw on the instrument. And it turns out, yeah, the draw is getting too high, and so the instrument is getting out of whack. 
So whenever you have that category-defining issue, the further you can move into really leveraging the product to show off what it can do, and actually hit the hidden user need that they're not even able to express, the better. That's an expensive thing to do, though. So in turn, you have to charge a lot of money for it, because the sales cycles are really hard and really long. And so a lot of our costs are honestly just spending all this time helping customers understand what data integration can really mean and do for them in a lab. Because it's a new thing. Every instrument, every app out there, sometimes they have "integrations," in quotes. It's a point-to-point thing, but there was no developer platform out there, really, until us, that lets people glue together anything in this life sciences context. So a lot of the things that we end up building are things that people didn't even anticipate, because they require data from four different systems that no one ever thought to put together before.
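[Editor's note: the asset-utilization example Nathan gives, several systems each knowing something about the same instrument, is at heart an outer join on instrument ID. Here is a toy Python sketch of that merge; the system names, fields, and numbers are invented for illustration and are not Ganymede's actual data model.]

```python
# Three assumed source systems, each keyed by instrument ID.
scheduler = {"HPLC-1": {"booked_hours": 30}, "HPLC-2": {"booked_hours": 5}}
error_log = {"HPLC-1": {"errors": 2}, "HPLC-2": {"errors": 11}}
power_meter = {"HPLC-1": {"avg_draw_w": 410.0}, "HPLC-2": {"avg_draw_w": 455.0}}

def merge_by_instrument(*systems):
    """Outer-join per-instrument records from each system into one
    combined record per instrument."""
    merged = {}
    for system in systems:
        for instrument_id, fields in system.items():
            merged.setdefault(instrument_id, {}).update(fields)
    return merged

combined = merge_by_instrument(scheduler, error_log, power_meter)

# Once the systems are lined up, cross-system questions become
# one-liners, e.g. errors per booked hour:
error_rates = {iid: rec["errors"] / rec["booked_hours"]
               for iid, rec in combined.items()}
```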

 

 

Jon - 00:15:01:

 

That's amazing. I come from a world where things are slower moving in science, and for a while it was kind of a foregone conclusion that it's going to be like this forever. And I was learning through osmosis from software, my wife, that's her domain, and then hearing exactly your experience at Affirm: huge data, fast feedback loops, iteration. You can be agile and quickly push things out, test, iterate, push again, test, iterate. And it's super fascinating to me that this is starting in the lab, because to be able to iterate, you need to have the data. If you don't have the data, specifically on the operations of the lab, what are you iterating on? Your gut? You're like, yeah, this feels about right, let me give this another shot. But that's all anecdata. It's less measurable, and it's harder to get a precise feedback loop on. It's been a long time since I've been in the lab, so maybe I'm not giving labs enough credit. Maybe they are far more automated, far more digitized. But at least when I was in the lab, it was the opposite of that. We were trading physical notes: yeah, try this next time. So that's really incredible. And correct me if I'm wrong, but it sounds like the initial spark for Ganymede came via those product management conversations at Benchling. Would that be correct?

 

 

Nathan - 00:16:37:

 

I think some of the spark of thinking about this type of data, and how users were thinking about it and using it, came from that. But in the end, I ended up doing a ton of research just on my own time, also talking with people and building out a network around that, because I needed to get way, way deeper than I could with anything at Benchling, and I wanted to keep things separate there. So by the end, I had a pretty big Rolodex of people that I chatted with and was really getting deep into concept testing with, and thinking about designs for things. And that helped a lot too. I mean, it's very important to pick things up through osmosis, but when you're really entering the "okay, I might actually do something" stage, there's no substitute for just bringing the thing in front of people. And I actually want to dig into one of the things you were saying around that notion of people just trading notes. Because it's funny, going all the way back to the trading desk example. In the 1950s, if you had a trading desk, and there were proto trading desks at the time, that's back when a lot of trading happened on the floor of the New York Stock Exchange and places like that. And yeah, you'd get a stock ticker, but that's really the only consistent data feed. A lot of the other things that you trade on all just live in people's heads, because you can't get the data in the right way fast enough, and there are no computers, so you can't do spreadsheets. So you've got to just hold all these things in your head at once. And nowadays, on the trading floor, everything is captured in data as much as it can be. There's still a ton to think about, but every time you capture some aspect of things in data and can turn it into a consistent analysis, you remove that from the stuff that you have to think about. 
So you can think one level higher order. You can think about new things that no one's ever thought of before, or about second- and third-order aspects of the data that are not yet captured in the system. And as you said, in science it's striking to me how, when people are doing these experiments, they're doing them in such an indirect, empirical, black-box way. Because biology is so much just crushing things and smashing them together and shining lights on things, when you really break down what lab instruments do. Scientists are inferring so much context about what's actually happening. There are all these hidden variables that live in their heads. So it's very much like the 1950s bond trader: when they do experiments and make progress and figure out a hypothesis, there are like 40 different dimensions of data that they're operating on, some of which they know, some of which they don't. And I think it's exactly the same process to ask, how much of that can we actually capture in a system? How much of that can we pull out, so that scientists can either think in a more second- or third-order way about it, or think about something new? That's also the process of taking an unstable process, where you don't even know the parameters you want to test in some sort of method development, and turning it into a more structured, design of experiments (DOE) driven process development phase. The exercise of getting something to the point where it can be a DOE is starting to pin that data down and get it into a format that can be reconciled. So yeah, people will still have to do that analysis, but it's very sporadic without a data system. They're going in, maybe they're just remembering stuff, or they're finding files and then reporting stuff in PowerPoints. And it's horrifying, but they do it for a good reason, because biology is just so complex. 
And so a lot of what we can hopefully help with on the Ganymede side is giving them better tools to take that unstable data, the writhing snake or something, and pin it down and say, okay, this is a dimension of data. Everything becomes a DOE. We advocate all the time for telling people, stop thinking about the lab instrument and the assay. Think in terms of, what's the input data set? What are the parameters you're varying? And what's the output? Think of everything as a DOE, design that, and then bring your lab and your data close to it; actually capture the data. That's a luxury not everyone can afford, because you've got to move fast, but it ends up being more sustainable for the long term. So it's a pretty classic story. And there's nothing unique about biology in that sense, I think, other than the fact that the evolved complexity of biology makes it really hard to pin down, and the R&D aspect means everyone's trying new things. So it is the worst from a data complexity standpoint, and it's going to be the last to get automated or scaled up on the software side.
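[Editor's note: the "think of everything as a DOE" framing Nathan describes, inputs, varied parameters, measured output, can be sketched as data in a few lines of Python. The parameter names and values below are entirely made up; the point is only that once each run is a structured row rather than a notebook entry, comparing runs is a query.]

```python
# Each experimental run as a row: the parameters being varied plus
# the measured output. Values are invented for illustration.
runs = [
    {"temp_c": 30, "ph": 6.8, "yield_pct": 41.0},
    {"temp_c": 37, "ph": 6.8, "yield_pct": 58.5},
    {"temp_c": 37, "ph": 7.4, "yield_pct": 52.0},
]

def best_run(runs, output_key="yield_pct"):
    """Return the parameter combination with the best measured output."""
    return max(runs, key=lambda r: r[output_key])

best = best_run(runs)
```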

 

 

Jon - 00:20:48:

 

Absolutely. And something I also saw when I was in the lab, when we didn't have this kind of ability to organize the data and think critically about it in a systematic way, is that the data, or understanding, kind of just lives amongst a handful of people. But what's really fascinating is when the data can be shared. Obviously there's stuff that you can and can't share, but when you can share the data, you're kind of democratizing it, right? My wife used to work at the Public Library of Science, and open access was a massive initiative that they pushed for. And that was back in the day when everything was paywalled, big paywalls, right? But now it's a much more open access world, and I credit PLOS a lot for that. And when you have open access data and people are able to share, it's kind of like open sourcing, right? All that institutional knowledge isn't just contained to two people in the lab. Maybe another department can look at it and make something of it elsewhere. Big Pharma has a ton of institutional knowledge that just stays put, and for good reasons, right? These are big stakes. But broadly, to push scientific progress, first you've got to organize the thing so we can make sense of it. And then you can start thinking, let's stop recreating the wheel. We've solved that; let's move on to the next problem. Because a lot of the time when I was in the lab, I was like, dang, I am recreating the wheel here. And for me that was super frustrating. This is a waste of time. How can we accelerate this and not redo what we've already solved? But to get back to Ganymede. 
Can you tell us a little about what the true driving force was for founding Ganymede? What was the origin story?

 

 

Nathan - 00:22:51:

 

Yeah, well, it's funny you mention that recreating-the-wheel thing, because there's another aspect of it. I mentioned a little bit about how we thought about data infrastructure, and noticed that this biology wet lab data infrastructure specifically was an exemplar of a general domain of really hyper-complex operational data where it's pretty difficult to do AI and machine learning. As I observed at a lot of the companies I've been at, it's much more about the relational model, trying to parse it and figure it out and make sense of it. It's much more of a data cleaning exercise and a mapping exercise than an analysis exercise. So that was the data side. And then as I started doing research, and also just reading a lot about how companies had designed their tech stacks, there are all these Ginkgo blog posts that I referenced constantly in the early days, the other thing I noticed was that everyone was rebuilding the exact same tech stack, to your point about reinventing the wheel. Everyone essentially throws their lab instruments, their processes and methods, their infrastructure in the trash. Not literally throwing it out, but they don't think to reuse it. And they usually can't, because there aren't great standards, and everyone's process is pretty bespoke, for good and bad reasons. But everyone was recreating the wheel on the infrastructure side too. Everyone would build out the exact same AWS stack for the operational data side of the biology, because it would be a bunch of data scientists. The way most techbios outside of the really, really big ones went, circa three or four years ago, was: okay, we have a bunch of data scientists, they're used to a workflow based on S3 buckets and AWS Batch, and so, all right, everything is just going to be AWS. It's all S3 buckets and Lambdas. 
And so they're building out the same connectors, the same instrument parsers. I talked to all these people who were like, oh yeah, my job is that I parse instrument data all day. And it's crazy. So that was the other thing where I said, oh, this parsing exercise and this infrastructure, there's no reason for it. I can get it, there are good reasons why you might need to redevelop your methods. There are not good reasons why you need to back into the same AWS setup that everyone else already has. So part of it was just asking, can we offer this infrastructure out of the box and make it really good at manipulating this kind of really complex, relational lab data? And a lot of the genesis was taking that idea to customers. We would find people and say, okay, hey, you're a smart software engineer in a rapidly growing biotech company. I hear you're about to go recreate the exact same wheel on AWS. What if you didn't have to? What if we could just let you start with the business logic? And that's where it really took off, and people loved it. And then we got that thing of, okay, everyone wants to use it to get their instrument data online. The very first thing we ever did once we launched Ganymede was integrating scales, benchtop scales. In the end, that was not quite the right type of integration, because a scale is a bit too dumb an instrument, and it required a hardware device to connect to it. But the next company, and this is where I think we really started reaching escape velocity on our growth and decided to really jump into fundraising, was a kind of incubated internal startup doing synthetic biology within a larger, traditional, commercial-stage bio company. 
And they had that exact problem of, oh, I'm losing all my data over time. At Ganymede, I've noticed over and over again that there are so many companies with this data decay. If they're well-funded, they can stick around for a long time and kind of enter a zombie mode where they're continuing to do assays, but what is the end result of all of it? How can you be around for 5, 10, 20 years and not be further along the curve? There are good reasons, but to understand what those are, you need to have that data pinned down. You need to say, okay, what are we really studying? And how do you pin it down all the way back historically, so you can actually see: is there progress? Why are we doing these experiments? Did we do these experiments five years ago? And that's what motivated the customer that signed up with us. They said, I need to get my lab data online, because I have all these reports, I've been doing this work for years, and I can't ask a simple question. They were doing material science, and they couldn't ask the simple question: what was the highest tensile strength of anything I've ever tested? You just can't ask that question, because the data is not in a format that supports it. It's a bunch of random files and PDFs. And so that was where things really took off. We realized there's really, really strong product resonance in answering that question by getting the data into a parsed format. Then you answer that question, and you can put the data where it needs to go, into an ELN or a LIMS or something. But getting the data into the right format and answering that question, that's where we really... Now I'm well past the founding origin story, forgive me, but that was, I would say, the rocket fuel stage of our growth.
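[Editor's note: the tensile-strength question Nathan describes is a nice illustration of why parsing matters: once free-text report lines become structured rows, "what was the highest tensile strength we ever tested?" is a one-line query. The report format, field names, and values below are invented for illustration; real instrument reports are of course far messier.]

```python
import re

# Assumed free-text report lines, one result per line.
report_lines = [
    "2019-03-02 batch=A17 tensile_mpa=412.5",
    "2021-07-19 batch=B03 tensile_mpa=530.0",
    "2022-11-05 batch=C44 tensile_mpa=488.2",
]

PATTERN = re.compile(r"(\S+) batch=(\S+) tensile_mpa=([\d.]+)")

def parse(lines):
    """Parse free-text report lines into structured records,
    skipping lines that don't match the expected format."""
    rows = []
    for line in lines:
        m = PATTERN.match(line)
        if m:
            rows.append({"date": m.group(1), "batch": m.group(2),
                         "tensile_mpa": float(m.group(3))})
    return rows

# With the data structured, the question is trivial to answer.
strongest = max(parse(report_lines), key=lambda r: r["tensile_mpa"])
```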

 

 

Outro - 00:27:57:

 

That's all for this episode of The Biotech Startups Podcast. We hope you enjoyed our discussion with Nathan Clark. Tune in for part four of our conversation to learn more about his journey. If you enjoyed this episode, please subscribe, leave us a review and share it with your friends. Thanks for listening, and we look forward to having you join us again on The Biotech Startups Podcast for Part 4 of Nathan's story. The Biotech Startups Podcast is produced by Excedr. Don't want to miss an episode? Search for The Biotech Startups Podcast wherever you get your podcasts and click subscribe. Excedr provides research labs with equipment leases on founder-friendly terms to support paths to exceptional outcomes. To learn more, visit our website, www.excedr.com. On behalf of the team here at Excedr, thanks for listening. The Biotech Startups podcast provides general insights into the life science sector through the experiences of its guests. The use of information on this podcast or materials linked from the podcast is at the user's own risk. The views expressed by the participants are their own and are not the views of Excedr or sponsors. No reference to any product, service or company in the podcast is an endorsement by Excedr or its guests.